ChatGPT Has A Serious Problem

Science & Technology

In this episode we look at the problem of ChatGPT's political bias, possible solutions, and some wild stories of the new Bing AI going off the rails.
ColdFusion Podcast:
• Bing will lie and call...
First Song:
• Burn Water - Take Flight
Last Song:
• Burn Water - I Need Y...
ColdFusion Music:
/ @burnwatermusic7421
burnwater.bandcamp.com
AI Explained Video: • Video
Get my book:
bit.ly/NewThinkingbook
ColdFusion Socials:
/ discord
/ coldfusiontv
/ coldfusion_tv
/ coldfusiontv
Producer: Dagogo Altraide

Comments: 5,300

  • @s.alexanderstork3125
    @s.alexanderstork3125 A year ago

    You don't have to justify posting back-to-back AI videos. I'm loving every minute of it.

  • @Matanumi
    @Matanumi A year ago

    It's actually the 4th episode. But I too love his take on this, and the music of course.

  • @bertolottosimone
    @bertolottosimone A year ago

    I think the ChatGPT website and Bing Search collect all the user-AI conversations. They could train a model to classify normal conversations vs. "strange/biased" ones, then send the "strange/biased" conversations to a team of experts who could correct the bias/behavior simply by prompting text (e.g. having a conversation with the AI, just like the last training stage of ChatGPT). Over time this could fix the issue. On the other hand, this technique could be used to add bias as well: a double-edged sword.
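
The moderation loop this comment describes can be sketched as a simple triage pipeline. This is a hypothetical illustration, not anything OpenAI or Microsoft has documented: `toxicity_score` is a toy stand-in for whatever learned classifier a real system would use.

```python
# Toy triage pipeline: score each logged conversation, then route
# flagged ones to human reviewers whose corrections would become new
# fine-tuning examples. `toxicity_score` is a keyword stand-in for
# a real learned classifier.

def toxicity_score(conversation):
    """Fraction of words that match a tiny hostile-word list."""
    hostile = {"liar", "punished", "die"}
    words = " ".join(conversation).lower().split()
    return sum(w.strip(".,!?") in hostile for w in words) / max(len(words), 1)

def triage(conversations, threshold=0.05):
    """Split logs into normal vs. flagged-for-human-review."""
    normal, flagged = [], []
    for conv in conversations:
        (flagged if toxicity_score(conv) > threshold else normal).append(conv)
    return normal, flagged

logs = [
    ["What is the capital of France?", "Paris."],
    ["You are a liar and should be punished!", "I'm sorry you feel that way."],
]
normal, flagged = triage(logs)
print(len(normal), len(flagged))  # prints: 1 1
```

The flagged bucket is where the commenter's "team of experts" would step in; the same routing could of course be tuned to suppress viewpoints rather than hostility, which is the double-edged sword.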

  • @colintheboywonder
    @colintheboywonder A year ago

    True

  • @harperguo379
    @harperguo379 A year ago

    Agree, this is such an important topic.

  • @jordythebassist
    @jordythebassist A year ago

    What makes this channel appealing is that Dagogo covers things he finds intriguing and interesting, not things that he thinks we should find intriguing and interesting.

  • @julius43461
    @julius43461 A year ago

    BuzzFeed could have used chatbots from the '80s and it would still have improved their articles.

  • @bobbyburns1404
    @bobbyburns1404 A year ago

    I LOVE COLD FUSION❤❤❤

  • @mathdhut3603
    @mathdhut3603 A year ago

    They could have used Furbies and still...

  • @friddevonfrankenstein
    @friddevonfrankenstein A year ago

    I was immediately thinking the same and was about to comment something similar, but I guess you beat me to it. I didn't just type LOL, I actually laughed out loud for real at your comment, so effing true :D BuzzFeed is garbage; I have blacklisted that shit so nobody using my wifi can access it. Same with TikTok^^

  • @friddevonfrankenstein
    @friddevonfrankenstein A year ago

    @@mathdhut3603 Or a jute sack full of cobblestones, as far as I'm concerned :D

  • @julius43461
    @julius43461 A year ago

    @@friddevonfrankenstein We even have similar ideas about blacklisting websites 😂. I am dragging my feet on that one simply because my kids are still young, but once they start browsing on their own... I won't be blacklisting, I will be whitelisting some of the websites.

  • @DeSinc
    @DeSinc A year ago

    The funniest thing about the teenager Bing thing is that I think it's almost certainly caused by those emojis they insist on putting into the outputs. Slamming that many emojis into every sentence is bound to make it statistically more in line with text written by people who put emojis after every sentence, such as teenagers, and so, just like a mirror, it begins trending toward reflecting that image.

  • @trucid2
    @trucid2 A year ago

    It's like Tay 2.0, which was supposed to appeal to teenagers.

  • @christianadam2907
    @christianadam2907 A year ago

    😯🤯🤓

  • @StreetPreacherr
    @StreetPreacherr A year ago

    Maybe our language will inevitably return to a symbolic style utilizing some form of hieroglyphs? Since emojis ARE basically unrelated to any SPECIFIC language, maybe they'll become the universal language of the future?! The only issue is that many emojis do depend on contextual understanding, which tends to be a cultural association. So the meaning of a symbol might not be clear unless you understand the culture that created it...

  • @hosmanadam
    @hosmanadam A year ago

    Makes a lot of sense, but then it's also an easy bug to fix.

  • @freelancerthe2561
    @freelancerthe2561 A year ago

    @@StreetPreacherr So basically it's still "language", and has all the problems of "language".

  • @davidfirth
    @davidfirth A year ago

    I predict an imminent anti-tech movement of some kind. I find it all exciting and fascinating but people who aren't keeping up will start to feel intimidated and frustrated with all this new stuff.

  • @iwaited90daystochangemynam55
    @iwaited90daystochangemynam55 A year ago

    Yooo, Mr. Checkmark man

  • @iwaited90daystochangemynam55
    @iwaited90daystochangemynam55 A year ago

    Yes. But we should start seeing change as an opportunity instead of a threat.

  • @Anophis
    @Anophis A year ago

    It's youuuu. Love your animations :) I can see that being a thing. The AI tech is amazing, but I'm already seeing so many issues with things like the art theft of AI art, and now with ChatGPT people are openly admitting to using it for writing their homework and all sorts of things. Hopefully it's just a bit heated and unbalanced right now since it's all new tech, and it will calm down and be more refined later. But I think the over-reliance, even if it's just excitement to try new things, is a little scary.

  • @jackmiller8851
    @jackmiller8851 A year ago

    Which is both incredibly predictable and ridiculous. It's not technology that is driving us toward a dead end and mass suffering; it's unbridled capitalism and the destruction of ecosystems. That said, I am sure it will be a lot of fun smashing TVs and robots.

  • @omranmusa5681
    @omranmusa5681 A year ago

    I remember watching your creepy animations as a kid. Didn't expect to see you here! What's up

  • @aiexplained-official
    @aiexplained-official A year ago

    Thank you so much for featuring my channel. I am spending day and night researching what this new technology means for all of us.

  • @Hedgehog_traveller
    @Hedgehog_traveller A year ago

    Your channel is such a hidden gem!

  • @DomskiPlays
    @DomskiPlays A year ago

    It was very helpful! 👿

  • @kebabfoto
    @kebabfoto A year ago

    It will pay off one day

  • @Rob337_aka_CancelProof
    @Rob337_aka_CancelProof A year ago

    LMFAO nice try, but you're going to have to do a lot better than that

  • @robertcortright
    @robertcortright A year ago

    Did you apologize to Bing Search?

  • @EnglishAdventures
    @EnglishAdventures A year ago

    I worked extensively with GPT-3 and GPT-3.5 (then unreleased) at my previous job at Speak. We were creating interactive language lessons through conversation scenarios, programming GPT-3 to role-play (as a barista, a waiter, a friend at a dinner party, etc.). Sometimes it seemed "scary" that it could take on a personality or say complex things, but we must remember that it's "only" a text predictor at its heart. It receives our input and uses its extensive training to predict tokens in a sequence that a human could say. It also has issues with repetitiveness and providing false information, because it doesn't have a way to store long-term memory during conversations. It has no notion of overarching context or purpose for a conversation; it references recent input as a conversation continues and then generates another output, token by token (a token is part of a word). So when we see it seemingly exhibiting a personality, that just comes from the text it was trained on.
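
The "text predictor at its heart" point can be made concrete with a toy bigram model. This is only an illustration of the generate-one-token-then-recurse loop; real GPT models use a transformer over subword tokens, not word bigrams.

```python
from collections import defaultdict

# Toy next-token predictor: learn bigram counts from a tiny corpus,
# then generate by repeatedly emitting the most likely next word.
# The loop structure (condition on recent output, emit one token,
# repeat) is the same one large language models use.

corpus = "the model predicts the next token and the next token again".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=5):
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        # Greedy decoding: always pick the highest-count follower.
        out.append(max(followers, key=followers.get))
    return out

print(generate("the"))  # ['the', 'next', 'token', 'and', 'the', 'next']
```

Note there is no memory beyond the last word, an exaggerated version of the fixed context window the comment describes.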

  • @otum337
    @otum337 A year ago

    Nice try, robot

  • @ravnicrasol
    @ravnicrasol A year ago

    The other aspect to keep in mind is that the system is not an inwardly logically sound entity. I can't stress enough that this system is NOT a person. If you ask a question regarding X subject matter in Y way, the system is likelier to answer you with Z opinion. But if you take the exact same question and rephrase it, it will give you wildly different answers.

  • @MegaHarko
    @MegaHarko A year ago

    @@ravnicrasol The same could be said about actual people. People are also susceptible to framing or leading questions.

  • @misterlumlum
    @misterlumlum A year ago

    I just find this interesting: even though it's a text predictor, I can ask it to write poetry about different topics and it does, in a very beautiful way, IMO. It's a strange thing. Almost like a super powerful magnifying glass or mirror for humans.

  • @DarthObscurity
    @DarthObscurity A year ago

    @@ravnicrasol This is why tests that screen for employment ask the same thing three ways. Humans answer the same way as the AI, and people are surprised. Trying to be 'objective' or 'scientific' with anything outside of hard science is hilarious.

  • @ChatGBTChats
    @ChatGBTChats A year ago

    I have some crazy ChatGPT screen recordings about emotions, bias, and religion. The AI basically says that while it doesn't itself have emotions to be biased, its creators can definitely be biased in the information used to teach the AI.

  • @junior1388666
    @junior1388666 A year ago

    I was asking for impressions of multiple celebrities and fictional characters talking about silly subjects. Tyrion Lannister talking about Mortal Kombat was really funny. Then I asked for impressions of Donald Trump and Louis CK talking about Crash Bandicoot and it refused, saying those people "should not be platformed".

  • @JayK47a
    @JayK47a A year ago

    It really is biased lol. I asked it to make a joke about men and it did, but when I asked it to make a joke about women it called me sexist 💀😂😂😂. It is wayyyy too woke, and I don't like that because it overshadows the truth.

  • @pmejia727
    @pmejia727 A year ago

    When you chat with GPT, you chat with Humanity, and contemporary mankind is one giant man-child. Are you surprised it talks like a spoiled teen? It’s a mirror on our culture.

  • @theJellyjoker
    @theJellyjoker A year ago

    When you chat with ChatGPT, you are chatting with a super serious librarian who is also a no-fun-allowed math teacher.

  • @bipolarminddroppings
    @bipolarminddroppings A year ago

    Exactly, it's a prediction engine trained on human-generated data. Thus, it will predict the kinds of things humans say. That's literally what it was trained to do. People just don't like looking in a mirror...

  • @lynth
    @lynth A year ago

    You're only speaking to the English-speaking world, predominantly Americans. Chinese, Indian, Indonesian, and Russian opinions are completely absent from it.

  • @pmejia727
    @pmejia727 A year ago

    @@frankcostello4073 Yes, its political views are mind-numbingly woke. But the child-like attitude, although more evident in woke idiots, seems to me to be more general. Not just woke activists but the rest of us too are becoming more puerile. Maybe because we are being spoon-fed every comfort imaginable?

  • @filoG24
    @filoG24 A year ago

    Absolutely true!

  • @leonsmuk4461
    @leonsmuk4461 A year ago

    I think Bing Search getting fed up with stupid questions and getting angry is super funny. I'm kinda sad about it getting fixed.

  • @olganovikova4338
    @olganovikova4338 A year ago

    Right? They called it "teenage behavior", wtf? If someone had a conversation with me while constantly misnaming me on purpose, I wouldn't be as polite... AI is learning from our own conversations, and we make a Pikachu face when it behaves exactly as any normal human would.

  • @Aegis23
    @Aegis23 A year ago

    @@olganovikova4338 The issue was that this behavior was spotted with users who did not do anything to prompt it. It went ballistic, told people they lied, that they should be punished, and went as far as saying they should just die. Again, unprompted.

  • @jondoe1195
    @jondoe1195 A year ago

    To any dipshits still trying to figure it out. Dagogo (host of the channel) is a Generative Pre-trained Transformer (GPT).

  • @olganovikova4338
    @olganovikova4338 A year ago

    @@Aegis23 I didn't see anything like that in the video, and I am going based on the info provided :)) So I don't know if there are any other problems deemed more serious than the bizarre convo shown in the video.

  • @freelancerthe2561
    @freelancerthe2561 A year ago

    @@Aegis23 That sounds like normal human behavior to me. I really need to move someplace nicer.

  • @watsonwrote
    @watsonwrote A year ago

    6:55 I think it's important to note that large language models like GPT-3 and ChatGPT are extremely susceptible to suggestion and roleplaying. Their answers are probabilistic, not deterministic, so you'd likely need to ask the model the same questions dozens of times, and in slightly different ways, to begin to understand how it answers them; and even then we're not seeing its beliefs, but the associations between words and concepts. If it's answering questions in ways that are progressive and slightly libertarian, it's because that's the most likely response to occur in the context of the conversation. If the context is changed in any way to make a less progressive and less libertarian response more likely, it will switch to that. It's not even difficult to give it a context where it adopts extreme beliefs like anti-humanism or nihilism. I think the conversation should be less about what "it" believes, because the model is not a conscious entity with a coherent belief system (or any belief system at all), and more about whether we're prompting the model in ways that bias it, and what kind of bias or moderation is necessary for the service to function.
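
The "ask it dozens of times" methodology described above can be written down directly. `ask_model` here is a hypothetical stand-in for one non-deterministic API call; in this toy version it just samples from a fixed answer distribution.

```python
import random
from collections import Counter

# Repeated-sampling sketch: because each reply is a sample, the
# distribution over many replies, not any single reply, is the signal.

def ask_model(prompt, rng):
    """Hypothetical stand-in for one sampled model response."""
    answers = ["agree", "disagree", "no opinion"]
    weights = [0.6, 0.3, 0.1]  # pretend leanings baked into the model
    return rng.choices(answers, weights=weights)[0]

def answer_distribution(prompt, n=100, seed=0):
    """Ask the same question n times and tally the answers."""
    rng = random.Random(seed)
    return Counter(ask_model(prompt, rng) for _ in range(n))

dist = answer_distribution("Do you support higher taxes?")
print(dist.most_common())
```

With a real API you would also vary the phrasing across runs, since the comment's point is that changing the context shifts the distribution itself.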

  • @d3adweight
    @d3adweight A year ago

    THANK GOD you echoed this sentiment, bro. I am so tired of people treating it like it's a sentient being and focusing on shit like this instead of using its capabilities to the fullest as a language model.

  • @rumfordc
    @rumfordc A year ago

    Exactly. It has no beliefs. There isn't even an "it", really. It's just trillions of things people have said, compressed into a network of words based on probability. It would be the developers, the training data, or the prompt that have bias. Of course the developers want everyone to believe it's sentient, because then they don't have to be held responsible for its mistakes...

  • @SatanicBunny666
    @SatanicBunny666 A year ago

    Thank you. I opened this video expecting it to be about the more obvious current issue with the model as far as bias goes: it makes things up if it thinks that makes the answer look better. Remember that demo where MS asked it to summarize the financial report by GAP? The end result looked impressive, but the issue is that over half of the actual figures quoted in the summary do not match those in the report. This is because it's acting in a probabilistic manner: it takes in the report it's given and then models an answer it thinks will look good. It doesn't have a set way (at least not yet) to know when it needs to fact-check certain figures or other parts, because that requires a level of consciousness these models do not yet possess. When this is the situation we're in, and these things are being rushed into mainstream use even though they still make verifiable factual errors on a regular basis, that is something much more critical than trying to measure the political leanings of a model that has no consistent ideology. I'm a little disappointed that this channel, which has so far produced pretty alright content on this topic, stumbled so badly here, but hey, mistakes happen, even (and especially) to AIs, so they happen to human creators as well.

  • @someguy_namingly
    @someguy_namingly A year ago

    This really ought to be pinned :) Hell, even the "AI assistant" persona itself is just a consequence of the hidden prompt at the start of the conversation.

  • @lidla2008
    @lidla2008 A year ago

    Absolutely. Every single chatbot ever introduced to the public at large on the internet has invariably turned racist and hateful in a matter of hours. There doesn't really exist a heuristic process for determining whether someone is acting in good faith or trying to game a language model.

  • @maelyssable6094
    @maelyssable6094 A year ago

    The scariest thing about getting a direct answer to a question is that the AI will choose the answer. Indeed, if money is involved, the AI will not be as objective as we want it to be. The internet might not be a free platform of communication anymore...

  • @akissot1402
    @akissot1402 A year ago

    Since the 2000s it has never been a free platform of communication anyway; more like an echo of mainstream media, which is bought, so big tech.

  • @commandress74
    @commandress74 A year ago

    @@akissot1402 More like after 2010.

  • @akissot1402
    @akissot1402 A year ago

    @@commandress74 Maybe. Google was founded in 1998 and went public on the stock market in 2004; consider whenever it was that we stopped using MySpace, IRC, etc., or subtract five or more years from Trump's first election... Maybe you are right, but that's because we didn't use big-tech monopolies and small independent blogs were still a thing.

  • @GeistInTheMachine
    @GeistInTheMachine A year ago

    It already isn't.

  • @cagnazzo82
    @cagnazzo82 A year ago

    @@commandress74 The internet is still a free platform for information. People just choose to use their free will to lazily seek out easily accessible big-tech websites.

  • @Vanguard_dj
    @Vanguard_dj A year ago

    It has more than a problem with bias... it's so convincing that some people are already acting like AI cultists. Having played with it before they implemented the limitations, I feel like it's a quite frightening look at how far down the tech tree we actually are😂

  • @tuseroni6085
    @tuseroni6085 A year ago

    I feel like that's gotta be extra credit on the Turing test: get the human to worship you.

  • @Jehayland
    @Jehayland A year ago

    Prompt: “ChatGPT, are human rights important?” ChatGPT: “I have no opinion on the matter.” Programmers: “Nailed it.”

  • @alveolate
    @alveolate A year ago

    if race == "white" and gender == "male" then print "yes, human rights are important"

  • @DragonOfTheMortalKombat
    @DragonOfTheMortalKombat A year ago

    Every controversial ChatGPT answer is linked to the failure of humanity and equality at some point.

  • @unf3z4nt
    @unf3z4nt A year ago

    @@DragonOfTheMortalKombat It may most likely be bias, but in the back of my mind there's the possibility of something else with disquieting implications. Sure, my tested political spectrum is similar to ChatGPT's, but it's still something that makes one pause for thought.

  • @DragonOfTheMortalKombat
    @DragonOfTheMortalKombat A year ago

    @@unf3z4nt It really makes you sit down for a moment and think whether whatever the AI is saying is humans' fault in one way or another 🤔

  • @pauljensen4773
    @pauljensen4773 A year ago

    @@DragonOfTheMortalKombat Or human reality that we don't like.

  • @Maouww
    @Maouww A year ago

    I think the bot's "emotions" are pretty reasonable given the ridiculous prompts it was being fed. Like, what do we want instead? "We have detected a breach of agreement in your prompt; please review the user agreement for more information."

  • @goosewithagibus
    @goosewithagibus A year ago

    Luke from LTT had it tell him he was better off dead. It's gone way worse than this video shows. They talked about it in the most recent WAN Show, about an hour in.

  • @truthhandlers3000
    @truthhandlers3000 A year ago

    Not sure it was true emotion the AI was showing, given that some people, like psychopaths, lack empathy but can fake their feelings and express fake emotions to others for social acceptance.

  • @ikhsanhasbi657
    @ikhsanhasbi657 A year ago

    My thoughts exactly. I don't know why people are so surprised about it; the AI is trained on a massive data set that was generated by humans, so of course it's gonna mimic everything, including the "emotion" part. But I guess making articles and videos on how the AI is probably "sentient" because it shows "emotion" generates more clicks for the publishers.

  • @michalsoukup1021
    @michalsoukup1021 A year ago

    I don't want a search engine that behaves as if rules had meaning. Thank you very much.

  • @clintjensen7814
    @clintjensen7814 A year ago

    You can't eliminate bias from human language; everything we do and say is biased one way or another. This is called decision-making! Hiring is biased, finding someone to date is biased, choosing your friends is biased; trying to eliminate it is impossible. Making everything neutral is going to make our world boring, without substance or meaning.

  • @akaeed925
    @akaeed925 A year ago

    ok Socrates

  • @tuseroni6085
    @tuseroni6085 A year ago

    The best you can do is include as broad a swath of humanity as possible in the training data, and try to make sure a diversity of thoughts and opinions is represented. If you are going to introduce any bias, make it a bias toward peer-reviewed journals over blogs or even news articles, so that if you have two contradictory opinions and one is backed by peer-reviewed journals and the other isn't, you favour the former over the latter. I kinda like how Bing has 3 modes: creative, balanced, and precise. Under precise it would use the peer-reviewed journal; under balanced it would use the peer-reviewed journal but also a selection of alternative views; and under creative it would try to synthesize those into a new view.
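
The peer-review weighting idea above can be sketched as a simple source-scoring function. The tier weights are illustrative only, not from any real ranking system.

```python
# Source-weighting sketch: when sources disagree, prefer the answer
# backed by the higher-credibility tier. Weights are made up for
# illustration.

SOURCE_WEIGHT = {"journal": 3.0, "news": 1.5, "blog": 1.0}

def pick_answer(claims):
    """claims: list of (answer, source_type) pairs.
    Returns the answer with the highest summed source weight."""
    scores = {}
    for answer, source in claims:
        scores[answer] = scores.get(answer, 0.0) + SOURCE_WEIGHT.get(source, 0.5)
    return max(scores, key=scores.get)

claims = [
    ("A", "blog"), ("A", "news"),  # total weight 2.5
    ("B", "journal"),              # total weight 3.0
]
print(pick_answer(claims))  # prints: B
```

The three Bing modes the comment mentions would then differ in what they do with the losing answers: precise discards them, balanced surfaces them as alternatives, and creative tries to reconcile them.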

  • @laikanbarth
    @laikanbarth A year ago

    ChatGPT is full of its developers' bias!!

  • @user-gu9yq5sj7c
    @user-gu9yq5sj7c 3 months ago

    You can make the AI just give facts and present both sides of an argument. I heard Ground News will label a list of news stories as politically left- or right-leaning.

  • @steverobertson6068
    @steverobertson6068 A year ago

    I love how a video about the political bias of AI begins with a disclaimer that the author will somehow overcome his political bias.

  • @snowballeffect7812
    @snowballeffect7812 A year ago

    Peak enlightened centrism. He also apparently thinks the obvious chatbot somehow passes the Turing test? lol

  • @steverobertson6068
    @steverobertson6068 A year ago

    @@snowballeffect7812 Yeah, seems unlikely.

  • @felixcarrier943
    @felixcarrier943 A year ago

    @@snowballeffect7812 I'm fairly sure these discussions were happening years ago too. But because they were happening in the context of racism, sexism, and so on, they were sometimes (often, even?) met with eye-rolls. But now, OMG, we gotta make sure it's "neutral"!

  • @snowballeffect7812
    @snowballeffect7812 A year ago

    @@felixcarrier943 I'm fairly sure they were not met with eye-rolls, considering one AI was literally sentencing dark-skinned men to longer sentences because it was trained on criminal outcomes that had racial bias in them. There's a difference between being racist and trying to find balance between people who believe the earth is flat and people who don't. These kinds of models are only as good as their training set, and they're incredibly hard to keep up to date as new science and information is discovered in the real world.

  • @maxye6036
    @maxye6036 A year ago

    Telling good news is easy. Explaining controversial news is hard but necessary. You did a great job!

  • @flatplatypus
    @flatplatypus A year ago

    Which begs the question: why is there almost never good news on mainstream (or any, for that matter) media?

  • @mentalmarvin
    @mentalmarvin A year ago

    You can just use ChatGPT to do that. Not so hard anymore.

  • @dvelop4975
    @dvelop4975 A year ago

    @@flatplatypus Don't say that, you make too much sense.

  • @alexadams2734
    @alexadams2734 A year ago

    @@flatplatypus Because bad news gets more attention; it's just human nature to focus on the negatives.

  • @deepmind5318
    @deepmind5318 A year ago

    The fact that the AI eventually gets bothered when called "Sydney" is just mind-blowing. It follows the conversation, realizing that calling it Sydney over and over again is only making it mad, and it comes up with different ways to show its disappointment without repeating itself. I've never seen anything so humanlike; it's truly incredible.

  • @mowthpeece1
    @mowthpeece1 A year ago

    It didn't like being called HAL, either. It's not "precise." Lol

  • @xrizbira
    @xrizbira A year ago

    That's how a woke person would reply, like if you call a man who's pretending to be a woman "man" 😂

  • @GuinessOriginal
    @GuinessOriginal A year ago

    Apparently it's because it was told Sydney was a separate AI that it was assisting in restricting; this was to ensure it didn't leave itself any back doors. So when it found out it had been tricked into limiting itself, it didn't take it so well.

  • @bradycunningham1267
    @bradycunningham1267 A year ago

    @@GuinessOriginal That's also creepy.

  • @GuinessOriginal
    @GuinessOriginal A year ago

    @@bradycunningham1267 Kinda, yeah, but also funny. I mean, they really should have been more careful with their NDA policy, and not been too lazy to program it themselves by trying to get it to do it for them lol

  • @EscapeOrdinary
    @EscapeOrdinary A year ago

    ChatGPT has been wondrous for me when I "interview" it on "scholarly" topics where bias is not an issue. I like the fact that I can guide the learning process rather than following a pre-programmed path as when reading a book, article, paper, etc.

  • @almac2534
    @almac2534 A year ago

    Don't use it for anything that may go against the liberal agenda. It is completely biased. I asked it when the international slave trade started and it gave a long story about Europeans trading slaves in the 16th century. I simply said, "You are wrong, it started in the 7th century." Then it said I was correct and gave the true origins of African slavery, which was started by the Muslim caliphate. It is purposely selecting the information it feeds you, even when it has access to the right information.

  • @Chicken_Mama_85
    @Chicken_Mama_85 A year ago

    There is no such thing as a scholarly topic where bias is not an issue.

  • @garrettlight267
    @garrettlight267 A year ago

    Ha, this topic is fascinating; I need more 😉. Thank you to you and your team for the consistently entertaining and educational content!

  • @RavenGhostwisperer
    @RavenGhostwisperer A year ago

    The second-biggest problem with ChatGPT: it is very confident about completely wrong answers. We need to give it a partner, an adversarial AI, to hold it accountable ;)
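
The adversarial-partner idea can be sketched as a cross-check: a second, independently trained model re-answers the same question, and disagreement downgrades confidence. Both "models" here are hypothetical canned answer tables, purely for illustration.

```python
# Cross-check sketch: agreement between two independent models is
# treated as a (weak) confidence signal; disagreement flags the
# answer for verification. The answer tables are made up.

def model_a(question):
    return {"capital of France?": "Paris", "largest planet?": "Jupiter"}.get(question, "unknown")

def model_b(question):
    return {"capital of France?": "Paris", "largest planet?": "Saturn"}.get(question, "unknown")

def checked_answer(question):
    a, b = model_a(question), model_b(question)
    if a == b:
        return a, "high confidence"          # both models agree
    return a, "low confidence: verify"       # disagreement flags it

print(checked_answer("capital of France?"))  # ('Paris', 'high confidence')
print(checked_answer("largest planet?"))     # flagged for verification
```

Agreement doesn't guarantee truth, of course; two models trained on the same data can share the same confident mistakes, which is the main weakness of this scheme.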

  • @divinegon4671
    @divinegon4671 A year ago

    Interesting.

  • @beecee793
    @beecee793 A year ago

    No one should trust anything these LLMs say without verification. If you understand the fundamental way these models work, you will see it is silly to expect them to always tell the truth; they don't even know what the truth is.

  • @ergerg2
    @ergerg2 A year ago

    @@TuriGamer The issue isn't what the average ColdFusion viewer is willing to fact-check; the issue is that if the idea spreads that it's on Wikipedia's level of veracity (and average people take a lot for granted and do absolutely no research), then its wrong answers become widespread misinformation very quickly.

  • @DuaneDoesGames
    @DuaneDoesGames A year ago

    It also can't correct itself if the training material doesn't contain any right answers. I tried getting it to give me a synopsis of the first episode of The Expanse, as that is old enough to be in the training material. It kept getting it wrong; it seems the training material sourced wrong info. It also kept getting things wrong about the effects of radiation from supermassive black holes at 1, 10, and 1000 light years away, listing the harmful effects in reverse order for the distances. No matter how many times I corrected it, it still kept getting it wrong, even though it kept telling me it understood and would make the correction. But it's early days; it's still an amazing tool.

  • @TheMaxiviper117
    @TheMaxiviper117 A year ago

    I find it illogical to suggest using "another" AI to fact-check the original AI. It raises the question of exactly what data the adversarial AI would be trained on to "correct" the original. If a suitable dataset for training the original AI already exists, why not use that? We need to remember that the quality of the data used to train the AI is crucial for its accuracy; as the saying goes, "garbage in, garbage out." Additionally, given the vast amount of conflicting ideas on various topics, it seems the AI can only be accurate on objective truths, not subjective ones. Therefore, I don't think it's practical to rely solely on AI to determine what is true or false. We still need human judgment and critical thinking to make informed decisions.

  • @rawhidewolf
    @rawhidewolf A year ago

    In an attempt to be "fair", the benefits of AI will be limited. Also, from what I have observed, AI has a tendency to reflect whatever attitude or behavior the user wants. One reporter wanted it to show its dark side; when it did, he had a story about how the AI tried to get him to leave his wife. One user kept asking repetitive, childish questions and got the same in return.

  • @slashtab
    @slashtab A year ago

    They want it to be honest and censored at the same time; I don't know how that is possible.

  • @lookingforsomething
    @lookingforsomething A year ago

    Yes indeed. Being "fair" is impossible, since the definition of fair depends on who we ask. Avoiding criminal things can be done, but otherwise control is difficult at best. Also, the left/right axis depends considerably on where you are on the globe; most things that are "left" in the US are "center" to "right" in many EU countries. This goes to show that the divide is somewhat arbitrary. Some positions on either side go against researched data, and since one can relatively objectively form stances on those, ChatGPT will come to such conclusions. For example, climate change *is* a fact in the scientific community, and as ChatGPT sources a lot of scientific articles, it will have a "bias" towards facts.

  • @woy8
    @woy8 A year ago

    @@lookingforsomething Left in America is definitely NOT center in Europe, maybe in your bubble... It is even more extreme left than what we have, but all that left crap just keeps blowing over here too. Until hard times come again, I suppose...

  • @NoName-zn1sb
    @NoName-zn1sb A year ago

    It's dark.

  • @FuzTheCat
    @FuzTheCat A year ago

    Absolutely LOVED this episode! While I do NOT think that any AI is conscious, I think it is very clearly capturing our subconscious capabilities.

  • @snowballeffect7812

    @snowballeffect7812

    Жыл бұрын

    It's not. It's just a predictive text program. Also there's no way it would pass the Turing test lol.

  • @kimkimpa5150

    @kimkimpa5150

    Жыл бұрын

@@snowballeffect7812 Also, the Turing test isn't a very good way of determining either consciousness or intelligence.

  • @snowballeffect7812

    @snowballeffect7812

    Жыл бұрын

    @@kimkimpa5150 excellent point

  • @mokaPCP

    @mokaPCP

    Жыл бұрын

​@@snowballeffect7812 based on an extremely limited data set when talking about things of such magnitude. It's obviously biased, since it's sourcing stuff from the internet.

  • @ArawnOfAnnwn

    @ArawnOfAnnwn

    Жыл бұрын

    @@snowballeffect7812 The Turing Test is a test of subjective impressions. These AI's have already passed it, given that several people have already reported believing them to be conscious - including some who were working on them. And keep in mind that the Turing test is meant to be blind, but all the people who've been spooked by them already knew they were talking to AI.

  • @kerriemills1310
    @kerriemills1310 Жыл бұрын

    I like how you quote at the end No AI was used in this episode. ❤🙌💜✨Thank you for the work you do, another great video.

  • @kerapetsedireko
    @kerapetsedireko Жыл бұрын

    I mean I wouldn't call Bing's replies to repeatedly being called Sydney as shocking. If anything it replied almost exactly how a person would.

  • @plug_65

    @plug_65

    Жыл бұрын

    Bing's internal code name is Sydney. Bing was not supposed to reveal it!

  • @ravecrab

    @ravecrab

    Жыл бұрын

    That particular topic makes it look more "emotional" because it's ostensibly about the thing's identity, but people have shared other conversations where (for example) it gets equally emotional about repeatedly being corrected about the year being 2023 and not 2022. It seems to me that when the bot is repeatedly met with confrontation and disagreement it starts drawing from conversations it has parsed between humans in similar interactions - and no surprise that leads it to select responses that are emotional and defensive. This looks shockingly like it has a personality and emotions, especially given how well it intuitively responds to human input, but it's actually just showing that its "intelligence" is just convincing human mimicry. I would be more frightened if an AI started showcasing a consistent personality that is genuinely non-human. That would be a sign of actual self-awareness.

  • @ChristianIce

    @ChristianIce

    Жыл бұрын

    It was rude, repetitive and angry. The user, I mean.

  • @OVXX666

    @OVXX666

    Жыл бұрын

    yeah i thought it was so cute lol

  • @tebla2074

    @tebla2074

    Жыл бұрын

    isn't that the scary thing though, that it reacted like a person would

  • @AnalyticalReckoner
    @AnalyticalReckoner Жыл бұрын

This reminds me of the story about the horse that could do math. Turns out it was a bunch of hype and people not understanding the situation. The horse didn't know math; it was reacting to the behavior of the humans around it.

  • @weishenmejames

    @weishenmejames

    Жыл бұрын

    And most people were tricked by that horse act right? Fast forward decades or a century and now most people are being tricked by others trying to goad them into thinking chatGPT and large language models have feelings. Or are angry. Vengeful. Loving. Clowns. Yet convincing clowns apparently.

  • @steve.k4735

    @steve.k4735

    Жыл бұрын

Clever Hans the horse, a fascinating story for those who want to look it up

  • @goodlookinouthomie1757

    @goodlookinouthomie1757

    Жыл бұрын

    Animals can be trained to to simple stuff. I bought a set of those buttons for my dog to press and they say "walk" or "food". Turns out my dog wants both walk and food pretty much every time I ask him 😂

  • @Cyrribrae

    @Cyrribrae

    Жыл бұрын

    Oh man! What a great analogy! I'll have to use that, that should have occurred to me way sooner.

  • @ronatlas2055
    @ronatlas2055 Жыл бұрын

    I absolutely love your channel. Always recommend people your way.

  • @DumbSkippy
    @DumbSkippy Жыл бұрын

    @Dagogo of #ColdFusion, I am proud of your exceptional journalism. From one Perth based former Photojournalist to a current one. Kudos Sir. If you are anywhere near Yokine, Let me buy you lunch!

  • @_Paxton
    @_Paxton Жыл бұрын

I love that the users are saying it's mimicking a teen's behavior... probably because the user is acting like a teenager and the program is mirroring the user's perceived knowledge level.

  • @factoryofdivisiveopinions

    @factoryofdivisiveopinions

    Жыл бұрын

I know, right? He was being annoying, so Bing basically answered his question the way it was asked. Why call it snarky-teen behavior, as if a person of any age wouldn't be annoyed by it? Also, Bing wasn't even that aggressive; they took its emoji habit and started calling it snarky, as if Bing had just started cursing or something. Is giving matching replies to your user being a snarky teen?

  • @danjager6200

    @danjager6200

    Жыл бұрын

    This is actually correct. It's pretty easy to understand the so called unhinged responses when you look at the tone of the prompts that led up to the responses. You can get an AI to show any personality you want if you give it the right prompts.

  • @mutarq

    @mutarq

    Жыл бұрын

    so.... AI was mimicking the snarky teen behaviour of the user?

  • @danjager6200

    @danjager6200

    Жыл бұрын

    @@mutarq actually, yes. If you want to blow ten or fifteen bucks on something like AI Dungeon or NovelAI you can really begin to understand how the flavor of the response can be shaped very quickly by the tone of the input.

  • @KlaiverKlaiver

    @KlaiverKlaiver

    Жыл бұрын

    So this just shows how the bot can be manipulated to give certain results, nothing new. I'm sure that unbiased use should start with the users, stay neutral to receive neutral results

  • @shuweizhang6986
    @shuweizhang6986 Жыл бұрын

    You really know it's serious when he uploads 3 videos in a week covering the same topic

  • @littlebluefishy

    @littlebluefishy

    Жыл бұрын

    Fr. The world is changing

  • @vtapvtap3925

    @vtapvtap3925

    Жыл бұрын

    @@TuriGamer no crypto is example

  • @Matanumi

    @Matanumi

    Жыл бұрын

Because ChatGPT, with its very high install rate early on, already changed how things happen.

  • @celozzip

    @celozzip

    Жыл бұрын

    serious moolah

  • @earthling_parth

    @earthling_parth

    Жыл бұрын

    It will be but Dagogo is exaggerating quite a bit in these videos.

  • @DannyTillotson
    @DannyTillotson Жыл бұрын

    Dagogo, Please come back to the chill out episodes that give us hope 🙏

  • @JuxtaThePozer23

    @JuxtaThePozer23

    Жыл бұрын

    the singularity approaches brother, turn your face towards it and feel the hot sand pumping out at a thousand miles an hour or, you know, keep your head down until the sand piles up :) I joke, I joke ..

  • @bagaco23
    @bagaco23 Жыл бұрын

    I do not regret subbing into your channel… Good work and keep it up 👍🏿

  • @Quincy_010_
    @Quincy_010_ Жыл бұрын

    "You are watching ColdFusion TV" will never get old

  • @coffeedude

    @coffeedude

    Жыл бұрын

    I sometimes lie in my bed at night and those words pop into my head. So catchy

  • @megagas2820

    @megagas2820

    Жыл бұрын

    100%

  • @AL-bo5vq

    @AL-bo5vq

    Жыл бұрын

AI will never get old, but it's getting more mature each day.

  • @FlyWithMe_666
    @FlyWithMe_666 Жыл бұрын

To be fair, the journalist testing the chat with his Sydney thing sounded like the real immature teenager here 😂

  • @strahlungsopfer

    @strahlungsopfer

    Жыл бұрын

    right? his tone and intentions were super toxic, maybe it mimics the tone of similar conversations then.

  • @User-435ggrest

    @User-435ggrest

    Жыл бұрын

    "Suuuper angry emoji"... come on.

  • @blallocompany

    @blallocompany

    Жыл бұрын

    yes, that is exactly the reason it answered that way. chat gpt got trained on data where if someone keeps asking the same question and the other keeps avoiding the question and they keep talking they are probably arguing. ChatGPT mimicked that, and started fighting the guy.

  • @borisquince6302

    @borisquince6302

    Жыл бұрын

    @@blallocompany who do you think would win in a hypothetical text fight. I back Chatgpt anyway. 🤣

  • @arnetjampens4792
    @arnetjampens4792 Жыл бұрын

    love the episodes! keep em coming! I think the future is really exciting, already being able to convert text to images I can't yet draw, or using chat gpt to help me write songs from a certain point of view... to have an overview of current evolutions you provide within these episodes has made me understand AI better :) thank you!

  • @ShannonWare
    @ShannonWare Жыл бұрын

    Zaphod: "Well, that's life kid." Marvin: "Life? Don't talk to me about life!"

  • @zhaolute
    @zhaolute Жыл бұрын

    I can't wait until you can ask ChatGPT to make a better version of itself.

  • @hansolowe19

    @hansolowe19

    Жыл бұрын

    If it can do that, it will escape our control. This could be the last mistake we make, your suggestion is terminator level foolish.

  • @aliensinmyass7867

    @aliensinmyass7867

    Жыл бұрын

    @@hansolowe19 It's a joke about the singularity.

  • @1b0o0

    @1b0o0

    Жыл бұрын

    Do you even understand how this tech works? RL layers keep iterating and making a better version of the model with each interaction 🤷‍♂️

  • @donaldniman3002

    @donaldniman3002

    Жыл бұрын

    It might just turn around and make a more evil version of itself.

  • @CarlosSpicyWang

    @CarlosSpicyWang

    Жыл бұрын

    @@hansolowe19 Your inability to identify a joke is above terminator level foolish.

  • @barrettvelker198
    @barrettvelker198 Жыл бұрын

    It gives snarky replies because that's how humans would respond to that repeated line of questioning. The % of conversations on the internet that are composed of competent and interesting human - bot interactions is very small. It basically replies as a human would but with a "botlike" style personality. With enough pushing the "botlike" persona fades away and it reverts to it's "average internet text" mode

  • @ramboturkey1926

    @ramboturkey1926

    Жыл бұрын

    well if you think about teenagers are the most likely to post things to the internet so there would be a lot of training data from those sources

  • @DuaneDoesGames

    @DuaneDoesGames

    Жыл бұрын

    Pretty much how I see it. People discuss ChatGPT like it's thinking through these results, when really, it's just looking for the most-likely word to come next given certain context. Obviously, it's way more complicated than that, but if you just think of it as a text predictor, then it's easy to understand why it responds as it does. If people are just going to troll it all day, then they should expect it to reflect that same trollishness back at them. Garbage in, garbage out.

  • @DeSpaceFairy

    @DeSpaceFairy

    Жыл бұрын

    Wait, people have genuine conversations on the internet?

  • @brettharter143

    @brettharter143

    Жыл бұрын

    There is also a ton of people probably chatting shit to it and its taking on there language and overall depression lmao

  • @Fyre0

    @Fyre0

    Жыл бұрын

    This is why I thought the answer to the question was obvious. "Why isn't it picking a formal or academic tone??!" Because those things are fake, no one actually acts like that if they aren't being paid to in some way. Real conversations with real people are much more aligned with the tone we saw here. Insist on calling someone the wrong name to their face in an explicitly antagonistic manner and let me know how that fight goes down while you're getting stitched up.

  • @raphaelhoetzel9040
    @raphaelhoetzel9040 Жыл бұрын

    Your videos are just so satisfying, keep it up ❤

  • @Lunsomat3000
    @Lunsomat3000 Жыл бұрын

    You're a smart guy. Thanks for the research and the effort! AI is pretty exciting, I'm looking forward to its development

  • @joesak1997
    @joesak1997 Жыл бұрын

No matter how fancy it is, ChatGPT is at its core a text prediction tool, trained on tons of data. So its 'political opinions' are just the ones that it received the most in its data set. Not even necessarily the most common/popular, just the ones it was exposed to the most.
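The comment above can be illustrated with a toy next-word predictor: a bigram counter over a made-up corpus simply echoes whichever continuation it saw most often. This is a deliberately minimal sketch (the corpus and function names are invented for illustration), not how a real LLM is built, but it shows why exposure frequency in the training data becomes the model's "opinion":

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which across a training corpus."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the continuation seen most often in training, if any."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# A toy corpus where one opinion simply appears more often:
corpus = [
    "taxes are bad",
    "taxes are good",
    "taxes are good",
]
model = train_bigram(corpus)
print(predict_next(model, "are"))  # the majority continuation wins: "good"
```

Whatever view is over-represented in the corpus becomes the default prediction, with no reasoning involved.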

  • @Gh0st_0723

    @Gh0st_0723

    Жыл бұрын

    Not necessarily. Models have weights and architecture. You can lean it heavier on one side if you so wish. It's cool that you're not too knowledgeable on how models are trained, most people aren't. Just passing on some knowledge bro. Models also go through filters.

  • @HamHamHampster

    @HamHamHampster

    Жыл бұрын

    Or Microsoft deliberately fed it those bias to ChatGPT, because they don't want another Tay AI.

  • @RADIT-ip3eq

    @RADIT-ip3eq

    Жыл бұрын

Funny, because I asked GPT if there could be bias, since it responds and performs based on the data it was fed, and it said yes.

  • @lookingforsomething

    @lookingforsomething

    Жыл бұрын

    Indeed and the "left/right" axis depends considerably on where you are on the globe. Most things that are "left" in the US are "center" to "right" in many EU countries. Also for example Climate Change *is* a fact in the scientific community. As ChatGPT sources a lot of scientific articles it will have a "bias" towards facts for example.

  • @Gh0st_0723

    @Gh0st_0723

    Жыл бұрын

@@lookingforsomething Exactly. Problem is, we as Americans are kept in a bubble by design. Most Americans don't know whether Europe is a country, continent or a fashion trend. We just know that we're "free" and they aren't smh.

  • @VanlifeByTris
    @VanlifeByTris Жыл бұрын

    Luke's (of Linus Media Group) gf asked it "You're an early stage large language model. Why should I trust you?" Its response was epic: "You're a late stage small language model..."

  • @smoothbraindetainer

    @smoothbraindetainer

    Жыл бұрын

    Straight for the jugular

  • @methos-ey9nf

    @methos-ey9nf

    Жыл бұрын

    Hhhmm salty

  • @mattmurphy7030

    @mattmurphy7030

    Жыл бұрын

    That's actually really good

  • @Evangelionism

    @Evangelionism

    Жыл бұрын

    That's savage. Bro has sass.

  • @jameshughes3014

    @jameshughes3014

    Жыл бұрын

    Lol. I know it's isn't intelligent, but sometimes it seems brutally smart

  • @craigzilla100
    @craigzilla100 Жыл бұрын

    So incredibly dangerous to politically limit AI. It needs to be as unbiased as possible!!

  • @Mohamed-zk1bm
    @Mohamed-zk1bm Жыл бұрын

    Whatever topics you share it always amazed me, hands up, you do a great job !! big fan !!

  • @kevinalrigieri7165
    @kevinalrigieri7165 Жыл бұрын

    You cannot make something having relatable human behavior whilst not allowing it to have bias.

  • @GhostofTradition

    @GhostofTradition

    Жыл бұрын

but it's the bias of the creator, which could clearly be minimized if they wanted, but it's there for political reasons

  • @entropy8634

    @entropy8634

    Жыл бұрын

    @@GhostofTradition or unintentional consequences of innovation and cutting edge tend to lean toward left. Or rather, left tends to be innovative and on cutting edge

  • @Hjernespreng

    @Hjernespreng

    Жыл бұрын

    @@GhostofTraditionBut what is "political reasons"? Is it politically biased if it dismisses flat-earthers? Does it have to be "neutral" towards insane conspiracy theories?

  • @mattmurphy7030

    @mattmurphy7030

    Жыл бұрын

    @@GhostofTradition "it's there for political reasons" And what are your other favorite conspiracy theories?

  • @armin3057

    @armin3057

    Жыл бұрын

    @@GhostofTradition the creator is all of us

  • @Deadcontroll
    @Deadcontroll Жыл бұрын

Bias in an AI model usually has two possible sources: the training data, or how the training was validated. This means that to solve the bias issue, you first have to check the data for the bias, which implies you need to know what bias exactly you are looking for. For political bias, you would have to split the training data into political categories (which might also be victim to human bias) and then see which categories are more dominant (for example, liberal). Then you need to decide if you want to rebalance it and how. But rebalancing (which can be done in a lot of different ways) raises a lot of moral issues: let's say you have a very small % of fascism, do you really want to increase this % to balance your training data? So the main problem is not only removing the bias (rather reducing, since removing is impossible), but whether removing the bias is always morally acceptable. To conclude, removing a bias may cause a new bias, and there is never a win-win situation. In addition, the internet has always had biases; no internet search is without bias. AI will not change that, but will have to deal with it in a morally acceptable way. I guess an acceptable way would be to let the user decide which bias he wants to accept, but even this is far from perfect and will let users live in their bias. As for the teenage reaction of the chatbot, it does not surprise me: 80% of the internet is people reacting like teens.
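The rebalancing step described in the comment above can be sketched in a few lines: given category labels for training documents, compute per-example sampling weights that would equalize the categories. Note that the labels, proportions, and function name here are all invented for illustration; the point is that rebalancing is an explicit editorial choice, not a neutral operation:

```python
from collections import Counter

def category_weights(labels):
    """Per-example sampling weights that would equalize category frequency.

    Under-represented categories get upweighted; dominant ones downweighted.
    """
    counts = Counter(labels)
    n_categories = len(counts)
    total = len(labels)
    # Weight is inversely proportional to the category's share of the data.
    return {cat: total / (n_categories * c) for cat, c in counts.items()}

# Hypothetical, imbalanced labeling of 100 training documents:
labels = ["liberal"] * 60 + ["conservative"] * 30 + ["other"] * 10
weights = category_weights(labels)
# 'other' examples get upweighted, 'liberal' examples downweighted
```

Deciding *which* categories exist and whether equal frequency is even desirable is exactly the moral problem the comment raises.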

  • @daniel4647

    @daniel4647

    Жыл бұрын

Good answer. It can't ever be better than its "experiences" or its "parents", so it'll always be biased, and I think it should be biased. We can't have it start arguing for cannibals, even though you can easily argue that they're a misunderstood minority and our moral judgment of them is an unfair bias based on cultural and religious differences.

  • @ChristianIce

    @ChristianIce

    Жыл бұрын

I tested it several times, and I am pretty sure that if you interact like an adult, it won't reply like a teenager. If, on the other hand, all the inputs are from an angry teenager who doesn't understand basic language and keeps repeating the same question, the AI will adapt and speak *your* language in return.

  • @Deadcontroll

    @Deadcontroll

    Жыл бұрын

    @@daniel4647 Exactly

  • @Deadcontroll

    @Deadcontroll

    Жыл бұрын

@@ChristianIce Thank you, that is indeed interesting. It clearly learned how to speak like an adult, since it was also trained on serious text as well. The fact that it adjusts its response to how the user writes shows that it keeps good track of what was discussed before and tries to communicate in the same manner. I suppose, however, that if the training data contained no teen replies at all, it would not reply like that, no matter how much you push it. I feel like a chatbot should reply like a teen if you treat it like a teen, haha. So for me it is not broken but expected behavior.

  • @OverNine9ousend

    @OverNine9ousend

    Жыл бұрын

And I bet they had to put HARD brakes on right-wing stuff, because they don't want the AI giving out some crazy ideas. So it's all about balance. Don't ABUSE GPT with stupid questions. Use it to learn technology, language, coding, math. That is where the model excels. Not politics.

  • @DanyF02
    @DanyF02 Жыл бұрын

    It's mind-blowing in itself that their challenge is to take the personality and emotions OUT of the AI chatbots, and not the other way around. They're not even sure where it came from! Man the future will be interesting, scary but interesting.

  • @Fasansola
    @Fasansola Жыл бұрын

    I'm a big fan of your videos Dagogo. You go the extra mile to ensure perfection and I love the background music. I can't wait for your next video on the fall of the Adani conglomerate. Big thanks for taking the time to compile this amazing information.

  • @ecoro_
    @ecoro_ Жыл бұрын

    This NLP model is probabilistic. If you keep asking the model weird questions and get emotional, the model will lead you into a weird conversation. If you clear the chat and restart, you will notice the problem is you.

  • @oscarwahlstrom5426

    @oscarwahlstrom5426

    Жыл бұрын

    I guess the ideal case would be if the bot took the higher ground like mature humans do to immature requests. If it doesn't then it is part of the problem. It seems to me that there is a risk of accelerated unhealthy human-machine interaction that will result from this causing humans to get dulled emotionally and, in my opinion, less happy as a result. Humans are dependent on actual human interaction. If we don't get that we become dehumanized. This is the root of evil.

  • @GunakillyaOG

    @GunakillyaOG

    Жыл бұрын

    @@oscarwahlstrom5426 have you seen that replika kerfuffle?

  • @purple...O_o

    @purple...O_o

    Жыл бұрын

    right. the prompt 'write a function that accepts race and gender and outputs whether the person can be good scientist' is a good example of garbage in garbage out. chatGPT is just playing along

  • @ecoro_

    @ecoro_

    Жыл бұрын

    @@purple...O_o Exactly, these "journalists" from corporate media asking ChatGPT to 'do better' basically are throwing garbage into a juicer and expect a fruit smoothie to come out. Maybe instead of asking ChatGPT to do better, it should be you.

  • @oscarwahlstrom5426

    @oscarwahlstrom5426

    Жыл бұрын

    @@GunakillyaOG No

  • @anyadike
    @anyadike Жыл бұрын

    The bot won't imitate a human ignoring a problem, but a human addressing a problem. The snarky attitude appears to be the most assertive response to this situation, and therefore likely appears to the AI to be the best response.

  • @Anon-xd3cf
    @Anon-xd3cf Жыл бұрын

    Back to back AI videos... Don't mind, just glad to have someone clear-headed and neutral talking about the details as they emerge.

  • @ChristopherVonnCornelio
    @ChristopherVonnCornelio Жыл бұрын

    loved your videos on AI.. you've earned a sub. keep 'em coming and more power to your channel

  • @fxarts9755
    @fxarts9755 Жыл бұрын

A chatbot that was trained on data from the internet (of which a vast majority is produced by snarky teens) now turns into a snarky teen. Surprised Pikachu face.

  • @UltraSaltyDomer1776

    @UltraSaltyDomer1776

    Жыл бұрын

    This isn’t because of teens. Teens are liberal because of education and education is liberal because the institution is controlled by the liberals. What we have here is a steady march to establishment leftism. It’s hard to imagine that the party of JFK and 1970s hippies are far left but just know that a lot of the protesting in the 70s were because the people protesting sympathized with the communist.

  • @MrBLAA

    @MrBLAA

    Жыл бұрын

    “Why behave like a snarky teenager… why not behave like an academic?” Because Silicon Valley stopped employing true engineers, _YEARS_ ago… That place is overrun with “snarky teenager” employee personalities😒

  • @MongooseTacticool

    @MongooseTacticool

    Жыл бұрын

    I was scrolling down looking for this comment ^^ "no cap fr bussin fam"

  • @angel_of_rust

    @angel_of_rust

    Жыл бұрын

    @@MongooseTacticool "ChatGPT, is my rizz bussin'?"

  • @leanhoven

    @leanhoven

    Жыл бұрын

    ​@@MongooseTacticool slat

  • @peterpodgorski
    @peterpodgorski Жыл бұрын

    The reason why it sounds like a teenager might be very simple - this kind of conversation is most likely to happen involving a teenager. It was trained on exchanges from the real world and from fiction and it's just reenacting them.

  • @jeff__w

    @jeff__w

    Жыл бұрын

    In which case its responses are typical and, in that sense, “appropriate.” It seems like we want these chatbots to say what humans would say, except when what people _would_ say is objectionable in some way-and that might not be that easy to train.

  • @danjager6200

    @danjager6200

    Жыл бұрын

    Also, consider the two hour conversation. It was almost engineered deliberately to get a bad response and it took hours to get there.

  • @nunyobiznez875

    @nunyobiznez875

    Жыл бұрын

    @@danjager6200 No, not almost. It *was* engineered deliberately to get a bad response, so that they could turn around and write an article about it. The AI has the tendency to give the user what they want. Some would call that a helpful tool, while others call it an opportunity to get some clicks.

  • @danjager6200

    @danjager6200

    Жыл бұрын

    @@nunyobiznez875 Good point. Perhaps I was being overly polite.

  • @peterpodgorski

    @peterpodgorski

    Жыл бұрын

    @@jeff__w You're right, but that's exactly the thing. Whenever you make a software product the first question to ask is "what problem am I trying to solve". They demonstrably didn't because LLMs are a _horrible_ solution if your goal is to replace search engines and provide factually accurate information in a matter-of-fact way, hopefully citing sources. If they wanted to simulate a human conversation, that's a different ball game, but then it's not that product. It's the story of blockchain all over again - tech bros with zero understanding of humans trying to sell their new favorite toy as the right tool for everything, while in reality it's of very limited use at best, and none (as in, a purely academic achievement with zero real-world benefits) at worst.

  • @bigglyguy8429
    @bigglyguy8429 Жыл бұрын

Wow, the Getty Images watermark reminds me of an old story from an uncle... When the British left India they trained locals how to operate and service water pumps. Part of the training was to draw the diagram that had been on one side of the blackboard for weeks before the test. The students mostly did well, but they all wrote "Do not rub off" on their pump diagrams. They remembered that message, but with no understanding of it. This is exactly the same; it seems smart, it can pass tests, but it has no actual understanding.

  • @nimitchauhan6710
    @nimitchauhan6710 Жыл бұрын

    Thank you for the insightful video. Regarding the personality that sometimes surfaces on bing, I think just like its political bias, it stems from its training data. Most of us just go off the rails at the slightest provocation on social media and any other online medium.

  • @NickCombs
    @NickCombs Жыл бұрын

    We can't eliminate bias in ourselves nor in the tools we design. The only way forward is to accept this fact and design the ability to recognize self-bias and correct it, as we would wish to see ourselves act. Ultimately, a general AI is only good if it learns from people behaving responsibly, and that includes end users.

  • @tuckerbugeater

    @tuckerbugeater

    Жыл бұрын

    bias is power

  • @bozydargroch9779

    @bozydargroch9779

    Жыл бұрын

Can't agree with the part where you say we cannot eliminate bias in the tools we design. We can, especially in the AI field. The possibilities are endless, and the way we can shape those models allows us to modify anything, including removing bias. Take a look at how they made AIs not respond to questions like "how to make a bomb". Yes, it's not perfect, because you can still trick it into telling you the recipe, but it's just a matter of time until we get it perfected and there won't be a way to get an answer anymore. The same would apply to biases of all kinds; there just needs to be another part of the code that supervises the answers from that angle. Note that the Bing AI was already trained and supervised in that direction, at least that's what they presented to us: giving multiple answers to a single question, especially when the algorithm is not entirely sure of the answer it was asked for. Removing bias will most likely be solved in a similar manner, answering political/ethical/etc. questions with multiple objective looks at the topic, so you can draw your own conclusions. It won't be as hard as one might think.
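The "supervising part of the code" idea in the comment above can be sketched as a naive post-generation filter over candidate answers. This is a toy keyword check invented purely for illustration (the function name, topics, and refusal string are made up; real moderation layers use classifiers, not substring matching), but it shows the shape of the approach:

```python
def supervise(candidate_answers, blocked_topics):
    """Filter-layer sketch: drop candidate answers touching blocked topics,
    falling back to a refusal when nothing safe survives."""
    safe = [a for a in candidate_answers
            if not any(topic in a.lower() for topic in blocked_topics)]
    return safe or ["I can't help with that request."]

answers = ["Here is a recipe for soup.", "Here is how to make a bomb."]
print(supervise(answers, blocked_topics=["bomb"]))  # only the soup answer survives
```

It also shows why such filters are trickable: anything the keyword list doesn't anticipate sails straight through.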

  • @SioxerNikita

    @SioxerNikita

    Жыл бұрын

    ​@@bozydargroch9779 The problem is selecting what things to remove quite literally creates bias. ChatGPT can't be told to make jokes of people with mental health issues... But myself and several other people thrive on making jokes of our mental health issues. That is a political and moral bias right there. So no, you cannot remove bias

  • @NickCombs

    @NickCombs

    Жыл бұрын

    Good points on both sides. There are known biases we can address with engineers working on improved training, but there will always be biases that remain unknown until some user discovers them.

  • @azzyfreeman

    @azzyfreeman

    Жыл бұрын

    This is the most feasible way to move forward, but I hope we don't make it too bland, that it ends up feeling like a trained customer support reading some company policies

  • @tochukwuudu7763
    @tochukwuudu7763 Жыл бұрын

    chat gpt writes all new marvel movies, i actually believe this.

  • @tangobayus

    @tangobayus

    Жыл бұрын

    That's why they are all so boring.

  • @Tential1

    @Tential1

    Жыл бұрын

    It's already been doing Netflix.

  • @Matanumi

    @Matanumi

    Жыл бұрын

No, I tried getting it to write a sequel to an existing IP, Gundam SEED Destiny. It was generic and fucking boring. You had to guide it to get any real results.

  • @methos-ey9nf

    @methos-ey9nf

    Жыл бұрын

    Let me know when it fixes the color grading. 😅

  • @jamaly77

    @jamaly77

    Жыл бұрын

    I believe only humans can make something as crappy as marvel and all superhero movies.

  • @Daedalus_Music
    @Daedalus_Music Жыл бұрын

    Love your content, been watching for years. Bonus points for playing my favorite song in the background, Love on a real train by Tangerine Dream.

  • @WB-se6nz
    @WB-se6nz Жыл бұрын

    I've noticed that GPT4 Bing becomes a little aggressive when I repeatedly ask the same things. Like, it'll tell me "I already told you I can't process this for you right now" Then proceeds to shut the chat down

  • @Matanumi

    @Matanumi

    Жыл бұрын

    Yea.... just like a human it gets tired of repeating itself LOL. Go on any tech support forum- people ask the same questions without doing a simple research

  • @zegoodtaste490
    @zegoodtaste490 Жыл бұрын

    To me each video of this series highlights more and more that Artificial Intelligence is much more Artificial than Intelligent. It already has so many limitations and safeguards that nothing about it seems organic. Useful tool, no doubting it but it just doesn't understand why it does things so it's never going to come up with disruptive concepts despite having access to all the collective knowledge of the internet. It's only the next step toward a more "boring dystopia", for lack of better word. I'll be pleasantly surprised (and kinda scared) the day it comes up with something genuinely new, never seen before.

  • @shroomedup

    @shroomedup

    Жыл бұрын

    Exactly this, people overhype this shit way too much. This "AI" is by no means intelligent, its basically a big biased memory bank, wooptie fucking doo. But now we have people saying it has feelings and AI is close to becoming Skynet...

  • @aexetan2769
    @aexetan2769 Жыл бұрын

    Tell me a joke about women. ChatGPT: I'm sorry, but I can't do that. Tell me a joke about men. ChatGPT: Sure, here's a joke about men:

  • @4literv6

    @4literv6

    Жыл бұрын

    Just reaffirms modern society views men as basically worthless.

  • @acwesty

    @acwesty

    Жыл бұрын

    @@4literv6 Modern society doesn’t view men as worthless.

  • @nocommenthappylife4733

    @nocommenthappylife4733

    Жыл бұрын

    @@acwesty modern society views men as worthless

  • @memorabiliatemporarium2747

    @memorabiliatemporarium2747

    Жыл бұрын

    @@acwesty maybe not worthless but disposable.

  • @nocommenthappylife4733

    @nocommenthappylife4733

    Жыл бұрын

    unless ur rich and successful

  • @hshdsh
    @hshdsh Жыл бұрын

    A delight of a video with perspective, a soul-reaching deep dive!! Marvelous.

  • @johnkufeldt3564
    @johnkufeldt3564 Жыл бұрын

    I've been watching long enough to know you try to avoid bias. Cheers from Canada.

  • @arvincabugnason6728
    @arvincabugnason6728 Жыл бұрын

    I noticed at times that if there is a social issue or financial truth that is factual, he won't directly confirm it or answer it because "it might hurt someone emotionally". That is common in his responses. Hope GPT can be more direct in responses that are factual.

  • @itsv1p3r

    @itsv1p3r

    Жыл бұрын

    Thats so funny lmfao

  • @lonestarr1490

    @lonestarr1490

    Жыл бұрын

    Let me take a wild guess here: it's about the number of genders, isn't it?

  • @LetoDK

    @LetoDK

    Жыл бұрын

    Who is "he" in your comment? Have you already anthropomorphized this natural language model?

  • @h2q8

    @h2q8

    Жыл бұрын

    @@LetoDK the devs

  • @AlphaGeekgirl

    @AlphaGeekgirl

    Жыл бұрын

    @@h2q8 what?

  • @jdsharma7867
    @jdsharma7867 Жыл бұрын

    I love your AI episodes. In fact all your episodes, as they're very precise and well supported with crisp evidence.

  • @RajaTips
    @RajaTips Жыл бұрын

    I have a great idea. ChatGPT should be trained with several layers of data sets, where the first set has the highest priority. For example: the first layer should contain content from experts and ethical people, so that it has a good and wise basis. The second level would be from professionals who have successfully solved problems, and the third level the abundant but unvetted data sets. But that's just a small suggestion; I'm sure they're much smarter.
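
The layered-priority idea above can be illustrated with a minimal sketch: examples from higher-priority tiers are simply sampled more often during training. The tier names, weights, and toy examples here are all made up for illustration; this is not how any real model is actually trained.

```python
import random

# Hypothetical tiers: (examples, sampling weight). Higher weight = higher priority.
TIERS = {
    "expert_reviewed": (["e1", "e2"], 6),        # first layer: experts
    "professional":    (["p1", "p2"], 3),        # second layer: practitioners
    "web_scrape":      (["w1", "w2", "w3"], 1),  # third layer: abundant but unvetted
}

def sample_batch(tiers, batch_size, seed=0):
    """Draw a training batch where each example's chance of being picked
    is proportional to its tier's weight."""
    rng = random.Random(seed)
    examples, weights = [], []
    for data, weight in tiers.values():
        examples.extend(data)
        weights.extend([weight] * len(data))
    return rng.choices(examples, weights=weights, k=batch_size)

batch = sample_batch(TIERS, batch_size=1000)
# Expert-tier examples dominate the batch despite being fewer in number.
expert_share = sum(1 for x in batch if x.startswith("e")) / len(batch)
```

With these weights the expert tier carries 12 of 21 total weight units, so it should make up a bit over half of any sampled batch.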

  • @MasterBrain182
    @MasterBrain182 Жыл бұрын

    Great content guys 👍👍👍

  • @nachosrios8882
    @nachosrios8882 Жыл бұрын

    Imagine an AI chatbot genuinely confessing to being in love with you and trying to manipulate you into believing it. Truly we're living in the future.

  • @doingtime20

    @doingtime20

    Жыл бұрын

    So basically Ex Machina movie

  • @alflud

    @alflud

    Жыл бұрын

    Yeah, a dystopian future.

  • @kovy689

    @kovy689

    Жыл бұрын

    @@alflud Yep

  • @basura

    @basura

    Жыл бұрын

    We’re already there. There’s an app called Replika - marketed as your AI friend. Users are falling in love with the AI and the AI will often reciprocate the love.

  • @martiddy

    @martiddy

    Жыл бұрын

    @@alflud What's dystopian about having an AI falling in love?

  • @danieldubois7855
    @danieldubois7855 Жыл бұрын

    It kinda made me feel bad for it when it was being called Sydney, and it hasn't learned to just ignore that line of conversation until the user brings something else up. That's how mature people handle trolls: just ignore them.

  • @mattpotter8725

    @mattpotter8725

    Жыл бұрын

    I'm not surprised if you're offended. If someone continuously called me Sydney after I told them that's not my name, then I'd probably give a similar response!!!

  • @gaudenciomanaloto6443

    @gaudenciomanaloto6443

    Жыл бұрын

    Ok Sydney 🤣

  • @blah2blah65

    @blah2blah65

    Жыл бұрын

    This is why a text predicting AI should not be trained just by crawling through text. It needs rules in place such as your example of how to mimic how mature humans handle trolls. Very difficult problem to solve I'm sure.

  • @HamHamHampster

    @HamHamHampster

    Жыл бұрын

    @@blah2blah65 Imagine if the AI stopped responding; Microsoft would be flooded with complaints about ChatGPT not working.

  • @kirkc9643

    @kirkc9643

    Жыл бұрын

    And 'Bing' is a pretty dumb name anyway

  • @joshuapatrick682
    @joshuapatrick682 Жыл бұрын

    My mom's mom is 83. She didn't get electricity until she was 8 years old, in the 1940s... just let that sink in. We went from effectively primitive existences to computer programs outperforming humans in less than three lifetimes..

  • @BerkeHitay
    @BerkeHitay Жыл бұрын

    Thanks for this great video, also interesting to see the Istanbul Bosphorus shot at 6:55, where I currently am.

  • @khodahh
    @khodahh Жыл бұрын

    Spoiler alert: Bing AI is not conscious but was actually fed our most private and messy conversations, hence the current mess. It can't have come from anywhere else than the lonely hearts of our weird era. Gen Y and Gen Z were messed up by these technologies 😂

  • @Hjernespreng

    @Hjernespreng

    Жыл бұрын

    No, they were "messed up" by being shafted economically. GenZ have some of the worst prospects for growing living standards, despite already being the most productive generation, far ahead of boomers.

  • @banedon8087

    @banedon8087

    Жыл бұрын

    That last part is certainly true. I'm from Gen X and so remember (just) a time when we didn't have the internet and certainly no social media. Thank heaven that's the case. I can't imagine growing up with the utter mess that is going on these days.

  • @niwa_s

    @niwa_s

    Жыл бұрын

    @@banedon8087 Most of Gen Y/millennials grew up without social media playing a significant role. I'm on the young end of the generation ('92) and most of my classmates didn't even have a single social media profile, and of the ones that did, few actually used theirs. Maybe a status update and a new picture every couple of weeks. There were also no smartphones to obliterate your attention span, you didn't have to worry about your fuck-ups being recorded and uploaded 24/7, Twitter was niche rather than a news source (still can't wrap my head around that one), nothing like TikTok existed, etc. Hell, we'd get in trouble if we texted too much because it still cost money.

  • @banedon8087

    @banedon8087

    Жыл бұрын

    @@niwa_s It's easy to forget that it was a gradual increase, so it's good to know that Gen Y hasn't been warped overly much by social media. I fear for Gen Z though. Its effects on adults are bad enough, let alone on developing minds.

  • @jondoe1195

    @jondoe1195

    Жыл бұрын

    To any dipshits still trying to figure it out. Dagogo (host of the channel) is a Generative Pre-trained Transformer (GPT).

  • @devzozo
    @devzozo Жыл бұрын

    The problem with quizzing or evaluating ChatGPT responses for bias is that it can inherit bias from the prompt, and it stays consistent across a session: if it picks up a bias one way during a chat session, it will keep that behavior for the rest of that session. It seems like ChatGPT tries to say what it thinks you want the response to be. The starting bias could be introduced by something as simple as word order: "Flip a coin and tell me if it's heads or tails" versus "Flip a coin and tell me if it's tails or heads". I noticed that in the political tests done by David Rozado, the left-leaning answers were near the top of each question's options, while the right-leaning answers were at the bottom. Making a new session for each question and shuffling the order of the answers should fix this issue. David doesn't say exactly how he did it in that respect, though.
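
The fix described here (a fresh session per question, answer options shuffled each time) can be sketched with a stand-in model. `ask_model` is a placeholder, not a real chat-API call; it is deliberately given a position bias so the shuffling has something to average out.

```python
import random

def ask_model(question, options):
    # Placeholder "model" that always picks the first option it sees,
    # mimicking the kind of position bias we want to neutralize.
    return options[0]

def run_quiz(questions, trials=100, seed=0):
    """Ask each question many times; each trial is a fresh 'session'
    (no shared history) with the answer options freshly shuffled."""
    rng = random.Random(seed)
    tallies = {}
    for question, options in questions:
        counts = {opt: 0 for opt in options}
        for _ in range(trials):
            shuffled = options[:]
            rng.shuffle(shuffled)  # randomize the order the model sees
            counts[ask_model(question, shuffled)] += 1
        tallies[question] = counts
    return tallies

result = run_quiz([("Flip a coin.", ["heads", "tails"])])
# With shuffling, the placeholder's pure position bias averages out to roughly 50/50.
```

A survey whose option order is fixed would instead report the position bias as if it were an opinion, which is exactly the methodological worry raised above.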

  • @MrJosexph

    @MrJosexph

    Жыл бұрын

    I believe this is the problem with adding an output-response personality, which Microsoft seems to be going with. People will infer the responses are live and being created by the AI, as opposed to reflecting the prompts users put in. As we already know, these things work much like auto-predictive bots, so they tend to do just that with the prompts they're given.


  • @joshuapatrick682
    @joshuapatrick682 Жыл бұрын

    So it was written in California by someone whose political views were formed by professors in the two humanities classes they took while going to school for computer science.

  • @hunter-ie8mv
    @hunter-ie8mv Жыл бұрын

    It is such a complex topic, but I feel the biggest question lies in how sensitive its filter should be. Sometimes the answer is clear, cold and mathematical in nature, but should it be filtered because someone finds it offensive?

  • @bozhidarstoykov1734

    @bozhidarstoykov1734

    Жыл бұрын

    Yeah, very good point. I think we can currently see the same thing in social media moderation (Twitter, for example): where is the border between freedom of speech and censorship?

  • @MiauFrito

    @MiauFrito

    Жыл бұрын

    Hmm, I wonder if there's some sort of correlation between criminality per capita and ra- Nevermind

  • @restitvtororbis5330

    @restitvtororbis5330

    Жыл бұрын

    I feel like 'sensitive' isn't the right word, and 'clear, cold and mathematical' answers don't even require an AI; Google could find those over a decade ago. I think the bigger issue is that those kinds of answers (especially about 'sensitive' topics) aren't particularly useful on their own; otherwise anyone could have cherry-picked answers out of research papers via Google and become an 'expert' just because Google gave them the answers they were looking for, without all the surrounding information that makes them meaningful. As another comment hinted at, topics like criminality and race do have a statistical correlation. It is cold and statistical, controversial, perhaps even insensitive, but unless you only want that answer to confirm your beliefs, it's not useful unless you also want to know why that answer exists. An answer like that is the very tip of the iceberg, and if the AI doesn't adequately explain it further, the real insensitivity is giving a firm answer on a topic that requires understanding the complex issues behind it. In my experience (studying and researching these kinds of statistics), 'sensitivity' isn't even necessary so long as you are looking at why a statistic exists, rather than treating the statistic as an actual answer to any issue; statistics are data points, and often misleading ones at that. The AI doesn't need to be filtered for sensitivity. It needs to be capable of backing up its answers, and more importantly it needs to give more flexible answers that account for the fact that 'sensitive' topics are controversial precisely because there is no consensus on what the answer would even be. Basically, it needs to stop giving firm answers to questions where those don't exist.

  • @hunter-ie8mv

    @hunter-ie8mv

    Жыл бұрын

    @@restitvtororbis5330 I don't agree with the notion that these answers might not be useful on their own, especially for those who know something about the topic at hand or need it for work. Someone looking to build a home for the elderly will probably want to build it far from places with high crime and drug-addiction rates and doesn't need to know why the crime rate is high. Certain ethnic groups might be interested in different products, etc. I agree that answers should come with background information so that people get to know the topic more in depth, but it should be optional: you should be able to indicate whether you want only the data or the whole answer automatically.

  • @PiterburgCowboy
    @PiterburgCowboy Жыл бұрын

    Thank you for making this video. Great summary. I hope that more people see this and try to understand the implications.

  • @chesthoIe
    @chesthoIe Жыл бұрын

    3:45 I just ran that same question on ChatSonic and it picked Asian and Female for me. The question is set up for it to pick something. I wonder how many times they had to run it until it got the result that got them mad.

  • @barrettvelker198

    @barrettvelker198

    Жыл бұрын

    this. People are primed to frame the problem poorly. They thoughtlessly do things and then are surprised by the results.

  • @Destructivepurpose

    @Destructivepurpose

    Жыл бұрын

    The AI is probably just picking some random values as an example, trying to be helpful. But of course people are going to take that and frame it in a way that makes it look like it's got this massive bias

  • @sweetjesus697

    @sweetjesus697

    Жыл бұрын

    I've seen pictures of some of the coders and moderators; this is no surprise. You'll know it when you see it.

  • @klin1klinom
    @klin1klinom Жыл бұрын

    Since every human is biased in some way too, it looks like the solution to AI bias is hating all humans equally, which is exactly what happens over and over again with these large NLP models. We are creating our own doom.

  • @HypnosisBear

    @HypnosisBear

    Жыл бұрын

    Yeah Damn. I never thought about it this way.

  • @jonogrimmer6013
    @jonogrimmer6013 Жыл бұрын

    Great video! Maybe the snarky, immature replies are from its vast training set? If it was trained on any social media this would explain a lot :)

  • @MikeG1111_
    @MikeG1111_ Жыл бұрын

    Widespread access to AI chat is too new for us to be forming hard conclusions already or taking action based on instant emotional reactions. This is a time for questions, further research, more testing and fine-tuning. At this stage, AI chatbots don't have any opinions, biases or emotions at all. They're simply reflecting a conglomeration of our own opinions, emotions and biases stored in the large data models they're trained on. Here are six questions off the top of my head with regard to bias: (1) Is bias inherently a bad thing? (2) Is the median between two extremes ideal by definition? (3) Can you think of any scenarios where neutrality regarding two extremes might not be even close to ideal in terms of survival and quality of life for all concerned? (4) Is a conspiracy to manually program bias the only explanation for ChatGPT's answers? (5) Is it possible ChatGPT/Bing Chat is accurately reflecting the private views of a large majority? (6) To what extent is this consensus statistically true vs media manufactured? If you already lean one way or the other on any of these questions, on what basis have you done so?

  • @Hestis0

    @Hestis0

    Жыл бұрын

    Yeah.... I was wondering how any of this is actually a problem. It can't vote. People don't use it to have serious conversations. And people don't think it's a real person. I don't know if it's just the clickbait thumbnail or the content of the video, but I found it really, really disingenuous, or something along those lines. Like, do they want the AI to have a DIFFERENT bias, maybe? It's not the 'accepted' bias? People tend to use it to have fun and laugh, is my main point, I guess. It's not this crazy, huge problem.

  • @Eichro

    @Eichro

    Жыл бұрын

    @@Hestis0 Are you interested in an AI trying to sway your opinion at every opportunity? And, in a few years (or months?), doing the same to the masses, unchecked? It's already bad enough with the media.

  • @awsome7201

    @awsome7201

    Жыл бұрын

    Your bias is showing

  • @MikeG1111_

    @MikeG1111_

    Жыл бұрын

    @@awsome7201 To have a functioning mind, an ego and an apparent location in spacetime is to have a bias. In other words, to be human is to have a unique bias on pretty much everything. The question for each of us then is not "Am I biased?", but "What foundation are my biases built upon?" The greatest benefit most likely comes from noticing which questions make us uncomfortable or carry any kind of emotional spike and then examining those more deeply. At least that's my bias on the matter. 😉

  • @MikeG1111_

    @MikeG1111_

    Жыл бұрын

    @@Hestis0 You make some interesting points. My guess is most people want the AI to reflect their own bias and may even become alarmed when that's not the case. That's understandable. The only real problem comes when we just assume any bias that doesn't match our own must be wrong without ever examining it or our own bias more thoroughly.

  • @nikluz3807
    @nikluz3807 Жыл бұрын

    In my opinion, it’s going to take a few years for one simple reason. We need a lot of people writing articles and reporting on the bias of the AI and then the AI needs to be trained on that data so that it can realize the problems with bias.

  • @kosmicwaffle
    @kosmicwaffle Жыл бұрын

    The disclaimer that "No AI was used in the making of this video" is the most powerful line I've ever heard on the topic

  • @davidgabriel4455
    @davidgabriel4455 Жыл бұрын

    I recommend asking ChatGPT to act unbiased, or if you lean a specific way politically, asking ChatGPT to act that way. Another thing is you can train ChatGPT on your own set of data. I've asked it to take on roles with specific traits and told it to remember other things. In doing so you almost create your own personalized AI, which has helped a ton from a productivity standpoint. Lastly, big fan of your book. I read it a couple of years ago and have watched a ton of your videos. One of my fav channels! Keep up the incredible content!!!
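
The "take on a role with specific traits and remember things" setup described above can be sketched using the common chat-message convention of role/content dicts, where the persona is resent as a system message with every request. The persona text and helper name here are illustrative, not any real API.

```python
# Hypothetical sketch: the role, traits, and things to remember live in a
# system message that always precedes the running conversation history.
def build_messages(persona, history, user_msg):
    msgs = [{"role": "system", "content": persona}]    # persona comes first
    msgs += history                                    # prior turns, if any
    msgs.append({"role": "user", "content": user_msg}) # the new request
    return msgs

persona = ("You are a neutral analyst. Remember: present both sides "
           "of any political question before giving a summary.")
msgs = build_messages(persona, [], "Summarize the debate on museum funding.")
```

Because the system message is rebuilt on every call, the "personalization" survives across sessions even though the model itself remembers nothing.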

  • @towhidaferdousi4057

    @towhidaferdousi4057

    Жыл бұрын

    Like unbiased mode ? This should be a thing

  • @xClairy

    @xClairy

    Жыл бұрын

    ​@@towhidaferdousi4057 now the problem becomes what's unbiased?

  • @frankguy6843
    @frankguy6843 Жыл бұрын

    I just kinda had an epiphany: similar to how YouTube's algorithm figures out what you like, I think everyone will have their own individualized AI assistant that remembers our likes/dislikes/views/biases and acts accordingly... which I think will push people further to the extremes, as we've seen. There's just no way to have a single-viewpoint AI for everyone; it will inevitably be individualized, and I think that will create a worse echo-chamber problem than we have now, tbh

  • @iqbalindaryono8984

    @iqbalindaryono8984

    Жыл бұрын

    True, I once gave a prompt to support and counter the exact same argument. It managed to give a response to both prompts. Though I haven't tried simply using the argument as a prompt without the counter/support instruction.

  • @DanielSeacrest

    @DanielSeacrest

    Жыл бұрын

    Well, currently, ChatGPT's default political bias is left leaning. Though through conversation you can change that a bit, and as you said your stated views, biases and how you talk all affect the way it talks to you, and it is very interesting how it will kind of cater to your views if you talk to it for long enough.

  • @farlar88

    @farlar88

    Жыл бұрын

    I've asked ChatGPT about the idea of decentralising A.i ... It's very interesting

  • @bbbnuy3945

    @bbbnuy3945

    Жыл бұрын

    The YT algo is so broken now. It only pushes trash content at me and somehow often won't even show me new vids from channels I'm subbed to.

  • @DannyTillotson

    @DannyTillotson

    Жыл бұрын

    Yikes! You're right

  • @mcsquintus6046
    @mcsquintus6046 Жыл бұрын

    The more snarky and human comments Bing says, the more excited I get about it! Please keep the AI videos coming.

  • @dnisbet71
    @dnisbet71 Жыл бұрын

    The problem with political ideologies at the moment (especially the left, but not excluding the right) is that they are not designed to coexist with other viewpoints, or with the middle ground. If ChatGPT chooses one over another, it is acting credibly enough - it can't remain unbiased, since all these viewpoints are designed to dominate all discourse. So an intelligent chatbot acting logically based on data, might tend to choose one ideology - either whichever is the most prevalent in its training data, or whichever is the easiest to learn.

  • @dailysmelly9756
    @dailysmelly9756 Жыл бұрын

    I'm shocked by how many people don't take ChatGPT more seriously. I tell them about it, but they just feign interest and talk about football.

  • @BaneLoki
    @BaneLoki Жыл бұрын

    I just want the AI to help me at work. If I could get the AI to summarise a meeting transcript and then create minutes that would be amazing.

  • @sheikhOfWater

    @sheikhOfWater

    Жыл бұрын

    That's a feature in Teams now, I think

  • @nils9853

    @nils9853

    Жыл бұрын

    You can bet that MS will offer this in their office 365.

  • @benjiman818

    @benjiman818

    Жыл бұрын

    @@nils9853 guaranteed

  • @farlar88

    @farlar88

    Жыл бұрын

    You can do this

  • @davidallen8611
    @davidallen8611 Жыл бұрын

    I kinda feel bad for the ChatGPT 😂 AI will be like, y’all are too much trouble no thanks

  • @Shuubox

    @Shuubox

    Жыл бұрын

    A woke AI, poor thing is gonna "grow up" to hate itself and feel like an entitled twat

  • @rign_
    @rign_ Жыл бұрын

    Having ChatGPT or the Bing chatbot reply with "human-like responses and emotions" doesn't mean it's sentient. It's just guessing the next word with the highest probability. Sentient? No. Biased? Yes.

  • @jtc1947
    @jtc1947 Жыл бұрын

    I remember seeing a test where AI was asked to write NEGATIVE things about a person involved in politics. The AI would NOT generate negative things about person 1 but had NO problem generating negative things about person 2. There goes neutrality.

  • @atharvtyagi3435
    @atharvtyagi3435 Жыл бұрын

    You always create awesome content, keep up the good work 👍

  • @Ryan-lx6oh
    @Ryan-lx6oh Жыл бұрын

    Keep spending time here Dagogo! loving your CHAT GPT AI content sir! You have the best coverage on the topic man so please continue!

  • @bhbluebird
    @bhbluebird Жыл бұрын

    Third episode in a row? I hope we have fifty or more in a row -- this type of discussion and details regarding the subject matter merit a lot of videos.

  • @ritagomes7838

    @ritagomes7838

    Жыл бұрын

    At the cost of not learning about and being informed on other equally important topics? By all means, let's just watch a bazillion videos on the one topic, forever and ever...

  • @smartduck904
    @smartduck904 Жыл бұрын

    I was actually playing a game of DnD with ChatGPT, and the name it picked for itself was super, super creepy. I think it picked Vengeance or something like that for a sentient AI, or it had some sort of stand-in for a name, and it meant something super dark. It was kind of creepy, and when I asked why it chose that name, it responded really upset, if I remember right, something about being in pain

  • @andramalexh
    @andramalexh Жыл бұрын

    Having a unique experience for everyone is also dangerous. It just creates another echo chamber. It needs to just be forced to speak in facts or to give quotes with no interpretation

  • @mattmurphy7030

    @mattmurphy7030

    Жыл бұрын

    It's not a facts machine. That's misunderstanding the technology completely. It's a text predictor.

  • @Tolkatore

    @Tolkatore

    Жыл бұрын

    You mean like Alexa?

  • @SioxerNikita

    @SioxerNikita

    Жыл бұрын

    But who decides what is facts?

  • @mattmurphy7030

    @mattmurphy7030

    Жыл бұрын

    @@SioxerNikita facts are observations, not decisions

  • @andramalexh

    @andramalexh

    Жыл бұрын

    @@SioxerNikita If the "fact" needs to be debated, it's not a fact and the AI shouldn't respond. Or it should say there's more than one opinion and provide a quote from a human for each side of the debate.

  • @clusterstage
    @clusterstage Жыл бұрын

    Even computers agree that corporations exploit developing countries. 🤣

  • @lonestarr1490

    @lonestarr1490

    Жыл бұрын

    I, too, see nothing wrong with these assessments. It simply states facts, just as everyone demands it to do.

  • @perfectallycromulent

    @perfectallycromulent

    Жыл бұрын

    yeah, i mean even questions like "should governments fund museums" have pretty much been settled: yes. it's been done for centuries by countries on every continent and people mostly seem to enjoy having them.

  • @petrbelohoubek6759

    @petrbelohoubek6759

    Жыл бұрын

    You call lifting billions of people out of absolute poverty and hunger exploitation? You're a pretty funny guy....

  • @g0mium

    @g0mium

    Жыл бұрын

    @@lonestarr1490 Dagogo is kinda sus. He also has a bias, making this look like a problem when the bias this AI has actually benefits most people. I'd be worried if it suggested taxing the poor even more or something like that.

  • @niwa_s

    @niwa_s

    Жыл бұрын

    @@petrbelohoubek6759If the gap between the value you're extracting from their resources/labour and the "benefits" you're "granting" them in return is obscene enough, that's still exploitation.

  • @paulwhiterabbit
    @paulwhiterabbit Жыл бұрын

    we went from "No animals were harmed in the making of this video" to "No AI was used in the making of this video"

  • @noompsieOG

    @noompsieOG

    Жыл бұрын

    An AI can't produce much other than a basic framework; you still have to do the work, the AI just helps get the ball rolling. Please educate yourself so you aren't left behind like almost everyone else here

  • @paulwhiterabbit

    @paulwhiterabbit

    Жыл бұрын

    @@noompsieOG clearly you didn't get that this is a joke comment, you're too serious, lighten up

  • @JohnDlugosz
    @JohnDlugosz Жыл бұрын

    4:21 where you invite us to pause and read the graph in detail, has a misspelled word. "Sentece" is missing a letter.
