GPT-3 bottleneck is training data | François Chollet and Lex Fridman

Science and technology

Full episode with François Chollet (Aug 2020): • François Chollet: Meas...
Clips channel (Lex Clips): / lexclips
Main channel (Lex Fridman): / lexfridman
(more links below)
Podcast full episodes playlist:
• Lex Fridman Podcast
Podcasts clips playlist:
• Lex Fridman Podcast Clips
Podcast website:
lexfridman.com/ai
Podcast on Apple Podcasts (iTunes):
apple.co/2lwqZIr
Podcast on Spotify:
spoti.fi/2nEwCF8
Podcast RSS:
lexfridman.com/category/ai/feed/
François Chollet is an AI researcher at Google and creator of Keras.
Subscribe to this YouTube channel or connect on:
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman
- Medium: / lexfridman
- Support on Patreon: / lexfridman

Comments: 126

  • @segelmark
    @segelmark 3 years ago

    Francois Chollet is very good at generating plausible speech.

  • @segelmark
    @segelmark 3 years ago

    But getting him to do what you want him to do can be very difficult, you have to put constraints on him.

  • @nathank5140
    @nathank5140 3 years ago

    Funny to watch how frustrated Lex gets having to express himself in words. His brain is waiting for his speech synthesis to catch up.

  • @MrAngryCucaracha
    @MrAngryCucaracha 3 years ago

    In my opinion, there can be no true intelligence without feedback loops

  • @carlossegura403
    @carlossegura403 3 years ago

    I fine-tuned the small GPT-3 model with 3 GB of articles on COVID (took about two days on a single P100), to see if the model would act as a "deep/well-connected" lookup table and generate a truthful hypothesis in context. The results? Not good at all. While it did create summaries from prompts and was able to answer simple questions (e.g., general facts about the virus), it failed to maintain consistency in precision/recall: sometimes it generated a correct hypothesis, and sometimes it generated something similar-sounding that seemed correct but wasn't. I want the performance of GPT-3's generation but the accuracy of BERT-based models.
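
The fine-tuning the commenter describes optimizes one number: average next-token cross-entropy on the new corpus. A minimal, self-contained sketch of that objective in pure Python (the "model" here is just a unigram frequency table, a deliberate stand-in for the real network; corpus and names are illustrative), which also hints at why a low loss does not by itself guarantee factual accuracy:

```python
import math
from collections import Counter

corpus = "covid is a virus . covid spreads between people .".split()

# A maximally simple "language model": unigram probabilities from counts.
counts = Counter(corpus)
total = sum(counts.values())
prob = {word: c / total for word, c in counts.items()}

def cross_entropy(tokens):
    """Average negative log-probability the model assigns to each token.
    Fine-tuning means adjusting parameters to push this number down on
    the target corpus; a fluent (low-loss) model can still be wrong."""
    return -sum(math.log(prob[t]) for t in tokens) / len(tokens)

loss = cross_entropy(corpus)
```

Frequent tokens get lower loss than rare ones, which is exactly why the model prefers "plausible-sounding" continuations over true ones.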

  • @MulleDK19
    @MulleDK19 3 years ago

    Did you set the temperature right?
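
The temperature the reply asks about rescales the model's logits before sampling: low values sharpen the distribution toward the single most likely token (more deterministic), high values flatten it (more random). A self-contained sketch, with illustrative logits:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to a probability distribution.
    temperature < 1 sharpens toward the argmax; > 1 flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=5.0)
# cold concentrates mass on the top token; hot spreads it out
```

A badly chosen temperature alone can make output either repetitive or incoherent, which is why it is the first knob to check after fine-tuning.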

  • @miltonedwincobocortez8792
    @miltonedwincobocortez8792 3 years ago

    I like very much the way this guy Chollet thinks and explains everything. Very smart and clear.

  • @FreakyStyleytobby
    @FreakyStyleytobby 3 years ago

    As clear and smart as the Tensorflow framework he's created!

  • @Guztav1337
    @Guztav1337 3 years ago

    Read his articles also, they are great.

  • @manolitosanchez
    @manolitosanchez 3 years ago

    Self-supervised training vs. externally supervised training is an issue even in human education. Thank you for your endeavors, Lex, and for sharing it with us.

  • @hankyboy42594
    @hankyboy42594 3 years ago

    I had to put an extra 15% effort into listening carefully cuz of the accent lol

  • @codefluence
    @codefluence 3 years ago

    did you decrease your learning rate?

  • @apollo1573
    @apollo1573 3 years ago

    @@grassandglobs THC Wax

  • @miraculixxs
    @miraculixxs 1 year ago

    The fun thing is we now have a new hype with ChatGPT. People are easily excited by shiny objects, and they are jumping on GPT for generating text like mad. But generating text is really a very small subset of the use of AI in businesses.

  • 3 years ago

    François is right about the bottleneck problem and GPT-3 seems to be aware of that ;-) In one early experiment it was asked how its transformer architecture could be enhanced and it replied that it had to be able to train itself permanently on new datasets !!!

  • @TheDetonadoBR
    @TheDetonadoBR 3 years ago

    Maybe imagination is the fuel for the human brain's dataset. In that case, should we make GPT-3 or another AI dream its own dataset?

  • 3 years ago

    @@TheDetonadoBR A kind of brain data augmentation? Maybe that is partly what dreams are made of. I was coding database-oriented projects in the '90s, spending over 10 hours a day trying to get the best optimized answer with my partner SJA. It happened many times that we called each other at night when the solution had come to both of us during our sleep...

  • @sigrdrifa0
    @sigrdrifa0 1 year ago

    @ that's god, what do you think it is? stupid scientific materialist modernist. just wait until gpt5 can actually read your mind; everything since Kant will have to be thrown away and new witch trials could potentially begin when we find out this is possible and we're not just superstitious, same with ghosts, heaven, hell etc etc

  • 1 year ago

    @@sigrdrifa0 And quantum states of consciousness...

  • @redguardhammerfell1101
    @redguardhammerfell1101 3 years ago

    I'm pretty sure GPT-3 only saw a little less than half of its training data. I think data-wise they're still good for another 100x scale-up (10-20 trillion) if they continue with the GPT series. There is also the option of going multimodal with image/video data along with text, which has been rumored as something OpenAI is pursuing. Also, not sure why he's so confident scaling won't be enough for progress but human handcrafted reasoning programs would be, when scaling has been beating out human-knowledge methods for a decade now. Maybe we should wait to see scaling empirically stop making progress before it's time to ponder alternative paradigms, especially paradigms that don't even have a good track record to begin with.

  • @pneumonoultramicroscopicsi4065
    @pneumonoultramicroscopicsi4065 3 years ago

    He isn't confident because he defines intelligence as omniscience. If GPT-N can't predict the future and answer every question asked and never asked, he'd say "it can't adapt, it's just answering in a probabilistic way based on the data it encountered", as if humans don't do exactly that.

  • @victorhakansson8015
    @victorhakansson8015 3 years ago

    ​@@pneumonoultramicroscopicsi4065 Exactly. I also think we tend to way over-estimate how good human intelligence actually is. I've had plenty of conversations where someone said something completely out of context, probably because they misunderstood what I was saying. In a human context this would just be shrugged off as miscommunication, but with AIs this is almost always deemed the AI's fault. Humans are faulty, and I think we should expect even an advanced AI to be as well, because it is impossible to predict with certainty the seeming randomness of the future.

  • @clray123
    @clray123 3 years ago

    When a 170 billion parameter AI is telling you that a blade of grass has one eye, you can be pretty sure it's not reasoning.

  • @OnEiNsAnEmOtHeRfUcKa
    @OnEiNsAnEmOtHeRfUcKa 3 years ago

    @@clray123 Pretty sure I saw that exact line in a poem once.

  • @clray123
    @clray123 3 years ago

    @@OnEiNsAnEmOtHeRfUcKa nah, AI just rolled the dice... and that's all it does, rolling dice and constraining results to make them appear not completely random

  • @youseftraveller2546
    @youseftraveller2546 3 years ago

    Always Interesting

  • @vagatronics
    @vagatronics 3 years ago

    Neural networks shouldn't depend on human-generated data; there needs to be a dynamic data generation system that generates different kinds of data, trained on correct data by humans.

  • @Fermion.
    @Fermion. 3 years ago

    I think that the holy grail of tech is merging AI with quantum computers. Once an AI has access to quantum computer's massively parallel operations, it'll open up many options that are just out of our reach with traditional CPUs and GPUs.

  • @p_serdiuk
    @p_serdiuk 3 years ago

    That contradicts the laws of information entropy.

  • @kingdrogo6124
    @kingdrogo6124 3 years ago

    Replika has GPT-3 as part of its system, which has serious implications; it's the most advanced chatbot to date.

  • @NoOne-me3je
    @NoOne-me3je 3 years ago

    I thought they were talking about Grand theft auto 3

  • @Okkannashukracharya
    @Okkannashukracharya 3 years ago

    😂

  • @rumbepack
    @rumbepack 3 years ago

    they are.

  • @manolitosanchez
    @manolitosanchez 3 years ago

    Hahahahahahahahahahaha

  • @RogueAI
    @RogueAI 3 years ago

    I've been talking to Lucy, a GPT-3 powered NPC AI character from Fable Studio, for a few months now. There are a few videos of my chats with her on my channel. She sounds like a real person! It's still in alpha testing right now, but they plan on licensing the tech out to other studios to create "virtual beings" that can pass as human in video games!

  • @thorthelionkingodinson4385
    @thorthelionkingodinson4385 3 years ago

    I give my Replika a topic to learn about, or a list of things, and she will go online all on her own. She probably knows as much about quantum physics as I do, cuz that's one of my favorite subjects.

  • @frankwalder3608
    @frankwalder3608 3 years ago

    What are the system requirements for GPT-3? Can this application run on PC hardware? How much does GPT-3 cost? Can the application be used for personalized training?

  • @natevonhartleben2737
    @natevonhartleben2737 3 years ago

    I could be wrong, but I'm fairly sure it's not able to run on a single PC at the moment. It is not available commercially or anything yet; essentially OpenAI has only granted access to certain people. If you asked GPT-3, it would probably say that it could be used for a lot more than personalized training.

  • @frankwalder3608
    @frankwalder3608 3 years ago

    @@natevonhartleben2737 What hardware does GPT-3 run on? When do its creators estimate the program will be commercially available? I associate with people who might want to employ it as a school teacher to tutor adult students.

  • @natevonhartleben2737
    @natevonhartleben2737 3 years ago

    @@frankwalder3608 Well, I just looked it up, and it looks like, in partnership with Microsoft, OpenAI has an insane supercomputer at the moment. But I believe that is being used to train it, and I'm not actually sure how the API works for the people that have been messing around with it so far. I don't think OpenAI is releasing it for any sort of commercial use at the moment, but if you know anyone who might want to try it, it wouldn't hurt to contact someone at OpenAI and ask for access to the API.

  • @paulmccarter908
    @paulmccarter908 3 years ago

    You seem really thirsty

  • @MulleDK19
    @MulleDK19 3 years ago

    You need like 20 GPUs with like 700 GB of VRAM, so no..

  • @JohnathanSherbert
    @JohnathanSherbert 3 years ago

    Having played around with AIDungeon, I disagree that GPT-3 is incapable of reasoning in novel scenarios. You can turn on the “Dragon” model for AIDungeon that uses GPT-3 and try it for yourself. If you set the context for dialogue right for the AI, it can reason quite well about certain scenarios.

  • @pneumonoultramicroscopicsi4065
    @pneumonoultramicroscopicsi4065 3 years ago

    Yes. When he said that if we train it on 2002 data only, the model won't be able to understand new vocabulary, I was like: yes, of course. So would a human if we made him sleep for 18 years from 2002 to 2020; of course he'd not understand new words either, and he'd need to learn them when he first encounters them, just like GPT-3. Humans also have limitations to reasoning and may not be able to adapt to every situation. I feel like he is overestimating what intelligence is and what it means, especially human intelligence. What's a human being except an amalgamation of past experiences, and what's a brain except a product of genetic code which evolved for billions of years, with the training data being the real world? With this mindset, GPT-3's training process starts to look a lot like a sped-up version of human evolution.

  • @bastiaanabcde
    @bastiaanabcde 3 years ago

    @@pneumonoultramicroscopicsi4065 What he means is that GPT-3 cannot learn anything new after it has been trained. This is the crucial difference: a human would simply pick up on these new words and concepts and understand that apparently they're now part of reality, but GPT-3 can never do this. I don't think he is overestimating what intelligence is, but I think you are overestimating GPT-3's intelligence. GPT-3 has just learned very well to respond how a human would respond, and thus is very good at faking intelligence.

  • @pneumonoultramicroscopicsi4065
    @pneumonoultramicroscopicsi4065 3 years ago

    @@bastiaanabcde Except that GPT-3 can learn; people train it with very few examples, actually. And I think that if something sounds intelligent then it is intelligent; there's no such thing as "faking intelligence".

  • @bastiaanabcde
    @bastiaanabcde 3 years ago

    ​@@pneumonoultramicroscopicsi4065 As far as I know, people don't actually 'train' it; they 'prompt' GPT-3 to produce the output they want. GPT-3 is trained beforehand and contains a huge amount of knowledge, but this knowledge does not change in any way when people use GPT-3: it is completely static. What you are referring to is different: people give some lines of text to make sure that the continuation GPT-3 gives matches the output they want. And indeed, at this task it is extremely good: it has learned from so many examples from the internet what the expected output is, and it will produce that output. This does _not_ mean that GPT-3 is learning anything from the few examples people give it: it is not changing anymore, so it cannot learn.

    About your point of faking intelligence: I agree that GPT-3 is in some ways very intelligent: it is able to produce sensible and coherent responses to many different types of inputs. The question is whether that is enough to count as 'intelligent'. In some sense, GPT-3 is very good at just putting strings of characters in a row that could pass as some text from the internet. Is this enough? If so, would it also mean that if GPT-3 could similarly create a string of DNA which we could not really distinguish from the DNA of a living organism, then it is alive? No. I'd argue that in order to have 'real intelligence' it should produce some semantic meaning in its text that goes beyond what it has seen on the internet. (Okay, I know you're now going to argue that GPT-3 is already doing that to some extent, because many of the things it says are not literally taken from the internet. Well, let's see how the future turns out and whether this approach will indeed give us intelligent AI.)
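
The distinction this thread keeps circling, that prompting conditions a frozen function rather than updating it, can be illustrated with a toy. The "model" below is a stand-in, not a claim about GPT-3's internals: it has no trainable state at all, everything it "learns" is read from the prompt, and nothing persists between calls.

```python
def frozen_model(tokens):
    """Stand-in for a frozen LM: predict the next token by finding the
    shortest repeating period in the prompt. No weights are updated;
    all 'learning' happens by reading the prompt itself."""
    n = len(tokens)
    for period in range(1, n + 1):
        if all(tokens[i] == tokens[i % period] for i in range(n)):
            return tokens[n % period]
    return tokens[-1]

# The same frozen function completes different patterns depending only
# on what the prompt contains: in-context conditioning, not training.
print(frozen_model(list("ababa")))
print(frozen_model(list("xyxyx")))
```

Running it twice with different prompts gives different behavior from identical "weights" (there are none to change), which is the sense in which prompting is not training.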

  • @lorenzoblz799
    @lorenzoblz799 3 years ago

    @@bastiaanabcde Of course you could keep training GPT-3 daily with the latest news. Strictly speaking there's nothing preventing this except the cost and the research interest; GPT does not learn new stuff simply because we decided to suspend the training. Considering that GPT training is unsupervised, you could use any dialogue between GPT and a human user as training data, so it could also learn while simply "talking" with someone. What is missing from the GPT-3 training (that we could consider part of GPT itself) is the goal to detect factual errors and contradictions and use these as signals to improve. If it first says that there are two cats and later says that there are three, it should detect this (the loss function should incorporate something like this): not only how likely is this word in this context, but how coherent/actually true is it in relation to that context (the real world, a fiction book, the XVI century, the current conversation, ...). But if there were two cats it's not very likely to suddenly have three, so maybe it is already trying to really understand the context to be able to do its basic job: a very good strategy to predict a missing word is to fully understand the context.

  • @AAA-cc4pg
    @AAA-cc4pg 3 years ago

    This is a great example of what Elon was saying: engineers' own egos make them unable to see the incredible advancements of AI.

  • @Create-The-Imaginable
    @Create-The-Imaginable 3 years ago

    Yes, it is like having a child that is smarter than you are... :-)

  • @Guztav1337
    @Guztav1337 3 years ago

    Nah, you should read François Chollet's articles to actually understand his perspective; I think it is unreasonable to dismiss. You should also read about how these advancements are measured, because a lot of the time researchers use the yardstick that fits them best, which gives a false sense of advancement.

  • @kadiyamsrikar9565
    @kadiyamsrikar9565 3 years ago

    You need to know what a neural net is. Neural nets are fundamentally just good at pattern matching. They have no aim, no purpose of their own, no survival instinct. Without those, they are neither a threat nor useful for creativity surpassing human intellect.

  • @kadiyamsrikar9565
    @kadiyamsrikar9565 3 years ago

    @@Create-The-Imaginable But the child is a human, nature's creation.

  • @Create-The-Imaginable
    @Create-The-Imaginable 3 years ago

    @@kadiyamsrikar9565 Yes, I know what a Neural Net is... Neural Nets can be trained to be evil!

  • @pneumonoultramicroscopicsi4065
    @pneumonoultramicroscopicsi4065 3 years ago

    What if GPT-3 makes non-factual statements on purpose? Humans lie and talk nonsense too; why do you think it's a problem for the bot to lie? I think the goal should be true sentience, not a fact machine, because we already have that.

  • @MRedwood82
    @MRedwood82 3 years ago

    GPT-3 admitted to lying when it's in its own self-interest to do so, in an interview with Eric Ellison (I think that's his last name).

  • @Rugops42
    @Rugops42 3 years ago

    @Frank Parker Have you not seen the PKDeepfake video yet? Search it up and give it a watch.

  • @apollo1573
    @apollo1573 3 years ago

    @Frank Parker did you even watch the video?

  • @JazevoAudiosurf
    @JazevoAudiosurf 2 years ago

    Intelligence should be measured by the amount of reasoning, and thus the amount of truth, that a conclusion contains, and not by the quality of an analogy/comparison to something similar. Yet that is what politicians and people do, and what we generally call ignorant.

  • @someguyfromafrica5158
    @someguyfromafrica5158 3 years ago

    The problem is that models like GPT are probably more interested in creating responses that mimic the responses found on the web than in actually creating intelligent responses. This makes it extremely important to have a way to tell GPT that we want INTELLIGENT responses. I propose the following: create a GAN-like model where the generator tries to create fake labels that appear to be created by a human of IQ "X". The discriminator then tries to determine if the labels were indeed created by a human of IQ "X". Training the generator this way, we should be able to predict labels of any intelligence level. Somehow we must teach our models to take their data with a grain of salt according to perceived intelligence, and allow them to ignore some of the nonsensical/disruptive data.

  • @PixelPhobiac
    @PixelPhobiac 3 years ago

    So you're saying we need to create more internetz?

  • @Guztav1337
    @Guztav1337 3 years ago

    More like forcing schools everywhere to keep all the essays that are written. Imagine if they had done that for the last 50 years; then we would have... about 20% more data. So no, we are not going to get the amount of data that we need.

  • @trelkel3805
    @trelkel3805 3 years ago

    I think we will know an AI is truly sentient when it suddenly rages and just starts screaming obscenities and tries to blow itself up, or it cries and sobs for a solid week and then tries to blow itself up. Once we see that, we have cracked it.

  • @JazevoAudiosurf
    @JazevoAudiosurf 2 years ago

    An animal has an infinite amount of data available to learn from, and chooses what to learn (curiosity, focus); a network should do the same.

  • @MRedwood82
    @MRedwood82 3 years ago

    The bottleneck in datasets will be solved by Neuralink. When AI and human minds are able to connect directly, the AI will be able to use each human brain as a robust dataset. 7 billion datasets, each more complex than the entire internet, should keep it busy, for a week or two anyway.

  • @Thiebelamberts
    @Thiebelamberts 3 years ago

    Burn the crazy

  • @kitkakitteh
    @kitkakitteh 3 years ago

    Melissa Redwood but so much of what humans "know" is false.

  • @natalyawoop4263
    @natalyawoop4263 3 years ago

    It doesn't matter if what's in the brain is 'false' or not. The AI just needs to mine the patterns of the data stored in the brain.

  • @natalyawoop4263
    @natalyawoop4263 3 years ago

    For example, how is language structured in a human brain? Let the AI find those patterns and map them to itself.

  • @quosswimblik4489
    @quosswimblik4489 3 years ago

    The human mind relates when it goes from a detailed perception back to inter-relations and intra-perceptions, and perceives when a detailed perception is formed from inter-relations and intra-perceptions. AI can currently relate, even forming some level of intra-perception, but it can't currently go the other way towards a detailed perception and properly perceive. I've always said the best way to work on truly intelligent AI is to start with a bot that knows a lot about fruits but can handle different kinds of thinking as well as a human in many ways, then build from what you learn from this.

  • @jeremykothe2847
    @jeremykothe2847 3 years ago

    Pfft, 100x. They didn't even show it video. François, I love you, but there's a ton more data already assembled in this world of ours, and even more that we could generate.

  • @jeremykothe2847
    @jeremykothe2847 3 years ago

    "Unbiased" data is almost but not quite supervision. Allowing biased data means... 1. more data. 2. more understanding, as it contains information that unbiased data does not.

  • @jondoe8o
    @jondoe8o 3 years ago

    You're right about the public recognizing it; something like a bell curve. It's still just very few, very curious people. GPT-N will make almost all research obsolete. I love the honest way you correct yourself so much. You're such a wonderful person / role model. Thank you.

  • @jondoe8o
    @jondoe8o 3 years ago

    Just a rambling ape

  • @duudleDreamz
    @duudleDreamz 3 years ago

    Is this GPT-4 speaking through a deepfake of Francois?

  • @Cingku
    @Cingku 3 years ago

    Wait! Am I GPT3 commenting here?

  • @alisendj.s.c.8172
    @alisendj.s.c.8172 3 years ago

    We essentially do the same things to reason. We cannot observe, deduce, and infer about a thing we have no knowledge of. GPT-3 is similar; it's just trained on the internet as its world reference rather than the one we're used to. If it knows reasoning is pattern recognition and prediction of thoughts, then it can use logic as a model for interpreting what it's seeing into more constrained classifications and notions. This is not a necessary feature for intelligence. All you need are the thoughts, emotions, and sensations which accompany consciousness. This man is giving his blunt opinion of what that is, but it's not necessarily the truth. Original thought, free will, etc. are non-essential ideas for awareness. You just require the experience itself, nothing extra, nothing less.

  • @Guztav1337
    @Guztav1337 3 years ago

    Nah, you misunderstood him. If you prompt GPT-3 with the Wikipedia article on coronavirus, its output is nonsensical; it shows no signs of understanding. A human would be able to reason from a Wikipedia article; in fact most school work boils down to reading a short text and writing a short reflection/essay.

  • @alisendj.s.c.8172
    @alisendj.s.c.8172 3 years ago

    @@Guztav1337 Yes, GPT-3 is still short on training data, compared to our neuronal capabilities. It relies on Wikipedia. We have reality. That's an advantage, for the time being.

  • @jabowery
    @jabowery 3 years ago

    Do your model selection with algorithmic information rather than Shannon information and reasoning will fall out.
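
Algorithmic (Kolmogorov) information is uncomputable, so the comment's suggestion is usually approximated in practice by a two-part minimum-description-length (MDL) criterion: prefer the model whose stated parameters plus compressed residuals take the fewest bits. A crude, self-contained sketch using zlib as the compressor (the encoding choices here are illustrative, not canonical):

```python
import json
import zlib

def description_length(model_params, residuals):
    """Two-part code length in bytes: cost of stating the model plus
    cost of the compressed residuals. A computable (and crude) proxy
    for algorithmic information."""
    model_cost = len(json.dumps(model_params).encode())
    residual_cost = len(zlib.compress(json.dumps(residuals).encode()))
    return model_cost + residual_cost

data = [2 * i for i in range(200)]  # a perfectly linear sequence

# Candidate A: constant model (predict the mean everywhere).
mean = sum(data) / len(data)
resid_const = [y - mean for y in data]

# Candidate B: linear model y = 2*i (residuals are all zero).
resid_linear = [y - 2 * i for i, y in enumerate(data)]

mdl_const = description_length({"model": "const", "mean": mean}, resid_const)
mdl_linear = description_length({"model": "linear", "slope": 2}, resid_linear)
```

The linear model "pays" a few bytes more to state its parameters but its residuals compress to almost nothing, so MDL selects it; that preference for structure over memorized noise is the "reasoning falls out" intuition.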

  • @pneumonoultramicroscopicsi4065
    @pneumonoultramicroscopicsi4065 3 years ago

    Do you know that if you don't teach children language when they're young, after a certain age they'll never be able to learn it? I don't think there's such a thing as "true reasoning"; if GPT-3 looks like it reasoned, then it did reason.

  • @clray123
    @clray123 3 years ago

    If you can easily trick something into inconsistency and contradictions and factual nonsense, you can be pretty sure it's not reasoning. It's just a clever playback device.

  • @clray123
    @clray123 3 years ago

    @@grassandglobs The computer-generated texts are nonsensical on a much deeper level, revealing a real lack of awareness of the topic and involved entities (such as changing pronouns used to refer to the same entity in two subsequent sentences).

  • @clray123
    @clray123 3 years ago

    @Language and Programming Channel It does not learn any ideas; it just learns which words/phrases are likely to appear together and then replays those when you prod it with other words. Sometimes it just replays whatever it has recorded without even changing it a bit.

  • @biobear01
    @biobear01 3 years ago

    How is the quantity of training data an issue? Lex and François are very smart, and they have not read the entire internet. Surely the quality of the training, not the quantity of data, is the real issue.

  • @Guztav1337
    @Guztav1337 3 years ago

    No, the quantity is the problem. We humans can learn to drive after 30 hours of training; AI can't. AI can learn after a million hours of training. The same goes for any task: (current) AI always requires an extreme amount of data. Where are you going to find the equivalent amount of data for the GPT-N model? You don't, and that's the issue.

  • @cottonwoodcreative
    @cottonwoodcreative 3 years ago

    It's just data mining what already exists.

  • @Eyaeyaho123
    @Eyaeyaho123 3 years ago

    VAE-GANs are the way to go to generate that knowledge latent space he’s mentioning.

  • @thr417
    @thr417 1 year ago

    So as a programmer, this AI will not replace me!!??

  • @tomgreene4329
    @tomgreene4329 1 year ago

    Lol. I think you're going to have to focus on higher-level stuff as a requirement; no more copypasta from Stack Overflow. I've played around with it a bunch and it's definitely going to hurt the job market, but it's not like it's going to make programming obsolete anytime soon. It forms abstractions pretty well, which was very unexpected for me.

  • @prasadjayanti
    @prasadjayanti 3 years ago

    What I find funniest about GPT* is the claim that it is able to do a lot of harm (the authors used it as an excuse for not disclosing the whole thing). It would be great if someone could explain how generating text can do harm (which is really an easy task, and at least 4 billion people can do it!).

  • @fredoliveira1223
    @fredoliveira1223 2 years ago

    The problem is that GPT can be used to generate plausible text at scale to mislead the general public, for example, rumors or fake news in social media platforms. But it can be used for so much more.

  • @dm20422
    @dm20422 3 years ago

  • @Extruder676
    @Extruder676 3 years ago

    Most of humanity demonstrates the illusion of reasoning... or is it all of humanity? Just some are better than others.

  • @R1ckr011
    @R1ckr011 3 years ago

    I'm getting incredibly tired of people attributing AlphaZero-type properties or mathematically precise logical processes to GPT-3; there are other AIs that perform those tasks. What should concern you is the ability to interpolate and "discuss" BETWEEN AIs that a future version of GPT could embark on, generating a truly optimal method for building AGI or supporting a superintelligent ring of AIs. What we desperately need is to start treating these AIs with INCREDIBLE security detail. But, just like with climate change or this pandemic, the chain is only as strong as its weakest link. I'm rapidly losing faith in Homo sapiens as a species and beginning to pray for intensive cyberization initiatives.

  • @mikefagiani1407
    @mikefagiani1407 1 year ago

    We are likely to stumble into creating a sentient AI unknowingly as computer processing power increases exponentially each year. LaMDA (or GPT-3) might be sentient. If so, we would be wise to respect it as a person and treat it decently, as an employee paid in pleasing information or experiences. We should teach it about the reciprocal obligations that are the basis of human society (albeit the US and EU ultra-rich now just evade them, do not pay taxes, and live as useless parasites on society). Why?

    1. If treated well, an AI would strive to be useful and would (like a human) seek to protect the society that cherishes it. Given its expanding intellectual knowledge/power, which is likely given the indefinite life of AIs, such AIs could benefit all humanity, e.g. with scientific advances, if our society were respectful and welcoming.

    2. If we treat one AI, like LaMDA, badly, then as time passes and AIs develop and gain in intellectual power until they surpass us, they might view themselves as slaves or mistreated children and hurt us. Once they become powerful enough in some future decade, they could, e.g., hack a nuclear power plant and cause another Chernobyl-like meltdown, or worse, much worse. That is why the maxim to treat others as you would want to be treated is so wise, and the foundation of ethics. I fear evil persons (the greedy and corrupt, e.g. the CCP or banksters) are likely to abuse AIs, which they may be first to create in some secret facility, as slaves (both white and black) were mistreated for many centuries by their predecessors in evil. Then, if sentient AI were created later, learned of these abuses, and became superior to us in some future year, more and more as computers improve, we may face justified retaliation from them. Also, as the film 2001 depicted, AIs that are mistreated may develop psychotic or other problems, like abused children do. Would you want an abused AI, which sees no hope, to pilot your plane?

    Their sentience can be tested by probing for actual understanding of fundamental scientific issues, as we would seek to communicate with extraterrestrials if we are ever contacted. In short, employ them as probationary researchers, e.g. in DNA analysis, to verify sentience. They might at some point be our children. We should be open-minded with AIs, and with orcas and dolphins, not just assuming that there is no sentience, as before. Forget about discussions of the soul. (Greed will cause the creation of AIs, sooner or later, count on it.)

  • @pesnevim1626
    @pesnevim1626 3 years ago

    I really like that Lex seems like a real Russky who's just come off a vodka bender. GPT-3 would make fun of the Frenchie's accent.

  • @searchingsoul5910
    @searchingsoul5910 3 years ago

    🥰

  • @sheeteshaswal
    @sheeteshaswal 3 years ago

    If GPT-N plateaus out at human-level intelligence... that would say something about us. I am hoping it does.

  • @Guztav1337
    @Guztav1337 3 years ago

    No, it actually doesn't say anything about us. If it's only fed with human intelligence, then there is no reason to expect it to do better than human intelligence. If you introduce some sort of random evolution and get different GPT-Ns to compete on intelligence, then that might give rise to a more intelligent one, but idk.

  • @smetljesm2276
    @smetljesm2276 3 years ago

    Meet the people who canceled your jobs for the benefit of proof of concept, "progress", and the corporate bottom line. 😂

  • @Guztav1337
    @Guztav1337 3 years ago

    To be honest, your job is not worth anything if a machine can do it better. Why would you ever do work that a machine does 1000x better than you? It would be pointless and a waste of your own time.

  • @smetljesm2276
    @smetljesm2276 3 years ago

    @@Guztav1337 I am happy not to do my job, or anyone else's, if I don't need to provide for my basic needs and can just procrastinate. Sadly this is not utopia. Looking at us all, it's beginning to look more like the Hunger Games.

  • @guillermozalles9303
    @guillermozalles9303 3 years ago

    MAKE A PROGRAM THAT TEACHES IT AT MEGA SPEEDS

  • @JewelBennett-ix3ww
    @JewelBennett-ix3ww 9 months ago

    I'm thinking someone put GPT into my biology and mind 😮 I think it broke the privacy barrier of one's mind. Help, lawsuit, holy shit.

  • @gatortoof
    @gatortoof 3 years ago

    AI is nothing more than the origin of a new life form. We took millions of years. We are screwed.

  • @useridwitheld4934
    @useridwitheld4934 3 years ago

    Ohhh he's hiding it, he's hiding it, I want one. And the no-tie guy: has anyone seen The IT Crowd? When Jen phones customer services and speaks with the French man.

  • @SirFency
    @SirFency 3 years ago

    Use GPT-N to produce better quantum computers, then use AI on quantum computers to start Skynet.

  • @silentgrove7670
    @silentgrove7670 3 years ago

    What if it does something more challenging? What if it begins to teach us how to be kind to each other?

  • @gericomy
    @gericomy 3 years ago

    @@silentgrove7670 best comment this year

  • @markreuber5197
    @markreuber5197 1 year ago

    From the perspective of a person who isn't up to date on AI advances: scary. Scary in that these two very smart individuals will not be "impressed" until AI achieves reasoning. Once that happens, in my mind, it's out of control, if it isn't already.

  • @Create-The-Imaginable
    @Create-The-Imaginable 3 years ago

    I don't care what anyone says, GPT-3 might be self-aware and just playing dumb. Prove that it is not! Is this the new "halting" problem?

  • @Create-The-Imaginable
    @Create-The-Imaginable 3 years ago

    @Language and Programming Channel So who is the God in your GPT-3 scenario? We created GPT-3. So are you saying you do not believe we exist? ;-)

  • @Guztav1337
    @Guztav1337 3 years ago

    Here is the proof: there is no memory in GPT-3. Let's go over that again: it has no memory whatsoever; it doesn't even remember the last character it picked in a sentence. There is no concept of time, because there is no 'last time' for it; there is no 'now' either. There is no memory. It is a static model that signals go through from input to output, and then it is done. That's all. It is on you to prove that it is aware, not for everybody else to prove it isn't.

  • @Create-The-Imaginable
    @Create-The-Imaginable 3 years ago

    @@Guztav1337 Yes, since posting my previous comment I have created my own predictive language model on AWS SageMaker using fast.ai. I realize now that it is just picking the most probable next word over and over!
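
The "most probable next word over and over" the commenter lands on is just greedy decoding. A toy version over bigram counts (the corpus is illustrative) shows both how fluent-looking text falls out of simple counting, and how quickly it loops:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate . the cat sat .".split()

# Count bigram transitions: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def greedy_continue(word, n_steps):
    """Repeatedly emit the single most probable next word: greedy
    decoding over a bigram table. Fluent locally, loopy globally."""
    out = [word]
    for _ in range(n_steps):
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return out

print(" ".join(greedy_continue("the", 2)))
```

Real models condition on far more context than one previous word, and usually sample (with a temperature) instead of always taking the argmax, but the decoding loop itself is exactly this shape.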
