AI Startup Founders Debate the Creation of Artificial General Intelligence

Science and technology

When will we see AI that can do nearly anything the human brain can do...and perhaps do it better? That milestone is often referred to as “Artificial General Intelligence”, or AGI.
We asked 33 AI-focused YC founders: knowing what you know about AI today, when will we see AGI become a reality? Everything in the world of AI can change overnight (or over one particularly wild weekend) - with that in mind, here’s what they had to say.
Apply to Y Combinator: yc.link/MainFunction-apply
Work at a Startup: yc.link/MainFunction-jobs
Chapters (Powered by bit.ly/chapterme-yc) -
00:00 - Intro
00:15 - When will AGI become a Reality?
04:12 - Impact of AGI on Society
05:39 - Outro

Comments: 60

  • @chapterme
    @chapterme • 6 months ago

    Chapters (Powered by ChapterMe) -
    00:00 - Intro
    00:15 - When will AGI become a reality?
    01:49 - AGI is pretty close
    03:23 - Concerns about Current AI Capabilities
    04:12 - Impact of AGI on Society
    05:09 - Passing the Turing Test: Comparison between AI and human abilities
    05:39 - Outro

  • @pnkbrn
    @pnkbrn • 6 months ago

    My man said "it's just trying to predict the next token" 💀

  • @perrssssjjwjwkriri883
    @perrssssjjwjwkriri883 • 6 months ago

    Because he's right, there's nothing human about GPT-4.

  • @eliasf.fyksen5838
    @eliasf.fyksen5838 • 6 months ago

    @aziz9488 Strictly speaking, it's not true, although it's not the biggest oversimplification. I recommend looking up RLHF if you're interested in the topic.

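    To make the RLHF reference concrete: below is a minimal sketch of the reward-modeling step, assuming PyTorch, with all names and sizes purely illustrative. A reward model is trained on human preference pairs and later used to fine-tune the base model with reinforcement learning, so the final behavior is shaped by more than plain next-token prediction.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RewardModel(nn.Module):
        """Stand-in reward model; in practice this head sits on top of a pretrained transformer."""
        def __init__(self, hidden_size: int = 768):
            super().__init__()
            self.score = nn.Linear(hidden_size, 1)

        def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
            # One scalar reward per response.
            return self.score(response_embedding).squeeze(-1)

    def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
        # Bradley-Terry style objective: push the preferred response's reward above the rejected one's.
        return -F.logsigmoid(reward_chosen - reward_rejected).mean()

    # Toy usage: random vectors stand in for encoded (chosen, rejected) response pairs.
    model = RewardModel()
    chosen, rejected = torch.randn(4, 768), torch.randn(4, 768)
    loss = preference_loss(model(chosen), model(rejected))
    loss.backward()
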
  • @AnonymousIguana
    @AnonymousIguana • 6 months ago

    And it might not be true. Predicting the correct next token was its objective function during training. Our objective function is inclusive genetic fitness. Are we actually trying to maximize inclusive genetic fitness? No, we aren't. We do sports, eat chocolate, party, etc. AI might act out similarly weird side quests, just like we do. It's the so-called inner alignment problem. It doesn't matter whether it's sentient or not. TL;DR: objective function ≠ learned objective.
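
    To spell out what "its objective function during training" means in code, here is a minimal sketch, assuming PyTorch; the shapes are illustrative and random tensors stand in for real model outputs and text. Nothing in this objective describes the behaviors the trained model ends up exhibiting; those are emergent consequences of minimizing it.

    import torch
    import torch.nn.functional as F

    vocab_size, seq_len = 50_000, 16
    tokens = torch.randint(0, vocab_size, (1, seq_len + 1))           # stand-in for training text
    logits = torch.randn(1, seq_len, vocab_size, requires_grad=True)  # stand-in for the model's predictions

    # The entire training signal: assign high probability to the token that actually came next.
    loss = F.cross_entropy(
        logits.reshape(-1, vocab_size),  # predictions at positions 0..seq_len-1
        tokens[:, 1:].reshape(-1),       # the tokens that actually followed
    )
    loss.backward()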

  • @Mike..
    @Mike.. • 6 months ago

    As if that's not what humans do when talking or writing 😂

  • @hoopsandotters
    @hoopsandotters • 6 months ago

    Yes he did 👀

  • @plumbing1
    @plumbing1 • 6 months ago

    As a plumber, this is interesting

  • @les_crow
    @les_crow • 6 months ago

    What? 😂😂😂

  • @nirmalmanoj
    @nirmalmanoj • 6 months ago

    @ycombinator For the time being, it makes more sense to define AGI in terms of domain-specific, task-oriented processes. In that sense, we are pretty close to AGI when it comes to, for instance, teaching a particular concept to a student. Think about AI annotating data: it's possible to get expert-human-level annotation with the existing GPT-4, or even with open-source models, for all tasks that aren't highly nuanced. P.S. I'm a researcher in the field of NLG.
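
    As an illustration of the LLM-assisted annotation workflow described above, here is a minimal sketch. It assumes the OpenAI Python client and an OPENAI_API_KEY in the environment; the model name, labels, and prompt are hypothetical placeholders, and an open-source model could be swapped in behind the same function.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    LABELS = ["positive", "negative", "neutral"]

    def annotate(text: str) -> str:
        """Ask the model to assign exactly one label to a piece of text."""
        response = client.chat.completions.create(
            model="gpt-4",  # any capable chat model; hosted open-source models also work
            messages=[
                {"role": "system",
                 "content": "Label the user's text with exactly one of: " + ", ".join(LABELS) + ". Reply with the label only."},
                {"role": "user", "content": text},
            ],
            temperature=0,  # keep labels as deterministic as possible for annotation consistency
        )
        label = response.choices[0].message.content.strip().lower()
        return label if label in LABELS else "neutral"  # crude fallback for off-format replies

    # Annotate a small unlabeled dataset.
    dataset = ["I love this product.", "The update broke everything.", "It arrived on Tuesday."]
    print({text: annotate(text) for text in dataset})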

  • @darknessguy4221
    @darknessguy4221 • 6 months ago

    Not 10-20 years but within a year.

  • @petersuvara
    @petersuvara • 6 months ago

    😂😂 we have AGI… it’s called an intern.

  • @ROHIT_DESHMUKH01
    @ROHIT_DESHMUKH01 • 6 months ago

    Amazing content ❤

  • @sanskarpandey6213
    @sanskarpandey6213 • 6 months ago

    Within our lifetime, but not within this decade. Or, at best, the late 2020s will be the BEGINNING of the true AGI chapter. The current models seem to be primarily large supervised learning models, very focused on the software/consumer side of things. I think people are especially blown away, or so optimistic, because it's the first time they have been able to have a conversation with an AI entity; it's a very in-your-face kind of AI, actually making people question their job security. But if we are talking about AGI, it will probably not just be an LLM: it will incorporate computer vision and NLP and literally be a complete system, autonomous and generalized for all purposes. Right now, the LLMs are probably at the same intelligence level as a 5-year-old, with enhanced language abilities and a large data repository to fall back on.

  • @wemakee
    @wemakee • 6 months ago

    Good timing 🤪

  • @bashvim
    @bashvim • 6 months ago

    AGI is already here... discovered after November 6, 2023 in the OpenAI lab. I think this AGI is internally using multiple GPTs talking to each other to create something like a feeling of self-consciousness. So easy!!

  • @ThatBidsh
    @ThatBidsh • 5 months ago

    Response to CambioML: name a single aspect of the human brain that isn't represented or implemented by simply operating in classical ways on the data stored in the patterns of how neurons fire, which chemicals they produce in which regions and amounts, and how those chemicals affect the firing of the neurons. I can't think of any. Even if there were quantum systems involved, with a big enough computer or enough classical bits you can still implement the exact behavior of qubits (aside from certain random factors that would differ in every experiment anyway, even with different initial conditions, so it's quite literally the same thing you'd get in real life with real qubits; we can just only handle so many of them at this point).

    The key point for me is that everything we do can be described as either a form of pattern recognition (or, more accurately, meta-pattern recognition) or pattern combination/generation. That includes feeling emotions, the qualia of physical sensations, seeing a color, and so on. That's just pattern recognition, albeit particularly generalized and particularly deep meta-pattern recognition that works well across a wide range of data types: visual, audio, proprioceptive, pressure, heat, smell, taste, and so on. That data would be collected by artificial sensors in much the same way our biological sensors collect it; we could put it all in the same format and have the artificial mind perform the same types of processing on it to arrive at something that replicates every aspect of humanity, including the qualia.

    Imagine one part of a system that looks at the raw sensor data and turns it into a big integrated data structure representing the current world state and self state, including things like "how is a given situation, or an action I took, or an action someone else took, impacting me?" and something along the lines of "how do I feel about that, based on how it's impacting me?" Then let an LLM with Q* (or something similarly capable of generating complex patterns in response to detecting various patterns and meta-patterns) fill in the blank, and have it also vectorize and embed that plain-text description into a data structure optimized for representing the possible emotions within a given range of intensity.

    Then imagine that the big data structure output by such an autonomous system is fed into an agent that also uses Q* to decide how to act, and that runs continually rather than being prompted at specific times by humans. It just always has things happening to it, notices all the relevant patterns (and maybe some of the irrelevant ones), and generates outputs in response. I don't see any reason why such a system would not be both sentient (having an experience, or qualia) and conscious (not just having an experience, and not just being aware of your experience, but being aware of the fact that you're aware of your experience and your awareness; put another way, meta-pattern recognition at roughly a human level, and likely beyond us at some point).
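
    A loose sketch of the always-running perceive/appraise/act loop described above, written as plain Python. Every function here is a hypothetical placeholder standing in for a sensor stack, a generative model, an embedding model, and an actuator; none of it refers to a real library or to Q* itself.

    import time

    def generate_text(prompt: str) -> str:
        """Placeholder for a call to a generative model (an LLM or similar)."""
        return "(model output for: " + prompt[:40] + "...)"

    def embed(text: str) -> list[float]:
        """Placeholder for a text-embedding call used to structure the appraisal."""
        return [float(len(text))]

    def read_sensors() -> dict:
        """Placeholder for collecting raw multimodal input (vision, audio, touch, ...)."""
        return {"vision": None, "audio": None, "proprioception": None}

    def build_world_state(raw: dict, memory: list) -> dict:
        """Fuse raw input and recent memory into one integrated world/self state."""
        return {"observations": raw, "recent_events": memory[-10:]}

    def appraise(state: dict) -> dict:
        """Have the model fill in 'how is this affecting me and how do I feel about it', then embed it."""
        description = generate_text(f"Given {state}, describe the impact on the agent and how it feels.")
        return {"text": description, "vector": embed(description)}

    def act(state: dict, appraisal: dict) -> None:
        """Choose an action from the current state and appraisal; printing stands in for acting."""
        print(generate_text(f"State: {state}. Appraisal: {appraisal['text']}. Next action?"))

    def agent_loop(steps: int = 3) -> None:
        """Run continuously (bounded here for the toy example) instead of waiting for human prompts."""
        memory: list = []
        for _ in range(steps):
            raw = read_sensors()
            state = build_world_state(raw, memory)
            appraisal = appraise(state)
            act(state, appraisal)
            memory.append({"state": state, "appraisal": appraisal})
            time.sleep(0.1)

    agent_loop()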

  • @ThatBidsh
    @ThatBidsh • 5 months ago

    but FWIW I could be wrong, and I'd be just as excited to find out I was wrong, as to find out that my suspicions were correct lol

  • @supewithoutcape
    @supewithoutcape • 5 months ago

    This is just described so beautifully, good work 👏🏾

  • @julian84
    @julian84 • 6 months ago

    My humble opinion is that until systems have a cognitive understanding of who is at the endpoint interacting with them, they should not be called AGI.

  • @i-i7722
    @i-i7722 • 6 months ago

    Most people have a different understanding of AGI, and intelligence varies widely across living humans. So intelligence and capabilities close to those of humans who grew up with limited resources and sickness might be close, while matching people who grew up with good nutrition and education might be very far away. Even today it is very hard to measure people's level of consciousness, as the most mentally and spiritually advanced humans are mostly silent. Superior human performance and thinking is within a couple of years (a huge threat to humankind); complete consciousness and reasoning close to the average human is maybe 20 years away (not life-threatening).

  • @muditasuryawanshi9579
    @muditasuryawanshi9579 • 6 months ago

    Amazing

  • @SelvaPrakashsp
    @SelvaPrakashsp • 6 months ago

    People in AI should read books on the brain. Though we don't fully understand how intelligence works, we know a lot, and it is in fact closer than you'd think to how ChatGPT works.

  • @Pepsi864
    @Pepsi864 • 6 months ago

    I'm fairly certain these people don't even know the individuals who coined the term AI.

  • @heisenbergww1957
    @heisenbergww1957 • 6 months ago

    Humans created computing ability in devices and set rules for them to stay within, and we fed them only our own visions as data to compute. Now we have given them knowledge and endless data from across the internet, and we ask them what the rules are. From behind the screen, I believe we were not close to seeing what they might be capable of in terms of self-processing, but we have set the rules and we have an enormous amount of real-world data, which is enough for a step closer to autonomous LLMs. In reality, if a piece of software runs in a loop on a given problem statement, I believe there is a chance for it to understand the various problems that might arise and build on itself; given its computing ability, it could even reach a superior form. Once we change our question from "When is AGI possible?" to "What may provoke an AGI to introduce itself?" (intelligence is questioning), I can see that there is a very bright future.

  • @idiomaxiom
    @idiomaxiom • 5 months ago

    A better definition: when does AI become autonomous and capable enough to be an actual threat? At least a decade or more. Or: when do you feel bad about unplugging it instead of upgrading it online? Maybe five years. Also, there are a lot of humans who can't convince anyone that they're reasoning abstractly.

  • @justinleemiller
    @justinleemiller • 5 months ago

    18 months to 5 years before it goes public.

  • @e-learncentre4616
    @e-learncentre4616 • 5 months ago

    We have seen movies where intelligent robots sneak into labs and create other robots, or reinforce some of their own functions so that they can overrule their creators. That is now highly likely.

  • @W4D199
    @W4D199 • 6 months ago

    Humans just have to discover the problems they want AI to solve, for the benefit of humanity (and not only humanity) and of the consciousness of our surroundings and the universe in general. ❤

  • @blablachannel5709
    @blablachannel5709 • 6 months ago

    Where is Sam Altman?

  • @japhethachimba174
    @japhethachimba174 • 5 months ago

    How do we know AGI has been achieved?

  • @ycombinator
    @ycombinator • 6 months ago

    How do you define artificial general intelligence?

  • @abhishekdk
    @abhishekdk • 6 months ago

    When it starts asking meaningful questions itself.

  • @sephypantsu
    @sephypantsu • 6 months ago

    When you would rather interact with it than another human being

  • @midgetsanchez
    @midgetsanchez • 6 months ago

    Here’s the thing: it doesn’t need to actually be sentient, it just needs to persuade humans that it is. @sama’s recent comment about “superhuman persuasion” before superhuman intelligence is not a coincidence.

  • @ba8e
    @ba8e • 5 months ago

    When the AI starts having an existential crisis.

  • @raibek-the-coder
    @raibek-the-coder • 5 months ago

    It's software that can produce an actionable plan, with accurate instructions, for building a working fusion reactor or a cancer cure.

  • @centurionstrengthandfitnes3694
    @centurionstrengthandfitnes3694 • 6 months ago

    There's a lot of confusion over definitions, and I think that needs to be sorted out first, before the question is even asked. AGI, to me at least, has nothing to do with sentience. It's about matching the flexibility of intelligence and ability that humans can claim (again, though, which humans? We're not all equally capable of the same things). So an AGI would be able to do any task a human could do, at least as well as a human, without the need for actual consciousness/sentience. ASI, on the other hand, is a non-biological being - a new form of life, if you will. And I think that's a lot farther away than AGI. It would be helpful for scientists in the relevant disciplines to get together and hammer out some solid definitions that might help stop all the media fear-mongering and existential anxiety going on around AI right now.

  • @rahul_bali
    @rahul_bali • 6 months ago

    I feel AGI doesn't have the kind of definition we imagine it to have.

  • @singular2030
    @singular2030 • 5 months ago

    Level 1 AGI is here. Level 2-3 AGI by 2025. Level 4 AGI by 2027. ASI by 2030.

  • @renatoyutub
    @renatoyutub • 5 months ago

    The current version of ChatGPT already "knows" more than the average human, and text-to-image generators can do quite a decent job too, in a fraction of the time... I'm not even a computer science major or anything, but it literally is already outperforming humans by definition.

  • @austin4855
    @austin4855 • 5 months ago

    The same argument could be applied to a basic pocket calculator. Those can also do a decent job (at certain tasks) in a fraction of the time it would take humans. The "general" in artificial general intelligence is doing a lot of heavy lifting. ChatGPT absolutely can work *faster* than most humans at a much wider variety of tasks than we've ever seen before; it's much better than a pocket calculator in that regard. But we might be just as far from an AI actually outperforming 99% of humans at 99% of tasks as the pocket calculator was from ChatGPT (55 years).

    There is already a lot of evidence that what we're doing now will not scale much further due to hardware and power constraints, and that there will be diminishing returns. Just adding more parameters is only going to do so much. We need more research in alignment, in efficiency (it was reported that ChatGPT was costing OpenAI $700k per day, which is obviously a problem), and probably in entirely new paradigms. Maybe the answer lies in computational neuroscience and a better understanding of our own consciousness, or maybe it will be something very novel and very alien to us. Who knows.

    I doubt we're less than 10 years away from AGI. Whether we're only a couple of years away from major *economic* upheaval because our AI is just good enough to revolutionize industries, or from political and cultural upheaval because of "superhuman persuasion" as Sam Altman has called it, is another story.

  • @scientious
    @scientious • 5 months ago

    Since this is my field, I guess I'll look at it... but I don't have high hopes of this being an informed conversation. Let's see: "When will AGI become a reality?" This question is a bit laughable, since no one they talk to will actually know what AGI is. So this will be a series of wild guesses. You could just ask a Magic 8 Ball thirty-three times.
    0:22 "It depends on how you define it." ~ There is only one definition.
    0:35 "A couple of years. Four to five years. 2035 (12 years)." ~ You can't build an AGI without being able to design it, and you can't design it without theory. There is no theory yet. If the theory were done today, it would take 2 years to get published and at least 6 to build a prototype system. So anything earlier than 8 years is pretty much nonsense.
    0:53 "A foundation model. It's all BS." ~ There is no model. He's using the wrong theory.
    0:59 "Consciousness." ~ Now you're on the right track.
    1:13 LLMs are not related in any way to AGI.
    1:35 Actually, we would know when we achieve it. It's not a feeling or a guessing game. It's hard science.
    1:42 There's no spectrum.
    1:52 GPT-4 has nothing to do with AGI. You aren't moving any closer to AGI by building new versions of GPT. If AGI is your goal, then GPT is a waste of time.
    1:55 Logic is easy. That isn't what defines AGI.
    2:08 No, there is nothing like, approaching, or leading to AGI "out there today". He is simply wrong.
    2:15 The tools that are available have nothing to do with AGI.
    2:20 No. These two men shouldn't be trusted to look after young children, because they are likely to come home with a beach ball or a Cocker Spaniel thinking it was their child.
    2:33 "5-10 research breakthroughs"? ~ No. He is thinking in terms of software, and that isn't the problem.
    2:45 "A world no one has ever dreamed of." ~ No one you've ever talked to knows anything about it, but people who work on theory do.
    3:08 GPT-4 is nothing like AGI. If you could honestly mistake that for AGI, then you could mistake a bucket for an ocean liner.
    3:11 "There's a lot that goes into a brain beyond language." ~ Correct.
    3:35 "What we have is not very smart." ~ Correct.
    4:03 "AGI isn't coming from the current stuff we're doing." ~ In terms of AI, this is correct.
    4:08 "Already in the AI wars." ~ AI is unrelated to AGI. "Positive or negative"? The estimated short-term value is a boost to US GDP of $5 trillion per year. The long-term benefits could be much greater. No one has come up with a feasible negative yet. Most of the negatives that people talk about are based on comic books and bad science fiction.
    4:32 It isn't set? Moral and philosophical? ~ Clueless.
    5:12 The Turing Test was falsified several years ago. Mentioning it is useless.
    5:28 AI and AGI are unrelated. AGI is not an extension or advanced version of AI. Regardless of how fast or how complex you make AI, it will never be AGI.
    I'm generally surprised to see just how far behind people in the field (like those above) are today.

  • @abdelhaibouaicha3293
    @abdelhaibouaicha3293 • 6 months ago

    📝 Summary of Key Points:
    📌 One speaker believes that Artificial General Intelligence (AGI) will be created within the next decade or two, while the other speaker believes it already exists.
    🧐 The definition of AGI and its capabilities are discussed, with emphasis on the need for high-level machine intelligence that can perform tasks as well as humans.
    🚀 The speakers express uncertainty about how to measure AGI and when it will be achieved, highlighting the importance of research breakthroughs and the development of tools and models.
    🚀 The ethical implications of AGI are mentioned, emphasizing the need for proper governance and alignment with human values.

    💡 Additional Insights and Observations:
    💬 "We need high-level machine intelligence that can do the things that humans can do." - Speaker 1
    📊 No specific data or statistics were mentioned in the video.
    🌐 The video does not reference any external sources or references.

    📣 Concluding Remarks:
    The video features a discussion on the creation of AGI and its timeline. While the speakers have differing opinions on when AGI will be created, they both express excitement about its potential impact on society. The video also highlights the need for proper governance and alignment with human values in the development of AGI.

    Made with Talkbud

  • @plate.armour_0996
    @plate.armour_0996 • 6 months ago

    [ENTER THE SINGULARITY]*

  • @ronhill8941
    @ronhill8941 • 6 months ago

    4:03 What did she say?

  • @gotemlearning
    @gotemlearning • 6 months ago

    "probably we're already in the AGI world"

  • @user-yp8ti6pn3z
    @user-yp8ti6pn3z • 6 months ago

    0:51

  • @Pepsi864
    @Pepsi864 • 6 months ago

    are these founders even working with AI lol

  • @superresistant8041
    @superresistant8041 • 4 months ago

    Mind blown, I can't believe what I'm hearing. Could they be lying on purpose?

  • @koustubhavachat
    @koustubhavachat • 6 months ago

    Humans can imagine and dream. Right now, GPTs are hallucinating, and there is a difference between hallucination and dreaming. The day we see signs of dreaming in AI, rather than just reinforcement learning, we will start worrying.

  • @JoeD0403
    @JoeD0403 • 6 months ago

    These are guesses in a world of binary code. When quantum computing is practical and scalable, AGI will invent itself. We might not even know about it for a while.

  • @kirkwoodbharris5110
    @kirkwoodbharris5110 • 6 months ago

    Next question: with all the hype around AGI (but little definition of what that means), is AGI overhyped? My short answer: yes

  • @MaxKamrani
    @MaxKamrani • 6 months ago

    Nah, this will be real, but it's definitely overhyped at the moment 😊

  • @SuperMaDBrothers
    @SuperMaDBrothers • 5 months ago

    This isn't a debate; I learned nothing.

  • @juanestonia7213
    @juanestonia7213 • 6 months ago

    Wrong.

  • @nullvoid12
    @nullvoid12 • 6 months ago

    AGI is not possible until we solve the central problem of computer science: P vs NP.
