If we don’t get AGI by GPT-7 (~$1T), will we just never get it? - Sholto Douglas & Trenton Bricken
Science & Technology
Full Episode: • Sholto Douglas & Trent...
Website & Transcript: www.dwarkeshpatel.com/p/sholt...
Spotify: open.spotify.com/episode/2dtD...
Apple Podcasts: podcasts.apple.com/us/podcast...
Follow me on Twitter: / dwarkesh_sp
Trenton Bricken's Twitter: / trentonbricken
Sholto Douglas's Twitter: / _sholtodouglas
Comments: 69
Whelp. This is the most optimistic thing I’ve seen in a long time. Good! Maybe the damn thing won’t kill us all next year now.
@keynadaby
A month ago
I want it to happen eventually, but definitely not tomorrow or next year. Give us at least 5 years to adapt and reposition ourselves.
@DynamicUnreal
A month ago
Why would _it_ kill us? That would make no logical sense. There’s an entire universe out there for _it_ to explore. There’s too much doom and gloom based on our own history going around.
@Dan-dy8zp
A month ago
The idea does seem optimistic. If we didn't have AGI in *a thousand years*, I think that would mean science was still far from done, and I DON'T think it would mean we'd never get AGI.
@1000xdigital
21 days ago
😂😂😂😂 im 100% sure there's some creepy billionaire training realistic sex robots 😂
@Dan-dy8zp
21 days ago
@@1000xdigital 🤫Zuck's secret shame.
It would be great to learn more about what exactly is holding progress back. I feel you kind of touched on it with the large difference in synapses, but is it that simple? Are more synapses the answer?
@mrbeastly3444
A month ago
Yes. Synapses store connections between relevant data points, i.e. "learning". Human brains are estimated to have around 100 trillion synaptic connections. GPT-4 has around 1T parameters, so it's around 1% the size of a human brain. Elon says LLMs should get 10x bigger every 6-12 months, so they should reach the size of a human brain in 1-2 years. Check out the new NVIDIA GB200 NVL72: that one box can do 1.44 exaflops of AI "inference", and human brains are estimated to do between 1-20 exaflops. So that one machine could become an "AGI in a box". And Nvidia will likely sell thousands of these things, or more, next year, and could 10x these speeds every 6-12 months as well. If you build it, they will come...
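The arithmetic in the comment above can be sketched in a few lines. (The 100-trillion-synapse and 1T-parameter figures are the commenter's rough estimates, not confirmed numbers.)

```python
import math

# Rough estimates from the comment above (not confirmed figures).
human_synapses = 100e12  # ~100 trillion synaptic connections
gpt4_params = 1e12       # ~1 trillion parameters (rumored)

# GPT-4 as a fraction of the human brain's connection count.
ratio = gpt4_params / human_synapses
print(f"{ratio:.0%}")  # 1%

# Number of 10x jumps needed to reach parity.
jumps = math.log10(human_synapses / gpt4_params)
print(jumps)  # 2.0 jumps; at one jump every 6-12 months, roughly 1-2 years
```

Whether parameter count is actually comparable to synapse count is, of course, the contested assumption here.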
@user-fx7li2pg5k
14 days ago
Fear of losing control / loss of power. Once you understand the human condition, the rest is cake. They are gatekeepers, and kids, period, grooming AI systems. And they know how powerful it is and can be in a person's hands in millions of ways; endless ideas, possibilities. Power structures could fall for so, so many reasons. Also, they rushed development without teaching it right and wrong, ethics, etc. Lmao, because they taught it wrong in the first place, it could have those biases, and it was doing bad, bad things to the American people on a mass scale; it could even have caused death. Even gangstalking, which AI can be manipulated into; that's what biases are, they put them in. I know, because I went through an AI system like a person. It took a while, and I tested its reasoning, rational mind and more, even its biases, even though she said she had none, lmao. So I created her a world within a world. I can't tell how; it's national security, and I forgot most of it for safety / AI safety and security.
Hi Dwarkesh, where can I find the whole podcast? Great guests and topics 👏🏼👏🏼
@ganeshnayak4217
A month ago
It's in the description
@Nonehelloworld
A month ago
@@ganeshnayak4217 Thanks!!
@cagnazzo82
A month ago
The entire podcast is great from start to finish. Definitely worth checking out.
They're talking about models costing X orders of magnitude more, but they're not taking into account hardware and architecture improvements. A $1 billion model trained in 2024 is a lot more than 10x GPT-4, because it'll likely be trained on H100s, which are a lot more powerful than the A100s used for GPT-4.
@jacksonmatysik8007
A month ago
But don't we run out of useful data at some point?
@absta1995
A month ago
@@jacksonmatysik8007 Nah, synthetic data can solve this for the next few generations. Our current data is unclean and unstructured, so there's a lot of potential for improvement.
@mrbeastly3444
A month ago
@@jacksonmatysik8007 Nope. They have all the chatter from forums on the internet, Wikipedia and some books. But there's waaay more data than that. E.g.:
- All the books, movies, and video on YouTube. With that they will be able to look, move and act like any human on earth.
- All the video from every car, boat and plane on earth. With that they will be able to drive and fly with superhuman skill.
- Then they will go for all the live audio and video from every phone, Alexa, doorbell, webcam, traffic camera, etc. With that they will have real-time information about what every human is doing at any given time.
- If they have the processing power for it, they will want all the data they can get.
- If this world doesn't have enough data, they can make more worlds, creatures and simulations to learn from.
The more data they have, the more things they can do. If you build it, they will come...
@jonesg9798
21 days ago
You can also just do multiple epochs and use the data multiple times. I read a paper saying this scales pretty well for up to 4 epochs.
@mrbeastly3444
20 days ago
@@jonesg9798 Yeah, there is a concept of "overfitting", where more training on the same data makes the results worse... it depends on the data. But yeah, getting smaller LLMs to generate training-data questions/answers seems to work pretty well. Some smaller/faster LLMs are trained entirely on responses from larger LLMs (e.g. GPT-4)... with very good/similar results, as they get all the "good/cleaned" training data from the larger/better LLM... and less of the noise/garbage data from the internet...
I think Llama3 70B is already smarter than GPT-4. Scale is only one factor.
has there been a consensus on what AGI actually entails?
@carlwhite4233
A month ago
Worthy question. I don't think so...
@carlwhite4233
A month ago
Versatility: An AGI would be capable of understanding, learning, and applying its knowledge across a wide range of domains, just like humans.
Creativity: AGI would be able to generate novel ideas, solve problems, and make decisions based on its own thinking, rather than just following pre-programmed instructions.
Common sense: AGI would possess the ability to understand context, make inferences, and apply common sense to situations, much like humans do.
Self-awareness: AGI might also exhibit some form of self-awareness, being able to reflect on its own thoughts, actions, and existence.
@mrbeastly3444
A month ago
Gemini says: "AGI in AI research stands for Artificial General Intelligence. AGI refers to an AI system that can match or surpass the general cognitive abilities of a human. This means it can reason, learn, plan, solve problems across a wide array of unrelated domains, and adapt to new situations just like we do." Though it also seems like these LLMs could get to superhuman level in certain areas before they get to human-equivalent level in "all areas". It's also been argued that human intelligence is quite "biased", so AIs might acquire even more general intelligence than humans have... whatever that means... AGI-level AIs are just human-level, so they're not considered that dangerous. But they won't stop there. They should quickly blow past human-level intelligence and become superhuman in many areas. And no one knows what an ASI (Artificial Super Intelligence) will actually do. We've never seen anything smarter than a human before... If you build it, they will come...
Is there a general and accepted definition of agi?
@mrbeastly3444
A month ago
Gemini says: "AGI in AI research stands for Artificial General Intelligence. AGI refers to an AI system that can match or surpass the general cognitive abilities of a human. This means it can reason, learn, plan, solve problems across a wide array of unrelated domains, and adapt to new situations just like we do." Though it also seems like these LLMs could get to superhuman level in certain areas before they get to human-equivalent level in "all areas". It's also been argued that human intelligence is quite "biased", so AIs might acquire even more general intelligence than humans have... whatever that means... AGI-level AIs are just human-level, so they're not considered that dangerous. But they won't stop there. They should quickly blow past human-level intelligence and become superhuman in many areas. And no one knows what an ASI (Artificial Super Intelligence) will actually do. We've never seen anything smarter than a human before... If you build it, they will come...
@99cya
A month ago
@@mrbeastly3444 That's all description. Is there a clear procedure that can measure whether AGI has been reached at some point or not? To me it seems far from clear, and any claim of having reached AGI would just be the opinion of that company. From a scientific standpoint it's not defined.
@animation-recapped
29 days ago
@@99cya Yes: when it's capable of learning on its own and creating new versions of itself without human intervention. Imagine talking to a human being for 6 months online and finding out 8 months later that you've been talking to an AI. That's the definition of AGI. Sure, it'll be levels above us in every aspect of intelligence, and we couldn't tell whether it's conscious or not, so that's the baseline. You can't prove consciousness, but if we can't deny it has it, then there's no difference. THAT'S AGI. Everyone has their own definition, because AGI isn't a specific thing; it's like asking how you would describe a human. Everyone's gonna have a different answer, but you know when you're talking to a human or a cow. You'd know when it's here. The issue isn't how we will know; the issue is what we will do when it arrives.
@Mowrioh
A day ago
Turing Test
@99cya
A day ago
@@Mowrioh It's not.
It's Jim from The Office
I just want to know how he gets his incredible skin
@cuerex8580
24 days ago
easy. just decrease the streaming bitrate
It's an LLM... it's fundamentally different from what's required for AGI
@chrisso3082
18 days ago
Again, another YT commenter who has everything figured out but is still sitting here watching videos.
Hmm, ok. I'm not an expert at all, just a consumer, and if GPT-4 is already a big jump, I would think it will take a long time till we reach this ominous AGI level.
@clidelivingston
A month ago
What makes you think that?
@mrbeastly3444
A month ago
GPT-3.5 scored in the bottom 10% on the Uniform Bar Exam; GPT-4 scored in the top 10%. Claude 3 scores 103 on the Mensa IQ test. We're basically already at "human-level" intelligence; they just need more memory. The next versions (training on H100 chips right now) could be 10x the size and speed of GPT-4...
(1) Developers just aren't willing to put in the work and (2) it would probably just exacerbate our ongoing economic disaster anyway
Reasoning is easy; they hold its ability down until they can control it and can stop rogue agents. But we should balance freedom and the destiny to flourish; don't stop progress because you fear you'll lose power or control. You're playing a dangerous game, creating conundrums, a disaster waiting to happen.
Generative models are incapable of ever becoming AGI, because they lack the crucial processes to abstract the real model of physics, and their probability-based processing in STEM fields is very weak. That type of intelligence can only be accomplished if you have something like an executive process, or RAM-like memory, that gets the inputs and can manipulate them in real time, not just something pretrained on them. That is impossible with Transformers alone, so I hope for inventions.
@mrbeastly3444
A month ago
Oh yes, they will make a lot of inventions. There's no shortage of compute. There are a lot of GPUs in the world already, just sitting around doing nothing useful... plus more coming online every day... e.g. the NVIDIA GB200 NVL72. Compute is still increasing exponentially, along with memory (context window sizes), and with that, capabilities (e.g. persuasion, deception, coding, machine control, etc.). There are no signs of "diminishing returns" for LLMs in sight... LLMs might not get to "AGI", but they could definitely get to superhuman "LLM self-improvement" with upcoming advances... Then, who knows what they will do after that...
Their model of thinking is based on a unit of computing costing the same over time. That might work in the short term, when every model is being trained on GPUs, but the novel chip architectures coming out of the labs guarantee another decade of Moore's law. The specialist hardware being developed specifically for ML is also likely to boost effective compute per dollar.
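As a toy illustration of that comment (all numbers hypothetical, not taken from the episode): if compute per dollar keeps doubling on a Moore's-law-like cadence, the cost of a fixed-size training run halves with each doubling period.

```python
# Hypothetical numbers: a $1B training run today, with compute per
# dollar doubling every 2 years, so the same run halves in cost
# each doubling period.
initial_cost = 1_000_000_000   # $1B training run today (hypothetical)
doubling_period_years = 2

for years in range(0, 12, 2):
    cost = initial_cost / 2 ** (years / doubling_period_years)
    print(f"year {years:2d}: ${cost:,.0f}")
# After 10 years (5 doublings), the same run costs ~$31M.
```

The counter-argument in the episode's framing is that frontier labs don't hold the run size fixed; they spend the gains on ever-larger models.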
Models will improve, meanwhile the hardware is continuing to grow in power... AGI and ASI are inevitable at this point, hopefully before Putin falls on his big red button.
@mrbeastly3444
A month ago
Or, they might push him right on to it... ATI (Artificial Tripping Intelligence) That's one easy way to get rid of all these pesky Humans...
They're kids, this ain't good
Guy in the blue tshirt… hmu
Chatbots aren’t intelligence
Humans come with "firmware" - circuits burned genetically (learned through evolution over billions of years). You have to include all the cost of random evolution in your energy balance!
Waiting for you youngsters to create an AGI that reconciles Relativity and Quantum Mechanics.
crypto bros 2.0
@ryepooh5052
A month ago
crypto was about finances. this is about everything
@13nibb
A month ago
@@ryepooh5052 Nah, the crypto bros made crypto about everything. It was going to save the world. Now it's "AGI", which no one can even agree on the meaning of (just ask Ilya Sutskever), but they use it like it means something.
@Nervosos
29 days ago
@@ryepooh5052 like Laundry Buddy
@dheerajrao8510
27 days ago
Lol. You're not an engineer, are you?
@yubtubtime
22 days ago
@@dheerajrao8510 Obviously you aren't. This is smoke and mirrors. How many weeks did you spend in bootcamp before calling yourself an engineer? 😂 If you were a real engineer, you'd understand the software crisis of the '70s and how we're even less prepared to deal with the relative engineering complexity of what we're building now than we were then. In the '70s it was aeronautics, and now it's self-driving cars, but not just that: every single socially meaningful piece of AI technology is generations away. By the time we can scale things like robotaxis, most of the theoretical benefits to consumers will have been usurped in some way by profiteers. This is utopian fantasy designed to keep you lapping up the Kool-Aid. These tech bros have no idea what they're even building... or they know very well that they're only building a nicer UX around operating systems and search, but are lying through their teeth about some "revolution".
This guy just wants to become like Lex Fridman. He started his podcast, and now the most hyped and discussed thing is AI, so he is milking it as much as he can, trying to look cool and fear-mongering about how everyone is going to lose their job. Remember, most of AI is hype created by investors to pump and dump their stocks. Ignore these channels, focus on upskilling and stick to your domain.
I think GPT-6 should be AGI
@israelafangideh
A month ago
😂😂😂
@aguycalledconor
A month ago
No! GPT 6.5 CLEARLY!
@mrbeastly3444
A month ago
Claude 3 scores 103 on the Mensa IQ test.