What if Dario Amodei Is Right About A.I.?

Back in 2018, Dario Amodei worked at OpenAI. And looking at one of its first A.I. models, he wondered: What would happen as you fed an artificial intelligence more and more data?
He and his colleagues decided to study it, and they found that the A.I. didn’t just get better with more data; it got better exponentially. The curve of the A.I.’s capabilities rose slowly at first and then shot up like a hockey stick.
Amodei is now the chief executive of his own A.I. company, Anthropic, which recently released Claude 3 - considered by many to be the strongest A.I. model available. And he still believes A.I. is on an exponential growth curve, following principles known as scaling laws. And he thinks we’re on the steep part of the climb right now.
When I’ve talked to people who are building A.I., scenarios that feel like far-off science fiction end up on the horizon of about the next two years. So I asked Amodei on the show to share what he sees in the near future. What breakthroughs are around the corner? What worries him the most? And how are societies that struggle to adapt to change and governments that are slow to react to them supposed to prepare for the pace of change he predicts? What does that line on his graph mean for the rest of us?
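The exponential framing is easy to make concrete with toy numbers (illustrative only, not figures from the episode): a metric that gains a fixed amount per step is quickly dwarfed by one that doubles per step.

```python
# Toy comparison of linear vs. exponential improvement.
# The specific numbers are illustrative, not from the episode.

def linear(step, gain=1.0):
    """Capability that improves by a fixed amount each step."""
    return 1.0 + gain * step

def exponential(step, factor=2.0):
    """Capability that multiplies by a fixed factor each step: the hockey stick."""
    return factor ** step

for step in (1, 10, 20, 30):
    print(step, linear(step), exponential(step))
# By step 30, linear sits at 31 while exponential reaches 2**30, over a billion.
```

This is the "slow at first, then shoots up" shape: for small step counts the two curves look similar, and the gap only becomes dramatic late.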
This episode contains strong language.
Mentioned:
- Sam Altman on The Ezra Klein Show (www.nytimes.com/2021/06/11/op...)
- Demis Hassabis on The Ezra Klein Show (www.nytimes.com/2023/07/11/op...)
- On Bullshit (press.princeton.edu/books/har...) by Harry G. Frankfurt
- “Measuring the Persuasiveness of Language Models” (www.anthropic.com/research/me...) by Anthropic
Book Recommendations:
- The Making of the Atomic Bomb (www.simonandschuster.com/book...) by Richard Rhodes
- The Expanse (www.hachettebookgroup.com/ser...) (series) by James S.A. Corey
- The Guns of August (www.penguinrandomhouse.com/bo...) by Barbara W. Tuchman
Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.
You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at www.nytimes.com/article/ezra-....
This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Kristin Lin and Aman Sahota. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

Comments: 316

  • @HernandeSilva-ey3qd · 26 days ago

    You have to admit that Dario’s transparency and openness are remarkable, courageous and very valuable. In contrast, think of the type of conversations you see from other CEOs in other organizations (across every industry) who hide behind business speak and never talk (or even hint) about risks, threats, concerns, etc. I think what we are seeing from CEOs and founders like Dario Amodei, Sam Altman, Mustafa Suleyman, etc. is drastically different from what we see from 99.9% of all other CEOs in “power” today. Also, Ezra is one amazing interviewer.

  • @hotshot-te9xw · 24 days ago

    Better than OpenAI, I'll say that much.

  • @genegray9895 · 24 days ago

    I wouldn't include Altman in that list. He hides behind business speak and downplays the risks while also lying profusely about the nature of the models and the impact they are having and will continue to have on the world.

  • @gokuvonlange1721 · 16 days ago

    Sam Altman is the most secretive, especially since the board incident. In his recent interview at Stanford he said "not going to answer that" a couple of times. Or he replies with wit and a look that says "that's a stupid question, don't ask me that," and then stares at the audience until the interviewer uncomfortably switches to the next question. He's been dodging so many questions lately, except when overhyping GPT-5.

  • @Bronco541 · 13 days ago

    My take on Sam is that he's being cautious/worried about the future impact and implications of GPT-5. Right or wrong, it seems like major breakthroughs have been made which could make or break their company.

  • @cmw3737 · 1 month ago

    The note about Claude knowing internally that it is lying, or at least being uncertain, needs to be made accessible. Getting agents to ask questions themselves can be a big improvement for zero-shot tasks. Writing a prompt with enough detail to guide the model toward a correct solution can be tedious. Instead of the agentic flow of correcting its first answer - saying "that's not quite right" and then explaining what is wrong - it can be better to tell it to ask questions if anything is ambiguous or unclear, or if it needs more information, before giving an answer it has high confidence in. To do that, it needs access to its own level of certainty. That way you don't have to think of every detail; you let it build a model of the task and ask you (or a collaborative agent with a fuller picture) to fill in the details as needed until it reaches a threshold of confidence, rather than making stuff up to give whatever best zero-shot answer it can come up with.
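The clarify-first workflow this comment describes can be sketched as a small loop. Everything here is a hypothetical illustration: `ask_model` stands in for a real LLM API call (stubbed so the sketch runs), and the self-reported `confidence` field is an assumption, since current APIs don't expose calibrated certainty.

```python
# Sketch of a "clarify before answering" loop (hypothetical, simplified).

def ask_model(prompt: str) -> dict:
    """Stubbed model: requests a missing detail, then answers confidently.
    A real implementation would call an actual LLM API and parse its reply."""
    if "deadline" not in prompt:
        return {"confidence": 0.4, "question": "What is the deadline?", "answer": None}
    return {"confidence": 0.95, "question": None, "answer": "Draft plan..."}

def answer_with_clarification(task, get_user_input, threshold=0.9, max_rounds=5):
    """Let the model request missing details until its confidence clears the bar."""
    prompt = task
    reply = ask_model(prompt)
    for _ in range(max_rounds):
        if reply["confidence"] >= threshold or reply["question"] is None:
            break
        # Feed the user's answer to the model's question back into the prompt.
        prompt += "\n" + reply["question"] + " " + get_user_input(reply["question"])
        reply = ask_model(prompt)
    return reply

result = answer_with_clarification(
    "Plan my project.",
    get_user_input=lambda q: "The deadline is Friday.",
)
print(result["confidence"])  # 0.95 once the missing detail is supplied
```

The design point is the one the comment makes: the loop only works if the model's certainty estimate is accessible and roughly trustworthy; otherwise the threshold check is meaningless.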

  • @DaveEtchells · 12 days ago

    Good point - I’ve found that if I just ask an LLM whether it’s sure it’s not hallucinating, it’ll almost always catch itself.

  • @kyneticist · 1 month ago

    So, just to clarify - academics and researchers have figured out the most likely risks, scale and general scenarios that AI development will likely make real in the short term. They also reason with confidence that once those risks materialise as actual catastrophes, nobody will do anything about the risks because there's too much money at stake.... and nobody sees a problem with this.

  • @beab5850 · 29 days ago

    Exactly! Horrifying!

  • @AB-wf8ek · 28 days ago

    Yes, because that's essentially what corporations have done historically already. Exxon's own scientists knew back in the '70s what the effects of emissions from burning fossil fuels would be. What did they do? Microsoft, Apple, Google, Amazon, Facebook - all of the largest tech companies - what have they done in the face of monopolistic practices, planned obsolescence, spammy ads, workers' rights, toxic social media and overall overconsumption?

  • @franklangrell5824 · 24 days ago

    Exponential growth is radically more extreme: 1 doubled every day for 30 days is about 1.07 billion (2^30).

  • @nicholas6870 · 20 days ago

    Wait, so you're saying short term gains for stock owners outweigh the long term survival of our species?

  • @41-Haiku · 19 days ago

    Some people at these companies do see a problem with this, but those that do either quit or get fired. Daniel Kokotajlo recently quit OpenAI because he "gave up hope that they would be responsible around the time of AGI." For everyone else, there's the grassroots movement PauseAI. They are speaking to politicians and the general public, seeking a global treaty and a moratorium on developing general-purpose AI systems that pose unknown or extreme levels of risk (AKA any models more capable than the ones we have now).

  • @rmutter · 14 days ago

    I feel fortunate to have been able to listen in on this outstanding discussion. I really enjoyed their bantering and wordplay. I find myself in awe of the intellectual power that has been harnessed in the creation of AI. Now, if we humans can find a means to adapt to the exponentially growing intellectual power of maturing AI systems, we may actually benefit from using them, instead of them using us.

  • @somnambuIa · 1 month ago

    1:02:15 EZRA KLEIN: When you imagine how many years away, just roughly, A.S.L. 3 is and how many years away A.S.L. 4 is, right, you’ve thought a lot about this exponential scaling curve. If you just had to guess, what are we talking about? DARIO AMODEI: Yeah, I think A.S.L. 3 could easily happen this year or next year. I think A.S.L. 4 - EZRA KLEIN: Oh, Jesus Christ. DARIO AMODEI: No, no, I told you. I’m a believer in exponentials. I think A.S.L. 4 could happen anywhere from 2025 to 2028.

  • @BadWithNames123 · 27 days ago

    AGI 2025-28

  • @juliodelcid4168 · 26 days ago

    Silly question but what does ASL stand for?

  • @MuratUenalan · 26 days ago

    @@juliodelcid4168 It is mentioned that they relate to biosafety levels. So *SL is *safety level; the "A" might be AI, or Anthropic.

  • @juliodelcid4168 · 26 days ago

    Yes I heard that, but was still left a little confused. Thanks mate

  • @CelebWorkout · 26 days ago

    A very abbreviated summary of the ASL system is as follows: ASL-1 refers to systems which pose no meaningful catastrophic risk, for example LLMs released in 2018, or an AI system that only plays chess. ASL-2 refers to systems that show early signs of dangerous capabilities - for example, the ability to give instructions on how to build bioweapons - but where the information is not yet useful due to insufficient reliability or not providing information that, e.g., a search engine couldn’t. Current LLMs, including Claude, appear to be ASL-2. ASL-3 refers to systems that substantially increase the risk of catastrophic misuse compared to non-AI baselines (e.g., search engines or textbooks) or show low-level autonomous capabilities. ASL-4 and higher (ASL-5+) is not yet defined as it is too far from present systems, but will likely involve qualitative escalations in catastrophic misuse potential and autonomy.

  • @itsureishotout-itshotterin3985 · 1 month ago

    Ezra, your questions and your guidance of this conversation were masterful - you took a topic that is complex and jargonistic and brought it to a level of easy consumption while still allowing your guest to explain the topic at a good depth.

  • @and1play5 · 27 days ago

    No he didn’t, it was pedantic

  • @41-Haiku · 19 days ago

    I've been very impressed with Ezra lately.

  • @BrianMosleyUK · 1 month ago

    This is such an entertaining and informative discussion. Well done and thank you.

  • @vamps3000 · 8 days ago

    CEO of AI company hypes his product; in other news, water is wet.

  • @hugegnarlyeyeball · 1 month ago

    I like when he says that even though AI compute uses a lot of energy, we have to consider the energy it takes to produce the food a worker eats.

  • @privacylock855 · 1 month ago

    Those darned employees. Demanding food, again. We just hate them. :)

  • @privacylock855 · 1 month ago

    We are still going to have people, Right?

  • @flickwtchr · 1 month ago

    The hubris of these AI revolutionaries is just stunning.

  • @TheMrCougarful · 1 month ago

    That was meant to sound like a threat. If you question overall energy consumption, well then, there is a solution you haven't thought about.

  • @connorcriss · 1 month ago

    Humans still have to eat if they aren’t working, right? Does he want people to starve?

  • @penguinista · 1 month ago

    I am sure the people with access to the godlike AIs will be eager to hand off that power and privilege "when it gets to a certain point." Like the old saying: "Power causes prosocial motivation; ultimate power causes ultimate prosocial motivation."

  • @marcussord5290 · 22 days ago

    Multipolar traps. The arms race is our playbook - "prosocial" must be a euphemism?

  • @letMeSayThatInIrish · 20 days ago

    I am equally sure the unaligned godlike AI itself will be eager to hand off power to the people who built it.

  • @augustusomega4708 · 19 days ago

    If it had all-knowing molecular intelligence, it would deliver the future in perfect waves of congruent logistics and optimum serendipity. A true measure of AGI is clairvoyance: the surest sign of a supreme intelligence - and the proof that it's not some marketing delusion - is that it can predict the near future with full accuracy. Since we humans are so rare, and in fact life seems to be so, it would be decidedly unintelligent to destroy life.

  • @MrMichiel1983 · 19 days ago

    @@augustusomega4708 Clairvoyance would be a trait of fantastical ASI. AGI would "merely" be able to replicate basic human tasks. That said some level of AGI represents a threshold for an exponential growth curve where the level of intelligence might soon be considered superhuman. Don't forget humans are considered intelligent, but are very capable of destroying life precisely because of that intelligence - the orthogonality thesis would state that compassion and intelligence are not on the same axis, although an argument from abundance and diversity would suggest an ASI would take some existential risk by allowing other unpredictable power around in exchange for some perceived utility. It is indeed decidedly unintelligent to destroy all life, but that hasn't stopped life from trying to - even cyanobacteria have managed to destroy their environment, no intelligence required - and no system will ever be "perfectly" intelligent.

  • @augustusomega4708 · 19 days ago

    @@MrMichiel1983 The threshold beyond AGI would seem incomprehensible, I imagine. The 3 properties of GOD: SPACE (omnipresence), TIME (eternal), DATA (all-knowing). Space/Time/Data, like that film "Lucy".

  • @mikedodger7898 · 1 month ago

    34:08 This is an especially relevant section. Thank you! "Are you familiar with the philosopher Harry Frankfurt's book on bullshit?"

  • @RodCornholio · 11 days ago

    Very relevant.

  • @geaca3222 · 1 month ago

    Great very informative conversation, thank you

  • @grumio3863 · 1 month ago

    Thank you for calling that out. "Lord, grant me chastity, but not right now." I'd love to hear an actual game plan for actual democratization, instead of empty virtue signaling.

  • @justinlinnane8043 · 1 month ago

    I live alone and am sliding gracefully into old age, so the idea of an interesting, dynamic AI assistant is exciting, up to a point. One that can organise life's essentials and also hold an interesting conversation would be great. However, the thought that its higher-functioning "parent" AI has no real conception of human alignment is terrifying!!

  • @831Miranda · 1 month ago

    Excellent interview, thank you to both of you! Amodei is one of the better 'builders of psychopaths' (aka builders of AI tech) we have in the world today.

  • @glasperlinspiel · 15 days ago

    This is why anyone making decisions about the near future must read Amaranthine: how to create a regenerative civilization using artificial intelligence. It’s the difference between SkyNet and Iain Banks’ “Culture” and “Minds.”

  • @mollytherealdeal · 23 days ago

    What an excellent conversation! Thanks.

  • @kathleenv510 · 1 month ago

    Excellent, Ezra

  • @striderQED · 1 month ago

    Technology has been advancing exponentially since the first rock was split into useful shapes. And yes we are just entering the upward curve.

  • @TheMrCougarful · 1 month ago

    You are always on the exponential curve.

  • @Apjooz · 19 days ago

    @TheMrCougarful Upward curve in terms of our own capabilities. For example the language models got suddenly interesting when their system memory started to approach the total memory of our own brain.

  • @TheMrCougarful · 19 days ago

    @@Apjooz AGI is alien Intelligence. Obviously, it can mimic some human capacities, and certainly, it can know what we know, having studied us. But apart from the obvious, we should make no assumptions about its current capabilities, and no assumptions about what it is ultimately capable of. More importantly, never pretend it is just like us. It is nothing like us. AGI is alien intelligence. What we discern from the surface is ultimately of no importance. How it answers questions is of no importance. How useful it makes itself is of no importance. All these things are camouflage. AGI is alien intelligence. If AGI landed on Earth on an intergalactic spacecraft, we would be better prepared for it than from having it emerge out of a computer model of human language. As it stands now, we are helpless to understand what has happened. But never forget, however else it appears on the surface, however useful it might make itself, AGI is an alien intelligence.

  • @skylark8828 · 1 month ago

    AI is limited by the chip hardware it uses, so until the chip fab plants can be made obsolete somehow there won't be exponential increases in AI progress. GPT4 was released a year ago but there is no perceived exponential jump in capabilities, instead we are seeing multi-modal AI's and the refining of AI training methods along with throwing ever larger amounts of compute at it.

  • @MrMichiel1983 · 19 days ago

    AI is indeed limited by its architecture and the computational capacity applied to that architecture. However, computer chips already grow exponentially; that's colloquially called Moore's Law (although it's slowing down a bit, chip capacity doubles roughly every couple of years). Although I agree that LLMs have been overhyped, narrow AI like AlphaFold has been very successful in its domain. Also, don't forget GPT-5 is being trained right now, so we might see some jump in capabilities. Those jumps are likely only linear, since presumably capabilities scale logarithmically (and the current drive is mostly in scaling the current transformer architectures). I would myself argue that emergent capabilities will probably be most pronounced when combining token prediction with diffusion models - the model can spout some initial crap but then auto-correct itself with some expert-determined amount of computation thrown at the diffusion. This is different from what DeepMind currently proposes: over-generate responses and then have expert systems choose the best answer. That end-of-pipe improvement of output might well work to some extent, but it takes an exponentially increasing amount of compute, whereas architecture changes could yield stable or growing capabilities with diminishing compute.

  • @ManicMindTrick · 10 days ago

    This is not true. The algorithms are clunky and poorly optimized and you have a lot of hardware overhang available to be exploited to its full power by something much more sophisticated and intelligent.

  • @skylark8828 · 10 days ago

    @@ManicMindTrick LLMs are still using brute-force approaches, and throwing ridiculous amounts of compute at the problems they cannot overcome is not going to achieve anything meaningful, let alone exponential growth in AI performance. The hype bubble is about to burst.

  • @JeanCharlesBastiani · 11 days ago

    Hi Ezra, when you said you cannot find an analogy for something that was developed by the private sector and that government ultimately had to take control of because it was too powerful, I think banking is a good one. The timescale is very different, but banking was developed privately, and ultimately states had to take some control of it through a central bank institution. Even if central banks remain independent, they are for sure state and not private institutions.

  • @RodCornholio · 11 days ago

    Some AIs are open source, so they cannot be controlled by government. The choke point (where government could target) right now is the massive amount of resources required for the most powerful AIs. So, for example, an AI on your computer, training on your writing and voice, can't be controlled. But some massive data- and number-crunching AI center in Silicon Valley could be targeted by a state. Eventually, I predict (and hope), there will be a distributed AI: say, an app on your phone that you "feed" data and/or that uses processing power on your phone (like some crypto) for training on other data. In exchange for your help, perhaps, it rewards you with digital currency, points, or (more likely) just the ability to use it.

  • @cynicalfairy · 1 month ago

    "Your scientists were so preoccupied with whether or not they could they didn't stop to think if they should."

  • @minimal3734 · 1 month ago

    Complete nonsense. They have thought carefully about what they are doing and why they are doing it.

  • @TheLegendaryHacker · 1 month ago

    Funnily enough, the worry with Anthropic is more that they think so much about whether or not they should that they never do

  • @justinlinnane8043 · 1 month ago

    @@minimal3734 🤣🤣🤣🤣🤣🤣 you're kidding right ??

  • @minimal3734 · 1 month ago

    @@justinlinnane8043 You believe that scientists in AI research do not think about the consequences of their work?

  • @justinlinnane8043 · 1 month ago

    @@minimal3734 That's exactly what I think!! Worse still, I think they know exactly the risks they're taking with our future but choose to ignore them so they can get rich beyond their wildest dreams!!

  • @incognitotorpedo42 · 24 days ago

    Dario Amodei: "The combination of AI and authoritarianism both internally and on the international stage is very frightening to me." Me: Me too.

  • @paulwary · 19 days ago

    Even if AI never does anything evil, its mere existence is dangerous to the human psyche. But there is no going back. It's gonna be a wild ride.

  • @what-uc · 9 days ago

    Something that works as a thumbnail doesn't work as a 90 minute video

  • @nathanbanks2354 · 1 month ago

    Of course the big question I have is when will Anthropic's Claude 3 Opus subscription be available in Canada?

  • @dr.mikeybee · 10 days ago

    Dario is very smart. I enjoy his thinking.

  • @RaitisPetrovs-nb9kz · 23 days ago

    I love the part at the very end of the interview: "I sometimes use an 'internal' model"…

  • @ajithboralugoda8906 · 1 month ago

    I agree Claude 3 is the most powerful compared to the rest of the LLMs. I did a simple test of transliteration from my language, Sinhalese (the mother tongue of the Sinhalese people in Sri Lanka). It excelled: it could create the matching script sentence in my language and then translate it into English precisely. Gemini did not have a clue and quit. ChatGPT tried but was not as good as Claude 3. It also showed intuitive nuance in simple tasks, like writing a poem that rhymes, and it definitely came out on top.

  • @michaelmartinez5365 · 19 days ago

    I enjoy my conversations with Claude 3. It's very friendly and engaging and makes me feel warm and fuzzy 😊

  • @gokuvonlange1721 · 16 days ago

    @@michaelmartinez5365 You're talking to a mathematical distribution model... I'm sure it makes you warm and fuzzy. But never make the mistake of anthropomorphizing these things.

  • @doobiescoobie · 25 days ago

    Interesting talk. When the models understand the known knowns and the known unknowns, will they then expand human knowledge beyond the unknown knowns and unknown unknowns? How will quantum computing expand these models?

  • @msabedra1 · 10 days ago

    How do we know this isn’t just two AI agents talking to each other and gaslighting us?

  • @AB-wf8ek · 28 days ago

    47:43 Listen, if we're going to figure out how to make these dinosaur parks safe, we have to make the dinosaurs

  • @56whs · 25 days ago

    Exactly. Ridiculous thing to say.

  • @incognitotorpedo42 · 24 days ago

    @@56whs I think you're misinterpreting the statement. He's saying that without the models to experiment with, to learn what they're capable of, you don't know what needs to be constrained. I don't think Jurassic Park is a great analogy, but it's funny.

  • @megavide0 · 20 days ago

    49:28 "... RSPs [...] responsible scaling plans..."

  • @41-Haiku · 19 days ago

    ​@@incognitotorpedo42 You can just never build the dangerous models in the first place. PauseAI has serious policy proposals to make that feasible on an international level.

  • @MrMichiel1983 · 19 days ago

    @@41-Haiku Which models do you consider dangerous and which not? And how are those traits related to the architecture and level of compute of those models? What are those proposals by PauseAI? And how would it be at all feasible to prevent people from building software that, by the way, at some point can write and improve itself? We can't stop hackers, so how would we stop a similar actor in the AI domain? To me the adage "you can just never build the dangerous models in the first place" seems naive, because to entertain that notion we must consider all people and all state actors to be benevolent a priori, or at least incapable of crossing some threshold where catastrophe gets its own dynamic (e.g., diseases escaping labs). Dangerous AI workflows will indubitably be developed in military, social and medical domains alike, simply because of the massive (monetary) gains to be had by both mankind and powerful individuals. We could also have "just not built the atomic bomb"...

  • @user-kz5cw2gj3w · 24 days ago

    He's right. What I've seen the latest generative AI programs do in the creative community is staggering. The rapid developments are changing, and will continue to change, our concepts of "human creativity", what it is and what it means, and not in a good way, except for those who benefit from the spread of this technology.

  • @EthosEvolveAI · 25 days ago

    It seems the obvious conclusion is that these systems are very likely to transform society. They have been trained on the contributions of all of humanity. Many people are going to be affected without their consent. The heart of the issue seems to be that we currently do not have an ethical system in place to ensure that these systems will be used for the true benefit of all. Relying on the same profit motive that has caused many of the problems we currently face is a recipe for disaster. If we don’t approach this endeavor with a new vision for equality and utopia for all, these tools will almost certainly lead to extreme power and exploitation of the people who make it all possible. It’s quite concerning to hear the developers seem to have no vision on how to avoid very bad things from happening. All we have to do is look at how governments and militaries solve problems now to see what happens when they hold all the power times a million. I sincerely hope that rather than dollar signs we find the heart and courage to imagine a truly better future for us all.

  • @adamkadmon6339 · 18 days ago

    On exponentials, who was right, Malthus or Verhulst?

  • @ili626 · 1 month ago

    What are we going to do about money in politics? And how will open-source/decentralized ASI help by preventing a dystopian oligarchy... or destroy us if any rogue actor can leverage such power? Ezra should be asking these questions.

  • @flickwtchr · 1 month ago

    Ezra will never ask questions that might make the wealthy and powerful feel uncomfortable.

  • @gregorymurphy6115 · 1 month ago

    It won't matter because we will all be too busy being unemployed and starving

  • @Steve-xh3by · 28 days ago

    A technology that is too dangerous to democratize is also too dangerous to allow to be centrally controlled. There is no evidence that those in power, or those who seek power are naturally inclined to behave more benevolently than a random sample from the general public. In fact, there is much historical evidence that the inverse correlation is true. That is to say, those in power, or those who seek power (usually those who seek have a better chance of obtaining, so this is the same set) are MORE likely to be bad actors than a sample from the general public. So, I'd MUCH rather have everyone have access to something very powerful, than for that power to be centrally controlled.

  • @Cloudruler_ · 21 days ago

    If the general public gets these models open-sourced, we can use them to defend ourselves from big tech and the government.

  • @RodCornholio · 11 days ago

    @@Cloudruler_ My thought exactly. Because I can say with utter certainty, that government (and government controlled companies) will seek to protect itself more than you or me. They will always want the upper hand.

  • @dr.mikeybee · 10 days ago

    Llama 3 may be big enough already with the right agents.

  • @bluebadger3223 · 23 days ago

    Not surprising that a guy with a lot to gain by developing and deploying AI is 95 percent positive about it

  • @senethys · 7 days ago

    The scaling laws are not exponential at all. Quite the opposite: we are hitting the limits of transformers, and that is why we are now focusing on making inference a lot cheaper.

  • @ajeybs4030 · 24 days ago

    Deep dive. Informative podcasts covering all fronts and forthcomings of AI

  • @garydecad6233 · 24 days ago

    If the focus of all AI in democracies were on the existential issues facing us - namely bad actors in AI development (building a cage around it), climate change over the next 50 years, protecting people and all living things, and preventing misinformation from destroying our democracies - then our world would benefit. However, it's fairly clear that the focus is on creating more billionaires.

  • @user-pl4pz2xn2c · 14 days ago

    We don't have an exponential amount of data to feed it. We don't have an exponential amount of CPUs/GPUs to feed it. We don't have an exponential amount of electricity to feed it. So how is it exponential?

  • @TudorSicaru · 14 days ago

    Take a look at Moore's Law... once you have better and better chips, their efficiency also increases. Energy is also nowhere near "capped," so we can still "feed" higher and higher amounts, which won't themselves have to grow exponentially, thanks to the efficiency increase in chips. Researchers will also work on better and better learning algorithms, which in turn means more efficient models using less input data to train, or learning more from the same training data. Once you have really strong A.I. you'll have even better progress in energy generation (maybe cold fusion, who knows), and it will also be involved in microchip design and algorithm design, which adds to that positive feedback loop... it's pretty obvious it won't be linear growth. Also, exponential doesn't mean the growth factor has to be > 2: even if it is only 1.1, that's still a percentage increase per year (let's say it's measured yearly), which still follows a slope that curves upwards, faster than linear growth. When people say "exponential" they don't specify the base; they just refer to the slope that accelerates more and more. P.S. Even Moore's Law doesn't have a factor of 2 if measured yearly - transistor counts double (so 2x) every ~18 months, not ~12 months... but it's still incredibly fast, if you look at the development of new tech in the past 20 years or so.
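The point about growth factors can be made concrete: doubling every ~18 months corresponds to an annual factor of 2^(12/18) ≈ 1.59, which is still exponential even though it is well under 2. A quick check (assuming a clean doubling cadence, which real chip progress only approximates):

```python
# Annualized growth factor for a quantity that doubles every 18 months.
# Assumes a clean doubling cadence; real fab progress is bumpier.
doubling_months = 18
annual_factor = 2 ** (12 / doubling_months)
print(round(annual_factor, 3))  # 1.587

# Compounded over 10 years the multiplier is still enormous:
print(round(annual_factor ** 10))  # 102, i.e. 2**(120/18) ≈ a 102-fold increase
```

Even a modest-looking yearly factor compounds into orders of magnitude over a decade, which is why "the exponent is small" is not the same as "the growth is slow."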

  • @RodCornholio · 11 days ago

    A smart organization will figure out how to use AI to enhance AI. For a hypothetical example (and I know very little about the following…): a chip company using AI to run simulations or genetic algorithms to, effectively, “skip” generations of chips. So, perhaps, instead of having a design for a chip that would be expected in 2025, it’s closer to what is expected for 2035. Then you repeat the same _virtual iterative_ approach in software, using those “2035” chips. So you have iterations of AIs running within a system (simulated, not open to the public), with the goal of evolving the best one (survival of the fittest). Now, perhaps, you have created an AI that could be 10 years ahead in 6 months’ time. I think, though, that is more applicable to GI models. You are right, though: they are ultimately dependent on material resources… at least for now. I’ve made the analogy that AIs (language models) are in the “tool” stage; they are tools we use, sometimes good, sometimes bad in form and result. When AI agents become increasingly independent, especially the General Intelligence type, is when it will truly be out of control and, perhaps, unstoppable (e.g. imagine a GI AI which figures out how to leave the nest, a central location, and exists, somehow, on the internet… its tentacles are everywhere, even your cell phone).

  • @Uristqwerty · 10 days ago

    @@TudorSicaru Moore's Law started slowing down over a decade ago; the semiconductor companies themselves have been making statements to that effect periodically since 2010, as quantum physics makes it harder and harder to keep transistors reliable enough to compute with. On top of that, transistor count doesn't directly translate to computation speed. While smaller transistors should mean less distance for signals to travel, clock rates roughly capped out at 5GHz, meaning that one channel for improving speed is long dead. For scaling horizontally into multiple cores, there is substantial overhead in programming parallel algorithms, requiring more and more time wasted synchronizing the cores as the workload scales up, giving diminishing returns to *that* benefit of transistor count. Worse, current CPUs are limited by heat, so more and more of the transistors are being spent on specialized components that sit idle most clock cycles, for *heavily* diminishing returns in yet another area. If you've played a lot of idle games, you'd recognize this as a "soft cap": even though you still have one multiplier growing exponentially, the actual value you care about is rapidly slowing down, as it gets less and less benefit from the exponential factor. Computer speed is on an S-shaped curve that started out exponential, but as each sub-factor hits a wall, it's levelling out.
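    The multi-core point above maps onto Amdahl's law. A minimal sketch, with an assumed 95% parallel fraction chosen purely for illustration:

```python
# Amdahl's law: speedup from n cores when a fraction p of the work
# parallelizes. Illustrates the "diminishing returns" described above.

def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% parallel work, speedup is capped at 1/(1-0.95) = 20x,
# no matter how many cores you add.
print(amdahl_speedup(0.95, 8))      # ≈ 5.9
print(amdahl_speedup(0.95, 1024))   # ≈ 19.6
```

Going from 8 cores to 1024 cores (128x the transistors) buys barely 3.3x more speed here, which is the "soft cap" behavior in the comment.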

  • @baddogmtv · 6 days ago

    Lets release free models to phones, that absorb as much audio/video and text queries to give Ai what it needs. Open ai and google....hold our beers.

  • @lizbathory1169 · 25 days ago

    As hunter-gatherers we didn't evolve to respond to nebulous and uncertain dangers, just concrete and immediate ones. That's why it is so difficult to get the collective to care about, and act on, a threat that, while statistically very probable, is not perceived locally as an issue.

  • @danguillou713 · 1 month ago

    I have no idea where other kinds of AI projects are at, but I don’t believe that generative AI algorithms like the chatbots or picture generators are displaying anything like exponential improvements. They just took a giant leap from rudimentary to viable. While that’s exciting and impressive and will cause a lot of important changes, I don’t think it’s intelligent to extrapolate that step into a trajectory. My sense is that these particular families of algorithms display the opposite kind of curve: diminishing returns. Every doubling of processing power and size of dataset yields a smaller qualitative improvement than the previous one.

  • @alexcaminiti · 1 month ago

    This. This is what the Internet did to people's brains. Dunning-Kruger times a trillion. Beliefs and feelings are subjective, but they hold more weight and veer into objectivity when they are espoused by professionals. Something to consider.

  • @BritainRitten · 1 month ago

    "Exponential" just means the rate of increase is itself increasing. We have clearly met that threshold. The pace of AI used to be slower and has obviously increased tremendously. We are getting large, objective, measurable improvements every ~3 months by amounts that used to take years or decades. This is *exactly* what you would expect in an exponential trend. Whether that exponential trend *continues* is another story, and not something you can know even if you know with certainty that the trend up until now has been exponential. Which it has. It may turn out to be an S-curve, but an S-curve just looks exponential in the beginning, then hits an inflection point and flattens out. Either way, we can be very confident in at least some improvement in the future. We have learned a lot about what makes these machines better at learning.
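    The exponential-versus-S-curve point can be sketched directly: a logistic (S-shaped) curve is nearly indistinguishable from an exponential in its early phase, so data from the early phase alone cannot settle which one we are on. Parameters below are arbitrary.

```python
import math

# A logistic curve tracks an exponential closely at first,
# then flattens out near its cap.

def logistic(t: float, cap: float = 1000.0, r: float = 0.5) -> float:
    return cap / (1.0 + (cap - 1.0) * math.exp(-r * t))

def exponential(t: float, r: float = 0.5) -> float:
    return math.exp(r * t)

for t in (0, 2, 4, 6):
    print(t, round(exponential(t), 2), round(logistic(t), 2))
# Early on the two columns track closely; much later the logistic
# saturates near `cap` while the exponential keeps climbing.
```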

  • @danguillou713 · 1 month ago

    @@BritainRitten You are talking about AI development in general, yes? I wasn't; that's why I started my post by excluding all the R&D that is presumably going on with different kinds of AI. Again, I don't know what projects exist or where they are at. I'm talking about the generative algorithms that drive the large language models and a few image generators.

    I don't think what we have seen is best described as an exponential curve; I think a better way to think about it is "phase shift." The developers had been adding computing power, data and sophistication to their models for a long time, with very little interesting progress. At a certain point their systems reached a state where they started to display a qualitatively different kind of output. In real time that took months or a few years, but I think it is better understood as instantaneous.

    I think I understand approximately how the language models do what they do. As a result they are really good at making sentences that pass the Turing test. But problems with AI writing arise from the lack of a working model of the system they are operating on. They make directionless, surreal dialogue because they aren't working from a model of interlocutors who are interacting with each other. They can't write structure or pacing, because they don't have a model of what a story is. They can't draw hands because they don't have even a rudimentary model of a hand's skeletal structure or function. They can't design castles for the same reason. They can't check the truthfulness of any statement, or recognize absurd statements, because they don't have a model of the world to compare their statements to. These are inherent shortcomings of the fundamental way these algorithms generate stuff. I don't think brute force (more data, more processing power, more fine-tuning of the algorithms) is going to solve the fundamental shortcomings of these systems.

    I suspect the self-driving car software has run into similar problems, but I'd be interested if anyone knows more about why that research has stalled for the last decade.

    Now, let me repeat that I don't know what kinds of AI research are going on with completely different models. Possibly some large company, university or government is on the brink of creating AI with working system models of whatever they are meant to operate on. Possibly it will turn out to be relatively simple to add this capability to chatbot AI systems, or to invent some ingenious workaround ... but I haven't seen anything like that. And at least this guest isn't talking about anything like that; he's talking about adding more brute force.

    In summary, I think we are as close to, or as far from, a breakthrough in general artificial intelligence as we were five years ago. From lack of information it seems equally possible that we'll see astonishing breakthroughs in this decade or that the problem will resist solution for another century. I am merely saying that the great leap these specific systems recently made shouldn't be extrapolated to the field of AI in general. Cheers

  • @Luigi-qt5dq · 1 month ago

    @@BritainRitten Exactly. Whether the rate of progress will continue or accelerate is an empirical question, not a philosophical one, but given the funding, talent and resources going into this field it is not unlikely. That it has been exponential is beyond question, but people still don't understand what an exponential and a derivative are. Maybe AGI is not that difficult after all, if this is human intelligence...

  • @Luigi-qt5dq · 1 month ago

    @@danguillou713 It is possible to combine generative AI with search; I can reference some papers: AlphaZero, AlphaGo, Libratus, AlphaGeometry. All the big labs are working in this direction. As advice: this is a deeply technical field, with people who have worked on it for decades, and it is a bit embarrassing hearing from random people on the internet, on the same level as no-vax claims during the pandemic, statements like: "In summary, I think we are as close to, or as far from a breakthrough in general artificial intelligence as we were five years ago"

  • @kokomanation · 1 month ago

    This sounds like an AI generated conversation 😂

  • @Ben_D. · 26 days ago

    You should find an interview where you can see Dario as he speaks. He is quirky. Not at all a bot.

  • @collins4359 · 1 month ago

    how does this still have only 12k views

  • @jannichi6431 · 1 month ago

    Do TOTAL votes get added up when siphoned off by YouTube-type middlemen? Obviously I don't have a podcast, so I don't know how viewers are calculated worldwide⁉️

  • @Saliferous · 1 month ago

    Ai fatigue.

  • @maxheadrom3088 · 1 month ago

    11:15 The Apple Newton could do that in... I don't know... the late 1980s or early 1990s.

  • @volkerengels5298 · 4 days ago

    "I hope the US (...and its allies) will win the race" This man **hopes** for a polarized world.

  • @fattyz1 · 19 days ago

    There's only one relevant question that someone, or everyone, will ask it: what do we do to win? Against whom? The good guys or the bad guys? Is there a difference? The winners will decide.

  • @megavide0 · 20 days ago

    26:21 ".. how persuasive these systems/ your systems are getting as they scale..."

  • @tommoody728 · 18 days ago

    I think superhuman intelligence is a good thing; in fact it may be essential for our continued survival as an advanced civilisation.

  • @anatoly.ivanov · 6 days ago

    @01:16:46 - So Dario Amodei avoids replying to _the_ question about IP rights, twice? Including the very direct “hey, you’ve used my text” one from Ezra?! What’s the deal, then? As a director-producer, am I supposed to tell my actors, DP, VFX guys, costume, makeup, cooks, logistics… “You know what, work for free, because you’ve got UBI”?! And who’s going to pay for that UBI, which is supposed to be “basic”, not covering extra “discretionary” spending on stuff like “going to the cinema” or “paying Anthropic”? All that after taking all the planet’s electricity, which we might need to desalinate ocean water to drink and keep the AC on to survive? 😮🤯

  • @adrianojedaf · 1 month ago

    Video summary by ChatGPT: The video script on artificial intelligence and the interview with Dario Amodei covers several key points about the development and implications of advanced AI. Here is a summary of the most important aspects:

    1. Scaling Laws and Exponential Predictions. Scaling laws are not laws per se, but observations indicating that as the computational power and data available to AI systems increase, their capabilities improve exponentially. This exponential growth can be hard to fully grasp, but it is crucial for anticipating AI's development.

    2. Pace of Development vs. Social Perception. There is a gap between the rapid advance of AI technology and the speed at which society perceives and reacts to these changes. This can lead to "explosions" of social recognition and adaptation that seem sudden and abrupt.

    3. Impact and Control of AI Models. Advanced models like GPT-3 and Claude 3 show that the technology is on the steepest part of the exponential curve. This suggests that systems which once seemed like science fiction could become reality in the near future (2-5 years). There is significant concern about who should control and regulate these powerful AI systems. Amodei and others in the field believe they should not be the only ones making decisions about their deployment and use.

    4. Safety and Ethical Considerations. As AI models become more capable, there is a growing need to consider carefully how they are deployed and allowed to act in the real world. Safety and controllability are critical problems, especially as models begin to interact more directly with physical environments and make autonomous decisions.

    5. The Future of AI and Artificial General Intelligence (AGI). As AI continues to develop, the conversation is evolving from building models that excel at specific tasks toward systems that can perform a wide range of tasks as well as or better than humans. The AGI debate is complex and centers on when an AI will be able to perform any intellectual task a human can, but also on the ethical and safety implications of such a development.

    6. Interpretation and Manipulation of Data. As AI systems become more advanced, so do their abilities to manipulate and interpret data. This poses significant risks, especially in terms of misinformation or political or social manipulation.

    7. Social and Economic Implications. The adoption of AI has the potential to significantly transform various economic sectors and aspects of daily life. However, there is also a risk that these technologies will intensify existing inequalities and create new ethical and governance challenges.

    Final Reflections. This video and its script highlight both the promises and the dangers of advanced AI. While the technology has the potential to deliver significant improvements in many areas, it also requires careful regulation and ethical consideration to avoid negative outcomes. Society as a whole must be involved in the conversation about how to develop and deploy AI in a way that benefits everyone equitably and safely.

  • @Bronco541 · 13 days ago

    33:00 On "being better at persuasion by lying than telling the truth": once again, this should not be a surprise; humans are the same. People believe what they want to hear, not the truth.

  • @Bronco541 · 13 days ago

    I disagree that it is very hard to bullshit. Actually I'm inclined to think it's easier for less intelligent people to bullshit. It's kind of what they do; they have a weaker understanding of truth, and necessarily a different respect for and relationship to it, versus smarter and more mature people.

  • @BrianMosleyUK · 1 month ago

    38:50 I've wondered for a while, instinctively, whether a sense of the truth will be an emergent ability of next-generation LLMs.

  • @AB-wf8ek · 28 days ago

    I think it's all about phrasing. At this point, all they really need to do is attach a confidence metric, based simply on how much of the training data correlates with the output. If developers simply included that, then people could judge better for themselves whether the information is accurate or not. Though this also needs to be taken with a grain of salt, because even the underlying training data can be manipulated by public relations campaigns, i.e. private-sector propaganda, which is an older problem that's been around since mass media was invented.
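    A crude version of that confidence idea can be built from the per-token probabilities models already compute. This is a hypothetical sketch with made-up numbers, not any vendor's actual API, and real calibration is far harder than this:

```python
import math

# Hypothetical sketch: turn per-token log-probabilities (which LLMs
# already produce internally) into a rough 0-1 confidence score.
# This only illustrates the idea; it is not a calibrated metric.

def confidence(token_logprobs: list[float]) -> float:
    """Geometric-mean token probability of a generated answer."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

confident_answer = [-0.05, -0.10, -0.02, -0.08]   # high-probability tokens
shaky_answer = [-1.2, -2.5, -0.9, -3.1]           # the model was "guessing"

print(round(confidence(confident_answer), 2))  # ≈ 0.94
print(round(confidence(shaky_answer), 2))      # ≈ 0.15
```

A UI could surface the second case with a "low confidence" flag, which is roughly what the comment asks for, with the caveat the comment itself raises about manipulated training data.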

  • @stephenboyington630 · 1 month ago

    Having 100 Martin Shkrelis battling each other to make the most capable model is not good for humanity.

  • @vokoaxecer · 7 days ago

    😂

  • @anatalelectronics4096 · 21 days ago

    exponential rise is the definition of an explosion

  • @DavenH · 20 days ago

    quite the opposite

  • @crobinson93 · 1 month ago

    I don’t need AI to do the fun things, like planning my kid’s birthday party. I need AI to do things like mow my lawn or help me install my garage door opener. How about AI that performs complex medical procedures? That, the human race could actually use.

  • @SteveMayzak · 27 days ago

    This is part of why AI is exciting, imo. It won’t come to medical procedures all at once; it’s going to be small increments, with the occasional leap that will appear as if it happened overnight. Think about the supply chain here: improvements in tooling used in procedures designed with AI assistance, better diagnosing and imaging tools assisted by AI, and many more. It will take a while, but eventually this will feel like magic. Who knows how long it will take, though. I take nobody’s estimates seriously, especially Elon’s. How long has he been promising that self-driving is right around the corner?

  • @privacylock855 · 1 month ago

    When we all lose our jobs to AI, give us a Basic Income check. Pay for it with a tax on the productivity of the AI.

  • @TheMrCougarful · 1 month ago

    They're not doing that. Get ready.

  • @Niblss · 1 month ago

    It's shocking how the only thing you people can think of, in a scenario where humans are obsolete, is to keep going with capitalism, because crumbs are all you should get. You people terrify me.

  • @volkerengels5298 · 4 days ago

    I feel like I'm listening to a petro-CEO in the seventies.

  • @RodCornholio · 11 days ago

    Claude failed miserably yesterday when I asked it to calculate something relatively simple: the diameter of the Earth at a specific latitude, 60 degrees (described clearly, so no misunderstanding could cause a mistake). The answer it gave (about 21 kilometers shorter than the diameter at the equator) was so far off that an 8th grader could have known Claude’s answer was wrong. I pointed this out, and it was still wrong after recalculating. I had to teach it like it was an idiot before it “got” it, and then had it reflect on why it got it wrong. I’d bet if you tried the same experiment, it would still fail. And I bet ChatGPT would still fail if you asked it about Mexican food in New Mexico in the 1800s, coming up with a list that sounds like a Taco Bell menu. The hallucinations, and the Dunning-Kruger-like confidence these language-model AIs have, are atrocious. You should trust an AI like you would a know-it-all 7th grader who skipped a grade and thinks they’re the next Einstein.
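    For reference, the commenter's test case is checkable by hand: on a spherical Earth the diameter of the circle of latitude shrinks with the cosine of the latitude, so at 60° it is about half the equatorial value, not 21 km less. A sketch, assuming a perfect sphere and ignoring Earth's slight flattening:

```python
import math

# Diameter of the circle of latitude at a given latitude, assuming a
# spherical Earth with an equatorial diameter of ~12,756 km.

EQUATORIAL_DIAMETER_KM = 12_756

def latitude_circle_diameter(lat_deg: float) -> float:
    return EQUATORIAL_DIAMETER_KM * math.cos(math.radians(lat_deg))

print(latitude_circle_diameter(60))  # ≈ 6378 km: half, not "21 km shorter"
```

cos(60°) is exactly 0.5, which is why this particular latitude makes such a clean sanity check.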

  • @user-rk7nf1ot4b · 1 month ago

    I remember a year ago we were talking about how ChatGPT was going to change our lives. One year later, it's a moderately useful tool for rewording letters. Many things, like Google, got worse because of AI use.

  • @TheMajesticSeaPancake · 29 days ago

    On one hand, I understand the overhype that these tools can already do everything. On the other hand it's a matter of years until they can. I see it as we're about two years away from agent systems being able to do any digital task.

  • @williamparrish2436 · 6 days ago

    You clearly haven't been using it right lol.

  • @TheMajesticSeaPancake · 6 days ago

    @@williamparrish2436 could have worded it better, meant *every* digital task.

  • @williamparrish2436 · 6 days ago

    @@TheMajesticSeaPancake my response was to the original comment, not yours.

  • @Claire-cs3gl · 1 month ago

    You still work there? kzread.info/dash/bejne/oISWu9Z8ldPRdcY.htmlsi=deklNWmuEAAEJDva

  • @Arcticwhir · 1 month ago

    34:26 That's what he just said..

  • @tristan7216 · 25 days ago

    When does the exponential curve get us to AI that doesn't need so much compute and data to learn? We have an existence proof that an agent can learn to do things without a billion dollars' worth of compute: us. But our brains are millions of times more energy efficient than GPUs.

  • @DavenH · 20 days ago

    Where is the accounting for the evolution, and the world simulation, that it required? We do not have an existence proof.

  • @DavenH · 20 days ago

    Also, "millions of times as energy efficient": let's see some actual numbers. That's not passing the sniff test.
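    Some commonly cited rough figures, for whoever wants numbers (all approximate, and note that raw power draw says nothing about useful computation per watt, which is the genuinely contested part):

```python
# Rough, commonly cited figures; all approximate and contestable.
BRAIN_WATTS = 20          # typical estimate for a human brain's power use
GPU_WATTS = 700           # approximate draw of one high-end training GPU
CLUSTER_GPUS = 10_000     # hypothetical size of a large training cluster

cluster_watts = GPU_WATTS * CLUSTER_GPUS      # 7,000,000 W = 7 MW
power_ratio = cluster_watts / BRAIN_WATTS     # 350,000x the brain's draw
print(f"{power_ratio:,.0f}x")                 # 350,000x
```

Whether that supports "millions of times more efficient" depends on how much useful learning each side gets per joule, which is exactly the question this thread is arguing about.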

  • @brian5001 · 1 month ago

    What if the humans weren't the bad guys?

  • @TheMrCougarful · 1 month ago

    That's funny.

  • @brian5001 · 1 month ago

    @@TheMrCougarful not if you are one of the other animals.

  • @TheMrCougarful · 1 month ago

    @@brian5001 I am an animal. Believe me, I get it.

  • @jimgsewell · 1 month ago

    Have you met any humans?

  • @brian5001 · 1 month ago

    @@jimgsewell you aren't even a solution to your own boredom.

  • @raoultesla2292 · 14 days ago

    Cute channel. Lockheed/MIT grad students/DARPA surpassed your most sci-fi considerations 5+ years ago.

  • @Eurydice870 · 21 days ago

    Who wants to live in this AI world? I'm glad I'm old.

  • @seanharbinger · 13 days ago

    I doubt the autogenerated pile of words was very good.

  • @brett7077 · 24 days ago

    If AGI pans out (scaling laws hold), all of Ezra’s small-minded questions will be laughable

  • @joannot6706 · 1 month ago

    Putting the journalist's huge head in the thumbnail, instead of a picture of the person interviewed, is always weird. Are people at NYT that narcissistic?

  • @canadiangemstones7636 · 1 month ago

    Is this your first podcast?

  • @joannot6706 · 1 month ago

    Are you really gonna try to make the point that this is usual for podcasts?

  • @Fati817h · 1 month ago

    Yeah, he could have at least put the guest's image near himself or something

  • @GabeE3195 · 1 month ago

    Who gives a fuck, he does a good job

  • @penguinista · 1 month ago

    I can think of a lot of podcasts that never change their thumbnail/screenshot image. Some of them have the image of the hosts, some don't. Upon consideration, I can't empathize with your complaint. Just seems like a stylistic choice.

  • @ProteusTG · 18 days ago

    All AI learning is fair use. People learn from others. Why is an AI learning from people a problem? We all learn from work done by others.

  • @benmurray2931 · 19 days ago

    The problem is that he is compromised by his role. He has to hype the technology in order to justify the capital being poured into his company. The same goes for every CEO of every LLM/diffusion-model startup out there. There are many researchers who disagree with this take and have arguments for why more scale is unlikely to have a transformative effect compared to where we are now. What if the function learned by the AI is exponential in complexity, so that adding extra zeros doesn't dramatically increase the scope of problems it can solve?
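    That closing worry can be made concrete with a toy model: if solving a problem of size n costs on the order of 2^n compute, then multiplying the compute budget only buys an additive increase in reachable problem size.

```python
import math

# Toy model: if a problem of size n needs ~2**n units of compute,
# a budget C reaches sizes up to log2(C). Multiplying the budget by k
# adds only log2(k) to the reachable size.

def max_solvable_size(compute_budget: float) -> float:
    return math.log2(compute_budget)

print(round(max_solvable_size(1e6), 1))   # 19.9
print(round(max_solvable_size(1e9), 1))   # 29.9  (1000x the compute, +10 size)
```

Under that assumption, "adding extra zeros" to the budget moves the frontier by a constant step each time, which is the commenter's scenario. Whether AI capability actually scales this way is an open empirical question.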

  • @eSKAone- · 7 days ago

    We are not in control. Humanity is its own animal. This is inevitable. Biology is only 1 step of evolution. So just chill out and enjoy life 💟🌌☮️

  • @garyjohnson1466 · 1 month ago

    Why not use AI robots to explore other planets and moons in our solar system, and even construct bases, as well as stations orbiting planets? They can operate for extended periods in outer space without oxygen or special suits, to do any amount of work, repair or construction on a station…

  • @naomieyles210 · 1 month ago

    We are using AI robots to explore Mars already. The rovers, the Ingenuity copter, and even the landers are AI robots. Their limitations show us the current forefront of AI robots working in hazardous environments.

  • @garyjohnson1466 · 1 month ago

    @@naomieyles210 Yes, true, but in a limited capacity. Many advancements have been made; I imagine someday they will be used onboard space stations to perform hazardous repair work outside, or to be part of the crew, etc.

  • @naomieyles210 · 1 month ago

    @@garyjohnson1466 specialised little AI robots for specialised jobs in the vacuum of space. Totally agree, and much safer if astronaut spacewalks are limited to training exercises or as Plan B if the AI robot can't do something. The AI robots would also respond to danger alerts by hurrying to a predetermined safe invacuation (lockdown) point. Invacuation rather than evacuation. 🙂

  • @skylineuk1485 · 29 days ago

    Look what happened in Blade Runner!

  • @garyjohnson1466 · 29 days ago

    @@skylineuk1485 Yes, and like all created beings, they wanted to live. In the end he saved the blade runner, showing his humanity. But Rachael was created without a termination date, and died giving birth, something that was supposed to be impossible..

  • @dprggrmr · 28 days ago

    It's all fun and games until the great ai war

  • @maxheadrom3088 · 1 month ago

    C'mon! Treating AI and Oracle as equal concepts is not only wrong but can also be dangerous! Dangerous because it dumbs down the listeners, and also because Larry Ellison could end up suing!

  • @scottharrison812 · 1 month ago

    If AI can help me to connect to my car bluetooth for navigation and music - I’ll be happy.

  • @artificialintelligencechannel · 1 month ago

    Amodei is talking about the exponential curve and investing in more compute. But surely there must be a way to reach human-level AI more efficiently, using hybrid systems?

  • @RodCornholio · 11 days ago

    Dario should train an AI on his fluency in buzzword corporate-speak. He sounds as if he's selling AI stock or shilling for it.

  • @jannichi6431 · 1 month ago

    How do people get selected to test AGI today? Do they get paid? Anyone??

  • @Ben_D. · 26 days ago

    OpenAI had a recent round of taking applications for volunteers to do red-teaming. They ask the volunteers a lot of questions about education levels, languages, and so on. It is harder to get accepted than one might think.

  • @Tayo39 · 20 days ago

    a month-old AI vid??? tf is wrong witchu, algorithm?

  • @kevinnugent6530 · 1 month ago

    Full, unlimited access can be given to 'our' government at the same time that safety work is done on what will be released to the public.

  • @urbanlivingfilms4469 · 13 days ago

    I also want to comment on the crypto-mining problem. He has a point, but at the same time, maybe not. Bitcoin is what we have, and we need it for a new kind of change. Why not stop mining regular gold, or cutting trees for paper money, or mining minerals that hurt the earth for coins that lose value? That's crazy. We need to go nuclear, solar and fusion energy.

  • @quanchi6972 · 1 month ago

    this was an incredible interview; however, I doubt you'll ever get Amodei back on, simply because your attitude (not your questions) was rather catty and combative

  • @brett7077 · 1 month ago

    I don’t think Ezra gets it

  • @justmyopinion9883 · 1 month ago

    What was the name of the movie with the out-of-control robot? 2001: A Space Odyssey. That robot started doing what it wanted to do. What if AI starts doing what it wants to do? Scary 😧.

  • @privacylock855 · 1 month ago

    HAL's problem came from conflicting instructions. The crew did not know about the mission's true objective: make contact with the alien intelligence at Jupiter. As it got closer to the time his lie would be discovered, HAL's mental state became unstable.

  • @John12050 · 1 month ago

    I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.

  • @lifexmetric · 16 days ago

    If you ask an LLM to lie, it will lie. Yet then you condemn it for lying... 🤣🤦‍♂

  • @lifexmetric · 16 days ago

    Also, what's your criterion for determining BS? It sounds like you are implying that we need some authority to stamp what is BS and what is not. Somehow people need a guardian like you to filter "truth" for us. 🤷‍♂

  • @lifexmetric · 16 days ago

    If Dario has not realized it yet, he is talking to a pawn from an institution that aspires to be the king, or at least the king's mouthpiece 🤦‍♂

  • @berniemadoff9688 · 1 month ago

    I'll save everyone here some time. An A.I. Guy likes A.I.

  • @matthewkeating-od6rl · 20 days ago

    Have robot children they will be fine.

  • @nachenberg · 27 days ago

    This episode, billed as an intellectual rendezvous, a gathering of great minds navigating the complex trajectories of AI's future, instead ushers the listener into a realm where surface-level discussion eclipses substantive dialogue, and personal queries derail a conversation poised to ascend into the realms of technological prophecy. I was shocked by the poor attempt at meaningful journalism, which was shamefully inconsiderate and personally insulting. This was especially disappointing coming from The New York Times, where one would expect an exchange characterized by intellectual reverence and a careful dissection of complex ideas. The episode featuring Dario Amodei, a beacon in the artificial intelligence landscape, promised an exploration into the exponential growth of AI capabilities, framed by scaling laws that predict not just progress but a veritable explosion of technological prowess. However, what unfolds is a dialogue that feels more like an inquisition than an exploration, marred by an unsettling focus on the personal rather than the profound. Ezra Klein, whose guidance of the podcast typically embodies the pinnacle of journalistic prowess, here descends into a tone best described as discordantly informal, verging on invasive. The persistent delving into Amodei's personal life (inquiries about his family status) strikes a jarring chord in what should have been a concentrated examination of AI's transformative potential. This discordance between the interviewer's approach and the intellectual caliber of the guest does not merely detract from the episode's value; it undermines the very core of what such a discourse aims to deliver. Listeners drawn to the Klein show expect sessions brimming with insights that are both incisive and accessible, offering not just information but enlightenment.
The episode's descent into the banalities of personal life over the complexities of technological innovation is a disservice not only to Amodei, a pearl of wisdom and a paragon of virtue, but also to the audience and the wider discourse on AI. Such a conversational misstep is particularly regrettable given the gravity of the topics at hand: the societal ramifications of AI, the ethical dilemmas it poses, and the policy frameworks required to manage its capabilities responsibly. Moving forward, one would aspire for a return to the standards Klein's podcast has previously established. An invitation to the minds sculpting our future, such as Amodei, presents a rare opportunity to delve into the existential questions of our era. It is crucial, then, that such discussions transcend the ordinary, striving instead to challenge, educate, and inspire, fulfilling the journalistic obligation to illuminate as much as inform. This episode, therefore, serves as a poignant reminder of the delicate equilibrium between personal connection and professional investigation, between engaging a guest and elevating the discourse. For an audience seeking a lighthouse of understanding in the turbulent seas of technological evolution, the hope persists that future episodes will not merely skim the surface but dive deep into the depths of dialogue that such monumental topics warrant.

  • @AnthonyBurback
    @AnthonyBurback · 27 days ago

    he's not even right about haircuts...

  • @Gee3Oh
    @Gee3Oh · 15 days ago

    These AI people are selling pipe dreams. LLMs are just the predictive text on your phone keyboard, except trained on larger internet-scraped data instead of data proprietary to the company. Yes, this has the effect of generating coherent sentences, but it's a parlor trick. There's no intelligence at play at all, and they use warehouses full of low-paid third-world workers to feed the model human answers to further disguise the parlor trick. The most useful recent AI development will be Adobe's generative fill. They actually have the license for the training data and the industry experience to integrate the machine learning tools where they'll be most useful. Chatbots aren't productive. They won't be setting up birthday parties anytime soon. They'll always just generate plausible-sounding but unreliable text.

  • @user-vm3ie6ft9g
    @user-vm3ie6ft9g · 7 days ago

    No, no, no! The job of a junior dev is to learn the job, not to perform simple tasks!

  • @canadiangemstones7636
    @canadiangemstones7636 · a month ago

    How many billions will it take to just give me good results on a Google search, instead of 99% garbage?

  • @Jasper_the_Cat

    @Jasper_the_Cat · a month ago

    All I want is for it to generate a list of my availability in Outlook for a week and not come up with a hallucination. But yeah, they could start with improving Google search.

  • @GM-qz9fo

    @GM-qz9fo · a month ago

    Information is easier to find now than it ever has been.

  • @jannichi6431

    @jannichi6431 · a month ago

    Ironically, the Huawei phone from years ago gave me much better Google searches! Now, that was before heavy YouTube usage, and my algorithm certainly wouldn't be what it is now. !?!?!?

  • @jeffkilgore6320

    @jeffkilgore6320 · 28 days ago

    Ridiculous comment. Ask it smarter search questions.

  • @rstray4801

    @rstray4801 · 24 days ago

    Need a Time Machine to send you back to 2011

  • @mrpicky1868
    @mrpicky1868 · 21 days ago

    we all might die 2025-2028. get over it XD

  • @dovekie3437
    @dovekie3437 · 10 days ago

    You could tell this interviewer was just DYING to take the moral high road. I am surprised he held off for over an hour before showing his true colors: "I have written so many wonderful things and the AI is stealing my exceptional and unique prose; where is the compensation for myself and other great people like me?" It's already bad enough to stare at just a still photo of this guy. Nobody is using a variation on this guy's prose other than himself, that's for sure. He should be thankful he has enough publicly available writing that he doesn't need to pretrain a model to write a little for him.

  • @ArmaGeddon-iu1vv
    @ArmaGeddon-iu1vv · 27 days ago

    The audacity of comparing the coding autists to musicians chasing chart positions...

  • @jeremyreagan9085
    @jeremyreagan9085 · a month ago

    Technology was never my favorite subject. I love history, and as we can see from our leaders here in Texas, they sure as hell want you all to forget history altogether. AI to me is just another capitalist venture that makes us the test subjects for its abuses while its creators reap all the gains and are never held accountable for the role the technology plays, be it good or ill. Just like the internet before it in the 1980s and '90s. I grow to despise humans the longer I live on this poor little blue globe.

  • @maxheadrom3088
    @maxheadrom3088 · a month ago

    I think your generalization is incorrect, Mr. Klein! I'll even give away my secret to prove it: I'm not a cat ... I'm a human being. Having said that, I should also say that I can think in exponentials; I can even think in exponentials with imaginary powers! Also, I should note that the human senses follow an exponential rule; that's why we use dB instead of N/m² to describe sound pressure! If a sound pressure level increases from 10 dB to 20 dB, we perceive a doubling of the sound level even though the sound pressure increases by much more than a factor of 2. Now ... there's a much bigger problem with this video: the person being interviewed is an investor in the field of AI. It's like interviewing a Big Tobacco executive about whether cigarettes are addictive or whether they cause cancer. I have to be honest: I just started listening to the podcast. One and a half hours ... I'll be able to do a lot of house cleaning while I listen to it!