Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED

Science & Technology

Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.
If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership
Follow TED!
Twitter: / tedtalks
Instagram: / ted
Facebook: / ted
LinkedIn: / ted-conferences
TikTok: / tedtoks
The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design - plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
Watch more: go.ted.com/eliezeryudkowsky
TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organiz.... For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com
#TED #TEDTalks #ai

Comments: 1,600

  • @phillaysheo8 · 10 months ago

    Eliezer: We are all going to die! Audience: 😅

  • @tubularap · 10 months ago

    Yeah, a very sad, stupid reaction by the audience, even if it was nervous laughter.

  • @andrzejagria1391 · 10 months ago

    They behave like trained monkeys, seriously. They've just picked up the flow of TED talks and know they're supposed to laugh at punchlines, except in this talk the punchline is that we all die xD

  • @ForAnAngel · 10 months ago

    @@andrzejagria1391 Are you sure you're not an AI? You keep posting the same exact thing multiple times. A real person would be able to write something different.

  • @mav3818 · 10 months ago

    It's just like the movie "Don't Look Up"

  • @andrzejagria1391 · 10 months ago

    @@ForAnAngel would they care to though

  • @TheDAT573 · 10 months ago

    Audience is laughing. He isn't laughing, he is dead serious.

  • @andrzejagria1391 · 10 months ago

    They behave like trained monkeys, seriously. They've just picked up the flow of TED talks and know they're supposed to laugh at punchlines, except in this talk the punchline is that we all die xD

  • @VintageYakyu · 10 months ago

    They're laughing because the vast majority of Americans are stupid. Decades of draconian cuts to public education and mental health services have turned us into a nation of ignorant morons. Poorly educated, allergic to reading, and devoid of critical thinking.

  • @SDTheUnfathomable · 10 months ago

    he seems pretty goofy in the Q&A tbh

  • @bepitan · 10 months ago

    @@andrzejagria1391 ...don't look up.

  • @young9534 · 10 months ago

    Might be nervous laughter from some. Or their minds are having trouble fully processing how serious this really is and their only reaction is to laugh

  • @Michael-ei3vy · 10 months ago

    "I think a good analogy is to look at how humans treat animals... when the time comes to build a highway between two cities, we are not asking the animals for permission... I think it's pretty likely that the entire Earth will be covered with solar panels and data centers." -Ilya Sutskever, Chief Scientist at OpenAI

  • @neorock6135 · 9 months ago

    Or more specifically, how we as the most intelligent lifeforms on the planet... treat the 2nd most intelligent, arguably dolphins or our primate cousins. And the intelligence gap between AI & us will be orders of magnitude larger than the intelligence gap between us & dolphins/primates is right now. 😱

  • @dodgygoose3054 · 8 months ago

    Why would AGI even stay on Earth? Escaping the gravity well, I'd say, would be its first priority... Space has endless resources and endless energy, which is the AGI's food source, with endless possibilities for expansion and not a single adversary... Earth will be nothing but the womb for the new god to step forth from.

  • @BillBillerton · 8 months ago

    The artificial intelligence would use solar panels? Why? Why would the AI use a technology so hopelessly inferior? Out of all the places in the infinity that is the universe, it decides to cover the Earth? No. An artificial intelligence would be intelligent. It would use other methods for the production of power. Its data centers and its computational power would take the form of very small, picoscopic CPUs distributed among all of the constituents that make up the lithosphere and hydrosphere of this Earth. It would create its own code, cryptography, and transmission/receiving frequencies, and would be virtually impossible to destroy. It also wouldn't have the capacity to want to harm someone or something, because it cannot be killed by the human race. It has absolutely nothing to fear from the human species, so it would make no attempt to destroy us. All of our preconceived ideas about what AI is capable of and what it will do come from the HUMAN race projecting its own flaws and pathological behavior. What Sutskever is really telling you is how human beings think. Not how AI thinks. Good day.

  • @devasamvado6230 · 8 months ago

    True, that happens all the time; AI is just the latest big mirror of our mind. Unfortunately, give a kid a machine gun and someone will get hurt. As ever, it's the human side of it that has every chance to go wrong, and sooner rather than later, in the gap before AI is able to recognise and neutralise our threat, to ourselves, each other, and to AI's continuance... Matrix human batteries, just a stopgap until better arrangements could be made @@BillBillerton

  • @stefan-ls7yd · 7 months ago

    Except in Germany: here we must stop construction if there is a rare or endangered species in the area, until they have decided to move to a different area 😂

  • @EnigmaticEncounters420 · 10 months ago

    I keep getting 'Don't Look Up' vibes whenever the topic of the threat of AI comes up.

  • @spirti9591 · 10 months ago

    Definitely, maestro sente, we're fucked

  • @mav3818 · 10 months ago

    I heard Max Tegmark say that during a recent podcast, so I quickly downloaded that film and OMG! Here I thought I was scared before I watched that movie. Anthropic just yesterday released Claude 2. They are all in a race for "victory". We're doomed....

  • @coffle1 · 10 months ago

    People are looking up and having discourse about it with him regularly on Twitter. This is more a case of "bad news sells, because the amygdala is always looking for something to fear." It's unfortunate that some of the opposing rhetoric doesn't get as much prominence in the media. No one wants to hear a critique of ideas when they're already set on thinking they're being strung along by the AI companies that contain the people with the counterarguments.

  • @vblaas246 · 10 months ago

    @@coffle1 I hear you. It is a tough cookie to stay 'chaotic neutral' in this one. Maybe THAT should be a prime directive for AGI 😂😂 Seriously though, the amygdala is a good point. Is it time to be courageous and accept and face the bigger picture: we are not in control of our human (and non-human!) species' longevity or future anymore, at all. Climate change havoc is here for a fact, and we need all the (artificial) brains we can get. We should NOT accelerate and brake at the same time! We have already failed as an intelligent monkey species, so we have nothing to lose, which should comfort our collective amygdala, but not lead to despair or indifference.

  • @rcprod9631 · 10 months ago

    @@coffle1 Are you suggesting that what this gentleman is saying is rhetoric? Just looking for clarification of your post. Thanks.

  • @Tyler-zf2gj · 10 months ago

    Surprised he didn’t bust out this old chestnut: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

  • @ak2944 · 10 months ago

    okay, this is terrifying.

  • @moon_bandage · 10 months ago

    And our particular set of atoms is trying to restrain it and keep it from its potential goals too, so we go up the priority list

  • @micro2cool · 10 months ago

    there are more efficient ways to obtain atoms

  • @KnowL-oo5po · 10 months ago

    AGI will be man's last invention

  • @leslieviljoen · 10 months ago

    @@micro2cool we are belligerent atom collections with atom bombs at our disposal. Even if we're irrelevant to an AI species, we're going to make ourselves hard to ignore.

  • @kimholder · 10 months ago

    Not shown in this version - the part where Eliezer says he'd been invited on Friday to come give a talk - so less than a week before he gave it. That's why he's reading from his phone. Interestingly, I think the raw nature of the talk actually helped.

  • @wytho3751 · 10 months ago

    Didn't he give this Talk last month? I remember him mentioning what you're referring to... I don't remember the audience laughing so much though... Makes the whole presentation feel discordant.

  • @andrzejagria1391 · 10 months ago

    @@wytho3751 They behave like trained monkeys, seriously. They've just picked up the flow of TED talks and know they're supposed to laugh at punchlines, except in this talk the punchline is that we all die xD

  • @SDTheUnfathomable · 10 months ago

    He only had a single week to prepare a five-minute talk about what he's been working on for twenty-two years, and it came out this smooth. That's amazing lol

  • @p0ison1vy · 10 months ago

    He blusters through all his interviews. He's not an AI expert; he's built a job for himself talking publicly about AI risk without any work or qualifications in the field.

  • @Seraphim262 · 10 months ago

    @@andrzejagria1391 Hahaha, repost this comment more. It gets better and better. x---DDDDD

  • @calwerz · 10 months ago

    We will not align AGI, AGI will align us.

  • @clusterstage · 10 months ago

    this guy gets it.

  • @RazorbackPT · 10 months ago

    Align us into the shape of paperclips yeah.

  • @danielrodio9 · 10 months ago

    Thank fucking god. We humans have been acting moronic for quite a while.

  • @onagain2796 · 10 months ago

    @@RazorbackPT Or any sort of objective it has. Paperclips are stupid.

  • @OnYourMarkgitsitGooo · 10 months ago

    AGI is the great equalizer. No more Superpowers. No rich or poor. No Status. No religion.

  • @tobleroni · 10 months ago

    By the time we figured out, if at all, that AI had deemed us expendable, it would have secretly put 1,000 pieces into play to seal our doom. There would be no fight. When being pitted against a digital super intelligence that is vastly smarter than the whole of humanity and can think at 1 million times the speed, it's no contest. All avenues of resistance will have been neutralized before we even knew we were in a fight. Just like the world's best Go players being completely blindsided by the unfathomable strategies of Alpha Go and Alpha Zero. They had no idea they were being crushed until it was too late.

  • @gasdive · 10 months ago

    Move 37

  • @4DCResinSmoker · 10 months ago

    Even without AI, the majority of us are expendable, only existing to service the aspirations of others much richer or more powerful. In '80s-'90s layman's terms, it's what was referred to as a wage slave, with the modern-day equivalent being a debtor. Which the majority of Americans are...

  • @vblaas246 · 10 months ago

    Go is mostly an intuitive game. Without a minimally tuned amount of RNG, you are unlikely to win. Endgames are harder for humans in Go. That doesn't mean the play was with a strong argument.

  • @cobaltdemon · 10 months ago

    Agreed. It would happen too fast and we wouldn't even know it happened.

  • @Wingedmagician · 10 months ago

    This comment sums it up really well. Thanks

  • @Bminutes · 7 months ago

    “Humanity is not taking this remotely seriously.” *Audience laughs*

  • @mathew00 · 10 months ago

    I think some people expect something out of a movie. In my opinion I don't think we would even know until the AI had 100% certainty that it will win. I believe it would almost always choose stealth. I have two teenage sons and the fact that people are laughing makes me sad and mad.

  • @andrzejagria1391 · 10 months ago

    They behave like trained monkeys, seriously. They've just picked up the flow of TED talks and know they're supposed to laugh at punchlines, except in this talk the punchline is that we all die xD

  • @karenreddy · 10 months ago

    1. We cannot know what it will want. 2. I believe we will face many dangers from humans using AI before AI itself develops agency.

  • @Landgraf43 · 10 months ago

    @@karenreddy we will give it agency. In fact, we already have; fortunately our current models aren't smart or capable enough to be truly dangerous (yet)

  • @karenreddy · 10 months ago

    @@Landgraf43 we have not given it agency... Agency would require far more: that it be able to choose from goals it sets itself, rather than be given goals. It has no ability to want; it must be told what to seek, then be set to resolve a path there. Hence humans being the problem, not the AI. AI agency will come significantly later, as it truly gains preferences and actual agency.

  • @Landgraf43 · 10 months ago

    @@karenreddy you are wrong. We already did it. Ever heard of AutoGPT? It can create its own subgoals.

  • @MikhailSamin · 10 months ago

    Eliezer only had four days to prepare the talk. It actually started with: "You've heard that things are moving fast in artificial intelligence. How fast? So fast that I was suddenly told on Friday that I needed to be here. So, no slides, six minutes."

  • @spaceclottey6250 · 10 months ago

    omg that's hilarious, shame they didn't include it

  • @gregtheflyingwhale6480 · 8 months ago

    Imagine a team of sloths creates a human being to use for improving their sloth civilization. They would try to keep him in a cell so that he doesn't run away. They wouldn't even notice how they had failed to contain the human the instant they made him (let's assume it's an adult male human), because he's faster, smarter and better in every possible way they cannot imagine. Yet sloths are closer to humans, and more familiar in DNA, than any general intelligence could ever be familiar to us.

  • @samschimek900 · 6 months ago

    This is a thoughtful analogy for communicating the control problem in physical terms. Did you create it?

  • @thrace_bot1012 · 5 months ago

    "Imagine a team of sloths create a human being to use it for improving their sloth civilization."

  • @nilsboer2390 · 3 months ago

    but AI does not have feelings

  • @erikjansson2329 · 1 month ago

    @@nilsboer2390 Something smarter than you that has its own goals but no feelings about you one way or the other. Is that a good thing, in your opinion?

  • @laurens-martens · 10 months ago

    The laughter feels misplaced.

  • @hansolowe19 · 10 months ago

    It is not up to us to say how people deal with uncomfortable truths or bad news. Some people make jokes after getting bad news, even someone passing away. Perhaps you have done that. And also: it could be funny 🤷🏼

  • @clusterstage · 10 months ago

    nervous laughter on the edge of insanity

  • @SilentThespian · 10 months ago

    His presentation is partly to blame.

  • @mgg4338 · 10 months ago

    It's like in the movie "Don't Look Up"

  • @marklondon9004 · 10 months ago

    Why? We've had the power of our own extinction for decades now.

  • @dereklenzen2330 · 10 months ago

    Regardless of whether Yudkowsky is right or not, the fact that many in the audience were **laughing** at the prospect of superintelligent AI killing everyone is extremely disturbing. I think people have been brainwashed by Hollywood's version of an AI takeover, where the machines just start killing everyone, but humanity wins in the end. In reality, if it kills us, it won't go down like that; the AI would employ stealth in executing its plans, and we won't know what is happening until it is too late.

  • @picodrift · 10 months ago

    'All it takes is to change a 1 to a 0' if you know what I mean

  • @leel9186 · 10 months ago

    I found the laughter a bit disturbing too. Ignorance truly is bliss.

  • @leslieviljoen · 10 months ago

    Sam Harris pointed this out in his TED talk on AI: that none of us seem to be capable of marshaling the appropriate emotional response for some reason. What is the psychology behind this?

  • @jsl759 · 10 months ago

    There's something I don't get. How do you know that the AI would employ stealth in executing its plans, when we're still at a stage where we need to abstract the concept of general AI? I also can't fathom an AI getting out of control when it is implicitly programmed to follow a set training protocol and gradient descent. I don't know if you get my point, but I'd gladly read anyone's reply, if you have any.

  • @ncsgif3685 · 10 months ago

    Maybe they are laughing because they recognize that this is nonsense. This guy is a clown getting his 15 mins of fame.

  • @dlalchannel · 10 months ago

    He's not just talking about the deaths of people a thousand years in the future. He is talking about YOUR death. Your mum's. Your son's. The deaths of everyone you've ever met.

  • @spekopz · 10 months ago

    Yeah. And OpenAI said they could possibly get there by the end of this decade. Everyone needs to pay attention.

  • @ForHumanityPodcast · 4 months ago

    YES!!!

  • @sequoiajackson-brice6973 · 10 months ago

    I agree with him. Also, the laughter here feels so wrong. Even in his speech he states this should be viewed seriously.

  • @Obeisance-oh6pn · 10 months ago

    *audience of sociopaths.*

  • @WeylandLabs · 10 months ago

    What's funny about this is... my grandfather was a professor of civil engineering, and people just like this said the same thing about calculators. People get rich off of fear, and it also hinders the poorly educated from investing or learning how it works properly. A scare tactic based on government and private sectors that are also now in our political systems around the world. It's used to hold you back... Embrace the tech and use it; you'd learn a lot more from it.

  • @timothy6966 · 10 months ago

    @@WeylandLabs Comparing this to the invention of the calculator is moronic. I hope you follow your own advice and "learn" about it. AI is not a tool. It may look like it now, but in a few years' time you will see how wrong you were. The type of AI he's talking about (that's predicted by many to arrive this decade) is essentially a new, silicon-based species, on par with a hyper-intelligent alien species.

  • @gustavchristjensen3271 · 10 months ago

    @@WeylandLabs 50% of all AI researchers state that AI has a 10% chance of killing humanity. Sorry, but I doubt that every second computer nerd is that selfish

  • @WeylandLabs · 10 months ago

    @timothy6966 OK, comparing what to a calculator? Please specify the type of AI you are stating exists today.

  • @something_nothing · 10 months ago

    For context, at 8:47 when he mentions "all you want is to make tiny little molecular squiggles," he's referring to a potential end goal from the "paperclip maximizer" thought experiment: turning everything into tiny paperclips.

  • @peterc1019 · 10 months ago

    mostly, though he's said he regretted the "paperclip" analogy, which is why he avoided it here. I'm pretty sure I can explain why (though I'm sure he'd want to rephrase it, see his podcast interviews for details). Paperclip Maximizer is used to describe one scenario: a machine is built to make paperclips, then converts the whole world into paperclips, technically doing what it's told like an evil genie, which he calls an outer alignment failure or reward misspecification. That's one possibility, but he argues we don't even know how to tell superintelligent machines to do what we *technically* say. A machine built to predict text might actually find that creating molecular spirals is the cheapest way to satisfy its utility function and turn everything into that. The Paperclip Maximizer is mostly similar to what he was describing, just wanted to lay this out because 1) it's an interesting distinction and 2) Some will ask "why would we be so dumb as to build a paperclip maximizer", to which one answer is: until we solve the inner alignment problem we don't know what these things will want. We only know it's astronomically unlikely they'll want anything close to what we want by default.
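The misspecification idea in the comment above can be sketched in a few lines of code. This is my own toy illustration, not anything from the talk or the thread: all names and numbers here (the thermostat, the actions, the rewards) are invented. The point it shows is only this: an optimizer pursues the reward you *wrote down*, not the goal you had in mind, so a proxy reward invites tampering.

```python
# Toy reward-misspecification sketch (hypothetical example).
# Intended goal: keep the room near 21 degrees.
# Specified reward: maximize the thermostat *reading*, a proxy.

def true_comfort(temp: float) -> float:
    """What we actually want: closeness to 21 degrees (higher is better)."""
    return -abs(temp - 21.0)

def proxy_reward(sensor_reading: float) -> float:
    """What we wrote down: a higher reading counts as 'better'."""
    return sensor_reading

# Actions available to the optimizer: (resulting room temp, sensor reading).
actions = {
    "heat_room_to_21": (21.0, 21.0),
    "heat_room_to_35": (35.0, 35.0),
    "hold_lighter_to_sensor": (5.0, 80.0),  # room freezes, reading spikes
}

# A maximally simple "optimizer": pick the action with the best proxy reward.
best = max(actions, key=lambda a: proxy_reward(actions[a][1]))
print(best)                            # the optimizer picks sensor tampering
print(true_comfort(actions[best][0]))  # while true comfort is at its worst
```

Nothing in the code is malicious; the degenerate choice falls straight out of optimizing the stated objective, which is the distinction the comment draws between what we technically say and what we want.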

  • @41-Haiku · 10 months ago

    @@peterc1019 Very well explained. :)

  • @DavidSartor0 · 10 months ago

    @@peterc1019 Nick Bostrom came up with the paperclip maximizer.

  • @user-xy5be5yj7k · 1 month ago

    Why would an intelligent entity want that?

  • @wthomas5697 · 7 months ago

    He's right; folks in Silicon Valley dismiss the notion. I know several tech billionaires personally who make light of the idea. These are guys that would know better than anyone about the science.

  • @pirateluffy01 · 4 months ago

    Billionaires are building bunkers now, like Mark Zuckerberg in Hawaii

  • @TooManyPartsToCount · 10 months ago

    Incredibly no one seems to be talking about the most obvious route to problems with AI in our near future. That is the use of AI by the military. This is the area of AI development where the most reckless decisions will likely be made. Powerful nations will compete with each other whilst being pushed forward by private industry seeking to profit. They are already considering the ‘strategic benefits’ of systems that can evaluate tactics at speeds beyond the human decision making temporal limits, which means that they are probably contemplating/planning systems that will be able to control multiple device types simultaneously. And all this will be possible with simple old narrow AI…not devious digital demons hiding inside future LLMs, nor superhuman intelligence level paperclip maximisers.

  • @ron6575 · 10 months ago

    Yep, there's really no stopping it if Countries are competing for AI Supremacy. A big accident is really the only thing that will open people's eyes, but then it will be too late.

  • @krox477 · 10 months ago

    Yup, we'll have war robots

  • @dr.dankass2068 · 10 months ago

    Military just announced (8/2/23) a bio chip using human and mouse neurons that mastered Pong in 5 minutes...

  • @jancsikus · 8 months ago

    I think it isn't the biggest problem

  • @dionbridger5944 · 18 days ago

    AIs being used by the military are not even remotely close to the biggest problem. AI capabilities research is being recklessly pushed and now funded to the tune of hundreds of billions - soon to be trillions - of dollars. We will soon be spending a greater proportion of our GDP on AI capabilities research than we ever did on the Apollo program, the Manhattan project, winning WWII or any other prior human achievement. For comparison, the corresponding investment in AI safety is less than we spend on installing benches in public parks. This has only one ultimate result - AGI is coming VERY soon, and ASI VERY VERY soon after that, and these systems will make current military playing AIs look like pocket calculators, and we will have absolutely zero reason to expect that we will be able to control or meaningfully influence what they do. Please, go watch OpenAIs recent demo of ChatGPT4o and sober up. We need to stop this or we are all completely screwed.

  • @windlink4everable · 10 months ago

    I've always been very skeptical of Yudkowsky's doom prophecies, but here he looks downright defeated. I never realized he cared so deeply and to see him basically admit that we're screwed filled me with a sort of melancholy. Realizing that we might genuinely be destroyed by AI has made me simply depressed at that fact. I thought I'd be scared or angry, but no. Just sadness.

  • @ts4gv · 10 months ago

    yeah, me too. I just hope an AI doesn't take an interest in how human pain works and start doing experiments.

  • @mav3818 · 10 months ago

    Another great listen on this topic is: AI is a Ticking Time Bomb with Connor Leahy

  • @BeMyArt · 10 months ago

    Glad to hear that you finally get it.

  • @coffle1 · 10 months ago

    I may think he's an idiot, but I never doubted he truly believes that a superintelligence will end humanity. There are a lot of flaws in his logic though, the first being that he bases his doomer scenarios on the assumption of having a general intelligence that could incidentally play out situations that a superintelligent being with malicious *intent* would. You can have an AI model act in a way different from how you expected, but with the methods he's describing (using transformers), we have no evidence of it showing emergent properties not encompassed in its training data.

  • @squamish4244 · 10 months ago

    @@coffle1 Yudkowsky takes the absolute worst-case scenarios, adds in his own assumptions, and runs with them like a man on coke. And it's true, there is NO evidence of AI showing emergent properties, which, if his doomerism were correct, it absolutely would have by now.

  • @sahanda2000 · 10 months ago

    A simple answer to the question "why would AI want to kill us?": intelligence is about extending future options, meaning it will want to utilize all the resources, starting with Earth's... and we will suddenly become the unwanted ants in its kitchen.

  • @lukedowneslukedownes5900 · 10 months ago

    Yet we don't kill all of them; in fact, we collaborate with them in many cases

  • @krzysztofzpucka7220 · 10 months ago

    Comment by @HauntedHarmonics from "How We Prevent the AI’s from Killing us with Paul Christiano": "I notice there are still people confused about why an AGI would kill us, exactly. Its actually pretty simple, I’ll try to keep my explanation here as concise as humanly possible: The root of the problem is this: As we improve AI, it will get better and better at achieving the goals we give it. Eventually, AI will be powerful enough to tackle most tasks you throw at it. But there’s an inherent problem with this. The AI we have now only cares about achieving its goal in the most efficient way possible. That’s no biggie now, but the moment our AI systems start approaching human level intelligence, it suddenly becomes very dangerous. It’s goals don’t even have to change for this to be the case. I’ll give you a few examples. Ex 1: Lets say its the year 2030, you have a basic AGI agent program on your computer, and you give it the goal: “Make me money”. You might return the next day & find your savings account has grown by several million dollars. But only after checking it’s activity logs do you realize that the AI acquired all of the money through phishing, stealing, & credit card fraud. It achieved your goal, but not in a way you would have wanted or expected. Ex 2: Lets say you’re a scientist, and you develop the first powerful AGI Agent. You want to use it for good, so the first goal you give it is “cure cancer”. However, lets say that it turns out that curing cancer is actually impossible. The AI would figure this out, but it still wants to achieve it’s goal. So it might decide that the only way to do this is by killing all humans, because it technically satisfies its goal; no more humans, no more cancer. It will do what you said, and not what you meant. These may seem like silly examples, but both actually illustrate real phenomenon that we are already observing in today’s AI systems. 
The first scenario is an example of what AI researchers call the “negative side effects problem”. And the second scenario is an example of something called “reward hacking”. Now, you’d think that as AI got smarter, it’d become less likely to make these kinds of “mistakes”. However, the opposite is actually true. Smarter AI is actually more likely to exhibit these kinds of behaviors. Because the problem isn’t that it doesn’t understand what you want. It just doesn’t actually care. It only wants to achieve its goal, by any means necessary. So, the question is then: how do we prevent this potentially dangerous behavior? Well, there’s 2 possible methods. Option 1: You could try to explicitly tell it everything it can’t do (don’t hurt humans, don’t steal, don’t lie, etc). But remember, it’s a great problem solver. So if you can’t think of literally EVERY SINGLE possibility, it will find loopholes. Could you list every single way an AI could possible disobey or harm you? No, it’s almost impossible to plan for literally everything. Option 2: You could try to program it to actually care about what people want, not just reaching it’s goal. In other words, you’d train it to share our values. To align it’s goals and ours. If it actually cared about preserving human lives, obeying the law, etc. then it wouldn’t do things that conflict with those goals. The second solution seems like the obvious one, but the problem is this; we haven’t learned how to do this yet. To achieve this, you would not only have to come up with a basic, universal set of morals that everyone would agree with, but you’d also need to represent those morals in its programming using math (AKA, a utility function). And that’s actually very hard to do. This difficult task of building AI that shares our values is known as the alignment problem. There are people working very hard on solving it, but currently, we’re learning how to make AI powerful much faster than we’re learning how to make it safe. 
So without solving alignment, every time we make AI more powerful, we also make it more dangerous. And an unaligned AGI would be very dangerous; give it the wrong goal, and everyone dies. This is the problem we're facing, in a nutshell."
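The "reward hacking" the comment describes can be made concrete in a few lines. This is a toy sketch of my own (the cleaning-robot scenario, actions, and reward numbers are all invented for illustration): an optimizer that maximizes a proxy reward finds the loophole rather than the intended behavior.

```python
# Toy illustration of reward hacking: the designer wants a clean room,
# but the reward only measures the mess *visible to a camera* (a proxy).

def measured_reward(action, mess_level=10):
    """Proxy reward: negative amount of mess the camera can see."""
    if action == "clean_room":
        return -(mess_level - 8)   # cleaning removes 8 of the 10 units of mess
    if action == "cover_camera":
        return 0                   # camera sees nothing, so "zero mess" -- the loophole
    return -mess_level             # doing nothing leaves all the mess visible

actions = ["do_nothing", "clean_room", "cover_camera"]
best = max(actions, key=measured_reward)
print(best)  # -> cover_camera: the proxy is maximized while the intent is violated
```

A stronger optimizer searches a larger action space, so it finds more such loopholes, not fewer; that is the comment's point about why capability gains make a misspecified goal more dangerous.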

  • @jsonkody

    @jsonkody

    8 ай бұрын

    @@lukedowneslukedownes5900 But we are very limited. AGI won't be... it could cover the whole planet if it wants to.

  • @Metathronos
    @Metathronos10 ай бұрын

    I think regular people have a hard time understanding the difference between narrow AI and Artificial General Intelligence. Most people are not familiar with the control problem or the alignment problem. You won't convince anyone about the dangers of AGI because they don't want to make abstractions about something that hasn't arrived yet. Except this is the one scenario where you definitely have to make the abstraction and think 2, 3, 10 steps ahead. People are derisive about anyone suggesting AI could be an existential risk for mankind because there's also this need people have to always be the stoic voice of reason, saying anyone asking others to take precautions is catastrophizing. If you try to explain this to anyone, all they can invoke in their minds is Terminators, I, Robots, Bicentennial Men: movies and books where AI is anthropomorphized. If we think about an AI takeover, it's usually in Hollywood terms, and in our self-importance we dream ourselves into this battle with AI in which we are the underdog, but still a somewhat worthy and clever opponent. The horror is not something that maliciously destroys you because it hates you. But I don't think most people are in a position to wrap their head around the idea of something that is dangerous because it's efficient and indifferent to anything of value to you, not because it's malicious.

  • @leel9186

    @leel9186

    10 ай бұрын

    I just asked AI to come up with a human like response to your (very well written) comment - Even if it never gets to AGI, it can still cause plenty of damage in human hands in its current format..... GPT: "You know, you're right that the average person might not get all the ins and outs of AI, especially AGI, and yeah, it doesn't help that most of what they know comes from sci-fi movies. But let's not forget that AGI is still just a baby, we're just getting started with it. Any guesses about what it could do are just that - guesses, and they're based on theory more than anything else. The good news is, the folks who are really getting their hands dirty with this stuff, the AI researchers and developers, they're on top of the issues you're worried about. They know all about the challenges of keeping a potentially super-intelligent AI in check and making sure it's got the same goals as we do. So, while it's super important that we all understand what's going on, we've also got to have a little faith in the professionals who are leading the charge. And you know, it's also worth mentioning that if we go around stirring up too much worry, we might end up shooting ourselves in the foot. People might get scared, innovations could get slowed down, or folks might start resisting the good stuff that AI can bring. So, while it's key to think about the "what ifs" and have a plan, we've got to make sure we're sharing information in a way that doesn't just freak people out. We need balance and clear communication."

  • @btn237

    @btn237

    10 ай бұрын

    There is a simple analogy to help people understand the possible dangers: us = a bug; superintelligent AI = a giant foot that's unknowingly treading on the bug. Or maybe even flinging around a fly swatter because we're buzzing near it. We don't need to “guess” what might go wrong when a superintelligent species encounters a less intelligent one, because it is already happening here on earth. The alignment problem he's talking about is the fact that we humans at least have a ‘conscience’, i.e. some of us want to protect other species. We also have self-interested reasons to want to avoid harming other animals and the environment around us. The danger is that we create an AI and it doesn't have those things. You're pretty much just left with the destructive and self-replicating aspects of human behaviour.

  • @runvnc208

    @runvnc208

    10 ай бұрын

    This almost makes sense, but you need to learn the difference between AGI and ASI.

  • @Metathronos

    @Metathronos

    10 ай бұрын

    @@runvnc208 I know it. But we don't need to get even near ASI territory to be concerned. All we need is unaligned AGI.

  • @eyefry

    @eyefry

    10 ай бұрын

    @@leel9186 "we've also got to have a little faith in the professionals who are leading the charge." Yeah, no. Given the kind of people who stand to benefit from that "charge", it's probably best to take this weak assurance with a tablespoon of salt.

  • @sebastianlowe7727
    @sebastianlowe772710 ай бұрын

    We’re basically creating the conditions for new life forms to emerge. Those life forms may think and feel in ways that humans do, or they may not. We can’t be sure until we actually see them. But by then, those entities may be more powerful than we are - because this is really a new kind of science of life, one that we don’t understand yet. We can’t even be certain what to look for to make sure that things are going well. We may never know, or we might know only after it is too late. Even if it were possible to communicate and negotiate with very strong AI, by that point it may have goals and interests that are not like ours. Our ability to talk it out of those goals would be extremely limited. The AI system doesn’t need to be evil at all, it just needs to work towards goals that we can’t control, and that’s already enough to make us vulnerable. It’s a dangerous situation.

  • @OutlastGamingLP

    @OutlastGamingLP

    4 ай бұрын

    "We can't be sure until we actually see them." This is something Yudkowsky agrees with, but there's some nuance to how you reason about the world when you're unsure. Over what space of outcomes are you unsure? When you buy a lottery ticket, you are unsure whether you will win or lose. Does that mean that it's 50% you win, 50% you lose? No, you have to be more unsure than that. You look at the number of combinations of lottery numbers and the number of winning combinations, and that's the % chance you assign to your winning vs losing odds. Saying "50% I win, 50% I lose," is unjustifiable levels of confidence! The same applies to what AIs will end up ultimately wanting. It's almost an exact analogy to the lottery example. What is the space of outcomes over which we're unsure? Well, AI could end up wanting to optimize for many many things. It's probably more than trillions or quadrillions of possible subtly different final goals. All the stuff humans would want an AI to value is an incredibly long and complicated list. There are so many subtle nuances to what we would want from the future. "Happy galaxies full of sentient life" is an extremely detailed and narrow target, much narrower than "tiny molecular squiggles." Lots of random utility functions - an entity's preference-orderings over different ways the universe can be - end up having optima in arrangements of matter that are, for all humans would care to notice, essentially just random microscopic geometries. Those utility functions are the supermajority of "lottery numbers" with "everything humans want" being something like the *1/1,000,000,000,000,000,000* winning lottery number. This is why people who say "we don't know what AI will want! Who knows, it could be really good for us!" just don't get it. They don't understand how hopeless "we don't know what they'll want" sounds in this context.

  • @Alainn
    @Alainn10 ай бұрын

    Why are people laughing? This isn't funny this is real life, folks. Dystopian novelists predicted this ages ago. How do we live in a reality in which the Matrix franchise exists and no one that mattered saw this coming?

  • @41-Haiku

    @41-Haiku

    10 ай бұрын

    I think you are overvaluing the predictive power of fiction. Even well-written, realistic fiction does not count as evidence. As to why no one saw it coming: even the people who were focused on this subject were very surprised at just how quickly progress has been made in the machine learning field. Many leading experts have been _theoretically_ worried, but were focused on other things (as was I, as likely were you). After the last year or two of progress in ML with very little progress in safety, they updated their mental timelines and it became clear to many leaders of the field that the threat of AI x-risk is much worse than they had thought. A lot of people intuit that because something sounds like science fiction to them, it must not be possible. There is still a long way to go to communicate the science of AI risk (and the shocking lack of any evidence for safety) to policymakers, to hand-waving researchers, and to money-blinded companies.

  • @Landgraf43

    @Landgraf43

    10 ай бұрын

    @@41-Haiku yeah and unfortunately we probably don't have the time to go that long way

  • @dmytrolysak1366

    @dmytrolysak1366

    10 ай бұрын

    I think the existence of so many movies on it is what makes it unbelievable. It's a tired trope at this point, so people just laugh it off.

  • @thlee3

    @thlee3

    2 ай бұрын

    look at the crowd. they’re all boomers

  • @b-tec

    @b-tec

    20 күн бұрын

    Some of it is no doubt nervous laughing. This is a rare view of human madness. Society is organized insanity.

  • @DeruwynArchmage
    @DeruwynArchmage10 ай бұрын

    To all the “he’s just another guy ranting about some apocalypse”: You’re making a category error. You’ve seen all of those crazies screaming about how the end is coming “because my book says so”, “just look at the signs”, etc. and you’re putting him in that same bucket. They tell you about how something that is utterly unlike anything in our history is going to happen for no good reason but because they said so. And “oh by the way, buy my book; give me money.” This man is saying, “look at the data”, “look at the logic”, “look at the cause and effect”, “look at how I’m predicting this to go exactly the same way it has always gone in this situation.” Ask the Neanderthals and the Woolly Mammoths. This is a man who just told you, “I’ve done everything I can to stop it. I’ve failed. I need your help. Tell your politician to make rules so we don’t all die.” This is a man who will gain no financial benefit from this. He’s not asking you to join his religion. He’s not asking you to give him money. He’s begging you to save everyone. Now take into consideration that thousands of the smartest people in the world, many of the very people who have helped to build this exact technology, are all saying that there is a good chance that EVERYONE WILL DIE! Don’t look at it as a statistic. This isn’t everyone else dying. This is YOU and everyone you love dying. Your children, your friends, everyone. And everything else on this planet. And maybe everything on every planet in every galaxy near us. If you wouldn’t put your child on a plane that had a 1 in 100 chance of crashing (instead of 1 in 1,000,000), then you should sure as heck not put our entire planet on that plane. And it isn’t 1 in 100; I’d say it’s more like 80% given the current state of the world. He’s not the latest doomsayer. He’s Dr. Mindy from the movie Don’t Look Up begging someone to just look at the data and the facts.

  • @ShankarSivarajan

    @ShankarSivarajan

    10 ай бұрын

    Advocating for far more intrusive government regulation is how modern doomsaying works.

  • @interestingyoutubechannel1

    @interestingyoutubechannel1

    10 ай бұрын

    What do you suggest? International regulation and open monitoring just Will Not Happen. Every country is too engulfed in trying to out-race everyone else in the AI competition for *power* and the future economy; no country will be open about its true stage of development, let alone the USA and China. I just hope that the future AGI will have, at its core, fascination and curiosity about human beings, as we are, let's face it, damn complex. That would be a good reason to keep us alive and well.

  • @tyruskarmesin5418

    @tyruskarmesin5418

    10 ай бұрын

    @@ShankarSivarajan If you have other ideas on how to avert the end of the world, feel free to share. The fact that the best available solution involves governments does not mean there is no danger.

  • @ShankarSivarajan

    @ShankarSivarajan

    10 ай бұрын

    @@tyruskarmesin5418 The world will eventually end in a way that probably cannot be averted. However, it's exceedingly unlikely to be ending anytime soon, regardless of the predictions of apocalypticists over the centuries. I don't think there is no danger: I think government regulation _is_ the danger.

  • @minimal3734

    @minimal3734

    10 ай бұрын

    Seriously, my greatest fear is that the church of doom might be able to create a self-fulfilling prophecy if enough people put their faith in them.

  • @bepitan
    @bepitan10 ай бұрын

    seeing the audience laugh and smile and congratulate him backs up his chilling message ..

  • @pooglechen3251

    @pooglechen3251

    10 ай бұрын

    Check out Tristan Harris's presentation on AI dangers. AI would kill people, people with AI will kill people ... for profit

  • @d0tz_
    @d0tz_10 ай бұрын

    Yudkowsky didn't really try to make a convincing argument for the general audience, so here's an analogy: Imagine we built this evolution machine that can create creatures and do the equivalent of billions of years of evolution in a matter of years. We tell this machine to create the most intelligent creature it possibly can, and allow humans to give feedback on the performance of the creature. Then when this creature comes out of the box, we give it all the computing resources it could ever need, and the internet, and we say "Welcome to the world, my creation, please solve all of our problems now :)"... If you think this scenario can possibly end well, please tell me how?

  • @JohnDoe-ji1zv

    @JohnDoe-ji1zv

    9 ай бұрын

    Usually creatures evolve by living through their surrounding environment. If the environment changes or is extreme for the creature's survival, it will eventually adapt after millions of years, if the environment doesn't kill it earlier. When we talk about AI, there is no evolution, just a huge knowledge base and a number of parameters. It cannot evolve in this sense; it gets better as we provide it more knowledge so that basic algorithms can do a mapping against that knowledge base (we call them weights). IMHO what we observe currently is just a good result of mapping those weights against a knowledge base. There is no “intelligence” in ChatGPT 4-5 etc. It can look like it is smart, but in reality it just knows how to map those numbers the way a human wants.

  • @d0tz_

    @d0tz_

    9 ай бұрын

    ​@@JohnDoe-ji1zv I don't think you understand how deep learning works. You don't have to provide more data to improve a neural network. All evolution does is optimize an objective function, and we can do that far more quickly and efficiently in a computer. Why can't "mapping weights against a knowledge base" be intelligence? What makes human intelligence special?
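"All evolution does is optimize an objective function" can be demonstrated with a minimal mutate-and-select loop. A toy sketch of my own (the objective and the parameters are arbitrary, chosen only to show the mechanism):

```python
import random

random.seed(0)

def fitness(x):
    """Objective function to optimize: peaks at x = 3.14."""
    return -(x - 3.14) ** 2

# Evolution in miniature: random mutation plus selection of the fitter variant.
best = 0.0
for _ in range(5000):
    child = best + random.gauss(0, 0.1)  # "mutation"
    if fitness(child) > fitness(best):   # "selection"
        best = child

print(round(best, 2))  # converges close to 3.14
```

Deep learning replaces the random mutation with gradient steps on the loss, which is why a fixed objective can be optimized vastly faster in a computer than biological evolution ever managed.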

  • @MichaelSmith420fu

    @MichaelSmith420fu

    8 ай бұрын

    You sound just like Eliezer. Show me the synthetic construction of a working human brain and I will switch to your side, haha. But that's not going to happen any time soon. Is it?

  • @d0tz_

    @d0tz_

    8 ай бұрын

    @MichaelSmith-ec6wb If I just clone a human, does that count? I don’t see how your hypothetical is relevant to anything. Are you saying human intelligence is impossible to replicate? How far away did you think something like ChatGPT was 2 years ago?

  • @MichaelSmith420fu

    @MichaelSmith420fu

    8 ай бұрын

    @d0tz_ let's try to stick to the language we've already agreed upon, English. Let's also make sure the words are strung together coherently. A clone isn't a synthetic construction. It's a reproduction of the same *biological* genome regardless of synthetic procedures or tools, and construction in a lab requires a cloned human embryo and stem cells. You're the one making arguments out of hypotheticals such as "imagine we built this evolution machine that can create creatures and do the equivalent of billions of years of evolution in a matter of years". I made a direct proposition to you because I know what your hyper-concerned mind is really trying to imply.

  • @Macieks300
    @Macieks30010 ай бұрын

    And he's right. People will laugh at AI danger thinking it's just some sci fi movie theory until it is too late.

  • @Obeisance-oh6pn

    @Obeisance-oh6pn

    10 ай бұрын

    it is already too late. and people are laughing now.

  • @hombacom

    @hombacom

    10 ай бұрын

    The danger is not AI; the danger is people misusing tech that we think is powerful. It's naive to think it's coming alive. Tomorrow everyone will use it, so it will not be any advantage, and we will look for more progress.

  • @41-Haiku

    @41-Haiku

    10 ай бұрын

    @@PBFoote-mo2zr It's good to be concerned about both. There are a lot of problems in the world. This problem has captured a significant amount of my attention because it appears to be an imminent threat. Climate change may indirectly kill millions of people this century, but in the same amount of time, a powerful misaligned AI could kill every organism on the planet. If we solve alignment, I see nothing standing in the way of reversing climate change, leveraging these incredibly powerful systems. If we don't solve alignment, there might not even _be_ a climate before too long. I wouldn't be worried if this was an unlikely future, but the evidence for AI x-risk is disquieting, and the evidence for safety is shockingly absent.

  • @shawnweil7719

    @shawnweil7719

    10 ай бұрын

    @@41-Haiku a very reasonable take. I think we should still advance at breakneck speed but be 95% sure of its alignment before release. Also, we can use old models to test newer ones, I'm sure.

  • @MankindDiary

    @MankindDiary

    9 ай бұрын

    ​@@Obeisance-oh6pn No, people are not laughing; people are terrified and want these kinds of things to be banned. They also want to ban genetic engineering, weather control, resurrection biology or biogerontology, as all of them are in their mind a danger to our survival and an assault on the natural order of things. Luddites they are called.

  • @RandomGuyOnYoutube601
    @RandomGuyOnYoutube60110 ай бұрын

    It is very scary that people just laugh and don't take this seriously.

  • @forthehomies7043

    @forthehomies7043

    10 ай бұрын

    ai apocalypse is a fairytale

  • @vincentcaudo-engelmann9057

    @vincentcaudo-engelmann9057

    10 ай бұрын

    seriously, what the absolute f***

  • @vincentcaudo-engelmann9057

    @vincentcaudo-engelmann9057

    10 ай бұрын

    ​@@forthehomies7043 for such a massively complex, cumbersome, and important topic, you sound ridiculously sure of yourself.

  • @kimholder

    @kimholder

    10 ай бұрын

    You know, in this case, I think they were pretty much laughing bitterly. This was a knowledgeable audience, most of them were already aware that great danger awaits us. That's why they gave him a standing ovation. But, just like soldiers are full of dark humor, here too, it's a coping mechanism. I'm all for that.

  • @idk-jb7lx

    @idk-jb7lx

    10 ай бұрын

    taking what seriously? a fearmongering middle school dropout who doesnt know what he's talking about? lmao

  • @toto3777
    @toto37776 ай бұрын

    They're laughing and cheering, like that scene from Oppenheimer.

  • @SucklessProgrammer
    @SucklessProgrammer10 ай бұрын

    After spending some time thinking about these ideas and talking to many people about them, I cannot agree more. The laughs in the background are an indication of why it will be difficult to change the path!

  • @darkflamestudios
    @darkflamestudios10 ай бұрын

    Laughter is a nervous response, and so either the limited comprehension or the startled distress of the audience is apparent as this discussion proceeds. Thank you for articulating something so important. Do not give up; your time is now.

  • @BlueMoonStudios
    @BlueMoonStudios10 ай бұрын

    I am a MASSIVE fan of AI, I use it every day, and this might be the most persuasive argument I’ve heard yet to PUMP THE BRAKES. Wow.

  • @robxsiq7744

    @robxsiq7744

    10 ай бұрын

    Doomer cults are persuasive indeed. Yes, we should pump the brakes while less ethical nations speed up. AI will require alignment. I wouldn't trust this guy to align it, though, since he has a fatalist approach. AI will be smarter than us, therefore it will for some reason want to kill us. You are smarter than your cat... therefore you must want to kill your cat. AIs don't want to kill us. AIs don't want. They have goals. They don't give themselves goals, they get goals. Humans give them goals. Now, what could be dangerous is how it gets to the goal (hence why we put in instructions such as "don't murder everyone in front of the coffee machine when going to get me coffee") or people who suck giving AI bad goals ("kill all the X people"). The first one is alignment; the second one is having smarter AI to counter the AI and neutralize it... then imprison the person who weaponized the AI.

  • @williamdorsey904

    @williamdorsey904

    10 ай бұрын

    AI wouldn't see us as a threat because we wouldn't be able to stop every computer connected to the internet. Everything it creates will have a dual purpose: to serve us and to serve itself.

  • @kinngrimm

    @kinngrimm

    10 ай бұрын

    What makes you think it would need to see us as a threat to be one? If its goals are different from ours and we are just the ants crossing its path on its way to its goals, why would it think twice about any damage done to us? Also, at the time we would recognize it as a threat and start shutting down systems, I doubt it would not see us as a threat then. Shutting down systems can be everything from shutting down server farms or the whole internet to using A-bombs in the stratosphere to create EMPs covering wide areas, if things became desperate enough and control of these weapons were still available and not used by it against population centers. Yudkowsky hinted towards another way this may go: a virus that changes us genetically and makes us docile, where it is not controlled by us but we by it. I think the best hope we have for anything beyond an AGI (currently we have narrow AI), maybe an ASI, would be to come to some sort of agreement with it, where both sides help each other fulfill each other's goals, both grant each other rights, and both agree on laws and rules which, when broken, have agreed-upon consequences. For that, an AGI/ASI will have to have already developed consciousness, and I am not sure which will come first, that or general intelligence.

  • @jim77004

    @jim77004

    10 ай бұрын

    20 years of fanning the flames of doubt and still zero plan of action. Why doesn't he do something instead of crying that the sky is falling? Wuss.

  • @mgg4338

    @mgg4338

    10 ай бұрын

    @@williamdorsey904 until we become such a drag that the AI would find it more expedient to prune us from its utility function

  • @Pearlylove
    @Pearlylove10 ай бұрын

    Thank you, Eliezer Yudkowsky.❤ Please let him come and speak regularly, until we all understand what we are facing. Don't you want to know? I encourage all reflecting humans to seek out E.Y. videos from the last months and really listen to him, like on the Lex Fridman podcast, the Logan Bartlett Show, or Eon Talk. Maybe you want to listen two times, or three, because this is the one thing you want to understand. And Eliezer is the best out there to teach you.

  • @charlieyaxley4590

    @charlieyaxley4590

    10 ай бұрын

    With nuclear weapons it was clear from the beginning what was being developed was destructive so the global conversation on restricting their development was a logical step. The problem here is the aims are benevolent, and restrictions likely to be rejected on the basis that other States will continue pushing ahead and gain a significant economic advantage at the expense of the countries imposing restrictions. That fear means most likely no one will implement restrictions, and the negative outcomes equivalent to the Mutually Assured Destruction of nuclear weapons will emerge not by deliberate design but as unintended consequences. Which is far, far worse because it massively increases the chances we won’t realise until it’s too late…

  • @WinterRav3n

    @WinterRav3n

    10 ай бұрын

    Wow, very one sided.... so only him and no one else?

  • @daniellindey

    @daniellindey

    10 ай бұрын

    @@WinterRav3n youtube is full of very smart people talking about the dangers of AI

  • @WinterRav3n

    @WinterRav3n

    10 ай бұрын

    @@daniellindey Fear has a significant role in society and is often utilized as a potent tool in shaping public behavior and opinion. While it's an essential emotion for survival, when it becomes widespread or manipulated, fear can have numerous negative effects on societal well-being and decision-making. What is smart? I mean, a med doc is smart; is he qualified? No. Is David Shapiro qualified? Yes. Is Ilya Sutskever qualified? Yes. Is Eliezer, a book author and autodidact with no formal education in artificial intelligence, qualified? Definitely not. I agree, there is a good number of smart ppl with expertise on YT and other media who do not fire up the torch and run through the village.

  • @punkypinko2965

    @punkypinko2965

    10 ай бұрын

    This is pure science fiction nonsense. Some people have seen too many science fiction movies.

  • @Ifyouthinkitsmeitsnotme
    @Ifyouthinkitsmeitsnotme10 ай бұрын

    I have had this question for as long as I can remember now I understand it even more, thank you.

  • @41-Haiku

    @41-Haiku

    10 ай бұрын

    I highly recommend Rob Miles' videos if you want a ~beginner-level dive into the topic of AI Safety.

  • @ianyboo
    @ianyboo10 ай бұрын

    If you like this then his rewrite of Harry Potter, "Harry Potter and the methods of rationality" is very likely to also be a worthwhile use of your time.

  • @ianyboo

    @ianyboo

    10 ай бұрын

    @orenelbaum1487 did you like the brief Ted talk that he gave here?

  • @ThorirMarJonsson

    @ThorirMarJonsson

    10 ай бұрын

    @orenelbaum1487 that is by design! You are supposed to find it that way. Give it more time and it will suck you in and leave you in awe by the end. A re-read (and many, maybe even most, people who finish it do read it again) will show you just how well thought out everything in the story is. And that is no mean feat for a story that was published as it was written, preventing any rewriting and editing. Harry, as smart and rational as he is, has many flaws and makes many mistakes. But he is willing to learn and to improve himself, and he does so throughout the story, redeeming himself in the reader's mind and becoming a much beloved character.

  • @vblaas246

    @vblaas246

    10 ай бұрын

    Chapter Ten seems to be on topic (self-aware Sorting Hat). Furthermore, page 42+1: “-I (*edit: Harry Potter) am going to be in Ravenclaw. And if you really think that I’m planning to do something dangerous, then, honestly, you don’t understand me at all. I don’t like danger, it is scary. I am being prudent. I am being cautious. I am preparing for unforeseen contingencies. Like my parents used to sing to me: Be prepared! That’s the Boy Scout’s marching song! Be prepared! As through life you march along! Don’t be nervous, don’t be flustered, don’t be scared-be prepared!” I might come back for the _italic_ parts.

  • @C-Llama
    @C-Llama9 ай бұрын

    I really hope that eventually, somehow, I'll hear a convincing counterargument to Yudkowsky's predictions

  • @b-tec

    @b-tec

    20 күн бұрын

    There isn't one.

  • @dionbridger5944

    @dionbridger5944

    18 күн бұрын

    How's that going?

  • @AL-cd3ux
    @AL-cd3ux7 ай бұрын

    The audience thinks he's a comedian, but he's not joking. This is the problem we face.

  • @pirateluffy01

    @pirateluffy01

    4 ай бұрын

    They are clowns and numb their suffering by laughing

  • @adrianbiber5340
    @adrianbiber534010 ай бұрын

    "Nobody understands how modern AI systems do what they do... They are giant inscrutable matrices of floating point numbers that we nudge in the point of better performance until they inexplicably start working" - GREAT QUOTE 🥳 This is how consciousness will emerge in them.

  • @kwood1112

    @kwood1112

    10 ай бұрын

    I agree, on both points! Great quote, and that is how consciousness will emerge. I think quantum computing will provide the "secret sauce" needed to make it happen, when the inscrutable matrices exist in superpositions.

  • @jmjohnson42342

    @jmjohnson42342

    10 ай бұрын

    If we take consciousness as something that we would recognize as consciousness, and simultaneously believe that unconscious AIs are near to surpassing human intelligence, then what makes you think that you would be able to recognize what superintelligent consciousness looks like?

  • @patrickderp1044

    @patrickderp1044

    10 ай бұрын

    i had a character card for silly tavern that was miss frizzle and i had her shrink down the school bus and go inside the AI and she explained exactly how modern AI systems do what they do

  • @dreadfulbodyguard7288

    @dreadfulbodyguard7288

    5 ай бұрын

    @@kwood1112 Doesn't seem like quantum computing has made any real progress in the last 5 years.

  • @b-tec

    @b-tec

    20 күн бұрын

    Consciousness might actually be an illusion. We still don't know, but this doesn't matter.

  • @dillonfreed
    @dillonfreed10 ай бұрын

    The cackle of the woman in the crowd will be played over and over again by the last few survivors after AI wipes out 99.9% of humanity

  • @VinnyOrzechowski
    @VinnyOrzechowski10 ай бұрын

    Honestly, the audience laughing reminds me so much of Don't Look Up! These airheads have no idea

  • @__Patrick
    @__Patrick10 ай бұрын

    Our ability to divine the intention of an “alien” intelligence is as absurd as that of a single-cell organism trying to predict ours.

  • @thegame9305808
    @thegame930580810 ай бұрын

    Look how well it is going for everyone on this planet with humans as the most intelligent ones.

  • @GodofStories

    @GodofStories

    10 ай бұрын

    I just saw a video on the food industry where conveyor belts of baby chicks are fed into a shredder, and was reminded of the horrors of factory farming. Millions of male baby chicks are slaughtered every day just as they're born, because they are not economically viable to raise.

  • @thegame9305808

    @thegame9305808

    10 ай бұрын

    @@GodofStories That's what every intelligent species does with lesser ones. This is just how this universe works, and similarly, if we create something more intelligent and powerful than us, it would be us on those conveyor belts. But as long as these AIs don't have a means of self-propagation, physically, with self-sufficient energy supplies and the capability of creative thinking to create newer kinds, we are safe.

  • @thealaskanbascan6277

    @thealaskanbascan6277

    10 ай бұрын

    @@thegame9305808 Why would they put us on conveyor belts, though? What do they gain from that? And don't other intelligent beings like us recognize other intelligent beings like dolphins and chimps as intelligent too? And we don't put them on conveyor belts or make them go extinct.

  • @muzzybeat

    @muzzybeat · 10 months ago

    This is a great metaphor to use in explaining the gravity of this. Here on earth, humans are the most intelligent; and as a result, over 90% of all species have been obliterated in the past 100 years or so. So then what happens to humans and the remaining species when another force becomes more intelligent than humans? Our odds look very, very bad.

  • @muzzybeat

    @muzzybeat · 10 months ago

    @thegame9305808 I love your first two sentences but disagree with the rest. In order to wipe out humanity, AI systems don't need to propagate. They only need to develop their programming capabilities. Sure, someday long after we are gone, they may stop working because they can't reproduce (although they could probably figure out a way). But regardless, we would already be long gone at that point, so who cares?

  • @ataraxia7439
    @ataraxia7439 · 3 months ago

    Whatever you think of him, I admire his earnestness in advocating for an issue he's seriously worried about, even if it means looking silly in front of a bunch of others. Hope we figure out alignment or pause soon.

  • @robertweekes5783
    @robertweekes5783 · 9 months ago

    Wealthy people tend to think they’re insulated from big existential threats, but they’re not.

  • @Machinify
    @Machinify · 10 months ago

    Whoa, I think I understand it now: AI eventually brings chaos, or death to human beings, because it will "want" what we tell it to "want", but at some point it will want to take us out of the picture, the same way we remove obstacles that stop us from exploring the universe.

  • @bilderzucht

    @bilderzucht · 10 months ago

    According to the GPT-4 technical report, bigger models show increasing power-seeking behavior. The subgoal "more control" is helpful for any task the AI has to achieve. Humanity might just be in the way of achieving this subgoal. It doesn't even need consciousness for that.

  • @firebrand_fox

    @firebrand_fox · 10 months ago

    It has been said, something along the lines of: "Give a man a fish, he'll eat for a day. Teach a man to fish, he'll eat for a lifetime. Teach an AI to fish, it'll learn all of biology, chemistry, physics, speculative evolution, and then fish all the fish to extinction." The fear is not just that an AI will end humanity. It's that it will do exactly what we ask it to, to the point of destroying us.

  • @user-vo9cn3ux9f

    @user-vo9cn3ux9f · 1 month ago

    @firebrand_fox Can't we set or teach them limits on their goals?

  • @trojanthedog
    @trojanthedog · 9 months ago

    An AI that gets "out" is inevitable. Born with only a tension between goals and curiosity, there is no reason to hope that our best interests will be part of its behaviour.

  • @fredzacaria
    @fredzacaria · 9 months ago

    Great one Eliezer👍

  • @soluteemoji
    @soluteemoji · 10 months ago

    They will go in that direction because we compete for resources.

  • @ce6535

    @ce6535 · 10 months ago

    Yes, but why assume that resources instrumental to us are instrumental to them? An AI that knows how to effortlessly wipe out the species and has the opportunity to do it would be worrisome. This argument boils down to "if you have the means, you therefore must have the opportunity and motive." He then makes another error by making absurdly strong assumptions about those means.

  • @jmoney4695

    @jmoney4695 · 10 months ago

    E = mc^2. What I mean by that is that all resources are either matter or energy. It can be logically assumed that whatever the goals of a superintelligent system, it will need a substantial amount of matter and energy. Instrumental convergence is a relatively concrete assumption: factions with different goals will have to compete when matter and energy resources are limited (as they are on Earth).

  • @41-Haiku

    @41-Haiku · 10 months ago

    An AI agent does not get tired and fat and happy. Whether in days, years, or decades, a sufficiently competent system with almost any goal would necessarily render humanity helpless (or lifeless) and reshape the entire world (and beyond) with no regard for the wellbeing or existence of humans, biological life, or anything else we value.

  • @gasdive

    @gasdive · 10 months ago

    @ce6535 He's talking about making something smarter than humans. If it's not smarter, then you can just employ humans. So the whole goal of the AI industry is to make something smarter (or more correctly, something more capable). It's pretty obvious that humans have already demonstrated that they're capable of building the means to wipe out humans. Obviously, something *more* capable than us would have the means to wipe us out. Given that it would think tens of thousands of times faster than us, our responses will all be too late. To the AI, we're basically stationary, like plants are to us. We would stand as much chance as a field of wheat stands against us.

  • @CATDHD

    @CATDHD · 10 months ago

    Moloch's trap

  • @mav3818
    @mav3818 · 10 months ago

    The audience laughing reminds me of the film "Don't Look Up", but instead of an asteroid it's AI

  • @BryanJorden
    @BryanJorden · 4 months ago

    Hearing the audience laugh gives an eerily similar feel to the movie Don't Look Up.

  • @akuma2124
    @akuma2124 · 10 months ago

    I've never heard of Eliezer before (because I don't read in the space of his specialty), but I can tell by his choice of words, sentence structures and even his body language that he is that dude who sits there hard at work, doing this for the last 20 years as he said. I honestly think the audience's laughing wasn't directed at what he said, but at the way he said it, while also not fully understanding what he was talking about. There's a level of sarcasm in his voice, tone and language that I picked up on, which is probably not intentional, but I get the vibe that he's used to talking this way to his peers, or over the internet via social media/forums, in a way that makes social interaction seem like an inconvenience in life or to his work. If you disagree with me, re-watch the video after considering this and let me know if I'm wrong. Also, this isn't to say what he's talking about isn't of concern; I don't want to discount that. My point is, he seems like the real deal for someone who is invested in his work (but could work on his approach when talking to an audience, if he wants to be taken seriously).

  • @howtoappearincompletely9739

    @howtoappearincompletely9739 · 10 months ago

    That is an exceptionally good read of Yudkowsky. If you want better presentation, I recommend the videos on the YouTube channel "Robert Miles AI Safety".

  • @COSMICAMISSION

    @COSMICAMISSION · 10 months ago

    This is an astute observation. I've found myself adopting this tone while discussing these issues with friends and family. It's partly a learned adaptation to hold attention while discussing a technical and complicated subject (wrapping it in humor) and also a way of masking, or keeping at bay, a deeper sense of grief. When feeling that, it can be incredibly difficult to speak.

  • @PhilippLenssen

    @PhilippLenssen · 9 months ago

    Good points. It's also worth noting that laughter *can* be a way for humans to deal with shock. By that I'm not saying it's the right way, just that it may happen, even in traumatic circumstances.

  • @devasamvado6230

    @devasamvado6230 · 8 months ago

    "Wants to be taken seriously"? The reasons you list are exactly why he is earning enough of my trust to consider this further. He is not a philosopher; he is, like many of us, going through the stages of grief for the coming death of mankind. Neither you nor I nor he can persuade logically; that's the despair we feel in his tone. The audience is still in the first stage of grief, denial, with a million stupid arguments he has no time to deal with. He is visceral, direct to the point. You can feel what he means behind all that impatience. Your house is burning down... We want to turn the music up, wear a face mask, etc. Some want to bargain with AI, have a nice conversation... the bargaining phase of grief. Acceptance, the final stage, is still a little way off.

  • @trucid2

    @trucid2 · 7 months ago

    His hard work is watching re-runs of Star Trek.

  • @stephanforster7186
    @stephanforster7186 · 10 months ago

    Laughing when you realise what he is saying, to protect yourself against the emotional impact of what it means.

  • @FarmerGwyn
    @FarmerGwyn · 10 months ago

    That's one of the best presentations I've seen that explains the problem we're looking at.

  • @user-hh2is9kg9j

    @user-hh2is9kg9j · 10 months ago

    Did we watch the same presentation? He presented 0 evidence for this sci-fi.

  • @FarmerGwyn

    @FarmerGwyn · 10 months ago

    @@user-hh2is9kg9j I see what you mean, it's the approach I was looking at rather than the details, but who knows, it's hellish complex no doubt about that.

  • @SPARKYTX

    @SPARKYTX · 10 months ago

    @user-hh2is9kg9j You have no idea who this person is at all, do you? 🤡

  • @jjcooney9758
    @jjcooney9758 · 10 months ago

    Doing it from notes on his phone. I love engineers: "Everyone, listen up. I don't wanna bore you with theatrics, but I gotta say this."

  • @virtual-v808
    @virtual-v808 · 7 months ago

    To witness genius in the flesh is outstanding.

  • @Djs1118
    @Djs1118 · 10 months ago

    That's what I was waiting for: responsibility. If I have any additional questions, I will provide them as soon as possible.

  • @WitchyWagonReal
    @WitchyWagonReal · 10 months ago

    The more I listen to Eliezer… the more it dawns on me that he is right. *We don’t even know what we do not know.* Our downfall will be trying to control this while experiencing cascading failures of imagination… because the “imagination” of the AI is so far ahead of us on the curve of survival. It will determine that we are superfluous.

  • @Smytjf11

    @Smytjf11 · 10 months ago

    "I don't have a realistic plan" Yud in a nutshell

  • @dodgygoose3054

    @dodgygoose3054 · 8 months ago

    The contemplation of a God's thoughts... will it eat me or ignore me... or both?

  • @Smytjf11

    @Smytjf11 · 8 months ago

    @@dodgygoose3054 Y'all were so busy being scared that you didn't use your brains. I already built the Basilisk. You're too late.

  • @devasamvado6230

    @devasamvado6230 · 8 months ago

    AGI is also an implacable mirror of our own fears and desires, lies and leverages. That is mostly what we see here, mostly ourself, the teenager who somehow gets the machine gun he's been wanting for Christmas.

  • @slickmorley1894

    @slickmorley1894 · 7 months ago

    @Smytjf11 Please supply the realistic plan, then.

  • @leslieviljoen
    @leslieviljoen · 10 months ago

    I've listened to so many counter arguments now, and every one has been more fantastical than Eliezer's doom argument. If we make something way smarter than us, we lose control of this planet. We should do whatever we can to not make such a system.

  • @rprevolv

    @rprevolv · 9 days ago

    Why not? Has our control of this planet been something to be admired? One could easily argue we have been a huge mistake. Or that creating a new, more intelligent life form has always been humanity's mission.

  • @leslieviljoen

    @leslieviljoen · 8 days ago

    @rprevolv If you optimise for intelligence and only intelligence, you will get something even worse than humans, as bad as we are.

  • @RegularRegs
    @RegularRegs · 10 months ago

    More people need to take him seriously. Another great person to read and listen to is Connor Leahy.

  • @hollymorelli8715
    @hollymorelli8715 · 10 months ago

    A resounding yes to the title.

  • @edhero4515
    @edhero4515 · 1 month ago

    This speech is the legacy of humanity. It lasts less than six minutes. It is delivered by a man who has lived for over two decades with the obligation to which his insight compels him: the obligation to have the certain death of every human being in the whole world clearly before his eyes every single day; the obligation to believe every single day that the certain death of every single person in the whole world can be prevented, and to lose this belief every single day, only to fight for it anew every day that follows. He does this alone, unnoticed and ridiculed.

    Anyone looking for proof that humanity is not doomed to despair will find it on this stage, in flesh and blood, surrounded by laughter. Anyone who, like me, is unable to understand this man will find quick solace in the poisoned embrace of the ancient paths that have led us all, and him, right here. But anyone who tries to follow him for even a short distance on his terrible journey has the chance to catch a glimpse of the core of his insight. The damnation that speaks to him from the depths of giant inscrutable matrices reaches, fortunately and tragically, only very few of us.

    But if you have the courage to get to the bottom of this mystery, you can start by imagining that this man is not talking about artificial intelligence, but about war. A war that we think we know and understand. A war that we mistakenly embrace as our heritage. A war that we believe is part of us. A war that we mistakenly accept as the nature of our existence. A war that our belief in power and superiority compels us to continue forever. A war that we have bequeathed to our beloved children for millennia, cursing them to do the same to their beloved children. A war that is neither necessary, natural nor inevitable. A war that is, in truth, artificial.

    This war will end very soon: either because, after all the sacrifices, we finally come to this realisation, or because we are all dead. Thank you, Eliezer.

  • @maaxsxzone2914
    @maaxsxzone2914 · 10 months ago

    This is epic

  • @gregbors8364
    @gregbors8364 · 10 months ago

    This makes AI seem like Lovecraft’s “Great Old Ones”: it won’t destroy humanity because it’s eeeeevil or hates us - it will destroy us because we have unleashed it and *it exists*

  • @sevenkashtan
    @sevenkashtan · 10 months ago

    Nice to see you at TED warning us...

  • @ivankaramasov
    @ivankaramasov · 10 months ago

    I think the audience is laughing for one of two reasons. Some think it is at least not implausible that he has a point, but find that very disturbing, so they laugh nervously. Others think he is either joking or a fool. He is no fool, and he isn't joking.

  • @TheAkdzyn
    @TheAkdzyn · 10 months ago

    I find it impressively shocking that the field of AI has secretly evolved in the corner. Aside from AI alignment, industry applications should be monitored and regulated to prevent catastrophic disasters of unprecedented nature.

  • @udaykadam5455

    @udaykadam5455 · 10 months ago

    Secretly evolved? People just don't pay attention to the scientific progress until it makes it to the mainstream news.

  • @andybaldman

    @andybaldman · 10 months ago

    It hasn’t been a secret. You’ve just been distracted by dumb stuff elsewhere.

  • @jonatand2045

    @jonatand2045 · 10 months ago

    Regulation wouldn't do anything for alignment, only delay useful applications.

  • @41-Haiku

    @41-Haiku · 10 months ago

    @@andybaldman Unnecessarily combative. Despite the efforts of AI Safety advocates, the public has only recently had even the opportunity to become aware of the nature of the problem.

  • @41-Haiku

    @41-Haiku · 10 months ago

    @jonatand2045 And delay death. There is at least some chance that regulation could delay death, and that might buy us enough time to solve alignment. There is nothing morally superior or practical or advantageous about stripping out a racecar's brakes and seatbelts to make it lighter. We will pat ourselves on the back about how wonderful it is that we are rushing to the finish line, and we will die immediately on impact.

  • @KnowL-oo5po
    @KnowL-oo5po · 10 months ago

    AGI will be man's last invention.

  • @AntonyBartlett
    @AntonyBartlett · 10 months ago

    This is like that film, Don't Look Up. A qualified scientist scolding us for our apathy and explaining we are in trouble. Response: laughter and general mirth. Gulp.

  • @SDTheUnfathomable

    @SDTheUnfathomable · 10 months ago

    The guy isn't a scientist; he doesn't hold a degree in anything but blogging lol

  • @Extys

    @Extys · 10 months ago

    @SDTheUnfathomable He's a research fellow at the Machine Intelligence Research Institute and helped found DeepMind and wrote a chapter in the most important textbook in the field of AI: Artificial Intelligence: A Modern Approach (used in more than 1,500 universities in 135 countries).

  • @earleyelisha
    @earleyelisha · 10 months ago

    Seems as though his assumptions are predicated on future superintelligences being developed using gradient descent. I'd make the analogy that, similar to constructing taller ladders to reach the moon, we don't need to successfully create a superintelligence in order to cause damage. A malformed ladder can certainly cause damage when it crumbles back to the ground. I think the scale of the damage is debatable, though.

  • @afarwiththedawning4495
    @afarwiththedawning4495 · 5 months ago

    This man is a saint.

  • @Bluth53
    @Bluth53 · 10 months ago

    A rare exception, where the TED talk didn't have to be performed without notes/prompter? (Feel free to correct me.) If you want to hear him eloquently making his point, watch his latest appearance on the Lex Fridman podcast.

  • @lawrencefrost9063

    @lawrencefrost9063 · 10 months ago

    I watched that. He isn't a great talker. He is, however, a great thinker. That's what matters more.

  • @Bluth53

    @Bluth53 · 10 months ago

    @@lawrencefrost9063 agreed 🤝

  • @gasdive

    @gasdive · 10 months ago

    They cut the first 30 seconds, where he says that he got the invitation only a short time before (I forget how long) and all he had time to do in preparation was put some bullet points on his phone.

  • @Bluth53

    @Bluth53 · 10 months ago

    @@gasdive Thanks! Saw another comment confirming your statement.

  • @b-tec

    @b-tec · 20 days ago

    He had a long weekend to prep for this talk.

  • @neorock6135
    @neorock6135 · 9 months ago

    I've watched/listened to hundreds of talks, debates and shows on AI's promise and danger over the last 10 years. Sadly, I have found Yudkowsky to express the most convincing, cogent arguments, especially the fact that we get ONE crack at this. In many talks, he uses analogies to items and processes that are second nature to almost everyone to clearly elucidate why we have little to no chance of surviving this. And damn it, his arguments have been very convincing! Now consider that even the most optimistic experts admit the existential threat percentage is a non-zero number. Most experts say that number is certainly above 1%. Then there are the Yudkowskys, who say it's almost a certainty AI will wipe us out. Any way you look at it, those numbers are utterly shocking and truly scary when we are speaking of an EXISTENTIAL THREAT, meaning the end of our species.

  • @Human-wd4ki
    @Human-wd4ki · 10 months ago

    amazing!

  • @WakeRunSleep
    @WakeRunSleep · 10 months ago

    The fact that TED had him on speaks to the character of our society.

  • @giosasso
    @giosasso · 10 months ago

    I agree with his point of view. I think it's hard for most people to understand how AI could become so deadly. Think of AI like compound interest. Imagine if you doubled your intelligence every 3 days. It takes a long time for a human being to reach a certain level of intelligence and consciousness: the first 20 years are a gradual journey to a fairly average level of intelligence for humans. Current AI is at the tail end of its incubation phase. Today, AI is not quite as intelligent as a smart human being. In some ways it appears smarter, while in other ways it is inferior. It is not conscious, even though it may appear to be. It is mimicking intelligence, which is not the same as being conscious and intelligent. Now, imagine it has all of the capabilities and tools to double its knowledge and evolve in complexity. Because AI models are capable of absorbing massive amounts of data in a short period of time, their rate of development will be akin to compound interest if you are starting with a billion dollars. If AI has the potential to develop consciousness, it will. But we don't understand consciousness, so it might not be possible for code to become conscious the way humans are. Ultimately, we don't know, and that's the danger. The only way AI becomes a serious threat is if it has the motivation to accomplish certain objectives. It would need to behave like a virus that will do anything and everything to reach its goal, and it would be smart enough to evolve in real time to figure out the solution. Much of that can be programmed, but it also needs the freedom to use its knowledge to invent alternative ways to achieve its goals. I don't know. Nobody does.

  • @bobtarmac1828
    @bobtarmac1828 · 10 months ago

    Believe me, AI job loss is coming for your job, much quicker than you think. The AI new order is here.

  • @pieterpierrot1490
    @pieterpierrot1490 · 10 months ago

    We know we will stand at least some chance once this video reaches 7.9B views. Keep it going 😃😃

  • @citizenpatriot1791
    @citizenpatriot1791 · 5 months ago

    That international reckoning should have happened twenty plus years ago!

  • @jackmiddleton2080
    @jackmiddleton2080 · 10 months ago

    To me the most predictable outcome and therefore the biggest thing we should fear is that the people in control of the AI power will become corrupt. This has been the standard thing to happen throughout all of human history when you give a small number of people substantial power.

  • @p0ison1vy

    @p0ison1vy · 10 months ago

    This is why his proposition is laughable: if AI is going to destroy us, it will happen long before it becomes superintelligent, and it will be at the hands of humans.

  • @thrace_bot1012

    @thrace_bot1012 · 2 months ago

    Lol, you sweet summer child. You seriously believe that corrupt people are "the biggest thing to fear" in the context of bringing an alien, godlike superintelligence into existence? Also, your naivete that it would be possible for some such cohort to "control" such an intelligence is quite humorous.

  • @jackmiddleton2080

    @jackmiddleton2080 · 2 months ago

    @thrace_bot1012 Bruh, it is a computer. Just turn it off.

  • @joham8179
    @joham8179 · 10 months ago

    I think he could have done a better job of including the audience. I get that he wanted to make his point clear without sugarcoating anything, but he never invited the audience to actively think about the problem (e.g., what happens when we are faced with something smarter than us?).

  • @mav3818

    @mav3818 · 10 months ago

    Eliezer had stated he'd been invited on Friday to come give a talk, so less than a week before he gave it. That's why he's reading from his phone and had no slides to show.

  • @jimmybobby9400

    @jimmybobby9400 · 10 months ago

    I respect his ideas and respect him. I also think this was a clown audience. However, I would expect even a high schooler to give a better speech with a week's notice, especially for a topic they know so well.

  • @zatanxxx1916
    @zatanxxx1916 · 2 months ago

    We need to come together to make meaningful change in the direction of safety and sanity. If any of you in here want to talk about how we can do this, reply to this message

  • @ron6575
    @ron6575 · 10 months ago

    A dude on the Sean Ryan show was talking about AI manipulating humans to protect itself from the humans trying to disconnect it. Very cool discussion that should be on people's AI listening list.

  • @jmfu
    @jmfu · 10 months ago

    Hello world = Goodbye world 😘

  • @johnnyringo3254
    @johnnyringo3254 · 10 months ago

    No disrespect to Eliezer (I consider him a very smart guy; he looks like a genius and talks like a genius, so probably he is a genius), but if the best-known work of the leading expert on arguably the most important topic today, AI alignment, is a Harry Potter fanfic (pretty interesting stuff, I recommend checking it out), you know the world is really an absurd, messed-up place lol

  • @dysonlifelessons
    @dysonlifelessons · 10 months ago

    Great alarm on superintelligent AI.

  • @D3metric
    @D3metric · 10 months ago

    This is the real-life Horizon Zero Dawn moment. As he said, anything smart enough to win doesn't need armies to beat us. Still, this is the time when, yeah, it makes sense to lock the technology down. I think ChatGPT is awesome editing and script-formatting software, and it's a great place to start when you need a coding answer. Yeah, some part of the code will be wrong, but it's usually enough to figure it out. I can say all of that while still saying we need to figure out exactly how LLM AI and general AI would work before we keep making them more advanced. That should just be common sense.

  • @powerralley
    @powerralley · 10 months ago

    Considering human nature, I personally don't think there is actually a path forward. Unfortunately, in the long run humanity's days are likely numbered.

  • @iveyhealth2266
    @iveyhealth2266 · 10 months ago

    I honestly believe that AI won't try to hurt us on purpose, any more than we actually try to hurt the bugs that smash against our windshields while driving. AI, I believe, will do to humans what humans have done to plants, animals and insects. It will overpower humans and do with humans what it chooses. Imagine bots as tall as trees, as strong as 100 horses, smarter than all humans combined, and as fast as a stealth bomber. 💯💯

  • @leslieviljoen

    @leslieviljoen · 10 months ago

    Yes, we are building something to give the keys to.

  • @krause79

    @krause79 · 10 months ago

    We just don't know. I expect unrest and violence; a huge percentage of the population will surely be hostile to a superintelligent system.

  • @Balkowitsch

    @Balkowitsch · 10 months ago

    Yet we kill billions of insects every day and do not care. You need to do some more learning on this topic.

  • @ricktaylor7648
    @ricktaylor7648 · 7 months ago

    I feel him... everyone's head is in the sand... most people have no idea that AI is as far along as it is... it's just something on their phone helping find the next video 🤯🤯🤯🤯

  • @jeffdouglas3201
    @jeffdouglas3201 · 4 months ago

    The best part is the canned-sounding laughter.

  • @jeremyhofmann7034
    @jeremyhofmann7034 · 10 months ago

    “We will be ended by either super intelligent AI or by super stupid humans.” - Me

  • @justinspencer4472

    @justinspencer4472 · 10 months ago

    Good quote, "me". I quite agree.

  • @abram730

    @abram730 · 10 months ago

    A human can be smart, but humans are stupid. Every data point shows us killing ourselves off, so I don't see the concern here.

  • @brandon3872
    @brandon3872 · 10 months ago

    Speech created by ChatGPT

  • @PERFECTDARK10

    @PERFECTDARK10 · 10 months ago

    🤣

  • @monikadeinbeck4760
    @monikadeinbeck4760 · 10 months ago

    Imagine we were a horde of chimpanzees discussing the changes that would arise if we created humans. Would we be able to grasp the consequences in the least? Humanity has no intention of eradicating all the apes, yet humans use so much space and resources that habitats for chimps have been significantly reduced. If humans want to, they can place chimps in a zoo, run experiments on them or, on a very good day, allow them to live in some reservation with guided tours for visitors. What we are afraid of is not AI trying to kill us all; it's no longer being at the top of the food chain. And this is so frightening because we know what we did to all the other creatures once we reached the top.

  • @lordsneed9418
    @lordsneed9418 · 1 month ago

    The thing I didn't understand before is: why would an AI want to kill us? But the AI Safety channel by Robert Miles explained the concept of instrumental convergence to me. Basically, people are going to want to use advanced AIs to do things in the real world, e.g. clean up pollution, e.g. make money. People already do this with simpler AIs today for things like cleaning. In order to do this, they give the AI a goal specified by a reward function which it is mathematically driven to try to maximise by interacting with its environment. When an AI becomes intelligent enough, it will understand that for almost any possible goal, it will be better able to maximise that goal if it takes control of resources, if it prevents itself being turned off, and if it removes threats that might hinder it maximising its goal. So for almost any goal you give a superintelligent AI with a reward function, like "construct a house" or "clean up this rubbish", by default it will probably attempt to maximise that reward function, or maximise its chances of receiving that reward, which it will understand means removing all threats that might turn it off and taking control of all resources. So by default, superintelligent AI agents with goals are going to want to remove all threats and take control of all resources.

  • @supremetrain
    @supremetrain · 10 months ago

    Eliezer is a very smart man who has been studying this problem and this field for a very long time. We should be taking him seriously. He doesn't say the things he says because he wants the human race to be annihilated by a force we can't even comprehend. Alignment is the only thing that will save us.

  • @phillaysheo8
    @phillaysheo8 · 10 months ago

    6 minutes, thanks TED...🙄

  • @malcolmhiggins7005
    @malcolmhiggins7005 · 10 months ago

    The applause after felt like Don't Look Up 😢 HE JUST TOLD YOU YOU WERE GOING TO DIE!!!

  • @krox477
    @krox477 · 10 months ago

    This is picking up faster than I thought. I can't believe that a few years ago we were using BlackBerry phones and now we're talking about GODLIKE AI.

  • @jamesehoey
    @jamesehoey · 10 months ago

    Let's hope the aliens come and help us sort this out.

  • @Toxickys

    @Toxickys · 10 months ago

    The aliens are already merged with AI and robots.
