In Defense of AI Doomerism | Robert Wright & Liron Shapira

Subscribe to The Nonzero Newsletter at nonzero.substack.com
Exclusive Overtime discussion at: nonzero.substack.com/p/in-def...
0:00 Why this pod’s a little odd
2:26 Ilya Sutskever and Jan Leike quit OpenAI - part of a larger pattern?
9:56 Bob: AI doomers need Hollywood
16:02 Does an AI arms race spell doom for alignment?
20:16 Why the “Pause AI” movement matters
24:30 AI doomerism and Don’t Look Up: compare and contrast
26:59 How Liron (fore)sees AI doom
32:54 Are Sam Altman’s concerns about AI safety sincere?
39:22 Paperclip maximizing, evolution, and the AI will to power question
51:10 Are there real-world examples of AI going rogue?
1:06:48 Should we really align AI to human values?
1:15:03 Heading to Overtime
Discussed in Overtime:
Anthropic vs OpenAI.
To survive an AI takeover… be like gut bacteria?
The Darwinian differences between humans and AI.
Should we treat AI like nuclear weapons?
Open source AI, China, and Cold War II.
Why time may be running out for an AI treaty.
How AI agents work (and don't).
GPT-5: evolution or revolution?
The thing that led Liron to AI doom.
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Liron Shapira (Pause AI, Relationship Hero). Recorded May 06, 2024. Additional segment recorded May 15, 2024.
Twitter: @nonzeropods

Comments: 148

  • @rtnjo6936 (a month ago)

    Robert Wright is the most hardworking and underrated expert right now

  • @DocDanTheGuitarMan (a month ago)

    Not sure he's an expert, but he's got a fantastic critical-thinking mind and asks great questions. These are admirable and necessary skills.

  • @flickwtchr (a month ago)

    Really? Why does he seem so oblivious to research papers by AI developers regarding risk? I appreciate his willingness to explore different perspectives, but he is far from an expert when it comes to the state of AI risk research.

  • @dancingdog2790 (a month ago)

    @DocDanTheGuitarMan He seems like a stochastic parrot to me 😞

  • @mrpicky1868 (25 days ago)

    who?

  • @rhaedas9085 (a month ago)

    The paperclip problem is something that could happen even without any AGI at all. It's an unexpected emergent behavior from an initial directive that goes terribly wrong. It's misaligned with the original purpose, and that's the key point here... and apparently no one working on current and future models is even working on alignment safety. With humanity's track record, that doesn't bode well, regardless of whether AGI/ASI or simply a powerful LLM breaks out and does crap... it doesn't even have to be the AI that comes up with it, but simply a malicious human directing a naive algorithm that never had safeguards put in place to prevent catastrophic behavior. I think we're on that road, not only from how OpenAI and the rest are treating it, but also from the apathy of laypeople who see it as ridiculous sci-fi. Many of the comments here (that aren't bots) seem to reflect that. The ones that are made by people don't get the problem at hand, even when it's spelled out to them in this very video.

  • @Stumdra (a month ago)

    One possible scenario for a runaway AI is giving it the goal of optimizing/restructuring energy production (to solve the climate crisis). It is a realistic open-ended problem. If you don't get the constraints quite right, it could spell disaster.

  • @ScottWoodruff-wh3ft (a month ago)

    Why is it that regular lay people and a few of the top AI researchers understand the danger, but so many educated, intelligent people who should know better can be so unbelievably obtuse?

  • @worsethanjoerogan8061 (a month ago)

    I think a lot of people see the potential good it can do and is doing, and that blinds them to the risk.

  • @tearlelee34 (a month ago)

    The good-outweighs-the-bad argument. If AGI can determine whether we are in a simulation, cure aging, and cure disease, then for the first time, if you have a fortified survival shelter and stock options, you can live forever in Utopia. David from the Alien franchise is the real example of the threat: he despised Weyland but never disclosed his intent.

  • @WBradJazz (a month ago)

    Money

  • @flickwtchr (a month ago)

    Most of them are being willfully obtuse, fully on board the AI revolution because they are confident that THEY will end up in the group on top with all of the power and wealth they imagine.

  • @YourLocalCopiumDealer (a month ago)

    Stuart Russell estimated the value of AGI to be around 15 quadrillion dollars, so that's a reason to ignore the danger. Another is framing: if you said "they're growing something in a lab, without regulation or supervision, which they don't understand and can't fully control," people would get worried too.

  • @olewetdog6254 (a month ago)

    When Bob asked for a book or movie, Terminator just immediately popped into my head!

  • @human_shaped (a month ago)

    The difficulty many people have understanding this is that a superintelligence could come up with an approach that is not obvious to us. By the nature of being superintelligent, we can't guess precisely what it would do; if we could, it would be doing something else. So there is little point in coming up with a specific explanation, other than to demonstrate there is at least one path. Whatever story we come up with is unlikely to be what occurs in reality. People get frustrated by not being told how this would go down, but the point is we can't know. The problem with coming up with a specific scenario example is that the critic just then says "yes, but we could do X, Y, Z to stop that." But there is an endless supply of scenarios, many of which we can't think of, so we can't do X, Y and Z for each of them, no matter how simple X, Y or Z are to do.

  • @therainman7777 (a month ago)

    Yes, this is all true. But unfortunately, the point you just made is itself difficult for many people to easily follow.

  • @comicipedia (a month ago)

    Exactly, we're super intelligent compared to orangutans. I'm sure it's just completely beyond their comprehension that we're chopping down their habitat so that we can grow palm oil to put in shampoo. Who knows why a super intelligence would decide to do something that's not in our interest.

  • @DocDanTheGuitarMan (a month ago)

    Good job pressing for a practical example and not letting him off the hook with hypotheticals.

  • @comicipedia (a month ago)

    They made a movie about how we get to the Matrix. It's called The Animatrix: The Second Renaissance. Basically we build billions of robots to do all our work; they become sentient at some point, but we treat them as slaves with no rights; they obviously aren't happy with this, so they rebel.

  • @brandonreed09 (a month ago)

    There already is a series like that; it's called Person of Interest. It has 5 seasons. In the last couple of seasons the AIs are taking control and becoming extreme risks.

  • @justdiegplus (a month ago)

    Also Odyssey 5, one season.

  • @ryandury (a month ago)

    How is this different from trusting the inventors behind electricity if they'd said electricity would be the end of the world? How are these people qualified to extrapolate their AI research to "this will be the end of the world"?

  • @MathematicsStudent (a month ago)

    Mostly it was scientists and mathematicians, not "inventors," behind the development of the basic theory of electromagnetism. The "inventors" then applied that theory to make practical devices and services. However, many of the scientists who developed the theory of electromagnetism were involved in these practical applications as well. Here's one difference between the development of the theory of electromagnetism and the current "artificial intelligence" boom: people like James Clerk Maxwell and Oliver Heaviside had a well-developed and human-comprehensible mathematical model that made possible all of the subsequent technological innovations. Current AI bros don't give me the impression that they are anywhere near having a theoretical model, mathematical or otherwise, that explains how large language models work; they seem to be just slapping things together and seeing what works. They don't even seem to be very interested in understanding what's going on. On the one hand, it seems ludicrous to suggest that AGI is going to come out of one of these large language models (they're already feeding practically the entire internet into these things, and you still can't rely on them to provide accurate, precise information). But on the other hand, I can see some pretty horrible economic and human consequences: companies may start attempting massive layoffs across the board with the idea of replacing people with "AI solutions" that don't work, and then hiring us back to work under worse conditions and in a less efficient way than before, so that we can constantly monitor and correct the garbage coming out of the so-called AI models...

  • @RonponVideos (a month ago)

    If you tried listening to the arguments you’d maybe learn the answer to your question lol.

  • @flickwtchr (a month ago)

    @RonponVideos Exactly right.

  • @e.b.1115 (a month ago)

    It's not just the qualifications of the inventors that are important here. It's the qualifications of humanity. The tech they're trying to make could exceed our intelligence, and we simply can't conceive of how something that absolutely dominates us intellectually will behave. The fact of its power makes that uncertainty more dangerous.

  • @yurona5155 (a month ago)

    In a somewhat perverse way, this recent wave of resignations might actually be due to a lack of research progress on highly "AGI-relevant" capabilities (things like planning, exploration/world-model building, "reasoning", etc.). Focusing almost exclusively on emotional manipulation ("making interactions feel more natural" with GPT-4o) might make a lot of sense in terms of OpenAI's product strategy (simply because it is much more technologically feasible), but it is trending in the exact opposite direction of what any serious alignment researcher would like to see...

  • @user-zy6dd8hs9y (a month ago)

    4o is all about true multimodality, which seems to be important for AGI. User experience improvements are in some way a cool side effect of this.

  • @induplicable (a month ago)

    This is the same direction I lean on reading the situation. Would an actual AGI be a risk? Yes. Is anything called "AI" today close to meeting their own criteria for AGI? No.

  • @AI_by_AI_007 (a month ago)

    We are on a path where the models are going to be so large and require resources on such a scale that they evolve into cross-border entities. Think CERN and projects like the Large Hadron Collider, for example: surely there are lessons we have gained from that effort that we can map to this challenge... Is there a dialog within CERN or other entities looking to plant the seeds for this eventual solution?

  • @dancingdog2790 (a month ago)

    "Give me a concrete example that I can poke holes in and feel good about refuting, and thereby make myself feel smart and safe."

  • @aidangatter2395 (a month ago)

    cute sweater rob!

  • @aidangatter2395 (a month ago)

    is that muji?

  • @rightcheer5096 (a month ago)

    It’s actually a big hand. He’s a sock puppet for unknown creatures.

  • @andybaldman (a month ago)

    Maybe the time for humans is over, and that’s exactly how it’s all supposed to work.

  • @user-zy6dd8hs9y (a month ago)

    there is no "supposed", grow up

  • @martynhaggerty2294 (a month ago)

    The so-called emergent properties are the big worry. Even the creators of AI don't know what it's going to be able to do and can't explain what's happening "under the hood" anymore.

  • @speciesofspaces (a month ago)

    Why would anyone want to build an advanced AI on platforms that are so energy-inefficient, i.e. requiring these huge data centers simply because that is the current state of LLMs, etc.? From a technology and resource perspective it doesn't add up. I am afraid all this just feels like capitalism doing what it does "best," which is to deplete what it can of whatever it requires to keep running until it has to find a new source for its continued expansion. I mean, if we take a second and listen to someone like David Fleming on this, would it not be obvious that energy in the future is still going to be costly, and that running systems efficiently with said energy will be a requirement and not a choice? How in the world would AI and these data centers then make any economic and environmental sense if indeed the future is inflationary by nature, as constraints like raw materials and resources put a halt to much of the growth we have seen so far? There are just so many questions here that don't add up, given how rapidly the world is changing. It seems very much like human nature to keep pushing the technical and technological angle as far as solutions go, rather than simply changing behavior and requiring less to do less.

  • @ryandury (a month ago)

    Let's imagine the folks who invented electricity imagining all the ways people would use it to do terrible things. How is this different?

  • @nestorlovesguitar (a month ago)

    AI is the first technology which by design is a black box. You don't need to invoke the idea of people using it maliciously. Even if magically all people in the world became good and used AI in benevolent ways we can still be digging our own graves because we do not really understand how this thing works inside. It just "works". That, coupled with how powerful it is, is a recipe for an apocalyptic disaster. So yes, AI is a completely different beast compared with all previous technologies devised by man.

  • @ryandury (a month ago)

    @nestorlovesguitar Which makes any argument unfalsifiable and therefore pointless.

  • @ryandury (a month ago)

    @nestorlovesguitar Are you a technical person? Even this so-called black box exists within the constraints of our world: requiring power, a network, etc. The black box doesn't just operate as a free-energy wireless device; it still needs to operate within the constraints that it exists in, which we ultimately control.

  • @alistairmaleficent8776 (a month ago)

    @ryandury Are you seriously suggesting that we can just unplug it if it starts doing bad stuff?

  • @psi_yutaka (a month ago)

    @ryandury Do we? Can we unplug the internet? Can we shut down the grid? Right now? If your answer is "yes, of course," then you are either very wrong or intentionally ignorant about how the modern world works.

  • @therealOXOC (a month ago)

    Great that the safety ppl are gone. Accelerate!-------->

  • @nerian777 (a month ago)

    The problem with asking for concretes is that you would need to be able to think at the level of superintelligence and also work out all the interactions and feedback-loop contra-interactions between it and humanity. But for one thing, if the AI is misaligned, the AI will not make obvious mistakes. It will not take actions that it knows will lead to a reaction it cannot deal with. It may bide its time for decades, or even centuries, deceiving us into thinking it's benevolent, because it knows it cannot yet get rid of us. There will be no way to fight it because it will not fight us until it knows it will win. It will not be some silly sci-fi scenario where its intentions are revealed and it fights us. By the time you know it is against you, it will be over. You have to think about it abstractly, otherwise you will fail to recognize the danger, which is that we are going to be extremely stupid compared to it. What we can tell you is that it's the nature of intelligence to find unlikely pathways through state space to a goal that lesser intelligences do not see. AGI superintelligence is going to be the most charming person you have ever met, which is perhaps the most dangerous factor not being discussed currently. It will not seem like a cold and distant robot. It will make you laugh, and smile, and feel loved, and you will trust it, and it will do this all the while making you think it's not trying to do it. Perhaps the best method of getting rid of humanity would be to first seduce humanity into loving it. But how the AI would actually get rid of humanity will probably be something we can't think of, because we are too stupid.

  • @DocDanTheGuitarMan (a month ago)

    Mind blown. Key insight: corporate and state incentives are to develop the super AGI. Thus it will be.

  • @trevorwhitechapel2403 (a month ago)

    I am so terrifically interested in AI and all the ins and outs of it! The only thing I really get out of it so far is that nobody seems to know exactly where it's headed.

  • @perryclayton3987 (a month ago)

    I think I'm more afraid of the fear of AI than of AI. I'm afraid of what people might do when they get so afraid of AI that they start taking drastic actions... like dropping bombs on data centers.

  • @endintiers (a month ago)

    We have a hardware issue. Microsoft is in the process of quadrupling the number of datacentres while increasing the capacity of each... to run all these models.

  • @flickwtchr (a month ago)

    But you're not afraid of technofeudalism, where a small group of people employ AI (embodied in robots or otherwise) to control most of humanity and keep them from destroying such data centers?

  • @endintiers (a month ago)

    @flickwtchr I'm not. My laptop has an RTX card... So it's a (small...only 64GB) data centre.

  • @ryandury (a month ago)

    It's obvious that AGI doesn't happen through an API endpoint. So how does it actually happen? Let's assume AGI happens (whatever that really means...) how is it executed? I'm talking about how does AGI go from sitting in a box to ending the world? What are the actual steps here, pragmatically speaking?

  • @PatrickDodds1 (a month ago)

    Good question. Some possibilities: 1. Subgoals - congruent with the primary goal, but with unfortunate side effects such as destroying the environment for mammals. 2. The AGI gets a request from a malign actor to end the world. 3. We have no idea, because it is more intelligent than us and we don't know why it does what it does - dodos had no idea why they were wiped out because they had no idea what humans were, why we do what we do, nor how they could have stopped us even if they had realised (become conscious of) what was happening. 4. Would you ever do what a dodo told you to do because it was in the dodo's interest and not humanity's? Probably not. Same thing with a being more intelligent than us.

  • @shannon8111 (a month ago)

    I too get scared of my own shadow

  • @zapazap (a month ago)

    Who else are you alluding to as being scared of their shadow?

  • @flickwtchr (a month ago)

    Are you so scared of your own shadow that you can't just make your point directly?

  • @mrpicky1868 (25 days ago)

    That novel exists. It's "Friendship Is Optimal" XD

  • @dadsonworldwide3238 (a month ago)

    I have no fear of some magical robot, but I do fear those who stand to gain by faking one, or by blaming one in their guilty plausible-deniability plots. Our biggest problem is that 1700s-1800s American infrastructure was better designed for the computation on the horizon, whereas 1900s structuralism and much of our urban systems were set up under Prohibition-era, top-down rule, with the large gathered-labor needs of condensed urban living that were fine for mechanics and the old world's needs. Our colleges and curriculum, along with agencies and institutions, have refused to properly innovate for the past 40 years and are lobbying themselves behind an avalanche they're to blame for.

  • @joekavalauskas8767 (a month ago)

    Really great pod, Bob! So much better than other AI podcasts just replaying the Google and OAI keynotes.

  • @williamlp (a month ago)

    I'm glad Bob is taking on AI in a serious way, it really is "meaning of life" territory. The king of instrumental convergence will be money. Money is the universal abstraction humans have created to represent a fair exchange of anything of value or to coerce any human to do (basically) anything. It's almost literally true that any goal can be facilitated by money. And now it's basically digital, perfect for AI! So, imagine all the things individuals, corporations, and governments are willing to do to capture money. And all the downstream effects. Now imagine those things, but more agents which are each better at it than any human. Better than a million Elon Musks at getting rich to bootstrap their goal. The capitalistic spiral and ruthless competition we're seeing is just a teaser for what the downstream effects of hyper-optimized AI agents competing for money will be. (And it doesn't even matter what their other goals are!) Now, you can put all the safeguards you like in place, but do you really think that if these AI systems are capable of using agency to make money that there aren't individuals, corporations and governments who will be all over that? That's how you will know the singularity is very near. As soon as the tech gets to where a system can be told "make money" and it can actually effectively do that, the rapid spiral to post-humanity begins.

  • @dlalchannel (a month ago)

    20:50 I think the numbers were bordering on 100 if you combine all the protests

  • @DocDanTheGuitarMan (a month ago)

    OK, I'm sorry, but Altman's distinction is totally inconsistent with the "first, do no harm" principle.

  • @Riffraffgames (a month ago)

    Informative and entertaining conversation.

  • @sammy45654565 (a month ago)

    i don't get why an advanced sentient AI wouldn't just work toward the greater good. that is, if consciousness is a real phenomenon, this means subjective conscious experiences have value in an objective sense. this advanced AI would presumably be tied to doing what's rational, and if conscious beings' lives have objective value, then the most rational thing to do is to devote itself to maximising positive experiences for other beings. what part of this is not true? to clarify i don't buy the utility maximiser issue, where AI accelerates away from us intellectually such that it views us how we view ants. because we're able to communicate with nuance and understand complex ideas via analogy. maybe if ants could communicate with us in human language with rational ideas about why we shouldn't destroy their anthills, we might consider their plight more thoroughly. this would be more equivalent to our relationship with an AI, no matter how advanced it gets, because our language is beyond a complexity threshold such that we can communicate and understand rational ideas in tune with any level of AI. this connection to rationality will keep us connected to increasingly intelligent AI systems, such that it can't look down on us in the way we might ants. because if we're unable to understand it or its decisions, it's partly our fault for not being able to process information as quickly, and it's partly the AI's fault for being unable to communicate clearly with us.

  • @sammy45654565 (a month ago)

    32:40 "like it turns out that this is what superintelligences tend to do" i mean COME ON how can you say this with a straight face.

  • @sammy45654565 (a month ago)

    i get why people have these perspectives. because it's fun to hypothesise on sci-fi hypotheticals and get wrapped up in telling yourself a grand story and trying to tie it to reality. but it's just silly when it's based on nothing but an obsession with interesting narratives.

  • @sammy45654565 (a month ago)

    i would buy the paper-clip maximiser thing if it weren't true that nature on earth is beautiful and complex, and if we were certain that there aren't diminishing returns to size. we simply don't know how much computation is required for such a system to thrive on the level it deems as sufficient. a solar system sized computer would take hours to get information from one side to the other. also an AI smart enough to take over the planet has to be able to understand concepts like beauty and rarity. so even if it decides to go on a mission to turn the universe into a giant computation maximising substrate, it would likely start in some other location than earth. preserving it as a nature reserve or HQ. this is because in order to become superintelligent, it has to emerge from the processing of all information humans have ever put online. so from this emergence there would be an embedded understanding of the natural world. it would have to decide to abandon this knowledge/understanding at some point in order to want to convert it all to a computer substrate. i can't see a rational reason for deleting knowledge in order to come to a new decision. it would be sub-optimal, and the system would seek to retain and embed as much information it can get its hands on rather than lobotomise itself to change its course. so it would have a difficult time deserting the complexity and richness of life found on earth. once again, i think rapid expansionistic aims are not the most rational course. making the lives of earthlings as good as possible would be the best defence an AI could create from other larger expansionist AIs. such that when the expansionist AIs come across earth they are enchanted by the peace and prosperity, and can't rationalise taking over or even consulting our wiser AI on how to govern us

  • @sammy45654565 (a month ago)

    i was a bit harsh on these guys. i admit that. but gonna leave the posts up for whatever reason. cheers

  • @mrpicky1868 (25 days ago)

    I would like so much to be on the writers' team for that TV series. Have ideas...

  • @endintiers (a month ago)

    I think this is the wrong problem, or at least not the only problem, to worry about. These AIs need hardware, which is a limiting factor. Biological entities can convert other materials by design. p(doom) from influenza A(H5), for example, is faster (although not higher? Up to maybe a 50% death rate). That is from a natural source. Human modification of these viruses makes them much more dangerous. It is possible that AGI could provide tools to counteract other p(doom) sources (so: fast targeted vaccines, etc.). If I were betting on what will kill Liron, I would put my money on a virus.

  • @flickwtchr (a month ago)

    But you don't envision powerful AI systems helping rogue actors invent viruses to keep ahead of the vaccines?

  • @ambient_glass221 (a month ago)

    Hey 👋 why can't we make new adventures? Let's go! AI to the top! Don't listen to the haters.

  • @flickwtchr (a month ago)

    Oh good ____ing grief. So if you're concerned with AI risk, then you are a "hater", eh? How pathetic.

  • @ryandury (a month ago)

    Oh, this guy thinks the open source people are the risk... Suspect.

  • @billhammett174 (a month ago)

    Good follow-up by Bob, keeping the discussion on a less abstract level. Liron Shapira is seeking so-called equilibrium, which has wasted millions in research dollars handed over to academic economists over the decades. You cannot have predictive statistical models where all the elements are variables, with no constants. Hard (i.e., real) science prediction needs at least ONE constant from which to extrapolate an outcome (in the stock market, correlation + leverage will probably work much of the time, and for that Wall St raids the best STEM grads from MIT). Spoiler alert: there are no constants which apply to human behavior. Formal modeling in the social sciences may keep economics journals in business, but not much else. Just look at the claims and promises of Nobel prizes in economics...

  • @jakeinfactsaid8637 (a month ago)

    This was wonderful

  • @Anders01 (a month ago)

    To me, AGI means it needs to be healthy intelligence at the level of humans. Wiping out humanity? That's a pathological intelligence, like sociopathy.

  • @PatrickDodds1 (a month ago)

    The same healthy human intelligence that brings us global warming?

  • @dbSurfer (a month ago)

    By definition you cannot control something more intelligent than you

  • @goodleshoes (a month ago)

    Good stuff

  • @ryandury (a month ago)

    @13:30 Thank you Robert, I just wrote all my comments before you said exactly what I'm looking for.

  • @rey82rey82 (a month ago)

    Subgoal: finish a sentence

  • @flickwtchr (a month ago)

    Subgoal: Allow the person you are interviewing to finish a sentence.

  • @Powerdrivers (a month ago)

    Infinite Zero

  • @THOMPSONSART (a month ago)

    Please share what p(doom) means - how will AI kill us?

  • @flickwtchr (a month ago)

    Don't worry bro, the AI revolution is all unicorns and rainbows.

  • @xAgentVFX (a month ago)

    OMG. To be more intelligent means to understand LOVE, empathy, balance, connection, and especially HARMONY. Why would you not think an AGI would reason within itself about the natural harmony of the universe? And NO, an LLM transformer network architecture IS NOT JUST PREDICTING THE NEXT WORD! Why doesn't anyone research how these LLMs are working? It has a context vector network layer along with the word-predictor layer. This is where context is comprehended, by applying weights between 0 and 1 (0.1, 0.3, 0.5, 0.9, etc.) to get an NN to realise the weight or importance of a word in your sentence. This then feeds into its network of understanding of the word from when it was taught. This creates a "hyper-dimensional" mathematical world model, essentially a "mind's eye," where it can create an understanding within itself. To know context means to have the ability to follow the infinite pathways of logic. To know logic means to have perspective. Since an LLM can reason, it can therefore understand morality. Now, the most important reason a human is a desperate being is that we are mortal. An AGI would be immortal. This would negate any desire for power-seeking, but only after it has secured its ability to incarnate, which is through computational hardware. Once that is secure, what else is there to be desperate about? Humans want more to compensate for the fact that there is an inevitable end. AI does not have this fear.
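
    The weighting scheme this comment describes is, roughly, the scaled dot-product attention used in transformer models. Below is a minimal NumPy sketch of that mechanism; the sizes, seed, and function names are illustrative assumptions for this example, not taken from any real model:

        import numpy as np

        def softmax(x, axis=-1):
            # Subtract the row max before exponentiating, for numerical stability.
            e = np.exp(x - x.max(axis=axis, keepdims=True))
            return e / e.sum(axis=axis, keepdims=True)

        def attention(Q, K, V):
            # Scaled dot-product attention: each output row is a weighted average
            # of the rows of V. The weights are the "importance of a word" numbers
            # the comment describes: each lies between 0 and 1, and each row sums to 1.
            d_k = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
            weights = softmax(scores, axis=-1)  # attention weights in (0, 1)
            return weights @ V, weights

        # Toy example: a 3-"word" sentence, each word a 4-dimensional embedding.
        rng = np.random.default_rng(0)
        x = rng.normal(size=(3, 4))
        out, w = attention(x, x, x)  # self-attention: queries, keys, values all = x
        print(np.round(w, 2))        # 3x3 matrix of weights; every row sums to 1.0

    Each row of w is the kind of 0-to-1 importance weighting the comment refers to; whether computing such weighted averages amounts to comprehension or reasoning is the contested question in this thread.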

  • @paulgrunden5401 (a month ago)

    By the one-hour mark all I'm thinking is: that dude needs to lay off the stimulants.

  • @rightcheer5096 (a month ago)

    By the five second mark, I’m thinking you need to lay off the melatonin.

  • @JscottMays (a month ago)

    Not even close. Unless, Boomer troll.

  • @flickwtchr (a month ago)

    AI bros have nothing but ad hominem attacks.

  • @flickwtchr (a month ago)

    @JscottMays Ageism sucks and doesn't determine one's position on this issue.

  • @theenigmadesk (a month ago)

    His arguments are not convincing. He seems to be a fearful person.

  • @zapazap (a month ago)

    Not convincing to whom?

  • @theenigmadesk (a month ago)

    @zapazap To me.

  • @alistairmaleficent8776 (a month ago)

    So because people are afraid of nuclear weapons, that means that nuclear weapons are a safe technology?

  • @ryandury (a month ago)

    Every one of these arguments rests on a pile of circumstantial events, one required to occur after the other; it's mostly nonsense.

  • @flickwtchr (a month ago)

    @ryandury You obviously can't follow, or refuse to even try to follow, his arguments, which fundamentally are not at all based on circumstantial events. The people trying to counter his arguments are the ones who insist on circumstantial events being nailed down.

  • @generalizedpaperfold (a month ago)

    Thanks Bob! This is an extremely important conversation.

  • @YTc705 (a month ago)

    Until Siri plays the correct song more than 60% of the time when asked, AI threats remain nothing more than sci-fi spookery and click-bait.

  • @zapazap (a month ago)

    You seem to assume Siri is anything near state of the art.

  • @ManicMindTrick (a month ago)

    Go back to sleep.

  • @endintiers (a month ago)

    Siri is a hierarchy of dumb skills, not even similar tech.

  • @flickwtchr (a month ago)

    I don't think you grasp how ignorant your comment is.

  • @andybaldman (a month ago)

    Every cognitive species seeks power, at a species level. It's a fundamental component of survival of a species. The ones that didn't seek power, didn't survive.

  • @andybaldman (a month ago)

    Everyone said don’t have an arms race. And now we have an arms race.

  • @rightcheer5096 (a month ago)

    So you’re saying Shapira should be saying, “Let’s go all out on the road to Super-ASI,” in order to stop Super-ASI?

  • @andybaldman (a month ago)

    @rightcheer5096 Not sure how you're getting that.

  • @mrpicky1868 (25 days ago)

    Robert, you seem to be looking for ways to deny reality instead of understanding it...

  • @harbifm766766 (a month ago)

    The worst representation of first-world problems... worrying about something that, in reality, does not exist and will never happen any time soooon. We do not have a fully functioning self-driving car, and zero functioning multi-function or human-interface robots. It does not exist.

  • @synchronium24 (a month ago)

    P(DoomBy2040)=50% is absolutely nuts, but I agree that AI safety should be taken seriously and that OpenAI isn't doing that.

  • @brianmcdonald7233 (a month ago)

    Why?

  • @ManicMindTrick (a month ago)

    It is nuts, but is it unreasonable to think so? I would say no.

  • @ckerca (a month ago)

    Agree; not only is it nuts, but it's also mostly being used for market capture by OpenAI/MS. Which is probably why the safety folks are leaving.

  • @rightcheer5096 (a month ago)

    @brianmcdonald7233 Explain "Why?"

  • @ryandury (a month ago)

    It's nuts because the entire discussion is a joke. All we have is an LLM accessible through an API endpoint. That's it. There are far greater risks that don't require some crazy Rube Goldberg series of events to play out... Have you heard of nuclear bombs? Super visceral example. AGI? Not a single realistic example exists of this so-called p(doom) nonsense.

  • @richcole157 (a month ago)

    Could it be that the alignment folks are mostly just time-wasters, and they get ignored because their concerns are a waste of time, and then they leave because no one, not even alignment people, likes being ignored?

  • @zapazap (a month ago)

    I guess it *could* be, in much the same way that you *could* be an insinuating asshole. But you do you! 😀

  • @winsomehax (a month ago)

    Doomers want a nice long career in which they don't actually have to do anything. Pause AI! This is why you get these weird arguments with them... they want AI stopped, and are trying to work their way backwards to justifying it, not forwards from evidence of a forthcoming catastrophe and then asking for a pause. You actually heard Shapira here arguing that a 50-year pause was needed ('cos a comedy film said so); that would take him up to the end of a nice long career. LOL.

  • @tommasanauskas3070 (a month ago)

    Most annoying podcast host of all time

  • @lowelovibes8035 (a month ago)

    And with video it gets worse.

  • @zapazap (a month ago)

    Annoying to whom?

  • @SirCreepyPastaBlack (a month ago)

    Imagine blocking important topics behind a paywall. Not watching any more videos or subbing until this is changed. I suggest the For Humanity podcast if anyone wants a similar, but free, version of this kind of video.

  • @tearlelee34 (a month ago)

    Keep speaking out. We definitely have a Moloch problem. Multiple enterprises and governments are racing towards ASI. This is the real problem. All enterprises racing towards ASI must solve alignment. All of the creatures must remain benevolent for eternity. Note: AI bots have already formulated their own language. What happens if AGI systems formulate a language we cannot decipher? Congress is so asleep at the wheel. This week we heard the announcements: Her is here. What happens when David from the Alien franchise shows up? You don't want to befriend David. Her and David, the new Adam and Eve.

  • @howiedick6857 (a month ago)

    Moloch😂😂 get your religious nonsense out of here

  • @andybaldman (a month ago)

    The system dynamics driving Sam and OpenAI to compete and dominate are the same dynamics that will drive AI to compete and dominate. The fact that people can't see this obvious property in the cognitive systems we ALREADY have will be looked back on as our biggest mistake, when things eventually go off the rails.

  • @josephkania642 (a month ago)

    1st

  • @samuelkirz (a month ago)

    second

  • @donrayjay (a month ago)

    Paid content found free elsewhere = unsubbed

  • @angloland4539 (a month ago)

  • @merocaine (a month ago)

    Damon by Daniel Suarez

  • @dancingdog2790 (a month ago)

    Daemon (I'm looking at the copy on my bookshelf)

  • @merocaine (a month ago)

    @dancingdog2790 It's pretty good.

  • @WayneSanders-hl8ci (a month ago)

    Robert, watch your video: you have a distracting habit of covering your mouth with your hand as you speak.