AI Doom Debate: George Hotz vs. Liron Shapira

Today I’m going to play you my debate with the brilliant hacker and entrepreneur, George Hotz.
This took place on an X Space last August.
Prior to our debate, George had debated Eliezer Yudkowsky on the Dwarkesh Podcast: • George Hotz vs Eliezer...
Follow George: x.com/realGeorgeHotz
If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to KZread videos - / @doomdebates
4. Follow me on Twitter - x.com/liron

Comments: 11

  • @TG-cx9ci · 4 days ago

    Some responses from the perspective of a math PhD student with an interest in comp sci:

    Personally, I think the distinction between an S-curve and a super-exponential "foom" curve is a superficial disagreement to get caught up on. My opinion is that if the AI you make is on par with or above human intelligence in its ability to create abstract thought, its sheer speed and acuity of thought give it such an edge that we don't have much hope of stopping it.

    Regarding "Kasparov vs. the World": to my knowledge, it wasn't just the world; it was the world plus several rising young chess stars who were suggesting moves. I could be wrong about this, but if true, that fact puts the match much closer to the "Magnus vs. the Lesser 10" scenario he posed.

    Lastly, I've not seen one convincing argument that demonstrates Pr(not doom) > epsilon. Considering an idealized space of all possible superintelligences, I see several depressing observations. First, I think it's reasonable to assume the set of all possible SAIs is uncountable, and thus it seems to me that Pr(friendly AI) = 0, since I'm reasonably certain there's no large swath of friendly SAIs in that set. Second, regarding the subset of SAIs buildable using current methods (computers + SGD + transformers, etc.), which would seem finite for some fixed amount of computing power, I've yet to see a convincing argument for the existence of even one SAI that is friendly to us. Finally, even if we do make a friendly AI, I think it's reasonable to assume we are never in control again: why would it elevate us, possible competitors, to godhood when it still has its own goals?

  • @iansamir18 · 18 days ago

    Super interesting. Hotz is obviously intelligent but appears to be completely missing your points, responding instead to his own simulation of your arguments, which misses the correct foundation entirely. I wonder why, even after reading LessWrong, he still does this.

  • @user-yl7kl7sl1g · 13 days ago

    Would love to see more debates/discussions. This debate is really about how much more efficiency an AI can gain in a feedback loop, and whether it can find one weird trick to exploit the rest of humanity. If hardware is the bottleneck (which I think it is), and hardware increases gradually, then society has time to keep AIs aligned and to build defenses when AI models point out potential vulnerabilities, for example more powerful firewalls to protect the economy.

  • @DoomDebates · 13 days ago

    > Would love to see more debates/discussions
    Have you seen my channel? :)

  • @user-yl7kl7sl1g · 13 days ago

    @DoomDebates Excellent work!

  • @gradient.s · 20 days ago

    Love the debate. Don't mind me, but I think you should've waited till George completed his whole sentence or thought for each argument; most of the time his argument was cut off in the middle. I was just curious to hear it in full, that's all. Otherwise, it's great!

  • @ParneetSingh-cj1bq · 16 days ago

    Agree!!

  • @goodleshoes · 16 days ago

    Intelligence is required to move the rock...