Darren McKee on Uncontrollable Superintelligence

Science and Technology

Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.
Timestamps:
00:00 Uncontrollable superintelligence
16:41 AI goals and the "virus analogy"
28:36 Speed of AI cognition
39:25 Narrow AI and autonomy
52:23 Reliability of current and future AI
1:02:33 Planning for multiple AI scenarios
1:18:57 Will AIs seek self-preservation?
1:27:57 Is there a unified solution to AI alignment?
1:30:26 Concrete AI safety proposals

Comments: 4

  • @k14pc
    6 months ago

    great guest. fwiw my views essentially match his. will definitely check out the book

  • @AbsoluteDefiance
    6 months ago

    Very sharp fellow.

  • @dougg1075
    5 months ago

    Since when does the human race care that much about safety? Especially when there is money to be made or a goal to be accomplished.

  • @Pearlylove
    3 months ago

    Listened for 30 minutes and no real info yet, just that people are supposedly so dumb that it's hard to explain general intelligence to them? People are not that dumb, but they might lose interest at this pace, so hopefully you soon explode with info in the next minutes!
