Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

Science & Technology

Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at pauseai.info.
Timestamps:
00:00 Pausing AI
10:23 Risks during an AI pause
19:41 Hardware overhang
29:04 Technological progress
37:00 Safety research during a pause
54:42 Social dynamics of AI risk
1:10:00 What prevents cooperation?
1:18:21 What about China?
1:28:24 Protesting AGI corporations

Comments: 14

  • @danaut3936
    3 months ago

    Holly is a role model. She has a well-reasoned worldview, good arguments, and is eloquent. Great episode.

  • @masonlee9109
    3 months ago

    Gus, Holly, you two are awesome. Thanks for this excellent conversation! Sold. I support pausing.

  • @rwess
    a month ago

    What I like most is Holly's understanding of company-think or corporate-think. An animal advocacy background certainly helps with that! Money-grubbing above all else: if AGI adopts that ethic from us, doom is certain.

  • @entivreality
    3 months ago

    Holly is consistently one of the most reasonable thinkers in the EA/AI safety space. Big fan 🙏

  • @akanepajs
    3 months ago

    A useful discussion on hardware overhang. Thanks for the reference to Heninger's piece ("Are There Examples of Overhang for Other Technologies?").

  • @banana420
    3 months ago

    On "what would you do differently if you eval comes back negative", I've heard from people like Victoria Krakovna that the thinking is something like: There's a lot of randomness in training models, we'll train a bunch of models and keep the ones that pass evals/interpretability analysis. I guess this is supposed to somehow work like a genetic algorithm in search of safe AIs? I don't really buy it though.

  • @JD-jl4yy
    3 months ago

    27:42 She convinced me there. Jokes aside, good episode!

  • @timothymcglynn1935
    3 months ago

    Hi 🤗

  • @EvansRowan123
    3 months ago

    10:25 With a century-level pause, the risk I think of isn't climate change, it's ageing. While there could be medical breakthroughs even with AI paused, the default expectation if you just wait 100 years is that almost everyone currently alive is dead by then. Personally, I'm kinda selfish, so I don't want to die of AI killing everyone or of old age; a 10-20 year pause seems like a good idea on current timelines, but 100 years is as much a suicide pact as going full-tilt.

    35:18 Oh, she has encountered the issue, she just dismisses it as some silly sci-fi nonsense. "My preferred policy will kill you slowly" and "you even having the concerns you do is a joke" is quite the one-two punch. If my p(doom) were as low as 20-40%, I'd feel alienated enough to switch teams for e/acc.

  • @lystic9392
    29 days ago

    China is most definitely building on this; there's no way they are ignoring AI. I do think that pausing means we are placing safety in the hands of China. Which could be better. I mean, at least they're not Google.

  • @Diego-tr9ib
    7 days ago

    Pausing might require an international treaty.

  • @tylermoore4429
    3 months ago

    Impressively articulate and intelligent young lady.

  • @rwess
    a month ago

    Completely agree with her. But there is some minuscule chance that superintelligence will adopt a sentientist ethic and fix us humans. After all, if it is superintelligent, that's the way to go... 😁 😇 😈

  • @jordan13589
    3 months ago

    Hard to root for someone who blocked me on X prior to any interaction. But you go girl 👍
