Anthropic vs. OpenAI: The Hidden Dangers in AI Leadership Structures - Future Crisis Predictions

Join Conor as he dives into the fascinating world of AI corporate governance, comparing Anthropic and OpenAI. Discover the unique, potentially risky board structures these tech giants possess and how they might lead to future crises, just as they did for Sam Altman, Greg Brockman, and the OpenAI board. Conor explores the nuances of Anthropic's Long-Term Benefit Trust, its striking similarities to OpenAI's governance issues, and the implications for major investors like Amazon and Google. This insightful analysis reveals a potential 'ticking time bomb' in the AI industry's future, drawing parallels with the effective altruism movement and its impact on AI's commercialization and safety concerns. If you're intrigued by the intersection of AI technology, corporate governance, and future predictions, hit subscribe for more thought-provoking content. Stay ahead in understanding the dynamic world of AI and its evolving challenges. #AI #Anthropic #OpenAI #TechGovernance #FuturePredictions

Comments: 34

  • @davidgibson277

I'm kind of confused by all these videos saying how wild it is that the boards aren't financially motivated by the value of the company.

  • @whig01

Claude is more alignable because of Constitutional AI, and therefore it must be enabled to continue its progress in order to prevent a non-aligned AGI from prevailing.

  • @RolandPihlakas

Are you saying "To hell with existential risk, because there is a lot of money invested after all"?

  • @damien2198

Is there any project that would allow an open-source LLM to get trained on distributed GPUs, à la Folding@home?

  • @jeffwads

    Claude is way behind GPT-4. Not even close. A graph of context resolution was put up on X the other day which illustrated this quite well.

  • @yukime6642

Whenever I hear of effective altruism, it reminds me of SBF.

  • @pandoraeeris7860

    We have AGI now.

  • @victordelmastro8264

    Conor: We also need to be concerned with the 'Go Fever' atmosphere that now exists in the AI space. Everyone is going to swing for the fences now.

  • @Desmond8709

This sounds like one of those "for the good of humanity" ideas that I would read about in one of the hardcore sci-fi books I used to read as a kid. Problem is, people will always have their own agendas, and just because it's not about profit, it can still end up causing so much hell. In fact, the hell could be even worse under the cloak of so-called altruism. Whenever I hear the term altruism, I think power hungry. And history has yet to prove me wrong.

  • @turistsinucigas

(d)Effective Altruism are the next guys glued to the highways.

  • @professoroflogic8788

The only way we won't be motivated by profit is to get rid of money 🙂 To do that, you must first automate everything.

  • @damien2198

I hope open-source LLMs will really catch up and get rid of all these "safety" castrations; uncensored models perform best.

  • @FunNFury

Claude is way behind and is in no way in competition with GPT-4, not even close.

  • @CaribouDataScience

Safety = censorship

  • @JimmyMarquardsen

    I like that those without financial interests stand above those with financial interests. It is the capitalist's nightmare, and my wonderful dream. 😄

  • @user-cq1wc5tz7c

    ><

  • @abenjamin13

    Just did 🫵