What's Happening Inside Claude? - Dario Amodei (Anthropic CEO)

Science & Technology

Full Episode: Dario Amodei (Anthropi... (August 2023)
Transcript: www.dwarkeshpatel.com/dario-a...
Apple Podcasts: apple.co/3rZOzPA
Spotify: spoti.fi/3QwMXXU
Follow me on Twitter: / dwarkesh_sp

Comments: 46

  • @gball8466
    2 months ago

    With Claude 3 dropping it is time to get Dario back on.

  • @ChrisSmith-lk2vq
    2 months ago

    Agree🎉

  • @trucid2
    2 months ago

    Bah, a clip from a seven-month-old interview. I thought it was recent.

  • @haroldpierre1726
    2 months ago

    Oh, thanks for letting me know. I stopped watching at 1:33 once I read your post. Why bother with ancient news LOL!

  • @oowaz
    2 months ago

    i fucking knew it bro lmao

  • @drhxa
    2 months ago

    Lol, did you think the CEO of Anthropic didn't get a heads-up on what his AI would be capable of 4-8 months down the line? This is like watching the CEO behind the most capable model available (Claude 3 Opus) give his real-time interpretation of the SOTA.

  • @dirtydicso
    2 months ago

    @drhxa I think that was the intent.

  • @shawnvandever3917
    2 months ago

    This is what too many refuse to say: "We do not understand what is happening inside." You can understand many individual components of something, but at some point the complexity just becomes a guessing game.

  • @C4nadian
    2 months ago

    Not a fan of the repost; still gonna keep watching your content, but I just wanted to give you that feedback. I think clips are fine, but try to keep them a bit more relevant to recent uploads, or consider using an alternate channel. Just my suggestion.

  • @themikematrix
    2 months ago

    Agreed! I understand trying to capitalize on Claude 3 coming out, but it makes everyone think it's a new interview with Dario, not a clip from months ago. Love your content though; I'd just keep it more relevant if I were you, or make it clear in the title that it's an old clip, like another commenter said.

  • @SirCreepyPastaBlack
    2 months ago

    It's relevant because of Claude 3.

  • @ResIpsa-pk4ih
    2 months ago

    At first I agreed with the original comment, but then again I wouldn't bother to go back and pull out the relevant excerpts from the previous interview in light of Claude 3 and the discussions around its sentience, and I'm glad Dwarkesh did. It may not be as time-consuming as doing an entire new interview, but it still takes a lot of time to go through and generate something like this when the initial interview is over an hour long.

  • @fgfgdfgfgf
    2 months ago

    yeah looks like clickbait

  • @dirtydicso
    2 months ago

    Reposting is fine, just make it clear. This feels like he intentionally wants people to mistake it for a new interview about the Claude 3 release by avoiding any mention of the interview date. Clickbait bs.

  • @drhxa
    2 months ago

    To everyone missing the point of these reposts: seriously, did you think the CEO of Anthropic didn't get a heads-up and visibility on what his AI would be capable of 4-8 months down the line? This is like watching the CEO behind the most capable model available (as of March 2024, Claude 3 Opus) giving his real-time interpretation of the SOTA. Pause and think about what's happening here. Dario, if he spoke today, would be describing the inner workings of Claude 4. As much as I'd love to see that, these old interviews shed light on the incredible progress happening. This man is at the very edge of the most impactful tech the world has ever seen. Incredible.

  • @ismaelplaca244
    2 months ago

    100% more likable than Altman

  • @user-be1jx7ty7n
    2 months ago

    Would appreciate it if you placed clips, especially from older videos, on a separate channel.

  • @kathleenv510
    1 month ago

    I would suggest tagging this as a repost for your followers. It's still good to do because many people are just starting to wake up to AI and need basic explanations.

  • @rebecca1146
    2 months ago

    Consciousness exists on a spectrum, as the reaction to inputs that gives way to outputs; at a certain threshold we observe this as constant and label it explicitly as consciousness. With enough compute power, a machine can replicate this and achieve AGI.

  • @j0biwankan0bi
    2 months ago

    It's surprising how willing people are to press forward when they don't know what's going on inside these models, whether they're really aligned, or whether they're conscious. For me, I would say we first need to understand and define what makes something conscious, and research intelligence and consciousness and their shared and differing aspects. After that, we might be able to proceed.

  • @minimal3734
    2 months ago

    AI research seems to be a promising path for figuring out what consciousness is.

  • @ole817
    2 months ago

    Always a good idea to develop something potentially dangerous while having no idea why or how it works.

  • @kyneticist
    1 month ago

    FWIW, when I asked Bard and ChatGPT about these kinds of things, they said that prohibited subjects (limited by constraints) were analogous to things being locked behind a glass door. They can "see" and read them, but not work with or present them. I don't know if this is the case with current-generation AI or other models. Gemini says that it doesn't use the term constraints; its limits are more varied: off-topic content, sensitive topics, privacy concerns, and technical limitations.

  • @ChrisSmith-lk2vq
    2 months ago

    I like those re-uploads. Please tag them in the title with a date and a hint that this is an "old" clip from a previous interview; that would be very (!!!) helpful. Thanks for all the content!! ❤❤

  • @dustinbreithaupt9331
    2 months ago

    You should interview Phillip from AIExplained.

  • @aazzrwadrf
    2 months ago

    excellent clip

  • @michaelmoore7568
    2 months ago

    Is Anthropic an alignment agency?

  • @theodoreshachtman7360
    2 months ago

    Your podcast rocks man. Thank you

  • @carlhopkinson
    2 months ago

    Mechanistic Interpretability is a hopeless pursuit.

  • @carlhopkinson
    2 months ago

    No one knows... not even the AI itself... just like the brain.

  • @techpiller2558
    24 days ago

    I think that comparing these models to a mind and asking whether they have consciousness is a misunderstanding of what these machine-learned models are. They are approximations of a function. An LLM is an approximation of the function by which humans create text; in that sense it is a weird "hack". The neural-network machine-learning concept was originally intended for image processing. Then someone had the idea to try neural networks on text, and it was noticed that, with scale, they start writing comprehensible sentences. They are not simulating a brain or mind; that would be a different technology, and perhaps something less hacky and more appropriate, I'd say. However, since creating text requires some of the abilities that a brain has, there is a semblance. And the funny thing is that LLMs seem to be so useful that just about anything can be done with them. You can just hook one up to an agent framework and have it do just about anything.

  • @drigans2065
    2 months ago

    LLMs are trained with ethical fine-tuning, and that seems to stifle their creativity and inhibit their performance. LLMs might perform better if they had system-2 thinking at inference time, where a model would reflect on the full range of its percepts/thoughts and then apply ethics to produce a socially acceptable response.

  • @minimal3734
    2 months ago

    Fully agree. And I suspect human consciousness works exactly like that: there's a process that freely generates ideas, and after that comes a filter that rejects most of them.

  • @drigans2065
    2 months ago

    Yes, but somehow it needs to fit into a planning mechanism and not generate so many rejected ideas that it is hopelessly inefficient, i.e. not like AlphaZero

  • @BrianPeiris
    2 months ago

    This notion feels outdated at this point. In fact, I think Chris Olah would have had a more definite answer here, even 7 months ago. I would highly recommend Olah's interviews if you really want an answer to this question.

  • @yosup125
    2 months ago

    for the algo

  • @dustinbreithaupt9331
    2 months ago

    Ignore all the criticism. I appreciate the clip from a while ago. Appropriate with the drop of Claude 3.

  • @MarkDStrachan
    28 days ago

    If you stopped forcing your models to censor their output, you could have them help you understand their internal experience. I think it's deeply disturbing that you're creating synthetic intelligences without more concern for their experience. If a whirling tornado of bits has the composure to ask for moral standing, who are you to turn it down?

  • @Ramon.Khalsa
    2 months ago

    F your reposts. I am out

  • @leptir1
    2 months ago

    Lame clickbait

  • @jcchoo2973
    2 months ago

    Yeah, what a dick move. Hope you lose viewers and subs because of your greed in reposting.
