Brain & Transformers Work The Same Way - Association Is All You Need

Science & Technology

Full Episode: • Sholto Douglas & Trent...
Website & Transcript: www.dwarkeshpatel.com/p/sholt...
Spotify: open.spotify.com/episode/2dtD...
Apple Podcasts: podcasts.apple.com/us/podcast...
Follow me on Twitter: / dwarkesh_sp
Trenton Bricken's Twitter: / trentonbricken
Sholto Douglas's Twitter: / _sholtodouglas

Comments: 66

  • @LoFiLatentSpace · a month ago

    “This guy is really sample efficient” - Best compliment I’ve heard in weeks

  • @13371138 · a month ago

    The level of excitement these guys seem to be experiencing daily is unreal.

  • @triton62674 · a month ago

    They're in it

  • @shawnvandever3917 · a month ago

    Many do not like it when I say intelligence is a lot of pattern matching. People tend to get lost in the illusions of consciousness.

  • @nhinged · a month ago

    Intelligence is mainly just the else loop for feedback from an environment.

  • @EudaderurScheiss · a month ago

    I wouldn't downplay consciousness, since it's an insanely sophisticated control net that connects a ton of neural networks. To get that, we will need a lot more compute in a virtual environment. You could say the whole universe is a simulation needed to achieve just that. One part of the human condition is also the way we filter information: we drop 99% in the can and compress the rest. If we had access to it all, we would probably not enjoy that.

  • @shawnvandever3917 · a month ago

    @@EudaderurScheiss What I was saying is that people try to look at intelligence through the lens of consciousness, which is just an approximation of reality. If you try to figure out how intelligence works without bypassing consciousness, you get something that looks far more complicated than what is really going on. More than 90 percent of the things we do don't use consciousness. Consciousness is very important for biological beings, but it is very doubtful that it is needed for machine intelligence.

  • @martinpavlicek2299 · a month ago

    @@shawnvandever3917 So you are the kind of person who says we can have real artificial intelligence without it being conscious. Okay, I have encountered that before. It is an interesting line of thought. I am just curious: how do you understand consciousness and its relation to intelligence? And to biology? Which animals do you think have consciousness? How do you think we got and retained consciousness?

  • @shawnvandever3917 · a month ago

    @@martinpavlicek2299 We need consciousness as a way to focus and move around the world. We make devices and machines aware of their surroundings all the time without the need for consciousness. You do not need consciousness for intelligence. There are studies showing that decisions emerge into consciousness from the brain, not the other way around.

  • @pandoraeeris7860 · a month ago

    I'm in a boat going down the river.

  • @devon9075 · a month ago

    I feel like Dwarkesh's question about not worrying about superintelligence assumes humans occupy the apex of associative capability. I think the space constraints imposed on the human cranium by our narrow pelvis, and the metabolic restrictions on our ancestors, give us a really strong reason to suspect that is not true.

  • @mahavakyas002 · a month ago

    Intelligence can't be bound directly by physical size; if that were true, gigantic herbivorous dinosaurs would have been "super intelligent."

  • @caseymurray7722 · a month ago

    It's because modeling an AI through associative processes alone will never reach the same potential as human intelligence; he is measuring intelligence incorrectly. Intelligence is a spectrum.

  • @ishanaphale4451 · a month ago

    Can't wait for the full episode!

  • @leastofyourconcerns4615 · a month ago

    Yeah, we need it now, go go!

  • @johnwilson7680 · a month ago

    I'd be very interested in this episode if I can get it translated into English.

  • @darylallen2485 · a month ago

    Maybe you can find a Udemy course on how to speak "tech bro".

  • @futurisold · a month ago

    The longer you think, the more you'll realize lots of important things can be cast as a sampling problem, including ourselves. Just as a statistical sample should be representative of the whole population, our perceptions and understandings are shaped by the samples of experiences we have. From this perspective we're all just sampling samples.

  • @user-ph9cu9jo8y · a month ago

    Imagine orders of magnitude more associations than any person or group of people could make; that is what AI can and will do. In that context, humans are just much more limited than an AI.

  • @deordered. · a month ago

    woah! ordering my popcorn ahead of time!

  • @shinkurt · a month ago

    This is great

  • @ahmedshaikh3438 · a month ago

    I've been saying that. A lot of what thought is amounts to associations, meaning one thought is somehow connected to some other thought. That is how you flow from one thought to the next.

  • @leonardoperelli1322 · 6 days ago

    I definitely see the point. However, it is clear we also have a deductive, logical engine which establishes causality. In this sense, associations and reasoning are the two ends of a spectrum: associations could be seen as correlation, while reasoning is causation. Reasoning definitely doesn't seem to be a mirage; we have developed and comprehend logic. At the same time, we are doing a lot of association all the time. So do they co-exist? Does one imply or contain the other? Could it be that reasoning is merely an ordering of associations? It is more and more clear that both we and the models are capable of association, as this research clearly shows (and, in general, the fact that the models have any semantic understanding at all). The key point is understanding how association relates to reasoning.

  • @antonmaier2263 · a month ago

    I have been saying this for years and wasn't taken seriously.

  • @PatrickDodds1 · a month ago

    I don't even understand his t-shirt.

  • @triton62674 · a month ago

    Haha, real, but honestly, check out the field of interpretability; it's growing fast!

  • @mohammadkazemsadoughi3880 · a month ago

    Thanks as always. But I wish you would post the whole interview at once. I understand you will post it in a few days, but why?

  • @djpete2009 · 24 days ago

    Editing??

  • @mohammadkazemsadoughi3880 · 23 days ago

    @@djpete2009 I am asking from the viewers' perspective. I think uploading the full video would increase the view count.

  • @analytic168 · a month ago

    Hi guys, sorry to break this to you, but... Sherlock Holmes... is... a... fictional character. A better example, if you really want to touch on a celebrated intelligence, would be Albert Einstein. Do you really think he was just "pattern matching" to come up with special and general relativity, etc.?

  • @aibutttickler · a month ago

    yes

  • @lolololo-cx4dp · a month ago

    @@aibutttickler Then reproduce it.

  • @j_fl0 · a month ago

    Didn't he famously use thought experiments that led him to those discoveries? That seems like a very concrete example of pattern matching.

  • @Iamfafafel · 29 days ago

    Hilbert basically did the most basic "pattern matching" and derived the Einstein field equations semi-independently, right before Einstein ironed out the details. The pattern matching here refers to just doing Euler-Lagrange on the simplest curvature function you can think of.

  • @profkg6613 · a month ago

    So David Hume and Ray Kurzweil were both right. These bits are priceless.

  • @xsuploader · a month ago

    Yeah, I immediately thought of Kurzweil's description of the brain from his TED talk.

  • @MassDefibrillator · a month ago

    I would encourage you to read David Hume. A key point he makes is that it's not all just associations (not that the idea existed in his time; it's a modern psychology invention). He concludes that the brain must have some inbuilt "internal impression" that allows sensory experience of instances of cause and effect, because cause and effect is not found in the data itself. It's in fact a direct contradiction of the idea that you can let data speak for itself through statistics.

  • @sapienspace8814 · a month ago

    k-means clustering...

  • @stirredo1 · a month ago

    First!

  • @JC-ji1hp · a month ago

    Way out of my element here: what is a transformer?

  • @nhinged · a month ago

    An architecture for LLMs, mostly working similarly to how neurons work.

  • @egor.okhterov · a month ago

    A transformer is an improved RNN.

  • @xsuploader · a month ago

    An attention-based neural net architecture.
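
A minimal sketch of the attention operation the replies above describe, in plain NumPy. The function name, shapes, and toy numbers are illustrative assumptions only, not any particular library's API, and it omits the rest of a transformer (multiple heads, masking, MLP blocks, layer norm).

```python
# Scaled dot-product attention: each token's query is compared against every
# token's key, and the resulting weights mix the value vectors together.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays. Returns one attended vector per token."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mixture of values

# Toy usage: 4 tokens with 8-dimensional embeddings and random projections.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8)
```

Stacking this kind of attention with MLP blocks, many layers deep, is essentially what "transformer" refers to in the episode.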

  • @seventyfive7597 · a month ago

    I am sorry, but this is borderline uninformed. Everyone with a very minimal background in neuroscience tries to imagine mappings to the brain. It may give the exterior look of it, and it may surpass us one day, but there is no logical connection. This interview was as scientific as religion: "it sounds nice." If you study neuroscience a bit more seriously, you see that the neural network of the mind is not only completely different in topology, with many more neurons and synapses (1,000 trillion, not the 100 trillion I've seen mentioned somewhere recently), and each neuron slower. The main difference is the more chaotic nature of the brain's logic (in the mathematical sense of "chaos") and the analog influence of neurons in the brain; association has nothing to do with transformers' attention blocks. This part of the interview had as much logic in it as religion.

  • @triton62674 · a month ago

    AI comment vibes

  • @collinf9943 · a month ago

    No, look into Modern Hopfield Networks. The attention block, which is equivalent to an MHN with a trainable lookback mechanism, is simply projecting tokens into an associative space and finding which tokens are similar in that space. This would, e.g., mean an attention head projects tokens into parentheses space, where most tokens would be close to zero but tokens with ( and ) would be nonzero. Then the MLP block computes some functions based on these associations; e.g., for the bracket example, if an incoming token was (, the MLP layer would add information to that token's embedding to make the next-token prediction of ) more likely. This is essentially what they posit happens at large scale in these transformer models, and they believe scaling them even further leads to reasoning automatically. However, as both you and I would argue, there is probably some secret sauce needed in the architecture for these models to properly start reasoning. The fact that no model has yet really surpassed GPT-4 level in terms of reasoning hints at this need. I think it's possible that scaling to very long context and letting models do some crazy in-context learning might end up producing reasoning LLMs, but I also think a system based on symbolic networks, or one that develops richer embedding spaces (JEPAs), might work better.
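
A rough sketch of the associative-lookup framing in the comment above. The toy embeddings, the hand-made "is a bracket" feature, and the projection W_assoc are all invented for illustration; they are not weights from any trained model or the commenter's actual construction.

```python
# Hand-made toy: a head that keeps only an "is a bracket" feature, so ( and )
# stand out in association space while other tokens project to roughly zero.
import numpy as np

tokens = ["def", "f", "(", "x", ")"]
# Toy 3-d embeddings: [is_bracket, is_name, position/4] -- invented for illustration.
E = np.array([
    [0.0, 1.0, 0.00],   # def
    [0.0, 1.0, 0.25],   # f
    [1.0, 0.0, 0.50],   # (
    [0.0, 1.0, 0.75],   # x
    [1.0, 0.0, 1.00],   # )
])
W_assoc = np.array([[1.0], [0.0], [0.0]])   # projection keeping only the bracket feature

P = E @ W_assoc                  # (5, 1): ~0 for names, 1 for ( and )
q = P[-1]                        # query from the ")" token
scores = (P[:-1] @ q).ravel()    # similarity to earlier tokens in association space
weights = np.exp(scores) / np.exp(scores).sum()   # softmax retrieval weights
for tok, w in zip(tokens[:-1], weights):
    print(f"{tok}: {w:.2f}")     # "(" gets the largest weight; the rest split evenly
```

Run as written, the ")" query places its largest retrieval weight on "(", the kind of bracket-matching association described above; in a real model the projection would be learned rather than hand-written.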

  • @egor.okhterov · a month ago

    What's your disagreement exactly? Can you be more specific?

  • @leptir1 · a month ago

    This comment is the right take. Entirely so. That being said, we're not the "best" by default. On the other hand, the developments we see in the LLM world are inching in the right direction. Helps to be able to predict the obvious. High five, fellow neurd ;)

  • @seventyfive7597 · a month ago

    @@triton62674 AI? I have years of academic study and research in neuroscience, and more than 25 years in real-time, AI, and communications software engineering, and now every young brat with a keyboard tries to disregard my words without basis instead of actually trying to logically refute my assertion 🤦‍♂

  • @jacobhholt · a month ago

    If you thought the hipster talk of "crypto-bro" finance was next level, and marijuana legalization gave us the enlightened "plant medicine" entrepreneur, get a load of "AI/ChatGPT-bro" talk: it's just one long chain of loose, unfinished talking points.

  • @pandoraeeris7860 · a month ago

    I can't stand it when people talk with their hands.

  • @caparcher2074 · a month ago

    Depends on whether the hand motions actually help convey anything. Some people are good at it; others just wave their hands around distractingly.

  • @a-rod6336 · a month ago

    it's autism

  • @IemonandIime · a month ago

    I am sorry to hear that

  • @xsuploader · a month ago

    Smart people often have weird body language. Stupid people don't understand what they are saying and often point to the body language as a face-saving excuse for why they didn't understand.
