The Alien Mind of AI | Robert Wright & Steven Pinker

Subscribe to The Nonzero Newsletter at nonzero.substack.com
0:49 AI's human-like, but inhuman, language skills
6:58 Bob argues that LLMs don’t vindicate the ‘blank slate’ view of the mind
18:32 Do humans and AIs acquire language in totally different ways?
30:47 Will AIs ever quit hallucinating?
39:23 The importance (or not) of “embodied cognition”
47:26 What is it like to be an AI?
53:24 Why Steve is skeptical of AI doom scenarios
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Steven Pinker (Harvard University, Enlightenment Now, The Language Instinct). Recorded May 21, 2024.
Twitter: @nonzeropods

Comments: 74

  • @r3lativ (a month ago)

    Pinker continues to be extremely impressive. He is clearly informed about all this beyond basic pop science and he can communicate it in a very accessible way.

  • @endoalley680 (29 days ago)

    True Dat!

  • @randeepchauhan2668 (a month ago)

    Pinker even looks like he's from the Enlightenment period.

  • @fullmatthew (a month ago)

    Haha yes, the Pinker hair is iconic

  • @squamish4244 (a month ago)

    He is bewigged.

  • @ClayFarrisNaff (24 days ago)

    Pinker is, as always, brilliant and illuminating. I learned a lot listening, but I'm puzzled over his agreement with Wright (who is also well worth listening to) that anything may be sentient. (This is a stance Wright took decades ago, and I've always disagreed.) Wright tosses out the example of a thermostat -- but for that matter, why not a rock? What's odd to me is that a few beats later, Pinker notes our tendency to overascribe consciousness, such as to a Rogerian chatbot (one that just rephrases input as a question) or even to animated triangles. This tendency surely points to the adaptive importance of recognizing intentionality in other humans ... which Pinker earlier suggests is essential to human language-learning. This comes close to Nicholas Humphrey's theory of consciousness as an effect of adaptation to complex, individual-centered sociality. Thermostats are subject to no selective pressure for sentience; they don't socialize, and they can perform their function just fine without self-awareness. It therefore makes no sense to me to suggest that they might have this costly feature.* For similar reasons, it makes no sense to me to impute consciousness to trees (even if they do exchange chemical signals), fleas, or algae. Bees? Well, maybe. *Costly, in that unless it's magic, it must be computational, and computation requires energy.

  • @gingerhipster (25 days ago)

    A general human intelligence makes mistakes. We're having this conversation all wrong, for reasons related to biases connected to both anthropomorphism and anthropocentrism. Intelligence is imperfect; we get around this through collaboration. We have AGI now, in potentia if not in practice; what we don't have is perfect AGI. Perfect AGI may be impossible, but if it's not, it'll come from collective artificial intelligences operating in shared purpose.

  • @Zidana123 (a month ago)

    I think I finally get why Bob is so enthusiastic about the topic of AI. The thrust of his career has been something like building and articulating a model of the mind of the other. But up to this point, the mind of the other has always been humies, which, though animated by different cultures and narratives, is in its fundamentals basically the same. Now, though... we have the actual Other, the unfathomable and essentially alien machine-mind! How delectable! How exciting!

  • @MrPhilosopher1950 (a month ago)

    At the very start, I’m like “are they AIs” 😅

  • @charlesalexanderable (a month ago)

    21:00 I think this is a bit off; the multidimensional vector representation is actually part of the explicit architecture (the encoder is set up to learn embeddings, and attention heads learn related ones to combine). The determination of the values in those vectors is a learned process, not explicit, though. But the act of putting tokens into embedding-vector form is not something that just emerges; it's explicit in the transformer architecture.
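
    A minimal sketch of that distinction in Python (using PyTorch as an illustration; the sizes here are made up, not from the video): the embedding lookup is hard-wired into the architecture, while the numbers inside the embedding table are what gets learned.

        import torch
        import torch.nn as nn

        VOCAB_SIZE, EMBED_DIM = 50000, 512   # hypothetical sizes

        class TransformerInput(nn.Module):
            def __init__(self):
                super().__init__()
                # The lookup table is explicit architecture: every token id
                # is mapped to a vector by design, not by emergence.
                self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)

            def forward(self, token_ids):
                # The *values* returned here are set by training via
                # gradient descent; the designer never picks them by hand.
                return self.embed(token_ids)

        model = TransformerInput()
        vectors = model(torch.tensor([[17, 404, 9]]))  # shape (1, 3, 512)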

  • @michaelbarker6460 (11 days ago)

    Yeah, and just to add to this a bit: I'm not saying I'm an expert in machine learning at all; in fact it's the opposite, I just dabble in it for fun. But if you want to train your own models, I think it becomes clear to most people fairly early on that it's just multivariable gradient-descent calculus approximating a function. So in that sense, the mathematical sense, we know EXACTLY what it's doing, because we built it to do it.

    I think the discrepancy comes when you translate that for people who don't have a lot of experience with machine learning, computers, or math in general. Saying it's just fancy calculus doesn't mean anything, so you have to come up with analogies for each step, and naturally someone can ask you to go more in depth at any point in the analogy. Trying to make complete sense of our analogies is, I think, where this idea of "we actually have no idea what they're doing, it's like they have a mind of their own" comes from.

    We can say the computation is finding many different layers of patterns in the data. People will then ask, "Well, how does it know what patterns to find?" The actual answer is again "multivariable calculus," but we can't say that, so instead we develop the analogy further. We say it's detecting things like edges, shapes, groups of things, borders, etc. And people will ask, "Well, how does it have a concept of those things? Where is that coming from? How does it know how to put them all together?" To which the answer is: it doesn't; it's just doing gradient descent at different layers of resolution of the images.

    And this goes on and on. We do have a good answer (it's a huge number of calculations), but in trying to make it accessible to others, I think we're just going to fail to make it correspond with how we ourselves view and interact with the world. Even if that's similar to what our brain is doing, we aren't doing the calculations in our first-person experience of the world; rather, we are the culmination of those calculations (IF it's like this, not that it is).
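
    To make the "it's just gradient descent approximating a function" point concrete, here's a toy sketch in plain Python (my own example, not from the video): fitting y = 2x + 1 from noisy samples. Nothing in the loop "knows" anything; it only follows the gradient of the squared error downhill.

        import random

        # Noisy samples of the target function y = 2x + 1.
        data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(-10, 11)]

        w, b, lr = 0.0, 0.0, 0.001   # parameters and learning rate

        for step in range(5000):
            # Gradients of the mean squared error with respect to w and b.
            gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
            gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
            w -= lr * gw   # step downhill
            b -= lr * gb

        print(w, b)   # approaches 2 and 1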

  • @geaca3222 (2 days ago)

    @michaelbarker6460 Fascinating, thanks. This makes things clearer to me :)

  • @TheEarlVix (28 days ago)

    Fantastic, cerebral episode. Thanks, Bob, for teeing this up; your input and questions for Steven were truly synergistic!

  • @NicholasWilliams-uk9xu (a month ago)

    Dude, you guys are awesome. That was a great talk.

  • @HighTech636 (a month ago)

    Quite a meaty conversation

  • @olewetdog6254 (29 days ago)

    So I'm guessing that no one who is actually involved in creating these models has ever been interviewed about how they work?

  • @tttrrrification (a month ago)

    I have the same intuition that LLMs are parallel to evolution.

  • @mee834 (a month ago)

    What do you mean? They don't evolve; they learn by using a large data set that is itself a product of evolution (human language). They are inferring rules that are already there, not making up rules by themselves. More like how humans learn physics.

  • @johnnywatkins (a month ago)

    Would a reasonable way to begin teaching an AI intuitive physics, from a spatial point of view, be to give it a 3D CAD program and tell it to make, for example, a castle, and then give the same kinds of corrections you gave when it started trying to structure a sentence?

  • @NicholasWilliams-uk9xu (a month ago)

    Definitely, that sounds like a great idea.

  • @merocaine (a month ago)

    Does a submarine swim? Does a large language model think? This discussion doesn't make much sense to me.

  • @rawkvox (18 days ago)

    Are we just spinning words and extending metaphors, or are we truly discovering how the world works?

  • @yclept9 (a month ago)

    Children learn language by learning to disassemble and reassemble clichés.

  • @kraz007 (a month ago)

    Upvoting the dumbest comment basically

  • @chuckbeattyo (a month ago)

    22:00 Steven's remarks made me wonder how the "taints" breakdown in Buddhism might relate to assigning meaning to words. Has any AI team been informed by the Buddhist breakdown of our thinking? Learning like kids do, using their other sensations to learn meaning. I'm curious whether Bob has considered how Buddhist psychology might plug into AI learning meaning. (Bob's "Why Buddhism Is True" sure led me into it.)

  • @jesselara1441 (22 days ago)

    Robots will have senses beyond humans'.

  • @user-ok9ym9zm9m (a month ago)

    I see all the possibilities 😮

  • @chadreilly (a month ago)

    Paperclipolypse sounds like GDP

  • @odiseezall (26 days ago)

    This type of conversation, which can express the higher-level abstractions of the mechanisms of neural networks, is very useful. But I would not limit myself to System I / world model / embeddings, because the computation is superficial and the algorithms are not designed but arrived at through the evolution of pre-training. System II architectures doing online learning will probably trim and replace a lot of the inefficiently learned substructures.

  • @markstuber4731 (10 days ago)

    8:20-ish or soon thereafter: in terms of natural selection developing brains for language, natural selection would favor human brains that are wired more similarly to the average, most common wiring.

  • @Besseloff (29 days ago)

    I think Pinker really should start questioning a number of his nativist assumptions regarding language per se. There is a huge mountain of very good empirical research making it pretty clear that a number of domain-general cognitive abilities, plus social reasoning, are both necessary and sufficient for the development of language. The principle of parsimony suggests that we do not need to reach for domain-specific language modules in the brain. Michael Tomasello's work in this area is empirically robust and very persuasive. Also, Daniel Everett's writing on the importance of culture poses, I think, a very serious challenge to the Chomskyan approach altogether. Regarding Pinker's forthcoming book, it will be fascinating to see, at least for me, whether there are any overlaps with Dan Everett's book Dark Matter of the Mind.

  • @vijaychandra2002 (29 days ago)

    I don't think nativists like Pinker completely reject domain-general processes for language and other cognitive functions. They just believe that certain areas of the brain are specialized for specific functions. One simply cannot deny this given how damage to certain areas leads to specific deficits. For example, a lesion in the perisylvian regions of the left hemisphere causes aphasia. People with aphasia mainly have language deficits, but that doesn't mean they don't have other deficits like apraxia, agnosia, etc. There would be no point in doing cortical mapping during neurosurgery to spare important areas if there were no specialization.

  • @mitchelllanders8037 (19 days ago)

    The problem with applying the principle of parsimony is that natural selection built the mind, and it doesn't much care for human conceptions of parsimony (or rather, parsimony applies only when there isn't an additional constraint or framework ruling out particular explanations and/or making others more likely). We know that natural selection builds complexity slowly by making small changes over time and building incrementally upon previous designs; we also know that it tends to favor specific solutions to specific problems (i.e., "domain-specific" adaptations: packages of design features particularly well suited to solving particular problems tend to outcompete "domain-general" approaches and thus reproduce themselves more frequently). Given this backdrop, our a priori assumptions should shift: we should expect the mind to contain domain-specific language adaptations before positing domain-general mechanisms. The fact that we may not need to invoke them to explain how we learn language doesn't mean they don't exist.

  • @SmarmyBastards (14 days ago)

    The beginning of this conversation, about how AI "learns," makes me wonder how much children hallucinate while learning the basics. There seems to be a lot of concern about eliminating hallucinations in AI training, but my thought was that maybe that's part of the brain's natural learning process. Hell, I still enjoy a bit of hallucination, LOL!

  • @natokafa5238 (26 days ago)

    My two heroes🎉

  • @dubfitness595 (24 days ago)

    wtf I didn't know Dan Dennett died

  • @jamescoll130 (a month ago)

    Said it before, but you should really have Ed Zitron on for a dissenting opinion on the future of AI.

  • @TheEarlVix (28 days ago)

    If I stumbled upon an old brass lantern buried in the sand on the beach, and if, as I rubbed it, a genie popped out to grant me 3 wishes, my first wish would be to have an evening dinner with Steven Pinker :-)

  • @vijaychandra2002 (a month ago)

    Great conversation! I think you should have a chat with both Paul and Steve together. That's going to be an even greater conversation!

  • @Anders01 (a month ago)

    It seems like natural language has a lot of intelligence packed into it. Language has evolved through thousands of years in very complex ways, so it makes sense. More astounding to me is the AI image generators, how can they do that? Maybe the natural world also has some kind of inbuilt intelligence that the AI models tap into.

  • @honkytonk4465 (15 days ago)

    Images are similar to language

  • @kevinamiri909 (6 days ago)

    I have seen many unpleasant interviewers in my life; sometimes it's hard to recall which one was more unpleasant than another.

  • @johnnywatkins (a month ago)

    BLINK PINKER BLINK!!!

  • @jackohearts66 (a month ago)

    😂 people from the future don't blink

  • @benjaminfranklin7263 (a month ago)

    Games already simulate physics, and there is already embodiment in games. NPCs are embodied; they have spatial awareness, etc. Granted, their AI is not that complex yet, but it's a matter of time until we get games with more complex AI. There is a recent video where ChatGPT was used in conjunction with the game Skyrim to make the NPCs give more interesting dialogue responses. Example (unscripted dialogue generated by ChatGPT): kzread.infoUgkxB2Jby5JQDenK1Jd4FBooGqsC98IksPvC Notice that the dialogue takes into account the character's background story and how Lydia WOULD behave, think, and feel.

  • @darwinlaluna3677 (3 days ago)

    Be careful

  • @Michelle_Wellbeck (a month ago)

    Bob kinda looks like Richard Feynman

  • @cwcarson (a month ago)

    Pinker is the smartest man to spell Steven with a 'v'.

  • @mee834 (a month ago)

    Self-driving cars have rules, but they don't have what humans have: a value hierarchy that makes us follow the rules. A typical human hierarchy:
    1. Protect the lives of the people in the car.
    2. Protect the lives of the people outside the car.
    3. Protect my property (the car).
    4. Protect the property of the people outside the car.
    This gives us the 'why' behind the rules. The AI simply has: red light -> brake. We have the hierarchy to evaluate the rules: red light, yes, but no human or car anywhere in sight, so the rule is less important. Only very narrow-minded people (not the people in this video) impose rules without reasoning (a value hierarchy) behind them. Giving an AI the paperclip order is itself stupid, but if it were given that order and didn't have a value hierarchy, it might actually make paperclips of us. But I am more afraid of robot people than people robots.
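
    A toy sketch of that distinction in Python (my own framing, not a real driving stack): a flat rule fires unconditionally, while a value hierarchy lets the agent weigh why the rule exists.

        # The hierarchy from the comment above, ordered most to least important.
        VALUES = [
            "lives inside the car",
            "lives outside the car",
            "my property (the car)",
            "others' property",
        ]

        def flat_rule(light: str) -> str:
            # The bare rule an AI might have: red light -> brake. No "why".
            return "brake" if light == "red" else "proceed"

        def value_aware(light: str, humans_near: bool, cars_near: bool) -> str:
            # The same rule, but evaluated against what it actually protects.
            if light == "red":
                if humans_near or cars_near:
                    return "brake (protects VALUES[0] and VALUES[1])"
                return "brake (rule still kept, but the stakes are low)"
            return "proceed"

        print(flat_rule("red"))                        # brake
        print(value_aware("red", False, False))        # brake, low stakes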

  • @darwinlaluna3677 (3 days ago)

    Adios amigos

  • @ginger22ly (a month ago)

    Hope to listen to this later. You should cover the Indian election results in some detail, along with their significance for India and what the world can learn.

  • @seanmchugh6263 (a month ago)

    Is the guy shouting because Pinker is old or because Pinker is a long way away?

  • @honkytonk4465 (15 days ago)

    Or the guy is hard of hearing

  • @seanmchugh6263 (14 days ago)

    @honkytonk4465 You're a kinder soul than I am. I honour your forbearance.

  • @johns.7297 (a month ago)

    What is going on with the word salad of schizophrenics? It is like a language model that has gone off the rails.

  • @Besseloff (29 days ago)

    Due to being delusional, schizophrenics lose the capacity for a sensible theory of mind. They impute all kinds of bizarre motives and agency to the people and the world around them, and see patterns where there are none. The entire backdrop of cultural context therefore stops being a reliable guide for engaging with the world. This, of course, results in the incoherent and often paranoid speech of schizophrenics.

  • @donaldrobertson1808 (a month ago)

    People have conspiracy theories because conspiracies exist.

  • @geoffreydawson5430 (a month ago)

    Why are Steven's eyeballs always saying, "I am locked into a monetary blackmail situation and have to keep this bullshit up"?

  • @akniznik (a month ago)

    I may have gained IQ points watching this

  • @johnnywatkins (a month ago)

    So did the AI!!

  • @fullmatthew (a month ago)

    Pinker tends to do that to us haha

  • @kraz007 (a month ago)

    The question is whether it's on the usual 100-point scale or the D&D scale! If you go from 18 to 20, that's genius, you know.

  • @Deepthoughts206 (a month ago)

    Omg, you just made my day 😂😂😂😂 I am gonna use this everywhere, thanks. I just started watching.

  • @sidndidhdudn2308 (a month ago)

    Keep going! Soon you may reach double digits!
