The famous Chinese Room thought experiment - John Searle (1980)

I am writing a book! If you want to know when it is ready (and maybe win a free copy), submit your email on my website: www.jeffreykaplan.org/
I won’t spam you or share your email address with anyone.
Dualism: What Philosophers Mean...
Behaviorism: The Behaviorist Theory...
Identity Theory: The Mind-Brain Identit...
Functionalism: Functionalism
This is a video lecture about "Can Computers Think?" by John Searle. In this paper, Searle argues against a form of functionalism, which he calls "Strong AI". The argument rests on a thought experiment having to do with a non-Chinese speaker who is locked in a room with a lookup table, receiving inputs and providing outputs all in Chinese. Searle claims that syntax is never sufficient for semantics, and that digital computers only ever deal with syntax, so they can therefore never understand the meaning of a language. This is part of an introductory philosophy course.
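
To make the "lookup table" mechanism concrete, here is a minimal sketch in Python (illustrative only: the symbols and replies are invented stand-ins, not Searle's own examples). The point is that every step matches shapes and copies out answers; no step consults meaning.

    # Toy rule book: map input squiggles to output squiggles by shape alone.
    RULE_BOOK = {
        "你好吗?": "我很好。",      # the operator sees only shapes, not meanings
        "你是谁?": "我在房间里。",
    }

    def operator_in_room(symbols: str) -> str:
        """Apply the rule book purely syntactically."""
        return RULE_BOOK.get(symbols, "请再说一遍。")

    print(operator_in_room("你好吗?"))  # emits the matching squiggles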

Comments: 2,000

  • @BrianWilcox1976 (a year ago)

    For me it’s like saying, “one water molecule is not wet, so no matter how many you put together you’ll never get wetness” (it’s an emergent property)

  • @Bronco541 (a year ago)

    That's what I was thinking. Do we know, or can we know, to what degree (if any) being aware is an emergent property of just a "simple" algorithm?

  • @Bronco541 (a year ago)

    Or I wonder if Searle is right about form never truly being enough to get meaning... What if meaning is nothing more than form somehow... Don't ask me to elaborate, I'm just spitballing dumb ideas.

  • @REDPUMPERNICKEL (a year ago)

    @@Bronco541 'Meaning' is not 'form', but they are related. I mean, just look at the 'form' of this sentence. The meaning of that sentence is *'encoded'* in its form. When that sentence got inside you it became a process. Actually, I think you'll agree, that sentence became a sub-process of your being-conscious process. In consequence your thoughts are now somewhat different. If you understand thinking to be behavior, then you can see that its meaning has influenced your behavior. This is close to what 'meaning' means. The above is not written as well as it deserves, but I believe it will affect your thoughts such that they may become more accurately reflective of actuality, imho naturally.

  • @franzmuller235 (a year ago)

    @@Bronco541 That's what came to my mind also. How does a newborn learn? It learns to recognize forms, first his mother's head and his mother's breast, and then other forms. No one ever teaches a baby meaning to start with. The child first learns meaning by recognizing forms of all kinds, and by recognizing how the forms interact with him and with each other.

  • @franzmuller235 (a year ago)

    @@yongkim3333 No, of course you can construct a sensor that senses wetness. You don't need a human, not even an animal.

  • @chadcurtis1531 (a year ago)

    Douglas Hofstadter gave a great illustration of the systems argument in "Gödel, Escher, Bach" in one of his dialogues. One character, "Aunt Hillary," is an anthill that can communicate in English with an anteater. The anteater can "read" the anthill and deduce its meaning. While none of the individual ants can understand language, the system as a whole can. The dialogue is quite profound, and I think it illustrates quite well how semantics can arise out of syntax.

  • @AlejandroPiad (a year ago)

    As a college professor of Computability Theory, let me just say how brilliant your expositions of the Turing Machine and the Chinese Room experiment are, within the short time you had and of course taking into consideration the target audience. I spend the better part of 3 full lectures talking about Turing machines just to be able to formulate the Chinese Room experiment at the end.

  • @sirrealism7300 (a year ago)

    What college do you teach at?

  • @sentinel2.064 (a year ago)

    @@sirrealism7300 he’s definitely not a professor, his starting sentence is “As a” 🤣

  • @vytasffbismarck7001 (a year ago)

    @@sentinel2.064 he's*, pot calling the kettle N word cause its high

  • @selbalamir (a year ago)

    As Aristotle informed his students, an opinion based on Kudos has some value, but it is the lowest value of all. But a college professor would know that.

  • @pauls3075 (a year ago)

    @@sentinel2.064 He definitely IS a 'professor', but your narrow-minded view of the world doesn't allow for the fact that in Cuba the word professor means 'teacher'. If you'd bothered to check his YouTube homepage you would have been more informed. I'm guessing YOU are a Turing machine, because you clearly don't understand what is going on.

  • @magellan500 (a year ago)

    This reminds me of Chomsky’s famous example of how syntax and semantics are separate, and that you can create grammatically correct sentences that are meaningless, which was “Colorless green ideas sleep furiously.”

  • @justifiedhomicide5730 (a year ago)

    Quite frankly, good point. Just because transistors do perfect syntax doesn't mean by default that they can or can't do the 'correct meaning'. To a transistor there are two meanings, yes and no. To a neuron there is a range of meanings, almost like any number between -1 and 1. Even though neurons have no goddamn clue what the emergent simulation of the brain is (despite the lack of "semantics"), we still exist. Even though transistors have no goddamn clue what gravity is, they can still correctly simulate a falling object.

  • @JohnDlugosz (a year ago)

    Wolfram's hour-long livestream about how ChatGPT works included examples of this; he gave one example of "The chair is happy". I thought that his examples, this one in particular, were _evocative_ of meaning, and could in fact be meaningful in context. So I offered it as a writing prompt to ChatGPT, asking it to write in the style of various authors. I recall many of Hans Christian Andersen's stories give a point of view and cognition to some object, and ChatGPT (3.5) was able to channel this. For some other writers, it was more straightforward magical fantasy. For Isaac Asimov, the chair was cybernetic, filled with sensors and microprocessors so it could adapt to the needs of its users.

    Another time, I asked ChatGPT to generate 10 lines of nonsense. Interestingly, it was not gibberish but kept a syntactic correctness that only fails to make sense when you consider the meaning overall, as with your (Chomsky's) examples. But several of them sounded very poetic, and I directed ChatGPT to write a child's bedtime story using one of those "nonsense" lines as a starting point. "Every night, the sun sang lullabies to the stars..." Hearing the line, we craft an entire context to _make_ it make sense.

  • @davidjooste5788 (a year ago)

    That's an inadvertent definition of woke.

  • @kevinscales (a year ago)

    It's grammatically correct but doesn't get all of the forms/patterns of the language correct. If we look at syntax as all that is formalizable about the language, then you can only get meaningful sentences from a machine that accurately manipulates those forms. I think meaning IS in the form, it's just difficult to grasp what that form is. Computers are getting pretty good at it though.

  • @pumkin610 (11 months ago)

    Luminous diagonal virtues eat destiny? Formless brave trees talk geometrically? Or as Aizen would say "Seeping crest of turbidity. Arrogant vessel of lunacy! Boil forth and deny! Grow numb and flicker! Disrupt sleep! Crawling queen of iron! Eternally self-destructing doll of mud! Unite! Repulse! Fill with soil and know your own powerlessness!"

  • @peves- (a year ago)

    I don't think squiggle and squaggle are racist in the way he was using them. I think that he was trying to make sense of two symbols that are foreign to him by giving them names. To him they are scribbles and squiggles on paper. He can't differentiate what they mean, but for his purposes he needed to call them something.

  • @peterkiedron8949 (a year ago)

    This proves that Kaplan is a machine that does not know the meaning of the words it is using.

  • @stickman1742 (a year ago)

    Of course it wasn't. I guess this guy is just another person in fear of being cancelled. What sad environments some people are forced to work in, living in constant fear.

  • @spanglestein66 (a year ago)

    My sentiments exactly... anything can be turned into an issue of race these days. We can thank CRT for that.

  • @stuartjakl (a year ago)

    It's not racist. It could be construed by some as disrespectful to their culture. I'm sure the Chinese have some less than stellar words for how our writing system looks to them. Others would say it's a remnant of colonial thought: that calling any writing system outside the one you are familiar with in the English-speaking world "squiggle squaggle" is a colonial-era contemptuous gesture showing a disdain for foreign languages and writing systems, and that it is therefore racist because colonialism was racist.

    Let's consider the time when this thought experiment was published, in a 1980 article by the American philosopher John Searle. Born in 1932, he was obviously trying to use what would have been the most indecipherable, exotic, and probably least studied language in the United States at that time, at least compared to European languages. The example was meant to show a language so different to the average student, with a writing system unlike anything they were ordinarily used to (except maybe Chinese students). I'm sure we can come up with a name more fitting today's social climate: the foreign language room? The alien room? The alien language room?

  • @vdanger7669 (a year ago)

    Love Kaplan but disappointed he couldn't pass up some good virtue signaling. We live in peak woke times though and I suspect he is a product of his academic herd environment.

  • @ericfolkers4317 (a year ago)

    One problem I have with the Chinese Room is that you could create a similar metaphor for the machine that is the human mind. You have lots of people instead of one, each representing a neuron. They have a list of rules: if one of their neighbors hits them with a tennis ball, there is a rule about who you should or should not throw your own tennis ball at. Some people have levers that fill the room with chemicals people can detect (maybe by smell or sight or mechanical detectors), and people's rule books have different rules depending on what chemicals are in the room. There might be plenty of fuzzy rules like, "if there's not much of chemical X in the room and you get hit with Sarah's tennis ball you can throw a ball at Tom or not, but if there is a whole lot of chemical X, you really need to throw that ball," or, "if chemical Y is filling the room pretty much all of the time, you can ignore it unless there's really a whole lot more than normal." Some people would have access to input information in some way and some would be able to perform outputs.

    Is there any reason to think that a human brain couldn't be modeled this way, if we had enough people with enough tools (like the tennis balls and chemicals) and detailed enough instructions? Obviously none of the people working in the model brain would need to understand the meaning of any of the inputs; they might not even be able to tell the difference between an input from the outside world and something that another worker has done. But the system as a whole could take inputs and give outputs that seem to demonstrate understanding. If we reject the systems response as Searle does for his Chinese Room, then we can't say the system understands any of the inputs. Since the system works the same way as our brain, how can Searle say that our brains can understand any semantic meaning? Wouldn't he require some kind of magic stuff that makes our brains work somehow differently from the model with people throwing tennis balls?
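
    A toy sketch of the tennis-ball model just described (the workers, rules, and probabilities are all invented for illustration): each worker follows only its own local rule table, and the system's "answer" is just the resulting pattern of throws.

        import random

        random.seed(0)                    # reproducible toy run
        NUM_WORKERS = 5

        # rules[i][j]: probability that worker i throws at worker j after
        # being hit (a crude stand-in for a connection strength)
        rules = [[random.random() for _ in range(NUM_WORKERS)]
                 for _ in range(NUM_WORKERS)]

        def step(hit_workers):
            """One round: each hit worker consults only its own rules."""
            next_hits = set()
            for i in hit_workers:
                for j in range(NUM_WORKERS):
                    if random.random() < rules[i][j]:
                        next_hits.add(j)
            return next_hits

        state = {0}                       # an "input" ball enters the system
        for _ in range(3):                # the output is the throw pattern
            state = step(state)
        print(sorted(state))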

  • @donaldb1 (a year ago)

    Well, yeah. Roughly, Searle thinks his thought experiment shows that brains don't exactly work like that. There must be something else about them, which we haven't discovered yet, which produces "real meaning", or original intentionality, as Searle calls it.

  • @mottykadosh (a year ago)

    Brilliant, just nailed it; the whole room experiment is just a joke.

  • @jimjimmy2179 (a year ago)

    Except that you are making one very important assumption when writing this comment, which is: that human intelligence is a "program" telling neurons how to manipulate themselves, and that's all there is. So basically you are making a circular argument, whereby you start by assuming that such a "program" exists and cycle back by stating that that's how it can work. I.e. your claim doesn't have any logical proof, as opposed to the Chinese Room, which shows the difference between intelligence (i.e. a capacity to understand meaning) and following a rule book without understanding. It shows it by drawing logical conclusions using well-known definitions OUTSIDE of the Chinese Room, as opposed to your argument, which justifies the brain working that way by simply assuming that that's what it does.

    Besides, the majority of brain function is chemical, and we know very little about it. The reason people are obsessed with neurons is that they communicate using electrical impulses that are easily measurable. There's one very important distinction as well: in the Chinese Room story one knows exactly where the man takes his decisions from; it's the rule book. In a real human being we can measure all sorts of brain activities, either induced or decided by the person. However, we are not able to measure the actual act of the very decision. E.g. we (as our neuroscience) have no clue where your decision to write that comment physically comes from :) even though you can mentally explain it.

  • @ericfolkers4317 (a year ago)

    @@jimjimmy2179 Thanks for the well thought out reply. I will point out that my example does take into account the chemical aspects of the brain, though that isn't central to our discussion. I'm not exactly saying that human intelligence is a program, but I am saying it can be modeled by a program. But if that's not the case, what alternative would there be? Keep in mind that my model allows for fuzzy and probabilistic instructions. We can keep expanding my model to be more complex as needed; the only thing we can't add is some worker in the system who, by himself, does understand the inputs. But then how does that one worker understand? If he understands, then wouldn't he need a mind? Is there an aspect of his own mind that is capable of understanding on its own? Either we need some kind of "final understander" or we end up with an infinite regress.

    What could this "final understander" be? If we take it to be a system of cognitive processing parts, then we have to accept the systems response. Is there some glob of brain stuff that does the understanding for us which isn't made up of some sort of system by which it does the understanding? Perhaps this is my failure of imagination, but that sounds completely absurd to me. This glob would have to be made up of smaller particles, right? If you scrape away enough of those particles, wouldn't it at some point lose its ability to understand? Unless the glob was actually just a single atom or single quark. So if the "final understander" isn't physical, what could it be? A non-physical mind perhaps. If we take a mind to be an emergent property of a brain, or other collection of physical bits, then the mind is just another system. So if we take a mind to be an emergent property of physical things, and a mind understands, then we are back to accepting the systems response. If there is some part of the mind that is somehow more than just the processes and systems of physical things, then perhaps we are getting somewhere. But what would this part of the mind be? A soul or other sort of "real magic", as Daniel Dennett would call it? What else could it be?

    Unless I'm missing something, we have reached a sort of dichotomy: either we believe in magic or we accept the systems response. If you need to posit some kind of magic to reject an argument, that's an indication that the argument is very strong. I suppose other possibilities are that there really are single quarks that can understand, which is too ridiculous to consider, or that "understanding" is not something that anyone or anything is actually capable of. If that's the case, we still seem to understand things and talk about the universe as if we understand things, and if the Chinese Room (as a system) seems to understand, then we can treat it as if it understands things the same as us, without worrying about whether it actually understands anything (since actual understanding is impossible anyway).

  • @arturoidoyagamolina5109 (a year ago)

    @@ericfolkers4317 You formulated my thoughts in a way I'm totally incapable of doing, lol. Assuming that's the answer, the systems response, then I guess it takes a lot of the mystery and superiority out of human existence; it liberates us in a sense, idk. We would stop looking at animals, or at any AI in the near future, as inferior beings, or "not quite humans", "just not really sentient". It would open up a lot of ethical questions about how we treat future advanced AI(s) as well.

  • @john_hind (a year ago)

    'A simulation of a hurricane is not a hurricane', no, but a simulation of a word processor is a word processor, a simulation of a dictionary is a dictionary. I once wrote a dialog called 'John Searle Meets Another Intelligent Machine'. At the climax, the machine seems to get frustrated by Searle's obstinate refusal to accept its consciousness, thumps Searle, admits it was simulating anger to make a point and asks if knowing that makes the pain any less painful!

  • @stupidaf4529 (3 months ago)

    And then did Searle thump the machine back and say, "stop pretending that hurt"?

  • @john_hind (3 months ago)

    @@stupidaf4529 Nope, he accepted he was insisting on a distinction that makes no difference and retired from philosophy! But your ending works too, with Searle the first casualty of a simulated war!

  • @pygmalionsrobot1896 (10 months ago)

    Your expository style is energetic and inspiring, and I applaud you and your channel. Thank you.

  • @antonnie5177 (3 years ago)

    You are saving my exam next week.

  • @Cloudbutfloating (2 years ago)

    @Jeffrey Kaplan I have already stumbled few times on your video lectures which i must say helped me allot trough writing the paper about Philosophy of Mind. You transfer the knowledge so fluently and yet don forget to mention important details. Thank you for excellent guidance in this discipline that catches my interest in whole.

  • @annaclarafenyo8185 (a year ago)

    He explains it correctly, it is just a form of academic fraud.

  • @xbzq (a year ago)

    A lot. Allot is to assign or appoint a resource.

  • @notanemoprog (a year ago)

    @@xbzq Yeah but that's second-guessing the OP's spelling prowess and also leaving the following word out of the analysis. Plain reading is clearly that Kaplan's lectures helped to apportion a long shallow often V-shaped receptacle for the drinking water or feed of domestic animals

  • @xbzq (a year ago)

    @@notanemoprog You got it trough and trough. I like it allot.

  • @xbzq (a year ago)

    @@notanemoprog I was thinking the same thing about you humans! More lifelike every day!

  • @dooleyfan (11 months ago)

    Speaking of Turing, what I found interesting is that the huts at Bletchley Park were each essentially isolated Chinese Rooms, where the codebreakers were responsible for different steps in the decryption process, following syntactical rules but not aware of the meanings behind their outputs, with the exception of the people at the end of the process.

  • @bojens865 (a year ago)

    I met Searle a few years ago. I had had two car accidents resulting in concussions, each time regaining consciousness in the hospital as if awakening from a dream. The third time I was hit but walked away, waiting to wake up in the emergency room again, but I never did. As it happened, Searle was speaking at the university the next day and I attended his lecture. He spoke of the Chinese Room, which I had read in his book years before. After the talk, there were snacks and coffee in an adjoining room. Searle and his wife were sitting at a table by themselves and I asked to join them. I told them my experiences with loss and regaining of consciousness. Searle said the same thing happened to him. He hit his head skiing and made it back to the lodge with no memory of having done so. He was treated for concussion, after skiing for a mile while unconscious. At this point, philosophy students and professors showed up and started bouncing jargon off Searle, and I left. I'd just had a private conversation with one of the world's foremost philosophers; I wonder if I had in fact regained consciousness!

  • @JohnDlugosz (a year ago)

    Try discussing philosophy with ChatGPT using the GPT-4 model. Just avoiding the strong mental blocks put in by the developers on top of the actual model is interesting in itself. It's also a surprise that _creativity_ emerges long before consciousness, with many of the building blocks of sapience and sentience still missing entirely.

    I've asked ChatGPT to output in Chinese. Is it an actual Chinese Room running in the Azure data center? But when I asked it to write a short poem for my wife incorporating some kind of pun or wordplay on her name, it generated a pair of couplets in Chinese and translated them to English, and both versions rhyme, but in different ways. I don't see filing cabinets full of instructions processing symbols; I experience the presence of a creative mind. Nothing like this task was pre-programmed and left as instructions to follow. But a program processing tokens is _exactly_ what it is! The instructions for the tokens, though, are on a very primitive level, not directly relating to any high-level task. The activation patterns in the huge number of "parameters" (neurons) form a new, distinct way of representing algorithms and high-level tasks. We can literally see now how that higher level emerges from the neural net, and is separate from the code that drives the individual neurons.

    BTW, lack of long-term memory afterward does not mean lack of immediate and short-term memory during, and does not imply he was not conscious when he was returning to the lodge. I experienced something similar recently during a minor medical procedure: the anesthesiologist explained that one of the ingredients was to "make me forget", and indeed I have no memory of the procedure. But when I had the same thing done once before, I remembered everything about it.

  • @frontiervirtcharter (a year ago)

    Was Searle unconscious, or conscious but not retaining long-term memory of the events in the hours after the concussion?

  • @bojens865 (a year ago)

    @@frontiervirtcharter This was about 10 years ago. I remember him and his wife telling me he was unconscious.

  • @brotherjongrey9375 (a year ago)

    You still haven't regained consciousness

  • @starfishsystems (a year ago)

    @@bojens865 Great story! And that is one of the real pleasures of living in the small academic world. We do get to meet with some very fine minds from time to time. I have somewhat the same story concerning Dan Dennett, just because I happened to attend a cognitive science conference where he was giving a talk. More to the point, here was a philosopher who thought it would be worthwhile to talk with cognitive scientists.

    On the subject of consciousness, we know that we typically perform many of our routine tasks "on autopilot" while maintaining a kind of casual conscious situational awareness in case the need arises to step out of the task. Depending on choice of terminology, those tasks can reasonably be called "unconscious." And should the conscious supervisor - the part of the mind also most commonly responsible for conscious memory and recall - happen to become distracted, intoxicated, medicated, spaced out, or otherwise go offline for some reason, the unconscious processes may be able to continue unsupervised. It's the same brain, the same sensorium, the same accumulated body of knowledge, substantially the same mind, after all. I can well believe that Searle made it back while "unconscious" in this sense, and moreover not remembering any of the journey.

    An interesting question would be whether he has retained any "unconscious" memories of the experience. It would be hard to test for it, but assuming there were certain distinctive events along the way that might be memorable, the idea would be to look for markers of (possibly unconscious) recognition when some facsimile of the event were replayed for Searle to see. Perhaps he would become explicitly conscious of the event when reminded. Or it might produce a distinctive spike in neural activity, a slight time difference when responding to questions, a change in eye saccade rate, et cetera. These slight clues won't tell the whole story of such a complex system, but they are slowly helping us to frame the investigation.

    I started out in computer science in the 1970s, hoping to learn something about the nature of intelligence. At the time, I formed the impression that we'd need about a century to get there. That's a long time to contemplate, yet here we are halfway there already. And it feels about right. It's starting to come together. I think another fifty years will do it.

  • @therealzilch (a year ago)

    It's definitely the tied-together beer cans connected to a windmill that understand Chinese. Searle is guilty of planting a red herring here. By having a human in the room who doesn't do any of the thinking, but merely looks up the rule and applies it, we are focused on the human as the only thing in the room theoretically capable of understanding anything semantically. The depth and complexity and reflectivity of the "rule book" is passed off as "bits of paper". Nice explanation, a good complement to Douglas Hofstadter's classic fisking. Cheers from windy Vienna, Scott

  • @docbailey3265 (a year ago)

    Hmm. A new version of the ghost in the machine, only now it's the machine in the machine. Simply replace the human in the room with a supercomputer that can instantly scan Chinese characters and has been programmed with the "Chinese rule book." There's no need to drag some nasty sentient being into the mix. The Chinese text is fed into the room, or rather, the supercomputer. The supercomputer then spits out the answer BASED ON SYNTAX AND PATTERN RECOGNITION ALONE. Have we created sentience, much less consciousness? Dennett would dismiss the whole endeavor as wrong BY DEFINITION ALONE, or at best "counter-intuitive". I'm not yelling, BTW, I just don't know how to post italics here. Cheers.

  • @therealzilch (a year ago)

    @@docbailey3265 Italics are done on YouTube by bracketing your text with underscore characters. And I'll gladly respond to your comment as soon as I get on a real keyboard, as my swiping is painfully slow.

  • @undercoveragent9889 (a year ago)

    I sort of agree, because in humans language 'evolves' along the lines of 'utility', if I can put it that way, and the assessment of 'utility' is subjective and requires introspection. In other words, and I have yet to see the whole video, the 'interpreter' is not analogous to a 'mind'; rather, he is analogous to an 'algorithm' _utilized_ by self-interested organisms in order to advance in the world successfully.

  • @anxez (a year ago)

    Searle does a few intellectually dishonest things in this argument, honestly.

  • @docbailey3265 (a year ago)

    @@anxez Such as?

  • @jamesoakes4842 (a year ago)

    I find that one of the things I keep coming back to when processing the Chinese Room experiment is that there's a big similarity to some debates between Creationists and Atheists. Creationists will often challenge Atheists to explain what touched off the beginning of the Universe, the "unmoved mover", which they reason must be outside of the universe as we know it so as not to violate known physics; therefore the existence of God, or some other supernatural entity, is proven.

    Similarly, with the Chinese Room, you can point to one element that needs more explanation: the instruction manual. If it can truly teach the individual how to respond to a Chinese symbol well enough to duplicate the responses of someone with a semantic understanding of Chinese, then I think it's impossible to say the manual was created without a semantic understanding of Chinese being involved. If said understanding is inserted into the room in the form of the manual, then it's not really a closed system reliant solely on syntax.

    ...of course, questioning the premise of a thought experiment isn't exactly revolutionary thought.

  • @charlesreid9337 (9 months ago)

    The problem with creationist (and radical atheist) "logic" is that all their arguments require strawmen. Let's consider the big bang: per a creationist, that should prove god exists. Someone had to make it happen, so god must exist, right? No: we do not know. There are many possible explanations, including god. Science has no opinion on what it doesn't know.

  • @adriansmith6124 (8 months ago)

    But I think what the experiment tries to show is not that understanding doesn't exist outside the room, or in the manual, but that the computer doing Turing-machine calculations cannot understand it.

  • @AndyCampbellMusic (8 months ago)

    There are only atheists? Nobody can or does believe in all possible imagined gods? If there was nothing, then there would be no one to ask "why is there nothing?" If the claim is that something always existed, then so can something else. The universe is sufficient unto itself to explain itself and everything within it. 🤷‍♂️ If it wasn't there, there would be nobody to ask why it wasn't.

  • @cosmictreason2242 (7 months ago)

    @@AndyCampbellMusic No, it's not, and no, they can't. Only uncaused things can exist acausally. The universe is caused; therefore it isn't self-existent. You claim the universe is self-existent, but you simultaneously assert that the whole is the sum of its parts, and you can't point to any part of the universe that's confirmed to be self-existent. This isn't even the cosmological argument; it's just a refutation of your denial.

  • @ronald3836 (5 months ago)

    Not even Chinese babies are born with knowledge of Chinese, but with enough syntactic exposure they acquire real "understanding" of the Chinese language.

  • @enlilannunaki9064 (a year ago)

    Brilliant presentation! So glad I stumbled upon this channel. Thank you! Subscribed.

  • @Sunshine10101 (a year ago)

    Love your lectures. They are great!! Please keep it up. I am so grateful

  • @henrijames7337 (a year ago)

    As someone who is on the autistic spectrum, I'm fascinated by the idea that the experience of the person inside the Chinese Room would be similar to my own when dealing with neurotypical interactions (social gatherings etc.). I often have no true understanding of what the purpose of or need for some of the elements is, but do my best to mimic them or provide responses from a learned set of rules. I've read that some researchers have suggested that individuals with autism may have a "mind-blindness" or "theory of mind" deficit. In the context of the Chinese Room thought experiment, the idea of a person who manipulates symbols without truly understanding their meaning could be seen as a metaphor for individuals with autism who may have difficulty with understanding the meaning of language and social communication.

  • @bdwon (a year ago)

    Neurotypical folks do not "truly understand" either. Their responses to social stimuli are simply more "typical," i.e., in accord with socially disseminated practices

  • @henrijames7337 (a year ago)

    @@bdwon I take it that by 'responses' you mean the observable behaviour of neurotypical individuals in social situations (in general) and that they may be more in line with what is expected or considered "normal" within their social context, even if they may not fully understand the purpose or need for those social interactions.

  • @14drumstix (a year ago)

    @@henrijames7337 I really like your take on this, very well put

  • @ajd6708 (a year ago)

    @@henrijames7337 While I’m not the guy you responded to, that is what he meant.

  • @henrijames7337 (a year ago)

    @@ajd6708 Thanks, I sometimes have difficulty in 'getting' what people mean.

  • @mattmerc8513 (2 years ago)

    Thank you so much for your vids; you've explained it far better than any other paper, research, or teacher that I've come across.

  • @xbzq (a year ago)

    That goes to show you don't come across many papers, research, or teachers.

  • @ozymandiasultor9480 (a year ago)

    @@xbzq well said.

  • @ozymandiasultor9480 (a year ago)

    Where have you studied philosophy and logic? At which university are the professors so bad that a mediocre channel with simplistic explanations is so much better? I am not saying that this channel is bad, but it is for laymen; those are not exactly top-notch explanations.

  • @hb-robo (7 months ago)

    @@xbzq Why is everyone in this comment section such a brusque asshole? Perspective is relative, leave them alone.

  • @jollyroger105 (6 months ago)

    Thank you. I really appreciate you having put so much heart and soul into making your videos. I am truly enlightened.

  • @Inzomniac9 (7 months ago)

    The background information you gave was perfect for understanding the experiment. Thanks!

  • @perfectionbox (a year ago)

    The fact that mere symbolic processing can produce fairly good output is helpful in brain development. As a child correlates experiences to language symbols, the job is made easier by powerful symbol processing, where guesses/estimates/predictions are often useful, and even afterward, because much sensory input is garbled or missing and intelligent subconscious guesswork fills in the gaps. We haven't created true general AI, but we have uncovered an important piece.

  • @izansengun (2 years ago)

    What a wonderful way of teaching! Great content, sir. Great job!

  • @ND-im1wn (10 months ago)

    Amazing how much more relevant this problem, video, and explanation are today with ChatGPT. I understood this concept in an intuitive way, but now I have the language to explain and understand it explicitly. Thanks!

  • @Raoul684 (10 months ago)

    Great explanation, again. I love these videos, so thought-provoking. My addition against strong AI is to ask: what is the computer doing, absent any questions or inputs? That, to me, seems equally, if not more, relevant for consciousness.

  • @DAG_42 (7 months ago)

    If an anaesthesiologist stops your flow of thoughts by chemicals, you go unconscious. That's just taking the symbol shuffler guy out of the Chinese room.

  • @dwinsemius (a year ago)

    Well done. Thank you. I suffered through Searle's "Philosophy of Mind" course at Berkeley in 1970. It was mostly reading and considering Hume's writings. It was definitely NOT what I had been hoping for. My final paper in the course, heavily influenced by reading Julian Huxley, was my version of functionalism and an attack on Hume's rejection of induction as a sensible basis of knowledge. I was a physics major at the time so abandoning induction as a path to knowledge was unthinkable. (Also Hume's use of self-reflection as his primary data gathering tool is easily as fallible as induction.) I only got a B+ which I found annoying but totally understandable, given the distaste I had for Hume and by association Searle at that point. Then 10 years later Searle reappeared on my radar screen because his attack on Strong AI appeared in Scientific American. I found his arguments entirely unconvincing. I had already accepted the Turing test as a reasonable basis for assessing the expression of language as "intelligent" output of a process. A few years ago I found a tome from the late 1800's by Huxley on Hume, and I periodically pick it up and enjoy random bits of it.

  • @matswessling6600 (a year ago)

    Induction isn't a path to knowledge, but that is not a problem, since science isn't based on induction.

  • @hinteregions (a year ago)

    Yeah, me too. He seems not to understand what Dennett, for example, is doing, because he isn't able to see all the implications of his own thought experiment. If we simply take his main thesis to the extreme, as we must and as he for some reason does not, with every single neuron replicated, and whatever neurochemical signal it's about to transmit too (not really so different from the cause and effect that is the basis for determinism, if not the very essence of it), then yes, indeed, this would necessarily be a perfect simulacrum of his mind, and his memories and thoughts and feelings too, as he takes his own for the purpose of the experiment. We might have to hook it up to some 'sensory inputs' and give it some way of communicating, but I have to assume that's a trivial matter in this context. IF we could make such a marvellously complete copy of that human organ to Searle's very own specifications, properly and fully as opposed to his convenient 'partially', THEN unfortunately Searle is hoist with his own petard. The fact that we cannot is irrelevant.

  • @nosuchthing8 (a year ago)

    Thank you. I agree with your assessment; I read that article in SA too. What do we do with ChatGPT? It seems close to passing the Turing test. Please try it if you have not already.

  • @dwinsemius (a year ago)

    @@nosuchthing8 I have "chatted" with ChatGPT 3.5. It's like a sociopathic college student. A bit like Donald Trump, but unlike that particular human it actually 'speaks' in complete sentences with minimal digressions. It makes up stuff and cites non-existent citations to fill in and support syntactically correct slots in its explanations. It is built to sound good but has limited learning capacity. It also denies being human, so perhaps close to Turing-passing, but not yet convincing to me.

  • @nosuchthing8 (a year ago)

    @@dwinsemius Yes, I agree with your assessment. Close but no cigar. But let me give you an example. I asked for its interpretation of the fable "The Emperor's New Clothes", which, as you know, has the emperor parading around in his birthday suit because he's gaslighted by some villains. ChatGPT gave a very good assessment, and then I asked if there is a connection to burden of proof, and it readily explained how burden of proof was key to the story. So it's certainly close to passing the Turing test.

  • @lindyl4257 (2 years ago)

    This helped a lot, thank you. You're a great teacher.

  • @magellan500 (a year ago)

    Great brief presentation on these questions. I’m also a big fan of John Searle.

  • @chrischristenson4547 (11 months ago)

    I do enjoy your talks greatly. I will continue listening to them.

  • @stevefoster6047 (a year ago)

    I was privileged to take Dr. Searle in college and to hear his thought experiment from his lips; he was an excellent lecturer, and the class remains one of my favorites. However, I was no more persuaded by Dr. Searle back then than I am by @Jeffery Kaplan's excellent explanation of it. There are, in my opinion, at least two glaring holes in his argument.

    The weaker of my two objections is this: 1) His claim that you could never come to learn the semantics of Chinese from reading what we all have to agree must be an incredibly long and complex list of identification and manipulation rules is highly suspect. He certainly never tested that hypothesis, and I assert that he has no logical basis other than his opinion for making that claim. For all we know, given many thousands of pages of manipulation rules, and thousands of years of following them, a human being may well indeed be able to piece together Chinese semantics. After all, we are "designed" to do just that, and as babies we learn our native language with much less data and time.

    2) The stronger of my two objections is that Searle used sleight of hand in how he defined the "computer", which he wants us to believe is just the human in the box, not the entire room and ALL of its contents. I assert that is not the case. Rather, the "computer" is the entire system, including the man, the instruction set, the input and output devices (baskets), and the room itself and all of its other necessary contents that enable it to function. Consider: if you take the man out of the box and just sit him in front of a Chinese speaker, with no rule book, no organizing components, etc., JUST the man is not a "functioning computer". We know that the "computer system" is very clearly capable of understanding Chinese; it is central to Searle's argument that it can. He describes the room and its contents, from the point of view of Chinese observers, as indistinguishable from a native Chinese speaker. So it is patently obvious that the entire computer is capable of understanding Chinese, and in my opinion the fact that no subcomponent of it (the man, a basket, the rule book, you name any part you like) understands Chinese is simply irrelevant! Consider the man in the room: like me, he can read and understand English, but my left eyelid cannot, nor my tongue, nor can any of my individual neurons. The fact that my parts cannot understand English does not prove that humans cannot understand English. Likewise, the fact that any part of a computer cannot understand Chinese does not prove that a computer cannot understand Chinese!

    (Edit: I had forgotten Searle's response to the systems objection; it's been 40 years since I heard his lecture. But what he fails to explain is how strong AI can successfully answer Chinese questions with accuracy indistinguishable from a native speaker (per his thought experiment) and yet completely lack semantic understanding. Likewise, he fails to explain why, with humans, if you consider dividing us up into smaller and smaller subcomponents, you will at some point suddenly have a complete set of subcomponents none of which can understand English, unless his claim is that one of our atoms is the one that learns English semantics; it's not, I presume. This seemingly proves that semantic understanding MUST arise as a system property, and therefore there is no logical reason to assert that it's impossible for strong AI, as a system, to exhibit that property.)

  • @skoosharama (a year ago)

    26:58 if anyone wants Searle's response to the systems objection: the entire system is nothing but a symbol-manipulating machine, and knowledge of syntax alone - the symbols and the rules for manipulating them - is not enough to understand the semantic content of the symbols. The claim that an interlocutor that can pass the Turing Test of being externally indistinguishable from a Chinese-speaking person is therefore *necessarily* a person is difficult to justify; a person is not merely an entity that can perfectly imitate a person such as oneself, but an entity that one can reasonably suppose to have an internal life, as oneself does. I definitely don't believe that such a claim is "patently obvious".

  • @theconiferoust9598 (10 months ago)

    Do your cells and neurons understand English? Or does your system as a whole, including your consciousness, understand it? What physical properties of your consciousness can we separate from your brain that show the physical "input -> output" that gives meaning to the words?

  • @skoosharama (10 months ago)

    @@theconiferoust9598 Sure, we can agree that consciousness is an emergent property of certain complex systems. My contention is that we should not suppose that a text manipulation device has an interior life, or any awareness even approaching sentience, even if it is very, very good at text manipulation. The key here, I think, is that, while language could be thought of as a self-contained system, an entity without any perception of the tangible world in which humans live cannot possibly understand what the symbols refer to, i.e. what they mean. Our text manipulation program, unlike Searle's Chinese room (which at least includes a homunculus who might get this), most likely does not even understand that the characters are symbols at all, rather than mere characters and character strings with no extrinsic meaning outside of the rules of the language. It doesn't really matter how good ChatGPT gets at text prediction and mimicking human linguistic responses; it is still just a glorified version of Autocorrect that is incapable of understanding its own output. I would submit that it is incapable of understanding that its output even could mean something outside of itself and its statistical models, or what it would mean for its output to have such meaning. Let's put it this way: just because the human brain is a complex system out of which consciousness arises and that is also capable of complicated linguistic output, doesn't mean that *any* system that is capable of complicated linguistic output is the kind of complex system out of which consciousness arises.

  • @theconiferoust9598 (10 months ago)

    @@skoosharama Agreed. My response was mostly aimed at the OP's objections.

  • @aaronmarchand999 (7 months ago)

    @@skoosharama "The human brain is a complex system out of which consciousness arises"... Who says consciousness arises out of the brain? Judging by the way you talk, perhaps you are less conscious than you think.

  • @ameliagerson926 (a year ago)

    I actually can read Hebrew and was so excited I knew what that meant bc it was the only part of the video I confidently knew lol

  • @user-zi3qg9zq8p (a year ago)

    It is like unconsciously grinding some skill without realising or feeling what you are doing: playing the piano, walking, writing, typing on the keyboard using 10 fingers, learning anything. You just repeat something infinitely and boom, you're a master at something. I remember grinding my pronunciation skill for my second language very hard by using the shadowing technique, and at some point I started to produce signals and sound very natural without any understanding of what I was talking about. Later I understood that feeling sits, somehow, on top of the computations and gives you the additional power to compute something or to auto-correct errors.

    But the question is: can the function, converting a bunch of inputs into outputs in a specific order, see dreams? I believe it depends on the architecture of the hardware that runs the process; it does not depend on the output signals, which we can interpret as numbers or vice versa, that have meaning for us. In other words, a function that performs some computation and produces ideal output does not relate to feeling and being alive in any way; it is like saying that a smartphone is alive just because it can produce an ideal screaming sound.

  • @nixedgaming (a year ago)

    I am desperate to see how Searle would respond to the idea of a neural net matrix transformer, assuming he legitimately understood the math of it. My question is basically: why can't "semantics" be an *emergent* property of a sufficient understanding of syntax? The paper "Attention Is All You Need" basically demonstrates that a machine *kind of* grasps semantics from a type of mathematical transformation of language through encoder/decoder processes. Very fascinating, thanks for the lecture!
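
    For readers curious what that "mathematical transformation" is, here is a minimal sketch of the scaled dot-product attention at the core of "Attention Is All You Need" (dimensions and data are invented for illustration; real transformers add learned projections, multiple heads, and many stacked layers).

        import numpy as np

        def attention(Q, K, V):
            """softmax(Q K^T / sqrt(d_k)) V: mix token values by relevance."""
            d_k = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d_k)                # pairwise relevance
            scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
            weights = np.exp(scores)
            weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
            return weights @ V

        rng = np.random.default_rng(0)
        Q = rng.normal(size=(4, 8))       # 4 tokens, key dimension 8
        K = rng.normal(size=(4, 8))
        V = rng.normal(size=(4, 8))
        print(attention(Q, K, V).shape)   # -> (4, 8)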

  • @ronald3836 (5 months ago)

    One possible answer is that Searle lacks the imagination to believe that semantics can arise out of sufficiently complex syntax. However, Searle seems to accept that a rule book can make you appear fluent in Chinese, so he seems to accept that syntax can do everything you need. But apparently that does not let him doubt that a human does not perceive semantics through complex syntactic processing... (Sorry for the double negation, haha.)

  • @AliceBelongs (3 years ago)

    This really helped me with my essay, thanks for uploading! :)

  • @jmiki89 (a year ago)

    Actually, if you think about it, that's almost exactly how infants learn their native languages in the first place, except they don't even have a rulebook; they have to figure that out for themselves. True, they get not only symbolic input, but for them the audial sequence for "mum" doesn't have any more meaning than the made-up word "blarduk" has for you or me. They can differentiate between different frequencies and whatnot and try to mimic them via trial and error (the difference between them and the blarduk example is that we have much more experience making sounds with our vocal organs, so we would make far fewer errors and hence need far fewer attempts to repeat this new word). And yes, babies have feedback loops to help them through the learning process, but those are basically just another bunch of input. Yes, there might be some genetically imprinted social behavior patterns guiding which of these feedback inputs should be considered positive and which negative, but all together those can still be paralleled with a deeper-level rulebook from the Chinese Room experiment.

  • @erikmagnusson5713 (a year ago)

    Good point. The feedback loop is what is missing in the Chinese Room. The rule book is never updated; the system never learns anything. So if the rule book doesn't contain understanding/semantics and there is no mechanism for learning, then the system will never understand semantics... ...I now find the Chinese Room uninteresting...
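
    As a toy illustration of what adding that missing feedback loop looks like, here is a rule book that does get updated from experience (a minimal perceptron-style sketch; the task, data, and learning rate are invented and are not from Searle's paper).

        def train(samples, epochs=20, lr=0.1):
            """Revise a tiny 'rule book' (weights) from feedback."""
            w, b = [0.0, 0.0], 0.0
            for _ in range(epochs):
                for x, target in samples:
                    out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                    err = target - out            # the feedback signal
                    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                    b += lr * err                 # rules revised by experience
            return w, b

        # Learn logical OR from examples instead of a pre-written rule book.
        samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
        print(train(samples))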

  • @brotherpaul963 (a year ago)

    @@erikmagnusson5713 Funny!!!

  • @sandornyemcsok4168 (a year ago)

    I agree. The Chinese Room is nothing else than a good presentation of how a computer works nowadays. That's all. Does it prove that a computer cannot be made to behave like a human? Absolutely not. Additionally, think about how 'eternally' defined semantics is. Only simple things, like bread, wind, etc., do not change. But let's take something complex, for example "pious". How much has its content changed in the past centuries? In this case the semantics is dependent on the historical age and social context, above the individual.

  • @rickgilbert7460 (a year ago)

    I don't know that I agree. The infant learns that the sound "mum" is associated with the idea of a specific person by repeating it in the context of that person. Later, someone points to an actual tree and says "tree" and keeps doing that until the child learns the *semantic* understanding that the object in the yard "is a tree." So children learn the syntax by repetition of the syntactic rules, but they *also* learn the semantics by being taught them specifically, and separately from the syntax, right?

  • @jmiki89 (a year ago)

    @@rickgilbert7460 But the face of their mother or the sight of a tree is nothing but another kind of sensory input without any kind of intrinsic semantic meaning. True, one may argue that humans are genetically hardwired for facial recognition to a fault (we can even see faces in places where there clearly aren't any), but the point is that the semantics is created inside the infant's mind via (as you pointed out) repetition and feedback. But in the thought experiment, the person in the room was given a supposedly complete and exhaustive but static rulebook of the Chinese language, with which the room as a whole can imitate a full conversation, which begs the question: can such a rulebook exist?

    From the perspective of a single human life it may seem that the semantics of words are permanent and unchanging, but (especially in the age of the internet and smart devices) concepts are evolving too. We call both a smartphone and Bell's original invention a "telephone", but those are clearly different things connected only by the vaguest of similarities. So the rulebook in the room needs a way of being updated, and the only entity capable of doing that is the person in the room; and to do that, he needs some kind of feedback, which immediately leads us back to learning.

  • @presto709 (a year ago)

    This was great. I think I first learned of the Chinese Room from a book by Martin Gardner. I think I come down on the side of the systems response. The system does understand Chinese, because looking at the input and giving the correct output is what understanding means. It's the same test we would give to a person who claims to understand Chinese.

  • @kid5Media (a year ago)

    No. Or, to tweak things a little: the person outside the room, instead of passing in a question, passes in the instruction to order a glass of tea. The person inside the room will die of thirst (unless rescued by the Infinite Monkey Typewriter Brigade).

  • @presto709 (a year ago)

    @@kid5Media Interesting. We aren't told what the book inside the room will do when a non-question is the input. Postulate that the instruction book translates non-questions into his language, which he recognizes and reads. Interesting, but I'm not sure how it changes anything.

  • @theconiferoust9598 (10 months ago)

    The «system» includes humans with consciousness to interpret and glean meaning. In other words, you are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works. It's like saying meaning is self-evident in physical matter, symbols, or mathematics, as if a computer could take the word «love» and output every single iteration of the meaning that has ever been conceived, felt, or lived by every human ever. There is no correct output. Conversely, it seems there is no meaning without conscious experience, and the «systems» response only affirms this.

  • @presto709 (10 months ago)

    @@theconiferoust9598 YOU WROTE: "you are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works." REPLY: The test wouldn't be whether it gives the correct answer; it would be whether it gives a convincingly human answer, like the Turing test. If you ask "How is the weather?" and the answer comes back "27", that would not make you think there was a mind at work. If you asked "Is marriage a good thing?", you would not be looking for a correct answer; you would be looking for an answer that indicates an understanding of the question.

  • @presto709

    @presto709

    10 ай бұрын

    @@theconiferoust9598 You wrote: "You are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works." Reply: I'm not saying that at all. When interacting with another person I do not require that all of his answers match my opinion of correct, only that they generally seem to be responsive. I might interact with a person who gives absolutely no "correct" answers in my opinion but who still clearly seems to be a "mind".

  • @mattmiller4233
    @mattmiller4233 Жыл бұрын

    Great video! Very well explained. I would add two points, though. The first you mentioned very briefly, but it is worth stressing: the Chinese Room serves only as a refutation of functionalism in the purely *digital* sense, not in totality. The second is that Searle seems to lack (though I may have missed it in the text; please correct me if I did) any formalized definition of what, exactly, constitutes the "understanding" that he says the room lacks, or what sets such "understanding" apart from a sufficiently complex system of inputs and outputs. He seems to work from a fairly generalized sense of what *feels* like understanding, but fails to specify or quantify it (again, let me know if I missed something). Again, awesome work!

  • @ben_clifford

    @ben_clifford

    10 ай бұрын

    To address your first point: I think it's actually sufficient for Searle to contrive a highly-constrained, obtuse situation and say that he's refuted functionalism, and here's why... The core argument of functionalism is that only function matters, and not the system or substrate. So, to disprove such a theory, we only need to show a single counter-example. There's a more formal way to show this with predicate logic, but I think you get the idea.

  • @philplante6524
    @philplante6524 Жыл бұрын

    In the "system response", there was a rule book that instructed you how to manipulate the symbols. The rule book, which is part of the system, is the part that understands Chinese; otherwise your outputs would not be correct. The programmer who made the rule book is part of the system, and he/she has the understanding. In life, the brain programs itself: babies observe how the world works and start making up the rule book. Experiences are programmed in as neural networks.

  • @cronistamundano8189

    @cronistamundano8189

    Жыл бұрын

    I would add that the brain not only programs itself but is also "pre-programmed" innately (that's how babies "know" to cry when uncomfortable, which is more than just syntax; there are semantics in it, and parents take some time but eventually find out what the baby is trying to "say"), and that other stimuli (the concepts of handling and holding from Winnicott come to mind) are also part of the rule book that is written outside the room.

  • @hinteregions

    @hinteregions

    Жыл бұрын

    Nice one.

  • @philplante6524

    @philplante6524

    Жыл бұрын

    @Murray Wight I see your point, but I think that the rule book captures the understanding: the rules are not random; they were created by someone who understands Chinese. So the understanding is hard-coded as a set of rules. I used to write engineering specifications for software systems. As the engineer, I determined how the system should react to various combinations of inputs; in essence, I created the rule book. The software developers just coded the software to implement the rules. In living systems, there is no external engineer or Chinese speaker to create the rules; we have to create our own rules based on experience and trial and error. There is no ghost in the machine; the rules are created within the machine.

  • @Olvenskol

    @Olvenskol

    Жыл бұрын

    I'm not sure Searle's point that you cannot derive any meaning from symbols alone is true. It's true enough in simple cases, but not all cases are simple. For example, modern computers with an adequate set of rules and data can identify dogs in pictures, tell one human face apart from another, or state that two photos show the same person. This is accomplished using only rules and symbol manipulation (of '0's and '1's, no less), but the result seems to require something that might be considered "understanding".

  • @hinteregions

    @hinteregions

    Жыл бұрын

    @@Olvenskol I think the Chinese Room works at a very simple level (if you are denying your superhuman processor certain mental faculties, as I tried clumsily to say in my comment that is just below or above or somewhere). A normal person who hasn't dabbled in encryption wouldn't be able to work out the meaning, just follow instructions, sure. But Alan Turing or Noam Chomsky might do what we humans actually did do: we learned to make simple codes and then we learned to break them. A better example might be the Rosetta Stone. We had to work that thing out with only our understanding of other languages to guide us, and that is basically the same example as the Chinese Room. I am saying his major thesis is broken, as for some reason he doesn't carry his thought experiment through to its conclusion, which would by his own terms give him a perfect [digital] copy of his own brain that can only, by his own reasoning, have memories and feelings. Taking his main ideas to their conclusion, I think all he's got is an artificial division between his own brain and a theoretical perfect copy of it, a division that makes no sense to impose. I don't think he cares to accept, just like all legal systems, that there is neurochemistry here.

  • @micry8167
    @micry8167 Жыл бұрын

    Excellent comments here. Can't help suspecting that Searle was motivated more by his distaste for some ideas than by a desire for hard truth; namely, distaste for the idea that a vast enough system of limited machines could be an analog for the human mind.

  • @themcons50
    @themcons50 7 ай бұрын

    wow. Great vid and presentation. Thank you sir, much respect.

  • @SumNutOnU2b
    @SumNutOnU2b 8 ай бұрын

    Curious, if anyone can let me know... He quotes a couple times from page xx of "the reading". Uhh... So does that refer to just a particular textbook? Any chance that text is available (preferably free or cheap) somewhere?

  • @devtea
    @devtea Жыл бұрын

    Thank you so much for this video! This is by far the best explanation of the Chinese Room. Within Searle's imagined ideal conditions, Searle is correct, and the conclusion would be true. At least it would be true for, say, a digital computer such as a calculator or a watch. However, since Searle's article wasn't concerned only with the ideal conditions, and because it showed interest in extrapolating the result into practice, there is indeed a version of a 'systems' response to Searle's original article.

    One can argue that Searle's conclusion (that it is impossible for the system, or the individual person/central processing unit inside of it, to learn the semantic meaning of the language) is false, because it is impossible to guarantee Searle's imagined ideal conditions (perfect isolation and perfect access control that would prevent any unauthorized outside contact or exchange of information) for an extended length of time. There exists a positive, non-zero probability of an outside entity or force stepping into this setup (uninvited) and teaching the person/central processing unit inside the room the full semantic meaning of the entire language, without ever entering the room: for example, by passing information into the room on purpose, with the specific intent to train it, as one would train an Artificial General Intelligence. Given a sufficient length of time in real life, i.e. years, there is a non-zero likelihood that the ideal circumstances of the room would be altered by an outside party (say, an enthusiastic ML Ops engineer). Since the person/central processing unit inside the room does have command of some language (other than Chinese) that it uses to understand the instructions, it is not impossible for it to receive, from some enthusiastic person or entity outside the room, new information with instructions, processes, and methods that build up into a full semantic understanding of Chinese, or any other language. This holds for the classic digital computer. Especially if there's no time limitation, it's not impossible to perform this within much less than the mentioned 1000 years. Difficult and unlikely, yes; labor intensive, yes; but not impossible at all, and with humans being humans, significantly probable.

    Of course, the above would not impart a 'human' experience of speaking and interacting in Mandarin or Cantonese. But a full understanding of the language? Yes. In the case of a digital computer, if it can understand a binary numbering system, it can understand Chinese. It will be able to hold a conversation with a human. It won't sound like a classmate or a neighbor or someone you can relate to as a natural-born human, so the conversation will likely be perceived as less meaningful, but the language comprehension would be complete.

    Again, Searle isn't wrong: within the thought experiment, his conclusion is technically correct. It just has limited utility. It's like a thought experiment in which one asks us to grant the assumption that the telomeres (stretches of DNA) at the ends of our chromosomes do not clip off and get shorter each time one of our cells divides. If we grant that assumption, we can construct a setup where a human lives 'forever', or at least does not die from aging. The thing is, since it's not possible to guarantee that "ideal" assumption (you and I are still losing about 26 base pairs per year), the thought experiment is really interesting, but the conclusions from it alone have rather limited utility.

  • @tedphelps

    @tedphelps

    Жыл бұрын

    Beautiful thinking. Thoughts of all sorts do stand on the stage to be judged for their actual value to us in living. Part of the problem is too strong a belief in proving some idea 'false.' Instead, I feel that ideas influence me, have a certain weight and value, move me this way or that in a wide world that I live in. I am happy for all of this.

  • @echoawoo7195
    @echoawoo7195 Жыл бұрын

    The sensations you experience as a child are all symbols without semantics. The semantics derive from repeated exposure to those symbols. Given enough experience with a syntax, you can determine semantics. That's what infancy literally does. This entire thought experiment hinges on understanding not being an emergent property of a sufficiently complex information processing system

  • @echoawoo7195

    @echoawoo7195

    Жыл бұрын

    Go pick up a pictureless book in a foreign language unrelated to your own language family and tell me you can't pick out the meaning of some word groupings once you see a large enough set of symbols.

  • @mixingitupwithmina93
    @mixingitupwithmina93 Жыл бұрын

    Well done! Thank you for giving your gift of teaching to the world. You have just identified the lack in our world right now. I would suggest that the more powerful a group becomes the more syntaxicized it grows as it loses the ability to understand the semantics of the syntax it / they continue to regurgitate. Everyone gets on board the Turing train … excited to be a part of the syntax revolution. Lol. I am not picking on any one or any group - just a general semantic observation 🙂

  • @ChrisJones-xd1re
    @ChrisJones-xd1re 10 ай бұрын

    Semantic understanding can emerge with sufficient instruction, and cannot without it.

  • @ydmos
    @ydmos Жыл бұрын

    Maybe we're overestimating the role of "understanding" (semantics) here. Assume the mind is, in fact, the equivalent of a computer, and that it, too, is in the Chinese room. Perhaps what we call understanding is just part of the programming, part of how it gets from the inputs (what we see, hear, touch, i.e., how our body senses the physical world) to the outputs (how we interact with that world). We've shown recently that one way AI does its thing is to come up with its own models to interpret input; perhaps it's generating its own semantics. Under this view, our semantics is something we've created to process the world we live in, defined by how we sense it. A computer's semantics will be something else entirely, perhaps incompatible.

  • @prismaticsignal5607
    @prismaticsignal5607 3 жыл бұрын

    I bet you’re your students' favorite teacher.. XD Awesome lectures!!

  • @matbroomfield
    @matbroomfield Жыл бұрын

    So if you take a machine designed never to have understanding, it can't have understanding? What a superb insight. All you have to do is define a computer so narrowly that by definition it meets Searle's criteria, then it meets Searle's criteria? What a thought leader.

  • @bradleyboyer9979

    @bradleyboyer9979

    7 ай бұрын

    You seem to be ignorant of how all computers work. All computers operate in the way Searle described. Inputs and outputs. Doesn't matter if it was the first cpu ever created or the supercomputers of today (though Quantum Computing is arguably different due to our modern understanding of physics).

  • @ChadEnglishPhD
    @ChadEnglishPhD 8 ай бұрын

    Great explanation. Three criticisms come to mind. The first is essentially a false dichotomy: the argument asserts that "semantics" and "syntax" are mutually exclusive. Indeed, in the Chinese Room scenario, semantics is not produced within the scenario, but that does not mean semantics can never be produced from syntax. It presumes that what we call "semantics", "meaning", or "understanding" is not just a complex form built out of syntax.

    Consider how we "learn". You input an apple to a system. By "input", I mean via the senses: you see many "images" of apples, meaning photons enter your eyes and cause a pattern of signals to your brain. You also "feel" an apple, meaning nerves in your fingers send signals to your brain. Taste, smell, and even the sound of biting one: all patterns of electrical signals. Your brain correlates each of these in its own domain (what is visually similar about all of them, what smells similar, etc.) and creates a "ledger" of templates of the apple based purely on domain (sight, smell, sound, taste, touch), recording in the ledger that these are all related to the same thing. Also on that list of inputs is language. If each time we recorded data in these domains on this item we also heard the sounds (input signals coming from the ears) corresponding to the English word "apple", or saw images (signals from the eyes) of the shapes a-p-p-l-e, then the domains (input doors) of audible or written speech also have correlated entries in the ledger. These templates are correlations and simplified representations of apples, and they correlate with other things in the ledger such as other round things, other red things, other foods, fruits, etc.

    Now suppose somebody "new" takes over; e.g., we forget that we've ever seen the word "apple". The symbols come to the door: "What does an apple look like?", but we don't understand English or letters. We open the ledger and look for those symbols. The response in the ledger is on the page with all of the other domains about apples. We get symbols at the door that look similar but slightly different, "What does an apple taste like?", and then "smell like", etc. But we aren't just rule-following. We are also continually running the same correlation machine as above. We correlate the symbols at the door shaped "a-p-p-l-e" with the same page in the ledger, but different sections. We also correlate questions (symbols at the door) containing the symbols "s-o-u-n-d" with the top of any given page in the ledger, while "t-a-s-t-e" always has a response at the bottom of the page. Over time, we associate (correlate) the symbol "apple" with that page in the ledger, "sound" with the top line on the page, "taste" with the bottom, "shape" with the middle. Now we see new symbols at the door, with the recurring "p-o-m-m-e". The ledger instructions say to look up the same page as "apple", and specific areas of the page, but to send back "rouge" instead of "red".

    So now what is the difference between this situation and "understanding", "meaning", or "semantics"? We apply those words to the ability to draw on correlated patterns. We've "learned" through correlation, and through the organized structure of the information, what the symbol "apple" means (a page in the ledger), what "sound" means (a top-of-page response), "taste" (bottom of page), etc. We learned that "pomme" is another symbol for "apple", and "rouge" is another symbol for "red". We learned these things only through the same activities as the Chinese Room. What we added was (a) memory storage, (b) correlational computation, and (c) the ability to add to the ledger. All of these things are also done by digital computers. The Chinese Room scenario simply artificially limited the capabilities of digital computers, and of the humans in the room. More complex behaviours can come from simple ones. A real person in that room could also remember symbols, recognize patterns in the symbols, recognize organizational structures in the ledger, inputs, and outputs, and could "learn" Chinese from these patterns.

    Now, you might say they haven't learned the meaning because they can't associate the symbols for an apple with the real-world apple, but that is because we've artificially limited the input signals to messages at the door. They can understand an apple in terms of their input environment. The thought experiment assumes the pre-existing environment of all the other senses we humans have, which are denied in the scenario. But in that context, humans also can't "understand" anything beyond our input environment. We don't have context for relativistic effects, X-rays, infrared, frequencies we can't hear, etc. Other beings with different input capabilities might "understand" differently from us.

  • @finald1316
    @finald1316 Жыл бұрын

    Aside: there is a small nuance with the Chinese language. The symbols are tied to meanings, not phonetics, so just as you can decrypt messages using letter frequencies, it is plausible that you could infer the meaning of some symbols (although never be certain of it) from their expected frequency. The symbol for the moon is tied to the month, which relates to the feminine via the menstrual cycle. Not that you couldn't try the same approach in other languages, but they have a layer of indirection due to being tied to phonetics.
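A minimal Python sketch of that frequency idea; the corpus and the "expected" frequency table are invented for illustration only:

```python
from collections import Counter

# Hypothetical reference data: expected relative frequencies of a few
# Chinese characters in ordinary text (illustrative numbers, not real stats).
expected = {"的": 0.041, "一": 0.015, "月": 0.006}

def guess_symbols(corpus: str) -> list[tuple[str, float]]:
    """Rank symbols by how closely their observed frequency
    matches an expected frequency from the reference table."""
    counts = Counter(corpus)
    total = sum(counts.values())
    guesses = []
    for sym, exp_freq in expected.items():
        obs = counts.get(sym, 0) / total if total else 0.0
        guesses.append((sym, abs(obs - exp_freq)))
    return sorted(guesses, key=lambda pair: pair[1])

# The smaller the gap, the more plausible the identification --
# though, as the comment says, you could never be certain.
print(guess_symbols("的的一月的一的"))
```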

  • @koenth2359

    @koenth2359

    Жыл бұрын

    That was what I was thinking. Ironically, the problem of decoding Enigma was therefore much more complex than this task.

  • @leslierhorer1412

    @leslierhorer1412

    Жыл бұрын

    Not only frequency but, more importantly, context. If the inputs to the system allow it to assess when certain syntaxes are encountered, i.e. context, then the system can begin to make certain inferences about the syntax itself. This is the emergence of a realization of semantics from syntax in an empirical framework. I submit that such an ability to scrutinize the syntax is critical to the development of a semantic organization, but that it is indeed possible if the coding is also malleable. In addition to his questionable assumptions, Searle seems to be completely ignoring these factors. He is asserting that learning must be limited only to efficiency in the translation mechanism.
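A sketch of the context point in Python, over an invented toy corpus: tabulating which symbols co-occur is a first, purely syntactic step toward inferring relatedness.

```python
from collections import defaultdict
from itertools import combinations

# Invented toy corpus: each string is one "message" of symbols.
messages = ["AB", "AC", "BC", "ABD"]

# Count how often each pair of symbols appears in the same message.
cooccur = defaultdict(int)
for msg in messages:
    for a, b in combinations(sorted(set(msg)), 2):
        cooccur[(a, b)] += 1

# Symbols that keep appearing together are inferred to be related:
# a semantic-looking structure recovered from syntax alone.
for pair, n in sorted(cooccur.items(), key=lambda kv: -kv[1]):
    print(pair, n)
```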

  • @koenth2359

    @koenth2359

    Жыл бұрын

    @@leslierhorer1412 It is not all too different from what an infant accomplishes when he/she is trying to make sense of all sensory inputs, and finally manages to learn a language. And the infant manages! (#Chomsky #Language Acquisition Device).

  • @finald1316

    @finald1316

    Жыл бұрын

    @@koenth2359 I am not aware of how the Enigma machine worked, but there are more words than letters. From a data standpoint, if Enigma works over letters, it should be easier to crack.

  • @finald1316

    @finald1316

    Жыл бұрын

    @@leslierhorer1412 Reminds me of IQ tests, but you can only be sure if you check the solutions. I guess that is another discussion, related to how language is constructed. There is some correctness in ignoring that if you account for "older" AI solutions, which are not generic in nature and are just algorithms for computing something very specific. However, when the system learns using an arbitrary number of layers, the "interpretation" of the input is not an algorithm set in stone; rather, we have implemented a sort of meta-algorithm, i.e. the instructions that will give us the instructions to return the correct Chinese character.

  • @dowunda
    @dowunda Жыл бұрын

    How does Searle define what it means to understand something? Viewed exclusively in the physical world, people can be seen as a kind of computer, the brain itself being a kind of Chinese Room.

  • @recompile

    @recompile

    Жыл бұрын

    Searle makes a convincing case that whatever it is that brains do, it is not mere computation. You might think you're a computer, but that's just because that's the most advanced technology we have at the moment. 100 years ago, you might have thought the brain was like clockwork. The idea that brains are like computers will seem just as silly in the not too distant future.

  • @dowunda

    @dowunda

    Жыл бұрын

    "Brain: an apparatus with which we think we think.“ - Ambrose Bierce

  • @bombmk2590

    @bombmk2590

    Жыл бұрын

    @@recompile I have yet to see a convincing argument that it is anything but computation. How could it be more?

  • @costadev8970

    @costadev8970

    Жыл бұрын

    ​@@bombmk2590 you have subjective experiences, a computer (symbol manipulator) does not.

  • @calorion

    @calorion

    Жыл бұрын

    @@recompile "The brain is like clockwork" is not really a silly idea. Unsophisticated, sure. As we get better computers, we'll get a better understanding of how brains work. But a physicalist determinist basically does think that the brain is like clockwork on a theoretical level.

  • @mohnaim5824
    @mohnaim5824 8 ай бұрын

    Impressive talk yet again, well done you are a natural inheritor of Sagan.

  • @foadkaros708
    @foadkaros708 Жыл бұрын

    Besides the fact that this is nothing but world-class content shared accessibly with the world, I was trying for a long time to figure out how you managed to learn to write mirrored so that it appeared correct to the viewer. Then it hit me: you mirrored the image. Absolutely brilliant move!

  • @Gottenhimfella

    @Gottenhimfella

    7 ай бұрын

    Does his face look unlike the chirally correct one, or is it more (or less) sinister?

  • @emanuelbalzan7667
    @emanuelbalzan7667 Жыл бұрын

    Absolutely love this presentation. I have only one criticism: the description of Chinese symbols as "squiggle squiggle" or "squoggle squoggle" is not racist. English (or Latin) characters would appear the same way to a Chinese person who didn't know what they were. I am old-fashioned enough to believe that the word racism should be reserved for beliefs of racial superiority and inferiority, and for the behaviors of injustice and exploitation that flow from such beliefs. I wouldn't bother anyone with this, except that I really do feel we need to be a little less sensitive about these issues. They are increasingly being used to fracture a very polarized society. I would not take offense at anyone referring to my writing as "squiggle squiggle" or "squoggle squoggle" even if they could read English, but perhaps that's because my handwriting verges on the indecipherable anyway.

  • @t.b.a.r.r.o.

    @t.b.a.r.r.o.

    Жыл бұрын

    Agreed. Though I would call some written English "hodgepodge" or "scribble-scrabble".

  • @GynxShinx

    @GynxShinx

    Жыл бұрын

    The only problem comes when actual racists are ostracized from society, so they hide their actual views and dogwhistle them by saying stuff like "Chinese is just squiggle squiggle." When said by a racist, it implies that Chinese people aren't smart enough to create real language, which IS a supremacist idea. Now, should we react by calling someone a racist when they say "Chinese is a bunch of squiggles"? I doubt it. But should we be suspicious of them? Sure. If you know the individual and know they don't do legitimately racist stuff, then they're fine.

  • @magicpigfpv6989

    @magicpigfpv6989

    Жыл бұрын

    Ask to see your doctor's handwriting… that shit is nothing but squiggles!

  • @lokouba
    @lokouba Жыл бұрын

    I argue "Strong AI" won't have a necessity to truly "think" if their instructions are elaborate enough to give the ILUSION that it thinks. The actual subject of the experiment is not the person in the room, it is the person OUTSIDE the room. And the idea is that if the person inside the room is trained to find these characters quickly enough so they can respond as quickly as if they understood the message written on the paper. They could be convincing the person outside the room that they actually understand chinese. The idea, is that you can put a person inside of the room or an AI bot inside of the room and it would make no difference from the point of view of the person outside of the room, if you tell them there is another chinese person in there and tell them to write messages to them, they will likely believe its a human chinese speaker in both cases. The conclusion i draw from this is that you give "Strong AI" enough tools, enough instructrions and most importantly a "chinese room" to cover it's true nature it can pretend to be an actual being that "understands Semantics" because human beings are only able to communicate through Syntax.

  • @udarntors

    @udarntors

    Жыл бұрын

    This is simple to refute. We share meaning, not syntax; syntax may be lacking in a conversation or be minimal, but without shared semantics/meaning there is no communication. Example: you can understand a small child or a foreigner who does not use proper grammar. Here is some syntax: find . -type f -empty -prune -o -type f -printf "%s\t" -exec file --brief --mime-type '{}' \; | awk 'BEGIN {printf("%12s\t%12s\n","bytes","type")} {type=$2; a[type]+=$1} END {for (i in a) printf("%12u\t%12s\n", a[i], i)|"sort -nr"}' Here is some meaning: "Flower in the crannied wall, / I pluck you out of the crannies, / I hold you here, root and all, in my hand, / Little flower - but if I could understand / What you are, root and all, and all in all, / I should know what God and man is." Alfred Tennyson, 1863.

  • @lokouba

    @lokouba

    Жыл бұрын

    @@udarntors It seems you misunderstand the difference between syntax and semantics. You say you are presenting one example of syntax and one example of meaning, but you are in fact presenting syntax in both cases, because semantics isn't a "message", it's a "concept". The English language is a syntax; C++ is a syntax. And of course shared meaning is part of any conversation, but my point is exactly that these AIs are programmed by people who understand the semantics of the words they are inserting into their repertoire of syntax. Because the relationship between syntax and semantics can sometimes be fuzzy, syntax itself can be utilized for deception too; that is the basis for the concept of "doublespeak". Language is only a form of expression, but humans truly lack a reliable way to filter which syntax is backed up with "truth" or (in the case of AIs) "thought".

  • @udarntors

    @udarntors

    Жыл бұрын

    @@lokouba I wasn't really clear in my little exposition there. I think that "syntax" and "semantics" are, in fact, as you say, concepts that pertain to language and linguistics. One covers the structure of language, the rules that govern the placement of words in a phrase; we call this one syntax. The other is all about meaning and the relations between words in linguistics; we call it semantics. I see it as structure and content, form and substance. So: "The crocodile flew between the slices." Correct syntax here; absolutely meaningless. I am in total agreement with all you have said about the fact that you can fool humans with sufficient processing power and with fiddling with the configuration to accommodate the social norm. My reply was about this statement only: "human beings are only able to communicate through Syntax." Syntax helps to communicate *correctly* according to the social conventions of the time you are in... So, my examples were, in fact, a meaningless but beautifully structured line of bash and a meaningful poem. One is a command that will be interpreted and transformed into lights on a screen as programmed, and the other is a tale of epistemology, causality, and determinism.

  • @lokouba

    @lokouba

    Жыл бұрын

    @@udarntors Aha, I see. Maybe I should have worded that better. I meant to say that communication between beings is only possible through the use of some sort of syntax, at least by my conventional understanding of what constitutes "communication".

  • @irrelevant_noob

    @irrelevant_noob

    Жыл бұрын

    @@udarntors and yet, Alejandro is right that in any communication only the syntax is "given" (or "shared"). The fact that one party attributes some specific meaning to the terms in the message has no effect on how the digital actor (a Turing machine, the person in the room, an AI bot, etc) will process the message. Whether or not that actor *_will_* in fact extract some (subjective) meaning(s) from the message is unknowable. But in any case, the meaning itself is not intrinsic in the message, it is only "in the eye of the beholder"/interpreted by whoever analyzes the message. @AlejandroRodriguezP that last part of the OP seems to me to be a kind of "Turing test" for semantics: is the digital actor in the room "good enough" to convince the outside person(s) that they understand Chinese? :-)

  • @quokka_11
    @quokka_11 10 ай бұрын

    20:19 "You're never going to be able to figure out semantics from syntax." Except we're talking about human language, and you already have your own experience of that. With exposure to enough earnest material (not nonsense), you would eventually make meaningful connections and at least some of the semantics would emerge.

  • @Leao_da_Montanha
    @Leao_da_Montanha Жыл бұрын

    If humans understood the semantics of words in the way needed to disqualify strong AI, as Searle claims, there would be no communication problems at all, as if different orderings and explanations in the learning process resulted in the same meaning for every word in every mind. In general, semantics differ for each individual and depend on the learning process they went through; in depth, the memory we acquire for each symbol is updated as we learn newer symbols, until there is enough context for semantics. In other words, we all work fundamentally as Turing machines, arranged in a complex system. I would love to read comments on this, feel free to respond.

  • @ronald3836

    @ronald3836

    5 ай бұрын

    It would be easier to comment if I disagreed with you, but I don't 🙂

  • @wirewyrm01
    @wirewyrm01 Жыл бұрын

    There is a paradox in the thought experiment. The person in the room is tasked with manipulating symbols, not with trying to figure out what they mean. Therefore, it follows naturally that the person in the room cannot (or, more accurately, will not) figure out what the symbols mean. Indeed, the meaning of the symbols is completely irrelevant, so positing that the person could never understand the semantic meaning of the symbols is also irrelevant, because that was never part of the design. On the other hand, I would propose that Searle's assertion that the person in the room can never gain an understanding of the symbols even if they tried is false. Perhaps the person cannot gain much insight from studying the symbols alone, but if the person studied the *instructions*, surely they would be able to glean some information about the symbols and their contextual use. Patterns will emerge from the frequency of use of certain symbols, the association of certain symbols with each other, symbols occurring in question-answer pairs, and so on. Furthermore, from the frequency and sequence of "questions" received, the person can also start to observe patterns and eventually triangulate the semantic meanings of the symbols. In fact, many of these techniques are used in the study and research of dead languages. There are other problems that I can see with the thought experiment, but these are the most easily defined ones.

  • @LoraxChannel

    @LoraxChannel

    Жыл бұрын

    Yes. This is exactly why modern AI is fed huge bodies of language and tasked with creating context, relationships, and distinctions, just as we do in language. Such systems are no longer limited to manipulating digits. I mean, that is the whole point of designing AGI: so it can learn and assign "meaning" independently.

  • @stefans.5043

    @stefans.5043

    11 ай бұрын

    The person inside the room will never know the meaning of the symbols when he doesn't know the question he is asked or the answer he gives. In this experiment he only acts on given instructions and not on human behavior like observing or recognizing patterns. And even when he sees patterns, he still doesn't know their semantic meaning. thts th dffrnts btwn hmns nd cmptrs: you probably can read this last part only by knowing the meaning of the words and not the meaning of the symbols. A computer cannot.

  • @fang_xianfu

    @fang_xianfu

    11 ай бұрын

    Yes - the other part is, where does the book come from? Real minds write their own book of rules to manipulate the Chinese symbols, and they edit their own book as they try new things and they succeed or fail.

  • @SatanDynastyKiller

    @SatanDynastyKiller

    11 ай бұрын

    I knew someone in here would be as smart as me lol - saves me typing it, thank you. The easiest way to cover it all- until we understand everything, we understand nothing. I genuinely think some of these “intellectuals” are not exactly what they claim they don’t claim to be…

  • @LoraxChannel

    @LoraxChannel

    11 ай бұрын

    @@SatanDynastyKiller It's smart as I. When you are claiming smartness, it matters.

  • @Conserpov
    @Conserpov Жыл бұрын

    If the people outside are determined to teach the person inside to understand Chinese, I think they can do it, to an extent. It may require at least two distinct inputs though. This problem comes up IRL with children who are blind and deaf from birth.

  • @teddydunn3513

    @teddydunn3513

    Жыл бұрын

    Exactly. The Chinese room is setup to treat visual sensory inputs as somehow special and more "real" than other inputs.

  • @pumkin610

    @pumkin610

    11 ай бұрын

    Blind from birth... that reminds me how we can't really describe colors in a way that would let a person who has always been blind know what they really look like, aside from black, if you consider that to be a color. But that's only because I assumed they'd be seeing black all the time, since we see black when we're in the dark; they aren't seeing anything at all, just as hands and feet don't detect light. Red is hot, it's intense, some roses are red; blue is calm, the sky is blue; green is grass; yellow is the brightest color. Colors are their names and the specific visual sense that they are, I suppose. Maybe we aren't seeing colors for what they are either; to me certain colors are certain emotions, certain vibes, and certain things.

  • @pumkin610

    @pumkin610

    11 ай бұрын

    There's gotta be a touch based language, right

  • @theconiferoust9598

    @theconiferoust9598

    10 ай бұрын

    you can give them a rulebook to input->output and learn, but it is their conscious experience as a human that will allow them to find meaning, not the rulebook.

  • @ronald3836

    @ronald3836

    5 ай бұрын

    @@theconiferoust9598 the rules in the rule book/weights of the neural network/connections between neurons get modified as you learn.

  • @DonaldRichards-mr3lz
    @DonaldRichards-mr3lz 7 ай бұрын

    A conversation with me is the same as navigating a decision making flow chart with extremely complex conditional statements and gates.

  • @zach358
    @zach358 7 ай бұрын

    The conclusion seems more like a critique of what we do with digital computers than of the limits of their potential. The Chinese Room experiment is a static room with one never-changing set of instructions. If several other rooms continuously changed and adjusted the instructions of the Chinese Room over time, thus giving it purposes or knowledge beyond the basic rules originally set forth, that would be a closer representation of the human mind (being taught by other human minds or other inputs)... the only other thing you'd need to add is the sense of freedom of choice: the ability to think independently of tasks given, despite being given a task.

  • @davidn4125
    @davidn4125 Жыл бұрын

    I suppose the same could be said of the human mind as a computer program. If one were to map all the neural connections then you would be able to know the output of a person's brain given the input signal. It's just that we don't have a way of mapping the connections fully but that doesn't mean they aren't there. Also the mind does change over time but so can a computer program since AI's are now able to rewrite their own code.
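A toy Python illustration of the determinism point: once the "connections" (weights) are fixed, the same input always yields the same output. The network shape and weights here are arbitrary, not a model of any real brain.

```python
import math
import random

random.seed(0)

# A toy 2-layer network standing in for a fully mapped set of connections.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w2 = [random.uniform(-1, 1) for _ in range(4)]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs: list[float]) -> float:
    """With fixed weights, the same input always yields the same output."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w1]
    return sigmoid(sum(w * h for w, h in zip(w2, hidden)))

print(forward([0.2, 0.5, 0.9]))  # deterministic once the "mapping" is fixed
```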

  • @ronald3836

    @ronald3836

    5 ай бұрын

    Agreed. And even if the human mind is somehow "more" than a computer program, Searle's argument does not show this in any convincing way. Ultimately he has nothing better than "syntax can never be semantics" and "humans do semantics". But it is his lack of imagination that tells him that syntax cannot mimic semantics sufficiently closely that we cannot tell the difference. (And interestingly the premise of his experiment is that a syntactic rule book CAN mimic semantics.)

  • @Sergiopoo
    @Sergiopoo Жыл бұрын

    wouldn't a single neuron in the brain be the person in the chinese room, while the brain is the system?

  • @cosmictreason2242

    @cosmictreason2242

    7 ай бұрын

    Does a single neuron process input and output with correct syntax? That would be like saying a light bulb in the room could give the correct response.

  • @saurabhpatel4352
    @saurabhpatel4352 8 ай бұрын

    Very interesting... I think the key question here would be what it means to 'understand' something, and what exactly experiences this understanding. If we assume there is no mind, then is understanding an illusion created by a certain dance of millions of neurons? When I say the word 'jump', you take that as input through your senses, it strikes many different parts of the brain and brings forth certain feelings from the auditory, visual, motor, etc. centers, and for some time we keep all these neurons activated; their communication with each other brings forth the feeling of understanding the word 'jump'. BUT how many neurons can we remove before this understanding fades? Is there any part of the brain that is just doing the understanding, the final "I" inside the brain? I can imagine that animals can also understand the word 'jump', but maybe their understanding is at a more rudimentary level, perhaps the same as a toddler's. Maybe understanding comes at different levels of sophistication, from a simple remote-controlled car to human beings... could it also be that all human beings interacting together are creating an even more complex 'mind' that can 'understand', so that a nation understands something and reacts to other nations? (Sorry, no coherent thought here, going off on multiple tangents.)

  • @sanjeevkulkarni4923
    @sanjeevkulkarni4923 7 ай бұрын

    Does the system that wrote the rules for which symbol to pick as the correct answer understand the Chinese language?

  • @jorgemachado5317
    @jorgemachado5317 2 жыл бұрын

    What Searle has apparently discovered is that a computer alone would be a zombie. But a human alone would be a zombie too. Semantics is not an intrinsic part of the world; semantics is what emerges from sociability. If a strong AI spent enough time with humans, I think it would stop being a zombie at some point.

  • @annabizaro-doo-dah

    @annabizaro-doo-dah

    2 жыл бұрын

    What about when humans perform myriad behaviours they have no understanding of? I was thinking in particular of syntax. I learnt no formal grammar at school; literally no syntax. Yet I understand how to write formal English pretty well, I think. I perform the function of writing formal English without understanding almost any of the rules on a conscious level, except perhaps how to use basic punctuation.

  • @jorgemachado5317

    @jorgemachado5317

    2 жыл бұрын

    @@annabizaro-doo-dah Unless you believe there is something like a material ground for syntax (which I don't think is true), I believe that this learning is just a mimetic process. That explains why things change historically. People are learning new ways to perform and changing those processes through the output of those same processes. EDIT: By material I mean a physical object. Of course syntax is material in the sense that it exists as an abstract concept.

  • @recompile

    @recompile

    Жыл бұрын

    You've completely misunderstood Searle. Go do the suggested reading and try again.

  • @jorgemachado5317

    @jorgemachado5317

    Жыл бұрын

    @@recompile You wrong! Hur dur Go read!

  • @superactivitylad
    @superactivitylad Жыл бұрын

    I like the "systems" response to the problem, and I think about it this way: My eyes do not understand symbols. My eyes receive light, do whatever it is they do with that light, then send that information through my optical nerve, into my brain, then neural pathways that were formed when I first learned about that symbol fire up again, and then a bunch of complicated interconnected stuff in my brain happens that makes "me" (the system as a whole) understand that I'm looking at the number 69, and the meaning behind it, and I say "nice." my point is that no individual part of my nervous system understands anything. they all individually just receive electrical or chemical information, and then do something with it, and send some kind of information to the next part. i believe its possible to design a system with digital computers that replicates how the brain works. we just need to first understand how all that "complicated interconnected stuff" works first.

  • @kenking3868
    @kenking3868 Жыл бұрын

    Great lecture. Thanks so much. Where does a baby get vocabulary from, and how do they learn to respond? If a mother teaches a child to respond to pain with "that's nice", won't the child use that response? Syntax and semantics: where do you draw the line?

  • @izboy98
    @izboy98 9 ай бұрын

    my right ear enjoyed this video slightly more than my left ear

  • @impyre2513
    @impyre2513 Жыл бұрын

    Personally, I feel like the systems response idea makes a lot more sense... But it only works if the system is able to self-modify. If this system as a whole is meant to represent someone that understands Chinese, then it must first demonstrate the ability to form responsive queries that actually make sense, and then potentially make adjustments to its programming depending on the responses received. But that hits the crux of the problem, because it would have to be a pretty fancy rulebook to have that functionality built-in.

  • @JohnDlugosz

    @JohnDlugosz

    Жыл бұрын

    ChatGPT, especially in GPT-4, fluently translates to Chinese and other languages. You can offer corrections or style guidance and it corrects itself and remembers that moving forward...but this does not alter the model! The nature of the Transformer holds the recent memory of conversation as the input to the next pass. GPT-4 is a literal Chinese Room, running on the Azure data center. Translating to Chinese is not something it practiced with feedback during the learning phase. All it did was read text in different languages and learn the patterns within those languages. Meanwhile, it gained the skills to converse convincingly, translate languages fluently, do algebra, solve logical problems, write code, create web pages, and much more, all from this "fancy rulebook". The code implementing the neural network is for processing tokens, input and output. When the system undergoes "deep learning", that code does not change. The learning is in the weights between the neurons (or "parameters"). So, once learning is complete, the knowledge is in this structure, not the (same) low-level code being executed.
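A schematic Python sketch of that point: the weights stay frozen at inference time, and a "correction" persists only because it is carried along in the growing context. `generate` here is a stand-in function, not a real API.

```python
# Schematic only: generate(context) stands in for a frozen transformer's
# next-reply function; no real library or model is being called here.
def generate(context: str) -> str:
    return "<model reply given: ..." + context[-40:] + ">"

context = ""          # the conversation so far *is* the model's memory
for user_turn in ["Translate 'hello' to Chinese.",
                  "Please use a more formal register."]:
    context += "\nUser: " + user_turn
    reply = generate(context)   # weights unchanged; only the input grew
    context += "\nModel: " + reply
    print(reply)
```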

  • @glenblake3021

    @glenblake3021

    Жыл бұрын

    Sure, but that's a problem with the analogy. It's meant to be an analogy for strong AI, and if you designed a system attempting to be a strong AI but it lacked the ability to modify its own rules, well, you've fundamentally misunderstood the problem you're trying to solve. Lack of imagination on Searle's part, imo. One of the more irritating things about the Chinese Room.

  • @rrshier

    @rrshier

    Жыл бұрын

    @@JohnDlugosz I disagree that ChatGPT is a literal Chinese Room if your statement that "You can offer corrections or style guidance and it corrects". That statement alone means you are solving the problem for the processor in the room. The Chinese room thought experiment is the idea that there is no connection between the language the processor in the room knows, and the language (with differing symbols) being passed through the room. Your statement provides the connection, thus NOT a literal Chinese room.

  • @harrygenderson6847

    @harrygenderson6847

    Жыл бұрын

    @@rrshier No, ChatGPT does not literally understand the meaning of the statement you are passing it. It 'reads' it as a series of tokens and applies some weightings to calculate the most likely follower. The model itself is part of the rulebook, and the prompt you give it is the string of arbitrary characters being fed in. It could tell you the meaning of English or Chinese, but it doesn't internally separate English and Chinese or derive truth from the statements or something. But it's an abstraction that we apply to the system, the same way we do when fellow humans create waves of pressure in the air. Also, just so you know, the 'Chinese room' is turing complete, and can thus be simulated by any turing machine (such as a digital computer) and can simulate any turing machine (such as a digital computer). So ChatGPT could be run inside a literal Chinese Room.

  • @RyanShier

    @RyanShier

    Жыл бұрын

    @@harrygenderson6847 - Actually, the Chinese Room is not Turing complete, as there is no way of storing state or of feeding an output back in. There is a strictly defined set of rules which cannot change (that is where a feedback path and state storage could come into play). In fact, as defined, it is the literal opposite of Turing complete. Using the example given on the wiki, the person inside the room with the strict rule set is akin to HTML (not Turing complete). ChatGPT, on the other hand, given that you CAN feed it other state that it can store and use to give differing answers, is indeed Turing complete. If the secondary inputs are used for further training of the GPT model, then it is most certainly Turing complete. In terms of understanding meaning, neither do we, without the context of surrounding words, placement within a sentence, etc.

  • @YoutubeHandlesSuckBalls
    @YoutubeHandlesSuckBalls Жыл бұрын

    At its core, this is an argument from incredulity, based on the fact that 'I' can 'see' and that it is considered unbelievable that a programmer could write code that has the experience of being an 'I' capable of 'seeing', and by extension of having the impression of a sense of self. Searle's argument is that because a single neuron cannot understand Chinese, it is not possible to have a person who understands Chinese.

  • @t.b.a.r.r.o.
    @t.b.a.r.r.o. Жыл бұрын

    All this and yet here we are, approaching the Singularity.

  • @DrVaderific
    @DrVaderific 8 ай бұрын

    Loved the lecture. Quick question: now with the advent of 'machine learning' and apps like ChatGPT, we know that we can teach these machines semantics in some sense. Perhaps the mind can be thought of as a 'machine' that runs the algorithms of 'machine learning'. Any thoughts?

  • @conspiracycrusader6687

    @conspiracycrusader6687

    8 ай бұрын

    Not the poster, but I don’t think we can consider a computer to be conscious until it says no without it being told it can do so

  • @numericalcode

    @numericalcode

    3 ай бұрын

    Locke, Hume and other empiricists long ago thought humans learn by generating associations just as programs like ChatGPT do.

  • @dylanberger3924
    @dylanberger3924 9 ай бұрын

    I love this thought experiment because of the two assumptions it grants computer science that we'll likely never even see emerge from the field. 1) A perfect set of instructions: CS needs to produce a perfect state table for the Turing machine, as the book in the room is assumed to be. 2) You are a human with a brain trying to pick up on meaning, with memory and cognitive ability in particular. You are aware of the fact that these symbols could even represent an idea, and you can pick up on traits you naturally know belong to language, e.g. via pattern recognition. MAYBE, just MAYBE, you could learn Chinese. A Turing machine is a set of transistors firing; it isn't looking for any of that. After all, how would it "think" to? I'll probably elaborate and finish this when my phone isn't about to die and I'm not this tired. But something to think about.

  • @xirenzhang9126

    @xirenzhang9126

    5 ай бұрын

    spoiler alert: he never elaborated and finished his comment

  • @p.bckman2997
    @p.bckman2997 Жыл бұрын

    There's clearly intelligence (semantics) in the Chinese Room; it's just not the person in there who provides it. The actual intelligence comes from the rulebook, which requires an actual intelligence to write (and possibly a superhuman one at that).

  • @DocBree13

    @DocBree13

    Жыл бұрын

    I’d argue that a book explaining semantics is not intelligent and does not understand semantics. Something which requires intelligence to write is not then made intelligent.

  • @p.bckman2997

    @p.bckman2997

    Жыл бұрын

    @@DocBree13 Well, it's a matter of how you frame it, I guess, which is often the case in philosophy. The book is just paper and ink and clearly not sentient, as you say. The instructions are meaningful, though; I would say that the intelligence of the writer is stored on the pages. The intelligence that answers the input questions is the book's writer; he's just using other means to do so than sitting in the box and answering them himself.

  • @user-ju7dx8mu6d
    @user-ju7dx8mu6d 7 ай бұрын

    Fascinating. Perhaps the system doesn't have semantics, but as soon as the system does something with its output, the result is indistinguishable from understanding meaning. The room instructs a mechanical arm to pick up the red block; the machine appears to understand "pick up", "red", and "block". Once an action is applied to the output, how is the machine's concept of meaning any different from whatever our concept of meaning is?

  • @neilgilstrap7839
    @neilgilstrap7839 Жыл бұрын

    This was great, thank you. While the "system" response is dismissed, and there is commentary below about emergent behavior, as someone who studied AI and neural networks, my view is that the systems argument was dismissed because it wasn't refined enough in EXPLAINING what is meant by the "system" to pose a threat to the thought experiment. Had it been adequately explained, directed at HOW that emergent behavior works instead of gesturing at "emergent system behavior", I think Searle would have had a much harder time rebutting it. The short version is this: while any given "box" indeed does not understand semantics, two key points need to be added to the systems argument to present the full rebuttal. 1) We agree that any given "box" can receive inputs and produce any output based on a set of rules given to it. Suppose then that one of those "outputs" is readable as the "semantic meaning" of the input (as in the thought experiment). Now combine this with #2. 2) Suppose that the output of any given "box" is the INSTRUCTIONS for another box, not just another INPUT; i.e., one box is outputting the instructions another box follows. Effectively, "the computer is writing its own code." When you consider the possibilities that #2 implies, you arrive at how the "system" can understand semantics. Simply put, in the Chinese Room experiment, it is the Chinese speakers who provide the instructions for the computer such that, given an INPUT, the OUTPUT will have semantic meaning to the individuals reviewing it. Yes, the box is not aware of the semantics of the output, but the people who wrote the instructions for the box are very aware of it and provided the INSTRUCTIONS so that the computer would produce semantically meaningful output. Then, all you have to do is realize that the output of a box could be the INSTRUCTIONS for another box (i.e., the computer/brain in this case is writing its own code), and you quickly arrive at the conclusion that the system as a whole CAN produce semantics. Furthermore, it's not just a weird, emergent, unpredictable phenomenon; it follows logically and simply how semantics can be produced.
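A minimal Python sketch of point #2, with invented placeholder rules: one box's output is literally the next box's rule table, so the system rewrites its own "code".

```python
# Box A's "output" is a brand-new rule table for Box B -- the system
# is writing its own program. All rules here are invented placeholders.
def box_a(observed_input: str) -> dict:
    """Emit instructions (a rule table) tailored to what was seen."""
    return {observed_input: observed_input + "!", "default": "?"}

def box_b(rules: dict, symbol: str) -> str:
    """Blindly follow whatever rule table it was handed."""
    return rules.get(symbol, rules["default"])

rules = box_a("ni hao")          # Box A programs Box B
print(box_b(rules, "ni hao"))    # -> "ni hao!"
print(box_b(rules, "zaijian"))   # -> "?" until Box A updates the rules again
```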

  • @amaarquadri
    @amaarquadri 2 жыл бұрын

    I would push back against the idea that you can never learn semantics from syntax alone. I think that, given enough time in the Chinese room, you would eventually learn Chinese just as well as a native speaker. Consider the GPT-3 language model discussed in these videos: kzread.info/dash/bejne/kWytuLF8ZMbPiMY.html, kzread.info/dash/bejne/gqWWpJJwnsLbgZc.html. Despite learning only from the syntax of a large corpus of English, it is able to construct coherent, well-thought-out sentences. For all intents and purposes, it (or at least a future, more advanced version of it) does "understand" the semantics of language. In a certain sense, if you zoom in enough, human brains are just manipulating electrical inputs and producing electrical outputs with no understanding of what they mean semantically; it's just a set of particles following the laws of physics. Nonetheless, the system as a whole can develop understanding.

  • @guillecorte

    @guillecorte

    Жыл бұрын

    The point is that you have to use scare quotes, because you know it is not real "understanding". Besides, you missed the point: it is not whether, after years in the room, you could learn Chinese or not, but that you could answer "in Chinese" without really understanding it.

  • @MuantanamoMobile

    @MuantanamoMobile

    Жыл бұрын

    "Colorless green ideas sleep furiously" is a sentence composed by Noam Chomsky in his 1957 book Syntactic Structures as an example of a sentence that is grammatically well-formed, but semantically nonsensical. GPT--3 often makes Noam Chomskyesque statements that are syntactically correct but nonsensical, because it doesn't understand.

  • @francesconesi7666

    @francesconesi7666

    Жыл бұрын

    Advanced symbol manipulation =/= understanding

  • @perfectionbox

    @perfectionbox

    Жыл бұрын

    An additional proof would be that, if the person inside the room (or the system as a whole) understood Chinese, then it should be able to deal with unforeseen expressions, new slang, word evolution, etc. But it can't unless its rulebook is updated. It's utterly at the mercy of its program. And the only way to update the program is via an external agent that understands Chinese. The question then becomes: is there a program sophisticated enough to do what the external agent can do?

  • @danwylie-sears1134

    @danwylie-sears1134

    Жыл бұрын

    @@perfectionbox Programs are data. The absolute distinction you're appealing to, between the impossible-to-update rule book and the constantly-updated arrangement of papers in baskets, does not exist. It's an actual theorem that a suitably designed Turing machine can receive its program as part of the initial content of its tape, and the arbitrary input as the rest of the initial content, and no matter what other Turing machine you pick, there's an appropriate program part that can make the program-reader Turing machine match the output that the other one would give if it received just the arbitrary-input part on its tape. And with real-world computers, it's literally true that programs are data, stored on the hard drive or RAM or whatever, same as any other data.

  • @captaingerbil1234
    @captaingerbil1234 3 жыл бұрын

    I take the systems response as an approach to refute Searle. His argument almost seems to imply that we create the semantics of a word, when really all we do is assign it to objects and states already existing in the world, and then assign symbols to that semantic meaning. I believe it is possible to create a machine, operating through computational methods, that is capable of understanding. Great lecture, by the way.

  • @cf6755

    @cf6755

    2 жыл бұрын

    The person in the room is not the one who knows Chinese; the rule book is. If you killed the person and replaced him with somebody else, it would be the same thing, because of the rule book. The one "writing the Chinese" is not the person holding the rule book; it is the rule book.

  • @recompile

    @recompile

    Жыл бұрын

    If you think it's possible, prove it. Show how meaning can be derived from pure syntax. Even a very simple example would refute Searle. So far, no one has been able to do it, despite the outrage his argument seems to generate in people.

  • @Matt-zp1jn

    @Matt-zp1jn

    Жыл бұрын

    The “systems approach” cannot CREATE or assign the semantics of a word or symbol. Searle is correct in that the syntax of the Turing computer is hardware that can organize symbols only according to the programming of the software, the rules the computer must follow. The semantics, the meaning and understanding of the Chinese symbols or the binary coding, must be ascribed by a computer programmer or some conscious intelligence OUTSIDE the hardware. Searle has successfully refuted Strong A.I., and functionalism is a wrong theory. Of course big tech, the social-media giants, and digital A.I. scientists want to refute Searle's argument, and will use complex algorithms, a human-like robot interface, and digital wifi/Bluetooth information transfers from an intelligent, self-conscious source (a human programmer, lol) to portray the A.I. robot as capable of understanding semantic meanings, rather than just the grammar or syntax that has been programmed into the software by an outside, intelligent, creative human being, i.e. the programmer. This is why I think they will strive toward a more android approach, where humans are “upgraded” with A.I. digital software through a neural analogue-digital interface that allows the human being to take the syntactic information and assign appropriate understanding and meaning to the software download in his neural brain link, etc. It is a very questionable path and risk for humanity, imo.

  • @Capt_Caveman205
    @Capt_Caveman205 Жыл бұрын

    This is the 1st video of yours that has been recommended to me, and it's fascinating. I must go watch the ones I've missed, but 1st I have to know... are you really writing everything backwards so that we can read it!!?

  • @rustworker
    @rustworker Жыл бұрын

    Feedback loops are the magic powder that makes consciousness and emotion and all the rest.

  • @shinypup
    @shinypup Жыл бұрын

    With the results we're seeing from large language models (e.g. ChatGPT), and how computers have been able to extract semantics in the form of embeddings, could you give a talk on whether there are philosophical implications?
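
    For anyone unfamiliar with embeddings: the "semantics" these models extract are just vectors whose geometry mirrors patterns of usage. A minimal sketch in Python, with made-up 3-d vectors rather than real model weights (real models learn hundreds of dimensions from co-occurrence statistics):

    ```python
    import math

    # Hypothetical embeddings: words used in similar contexts
    # end up with similar vectors.
    emb = {
        "king":  [0.9, 0.8, 0.1],
        "queen": [0.9, 0.7, 0.2],
        "soup":  [0.1, 0.2, 0.9],
    }

    def norm(v):
        return math.sqrt(sum(a * a for a in v))

    def cosine(u, v):
        # Cosine similarity: 1.0 means same direction, 0 means unrelated.
        return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

    print(cosine(emb["king"], emb["queen"]))  # high: similar usage
    print(cosine(emb["king"], emb["soup"]))   # low: dissimilar usage
    ```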

  • @yuck9510

    @yuck9510

    Жыл бұрын

    Interesting question. With GPT, though, you can kind of just use the same argument: it's simply really efficient and accurate at providing appropriate responses to prompts. That is to say, we should think of it less as an AI and more as a really, really good predictive text machine.
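
    A toy version of "really good predictive text": a bigram model in Python (the tiny corpus is invented for illustration; real models are vastly larger but rest on the same idea of predicting the next token from observed word order):

    ```python
    import random
    from collections import defaultdict

    # "Train" on pure syntax: record which word tends to follow which.
    corpus = "the room takes symbols in and the room sends symbols out".split()
    nxt = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        nxt[a].append(b)

    def generate(word, n=6):
        # Walk the chain: plausible word order, no grasp of meaning.
        out = [word]
        for _ in range(n):
            if word not in nxt:
                break
            word = random.choice(nxt[word])
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the room sends symbols in and the"
    ```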

  • @ever-openingflower8737

    @ever-openingflower8737

    11 ай бұрын

    @@yuck9510 I wonder what the difference from elementary schoolchildren is in that regard. When I was first instructed in how to write essays, I also learned about "useful phrases" etc. Isn't learning to write good texts at school pretty much the same thing this predictive text machine does? It goes without saying that children need to learn how to write with their hands, dexterity, etc., but I mean the underlying act of creating a text. Philosophically, I think it is the same kind of thing. Children have just started thinking about the world, and someone teaches them how to use phrases to generate text. What is the essential difference?

  • @hassaan3861

    @hassaan3861

    11 ай бұрын

    As someone whose work is closely tied to ChatGPT and OpenAI: my belief has gotten stronger that these systems don't understand anything, but are extremely good at giving close approximations of understanding. Also, most videos online are faked for views, because to get even a semi-decent output from ChatGPT/DALL-E 2 you have to run the thing like 50 times and tweak the inputs in weird ways until you get a response that isn't complete BS.

  • @NullHand

    @NullHand

    11 ай бұрын

    @@ever-openingflower8737 Children learn to use verbal language first (and probably have an instinct to do so). As they first learn to write, they quite literally speak the "sentence" they want to write and put it on paper. It's all dialogue to them. This comes complete with using pronouns with no prior reference ("I was writing about the doggy I was looking at..."), filling the sentence with the thinking pauses of speech ("umm"), and verbal structures designed to elicit a body-language acknowledgement ("you know?"). All of these are superfluous or counterproductive in most written sentences and have to be trained out. The semantics in human text is piggybacking on the heavy lifting of associating physical-world experiences with spoken (or signed) words. The LLMs might be trapped in a Chinese room, but neuromorphic image-recognition AIs are not (they get to "see" images to associate with that "DOG" symbol). I strongly suspect that some AI lab somewhere has already connected the two.

  • @theconiferoust9598

    @theconiferoust9598

    10 ай бұрын

    Any output of an A.I. model has been given its «picture» of meaning by humans. The real question is: what would the picture of «meaning» look like to a system that is learning only by observing inputs (i.e. not given, or trained to give, «correct» responses)? We always seem to insert human understanding and consciousness into the equation.

  • @konstantinlozev2272
    @konstantinlozev2272 Жыл бұрын

    I would have liked to see a discussion on the design of the rulebook and how that design embodies semantics

  • @jasemo388

    @jasemo388

    Жыл бұрын

    Yes. It's almost like Searle took the interpretation of the incoming symbols (the semantics) and made it separate, in the rule book, just to remove agency from the person in the room and structure his analogy to prove his point.

  • @konstantinlozev2272

    @konstantinlozev2272

    Жыл бұрын

    @@jasemo388 Yeah, modern neural networks actually build and amend the rulebook as they get trained. And since the rulebook in this thought experiment is construed to embody the semantics, one cannot argue that neural networks are captured by the Chinese Room example. (A toy illustration of rule-amending is below.)
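
    A minimal sketch of "amending the rulebook" during training: one perceptron learning OR in Python (toy data; real networks do the same kind of error-driven weight update at enormous scale):

    ```python
    # A one-neuron "rulebook": the weights ARE the rules,
    # and training rewrites them.
    w = [0.0, 0.0]
    b = 0.0
    lr = 0.1

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    # Toy task: learn logical OR. Each error nudges the rules.
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    for _ in range(10):
        for x, y in data:
            err = y - predict(x)
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err

    # Rules were learned from data, not handed down in a fixed book.
    print([predict(x) for x, _ in data])  # -> [0, 1, 1, 1]
    ```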

  • @cybersandoval

    @cybersandoval

    Жыл бұрын

    Right, is the system writing the rulebook?

  • @jeff__w

    @jeff__w

    Жыл бұрын

    @@konstantinlozev2272 “…as the rulebook is construed to embody the semantics in this though[t] experiment…” I think in Searle’s example, the rulebook is meant to be construed as embodying the _syntax._ It’s an instruction manual that tells the person _how_ to put the various Chinese characters together, i.e., the form, but says nothing about the meaning. In that sense, to the extent that these latest chatbots can predict words, one at a time, to construct flawless sentences in English, they might represent Chinese rooms “perfected.”

  • @ErikHaugen

    @ErikHaugen

    Жыл бұрын

    @@konstantinlozev2272 Searle's room has paper, pencils, erasers, etc.; it can store data from past questions. I don't think this matters much to Searle's point, although it would be impossible to pass the Turing Test without memory of some kind.

  • @munchaking1896
    @munchaking1896 9 ай бұрын

    Alright, I had a longer think about this one, and you totally can teach a computer semantics. 0s and 1s are the language the computer understands, so you have to talk to it in 0s and 1s. Say you feed the computer the word "Ball" (21A2A2) and you give it the description (it sees a whole lot of numbers): you are teaching the computer what the word "Ball" means in its own language. Whenever you feed it the word "Ball", it will refer to the description and sub-descriptions of what "Ball" means (we call this a library); this is learning. Each word in the description has its own description and meaning, which you also have to teach the computer. It's like teaching a toddler to talk: the sounds "ba" and "all" make the word "ball", this is what a ball looks like, a ball is a round toy. You can teach a computer this in its own language and it will understand, but first you have to teach it the syntax of "subject" and "description". When you feed Chinese into the room and then SHOW the person inside what happens when it spits out Chinese at the other end, the person inside the box will immediately start to understand (written) Chinese. This is why Japanese and Chinese can use the same characters (kanji/hanzi), with the same meanings but different pronunciations.
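
    A sketch of the "library" idea in Python; the hex codes and entries are invented for illustration:

    ```python
    # A toy "library": each word maps to a description whose words
    # have their own entries; definitions all the way down.
    library = {
        "ball":  {"code": "21A2A2", "is_a": "toy", "shape": "round"},
        "toy":   {"code": "30FF01", "is_a": "object"},
        "round": {"code": "44B7C9", "is_a": "shape"},
    }

    def describe(word, depth=0):
        # Recursively expand a word into its description, and the
        # words of that description into theirs.
        entry = library.get(word)
        if entry is None or depth > 2:
            return
        print("  " * depth + word + ": " + str(entry))
        for value in entry.values():
            describe(value, depth + 1)

    describe("ball")
    # Searle's reply would be that this is still symbols pointing at
    # symbols; whether the regress ever bottoms out in meaning is the
    # whole dispute.
    ```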

  • @Flavia1989
    @Flavia1989 Ай бұрын

    This feels like saying that a mathematician who has never studied anything but math learns nothing about fields like economics or physics in the process of studying math. Intuitively I would agree... but if I think about it, I think they do? It's not functional in the sense that they could have expert opinions about economics, or write books about it, or do any of the things we socially recognize as knowing about economics. But it would help them an enormous amount when talking to knowledgeable people in those fields, since those people could just show them equations to explain difficult concepts. They would be able to learn very quickly, because they could skim all the hardest parts... having already learned them. Assuming they retain their skills (and why wouldn't they), the person in the Chinese room is in the same situation. They would have to learn the meanings of the symbols to actually use the language, but wow, it would be so ridiculously helpful to be able to draw on a complete knowledge of Chinese syntax; it would allow them to ask very intelligent questions, infer meanings they haven't learned yet, etc. So I would say to Searle: you are wrong. This person does know a lot of Chinese; you have just constructed an edge case with the specific intent of manipulating everybody's intuitions into saying "no".

  • @timothyblazer1749
    @timothyblazer1749 Жыл бұрын

    Penrose went on to argue that consciousness is non-computable, which is an additional blow to strong AI. Of course, AI proponents are basically ignoring both Searle and Penrose.

  • @N.i.c.k.H

    @N.i.c.k.H

    7 ай бұрын

    Penrose just proved that his model of consciousness is non-computable. Strong AI people would just say that all he had proved is that his model of consciousness is wrong. The latter seems more compelling as there is clearly no generally accepted, rigorous definition of what consciousness even is. When dealing with very smart people the loopholes are always in the premises not the reasoning.

  • @timothyblazer1749

    @timothyblazer1749

    7 ай бұрын

    @@N.i.c.k.H Seriously? He used the most general form of definition there is, "the act of understanding", which makes it totally clear that no matter what your definition is, Gödel will apply, because it's a subject-object relationship. This is baseline reality, and unless you or anyone else can show that the scientific method can be applied without that baseline quality of reality, it's not a proof of "his" theory; it's a proof of "the" theory. Put another way: there is no assertion you can make about "rigor" without a subject-object relationship. We're in turtles-all-the-way-down territory. If Strong AI people think differently, they need to study set theory AND epistemology.

  • @gothboi4ever
    @gothboi4ever Жыл бұрын

    The ultimate question is how can we know something is true.

  • @brad1368

    @brad1368

    Жыл бұрын

    You can't

  • @hb-robo

    @hb-robo

    7 ай бұрын

    We construct premises that are agreed upon and then build the logical conclusions, generally. But the "agreeing upon" bit is a bit wishy-washy.

  • @skidz8426
    @skidz8426 7 ай бұрын

    It would seem to me this is exactly how the brain works, and even more so the subconscious. I have neuropathy (not the diabetic kind, but a form of neuropathy): my nerves don't always fully work. One time while I was walking, a woman's hair landed on the back of my calf; I jumped and grabbed my leg, because my brain told me my leg had just been sliced open. Another time I had been working in the garage, shirtless, and my brother came in to talk to me. I cannot describe what it was that I felt on my back, maybe a small piece of paper. I sometimes try to ignore these feelings because they're usually nothing, but I just couldn't take it anymore, so I reached over and brushed my back: it was a cockroach about the size of three of my fingers. This thing had crawled up my back, and I did not feel it except as some foreign material that caused me no alarm.

    My point with all of this is that your brain doesn't know what anything is. It only knows things from information that's been passed to it. Imagine a baby born making no sound and not really moving, but breathing and with a heartbeat; essentially the baby is in a coma. There's nothing wrong with the baby, and the child wakes up 20 years later. You are tasked with teaching this "child", but they don't want to leave the room they're in, and they don't want you to bring anything into the room. Now describe what being "wet" is like. What does it feel like to get out of a swimming pool or the shower? You don't actually even know what it feels like to be wet. Go take a shower, dry off, sit down, and try to think what it's like to be wet.

  • @UncleBoobs
    @UncleBoobs 7 ай бұрын

    The most popular experiment in the 21st century will be the "double slit experiment in time". This will allow us to advance even further in computing allowing us to manipulate light and time in a controlled environment.

  • @75noki
    @75noki 2 жыл бұрын

    Thank you ❤🙏

  • @perfectionbox
    @perfectionbox Жыл бұрын

    An additional proof would be that, if the person inside the room (or the system as a whole) understood Chinese, then it should be able to deal with unforeseen expressions, new slang, word evolution, etc. But it can't unless its rulebook is updated. It's utterly at the mercy of its program. And the only way to update the program is via an external agent that understands Chinese. The question then becomes: is there a program sophisticated enough to do what the external agent can do?

  • @danwylie-sears1134

    @danwylie-sears1134

    Жыл бұрын

    Any halfway-adequate rule book would include, at least implicitly, instructions for rearranging the baskets and the labeled papers in a way that perfectly mimics learning of new idioms.

  • @recompile

    @recompile

    Жыл бұрын

    How many times are you going to post this? I replied to this exact post from you earlier.

  • @sschroeder8210

    @sschroeder8210

    Жыл бұрын

    I don't think your concept is valid: understanding something vs. understanding how to learn something are two different things. You know English, right? But can you understand the semantics behind "ZZuetixlo"? I presume not, because it's a new word that I just created. So, do you not understand English anymore? Of course not. You simply haven't been given the chance to learn the new word I've chosen to create. You still understand English, and you understand how to learn new words; you simply haven't been given the opportunity to ask me, "What is the semantics behind your new word?" If a new word acts as a fundamental axiom and isn't derived from the concatenation of other words, then you shouldn't be capable of understanding its semantics... Thus, the "system" shouldn't have to demonstrate something we don't inherently have the capacity to do ourselves when we express a sense of understanding.

  • @Timmy-en7qv
    @Timmy-en7qv 10 ай бұрын

    I scored a 75 IQ, which is similar to a low-functioning 80 or a high-functioning 70, so I feel qualified to text here with you all, my peers. I have strong feelings about this topic, but I kept falling asleep during the video and I am hungry. So I am going to fix my dinner by pouring the fruit loops on top of the milk, not milk over cereal, and leave you mystified as to what pearls of wisdom I could have shared here if I weren't sleep-deprived and hungry.

  • @istvann.huszar420
    @istvann.huszar420 8 ай бұрын

    Let's address the elephant in the room (quite literally): that enormous set of instructions in the room is itself the semantics. I appreciate the effort that went into creating this video, but I'll never get my 28 minutes back.

  • @anxez
    @anxez Жыл бұрын

    I can think of some direct counterexamples.

    1: A Chinese Translating Room. Set up the Chinese Room, but make the output of the room in a language known to the operator. Suddenly the situation changes: the operator could pull semantics out of the text, and we'd all agree on that. Maybe it would take years; maybe the rule set itself would amount to a way of teaching Chinese. But what is the effective difference between the Chinese Room and the Chinese Translating Room? The short answer is context: the operator is able to connect the symbols to previous meaning. This is a mostly trivial difference, because it piggybacks on previous semantics to generate new semantics, but it does suggest a refinement.

    2: The Chinese Picture Room. Set up the Chinese Room just the same: a set of rules governing syntax-in => syntax-out, bins which receive characters, bins which accept characters, only now every set of characters comes with a snapshot of the situation it appeared in. Once again semantics has appeared, this time a little more robustly: the operator doesn't need any native language in order to learn Chinese from this setup. It may take years, and they may be unable to speak the language, but the operator will be able to develop a semantic knowledge of it. Heck, go one step further: by feeding the person in the room pictures and characters in the right way, that person can build the rule set themselves without being programmed, because that's what babies do. And spoiler alert, that's what Turing-complete machines do too. (A toy sketch of the Picture Room is below.)

    Honestly, though, this thought experiment does a lot of heavy lifting by never defining semantic understanding or proposing how it actually arises. Searle just takes it for granted, gestures at how it doesn't arise in his specific scenario, and when given a silver-bullet argument against it, his response is to shrug it off.
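
    A toy sketch of the Picture Room in Python: associate each character with features of the "snapshots" it arrives with. The characters, scene tags, and counts are invented for illustration; real grounding would use pixels, not tags:

    ```python
    from collections import Counter, defaultdict

    # Each message pairs a character with a crude snapshot,
    # here just a set of scene tags standing in for an image.
    observations = [
        ("狗", {"animal", "four-legs", "park"}),
        ("狗", {"animal", "four-legs", "leash"}),
        ("湯", {"bowl", "steam", "table"}),
        ("狗", {"animal", "park"}),
    ]

    # Count which scene features co-occur with each character.
    assoc = defaultdict(Counter)
    for char, scene in observations:
        assoc[char].update(scene)

    # A character's most frequent companions begin to look like meaning.
    print(assoc["狗"].most_common(2))
    # e.g. [('animal', 3), ('four-legs', 2)]
    ```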

  • @nitramreniar

    @nitramreniar

    Жыл бұрын

    To point number one: changing this part of the setup might work logically as a thought experiment, but it breaks the connection between the Chinese Room and the digital systems it is meant to be analogous to. The fact that you could learn Chinese by being given random Chinese symbols and phrases along with instructions for translating them into a language you know rests on the fact that you *already* know a language: you already have the semantic understanding in one form and only need a way to transfer it into Chinese. The reason the thought experiment has both input and output in a language you don't understand is that we assume, reasonably in this context, that the digital computer has no deeper language it understands semantically, such that it would only need a translation in order to truly learn and understand Chinese.

    On the second point I agree with you. In fact, I feel the thought experiment betrays a problem with its result in its own setup. It asks us to imagine *ourselves*, a human mind, in the situation of a digital computer in this room, and uses our intuitive sense that we could never truly understand Chinese through this setup to argue that a digital machine could never do it either, and thus that digital machines are distinctly different from human minds. But it started by reasoning that the human mind couldn't do this, so how can it show a difference between the two systems?

    The difference in how humans learn their first language, not just syntactically but with an understanding of the semantics, is that it involves more than looking at symbols as input and figuring out the appropriate output. In fact, written language is a bad point of comparison, since written language is already something humans understand only through translation: a translation from spoken language, and from our internal understanding of language with its connected semantics, *into* writing. Humans seem to learn their first language by engaging with it through a multitude of senses: we *experience* the language and the contexts in which different sounds and gestures are used, until we have enough of an understanding of the language itself to explore further nuances through the understanding we've already established.

  • @fiddley
    @fiddley Жыл бұрын

    How do you know your mind isn't just a Chinese room? Inputs come in, outputs go out.

  • Жыл бұрын

    Love it - I wrote my master's thesis on cognition and mind, and on John Searle specifically, in 1997.

  • @wolframstahl1263

    @wolframstahl1263

    Жыл бұрын

    Then you seem to have some expertise in the field, definitely much more than I do. Can you answer me this, or do you know if Searle has anything to say about it: Does the existence of Chinese people disprove his conclusion of this thought experiment, and if not, why? He talks about a person with no understanding of Chinese being put in that room. How is that different from being born in China? An infant literally speaks no language, in the moment one is born there is only syntax to be experienced, yet we develop semantics. In being born, they are a person with as little understanding of Chinese as Searle himself, being put into a literal Chinese room.

  • @michaelmertens813
    @michaelmertens813 Жыл бұрын

    Thank you, this was really helpful.

  • @gabrielteo3636
    @gabrielteo3636 Жыл бұрын

    Although I don't think I agree with functionalism, the neurons essentially work as syntax operators, like bits in a computer. Searle's argument is that each neuron doesn't have understanding. Well, duh!

  • @recompile

    @recompile

    Жыл бұрын

    You're assuming that the brain is a digital computer. The evidence in front of you strongly suggests otherwise.

  • @gabrielteo3636

    @gabrielteo3636

    Жыл бұрын

    @@recompile Not necessarily digital; it is only an analogy. Our neurons act more like gradients (analog), but they still depend on discrete ions, electrons, and atoms. Neurons are just a type of biological machine. The neurons themselves don't have understanding; they just do what biological machines do, according to physics.

  • @gamzeozata4554
    @gamzeozata45543 жыл бұрын

    At first glance I thought you wrote 'pide', which is a Turkish word for something like a calzone. Thank you for the lesson! You are great!

  • @themerpheus

    @themerpheus

    3 жыл бұрын

    lol same here

  • @danwylie-sears1134

    @danwylie-sears1134

    Жыл бұрын

    Also Spanish for "ask".

  • @tnyw872621h8474h9
    @tnyw872621h8474h9 Жыл бұрын

    Hello, my question is: is there any connection between the idea that "syntax underdetermines semantics" and the broader philosophy-of-science problem known as the underdetermination of theory by data?

  • @emerkaes9091
    @emerkaes9091 6 ай бұрын

    Here's my prison room experiment. There are two prison cells. From one cell you can't see, sense, or smell anything going on in the other cell, but you can hear sounds from it. In one cell there's a Vietnamese prisoner who knows only Vietnamese, and in the other there's a Norwegian who understands only Norwegian. They never leave their cells, and the guards don't communicate with them. The Vietnamese guy can speak, but the Norwegian cannot understand him, and vice versa. With enough time, though, each can correctly repeat sentences the other said; they even memorize some poems and songs. After a while one can even say something and, based on the answer, say something else, while having no idea what he is saying. They may be exchanging things like "-How are you doing? -Not good. -I'm sorry to hear that", because they've learned it's quite a common word-chain, but they don't understand what they are saying at all.

    But if their cell wall were made of glass, they could gain understanding of those words much more easily. If the Vietnamese guy pointed at his soup and said the Vietnamese word for it, then the Norwegian, having experienced soup in his life, seeing the soup and associating the heard word with it, would gain an understanding of that token.

    My point is: the main thing about the Chinese room is not about machines; it's about the importance of multisensory experience in understanding. The same applies to communication with alien species: how will you tell aliens far, far away what "soup" means if all they can recognize is the electromagnetic wave you sent them? You may send them a chain of tokens, but they cannot associate anything with those tokens (unless they could decode them into vision, sound, smell, or touch as we understand them, and then convert those into their own system of experiences, which may be very different). Still, that doesn't prove those aliens are not sentient. If it did, we would have to consider anything we cannot contact using our set of senses to be insentient.