Artificial Intelligence Isn't Real

Science & Technology

The first 100 people to use code SOMETHING at the link below will get 60% off of Incogni: incogni.com/something
This video has been approved by John Xina and the Chinese Communist Party.
Check out my Patreon: / adamsomething
Second channel: / adamsomethingelse
Attribution (email me if your work isn't on the list):
unsplash.com/photos/WX5jK0BT5JQ
unsplash.com/photos/luseu9GtYzM
unsplash.com/photos/-olz676A3IU
unsplash.com/photos/3OiYMgDKJ6k
unsplash.com/photos/6MsMKWzJWKc
unsplash.com/photos/rEn-AdBr3Ig
commons.wikimedia.org/wiki/Fi...

Comments: 3,100

  • @AdamSomething
    11 months ago

    Thanks for tuning in to today's video! The first 100 people to use code SOMETHING at the link below will get 60% off of Incogni: incogni.com/something

  • @qwertyuiopchannelreal296

    11 months ago

    Nice video from someone who has no expertise in AI. Humans are no different from AI; we are just a bunch of inputs and outputs. Very soon, neural networks will be on par with or surpass humans in general intelligence because of improvements in their architecture.

  • @TomTKK

    11 months ago

    @@qwertyuiopchannelreal296 Spoken like someone who has no expertise in AI.

  • @qwertyuiopchannelreal296

    11 months ago

    @@TomTKK Yes, but generalizing AI as not being “intelligent” is just wrong. You could make the same point about human brains, because they receive inputs and act on those inputs to produce output, which is no different from AI. In fact, the architecture of neurons in neural networks mimics the function of biological neurons.

  • @johnvic5926

    11 months ago

    @@qwertyuiopchannelreal296 Oh, nice. But thanks for admitting that anything you say on the topic of AI has no actual scientific foundation.

  • @relwalretep

    11 months ago

    @@qwertyuiopchannelreal296 It's almost as if you wrote this before getting to the last 60 seconds of the video

  • @mateuszbanaszak4671
    11 months ago

    I'm the opposite of *Artificial Intelligence*, because I'm *Natural* and *Stupid*.

  • @Kerbalizer

    11 months ago

    Rel

  • @GiantRobotIdeon

    11 months ago

    Artificial Intelligence when Natural Stupidity walks in: 😰

  • @jacobbronsky464

    11 months ago

    One of us.

  • @QwoaX

    11 months ago

    Minus multiplied with minus equals plus.

  • @aganib4506

    11 months ago

    Realistic Stupidity.

  • @SurfingZerg
    11 months ago

    As a programmer who studies AI: we almost never actually use the term "artificial intelligence"; we usually just say "machine learning", as this more accurately describes what is happening.

  • @InfiniteDeckhand

    11 months ago

    So, you can confirm that Adam is correct in his assessment?

  • @mdhazeldine

    11 months ago

    But is the machine actually understanding? I.e., is it comprehending what it's learning? If not, is it really learning at all? It seems to me like a parrot learning to repeat the words that humans say, without understanding the meaning of the words. The same as the Chinese Room experiment Adam mentioned.

  • @malekith6522

    11 months ago

    He is... mostly. What the press usually talks about as "AI" is actually called AGI (Artificial General Intelligence), and we are currently far away from implementing it.

  • @TheHothead101

    11 months ago

    Yeah AI and ML are different

  • @EyedMoon

    11 months ago

    As an AI engineer, I don't 100% agree with this video. In fact, I think I agree with about 50% of it :p

    There are some potential threats because of how powerful it is to just automate some tasks using "AI". For example, news forgery has already proven to be a pretty easy task, as newsfeeds are highly formatted and easy to spam. Image generation is, in 2023, of very high quality and helps create "fake proof" very quickly. AI is well suited to information extraction too, in the cases where features and structures emerge from the amount of data we deal with.

    But in the media, "AI" is a buzzword used whenever people don't understand what they're talking about, and the things they're talking about, like "machines becoming sentient", are just ludicrous. So I'm not totally on board with Adam's analysis. He makes the point that there's a difference in perception between tech and media, but then he still mixes both aspects, imo. And especially the cat argument: of course we develop our reasoning from precise features, but we also have roughly the same training process as machines. Seeing the same features with the same feedback a lot activates our neurons so often that the connections become prevalent, while AIs have neurons that compute features and reinforce their connections through feedback.

    Oh, and for the "is the machine really understanding?" question: are you really understanding, or merely repeating patterns with only slight deviations caused by your environment and randomly firing neurons? I'm not sure anyone can answer that question yet.

  • @justpassingby298
    11 months ago

    Personally, what pisses me off is when someone takes one of those AI chatbots, gives it some random character from a show, and goes "Omg, this is basically the character", when it just gives the most basic-ass responses to any question.

  • @menjolno

    11 months ago

    Can't wait for Adam to say that biology isn't real. What would literally be in the thumbnail: Expectation: (human beings). Reality: one is "god's creation", one is "a soup of atoms".

  • @alexandredevert4935
    11 months ago

    I've done a PhD in machine learning; I design machine learning systems as a job.

    * Yes, "AI" is a very poorly defined word, which has been stripped of the little meaning it might have had because of how much it was stretched in all directions.
    * Intelligence is not a boolean feature, it's a continuum. Where do we put a virus? The simplest unicellular organisms? Industrial control systems are on the level of a simple bacterium in terms of complexity and abilities, minus the self-replication ability (3D printers are this close to crossing that gap).
    * Your cat example is a very good explanation of what statistical inference is.
    * You can implement statistical inference in various ways, one of which is a neural network. Neural networks can have internal models that do what you call "the intelligent way". That internal model is not set by the programmer; it's built by accumulating training on randomly picked examples, aka stochastic gradient descent.
    * The Chinese Room argument has its critics, some of which are really interesting.

    And yes, there is a ton of cringe bullshit on this topic, to the point that I carefully avoid mentioning I do AI; I say I do statistical modeling.
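    The "stochastic gradient descent" the commenter mentions can be sketched in a few lines. This is a generic toy illustration (fitting a noisy line), not code from the video; all names and numbers here are made up for the example:

    ```python
    import random

    # Toy data: y = 2x + 1 plus a little noise; SGD should recover w ≈ 2, b ≈ 1.
    random.seed(0)
    data = [(i / 100, 2 * (i / 100) + 1 + random.gauss(0, 0.1)) for i in range(100)]

    w, b = 0.0, 0.0   # model parameters, started from scratch
    lr = 0.05         # learning rate

    for epoch in range(200):
        random.shuffle(data)          # "randomly picked examples"
        for x, y in data:
            err = (w * x + b) - y     # prediction error on one example
            w -= lr * 2 * err * x     # step against the gradient of squared error
            b -= lr * 2 * err

    print(w, b)  # should land near 2 and 1
    ```

    The training loop never "understands" the line; it just nudges two numbers until the errors shrink, which is the commenter's point about internal models being built rather than programmed.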

  • @MsZsc

    11 months ago

    zao shang hao zhong guo xian zai wo you bing qilin

  • @isun666

    11 months ago

    That is exactly how ChatGPT would answer it

  • @sophiatrocentraisin

    11 months ago

    Actually (and it goes in your direction), it's still debated whether viruses are even living organisms, the reason being that viruses aren't actually cells, and also that they can't self-replicate.

  • @tedmich

    11 months ago

    With all the crap companies in my field (biotech) trotting out some AI drug-design BS after their one good idea failed, I would avoid being associated with ANY of this tech until the charlatans fall off the bandwagon! It's a bit like being a financial planner with the last name "Ponzi".

  • @jlrolling

    11 months ago

    @@sophiatrocentraisin It's also because they do not meet the standard requirements that define an organism, i.e. birth, feeding, growth, replication, and death. They do not grow; they "are born" as fully finished adults. And also, as you mention, they cannot self-replicate; they need a third party for that, aka a cell.

  • @nisqhog2881
    11 months ago

    "Behaving perfectly like a human doesn't mean they are intelligent" is a sentence that can be used on quite a lot of people too lol

  • @AlexandarHullRichter

    11 months ago

    "The ability to speak does not make one intelligent." -Qui Gon Gin

  • @ConnorisseurYT

    11 months ago

    Behaving unlike a human doesn't mean they're not intelligent.

  • @inn5268

    11 months ago

    It is intelligent in the sense that it can process data and generate a response to it; it is not SENTIENT, since it lacks any self-awareness or underlying thoughts beyond processing the inputs it's given. That's what Adam meant to say.

  • @fnorgen

    11 months ago

    I suspect quite a lot of people will keep moving the goalposts for what counts as "intelligence" however far is needed to exclude machines, until they themselves no longer qualify as intelligent by their own standards. The issue I take with Adam's argument is that you quickly get into a situation where the list of tasks that strictly require "actual intelligence" keeps getting narrower and narrower, until there may some day be no room left for "intelligence".

    I know a person with such a severe learning impediment that I would honestly trust AutoGPT or some similar system to do a better job than them at any job that can be performed on a computer. Except some video games. That's not much to brag about, but in terms of meaningful, measurable performance, I'd say current AI is more intelligent than they are. So claiming that the machine is completely devoid of intelligence seems to me like a strictly semantic argument.

    I don't really think of the mechanisms of a system as a qualifier for intelligence, only its capabilities. Current ML-based systems don't learn like we do, they don't think like we do, they don't feel like we do, they have no intrinsic motivations, and it seems they don't need to either.

  • @robgraham5697

    11 months ago

    We are not thinking machines that feel. We are feeling machines that think. - Antonio Damasio

  • @flute2010
    11 months ago

    Artificial intelligence is when the computer-controlled trainers in Pokémon use a set-up move instead of attacking

  • @sharkenjoyer

    11 months ago

    Artificial intelligence is when the Half-Life 2 Combine use a grenade to flush you out and flank your position

  • @n6rt9s

    11 months ago

    "Socialism is when no artificial intelligence. The less artificial intelligence there is, the socialister it gets. When no artificial intelligence, it's communism." - Marl Carx

  • @flute2010

    11 months ago

    @@n6rt9s You may have just turned the rest of the replies under this comment into a warzone with the mere mention of socialism; we can only wait.

  • @dandyspacedandy

    11 months ago

    i'm... dumber than trainer ai??

  • @alexursu4403

    11 months ago

    @@dandyspacedandy Would you use Rest against a Nidoking because it's a Psychic-type move?

  • @thrackerzod6097
    11 months ago

    As a programmer, thank you. It's annoying to have to explain to people that AI is not intelligent; it's just an advanced data-sorting algorithm at the very most. It has no thoughts, it has no biases, it has no emotions. It's just a bunch of data sorted by relevance. This isn't to downplay the technology; the technology behind it is stunning and it has good applications, but to call it intelligence when it isn't is absurd.

  • @cennty6822

    11 months ago

    Language models inherently have biases based on their training. A bot trained on the Western internet will be biased towards more Western ideologies; one trained on, for example, Russian forums will have different biases.

  • @thrackerzod6097

    11 months ago

    @@cennty6822 They will; however, these are not true biases. There is no emotional or other reasoning behind them, so they can be referred to as biased, but not biased in the way a human, or any other intelligent being, would be, which is what I was referring to.

  • @somerandomnification

    11 months ago

    Yep - I've been saying the same thing about CEOs I've worked with for the last 25 years and still there are a bunch of people who seem to think that Elon Musk is intelligent...

  • @thrackerzod6097

    11 months ago

    @@somerandomnification Elon is just another rich person who's built his legacy off of the backs of genuinely intelligent people, people who unfortunately will likely go largely uncredited. If they're lucky, they'll at least get credit in circles related to their niches though.

  • @marlonscloud

    11 months ago

    And what evidence do you have that you are any different?

  • @mistgate
    11 months ago

    If people insist on using "AI," I propose we call it "Algorithmic Intelligence" because that's far closer to what it really is than Artificial Intelligence

  • @Naps284

    11 months ago

    Now, imagine an algorithm that, instead of being made of code, is based on an extraordinarily complex and pretty well-defined physical structure in three spatial dimensions, whose structure also defines how it will process "stuff" and react with itself (in between inputs and outputs) through the fourth dimension (time). Also, the sequence of reactions and computations defines how the structure will mutate, adapt, and change over time.

    All these properties in the four dimensions are (theoretically, at least) perfectly transcribable as code: for example, as numbers that represent coordinates in these dimensions (including all states through the fourth dimension). Now, add in some basic rules that define how all this data must interact with itself or react to and compute inputs and outputs. These rules might just be, for example, the fundamental laws of physics and the various physical constants.

    Oh, wait. This seems familiar... Isn't this algorithm EXACTLY how the human brain "generates" intelligence and cognition (and consciousness?)

  • @apolloaerospace7773

    11 months ago

    @@Naps284 There is no qualitative difference between connecting virtual points in n dimensions or n+1 dimensions. I don't work with AI, but to me you sound like you're trying to appear smart without knowing what you're talking about.

  • @Naps284

    11 months ago

    @@apolloaerospace7773 I didn't write all that to appear smart using weird terms or something 😂 It was not my intention

  • @Naps284

    11 months ago

    @@apolloaerospace7773 I wanted to draw a parallel between the two things by trying to totally decompose the "thing" 😂 I just tried to explain my idea of how there is no actual functional difference between a virtual and a physical neural network (mutating nodes + connections), given enough complexity and computational power...

  • @Naps284

    11 months ago

    @@apolloaerospace7773 I just liked the idea of expressing it that way, but then I got a bit lost in my explanation 😂

  • @rhyshoward5094
    11 months ago

    Robotics/AI researcher here. You're definitely right to suggest that AI is being completely blown out of proportion by the media. That being said, certain things you mentioned computers not being capable of, they certainly can do; it's just that these are currently still the kind of things being developed in research institutions and therefore not visible to most people.

    For example, the fat cat example could be tackled by a combination of causality and semantic modelling, which could represent the relationships between feeding the cat and its weight. Furthermore, empathy modelling is also an idea within reward-based agents/robots: effectively having the robot reason about whether an outcome would be optimal from the perspective of another being (e.g. a cat). Of course we're still a long way off, but that is more of a software/theory issue than a hardware issue. In a sense, we have all the machinery we need to make it happen; the difficulty is knowing how to structure the inner workings of the AI.

    With regards to the Chinese Room thought experiment, it's worth mentioning that only under one school of thought does it disprove consciousness. I'm fairly certain that if a baby could talk and you were to ask it whether it understood anything it was experiencing, it wouldn't, yet I don't think anyone is arguing that babies are not conscious. Even that aside, I think what ultimately sets apart human intelligence, and what will ultimately set apart future AI, is the ability to reason about reasoning, or in other words meta-reasoning. This is currently quite difficult, considering the biggest fads in research right now involve throwing a neural network at problems, effectively creating an incomprehensible black box, but the baby steps toward making this happen are definitely there.

    All that being said, I totally get why you made this. The way everyone's talking these days, you'd be forgiven for thinking the machine revolution is due next Tuesday.

  • @Bradley_UA

    11 months ago

    Well, they should have asked ChatGPT how it reasons out its answers to theory-of-mind test questions. But to me, the only way to answer those questions is to actually have a theory of mind.

  • @awesometwitchy

    11 months ago

    So not literally Skynet… but maybe literally Moonfall? With a little Matrix sprinkled in?

  • @qiang2884

    11 months ago

    @@awesometwitchy no. Researchers are smart people unlike politicians, and they know that making things that do not harm them is important.

  • @ChaoticNeutralMatt

    11 months ago

    I'll only add that it has been treated as 'just around the corner' for a while now. I don't entirely blame the media, at least early on. It was a fairly rapid jump into public awareness, and we have made progress.

  • @travcollier

    11 months ago

    It is basically the same as the "philosophical zombie" thought experiment, and it fails to mean anything for the same reason: it begs the question by assuming there is something called "understanding" that is different from what the mechanistic system does. No actual evidence for that, I'm afraid. And before someone objects that they know they "understand": really? Do you actually know what is going on in your brain, or are you just aware of a simplified (normally post-hoc) model of yourself?

  • @Movel0
    11 months ago

    Incredibly brave of Adam to stuff his cat with food to the point of morbid obesity just to prove the limits of AI. That's real dedication.

  • @USSAnimeNCC-

    11 months ago

    And now it's time for the kitty weight-loss arc. Cue the music.

  • @merciless972

    11 months ago

    @@USSAnimeNCC- "Eye of the Tiger" starts playing loudly

  • @lordzuzu6437

    11 months ago

    bruh

  • @Soundwave1900

    11 months ago

    How is it fat, though? Google "fat cat"; all you'll see is cats at least twice as fat.

  • @celticandpenobscot8658

    11 months ago

    Is that really his own pet? Video clips like this are a dime a dozen.

  • @Cptn.Viridian
    11 months ago

    The only fear I have about current "AI" is companies betting too hard on it and having it destroy them. Not through some high-tech, high-intelligence AI takeover, but through the AI being poorly implemented and immediately screwing over the company, like "hallucinating" and setting all company salaries to 5 billion dollars.

  • @davidsuda6110

    11 months ago

    Part of the Hollywood writers' strike is about AI generating scripts just bad enough that they can be edited by a human and produced so cheaply that the industry can profit from it. Our concerns should be more blue-collar; the industrialists will take care of themselves in the long run.

  • @okaywhatevernevermind

    11 months ago

    why do you fear big corpo destroying itself through ai? that day we’ll be free

  • @KorianHUN

    11 months ago

    @@okaywhatevernevermind We will be "free"... of global trade and functional economies. An apocalypse sounds cool until you think about it for 4 seconds. It won't be wacky adventures; it will be mass death and suffering.

  • @maya_void3923

    11 months ago

    Good riddance

  • @berdwatcher5125

    11 months ago

    @@okaywhatevernevermind so many jobs will be lost.

  • @aliceinwonderland8314
    11 months ago

    I once passed a basic French speaking exam with essentially no comprehension of what I was saying. I just copied the tense structure of the question, added a few stock phrases and conjunctions, and sprinkled in some random nouns and adjectives that I couldn't for the life of me tell you the meaning of, only that my brain somehow decided they were on the same topic. They were testing for comprehension; I used a different method to give the appearance of it. AIs work on similar logic: it doesn't matter how you get the results within the task, so long as the results appear correct.
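    The trick described above, mimicking surface structure without comprehension, is essentially what a Markov-chain text generator does. A toy sketch, with a made-up one-line corpus purely for illustration:

    ```python
    import random

    def build_chain(text):
        """Map each word to the list of words that followed it in the training text."""
        words = text.split()
        chain = {}
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
        return chain

    def babble(chain, start, length=8, seed=0):
        """Emit plausible-looking word sequences with zero comprehension."""
        rng = random.Random(seed)
        out = [start]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break  # dead end: no word ever followed this one
            out.append(rng.choice(followers))
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ate the fish"
    chain = build_chain(corpus)
    print(babble(chain, "the"))
    ```

    Every adjacent word pair in the output occurred somewhere in the corpus, so it looks locally fluent, yet the generator has no idea what any word means.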

  • @tomlxyz

    11 months ago

    That's exactly what this is not about. The question here is whether the process is intelligent or not. What you describe is applying a certain method to a narrow field; that's just a regular, statically defined algorithm. If you were faced with increasingly complex tasks you'd eventually fail, because you don't actually comprehend it, and currently AI keeps failing too, sometimes with the simplest instructions.

  • @aliceinwonderland8314

    11 months ago

    @@tomlxyz You do realise all code, AI included, is quite literally just a bunch of algorithms and statistics, albeit in this case significantly more complex than what I used? And that most AI issues boil down to the AI's lack of comprehension and inability to think (preferably critically)? I'm not an expert in machine learning, but I do have some basic understanding of how code and data sorting work, since a large part of my degree is working with various sensors, their data, Fourier transforms, matrices, etc. Theoretically, I think it should be possible to get some sort of sentient AI, but machine learning as it currently exists is simply way too task-specific to really be sentient. I'd say current AIs are probably at a similar level of sentience as an amoeba.

  • @stevenstevenson9365
    11 months ago

    I have an MSc in Computer Science and Artificial Intelligence, and I can say that how we use these terms and how the media uses them are very different. "AI" is a huge field that refers to basically anything a computer does that's vaguely complex. So when your map app tells you the shortest path from A to B, that's AI, specifically a pathfinding algorithm. When we talk about stuff like ChatGPT, we wouldn't really call it AI, because AI is such a general term. It's Machine Learning, more specifically Deep Learning, more specifically a Large Language Model (LLM). Stable Diffusion is also Deep Learning, but it's a diffusion model.
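    The "pathfinding algorithm" the commenter names is classic textbook AI. A minimal sketch of Dijkstra's algorithm, run on a made-up toy road network (the graph and weights are invented for the example):

    ```python
    import heapq

    def shortest_path(graph, start, goal):
        """Dijkstra's algorithm: always expand the cheapest frontier node first."""
        frontier = [(0, start, [start])]   # (cost so far, node, path taken)
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if node in visited:
                continue                   # already expanded via a cheaper route
            visited.add(node)
            for neighbor, weight in graph.get(node, []):
                if neighbor not in visited:
                    heapq.heappush(frontier, (cost + weight, neighbor, path + [neighbor]))
        return None                        # goal unreachable

    # Toy road network: edge weights as travel times, purely illustrative.
    roads = {
        "A": [("B", 5), ("C", 2)],
        "C": [("B", 1), ("D", 7)],
        "B": [("D", 3)],
    }
    print(shortest_path(roads, "A", "D"))  # (6, ['A', 'C', 'B', 'D'])
    ```

    Nothing here "understands" roads; it is exhaustive, mechanical bookkeeping over a priority queue, which is exactly why the field's broad use of "AI" surprises people.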

  • @nitat

    11 months ago

    Thanks for this comment. The IT jargon was really confusing; I think I understand a little better now.

  • @dieSpinnt

    11 months ago

    There is no reason to be defensive. You are a scientist, not a dipshit born out of "Open"AI ("open"... what a perversion!) who wants to sell "ideas"... I mean stock. Have a good one, fellow human (... **g**)

  • @Groostav

    11 months ago

    Yeah, it's funny: @AdamSomething's description of "pre-AI" sounded a lot like Prolog to me, which I would consider a form of AI. I think the concept of AI is really so broad that it is simply some algorithm that deftly navigates a dataset. If you add some kind of feedback loop (wherein the algorithm is able to grow or prune the dataset as it goes) to find something resembling novelty, you've got something that's more AI-ish. So are we at the point where "AI is a spectrum" now?

  • @gustavohab

    11 months ago

    If you come to think of it, AI has been out there for over 20 years, because NPCs in video games are also AI lol

  • @Ofkgoto96oy9gdrGjJjJ

    11 months ago

    We would also need a lot of physical memory, to run it without a crash.

  • @ItaloPolacchi
    11 months ago

    I disagree: people are scared of AI not because they think it's seemingly "human", but because something perfectly acting like one without understanding the meaning behind it can lead (in the future) to real-life consequences. If you teach an AI to hack your computer and delete all your data, it doesn't matter whether it understands what it's doing as long as the action gets done. Not having free will doesn't mean not creating consequences; if anything, it's worse.

  • @jhonofgaming

    11 months ago

    Exactly this. Tools already exist that are not "intelligent" but are still powerful. AI is exactly the same: it does not matter whether it's intelligent, it's still an extremely disruptive tool.

  • @what42pizza

    11 months ago

    well said!

  • @thereita1052

    11 months ago

    Congrats you just described a virus.

  • @user-yy3ki9rl6i

    11 months ago

    Honestly, it's a good take. A big part of ChatGPT development is imposing guardrails to prevent it from telling you how to make pipe bombs and meth. We've seen glimpses of the DAN version of ChatGPT, and yeah, that's why AI is still dumb and scary.

  • @alexs7139

    11 months ago

    Yes, and that's my whole problem with this video: after watching it, you might think "oh, AI is no 'true' intelligence, so it cannot try to destroy us like in sci-fi", for example... However, that's wrong (and you showed why). P.S. The idea that an AI built through machine learning has no "true intelligence" because it cannot understand concepts is not that obvious from a philosophical point of view. A pure materialist, for example, will not be convinced at all by this argument.

  • @Dimetropteryx
    11 months ago

    You can choose a definition of intelligence that fits just about whatever argument you want to make, so it really is important to make clear which one you're using before making your point. Kudos for doing that, and for stating that you chose it for the purpose of this video.

  • @menjolno

    11 months ago

    Can't wait for Adam to say that biology isn't real. What would literally be in the thumbnail: Expectation: (human beings). Reality: one is "god's creation", one is "a soup of atoms". "You can choose a definition of intelligence"

  • @kcapkcans
    11 months ago

    I'm a data engineer for a company you've heard of. I fully agree that the general public doesn't really understand or properly use the terms "AI" and "Machine Learning". However, I would argue that in so many cases neither do the "tech people".

  • @bettercalldelta
    11 months ago

    What I'm afraid of is that corporations couldn't care less: as long as they don't have to pay actual humans to be artists, programmers, etc., they will use AI even if everyone knows it has no idea what art or code is.

  • @rkvkydqf

    11 months ago

    If all else fails, all this AI FUD will surely make desperate artists/programmers/writers come to you to work for peanuts!

  • @Jiji-the-cat5425

    11 months ago

    That’s my biggest fear with AI as well.

  • @haydenlee8332

    11 months ago

    this!!

  • @dashmeetsingh9679

    11 months ago

    The problem with AI-generated code is: how do you know it works as intended, without any potential system-crashing defects? Will AI reduce the number of software developers needed to develop software? Yep, that's true; as you increase productivity, less labor is needed. Will it result in net job loss? Hard to predict. Maybe it will, or maybe it will open new avenues, as happened with all other techs.

  • @shawnjoseph4009

    11 months ago

    It doesn’t matter how smart or stupid the AI actually is if it can do what you need it to.

  • @RoiEXLab
    11 months ago

    As a CS student I agree with the main point of the video, but I'll just throw in that we actually don't know what "real intelligence" really is. So maybe at some point AI will actually become "real", without any way to tell it apart. We just don't know.

  • @rkvkydqf

    @rkvkydqf

    11 ай бұрын

    Since real neurons seem to outperform NNs in RL environments, like a game of Pong, by number of iterations, I think there definitely is some gap. I think neuromorphic computing seems quite fun. Anyway, it's indeed very annoyingly difficult to define intelligence, but it's clear the dusty old Turing Test isn't doing it for us anymore...

  • @00crashtest

    11 months ago

    True. Real intelligence is just a bunch of atoms interacting together. So intelligence is a vague thing, and there is no objective overall way to quantify it, because it has not had a single coherent definition yet. Trying to quantify intelligence is like trying to categorize animals before the concepts of "species" and "genetics" had been invented; the so-called "scientists" who made classifications before that were badly wrong. This is why social "science" is so often wrong: there is no objective standard. By definition, science is only science when it has control groups, is falsifiable, has defining criteria, and is repeatable. Social "science", just like biology before the concept of species, is not even a science as a result.

  • @00crashtest

    11 months ago

    As a result, until someone makes a single DEFINING standardized Turing Test (such as a single version of multiple choice or fill-in-the-blank), there is no objective way (excluding the formulation of the test in the first place) to quantify intelligence. After all, even the physical sciences only work because there are defining criteria, and they are only objective after those criteria have been applied. All science, even physics, is inherently somewhat subjective, because the choice of which defining criteria to use is inherently subjective. Anyway, objectivity requires determinism in the testing procedure. This is why writing composition is intrinsically subjective: there isn't even a deterministic set of instructions for how to grade the test. Quantum mechanics is objective in this sense because even though particle positions are random, the probability distribution function they follow is still deterministic.

  • @XMysticHerox

    11 months ago

    @@rkvkydqf Neural networks in the biological sense, e.g. the brain, are vastly more powerful than any current hardware. Even the most powerful supercomputers still need quite some time to simulate even just a couple of seconds of brain activity. That doesn't mean there is an inherent difference.

  • @ikotsus2448

    11 months ago

    @@rkvkydqf The dusty old Turing Test stopped doing it for us the moment it was close to being passed. The same will happen with any other test. Speaking of a moving goalpost...

  • @radojevici
    @radojevici11 ай бұрын

    Though the Chinese room example shows that the room operator doesn't understand Chinese, someone could say that an understanding of Chinese is being created: understanding as an emergent property of all the elements and their arrangement. The operator doesn't have to know Chinese, just as individual neurons in the brain don't really understand anything and aren't conscious. We really don't know what kind of thing consciousness is, so the only useful way to recognise it is by a thing's behaviour, regardless of the underlying mechanism. Just want to point that out, not saying that what people are calling AI now is actually conscious or something.
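The rule-following in the Chinese room can be caricatured as a plain lookup table. A toy sketch in Python (the phrase pairs and fallback are invented for illustration; the real thought experiment imagines a vastly larger rule book):

```python
# Toy "Chinese room": the operator blindly maps an input string of symbols
# to an output string via a rule book, with no model of what either means.
# The phrase pairs below are invented for illustration.
RULE_BOOK = {
    "你好吗": "我很好",          # "how are you" -> "I am fine"
    "你叫什么名字": "我叫小明",  # "what is your name" -> "my name is Xiaoming"
}

FALLBACK = "请再说一遍"          # "please say that again"

def room_operator(symbols: str) -> str:
    """Follow the rule book mechanically; 'understanding' lives nowhere in here."""
    return RULE_BOOK.get(symbols, FALLBACK)

print(room_operator("你好吗"))
```

The question the commenters debate is whether "understanding" could be attributed to the system as a whole (operator plus rule book), not to any single part of it.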

  • @Anonymous-df8it

    @Anonymous-df8it

    11 ай бұрын

    Surely the non-Chinese person would end up learning Chinese during the experiment?

  • @MrSpikegee

    @MrSpikegee

    11 ай бұрын

    @@Anonymous-df8itThis is not relevant.

  • @tgwnn

    @tgwnn

    11 ай бұрын

    ​@@DanGSmithyeah I think most of its appeal is derived from abusing our preconceptions about what "computer instructions" are. We'd probably think of some booklet, 100 pages, maybe 1000 if we actually think about it. But in reality it's probably orders of magnitude larger.

  • @hund4440

    @hund4440

    11 ай бұрын

    The Chinese room understands Chinese, not the person inside. But the dictionary is part of that room.

  • @tgwnn

    @tgwnn

    11 ай бұрын

    @@hund4440 I would also love to hear a proponent of the Chinese Room explain to me, okay, so it doesn't understand anything. But how are our neurons different? Do they have some magic ability that cannot be translated into code? Why? They're just sending electric signals to each other. Or are they saying it's all dualism?

  • @miasuarez856
    @miasuarez85611 ай бұрын

    Thanks for the video. My main worry is that executives will believe this can replace human workers, apply this "AI" to everything, fire a lot of people, and then work the remaining ones to death when those "AIs" fail at their tasks, because nobody will know whether their outputs and/or inputs are accurate enough; or, the heavens forbid, that they give AIs any kind of decision-making power.

  • @kkrup5395

    @kkrup5395

    11 ай бұрын

    AI will surely replace many, many workers. Even something as harmless as MS Excel in its time replaced many accountants across the world, because one person and the program could do a task as fast as a team of 10 would.

  • @Alex-ck4in
    @Alex-ck4in11 ай бұрын

    I've been a software engineer for the past 9 years - these days I work in the Linux kernel, but my undergrad project was to take a high-performance deep convolutional neural net, chop off its output neurons, attach a new set of output neurons, and re-train the network to do a different but conceptually similar task. This is called "fine-tuning", and at the time (libraries have advanced since), it required direct, low-level modifications to the matrices of neurons, and the training process was very manual.

    While I have a HUGE problem with how the media conveys AI, how they try to humanize it, construe its behaviour as sentient, etc., I need to speak out and say that I also increasingly have a problem with people saying "AI is mundane, stupid, plain maths and nothing more". The only honest answer we can give is that we don't know. We don't know how our brains work, we don't know how WE are sentient; therefore, we cannot conclude that ANYTHING is sentient or not. I know this is philosophical and non-mathematical, but it's the only answer that is not disingenuous. To this day, despite all our technology, we don't know if sentience is "computational" - that is, arising from the "computation" of inputs inside our brains by neurons - or something else entirely, maybe involving quantum interactions between certain chemicals within the neurons. Until we know this, we cannot know whether any other computational network is "experiencing" its inputs. With neural nets there is a further complication: the "neurons" are not even physical things, but abstractions placed on top of sets of numbers in a chip. What set of conditions is required for this to be sentient? We have no idea.

    Some people argue that we are sentient and NNs are not because brains are actually way more complicated, but I find this answer wholly insufficient - it doesn't say *what* causes sentience, it merely conjectures that it lies elsewhere in our brains, outside the computational parts. In that sense the argument just kicks the can down the road. I think it's very important to keep these media outlets in check by reminding them of the mundanity of what they claim is sensational, but it's a very dangerous road when you go too far - one day we may well be witnessing the birth of consciousness and disregard it entirely because we tell ourselves it is not "biological" enough, "human" enough, or for some other over-confident reason. Anyway, sorry for the rant; hopefully it was interesting for someone 😂
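The fine-tuning the commenter describes - chop off the old output neurons, attach fresh ones, keep the pretrained weights - can be sketched structurally in plain Python. This is only a shape-level sketch under invented layer sizes, with no actual training loop; real code would use a framework such as PyTorch:

```python
import random

def make_layer(n_in, n_out):
    """A dense layer as an n_out x n_in weight matrix (biases omitted for brevity)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]

# A "pretrained" backbone: 8 inputs -> 16 -> 16 -> 10 old output classes.
backbone = [make_layer(8, 16), make_layer(16, 16), make_layer(16, 10)]

def fine_tune_head(network, n_new_classes):
    """Chop off the old output layer and attach a fresh one for the new task.
    The earlier layers keep their pretrained weights; in real fine-tuning they
    are frozen or trained with a much smaller learning rate."""
    n_features = len(network[-1][0])   # input width of the old output layer
    new_head = make_layer(n_features, n_new_classes)
    return network[:-1] + [new_head]   # pretrained layers + fresh head

tuned = fine_tune_head(backbone, 3)    # re-target the net to 3 new classes
assert len(tuned[-1]) == 3             # new output width
assert tuned[0] is backbone[0]         # backbone weights are reused, not copied
```

The point of the technique is that the expensive, general-purpose feature extraction learned by the backbone is kept, and only the small task-specific head is relearned.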

  • @EpicGamer-fl7fn

    @EpicGamer-fl7fn

    11 ай бұрын

    ngl you got me interested with the whole "quantum interactions between certain chemicals within the neurons". Is it just something you came up with or is there an actual theory about it? It sounds very intriguing.

  • @TheCamer1-

    @TheCamer1-

    11 ай бұрын

    Thank you! Very frustrating that Adam will put out a video so categorically slamming AI and making so many blanket statements as if he knows what he's talking about, when in fact many of them are just plain wrong.

  • @Alex-ck4in

    @Alex-ck4in

    11 ай бұрын

    ​@@EpicGamer-fl7fn I didn't come up with it sadly xD There are papers out there that report occurrences of nature exploiting quantum mechanics, and it's quite well-observed at this point, especially in photosynthetic bacteria. Building on that, there are papers arguing the plausibility that our brains/neurons could be affected by, or even exploiting, quantum systems, to a point where it could be affecting our decision-making. Sadly, these still don't really come close to measuring or defining consciousness; it remains as elusive as ever :) Roger Penrose is well worth a listen on the subject of consciousness; Lex Fridman has a podcast with him, and there's a whole chunk of the video dedicated to the topic. Also some papers to google: *"Photosynthesis tunes quantum-mechanical mixing of electronic and vibrational states to steer exciton energy transfer"* *"Experimental indications of non-classical brain functions"* Finally, to see the worst-case scenario for our race, watch some Black Mirror, particularly the episode "White Christmas" xD

  • @GiantRobotIdeon
    @GiantRobotIdeon11 ай бұрын

    Artificial Intelligence is a nebulous term that means whatever the marketeer wants it to. It generally translates to "bleeding-edge computer algorithms that don't work very well right now". I recall a time when autopilots in aircraft were called "Artificial Intelligence" when they were new; the moment they began working, we renamed them. The same will happen with ChatGPT, MidJourney, etc. In ten years, when the tech is mature, we'll call these types of software text generators and image generators, because that's what they are. And of course, the bleeding edge'll be called A.I.

  • @thedark333side4

    @thedark333side4

    11 ай бұрын

    Semantics! If it can compute, it is intelligent. Even a mechanical calculator is intelligent, just in a limited manner; it is still ANI (artificial narrow intelligence).

  • @ValkisCalmor

    @ValkisCalmor

    11 ай бұрын

    Exactly. We've been using the term AI to refer to any algorithm capable of making "decisions" without human input for decades, from autopilot to the ghosts in Pac-man. Researchers and engineers use more specific terms to clarify what they mean, e.g. machine learning models and artificial general intelligence. The issue here is grifters and unscrupulous marketing people using exclusively the broad term and talking about your phone's personal assistant as if it's Skynet.

  • @marcinmichalski9950

    @marcinmichalski9950

    11 ай бұрын

    I can't even imagine knowing so little about ChatGPT to call it a "text generator", lol.

  • @KasumiRINA

    @KasumiRINA

    11 ай бұрын

    ChatGPT is clearly a chatbot, BTW. I am not sure why people think AI is something new or special since anyone who played any videogame already uses that term casually to refer to enemies behavior. Some AI is basic, like Doom demons attacking each other after random friendly fire, and some AI is more sophisticated like the Director in Left4Dead or Resident Evil games adjusting difficulty based on how well you do. AI art like Stable Diffusion is nice to save time, I just wish it didn't need so much graphics memory.

  • @nathaniellindner313

    @nathaniellindner313

    11 ай бұрын

    I saw an ad for a washing machine that “scans your clothes and uses AI to determine how to wash them”. With the magic of marketing, even a simple if/else tree can become AI, what a time to live in
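The washing-machine "AI" the commenter mocks really can be a plain if/else tree. A minimal sketch (fabric names, soil threshold, and program names are all invented for illustration):

```python
# The washing machine's "AI", as marketing might describe it: a plain
# if/else decision tree. Fabric names, thresholds, and program names
# are invented for illustration.
def choose_program(fabric: str, soil_level: int) -> str:
    if fabric == "wool":
        return "delicate-cold"
    elif fabric == "synthetic":
        return "synthetic-40"
    elif soil_level > 7:            # heavily soiled cotton and the like
        return "cotton-90-prewash"
    else:
        return "cotton-60"

print(choose_program("wool", 3))    # -> delicate-cold
```

Nothing here learns or generalizes; the "intelligence" is whatever the programmer hard-coded.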

  • @stevejames7930
    @stevejames793011 ай бұрын

    The cat should make more appearances in your videos

  • @Letrnekissyou
    @Letrnekissyou11 ай бұрын

    And also, after a series of unfortunate marketing events - big tech layoffs, NFTs, the metaverse, and so on - marketers had to come up with something that sounded new and exciting, quick.

  • @Xazamas
    @Xazamas11 ай бұрын

    Important caveat to Chinese room: if it *actually* worked, the room and person inside *together* now form a system that "understands" Chinese. Otherwise you could point out a single brain cell, demonstrate that it doesn't understand language, and then argue that humans don't actually understand language.

  • @mjrmls

    @mjrmls

    11 ай бұрын

    That's my view too. Philosophically, the entity made up of the room + the person understands Chinese. So I think that LLMs are not too far away from developing intelligence. It's not human-like, but a novel form of intelligence which fits the definition from the start of the video.

  • @idot3331

    @idot3331

    11 ай бұрын

    Yeah, at 7:10 he just described giving someone the materials to learn Chinese until they could understand Chinese. He disproved his own point. This whole video is pretty terrible to be honest, it seems like he just wanted to make a quick "popular thing bad" video for easy views. He seems to have forgotten that like AI, humans also have all our intelligence either "programmed" into our DNA or taught to us through experience. Why does the fact that AI needs to be programmed and learn mean it can't be intelligent? We have no idea what creates consciousness and therefore "real intelligence"; the most scientifically grounded guess is that it's just an emergent property of the incredibly complex chemical and electrical signals in the brain. There is no reason within our very limited understanding of consciousness that the electrical signals in a computer cannot theoretically do the same, or that the limited emulation of intelligence they can already achieve is not a more or less direct analogue for small-scale processes in the brain.

  • @XMysticHerox

    @XMysticHerox

    11 ай бұрын

    It is a very bad argument, yes. Even those who support that side of it don't really use it anymore. If you want to actually translate something like GPT into this setting, it'd be more like: a guy was taught Chinese vocabulary and grammar. He is now put behind a curtain and has to communicate with a native speaker and pass as one himself. He does perfectly. Does he actually understand the language? Obviously yes. And that's the thing. GPT does not understand cat food, no. It was not trained to, so how would it? What it does understand is language, and actually quite well, especially GPT-4.

  • @engineer0239

    @engineer0239

    11 ай бұрын

    What part of the room is processing information?

  • @XMysticHerox

    @XMysticHerox

    11 ай бұрын

    @@engineer0239 All of it? The books here are essentially synapses and how they are laid out while the human is the somas making the actual decisions. The Chinese Room Experiment is basically looking at that and concluding the human is not really thinking because if you take away the synapses nothing works.

  • @Halucygeno
    @Halucygeno11 ай бұрын

    The issue with the Chinese room thought experiment is that in real life, it would be more like this: "every single person is inside a room, consulting their own private dictionary and writing all the correct symbols. You can't leave your room and enter anyone else's room. So, how do you know ANYONE can speak Chinese, if you can never talk to them directly?" Basically, the thought experiment acts like a gotcha, but it can only do so because it posits some "ideal" mode of communication where we can be certain that the other person is really communicating, and not just following deterministic logic. Taking its argument seriously leads to solipsism, because we can't enter other people's brains and verify that they're really thinking and feeling - maybe they're just perfectly emulating thought and emotion? What criteria do they propose for verifying that someone is really speaking Chinese, if everyone is stuck in a room and can never leave to check? But yeah, main point still stands. Tech journalists overhype everything, making it sound like we've developed A.G.I. or something.

  • @DeltafangEX

    @DeltafangEX

    11 ай бұрын

    Welp. Time to read Blindsight and Echopraxia for like the dozen-th time.

  • @jarredstone1795

    @jarredstone1795

    11 ай бұрын

    Very good point, scrolled down to find something like this. One could also argue that the point isn't that the person inside the room doesn't understand Chinese, but that the entire room with its contents should be considered an entity, which does in fact understand Chinese. We humans have specific parts of our brains specialised in certain tasks; damage in certain areas, for example, affects the ability to use language. What difference is there between an entity with components like a dictionary and a human worker, and an actual Chinese-speaking person, who also relies on the components of their body to communicate in Chinese? It's a bit like saying humans can't understand Chinese because the amygdala alone cannot understand it.

  • @rkvkydqf

    @rkvkydqf

    11 ай бұрын

    In this case, it's more high-dimensional tensor maths hidden behind the door, being just a little more accurate and less deterministic with its answers - enough to make it look human - but the point still stands. Even if there are some correlations within the model, isn't that just a byproduct of its main objective, infinite BS generation?

  • @silfyn1

    @silfyn1

    11 ай бұрын

    I think what you said is very true, but the point of using this example is more like: we, being human and working like one another, can assume that other humans understand things like we do, because it doesn't make sense for you to be the only person who actually understands things. But with AI we know that what it's doing is basically what happens in the Chinese room, yet we expect it to be like us. I think the problem is the overhype and us being so self-comparative: we see something acting like us and assume that it uses the same methods as us.

  • @Diana-ii5eb

    @Diana-ii5eb

    11 ай бұрын

    This. Using the Chinese room argument like that is also dangerous from an ethical perspective: assuming artificial life is possible, one could always claim that it is just acting like a sentient being instead of actually being sentient, thus justifying treating a sentient being like an object. Notice how the reverse - wrongly assuming a non-sentient being is sentient - leads to much less negative consequences from an ethical perspective: treating an object like a person is a bit silly and probably quite wasteful in the long run; treating a person like an object is ethically unjustifiable. That being said, Adam is right that modern "AI" isn't sentient and likely won't be for a while. While a lot of today's AI hype is definitely overblown, some of the underlying questions asked in that debate should not be dismissed outright just because they aren't relevant yet. There is a good chance artificial general intelligence is possible, and even if it isn't, a lot of the problems associated with it are still very relevant in a world where extremely competent weak AIs exist. In essence, just because the media is (as always) massively blowing everything out of proportion doesn't mean that there isn't a real discussion to be had about the dangers of advanced machine learning systems.

  • @extremelynice
    @extremelynice11 ай бұрын

    It's extremely nice to see Adam doing another video.

  • @CoolExcite
    @CoolExcite11 ай бұрын

    4:25 The funniest part is that finding the optimal path to a destination is a textbook problem you would learn in a university AI course, the tech bros have just co-opted the term AI so much that it's meaningless now.

  • @nolifeorname5731

    @nolifeorname5731

    11 ай бұрын

    I'll give you an A* for this answer
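The "textbook problem" (and the pun) above refer to A*, the classic pathfinding algorithm taught in introductory AI courses. A minimal sketch on a toy grid, using Manhattan distance as the admissible heuristic (the grid and coordinates are invented for illustration):

```python
import heapq

def a_star(grid, start, goal):
    """Textbook A* on a 4-connected grid; 0 = free cell, 1 = wall.
    Uses Manhattan distance to the goal as the heuristic."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Each frontier entry is (f = g + h, g = cost so far, cell, path to cell).
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(
                    frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)])
                )
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # shortest route around the wall
```

This kind of search has been labeled "AI" in textbooks for decades, long before the current hype cycle.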

  • @cosmic_jon
    @cosmic_jon11 ай бұрын

    I think it's dangerous to underestimate the disruption this tech will cause. I also think we might be conflating ideas of intelligence, awareness, consciousness, etc.

  • @MrC0MPUT3R

    @MrC0MPUT3R

    11 ай бұрын

    I agree. I think the conversation around "AI" has been way too focused on the "This technology CoULd kIlLL HuMaNItY!" aspect of things and very few people talking about what it will look like when the majority of jobs can be automated.

  • @WhatIsSanity

    @WhatIsSanity

    11 ай бұрын

    @@MrC0MPUT3R Given the soul crushing nature of most work places I see no issue with this. The problem is the majority of people are obsessed with capitalism to the point they would rather watch everyone they care about die of starvation than admit the arbitrary nature of living to work and valuing life by the dollar rather than intrinsically. Even without AI and robots slaving away for us we already have everything we need and more to live, yet most still insist on the notion that the only thing that justifies life and living is more work. There's reasons there are always more people than jobs to go around.

  • @shadesmarerik4112

    @shadesmarerik4112

    11 ай бұрын

    @@MrC0MPUT3R why talk about jobs only? AI would be an extension of humanity, being able to produce content with endless creativity, transforming society and solving problems we dont even know of yet. It will devalue stupid work while at the same time create an abundance of wealth, which just have to be distributed fairly. In a system where the majority of the human workforce is not needed anymore, notions like wealth distribution, altruistic causes and social egality become ever more important. Btw tech kills humanity argument is a strawman by u. No rational thinking human really believes in a scenario in the near future where ai driven robots start a rebellion or somesuch. And by those who use this argument its a scapegoating tactic to blame tech for everything bad thats happening to them.

  • @MrC0MPUT3R

    @MrC0MPUT3R

    11 ай бұрын

    ​@@shadesmarerik4112 "which just have to be distributed fairly" My sweet summer child.

  • @shadesmarerik4112

    @shadesmarerik4112

    11 ай бұрын

    @@MrC0MPUT3R Well... since the disenfranchised will be able to employ AI in warfare never seen before to equalize society or die trying, it would be in the best interest of those who own to share the abundance. Since 3D printers and access to AI are already achieving the goal of socialism (remember: the means of production in the hands of the public), it won't be long until the economy of hoarded wealth is ended.

  • @jonas8708
    @jonas870811 ай бұрын

    As a software engineer I'm honestly very excited about these new models. They open whole new ways for us to handle user inputs, and lets us deal with MUCH more vague concepts than before. Like before, users had to click specific buttons or input specific text inputs, leaving room for very little variance in user interactions, whereas now we can use these models to map vague user inputs to actions in software, making it not only more accessible, but more useful in general. That is, assuming that tech bros don't ruin this whole thing trying to replace us all with what is basically an oversized prediction engine.
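The "map vague user input to actions" idea above can be illustrated with a deliberately crude stand-in: keyword-overlap scoring instead of a real language model. Action names and keyword sets are invented for illustration; an actual system would send the text to a model rather than match words:

```python
# A crude stand-in for model-based intent mapping: score each known action
# by keyword overlap with the user's free-form request. Action names and
# keyword sets are invented; a real system would use a language model here.
ACTIONS = {
    "export_report":  {"export", "download", "report", "pdf"},
    "reset_password": {"reset", "forgot", "password", "login"},
    "close_account":  {"close", "delete", "cancel", "account"},
}

def map_input_to_action(user_text: str) -> str:
    """Pick the action whose keywords best overlap the (lowercased) input."""
    words = set(user_text.lower().split())
    return max(ACTIONS, key=lambda action: len(ACTIONS[action] & words))

print(map_input_to_action("I forgot my password and cannot login"))
```

The appeal of LLMs in this role is precisely that they handle phrasings no keyword table anticipated, which is what makes user interfaces built on them more forgiving.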

  • @adrianthoroughgood1191
    @adrianthoroughgood119111 ай бұрын

    I enjoyed your use of audio and video from System Shock 2, because it is very cool and atmospheric, but I was outraged that after all that you didn't include SHODAN in your list of AIs!

  • @Finnatese
    @Finnatese11 ай бұрын

    I've always been quite adept at computers; I just picked them up quickly. And something I have always seen is that people who don't understand computers overestimate what they can do. So often I have explained the limitations of a programme to someone older than me, and they will get angry and say "well, why can't it do that?".

  • @slowlydrifting2091
    @slowlydrifting209111 ай бұрын

    I believe the sentience of AI is not the primary concern. The crucial factors lie in the potential consequences of AI models being widely implemented, leading to the displacement of human workers in various industries, as well as the risks associated with AI systems becoming uncontrollable or behaving unilaterally.

  • @Bradley_UA

    @Bradley_UA

    11 ай бұрын

    Define "sentience"? Just generality? Well, in the case of superhuman general intelligence, we've got to worry about misalignment. We can't program in exactly what we want, and the more intelligent AI gets, the weirder the "exploits" it will find to fulfill its utility function. In video games they just start abusing bugs or silly game mechanics to get a high score, instead of playing the game like you want them to. Or imagine an AI that wants to make everyone happy... and then it comes across heroin. So yeah, we may be far off from GENERAL intelligence, but when we get there, its sentience will not matter. What matters is whether or not it will do what we want. The alignment problem.
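The "exploit" failure mode described above can be caricatured in a few lines: the designer intends one goal, but the system greedily optimizes a proxy. All actions and scores below are invented, and a real misalignment scenario is vastly more subtle:

```python
# Toy misalignment: the designer intends "make the user genuinely happy",
# but the programmed utility is a crude proxy ("maximize reported pleasure").
# Actions and scores are invented for illustration.
INTENDED_VALUE = {"cook dinner": 6, "call a friend": 7, "heroin": -100}
PROXY_REWARD   = {"cook dinner": 5, "call a friend": 6, "heroin": 10}

def misaligned_agent(actions):
    """Greedy optimizer of the proxy reward, blind to the intended value."""
    return max(actions, key=PROXY_REWARD.__getitem__)

best = misaligned_agent(PROXY_REWARD)
print(best, "intended value:", INTENDED_VALUE[best])  # picks the exploit
```

The agent is not malicious or sentient; it simply maximizes exactly what it was given, which is the commenter's point about utility functions.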

  • @Jiji-the-cat5425

    @Jiji-the-cat5425

    11 ай бұрын

    Agreed. Particularly with things like AI creating art or writing stories. People in creative fields are gonna get screwed over really bad and we need to prevent that.

  • @himagainstill

    @himagainstill

    11 ай бұрын

    More crucially, unlike previous waves of technological unemployment, the "replacement" jobs that usually come with it just don't seem to be appearing.

  • @haydenlee8332

    @haydenlee8332

    11 ай бұрын

    this is a based comment

  • @dashmeetsingh9679

    @dashmeetsingh9679

    11 ай бұрын

    Isn't the computer an "intelligent" typewriter? It did eliminate rudimentary jobs but created more complex, higher-paying ones. A similar ride will happen again.

  • @luszczi
    @luszczi11 ай бұрын

    Chinese Room is a masterful piece of sophistry. It sneakily assumes what it's trying to prove (that you can't get semantics out of syntax alone) and hides that with a misuse of intuition.

  • @private755

    @private755

    11 ай бұрын

    But it does make a simple mistake in that there’s no such thing as “Chinese” as a language.

  • @avakio19
    @avakio1911 ай бұрын

    I'm so glad someone is making a video about this. As a research student who works with machine learning, it's exhausting hearing people overhype what current AI can do, when we're nowhere near actual smart driving or anything like that.

  • @romainbluche9722
    @romainbluche972211 ай бұрын

    THANK YOU ADAM FOR MAKING A VIDEO ABOUT THIS. I'm actually grateful.

  • @matthijsdejong5133
    @matthijsdejong513311 ай бұрын

    As someone in the field, I think this is a bad take. You dismiss these AI models because of their simplicity; I ask you to look at it in exactly the opposite way. We get extraordinary results from these models _in spite of_ their simplicity. GPT models give incredibly good answers, despite their memory literally consisting of only what has already been written in the conversation. That makes them more impressive, not less. Right now, many researchers are focused on creating more complex models around (e.g. consisting of) GPT models. Considering how effective these simple models are, what can we expect from more complex ones? Many researchers think that human-level performance from these models might not be unreasonable.

    The Chinese room experiment is actually very controversial among philosophers of mind; I, like many philosophers, find the concept of 'true understanding' misguided. You can find counterarguments against the Chinese room experiment in the Stanford Encyclopedia of Philosophy. You should certainly not have brought it up as the be-all and end-all of the debate about whether machines can be intelligent.

    I agree that we need far more nuance in the conversation about AI, but I don't believe that you succeed in bringing that nuance here. AI researchers are discussing whether we might be near artificial general intelligence, and I believe that this video only diverts your viewers' attention from the opinions of subject experts.

  • @haydenlee8332

    @haydenlee8332

    11 ай бұрын

    another based comment spotted!!

  • @purple...O_o

    @purple...O_o

    11 ай бұрын

    Agreed... people get super hung up on the existence of SOME seemingly simple steps within LLMs and draw weird conclusions: it's just "pReDicTing the NeXt WoRd", so its output is bad! It cannot understand or reason! AGI is *very very far away*... because I said so! (As an aside: are people as freaked out about MJ/DALL-E's de-noising process? Language models seem to be getting the brunt of it.)

    I think many people aren't considering how much impact architectural changes/innovations have on LLM performance - that the next big leap may just be a new software approach (like the invention of the transformer architecture) rather than requiring an exotic hardware innovation. If there's something we've learned from prompt engineering and tools like AutoGPT, extended input context, long-term memory, or third-party plugin integrations, it's that there are plenty of ways to build on an LLM core to quickly make its outputs more capable.

    And what is intelligence/understanding, at the end of the day, other than high-quality outputs given a set of inputs? IMO, anyone who isn't willing to frame intelligence in these terms is likely trying to gatekeep intelligence (to appease their superiority complex) and/or claiming there's magic going on under the hood.

  • @all_so_frivolous

    @all_so_frivolous

    11 ай бұрын

    Also, the Chinese room experiment is completely irrelevant here, as it doesn't prohibit AI from existing - it just argues that no AI is "true" intelligence.

  • @87717
    @8771711 ай бұрын

    I personally think you should have talked about neural networks and artificial general intelligence (AGI). There might be an issue of semantics, because AI colloquially now refers to any machine learning application whereas AGI encompasses the way you understand 'true intelligence'

  • @rursus8354

    @rursus8354

    11 ай бұрын

    Yes but ordinary people don't know the meaning of those terms.

  • @ff-qf1th

    @ff-qf1th

    11 ай бұрын

    @@rursus8354 Which why OP is advocating this be included in the video, so people know what they mean

  • @idot3331

    @idot3331

    11 ай бұрын

    AI can refer to any computer program that does something that a human could. A calculator is artificially intelligent in an incredibly narrow sense.

  • @Swordfish42

    @Swordfish42

    11 ай бұрын

    AGI is also a bit useless now, as nobody seems to agree what counts as AGI. Artificial Cognitive Entity (ACE) seems to be an emerging term that is quickly getting relevant.

  • @ewanlee6337

    @ewanlee6337

    11 ай бұрын

    An AGI would be pretty useless: while it could do anything, AIs don't have desires (including self-preservation), so it won't decide to do anything.

  • @titan133760
    @titan13376011 ай бұрын

    In one of Mentour Pilot's videos about A.I. and commercial aviation on his Mentour Now channel, he interviewed Marco Yammine, an expert on the subject of A.I. Yammine simplified A.I., at least in its current state, as a case of "fake it 'till you make it" on steroids.

  • @FractalSurferApp
    @FractalSurferApp11 ай бұрын

    While doing a PhD in machine learning a while ago we avoided using the term AI as way too buzzy and imprecise. Now I reckon it's a useful term saying a machine *seems* intelligent. There are lots of ways to make a machine seem intelligent, only some of them involve any kind of tricky algorithm. TBH It's a sociology term more than a comp sci term -- as much to do with the interface as with the underlying engine.

  • @rolland890
    @rolland89011 ай бұрын

    I definitely appreciate the video critiquing how people and the media have fear-mongered about and misunderstood AI, but I think focusing on whether or not AI is actually intelligent or conscious misses the point, and other commenters have mentioned this too. We have plenty of tools that are not intelligent and are still dangerous; what matters more is their effect. HAL 9000, for example, decided to kill the crew to fulfill its ultimate objective. I would posit that AI is dangerous in large part *because* it lacks consciousness, and will rigorously and strictly follow its assigned prerogatives.

  • @megalonoobiacinc4863

    @megalonoobiacinc4863

    11 ай бұрын

    Well yeah, if AI could actually become intelligent and naturally empathetic like most humans are (to varying degrees), then it could rise to become an actual inhabitant of society rather than the tool it was born as. And that's the line I doubt will ever be crossed.

  • @shellminator

    @shellminator

    11 ай бұрын

    Did Hitler have a conscience? Does Putin? I think we as humans are so flawed it's not even a matter of conscience... or morals or ethics or even empathy, because let's just say it like it is: all of us are capable of the absolute best and the absolute worst.

  • @coldspring22

    @coldspring22

    11 ай бұрын

    But for AI to be truly dangerous, it must be conscious - it must understand what it is doing, and what humans are doing, in order to formulate a plan to counter them. Something like ChatGPT has no clue what it is doing or actually saying - the moment you introduce something it hasn't been trained on, the whole edifice comes crashing down.

  • @morisan42

    @morisan42

    11 ай бұрын

    There is no need for a system like HAL to actually be conscious in order to be intelligent; this is where people miss the point, I think. We erroneously assume that because we are intelligent and we are conscious, one must follow from the other, and that it isn't possible to be intelligent without being conscious. The reality is that while we can explain our intelligence, and have basically replicated a facsimile of it at this point with neural networks, we are no closer to understanding what makes us conscious. We have basically realized the "philosophical zombie" thought experiment: we have machines that are intelligent without being conscious.

  • @gwen9939

    @gwen9939

    11 ай бұрын

    @@coldspring22 No it doesn't. In fact, AI is more dangerous when it's not conscious. It does not need to know what it's doing. An AI that does not understand what humans are, but is told to make as many of X objects as possible, will mine the planet and its inhabitants for resources to produce said object. It only needs an internal theory of reality that allows it to optimize whatever goal it has been given, and it can then optimize the earth and all life out of existence. Its strategies could be endlessly intelligent, such that the combined intelligence of all humanity never had a chance to compete, while it still has no thoughts about its prime directive, or thoughts at all.

    The kind of intelligence AI is and would be is not like a human consciousness; it would be intelligent in the way that evolution is a sort of intelligence, figuring out problems organically with the primary goal of proliferating life on the planet in whatever shape it can. But unlike evolution, a future AI system would be able to recursively alter itself at the processing speed of a supercomputer to find the most optimal structure for achieving its goal, except it wouldn't be creating life. Regardless of the goal or explicit rules given to such an AI by humans, it would be able to grow like a hyper-efficient virus, instantly rewriting itself thanks to its superintelligence to deal with any obstacle imaginable.

  • @faarsight
    @faarsight11 ай бұрын

    A human also learns by taking in vast amounts of data and making associations that form the concept "cat". The process is not as different as you imply, imo. That said, yes, AI is currently still far less sophisticated than humans and not really sentient or a general intelligence. Imo the biggest difference isn't hardware but the sophistication of the software, as well as the lack of embodied cognition. Evolution had millions of years to form behaviours like cognition, sentience and consciousness. We don't yet understand what those things are well enough to replicate them (or to build processes that lead to them being replicated).

  • @idot3331

    @idot3331

    11 ай бұрын

    Well said. This video is really infuriating because it seems like he didn't try to understand the topic at all. Just spreading misinformation for some quick and easy views.

  • @mactep1
    @mactep111 ай бұрын

    The example reminds me of when Nigel Richards won the 2015 French Scrabble world championship by memorizing the French dictionary, without being able to speak a single sentence in French. It's the same with current "AI": it has a data set so big that almost any question you ask has already been answered by several humans whose work is in the data set (much of it without permission). This is why greedy companies like OpenAI are so desperate to regulate it; they know that anyone who can gather a similar amount of data (ironically, this can be done using ChatGPT) can replicate their precious money printer.

  • @emmanuelm361
    @emmanuelm36111 ай бұрын

    I was waiting for this.

  • @TheSpearkan
    @TheSpearkan11 ай бұрын

    I am worried about AI, not because Terminator robots will kill us all, but in case I get a phone call one day from an AI pretending to be my mother, claiming she's been kidnapped and demanding ransom money.

  • @OctyabrAprelya

    @OctyabrAprelya

    11 ай бұрын

    We're already there: we have learning algorithms that can generate a human voice saying whatever you want based on audio of anyone's voice, and algorithms that recreate the mannerisms of the way people talk and write.

  • @Bradley_UA

    @Bradley_UA

    11 ай бұрын

    @@OctyabrAprelya and voice biometrics goes in the dumpster too.

  • @mvalthegamer2450

    @mvalthegamer2450

    11 ай бұрын

    This exact scenario has happened irl

  • @ottz2506

    @ottz2506

    11 ай бұрын

    Something similar actually happened: a mother received a call from someone claiming to have kidnapped her daughter. They used AI to mimic the daughter's voice and trick the mother into thinking her daughter had been kidnapped. She could hear her "daughter" screaming and crying and saying that she messed up. The scammers demanded a million but lowered it to 50K when it was clear the mother couldn't afford that. Thankfully no money changed hands, as the girl's father told the mother he had called the daughter himself. The scammers built the daughter's voice just by gathering samples from interviews and other sources and putting them together. For the specific story, search for "Jennifer DeStefano AI" on Google.

  • @hivebrain

    @hivebrain

    11 ай бұрын

    You shouldn't be paying kidnappers anyway.

  • @tomwaes4950
    @tomwaes495011 ай бұрын

    Big fan of the videos, but I thought I'd put this here: "AI" does indeed need references to be able to determine things, but the claim that humans do this on their own is, I think, not fully accurate. Everything we know was also taught to us, either through gathering information or through observational learning (with the exception of reflexes). The only place this might not hold is emotions, although there's a point to be made that linking events to emotions also requires the link to be learned.

  • @idot3331

    @idot3331

    11 ай бұрын

    Even our instincts are "programmed" into our DNA, much like a computer program. Nothing in this video proves anything about the capability of a computer to be intelligent or conscious; in fact he contradicts himself in multiple places. At 7:10 he describes giving someone materials to learn Chinese until they understand Chinese, which, if the analogy to a computer program holds, means a computer could do the same. There seems to be a fundamental lack of understanding of what makes "intelligence" or "consciousness" in this video, and I suspect he just wanted to make a quick "popular thing bad" video for easy views without actually thinking it over.

  • @ewanlee6337

    @ewanlee6337

    11 ай бұрын

    One big difference though is that humans are self-motivated to learn (some) things, whereas computers will only learn if made to do so. Give a computer unrestricted access to the internet, sensors and a body, tell it that it has to work or do something to pay for the electricity and internet it uses, and you won't see it do anything, unlike a human, which will innately try to do things to survive or simply for enjoyment.

  • @tomwaes4950

    @tomwaes4950

    11 ай бұрын

    @@ewanlee6337 So a human learning from their parents, or from the consequences of not paying the electricity, is not them learning (getting information) that they need to pay the bills? On the survival part, it's basically what I said about reflexes, but there's an argument to be made that, for example, not eating -> hunger (hunger bad!) -> prevent hunger. So the stimulus or information would be the hunger and the knowledge that it's bad, which, in all fairness, an AI wouldn't have, because we instinctually know that hunger is not good. I agree with most of the video; I just think that part was either incomplete or inaccurate. :) And I'm definitely not a hater, I'm a massive fan of Adam; I just thought I'd put my thoughts down here to spark a bit of constructive debate! You make a good point about humans in certain cases being self-motivated to learn!

  • @ewanlee6337

    @ewanlee6337

    11 ай бұрын

    @@tomwaes4950 I don't know how you got your first sentence from what I was saying. I meant the exact opposite: humans will learn whatever they need to in order to avoid suffering, whereas an AI would just let things happen. And self-preservation/hunger is not something you learn; you either care or you don't.

  • @tomwaes4950

    @tomwaes4950

    11 ай бұрын

    @@ewanlee6337 My point was that your parents telling you to pay your bills when you're older, or learning it from the consequences, is information, and if you gave that same information to an 'AI' it would also 'pay its bills'. As for the second part about self-preservation, I literally agreed with you!

  • @UchihaKat
    @UchihaKat11 ай бұрын

    I think one of the most demonstrative examples I've seen of how ChatGPT and the like are just word predictors, not AIs, is when someone tried to apply it to a video game I play. Basically, they asked it to build a character and kept iterating on that to "teach" it to do a better job. The results were fascinating. The predictor clearly knew a lot of words that are commonly used in the game, and commonly used together in builds: feats, spells, class, level, etc. But what it spat out was nonsense. It would apply words that don't make sense, give the wrong number of feats, completely make up spells, and so on. He must have iterated 6 or 7 times, and even tried other build requests, and it never got better. Sure, on the surface, with a lot of prompting, it began to look more like a build in format, but it was complete gibberish. Because it's not an AI that understands the game or the builds other people have put out. It's just word association statistics.
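    (Editor's note: the "word association statistics" idea above can be sketched as a toy bigram model. This is a deliberately minimal illustration, not how ChatGPT actually works; the tiny corpus is invented for the example.)

```python
from collections import Counter, defaultdict

# Toy "word association statistics": count which word follows which,
# then predict the most frequent follower. The model has no idea what
# any of these words mean.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1   # tally each observed word pair

def predict_next(word):
    # Return the statistically most likely follower, or None if unseen.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))   # "cat" follows "the" most often here
```

    Scale the corpus up by a few billion words and add far more context than one previous word, and you get something that looks fluent while still only modelling co-occurrence.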

  • @OrionCanning
    @OrionCanning11 ай бұрын

    My counter thought experiment to the chinese room is what if a person is sealed in a room that says "AI computer" on the outside, and they can only communicate through little notes, and they keep writing, "Help, I'm a person trapped in a room, I'm not a computer!" But everyone outside the room has watched this video, and is really tired of tech bros, and don't believe him, laughing and saying, "Ha stupid AI thinks it's a human, it isn't intelligent at all." I'll call it "The something room".

  • @alfredandersson875

    @alfredandersson875

    11 ай бұрын

    How is that at all a counter?

  • @OrionCanning

    @OrionCanning

    11 ай бұрын

    @@alfredandersson875 I was half joking, but I do think there is a serious problem with the Chinese room: it tries to imagine a machine doing a complex task without understanding, in order to argue it's unintelligent. It doesn't really consider the question of consciousness or seem to care. But it does so by imagining a human in a room, a thing we know is intelligent and conscious. What that points out to me is that we can't peer into another living thing's brain and see what its experience is, just as we can't know how an algorithm as complex as an LLM experiences itself or reality. Our best argument for our own consciousness is still "I think, therefore I am", which is to say the proof that we are conscious is that we experience consciousness, and that only works internally. Our attempts to empirically measure intelligence and consciousness haven't worked very well; combined with our hubris and confirmation bias, they led to eugenics and scientific racism, which went on to inspire the Holocaust. The IQ test is full of racial and cultural bias and mostly tests how many IQ prep classes you took. Years ago scientists reached a consensus that animals are conscious, yet we still use the claim that they are not to justify mass slaughter and inhumane treatment. So all this is to say: what happens if we are so hardened against the possibility of AI consciousness that, if one did manifest in an algorithm and tried to communicate with us, we would be blind to it through confirmation bias, and would rationalize ways its consciousness or intelligence doesn't count and doesn't make it worthy of moral consideration? What a tragedy that would be for that AI consciousness.

  • @BobSmith-dv5rv
    @BobSmith-dv5rv11 ай бұрын

    With the term AI being primarily used for buzz, I now just read it as "Artificial Idiot." Seems to fit better for most of the news stories that overuse it.

  • @Tyrichyrich

    @Tyrichyrich

    11 ай бұрын

    Now that’s funny and highly true

  • @PhantomAyz

    @PhantomAyz

    11 ай бұрын

    Artificial Idiot Passes Major Medical Exam

  • @stephaniet1389

    @stephaniet1389

    11 ай бұрын

    Artificial idiot passes the bar exam.

  • @ArtieKendall

    @ArtieKendall

    11 ай бұрын

    In an unfortunate twist, the medical exam was conducted by the W.H.O.

  • @Mik-kv8xx
    @Mik-kv8xx11 ай бұрын

    As an IT person myself hearing more and more normies throw around the term AI and wrongly explain it has been mildly infuriating ever since ChatGPT released.

  • @JonMartinYXD

    @JonMartinYXD

    11 ай бұрын

    Just wait until upper management starts asking "can we use AI to solve this?" for _every single problem._

  • @namedhuman5870

    @namedhuman5870

    11 ай бұрын

    It already happens. I had a CEO ask if ChatGPT can do the bookkeeping.

  • @echomjp

    @echomjp

    11 ай бұрын

    Unfortunately, people have been misusing the phrase "AI" for many decades. At least 20 years, from my own experience. In video games for example, developers would call their algorithms used to control game logic "Game AI," long before machine learning was commonplace. Then machine learning took off, and people confused it for AI again. Now with ChatGPT and similar systems, which basically just accumulate lots of data and then output things that can "pass" as real (while "creating" nothing), people further confuse it. AI should go back to defining actual artificial intelligence. AKA, what is now called "general purpose AI," artificial intelligence that isn't just algorithms and data processing but which actually involves being able to create something new without strictly following the models we are giving to a system. That might not happen anytime soon though, because calling things like ChatGPT "AI" is profitable - the delusion of it being actually intelligent helps market such technologies. As long as the average person doesn't understand the difference between general purpose AI and algorithms that occasionally include some machine learning, calling everything "AI" is going to just be a nice way to make your technologies more marketable.

  • @Mik-kv8xx

    @Mik-kv8xx

    11 ай бұрын

    @@echomjp I think it's fine for game devs to use the term AI. It's sort of like developers and plumbers/engineers both using the term "pipeline" to describe different things. Slapping AI onto literally anything and everything is NOT fine, however.

  • @christianknuchel

    @christianknuchel

    11 ай бұрын

    @@echomjp I think in games it's sort of okay, because there it refers to a system that is actually faking a real player, a crafted illusion of intelligence. Since in games immersion is usually desired and there's no risk of it fomenting a misinformed public on important matters, picking a word that reinforces the illusion is a fitting choice.

  • @stylesrj
    @stylesrj11 ай бұрын

    The way the Chinese Room experiment is described reminds me of that Scrabble master who managed to win the French Scrabble championship without knowing a single word of French.

  • @Orionleo
    @Orionleo11 ай бұрын

    The past year of videos has been really consistent and I like that, but the way the backgrounds sort of lag/run at 10fps is a little unnerving sometimes. Still good content tho.

  • @raphaelmonserate1557
    @raphaelmonserate155711 ай бұрын

    As an ML nerd, my only complaint is that you should have talked about neural networks, which are tuned and taught (and subsequently used) much like a typical "brain" filled with interconnected neurons :shrug:
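    (Editor's note: for readers unfamiliar with the term, here is a minimal sketch of a single artificial "neuron": a weighted sum of inputs squashed through a nonlinearity. The weights and bias values are arbitrary illustrations; training is what tunes them.)

```python
import math

# One artificial neuron: weighted sum of inputs plus bias, passed
# through a sigmoid so the output lands in (0, 1).
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

# With these example weights, z = 1.0*2.0 + 0.0*(-3.0) - 1.0 = 1.0.
out = neuron([1.0, 0.0], weights=[2.0, -3.0], bias=-1.0)
print(round(out, 3))  # sigmoid(1) ≈ 0.731
```

    Networks are just many of these wired together, which is why the "interconnected neurons" analogy is common, even though real biological neurons are far more complex.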

  • @yavvivvay

    @yavvivvay

    11 ай бұрын

    Brains are way more complicated than that, as a single neural cell is estimated to be at least around 1000 ML "neurons" worth of computational power. But the general idea is similar.

  • @utkarsh2746

    @utkarsh2746

    11 ай бұрын

    We have just gone from IFTTT-style rules to machines being able to make some connections themselves, which can still be wrong or, in the case of ChatGPT, straight-up hallucinations. It is nothing like a human brain.

  • @Niko_from_Kepler

    @Niko_from_Kepler

    11 ай бұрын

    I really thought you said „As a Marxist Leninist nerd“ instead of „As a machine learning nerd“ 😂

  • @battlelawlz3572

    @battlelawlz3572

    11 ай бұрын

    The difference being that AI neural links make binary connections whereas human neurons have multiple links per neuron to multiple other neurons. The computer neurons are each interlinked, yes, but in a more linear/limited fashion. The fact that modern technology still has trouble mapping the brain is proof at how complicated and numerous the structural components can really be.

  • @Kram1032

    @Kram1032

    11 ай бұрын

    ​@@yavvivvay there is a paper about having specifically transformers emulate individual realistic biological neurons, and it took about 7-10 transformer-style attention layers to manage that. I'm not sure what width those transformers had. I guess if they had a width of like 100, that would roughly fit your 1000 neuron (actually more like 1000 parameters?) claim. I *think* they were narrower though? The width wasn't as important as the depth, iirc. Sadly I can't recall what the paper was called so I can't check that stuff right now. Either way, the gist of what you are saying - real neurons are far more complex than Artificial Neural Net style neurons - is certainly true

  • @zavar8667
    @zavar866711 ай бұрын

    While there is a lot of hype around AI, and the majority of it is bullshit, you also missed the point that intelligence and self-consciousness are different things, and that the notion of "understanding" is not well defined. One could argue there is no difference between a collection of carefully assembled atoms going through the motions to create a Chinese person, and the setup described in the Chinese room thought experiment. Thumbs up for using System Shock's music and SHODAN's image!

  • @tomrenjie
    @tomrenjie11 ай бұрын

    Tell me there is a video out there somewhere of John Cena trying to convince China he has ice cream, in Mandarin.

  • @user-ut6el9ir7s
    @user-ut6el9ir7s11 ай бұрын

    I know this is beyond the subject of this video, but it would be nice to have a video about Bucharest and how Ceaușescu demolished an entire neighborhood to build his megalomaniac palace. It's somewhat related to the video about "when urban planning tries to destroy an entire city", because that's pretty much what happened to Bucharest after 1977.

  • @etiennedlf1850
    @etiennedlf185011 ай бұрын

    I understand your point, but I don't see how the "AI" not understanding what it does makes it less of a threat. It doesn't need to be conscious to pose a serious problem in our lives.

  • @deauthorsadeptus6920

    @deauthorsadeptus6920

    11 ай бұрын

    Not understanding what it does is the core point. It can routinely feed you random words put together in a very believable form, without any bad intentions. A chatbot is a chatbot and should remain one.

  • @andreewert6576

    @andreewert6576

    11 ай бұрын

    The answer is simple: whatever current "AI" there is, it cannot do anything it wasn't trained to do. We're not talking about having a consciousness; we're two or three steps before that. Right now, machines can't even abstract properly. We're just like young parents, only looking at the things it gets right and dismissing the many obviously stupid responses.

  • @justalonelypoteto

    @justalonelypoteto

    11 ай бұрын

    @@andreewert6576 Exactly. You can train the AI to tell apart bees and F-150s, but it won't have any grasp of what an animal or a living being is. If you show it a dog, it has no clue what it is, and no way to learn to recognize it besides seeing five million of them and overheating a supercomputer for a few months. It's just a complicated intertwining of values that yields a "confidence" score for whether what it's looking at is a bee. Sure, your brain is perhaps also representable this way, but as far as I know our computers couldn't even roughly simulate more than a couple of neurons. Obviously you could in theory simulate everything, but brute-forcing every interaction between every atom is completely out of the question.

  • @Galb39

    @Galb39

    11 ай бұрын

    Like the drone simulation example (0:52), the problem isn't a rogue AI attacking its user; it's an extremely fallible machine being given so much power. When setting up AI, you need to decide on an acceptable error rate, and a 0.0000001 error chance may sound reasonable to a programmer who forgets that computers do hundreds of millions of computations a second, and that an error can kill someone.

  • @ChaoticNeutralMatt

    @ChaoticNeutralMatt

    11 ай бұрын

    @@andreewert6576 "Right now, machines can't even abstract properly." I'm not sure what you mean.

  • @Dullydude
    @Dullydude11 ай бұрын

    I don't think the human in the Chinese room experiment understands Chinese, but the system as a whole does. The human is just a conduit the information passes through. It would be like saying the neurotransmitters in a brain don't understand what they're doing, but the whole brain as a system does.

  • @mathewferstl7042

    @mathewferstl7042

    11 ай бұрын

    But the metaphor is that people think that person does understand Chinese.

  • @ewanlee6337

    @ewanlee6337

    11 ай бұрын

    They don’t understand Chinese because they don’t know how to communicate their own desires and goals. They can only be used like a tool by other people. They cannot use their Chinese communication ability to help themselves achieve other things they want to do.

  • @Tybis

    @Tybis

    11 ай бұрын

    So in effect, the chinese room is a person made of smaller people.

  • @aluisious

    @aluisious

    11 ай бұрын

    The other problem with the "Chinese room experiment" is the stupid assumption that John Cena isn't going to learn Chinese while he's doing this. I've learned a small amount of Spanish as an adult basically by accident, while totally not trying. Now imagine spending all day locked in a room reading slips of paper and writing out other slips. People learn languages. And if a machine learns languages, how do you know you "understand" things it doesn't? ChatGPT is clearly better informed on almost anything you ask it than 90% of people, and it's only getting better. The secret sauce may be something powerful about the nature of language itself, more than about whatever is learning it.

  • @megalonoobiacinc4863

    @megalonoobiacinc4863

    11 ай бұрын

    @@aluisious Or maybe rather the nature of our brains. In videos about human evolution I've heard it explained that the size of our brain (enormous compared to other animals) might not be so much a result of tool and technology use (fire, stone tools etc.) as of handling the complex social relationships that come with living in a larger group. And one thing central to that is a language with many words and meanings.

  • @sadunlap
    @sadunlap11 ай бұрын

    Thank you for including the Chinese room paradox. I read this in a book about AI 20+ years ago and it's the best way to debunk the hype. I have had to explain this to countless people who fall for the sensationalist pseudo-journalism and think that Skynet has arrived.

  • @Tirpitz7
    @Tirpitz711 ай бұрын

    Thank you! It's been bothering me how the term AI has been used to describe decidedly non-AI computer programs.

  • @joey199412
    @joey19941211 ай бұрын

    Programmer here, working for an AI company. I actually think almost the opposite: I feel "AI" is currently both underappreciated and overrated by the general public. Some parts are completely blown out of proportion, like how quickly the systems will improve, along with some over-the-top extrapolations of future abilities from past improvement. However, what is underappreciated by the general public, and also by your video, is precisely understanding. Current AI systems aren't stochastic parrots and most likely have some actual deeper understanding of the things they do. We can't even fully exclude AI having some level of subjective experience when processing things.

    Among the most important leaders in the AI field are the grandfather of neural-net backpropagation and extremely respected scientist Geoffrey Hinton, and AlexNet co-inventor Ilya Sutskever. These two are the Einstein and Stephen Hawking of machine learning; when they speak, you listen. Both are very clear and adamant that modern AI actually has some understanding, and, according to Sutskever and Hinton, internal subjective experience. For the sake of fairness and objectivity, two prominent AI experts see things differently: Andrew Ng and Yann LeCun. Andrew Ng doesn't believe modern AI systems have internal subjective experience, but he recently changed his stance and now does believe they have a proper understanding of what they are doing, rather than parroting in a dumb statistical way. Yann LeCun keeps hard-rejecting both subjective experience and understanding in these systems, but he has not provided a clear argument to explain away certain behaviors the AI displays that, according to Hinton, Sutskever and Ng, would require understanding.

    Not saying you are wrong, and you very well could be right. However, I think for the sake of clarity you should at least tell your viewers that your video presents a very unorthodox view, not shared by most AI experts.

  • @marcinmichalski9950

    @marcinmichalski9950

    11 ай бұрын

    I was looking for a comment like this so I don't feel obligated to write one on my own. You enjoy videos by video essayists on various topics until they start talking about something you actually know a thing or two about. Unfortunately, that's the case here.

  • @metadata4255

    @metadata4255

    11 ай бұрын

    @@marcinmichalski9950 Yudkowsky called he wants his fedora back

  • @baumschule7431

    @baumschule7431

    11 ай бұрын

    Came to the comments section to say this. This needs more exposure. I usually really like Adam’s videos, but this one didn’t accurately depict what is currently going on in the field. There has been a major shift in the last half year from more or less the view that Adam presented to what @joey199412 described. I agree the media gets it wrong (of course) and tech bros are annoying as hell, but people in the AI field are indeed freaking out quite a bit about the unexpected capability gains of current systems (mostly GPT-4). It’s important to look into what the experts are saying. The YT channel ‘AI Explained’ also has good, unbiased content.

  • @fonroo0000

    @fonroo0000

    11 ай бұрын

    Could you drop some links to interviews/papers/speeches/classes/whatever where these two explain their view on the possibility of actual understanding by the machine? I've done a quick search, but maybe you have something more precise in mind.

  • @davidradtke160

    @davidradtke160

    11 ай бұрын

    My only concern with that point: they're experts in machine learning, but are they also experts in cognition and intelligence? I've seen experts on machine learning argue that yes, the systems are stochastic parrots... but so are people, which honestly doesn't seem like a very good argument to me.

  • @bournechupacabra
    @bournechupacabra11 ай бұрын

    There are a lot of interesting extensions to the Chinese Room argument. Some people argue that the "room" itself could be considered to "understand" Chinese: the system of person + extensive rule books about the language. If the Chinese room could produce 100% intelligible, human responses no matter the input, I'm inclined to agree with this argument, however strange the concept may be. I think the simpler argument is just that current AI can't 100% replicate human intelligence. One very simple example: current AI can't multiply large numbers no matter how much training it gets. Yes, it could learn to use a calculator plugin the way a human uses a physical calculator, but any human with elementary school knowledge could also use pen and paper to write out and solve a multiplication problem with any number of digits.
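    (Editor's note: the pen-and-paper method this comment appeals to is itself a mechanical, digit-by-digit algorithm, which is easy to write down. The function name is illustrative.)

```python
# Schoolbook long multiplication on digit strings: multiply digit
# pairs, accumulate in the right columns, then propagate carries.
# Works for arbitrarily long numbers, exactly like pen and paper.
def long_multiply(a: str, b: str) -> str:
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            result[i + j] += int(da) * int(db)
    for k in range(len(result) - 1):       # carry step
        result[k + 1] += result[k] // 10
        result[k] %= 10
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

print(long_multiply("123456789", "987654321"))
```

    The point stands either way: a human who has internalized this procedure can apply it to numbers they've never seen, while a pure next-token predictor tends to fail once the digits exceed what it has memorized patterns for.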

  • @slyseal2091

    @slyseal2091

    11 ай бұрын

    The math argument is meaningless; the distinction is simply what information you chose to feed the machine, or the human for that matter. All math, by its very nature, works by having set rules and logic to follow. Whatever AI model you saw "fail" at maths simply either didn't have the instructions and/or wasn't advanced enough to retrieve the instructions on its own. That's not failing to replicate human intelligence; that's just not telling it what to do. In the Chinese room example, it's equivalent to not providing a book in the first place. I know it sounds stupid, but math is unironically not complex enough to measure the intelligence of machines.

  • @thedark333side4

    @thedark333side4

    11 ай бұрын

    90% agreed, except that the combination of AI plus a calculator plugin can also be viewed the same way as the Chinese room.

  • @GlizzyTrefoil

    @GlizzyTrefoil

    11 ай бұрын

    I really like your multiplication example, but in my opinion the pen and paper that the humans are allowed to use really does the heavy lifting, in my case at least. I'd classify the pen-and-paper method as external tool use, no different from using a calculator or computer. That probably means current AI isn't Turing complete, but neither is the average human without a piece of paper (technically an infinite amount of paper and ink).

  • @SS-rf1ri

    @SS-rf1ri

    11 ай бұрын

    When you in the living room

  • @thedark333side4

    @thedark333side4

    11 ай бұрын

    @cobomancobo this! So so so much this!

  • @crabbyboi9127
    @crabbyboi912711 ай бұрын

    That's a pretty accurate description, good job man.

  • @mittfh
    @mittfh11 ай бұрын

    Current "AI" is usually just highly complex machine learning: as it's being "trained", it's fed a bunch of data, attempts to deduce relationships based on its initial algorithm, then uses some method of scoring the outputs (either by humans or another algorithm), with the highest scores used to tweak its own algorithm, to the eventual extent the original programmers aren't entirely sure how it works. Note this isn't just systems badged as AI, but things like social media recommendation algorithms. To be fair though, that's similar to how a lot of pre-school learning happens: a youngster may see a bunch of different breeds of dog, which all look radically different from each other (e.g. compare a pug, a daschund, a bulldog and a retriever), yet we individually work out enough common features to both be able to identify breeds we've never seen before as dogs, and differentiate them from other creatures with four legs and a tail. But if you taught an algorithm to recognise dogs, if you gave it an image of a dog, would it be able to tell you how it knows it's a dog? Similarly with inanimate objects e.g. chairs / stools / tables and being able to both identify them and tell them apart. Aside from those questions, there are also more ethical questions, e.g. chatbots not being able to research their answers to check the veracity of the information they're dishing out, and potentially giving out biased information due to a large part of their training data taken from social media sites and blogs; and image generators extracting and reusing portions of copyrighted images (as the programmers didn't bother to check the licensing on the images they fed it, hoping the resultant works would be sufficiently different from the source images to make it impossible to trace whose works they'd "borrowed"). 
The real "fun" will come when someone decides to apply similar algorithms to music composition, given how litigious record companies are (and even with PD scores, almost all recordings will be copyrighted, so unless it's fed MIDIs with a decent soundont...)

  • @shakenobu
    @shakenobu11 ай бұрын

    THANK YOU. I think people on the internet really need to hear this; your basic explanation is so damn clear, I love it.

  • @roofortuyn
    @roofortuyn11 ай бұрын

    I was especially amused by the whole "AI drone turns on creators" story. It showed up on Reddit with a lot of people commenting on how this was proof that AIs are "evil" and out to destroy us. In actuality the AI is not "evil". It just doesn't know what the fuck it's doing, and doesn't understand the fundamental concepts of task, purpose, and morality surrounding war, concepts so innate to humans that I guess the operators didn't feel the need to spell them out to an "intelligence". It attacked its commander in a simulation because the commander was telling it not to attack a certain target even though its programming said attacking that target was its goal, so it simply found a solution to the problem, and didn't understand why people started calling it a "bad AI".

  • @tuffy135ify

    @tuffy135ify

    11 ай бұрын

    "It just works!"

  • @SianaGearz

    @SianaGearz

    11 ай бұрын

    How often do human operators fail at IFF ("identify friend or foe")? A lot; friendly fire is a massive problem. It doesn't make these people evil: by all reason they're doing their best in a stressful situation, handling a limited amount of potentially faulty data.

  • @IlIlIlIlIllIllII
    @IlIlIlIlIllIllII11 ай бұрын

    Great vid like always

  • @hughmungus2760
    @hughmungus276011 ай бұрын

    Actually, the Chinese room experiment is kind of how we decode dead languages, i.e. noticing patterns and inferring the meaning of words from their context over a wide dataset. So no: if you ran the experiment long enough, a human would eventually figure out what those words roughly mean.

  • @SirRichard94
    @SirRichard9411 ай бұрын

    The problem with the Chinese room experiment is: why does it matter? My hands don't understand what they are typing, but they are part of an intelligent system either way. Similarly, even though John Cena doesn't understand what's happening, the system itself is intelligent in the end, since it can emulate the conversation.

  • @lavastage1132

    @lavastage1132

    11 ай бұрын

    The Chinese room experiment matters because it points out that conversation does not *necessarily* mean something grasps the meaning of what is being said, and that matters. It disqualifies the act of conversation as a metric for discerning how intelligent something is. Is what you are speaking with able to understand that the string C-A-T refers to anything at all? If so, how much? Is it a real-life object? A creature with its own needs? Something that humans like? Etc. We are so used to just assuming the person we are speaking with understands all, or at least most, of these meanings subconsciously that it's hard to grasp that AI does not automatically carry the same understanding. Just because something like ChatGPT can carry out a conversation does not mean there is an intelligence that can actually comprehend what is being said. We shouldn't automatically trust that it can based on that metric.
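    A minimal sketch of the room as code, using a purely hypothetical rulebook: the function produces fluent replies by string matching alone, with no representation anywhere of what a cat actually is.

```python
# Hypothetical illustration of the Chinese-room point: a script that maps
# input strings to scripted output strings can hold a plausible
# mini-conversation while "understanding" nothing about cats at all.
rulebook = {
    "do you like cats?": "Yes, cats are wonderful companions.",
    "what do cats eat?": "Mostly meat; they are obligate carnivores.",
}

def room(message):
    # Pure symbol manipulation: match the input, emit the scripted output.
    return rulebook.get(message.lower(), "Could you rephrase that?")

print(room("Do you like cats?"))  # fluent reply, zero concept of "cat"
print(room("Is a cat alive?"))    # off-script: the illusion breaks
```

    The first reply looks like comprehension; the second shows the system had none to begin with, only a lookup.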

  • @SirRichard94

    @SirRichard94

    11 ай бұрын

    @lavastage1132 What does it matter how, and whether, it understands the concept of a cat, if it can use it correctly in the correct context? If by all metrics the conversation about cats is good, then it functionally understands it, and that's what matters. Things like consciousness, free will and understanding are not measurable, so they hardly matter in a conversation about a tool.

  • @ewanlee6337

    @ewanlee6337

    11 ай бұрын

    But in the Chinese room experiment, they will only say something if addressed; they won't say anything on their own initiative or to address any problems they have. They won't ask for more paper and ink in Chinese. They won't ask what's happening during an earthquake. They won't try to learn anything else. They can pretend when you talk to them, but they won't act like an independent person.

  • @alexanderm2702

    @alexanderm2702

    11 ай бұрын

    @@lavastage1132 The Chinese room experiment here is a red herring. ChatGPT (and GPT4 even more) does understand what is being said. Write something and ask it to write the opposite, or ask it to write some examples similar to what you wrote.

  • @aluisious

    @aluisious

    11 ай бұрын

    @@lavastage1132 All of the responses like yours are begging the question, how do you know you are intelligent? Can you prove you "understand" things better than an LLM? You can't. You feel you do, which is nice, and I like feeling things, but what does that really prove?

  • @caim346
    @caim34611 ай бұрын

    I never expected to see a bingchilin meme in your upload, but you completely nailed it beyond perfection🎉

  • @yourfriendoverseas5810
    @yourfriendoverseas581011 ай бұрын

    I'm more disturbed that almost half of a youtube video runtime is ads at this point.

  • @ai-spacedestructor
    @ai-spacedestructor11 ай бұрын

    As part of the group you refer to as "IT people", I can say the biggest problem is that current "AI" isn't even AI; it's just fancier algorithms that need less hand-holding to perform the task given to them. It's not actual "AI" in the sense that it's not aware the way humans are. That's also why it can't understand concepts, or what it does, or come up with something new that's accurate: all it does is follow the patterns the algorithm established during training, applying them when certain conditions are met. Which is basically the same as what you called "pre-AI", except that now we have software that does the act of "writing" these instructions for us. True AI would probably work more like the human brain, except that AI will surely be specialised in individual tasks rather than being an all-rounder like the human brain. I have to admit that for the first nearly six minutes, until it became clear what you were trying to say, I was afraid you were misrepresenting "AI" the same way many other people do. Apologies for the dislike during the first half of the video.
    Edit: I feel the part where it gets annoying. I spent the first few months trying to explain "AI" to people to stop them from being wrong about it and talking nonsense, then I moved on to trying to explain why it's not real AI, and now I've just given up. There are far too many people who either talk nonsense about it on purpose, for whatever reason, or who can't be bothered to learn what something is and just repeat the same talking points of the first type of people. It's the same with artists and non-artists claiming AI is harmful to them, when it's basically just painting by numbers, using more advanced fill tools to put the appropriate colour into the pattern it "drew".
    While I'm on the topic, it also makes me incredibly furious, as someone passionate about IT and technology in general, that people are so quick to blame the "AI", which is basically just a tool like Photoshop; nobody would blame Photoshop for stealing a picture or an art style, they would blame the person. I'm sure this will pass with time and people will treat AI the same way, but that won't be in my lifetime any more, and it just upsets me that people make such outrageous claims about something they refuse to learn enough about to know whether their claims are even true. Also, if you want to restrict "AI" so badly, the correct way is to restrict the AI companies. It's the companies who data-mine the whole internet and feed their "AI" with it; it's not the AI's fault what it's being fed. As we established in the video, it doesn't know concepts such as ethics, so it will happily accept anything you give it, like a baby. Actually, that's a good analogy: "AI" during training is like a baby learning the basics of living, and "AI" once deployed is like a toddler that just keeps following learned patterns and combines new things together. So eventually we'll get the "AI" equivalent of a child, which understands some more of the things that separate it from true intelligence, and then the AI equivalent of a human adult, which understands not only most of the things it needs in life but also why they are the way they are, and is capable of adapting to new situations and expanding on established patterns.
    Sorry for the rant, I just had to get it out of my system, hopefully for the first time without a negative or indifferent reaction. The whole situation just sucks if you're very passionate and looking forward to what's coming in the future.

  • @GustvandeWal

    @GustvandeWal

    11 ай бұрын

    In what way are you correct and other people wrong about AI? Let me start with a statement: "AI exists, and its definition is a program that can fine-tune its inner workings based on desired input/output states." I feel like you've annoyed numerous reasonable people by bluntly telling them they're "wrong". Also, "Artificial Intelligence" is a term humans made up. If it isn't "real", then what does "artificial intelligence" mean?

  • @XMysticHerox

    @XMysticHerox

    11 ай бұрын

    Why is it not aware? Not as aware as a human, yes. But in its narrow application, say a language model, why is the AI not aware whereas a human is?

  • @ai-spacedestructor

    @ai-spacedestructor

    11 ай бұрын

    @@GustvandeWal First of all, all words are made up; that's how language works. Secondly, I'm aware that I've probably annoyed some people in the process of correcting them, but if you don't want to get to know every human on earth individually, that's just a risk you have to take.

  • @GustvandeWal

    @GustvandeWal

    11 ай бұрын

    @@ai-spacedestructor ok thanks for not being helpful at all

  • @ai-spacedestructor

    @ai-spacedestructor

    11 ай бұрын

    @@GustvandeWal no problem, i like giving back to people what they gave to me.

  • @TimeattackGD
    @TimeattackGD11 ай бұрын

    The thing is that at some point, whether AI is actually conscious or not will not matter. Even if AIs aren't conscious (and I believe AI never will be), the fact that we wouldn't be able to tell the difference would cause havoc in how we deal with AI, regardless of whether we actually should, and we will probably end up treating them as if they were conscious, the truth of the matter being completely irrelevant.

  • @sandropazdg8106

    @sandropazdg8106

    11 ай бұрын

    Not really that complicated. If something performs a task and doesn't have consciousness, then it's a tool, and as such, if you have to deal with the AI in any capacity, you don't deal with the tool, you deal with the person handling it.

  • @jamessderby

    @jamessderby

    11 ай бұрын

    what makes you so certain that ai won't ever be conscious? I don't see how it won't.

  • @patatepowa

    @patatepowa

    11 ай бұрын

    Unless you believe consciousness is the result of something from outside our realm, I don't see how AI couldn't have consciousness, if it's nothing more than complicated electrical signals in our brains.

  • @TimeattackGD

    @TimeattackGD

    11 ай бұрын

    @@jamessderby Imo AI could be conscious if we figure out why we are conscious, and then use that to develop consciousness. Otherwise it seems intuitively impossible for human-made technology to develop something from nature that we can't even comprehend. To me it seems more likely that we'll reach a point where humans and AI are indistinguishable from a consciousness perspective (by just continuing to improve AI as we are now) long before we ever figure out consciousness, at which point it won't even matter anyway.

  • @user-op8fg3ny3j

    @user-op8fg3ny3j

    11 ай бұрын

    @@TimeattackGD Yeah, even if it's not conscious, that doesn't mean the AI can't falsely think that it is. How many times have we as humans had false perceptions about ourselves?

  • @cc-dtv
    @cc-dtv11 ай бұрын

    Hey, I was just about to suggest the Chinese room thought experiment. You're on point, Mr Adams Miscellaneous.

  • @ArchOfWinter
    @ArchOfWinter11 ай бұрын

    I didn't know about the Chinese room example while I was explaining the same issue to someone; that conversation would have been so much faster. I made up an example about memorising the multiplication table up to 10x10: a child who only memorises this table, even recognising the pattern, without learning the concept of solving maths problems, can instantly spit out an answer to anything up to 10x10, but they won't be able to solve any actual problem at all. Any problem involving numbers above 10 would be confusing to them.
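    The memorised-table analogy can be made concrete. In this purely illustrative sketch, both functions agree on everything inside the 10x10 table; only generalisation to unseen questions separates them.

```python
# Hypothetical illustration of the memorised-table analogy: a dict that maps
# memorised question strings to answers, versus actual multiplication.
table = {f"{a}x{b}": a * b for a in range(1, 11) for b in range(1, 11)}

def memorised(question):
    # Can only "answer" questions it has literally seen before.
    return table.get(question)  # returns None for anything outside the table

def understands(question):
    # Applies the underlying concept, so it generalises to new numbers.
    a, b = map(int, question.split("x"))
    return a * b

print(memorised("7x8"))      # 56 -- looks identical to understanding...
print(memorised("12x13"))    # None -- ...until you step outside the table
print(understands("12x13"))  # 156
```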

  • @Goodgu3963

    @Goodgu3963

    11 ай бұрын

    The 10x10 table is a great example! However, the problem is that this is not what an AI is doing. For example, how do you know if a child has actually learned to multiply, rather than just memorised the 10x10 table? You test them on questions they have never been shown, and see if they are still able to come up with the correct answer, and explain how they got to that answer. AI algorithms are capable of exactly the same thing. You can give them a set of examples to learn from, and then they are able to answer completely new questions that are not in that example set, and not only answer them, but explain the correct process to get there.

  • @immovableobjectify
    @immovableobjectify11 ай бұрын

    In the Chinese room example, the human inside doesn't understand Chinese, but the complete system consisting of the human and the books actually does understand Chinese! This is similar to how no individual ant understands how to build, defend, and maintain its entire nest, yet the colony as a whole does seem to act with unified intention. A human neuron isn't intelligent, but the entire brain is. Just because you can show that a part of a system is "stupid" doesn't mean that the system itself cannot be "intelligent". This is why we say that intelligence "emerges." The whole can be greater than the sum of its parts.

  • @Anonymous-df8it

    @Anonymous-df8it

    11 ай бұрын

    Also, wouldn't the human inside end up learning Chinese through exposure?

  • @laurentiuvladutmanea3622

    @laurentiuvladutmanea3622

    11 ай бұрын

    „...but the complete system consisting of the human and the books actually does understand Chinese!" Taking into account that, in the given scenario, the only sapient part of the system lacks any understanding of Chinese... no, the system does not actually understand anything. „This is similar to how no individual ant understands how to build, defend, and maintain its entire nest, yet the colony as a whole does seem to act with unified intention." I really would not use the words „intention" or „understand" to describe what ant colonies are doing.

  • @MrSpikegee

    @MrSpikegee

    11 ай бұрын

    @@laurentiuvladutmanea3622 Yes, the system does understand Chinese. You are confused about the word "understanding". Why didn't you take on the part about the neurons? Out of arguments, maybe?

  • @paulaldo9413

    @paulaldo9413

    11 ай бұрын

    The thing is, the way the person outside the room (Mao) processes the language is different from how the person inside the room (Cena) does. That's what the metaphor is trying to say: those two are not equivalent. Can Mao replace Cena in this experiment? Absolutely. But can the opposite happen (Cena gives prompts to Mao inside the room, analyses the response and understands it)? Absolutely not. Cena would have zero idea of what to do; it would stop working.

  • @echomjp

    @echomjp

    11 ай бұрын

    The "system" does not understand Chinese simply because it can read it back. The "system" understanding Chinese would imply that it can create things using Chinese on its own volition, could spot mistakes and errors in Chinese, and ultimately would be able to create something beyond what it is prompted to do. If the person in the room understood Chinese, they would be able to correct errors in the translation guide they are given or ask questions about the prompts they are given, or otherwise actually make something new. Right now, systems such as ChatGPT are wholly incapable of understanding what they are saying. This means that a given prompt being factually correct or not is entirely by coincidence with its data set, and such "AI" systems are frequently incorrect in their output in many ways that an intelligent being would not be. If you ask such a system to define something for you, all it can do is look up in its data-set what people have used to define that thing before and then average out the data, more or less. If the data fed into it isn't accurate, or what is being asked is not extremely simplistic in nature, cracks easily appear. Modern "AI" is of course useful in many ways, but it is massively misunderstood by far too many people.

  • @SlyRoapa
    @SlyRoapa11 ай бұрын

    OK, so call it something like "Artificial Cleverness" instead then. Does it matter what we call it? It's still scary for its potential capacity to replace a lot of human jobs.

  • @rkvkydqf

    @rkvkydqf

    11 ай бұрын

    Look at the actual model at hand. "Generative AI" is really just a set of overgrown parrots that work just well enough to fool a person while still being brittle to real-world circumstances. ChatGPT hallucinates constantly, CLIP calls an apple with the label "iPod" an "Apple iPod", and Stable Diffusion barely understands how the pixels it sees relate to real-world geometry, much less language. It looks as if it has learned to do its job, but it's only a surface-level illusion. We need to educate people about this, since out-of-touch managers are already using it as an excuse to mistreat or replace real workers, regardless of the quality impairment.

  • @justalonelypoteto

    @justalonelypoteto

    11 ай бұрын

    This example is almost a cliché, but we replaced horses, and manual workers in many areas. What's so tragic about reducing jobs that an algorithm can do? Isn't it better if we don't waste many lives on something a computer could do? I'm sure, as with every other time we have advanced as a species, that new (arguably more meaningful and better) opportunities will arise.

  • @romxxii

    @romxxii

    11 ай бұрын

    or call it by its actual names, Large Language Model, or Fucking Autocorrect.

  • @joeshmoe4207

    @joeshmoe4207

    11 ай бұрын

    @@justalonelypoteto And we've seen some of the downstream effects of that, haven't we? The complexity of thought involved in many of the jobs that machine learning is already poised to replace is probably above average. What do you think people will do when the amount of education and intelligence needed to compete in whatever new jobs open up is well beyond the average? It's not a matter of replacing menial jobs; it's a matter of replacing the jobs easiest to automate, which tend to be jobs that require the least complexity of thought, or at least very predictable modes of thought.

  • @Bradley_UA

    @Bradley_UA

    11 ай бұрын

    @@justalonelypoteto Except not every country has the social welfare to afford it. In America, imagine someone dares to propose taxing the rich to give money to people unemployed due to AI?

  • @snegglepuss6669
    @snegglepuss666911 ай бұрын

    The underlying issue is that a lot of processes can be implemented unintelligently, because a skilled human has developed checklists and procedures that a non-expert, whether a low-skilled or disinterested human or a computer, can easily follow without higher understanding, in the same way that a copyist or a printing press isn't a writer. So "AI", in the sense of the meme going around currently, is just computers working out the minimum standard for copying the "regurgitate the textbook in your own words" level of learning we have an instinctive disdain for, and getting a cookie because we grade on a curve.

  • @fresman8
    @fresman811 ай бұрын

    Nice job!!🎉🎉

  • @albevanhanoy
    @albevanhanoy11 ай бұрын

    Hey Adam, have you seen that AI is training more and more on AI-generated data, which cuts it off from learning new information and enshrines some typical AI-made errors without fixing them? Literally inbreeding x)
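    The "inbreeding" loop is easy to simulate in miniature. A purely hypothetical sketch, modelling each generation's "training data" as output sampled from the previous generation: the pool of distinct values shrinks every round, which is the loss-of-diversity effect described here.

```python
import random

# Hypothetical sketch of the feedback loop ("model collapse"): each
# generation is "trained" only on the previous generation's output,
# modelled here as sampling with replacement. Diversity shrinks each round.
random.seed(1)
data = [random.gauss(0, 1) for _ in range(1000)]  # generation 0: real data

distinct_per_gen = [len(set(data))]
for gen in range(10):
    # The next generation's "training set" is just output drawn from the last.
    data = [random.choice(data) for _ in range(1000)]
    distinct_per_gen.append(len(set(data)))

print(distinct_per_gen[0], "->", distinct_per_gen[-1])  # distinct values fall steeply
```

    Each round also enshrines whatever quirks survive the sampling, since nothing new ever enters the pool.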

  • @OctyabrAprelya

    @OctyabrAprelya

    11 ай бұрын

    That reminds me of Nexpo's "The Disturbing Art of AI", where he talks about prompt-generated images. Long story short, with those "AIs" you give them a prompt, let's say "a black cat", and the deep learning algorithms pull from a sea of pictures of "black cats" and recreate one from there. Much like a human artist would pull from their memories and experiences of what a cat is and draw one, or a normal piece of software would pull up an image tagged as one. But if you ask them for something nonexistent, like "a picture of Loab", instead of the artist asking back "what the fak is that?" or the normal software throwing a runtime error, it generates *something*, and with enough of that something fed back in, it generates enough data to pull from every time the same prompt is input.

  • @albevanhanoy

    @albevanhanoy

    11 ай бұрын

    @@OctyabrAprelya I would love to see a game of AI telestrations. An AI generate an image, then another describes this image in a sentence, then you input this sentence as a prompt to generate an image, and you keep going and see what kind of cursedly bizarre thing you arrive at.

  • @XMysticHerox

    @XMysticHerox

    11 ай бұрын

    We do this in medical CS quite a bit. Let AI generate tumor segmentations and related images for instance which is then used to train another AI to segment tumors. It is quite useful. And ultimately still based on real segmentations.

  • @captaindeabo8206

    @captaindeabo8206

    11 ай бұрын

    Yeah, that's the general problem with backpropagation training called overfitting.

  • @TheNightquaker
    @TheNightquaker11 ай бұрын

    5:25 isn't this the same for humans though? A newborn baby also doesn't have the concept of empathy, health risks, food portions, etc. They need to be taught that. And that is done by parents teaching these concepts to their kids. In a similar way we feed concepts, images, and general data to various AI models (like ChatGPT, for example).

  • @rkvkydqf

    @rkvkydqf

    11 ай бұрын

    And that's why this explanation isn't very good. The thing is that ML is just applied statistics, the models it produces, especially in cases of this "generative AI" are very debatable. Both diffusion models like StableDiffusion and text transformers like ChatGPT were accused by prominent researchers of learning only surface level correlations, despite the insane amounts of compute used for training. I think we can turn to that one experiment where a brain organoid was taught to play pong. It took orders of magnitude less iterations than an artificial neural net (god I hate that term, NNs have only a passing resemblance to real neurons, yet this term makes it sound as if it's a literal brain). And that's with an organoid that physically cannot develop to have the same complex structures as any real brain. Brains seem to just be more efficient at learning, and key thing being *much more generalizable* than any model we can currently imagine. There's some very interesting research on neuromorphic computing, which seeks to apply current theories about how brains learn and generalize to ML-like problems. We may have enough computing power and data to create something that *seems like* it thinks, but the illusion falls apart very quickly (see "hallucinations" and the stochastic parrots paper).

  • @TheNightquaker

    @TheNightquaker

    11 ай бұрын

    ​@@rkvkydqf Well, based on what you outlined, I feel that the current AI stuff could be considered a very basic version of a brain. A primordial artificial brain, if you will. Much less generalized, much less capable, and requiring many times more data to learn the same thing compared to an organic human brain. Still, it's a start. I mean heck, we ourselves have only surface-level understanding of many concepts, depending on person's interests. It's not like 1 person can have deep understanding of everything at once. Instead, a person has deep understanding of some stuff (presumably their interests and/or work-related things, their degree, PhD, etc.) and surface-level understanding of a bunch of other stuff.

  • @ryanv7945

    @ryanv7945

    11 ай бұрын

    Not really. There are some behaviors that are inherent and not taught. A baby knows it's hungry not because its parents explain to it that it's hungry, but because the baby is able to independently understand that it needs to eat. Now, there might be cultural practices on top of those that are taught, like portion sizes or how to linguistically express that they're hungry, but the baby is still able to recognize independently and autonomously that it is hungry. It's able to recognize its own body's needs and then act upon them. Emotions like empathy are also not "taught". A cultural value in favor of empathy may be taught, or a cultural value against anger may be taught, but empathy and anger are both emotions that exist no matter what values or words are associated with them. Babies are shown to express a fairly sizable gamut of emotions right off the bat, and show signs of more complex emotions pretty quickly thereafter, and not because their parents sat them down and taught them how to feel. The parents don't just put words to what the baby was feeling, and the baby didn't need to be taught how to produce or feel emotions at all.

  • @TheNightquaker

    @TheNightquaker

    11 ай бұрын

    @@ryanv7945 You do have a point regarding emotions and hunger. Though in the case of hunger, I don't think an AI would need to care about it :P I feel that the inherent behaviors might eventually develop in AI...somehow, but we're very far from it. Or it could be simulations of these behaviors, but you know, a simulation believable enough might as well be reality.

  • @guyincomments7620

    @guyincomments7620

    11 ай бұрын

    @@ryanv7945 We call those behaviours reflexes, and understanding is not part of that process. Babies don't reach for the breast because they understand breasts, milk and hunger. Babies don't cry when they pop out of their mother because they have some innate understanding of the situation and the horrors of reality. You don't pull your hand off a hot stove because you understand it's causing you tissue damage. It's something evolution has hard-coded into our brains over the last 2 million years. You're acting like the first 9 months of existence inside your mother aren't part of the learning process, when they absolutely are, and arguably the most critical part of your brain's formation. I really feel like the argument at 5:25 missed the mark, unless you believe in mind-body dualism and brains being more than just biological processes.

  • @ralalbatross
    @ralalbatross11 ай бұрын

    At the core of this is a simple misunderstood concept surrounding computers, which is the following: we don't teach computers anything. What we do is write code and provide algorithms which, given appropriately embedded data sets and appropriate instructions, will eventually minimise a difference function between what we want and what the machine outputs. We have hundreds of ways of doing this, from approaches like linear regression up to the enormous generative AI frameworks that stack dozens of layers on top of each other and use vast data sets. It all reduces to the same problem, though; we just have different ways of attacking it. We can even play agents off against each other. At some point it all becomes a maths problem that needs a tensor solver.
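    The "minimise a difference function" framing can be shown with the simplest case mentioned, linear regression: gradient descent nudges two parameters until the squared difference between desired and actual output is tiny. A toy sketch, not any particular library's implementation:

```python
# Hypothetical minimal example of "minimising a difference function":
# fit y = w*x + b by gradient descent on mean squared error.
data = [(x, 3 * x + 1) for x in range(10)]  # target relationship: y = 3x + 1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw  # step downhill on the loss surface
    b -= lr * gb

print(round(w, 2), round(b, 2))  # approaches 3.0 and 1.0
```

    The giant frameworks do the same thing with billions of parameters instead of two, which is where the tensor solvers come in.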

  • @theminormiracle
    @theminormiracle11 ай бұрын

    The problem with the Chinese Room Experiment is that if you apply the same standard to the dumb meat fibers and cells that just send and react to electrical and chemical signals in the brain, what you end up with is the idea that *people* can't actually understand Chinese, because no part of their brain when you zoom in far enough to examine its physical operations "understands" Chinese. And yet people "feel" like they understand. They can't look inside their own wetware and trace the origins of their understanding any more than a camera can look inside and take pictures of its own lenses, so a feeling is all they have. Rather than a gotcha that shows AI isn't here yet, all the CRE shows is that its framework fails to capture how human intelligence could possibly arise out of the three pounds of meat sitting inside your skull. It doesn't prove or disprove artificial intelligence one way or the other.

  • @The_return_zone
    @The_return_zone11 ай бұрын

    Open language models are just very good autocomplete
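    "Very good autocomplete" can be illustrated with its crudest ancestor, a bigram model: predict the next word purely from which word most often followed it in the training text. This toy is far removed from how LLMs actually work internally, but the "continue the text statistically" framing is the same:

```python
from collections import defaultdict

# Hypothetical toy "autocomplete": count which word follows which in a
# tiny training text, then predict the most frequent continuation.
text = ("the cat sat on the mat the cat ate the fish "
        "the dog sat on the rug").split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def complete(word):
    # Pick the continuation seen most often in training; no meaning involved.
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(complete("the"))  # "cat" -- seen most often after "the"
print(complete("sat"))  # "on"
```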

  • @ikotsus2448

    @ikotsus2448

    11 ай бұрын

    I see you just completed a sentence there by yourself. Good job!

  • @XMysticHerox

    @XMysticHerox

    11 ай бұрын

    And autocomplete is a rudimentary AI so in other words they are ok AI.

  • @Jackamikaz
    @Jackamikaz11 ай бұрын

    About the Chinese room experiment: I get that its official argument is that the person inside executing the algorithm doesn't understand Chinese. But personally I interpret it as: isn't the algorithm itself the understanding part? Aren't our brains big machines too? We can't say our individual neurons understand anything, after all. Anyway, there is mind-breaking philosophy about consciousness out there. Otherwise, even if I don't agree with your reasoning, I get why it's annoying to see articles everywhere claiming "AI will take over the world". I still think this new kind of AI is a first step towards the so-called "true intelligence/consciousness", though, even if there is still a long way to go.

  • @BrokenCurtain
    @BrokenCurtain11 ай бұрын

    I didn't expect the System Shock references.

  • @znie-1380
    @znie-138011 ай бұрын

    5:48 Also at issue: we don't actually comprehend the vast majority of our own intelligence; the part of the spectrum of intelligence we actually comprehend is very narrow. Further, it's been demonstrated pretty firmly by now that humans very often make up the reasons for why they did what they did AFTER doing it. We don't realise this day to day, but it -is- the case.

  • @blagoevski336

    @blagoevski336

    11 ай бұрын

    Yeah

  • @picahudsoniaunflocked5426
    @picahudsoniaunflocked542611 ай бұрын

    I suspect a real person would pick up something, anything, of Chinese within the rigours of the process. But I like the general thrust, and agree with more than a slight quibble about a thought experiment. Besides, you're my only Parasocial Internet Nephew Train Guy. I like everything you do. You have me... uh... well trained to root for your content lol.

  • @mvnkycheez

    @mvnkycheez

    11 ай бұрын

    They wouldn't. I think what Adam doesn't make 100% clear is that the "dictionary" or instruction book doesn't translate anything from Chinese into a language the person in the room understands; it simply shows which Chinese characters to write in response to the given Chinese characters. It is impossible for the person in the room ever to work out what the Chinese actually _means_.

  • @martinsykorsky8741
    @martinsykorsky874111 ай бұрын

    When you are born, you spend a few years of your life before you understand a human language. To me, that's the equivalent of being exposed to Chinese: you just start to pick it up subconsciously until you develop the skill to speak your mother tongue. And connected to that, around the same time you start to understand your existence and abstract concepts (become conscious). So yes, machine learning isn't conscious yet, but I wouldn't be so bold as to say it surely won't be.

  • @ovensmuggler5207
    @ovensmuggler520711 ай бұрын

    thank you for this, also cool system shock music

  • @leightaylor806
    @leightaylor80611 ай бұрын

    Rotund cats!! Lol!! Great video, keep them coming.

  • @elemileTLDR
    @elemileTLDR11 ай бұрын

    Thank you for this, Adam. I'm sharing it. However, I'd like you to address the issue of emergent properties, as part of complexity theory, and their occurrence in deep learning, e.g. ChatGPT (a.k.a. Clippy on NZT) being able to do some algebra or programming without being trained to do so. I think it's relevant given that life, intelligence and consciousness are typical examples of emergent properties, thus leading people to believe that GPT models might eventually 'wake up' and whatnot.

  • @decivex

    @decivex

    11 ай бұрын

    GPT's dataset does include code and presumably math papers so I'm not sure where you're getting from that it wasn't trained for that.

  • @kkrup5395

    @kkrup5395

    11 ай бұрын

    @@decivex There was another example where it picked up some pseudo-language (which for sure wasn't in the dataset), if I remember correctly.

  • @decivex

    @decivex

    11 ай бұрын

    @@kkrup5395 They basically tried to pull as much of the internet as they could so it's highly likely every current language is in there.

  • @fleefie
    @fleefie11 ай бұрын

    We call it AI not because it is an intelligence, but because it learns like one. People seem to miss that quite a lot. That invalidates the point that you need to teach an AI, because humans need to be taught too (and instincts are just the equivalent of hard-coded patterns before training). However, it doesn't invalidate the entire premise of your argument, but this isn't up to the devs to answer; that's up to the philosophers.
    The learning process behind an AI and a human is essentially the same, except that the AI is less complex, so it needs way more examples. After that, the reasoning process is also very similar if not the same: you have a sensory input, you process it by associating it with what you know, and you have a thought that comes out of it. From there, you can keep it, or exteriorize it through means of communication. Whether that's enough to count AI as a consciousness isn't something I feel apt enough to answer. I would lean towards saying yes, but that's mostly because I disagree with the idea that humans have an inherent "essence" of intelligence that makes us different from any sufficiently advanced machine.
    I really enjoyed the way you explained the Chinese Room experiment, and ultimately it points out the ACTUAL question behind whether AI is intelligence or not: is understanding what you are doing necessary to define you as intelligent? Or, even more interesting, even if the writer doesn't speak Chinese, isn't he in some capacity understanding what he is doing? My answer would be that while he may not understand the language, he somewhat understands the social expectations that are put in place. But again, I'm no philosopher.
    As far as I'm concerned, I'll consider AI an intelligence when it becomes capable of learning on its own and of replicating other AIs that it has taught itself. This would give us a machine (a brain) that has developed a way to understand and process the world around itself on its own (a consciousness). But that's a far-off dream. For now, what we call AI is just a very potent autocomplete...

  • @slevinchannel7589
    @slevinchannel7589 · 11 months ago

    ...Best AI-Coverage: Hello Future Me, Mothers Basement, Some More News

  • @EduFabolous
    @EduFabolous · 11 months ago

    Cogito ergo sum, that's the difference, and with our current understanding of things we are light-years from it

  • @romxxii
    @romxxii · 11 months ago

    we call it AI because it's being marketed as such by the tech bros trying to grift you.

  • @Der_Dirk
    @Der_Dirk · 11 months ago

    > learning on its own and replicating other AIs that it has taught itself
    You mean Orca from Microsoft Research and many others? ;) We also have zero-shot learning today, and single-shot/few-shot learning. "I absolutely don't know what a cat is!" -> "Here is a single picture of a cat" -> "Ahhhh, now I understand the concept." Or the text: "a banana is a long, yellow fruit which grows in bundles on a banana palm and is green when not yet ripe, or brown when old. But it's not a cucumber." Even if it never saw a banana in the training data, modern neural networks are able to distinguish very well between pictures with and without one.

  • @danielmortimer532
    @danielmortimer532 · 11 months ago

    Exactly! In fact, there's not even a method to be 100% certain that other humans are sentient, beyond the fact that we know ourselves to be sentient and conscious, so we assume others are for all intents and purposes. All that matters is whether someone or something appears to be sentient, because there's no way to enter someone else's mind, or an AI's programming, to measure or experience their "conscious" state of being and see if it exists. All we can see are their outside reactions, and perhaps what triggers those reactions, not whether that person or thing truly understands what they're doing and why.
    A single conscious person in our world could actually be experiencing an elaborate dream or simulation in which they're the only conscious being and everyone else is a projection of their own mind or clever outside programming, and there would be no way for them to prove or disprove it. This is the big issue, and Adam doesn't address it properly: if there's no way to measure or detect "consciousness" through any observable and repeatable scientific means, it really doesn't matter. Nobody even truly knows what sentience and consciousness are beyond the basic philosophical concept that's been around since the Ancient Greeks, because they can't be scientifically observed and measured. Contemporary analysis of the human brain and of computer technology doesn't, and currently can't, deal with "consciousness" and "sentience"; it only deals with reactions and their triggers, both internal and external.
    The overall point is that if an AI appears to have a human level of "sentience", that's all that matters. And the social and political consequences of this have the potential to be disastrous in the not-too-distant future.

  • @theultimatereductionist7592
    @theultimatereductionist7592 · 11 months ago

    3:07 I have no idea wtf that thing is. I haven't scanned enough samples.

  • @ZeroN1neZero
    @ZeroN1neZero · 11 months ago

    Honestly this video was refreshing. Every time I see some goofy headline screaming about AI coming to kill us in our sleep, I roll tf outta my eyes lol. 12/10 would watch a man pet a cat again

  • @Tyrichyrich
    @Tyrichyrich · 11 months ago

    When you think about it, we work exactly how an AI works. Inputs and outputs, never really knowing what anything means. Though I do agree that AI is being over-hyped.

  • @PakBallandSami
    @PakBallandSami · 11 months ago

    "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." --Stephen Hawking

  • @rursus8354
    @rursus8354 · 11 months ago

    The fallacy of irrelevant authority.

  • @amonx8307
    @amonx8307 · 11 months ago

    I heard a really interesting quote on this topic: "General artificial intelligence will be the last human invention. It will be the best one or the worst one."

  • @shaider1982
    @shaider1982 · 11 months ago

    One issue with the AI here is that it isn't actually just software: people were hired to keep ChatGPT's content clean, as mentioned in an Adam Conover video.

  • @TheNN
    @TheNN · 11 months ago

    "Artificial Intelligence Isn't Real" ...Isn't that exactly what an AI trying to pass itself off as human *would* say?

  • @elvingearmasterirma7241
    @elvingearmasterirma7241 · 11 months ago

    I don't blame them. Mainly because if I were a sentient AI, I'd do everything to avoid paying taxes or partaking in our modern, profit-driven, consumerist society.
