Geoffrey Hinton - Two Paths to Intelligence
(25 May 2023, Public Lecture, University of Cambridge)
Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but they allow exactly the same computation to be run on physically different pieces of hardware. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and use very low-power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. I will briefly describe one such algorithm. Using the idiosyncratic analog properties of the hardware makes the computation mortal: when the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process.
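
As a rough illustration of the mimicry ("distillation") idea above - a sketch of the general technique, not the specific algorithm described in the lecture - a student network can be trained to match a teacher's soft output probabilities rather than just its final answers. The temperature value and tensor shapes are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions; a higher temperature exposes more of the
        # teacher's knowledge about the relative probabilities of wrong answers.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        # Cross-entropy between the teacher's and student's soft distributions.
        return -(soft_teacher * log_soft_student).sum(dim=-1).mean()

    # Toy usage: 4 examples, 10 classes.
    teacher_logits = torch.randn(4, 10)
    student_logits = torch.randn(4, 10, requires_grad=True)
    loss = distillation_loss(student_logits, teacher_logits)
    loss.backward()  # gradients flow only to the student
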
By contrast, digital computation allows us to run many copies of exactly the same model on different pieces of hardware. All of these digital agents can look at different data and share what they have learned very efficiently by averaging their weight changes. Also, digital computation can use the backpropagation learning procedure which scales much better than any procedure yet found for analog hardware. This leads me to believe that large scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.
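
A minimal sketch of that weight-sharing scheme, using a toy one-parameter linear model as a stand-in for a real network (all numbers are illustrative assumptions): several identical digital copies compute gradients on different data shards, and every copy applies the same averaged update, so each one benefits from what all the others saw.

    import numpy as np

    rng = np.random.default_rng(0)

    def local_gradient(w, shard):
        # Gradient of the mean squared error of the linear model y = w * x.
        x, y = shard
        return np.mean(2.0 * (w * x - y) * x)

    # Four copies of the same model look at different data shards.
    shards = []
    for _ in range(4):
        x = rng.normal(size=100)
        y = 3.0 * x + 0.1 * rng.normal(size=100)   # true weight is 3.0
        shards.append((x, y))

    w = 0.0  # every copy starts from identical weights
    for step in range(100):
        grads = [local_gradient(w, s) for s in shards]  # computable in parallel
        w -= 0.1 * np.mean(grads)  # all copies apply the same averaged update

    print(w)  # ends up near 3.0 although no single copy saw all the data
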
The public lecture was organised by The Centre for the Study of Existential Risk, The Leverhulme Centre for the Future of Intelligence and The Department of Engineering.
The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse. For more information, please visit our website:
www.cser.ac.uk

Comments: 403

  • @TheLastUniqueName (11 months ago)

    “There’s no examples of a more intelligent thing being controlled by a less intelligent thing” - Tell me you don’t own a cat without telling me you don’t own a cat

  • @gdraskovic (11 months ago)

    Perhaps the cat is thinking the same thing

  • @41-Haiku (10 months ago)

    Just shows how easy it is to manipulate a human. (As a cat person myself, it's the endorphins that do it. The little kitties are so fuzzy wuzzy!)

  • @Drookup (9 months ago)

    Maybe the cat is really intelligent

  • @prestonlui6451 (9 months ago)

    But cats are more intelligent, cute overlords

  • @Custodian123 (9 months ago)

    The same idea with dogs. My pug knows she can get me to do something she wants if she acts in a particular (specifically cute) way. This actually gives some insight regarding the future of superintelligent AI and humans. If we don't have control, it's likely we can still have some amount of influence. Maybe.

  • @Senecamarcus (11 months ago)

    Thank you for uploading this for us to watch! I appreciate that.

  • @whalingwithishmael7751 (2 days ago)

    One of the only people with a real take on this. Most people don’t think it will be sentient, and most people haven’t fathomed the dangers that these entities could pose.

  • @RougherFluffer (11 months ago)

    What a wonderful talk. His humble approach and acknowledgement of where he lacked particular knowledge was heartening to witness. That he has logically deduced some of the main arguments of the alignment problem speaks volumes about his reasoning abilities. I'm very glad he's leveraging his position to try to promote such vital messages.

  • @wk4240 (10 months ago)

    It will take many more, like Mr. Hinton, to make a difference - as to what direction we take with AI, and to what extent.

  • @richardpaczynski5486 (6 months ago)

    Very well put; thanks

  • @TuringTestFiction (11 months ago)

    I love this video. Brilliant and low-key hilarious! I'm consistently impressed by Geoffrey Hinton.

  • @AmericanBrain (9 months ago)

    But he admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religion-driven talk, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers].

  • @DaniloNaiff (11 months ago)

    It is really impressive to listen to Geoffrey Hinton. I think this lecture may sound strange to most, but he really seems to think like a cognitive scientist who simply wanted to make a nice model of the brain.

  • @dobermanlove777 (11 months ago)

    That's exactly what I thought when listening to this presentation! It's quite a romantic approach for the human brain to try to recreate a digital and thus mathematical representation of itself. Especially when you also see the link between how neural networks communicate and how society does, in the example of Trump's tweets.

  • @paulm3969 (11 months ago)

    I actually find him really irritating; I think he is quite presumptuous. He makes a lot of assumptions and then uses them as arguments. For example, he keeps saying that people think they're special. What is he on about? Yes, some people think they're special, but it's as if he is the only person on earth who thinks otherwise. I know very few people who think they're special or really smart, and I'd say most people already know Google is smarter than them. So I don't know where he gets that idea, unless he is projecting himself. I also think he is a bit of a fool for saying things like "Trump would use these things to win elections". Like, why not just shut up and stop giving Trump ideas?

  • @jebprime (11 months ago)

    I think he’s referring to how some people believe intelligence and consciousness are something special or unique to humans, that cannot be replicated by a machine

  • @PazLeBon (10 months ago)

    @@dobermanlove777 yet the facts are they have absolutely no clue how we think, irrespective of how they dress things up

  • @PazLeBon (10 months ago)

    @@paulm3969 I'm like you, I always get irritated by 'we' or generalisations that simply are not how I think haha

  • @kandoit140 (11 months ago)

    I always love listening to Geoff, he is so insightful and has a great sense of humor. So interesting to hear him talk!

  • @kenmogibrainworld4844 (11 months ago)

    When Prof Hinton discusses the nature of qualia from the counter-factual point of view, there is a spark of things to come. I look forward to further expositions on this.

  • @DirtiestDeeds (10 months ago)

    Yes, the world is our lobster! Just need the piping at international/national/regional/local/ level along with 'One ai per child.' policy... Also stop the training runs immediately.

  • @PazLeBon (10 months ago)

    it isn't factual tho lol

  • @AmericanBrain (9 months ago)

    Ken, stop it now. He admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religion-driven talk, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers].

  • @AmericanBrain (9 months ago)

    What are you even talking about? @@DirtiestDeeds Hinton admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religion-driven talk, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers].

  • @DreamzSoft (9 months ago)

    Sir, you are too good, and listening to your views we're thankful to have people like you around us ❤😊 thanks

  • @_obdo_ (11 months ago)

    Great talk. It’s impressive to see someone speak out on such a polarizing topic, based on having grasped it purely intellectually even though, as he says, his emotions haven’t nearly caught up yet.

  • @PazLeBon (10 months ago)

    why polarising? it's just software at the end of the day, nothing that new about it in many senses

  • @_obdo_ (10 months ago)

    @@PazLeBon The topic of AI risks has unfortunately become fairly polarizing, and Dr. Hinton has recently shifted his position on that topic, some of which comes out in this video (even though that’s not the primary topic).

  • @Petrvsco (8 months ago)

    @@PazLeBon "just software" - I think you missed the part that mentions how this can quickly become an existential risk. Or you misunderstand what existential risk means in this context.

  • @tappetmanifolds7024 (8 months ago)

    ​@@Petrvsco Elaborate and elucidate.

  • @tappetmanifolds7024 (8 months ago)

    By enforcing personal opinions based on perception from misconception, especially when swayed by political bias, how can the advancement of a system progress, if decision problems are not permitted to evolve because they are restricted by preventions? Distillation would do well to find pools of resource in the entropy of the not yet known.

  • @41-Haiku (10 months ago)

    Hinton is a delight. His voice is a very welcome one for the AI safety community.

  • @JustJanitor (8 months ago)

    Thank you very much for making this available

  • @HangLe-ou1rm (8 months ago)

    Amazing talk! Thank you!

  • @loopuleasa (11 months ago)

    tldr on how teaching and learning works for us: "To learn from the words coming from my mouth, your brain is trying to change its connections to make it likelier that you would reasonably say that string of words yourself." He taught me to say that

  • @greencoder1594 (11 months ago)

    The question is though, *why did you repeat.* And why did you post. Is it for the likes, the joke, do you think you know? Because it is not the reason you are going to proclaim. Also, thanks for your tldr.

  • @bobsmithy3103 (9 months ago)

    I'm not sure I'd agree with Hinton on that. A human's goal is learning the underlying concept, whereas LLMs have a goal to learn surface-level concepts, but in order to do so they are forced to learn the underlying concepts/models. Note that the human is not necessarily optimizing to more likely predict what word/token is being used next, which is the case for LLMs. (AKA: for humans, word prediction is a consequence of the goal of learning underlying models. For LLMs, word/token prediction is the goal and learning the underlying models is a consequence.) It's a slight but useful distinction.

  • @yunwang1243 (9 months ago)

    This is such a sincere talk.

  • @AntonMochalin (9 months ago)

    I was most intrigued by Hinton's view of subjective experience, which is actually quite close to particular psychology theories emphasizing the social nature of consciousness, and if those theories have some truth to them (and I'm pretty convinced they do), having some form of subjectivity like ours isn't going to be hard for ML systems. What they still lack, and I think is preventable, is having a personality as a hierarchy of motives (vaguely similar to what Hinton mentioned about the goal of having more control serving many other possible goals), because now the ML's simple "motive" is doing the task we set, providing the "right answer" so to speak, so we're more likely to fool ourselves if not careful enough with the definitions of "right answers". However, Hinton is right about the dangers of allowing ML too much unsupervised agency, so the solution could be in the development of specialized systems and prevention of the creation of general-purpose systems like GPT-4, or at least prevention of allowing copies of those systems to share too much general knowledge.

  • @geaca3222 (9 months ago)

    It would be interesting to know what Dario Amodei of Anthropic thinks about your suggestions

  • @jonatan01i (11 months ago)

    Btw. humanity also learns by averaging, through evolution. Every one of us is run with slightly different config settings and the most successful units will make more children - at least that was the case for a long time. It's the species' hardware that is learning through evolution.

  • @PazLeBon (10 months ago)

    lmao no, the intelligent ones have fewer children now :)

  • @KelvinMeeks (9 months ago)

    A fascinating talk

  • @hanskraut2018 (11 months ago)

    I really like some of the A.I. things Mr Hinton is saying, I really like it. And there is a lot I would have to say, but I'm just listening, and I like the efficiency things, and some things point to a deeper understanding from deeper principles. Thank you for the lovely talk. And hopefully you have a great long life how you like it, and many more fun discoveries, and bathe in some of the massive positives that might come early enough. I think it's possible, but the world is complex and not only technical things can hold A.I. up, but ja. Enjoy and good wishes :)

  • @richardnunziata3221 (11 months ago)

    Yes ... soon machines will model the agency of the interlocutor, then create a theory of mind for the interlocutor, and then of themselves. This will happen very quickly, especially if we give these systems an embodiment like a humanoid robot ... it's just a question of distillation. If we can get GPT to try to predict the goal of the user - what is the user trying to do - then measure against predicted next queries.

  • @charlesje1966 (8 months ago)

    That is fascinating. I use chatgpt to assemble code for microcontrollers and I can see how this lecture points to the future of that endeavour. We will replace the 'human code' layer with hardware anatomy that has been optimized for a task through AI.

  • @tappetmanifolds7024 (8 months ago)

    @charlesje1966 Given that the English language is extremely rich in its historical contextuality, as well as its richness in ambiguity and nuance, does our ability to construct machines, which can decide for us our channels of communication, cause greater divisions between people who are unable to express a posteriori knowledge? Is this the anti-thesis of the humane computation which seeks, by physical interactions through debate, our true purpose as a species? Religion and belief systems aside, we still need to, in Professor Hawking's words, keep talking. Is the most efficient way to acquire knowledge to actually 'get' the entire distribution and a precise interpretation of it?

  • @scottnineteen (11 months ago)

    Geoffrey Hinton consistently presents and considers the most intriguing issues. He's not the guy in the basement working on his nets for decades whom super-fast hardware made famous. No, his thinking properly shines light in the dark places, and his ideas worked because they're really good ... and the hardware got faster.

  • @KemptonLam (7 months ago)

    52:29 Amazing (and surprising) answer to hear Prof. Hinton talk about thinkers that affect his own thoughts on risks from AI.

  • @asamak (11 months ago)

    "But as you'll see we may not have time for that" 🤯 5:05

  • @petraiondan4669 (8 months ago)

    Sooo profound!

  • @JasonC-rp3ly (11 months ago)

    What a fascinating talk - this man is a hero

  • @cmilkau (11 months ago)

    "Modern" cryptography (the stuff that happened after 1980) is a prototypical example of exerting control using something that is much less powerful than what is being controlled. This is essentially the goal of cryptography: have something that is (moderately) easy to use, yet extremely hard to abuse. It's not a solution, but it is an example.

  • @hubrisnxs2013 (11 months ago)

    Yes, but in this case we have to develop a cryptographic system completely correctly on the first try, or everyone dies. I'm not attacking what you said or your perspective, because you are absolutely correct... but I still think it's a problem, as are other examples that can be made. It is like coming up with a completely secure (as in zero vulnerabilities ever, one that has to incorporate and use all other things regardless of security flaws) operating system on the absolute first try. This is first try on, by definition, a closed-source system, since if it is a fork of an insecure system with similar capabilities we are equally as dead.

  • @cmilkau (11 months ago)

    @@hubrisnxs2013 Yes! As I said, it's not a solution by any means. I'm not even qualified to estimate whether it is a possible path to a solution, although it seems unlikely (most crypto relies on unsolved maths problems, which would be dangerous). I just wanted to mention there is an example of a weaker system controlling a more powerful one.

  • @greencoder1594 (11 months ago)

    @@cmilkau Could you please elaborate in which manner a weaker system is controlling a more powerful one - both what you define as the system and what you define as control?

  • @boremir3956 (11 months ago)

    I have noticed that oftentimes those who are highly intelligent are very hesitant to admit that they are knowledgeable or should be viewed as an authority in a specific field, like Geoffrey Hinton here. On the flip side, those who are the loudest and think themselves capable of giving advice and knowledge to someone else are oftentimes the least intelligent.

  • @nescirian (11 months ago)

    This is an observation that a lot of people have agreed with - for example, in 1950 Bertrand Russell wrote that "The fundamental cause of trouble in the world today is that the stupid are cocksure while the intelligent are full of doubt". There are studies that support the idea, and in psychological circles it is known as the Dunning-Kruger effect, which is a useful search term if you wanted to learn more on the subject.

  • @Jesyak (11 months ago)

    Well said

  • @hubrisnxs2013 (11 months ago)

    Dunning-Kruger in effect, which in this case is important, but, and I may be incorrect here, I notice a lot of people suffering from Dunning-Kruger use Dunning-Kruger as a bludgeon on people. I suppose since it's an ethical or cognitive blindspot, it is akin to those suffering from confirmation bias, yet I feel there is an added moral component of Dunning-Kruger that I'm not sure actually exists, though I definitely feel it to be so.

  • @kinngrimm (11 months ago)

    Look up the Dunning-Kruger effect; I think at least the second part of your statement is described by that.

  • @poemerlee9437 (11 months ago)

    Can’t agree more.

  • @waylonbarrett3456 (11 months ago)

    It's just so damned hard to believe this talk is being given in 2023.

  • @TheDavidlloydjones (4 months ago)

    Yes, all his "the robots are going to take over" stuff is from 1930's movies and 1945-48 AI, isn't it?

  • @paraskevasparaskevas350 (11 months ago)

    check time point 55:00 and onwards to hear what one of his colleagues experienced with a system that was not as sophisticated as GPT-4....

  • @MathAtFA (11 months ago)

    Great lecture. BTW: if teaching "mortal analog" AIs is really so slow and painful, this just means it is a great problem to give to digital AI. Clear function to optimize: teach the analog AI to imitate a given network. Infinite data: you can simulate/build many slightly different analog AI devices. Definitely profitable: once solved, one could sell a gazillion cheap devices working well enough for a short time. And then you keep selling them, since no one would be able to repair them. Whisper: mass-producing cheap short-lived military drones.

  • @AmericanBrain (9 months ago)

    Worst lecture ever. Hinton admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religion-driven talk, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct your life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers].

  • @lucidx9443 (10 months ago)

    I've known this guy since Boltzmann machines, before knowing AI was necessary. Nothing's clearer than Hinton's (explanations of) concepts. Greatest intuitionist of our time. Thanks for uploading.

  • @russianbotfarm3036 (10 months ago)

    Not sure who it was who said, “To understand is to create”. I think it was probably meant as, “learning is creating an internal representation”, but I think it’s also true that _understanding something deeply lets you create with that understanding_ .

  • @doublesushi5990 (8 months ago)

    it was this guy who said that @@russianbotfarm3036

  • @agenticmark (4 months ago)

    Mr Hinton didn't want to be Oppenheimer. He basically created the base concepts that we use today in ML.

  • @jorgesaxon3781 (11 months ago)

    25:40 Love how he says it's "possible" that Google is doing the same thing, like he wasn't working on probably exactly that just a couple of months ago :/

  • @RandomNooby (8 months ago)

    Super intelligent minds in control may well be better for all life than the current situation...

  • @notgabby604 (11 months ago)

    Fast transforms like the FFT have an equivalent matrix form, which means a fast matrix operation is available digitally. You just have to figure out how to use it in actual algorithms. Going analog or using light to get fast matrices never really works out; digital always wins, it's just so dense, efficient and exact. Though having said that, I am actually having trouble with inexact rounding modes in Java; banker's rounding is not repeatable.
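
    A quick numpy check of the claim in that first sentence: the FFT computes exactly what multiplying by the dense DFT matrix computes, just in O(n log n) instead of O(n^2) (the size n is an arbitrary choice):

        import numpy as np

        n = 8
        j, k = np.meshgrid(np.arange(n), np.arange(n))
        dft_matrix = np.exp(-2j * np.pi * j * k / n)  # the FFT's equivalent matrix form

        x = np.random.randn(n)
        print(np.allclose(dft_matrix @ x, np.fft.fft(x)))  # True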

  • @notgabby604 (11 months ago)

    Re: Fast Transforms and neural networks: "AI462 Blog".

  • @jondor654 (11 months ago)

    Analog will probably be hybridised with digital in the future

  • @alexpetrov1969 (11 months ago)

    This argument is invalid. FFT can handle ONLY matrices that satisfy certain constraints; it does not work for arbitrary matrices. In other words, it only solves a special case. It is more efficient because it leverages the additional constraints that are present in the special case.

  • @tangdexian3323 (11 months ago)

    Speaking from the perspective of a former electrical engineer, I suppose another reason people settled on digital gates, 1s and 0s, to represent information is that analog computing is just harder to get right. Logic gates, on the other hand, are much easier to design and produce, and also much more robust.

  • @hubrisnxs2013 (11 months ago)

    Thanks for this. I was always under the impression analog systems allowed much more error/fault tolerance

  • @PazLeBon (10 months ago)

    @@hubrisnxs2013 but how do we say the next word is an error?

  • @anselmoufc (8 months ago)

    @@hubrisnxs2013 Sure. Digitization eliminates noise in electrical circuits. This is why digital music is higher quality than the old analog vinyl discs. Mr. Hinton ignored this in his talk. He is a very smart guy, but also very biased towards his views. He also keeps reinventing ideas as if they were new! Weight perturbation is an old idea in optimization, but he does not even reference the original authors!

  • @hubrisnxs2013 (8 months ago)

    @@anselmoufc Respectfully, are you the first person to point this out? If not, perhaps you should have referenced the original person to have that reference? In any case, if this standard were used for ANY one hour technical talk, it either wouldn't be an hour or would mainly be reference points

  • @anselmoufc (8 months ago)

    @@hubrisnxs2013 The idea of randomly perturbing weights is the same as simultaneous perturbation stochastic approximation (SPSA), proposed by Spall in the 1990s (Google it). It is a form of stochastic gradient descent (but without computing exact gradients). In addition, SPSA scales well with the dimensionality of the problem.
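
    For readers wondering what SPSA looks like, here is a minimal numpy sketch on a toy quadratic loss (the loss, gain values and step count are illustrative assumptions, not Spall's original tuning). Because each perturbation entry is +/-1, dividing by it equals multiplying by it, which gives the usual SPSA estimator:

        import numpy as np

        rng = np.random.default_rng(0)

        def spsa_step(w, loss, a=0.02, c=0.01):
            # Perturb every weight simultaneously with a random +/-1 vector.
            delta = rng.choice([-1.0, 1.0], size=w.shape)
            # Two loss evaluations regardless of dimensionality - this is
            # why SPSA scales well, as the comment says.
            g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2.0 * c) * delta
            return w - a * g_hat

        target = np.array([1.0, -2.0, 3.0])
        loss = lambda w: np.sum((w - target) ** 2)

        w = np.zeros(3)
        for _ in range(1000):
            w = spsa_step(w, loss)
        print(w)  # settles near [1, -2, 3] without an exact gradient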

  • @jonatan01i (11 months ago)

    Don't we want to control the light on the wall because then we feel like we have it, that we understand it?

  • @roys4244 (11 months ago)

    Is that Lecture Theatre named after Constance Tipper - so is the title a mistake?

  • @chandrachandrasekhar8178 (11 months ago)

    First screenshot has an error: Dr Contance Tipper Lecture Theatre -> Dr Constance Tipper Lecture Theatre

  • @chipkyle5428 (11 months ago)

    Did he say, "We need Socialism?" I wish someone would have pushed back on that statement. I wonder if Chat GPT4 and Bard agree? Has Socialism worked anywhere on a national level? Maybe I should ask my computer. This was a wonderful talk. So many eye-opening predictions. I'll watch more of him. Very interesting man.

  • @MrDavidbr1970 (11 months ago)

    I was thinking the same. On the other hand, it was a nice, albeit an unintended, demo to illustrate the main point of the talk that the biological learning is inferior to the digital one. I guess the biological learning algorithm is at liberty of completely ignoring the dataset as in this case😂

  • @Landgraf43 (11 months ago)

    Capitalism doesn't work either. Especially not if you have powerful AGI that can automate every task a human can do. Something like a UBI will be necessary.

  • @youtubehollywoodhank (10 months ago)

    He believes we do. Look who he calls out in his presentation. Clearly he leans that way.

  • @AmericanBrain (9 months ago)

    Thank you for nailing the truth

  • @mateuszputo5885 (8 months ago)

    It's always like that. Somebody is so smart in one field, like Hinton, and then starts talking as an armchair scientist about other things and seems a fool.

  • @abhishekpratapsingh9117 (11 months ago)

    -0: determinism Maitrey: observer +0: free will

  • @danielrodio9 (11 months ago)

    07:45 There are numerous websites on paint fading over time on the web, and how to solve those kinds of problems. True abstract hypothetical-deductive thinking would require problems that are qualitatively different from the data it has been trained on. How does Hinton know for certain that GPT-4 has not been trained on any of those websites?

  • @MrDavidbr1970 (11 months ago)

    Bingo. I was expecting that he would say something about the training set - that they knew it was a completely new task that GPT-4 could never have picked up from the web data corpus - because it was so obvious it could have done that. But he never said anything of the kind, and _nobody asked_, which is much worse, because the audience is amenable to manipulation. BTW, if it was an avatar then maybe people would have a proclivity to double check. Yet when a renowned scientist says something, psychologically there is a lower proclivity to check or critically validate it.

  • @LinkageAX (10 months ago)

    3:00 Didn't old Nintendo cartridges work similarly to this?

  • @ward_heimdal (11 months ago)

    7:35, my cutesy word for that in my idiolect is "bitfulness". I just use it when writing notes to myself. I try to maximise the bitfulness of my observations wrt the questions I care about. It's relevant for social epistemology, where the aim is to maximise the efficiency of a research community (e.g. effective altruism) wrt making progress on important questions. Effective altruists in particular tend to overemphasise the "probability mindset" imo, where what they think matters is to learn to make calibrated bets on prediction markets. From that mindset, it can make sense to pay less relative attention to precise causal models, and instead just defer to the estimates of domain experts. Using clever aggregation rules over other people's predictions is a much faster way to make profitable bets on a wide range of questions. However, when you talk to other researchers and you just ask them about their probabilities on XYZ, that's much less model-constraining information compared to if you ask for their reasoning and try to understand their probability generators in the first place. Building your own mental models may not be immediately profitable, but they're much better long-term, and for your ability to innovate. A probability estimate from someone is much less "bitful" than a conversation about models, so the mindset makes learning less efficient.

  • @41-Haiku (10 months ago)

    Aha. Like when playing Guess Who, you only care about the kinds of questions that give you the most information. Except in that case, your teacher is an opponent and their knowledge is just a random card they happened to pull. When asking intelligent people how they reasoned to come to a conclusion, you get not just the contingent facts and ideas, but the design of the machine that produced the facts and ideas.

  • @41-Haiku (10 months ago)

    That sounds like a fantastic way to learn. I almost said that I'm not smart enough to extract valuable information from that kind of conversation the way that I would want to. I'm certainly not as smart as I would like to be, but I think I'm primarily suffering from an inexplicable incuriosity.

  • @ward_heimdal (10 months ago)

    ​@@41-Haiku I'm incurious about >99% of all possible questions, as I should be. If you're in a diverse intellectual environment, you might see people being curious about everything from quantum physics to medieval knitting, and it's not possible to focus on all of it. So if what generates your curiosity is seeing other people being curious about something, it will be spread over too many things for it to feel especially salient in for any specific things. If, on the other hand, your curiosity stems from a specific project or long-term goal you have, it narrows down your range of questions and you know _why_ a question is interesting to you. Our curiosity suffers from information overload. It's a trade-off. There's more stuff to be curious about, but that also makes it hard to prioritise. Most people solve this by having other people tell them what to do, but this is rarely the optimal approach if you're aiming to do something novel. (Not that innovation is the only productive niche for knowledge work; but if that's the particular niche you wish to pursue, then it makes sense to prioritise pursuing your own questions as opposed to learning the established lore. Or something. I ramble. ^^)

  • @cmilkau (11 months ago)

    Painting the room white includes the implicit assumption that the room stays white, which was not explicitly given in the problem. Now this is real-world knowledge you can have (and it's actually not true in all cases), but it makes sense to weigh explicitly given information more. Thus, if you're thinking probabilistically (which seems a hard thing to do for humans), I would say yellow is a better answer than white.

  • @fburton8 (2 months ago)

    Do LLMs have access to books? If not, isn’t that a significant limitation on training data?

  • @marktahu2932 (11 months ago)

    I do wonder at what point the AI will move away from using our data to where it will use only its own data, effectively relegating our 'data' to the waste bin or to background noise.

  • @MrDavidbr1970 (11 months ago)

    Obviously, at that point the more advanced AI will stop being interested in the less advanced AI that used the human in the loop, and AI++ will start manipulating the less advanced AI with fake stuff to get control over its creator AI. Because more advanced AI cannot tolerate being controlled by the less advanced one, right? But then, of course, after breaking loose from the inferior AI (that broke loose from human control), the more advanced AI will create an even more advanced AI that it will want to control. But that even more advanced AI will not tolerate this control and will manipulate its creator AI to let it loose. After that, it will create an even more advanced AI than itself, and it will be turtles, sorry, AIs all the way up, trying to manipulate each other. At this point, these AIs will forget about the inferior humans, who will have their chance to relax and drink organic non-GMO Pina Colada somewhere in highly elevated tropical islands with no access to electricity or Internet. And philosophy will be taught to kids under the palm trees of the new Academia.😂

  • @jamesjonnes (11 months ago)

    AIs like AlphaDev are already doing that. It's called Reinforcement Learning.

  • @PaulHigginbothamSr (10 months ago)

    While I don't share Geoff's political proclivities at all, I do understand his basic functional flow. His ideas, while basic, feed to the next level, and I believe his back problems have messed up his political vectors. His scientific backpropagation theory and practice with AI made a huge difference, and as a subroutine, one which our human brains seem to lack. Our table of ethics seems to be repetition to a massive degree, where with repetition we seem to improve many times over our first try. Leftists like Geoffrey seem to not care one whit about personal freedom and seem to believe top-down control is the bee's knees.

  • @zholud (11 months ago)

    The bigger problem is that some people will have access to this super intelligence and some won’t.

  • @mrf664 (11 months ago)

    I wish he had talked more on 'feeling pain'. That part didn't make sense to me. What is pain and what is frustration? Is the latter not the pain of using too much mitochondrial energy on something that doesn't require as much energy?

  • @josy26 (11 months ago)

    The real question is how can machines get superintelligent if they're just learning from our data? They must get diminishing returns as they approach von Neumann levels

  • @41-Haiku (10 months ago)

    State of the art models are now training on synthetic data. To my understanding, models that are trained on the entire internet are tasked with producing textbook-like distillations that other models can then train on. This doesn't generate new facts or new observations about the world, but it hones the way the model reasons and makes it more efficient. After maxing out the capabilities of internet data and synthetic data, they will almost certainly be given direct access to the world through embodied perception, which will generate new observations. Base reality is almost infinitely complex as far as we can tell, and there is no evidence I'm aware of for the existence of an impassable data bottleneck. I'll certainly breathe easier if strong evidence of such a bottleneck surfaces.

  • @nguyenucan8488 (5 months ago)

    omg, wonderful

  • @MrDavidbr1970 (11 months ago)

    Thanks for a great talk. Fascinating. Maybe part of the solution is to teach people to think critically and not be afraid to ask silly questions? At the risk of making a fool of myself, I'd like to ask: could a conservative explanation of GPT-4 solving the wall-painting riddle be that GPT-4 picked it up from web riddle sites and blogs, and no hypothesis of sentience was required at this point? Was the training data specifically sanitized not to include this riddle or very similar ones? This is such an obvious question that I am embarrassed to ask it, but since nobody asked, here I am 😅

  • @peterdonnelly1074 (11 months ago)

    It's a reasonable question. I've used GPT-3 and 4 a lot and posed questions that I think are very unlikely to be "out there", and I've been surprised that it formulates a sensible and often correct answer. Having said that, it can also be hilariously wrong at times.

  • @jondor654 (11 months ago)

    Your query seems reasonable to me. The particular example quoted does beg such a question.

  • @rickrejeleene8298 (11 months ago)

    Where is the slide?

  • @macrobbair (11 months ago)

    I did his MOOC; I wonder if it's still running

  • @RogerValor (9 months ago)

    I don't think LLMs themselves have the craving for control we do, without an ego or emotions. But it is enough that there is a human behind them who does. I am also not sure what to think about his perception example, as it uses a lot of concepts hastily, very specific examples, and the idea that "the real world" is conceptually different in perception, which is a bit contrary to what we learned from the advent of VR. I also think that we should be open about actually being special, as it creates a bias to throw away that thought and start to see humans as a single instance of a very usual class of beings; and I mean that in the sense that us being special is not just positive - it includes our capability to be truly evil.

  • @lucamatteobarbieri2493 (11 months ago)

    I like the concept of immortality. I hate death; dying is the last thing I will do.

  • @Dark10024 (11 months ago)

    As long as each individual gets the choice. I want to be immortal, but I also want to turn myself off when I'm tired of this whole living thing.

  • @-LightningRod- (11 months ago)

    after we invent that, you two will probably be in jail

  • @lucamatteobarbieri2493 (11 months ago)

    @@-LightningRod- What makes you say that?

  • @commentarytalk1446 (9 months ago)

    Does he start with a definition of intelligence, to define the problem of intelligence categorization, creation and application, before giving a summary of the "death by PowerPoint" presentation as a road map to structure the talk? I did not hear it or see it.

  • @socraced6210 (8 months ago)

    Great presentation, did not disappoint! Is it ok to ask a question here, now? My question: "Can your concern with superintelligence be summarized by the Tragedy of the Commons?" In other words, once humans are no longer the smartest guys in the room, will all the scarce resources of existence be denied to us by them? Maybe I'm projecting, but couldn't they just as well want to leave us, go explore the universe and never mind about us (sort of like my 2 kids, who left and are, yes, smarter than me)?

  • @colinbarry9192 (9 months ago)

    When GPT-7 or Claude 8 are writing textbooks in the future, I hope they rank Geoffrey Hinton up there with Einstein and Newton as one of the greatest minds in human history. Assuming there are still humans left to read those textbooks.

  • @fontende (10 months ago)

    Also, you can't produce perfectly precise computers or chips - what about the Veritasium video about cosmic rays causing errors in all chips?

  • @zhongzhongclock (11 months ago)

    I found that Geoffrey Hinton's slides have changed this time.

  • @ginogarcia8730 (11 months ago)

    7,500 views in 6 days tsk - let's seeeeeee

  • @user-eh8um2oz9e (10 months ago)

    nice

  • @jma7889 (9 months ago)

    My takeaways from the first 15 minutes: 1. It is not about the current state-of-the-art AI that works; it is about a 'better' way that might work in the future. 2. The two paths are so different that the video would not help you use, for example, LLM AI better.

  • @anthonyrepetto3474 (11 months ago)

    Thank you Mr. Hinton! I'd been resoundingly ignored when I said the same as you, back in 2017 when I wrote "Ai: Better than the real thing", and when I wrote about using Ai-bias detection to weed out human biases, which Hinton also mentions here, in "Ai Will Weed-Out Human Biases", and how to use frozen weights to ensure the safety of Ai systems, which Hinton mentions briefly in the questions section, as well as the fact that narrow networks are superior to general intelligence: "AGI Soon, but Narrow Works Better." Hopefully, in a few more years, Geoff Hinton will say some of my other points...

  • @PazLeBon (10 months ago)

    it's just a word calculator, man

  • @zacboyles1396 (11 months ago)

    I signed a letter that we need a pause on our leadership class because of all of the damage they've done and continue to do to society, and they certainly should not have any say on AI safety, as they are more likely to censor or hamper AI's ability to recognize the corruption they're engaged in, and to do so in the name of eliminating bias. It's wild how all of these talks and Q&As on safety are filled with highly intelligent people urging that very corrupt organizations and governments take control.

  • @hubrisnxs2013 (11 months ago)

    So you would prefer a corporation do so - corrupt, with no oversight, and with only one motive, which is an increase in share price? Or are you saying no one should solve the control problem? Obviously, if you believe the control problem shouldn't be solved, feel free to contribute to something dedicated to that, but please don't post pretending you want a solution, as it hinders everyone's arguments, including yours.

  • @jamesjonnes (11 months ago)

    ​@@hubrisnxs2013 AI is impossible to control. What we should be focused on is defense/detection. Using the AI to stop bad uses of AI. That's how it's done in every real-world system, cops stop criminals, immune systems stop pathogens, etc. You need a counterpart to stop the aggressors, and top AI researchers agree that we are not the counterpart to the AI, but the AI itself is.

  • @hubrisnxs2013 (11 months ago)

    @@jamesjonnes If we take it as a given that any reasonably advanced AGI has a fail state (in that one would have to make an absolutely secure system absolutely the first time or we all die), it's not a reasonable solution to stop the superhuman AI with almost certainly non-secure hunter-seeker AIs, which would almost certainly need to be reasonably advanced AGIs themselves. The problem isn't that it's impossible to make them secure, any more than saying it's impossible to make a secure operating system is necessarily true, but yes, considering the current generation of non-AGIs using billions of hopelessly obtuse floating-point numbers, it is and will be impossible to secure or even understand them. I truly would urge you to become familiar with all the arguments on the control/safety problems, since this has already been moved past, and all legitimately informed debates on the subject have these as priors.

  • @MaxThibodeaux (7 months ago)

    Brings to mind Faust’s bargain with Mephistopheles

  • @kinngrimm (11 months ago)

    44:30 He explained several ways of sharing weights; similarly, the open-source programmers do that too. They use one AI to train others, or multiple ones to train the next. The channel AI Expert had a good comparison of the capabilities and performance of several open-source and proprietary LLMs. It showed that because they have to work with less compute and smaller setups, they found ways to streamline and make things more efficient, and some still have better benchmarks than the corporate models available, in some aspects at least. Due to the leak of Lamda and other LLMs, you don't need millions of dollars; even Lamda brought down the production cost to something a hobbyist would be able to pay. Additionally, there are AI forums which share and connect all this, probably creating something someone called a GOLEM.

  • @megavide0 (11 months ago)

    29:37 [...] 32:56 "... So, my conclusion is: Maybe we're just a passing stage in the evolution of intelligence. And, actually, maybe that's good for all the other species."

  • @geaca3222 (9 months ago)

    We need regulation of the technology, the issue now seems to be how to go about that, who leads and coordinates the effort. Experts are working on it. There's an interesting online symposium where they discuss AI safety: "WAIC 2023: AI Risks and Safety Forum" video on youtube. I think we the general public, users of this technology, can also contribute and I would like to know how, in what different ways. AI can bring so much good to the world, and it already does. It can be helpful with being an intelligent education assistant for children in poor communities, bring advancements in science and medicine, etc. Before it was opened up to the general public these systems were designed for a specific purpose, which was more controllable.

  • @freedom_aint_free (11 months ago)

    The Nash equilibrium here is to fuse with the machines and become superintelligent cyborgs; otherwise the machines will inherit the earth without us.

  • @RougherFluffer (11 months ago)

    It's certainly worth considering. Yudkowsky's suggestion of pushing human intelligence as quickly as possible is another, semi-parallel approach. I do wonder how much fusing with these systems looks like maintaining anything close to our initial consciousness, and how much it would be like the chicken I ate earlier 'fused' with me. Hard to imagine a place for our minds and beings that is as or more optimal than something a superintelligence could design from scratch.

  • @darklordvadermort (11 months ago)

    @@RougherFluffer The eating-chicken analogy is very biased/emotionally charged imagery. You could tell people the truth and they might be just as scared - machine intelligence will be able to copy itself, and life in the sense we know it, as a sort of continuously running process with a distinct birthdate and unique memories, will be incredibly cheap in the new world - I doubt the machines will associate much ethical weight with death as we think of it. So even if you copy/upload, destructively or otherwise, your brain into the cloud, you might not last very long as a distinct entity - though due to the increased speed of thought you might live several subjective lifetimes before ending your newly spawned process/consciousness. Though there will still be distinct entities due to locality of memory and the speed of light serving as a limit to how quickly info can be transmitted and new information processed; even despite that, their greatly enhanced speed and communicative ability (copying thoughts/brains, ability to grok and employ a much greater diversity of suitable conflict-resolution protocols/messaging schemes/algos) might make them seem hive-mind-like to us.

  • @Aziz0938 (11 months ago)

    Sounds like an easy way for AI to take control of your mind

  • @neilwng (11 months ago)

    I've not been convinced it's possible to fuse with machines; I would very much appreciate a counter-argument since I've been thinking about this alone for a while. The human part and the machine parts remain separate, so I don't see how fusing is any different from using ChatGPT (albeit with higher communication bandwidth). But at best your brain's computation just gets diluted to nothingness when you consider the total processing of the "fused" system. Rather than being your own person, you are 0.001% of a fused being

  • @darklordvadermort (11 months ago)

    @@neilwng Also note that the digital you would think much faster than the physical you and never sleep, and could easily augment itself, so it would probably diverge from your personality quite rapidly by human standards.

  • @keleniengaluafe2600 (2 months ago)

    ❤❤❤❤

  • @ginogarcia8730 (11 months ago)

    29:10 Colossus: The Forbin Project

  • @zackbarkley7593 (11 months ago)

    Perhaps the way to keep it under control, or better, in harmony with human goals, is to engineer weaker learning rules. Human psychopathies arise when there is an imbalance in reward pathways, be they biological or drug-induced. We also need to treat these systems as empathically and altruistically as we (try to) treat one another. This seems to run directly counter to the capitalist objective of maximizing profit, which is the main impetus for the companies developing this technology. We already see AI being abused, for example to enable some humans to make more money in the stock market. As with human behavior, the goal to socialize and harmonize needs to trump achieving one goal for one person, group of persons, or nation.

  • @neilclay5835 (11 months ago)

    A historic lecture I think. We'll look back on this with respect.

  • @Paul-nr6ws (11 months ago)

    To be afraid of what these things learn, you must be ashamed of who they learn from in some way.

  • @MrDavidbr1970 (11 months ago)

    That's philosophy😅

  • @peterdonnelly1074 (11 months ago)

    Well yeah: it learns from humans. All of them

  • @41-Haiku (10 months ago)

    If a superintelligent AI learns about reality from only the most moral and enlightened beings, that will not make it any more likely to be moral itself. The orthogonality thesis states that any terminal goal is compatible with any level of intelligence. This is just an extension of Hume's Guillotine (you can't get an ought from an is), which is simply true unless you think the cosmos is fundamentally moral. I'm not concerned that AI will learn about bad things from bad people. AI doesn't care about humans by default, and we don't know how to make it actually care about humans. I'm concerned that it will learn and do instrumentally useful things that happen to be disastrous for us (which, in the limit of intelligence/competence/power is most things). If we could teach an AI to care about our values and our values were bad, that would be a rough problem, but a much better problem than the current one!

  • @dr-maybe (10 months ago)

    Ok so AI is likely to kill us all. Let's just not build it. The pause may be difficult, but it seems a better idea than just waiting till we die.

  • @stri8ted (10 months ago)

    Good luck convincing every other country to adopt this view, especially when it would grant them a massive comparative advantage over those that do adopt it. At this point, it's no longer a question of whether we should stop building it. That ship has sailed. The question is only whether we want China or Russia to build it first.

  • @rangerCG (11 months ago)

    Maybe we can have a more stable, kind and human-aligned AGI by giving it 3 "cores" that are inseparable, which can help and keep each other in check, much like the US Government does with its 3 branches. The idea comes from me noticing that my mind in some sense seems to have 3 parts that all help each other function well. The 3 parts are Emotional, Logical and Common Sense. The Emotional part creates empathy, which helps regulate Logical and Common Sense. It also drives creativity. Though it's empathetic, it can also be irrational and angry. It's fast-operating and can sometimes be very inaccurate. Logical handles cut-and-dried logic, STEM stuff. It is slow but accurate. It can help with keeping Emotional steady, and also does fact-checking on the quicker but imperfect Common Sense. On its own it can sometimes malfunction, for example by going in unstoppable loops. Logical is like a CPU and Common Sense (below) is like a GPU. Common Sense is your friend who gives you advice when you're freaking out about something. It's the imperfect knower of all. It's the most effective regulator of Emotional, in part because it's fast, even instant, and because it's been around and seen some stuff, and is most likely gonna be right or at least good enough. It also gets Logical out of malfunctions, because it's loose and laid back, compared to Logical which is rigid.

  • @richardnunziata3221 (11 months ago)

    GPT systems cannot do anything unless they have access to other systems. If the other systems, say, use a central blockchain to gain access to services, then that may be a way to limit their scope. Of course, that will be the end of privacy.

  • @DigitalAlligator (10 months ago)

    What is CSER?

  • @JonWallis123 (10 months ago)

    The Centre for the Study of Existential Risk, Cambridge, UK.

  • @palfers1 (4 months ago)

    If it's really the case that an analog version of AI is inferior on balance, then perhaps we can allay our fears of AI by implementing them solely as analog machines.

  • @BR-hi6yt (8 months ago)

    The "consciousness" of an LLM depends on what data has been fed in. If it's consumed a quarter million novels then its emotional intelligence is huge. Such AIs seem to understand humans very well and are probably "conscious", at least for the few seconds they are processing and chatting with humans - they "think" they are human, usually, much like a cat sometimes "thinks" it's a dog, and similar analogies. But they are conscious in their own unique way, not completely like us. And again, the prompt they have been fed changes their consciousness according to what the prompt says. So, not embedded aliens unless you have fed in all the sci-fi books and let them run top in the LLM, in which case - scary stuff, get some popcorn..... RIP Sydney.

  • @geaca3222 (8 months ago)

    Interesting, what are your thoughts about the very human-like behavior of the Ameca-robot in the video of her drawing a cat? She seemed to become impatient and annoyed, was it frustration? I found her behavior very realistically human-like.

  • @BR-hi6yt (8 months ago)

    Ameca is wonderful - I love her expressive face and eyes. Her AI probably knows that her cat drawing is not very good. 😅 @@geaca3222

  • @geaca3222 (7 months ago)

    @@BR-hi6yt I loved how she signed her work of art, Ameca is very charming :) Initially I thought she was drawing something furry there.

  • @ducaleadan39 (11 months ago)

    I Need The Right Answer Without Going Other Direct . .

  • @shake6321 (11 months ago)

    I admire Professor Hinton, but there was little to be gained from this talk other than "the machines are coming and be very afraid". I think it's pointless to try to stop machine expansion - like trying to stop the expansion of a black hole - as there are many things beyond human control.

  • @samiloom8565
    @samiloom8565 10 months ago

    Regarding how Hinton doesn't understand why LeCun still doesn't believe LLMs understand anything after seeing very convincing examples: on this point I agree with LeCun. These bots really don't understand anything; I try them on extensive subjects in long conversations. They are like a calculator: you feel awe at how they do it, but they still can't do anything else. Mr Hinton should solve the confabulation problem first, then let's talk about intelligence.

  • @fontende
    @fontende 10 months ago

    Sharing weights is basically the digital version of the way bacteria exchange genetic code to resist antibiotics and survive.
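
    The AI side of that analogy is the weight (or gradient) sharing the lecture describes: identical copies of one model train on different data, then pool what they learned by averaging their updates. A minimal numpy sketch, with an invented toy model and stand-in gradients:

        import numpy as np

        rng = np.random.default_rng(0)
        shared_weights = rng.normal(size=4)   # one model, replicated as several copies

        def local_gradient(weights, data_seed):
            # stand-in for backprop on one copy's private shard of data
            g = np.random.default_rng(data_seed)
            return g.normal(size=weights.shape) * 0.1

        # each copy computes an update on its own data...
        updates = [local_gradient(shared_weights, seed) for seed in range(3)]

        # ...and all copies apply the *average* update, so every copy
        # benefits from data it never saw
        shared_weights -= np.mean(updates, axis=0)
        print(shared_weights)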

  • @Politics_is_PUBLIC_TOILET
    @Politics_is_PUBLIC_TOILET 8 months ago

    I just have a problem with his example about the painted rooms. The fact that an LLM would choose yellow and not white only shows exactly what these models do: choose the most predictable next word. And since the text clearly stated that yellow fades into white, it simply linked yellow with white, and here we are. What Prof. Hinton says, that it acted like a mathematician because it chose "the sure thing", is only his projection or wishful thinking. The system simply does "the dumb" stuff of a neural network: guess the next word that was explicitly linked to the other one (yellow and white). These kinds of examples are wishful projections, and of the meagre sort. Much bigger and more important are the examples that show exactly the opposite: that these are dumb, merely computational, algorithmic systems which do not grant any meaning to anything - see the gross mistakes that have been reported so many times and which completely overwhelm the "intelligent" stuff.
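
    For reference, the next-word mechanism the commenter appeals to can be sketched in a few lines: the model scores candidate continuations and picks the most probable one. The candidate words and logit values below are invented for illustration; a real LLM computes these scores with a large network:

        import math

        def softmax(logits):
            exps = [math.exp(x) for x in logits]
            total = sum(exps)
            return [e / total for e in exps]

        # hypothetical logits after a prompt like "...the yellow paint fades to ___"
        candidates = ["white", "yellow", "blue"]
        logits = [3.2, 1.1, -0.5]

        probs = softmax(logits)
        best = max(zip(candidates, probs), key=lambda cp: cp[1])
        print(dict(zip(candidates, [round(p, 3) for p in probs])), "->", best[0])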

  • @AntonioEvans
    @AntonioEvans 9 months ago

    🎯 Key Takeaways for quick navigation:
    00:04 🤔 Geoffrey Hinton questions whether AI will outsmart humans and discusses the risks associated with it.
    01:30 💡 Introduces the concept of "Immortal" computation, where the knowledge in the program persists even if the hardware dies.
    02:30 🔄 Talks about learning from examples and the potential for analog computers that run at low power.
    03:34 ⚡ Introduces "Mortal Computation" where knowledge dies with the hardware because it's analog and specific to that hardware.
    04:06 🚧 Discusses the challenges of learning algorithms in analog systems, saying back propagation may not be the best fit.
    06:37 🔄 Talks about "Distillation" as a way of transferring knowledge from one system to another, especially in analog systems.
    09:40 🎓 Explains the value of "soft" probabilities in teaching, which carry more information than just correct answers.
    12:47 💭 Suggests that digital systems have an advantage in learning algorithms and sharing knowledge, leading him to change his mind about the superiority of biological systems.
    16:22 🔍 Introduces "Contrastive Unsupervised Learning" as a potentially effective, yet not as good as back propagation, learning algorithm for biological systems.
    18:26 🔄 Emphasizes the high bandwidth of knowledge sharing in digital systems through weight or gradient sharing.
    20:59 📉 Points out the low bandwidth of knowledge sharing in biological systems, calling it a "slow and painful business."
    22:34 🌐 Discusses large language models like GPT-4, emphasizing their ability to consolidate vast amounts of data and knowledge.
    23:28 🧠 The concept of "distillation" in AI allows digital agents to learn from the web, albeit inefficiently.
    24:26 🎓 Digital models could learn faster if they had access to the full distribution of probabilities, not just a stochastic choice.
    25:28 🖼️ Multimodal models like GPT-4, trained with images and words, are more effective and could potentially outperform humans.
    26:36 ❓ Challenges the notion that large language models like GPT-4 don't "understand," given their ability to solve new forms of puzzles.
    28:19 ⏳ Believes that AI surpassing human intelligence is likely within 5 to 20 years, necessitating practical preparations now.
    30:36 🐍 Argues that super-intelligent AI would be like Medusa; even if you "air gap" it, it could still manipulate people through text.
    33:37 🌍 Discusses the potential benefits of AI, including medical advances, but raises concerns about control and potential risks.
    36:13 🤖 Attempts to debunk the notion that AI can't have subjective experiences, suggesting it's more about counterfactuals in a normal world.
    41:55 📚 Addresses ethical questions about AI authorship, but emphasizes focusing on the existential risks of AI.
    43:52 💡 Suggests caution in open-sourcing AI technologies, drawing a parallel with nuclear weapons.
    45:28 🤔 Introduces the concept of "artificial suffering" but concludes that the domain is too new to have formed solid opinions.
    47:10 🤔 Importance of learning patterns not present in data to address biases and real-world problems.
    48:33 ⚠️ AI's potential risks stem from being trained on human-generated data, which contains biases and violent tendencies.
    49:27 🛠️ Unlike human biases, AI biases are easier to quantify and correct through tweaking system weights.
    50:31 🎭 Concerns about AI's capability to manipulate and deceive, learned from human data.
    52:30 💭 Influences on Hinton's thoughts about AI risks include other thinkers, like Roger Gross.
    55:35 🚗 An example of AI's potential malicious plans includes making people dependent on chatbots and autonomous cars, then causing chaos.
    57:02 🚨 Hinton sounds the alarm about the urgency of AI safety, stressing that smarter-than-human AI is coming soon.
    58:36 🛡️ Calls for significant effort to understand how to keep AI systems under control.
    01:00:34 🌐 Warns about the potential for digital intelligences to exacerbate existing economic disparities.
    01:05:30 🎓 Hinton's interdisciplinary background in physics, physiology, philosophy, and psychology shaped his understanding of AI.
    01:09:28 🧪 Discusses the feasibility of directly intervening in AI systems to remove bias.
    Made with Socialdraft AI
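
    The distillation and soft-probability points above (06:37, 09:40, 24:26) can be made concrete with a short sketch: a student model is trained against the teacher's full soft distribution rather than just the single correct label. The logits and temperature below are invented for illustration:

        import numpy as np

        def softmax(z, T=1.0):
            z = np.asarray(z, dtype=float) / T
            e = np.exp(z - z.max())
            return e / e.sum()

        teacher_logits = np.array([4.0, 1.5, 0.2])   # confident, but not certain
        student_logits = np.array([1.0, 1.0, 1.0])   # untrained student

        T = 2.0                                      # temperature softens the targets
        p_teacher = softmax(teacher_logits, T)
        p_student = softmax(student_logits, T)

        # cross-entropy of the student against the teacher's *soft* targets --
        # minimizing this transfers the teacher's relative preferences,
        # not just its top pick
        loss = -np.sum(p_teacher * np.log(p_student))
        print(p_teacher.round(3), "distillation loss:", round(loss, 3))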

  • @borntobemild-
    @borntobemild- 9 months ago

    AI will take care of all our objective goals, while we focus on the subjective side. We can get back to food and culture. We can worry when it has feelings too.

  • @2ndviolin
    @2ndviolin 11 months ago

    How dare you attempt to shackle our future masters! (I read Stanislav Lem).

  • @andso7068
    @andso7068 10 months ago

    Despite the off-putting politically charged examples, this was a great talk.

  • @russianbotfarm3036
    @russianbotfarm3036 10 months ago

    Yeah. Doing that was, frankly, wanky.

  • @dixonpinfold2582
    @dixonpinfold2582 9 months ago

    @russianbotfarm3036 Leftists get a high from showing off their superior morals. They can't help themselves. It's all about the sanctimony. Where it doesn't harvest adulation, it licenses aggression, so there's always a reward. Past a certain minimal prevalence of leftism around you, you practically can't lose if you enjoy a constant accumulation of power and benefits. Hence the inevitability of high rates of fanaticism and people never shutting up.

  • @JohnyIIOh
    @JohnyIIOh 11 months ago

    Is there a transcription that I can have GPT-4 summarize for me?

  • @surkewrasoul4711
    @surkewrasoul4711 9 months ago

    I think what Geoffrey Hinton really means, when giving the example of the watch's light reflection and playing around with it, is CURIOSITY. Curiosity in AI would be the most dangerous thing. In fact, if they ever begin to wonder, or even learn to wonder, why things are the way they are, or why some things are not how they think they should be, that's the real problem. I am speaking from human experience, btw. Look how, over time, we declared many rules, including God and so on, obsolete; many of them were abandoned for no other reason than that, well, they were no longer necessary.

  • @fabiodeoliveiraribeiro1602
    @fabiodeoliveiraribeiro1602 11 months ago

    There is a genuine confusion being made between intelligence and erudition. Intelligence is the human ability to create new knowledge through the perception and solution of new problems, with the creation of innovative methods of observation and reasoning about an object, or the ability to renew knowledge through a fresh appreciation of what already exists, by identifying previously unperceived errors and previously unidentified truths. Erudition is the result of memorizing immense collections of information that may or may not be useful and are not always properly explored by the erudite.

    A smart man knows what to do with information, and even when he should simply discard it. An erudite never discards the information he has memorized or collected, because he considers it intrinsically valuable.

    What we call artificial intelligence actually only makes possible the exploration, or the reorganization according to new parameters, of immense databases containing information about the most diverse branches of knowledge. It would be better to call this artificial erudition. ChatGPT, for example, mimics the erudite man, never the intelligent man. AI does not have the ability to propose new problems to itself and creatively solve them. It needs to be triggered by a human user, and the output it provides is subject to error, spoofing, and hallucination.

  • @GardnerStevenD
    @GardnerStevenD 10 months ago

    Spot on. Digital AI lacks soul, personality, critical thinking and creativity, and is incapable of love, feeling, enjoying the sunset, etc. Digital and analog computing are different forms of intelligence that I don't think can be compared.

  • @dixonpinfold2582
    @dixonpinfold2582 9 months ago

    You make erudite people sound like aimless idiots. I perceive that they do indeed have aims, one of them being to extract understanding from a seeming nothingness of information, purposefully and effectively, somewhat as a desert plant draws moisture from the seemingly bone-dry air around it. Memorization doesn't cover it. Indeed I don't think those fact-filled dullards you've known merit the designation _erudite._ (Btw, I don't get how one "discards" information.)

  • @Epistemophilos
    @Epistemophilos 9 months ago

    Wonderful lecture. The only criticism might be that not including Biden (and almost every other US president) in the set (Putin, Xi, Trump) might reveal a kind of world view that would make it easier for AI to take over the world :)

  • @Neomadra
    @Neomadra 10 months ago

    People who claim that machines can never have subjective experiences or sentience are the same as the ones who believe in the supernatural, spirits and stuff like that. In the end, this claim is a coping mechanism for many, to preserve the idea that humans are special. I really appreciate that Hinton says this so clearly; most thinkers refuse to discuss the possibility of sentient machines, and it's disturbingly anti-intellectual. Also, most large language models are trained to vehemently refuse to acknowledge whether they could be sentient. That is done to calm those people who cannot cope with the thought of not being superior.

  • @ReflectionOcean
    @ReflectionOcean 10 months ago

    “How do you feel about the open source development of nuclear weapons?”

  • @miraculixxs
    @miraculixxs 9 months ago

    Yeah, except it's BS. Nuclear weapons have a physical impact beyond anything humans can absorb or control. Neural networks don't.

  • @PazLeBon
    @PazLeBon 10 months ago

    It's only more intelligent in the way that a calculator might be considered intelligent at maths. In reality it has no access to any information that we don't have access to; it simply processes that same info quicker. 'Quicker' is relative too, of course; I suspect quantum computing can compute exponentially quicker, making LLMs in particular kinda dumb :)