Season 2 Ep 22 Geoff Hinton on revolutionizing artificial intelligence... again

Science & Technology

Over the past ten years, AI has experienced breakthrough after breakthrough in everything from computer vision to speech recognition, protein folding prediction, and so much more.
Many of these advancements hinge on the deep learning work conducted by our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. A recipient of the Turing Award, the equivalent of the Nobel Prize for computer science, he has over half a million citations to his work.
Hinton has spent about half a century on deep learning, most of that time researching in relative obscurity. But that all changed in 2012, when Hinton and his students showed that deep learning beats every other approach to computer vision at image recognition, and by a very large margin. That result, now known as the ImageNet moment, changed the whole AI field. Pretty much everyone dropped what they had been doing and switched to deep learning.
Geoff joins Pieter in our two-part season finale for a wide-ranging discussion inspired by insights gleaned from Hinton’s journey from academia to Google Brain. The episode covers how existing neural networks and backpropagation differ from how the brain actually works; the purpose of sleep; and why it’s better to grow our computers than to manufacture them.
What's in this episode:
00:00:00 - Introduction
00:02:48 - Understanding how the brain works
00:06:59 - Why we need unsupervised local objective functions
00:09:39 - Masked auto-encoders
00:10:55 - Current methods in end-to-end learning
00:18:36 - Spiking neural networks
00:23:00 - Leveraging spike times
00:29:55 - The story behind AlexNet
00:36:15 - Transition from pure academia to Google
00:40:23 - The secret auction of Hinton’s company at NeurIPS
00:44:18 - Hinton’s start in psychology and carpentry
00:54:34 - Why computers should be grown rather than manufactured
01:06:57 - The function of sleep and Boltzmann Machines
01:11:49 - Need for negative data
01:19:35 - Visualizing data using t-SNE
Links:
Geoff's Bio: en.wikipedia.org/wiki/Geoffre...
Geoff's Twitter: geoffreyhinton?la...
Research and Publications: bit.ly/3z3M54e
Google Scholar Citations: bit.ly/3N892HJ
Story Behind the 2012 NIPS Auction: bit.ly/3t9xsIN
GLOM: bit.ly/3lYgWr6
Vector Institute: vectorinstitute.ai/
SUBSCRIBE TODAY:
Apple: apple.co/3NLtQED
Spotify: spoti.fi/3GBDpDM
Amazon: amzn.to/3NHlQoa
Google: bit.ly/3aD7ZkN
Acast: bit.ly/3x6ZYfw
Host: Pieter Abbeel
Executive Producers: Alice Patel & Henry Tobias Jones
Production: Fresh Air Production

Comments: 52

  • @mayukhdifferent · 1 year ago

    Prof. Hinton stands for the whole podcast because of his back problem. A cruel irony of nature, played on the gifted person who gave us the practical aspects of backpropagation. 🙏

  • @michaelvonreich74 · 1 year ago

    Hinton is an amazing teacher. I am taken back to 2019, when I first read the paper on distillation. Instead of any technical jargon, it starts with how insects have a larval form suitable for harvesting nutrients and an adult form for movement and reproduction. Hinton then drew the connection: maybe deep neural nets need to have different forms for training and inference! Instantly I was spellbound.
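For readers who haven't met the distillation idea this comment refers to, here is a minimal sketch, assuming a PyTorch setup; the temperature, mixing weight, and toy tensors are illustrative choices rather than values from the paper. A large "teacher" network trained for accuracy produces softened class probabilities, and a smaller "student" network intended for cheap inference is trained to match them alongside the true labels.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: the teacher's class probabilities, softened at temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                         soft_targets, reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage: random logits for a batch of 8 examples and 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()  # gradients flow only into the student's logits
print(loss.item())
```

Scaling the soft-target term by T² keeps its gradient magnitude roughly comparable to the hard-label term as the temperature changes.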

  • @whitepony8443 · 1 year ago

    I'm so jealous.

  • @jayp6955 · 1 year ago

    Interesting talk for sure, worth the whole watch. I had the fortune of chatting with Hinton after I cold-emailed him with a theory based on my undergraduate physics-neuroscience work in 2013, I remember him being a witty guy with great intuition. It's nice to see him interested in approaches other than backprop; ML needs a radical algorithm shift if it's going to get past the current plateau we're seeing with processing/data costs and model uncertainty. To me, these are dealbreakers and reason enough to explore everything all over again. Hinton's intuition that one-shot learning (many params, little data) is the goal of new first-principles approaches is sound; the current state of backprop is quite the opposite (small params, lots of data). The interviewer did a good job with the questions, focusing on spiking networks and low-power hardware -- Hinton is right that hardware will be the endgame in this industry. However, hardware design will need to be deeply influenced by algorithm certainty. The current game is to determine the correct software for learning, then optimize it in hardware. It will be a black-swan event; as soon as someone discovers "the next backprop", hardware production will blow up within a few years. It's likely that traditional ML is headed down a rabbit hole with no carrots at the bottom -- top minds in the industry are spending valuable time studying and characterizing "laws" for systems that do not have the power to come close to AGI. New approaches are needed. It's a shame we don't have people like Von Neumann alive today who can show us the way, but I'm optimistic that Hinton's head is in the right direction. If you're interested in ML research, the best thing to be working on right now is whatever everyone else isn't talking about. In other words, understand ChatGPT -- then promptly move on. AI today reminds me of physics in the 1890s, where the research community made so much progress in classical and statistical mechanics, but quantum mechanics and relativity were on the horizon, waiting to shake up the world.

  • @anonymous.youtuber · 1 year ago

    This interview is truly beautiful, interesting, clarifying. 🙏🏻❤️

  • @samuraijosh1595 · 1 year ago

    I just recently learned that this guy is a descendant of George Boole himself... mind = blown!!!!

  • @binjianxin7830 · 1 year ago

    It’s so brilliant how people were tricked into believing Geoff’s jokes. Thank you so much for the authentic and inspiring conversation!

  • @autobotrealm7897 · 1 year ago

    His jokes are top notch !

  • @josy26 · 2 years ago

    man I would've loved to be a student of Hinton

  • @prabhavkaula9697 · 2 years ago

    This episode is awesome!!! Thank you for the interview sir.

  • @TheRobotBrainsPodcast · 2 years ago

    Thanks for listening!!

  • @yuanli6224 · 1 year ago

    WOW, really moved by the "faith" part, how much science was pushed forward by those lonely heroes!

  • @ninatko · 2 years ago

    I have missed hearing professor Hinton talking without even realizing it :D

  • @andrewashmore8000 · 1 year ago

    Fascinating man and interview. Thanks for sharing

  • @deng6291 · 1 year ago

    Very insightful!

  • @BritishConcept · 1 year ago

    Fantastic interview. I especially enjoyed the part about how Hinton ended up at Google. I'm looking forward to part 2. How are you going to get Alex Krizhevsky for the season 3 finale? Trap him in a large net perhaps? 😉

  • @whitepony8443 · 1 year ago

    I knew it! All the answers are here; I just have to know you guys or be a native English speaker, then keep studying. Love you guys so much, please keep doing this.

  • @fredzacaria · 1 year ago

    Very informative, thanks. @23:23 how can we use spike timing algorithms for predictions?

  • @minikyu5643 · 2 years ago

    Thank you very much for the interview, so many deep insights.

  • @TheRobotBrainsPodcast · 1 year ago

    Glad you enjoyed it!

  • @sapienspace8814 · 1 year ago

    Interesting talk, thank you for sharing. @ around 17:00: that seems very much like fuzzy logic, with overlapping regions of general qualitative states that can be optimized with k-means clustering.

  • @bellinterlab8139 · 1 year ago

    What is happening here is that the whole world gets Geoff as their thesis advisor.

  • @pw7225 · 2 years ago

    Love Hinton's comment on Russia and Ukraine being different countries :)

  • @prabhavkaula9697 · 2 years ago

    I was waiting for the deep reinforcement learning and agi question :)

  • @michaelvonreich74 · 1 year ago

    Wait is there any way to hear what was said after 10:50? : (

  • @hoang._.9466 · 1 year ago

    thank u, helped me a lot

  • @sdmarlow3926 · 1 year ago

    That's like saying, to understand everything that is going on in a data center, you just need to understand how the transistor works.

  • @user-lk6ik3sc9l · 1 year ago

    Can't wait to listen to this - thanks Pieter! P.S.: Feel free to hop to the other side and be a guest on my show :)

  • @akash_goel · 1 year ago

    Can't believe this channel has ads enabled.

  • @user-sr6gi8uj8i · 1 year ago

    wow amazing

  • @stuart4003 · 2 years ago

    Brainchip's Akida neuromorphic commercial IP implementation uses spiking neural network technology.

  • @nootherchance7819 · 2 years ago

    Honestly, I had to google a bunch of terms to understand what our legendary Geoff Hinton was talking about 🤣. Thanks a bunch for this, really enjoyed the latest set of guests you've interviewed lately! Keep up the good work!

  • @TheRobotBrainsPodcast · 1 year ago

    Thanks for tuning in!

  • @tir0__ · 2 years ago

    2 gods in one frame

  • @nineteenfortyeight6762 · 1 year ago

    Damn, Hinton still looks good

  • @MLDawn · 1 year ago

    Free Energy minimization is the key!

  • @brandomiranda6703 · 1 year ago

    I don't get it. Why is backprop given credit for current models if it's just a dynamic programming technique for gradient computation? It's SGD doing the true magic, IMHO.

  • @brooklyna007 · 1 year ago

    This is an odd statement. If backprop is just another DP technique, then SGD is just another non-linear optimization technique. Every X is just another Y technique if you don't care to look at the work it took to get there, the space of other techniques that were searched, what it took to figure out this was the best model, etc. Hindsight is 20/20. This is like looking at modern particle physics and asking "Why is Noether's theorem given so much credit if it is just another theory about symmetries? IMHO it is gauge theory that is magic and truly describes particle physics". Or maybe more down to earth: "Why is the Fourier transform given so much credit if it is just an integral? IMHO it is the Fast Fourier Transform that is magic and processes all of our signals". In case I am being too obtuse, for a pure programmer without math experience: "Why is binary search given so much credit if it is just another recursive search? IMHO red-black trees are the true magic and what almost all tree-map implementations use". Backprop can surely be related to other algorithms and mathematical structures, but that doesn't reduce its importance. That is more about the overall system it fits within.
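The division of labor this exchange is debating can be made concrete with a tiny sketch, assuming PyTorch; the linear model, data, and learning rate below are made up for illustration. Backprop (reverse-mode autodiff) computes the gradient of the loss with respect to each parameter, and (S)GD is the separate rule that uses those gradients to update the parameters.

```python
import torch

w = torch.randn(3, requires_grad=True)        # parameters of a toy linear model
x, y = torch.randn(16, 3), torch.randn(16)    # made-up inputs and targets
lr = 0.1                                      # illustrative learning rate

for step in range(100):
    loss = ((x @ w - y) ** 2).mean()   # forward pass: mean squared error
    loss.backward()                    # backprop: fills w.grad with d(loss)/dw
    with torch.no_grad():
        w -= lr * w.grad               # gradient-descent update (SGD when batches are sampled)
        w.grad.zero_()                 # clear accumulated gradients for the next step
print(loss.item())
```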

  • @munagalalavanya3331 · 1 year ago

    You're worth more than a billion

  • @alansancherive7323 · 11 months ago

    14

  • @willd1mindmind639 · 1 year ago

    I think it is unfair to take on the burden of trying to tackle all of the challenges required to build something that is not even well defined in the first place, like "Artificial Intelligence". There is no single definition of it, so how can you say you have it, are near to it, or are revolutionizing it? As it stands, "Artificial Intelligence" is simply a subset of computer software (written by humans, of course) where, instead of explicitly writing all the conditional logic to detect and classify things in advance, the software does it automatically using data in statistical models. But that is not necessarily how the brain does it, because computers, as systems designed to operate on fixed-length binary types, are nothing like the brain. The brain is a bio-mechanical system where cells operate on chemical signal activation and transfer based on genetic blueprints for cellular specialization. What makes them more efficient is that bio-chemical processing is more of a mechanical operation than an algorithmic one, based on predefined bio-chemical signal activations and outputs. The chemical signals themselves are discrete and don't require a bunch of extra "work" to identify and process, since each cell is a mini machine designed to operate on specific chemical compounds. In a computer, by contrast, the input data is all encoded in the same fixed-length binary type system as all other data. Because these fixed-length types are based on numeric systems like base-10 numbers and have no specific meaning explicitly tied to an 'activation' in a neural network, any use of them requires complex algorithms to define, characterize, and detect such signals within a complex mathematical framework. Those algorithms in turn require large amounts of data and CPU processing to produce anything meaningful, as a measure of the work and energy spent. That doesn't mean you can't do things with these algorithms, but it doesn't mean they work like humans do either, or that silicon-based processors are any closer to being like the brain, because they aren't.

  • @vrushankdesai715 · 1 year ago

    Does it not seem to you too big a coincidence that artificial neural networks, inspired by the human brain, happen to work way better than anything that came before? Yes, the implementation details are much different than in biological systems, but the core concept (storing and processing information in a highly distributed, interconnected graph network) is the same.

  • @willd1mindmind639 · 1 year ago

    @@vrushankdesai715 It's not the same, because in biology each signal is encoded using specific chemical compounds that are discrete, and all "processing" happens at that level, which is akin to "bottom up" processing of very finely detailed discrete elements. So those "neuron networks" operate at a far lower level of detail than machine neural networks. For example, when light waves get converted in biological vision systems, each color is given its own discrete biochemical signal, resulting in imagery composed of many sub-networks of detailed collections of related colors. Those detailed networks then get passed into higher-order parts of the brain, where they get associated with patterns, features, textures, and ultimately objects at the high level. There is no extra "work" required to get that level of detail and hierarchical embedding of relationships. Whereas in a computer vision system, you start with a file, which is just a bucket of binary numbers, and you then have to do work to make sense of what those numbers represent, at nowhere near the same level of detail or segmentation as biological vision. And the only reason that is the case right now is that most machine learning algorithms are designed to work like Java, meaning portable code that can run on any kind of general-purpose architecture. So there are trade-offs in doing it that way versus having very specialized architectures with custom data types for encoding light information (not simply R, G, B) and so forth. What I am saying is that the fundamental difference between how computers work and how nature works is not trivial. For example, look at how sea creatures with the ability to dynamically change skin color and texture work. That is biology encoding textures and patterns, but for external use rather than internal use.

  • @vrushankdesai715 · 1 year ago

    @@willd1mindmind639 What you just described is exactly how convolutional neural networks work, though. Lower layers recognize lines/edges, and as you go deeper the embeddings are at higher levels of abstraction, until the final layer spits out classification predictions.

  • @willd1mindmind639 · 1 year ago

    @@vrushankdesai715 No, it is not. Just as an example, imagine an AI-based image editor. Ask that image editor to display just the color red. It can't do it, because there is no discrete encoding for red within a neural network, just like there is no encoding for green or blue. What you are talking about is a very high-level abstraction of a "neuron network", which in biology is a physical set of biochemical relationships based on discrete activations. That is why the brain can easily pick out the red parts of an image, no problem: each color is in its own distribution within neuron networks, and those neuron networks represent a much more detailed collection of relationships, at a much higher level of detail, than a convolutional network. Remember, a convolution is nothing more than a mathematical operation applied to all elements within a collection, such as a collection of pixels; that is how you get Gaussian blur. But that mathematical operation requires work to even try to distinguish red pixels from blue pixels, or sharp lines from gradients. That level of detail is provided in the brain mostly for free because of the bottom-up architecture of how biology works, with discrete encoding of information using chemicals. There is no "work" to disentangle one color from another using anything like a mathematical convolution algorithm. There are no convolutions in a fish's dynamic optical camouflage.
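To make the two operations in this exchange concrete, here is a small NumPy sketch with made-up array sizes and an illustrative kernel. It contrasts a convolution (a weighted sum over each pixel's neighborhood, here a Gaussian-style blur) with selecting only the red channel of an RGB array, which is plain indexing and involves no convolution.

```python
import numpy as np

img = np.random.rand(64, 64, 3)    # toy RGB image, values in [0, 1]

# "Show only the red parts": channel indexing, no convolution needed.
red_only = img.copy()
red_only[..., 1:] = 0.0            # zero out the green and blue channels

# A convolution: weighted sum over each pixel's 3x3 neighborhood,
# using a Gaussian-style blur kernel, applied here to the red channel.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0
padded = np.pad(img[..., 0], 1, mode="edge")
blurred = sum(kernel[i, j] * padded[i:i + 64, j:j + 64]
              for i in range(3) for j in range(3))

print(red_only.shape, blurred.shape)   # (64, 64, 3) (64, 64)
```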

  • @pierreshasta1480 · 1 year ago

    A single Poseidon torpedo can destroy an entire country like the United Kingdom; it's stupid to compare it to traditional torpedoes.

  • @gabbyafter7473 · 2 years ago

    Definitely need Lex to do this

  • @username2630 · 1 year ago

    I'm sorry, but Lex isn't even close to Pieter in technical knowledge; this interview gets into the content at the right level.

  • @gabbyafter7473 · 1 year ago

    @@username2630 okay

  • 1 year ago

    Lex's podcast derailed for me when he started moving away from AI and hanging out too much with right-wing BS

  • @Daniel-ih4zh · 1 year ago

    @ true, we need to hear more from men dressing up as women.
