Backpropagation and the brain

Science and technology

Geoffrey Hinton and his co-authors describe a biologically plausible variant of backpropagation and report evidence that such an algorithm might be responsible for learning in the brain.
www.nature.com/articles/s4158...
Abstract:
During learning, the brain modifies synapses to improve behaviour. In the cortex, synapses are embedded within multilayered networks, making it difficult to determine the effect of an individual synaptic modification on the behaviour of the system. The backpropagation algorithm solves this problem in deep artificial neural networks, but historically it has been viewed as biologically problematic. Nonetheless, recent developments in neuroscience and the successes of artificial neural networks have reinvigorated interest in whether backpropagation offers insights for understanding learning in the cortex. The backpropagation algorithm learns quickly by computing synaptic updates using feedback connections to deliver error signals. Although feedback connections are ubiquitous in the cortex, it is difficult to see how they could deliver the error signals required by strict formulations of backpropagation. Here we build on past and recent developments to argue that feedback connections may instead induce neural activities whose differences can be used to locally approximate these signals and hence drive effective learning in deep networks in the brain.
Authors: Timothy P. Lillicrap, Adam Santoro, Luke Marris, Colin J. Akerman & Geoffrey Hinton
Links:
YouTube: / yannickilcher
Twitter: / ykilcher
BitChute: www.bitchute.com/channel/yann...
Minds: www.minds.com/ykilcher
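
To make the abstract's central idea concrete, here is a minimal numpy sketch of learning driven by activity differences: a feedback matrix nudges the hidden activity toward a better value, and the difference between nudged and free activity stands in for the error signal. All layer sizes, weight names, and the nudging constant beta are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network x -> h -> y with a separate feedback matrix B
# (not the transpose of W2, as strict backprop would require).
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
B = rng.normal(0.0, 0.5, (n_hid, n_out))  # feedback connections
lr, beta = 0.1, 0.1                       # learning rate, nudging strength

x = rng.normal(size=n_in)
target = np.array([1.0, -1.0])

# Free phase: an ordinary forward pass.
h_free = np.tanh(W1 @ x)
y_free = W2 @ h_free

# Feedback phase: the output error travels back through B and nudges the
# hidden activity; the difference between nudged and free activity plays
# the role of the error signal that backprop would have delivered.
h_nudged = h_free + beta * (B @ (target - y_free))

# Purely local, Hebbian-style updates driven by activity differences.
W2 += lr * np.outer(target - y_free, h_free)
W1 += lr * np.outer((h_nudged - h_free) * (1.0 - h_free**2), x)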

Comments: 39

  • @YannicKilcher
    4 years ago

    Note: This is a reupload. Sorry for the inconvenience.

  • @Stopinvadingmyhardware

    11 months ago

    The brain does this thing called axon regulation. In some parts where there are reuptake axons, they self-regulate to reduce the amount of feedback when overstimulated. Basically this means they close and leave the flooded neurotransmitter in the flow stream for the dendrites. This has the effect of down-regulating the signal. I saw another video where you covered the direct feedback mechanism and mentioned that neurons didn't have a backpropagation mechanism, and wanted to share that with you.

  • @MikkoRantalainen
    1 year ago

    Great video! I think I've seen at least a summary of this algorithm before, and this video makes it clearer.

  • @Murmur1131
    3 years ago

    Thanks so much! Super interesting! High class content!

  • @jyotiswarupsamal1587
    2 years ago

    This is a good explanation. I could understand the basics. Thank you

  • @redone9553
    3 years ago

    Thanks for the upload! But who says that we need negative voltage for a signed gradient? Why not assume high frequencies are positive and low are negative?

  • @stephanrasp3796
    4 years ago

    I think at 4:50, the perturbation should be added to w, not x, i.e. f(x, w+n). Awesome content btw!

  • @YannicKilcher

    4 years ago

    True, you want to jiggle the model itself. Thanks!
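
For readers wondering what "jiggling the model itself" looks like, here is a rough numpy sketch of weight perturbation under the f(x, w + n) formulation mentioned above; the loss function, step sizes, and variable names are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # Hypothetical scalar model: the prediction is a dot product.
    return (w @ x - y) ** 2

w = rng.normal(size=3)
x, y = np.array([1.0, 2.0, -1.0]), 0.5
sigma, lr = 0.01, 0.1

# Perturb the weights, not the input: compare f(x, w + n) with f(x, w)
# and move against the perturbation whenever it made the loss worse.
n = sigma * rng.normal(size=w.shape)
delta = loss(w + n, x, y) - loss(w, x, y)
w -= lr * (delta / sigma**2) * n  # noisy estimate of the gradient direction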

  • @terumiyuuki6488
    4 years ago

    It does sound suspiciously like Decoupled Neural Interfaces. Think you'd like to make a video on that? It would be great. Keep up the great work!

  • @YannicKilcher

    4 years ago

    Thanks for the suggestion!

  • @Neural_Causality
    4 years ago

    Does anyone know of an implementation of the idea proposed in the paper? Also, thanks a lot for sharing this paper and your comments on different papers; I think it's quite useful!

  • @YannicKilcher

    4 years ago

    If you look in the comments here you'll find a link to Bengio's paper about the algorithm; they might have something.

  • @Neural_Causality

    4 years ago

    @@YannicKilcher Thanks! Will check it

  • @dermitdembrot3091
    4 years ago

    Could it be that perturbation learning is just Hebbian learning where the updates are scaled by the "reward"? So if the "reward" is always 1 it would correspond to Hebbian learning. And for negative rewards the weights are changed to reduce the activations. In the r=-1 vs r=-2 case that would give a negative update for both but a stronger one for the second "action" (comparable to the REINFORCE algorithm).

  • @YannicKilcher

    4 years ago

    Yes that's exactly what's happening. Basically every unit does RL by itself.

  • @dermitdembrot3091

    4 years ago

    @@YannicKilcher Thanks for confirmation!
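
A hedged sketch of the per-unit RL view discussed in this thread: a single unit adds noise to its activation, observes a scalar reward, and makes a Hebbian update scaled by that reward, as in REINFORCE-style node perturbation. The reward function and constants below are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# A single linear unit doing REINFORCE-style node perturbation.
w = rng.normal(size=3)
x = np.array([0.5, -1.0, 2.0])
sigma, lr, baseline = 0.1, 0.05, 0.0

noise = sigma * rng.normal()
a = w @ x + noise               # perturbed activation
reward = -(a - 1.0) ** 2        # hypothetical scalar "reward"

# Reward-scaled Hebbian update: presynaptic activity times the noise that
# was "responsible" for the outcome, weighted by how good the outcome was.
w += lr * (reward - baseline) * noise * x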

  • @BuzzBizzYou
    3 years ago

    Won’t the proposed network create a massive IIR filter?

  • @Zantorc
    3 years ago

    For perturbation learning, excitation and inhibition use completely different mechanisms in the brain: the neurotransmitter is different and different cell types are involved. So rather than dampen all weights when the result is wrong, the brain can selectively dampen the excitation and/or amplify the inhibition. There is therefore an extra degree of freedom: the degree to which the correction falls on inhibitory vs excitatory neurons, as well as the magnitude of the correction. So this is at least a 2D correction vector, possibly more given that individual neuron sub-types may be differently affected. Therefore my claim is that in the brain it's not so much 'scalar feedback' as 'vector feedback', at least for perturbation learning. I suspect it is the lack of distinction between neuron types in ML which leads to poor results for perturbation learning.

  • @iuhh

    3 years ago

    I think the different mechanisms in a single brain neuron could probably be represented by two or more artificial neurons though, maybe in multiple layers that handle excitation and inhibition separately, so I'm not sure how that could relate to the quality of the results.

  • @Zantorc

    3 years ago

    @@iuhh The more you know about neurons, the less likely you are to think that. A point neuron can't do what a pyramidal neuron can do: it's predictive, synapse strength isn't the equivalent of a weight (it's one bit at most on distal and apical dendrites) and it doesn't cause firing; it's part of the pattern-matching process.

  • @bzqp2
    2 years ago

    I like how, the moment the paper is written by Hinton, you switch from drawing the layers horizontally to drawing them vertically xd

  • @8chronos
    1 year ago

    Thanks for this nice video. One thing still seems unclear to me: does this only allow for possibly near-biological NN training, or are there other advantages too? E.g. is it faster than backprop?

  • @moormanjean5636

    1 year ago

    This is what I would like to know as well. I would guess it's slower, but the only way to train networks in a comparable manner given certain assumptions.

  • @victorrielly4588
    3 years ago

    Here's a link to an arXiv paper on difference target propagation, for anyone like me who doesn't want to pay to read the biology paper. Also, this paper looks like the original work describing the machine learning aspect of this idea. arxiv.org/pdf/1412.7525.pdf
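
For reference, a rough numpy sketch of the difference target propagation update described in the linked paper; layer sizes, variable names, and constants are illustrative, and the inverse weights W_g would normally also be trained to reconstruct the layer below.

import numpy as np

rng = np.random.default_rng(0)

# Forward weights W_f1, W_f2 and a learned approximate inverse g (feedback).
n_in, n_hid, n_out = 4, 6, 2
W_f1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W_f2 = rng.normal(0.0, 0.5, (n_out, n_hid))
W_g = rng.normal(0.0, 0.5, (n_hid, n_out))  # feedback / inverse weights
lr = 0.05

x, y = rng.normal(size=n_in), np.array([1.0, -1.0])

def g(v):
    # Approximate inverse of the top layer (the feedback path).
    return np.tanh(W_g @ v)

h1 = np.tanh(W_f1 @ x)
h2 = W_f2 @ h1

# Output target: a small gradient step on the top-layer loss only.
h2_target = h2 - lr * (h2 - y)

# Difference correction: propagate the target through g, then subtract the
# reconstruction error g(h2) - h1, so a perfect inverse returns h1 exactly.
h1_target = h1 + g(h2_target) - g(h2)

# Each layer now makes a purely local move toward its own target.
W_f2 += lr * np.outer(h2_target - h2, h1)
W_f1 += lr * np.outer((h1_target - h1) * (1.0 - h1**2), x)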

  • @joirnpettersen
    4 years ago

    If the brain uses backpropagation, and we can someday figure out a way to model it mathematically, would adversarial attacks become a thing we might need to worry about? If not, would it be for a lack of information, or is there some difference between the way the brain does it and the way we do it on computers?

  • @YannicKilcher

    4 years ago

    Very nice question. I think this is as yet unanswered, but definitely possible.

  • @BrtiRBaws

    4 years ago

    Maybe we can see optical illusions as a sort of adversarial attack :)

  • @maloxi1472

    4 years ago

    ​@@BrtiRBaws Yes, absolutely. I would argue that things like optical illusions, ideological belief structures, very elaborate lies, hallucinogens, unhealthy but tasty food... are all adversarial attacks on different substructures of the brain

  • @priyamdey3298

    3 years ago

    Numenta shows that if the information flow (both the inputs and the weights of neurons) is quite sparse, then a network becomes quite robust to perturbations / random noise. And they say that the brain has a very sparse information flow. So maybe yes, we have yet to include more meaningful priors (like sparseness) in the right way to make networks robust.

  • @bzqp2

    2 years ago

    Hitting a guy in the head with a shovel can be an adversarial neural network attack.
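
Regarding the sparsity point a few comments above, here is a minimal sketch of a k-winners-take-all activation, one simple way to impose the kind of sparse activity being described; the function name and sizes are invented for illustration.

import numpy as np

def k_winners(a, k):
    # Keep the k largest activations and zero out the rest (sparse activity).
    out = np.zeros_like(a)
    top = np.argsort(a)[-k:]
    out[top] = a[top]
    return out

rng = np.random.default_rng(0)
a = rng.normal(size=10)
print(k_winners(a, 2))  # only 2 of 10 units remain active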

  • @stefanogrillo6040
    7 months ago

    Duper

  • @sehbanomer8151
    4 years ago

    I thought this was part 2 or something

  • @YannicKilcher

    4 years ago

    no, sorry, I deleted it by accident

  • @herp_derpingson
    4 years ago

    DEJA VU

  • @YannicKilcher

    4 years ago

    yea sorry, I hope YT reinstates the old one

  • @palfers1
    2 months ago

    2020 is quite dated.

  • @ThinkTank255
    1 year ago

    How many times do I have to tell you guys, the brain doesn't "learn"??? The brain *memorizes* verbatim. For prediction, the brain says, "What matches my memories the best?" and chooses that as a prediction. It is as simple as that. Brains are generally *not* as good as backpropagation at generalization, but that feature of brains is actually very useful for nonlinear spatio-temporal patterns, such as doing mathematics and logic. This is why, to date, ML based methods have not been able to solve extremely complex reasoning based problems. They overgeneralize when it comes to nonlinear logical processes. It is actually extremely easy to prove the brain doesn't use backpropagation. How many times do you have to read a book to give a good summary? Once. Etc.... The brain learns *instantly* by rote memorization. Instant learning brings many evolutionary benefits.

  • @DajesOfficial

    1 year ago

    How many books had you read before it became possible for you to give a good summary after a single reading? Let's test your hypothesis by giving a book to an infant and asking them to give a good summary on the first try.

  • @ThinkTank255

    1 year ago

    @@DajesOfficial You've actually proven my point. The problem is, most humans aren't particularly good at remembering factual information. This is because 99.99% of the information you are getting at any given time isn't factual information. It's random sights, sounds, and smells that your brain deems important for your survival. The reason adults are better than infants is that they have practiced the skill of homing in on factual information.
