The Absolutely Simplest Neural Network Backpropagation Example

Science & Technology

I'm (finally after all this time) thinking of new videos. If I get attention in the donate button area, I will proceed:
www.paypal.com/donate/?busine...
Sorry, there is a typo: at 3:33, dC/dw should be 4.5w - 2.4, not 4.5w - 1.5.
NEW IMPROVED VERSION AVAILABLE: • The Absolut...
The absolutely simplest gradient descent example, with only two layers and a single weight. Comment below and click like!
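
For anyone who wants to run the example, here is a minimal Python sketch of the single-weight, two-layer network described in the video, using the corrected gradient from the typo note above (i = 1.5, y = 0.8); the starting weight, learning rate, and step count are illustrative assumptions:

    i = 1.5   # input
    y = 0.8   # desired output (per the typo correction above)
    w = 0.8   # initial weight (assumed starting value)
    lr = 0.1  # learning rate (assumed)

    for step in range(10):
        a = i * w                # forward pass: a = i * w
        C = (a - y) ** 2         # cost: C = (a - y)^2
        dC_dw = 2 * (a - y) * i  # chain rule: dC/da * da/dw
        w = w - lr * dC_dw       # gradient descent update
        print(f"step {step}: w = {w:.4f}, a = {a:.4f}, C = {C:.6f}")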

Comments: 185

  • @GustavoMeschino · 1 month ago

    GREAT, it was a perfect inspiration for me to explain this critical subject in a class. Thank you!

  • @markneumann381 · 1 month ago

    Really nice work. Thank you so much for your help.

  • @animatedzombie64 · 1 month ago

    Best video ever about backpropagation on the internet 🛜

  • @lazarus8011 · 1 month ago

    Unreal explanation

  • @whywhatwherein · 1 month ago

    finally, a proper explanation.

  • @bhlooli · 1 year ago

    Thanks very helpful.

  • @Vicente75480 · 5 years ago

    Dude, this was just what I needed to finally understand the basics of Back Propagation

  • @webgpu · 1 month ago

    if you _Really_ liked his video, just click the first link he put on the description 👍

  • @AjitSingh147 · 1 year ago

    GOD BLESS YOU DUDE! SUBSCRIBED!!!!

  • @fredfred9847 · 2 years ago

    Great video

  • @polybender · 25 days ago

    best on internet.

  • @drummin2dabeat · 3 months ago

    What a breakthrough, thanks to you. BTW, not to nitpick, but you are missing a close paren on f(g(x), which should be f(g(x)).

  • @justinwhite2725 · 3 years ago

    @8:06 this was super useful. That's a fantastic shorthand. That's exactly the kind of thing I was looking for, something quick I can iterate over all the weights and find the most significant one for each step.

  • @riccardo700 · 3 months ago

    My maaaaaaaannnnn TYYYY

  • @rachidbenabdelmalek3098 · 1 year ago

    Thanks you

  • @anirudhputrevu3878 · 2 years ago

    Thanks for making this

  • @svtrilogywestsail3278 · 2 years ago

    this was kicking my a$$ until i watched this video. thanks

  • @bedeamadi9317 · 3 years ago

    My long search ends here, you simplified this a great deal. Thanks!

  • @formulaetor8686 · 1 year ago

    Thats sick bro I just implemented it

  • @adoughnut12345 · 3 years ago

    This was great. Removing the non-linearity and including basic numbers as context helped drive this material home.

  • @gerrypaolone6786 · 2 years ago

    If you use ReLU there is nothing more than that.

  • @mateoacostarojas6031 · 5 years ago

    just perfect, simple and with this we can extrapolate easier when in each layer there are more than one neuron! thaaaaankksss!!

  • @sameersahu3987 · 1 year ago

    Thanks

  • @ilya5782 · 6 months ago

    To understand mathematics, I need to see an example. And this video, from start to end, is awesome, with a quality presentation. Thank you so much.

  • @EthanHofton · 3 years ago

    Very clearly explained and easy to understand. Thank you!

  • @gautamdawar5067 · 3 years ago

    After a long frantic search, I stumbled upon this gold. Thank you so much!

  • @ExplorerSpace · 1 year ago

    @Mikael Laine even though you say that @3:33 has a typo, I can't see the typo. 1.5 is correct because y is the actual desired output and it is 0.5, so 3.0 * 0.5 = 1.5.
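
    To make the arithmetic in this thread concrete, here is a minimal Python check; the only thing that differs between the two answers is the value used for y (0.5 as argued here, 0.8 as in the description's correction):

        i = 1.5
        for y in (0.5, 0.8):
            # dC/dw = i * 2 * (i*w - y) = (2*i*i)*w - (2*i*y)
            print(f"y = {y}: dC/dw = {2*i*i:g}*w - {2*i*y:g}")
        # y = 0.5 gives 4.5w - 1.5 (as on the slide); y = 0.8 gives 4.5w - 2.4 (the description's correction).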

  • @saral123 · 3 years ago

    Fantastic. This is the most simple and lucid way to explain backprop. Hats off

  • @arashnozarinejad9915 · 4 years ago

    I had to write a comment and thank you for your very precise yet simple explanation, just what I needed. Thank you sir.

  • @sabinbaral4132 · 1 year ago

    Good content sir keep making these i subscribe

  • @SureshBabu-tb7vh · 5 years ago

    You made this concept very simple. Thank you

  • @Freethinker33 · 2 years ago

    I was just looking for this explanation to align derivatives with gradient descent. Now it is crystal clear. Thanks Mikael

  • @alexandrmelnikov5126 · 7 months ago

    man, thanks!

  • @LunaMarlowe327 · 2 years ago

    very clear

  • @AAxRy · 3 months ago

    THIS IS SOO FKING GOOD!!!!

  • @hamedmajidian4451 · 3 years ago

    Greatly illustrated, thanks

  • @srnetdamon · 3 months ago

    Man, at 4:08 I don't understand how you get the value 4.5 in the expression 4.5w - 1.5.

  • @sparkartsdistinctions1257 · 3 years ago

    I watched almost every video on back propagation, even Stanford's, but never got such a clear idea until I saw this one ☝️. Best and cleanest explanation. My first 👍🏼, which I rarely give.

  • @webgpu · 1 month ago

    A 👍 is very good, but if you click on the first link in the description, it would be even better 👍

  • @sparkartsdistinctions1257 · 1 month ago

    @@webgpu 🆗

  • @javiersanchezgrinan919 · 1 month ago

    Great video. Just one question: this is for a 1 x 1 input and a batch size of 1, right? If we have, let's say, a batch size of 2, is it just a matter of adding (b-y)^2 to the loss function (C = (a-y)^2 + (b-y)^2), with b = w * j and j the input of the second batch element? Then you just perform the backpropagation with partial derivatives. Is that correct?
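
    A minimal sketch of the batch-of-2 idea asked about above, assuming the batch cost is simply the sum of the two squared errors (the second sample's values are hypothetical):

        # Two samples (i1, y1) and (i2, y2) sharing a single weight w.
        # With C = (i1*w - y1)^2 + (i2*w - y2)^2, the gradient is the sum of the per-sample gradients.
        i1, y1 = 1.5, 0.8
        i2, y2 = 2.0, 1.0   # hypothetical second sample
        w = 0.8

        C = (i1*w - y1)**2 + (i2*w - y2)**2
        dC_dw = 2*(i1*w - y1)*i1 + 2*(i2*w - y2)*i2
        print(C, dC_dw)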

  • @praneethaluru2601 · 3 years ago

    The best short video explanation of the concept on KZread till now...

  • @Controlvers · 3 years ago

    Thank you for sharing this video!

  • @santysayantan · 2 years ago

    This makes more sense than anything I ever heard in the past! Thank you! 🥂

  • @brendawilliams8062 · 9 months ago

    It beats the 1002165794 thing and 1001600474 jumping and calculating with 1000325836 and 1000564416. Much easier 😊

  • @jameshopkins3541 · 9 months ago

    You are wrong: tell me, what is deltaW?

  • @demetriusdemarcusbartholom8063 · 1 year ago

    ECE 449 UofA

  • @adriannyamanga1580 · 4 years ago

    dude please make more videos. this is amazing

  • @zeljkotodor · 2 years ago

    Nice and clean. Helped me a lot!

  • @outroutono4937 · 1 year ago

    Thank you bro! It's so much easier to visualize when it's presented like that.

  • @mixhybrid · 4 years ago

    Thanks for the video! Awesome explanation

  • @sunilchoudhary8281 · 2 months ago

    I am so happy that I can't even express myself right now

  • @webgpu · 1 month ago

    there's a way you can express your happiness AND express your gratitude: by clicking on the first link in the description 🙂

  • @aorusaki · 4 years ago

    Very helpful tutorial. Thanks!

  • @giuliadipalma5042 · 2 years ago

    thank you, this is exactly what I was looking for, very useful!

  • @ronaldmercado4768 · 8 months ago

    Absolutely simple. A very useful illustration, not only to understand backpropagation but also to show gradient descent optimization. Thanks a lot.

  • @xflory26x · 1 month ago

    Not kidding. This is the best explanation of backpropagation on the internet. The way you're able to simplify this "complex" concept is *chef's kiss* 👌

  • @meanderthalensis · 2 years ago

    Helped me so much!

  • @zh4842 · 4 years ago

    excellent video, simple & clear many thanks

  • @elgs1980 · 3 years ago

    Thank you so much!

  • @jakubpiekut1446 · 2 years ago

    Absolutely amazing 🏆

  • @RohitKumar-fg1qv · 5 years ago

    Exactly what i needed

  • @lhyd7hak · 2 years ago

    Thanks for a very explanatory video.

  • @bettercalldelta · 2 years ago

    I'm currently programming a neural network from scratch, and I am trying to understand how to train it, and your video somewhat helped (didn't fully help cuz I'm dumb)

  • @JAYSVC234 · 9 months ago

    Thank you. Here is a PyTorch implementation:

        import torch
        import torch.nn as nn

        class C(nn.Module):
            def __init__(self):
                super(C, self).__init__()
                r = torch.zeros(1)
                r[0] = 0.8
                self.r = nn.Parameter(r)

            def forward(self, i):
                return self.r * i

        class L(nn.Module):
            def __init__(self):
                super(L, self).__init__()

            def forward(self, p, t):
                loss = (p - t) * (p - t)
                return loss

        class Optim(torch.optim.Optimizer):
            def __init__(self, params, lr):
                defaults = {"lr": lr}
                super(Optim, self).__init__(params, defaults)
                self.state = {}
                for group in self.param_groups:
                    for par in group["params"]:
                        # print("par: ", par)
                        self.state[par] = {"mom": torch.zeros_like(par.data)}

            def step(self):
                for group in self.param_groups:
                    for par in group["params"]:
                        grad = par.grad.data
                        # print("grad: ", grad)
                        mom = self.state[par]["mom"]
                        # print("mom: ", mom)
                        mom = mom - group["lr"] * grad
                        # print("mom update: ", mom)
                        par.data = par.data + mom
                        print("Weight: ", round(par.data.item(), 4))

        # r = torch.ones(1)
        x = torch.zeros(1)
        x[0] = 1.5
        y = torch.zeros(1)
        y[0] = 0.5
        c = C()
        o = Optim(c.parameters(), lr=0.1)
        l = L()
        print("x:", x.item(), "y:", y.item())
        for j in range(5):
            print("_____Iter ", str(j), " _______")
            o.zero_grad()
            p = c(x)
            loss = l(p, y).mean()
            print("prediction: ", round(p.item(), 4), "loss: ", round(loss.item(), 4))
            loss.backward()
            o.step()

  • @rdprojects2954 · 3 years ago

    Excellent, please continue; we need this kind of simplicity in NN explanations.

  • @DaSticks · 5 months ago

    Great video; going to spend some time working out how it looks for multiple neurons, but a demonstration of that would be awesome.

  • @SuperYtc1 · 16 days ago

    4:03 Shouldn't 3(a - y) be 3(1.5*w - 0.8) = 4.5w - 2.4? Where have you got -1.5 from?

  • @RaselAhmed-ix5ee · 3 years ago

    In the final equation, why is it 4.5w - 1.5? It should be 4.5w - 2.4, since y = 0.8, so 3*0.8 = 2.4.

  • @kamilkaya5367 · 2 years ago

    Yes you are right. I noticed too.

  • @SamuelBachorik-Mrtapo8-ApeX · 2 years ago

    Hi, I have a question for you: at 3:42 you have 1.5*2(a-y) = 4.5*w - 1.5. How did you get this result?

  • @nickpelov · 1 year ago

    ... in case someone missed it like me - it's in the description (it's a typo). y=0.8; a=i*w = 1.5*w, so 1.5*2(a-y) =3*(1.5*w - 0.8) = 4.5*w - 3*0.8 = 4.5*w - 2.4 is the correct formula.

  • @satishsolanki9766 · 3 years ago

    Awesome dude. Much appreciate your effort.

  • @OviGomy · 5 months ago

    I think there is a mistake. 4.5w - 1.5 is correct: on the first slide you said 0.5 is the expected output, so "a" is the computed output and "y" is the expected output, and 0.5 * 1.5 * 2 = 1.5 is correct. You need to correct the "y" next to the output neuron to 0.5.

  • @TruthOfZ0 · 20 days ago

    If we take the derivative dC/dw directly from C = (a-y)^2, it's the same thing, right? Do we really have to split it into da/dw and dC/da?
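
    A quick symbolic check of this question (a sketch using sympy, with i = 1.5 and y = 0.8 assumed from the corrected example): differentiating C directly with respect to w gives exactly the same expression as multiplying dC/da by da/dw.

        import sympy as sp

        w, a = sp.symbols('w a')
        i, y = 1.5, 0.8                 # assumed values from the corrected example

        C_of_a = (a - y) ** 2           # cost as a function of the activation a
        a_of_w = i * w                  # a = i * w

        direct = sp.diff(C_of_a.subs(a, a_of_w), w)                        # dC/dw in one step
        chained = sp.diff(C_of_a, a).subs(a, a_of_w) * sp.diff(a_of_w, w)  # dC/da * da/dw
        print(sp.expand(direct), sp.expand(chained))                       # both equal 4.5*w - 2.4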

  • @shirish3008 · 3 years ago

    This is the best tutorial on back prop👏

  • @tellmebaby183 · 1 year ago

    Perfect

  • @paurodriguez5364 · 1 year ago

    Best explanation I have ever seen, thanks.

  • 4 years ago

    Very helpful

  • @user-og9zn9vf4k · 4 years ago

    thanks a lot for that explanation :)

  • @rafaelscarpe2928 · 3 years ago

    Thank you

  • @malinyamato2291 · 1 year ago

    thanks a lot... a great start for me to learn NNs :)

  • @evanparshall1323 · 3 years ago

    This video is very well done. Just need to understand implementation when there is more than one node per layer

  • @mikaellaine9490 · 3 years ago

    Have you looked at my other videos? I have a two-dimensional case in this video: kzread.info/dash/bejne/dJimz4-bf6abdc4.html

  • @riccardo700 · 3 months ago

    I have to say it. You have done the best video about backpropagation because you chose to explain the easiest example, no one did that out there!! Congrats prof 😊

  • @webgpu · 1 month ago

    did you _really_ like his video? Then, i'd suggest you click the first link he put on the description 👍

  • @ApplepieFTW · 1 year ago

    It clicked after just 3 minutes. Thanks a lot!!

  • @mahfuzurrahman4517 · 7 months ago

    Bro this is awesome, I was struggling to understand chain rule, now it is clear

  • @banpridev · 1 month ago

    Wow, you did not lie in the title.

  • @giorgosmaragkopoulos9110 · 2 months ago

    So what is the clever part of backprop? Why does it have a special name instead of just being called "gradient estimation"? How does it save time? It looks like it just calculates all the derivatives one by one.

  • @kitersrefuge7353 · 6 months ago

    Brilliant. What would be awesome is to then expand further, if you would, and explain multiple rows of nodes... in order to visualise, if possible, multiple routes to a node and so on. I stress "if possible...".

  • @grimreaperplayz5774 · 1 year ago

    This is absolutely awesome. Except..... Where did that 4.5 come from???

  • @delete7316 · 10 months ago

    You’ve probably figured it out by now but just in case: i = 1.5, y=0.8, a = i•w. This means the expression for dC/dw = 1.5 • 2(1.5w - 0.8). Simplify this and you get 4.5w - 2.4. This is where the 4.5 comes from. Extra note: in the description it says -1.5 was a typo and the correct number is -2.4.

  • @shilpatel5836 · 3 years ago

    Bro i just worked it through and it makes so much sense once you do the partial derivatives and do it step by step and show all the working

  • @Leon-cm4uk · 7 months ago

    The error should be (1.2 - 0.5)^2 = 0.7^2 = 0.49. So y is 0.49 and not 0.8, as it is displayed after minute 01:08.

  • @popionlyone · 5 years ago

    You made it easy to understand. Really appreciated it. You also earned my first KZread comment.

  • @samiswilf · 3 years ago

    This video is gold.

  • @jks234 · 3 months ago

    I see. As previously mentioned, there are a few typos. For anyone watching, please note there are a few places where 0.8 and 0.5 are swapped for each other.

    That being said, this explanation has opened my eyes to the fully intuitive picture of what is going on... Put simply, we can view each weight as an "input knob", and we want to know how each one contributes to the overall cost/loss. To do this, we link (chain) each component's local influence together until we have created a function that describes how the weight maps to the overall cost. Once we have found that, we can adjust that knob with the aim of lowering the total loss a small amount based on what we call the "learning rate".

    Put even more succinctly, we are converting each weight's "local frame of reference" to the "global loss" frame of reference and then adjusting each weight with that knowledge. We would only need to find these functions once for a network. Once we know how every knob influences the cost, we can tweak them based on the next training input using this knowledge. The only difference between each training example is the model's actual output, which is then used to adjust the weights and lower the total loss.

  • @alexaona8805 · 3 years ago

    Thanks a lot :)

  • @TrungNguyen-ib9mz · 3 years ago

    Thank you for your video. But I'm a bit confused about 1.5*2(a-y) = 4.5*w - 1.5. Could you please explain that? Thank you so much!

  • @user-gq7sv9tf1m · 3 years ago

    I think this is how he got there : 1.5 * 2(a - y) = 1.5 * 2 (iw - 0.5) = 1.5 * 2 (1.5w - 0.5) = 1.5 * (3w - 1) = 4.5w - 1.5

  • @christiannicoletti9762 · 3 years ago

    @@user-gq7sv9tf1m dude thanks for that, I was really scratching my head over how he got there too

  • @Fantastics_Beats · 1 year ago

    I am also confused by this error.

  • @morpheus1586 · 1 year ago

    @@user-gq7sv9tf1m y is 0.8 not 0.5

  • @user-mc9rt9eq5s · 3 years ago

    Thanks! This is awesome. I have a question: if we make the NN a little bit more complicated (adding an activation function for each layer), what will be the difference?
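
    A minimal sketch of how the chain grows by one factor when an activation is added to this example (a sigmoid is used here purely as an illustration; all numbers are assumed):

        import math

        def sigmoid(z):
            return 1.0 / (1.0 + math.exp(-z))

        i, y, w = 1.5, 0.8, 0.8   # illustrative values

        # Forward pass with an activation: z = i*w, a = sigmoid(z), C = (a - y)^2
        z = i * w
        a = sigmoid(z)
        C = (a - y) ** 2

        # Backward pass: the chain rule simply gains one extra factor, da/dz = sigmoid'(z)
        dC_da = 2 * (a - y)
        da_dz = a * (1 - a)       # derivative of the sigmoid at z
        dz_dw = i
        dC_dw = dC_da * da_dz * dz_dw
        print(C, dC_dw)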

  • @btmg4828 · 1 month ago

    I don't get it. You write 1.5*2(a-y) = 4.5w - 1.5, but why? It should be 4.5w - 2.4, because 2*0.8*(-1.5) = -2.4. Where am I wrong?

  • @ahmetpala7945 · 4 years ago

    Thank you for the easiest explanation of backpropagation, dude

  • @zemariagp · 9 months ago

    Why do we ever need to consider multiple levels? Why not just think about getting the right weight given the output "in front" of it?

  • @dcrespin · 1 year ago

    The video shows what is perhaps the simplest case of a feedforward network, with all the advantages and limitations that extreme simplicity can have. From here to full generalization several steps are involved.

    1. More general processing units. Any continuously differentiable function of inputs and weights will do; these inputs and weights can belong not only to Euclidean spaces but to any Hilbert spaces as well. Derivatives are linear transformations, and the derivative of a unit is the direct sum of the partial derivatives with respect to the inputs and with respect to the weights.

    2. Layers with any number of units. Single-unit layers can create a bottleneck that renders the whole network useless. Putting together several units in a layer is equivalent to taking their product (as functions, in the set-theoretical sense). Layers are functions of the totality of inputs and weights of the various units. The derivative of a layer is then the product of the derivatives of the units. This is a product of linear transformations.

    3. Networks with any number of layers. A network is the composition (as functions, in the set-theoretical sense) of its layers. By the chain rule, the derivative of the network is the composition of the derivatives of the layers. Here we have a composition of linear transformations.

    4. Quadratic error of a function.

    This comment is becoming too long. But a general viewpoint clarifies many aspects of BPP. If you are interested in the full story and have some familiarity with Hilbert spaces, please Google for papers dealing with backpropagation in Hilbert spaces. Daniel Crespin
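
    As a compact restatement of points 2 and 3 above (a sketch in standard notation, not taken from the original comment): if the network is the composition of its layers, its derivative is the composition (product) of the layer derivatives evaluated along the forward pass,

        N = L_k \circ \cdots \circ L_1, \qquad
        DN(x_0) = DL_k(x_{k-1}) \, DL_{k-1}(x_{k-2}) \cdots DL_1(x_0),
        \quad \text{where } x_j = L_j(x_{j-1}).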

  • @hegerwalter · 1 month ago

    Where and how did you get the learning rate?
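
    For context, a one-line sketch of where the learning rate enters: it is a hand-picked hyperparameter in the gradient descent update, not something backpropagation computes. The numbers below are illustrative; 0.1 matches the lr used in the PyTorch comment above.

        # Gradient descent step: the learning rate lr scales how far we move along -dC/dw.
        w, dC_dw, lr = 0.8, 1.2, 0.1
        w = w - lr * dC_dw
        print(round(w, 4))   # 0.68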

  • @st0a · 9 months ago

    Great video! One thing to mention is that the cost function is not always convex, in fact it is never truly convex. However, as an example this is really well explained.

  • @Janeilliams · 2 years ago

    Okay!! It was simple and clear, BUT things get complex when I add two inputs or hidden layers. How do I do the partial derivatives then? If anyone has an appropriate and simple video covering more than one input or hidden layer, please throw it in the reply box, thanks!

  • @MATLAB1Expert1 · 2 years ago

    I like this video.

  • @mysteriousaussie3900 · 3 years ago

    Are you able to briefly describe how the calculation at 8:20 works for a network with multiple neurons per layer?
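
    A minimal numpy sketch of the same calculation with a hidden layer of two linear neurons (no activation, to stay close to the video's setup; all weights and values are illustrative):

        import numpy as np

        x = np.array([1.5])             # input
        y = 0.8                         # target
        W1 = np.array([[0.4], [0.7]])   # hidden-layer weights, shape (2, 1)
        W2 = np.array([[0.3, 0.5]])     # output-layer weights, shape (1, 2)

        # Forward pass
        h = W1 @ x                      # hidden activations, shape (2,)
        a = (W2 @ h)[0]                 # scalar output
        C = (a - y) ** 2

        # Backward pass: same chain rule, now with vectors and matrices
        dC_da = 2 * (a - y)
        dC_dW2 = (dC_da * h).reshape(1, 2)   # gradient w.r.t. the output weights
        dC_dh = dC_da * W2[0]                # how the cost moves with each hidden activation
        dC_dW1 = np.outer(dC_dh, x)          # gradient w.r.t. the hidden weights, shape (2, 1)
        print(C, dC_dW2, dC_dW1)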

  • @thiagocrepaldi6071 · 5 years ago

    Great video. I believe there is a typo at 1:10. y should be 0.5 and not 0.8. That might cause some confusion, especially at 3:34, when we use numerical values to calculate the slope (C) / slope (w)

  • @mikaellaine9490 · 5 years ago

    Thanks for pointing that out; perhaps time to make a new video!

  • @mikaellaine9490 · 5 years ago

    yes, that should say a=1.2

  • @Vicente75480 · 5 years ago

    +Mikael Laine I would be so glad if you could make more videos explaining these kinds of concepts and how they actually work at a code level.

  • @mikaellaine9490 · 5 years ago

    Did you have any particular topic in mind? I'm planning to make a quick video about the mathematical basics of backpropagation: automatic differentiation. I can also make a video about how to implement the absolutely simplest neural network in TensorFlow/Python. Let me know if you have a specific question. I do have quite a bit of experience in TF.

  • @mychevysparkevdidntcatchfi1489 · 5 years ago

    @@mikaellaine9490 How about adding that to the description? Someone else asked that question.

  • @Blue-tv6dr · 3 years ago

    Amazing
