Neural Network Backpropagation Example With Activation Function

Science and technology

The simplest possible backpropagation example, done with the sigmoid activation function.
Some brief comments on how gradients are calculated in actual implementations.
Edit: there is a slight omission/error in the da/dw expression, as pointed out by Laurie Linnett. The video has da/dw = a(1-a), but it should be i*a(1-a), because the argument to the sigmoid is the product i*w, whose derivative with respect to w is i.
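
As a sanity check, here is a minimal numerical verification of the corrected expression (the values of i and w below are arbitrary and not taken from the video):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Arbitrary illustrative values; the network is just a = sigmoid(i * w).
i, w = 1.5, 0.8

# Corrected analytic derivative: da/dw = i * a * (1 - a)
a = sigmoid(i * w)
analytic = i * a * (1.0 - a)

# Central finite-difference estimate of the same derivative
eps = 1e-6
numeric = (sigmoid(i * (w + eps)) - sigmoid(i * (w - eps))) / (2.0 * eps)

print(analytic, numeric)  # the two values agree to about six decimal places
```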

Comments: 38

  • @maxim25o2 · 4 years ago

    There are many people teaching backpropagation, but after watching tons of videos I think not many of them really know how it works. Nobody calculates and shows the numbers in the equations. This is the first tutorial that answers all my questions about backpropagation. Many other people just copy somebody else's work without understanding it. The tutorial is great: step by step, explaining the equations and breaking them down to their simplest understandable form. Great job!

  • @ss5380 · 4 months ago

    You are a life saver!! Thank you for breaking the whole process down in such an understandable way!!

  • @laurielinnett8072 · 4 years ago

    I think da/dw should be i*a*(1-a). Let z = i*w; then a = 1/(1+exp(-z)) and da/dz = a*(1-a). Since dz/dw = i, we get da/dw = (da/dz)*(dz/dw) = i*a*(1-a). Nevertheless, an excellent presentation, Mikael, showing backpropagation and weight updating for a simple example without distracting subscripts and superscripts. Keep up the good work. LML
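
In compact form, the chain-rule computation described in this comment (with z = i*w and a the sigmoid output) is:

$$
\frac{da}{dw} = \frac{da}{dz}\cdot\frac{dz}{dw} = a(1-a)\cdot i,
\qquad z = i\,w,\quad a = \sigma(z) = \frac{1}{1+e^{-z}}.
$$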

  • @mikaellaine9490 · 4 years ago

    Darn, you're correct! I forgot to add the derivative of the inner function w*i, which would indeed be i as a multiplier to a(1-a).

  • @yasserahmed2781 · 3 years ago

    I've been repeating the calculations several times on paper, trying to understand how the "i" disappeared. I even thought the video implicitly assumed that i was 1 or something, haha. Should always check the comments right away.

  • @Sandium · 3 years ago

    I was having difficulty wrapping my head around backpropagation. Thank you very much for this video!

  • @jackmiller2614 · 4 years ago

    Thanks so much for this video -- I have spent hours looking for a clean explanation of this and I have finally found it!

  • @redditrewindeverybody-subs9336 · 4 years ago

    Thanks for your videos! I'm finally able to implement backpropagation because I (kinda) understood the Maths behind it thanks to you! Please keep more vids coming!

  • @jiangfenglin4359 · 3 years ago

    Thank you so much for making these videos! I love your explanations. :)

  • @dmdjt · 4 years ago

    Thank you very much for your effort and excellent explanation!

  • @flavialan4544 · 3 years ago

    You are a real teacher!

  • @nasirrahim5610 · 1 year ago

    Your explanation is amazing 👏

  • @trevortyne534 · 1 year ago

    Excellent explanation, Mikael! Trev T, Sydney

  • @obsidianhead · 3 years ago

    Thank you for this excellent video

  • @raymond5887 · 4 years ago

    Thanks for the awesome explanation! I finally know how to do back prop now haha.

  • @justchary · 1 year ago

    Thank you very much. This was very helpful.

  • @benwan8927 · 3 years ago

    good and clear explanation

  • @vincentjr8013 · 4 years ago

    How will the bias be updated for a multilayer network?

  • @BB-sd6sm · 3 years ago

    great video mate

  • @amukh1_dev274 · 1 year ago

    Thank you! You earned a sub ❤🎉

  • @kyju77 · 2 years ago

    Hi, I'll join the others in thanking you for this video! Amazing explanation. Just one question: your example was made with, let's say, a single "training session". When I have dozens or hundreds of "training sessions", I calculate the average for the final error. What about da/dw, for example? Shall I also calculate the average over all trainings and then apply it? Or is there another approach? Thanks again.
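
For what it's worth, the common batch approach is to average the per-example gradients (not only the errors) and then apply a single update. A minimal sketch in Python, using a single-weight network a = sigmoid(i*w), a squared-error loss, and made-up data, none of which come from the video:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical (input i, target y) pairs; purely illustrative.
data = [(1.0, 0.5), (0.8, 0.2), (1.5, 0.9)]
w = 0.8      # initial weight
lr = 0.1     # learning rate

# Accumulate dE/dw over the batch, then apply one averaged update.
grad_sum = 0.0
for i, y in data:
    a = sigmoid(i * w)
    dE_da = 2.0 * (a - y)        # derivative of the squared error (a - y)^2
    da_dw = i * a * (1.0 - a)    # corrected derivative from the thread above
    grad_sum += dE_da * da_dw
w -= lr * grad_sum / len(data)
print(w)
```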

  • @kishorb.surwade6722 · 3 years ago

    Nice explanation. One special request: if you can give an illustration in MS Excel, it will aid understanding.

  • @zamanmakan2729 · 3 years ago

    Sorry, a(1-a) is the derivative of what? I didn't get how we arrived at that.

  • @sumayyakamal8857 · 3 years ago

    Thank you so much. I often hear that Hadamard multiplication is used, but what is it used for?

  • @TheRainHarvester · 1 year ago

    It seems like picking up the stored numbers would require indirection / following pointers / fetching from slow memory, while just recalculating would take fewer clock cycles.

  • @TheRainHarvester · 1 year ago

    Storing probably wins vs. recursive recalculation, which would be required for multiple branches of a wide NN.

  • @FPChris · 2 years ago

    As you go back, when do you update each weight? Do you go back to w1, adjust it, do a new forward pass, go back only to w2, do a new forward pass, go back only to w3, and so on?

  • @nickpelov · 1 year ago

    In the table at 12:37 there is no way to see when you should stop. Maybe you should have included the actual output y, or at least shown y on screen. So the goal is to reach a = 0.5, right?

  • @nickpelov · 1 year ago

    Question: there are faster activation functions, but how do they affect backpropagation? With the sigmoid function, the activation itself appears in the derivative; that's not the case for other functions. Is it worth the effort if the backpropagation would be a lot slower? Well, once the network is finished it'll be used many times, so I guess you can spend a lot more computing power on learning and then run the network on a device with less computing power. Correct me if I'm wrong.
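
For context, here is a small sketch (not from the video) of how a few common activations express their derivatives in the backward pass. Sigmoid and tanh can reuse the output a computed in the forward pass, while ReLU only needs the pre-activation z, so switching activation functions does not make backpropagation substantially more expensive:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def d_sigmoid_from_output(a):
    return a * (1.0 - a)            # sigmoid'(z) expressed via the output a

def d_tanh_from_output(a):
    return 1.0 - a * a              # tanh'(z) = 1 - tanh(z)^2

def d_relu_from_input(z):
    return 1.0 if z > 0.0 else 0.0  # ReLU uses the pre-activation z, not a

z = 0.7
print(d_sigmoid_from_output(sigmoid(z)),
      d_tanh_from_output(math.tanh(z)),
      d_relu_from_input(z))
```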

  • @onesun3023 · 4 years ago

    Why do you use a lowercase phi for the activation?

  • @nickpelov · 1 year ago

    I don't understand why you would calculate da/dw in advance rather than during the backpropagation. Do we use it more than once? da/dw has a different value on each iteration, so I don't see why we should calculate it up front. We can just take the output a and compute a(1-a) during the backpropagation.

  • @andreaardemagni6401 · 9 months ago

    Unfortunately, the volume of this video is too low to watch it on a phone. Such a shame :(

  • @edwardmontague2021 · 1 year ago

    Defined as a function in Maxima CAS: sigmoid(x):=1/(1+exp(-x))$ Regarding da/dw = d sigmoid(w*x + b)/dw, where x == a from the previous layer: using Maxima CAS, I obtain (x*%e^(w*x+b))/(2*%e^(w*x+b)+%e^(2*w*x)+%e^(2*b)). Whereas with a = sigmoid(w*x + b) and the derivative defined as a*(1-a), I obtain (%e^(w*x+b))/(2*%e^(w*x+b)+%e^(2*w*x)+%e^(2*b)), which differs by the multiplier x. Which is correct?

  • @youssryhamdy4923 · 3 years ago

    The sound of this video is low; please try to make it louder. Thanks.

  • 4 years ago

    Please make videos about neural networks in Python.

  • @knowledgeanddefense1054 · 1 year ago

    Fun fact, did you know Einstein and Hawking were socialists? Just thought you may find that interesting :)

  • @vidumini23 · 4 years ago

    Thank you so much for the clear, excellent explanation and effort.
