Neural Network Backpropagation Example With Activation Function
Science and technology
The simplest possible backpropagation example, done with the sigmoid activation function.
Some brief comments on how gradients are calculated in actual implementations.
Edit: there is a slight omission/error in the da/dw expression, as pointed out by Laurie Linnett. The video has da/dw = a(1-a), but it should be i*a(1-a), because the argument of a is the inner function (i*w), whose derivative with respect to w is i.
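The corrected expression can be sanity-checked numerically; below is a small Python sketch (the function names and sample values are mine, not from the video):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def analytic_grad(i, w):
    # corrected expression: da/dw = i * a * (1 - a), with a = sigmoid(i * w)
    a = sigmoid(i * w)
    return i * a * (1.0 - a)

def numeric_grad(i, w, eps=1e-6):
    # central finite-difference approximation of da/dw
    return (sigmoid(i * (w + eps)) - sigmoid(i * (w - eps))) / (2.0 * eps)

# the two agree closely, while a*(1-a) alone would be off by a factor of i
print(analytic_grad(1.5, 0.8), numeric_grad(1.5, 0.8))
```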
Comments: 38
There are many people teaching backpropagation, but after watching tons of videos I think not many of them really know how it works. Nobody calculates and shows the numbers in the equations. This is the first tutorial that answered all my questions about backpropagation. Many other people just copy somebody else's work without understanding it. The tutorial is great: step by step, explaining the equations and breaking them down into their simplest, most understandable form. Great job!
You are a life saver!! Thank you for breaking the whole process down in such an understandable way!!
I think da/dw should be i*a*(1-a). Let z = i*w; then a = 1/(1+exp(-z)) and da/dz = a*(1-a). Then dz/dw = i, so da/dw = (da/dz)*(dz/dw) = i*a*(1-a). Nevertheless, an excellent presentation, Mikael, showing backpropagation and weight updating for a simple example without distracting subscripts and superscripts. Keep up the good work. LML
@mikaellaine9490
4 years ago
Darn, you're correct! I forgot to include the derivative of the inner function w*i, which would indeed be i as a multiplier on a(1-a).
@yasserahmed2781
3 years ago
I'd been repeating the calculations several times on paper, trying to understand how the "i" disappeared. I even thought the video implicitly assumed that i was 1 or something, haha. Should always check the comments right away.
I was having difficulties wrapping my head around Backpropagation. Thank you very much for this video!
Thanks so much for this video -- I have spent hours looking for a clean explanation of this and I have finally found it!
Thanks for your videos! I'm finally able to implement backpropagation because I (kinda) understood the Maths behind it thanks to you! Please keep more vids coming!
Thank you so much for making these videos! I love your explanations. :)
Thank you very much for your effort and excellent explanation!
You are a real teacher!
Your explanation is amazing 👏
Excellent explanation Mikael ! Trev T Sydney
Thank you for this excellent video
Thanks for the awesome explanation! I finally know how to do back prop now haha.
Thank you very much. This was very helpful.
good and clear explanation
How is the bias updated in a multilayer network?
great video mate
Thank you! You earned a sub ❤🎉
Hi, I'll join the others in thanking you for this video! Amazing explanation. Just one question: your example was done with, let's say, a single training sample. When I have dozens or hundreds of training samples, I calculate the average of the final error. What about da/dw, for example? Should I also average it over all the samples and then apply the update? Or is there another approach? Thanks again.
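For what it's worth, the usual approach is to average the full gradient dE/dw over the batch (da/dw is just one factor inside it, via the chain rule) and then apply a single update. A minimal Python sketch under that assumption, with made-up numbers:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_for_example(w, i, y):
    # dE/dw for one sample, with E = (a - y)^2 and a = sigmoid(i * w)
    a = sigmoid(i * w)
    return 2.0 * (a - y) * a * (1.0 - a) * i

def batch_update(w, batch, lr=0.1):
    # average the per-sample gradients, then take one step
    avg_grad = sum(grad_for_example(w, i, y) for i, y in batch) / len(batch)
    return w - lr * avg_grad

batch = [(1.0, 0.0), (2.0, 1.0), (0.5, 0.0)]  # (input, target) pairs, invented
w = batch_update(0.8, batch)
```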
Nice explanation. One special request: if you could give an illustration in MS Excel, it would aid understanding.
Sorry, a(1-a) is the derivative of what? I didn't get how we arrived there.
Thank you so much. I often hear that Hadamard multiplication is used, but what is it used for?
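In case it helps: in vectorized backprop, the Hadamard (elementwise) product multiplies the error signal by the activation derivative, component by component. A tiny illustrative sketch (numbers invented):

```python
def hadamard(u, v):
    # elementwise product of two equal-length vectors
    return [ui * vi for ui, vi in zip(u, v)]

dE_da = [0.2, -0.5, 0.1]    # gradient of the error w.r.t. each activation
da_dz = [0.25, 0.19, 0.24]  # sigmoid derivative a*(1-a) for each unit
delta = hadamard(dE_da, da_dz)  # the layer's error signal
```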
It seems like picking up the stored numbers would require indirection / following pointers / fetching from slow memory, whereas just recalculating would take fewer clock cycles.
@TheRainHarvester
A year ago
Storing probably wins versus the recursive calculations that would be required for multiple branches of a wide NN.
As you go back, when do you update each weight? Do you go back to w1, adjust it, do a new forward pass, go back only to w2, do a new forward pass, then go back only to w3?
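In standard backprop, all gradients are computed from the same forward pass and the weights are then updated together, rather than one at a time with a fresh forward pass in between. A minimal two-weight sketch of one training step (my own toy setup, not the video's exact network):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w1, w2, i, y, lr=0.5):
    # forward pass: i -> w1 -> a1 -> w2 -> a2, activations kept for backprop
    a1 = sigmoid(i * w1)
    a2 = sigmoid(a1 * w2)
    # backward pass: both gradients come from the SAME forward pass
    dE_da2 = 2.0 * (a2 - y)            # E = (a2 - y)^2
    delta2 = dE_da2 * a2 * (1.0 - a2)
    dE_dw2 = delta2 * a1
    dE_dw1 = delta2 * w2 * a1 * (1.0 - a1) * i
    # then update both weights together before the next forward pass
    return w1 - lr * dE_dw1, w2 - lr * dE_dw2
```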
In the table at 12:37 there is no way to see when you should stop. Maybe you should have included the actual output y, or at least shown y on screen. So the goal is to reach a = 0.5, right?
Question: there are faster activation functions, but how do they affect backpropagation? The sigmoid function also appears in its own derivative; that's not the case for other functions. Is it worth the effort if backpropagation would be a lot slower? Then again, once the network is trained it will be used many times, so I guess you can spend a lot more computing power on learning and then run the network on a device with less computing power. Correct me if I'm wrong.
Why do you use lowercase Phi for the activation?
I don't understand why you would calculate da/dw in advance and not during backpropagation. Do we use it more than once? On each iteration da/dw has a different value, so I don't see why we should calculate it up front. We can just take the output a and calculate a(1-a) during backpropagation.
Unfortunately the volume of this video is too low to watch it on a phone. Such a shame :(
Defined as a function in Maxima CAS: sigmoid(x):=1/(1+exp(-x))$ Regarding da/dw = d sigmoid(w*x + b)/dw, where x == a from the previous layer: using Maxima CAS I obtain (x*%e^(w*x+b))/(2*%e^(w*x+b)+%e^(2*w*x)+%e^(2*b)), whereas with a = sigmoid(w*x + b) and the derivative defined as a*(1-a) I obtain (%e^(w*x+b))/(2*%e^(w*x+b)+%e^(2*w*x)+%e^(2*b)), which differs by the multiplier x. Which is correct?
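A finite-difference check in Python (my own sketch, independent of Maxima) confirms that the version with the multiplier x matches the true derivative of sigmoid(w*x + b) with respect to w:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def deriv_chain_rule(w, x, b):
    # chain rule: d/dw sigmoid(w*x + b) = x * a * (1 - a)
    a = sigmoid(w * x + b)
    return x * a * (1.0 - a)

def deriv_numeric(w, x, b, eps=1e-6):
    # central finite difference in w
    return (sigmoid((w + eps) * x + b) - sigmoid((w - eps) * x + b)) / (2.0 * eps)
```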
The sound of this video is low; please try to make it higher. Thanks.
Please make videos about neural networks in Python.
Fun fact, did you know Einstein and Hawking were socialists? Just thought you may find that interesting :)
Thank you so much for the clear excellent explanation and effort.