Forward Propagation and Backward Propagation | Neural Networks | How to train Neural Networks
In this video, we will understand forward propagation and backward propagation. Forward propagation and backward propagation are the techniques we use in machine learning to train a Neural Network.
Forward propagation refers to propagating forward through the Neural Network while calculating the values of the neurons in the next layers.
Backward propagation is used to train the weights W and biases B on the given input dataset, so that the network makes accurate predictions.
After understanding forward propagation and backward propagation, you will be all set to implement Neural Networks in code.
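The whole loop described above — forward propagation, backward propagation, and the gradient-descent update — can be sketched in a few lines of NumPy. This is a minimal sketch, not the video's code: the XOR toy data, the tanh/sigmoid activations, the hidden-layer size, and the learning rate are all my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset (XOR), stored column-wise: X is (n_x, m), Y is (1, m).
X = np.array([[0., 0., 1., 1.],
              [0., 1., 0., 1.]])
Y = np.array([[0., 1., 1., 0.]])
n_x, m = X.shape
n_h = 4                      # hidden-layer size (an arbitrary choice)

# Random weight initialization; biases start at zero.
W1 = rng.normal(0, 0.5, (n_h, n_x)); B1 = np.zeros((n_h, 1))
W2 = rng.normal(0, 0.5, (1, n_h));   B2 = np.zeros((1, 1))

alpha, costs = 1.0, []
for step in range(2000):
    # --- Forward propagation ---
    Z1 = W1 @ X + B1
    A1 = np.tanh(Z1)         # hidden activation f(Z1)
    Z2 = W2 @ A1 + B2
    A2 = sigmoid(Z2)         # output prediction

    # Binary cross-entropy cost
    costs.append(-np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2)))

    # --- Backward propagation ---
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m
    dB2 = dZ2.mean(axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)   # tanh'(Z1) = 1 - A1^2
    dW1 = dZ1 @ X.T / m
    dB1 = dZ1.mean(axis=1, keepdims=True)

    # --- Gradient descent: update all parameters with this step's gradients ---
    W1 -= alpha * dW1; B1 -= alpha * dB1
    W2 -= alpha * dW2; B2 -= alpha * dB2

print(costs[0], costs[-1])   # the cost should drop as training proceeds
```

Running it, the cost falls steadily from its initial value, which is exactly the "entire process in action" the timestamps below walk through.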
Timestamps:
0:00 - Video Agenda
0:22 - Important Notations
1:52 - Matrix Representation
2:45 - Forward Propagation in Action
4:06 - Random Weight Initialization
5:04 - Gradient Descent in Neural Network
7:21 - Backward Propagation in Neural Network
7:50 - Entire Process in Action
9:00 - Neural Network Equations
➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
This is Your Lane to Machine Learning ⭐
➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
Continue watching the next video - • What is Activation fun...
What is Neural Networks ? : • How Neural Networks wo...
Complete Neural Network Playlist : • How Neural Networks wo...
Complete Logistic Regression Playlist : • Logistic Regression Ma...
Checkout Other Playlist : / codinglane
➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
Subscribe to my channel, because I upload a new Machine Learning video every week : / @codinglane
Comments: 89
If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.
@arpit743
2 years ago
Excellent video! Bro, why do we have multiple neurons in every hidden layer? Is it from the point of view of introducing non-linearity?
@CodingLane
2 years ago
@@arpit743 Yes, but not entirely. Multiple neurons allow us to capture complicated patterns. A single neuron won’t be able to capture complicated patterns from the dataset.
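The reply above can be made concrete with a hand-built sketch (my own illustration, not from the video): a single neuron draws one linear decision boundary, which provably cannot separate XOR, while just two hidden ReLU neurons compose two boundaries and capture the pattern. The weights below are hand-picked, not learned.

```python
import numpy as np

# Hand-picked weights for a 2-hidden-neuron ReLU network that computes XOR.
relu = lambda z: np.maximum(z, 0)

def xor_net(x1, x2):
    h1 = relu(x1 + x2)          # first hidden neuron: fires on any 1
    h2 = relu(x1 + x2 - 1)      # second hidden neuron: fires only on (1, 1)
    return h1 - 2 * h2          # output layer combines the two boundaries

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # matches a XOR b
```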
@arpit743
2 years ago
@@CodingLane Thanks a lot! But why does that allow for complicated boundaries?
@Sigma_Hub_01
A year ago
@@arpit743 More refined outputs let you see the limitations of your network boundaries, and hence you can pinpoint the exact location and correct it as per your needs. It doesn't allow for complicated boundaries; you are ALLOWED to see your complicated boundaries, and hence work through them.
Literally best. Crisp and clear!! Thank you
Absolutely loved the way you explain. So easy to understand. Thank you
This video should be titled "Explain Forward and Backward Propagation to Me Like I'm Five". Thanks man, you saved me a lot of time.
@CodingLane
2 years ago
One of the Best Comments I have seen. Thank you so much! And thanks for the title idea 😂😄
I've always felt as if I was on the cusp of understanding neural nets but this video brought me past the hump and explained it perfectly! Thank you so much!
@CodingLane
A year ago
I am really elated hearing this. Glad it helped you out. Thank you so much for your appreciation. 🙂
Best explanation I've seen so far
Very helpful and to the point and correct!
Best explanation, best playlists. I don't usually interact with the algorithm much by giving likes and dropping comments, but you beat me into submission with this. Hopefully I understand the rest of it too lol.
Nicely explained. Keep up the good job!
Very informative video. Explained all the terms in a simple manner. Thanks a lot.
you explained in very clear and easy ways. Thank you, this is so helpful!
@CodingLane
2 years ago
You're welcome!
Thanks man. The slides were amazingly put up.
@CodingLane
2 years ago
Thank you so much!
Fantastic explanation. Thank you
so glad I found this channel!!
@CodingLane
A year ago
Thank you! I appreciate your support 😇
This is so well explained.. thankyou
Excellent explanation jazakallah bro
I'm a bit confused by the exponent notations, since some of them don't correspond to the others.
Where does the algorithm that calculates the next W at 5:30 come from? I know it is intuitive, but does it have something to do with Euler's method? Or another one? Thank you so much for these incredible videos.
Awesome, really helpful! Thank you
@CodingLane
2 years ago
You're welcome!
Great video, and great explanation thanks dude!
@CodingLane
2 years ago
You're welcome!
Such a simple and neat explanation.
@CodingLane
2 years ago
Thank you!
You are great. It will be very good if you continue.
@CodingLane
2 years ago
Thank you for your support! I will surely continue making more videos.
Your videos on neural networks are really good. Can you please also upload videos for generalized neural networks too, that would really be helpful P.S Keep Up the good work!!!
@CodingLane
2 years ago
Thank you so much for your feedback. I will surely consider making videos on generalized neural networks.
Amazing work, keep it going :)
@CodingLane
2 years ago
Thank You!
great video
best video on youtube for this topic
@CodingLane
A year ago
Thank you so much. Much appreciate your comment! 🙂
Your videos are very helpful. It would be great if you sort the videos. Thank you😇😇😇
Super Bro❤❤❤❤
Are B1 and B2 initialized randomly too?
great video as always
@CodingLane
3 years ago
Thank You soo much !!!
Good Explanation !!
@CodingLane
2 years ago
Thank you!
This was actually pretty straight forward
@CodingLane
2 years ago
Glad if it helped you!
Super sir. I have learned more information from this and also calculation way. It's very useful to our study. Thank you sir
@CodingLane
A year ago
Happy to help!
You explain better than popular course instructor on deep learning
@CodingLane
A year ago
Thanks for the compliment 😇
@sajan2980
A year ago
I am sure he is talking about Andrew Ng Lol. His explanation on that video is too detailed and the notations are too confusing lol. But the same explanation in his Machine Learning Specialization course is much better.
you are really awesome. love your teaching ability
@CodingLane
3 years ago
Thank you so much !
@mdtufajjalhossain1246
3 years ago
@@CodingLane, you are most welcome bro. Please make the implementation of Multiclass Logistic Regression using the One-vs-All/One-vs-One method.
@CodingLane
3 years ago
@@mdtufajjalhossain1246 Okay! Thanks for suggesting!
thank u sir it was really helpful
@CodingLane
2 years ago
You're welcome!
great video, Please also make a video on SVM as soon as possible
@CodingLane
3 years ago
Okay, sure! Thank you so much for your suggestion. I have been asked a lot to make a video on SVM, so I will try to make it just after finishing this Neural Network playlist.
Small doubt: what is f(z1)? I am assuming these are just different types of activation functions, where the input is just the weights of the current layer * the input from the previous layer. Is that correct?
@CodingLane
2 years ago
Yes correct… but do check out the equations properly. It has bias also.
@vipingautam9501
2 years ago
@@CodingLane Thanks for your prompt response.
what is this B1
Good job. But in gradient descent, W2 and W1 must be updated simultaneously.
@CodingLane
2 years ago
Thank you! Yes they should be updated simultaneously.
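A tiny sketch of what "simultaneously" means in practice (the `backward` stand-in and the quadratic cost here are hypothetical, purely for illustration): all gradients are computed from the *current* parameters first, and only then are the parameters updated. Interleaving would make one gradient depend on an already-updated weight.

```python
def backward(params):
    """Hypothetical stand-in: gradients of cost = W1^2 + W2^2 at the current params."""
    return {"dW1": 2 * params["W1"], "dW2": 2 * params["W2"]}

alpha = 0.1
params = {"W1": 1.0, "W2": 1.0}

# Simultaneous update: freeze all gradients first, then apply every update.
grads = backward(params)
params = {k: params[k] - alpha * grads["d" + k] for k in params}
print(params)  # both weights moved using gradients from the same point
```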
Please share the code for the backpropagation algorithm.
Brother, your explanation was great, but there are some mistakes I have pointed out.
Isn't the equation Z = W.X + B supposed to be Z = transpose(W).X + B? Hence the weight matrix you have given is wrong, right?
@CodingLane
2 years ago
Hi... I have taken the shape of W as (n_h, n_x). Thus equation will be Z = W.X + B. But if you take W as (n_x, n_h), then equation of Z = transpose(W).X + B. Both represent the same thing. Hope it helps you.
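The equivalence claimed in the reply above is easy to check numerically (the sizes and random values below are arbitrary, chosen only for illustration): with W of shape (n_h, n_x) no transpose is needed, and the (n_x, n_h) convention gives the identical Z once transposed.

```python
import numpy as np

n_x, n_h, m = 3, 4, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(n_x, m))

W_a = rng.normal(size=(n_h, n_x))   # convention used in the video
Z_a = W_a @ X                       # no transpose needed: result is (n_h, m)

W_b = W_a.T                         # alternative (n_x, n_h) convention
Z_b = W_b.T @ X                     # transpose restores the same product

print(np.allclose(Z_a, Z_b))        # both conventions give identical Z
```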
@kewtomrao
2 years ago
@@CodingLane Thanks for the quick clarification. Makes sense now. Keep up the great work!!
Sir, it's W¹¹[¹] * a⁰[1], right? You've done it as W¹¹[¹] * a¹[1] in the matrix multiplication. Can you verify whether I'm wrong?
@CodingLane
2 years ago
Yes… there is a typo error
hi can you put caption option
@CodingLane
2 years ago
Hi.. somehow captions were not generated in this video. All my other videos do have captions. I will change the settings to bring captions to this video as well. Thanks for bringing this to my attention.
Can A* actually be Z*, e.g. A1 = Z1?
@CodingLane
A year ago
No, we need to apply a non-linear activation function. So A1 must be = some_non_linear_function(Z1)
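The reply above has a quick numeric demonstration (shapes are arbitrary and biases are omitted for brevity): if A1 = Z1, i.e. the identity is used as the activation, the two layers collapse into a single linear map, so the hidden layer adds no expressive power.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # layer-1 weights
W2 = rng.normal(size=(2, 4))   # layer-2 weights
X = rng.normal(size=(3, 5))    # a batch of 5 inputs

# With A1 = Z1 (no non-linearity), two layers equal one layer W2 @ W1.
two_layer = W2 @ (W1 @ X)
one_layer = (W2 @ W1) @ X
print(np.allclose(two_layer, one_layer))  # True: the composition is linear
```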
hi, how to calculate the cost?
@CodingLane
3 years ago
You will get all the information in upcoming videos that I have already uploaded in this series. If you still have questions, then you can write me mail on : codeboosterjp@gmail.com
y no subtitles?
5:04
let bro cook
wait you haven't explained backpropagation at all
Lord Jay Patel