Back Propagation in Neural Network with an Example | Machine Learning (2019)

Backpropagation is a supervised learning algorithm for training multi-layer perceptrons (artificial neural networks).
The backpropagation algorithm searches for the minimum of the error function in weight space using a technique called the delta rule, a form of gradient descent. The weights that minimize the error function are then considered a solution to the learning problem.
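
To make the delta rule concrete, here is a minimal sketch (assuming NumPy; the numbers are illustrative, not taken from the video) of one gradient-descent update for a single sigmoid unit:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def delta_rule_step(w, b, x, target, lr=0.5):
    """One gradient-descent update for E = 1/2 * (target - out)^2."""
    net = np.dot(w, x) + b                  # weighted input
    out = sigmoid(net)                      # unit activation
    # Chain rule: dE/dnet = (out - target) * out * (1 - out)
    delta = (out - target) * out * (1.0 - out)
    w_new = w - lr * delta * x              # dE/dw = delta * x
    b_new = b - lr * delta                  # dE/db = delta * 1
    return w_new, b_new

w, b = np.array([0.15, 0.20]), 0.35
w, b = delta_rule_step(w, b, x=np.array([0.05, 0.10]), target=0.01)
```
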
Visit our website for more Machine Learning and Artificial Intelligence blogs
www.codewrestling.com
Check out other videos on Machine Learning
Decision Tree (ID3 Algorithm) : • Decision Tree Solved |...
Candidate Elimination Algorithm : • Candidate Elimination ...
Naive Bayes Algorithm : • Naive Bayes algorithm ...
Check out the best programming language for 2020
• Top Programming Langua...
Check out the best laptop for programming in machine learning and deep learning in 2020
• Best Laptop for Machin...
10 best artificial intelligence startups in India
• 10 Artificial Intellig...
Join us on Telegram for free placement and coding-challenge resources, including machine learning: @Code Wrestling
t.me/codewrestling
Ask me A Question: codewrestling@gmail.com
Refer Slides: github.com/codewrestling/Back...
Music: www.bensound.com
For the Back Propagation slides, comment below 😀

Comments: 207

  • @huytrankhac8729
    4 years ago

    The clearest explanation I've found on YouTube. Thank you so much. Please make more concept videos like this about machine learning.

  • @SantoshKumar-fr5tm
    4 years ago

    Nice explanation. Until now I had only seen the theory; you showed how backpropagation actually computes the subsequent weights and biases. Thanks again.

  • @tombra4ril
    4 years ago

    This might seem like nothing, but I just wanted to say that I really enjoyed the video. It made back propagation easy to understand without a superficial explanation, going deep right into the heart of the problem. Thanks a lot, man, it was just what I wanted. One love from Nigeria.

  • @fidelventura957
    4 years ago

    Simply the best of the best. Thank you for your hard work, thank you.

  • @shaurya478
    4 years ago

    Finally got the idea about backprop... thanks, man.

  • @sharathkumar8422
    3 years ago

    Excellent explanation of the concept! Thank you so much for making this...

  • @watsonhuang4760
    3 years ago

    Thanks! This video explains back propagation very well! BTW, I almost passed out when the volume increased at the end.

  • @CodeWrestling
    3 years ago

    Sorry for the volume.

  • @kartikyadav1772
    3 years ago

    Surely, this would help me a lot during my end semester exam. Thanks a lot 🙏

  • @motoreh
    4 years ago

    Excellent example! Now I understand backpropagation, and steepest descent with the chain rule, much better!

  • @ahming123
    4 years ago

    Can you remove the background music? Otherwise it's awesome.

  • @CodeWrestling
    4 years ago

    Cannot remove in this video, but will take care of it in coming videos.

  • @coxixx
    3 years ago

    @@CodeWrestling every rose has its thorn

  • @Ashajyothi23
    3 years ago

    I agree

  • @saisudha6512
    2 years ago

    very well explained

  • @maheshsb3048
    4 years ago

    You are the freaking best at explaining, thank you.

  • @hamedhomaee6410
    1 year ago

    If now I can work in the field of data science, it's because of YOU 🙏

  • @darshanaupadhyay3610
    4 years ago

    Finally I understood back propagation. Thank you so much!

  • @CodeWrestling
    4 years ago

    Thanks for appreciating!!

  • @kushshri05
    4 years ago

    I never knew backpropagation was this easy... thanks a ton 🙂🙂

  • @jonathanlazar017
    2 years ago

    This is the clearest explanation of back propagation I've found. Thank you very much.

  • @CodeWrestling
    1 year ago

    Glad it was helpful!

  • @islamicinterestofficial
    4 years ago

    What a splendid explanation. Love you, bro. Stay blessed.

  • @shravyaa8023
    4 years ago

    This site is just wonderful. Thank you so much for all the videos. It would be fantastic if you could explain their implementations in Python too.

  • @dhanushp1680
    2 years ago

    this is not a site babe

  • @praneethaluru2601
    3 years ago

    Far better explanation of the math than the other videos on YouTube.

  • @praveensoni1119
    3 years ago

    Thanks a ton, finally understood backpropagation (didn't know the math behind it was this easy)

  • @raviteja6106
    1 year ago

    Thank you so much, bro, for the detailed explanation.

  • @blessoneasovarghese9834
    4 years ago

    Thank you! You explained the crux of neural net in simple terms.

  • @CodeWrestling
    4 years ago

    Thanks a lot!!

  • @hohongduynguyen6916
    3 years ago

    Thank you, this video is thoroughly useful.

  • @lokeshjhanwar7603
    3 years ago

    Best one among all I have ever seen.

  • @kanhaiyagupta6075
    3 years ago

    Thanks a lot for such a beautiful explanation.

  • @muratkonuklar4728
    4 years ago

    Great educational video... Thanks!

  • @TheSocialDrone
    3 years ago

    You taught it very well to a backbencher like me!

  • @CodeWrestling
    3 years ago

    Thanks, one of the best comments I have ever received.

  • @mustafasalihates2866
    3 years ago

    You don't wanna mess with backpropagation, ever. Haha. But he explains it so well.

  • @shihangcheng173
    2 years ago

    very clear!! thank you!!!

  • @CodeWrestling
    1 year ago

    Glad it helped!

  • @ganeshkandepalli18
    3 years ago

    Very good explanation. It cleared all my doubts.

  • @Martin-ep8dy
    4 years ago

    Great video, thank you!

  • @CodeWrestling
    4 years ago

    Thanks for appreciating...!! #CodeWrestling

  • @jaikishank
    3 years ago

    Thanks. It was an awesome explanation of the basics. It was very useful for me.

  • @CodeWrestling

    @CodeWrestling

    3 жыл бұрын

    Glad to hear that!

  • @deveshnandan323
    2 years ago

    Salute to your effort, thanks a lot :)

  • @shivaramakrishna4949
    2 years ago

    Wonderful. The explanation is awesome 👏👏👏

  • @raj4624
    3 years ago

    Tysm for the crystal-clear explanation. God bless you.

  • @snehitvaddi
    4 years ago

    Bro, whatever comment section may say, I loved the video and understood it. Keep going👍

  • @CodeWrestling
    3 years ago

    Thank you so much 😀

  • @hazemahmed8333
    4 years ago

    What an amazing explanation... thank you!!

  • @CodeWrestling
    4 years ago

    Glad you enjoyed it!

  • @ahmedpashahayathnagar5022
    2 years ago

    Clear explanation, sir; we understood it easily. But if we don't write it down, we forget it again after a few days, so I wrote it out from start to end. Thanks for explaining clearly and in a simple manner.

  • @batuhanartan
    4 years ago

    Great explanation except for the sound issues :) The background music is nice but should be a little lower :) Thank you for the video, man, it really helped me :)

  • @reachDeepNeuron
    4 years ago

    To implement the backpropagation algorithm in a neural network, are there any built-in packages available as part of scikit-learn? Or do I have to write a script explicitly to implement back propagation?
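
For what it's worth, scikit-learn's MLPClassifier and MLPRegressor do train multi-layer perceptrons with backpropagation internally, so no hand-written script is required. A minimal sketch with toy data and illustrative hyperparameters:

```python
from sklearn.neural_network import MLPClassifier

# Toy XOR-style data, purely illustrative.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# solver='sgd' is plain stochastic gradient descent; the gradients
# are computed by backpropagation inside fit().
clf = MLPClassifier(hidden_layer_sizes=(4,), activation='logistic',
                    solver='sgd', learning_rate_init=0.5,
                    max_iter=5000, random_state=0)
clf.fit(X, y)
print(clf.predict(X))
```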

  • @dcrespin
    1 year ago

    The video shows -with all the advantages as well as the limitations of working with a specific neural graph and particular numerical values- what the BPP (backpropagation) of errors in a feedforward network is. But the basic idea applies to much more general cases. Several steps are involved.
    1. More general processing units. Any continuously differentiable function of inputs and weights will do; these inputs and weights can belong -beyond Euclidean spaces- to any Hilbert space. Derivatives are linear transformations, and the derivative of a neural processing unit is the direct sum of its partial derivatives with respect to the inputs and with respect to the weights; this is a linear transformation expressed as the sum of its restrictions to a pair of complementary subspaces.
    2. More general layers (any number of units). Single-unit layers can create a bottleneck that renders the whole network useless. Putting together several units in a single layer is equivalent to taking their product (as functions, in the sense of set theory). The layers are functions of the inputs and of the weights of the totality of the units. The derivative of a layer is then the product of the derivatives of the units; this is a product of linear transformations.
    3. Networks with any number of layers. A network is the composition (as functions, in the set-theoretical sense) of its layers. By the chain rule, the derivative of the network is the composition of the derivatives of the layers; this is a composition of linear transformations.
    4. Quadratic error of a function. --- Since this comment is becoming too long I will stop here. The point is that a very general viewpoint clarifies many aspects of BPP. If you are interested in the full story and have some familiarity with Hilbert spaces, please Google for papers dealing with backpropagation in Hilbert spaces. For a glimpse into a deep learning algorithm which is orders of magnitude more efficient, controllable and faster than BPP, search on this platform for a video about: deep learning without backpropagation. Daniel Crespin
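
In symbols, the layer-composition view in step 3 above is just the chain rule; a sketch, writing $L_j$ for the $j$-th layer and $x_j$ for the output of the first $j$ layers:

```latex
N = L_k \circ L_{k-1} \circ \cdots \circ L_1
\qquad\Longrightarrow\qquad
DN(x) = DL_k(x_{k-1}) \circ \cdots \circ DL_2(x_1) \circ DL_1(x),
\quad x_j = (L_j \circ \cdots \circ L_1)(x).
```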

  • @samrashafique8580
    5 years ago

    Thank u so much ❤️

  • @CodeWrestling
    4 years ago

    Thanks for appreciating!! #CodeWrestling

  • @davidwarner2491
    3 years ago

    Very nice!!

  • @trondknudsen6689
    2 years ago

    Nice slides :)

  • @sreenathm4539
    3 years ago

    Excellent video...nicely explained...

  • @mohammadalikargar2264
    4 years ago

    It was perfect. Also, to avoid the background sound you can use a hands-free headset; it worked for me.

  • @himanshushekhardas1730
    3 years ago

    Excellent video, you have earned a subscriber.

  • @CodeWrestling
    3 years ago

    Awesome, thank you!

  • @ismail8973
    3 years ago

    The math behind it is well explained.

  • @Javeed_Mehdi
    2 years ago

    Excellent... Thanks!

  • @CodeWrestling
    1 year ago

    Thank you too!

  • @meenakshim1091
    1 year ago

    Very clear explanation

  • @CodeWrestling
    1 year ago

    Glad you liked it

  • @taimoorneutron2940
    2 years ago

    Really helpful and easy to understand, thanks.

  • @CodeWrestling
    1 year ago

    You are welcome!

  • @nikitasinha8181
    3 years ago

    Thank u so much

  • @johnmathai2223
    3 years ago

    Really good explanation

  • @piyushdhasmana2575
    4 years ago

    Bro, instead of using the total error for w5, can we just use the error of out1? Because for the hidden layer we anyhow have to use the error of out1 again.

  • @saribarif189
    4 years ago

    I have a question about updating the weights. When using the weight-update formula, you did not use the multilayer perceptron rule in which we take a positive or negative learning rate to increase or decrease the weight. I have a doubt about this rule.

  • @federicopinto9353
    4 years ago

    The music was too loud, but your explanation was very helpful.

  • @TechTelligence16
    2 years ago

    Best explanation ever!

  • @CodeWrestling
    1 year ago

    Glad it was helpful!

  • @usamahussain4461
    3 years ago

    thanks so much :)

  • @anirbansarkar6306
    3 years ago

    Great explanation. It was really helpful. 😇

  • @CodeWrestling
    3 years ago

    Glad it was helpful!

  • @mohammedshuaibiqbal5469
    4 years ago

    Hello sir, you said you would implement the decision tree algorithm in Python, but I didn't find it in your playlist.

  • @blessoneasovarghese9834
    4 years ago

    If we give one input and output, the weights get adjusted for that pair. But when another input-output pair is given, entirely new weights are formed. So how does training happen in such an algorithm?

  • @badiyabhargav8597
    3 years ago

    Superb...

  • @clivelam2377
    4 years ago

    Should we also update the bias terms b1 and b2?
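
For reference: yes, biases can be trained by the same gradient-descent rule, since the error gradient reaches them too. A sketch in the video's notation, assuming $\eta$ is the learning rate and $b_2$ feeds output unit $o1$ (if a bias feeds several units, the contributions are summed):

```latex
b_2 \leftarrow b_2 - \eta\,\frac{\partial E_{total}}{\partial b_2},
\qquad
\frac{\partial E_{total}}{\partial b_2}
  = \frac{\partial E_{total}}{\partial out_{o1}}
    \cdot \frac{\partial out_{o1}}{\partial net_{o1}}
    \cdot \frac{\partial net_{o1}}{\partial b_2},
\quad\text{with}\quad \frac{\partial net_{o1}}{\partial b_2} = 1.
```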

  • @Areeva2407
    4 years ago

    Very nice, very systematic.

  • @sohaigill
    3 years ago

    Can we use an ANN and a fixed-effect Poisson regression model in two steps for better results?

  • @shaistasultana771
    2 years ago

    Spot-on explanation.

  • @musfirotummamluah9881
    3 years ago

    Thank you so much... finally I understood back propagation! Do you have a video about the Elman recurrent neural network? If you do, please send me the link.

  • @KwstaSRr
    3 years ago

    masterpiece

  • @saitanush9453
    2 years ago

    Thank you very much.

  • @jabedomor887
    4 years ago

    good job

  • @waqarhussain1829
    3 years ago

    Very Nice

  • @sentientbeing9781
    4 years ago

    The song is too low, I can almost hear you. Make the song volume higher next time >3

  • @CodeWrestling
    4 years ago

    Sorry for the loud background music, we will make it better next time.

  • @brandonwarfield5611
    3 years ago

    Awesome video, thank you so much. Just became a subscriber. The background music is kinda loud though.

  • @CodeWrestling
    3 years ago

    Thanks for the sub! Noted.

  • @chethankodenkiri2825
    3 years ago

    Happy Teachers' Day

  • @haadialiaqat4590
    2 years ago

    Very nicely explained. It would be better without the background music.

  • @bheeshmsharma4834
    3 years ago

    Thanks

  • @MrTabraiz961
    1 year ago

    The first equation at 13:55 seems incorrect, can anyone confirm this?

  • @rishi6954
    3 years ago

    Net h1 should be b1 + (w1*x1) + (w3*x2) as per the network connections, because it is w3 that is connected to h1 and not w2. Correct me if I'm wrong... or else like my comment so that I know.

  • @someorother5272
    4 years ago

    Nice video, bro... but I had to watch it at 0.5x speed to follow along.

  • @bishwasapkota9621
    4 years ago

    One of the best explanations of how backprop works. Better than those animated and fancy but boneless videos. Simple yet covers everything from scratch. Thanks and congrats to this guy!

  • @CodeWrestling
    3 years ago

    Wow, thanks!

  • @roopagaur8834
    4 years ago

    Excellent

  • @CodeWrestling
    4 years ago

    thank you so much.

  • @koteswaruduakula5627
    4 years ago

    This video is simply super.

  • @CodeWrestling
    4 years ago

    Thank you so much 😀

  • @saitejareddy621
    3 years ago

    Fascinating BGM

  • @arnavsharma9728
    3 years ago

    How did you get the learning rate constant? Is it given in the question?

  • @aryanshridhar8517
    4 years ago

    Why did you take mean squared error? A neural network is basically a connection of many logistic regression outputs, so wouldn't it be the log loss function?

  • @easycoding9095
    3 years ago

    Hi, where did the 0.1384 at the end come from?

  • @jumanaalahwal
    4 years ago

    Very good explanation! But the music is a bit loud.

  • @vinayvernekar3634
    4 years ago

    @10:52, I am unable to understand net_o1 = w_5 * out_h1 + w_6 * out_h2 + b_2 * 1. How does the derivative translate into 1 * out_h1 * w_5^(1-1) + 0 + 0 = out_h1 = 0.593269992? Where did w_5^(1-1) come from? Shouldn't it just be 1 * out_h1?

  • @amitsahoo1989
    4 years ago

    The exponent decreases by one when you differentiate; I think that is what he is trying to show.
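
Written out, the step the reply describes, with the value quoted in the question above:

```latex
net_{o1} = w_5\,out_{h1} + w_6\,out_{h2} + b_2 \cdot 1
\;\Longrightarrow\;
\frac{\partial\,net_{o1}}{\partial w_5}
  = 1 \cdot out_{h1} \cdot w_5^{\,1-1} + 0 + 0
  = out_{h1}
  = 0.593269992.
```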

  • @HarishIndia123
    3 years ago

    The background music was a bit annoying... I couldn't hear or focus on what you were saying.

  • @mathhack8647
    2 years ago

    Great introduction to backpropagation in neural networks. Nonetheless, the background music is a little bit loud.

  • @absolute___zero
    4 years ago

    You didn't explain where the -1 came from (at 9:26) in the formula dE_total = 2 * 1/2 * (target_o1 - out_o1)^(2-1) * (-1) + 0. The power rule for derivatives doesn't produce any -1 values.
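
For reference, the $-1$ is the inner derivative from the chain rule (the derivative of $target_{o1} - out_{o1}$ with respect to $out_{o1}$), not part of the power rule:

```latex
E_{total} = \tfrac{1}{2}(target_{o1} - out_{o1})^{2}
\;\Longrightarrow\;
\frac{\partial E_{total}}{\partial\,out_{o1}}
  = 2 \cdot \tfrac{1}{2}(target_{o1} - out_{o1})^{2-1} \cdot (-1)
  = -(target_{o1} - out_{o1}).
```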

  • @aayushbafna7594
    4 years ago

    @14:15 should there be w2*i2 instead of w3*i2 in the formula of net_h1?

  • @anonymous-random
    4 years ago

    you're right
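
So, per the correction the reply confirms, the expression at that step should read:

```latex
net_{h1} = w_1\,i_1 + w_2\,i_2 + b_1 \cdot 1.
```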

  • @phattailam9814
    3 years ago

    Why don't you update the bias?

  • @dedetwinkle9775
    2 years ago

    Thank you for creating the video. A small piece of feedback: the background music is too loud; I can barely understand what you are saying over it.

  • @reachDeepNeuron
    4 years ago

    Saw a hell of a lot of videos and blogs, including Matt Mazur's blog... but you ripped through it like anything, bro. Amazing explanation, awesome job... you literally saved my time and let me catch sleep early. But how did you get 1/(1+e^-x) as e^x/(1+e^x)?

  • @CodeWrestling
    3 years ago

    Thanks a ton. If you work through the algebra, you will get the same thing. Maybe quickly Google it and you will get the answer.
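
For anyone stuck on the same step: multiplying the numerator and denominator by $e^{x}$ gives the identity asked about, and differentiating gives the usual sigmoid gradient:

```latex
\sigma(x) = \frac{1}{1+e^{-x}} = \frac{e^{x}}{1+e^{x}},
\qquad
\sigma'(x) = \frac{e^{-x}}{(1+e^{-x})^{2}} = \sigma(x)\bigl(1-\sigma(x)\bigr).
```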

  • @kavun4905
    1 year ago

    Thank you for uploading the video with full-on YouTuber music; I couldn't focus at all, but good content though.

  • @saichaitanyakumartirumalas2592
    3 years ago

    Can you send the PPT of the concept?

  • @vishwanath-ts
    4 years ago

    Why do you add music 🎶 in the background? We want the content, not the music; it's too irritating...

  • @madang9234
    4 years ago

    Bro, please do other machine learning concepts. I'm not getting what my lecturer is teaching; I'm depending on your videos. Can you please make videos regularly?

  • @codematrix
    1 year ago

    Shouldn't the 1/2 be 1/n, where n is the number of output nodes for the given layer? Really, the sum of all errors for a given layer should be an average. BTW, great explanation.
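
A note on that $1/2$: whether or not one also averages over $n$, the $1/2$ is a convention chosen so that the $2$ from the power rule cancels; it only scales the gradient by a constant and does not change which weights minimize the error:

```latex
E = \tfrac{1}{2}(t - o)^{2}
\;\Longrightarrow\;
\frac{\partial E}{\partial o} = 2 \cdot \tfrac{1}{2}(t - o) \cdot (-1) = -(t - o).
```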

  • @syeda8343
    2 years ago

    I need MATLAB code for exactly this manual work, bro 😐