Artificial Neural Networks (Part 3) - Backpropagation

Support Vector Machines Video (Part 1): • Support Vector Machine...
Support Vector Machine (SVM) Part 2: Non Linear SVM • Non Linear Support Vec...
Other Videos on Neural Networks:
scholastic.teachable.com/p/pat...
Part 1: • Artificial Neural Netw... (Single Layer Perceptrons)
Part 2: • Artificial Neural Netw... (Multi Layer Perceptrons)
More Free Video Books: scholastic-videos.com/
In this lesson we first look at how to calculate the local gradients required for backpropagation. Then we use a simple 2-2-1 neural network to study a complete forward and backward pass, observing the weight modifications and the subsequent reduction in error.
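As a rough companion to the lesson, here is a minimal Python sketch of one forward and one backward pass through a 2-2-1 sigmoid network. All numeric values (inputs, weights, desired output, learning rate) are illustrative assumptions, not the ones used in the video, and the momentum term is omitted for simplicity.

```python
import math

def sigmoid(v):
    """Logistic activation phi(v) = 1 / (1 + e^(-v))."""
    return 1.0 / (1.0 + math.exp(-v))

eta = 0.25                      # learning rate (assumed value)
x = [0.1, 0.9]                  # input vector (assumed)
d = 0.9                         # desired output (assumed)

W1 = [[0.3, -0.2], [0.4, 0.1]]  # hidden weights, W1[j][i]: input i -> hidden j
W2 = [0.5, -0.3]                # output weights, hidden j -> output

# Forward pass
h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
y = sigmoid(sum(W2[j] * h[j] for j in range(2)))
e = d - y                       # error at the output

# Backward pass: local gradients (deltas) for sigmoid units
delta_o = e * y * (1 - y)
delta_h = [h[j] * (1 - h[j]) * delta_o * W2[j] for j in range(2)]

# Weight updates (plain delta rule, no momentum)
for j in range(2):
    W2[j] += eta * delta_o * h[j]
    for i in range(2):
        W1[j][i] += eta * delta_h[j] * x[i]

# A second forward pass: the error should now be smaller
h2 = [sigmoid(sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
y2 = sigmoid(sum(W2[j] * h2[j] for j in range(2)))
print(abs(d - y) > abs(d - y2))  # error shrinks after the update
```

Repeating the forward/backward cycle drives the error down further, which is exactly the behaviour the lesson's worked example demonstrates.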

Comments: 73

  • @meshackamimo1945
    9 years ago

    No human alive has ever tutored this abstract topic as well, or simplified it, as you have. Kudos! Were it not for you, I would have long given up on this topic. I keep watching the video over and over, again and again, and I don't get bored. It inspires me to believe that maths and science are deemed hard due to poorly written textbooks for beginners. Wish I could be taught by you for just a semester! I envy your students. You're just too good, marvellous! Keep it up!

  • @franzdusenduck42
    5 years ago

    Nobody has ever explained this topic better than you, my friend. Huge respect!!

  • @liangfage4247
    8 years ago

    Thank you very much for the tutorials. Most tutorials on this topic just try to bury me in formulas and poorly written math without examples. They succeeded, until I finished these three videos.

  • @homevideotutor
    8 years ago

    Thank you for the nice comment

  • @devarshambani1969
    7 years ago

    exactly what my problem was.

  • @homevideotutor
    7 years ago

    Thank you. My new site is scholastic.teachable.com

  • @dougylee5299
    9 years ago

    This is the clearest video I've seen! No one else does a worked example! Thank you so much! It's helped me heaps!

  • @homevideotutor
    9 years ago

    Many thanks, Doug Lee. Pleasure.

  • @BajoMundoUnderground
    8 years ago

    Finally, a person who can explain this topic... thank you, really nice and clear. I think the best part is when you actually do the feedforward and backprop example. Thanks again.

  • @BiranchiNarayanNayak
    7 years ago

    The best video tutorial on Neural Networks.

  • @sirhuang9360
    5 years ago

    The best teacher, it is very clear! Thanks so much.

  • @bestalouch
    9 years ago

    Man, you have really made the best tutorial video on the whole of KZread. You have really saved me :) Thanks for your efforts.

  • @homevideotutor
    9 years ago

    Thanks, Ali Younes. It is my pleasure. Currently putting up another site with simple images instead of videos: scholastic-images.webs.com .

  • @meriemallab8511
    8 years ago

    +homevideotutor Thank you very much, sir; the best explanation ever (y)

  • @STREETBOYXY
    6 years ago

    I finally understood how to calculate my hidden layers. Thank you very much for the inspiration :)

  • @wesleyshann6524
    8 years ago

    I had been looking for tutorials on the internet for a while, but almost everything I looked into was confusing and hard to understand. Then I found your tutorial (this video and also video 2; I haven't seen video 1 yet) and everything turned out to be much simpler (a lot of calculation and iterations, but very easy to understand the way you explained it). Well, thank you very much for providing this learning material, and have my like ^^

  • @homevideotutor
    8 years ago

    Dear Shann, Many thanks for the great positive feedback!!! kzread.info/dash/bejne/mmarmq6uldK3mZs.html

  • @devarshambani1969
    7 years ago

    Thank you for making this concept clear and understandable with real examples. Helps a lot, man!!

  • @aliveli8007
    7 years ago

    This is a very clear and basic example of backpropagation. Thanks for this tutorial.

  • @zakariazakivic1430
    8 years ago

    Thank you very much, it's the best I have found on YouTube.

  • @dadairheren2169
    8 years ago

    Thank you for this incredible tutorial video. You have just saved me from big trouble. The explanations were quite understandable, bravo.

  • @venkat157reddy
    10 months ago

    Super explanation..Thank you so much

  • @kumarprasant7
    7 years ago

    Very nice explanation.

  • @chinthakaudugama2792
    8 years ago

    Thank you, it helps lots. Can you add another lesson about MATLAB's nntool?

  • @ANASKHAN786
    6 years ago

    Amazing tutorial, sir. Thank you very much. Keep making more.

  • @shamimibneshahid706
    7 years ago

    Thank you so much. You saved us, brother. Finally I got the feeling I understand it.

  • @homevideotutor
    7 years ago

    Titu Ti, thanks bro.

  • @AshrafGardizy
    7 years ago

    Thanks, sir, for the very clear explanation in this tutorial. I am waiting for more videos about other Artificial Neural Network algorithms. Please continue the tutorial, and if you can make some video tutorials about Deep Learning it would be very kind of you.

  • @akhileshjoshi8484
    7 years ago

    Very simply explained... thanks a ton :)

  • @homevideotutor
    7 years ago

    Thank you. Please refer to our new website, scholastic.teachable.com, for any further additions. Best regards.

  • @jesusjimenez6401
    9 years ago

    Very useful!!!!

  • 5 years ago

    Thanks man

  • @BerkayCelik
    10 years ago

    Thanks. I have a question: you have calculated for a particular input. What if we have more inputs, how does the algorithm work then? For each input, do we update the weights only once, or until we reach an error threshold we think is acceptable and then stop? Do we then feed the next input into the network? Another point is multiple outputs: it may be a multiclass problem; how do we solve that?

  • @jimmorrison6613
    9 years ago

    I have the same question: how does it work for multiple inputs and outputs?

  • @shadeelhadik
    7 years ago

    thank you so much

  • @mehmetakcay9659
    7 years ago

    Thank you for the tutorials. I want to ask a question. I am working on an Artificial Neural Network (ANN) model in MATLAB, and I have experimental data that I want to model with an ANN. I use the back-propagation learning algorithm in a feed-forward single-hidden-layer neural network, with the logistic sigmoid (logsig) transfer function for both the hidden layer and the output layer. I have completed my ANN model and obtained the weights. Now I want to compute the results manually from the weights, so that I can create a formula. I tried but could not do it. Can you help me with this, and do you have any document, PDF, or video about this subject? Thanks for your interest.

  • @deftonesazazel
    5 years ago

    There's a mistake when you substitute the values for delta_o1. How do you get this value?

  • @hareharan
    6 years ago

    excellent teaching sir... thank you so much sir

  • @seprienna
    9 years ago

    How do I use this example in nntool? What training and learning functions should I use?

  • @mahdishafiei7230
    8 years ago

    thanks so much

  • @someetsingh2224
    7 years ago

    Dear Sir, wonderful video! But what are n and alpha (the training and mobility factors)? Who decides them and what do they signify? If we don't know these values, what default values can we use? Hope you will reply to this query.

  • @2427roger
    7 years ago

    Thanks, I finally understood this!

  • @messiasreinaldo4492
    10 years ago

    Thanks again!

  • @naimali6385
    9 years ago

    Thank you so much.... Can you give me the slides of the lectures, for learning purposes? Thank you.

  • @homevideotutor
    9 years ago

    Naim Ali Slides can be downloaded from scholastic-videos.com/ in PDF format.

  • @artjom84
    8 years ago

    thank you so much !

  • @hesamrahmati498
    9 years ago

    Thank you so much :)

  • @IsaacCallison
    4 years ago

    I guess this makes sense if you already know why you are using back-prop.

  • @Felixantony84
    6 years ago

    1000 Thanks

  • @ciddim
    7 years ago

    What does n = 0.25 (eta?) stand for, next to the learning rate (alpha) of 0.0001? I don't seem to get it at all. I know this is an old video and I don't expect an answer, but it's worth a shot.

  • @homevideotutor
    7 years ago

    Dominic Leclerc it is the learning rate of the ANN. neuralnetworksanddeeplearning.com/chap3.html

  • @ciddim
    7 years ago

    homevideotutor Then what is alpha!?

  • @homevideotutor
    7 years ago

    Please refer to FAQ section of scholastic.teachable.com/p/pattern-classification for more information on this. Many thanks for the interest.

  • @praveshkumarsingh6132
    @praveshkumarsingh61327 жыл бұрын

    Sir, what is the mobility factor?

  • @edwinvarghese
    7 years ago

    @homevideotutor Can you explain the difference between the 'n' time step and the suffixes of the inputs? It's confusing.

  • @homevideotutor
    7 years ago

    The suffix in x1 means the first feature vector, and x2 the second. w11(n) means the value of the weight w11 at time instant n, and w11(n+1) its value at instant n+1. Hope that explains it.
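One such step from w11(n) to w11(n+1) can be sketched in a few lines of Python, using the standard delta rule with a momentum term. The eta and alpha values and the gradient inputs below are assumptions for illustration, not numbers from the video.

```python
# One time step of the update for a single weight, as a sketch.
# Assumed values (not from the video): eta = 0.25 (learning rate),
# alpha = 0.0001 (momentum); delta * x is the local-gradient term
# computed in the backward pass.
eta, alpha = 0.25, 0.0001

def update_weight(w_n, delta, x, prev_change):
    """Return (w(n+1), change) given w(n) and the previous change."""
    change = eta * delta * x + alpha * prev_change  # momentum smooths updates
    return w_n + change, change

# Example step: w(n) = 0.5, local gradient delta = 0.1, input x = 1.0,
# no previous change yet.
w, prev = update_weight(0.5, delta=0.1, x=1.0, prev_change=0.0)
print(round(w, 4))  # 0.5 + 0.25*0.1*1.0 = 0.525
```

Each backward pass applies one such update to every weight, which is why the weights carry the time index n.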

  • @edwinvarghese
    7 years ago

    homevideotutor How does 'time' fit in this context? Like, in the literal sense? I am totally new to NNs, sorry.

  • @homevideotutor
    7 years ago

    That is the beauty of back propagation. You go one pass forward and then one pass backward. Each backward pass is one time step, and each backward pass changes the weights so that the final error value (desired - output, or d - y) is reduced in the next forward pass.

  • @edwinvarghese
    7 years ago

    homevideotutor Cool, I think I got it. Thank you very much for replying, really appreciate it. Also, do you know any good (good and basic, like yours) video tutorials or articles on RNNs? If yes, can you give me the link? Thanks in advance.

  • @homevideotutor
    7 years ago

    I did a search on KZread. I do not know how good this is, but it looks simple: kzread.info/dash/bejne/nYGAzo-Ne8SrnsY.html If I create one in the future I will let you know. It will be hosted at scholastic.teachable.com

  • @rikzman4
    7 years ago

    Sorry, I'm still learning and I'm really stuck on how to get the (exp) value?? Thanks.

  • @homevideotutor
    7 years ago

    Thanks for the interest. Please use a calculator and evaluate e^(-v). If, for example, v = 0.4851, then phi(v) = 1/(1 + e^(-0.4851)) = 0.619. Hope that helps.
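The arithmetic in that reply can also be checked in a couple of lines of Python (a sketch; `phi` is just a name for the logistic sigmoid used in the lesson):

```python
import math

def phi(v):
    """Logistic sigmoid: phi(v) = 1 / (1 + e^(-v))."""
    return 1.0 / (1.0 + math.exp(-v))

print(round(phi(0.4851), 3))  # 0.619, matching the worked value above
```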

  • @rikzman4
    7 years ago

    Thank you so much... just passed the exam :)

  • @dr.md.atiqurrahman2748
    3 years ago

    No comments. Just Wow.

  • @homevideotutor
    10 years ago

    Pleasure

  • @WahranRai
    6 years ago

    Why not explain backpropagation with gradient descent without momentum? The update rule with momentum is complicated and not easy to understand.

  • @homevideotutor
    6 years ago

    Thank you for the great comment.

  • @WahranRai
    6 years ago

    In any case, very good video!!! I am waiting for the matrix form of your example; it will be useful to take advantage of matrix computation (in the case of many layers and neurons).

  • @adityarawat5063
    9 years ago

    Thanks man for saving my ass... really good explanation...

  • @homevideotutor
    9 years ago

    Aditya Rawat Thank you for the nice comment

  • @mauricioribeiro3547
    8 years ago

    github.com/mauricioribeiro/pyNeural/tree/master/3.4 uses this video as an example.

  • @sebastianpoliak
    8 years ago

    Finally a clear and straightforward explanation, thank you! :)