Artificial Neural Networks (Part 3) - Backpropagation
Support Vector Machines Video (Part 1): • Support Vector Machine...
Support Vector Machine (SVM) Part 2: Non Linear SVM • Non Linear Support Vec...
Other Videos on Neural Networks:
scholastic.teachable.com/p/pat...
Part 1: • Artificial Neural Netw... (Single Layer Perceptrons)
Part 2: • Artificial Neural Netw... (Multi Layer Perceptrons)
More Free Video Books: scholastic-videos.com/
In this lesson we first look at how to calculate the local gradients required for backpropagation. Then we use a simple 2-2-1 neural network to study a complete forward and backward pass, observing the weight modifications and the resulting error reduction.
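The forward/backward pass described above can be sketched in a few lines of code. This is a minimal illustration with assumed weights, inputs, target, and learning rate (not the exact numbers from the video), using the logistic sigmoid and omitting biases for brevity:

```python
import math

def sigmoid(v):
    # logistic activation phi(v) = 1 / (1 + e^(-v))
    return 1.0 / (1.0 + math.exp(-v))

# Assumed example values -- not the video's exact numbers.
x = [0.1, 0.7]                  # input vector
d = 1.0                         # desired output
eta = 0.25                      # learning rate

w_h = [[0.5, 0.3], [0.2, 0.4]]  # hidden weights w_h[input i][neuron j]
w_o = [0.6, 0.1]                # output-layer weights

# Forward pass through the 2-2-1 network.
v_h = [sum(w_h[i][j] * x[i] for i in range(2)) for j in range(2)]
y_h = [sigmoid(v) for v in v_h]
y = sigmoid(sum(w_o[j] * y_h[j] for j in range(2)))
error = d - y

# Backward pass: local gradients (deltas) for sigmoid units.
delta_o = error * y * (1.0 - y)
delta_h = [y_h[j] * (1.0 - y_h[j]) * w_o[j] * delta_o for j in range(2)]

# Weight updates: w <- w + eta * delta * (input to that weight).
w_o = [w_o[j] + eta * delta_o * y_h[j] for j in range(2)]
w_h = [[w_h[i][j] + eta * delta_h[j] * x[i] for j in range(2)] for i in range(2)]
```

A second forward pass with the updated weights yields a smaller error |d − y|, which is the behaviour the lesson demonstrates.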
Comments: 73
No one alive has ever tutored this abstract topic as well and as simply as you. Kudos! Were it not for you, I would have long given up on this topic. I keep watching the video over and over, again and again, and I don't get bored. It inspires me to believe that maths and science are deemed hard because of poorly written textbooks for beginners. I wish I could be taught by you for just a semester! I envy your students. You're just too good, marvellous! Keep it up!
Nobody has ever explained this topic better than you, my friend. Huge respect!!
Thank you very much for the tutorials. Most tutorials on this topic try to bury me in formulas and poorly written math without examples. They succeeded, until I finished these three videos.
@homevideotutor
8 years ago
Thank you for the nice comment
@devarshambani1969
7 years ago
Exactly what my problem was.
@homevideotutor
7 years ago
Thank you. My new site is scholastic.teachable.com
This is the clearest video I've seen! No one else does a worked example! Thank you so much! It's helped me heaps!
@homevideotutor
9 years ago
Doug Lee: Many thanks. Pleasure.
Finally, a person who can explain this topic... thank you, really nice and clear. I think the best part is when you actually do the feedforward and backprop example. Thanks again!
The best video tutorial on Neural Networks.
The best teacher, it is very clear! Thanks so much.
Man, you have really made the best tutorial video on the whole of KZread. You have really saved me :) Thanks for your efforts.
@homevideotutor
9 years ago
Ali Younes: Thanks. It is my pleasure. Currently putting up another site with simple images instead of videos: scholastic-images.webs.com
@meriemallab8511
8 years ago
+homevideotutor: Thank you very much, sir. The best explanation ever (y)
I finally understood how to calculate my hidden layers. Thank you very much for the inspiration :)
I had been looking for tutorials on the internet for a while, but almost everything I looked into was confusing and hard to understand. Then I found your tutorial (this video and also video 2; I haven't seen video 1 yet) and everything turned out to be much simpler (a lot of calculation and iterations, but very easy to understand the way you explained it). Well, thank you very much for providing this learning material, and have my like ^^
@homevideotutor
8 years ago
Dear Shann, many thanks for the great positive feedback!!! kzread.info/dash/bejne/mmarmq6uldK3mZs.html
Thank you for making this concept clear and understandable with real examples. Helps a lot, man!!
This is a very clear and basic example of backpropagation. Thanks for this, tutor.
Thank you very much, it's the best I have found on YouTube.
Thank you for this incredible tutorial video. You have just saved me from big trouble. The explanations were quite understandable, bravo.
Super explanation. Thank you so much!
Very nice explanation.
Thank you, it helps a lot. Can you add another lesson about MATLAB's nntool?
Amazing tutorial, sir. Thank you very much. Keep making more!
Thank you so much. You saved us, brother. Finally I got the feeling that I understand it.
@homevideotutor
7 years ago
Titu Ti: Thanks, bro.
Thanks, sir, for the very clear explanation in this tutorial. I am waiting for more videos about other Artificial Neural Network algorithms; please continue the tutorial, and if you can make some video tutorials about Deep Learning it would be very kind of you.
Very simply explained... thanks a ton :)
@homevideotutor
7 years ago
Thank you. Please refer to our new website for any further additions: scholastic.teachable.com. Best regards
Very useful!!!!
Thanks man
Thanks. I have a question: you have calculated for a particular input. What if we have more inputs, how does the algorithm work then? For each input, do we update the weights only once, or keep updating until the error reaches a threshold we consider acceptable, and then stop? And do we then feed the next input into the network? Another point is multiple outputs: it may be a multiclass problem, so how do we solve that?
@jimmorrison6613
9 years ago
I have the same question: how does it work for multiple inputs and outputs?
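One standard answer (this is the common online/stochastic training scheme, not necessarily the exact procedure the video prescribes): present the samples one at a time, update the weights after each sample, and repeat over the whole set for many epochs until the total error is acceptably small; with multiple outputs, each output neuron simply gets its own delta. A single-neuron sketch with assumed data and values:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Assumed toy data set: logical OR, as (inputs, desired output) pairs.
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [0.1, -0.1]   # assumed starting weights
b = 0.0           # bias
eta = 0.5         # learning rate

for epoch in range(2000):
    total_error = 0.0
    for x, d in samples:                # one weight update per sample
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        e = d - y
        delta = e * y * (1.0 - y)       # local gradient of the neuron
        w = [w[i] + eta * delta * x[i] for i in range(2)]
        b += eta * delta
        total_error += e * e
    if total_error < 0.05:              # stop when the epoch error is small
        break
```

After training, the neuron classifies all four samples on the correct side of 0.5.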
thank you so much
Thank you for the tutorials. I want to ask a question. I am working on an Artificial Neural Network (ANN) model in MATLAB. I have experimental data and want to use an ANN model to model it. I use the back-propagation learning algorithm in a feed-forward single-hidden-layer neural network, with the logistic sigmoid (logsig) transfer function for both the hidden layer and the output layer. I have completed my ANN model and obtained the weights. Now I want to reproduce the results manually using the weights, so that I can create a formula. I tried but could not do it. Can you help me with this subject, and do you have any document, PDF, or video about it? Thanks for your interest.
There's a mistake when you substitute the values for delta_o1. How do you get this value?
Excellent teaching, sir... thank you so much, sir.
How do I use this example in nntool? What training and learning functions should I use?
thanks so much
Dear Sir, wonderful video! But what are n and alpha (the training rate and mobility factor)? Who decides them, and what do they signify? If we don't know these values, what defaults can we use? I hope you will reply to this query.
Thanks, I finally understood this..!
Thanks again!
Thank you so much... Can you give me the slides of the lectures, for learning purposes? Thank you.
@homevideotutor
9 years ago
Naim Ali: Slides can be downloaded from scholastic-videos.com/ in PDF format.
thank you so much !
Thank you so much :)
I guess this makes sense if you already know why you are using back-prop.
1000 Thanks
What does the n = 0.25 (eta?) stand for next to the learning rate (alpha) of 0.0001? I don't seem to get it at all. I know this is an old video and I don't expect an answer, but it's worth a shot.
@homevideotutor
7 years ago
Dominic Leclerc: It is the learning rate of the ANN. neuralnetworksanddeeplearning.com/chap3.html
@ciddim
7 years ago
homevideotutor: Then what is alpha?
@homevideotutor
7 years ago
Please refer to the FAQ section of scholastic.teachable.com/p/pattern-classification for more information on this. Many thanks for the interest.
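For what it is worth, in common textbook notation eta is the learning rate and alpha is the momentum factor, so the update is Δw(n) = η·δ·x + α·Δw(n−1); the labels in the video may differ, so treat the roles and numbers below as assumptions:

```python
# Assumed roles and values: eta = learning rate, alpha = momentum factor.
eta = 0.25
alpha = 0.0001

w = 0.6           # an assumed starting weight
prev_dw = 0.0     # previous weight change, Delta w(n-1)

# Two update steps with assumed (local gradient, input) pairs.
for delta, x_in in [(0.05, 0.619), (0.04, 0.622)]:
    dw = eta * delta * x_in + alpha * prev_dw   # momentum update rule
    w += dw
    prev_dw = dw
```

With alpha this small the momentum term barely changes the step; larger alpha values smooth the weight trajectory across successive passes.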
Sir, what is the mobility factor?
@homevideotutor Can you explain the difference between the 'n' time step and the suffixes of the inputs? It's confusing.
@homevideotutor
7 years ago
The suffix in x1 means the first feature vector and x2 means the second feature vector. w11(n) means the value of the weight w11 at time instant n, and w11(n+1) means its value at time instant n+1. Hope that explains it.
@edwinvarghese
7 years ago
homevideotutor: How did you fit 'time' into the context? Like, in the literal sense? I am totally new to NNs, sorry.
@homevideotutor
7 years ago
That is the beauty of backpropagation. You go one pass forward and then one pass backward. Each backward pass is one time step, and each backward pass changes the weights so that the final error value (desired minus output, or d − y) is reduced in the next forward pass.
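That time-step view can be sketched with a toy one-neuron network (all numbers assumed): each backward pass takes w(n) to w(n+1), and the error d − y shrinks on the following forward pass.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

x, d, eta = 0.5, 1.0, 0.25    # assumed input, target, learning rate
w = 0.2                       # w(0), an assumed starting weight

errors = []
for n in range(5):            # each loop iteration is one time step n -> n+1
    y = sigmoid(w * x)        # forward pass with w(n)
    e = d - y                 # error d - y for this pass
    errors.append(e)
    delta = e * y * (1.0 - y) # local gradient
    w += eta * delta * x      # backward pass yields w(n+1)
```

Printing `errors` shows a strictly decreasing sequence, pass after pass.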
@edwinvarghese
7 years ago
homevideotutor: Cool, I think I got it. Thank you very much for replying, I really appreciate it. Also, do you know any good (good and basic, like yours) video tutorials/articles on RNNs? If yes, can you give me the link? Thanks in advance.
@homevideotutor
7 years ago
I did a search on YouTube. I do not know how good this is, but it looks simple: kzread.info/dash/bejne/nYGAzo-Ne8SrnsY.html If I create one in the future I will let you know. It will be hosted at scholastic.teachable.com
Sorry, I'm still learning and I'm really stuck on how to get the (exp) value. Thanks!
@homevideotutor
7 years ago
Thanks for the interest. Please use a calculator to compute e^(-v). If, for example, v = 0.4851, then phi(v) = 1/(1 + e^(-0.4851)) = 0.619. Hope that helps.
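The arithmetic in that reply can be checked in a couple of lines (0.619 is the value rounded to three decimals):

```python
import math

def phi(v):
    # logistic sigmoid, as used in the video
    return 1.0 / (1.0 + math.exp(-v))

value = phi(0.4851)   # approximately 0.619, matching the reply above
```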
@rikzman4
7 years ago
Thank you so much... just passed the exam :)
No comments. Just Wow.
Pleasure
Why not explain backpropagation with plain gradient descent, without momentum? The update rule with momentum is complicated and not easy to understand.
@homevideotutor
6 years ago
Thank you for the great comment.
@WahranRai
6 years ago
In any case, a very good video!!! I am waiting for the matrix form of your example; it will be useful to take advantage of matrix computation (in the case of many layers and neurons).
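Until such a follow-up exists, here is one way the same kind of 2-2-1 example can be written in matrix form; a sketch with assumed weights and inputs (not the video's numbers), using NumPy and omitting biases and momentum for brevity:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Assumed example values.
x = np.array([0.1, 0.7])        # input vector
d = np.array([1.0])             # desired output
eta = 0.25                      # learning rate

W1 = np.array([[0.5, 0.2],      # hidden weights, shape (2 hidden, 2 inputs)
               [0.3, 0.4]])
W2 = np.array([[0.6, 0.1]])     # output weights, shape (1 output, 2 hidden)

# Forward pass: one matrix-vector product per layer.
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)

# Backward pass: deltas as vectors, weight updates as outer products.
delta2 = (d - y) * y * (1.0 - y)          # output-layer local gradient
delta1 = h * (1.0 - h) * (W2.T @ delta2)  # hidden-layer local gradients
W2 += eta * np.outer(delta2, h)
W1 += eta * np.outer(delta1, x)
```

The same two lines per layer extend to any number of layers and neurons, which is the advantage of the matrix form.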
Thanks, man, for saving my ass... really good explanation...
@homevideotutor
9 years ago
Aditya Rawat: Thank you for the nice comment.
github.com/mauricioribeiro/pyNeural/tree/master/3.4 uses this video as an example.
Finally, a clear and straightforward explanation, thank you! :)