Vanishing and exploding gradients | Deep Learning Tutorial 35 (Tensorflow, Keras & Python)
The vanishing gradient is a common problem encountered while training a deep neural network with many layers. In an RNN this problem is especially prominent, as unrolling a layer through time makes the network behave like a deep network with many layers. In this video we discuss what vanishing and exploding gradients are in artificial neural networks (ANN) and recurrent neural networks (RNN).
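A quick numeric sketch (hypothetical, not code from the video) of why deep stacks of sigmoid layers suffer from vanishing gradients: the sigmoid's derivative is at most 0.25, so backpropagating through n such layers can shrink the gradient by a factor of up to 0.25^n.

```python
# Hypothetical sketch: sigmoid'(x) <= 0.25, so each extra layer
# can shrink the backpropagated gradient by a factor of 4.
max_sigmoid_grad = 0.25
for n_layers in (2, 10, 30):
    factor = max_sigmoid_grad ** n_layers
    print(f"{n_layers} layers -> gradient scaled by at most {factor:.2e}")
```

With 30 layers the upper bound is around 10^-19, so the early layers receive essentially no learning signal.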
Do you want to learn technology from me? Check codebasics.io/ for my affordable video courses.
Deep learning playlist: • Deep Learning With Ten...
Machine learning playlist : kzread.info?list...
#vanishinggradient #gradient #gradientdeeplearning #deepneuralnetwork #deeplearningtutorial #vanishing #vanishingdeeplearning
🌎 Website: codebasics.io/
🎥 Codebasics Hindi channel: / @codebasicshindi
#️⃣ Social Media #️⃣
🔗 Discord: / discord
📸 Instagram: / codebasicshub
🔊 Facebook: / codebasicshub
📱 Twitter: / codebasicshub
📝 Linkedin: / codebasics
❗❗ DISCLAIMER: All opinions expressed in this video are of my own and not that of my employers'.
Comments: 56
Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced
Amazing explanations. Thank you very much!
best Deep Learning playlist till date
Thank you for the great video. Clear and easy to understand.
Hi Sir, I appreciate your videos. They're really useful. Can you please make videos that show examples of RNNs and LSTMs, as well as videos on deep reinforcement learning?
Thank you very much, sir. Crystal clear explanation!
Amazing explanation, sir... Please make a video on how you understand and explain such complex topics so easily; that would help us educate ourselves 🙌🏻🙌🏻🙌🏻
@codebasics
3 years ago
good point. I will note it down.
Perfect Explanation! Thank You
@codebasics
2 years ago
Glad it was helpful!
The series of explanations, video by video, is awesome :)
4:36 is literally me, lol amazing explanation tho, thanks so much!
Very good explanation
Sir, can you please make a video on generative adversarial networks and a simple example project which implements a GAN?
Thank you.
Thanks a lot
Great explanation, sir 🔥🔥🔥... I wonder why you haven't reached a million subscribers...!!!
Great !
great!!
Hi, while training on a highly imbalanced dataset for binary classification, the weights of the final layer keep going to zero, leading to y_pred = 0 for all X. What are some reasons for this?
The explanation, video, and audio quality are all great. Please guide us on what software you have been using to record the videos.
@codebasics
1 year ago
Camtasia Studio. Blue Yeti mic.
Please release all videos as soon as possible. 🙏🏻
@codebasics
3 years ago
I am trying, Mandar. It takes time to produce these videos.
@muhammedrajab2301
3 years ago
@@codebasics I agree.
While training a deep neural network with 2 units in the final layer, using a sigmoid activation function for binary classification, both weights of the final layer become 0, leading to the same score for all inputs since the sigmoid then only uses the bias. What are some reasons for this?
Need survival analysis! Please do it.
Hi Dhaval, Great content! Really learning a lot from your videos. Do you upload your slides as well? Would be really helpful if I could go through slides when required. Thank you.
Thanks a lot. I think there is a typo in the slides, as a3 is missing; you have a2 followed by a4.
3:33 "Bigger small number" lol
4:35 This felt personal
Sir, in a CNN features are extracted automatically, but my project coordinator asks me which features the CNN actually extracts, and I am stuck on this question. Please help me with what I should answer. I always say "we don't need to hand-engineer any features; the CNN extracts them in the convolutional layers," but I don't think he was satisfied with this answer.
👍🏻👍🏻
Sir, how do GRU and LSTM solve the vanishing gradient problem? Is there any video on that? Kindly let me know.
Where can I get the presentation you're using?
🔥🔥🔥🔥👍👍
As the number of hidden layers grows, the gradient becomes very small and the weights hardly change.
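This summary can be seen numerically with a toy NumPy simulation (hypothetical, not the video's code): backpropagate a unit gradient through a stack of sigmoid layers and watch its norm shrink as it moves toward the input.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy network: 10 dense layers of width 8 with sigmoid activations.
n_layers, width = 10, 8
weights = [rng.normal(0, 0.5, (width, width)) for _ in range(n_layers)]

# Forward pass, caching each layer's activation for the backward pass.
a = rng.normal(size=width)
activations = [a]
for W in weights:
    a = sigmoid(W @ a)
    activations.append(a)

# Backward pass: push a unit gradient back and record its norm per layer.
grad = np.ones(width)
norms = []
for W, a in zip(reversed(weights), reversed(activations[1:])):
    grad = W.T @ (grad * a * (1 - a))  # chain rule: sigmoid' then matmul
    norms.append(np.linalg.norm(grad))

print("gradient norm, from output layer back to input layer:")
for i, n in enumerate(norms):
    print(f"  after layer {n_layers - i}: {n:.3e}")
```

The norms printed last (closest to the input) are orders of magnitude smaller than the first, which is exactly why the early layers learn so slowly.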
I have recently started your data science tutorials, especially the Python and statistics ones. I have no fear of programming concepts, but the problem comes with machine learning, which brings me back to my school days of algebra, matrices, and calculus. Is there a short path that can help me cover those areas? Can I be a data scientist while being just average at math?
@codebasics
3 years ago
I would say, as and when you encounter a math topic, just try to get that topic clarified. I am in fact going to make a full tutorial series on "math for ML". Stay tuned!
@shanglee643
3 years ago
@@codebasics Holy moly! I want to hug you, teacher.
If the weights of this single layer are the same across time steps in an RNN, then why backpropagate all the way to the start? Why not use only the last word to get the weights?
Missing the exercise questions
Make a video on optimisers
@codebasics
3 years ago
point noted.
@jaysoni7812
3 years ago
@@codebasics 😂 Thank you, sir 🙏. Last time I requested vanishing gradients and you made a video on it; thanks again.
@jaysoni7812
3 years ago
@@codebasics I hope you will cover all the optimisers, like GD, SGD, mini-batch SGD, SGD with momentum, Adagrad, Adadelta, RMSprop, and Adam, if possible.
Hi everyone, I have one doubt. As said in the video, we take the derivative of the loss with respect to the weights, but the loss is a constant value and the derivative of a constant is zero, so how are the weights updated? I know it's a silly question, but can anyone please answer? It would be very helpful.
@koushikramaravamudhan8380
1 year ago
No, the loss is not a constant; it is a function of the weights and biases and will change if you alter them, so its derivative with respect to the weights is generally non-zero.
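To make this reply concrete, here is a tiny made-up one-weight example (not from the video): the loss depends on the weight w, so dL/dw is non-zero, and one gradient-descent step reduces the loss.

```python
# Hypothetical one-weight example: the loss is a function of w,
# so its derivative with respect to w is not zero.
x, y = 2.0, 10.0                    # a single training sample
loss = lambda w: (w * x - y) ** 2   # squared-error loss

w = 3.0
grad = 2 * x * (w * x - y)          # chain rule: dL/dw = 2x(wx - y)
print(loss(w), grad)                # 16.0 -16.0

w = w - 0.1 * grad                  # one gradient-descent step
print(loss(w))                      # ~0.64, i.e. the loss went down
```

The value of the loss at the current weights is just a snapshot; the derivative is taken of the loss *function*, which varies as w varies.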
Sir, how many tutorials remain to complete this deep learning playlist? Or how much of the playlist have we covered so far, in percentage terms?
@codebasics
3 years ago
We have covered around 90% of the tutorials. I will publish more videos on RNNs, and then we will start deep learning projects.
@rohankushwah5192
3 years ago
@@codebasics Eagerly waiting for DL projects 😋
Brother wrote Tensorflow, Keras and Python in the title, but the last three videos don't have any coding tutorial... not enough for me to get started.
Sir, please include coding along with the videos.
" The vanishing gradient is like a dumb student in a class who is hardly learning anything", I think, this example doesn't suits in your mouth.
I protest on behalf of dumb students.. kadi ninda from my side.