Vanishing and exploding gradients | Deep Learning Tutorial 35 (Tensorflow, Keras & Python)

Vanishing gradients are a common problem encountered while training a deep neural network with many layers. In the case of an RNN the problem is especially prominent, since unrolling the network in time makes it behave like a deep neural network with many layers. In this video we discuss what vanishing and exploding gradients are in an artificial neural network (ANN) and in a recurrent neural network (RNN).
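To make the mechanics concrete, here is a minimal NumPy sketch (my own illustration, not code from the video). Backpropagation multiplies one activation derivative and one weight per layer; a sigmoid's derivative is at most 0.25, so with typical weights the product shrinks exponentially with depth (vanishing), while unusually large weights can make it blow up instead (exploding).

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    grad = 1.0
    for layer in range(20):              # 20 layers, or 20 unrolled RNN time steps
        a = sigmoid(rng.normal())        # activation at this layer
        w = rng.normal()                 # a typical weight
        grad *= a * (1.0 - a) * w        # chain rule: one factor per layer
        print(f"layer {layer + 1:2d}: |gradient| = {abs(grad):.3e}")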
Do you want to learn technology from me? Check out codebasics.io/ for my affordable video courses.
Deep learning playlist: • Deep Learning With Ten...
Machine learning playlist: kzread.info?list...
#vanishinggradient #gradient #gradientdeeplearning #deepneuralnetwork #deeplearningtutorial #vanishing #vanishingdeeplearning
🌎 Website: codebasics.io/
🎥 Codebasics Hindi channel: / @codebasicshindi
#️⃣ Social Media #️⃣
🔗 Discord: / discord
📸 Instagram: / codebasicshub
🔊 Facebook: / codebasicshub
📱 Twitter: / codebasicshub
📝 Linkedin: / codebasics
❗❗ DISCLAIMER: All opinions expressed in this video are my own and not those of my employer.

Comments: 56

  • @codebasics · 2 years ago

    Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced

  • @meilinlyu3572 · 1 year ago

    Amazing explanations. Thank you very much!

  • @samarsinhsalunkhe7529 · 1 year ago

    Best deep learning playlist to date.

  • @anonymousAI-pr2wq · 2 years ago

    Thank you for the great video. Clear and easy to understand.

  • @eitanamos5867 · 3 years ago

    Hi Sir, I appreciate your videos; they're really useful. Could you please make videos showing examples of RNNs and LSTMs, as well as videos on deep reinforcement learning?

  • @amirhossein.roodaki · 3 months ago

    Thank you very much, sir. Crystal clear explanation!

  • @hardikvegad3508 · 3 years ago

    AMAZING EXPLANATION SIR.... Please make a video on how you understand and explain such complex topics so easily; that will help us educate ourselves🙌🏻🙌🏻🙌🏻

  • @codebasics · 3 years ago

    good point. I will note it down.

  • @kishanikandasamy · 2 years ago

    Perfect Explanation! Thank You

  • @codebasics · 2 years ago

    Glad it was helpful!

  • @suryanshpatel4750 · 3 months ago

    The series of explanations, video by video, is awesome :)

  • @Acampandoconfrikis · 3 years ago

    4:36 is literally me, lol amazing explanation tho, thanks so much!

  • @akhileshkarra384 · 2 years ago

    Very good explanation

  • @saifsd8267 · 3 years ago

    Sir, can you please make a video on generative adversarial networks (GANs) and a simple example project that implements a GAN?

  • @jongcheulkim7284 · 2 years ago

    Thank you.

  • @md.alamintalukder3261 · 1 year ago

    Thanks a lot

  • @sahith2547 · 2 years ago

    Great explanation sir 🔥🔥🔥 ... I wonder why you haven't reached a million subscribers yet!!!!

  • @emmanuelmoupojou1505 · 2 years ago

    Great!

  • @harshalbhoir8986 · 1 year ago

    great!!

  • @haneulkim4902 · 2 years ago

    Hi, while training on a highly imbalanced dataset for binary classification, the weights of the final layer keep going to zero, leading to y_pred = 0 for all X. What are some reasons for this?

  • @n.ilayarajahicetstaffit3709 · 1 year ago

    THE EXPLANATION, VIDEO, AND AUDIO QUALITY ARE VERY GREAT. PLEASE GUIDE US ON WHAT KIND OF SOFTWARE YOU HAVE USED FOR RECORDING THE VIDEO.

  • @codebasics · 1 year ago

    Camtasia Studio. Blue Yeti mic.

  • @mandarchincholkar5955 · 3 years ago

    Please release all videos as soon as possible. 🙏🏻

  • @codebasics · 3 years ago

    I am trying, Mandar. It takes time to produce these videos.

  • @muhammedrajab2301 · 3 years ago

    @@codebasics I agree.

  • @haneulkim4902 · 2 years ago

    While training a deep neural network for binary classification with 2 units in the final layer and a sigmoid activation function, both weights of the final layer become 0, leading to the same score for all inputs since the sigmoid then uses only the bias. What are some reasons for this?

  • @vetrijayakumaralumni376 · 3 years ago

    Need survival analysis! Please do it.

  • @tahahusain8577 · 3 years ago

    Hi Dhaval, great content! I'm really learning a lot from your videos. Do you upload your slides as well? It would be really helpful if I could go through the slides when required. Thank you.

  • @walidmaly3 · 3 years ago

    Thanks a lot. I think there is a typo in the slides, as a3 is missing: you have a2 followed by a4.

  • @ChessLynx · 2 years ago

    3:33 "Bigger small number" lol

  • @Shannxy · 2 years ago

    4:35 This felt personal

  • @ronyjoseph7868 · 3 years ago

    Sir, in a CNN features are extracted automatically, but my project coordinator asked me what features the CNN automatically extracts, and I am stuck on this question. Please help me with what I should answer. I always say "we don't need to specify any features; the CNN extracts them in the convolution layers," but I don't think he was satisfied with that answer.

  • @richasharmav · 3 years ago

    👍🏻👍🏻

  • @piyalikarmakar5979 · 3 years ago

    Sir, how do GRU and LSTM solve the vanishing gradient problem? Is there any video on that? Kindly let me know.

  • @jojushaji3010 · 2 years ago

    Where can I get the presentation you're using?

  • @anandailyasa2530 · 2 years ago

    🔥🔥🔥🔥👍👍

  • @porrasbrand · 2 years ago

    As the number of hidden layers grows, the gradient becomes very small and the weights will hardly change.
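    To see this directly, here is a hedged TensorFlow sketch (my own illustration, not from the video; the layer sizes and random data are made up): build a deliberately deep stack of sigmoid layers and compare the gradient norms of the first and last layers with tf.GradientTape.

        import tensorflow as tf

        # Toy data and a deliberately deep sigmoid network.
        model = tf.keras.Sequential(
            [tf.keras.layers.Dense(16, activation="sigmoid") for _ in range(10)]
            + [tf.keras.layers.Dense(1)]
        )
        x = tf.random.normal((32, 16))
        y = tf.random.normal((32, 1))

        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(model(x) - y))
        grads = tape.gradient(loss, model.trainable_variables)

        # Variables alternate kernel, bias per layer: grads[0] is the first
        # layer's kernel gradient, grads[-2] is the last layer's.
        print("first layer grad norm:", tf.norm(grads[0]).numpy())
        print("last layer grad norm :", tf.norm(grads[-2]).numpy())

    The first layer's gradient norm typically comes out orders of magnitude smaller, which is exactly why its weights hardly change.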

  • @taabarrimahaganacsigaiyoti6356 · 3 years ago

    I have recently started your data science tutorials; in particular I have been learning Python and statistics. I have no fear of programming concepts, but the problem comes with machine learning, which brings me back to my school days of algebra, matrices, and calculus. Is there a short path that can help me cover those areas? Can I be a data scientist while being only average at math?

  • @codebasics · 3 years ago

    I would say, as and when you encounter a math topic, just try to get that topic clarified. I am in fact going to make a full tutorial series on "math for ML". Stay tuned!

  • @shanglee643 · 3 years ago

    @@codebasics Holy moly! I want to hug you, teacher.

  • @manojsamal7248 · 2 years ago

    If the weights of this single layer are the same across time steps in an RNN, then why backpropagate all the way back? Why not use only the last word and get the weights?

  • @joyanbhathena7251 · 3 years ago

    Missing the exercise questions

  • @jaysoni7812 · 3 years ago

    Make a video on optimisers

  • @codebasics · 3 years ago

    point noted.

  • @jaysoni7812 · 3 years ago

    @@codebasics 😂 Thank you sir 🙏 Last time I requested vanishing gradients and you made a video on it; thanks again.

  • @jaysoni7812 · 3 years ago

    @@codebasics I hope you will cover all optimizers like GD, SGD, mini-batch SGD, SGD with momentum, Adagrad, Adadelta, RMSprop, and Adam if possible.

  • @yourentertainer19 · 1 year ago

    Hi everyone, I have one doubt. As said in the video, many times we take the derivative of the loss with respect to the weights, but the loss is a constant value and the derivative of a constant is zero, so how are the weights updated? I know it's a silly question, but can anyone please answer? It would be very helpful.

  • @koushikramaravamudhan8380 · 1 year ago

    No, the loss is not a constant; it is a function of the weights and biases, so it changes if you alter them.
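    A tiny worked sketch of that point (my own, with made-up numbers): for a single weight w, the loss L(w) = (w*x - y)^2 is a function of w, not a constant, so dL/dw is generally nonzero and drives the gradient descent update.

        # One-weight example: the loss changes as w changes.
        x, y = 2.0, 6.0      # one hypothetical training pair
        w = 1.0              # initial weight
        lr = 0.1             # learning rate

        for step in range(5):
            loss = (w * x - y) ** 2        # L(w) = (w*x - y)^2
            grad = 2 * (w * x - y) * x     # dL/dw via the chain rule
            w -= lr * grad                 # gradient descent update
            print(f"step {step}: w = {w:.4f}, loss = {loss:.4f}")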

  • @rohankushwah5192 · 3 years ago

    Sir, how many tutorials remain to complete this deep learning playlist? Or, in terms of percentage, how much of the playlist have we covered so far?

  • @codebasics · 3 years ago

    We have covered around 90% of the tutorials. I will publish more videos on RNNs, and then we will start deep learning projects.

  • @rohankushwah5192 · 3 years ago

    @@codebasics Eagerly waiting for the DL projects 😋

  • @nisargbhatt4967 · 1 year ago

    Brother put Tensorflow, Keras and Python in the title, but there is no tutorial in the last three videos.. not enough for me to get started.

  • @r21061991 · 3 years ago

    Sir, please include coding along with the videos.

  • @somdc6095 · 2 years ago

    " The vanishing gradient is like a dumb student in a class who is hardly learning anything", I think, this example doesn't suits in your mouth.

  • @shashankjaiswal1298 · 2 years ago

    I protest on behalf of dumb students.. strong condemnation (kadi ninda) from my side.
