RMSprop Optimizer Explained in Detail | Deep Learning

RMSprop is an optimization technique that reduces the time taken to train a model in Deep Learning.
In mini-batch gradient descent, the learning path zig-zags instead of heading straight for the minimum, so time is wasted on the oscillations. RMSprop increases the horizontal movement and reduces the vertical movement, making the zig-zag path straighter and thereby cutting the time taken to train the model.
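As a rough sketch of the update rule, here is a minimal NumPy version using the Sdw/Sdb notation from the video (the hyperparameter values are common defaults I am assuming, not values taken from the video):

    import numpy as np

    def rmsprop_step(w, b, dw, db, Sdw, Sdb, lr=0.001, beta=0.9, eps=1e-8):
        # Exponentially weighted averages of the squared gradients
        # (Sdw and Sdb are typically initialized to zero).
        Sdw = beta * Sdw + (1 - beta) * dw**2
        Sdb = beta * Sdb + (1 - beta) * db**2
        # Dividing by sqrt(S) damps directions with large oscillating
        # gradients (the vertical zig-zag) and relatively boosts
        # directions with small gradients (the horizontal progress).
        w = w - lr * dw / (np.sqrt(Sdw) + eps)
        b = b - lr * db / (np.sqrt(Sdb) + eps)
        return w, b, Sdw, Sdb
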
The concept behind RMSprop can be difficult to grasp, so in this video I have done my best to give you a detailed explanation of the RMSprop optimizer.
➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
▶ Momentum Optimizer in Deep Learning: kzread.info/dash/bejne/iJeZmtlqo9yWlZs.html
➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
▶ Watch Next Video on Adam Optimizer: kzread.info/dash/bejne/pqmJl5tmd5S2l7g.html
➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
✔ Improving Neural Network Playlist: kzread.info/dash/bejne/hYN9lZt9dautg84.html
✔ Complete Neural Network Playlist: kzread.info/dash/bejne/qKisk8uwnbLeYZM.html
✔ Complete Logistic Regression Playlist: kzread.info/dash/bejne/h2Wjz9xpcpyshNo.html
✔ Complete Linear Regression Playlist: www.youtube.com/watch?v=mlk0r...
➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
Timestamps:
0:00 Agenda
1:42 RMSprop Optimizer Explained
5:37 End
➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
Subscribe to my channel, because I upload a new Machine Learning video every week: kzread.info/dron/JFA.html...

Comments: 28

  • @user-tt1ox5ls2d
    @user-tt1ox5ls2d 6 months ago

    Thank you so much for uploading these videos, your explanations are easily understandable

  • @jyotsanaj1425
    @jyotsanaj1425 1 year ago

    Such a clear explanation

  • @kasyapdharanikota8570
    @kasyapdharanikota8570 1 year ago

    Your channel is highly underrated, it deserves a much larger audience

  • @CodingLane
    @CodingLane 1 year ago

    Thank you for this considerate comment 😇

  • @vishalchandra1450
    @vishalchandra1450 2 years ago

    Hands down, best explanation ever :)

  • @CodingLane
    @CodingLane 2 years ago

    Haha… Thank you so much 😄

  • @christopherwashington9448
    @christopherwashington9448 1 year ago

    Hello, thanks for the info. But you didn't mention the purpose of squaring the gradient.

  • @himtyagi9740
    @himtyagi9740 2 years ago

    Waiting for SVM since you explain so nicely... thanks!

  • @CodingLane
    @CodingLane 2 years ago

    Thank you! I will upload the SVM video after finishing the RNN series.

  • @KorobkaAl
    @KorobkaAl 2 years ago

    You are the best, thanks dude 🤙

  • @CodingLane
    @CodingLane 2 years ago

    You’re welcome 😇

  • @Vinay192
    @Vinay192 2 years ago

    Hi Sir, any plans to upload videos on support vector machines? If yes, then please try to cover the mathematical background of SVM as much as you can... Anyway, your content is really appreciated... Thanks!

  • @CodingLane
    @CodingLane 2 years ago

    Thank you so much for your suggestion! Yes, I will be making a video on SVM and covering the mathematical details behind it.

  • @syedalimoajiz1179
    @syedalimoajiz1179 1 year ago

    How do you initialize the values of Sdw and Sdb?

  • @marccasals6366
    @marccasals6366 2 years ago

    You're incredible

  • @CodingLane
    @CodingLane 2 years ago

    Thank You Marc! Glad you found my videos valuable.

  • @mugomuiruri2313
    @mugomuiruri2313 7 months ago

    good

  • @Maciek17PL
    @Maciek17PL 2 years ago

    If the situation with w and b were the opposite, i.e. the gradient values on the vertical axis were small and the values on the horizontal axis were large, would RMSprop slow down the training by making the vertical-axis values larger and the horizontal-axis values smaller?

  • @CodingLane
    @CodingLane 2 years ago

    No no… it will still make the training faster. Vertical vs. horizontal is just an example I am giving. Realistically, it can be in any direction, and in every direction it's going to work the same way.
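
    (A minimal sketch of this point in NumPy, with made-up gradient values: the scaling is computed per parameter, so whichever component happens to be large gets damped, regardless of which axis it lies on.)

      import numpy as np

      beta, eps = 0.9, 1e-8
      for grad in (np.array([9.0, 0.1]), np.array([0.1, 9.0])):  # axes swapped
          S = (1 - beta) * grad**2           # one accumulation step from S = 0
          step = grad / (np.sqrt(S) + eps)   # RMSprop's scaled step
          print(step)                        # ~[3.16, 3.16] in both cases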

  • @ueslijtety
    @ueslijtety 1 year ago

    Hi, is it correct that you set the vertical coordinate to w and the horizontal coordinate to b? I think it should be the other way around, because whether the goal can be reached in the end depends on w rather than b.

  • @CodingLane
    @CodingLane 1 year ago

    Hi… we set the vertical axis to neither w nor b, it's just an example. In a model there are many axes, not just x and y, when we have more than 2 features. So a model can take any axis as any w or b, and it doesn't matter which axis is for what.

  • @ueslijtety
    @ueslijtety 1 year ago

    @@CodingLane Thanks! So in practice this is not going to be a 2D planar image but a multidimensional one? And which parameters determine the point of convergence in gradient descent, w or b?

  • @minister1005
    @minister1005 10 months ago

    So I guess what he means is that if you get a high gradient, the parameter gets updated by a smaller amount, and if you get a low gradient, it gets updated by a larger amount.
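
    (A small numeric sketch of that intuition, with made-up gradient values: once the running average settles, a large gradient gets divided by a large sqrt(S) and a small gradient by a small one, so the effective step size evens out.)

      import numpy as np

      beta, eps = 0.9, 1e-8
      for g in (10.0, 0.1):                  # a consistently high vs. low gradient
          S = 0.0
          for _ in range(50):                # let the running average settle
              S = beta * S + (1 - beta) * g**2
          print(g, g / (np.sqrt(S) + eps))   # effective step is ~1.0 in both cases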

  • @yahavx
    @yahavx 1 year ago

    What is (dw)^2?

  • @keshavmaheshwari521
    @keshavmaheshwari521 2 years ago

    What is S?

  • @MrMadmaggot
    @MrMadmaggot 1 year ago

    Man, what kind of loss should I use when training with the RMSprop optimizer?

  • @CodingLane
    @CodingLane 1 year ago

    You can use any loss function

  • @user-rx9kq2wi8n
    @user-rx9kq2wi8n 9 months ago

    Explain ADMM also