Training Softmax Classifier (C2W3L09)

Take the Deep Learning Specialization: bit.ly/2VMuKZT
Check out all our courses: www.deeplearning.ai
Subscribe to The Batch, our weekly newsletter: www.deeplearning.ai/thebatch
Follow us:
Twitter: / deeplearningai_
Facebook: / deeplearninghq
Linkedin: / deeplearningai

Comments: 22

  • @IgorAherne · 6 years ago

    Thank you very much Andrew! At first I was struggling to understand the previous videos (backprop, SGD), but now I find your explanations are actually some of the best, carefully given with love for the students. 5:00 was very helpful - my jaw dropped!

  • @miftahbedru543 · 6 years ago

    As always, awesome illustration! The secret sauce of your videos is the intuition that simplifies the maths.

  • @teampluu4195 · 5 years ago

    Awesome video! Thank you very much!

  • @chriswyatt66 · 5 years ago

    Thank you Andrew. This was very helpful. The bit I missed in the last two videos was how to get from the 2D input (x, y) to the 4 classes. Is that done by the hidden layers?

  • @billykotsos4642 · 4 years ago

    Yes. This is soooo good !!!!
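
To make the answer above concrete, here is a minimal numpy sketch (my own, not from the video; layer sizes and variable names are illustrative) of how hidden layers take a 2-D input and turn it into 4 softmax class probabilities:

    import numpy as np

    def softmax(z):
        # Subtract the column-wise max for numerical stability before exponentiating.
        e = np.exp(z - z.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)

    np.random.seed(0)
    n_x, n_h, n_classes, m = 2, 16, 4, 5        # 2-D input, 16 hidden units, 4 classes, 5 examples
    X = np.random.randn(n_x, m)                 # inputs, shape (2, m)

    W1 = np.random.randn(n_h, n_x) * 0.01       # hidden layer maps 2 features to 16 activations
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_classes, n_h) * 0.01 # output layer maps 16 activations to 4 class scores
    b2 = np.zeros((n_classes, 1))

    A1 = np.maximum(0, W1 @ X + b1)             # ReLU hidden activations
    Z2 = W2 @ A1 + b2                           # 4 raw scores per example
    Y_hat = softmax(Z2)                         # 4 class probabilities per example

    print(Y_hat.shape)        # (4, 5)
    print(Y_hat.sum(axis=0))  # each column sums to 1

So the hidden layers learn the intermediate features, and it is the final 4-unit layer with a softmax activation that converts those features into class probabilities.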

  • @gabrielwong1991 · 3 years ago

    What happens if your output is continuous, like a house price? Do you set boundaries and bin the values into each unit?
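
For a continuous target you would normally treat the task as regression (a single linear output unit), but the binning idea in the question also works if you want to keep a softmax output. A rough sketch of that binning step (not from the video; the bin edges are made up):

    import numpy as np

    prices = np.array([120_000, 245_000, 310_000, 480_000, 950_000])  # continuous targets
    bin_edges = np.array([200_000, 400_000, 600_000])                 # illustrative boundaries

    # np.digitize assigns each price to one of 4 bins (classes 0..3).
    classes = np.digitize(prices, bin_edges)
    print(classes)  # [0 1 1 2 3]

    # One-hot encode so the labels match a 4-unit softmax output layer.
    Y = np.eye(4)[classes].T   # shape (4, m)
    print(Y)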

  • @miguelpetrarca5540 · 5 years ago

    Is this the same cost function we would use if we chose a sigmoid activation in the output layer?

  • @aayushpaudel2379 · 3 years ago

    For a sigmoid activation, it's better to use the binary cross-entropy loss.
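
For reference, a rough numpy sketch (my own, not from the video) of the two losses being discussed: multi-class cross-entropy for a softmax output versus binary cross-entropy for a single sigmoid output.

    import numpy as np

    def softmax_cross_entropy(Y, Y_hat):
        # Multi-class cross-entropy: -sum_j y_j * log(y_hat_j), averaged over examples.
        m = Y.shape[1]
        return -np.sum(Y * np.log(Y_hat)) / m

    def binary_cross_entropy(y, y_hat):
        # Binary cross-entropy for a single sigmoid output unit.
        m = y.shape[1]
        return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)) / m

    # One-hot labels and softmax outputs for 3 examples, 4 classes (columns sum to 1).
    Y = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]])
    Y_hat = np.array([[0.7, 0.1, 0.1], [0.1, 0.6, 0.2], [0.1, 0.2, 0.6], [0.1, 0.1, 0.1]])
    print(softmax_cross_entropy(Y, Y_hat))

    # Binary labels and sigmoid outputs for 3 examples.
    y = np.array([[1, 0, 1]])
    y_hat = np.array([[0.9, 0.2, 0.7]])
    print(binary_cross_entropy(y, y_hat))

With one-hot labels, the softmax version reduces to minus the log of the probability assigned to the correct class, which is the form shown in the video.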

  • @sandipansarkar9211 · 3 years ago

    Nice explanation. Need to watch again.

  • @ermano5586 · 11 months ago

    I am watching it again

  • @cyrilgarcia2485 · 4 years ago

    So this type of loss function is called cross entropy

  • @aayushpaudel2379 · 3 years ago

    Multi-Class Cross Entropy !

  • @youtubeadventurer1881 · 5 years ago

    Yay, I managed to derive it myself. I'm certainly not an expert in calculus though!

  • @taiyaki6982 · 5 years ago

    Are you talking about dz^[L] = y_hat - y? For some reason, when I try to derive it I always get y_hat*y - y, not y_hat - y.

  • @compilationsmania451 · 4 years ago

    @taiyaki6982 Did you find out what your mistake was? Because that's what I'm getting too.
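
For anyone stuck on the same step, here is a sketch of the standard derivation (notation assumed from the course: z is the output-layer pre-activation, y is one-hot, y_hat = softmax(z)):

    % Loss, softmax, and the softmax Jacobian, per component i of z:
    \mathcal{L} = -\sum_j y_j \log \hat{y}_j, \qquad
    \hat{y}_j = \frac{e^{z_j}}{\sum_k e^{z_k}}, \qquad
    \frac{\partial \hat{y}_j}{\partial z_i} = \hat{y}_j(\delta_{ij} - \hat{y}_i).

    % Chain rule over ALL j, then use \sum_j y_j = 1 (one-hot labels):
    \frac{\partial \mathcal{L}}{\partial z_i}
      = -\sum_j \frac{y_j}{\hat{y}_j}\,\hat{y}_j(\delta_{ij} - \hat{y}_i)
      = -\sum_j y_j(\delta_{ij} - \hat{y}_i)
      = -y_i + \hat{y}_i \sum_j y_j
      = \hat{y}_i - y_i.

Keeping only the j = i term gives -y_i(1 - y_hat_i) = y_hat_i*y_i - y_i, which is exactly the result in the question above; the j ≠ i cross-terms contribute y_hat_i times the remaining y_j, and together with sum_j y_j = 1 they give dz^[L] = y_hat - y.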

  • @checkpeck · 6 years ago

    All of this could have been communicated much more simply, rather than making it complex with all the jargon.

  • @dpacmanh · 6 years ago

    That's exactly what a person who didn't follow the series would feel :)

  • @LunnarisLP · 6 years ago

    It was really easily communicated lol..

  • @banipreetsinghraheja8529 · 6 years ago

    Uhmm, he doesn't give a crash course on a particular topic, unlike Siraj Raval. You ought to go through all of his videos to get what he is saying. You're probably feeling like a person who couldn't catch up on the earlier episodes and now isn't able to follow the plot of the series.

  • @rubinluitel158 · 4 years ago

    you have to go through the previous videos to understand this...

  • @Jirayu.Kaewprateep · 1 year ago

    📺👤💬 Yui you do not have to explain much in detail they are watching because of me and they understand the concepts.
    🥺💬 Yes, I know beginning I noted it down for myself but as you see they keep trying to yell at me I do not make a bad word but reply and inform you they are doing but with my attention and I confirm that I am listening with my attention 100% I tell them to stop but they are not that is why I tell all that I am listening with full potential please stop that is all.
    🥺💬 Your lessons are value for attention learning.
    📺👤💬 Backward propagation and the summation of derivative /
    📺👤💬 We do this way along add some sentence or question somebody specific to think about it that loss focus and continue
    🥺💬 Yes and outside they are not stopping but do not worry this lesson is not too hard or I read it before watching this VDO. 🏍💬‼ 💬‼💬‼💬‼💬‼
    📺👤💬 That is it SoftMax regressions algorithms
    🥺💬 It is one of the Backward propagation algorithms, it is the same terms when the primary is very large and the exponential is small until you consider it is a small value update.
    🥺💬 It is different than the SoftMax layer we use often, I think you update the backward propagation methods. Is it provide better results ⁉
    🥺💬 Last time the regression you explain about update weight as weight = A x I + ( 1 - B ) where it is a linear relationship but you need to explain about softmax update or some questions on StackOverflow asking about it. It is possible and it is the input matrices if you look at loss estimation functions but I think you make this VDO must to have some contents that innovations.
    🐑💬 It effects by creating small values from input matrices by dividing them from the estimates target value, now the weights update does not only increase or decrease value parameters from the training function but its relationship remains as the SoftMax layer does.
    🐐💬 Do you mean even networks have likely and unlikely ⁉
    👧💬 Yes of course and how about the dropout may be he aim to prunes the dropout from the leafs nodes by priority.