Forward propagation in training neural networks step by step

This video presents the first step in training a neural network: forward propagation.
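As a companion to the video topic, here is a minimal sketch of one forward pass through a network with two inputs, two hidden neurons, and one output. The weights, biases, and sigmoid activation below are illustrative assumptions, not the exact values used in the video:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One forward pass through a single-hidden-layer network."""
    z1 = W1 @ x + b1             # weighted sums into the hidden layer
    a1 = 1.0 / (1.0 + np.exp(-z1))  # sigmoid activation of the hidden layer
    y_pred = W2 @ a1 + b2        # linear output (regression: no activation)
    return y_pred

# Example: two inputs, two hidden neurons, one output (made-up values)
x = np.array([1.0, 2.0])
W1 = np.array([[0.1, 0.2],
               [0.3, 0.4]])
b1 = np.array([0.5, 0.5])
W2 = np.array([[0.5, 0.6]])
b2 = np.array([0.1])

y = forward(x, W1, b1, W2, b2)
```

With these numbers, z1 = [1.0, 1.6], the sigmoid squashes them to roughly [0.731, 0.832], and the output is their weighted sum plus the bias, about 0.965.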

Comments: 40

  • @quadrialli3715 · 1 year ago

    Beautiful video. The patience, the calmness in your voice in these videos. Thank you so much

  • @PrashantThakre · 2 years ago

    This is the best video on forward propagation on YouTube. Thanks for posting such videos.

  • @bobdillon1138 · 9 months ago

    Simply the best explanations I have found. If I could give part one and part two a hundred likes, I would... Please do more AI content; you have a gift for teaching.

  • @fundatamdogan · 2 years ago

    I've never seen such a perfect explanation. I faced a problem while writing my kernel and searched everywhere for a solution to understand the reason for the error I got. Then I saw this video and started to listen carefully. Thank you so much, sir.

  • @bevansmithdatascience9580 · 2 years ago

    I'm very happy to hear this :) Good luck to you

  • @myprofile6668 · 2 years ago

    Thank you so much for teaching in such a simple way that an average student can follow. It was very clear, making it easy to understand and learn NNs. Love from Pakistan😀

  • @Edin12n · 1 year ago

    Part 1 and Part 2 are the best explanations of these subjects around. Thanks so much Bevan. You have a real talent for explaining a difficult subject in a way that’s as easy as possible to grasp. Brilliant videos

  • @bevansmithdatascience9580 · 1 year ago

    You're very welcome!

  • @karthikrajeshwaran1997 · 4 months ago

    Most amazing explanation

  • @geld5220 · 1 year ago

    the best explanation so far...

  • @dhishsaxena5746 · 11 months ago

    Thanks a lot Bevan. Excellent insights with simplicity. Kudos!

  • @mmacaulay · 2 years ago

    Thank you so much. This the first explanation of forward propagation in neural networks, that I actually understood.

  • @bevansmithdatascience9580 · 2 years ago

    Awesome to hear. Good luck!

  • @wd8222 · 1 year ago

    Excellent presentation! I wish for an intro to the Transformer architecture, which today replaces many NNs (CNN, RNN, …).

  • @kdSU30 · 1 year ago

    Dear Bevan, you have presented both the forward and backward propagation concepts in an exceptional manner, especially the example that you have chosen for doing so. The majority of ANN tutorials stick to 'cat-dog' kinds of categorical examples. I have one query though. Do you recommend the use of a non-linear activation function in the output layer? And if yes, which non-linear activation function would you prefer for continuous problems? The thing to keep in mind while choosing this output layer activation function is that it should be able to provide an output which can exceed 1 or be less than 0 or -1. A sigmoid or tanh activation function in the output layer will not allow us to have such 'greater than 1 or less than 0 / -1' outputs. I hope to hear from you. Thanks!

  • @TheKwame83 · 2 years ago

    I'm currently taking an AI course but I must say your explanation is more understandable. Thank you.

  • @EricD_192 · 2 years ago

    The best content I have found, thanks for such a detailed explanation!

  • @bevansmithdatascience9580 · 2 years ago

    Glad it was helpful!

  • @mariammkassim7879 · 2 years ago

    Thank you. Very informative.

  • @kaushikplabon8530 · 1 year ago

    So perfect, brother. Nice job!

  • @osirismaat9695 · 1 year ago

    Stochastic Gradient Descent. It is stochastic since it randomly shuffles the samples in the training set.

  • @onlyawatcher5810 · 5 months ago

    May I ask how you determine the bias to be added in the z1 calculation?

  • @jaeen7665 · 1 year ago

    This is a fantastic explanation, but you'll need some knowledge of machine learning from scratch, particularly linear and logistic regression. This is perfect and exactly what I needed to understand NN. Liked and subbed.

  • @mustafizurrahman5699 · 8 months ago

    Awesome

  • @gourinathhs3850 · 2 years ago

    Great work. Thank you!

  • @bevansmithdatascience9580 · 2 years ago

    My pleasure Gouri

  • @dubeypankaj1983 · 11 months ago

    Why haven't we applied an activation function to Ypred?

  • @luciaballesterosgarcia1306 · 2 years ago

    Thanks for your explanation! How do you choose the weights, or is it a random decision? Thank you in advance!

  • @bevansmithdatascience9580 · 2 years ago

    Hi Lucia. Yes it was a random decision. For a more thorough explanation of the initial values of the weights, check out Andrew Ng (kzread.info/dash/bejne/aJatmLqao8KxmNI.html). Good luck!
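To make the reply concrete, here is a small sketch of random weight initialization in NumPy. The layer sizes and the Glorot-style variant below are illustrative assumptions, not values from the video:

```python
import numpy as np

# Seeded generator so the example is reproducible
rng = np.random.default_rng(42)
n_in, n_hidden = 2, 2

# Simplest choice: small random values (the "random decision" in the reply)
W1 = rng.normal(0.0, 0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)  # biases are commonly initialized to zero

# A common refinement (Glorot/Xavier-style) scales the spread by layer width
W1_scaled = rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_hidden, n_in))
```

Random (rather than identical) starting weights matter because if every neuron started with the same weights, they would all compute the same thing and receive the same gradient updates.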

  • @yinka366 · 2 years ago

    Thanks for this! Clear and straight to the point. The simplest example I've seen so far. As regards biases, does this apply to RBF neural networks as well? For instance, if the input layer has a bias unit, do we add the bias during forward propagation? Thanks

  • @yazou1307 · 7 months ago

    I was wondering why you didn't write the activation function in your output neuron? Is there a specific reason?

  • @bevansmithdatascience9580 · 7 months ago

    Most likely because it is just a linear summation of the inputs. There was no need to pass it through a non-linear activation function.
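The reply above can be sketched as follows; the hidden activations and output weights are made-up values, shown only to contrast a linear regression output with a sigmoid classification output:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical hidden-layer activations and output weights
a1 = np.array([0.73, 0.83])
w_out = np.array([0.5, 0.6])
b_out = 0.1

# Regression output: a plain weighted sum, no activation, so it can
# take any real value (including values above 1 or below 0)
y_linear = w_out @ a1 + b_out

# Classification output: the same sum squashed into (0, 1)
y_prob = sigmoid(w_out @ a1 + b_out)
```

For a continuous target a linear output is the usual choice, since a sigmoid or tanh would cap the predictions to a fixed range.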

  • @limpblz1988 · 2 years ago

    Why do we only use 2 nodes?

  • @bevansmithdatascience9580 · 2 years ago

    It is just an example. There can be as many nodes/neurons as you wish. I just used two for this example

  • @ChargedPulsar · 2 years ago

    What is the point of saying "okay" every three seconds? Imagine people listening to your videos multiple times, it makes a lot of "okay, okay, okay, okay, okay". Which is just "noise" in the information that's actually blocking the material!

  • @bevansmithdatascience9580 · 2 years ago

    Then don't watch. Go elsewhere

  • @ChargedPulsar · 2 years ago

    @@bevansmithdatascience9580 Ignorance is bliss. Thanks, but no one is asking you what they should do next.

  • @bevansmithdatascience9580 · 2 years ago

    @@ChargedPulsar I'm telling you

  • @joachimguth6226 · 8 months ago

    You may learn how to bring across your message in a more pleasant manner. We get an excellent lesson for free. And you are focusing on a minor issue. Think about it, if you can.

  • @ChargedPulsar · 8 months ago

    @@joachimguth6226 You are right, it could have been sugarcoated more. Unfortunately this feedback to benefit and improve the video/channel is answered with insult and disrespect. When a teacher starts insulting the audience with disrespect, it's a clear sign that he/she isn't the right character to be a teacher, but just uses information as an excuse to feel above others.