Bevan Smith 2

Hi, I'm Bevan Smith. Welcome to my data science channel.

The aim is to present a relatively thorough overview of introductory machine learning and data science topics.

I hope to present topics on what machine learning is, supervised learning, regression and classification, and cross-validation, and also to show the viewer actual examples using Python and Scikit-learn.

Eventually I want to present more advanced topics covering linear and logistic regression, decision trees, random forests, boosting and neural networks.

I also plan to present topics on feature selection and model interpretation using model-agnostic methods such as LIME and SHAP.

I hope you learn something from this channel. Please give me feedback; I would love to hear from you.

Comments

  • @HyeyungPark (8 days ago)

    I am watching your Machine Learning playlists. Your teaching is full of enthusiasm, which keeps me engaged in your lessons. I really appreciate your wholehearted instruction.

  • @nico-wj1mh (9 days ago)

    thank you

  • @kunalsutar3946 (1 month ago)

    You are a legend, thanks man.

  • @AnkurChauhan-Rajput (1 month ago)

    One of the best videos to start with... thanks

  • @phy6132 (1 month ago)

    Thank you for your videos, very well explained. However, what does a mini-batch look like in practice? How do we put multiple rows through the network at the same time? Do we need a bigger layer? Could you provide some details on how to do that? Thank you.

  • @krishnakumarik208 (2 months ago)

    Excellent video with a neat and clear explanation, great for a beginner to learn and get motivated about neural networks.

  • @wis-labbahasainggris8956 (2 months ago)

    Why does the weight update use a minus sign instead of a plus sign? (24:34)

  • @bevansmithdatascience9580 (2 months ago)

    In gradient descent we want to tweak the weights/biases until we obtain a minimum error in our cost function. For that we compute the negative of the gradient of the cost function, multiply it by a learning rate and add the result to the previous weight value. The negative sign means we are moving downhill on the cost function (so to speak).
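
    A minimal Python sketch of the update rule described in the reply above, using a made-up quadratic cost purely for illustration:

        # Gradient descent on a toy cost C(w) = (w - 3)**2, minimised at w = 3.
        def grad(w):
            return 2 * (w - 3)  # dC/dw

        w = 0.0    # starting weight
        eta = 0.1  # learning rate
        for _ in range(50):
            w = w - eta * grad(w)  # the minus sign moves w downhill on C
        print(round(w, 4))  # approaches 3.0, the minimum of the cost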

  • @rahuldevgun8703 (2 months ago)

    The best I have seen to date... superb.

  • @felixmillan7345 (2 months ago)

    Great video!

  • @luisreynoso1734 (2 months ago)

    This is the very best video explaining backpropagation! It is very clear and well-designed for anyone needing to learn more about AI. I look forward to seeing other videos from Bevan.

  • @syakiraljuicy625 (2 months ago)

    Thank youuu the pace is for meee haha

  • @codingwithelhacen990 (3 months ago)

    Thank you! I was looking for those exact regression model examples.

  • @user-ou7dq1bu9v (3 months ago)

    Two years after publishing, your video is still a gem 💥

  • @wanna_die_with_me (3 months ago)

    THE BEST VIDEO FOR UNDERSTANDING BACK PROPAGATION!!!! Thank you sir <3

  • @aamirsuleman9815 (3 months ago)

    Is each mini-batch using the average of the losses over the batch to update the weights and biases? This part is unclear.

  • @bevansmithdatascience9580 (3 months ago)

    However large the batch size is, it calculates a mean squared error (if regression) over those samples.
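
    A minimal NumPy sketch of that idea, with made-up data and a single linear layer purely for illustration: a mini-batch is just several rows pushed through the same weights at once (so no bigger layer is needed), and one loss averaged over the batch drives the update:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(32, 4))  # mini-batch: 32 rows, 4 features
        y = rng.normal(size=(32,))    # 32 targets
        W = rng.normal(size=(4,))     # the same weights handle any batch size
        b = 0.0

        y_hat = X @ W + b                        # forward pass for the whole batch
        loss = np.mean((y_hat - y) ** 2)         # MSE averaged over the batch
        grad_W = 2 * X.T @ (y_hat - y) / len(X)  # gradient of the batch-mean loss
        W = W - 0.01 * grad_W                    # one update per mini-batch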

  • @suryagiriofficial1740 (3 months ago)

    I'm halfway through the video and I already liked it.

  • @effortlessjapanese123 (3 months ago)

    haha South African accent. baie dankie Bevan!

  • @bevansmithdatascience9580 (3 months ago)

    lekker bru

  • @user-ts5vd9fp1g (3 months ago)

    This channel is so underrated...

  • @Richard-bt6uk (3 months ago)

    Hello Bevan, thank you for your excellent videos on neural networks. I have a question about this video on backpropagation. At about 14:30 you present the equation for determining the updated weight, W7. You are subtracting the product of η and the partial derivative of the cost (error) function with respect to W7. However, this product does not yield a delta W7, i.e. a change in W7. It would seem that the result of this product is more like a delta of the cost function, not of W7, and it is not mathematically consistent to adjust W7 by a change in the cost function. Rather, we should adjust W7 by a small change in W7. Put another way, if these quantities had physical units, the equation would not be consistent in units. From this perspective, it would be more consistent to use the reciprocal of the partial derivative shown. I'm unsure if this would yield the same results. Can you explain how using the derivative as shown to get the change in W7 (or indeed in any of the weights) is mathematically consistent?

  • @sma92878 (4 months ago)

    This is amazing, so clear and easy to understand!

  • @giorgosmaragkopoulos9110 (4 months ago)

    So what is the clever part of backprop? Why does it have a special name instead of just being called "gradient estimation"? How does it save time? It looks like it just calculates all the derivatives one by one.

  • @bevansmithdatascience9580 (4 months ago)

    It is the main reason we can train neural nets at all. The idea in training neural nets is to obtain the weights and biases throughout the network that give us good predictions. The gradients you speak of get propagated back through the network in order to update the weights, making the predictions more accurate each time we feed in more training data.
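
    On the time saving asked about above: the chain rule lets each layer's gradient reuse the quantities already computed for the layer after it, so a single backward pass yields every derivative instead of estimating each one from scratch. A minimal NumPy sketch, assuming a one-hidden-layer regression net with made-up numbers:

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.normal(size=(3,))                       # one input row, 3 features
        W1 = rng.normal(size=(3, 4)); b1 = np.zeros(4)  # hidden layer
        W2 = rng.normal(size=(4,)); b2 = 0.0            # linear output neuron

        # forward pass, keeping the intermediate values
        z1 = x @ W1 + b1
        a1 = np.tanh(z1)
        y_hat = a1 @ W2 + b2
        y = 1.0  # target

        # backward pass: each line reuses the gradient computed just before it
        d_yhat = 2 * (y_hat - y)     # dC/dy_hat for the squared error
        grad_W2 = d_yhat * a1        # reuses d_yhat
        d_a1 = d_yhat * W2           # reuses d_yhat
        d_z1 = d_a1 * (1 - a1 ** 2)  # reuses d_a1 (tanh derivative)
        grad_W1 = np.outer(x, d_z1)  # reuses d_z1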

  • @techgamer4291 (4 months ago)

    Thank you so much, Sir. Best explanation I have seen on this platform.

  • @ammarjagadhita3189 (4 months ago)

    I was just wondering about the last part: when I try to calculate the partial derivative for w4, the result I get is -3711, but in the video it is -4947. To double-check, I changed the last part of the equation to x1 (60) and it gives me the same result as in the video, which is -2783, so I'm not sure if I missed something, since he didn't write out the calculation for w4.

  • @dlpkmrpttpt (5 months ago)

    Thank you Bevan, nice video. In 5-fold CV, we end up with 5 models. Which model should we use for deployment?

  • @bevansmithdatascience9580 (5 months ago)

    You don't use any of them. I know it sounds confusing. OK, say you now want to compare three models: a random forest, a linear regression and a neural net. For each model you perform k-fold CV and average the performance. Then you take the model that gave the best average CV performance and train it on the entire dataset for deployment. The whole point of cross-validation is to see how well a model performs on unseen data, so the model with the best CV performance is the one we expect to perform best on data it has not seen. I suggest you have a good long chat with ChatGPT to get more detailed answers.
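
    A minimal scikit-learn sketch of that workflow, using a synthetic dataset purely for illustration:

        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPRegressor

        X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)
        models = {
            "random forest": RandomForestRegressor(random_state=0),
            "linear regression": LinearRegression(),
            "neural net": MLPRegressor(max_iter=2000, random_state=0),
        }
        # average 5-fold CV score (R^2 by default) for each candidate model
        scores = {name: cross_val_score(m, X, y, cv=5).mean()
                  for name, m in models.items()}
        best = max(scores, key=scores.get)
        final_model = models[best].fit(X, y)  # retrain the winner on ALL the data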

  • @sisumon91 (5 months ago)

    Best video I have found for BP! Thanks for all your efforts.

  • @yashgodbole8247 (5 months ago)

    Perfectly explained 😌 finally the best explanation ❤ thank you sir

  • @karthikrajeshwaran1997 (5 months ago)

    Just outstanding - rewatched it and it made it so clear!

  • @karthikrajeshwaran1997 (5 months ago)

    Thanks so much for the clarity. Helps tremendously! Love this.

  • @ALINDASERGIOUS (5 months ago)

    I like how well you simplify the concepts

  • @onlyawatcher5810 (5 months ago)

    May I ask how you determine the bias to be added in the z1 calculation?

  • @LakshmiDevirade2018 (5 months ago)

    Thank you so much. It was so simple.

  • @StudioFilmoweAlpha (6 months ago)

    22:53 Why is z1 equal to -0.5?

  • @kennethcarvalho3684 (6 months ago)

    Finally I understood something on this topic.

  • @ALINDASERGIOUS (6 months ago)

    very happy to see you back

  • @bobdillon1138 (6 months ago)

    Really like your teaching style... Any chance you can do a matrix-vector calculus version of neural nets when you have finished with reinforcement learning?

  • @hengxianghu8735 (6 months ago)

    It's a very clear explanation of Q-learning; I really enjoyed it!

  • @moali7156 (6 months ago)

    I am just a beginner in machine learning, but I have found your class very beneficial.

  • @ayoubtech6930 (6 months ago)

    Could you please send me the PPT file?

  • @asaad3138 (7 months ago)

    By far this is the best explanation: clear, precise, detailed instructions. Well done and thank you so much 🙏

  • @tymeksalamon850 (7 months ago)

    Fantastic tutorial Bevan, thank you very much!!

  • @PLAYWW (7 months ago)

    You are the only YouTuber I have met who can explain all the specific calculation processes clearly and patiently. I appreciate you creating this video. It helps a lot. I wonder if you could make a video about collaborative filtering?

  • @depressivepumpkin7312 (7 months ago)

    Man, this is at least the 15th video I have watched on backpropagation, on top of several books, and this is the best one. All the previous videos just skip a lot of the explanation, focusing on how important and crucial backpropagation is and what it lets you do, instead of giving a step-by-step overview. This video contains zero BS, only clear explanations. Thank you.

  • @torgath5088 (7 months ago)

    The whole video: "Mini-batch" is like a batch but smaller. No calculus

  • @alonmalka8008 (7 months ago)

    Criminally underrated.

  • @bobdillon1138 (7 months ago)

    Looking forward to this series!

  • @bobdillon1138 (7 months ago)

    Excellent! Was hoping you would be back with some new material.

  • @bevansmithdatascience9580 (7 months ago)

    More to come!

  • @zebthegreat6172 (5 months ago)

    Looking forward to it, kind sir!

  • @ps3301 (7 months ago)

    Please do a coding example soon?

  • @bevansmithdatascience9580 (7 months ago)

    I'm getting there.

  • @yazou1307 (7 months ago)

    I was wondering why you didn't write the activation function in your output neuron? Is there a specific reason?

  • @bevansmithdatascience9580 (7 months ago)

    Most likely because it is just a linear summation of the inputs. There was no need to pass it through a non-linear activation function.
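
    A minimal NumPy sketch of the distinction, with made-up numbers: for regression the output neuron stays a plain linear sum, because a sigmoid there would squash every prediction into (0, 1):

        import numpy as np

        a1 = np.array([0.2, -0.5, 0.9])  # hidden-layer activations
        w = np.array([1.5, 0.3, -2.0])   # output-neuron weights
        b = 0.1

        y_hat = a1 @ w + b  # plain linear summation: can be any real value
        squashed = 1 / (1 + np.exp(-y_hat))  # a sigmoid output would instead be
                                             # capped between 0 and 1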

  • @catalinafuentes9537 (7 months ago)

    Midterm tomorrow; this was the only video of the several I watched that finally made me understand linear regression.

  • @user-xg1cj7wh1m (7 months ago)

    Mister, you have saved my life lol, thank you!!!