Ridge Regression

Entertainment

My Patreon: www.patreon.com/user?u=49277905

Comments: 177

  • @xavierfournat8264
    @xavierfournat8264 · 3 years ago

    This shows that the quality and value of a video don't depend on how fancy the animations are, but on how expert and pedagogical the speaker is. Really brilliant! I assume you spent a lot of time designing that course, so thank you for this!

  • @ritvikmath

    @ritvikmath · 3 years ago

    Wow, thanks!

  • @backstroke0810

    @backstroke0810 · 2 years ago

    Totally agree. I learn a lot from his short videos. Precise, concise, enough math, enough ludic examples. True professor mind.

  • @rez_daddy
    @rez_daddy · 4 years ago

    "Now that we understand the REASON we're doing this, let's get into the math." The world would be a better place if more abstract math concepts were approached this way, thank you.

  • @garbour456

    @garbour456 · 2 years ago

    good point

  • @tzu-chunchen5139
    @tzu-chunchen5139 · 8 months ago

    This is the best explanation of Ridge regression that I have ever heard! Fantastic! Hats off!

  • @siddharthkshirsagar2545
    @siddharthkshirsagar2545 · 4 years ago

    I was searching the whole internet for ridge regression and stumbled upon this video, which is by far the best explanation you can find anywhere. Thanks.

  • @bettychiu7375
    @bettychiu7375 · 4 years ago

    This really helps me! Definitely the best ridge and lasso regression explanation videos on KZread. Thanks for sharing! :D

  • @nadekang8198
    @nadekang8198 · 5 years ago

    This is awesome! Lots of machine learning books or online courses don't bother explaining the reason behind Ridge regression, you helped me a lot by pulling out the algebraic and linear algebra proofs to show the reason WHY IT IS THIS! Thanks!

  • @GreenEyesVids
    @GreenEyesVids · 3 years ago

    Watched these 5 years ago to understand the concept and I passed an exam. Coming back to it now to refresh my memory, still very well explained!

  • @ritvikmath

    @ritvikmath · 3 years ago

    Nice! Happy to help!

  • @murraystaff568
    @murraystaff568 · 8 years ago

    Brilliant! Just found your channel and can't wait to watch them all!!!

  • @zgbjnnw9306
    @zgbjnnw9306 · 2 years ago

    It's so inspiring to see how you get rid of the c^2! I learned Ridge but didn't know why! Thank you for making this video!

  • @Lisa-bp3ec
    @Lisa-bp3ec · 7 years ago

    Thank you soooo much!!! You explain everything so clear!! and there is no way I couldn't understand!

  • @cu7695
    @cu7695 · 6 years ago

    I subscribed just after watching this. Great foundation for ML basics

  • @theoharischaritidis4173
    @theoharischaritidis4173 · 6 years ago

    This really helped a lot. A big thanks to you Ritvik!

  • @TahaMVP
    @TahaMVP · 6 years ago

    best explanation of any topic i've ever watched , respect to you sir

  • @taareshtaneja7523
    @taareshtaneja7523 · 5 years ago

    This is, by far, the best explanation of Ridge Regression that I could find on KZread. Thanks a lot!

  • @alecvan7143
    @alecvan7143 · 4 years ago

    Amazing video, you really explained why we do things which is what really helps me!

  • @akino.3192
    @akino.3192 · 6 years ago

    You, Ritvik, are simply amazing. Thank you!

  • @BhuvaneshSrivastava
    @BhuvaneshSrivastava · 4 years ago

    Your data science videos are the best I have seen on KZread till now. :) Waiting to see more

  • @ritvikmath

    @ritvikmath · 4 years ago

    I appreciate it!

  • @abhichels1
    @abhichels1 · 7 years ago

    This is gold. Thank you so much!

  • @nickb6811
    @nickb6811 · 7 years ago

    So so so very helpful! Thanks so much for this genuinely insightful explanation.

  • @mortezaabdipour5584
    @mortezaabdipour5584 · 5 years ago

    It's just awesome. Thanks for this amazing explanation. Settled in mind forever.

  • @SarahPourmolamohammadi
    @SarahPourmolamohammadi · 1 year ago

    You are the best of all... you explained all the things, so nobody is going to have problems understanding them.

  • @RobertWF42
    @RobertWF42 · 6 months ago

    Excellent video! One more thing to add: if you're primarily interested in causal inference, like estimating the effect of daily exercise on blood pressure while controlling for other variables, then you want an unbiased estimate of the exercise coefficient, and standard OLS is appropriate. If you're more interested in minimizing error on blood pressure predictions and aren't concerned with coefficients, then ridge regression is better. Also left out is how we choose the optimal value of lambda by using cross-validation on a selection of lambda values (I don't think there's a closed-form expression for solving for lambda; correct me if I'm wrong).
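
    The cross-validation approach mentioned in the comment above can be sketched in a few lines with scikit-learn's RidgeCV (the data and the grid of lambda values here are illustrative assumptions, not from the video):

    ```python
    import numpy as np
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=100)

    # There is no closed-form solution for lambda itself, so we search a grid
    # of candidate values and keep the one with the best cross-validated error.
    model = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0], cv=5).fit(X, y)
    print(model.alpha_)  # the lambda chosen by 5-fold cross-validation
    ```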

  • @nikunjgattani999
    @nikunjgattani999 · 2 years ago

    Thanks a lot.. I watched many videos and read blogs before this but none of them clarified at this depth

  • @yxs8495
    @yxs8495 · 7 years ago

    This really is gold, amazing!

  • @babakparvizi2425
    @babakparvizi2425 · 6 years ago

    Fantastic! It's like getting the Cliff's Notes for Machine Learning. These videos are a great supplement/refresher for concepts I need to knock the rust off of. I think he takes about 4 shots of espresso before each recording though :)

  • @soudipsanyal
    @soudipsanyal · 6 years ago

    Superb. Thanks for such a concise video. It saved a lot of time for me. Also, subject was discussed in a fluent manner and it was clearly understandable.

  • @yanlinwang5703
    @yanlinwang5703 · 2 years ago

    The explanation is so clear!! Thank you so much!!

  • @surajshivakumar5124
    @surajshivakumar5124 · 3 years ago

    This is literally the best video on ridge regression

  • @ethanxia1288
    @ethanxia1288 · 8 years ago

    Excellent explanation! Could you please do a similar video for Elastic-net?

  • @q0x
    @q0x · 8 years ago

    I think it's explained very fast, but still very clearly; for my level of understanding it's just perfect!

  • @jhhh0619
    @jhhh0619 · 9 years ago

    Your explanation is extremely good!

  • @Thaifunn1
    @Thaifunn1 · 8 years ago

    excellent video! Keep up the great work!

  • @youyangcao3837
    @youyangcao3837 · 7 years ago

    great video, the explanation is really clear!

  • @aDifferentHandle
    @aDifferentHandle · 6 years ago

    The best ridge regression lecture ever.

  • @Krishna-me8ly
    @Krishna-me8ly · 9 years ago

    Very good explanation in an easy way!

  • @teegnas
    @teegnas · 4 years ago

    These explanations are by far the best ones I have seen so far on youtube ... would really love to watch more videos on the intuitions behind more complicated regression models

  • @vishnu2avv
    @vishnu2avv · 6 years ago

    Awesome, thanks a million for a great video! Searching whether you have done a video on LASSO regression :-)

  • @charlesity
    @charlesity · 4 years ago

    Stunning! Absolute gold!

  • @wi8shad0w

    @wi8shad0w · 4 years ago

    seriously!!!

  • @sasanosia6558
    @sasanosia6558 · 5 years ago

    Amazingly helpful. Thank you.

  • @abhijeetsingh5049
    @abhijeetsingh5049 · 8 years ago

    Stunning!! Need more access to your coursework

  • @JC-dl1qr
    @JC-dl1qr · 7 years ago

    great video, brief and clear.

  • @shiva6016
    @shiva6016 · 6 years ago

    simple and effective video, thank you!

  • @myazdani2997
    @myazdani2997 · 7 years ago

    I love this video, really informative! Thanks a lot

  • @Viewfrommassada
    @Viewfrommassada · 4 years ago

    I'm impressed by your explanation. Great job

  • @ritvikmath

    @ritvikmath · 4 years ago

    Thanks! That means a lot

  • @kamesh7818
    @kamesh7818 · 6 years ago

    Excellent explanation, thanks!

  • @intom1639
    @intom1639 · 6 years ago

    Brilliant! Could you make more videos about cross-validation, RIC, BIC, and model selection?

  • @aarshsachdeva5785
    @aarshsachdeva5785 · 7 years ago

    You should add that all the variables (dependent and independent) need to be normalized prior to doing a ridge regression. This is because betas in regular OLS can vary depending on the scale of the predictors, and a ridge regression would penalize those predictors that must take on a large beta due to the scale of the predictor itself. Once you normalize the variables, your A^T*A matrix becomes the correlation matrix of the predictors. The regression is called "ridge" regression because in (A^T*A + lambda*I) you add the lambda value to the diagonal of the correlation matrix, which forms a ridge. Great video overall, though, to start understanding this regression.
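
    A minimal NumPy sketch of the point made in this comment (synthetic, illustrative data): predictors are standardized, and lambda is added down the diagonal of A^T A in the closed-form solution, shrinking the coefficients relative to OLS.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 2))
    # Standardize the predictors so the penalty treats them on a common scale
    A = (A - A.mean(axis=0)) / A.std(axis=0)
    y = A @ np.array([2.0, -3.0]) + rng.normal(scale=0.1, size=50)

    lam = 1.0
    I = np.eye(A.shape[1])
    # Closed-form ridge solution: lambda added down the diagonal of A^T A,
    # the "ridge" that gives the method its name
    beta_ridge = np.linalg.solve(A.T @ A + lam * I, A.T @ y)
    beta_ols = np.linalg.solve(A.T @ A, A.T @ y)
    # The penalty shrinks the coefficient vector toward zero relative to OLS
    print(np.linalg.norm(beta_ridge), np.linalg.norm(beta_ols))
    ```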

  • @mohamedgaal5340
    @mohamedgaal5340 · 1 year ago

    I was looking for the math behind the algorithm. Thank you for explaining it.

  • @ritvikmath

    @ritvikmath · 1 year ago

    No problem!

  • @faeritaaf
    @faeritaaf · 7 years ago

    Thank you! Your explanations are really good, sir. Do you have time to make a video explaining the adaptive lasso too?

  • @sanketchavan8
    @sanketchavan8 · 6 years ago

    best explanation on ridge reg. so far

  • @wi8shad0w
    @wi8shad0w · 4 years ago

    THIS IS ONE HELL OF A VIDEO !!!!

  • @TURBOKNUL666
    @TURBOKNUL666 · 8 years ago

    great video! thank you very much.

  • @Hazit90
    @Hazit90 · 7 years ago

    excellent video, thanks.

  • @mikeperez4222
    @mikeperez4222 · 3 years ago

    Anyone else get anxiety when he wrote with the marker?? Just me? Felt like he was going to run out of space 😂 Thank you so much though, very helpful :)

  • @adityakothari193
    @adityakothari193 · 7 years ago

    Excellent explanation .

  • @xwcao1991
    @xwcao1991 · 3 years ago

    Thank you. I make the comment because I know I will never need to watch it again! Clearly explained..

  • @ritvikmath

    @ritvikmath · 3 years ago

    Glad it was helpful!

  • @zhilingpan2486
    @zhilingpan2486 · 7 years ago

    Very clear. Thank you!

  • @LossAndWaste
    @LossAndWaste · 6 years ago

    you are the man, keep doing what you're doing

  • @abeaumont10
    @abeaumont10 · 5 years ago

    Great videos, thanks for making them

  • @hunarahmad
    @hunarahmad · 7 years ago

    thanks for the nice explanation

  • @mnwepple
    @mnwepple · 8 years ago

    Awesome video! Very intuitive and easy to understand. Are you going to make a video using the probit link?

  • @tamoghnamaitra9901
    @tamoghnamaitra9901 · 7 years ago

    Beautiful explanation

  • @garbour456
    @garbour456 · 2 years ago

    great video - thanks

  • @lauraarmbrust1639
    @lauraarmbrust1639 · 5 years ago

    Thanks for this really helpful video! Could you explain why the independent variables in A should be standardized for Ridge and Lasso Regression?

  • @RAJIBLOCHANDAS
    @RAJIBLOCHANDAS · 2 years ago

    Excellent approach to discussing Lasso and Ridge regression. It would have been even better if you had discussed how Lasso yields sparse solutions! Anyway, nice discussion.

  • @sendydowneyjr
    @sendydowneyjr · 7 years ago

    This is great, thank you!

  • @meysamsojoudi3947
    @meysamsojoudi3947 · 3 years ago

    It is a brilliant video. Great

  • @brendachirata2283
    @brendachirata2283 · 5 years ago

    hey, great video and excellent job

  • @HeduAI
    @HeduAI · 7 years ago

    I would trade diamonds for this explanation (well, allegorically! :) ) Thank you!!

  • @kartikkamboj295
    @kartikkamboj295 · 4 years ago

    Dude ! Hats off 🙏🏻

  • @lucyli8770
    @lucyli8770 · 6 years ago

    very helpful, thanks

  • @canernm
    @canernm · 3 years ago

    Hi, and thanks for the video. Can you briefly explain why, when the m_i and t_i variables are highly correlated, the estimators β0 and β1 are going to have very big variance? Thanks a lot in advance!

  • @lanag873

    @lanag873 · 2 years ago

    Hi same question here😶‍🌫
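
    A small simulation can illustrate the question raised in this thread (synthetic data; the helper name ols_slope_spread is my own, not from the video): when two predictors are nearly collinear, the OLS coefficient estimates swing far more from sample to sample.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def ols_slope_spread(correlation, trials=500, n=60):
        """Refit OLS on fresh samples; return the std of the first slope estimate."""
        slopes = []
        for _ in range(trials):
            x1 = rng.normal(size=n)
            # x2 is a noisy copy of x1; higher correlation -> less extra noise
            x2 = correlation * x1 + np.sqrt(1 - correlation**2) * rng.normal(size=n)
            X = np.column_stack([np.ones(n), x1, x2])
            y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(size=n)
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            slopes.append(beta[1])
        return np.std(slopes)

    # The estimate of beta_1 is much less stable when x1 and x2 are highly correlated
    print(ols_slope_spread(0.1), ols_slope_spread(0.99))
    ```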

  • @zw7453
    @zw7453 · 2 years ago

    best explanation ever!

  • @nicolasmanelli7393
    @nicolasmanelli7393 · 1 year ago

    I think it's the best video ever made

  • @e555t66
    @e555t66 · 1 year ago

    I don't have money to pay him so leaving a comment instead for the algo. He is the best.

  • @janaosea6020
    @janaosea6020 · 4 years ago

    bless this is amazing

  • @qiulanable
    @qiulanable · 6 years ago

    excellent video!!!!

  • @Sytch
    @Sytch · 6 years ago

    Finally, someone who talks quickly.

  • @jakobforslin6301
    @jakobforslin6301 · 2 years ago

    You are awesome!

  • @happy_labs
    @happy_labs · 7 years ago

    Thanks for this one!

  • @jamiewilliams9271
    @jamiewilliams9271 · 6 years ago

    Thank you so much!!!!

  • @divyarthprakash1541
    @divyarthprakash1541 · 6 years ago

    Very well explained :)

  • @kxdy8yg8
    @kxdy8yg8 · 6 years ago

    This is gold indeed!

  • @adrianfischbach9496
    @adrianfischbach9496 · 1 year ago

    Huge thanks!

  • @shashankparameswaran2336
    @shashankparameswaran2336 · 2 years ago

    Amazing!!!

  • @prabhuthomas8770
    @prabhuthomas8770 · 5 years ago

    SUPER !!! You have to become a professor and replace all those other ones !!

  • @yingbinjiang133
    @yingbinjiang133 · 7 years ago

    a very nice video

  • @SUBHRASANKHADEY
    @SUBHRASANKHADEY · 5 years ago

    Shouldn't the radius of the circle be c instead of c^2 (around 7:00)?

  • @samie3000
    @samie3000 · 7 years ago

    Thank you!

  • @justinm1307
    @justinm1307 · 6 years ago

    this is great stuff

  • @Tyokok
    @Tyokok · 5 years ago

    Thanks for the video! Silly question: where is your L2 norm video? Can you provide a link? (Subscribed)

  • @zhongshanhu7376
    @zhongshanhu7376 · 8 years ago

    very good explanation in an easy way!!

  • @JoonHeeKim
    @JoonHeeKim · 6 years ago

    Great video. A (very minor) question: isn't it c instead of c^2 when you draw the radius of the circle for the \beta restriction?

  • @Viewfrommassada

    @Viewfrommassada · 4 years ago

    Think of it as the equation of a circle with center (0,0)
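
    For reference, the constraint boundary being discussed here is

    ```latex
    \beta_1^2 + \beta_2^2 = c^2
    ```

    which is a circle centered at the origin with radius c; c^2 is the squared radius, so the commenters are right that the radius itself is c.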

  • @sergioperezmelo3090
    @sergioperezmelo3090 · 5 years ago

    Super clear

  • @ilkerkarapanca
    @ilkerkarapanca · 6 years ago

    Awesome, thanks man

  • @yassersaeid3424
    @yassersaeid3424 · 7 years ago

    a big thanks

  • @sachinrathi7814
    @sachinrathi7814 · 3 years ago

    Can anyone explain the statement "The efficiency property of an estimator says that the estimator is the minimum-variance unbiased estimator"? What does minimum variance denote here?

  • @tsrevo1
    @tsrevo1 · 6 years ago

    Sir, a question about 4:54: I understand that in the tax/income example the VARIANCE of the beta0 and beta1 estimates is high, since there's an additional beta2 affecting things. However, the MEAN in the population should be the same, even with high variance, shouldn't it? Thanks in advance!

  • @nickwagner5173
    @nickwagner5173 · 6 years ago

    We start out by adding a constraint that beta 1 squared plus beta 2 squared must be less than c squared, where c is some number we choose. But then, after choosing lambda, we minimize F, and c ends up having no effect at all on our choice of the betas. I may be wrong, but it doesn't seem like c has any effect on our choice of lambda either. I find it strange that we start out with the criterion that beta 1 squared plus beta 2 squared must be less than c squared, yet the choice of c is irrelevant. If someone can help me un-boggle my mind, that would be great.

  • @RobertWF42

    @RobertWF42 · 6 months ago

    Good question - I think it has to do with using the method of Lagrange multipliers to solve the constrained OLS optimization problem. The lambda gets multiplied by the expression in the parentheses at 11:17, which includes the c squared term. So whatever c squared value you choose, it's going to be rescaled anyway when you multiply by the lambda.
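
    In symbols, the correspondence discussed in this thread can be sketched as follows (the standard Lagrangian form of the constrained problem, not notation taken from the video):

    ```latex
    \min_{\beta}\ \|y - A\beta\|^2 \quad \text{subject to} \quad \|\beta\|^2 \le c^2
    \quad\longleftrightarrow\quad
    \min_{\beta}\ \|y - A\beta\|^2 + \lambda\left(\|\beta\|^2 - c^2\right)
    ```

    Since the term \lambda c^2 does not depend on \beta, it drops out of the minimization over \beta, which is why c never appears in the final estimator: each feasible c corresponds to some \lambda \ge 0, and in practice one tunes \lambda directly.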

  • @sagarsitap3540
    @sagarsitap3540 · 4 years ago

    Thanks! Why can't lambda be negative? What if, to improve the variance, the slope needs to be increased rather than decreased?
