Ridge vs Lasso Regression, Visualized!!!

People often ask why Lasso Regression can shrink parameter values all the way to 0, but Ridge Regression cannot. This StatQuest shows you why.
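For anyone who wants to poke at this in code, the one-parameter case has closed-form solutions that make the difference concrete. A minimal numpy sketch (not from the video; the data and lambda values here are made up for illustration):

```python
import numpy as np

# One feature, centered so the intercept drops out of the problem.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.3 * x + rng.normal(size=50)
x = x - x.mean()
y = y - y.mean()

sxy, sxx = float(x @ y), float(x @ x)

def ridge_slope(lam):
    # minimizes sum((y - b*x)^2) + lam * b^2: pure shrinkage, never exactly 0
    return sxy / (sxx + lam)

def lasso_slope(lam):
    # minimizes sum((y - b*x)^2) + lam * |b|: soft threshold, snaps to exactly 0
    return float(np.sign(sxy)) * max(abs(sxy) - lam / 2.0, 0.0) / sxx

for lam in [0.0, 10.0, 40.0, 1000.0]:
    print(lam, ridge_slope(lam), lasso_slope(lam))
```

As lambda grows, the ridge slope only shrinks toward 0, while the lasso soft threshold sets the slope to exactly 0 once lambda is large enough.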
NOTE: This StatQuest assumes that you are already familiar with Ridge and Lasso Regression. If not, check out the 'Quests.
Ridge: • Regularization Part 1:...
Lasso: • Regularization Part 2:...
For a complete index of all the StatQuest videos, check out:
statquest.org/video-index/
If you'd like to support StatQuest, please consider...
Buying my book, The StatQuest Illustrated Guide to Machine Learning:
PDF - statquest.gumroad.com/l/wvtmc
Paperback - www.amazon.com/dp/B09ZCKR4H6
Kindle eBook - www.amazon.com/dp/B09ZG79HXC
Patreon: / statquest
...or...
KZread Membership: / @statquest
...a cool StatQuest t-shirt or sweatshirt:
shop.spreadshirt.com/statques...
...buying one or two of my songs (or go large and get a whole album!)
joshuastarmer.bandcamp.com/
...or just donating to StatQuest!
www.paypal.me/statquest
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
/ joshuastarmer
0:00 Awesome song and introduction
0:33 Ridge Regression visualized
6:00 Lasso Regression visualized
7:48 Summary of Ridge vs Lasso Regression
#statquest #regularization

Comments: 411

  • @statquest (2 years ago)

    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @thatguyadarsh (11 months ago)

    Subscriiibed ! DOUBLE BAM!!! 😂😂

  • @Mabitesurtaglotte (4 years ago)

    Still the best stat videos on KZread. You have no idea how much you've helped me. You'll be in the acknowledgments of my diploma.

  • @statquest (4 years ago)

    Wow, thanks!

  • @johnnyt5108 (2 years ago)

    He'd probably prefer to be in the acknowledgments of your checkbook, though

  • @reflections86 (1 year ago)

    @@johnnyt5108 I am sure many people will do that by buying the reports and the book Josh has written.

  • @LauraMarieChua (1 year ago)

    update after 2 years: did u include him on ur diploma?

  • @chyldstudios (4 years ago)

    The visualization really sells it.

  • @statquest (4 years ago)

    Thanks! :)

  • @platanus726 (3 years ago)

    You are truly an angel. Your videos on Ridge, Lasso and Elastic Net really helped with my understanding. They're way better than the lectures at my university.

  • @statquest (3 years ago)

    Thanks!

  • @Azureandfabricmastery (4 years ago)

    Hello Josh, Ridge and Lasso clearly visualized :) I must say that if one thing that makes your videos clearly explained to curious minds like me is that the visual illustrations that you provide in your stat videos. Glad. Thank you very much for your efforts.

  • @statquest (4 years ago)

    Thank you very much! :)

  • @lakshitakamboj198 (3 years ago)

    Thanks, Josh, for this amazing video. I promise to support this channel once I land a job offer as a data scientist. This is the only video on youtube that practically shows all the algorithms.

  • @statquest (3 years ago)

    Thank you and Good luck!

  • @huzefaghadiyali5886 (2 years ago)

    I'm just gonna take a minute to appreciate the effort you put in your jokes to make the video more interesting. Its quite underrated.

  • @statquest (2 years ago)

    Thank you!

  • @afaisaladhamshaazi7519 (4 years ago)

    I was wondering why I missed out on this video while going through the ones on Ridge and Lasso Regression from Sept-Oct 2018. Then I noticed this is a video you put out only a few days ago. Awesome. Much gratitude from Malaysia. 🙇

  • @statquest (4 years ago)

    Thanks! :)

  • @philwebb59 (2 years ago)

    Best visuals ever! No matter how much I think I know about stats, I always learn something from your videos. Thanks.

  • @statquest (2 years ago)

    Thanks so much! BAM! :)

  • @hemaswaroop7970 (4 years ago)

    Fantastic, Josh!! Thank you very very much. We all owe you a lot many thanks. "I" owe you a lot. 😊😊👍👍

  • @statquest (4 years ago)

    Awesome! Thanks! :)

  • @shamshersingh9680 (2 days ago)

    Hi Josh, please accept my heartfelt thanks for such a wonderful video. I guess your videos are an academy in themselves. Just follow along with your videos and BAM!! you are a master of Data Science and Machine Learning. 👏

  • @statquest (1 day ago)

    Wow, thank you!

  • @chunchen3450 (4 years ago)

    Just found this channel today, great illustrations! Thanks for keeping the speaking speed down; that makes it easy for me to follow!

  • @statquest (4 years ago)

    Awesome, thank you!

  • @Sello_Hunter (3 years ago)

    This explained everything i needed to know in 9 minutes. Absolute genius, thank you!

  • @statquest (3 years ago)

    Glad it was helpful!

  • @Spamdrew128 (2 years ago)

    I needed this information for my data science class and didn't expect such a well crafted and humorous video! You are doing great work sir!

  • @statquest (2 years ago)

    Wow, thank you!

  • @geovannarojas2580 (3 years ago)

    These videos are so clear and fun, they helped me a lot with modeling and statistic in biology.

  • @statquest (3 years ago)

    Thank you! :)

  • @bikramsarkar1484 (4 years ago)

    You are a life saver! I have been trying to understand this for years now!! Thanks a ton!!!

  • @statquest (4 years ago)

    Bam! :)

  • @aliaghayari2294 (5 months ago)

    dude is creating quality videos and replies to every comment! talk about dedication! thanks a lot

  • @statquest (5 months ago)

    bam! :)

  • @markevans5648 (4 years ago)

    Great work Josh! Your songs get me every time.

  • @statquest (4 years ago)

    Bam! :)

  • @thryce82 (3 years ago)

    this channel is saving my ass when it comes to applied ml class. so frustrating when a dude who has been researching Lasso for 10 years just breaks out some linear algebra derivation and then acts like you're supposed to instantly understand it... thanks for taking the time to come up with an explanation that makes sense.

  • @statquest (3 years ago)

    Thanks!

  • @flavio4923 (2 years ago)

    I've never been good with this kind of math/statistics because when I encounter the book formulas I tend to forget or not understand the symbols. Your videos make it possible to go beyond the notation and to learn the idea behind these concepts to apply them in machine learning. Thank you !

  • @statquest (2 years ago)

    Bam! :)

  • @ehg02 (3 years ago)

    Can we start a petition to change the lasso and ridge names to absolute value penalty and squared penalty pwease?

  • @statquest (3 years ago)

    That would be awesome! :)

  • @JoaoVitorBRgomes (3 years ago)

    @@statquest I am listening to u on spotify

  • @statquest (3 years ago)

    @@JoaoVitorBRgomes Bam!

  • @chrissmith1152 (4 years ago)

    incredible videos, been watching all of your videos during quarantine for my future job interview. Still waiting for the time series tho. Thanks sir

  • @statquest (4 years ago)

    Thanks!

  • @hopelesssuprem1867 (8 months ago)

    Many people on the Internet explain regularization with polynomial features, as if ridge and lasso were for reducing the curvature of the line; in that case you really just need to find the right degree for the polynomial. You are one of the few who have shown the real essence of regularization in linear regression: we simply penalize the model, trading a little bias for lower variance through changes in slope. By the way, real overfitting in regression shows up in data with a large number of features, some of which correlate strongly with each other, combined with a relatively small number of samples; that is where L1/L2 regularization is useful. Thank you so much for a very good explanation.

  • @statquest (8 months ago)

    Thanks!

  • @siddhu2605 (3 years ago)

    You are a super professor and I'll give you an infinity BAM!!!!!!!!!!! I really like the way you repeat the earlier discussed topics to refresh the student's memory; that is really helpful, and you have a lot of patience. Once again you proved that a picture is worth a thousand words.

  • @statquest (3 years ago)

    Thank you very much! :)

  • @eminatabeypeker6305 (3 years ago)

    You are really doing great great job. This channel is the best way to learn a lot, right and important things in a short time.

  • @statquest (3 years ago)

    Thank you very much! And thank you for your support!! BAM! :)

  • @ruonanzheng2019 (1 year ago)

    Thank you, the regularization series videos from 2018 to 2020 are so helpful. 😀

  • @statquest (1 year ago)

    Thanks!

  • @tvbala00s27 (2 years ago)

    Thanks a lot for this wonderful lesson...loved it ..seeing how the function behaves with different parameters makes it etched in the memory

  • @statquest (2 years ago)

    Glad it helped!

  • @tymothylim6550 (3 years ago)

    Thank you very much for this video! It helped me visually understand how Lasso regression can remove some predictors from the final model!

  • @statquest (3 years ago)

    Glad it was helpful!

  • @RaviShankar-jm1qw (4 years ago)

    You simply amaze me with each of your videos. The best part is the way you explain stuff is so original and simple. Will really love if you could also pen down a book on AI/ML. Would be a bestseller i reckon for sure. Keep up the good work and enlightening us :)

  • @statquest (4 years ago)

    Wow, thank you!

  • @rainymornings (1 year ago)

    This aged very well (he has a book now lol)

  • @mansoorbaig9232 (3 years ago)

    You are awesome Josh. This one always bothered me why L1 would make coefficients to 0 and not L2 and you explained it so simply.

  • @statquest (3 years ago)

    Thank you! :)

  • @NRienadire (3 years ago)

    Great videos, thank you very much!!!

  • @statquest (3 years ago)

    Glad you like them!

  • @pmsiddu (4 years ago)

    Very well explained; this one video cleared all my doubts, along with the practical calculations and visualization. Kudos for the great job.

  • @statquest (4 years ago)

    Thanks! :)

  • @katielui131 (5 months ago)

    This is amazing - thanks for this

  • @statquest (5 months ago)

    Thanks!

  • @ll-bc4gn (8 months ago)

    "Unfortunately, no one asked me." I almost farted out loud in a library.

  • @statquest (8 months ago)

    :)

  • @xvmmazy4398 (6 months ago)

    Dude you succeed at helping me and at making that thing funny as I'm struggling with my ML homework, thank you so much

  • @statquest (6 months ago)

    Glad I could help!

  • @vishnuprakash9196 (8 months ago)

    The best. Definitely gonna come back and donate once I land a job.

  • @statquest (8 months ago)

    Wow! Thank you!

  • @sravanlankoti5244 (1 year ago)

    Thanks for taking out time and explaining ML concepts in an amazing manner with clear visualizations. Great work.

  • @statquest (1 year ago)

    WOW! Thank you so much for supporting StatQuest! TRIPLE BAM!!! :)

  • @heteromodal (1 year ago)

    @@statquest Hey Josh! What's your preferred way of being supported? Would Paypal be better than Patreon?

  • @statquest (1 year ago)

    @@heteromodal It's really up to you - whatever is more convenient, and whether or not you want to be a long-term supporter.

  • @heteromodal (1 year ago)

    @@statquest I meant assuming i make a fixed sum donation - would you see more of it through PP or Patreon :)

  • @statquest (1 year ago)

    @@heteromodal If it's a one-time donation, then PayPal is probably the best.

  • @cat-.- (4 years ago)

    I just became the 104th patron of the channel!

  • @statquest (4 years ago)

    TRIPLE BAM!!! Thank you very much!!! :)

  • @sebastiencrepel5032 (3 years ago)

    Great videos. Very helpful. Thanks !

  • @statquest (3 years ago)

    Glad you like them!

  • @sunritjana4573 (3 years ago)

    Thanks a lot for these awesome videos, you deserve a million followers, and a lot of credit :) I just love these and they are KISS: so simple and understandable. I owe you a lot of thanks and credit :D

  • @statquest (3 years ago)

    Thank you so much 😀!

  • @ZinzinsIA (1 year ago)

    Great, many thanks, very understandable and clear. It gave me a good intuition of how lasso regression shrinks some variables to zero.

  • @statquest (1 year ago)

    Glad it was helpful!

  • @statisticaldemystic6817 (4 years ago)

    Very well done as usual.

  • @statquest (4 years ago)

    Thank you very much! :)

  • @josherickson5446 (4 years ago)

    Dude you're killing it!

  • @statquest (4 years ago)

    Thank you! :)

  • @younghoe6849 (3 years ago)

    Great master, thanks for your great effort

  • @statquest (3 years ago)

    Thank you!

  • @kseniyaesepkina3734 (2 years ago)

    Just an incredible explanation!

  • @statquest (2 years ago)

    Thank you!

  • @jaskaransingh0304 (1 year ago)

    Great explanation!

  • @statquest (1 year ago)

    Thanks!

  • @pkmath12345 (4 years ago)

    Ridge regression! Good topic to cover as always!

  • @statquest (4 years ago)

    Thanks! :)

  • @user-fo3sw8nw3f (6 months ago)

    Thank you so much! Blessings from Spain/Morocco

  • @statquest (6 months ago)

    Thanks!

  • @siddhantk007 (4 years ago)

    Your videos are super intuitive.. thanks alot sir

  • @statquest (4 years ago)

    Thanks and welcome!

  • @Imrans_Chronicles (1 year ago)

    The explanation can't be any better than this....!

  • @statquest (1 year ago)

    bam! :)

  • @Jack-mz7ox (2 years ago)

    This is the perfect explanation I am searching for why L1 can be used for feature importance!!!

  • @statquest (2 years ago)

    bam! :)

  • @priyanatraj5634 (4 years ago)

    Thank you for helping us to understand statistics! May I request for a video on Dirichlet regression?

  • @gavinaustin4474 (4 years ago)

    Really enjoying these videos, Josh. Please keep 'em coming. Although I understand the distinction between correlation and interaction, I'd be interested to see how you might explain it in your inimitable fashion.

  • @statquest (4 years ago)

    I'll put that on the to-do list.

  • @gavinaustin4474 (4 years ago)

    @@statquest I'm pushing my luck here, but one more item, if I may: the difference between PCA and factor analysis. Often, these are distinguished in general terms (e.g., they are concerned with the total variance vs the shared variance, respectively), but I think that the best way to distinguish them would be to apply both methods to the same data set. I would be most interested in seeing that done.

  • @oanphong61 (2 years ago)

    Thank you very much!

  • @statquest (2 years ago)

    bam!

  • @sane7263 (1 year ago)

    That's a great video, Josh! 6:10 they should definitely have asked you 😂

  • @statquest (1 year ago)

    BAM! :)

  • @thedanglingpointer8411 (4 years ago)

    God of explanation !!! 🙏🏻🙏🏻🙏🏻 Awesome stuff 🙂🙂

  • @statquest (4 years ago)

    Thank you! 🙂

  • @ganpatinatrajan5890 (1 year ago)

    Excellent Explanations 👍👍👍 Great work 👍👍👍

  • @statquest (1 year ago)

    Thank you!

  • @albertomontori2863 (2 years ago)

    this video.....you are my savior ❤️❤️❤️

  • @statquest (2 years ago)

    bam!

  • @niyousha6868 (3 years ago)

    Thank you Josh

  • @statquest (3 years ago)

    Any time!

  • @ngochua6679 (3 years ago)

    Fortunately, I asked you :) I agree squared and absolute penalty are better word choices for these regularization methods. Thanks again for making my ML at Scale a tad bit easier.

  • @statquest (3 years ago)

    BAM! Thank you very much! :)

  • @jeffchoi6179 (4 years ago)

    The best visualization I've ever seen

  • @statquest (4 years ago)

    Thank you! :)

  • @ling6701 (1 year ago)

    Nice. Thank you.

  • @statquest (1 year ago)

    :)

  • @srs.shashank (3 years ago)

    As a result, when the slope becomes 0 for large lambda in lasso, we can use lasso for feature selection. Nice video, Josh!!

  • @statquest (3 years ago)

    Bam! :)

  • @arjunpukale3310 (4 years ago)

    And that's the reason why lasso does a kind of feature selection and sets many weights to 0 compared to ridge regression. Now I know the reason behind it, thanks a lot ❤
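That feature-selection effect can be sketched with textbook coordinate descent for lasso. This is a bare-bones illustration with made-up data (one useful feature, one useless one), not a production solver:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
useful = rng.normal(size=n)
useless = rng.normal(size=n)                  # unrelated to y
X = np.column_stack([useful, useless])
y = 2.0 * useful + rng.normal(scale=0.5, size=n)
X = X - X.mean(axis=0)
y = y - y.mean()                              # centered: no intercept term

def soft(a, t):
    # soft-threshold operator: shrinks a toward 0 and clips at exactly 0
    return np.sign(a) * max(abs(a) - t, 0.0)

def lasso_cd(X, y, lam, iters=200):
    # coordinate descent for sum((y - X@b)^2) + lam * sum(|b_j|)
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ b + X[:, j] * b[j]    # residual with feature j removed
            b[j] = soft(float(X[:, j] @ r), lam / 2.0) / float(X[:, j] @ X[:, j])
    return b

lam = 50.0
b_lasso = lasso_cd(X, y, lam)
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
print(b_lasso, b_ridge)
```

At the same lambda, lasso sets the useless feature's coefficient to exactly 0 (removing it from the model), while ridge leaves it small but nonzero.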

  • @statquest (4 years ago)

    BAM! :)

  • @DreamyGirlChannel (3 years ago)

    Just too much awesome!

  • @statquest (3 years ago)

    Thank you! :)

  • @kzengineai (4 years ago)

    your videos are very explanatory for studying this field...

  • @statquest (4 years ago)

    Glad you think so!

  • @AhmedKhaled-xp7dm (1 month ago)

    Amazing series on regularization (as usual)! I just didn't quite understand why, in ridge regression, the weights/parameters never ever reach zero. I didn't give it much thought, but it didn't pop right out at me like it usually does in your videos lol. Again, great series!

  • @statquest (1 month ago)

    Thanks!

  • @samuelhughes804 (4 years ago)

    All your videos are great, but the regularization ones have been a fantastic help. Was wondering if you were planning any on selective inference from lasso models? That would complete the set for me haha

  • @statquest (4 years ago)

    Not yet!

  • @mik8760 (3 years ago)

    THAT IS SOOOOOO GOOD MAN

  • @statquest (3 years ago)

    Thanks! :)

  • @Ganeshkakade454 (1 year ago)

    Just wanna say: you are my Guru (that means teacher) in data science... more love to you from India

  • @statquest (1 year ago)

    Thank you! :)

  • @suryan5934 (4 years ago)

    Amazing video as always, Josh! Just to be sure I got it correctly: the plot of RSS against the slope is a parabola in 2D. So when we do the same thing in 3D, i.e. with 2 parameters, does it become the same bowl-shaped cost function that we try to minimize?

  • @statquest (3 years ago)

    Yes

  • @berkceyhan5031 (2 years ago)

    I first like your videos then watch them!

  • @statquest (2 years ago)

    BAM!

  • @anshpujara14 (4 years ago)

    Can you do a lecture on Kohonen Self Organising Maps?

  • @praveerparmar8157 (3 years ago)

    "Unfortunately, no one asked me" 🤣🤣🤣

  • @statquest (3 years ago)

    :)

  • @rishipatel7998 (1 year ago)

    This guy is amazing.... BAM!!!

  • @statquest (1 year ago)

    Thanks! :)

  • @premnathkn1976 (4 years ago)

    Clear and apt..

  • @statquest (4 years ago)

    Thanks! :)

  • @adhiyamaanpon4168 (4 years ago)

    Hey josh!! Can u plz make a video for K-modes algorithm for categorical variables(unsupervised learning) with an example..plz?

  • @PerfectPotential (4 years ago)

    "I got ... calling a young StatQuest phone" 😁 (The Ladys might love your work fam.)

  • @statquest (4 years ago)

    Bam!

  • @omnesomnibus2845 (4 years ago)

    Really excellent video Josh. You consistently do a great job, and I appreciate it. Could you make a video showing the use of Ridge regression and especially Lasso regression in parameter selection? I had to do that once, and it is complicated. From your example it seems that using neither penalty gives you the best response. So, in what circumstances do you want to use the regression to improve your result? If you are using lasso regression to find the top 3 predictive parameters, how does this work? What are the dangers? How do you optimally use it? A complicated subject for sure! I'm sorry if this is covered in your videos on Lasso and Ridge regression individually, I am watching them next. I agree with your naming convention btw, squared and absolute-value penalty is MUCH more intuitive!

  • @statquest (4 years ago)

    Watch the other regularization videos first. I cover some of what you would like to know in about parameter selection in my video on Elastic-Net in R: kzread.info/dash/bejne/laihsNNwdsrIpqw.html

  • @omnesomnibus2845 (4 years ago)

    @@statquest I will check out those videos, thanks. I actually did use elastic net regularization. The whole issue is complex (for somebody without a decent stats background) because the framework of how everything works isn't covered both well and simply anywhere I could find, without going down several pretty deep rabbit holes. Some of the parameter selections that were suggested depended on the assumption that the parameters were independent, which was NOT the case in my situation. I'm still not sure what the best approach would have been.

  • @omnesomnibus2845 (4 years ago)

    @@statquest As an additional note, I've always found that examples and exercises are even more important than theory, while theory is essential at times too. In many math classes concepts were laid out in formal and generalized glory, but I couldn't get the concept at all until I put hard numbers or examples to it. It's probably not the subject of your channel or in your interest, but I think some really hand-holding examples of using these concepts in some kaggle projects, or going through what some interesting papers did, would be a great way of bringing the theory and the real world together.

  • @statquest (4 years ago)

    @@omnesomnibus2845 I do webinars that focus on the applied side of all these concepts. So we can learn the theory, and then practice it with real data.

  • @omnesomnibus2845 (4 years ago)

    @@statquest That's great!

  • @designcredible8247 (1 year ago)

    Hi, thanks for this explanation, it really helped! At my previous workplace almost everyone took it as a given that lasso can be used for feature selection. But whether it removes features actually depends on the lambda value, right? It may not remove any features at all? And increasing the lambda value to the maximum isn't always most beneficial?

  • @statquest (1 year ago)

    That's correct.

  • @RAJATTHEPAGAL (4 years ago)

    L2 = weight penalization (smooths out the loss curve and reduces overfitting, but a higher lambda can kill model training). L1 = weight zeroing (dragging weights to zero; useful for learnable ignoring of variables, and at times for high-dimensional data). I have used both of these earlier with a similar mindset, and even in deep learning I used a similar analogy to reason about what was happening. The visualization really did help, so I just wanted to ask: does this simplistic way of viewing the behaviour make sense? Or am I missing something...

  • @aliguzel2030 (2 years ago)

    hey Josh, your videos are great, I really appreciate them, but I have a question: at 0:46, when you chose the line with 0 slope, why did you choose that intercept value? Because if you choose another intercept value you'll get totally different L1 and L2 graphs. Thank you.

  • @statquest (2 years ago)

    I think the intercept is from the least squares optimal fit. I picked it because it does a good job illustrating the concepts without making the explanation more complicated than it needed to be. In theory every line will have a different intercept, but the concepts illustrated will not change.

  • @anuragshandilya3556 (4 years ago)

    Perfect!!!!

  • @statquest (4 years ago)

    Thanks! :)

  • @baharb5321 (3 years ago)

    Awesome! And I should mention: we are asking YOU!

  • @statquest (3 years ago)

    Bam!

  • @saranyakumaran459 (2 years ago)

    Thank you very much for the video!!! All your videos are really easy to understand, thanks a lot! Could you please upload a video on the SCAD (Smoothly Clipped Absolute Deviation) regularization method?

  • @statquest (2 years ago)

    I'll keep that in mind.

  • @nikgabrovsek3236 (2 years ago)

    Genious!

  • @statquest (2 years ago)

    Thanks!

  • @mihailtegovski4028 (3 years ago)

    You should receive a Nobel Prize.

  • @statquest (3 years ago)

    BAM! :)

  • @shawnkim6287 (9 months ago)

    Hi Statquest. Thank you for the video. Just have a question. Let's say we have a categorical variable with 4 levels. After running the LASSO regression, if we see 3 dummy variables' beta (i.e. 1 level is in the base level) are 0 then do we drop that categorical variable? I know if the LASSO drops only 1 dummy variable, not 3, we are just merging that dummy variable with the base level. Thanks as always

  • @statquest (9 months ago)

    To be honest, I don't know exactly what will happen in that situation. What you suggest, dropping the variable seems reasonable, but I'm not sure.

  • @shawnkim6287 (9 months ago)

    @@statquest thanks. If a numerical variable's coefficient is 0 then we are dropping that predictor in our lasso model?

  • @statquest (9 months ago)

    @@shawnkim6287 yes

  • @dudeyosemite1890 (3 years ago)

    Josh, great video! I have a quick question: in your plots, when lambda gets bigger (say lambda = 10), the (sum of squared residuals + penalty) is always equal to or higher than in the lambda = 0 case, so why would we want to use Ridge/Lasso when the sum is bigger than with lambda = 0?

  • @statquest (3 years ago)

    The individual videos on Ridge kzread.info/dash/bejne/g2xltLRsqa7UY5M.html and Lasso kzread.info/dash/bejne/gHuaktiohLDSk9Y.html regression show why we would want to use regularization. In short, Ridge and Lasso regression can improve predictions (and this can be observed with cross validation). In contrast, the purpose of this example is only to highlight the differences between Ridge and Lasso regression and show why Lasso can set parameters to 0 if lambda is large enough.

  • @tanbui7569 (3 years ago)

    Thank you for your work as always. It's AWESOME. I just have some questions. Why is there a kink in the SSR curve for Lasso Regression? Is it because we are adding lambda * |slope|, which is a linear component? And does the curve for Ridge Regression stay a parabola because we are adding lambda * slope^2, which is a parabolic component?

  • @statquest (3 years ago)

    I believe that is correct.
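That intuition checks out numerically: the one-sided slopes of the penalized cost at slope = 0 agree for the squared penalty but differ by 2 * lambda for the absolute-value penalty, and that jump is exactly the kink. A quick sketch with toy data (not the video's data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2])
lam = 5.0

def cost(b, penalty):
    # sum of squared residuals plus the chosen penalty on the slope b
    rss = float(np.sum((y - b * x) ** 2))
    return rss + (lam * abs(b) if penalty == "lasso" else lam * b * b)

# numerical one-sided derivatives of the penalized cost at b = 0
eps = 1e-6
for penalty in ["ridge", "lasso"]:
    right = (cost(eps, penalty) - cost(0.0, penalty)) / eps
    left = (cost(0.0, penalty) - cost(-eps, penalty)) / eps
    print(penalty, right - left)   # ~0 for ridge, ~2*lam for lasso
```

Because the lasso cost has a corner at 0, the minimum can sit exactly at 0 for a whole range of lambdas, which is why the optimal slope can be exactly 0; the smooth ridge curve has no corner, so its minimum only approaches 0.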

  • @sudeshnasen2731 (2 years ago)

    Hi. Great video! I had the same query as to why we don't see a similar kink in the Ridge Regression cost-function-vs-slope curve.

  • @tats21a (2 years ago)

    Curious: is it possible to learn the optimal y-intercept (the bias term) with regularization? Any insights? Like getting a worse-fit straight line with slope zero but cutting the y-axis at a different height?

  • @statquest (2 years ago)

    As we increase regularization, the y-axis intercept moves towards the average y-axis value, which is the optimal value if none of the parameters are useful.
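A brute-force sketch of that behavior, with made-up data and a deliberately huge lambda; note the intercept itself is not penalized, only the slope:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.2, 1.9, 3.1, 3.8])
lam = 1e9                     # huge ridge penalty drives the slope to 0

# brute-force the ridge objective over an (intercept, slope) grid
intercepts = np.linspace(0.0, 5.0, 251)
slopes = np.linspace(-1.0, 1.0, 201)
best = min(
    (float(np.sum((y - (a + b * x)) ** 2)) + lam * b * b, a, b)
    for a in intercepts
    for b in slopes
)
_, a_best, b_best = best
print(a_best, b_best)
```

With the slope forced to (essentially) 0, the best remaining intercept is the average y value, mean(y) = 2.5 here, matching the reply above.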

  • @sabrihamad (3 years ago)

    Thank you for such a great video. I have a question though: What happens to the intercept term during regression? Do you also shrink it? Here you have it fixed and only vary the slope but in your older ridge regression video you changed both!

  • @statquest (3 years ago)

    The intercept is not affected by regularization. For details, see: kzread.info/dash/bejne/g2xltLRsqa7UY5M.html

  • @TheCheukhin (3 years ago)

    Underrated

  • @statquest (3 years ago)

    Thanks!

  • @mohitnagarkoti4086 (4 years ago)

    Was about to start this topic. Thanks, @Statquest! I just have a quick question: when should we expect your video on neural networks? And I have a request: could you add your upcoming videos to your website in a separate section, listed by topic and the date or month they will be uploaded? It would be very helpful for students deciding whether to buy a subscription plan for your channel in order to get early access to your videos.

  • @statquest (4 years ago)

    You can get early access by becoming a channel member or signing up for my Patreon: www.patreon.com/statquest

  • @ericklestrange6255 (4 years ago)

    please do: Wasserstein metric (earth movers distance)

  • @mathildereynes8508 (4 years ago)

    It could be interesting to see the explanation for a multidimensional problem with more than 2 features, but very nice video!

  • @neillunavat (3 years ago)

    Be grateful we've got such a nice guy.

  • @manjushang (3 years ago)

    ‘Unfortunately no one asked me ‘ 😀 . Unique content . Hats off !

  • @manjushang (3 years ago)

    Also, it would be of great help if you explained the following points: 1. How lasso regression excludes the useless variables. 2. How ridge regression does a little better when most variables are useful. Thanks, Manjusha

  • @statquest (3 years ago)

    Thanks!

  • @MorriganSlayde (2 years ago)

    I died laughing when you sang the intro.

  • @statquest (2 years ago)

    That's a good one! :)

  • @adibhatlavivekteja2679 (4 years ago)

    Explain stats to a 10-year-old? Me: "You kid, Subscribe and drill through all the content of StatQuest with Josh Starmer"

  • @statquest (4 years ago)

    :)

  • @usamahussain4461 (2 years ago)

    Excellent. I have just one question. In the case of the L1 penalty, isn't the line with lambda equal to 40 (i.e., slope 0) a bad line? I mean, with the blue line we were getting a better fit, since it didn't completely ignore weight when predicting height and its sum of squared residuals was smaller.

  • @statquest (2 years ago)

    What time point, minutes and seconds, are you asking about?

  • @usamahussain4461 (2 years ago)

    @@statquest 7:16

  • @statquest (2 years ago)

    @@usamahussain4461 Yes. For both L1 and L2 you need to test different values for lambda, including setting it to 0, to find the optimal value.
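A minimal sketch of that tuning loop, using a single centered feature and a simple hold-out split instead of full cross-validation (made-up data; the lambda grid is arbitrary, but it deliberately includes 0):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=40)
y = 1.5 * x + rng.normal(size=40)
x = x - x.mean()
y = y - y.mean()                          # centered: slope-only model
xtr, xte, ytr, yte = x[:25], x[25:], y[:25], y[25:]

def ridge_slope(lam):
    # closed-form penalized slope fit on the training split
    return float(xtr @ ytr) / (float(xtr @ xtr) + lam)

# try several lambdas -- *including 0* -- and keep whichever validates best
lams = [0.0, 0.1, 1.0, 10.0, 100.0]
errs = [float(np.mean((yte - ridge_slope(lam) * xte) ** 2)) for lam in lams]
best_lam = lams[int(np.argmin(errs))]
print(list(zip(lams, errs)), best_lam)
```

Whichever lambda wins, the point stands: regularization only earns its keep when the held-out error says so, which is why 0 belongs in the grid.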