Bayesian Linear Regression : Data Science Concepts
The crazy link between Bayes' Theorem, Linear Regression, LASSO, and Ridge!
LASSO Video : • Lasso Regression
Ridge Video : • Ridge Regression
Intro to Bayesian Stats Video : • What the Heck is Bayes...
My Patreon : www.patreon.com/user?u=49277905
Comments: 169
As soon as you explained the results from Bayesian, my jaw was wide open for like 3 minutes. This is so interesting.
Read it in a book. Didn't understand jack back then. Your videos are awesome. Rich, short, concise. Please make a video on Linear Discriminant Analysis and how it's related to Bayes' theorem. This video will be saved in my data science playlist.
This video is a true gem, informative and simple at once. Thank you so much!
@ritvikmath
3 years ago
Glad it was helpful!
This is my favorite video out of a large set of fantastic videos that you have made. It just brings everything together in such a brilliant way. I keep getting back to it over and over again. Thank you so much!
Regardless of how they were really initially devised, seeing the regularization formulas pop out of the bayesian linear regression model was eye-opening - thanks for sharing this insight
@dennisleet9394
2 years ago
Yes. This really blew my mind. Boom.
Really good explanation. I really like how you gave context and connected all the topics together, and it makes perfect sense, while maintaining the perfect balance between math and intuition. Great work. Thank you!
I've seen everything in this video many, many times, but no one had done as good a job as this in pulling these ideas together in such an intuitive and understandable way. Well done and thank you!
Best of all the videos on Bayesian regression; other videos are so boring and long, but this one has quality as well as ease of understanding. Thank you so much!
Your videos are great. Love the connections you make so that stats is intuitive as opposed to plug and play formulas.
This is incredible. Clear, well paced and explained. Thank you!
Brilliant and clear explanation, I was struggling to grasp the main idea for a Machine Learning exam but your video was a blessing. Thank you so much for the amazing work!
Man I'm going to copy-paste your video whenever I want to explain regularization to anyone! I knew the concept but I would never explain it the way you did. You nailed it!
Man! What a great explanation of Bayesian Stats. It's all starting to make sense now. Thank you!!!
For me, the coolest thing about statistics is that every time I do a refresh on these topics, I get some new ideas or understandings. It's lucky that I came across this video after a year, which could also explain why we need to "normalize" the X (0-centered, with stdev = 1) before we feed it into the MLP model, if we use regularization terms in the layers.
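The standardization point above can be checked even in a one-feature toy example: the same λ shrinks a coefficient very differently depending on the unit of X, which is why inputs are standardized before regularized fits. A minimal pure-Python sketch with made-up data; `ridge_1d` is just the closed-form one-feature ridge estimate on centered data:

```python
# One-feature ridge estimate (data assumed centered):
#   beta_hat = sum(x*y) / (sum(x^2) + lam)
def ridge_1d(x, y, lam):
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)

y    = [2.0, 4.0, 6.0, 8.0]
x_m  = [1.0, 2.0, 3.0, 4.0]            # feature in metres
x_km = [v / 1000.0 for v in x_m]       # same feature in kilometres

lam = 1.0
b_m  = ridge_1d(x_m, y, lam)           # close to the OLS slope of 2.0
b_km = ridge_1d(x_km, y, lam)          # OLS slope is 2000, crushed toward 0

# Compare both on the metre scale: same data, same lambda, wildly
# different effective shrinkage -- hence standardize X first.
print(b_m, b_km / 1000.0)
```

With the same λ = 1, the metre-scale coefficient barely moves from its least-squares value, while the kilometre-scale coefficient is shrunk almost to zero relative to its least-squares value.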
Thanks, man. A really good and concise explanation of the approach (together with the video on Bayesian statistics).
Your videos are a true gem, and an inspiration even. I hope to be as instructive as you are if I ever become a teacher!
This video is super informative! It gave me the actual perspective on regularization.
Amazing, you kept it simple and showed how regularization terms in linear regression originated from Bayesian approach!! Thank U!
Crystal clear! Thank you so much; the explanation is very structured and detailed.
This is sooo clear. Thank you so much!
Amazing video! Really clearly explained! Keep em coming!
@ritvikmath
3 years ago
Glad you liked it!
Excellent tutorial! I have applied RIDGE as the loss function in different models. However, it is the first time I understand the mathematical meaning of lambda. It is really cool!
Awesome explanation! Especially the details on the prior were so helpful!
@ritvikmath
3 years ago
Glad it was helpful!
Thank you for this amazing video, It clarified many things to me!
It just blew my mind too. I can feel you, brother. Thank you!
Super informative and clear lesson! Thank you very much!
Thanks, that was a good one. Keep up the good work!
Awesome explanation!
Your videos are a Godsend!
This was awesome, thanks a lot for your time :)
This video is amazing!!! Such a helpful and clear explanation.
Very cool, the link you explained between regularization and the prior.
My mind exploded with this video. Thank you.
Incredible explanation!
This was an excellent introduction to Bayesian Regression. Thanks a lot!
This was incredible, thank you so much.
Love this content! More examples like this are appreciated
@ritvikmath
3 years ago
More to come!
you are so good at this, this video is amazing
@ritvikmath
11 months ago
Thank you so much!!
Love you, bro. I got my joining letter from NASA as a Scientific Officer-1. Believe me, your videos have always helped me in my research work.
You are a great teacher thank you for your videos!!
truly excellent explanation; well done
This is truly cool. I had the same thing with the lambda. It’s good to know that it was not some engineering trick.
This is an awesome explanation
This blew my mind. Thanks!
Thanks a lot! Great! I am reading Elements of Statistical Learning and did not understand what they were talking about. Now I got it.
Unbelievable: you explained linear regression, explained Bayesian stats in simple terms, and showed the connection, all under 20 min... Perfect.
At last!!! Now I can see what lambda was doing in the lasso and ridge regression!! Great video!!
@ritvikmath
3 months ago
Glad you liked it!
You got a subscriber, awesome explanation. I spent hours learning it from other sources, but with no success. You are just great.
This is brilliant, man! Brilliant! Literally solved where the lambda comes from!
Thank you for sharing this fantastic content.
@ritvikmath
3 years ago
Glad you enjoy it!
Thanks a lot for this clear explanation!
thank you so much for the great explanation
Thanks a lottttt! I had so much difficulty understanding this.
wonderful stuff! thank you
Thanks for the video. It's really helpful. I was trying to understand where the regularization terms come from. Now I get it. Thanks!
Wow, killer video. This was a topic where it was especially nice to see everything written on the board in one go. Was cool to see how a larger lambda implies a more pronounced prior belief that the parameters lie close to 0.
@ritvikmath
1 year ago
I also think it’s pretty cool 😎
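The point above, that a larger λ encodes a stronger prior belief that the parameters sit near 0, can be checked numerically. A toy sketch with made-up numbers, using the closed-form one-feature ridge estimate on centered data:

```python
# One-feature ridge: beta_hat = sum(x*y) / (sum(x^2) + lam).
# Larger lam = stronger prior pull of beta toward 0.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x

sxy = sum(a * b for a, b in zip(x, y))
sxx = sum(a * a for a in x)

betas = [sxy / (sxx + lam) for lam in (0.0, 1.0, 10.0, 100.0)]
print(betas)   # shrinks monotonically from the OLS value 1.99 toward 0
```

At λ = 0 the estimate is exactly OLS; every increase in λ moves it strictly closer to the prior mean of 0.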
Awesome video. I didn't realize that the L1, L2 regularization had a connection with the Bayesian framework. Thanks for shedding some much needed light on the topic. Could you please also explain the role of MCMC Sampling within Bayesian Regression models? I recently implemented a Bayesian Linear Regression model using PyMC3, and there's definitely a lot of theory involved with regards to MCMC NUTS (No U-Turn) Samplers and the associated hyperparameters (Chains, Draws, Tune, etc.). I think it would be a valuable video for many of us. And of course, keep up the amazing work! :D
@ritvikmath
3 years ago
good suggestion!
At last!! I could find an explanation for the lasso and ridge regression lambdas!!! Thank you!!!
@ritvikmath
3 months ago
Happy to help!
What a wonderful explanation!!
@ritvikmath
3 years ago
Glad you think so!
perfect explanation thank you
Thank you very much. Pretty helpful video!
Great video with a very clear explanation. Could you also do a video on Bayesian logistic regression?
This is the best explanation of L1 and L2 I've ever heard
You are the go-to for me when I need to understand topics better. I understand Bayesian parameter estimation thanks to this video! Any chance you can do something on the difference between Maximum Likelihood and Bayesian parameter estimation? I think anyone that watches both of your videos will be able to pick up the details but seeing it explicitly might go a long way for some.
I used to be afraid of Bayesian Linear Regression until I saw this vid. Thank you sooo much
@ritvikmath
4 months ago
Awesome! You're welcome.
You are THE LEGEND
Legendary video
you are a great teacher!!!🏆🏆🏆
@ritvikmath
1 year ago
Thank you! 😃
thank you so much for this
Great video!!
Nice, I never thought of that 👍🏼👍🏼
Such a nice explanation. I mean, that's the first time I actually understood it.
Fantastic! You are my savior!
Mind blown on the connection between regularization and priors in linear regression
very great, thank you
Excellent thank you
Thank you very much
Great video, do you have some sources I can use for my university presentation? You helped me a lot 🙏 thank you!
Your videos are awesome, so much better than my prof's.
Holy shit! This is amazing. Mind blown :)
I'd never considered a Bayesian approach to linear regression let alone its relation to lasso/ridge regression. Really enlightening to see!
@ritvikmath
1 year ago
Thanks!
Most insightful! The L1-as-Laplacian part toward the end was a bit skimpy, though. Maybe I should watch your LASSO clip. Could you do a video on elastic net? Insight on balancing the L1 and L2 norms would be appreciated.
@danielwiczew
2 years ago
Yeah, elastic net and a comparison to Ridge/Lasso would be very helpful.
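On the Laplace/L1 step that the comment above found skimpy, the one-line version follows the same pattern as the Gaussian/L2 case, just with a Laplace(0, b) prior on each coefficient:

```latex
P(\beta_j) = \frac{1}{2b}\exp\!\left(-\frac{|\beta_j|}{b}\right)
\quad\Longrightarrow\quad
-\log P(\beta) = \text{const} + \frac{1}{b}\sum_j |\beta_j|
               = \text{const} + \frac{1}{b}\,\lVert \beta \rVert_1
```

So maximizing the posterior adds an L1 penalty, i.e. LASSO, with the penalty strength set by the prior scale b.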
Beautiful!
@ritvikmath
3 years ago
Thank you! Cheers!
Can you please please do a series on the categorical distribution, multinomial distribution, Dirichlet distribution, Dirichlet process, and finally non-parametric Bayesian tensor factorisation, including clustering of streaming data. I will personally pay you for this. I mean it!! There are a few videos on these things on youtube; some are good, some are way high-level. But no one can explain the way you do. This simple video has such profound importance!!
Excellent
thank you!
Excellent!
@ritvikmath
3 years ago
Thank you! Cheers!
Max ( P(this is the best vid explaining these regressions | KZread) )
Thank you, I saw this before but I didn't understand it. Please, where can I find the complete derivation? And maybe you can do a complete series on this topic.
Great video, just a question: where can I find some examples of the algebra?
Amazing! But where did Ridge and Lasso start from? Were they invented with Bayesian statistics as a starting point, or is that a duality that came later?
Notes for my future revision. *Prior β* 10:30 The value of the prior on β is normally distributed. The byproduct of using a Normal prior is regularisation: because the prior keeps the values of β from straying too far from its mean of zero, regularisation keeps the values of β small.
Great video. The relation between the prior and the LASSO penalty was a "wow" moment for me. It would be helpful to see an actual computation example in Python or R. A common problem I see in Bayesian lectures is too much focus on the math rather than showing how much the resulting parameters actually differ, especially when to consider the Bayesian approach over OLS.
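In that spirit, a tiny pure-Python computation (made-up data, one feature, centered) showing how much the Bayesian posterior mean actually differs from OLS, and that it coincides with ridge at λ = σ²/τ²:

```python
# One feature, centered data. OLS vs the posterior mean under a
# N(0, tau^2) prior on beta with N(0, sigma^2) observation noise.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]

sxy = sum(a * b for a, b in zip(x, y))
sxx = sum(a * a for a in x)

sigma2, tau2 = 1.0, 0.5                  # noise and prior variances
ols       = sxy / sxx                    # no prior
post_mean = (sxy / sigma2) / (sxx / sigma2 + 1.0 / tau2)
ridge     = sxy / (sxx + sigma2 / tau2)  # lam = sigma^2 / tau^2

print(ols, post_mean, ridge)  # posterior mean equals the ridge estimate
```

The posterior mean sits between the OLS estimate and the prior mean of 0, and matches the ridge solution exactly when λ is set to σ²/τ².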
Do we need to use cross-validation in Bayesian analysis to control overfitting?
mind boggling
There is an error at the beginning of the video: in frequentist approaches, X is treated as non-random covariate data and y is the random part, so the high variance of OLS should be expressed as small changes to y ⇒ big changes to the OLS estimator. Changes to the covariate matrix becoming big changes to the OLS estimator is more a matter of the non-robustness of OLS w.r.t. outlier contamination. Also, the lambda should be 1/(2τ²), not σ²/τ², since ln P(β) = -p·ln(τ√(2π)) - ||β||₂²/(2τ²). Overall this was very helpful, cheers!
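For what it's worth, both λ conventions show up depending on scaling: the 1/(2τ²) form comes from reading the penalty straight off the log-prior, while the video's σ²/τ² form comes from keeping the likelihood's σ² in the objective and then rescaling. A sketch of the latter:

```latex
\hat{\beta}_{\text{MAP}}
= \arg\min_{\beta}\;
  \frac{1}{2\sigma^2}\lVert y - X\beta \rVert_2^2
  + \frac{1}{2\tau^2}\lVert \beta \rVert_2^2
= \arg\min_{\beta}\;
  \lVert y - X\beta \rVert_2^2
  + \frac{\sigma^2}{\tau^2}\lVert \beta \rVert_2^2
```

since multiplying the objective by the constant 2σ² does not change the argmin; hence λ = σ²/τ² under that scaling.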
Is there an algorithm/method for choosing tau or lambda? Or should I run the algorithm for multiple values of tau/lambda?
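A common practical answer is to try a grid of values and keep the one with the best held-out error, e.g. leave-one-out cross-validation. A toy pure-Python sketch for the one-feature ridge estimate (made-up data; real code would use a library's CV utilities):

```python
# Leave-one-out CV over a small grid of lambda values for the
# one-feature ridge estimate beta = sum(x*y) / (sum(x^2) + lam).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.2, 3.8, 6.1, 8.3, 9.7]

def ridge_fit(xs, ys, lam):
    sxy = sum(a * b for a, b in zip(xs, ys))
    sxx = sum(a * a for a in xs)
    return sxy / (sxx + lam)

def loo_error(lam):
    # Refit with one point held out; sum the squared prediction errors.
    err = 0.0
    for i in range(len(x)):
        xs, ys = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        b = ridge_fit(xs, ys, lam)
        err += (y[i] - b * x[i]) ** 2
    return err

grid = [0.0, 0.1, 1.0, 10.0]
best = min(grid, key=loo_error)
print(best, loo_error(best))
```

In the fully Bayesian view, τ can instead be given its own prior (or estimated by maximizing the marginal likelihood), which sidesteps the grid search.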
Great thanks! .. was feeling the same discomfort about the origin of these...
Thanks from Korea. I love you!
@ritvikmath
3 years ago
You're welcome!!!
What about Bayesian logistic regression? Could you post a video about it?