Deriving the least squares estimators of the slope and intercept (simple linear regression)
I derive the least squares estimators of the slope and intercept in simple linear regression, using summation notation and no matrices. I assume the viewer has already been introduced to the linear regression model, but I provide a brief review in the first few minutes. I also assume a basic knowledge of differential calculus, including the power rule and the chain rule.
If you are already familiar with the problem, and you are just looking for help with the mathematics of the derivation, the derivation starts at 3:26.
At the end of the video, I illustrate that sum (X_i - X bar)(Y_i - Y bar) = sum X_i(Y_i - Y bar) = sum Y_i(X_i - X bar), and that sum (X_i - X bar)^2 = sum X_i(X_i - X bar).
There are, of course, a number of ways of expressing the formula for the slope estimator, and I make no attempt to list them all in this video.
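The identities mentioned above, and the final formulas for the slope and intercept, are easy to check numerically. Here is a minimal sketch (the data are made up purely for illustration, and the cross-check against `np.polyfit` is just one of many ways to verify the result):

```python
import numpy as np

# Invented sample data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=20)
Y = 2.0 + 3.0 * X + rng.normal(size=20)

Xbar, Ybar = X.mean(), Y.mean()

# The least squares estimators derived in the video:
# beta1_hat = sum (X_i - X bar)(Y_i - Y bar) / sum (X_i - X bar)^2
# beta0_hat = Y bar - beta1_hat * X bar
beta1_hat = np.sum((X - Xbar) * (Y - Ybar)) / np.sum((X - Xbar) ** 2)
beta0_hat = Ybar - beta1_hat * Xbar

# The identities illustrated at the end of the video:
assert np.isclose(np.sum((X - Xbar) * (Y - Ybar)), np.sum(X * (Y - Ybar)))
assert np.isclose(np.sum((X - Xbar) * (Y - Ybar)), np.sum(Y * (X - Xbar)))
assert np.isclose(np.sum((X - Xbar) ** 2), np.sum(X * (X - Xbar)))

# Cross-check against a library least squares fit:
b1_np, b0_np = np.polyfit(X, Y, 1)
assert np.isclose(beta1_hat, b1_np) and np.isclose(beta0_hat, b0_np)
```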
Comments: 194
Finally, someone who made it simple to understand! Thank you!
@victoriabrimm5014
3 years ago
Right! I went through like a million videos trying to understand this one segment, and this was the first to do it.
@agustinlawtaro
3 years ago
True.
@cmacompilation4649
6 months ago
Please, how is it possible to treat Beta as a variable (when taking derivatives) and then treat Beta as a constant (to take it out of the sum)???
@cleisonarmandomanriqueagui9176
1 month ago
Best video. I was looking for something like this.
This is an underrated video.
@divitisaitezaa2599
3 years ago
@Isaiah Matias is it?
My god, you explained this so easily. I spent hours trying to understand this before watching this video and still couldn't understand it properly. After watching, it's crystal-clear now. ❤️
@DHDH_DH
5 months ago
Me too. I spent a whole morning figuring this out. He is a savior.
I have gone through tons of materials on this topic, and they either skip the derivation process or go directly into some esoteric matrix arithmetic. This video explains everything I need to know. Thanks.
Unbelievably perfect video, one of the best videos I have watched in the statistics field. It's so rare to find high quality in this field, I don't know why.
I never thought that I could understand simple linear regression using this approach. Thank you
I don't usually comment on teaching videos, but this really deserves thanks for how clearly and simply you explained everything. The lecture I had at university left much to be desired.
I can't thank you enough for this brilliant explanation!
My Physical Chemistry teacher spent ~1.5 hrs showing this derivation and I got completely lost. Watching your video, it's so clear now. Thank you for your phenomenal work.
Absolutely beautiful derivation! Crystal clear! Thanks very much.
@mns003-choudharyvivek2
9 months ago
Can I apply this in Mumbai University exam?
phenomenal video. Thank you for taking the time to explain each step of the derivations such as the sum rule for derivation. Thank you for helping me learn.
@jbstatistics
2 years ago
You are very welcome!
You have no idea how you saved my life. I was struggling so hard to find out why sum X_i(X_i - X bar) = sum (X_i - X bar)^2, etc. You are the first one I found who explained that.
Thanks so much, this was so easy to follow and comprehend!
Amazing video! Slight bumps where my own knowledge was patchy but you provided enough steps for me to work those gaps out.
The best part of this video is finally figuring out where that "n" came from in the equation for beta-naught-hat. Thank you so very much for making this available.
@jbstatistics
1 year ago
I'm glad to be of help!
Really, really good explanation!! Thank you!!
This is FANTASTIC. THANK YOU!
Thank you so much! This explanation is literally perfect, helped me so much!
@jbstatistics
2 years ago
Thanks for the kind words! I'm glad to be of help!
Thank you so much for such a clear explanation! It helps me a lot in preparing for my upcoming final exam.
Absolutely brilliant video!!! Thanks so much
You made it simpler than my lecturer does. Thank you!
Thank you for explaining in such detail ❤️
Thank you very much! Very clear and interesting explanation!
one video on youtube that actually explains something properly
Glad you're back!
@jbstatistics
5 years ago
Thanks! Glad to be back! Just recording and editing as I type this!
Very helpful video to understand. Many thanks!
Thank you so much, this video has cleared up all my confusion, because the book I'm reading just says 'by doing some simple calculus'.
Thanks a lot, sir, this is really what I needed. 🙏🙏🙏🙏🙏🙏🙏🙏 There are no words to express appreciation for your efforts.
Thank you for actually explaining it. Most videos are just like "hi, if you want to solve this, plug in this awesome formula and that's it, thank you for watching :)"
Beautiful video, good explanation
Great explanation! Step by step...
thanks a lot for simplifying the derivation
Best explanation, thank you so much
Thank u soooo much for explaining this! You made my day.
This is talent. Thank you so much 😊
Awesome video sir! Thank you!
Thank you so much, I am really enjoying and understanding what you're teaching.
This was really helpful thanks!
Exactly what I was looking for. Thank you so much!
@jbstatistics
3 years ago
You are very welcome!
@herodmoonga4799
2 years ago
Grateful, you are wonderful, Sir. You have made me understand economics.
Thank you very much. This video helped me a lot.
Excellent video
finally, I've understood this bloody thing. Thank u sooooo much m8.
@jbstatistics
2 years ago
I'm glad you found it helpful!
This is incredible, thank you so much! :)
@jbstatistics
5 months ago
You are very welcome!
Greatly explained
Great sir, very helpful!
Excellent video!
You are awesome! I am not a native speaker and am still struggling with my master's program courses in the US, but your instruction is so helpful. I appreciate your great help.
@jbstatistics
5 months ago
Thanks! I'm happy to be of help!
Thank you. This is very clear
Easiest subscribe of my life.
Great job! Thank you sir!
Very well explained
good explanation!
you make it sooo easy
Thanks, so helpful!
Wow. This is great. Thank you so much.
Really nice derivation!
@jbstatistics
2 years ago
Thanks!
Thank you very much for the clear explanation ❤
LEGEND, HAVE TO SAY YOU ARE BETTER THAN A PROFESSOR
@jbstatistics
2 years ago
I *am* a professor!!!
Thank you so much.
Question: At 6:24, why and how is beta zero hat multiplied by n? Does n mean the sample size? What's the reasoning behind n adjoining beta zero hat?
You sir are AMazing
Yes! New stuff 👍🏼👍🏼
Definitely the best video out there on this topic. Makes me wonder why it's not the top recommendation / search result; maybe because of the title.
@jbstatistics
2 years ago
Thanks! How about "Finding the formulas for the slope and intercept the EASY WAY! (When I got to step 8, my jaw DROPPED!)" :)
@robin-bird
2 years ago
@@jbstatistics no idea, I'm not good at making up titles. Maybe something like this? Simple Linear Regression | Derivation
@jbstatistics
2 years ago
@@robin-bird I was just joking, but thanks for the input :)
@robin-bird
2 years ago
@@jbstatistics I was wondering - thanks for clearing that up ^^
Thanks a lot, it really helped.
holy hell I wish you were my econometrics professor. mine is useless
Good video Thanks!
Thank you very much sir !
@jbstatistics
5 months ago
You are very welcome!
Thank you so much
Thanks so much!
Great video Brother
u are a life saver
finally got my doubt resolved.😊
The result represents the minimum, since the function we were minimizing is convex and opens upwards, so the only way for a critical value to exist is for it to be a minimum.
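This comment's second-order argument can also be checked numerically: perturbing the fitted coefficients in any direction increases the sum of squared residuals. A small sketch with invented data (the perturbations chosen are arbitrary examples):

```python
import numpy as np

# Invented data for illustration only.
rng = np.random.default_rng(1)
X = rng.normal(size=30)
Y = 1.0 - 2.0 * X + rng.normal(size=30)

def sse(b0, b1):
    """Sum of squared residuals for candidate intercept b0 and slope b1."""
    return np.sum((Y - b0 - b1 * X) ** 2)

# The estimators from the derivation:
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()

# Any perturbation of the critical point increases the SSE,
# consistent with the objective being convex in (b0, b1).
for db0, db1 in [(0.1, 0.0), (0.0, 0.1), (-0.05, 0.05), (0.2, -0.3)]:
    assert sse(b0 + db0, b1 + db1) > sse(b0, b1)
```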
THANK YOU
THANK GOD, FINALLY SOMEONE DERIVED THE FORMULA. INSANE THAT NEARLY ALL OTHER RESOURCES OMIT THIS
Thanks a lot!!!
thank you 100^100 times
Thank you
this is great
Hi, do you have a video on deriving coefficients in multiple regression?
@mattstats399
1 year ago
That is a fun derivation using linear algebra and calculus. The first step is the same as here: take the first derivative and set it equal to zero. The book "The Elements of Statistical Learning" has a good proof. I'd say one needs a Calc 1 and linear algebra background first, though.
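For reference, the matrix version this reply alludes to comes from setting the gradient of the squared error to zero, which yields the normal equations X'X b = X'y. A hedged sketch (the data and true coefficients are invented for illustration):

```python
import numpy as np

# Invented design matrix: an intercept column plus 3 predictors.
rng = np.random.default_rng(2)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

# Normal equations: (X'X) beta_hat = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Same answer from a library routine (more numerically stable in practice):
beta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose(beta_hat, beta_lstsq)
```

With a single predictor (plus the intercept column), this reduces to exactly the slope and intercept formulas derived in the video.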
thank u so much.
Thanks so much
Thanks a lot
Thanks Dr. Balka! Is it computationally expensive to estimate the parameters in this manner for models with many independent variables or very large datasets? Is that the reason why iterative methods such as gradient descent are sometimes employed instead?
@Tusharchitrakar
5 months ago
The matrix inversion operation in OLS is computationally expensive, hence numerical methods like gradient descent are useful.
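To illustrate this reply, here is a rough sketch of gradient descent converging to the same simple-regression coefficients as the closed-form formulas from the video (the step size and iteration count are arbitrary choices, not tuned recommendations):

```python
import numpy as np

# Invented data for illustration only.
rng = np.random.default_rng(3)
X = rng.normal(size=100)
Y = 0.5 + 1.5 * X + rng.normal(scale=0.3, size=100)

# Closed form: the formulas derived in the video.
b1_cf = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0_cf = Y.mean() - b1_cf * X.mean()

# Gradient descent on the mean squared error.
b0, b1 = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    resid = Y - b0 - b1 * X
    b0 += lr * 2 * resid.mean()        # -d(MSE)/d(b0) = 2 * mean(resid)
    b1 += lr * 2 * (resid * X).mean()  # -d(MSE)/d(b1) = 2 * mean(resid * X)

# Both routes land on (essentially) the same estimates.
assert np.allclose([b0, b1], [b0_cf, b1_cf], atol=1e-6)
```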
Awesome!
Nailed it
Nice trick! Adding an intelligent zero huh? Thanks for this video!
Wow many university lecturers can’t explain it this well!
At 10:52, how can we switch the roles of X sub i and Y sub i? Could you help explain how this happens?
@harveywilliams7013
3 years ago
In the first step, we choose to expand (Xi - Xbar) but we could have chosen to expand (Yi - Ybar) and it would follow a similar route.
saved my life
god bless you brother
Thanks a lot!!!!!
Amazing and super helpful video! Extremely simple and easy to follow! But please, quick question: Why did you switch the Xi and Xbar at 7:51? This drastically changes the ending solution.
@malolanbalaji98
2 years ago
When he removes the inner parentheses, the term Xi becomes negative and Xbar becomes positive. So when you multiply by the negative Beta, the signs of both terms reverse.
How can we find the intercept and slope value of B0 and B1
Great video. My summary just gave the formula with the text 'just remember this'. Hate that.
thank you so so much :)))))
At 2:24, where did you discuss why it makes sense to minimize the sum of squared residuals?
@aakarshan01
4 years ago
Squaring makes it more sensitive to bigger errors, and it's differentiable at all points. The absolute-value (mod) function is not differentiable at the point where it pivots up.
@SuperYtc1
3 years ago
@@aakarshan01 But why not power 1.5? Why not power 4? Why is it exactly power 2?
@aakarshan01
3 years ago
@@SuperYtc1 You can, but there is no need to. Differentiability is already achieved with the square. Why calculate a bigger number that could lead to problems? The fourth power of a decimal number is more likely to break the minimum float limit than a square. But in theory, you can.
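The differentiability point in this thread is easy to see numerically: the one-sided difference quotients of x^2 agree at 0, while those of |x| disagree by 2, which is the "kink" where the absolute value pivots. A tiny sketch:

```python
h = 1e-6  # small step for one-sided difference quotients

# Squared loss: left and right difference quotients at 0 agree (derivative is 0).
sq_right = ((0 + h) ** 2 - 0 ** 2) / h
sq_left = (0 ** 2 - (0 - h) ** 2) / h
assert abs(sq_right - sq_left) < 1e-5  # both are approximately 0

# Absolute loss: the two sides disagree by 2 -- the kink at 0.
abs_right = (abs(0 + h) - abs(0)) / h  # equals +1
abs_left = (abs(0) - abs(0 - h)) / h   # equals -1
assert abs(abs_right - abs_left) == 2.0
```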
In which video does he discuss why we use squared residuals?
thank you so much ...............
@jbstatistics
5 months ago
You are very welcome!
How do we do this if we have three or more unknown parameters?