Normal equation solution of the least-squares problem | Lecture 27 | Matrix Algebra for Engineers
How to solve the least-squares problem using matrices.
Join me on Coursera: imp.i384100.net/mathematics-f...
Lecture notes at www.math.ust.hk/~machas/matrix...
Paperback at www.amazon.com/Matrix-Algebra...
Subscribe to my channel: kzread.info?...
Comments: 48
Find other Matrix Algebra videos in my playlist kzread.info/head/PLkZjai-2Jcxlg-Z1roB0pUwFU-P58tvOx
This is the best description of using matrices to solve linear regression I have ever seen. With basic to intermediate knowledge of linear algebra, Chasnov explains each step in clear detail and then rounds things off with a trivial example to explain all the math behind the powerful method of linear regression. This man is both a maths and communications genius. Greetings from Germany. Your students are incredibly lucky; you look like a professor who is determined to make sure all students understand the topics rather than one who wants students to marvel at their own math prowess.
Many thanks!!! You certainly have a huge talent for teaching, as you know where the breaks in understanding sit and give them particular attention!
Your explanation is superb!
Thank you Prof. Chasnov, we appreciate your good work.
Thank you so much, dear Prof. Chasnov
Thanks a lot, Mr. Chasnov! Really saved me hours of mental breakdown
Fantastic explanation!
Thanks a lot !
You made it look easy; thank you so much, Professor Chasnov!
Amazing explanation
Thank you.
Good class professor
Thank you
superb
Thank you professor !!
You are awesome!
Many thanks
thank you
Professor, I still have one question. You mean b - b_proj is orthogonal to the column space of A; do we know at the beginning that b is outside the column space of A, so that b and b - b_proj are in the kernel of the matrix A, and b - b_proj just becomes shorter or longer?
Thanks a lot, Prof. Jeff, for a valuable lecture. @4:25: Why is A^T A an invertible matrix? As per the last lesson, an invertible matrix is one that has an inverse (A A^-1 = I); kindly clarify what makes A^T A invertible. @5:21: The projection matrix A(A^T A)^-1 A^T: is there any paper that illustrates it, as I think we didn't come to it in the previous lectures?
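A short sketch of the standard argument for both points (this is the usual linear-algebra reasoning, not necessarily how the lecture phrases it), assuming the columns of A are linearly independent:

```latex
% A^T A is invertible because its null space is trivial:
A^T A x = 0 \;\Rightarrow\; x^T A^T A x = \|Ax\|^2 = 0 \;\Rightarrow\; Ax = 0 \;\Rightarrow\; x = 0.
% The projection matrix onto the column space of A is
P = A (A^T A)^{-1} A^T, \qquad P^2 = P, \qquad P^T = P, \qquad b_{\text{proj}} = P b.
```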
Super good.
At first you start with (1) Ax = b and then you say (2) Ax = b_proj. You multiplied both sides of (1) by A transposed, and then use (2). I can't understand how you can use both in the same proof.
@oiloveggoindian3087
4 months ago
It should be like this: A^T A x = A^T b = A^T b_proj + A^T(b - b_proj). However, b - b_proj is in Null(A^T), so that part just becomes zero.
How does multiplying a vector in the null space by the transpose of that matrix get rid of the vector in the null space?
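For what it's worth, the usual way to see this (a sketch, not a quote from the video): each entry of A^T(b - b_proj) is a dot product of a column of A with b - b_proj, and those dot products all vanish by orthogonality.

```latex
% With columns a_1, \dots, a_n of A:
A^T (b - b_{\text{proj}}) =
\begin{pmatrix} a_1^T (b - b_{\text{proj}}) \\ \vdots \\ a_n^T (b - b_{\text{proj}}) \end{pmatrix}
= 0,
% since b - b_proj is orthogonal to every column a_i, i.e. it lies in Null(A^T).
```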
Why would the columns of A typically be linearly independent in a least squares problem? (4:10)
@ProfJeffreyChasnov
3 years ago
Lots of rows, very few columns.
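As a concrete (hypothetical) illustration: in a straight-line fit the matrix A has a column of ones and a column of the x-data, and those two columns are dependent only if every x-value is identical. A minimal numpy sketch with made-up data values:

```python
import numpy as np

# Hypothetical x-data for a straight-line fit y = beta0 + beta1 * x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Tall, skinny design matrix: many rows (one per data point), two columns.
A = np.column_stack([np.ones_like(x), x])

# The two columns are linearly independent unless every x-value is the same.
print(np.linalg.matrix_rank(A))  # 2 -> full column rank, so A^T A is invertible
```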
Is this similar to finding a "regression plane"?
@GrahamEckel
3 years ago
My understanding is that if you continue on with the normal equation and find the projection matrix, you will have solved for the regression plane.
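For example (a hypothetical sketch, not from the video, with made-up data): fitting a plane y = b0 + b1*x1 + b2*x2 uses exactly the same normal equations, just with a third column in A.

```python
import numpy as np

# Hypothetical data for a regression plane y = b0 + b1*x1 + b2*x2.
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, 2.0, 1.0, 3.0])
y  = np.array([1.0, 2.0, 5.0, 5.0, 9.0])

A = np.column_stack([np.ones_like(x1), x1, x2])  # ones, x1, x2 columns
beta = np.linalg.solve(A.T @ A, A.T @ y)         # normal equations A^T A beta = A^T y
y_proj = A @ beta                                # projection of y onto Col(A)
print(beta, y_proj)
```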
Hello professor, thanks for your video. But I have one question: if Ax = b is overdetermined, why can we still use "=" instead of Ax >= b or Ax
@ProfJeffreyChasnov
2 years ago
With = there is no solution. What would be the meaning of >= or
@lancelofjohn6995
2 years ago
@@ProfJeffreyChasnov Thanks a lot, professor. I understand the meaning of the equation and the way to obtain the solution.
@lancelofjohn6995
2 years ago
@@ProfJeffreyChasnov Professor, I realized I made an error when I wanted to describe the unequal relation between Ax and b. I should use ||Ax||_2 = ||b||_2. May I know whether this expression is correct?
At 10:08, why is b equal to y? I have seen the normal equation as A^T A x = A^T y, which I'm confused by because of the y in the equation.
@ProfJeffreyChasnov
2 years ago
Because this lecture follows Lecture 26.
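In case it helps, here is the correspondence between the two notations (a sketch of the standard straight-line setup; the lecture's x is the vector of unknown coefficients and its b is the vector of observed y-values):

```latex
% Straight-line fit y = \beta_0 + \beta_1 x through data (x_1,y_1),\dots,(x_n,y_n):
A = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix},
\qquad
x = \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix},
\qquad
b = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix},
% so A^T A x = A^T b is the same equation as A^T A \beta = A^T y.
```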
At the end, with β0 = 1 and β1 = 1/2: if I plug in x = 1, then y = 1 + (1/2)(1) = 3/2 and not 1 like in the data. Same for the other two data points. Am I missing something?
@martinsanchez-hw4fi
2 years ago
We try to find the line that gets as close as possible to the data points.
@leksa8845
A year ago
β0 = 0, β1 = 1 would be a wrong solution; if you draw it, it is clear.
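To make the point concrete, here is a small numpy sketch. The data points (1,1), (2,3), (3,2) are an assumption on my part (only (1,1) is confirmed in this thread), but they do reproduce the β0 = 1, β1 = 1/2 fit discussed above, and the residuals show that the least-squares line need not pass through any data point exactly.

```python
import numpy as np

# Assumed data points; chosen so they reproduce beta0 = 1, beta1 = 1/2.
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0])

A = np.column_stack([np.ones_like(x), x])   # design matrix
beta = np.linalg.solve(A.T @ A, A.T @ y)    # normal equations A^T A beta = A^T y
print(beta)                                 # [1.  0.5]

residuals = y - A @ beta                    # nonzero: the line need not hit any point
print(residuals, (residuals**2).sum())      # [-0.5  1.  -0.5]  1.5
```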
Why do you use the normal equations to find x and not just directly use the x = (A'A)^-1 A' b equation you'd derived already?
@ProfJeffreyChasnov
3 years ago
It is computationally inefficient to compute inverses. Faster to do Gaussian elimination.
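A small numpy sketch of the practical difference (same answer, different route; the data here is a random placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))   # tall placeholder matrix
b = rng.standard_normal(100)

# Preferred: solve the normal equations by elimination (no explicit inverse).
x_solve = np.linalg.solve(A.T @ A, A.T @ b)

# Works, but forms an inverse explicitly -- slower and less numerically stable.
x_inv = np.linalg.inv(A.T @ A) @ (A.T @ b)

# np.linalg.lstsq avoids forming A^T A altogether (SVD based under the hood).
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_solve, x_inv), np.allclose(x_solve, x_lstsq))
```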
Isn't A(A^T A)^(-1) A^T just the identity matrix?
@ProfJeffreyChasnov
2 years ago
Only if A is invertible. But here A is not even a square matrix.
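The collapse in the square-invertible case, and what happens otherwise (just the standard algebra):

```latex
% If A is square and invertible, the expression collapses to the identity:
A (A^T A)^{-1} A^T = A\, A^{-1} (A^T)^{-1} A^T = I.
% For a tall matrix with independent columns it is instead a projection onto Col(A):
P = A (A^T A)^{-1} A^T, \qquad P^2 = P, \qquad P \ne I \text{ in general}.
```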
Did you reflect the video or are you really writing backwards?
@vmarchenkoff
4 years ago
Of course, it's reflected ))
@VoidFame
4 years ago
@@vmarchenkoff : )
Why is y = (1/2)x better than simply y = x? Plotted, it seems like y = x fits better.
@pipertripp
2 years ago
It minimizes the sum of the squares of the errors, the errors being the difference between the y value of the data and the y value of the line at each x value.
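A quick numerical check, using the same assumed data points (1,1), (2,3), (3,2) as in the earlier sketch (only (1,1) is confirmed in the thread), and the fitted line y = 1 + x/2 from the β0 = 1, β1 = 1/2 comment above: the fitted line has a smaller sum of squared errors than y = x.

```python
import numpy as np

# Assumed data (see the earlier sketch); only (1,1) is confirmed in the thread.
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0])

sse_fit = np.sum((y - (1.0 + 0.5 * x))**2)  # least-squares line y = 1 + x/2
sse_yx  = np.sum((y - x)**2)                # candidate line y = x

print(sse_fit, sse_yx)  # 1.5 vs 2.0 -> the least-squares line wins on this data
```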
Great video... But there is a set of unnecessary confusions: you name the vector of the β_i as x, the matrix containing the x_i as A, and the vector of y values as b. :/ Most people are lost at that point. :/