Normal equation solution of the least-squares problem | Lecture 27 | Matrix Algebra for Engineers

How to solve the least-squares problem using matrices.
Join me on Coursera: imp.i384100.net/mathematics-f...
Lecture notes at www.math.ust.hk/~machas/matrix...
Paperback at www.amazon.com/Matrix-Algebra...
Subscribe to my channel: kzread.info?...
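
A minimal NumPy sketch of the method the lecture derives (an illustration only, not code from the course; the data points (1,1), (2,3), (3,2) are taken from the worked example discussed in the comments below):

    # Fit the line y = beta0 + beta1*x by solving the normal equations
    # A^T A x = A^T b for the coefficient vector x = (beta0, beta1).
    import numpy as np

    x_data = np.array([1.0, 2.0, 3.0])
    y_data = np.array([1.0, 3.0, 2.0])

    # Design matrix: a column of ones (intercept) and the x-values.
    A = np.column_stack([np.ones_like(x_data), x_data])
    b = y_data

    # Solve A^T A x = A^T b by Gaussian elimination rather than
    # forming an explicit inverse.
    beta = np.linalg.solve(A.T @ A, A.T @ b)
    print(beta)  # [1.0, 0.5] -> the least-squares line y = 1 + x/2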

Comments: 48

  • @ProfJeffreyChasnov · 4 years ago

    Find other Matrix Algebra videos in my playlist kzread.info/head/PLkZjai-2Jcxlg-Z1roB0pUwFU-P58tvOx

  • @ajsctech8249 · 2 years ago

    This is the best description of using matrices to solve linear regression I have ever seen. With basic to intermediate knowledge of linear algebra, Chasnov explains each step in clear detail and then rounds things off with a trivial example to explain all the math behind the powerful method of linear regression. This man is both a maths and a communications genius. Greetings from Germany. Your students are incredibly lucky; you look like a professor who is determined to make sure all students understand the topics rather than one who wants students to marvel at his own math prowess.

  • @Sergei-ld1iv · a year ago

    Many thanks!!! You certainly have a huge talent for teaching, as you know where the breaks in understanding sit and give them particular attention!

  • @abdula.7064 · 4 years ago

    Your explanation is superb!

  • @johnedakigimode4747 · 5 months ago

    Thank you Prof. Chasnov, we appreciate your good work.

  • @sorayyakrimi215 · a year ago

    Thank you so much, dear Prof. Chasnov

  • @idanmalka4561 · 2 years ago

    Thanks a lot, Mr. Chasnov! Really saved me hours of mental breakdown.

  • @trentconley4374 · a year ago

    Fantastic explanation!

  • @meknassihamza9324 · 4 years ago

    Thanks a lot!

  • @yuanzhuchen481 · 5 years ago

    You made it look easy; thank you so much, Professor Chasnov!

  • @dark6.63E-34 · a year ago

    Amazing explanation

  • @sujanbhakat1199 · 3 years ago

    Thank you.

  • @grakeshn · 2 years ago

    Good class, professor.

  • @aryamaanbasuroy7646 · 3 years ago

    Thank you

  • @mathsbyamna1843 · 3 years ago

    superb

  • @prabaa123 · a month ago

    Thank you professor !!

  • @exove410 · 2 years ago

    You are awesome!

  • @sirginirgin4808 · a year ago

    Many thanks

  • @piotrjaga6929 · 2 years ago

    thank you

  • @lancelofjohn6995 · 2 years ago

    Professor, I still have one question. You mean that b - b_proj is orthogonal to the column space of A. Do we know from the beginning that b is outside the column space of A, so that b - b_proj lies in the null space of A^T, and b - b_proj just becomes shorter or longer?

  • @88noureldin · a year ago

    Thanks a lot, Prof Jeff, for the valuable lecture. @4:25: Why is A^T A an invertible matrix? As per the last lesson, an invertible matrix is one that has an inverse (A A^-1 = I); kindly clarify what makes A^T A invertible. @5:21: The projection matrix A(A^T A)^-1 A^T: is there any paper that illustrates it? I think we didn't come to it in the last lectures.
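
    For reference, a standard argument (not spelled out in this thread) for why A^T A is invertible exactly when the columns of A are linearly independent:

        \[ A^{T}Ax = 0 \;\Rightarrow\; x^{T}A^{T}Ax = \|Ax\|^{2} = 0 \;\Rightarrow\; Ax = 0, \]

    and linear independence of the columns then forces x = 0, so A^T A has a trivial null space and is therefore invertible. In the least-squares setting A is tall (many rows, few columns), so independent columns are the typical case.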

  • @andrewsdanquah2395 · a month ago

    Super good.

  • @EB3103 · 3 years ago

    At first you start with (1) Ax = b, and then you say (2) Ax = b_proj. You multiplied both sides of (1) by A transposed, and then used (2). I can't understand how you can use both in the same proof.

  • @oiloveggoindian3087 · 4 months ago

    It should be like this: A^T b = A^T b_proj + A^T (b - b_proj). However, b - b_proj is in Null(A^T), so that part just becomes zero, leaving A^T A x = A^T b.
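
    For reference, the full chain this reply is describing, with b_proj the projection of b onto the column space of A (so Ax = b_proj is solvable exactly):

        \[ A^{T}b = A^{T}b_{proj} + A^{T}(b - b_{proj}) = A^{T}Ax + 0, \]

    because b - b_proj is orthogonal to Col(A), i.e. it lies in Null(A^T). Hence A^T A x = A^T b.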

  • @calculusguru1063 · 10 months ago

    How does multiplying a vector in the null space by the transpose of that matrix get rid of the vector in the null space?
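
    For reference, this is just the definition of the null space of A^T:

        \[ v \in Null(A^{T}) \iff A^{T}v = 0, \]

    so multiplying any such vector by A^T gives the zero vector; nothing more is needed.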

  • @williamwilkins8022 · 3 years ago

    Why would the columns of A typically be linearly independent in a least squares problem? (4:10)

  • @ProfJeffreyChasnov · 3 years ago

    Lots of rows, very few columns.

  • @chrischoir3594 · 4 years ago

    Is this similar to finding a "regression plane"?

  • @GrahamEckel · 3 years ago

    My understanding is that if you continue on with the normal equation and find the projection matrix, you will have solved for the regression plane.
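
    A small sketch of that idea (made-up data, not from the lecture; the plane and noise level are arbitrary): fitting y = beta0 + beta1*x1 + beta2*x2 with the same normal equations and forming the projection matrix.

        # Regression plane via the normal equations; illustrative data.
        import numpy as np

        rng = np.random.default_rng(0)
        x1 = rng.uniform(0.0, 1.0, 20)
        x2 = rng.uniform(0.0, 1.0, 20)
        y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(0.0, 0.1, 20)

        # Design matrix: intercept column plus the two predictors.
        A = np.column_stack([np.ones_like(x1), x1, x2])
        beta = np.linalg.solve(A.T @ A, A.T @ y)  # normal equations
        P = A @ np.linalg.inv(A.T @ A) @ A.T      # projection onto Col(A)

        print(beta)                           # close to [1, 2, -3]
        print(np.allclose(P @ y, A @ beta))   # True: P y is the fitted vector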

  • @lancelofjohn6995 · 2 years ago

    Hello professor, thanks for your video. But I have one question: if Ax = b is overdetermined, why can we still use "=" instead of Ax >= b or Ax <= b?

  • @ProfJeffreyChasnov · 2 years ago

    With = there is no solution. What would be the meaning of >= or <=?

  • @lancelofjohn6995 · 2 years ago

    @ProfJeffreyChasnov Thanks a lot, professor. I understand the meaning of the equation and the way to obtain the solution.

  • @lancelofjohn6995 · 2 years ago

    @ProfJeffreyChasnov Professor, I realized I made an error when I wanted to describe the unequal relation between Ax and b. I should use Norm2(Ax) <= Norm2(b). May I know whether this expression is correct?

  • @nped5054 · 2 years ago

    At 10:08, why is b equal to y? I have seen the normal equation written as A^T A x = A^T y, which confuses me because of the y in the equation.

  • @ProfJeffreyChasnov · 2 years ago

    Because this lecture follows Lecture 26.

  • @juhu5504 · 3 years ago

    At the end, with B0 = 1 and B1 = 1/2: if I plug in x = 1, then y = 1 + 1/2 * 1 = 3/2, and not 1 like in the data. Same for the other two data points. Am I missing something?

  • @martinsanchez-hw4fi · 2 years ago

    We try to find the line that gets as close as possible to the data points; in general it does not pass through them exactly.

  • @leksa8845 · a year ago

    There is a wrong solution, B_0 = 0, B_1 = 1; if you draw it, it is clear.

  • @williamwilkins8022 · 3 years ago

    Why do you use the normal equations to find x and not just directly use the x = (A'A)^-1 A' b equation you'd derived already?

  • @ProfJeffreyChasnov · 3 years ago

    It is computationally inefficient to compute inverses. Faster to do Gaussian elimination.
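
    A quick illustration of that point (made-up data; np.linalg.solve uses an LU factorization, i.e. Gaussian elimination):

        # Two routes to the normal equations A^T A x = A^T b.
        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.normal(size=(1000, 5))  # tall: many rows, few columns
        b = rng.normal(size=1000)

        x_solve = np.linalg.solve(A.T @ A, A.T @ b)  # elimination: preferred
        x_inv = np.linalg.inv(A.T @ A) @ (A.T @ b)   # explicit inverse: slower
        print(np.allclose(x_solve, x_inv))           # True (up to rounding)

        # In practice np.linalg.lstsq(A, b, rcond=None) is the usual
        # numerically robust way to solve least-squares problems.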

  • @martinsanchez-hw4fi · 2 years ago

    Isn't A(A^T A)^-1 A^T just the identity matrix?

  • @ProfJeffreyChasnov · 2 years ago

    Only if A is invertible. But here A is not even a square matrix.
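
    For reference, the computation behind that reply: if A were square and invertible, then

        \[ A(A^{T}A)^{-1}A^{T} = A A^{-1}(A^{T})^{-1}A^{T} = I, \]

    but for a tall A (more rows than columns) the product P = A(A^T A)^-1 A^T is instead the orthogonal projection onto Col(A), satisfying P^2 = P and P^T = P; it cannot equal I, since its rank is only the number of columns of A.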

  • @VoidFame · 4 years ago

    Did you reflect the video or are you really writing backwards?

  • @vmarchenkoff · 4 years ago

    Of course, it's reflected ))

  • @VoidFame · 4 years ago

    @vmarchenkoff :)

  • @TylerMatthewHarris · 4 years ago

    Why is y = (1/2)x better than simply y = x? Plotted, it seems like y = x fits better.

  • @pipertripp · 2 years ago

    It minimizes the sum of the squares of the errors, the errors being the difference between the y value of the data and the y value of the line at each x value.
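
    A quick numeric check of that reply, assuming the lecture's data points (1,1), (2,3), (3,2) as inferred from this thread:

        # Sum of squared errors: fitted line y = 1 + x/2 versus y = x.
        import numpy as np

        x = np.array([1.0, 2.0, 3.0])
        y = np.array([1.0, 3.0, 2.0])

        sse_fit = np.sum((y - (1.0 + 0.5 * x)) ** 2)  # residuals -0.5, 1.0, -0.5
        sse_identity = np.sum((y - x) ** 2)           # residuals  0.0, 1.0, -1.0
        print(sse_fit, sse_identity)  # 1.5 2.0 -> the least-squares line wins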

  • @BlackHoleGeorge · 9 months ago

    Great video... But there is a set of unnecessary confusions: you name the vector of the beta_i as x, the matrix containing the x_i as A, and the vector of the y values as b. :/ Most people are lost at that point. :/