Quadratic Form Minimization: A Calculus-Based Derivation

bit.ly/PavelPatreon
lem.ma/LA - Linear Algebra on Lemma
bit.ly/ITCYTNew - Dr. Grinfeld's Tensor Calculus textbook
lem.ma/prep - Complete SAT Math Prep

Comments: 47

  • @MathTheBeautiful
    @MathTheBeautiful 3 years ago

    Go to LEM.MA/LA for videos, exercises, and to ask us questions directly.

  • @ijustneedaname47
    @ijustneedaname47 2 years ago

    This video really helped tie these concepts together for me. I really appreciate your posting it.

  • @bryan-9742
    @bryan-9742 4 years ago

    this is so cool. Love this channel. I'm learning so much I should have learned years ago.

  • @vothiquynhyen09
    @vothiquynhyen09 6 years ago

    I have to say that I love your voice, and the passion you have for the subject.

  • @joshuaronisjr
    @joshuaronisjr 5 years ago

    He talks a little like Feynman

  • @omedomedomedomedomed
    @omedomedomedomedomed 4 years ago

    I checked this to understand the least-squares derivation. Super helpful!!!

  • @gerardogutierrez4911
    @gerardogutierrez4911 4 years ago

    Why does he talk like he's trying to get me to recapture the means of production from the bourgeoisie?

  • @MathTheBeautiful
    @MathTheBeautiful 4 years ago

    Because he is lenin in that direction

  • @kreechapuphaiboon4886
    @kreechapuphaiboon4886 6 years ago

    Great lecture, he explains so well.

  • @joaquingiorgi5133
    @joaquingiorgi5133 2 years ago

    Made this concept easy to understand, thank you!

  • @ekandrot
    @ekandrot 7 years ago

    For your gradient descent, do you need the -b in there, e.g. x -> x - a(Ax - b)? It seemed that without the -b and with a positive-definite matrix A, zero is the only solution. But with the -b, then (-1, -2, 4) is the solution.

  • @MathTheBeautiful
    @MathTheBeautiful 7 years ago

    Yes, you are correct!
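
The correction above can be spot-checked numerically. A minimal sketch in Python, where A and b are illustrative stand-ins (not the values from the video): the update x -> x - a(Ax - b) drives x toward the solution of Ax = b, whereas dropping the -b would only ever find zero.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - x^T b for a symmetric positive-definite A.
# The gradient is Ax - b, so the update must include the -b term,
# exactly as the comment points out.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # example SPD matrix (not from the video)
b = np.array([1.0, 2.0])

x = np.zeros(2)
alpha = 0.1                               # fixed step size, small enough to converge
for _ in range(500):
    x = x - alpha * (A @ x - b)           # x -> x - a(Ax - b)

print(np.allclose(x, np.linalg.solve(A, b)))  # True: converged to the solution of Ax = b
```

With the -b term removed, the same loop would contract x straight to the zero vector, matching ekandrot's observation.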

  • @Userjdanon
    @Userjdanon 1 year ago

    Great video. This was explained very intuitively.

  • @snnwstt
    @snnwstt 1 year ago

    1:18 Just as an observation: while it is usual to see the quadratic form as presented here, I find the following a little bit more ... elegant: 0.5 * <x y z 1> [W] {x y z 1}, with < > a line (row) vector, { } a column vector, and [ ] a matrix. Here

    W =  4  1  2 -2
         1  8  5 -3
         2  5  4 -4
        -2 -3 -4  0

    which is symmetric if A is symmetric. Note that the minus sign for the last column and the last line is due to the original subtraction. The 0 stands where the constant term is ... zero.
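
The bordered-matrix idea in this comment is easy to verify: append a 1 to x and fold the linear term -b into the last row and column of W. A sketch in Python (A and b read off from the W in the comment; the test point is arbitrary):

```python
import numpy as np

# The bordered form from the comment: 0.5 * z^T W z with z = {x y z 1}.
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 8.0, 5.0],
              [2.0, 5.0, 4.0]])
b = np.array([2.0, 3.0, 4.0])

W = np.zeros((4, 4))
W[:3, :3] = A
W[:3, 3] = -b        # last column: the subtraction gives the minus signs
W[3, :3] = -b        # last row: kept symmetric, as the comment notes
                     # W[3, 3] = 0 because the constant term is zero

x = np.array([1.0, -2.0, 0.5])           # arbitrary test point
z = np.append(x, 1.0)                    # homogeneous vector {x y z 1}

f_quadratic = 0.5 * x @ A @ x - b @ x    # the usual form from the lecture
f_bordered = 0.5 * z @ W @ z             # the comment's bordered form
print(np.isclose(f_quadratic, f_bordered))  # True
```

Expanding ½ zᵀWz gives ½ xᵀAx plus two symmetric cross terms of -½ bᵀx each, which is exactly ½ xᵀAx - bᵀx.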

  • @sora290762594
    @sora290762594 3 years ago

    great way of explaining quadratic optimization

  • @ashwinkraghu1646
    @ashwinkraghu1646 3 years ago

    Excellent teacher! And a life saver.

  • @TuNguyen-ox5lt
    @TuNguyen-ox5lt 6 years ago

    Gradient descent is a technique used in machine learning nowadays to optimize a loss function. This video is great.

  • @jjgroup.investments
    @jjgroup.investments 2 years ago

    Thanks for this awesome video

  • @DiegoAToala
    @DiegoAToala 2 years ago

    Thank you, so clear!

  • @ibrahimalotaibi2399
    @ibrahimalotaibi2399 5 years ago

    Monster of Math.

  • @johnfykhikc
    @johnfykhikc 6 years ago

    Where can I find the statement? I did an unsuccessful search.

  • @user-iv9po5jt9n
    @user-iv9po5jt9n 4 years ago

    This really helps a lot in understanding matrix derivative, and it's so clear. Thanks!!!

  • @kumudayanayanajith6427
    @kumudayanayanajith6427 3 years ago

    Great explanation!! Thank You

  • @MathTheBeautiful
    @MathTheBeautiful 3 years ago

    Glad it was helpful!

  • @somekindofbluestuff
    @somekindofbluestuff 3 years ago

    thank you!

  • @marshall7253
    @marshall7253 5 years ago

    I love this guy

  • @bobstephens97
    @bobstephens97 1 year ago

    Awesome. Thank you.

  • @MathTheBeautiful
    @MathTheBeautiful 1 year ago

    Thank you!

  • @s25412
    @s25412 3 years ago

    7:15 what if your matrix is positive semi-definite? Wouldn't there be a minimum?
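
The question above can be explored numerically. A minimal sketch, using a positive semi-definite A of my own choosing (not from the video): when b lies in the range of A, minimizers exist but form a whole line; when it does not, f has no minimum at all.

```python
import numpy as np

# A positive SEMI-definite matrix: eigenvalues 2 and 0, null direction (1, -1).
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
d = np.array([1.0, -1.0])                # null direction: A @ d = 0

def f(x, b):
    return 0.5 * x @ A @ x - b @ x

# Case 1: b in the range of A -> minimizers exist but are not unique.
b1 = np.array([2.0, 2.0])
x0 = np.array([1.0, 1.0])                # one solution of A x = b1
same = np.isclose(f(x0, b1), f(x0 + 3.7 * d, b1))
print(same)                              # True: shifting along d does not change f

# Case 2: b NOT in the range of A -> f is unbounded below.
b2 = np.array([1.0, -1.0])
print(f(10 * d, b2) < f(d, b2))          # True: f keeps decreasing along d
```

So "semi-definite" splits into two sub-cases: a flat valley of minimizers, or no minimum at all, depending on b.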

  • @serkangoktas5502
    @serkangoktas5502 4 years ago

    I always knew that something was off with this derivation. I am relieved that this wasn't because of my lack of talent in math.

  • @MathTheBeautiful
    @MathTheBeautiful 4 years ago

    It's **never** you. It's always the textbook.

  • @AliVeli-gr4fb
    @AliVeli-gr4fb 7 years ago

    thank you, it was a beautiful course

  • @MathTheBeautiful
    @MathTheBeautiful 7 years ago

    Thank you, Ali, I'm glad you're enjoying our videos. But why "was"?

  • @AliVeli-gr4fb
    @AliVeli-gr4fb 7 years ago

    @MathTheBeautiful It is normal to say it in the past tense in my language, so I thought in it but wrote in English. So no real reason.

  • @MathTheBeautiful
    @MathTheBeautiful 7 years ago

    :) I just wanted to convey that the course is ongoing!

  • @TheTacticalDood
    @TheTacticalDood 5 years ago

    @@MathTheBeautiful Is it still ongoing? This channel is amazing, it would be sad to see it stop!

  • @user-xt9js1jt6m
    @user-xt9js1jt6m 3 years ago

    Nice explanation, sir. You look like Jason Statham ❤️❤️❤️ I felt like an action star was giving a lecture on matrices ❤️❤️🙏

  • @MathTheBeautiful
    @MathTheBeautiful 3 years ago

    I get that a lot when I wear a tight t-shirt.

  • @kaursingh637
    @kaursingh637 4 years ago

    Sir, you are very clear. Please give shorter lectures.

  • @devrimturker
    @devrimturker 3 years ago

    Is there a relation between positive-definite matrices and convex sets?

  • @MathTheBeautiful
    @MathTheBeautiful 3 years ago

    Yes, excellent intuition. The level set for a positive-definite quadratic form is a convex shape.
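
The claim in this reply can be spot-checked: for a positive-definite quadratic form q, the region {x : q(x) <= c} is convex, so the midpoint of any two points inside it stays inside. A minimal sketch with a sample positive-definite matrix of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite

def q(x):
    return x @ A @ x

# Convexity of {x : q(x) <= c}: if q(x) <= c and q(y) <= c,
# then q((x + y) / 2) <= c as well. Spot-check with random points.
c = 10.0
ok = True
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    if q(x) <= c and q(y) <= c:
        ok = ok and q((x + y) / 2) <= c
print(ok)  # True
```

Geometrically these regions are filled ellipses, the convex shapes the reply refers to.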

  • @roaaabualgasim4882
    @roaaabualgasim4882 3 years ago

    I want examples or material to illustrate the idea of maximization and minimization of a function with a constraint (Lagrange multipliers) and with no constraint (via the quadratic form and Hessian matrix) 😭

  • @joshuaronisjr
    @joshuaronisjr 5 years ago

    This is just a comment for me to look at in the future, but at some point he says that A will mostly be filled with zeroes before we start Gaussian elimination. A will be the covariance matrix (X^T X) (see the next video, on the least-squares solution). That it's mostly filled with zeroes indicates that most of the random variables (each column of X is a different random variable of the dataset) are independent of one another (or at least, if they ARE independent then their covariance will be 0). However, Gaussian elimination involves linearly combining rows, so the matrices in between may NOT be sparse! As for computer storage... I don't know much about it, but maybe computers store zeroes in a different way, so that sparse matrices are easier to store? Actually, I guess this comment is for more than just me... why can computers store sparse matrices well?
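
On the storage question raised above: the usual trick is simply not to store the zeroes at all. A minimal dictionary-of-keys sketch of the idea (real libraries such as SciPy use more refined layouts like CSR/CSC, but the principle is the same):

```python
# Sparse storage in a nutshell: instead of an n x n grid of mostly zeroes,
# keep only the (row, col) -> value entries that are nonzero.

def to_sparse(dense):
    """Dict-of-keys representation: store only the nonzero entries."""
    return {(i, j): v
            for i, row in enumerate(dense)
            for j, v in enumerate(row)
            if v != 0}

dense = [[4, 0, 0, 0],
         [0, 8, 0, 0],
         [0, 0, 4, 1],
         [0, 0, 1, 2]]

sparse = to_sparse(dense)
print(len(sparse))        # 6 nonzero entries stored instead of 16 cells
```

The comment's caveat is also the right one: elimination can create "fill-in" (new nonzeros), which is why sparse solvers reorder rows and columns to keep the intermediate matrices sparse.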

  • @darrenpeck156
    @darrenpeck156 2 years ago

    Absolute value has a minimum.

  • @gustavoexel5569
    @gustavoexel5569 4 years ago

    At 13:15 my chin literally felt

  • @ElizaberthUndEugen
    @ElizaberthUndEugen 4 years ago

    *dropped

  • @telraj
    @telraj 2 years ago

    Why skip the matrix calculus? It's not rocket science