22. Diagonalization and Powers of A

MIT 18.06 Linear Algebra, Spring 2005
Instructor: Gilbert Strang
View the complete course: ocw.mit.edu/18-06S05
YouTube Playlist: MIT 18.06 Linear Algeb...
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu

Comments: 334

  • @bigfrankgaming2423
    @bigfrankgaming2423 1 year ago

    This man single-handedly saved my university algebra course. My teacher was just reading notes; Prof. Strang actually explains things in a very clear manner.

  • @charmenk
    @charmenk 11 years ago

    A good professor with the good old blackboard-and-white-chalk teaching method. This is way better than all the fancy PowerPoints that many teachers use nowadays.

  • @9888565407
    @9888565407 4 years ago

    Hey, did you benefit from these lectures?

  • @yusufkavcakar8744
    @yusufkavcakar8744 3 years ago

    @9888565407 Yeah, I did.

  • @jiayuanwang1498
    @jiayuanwang1498 2 years ago

    I couldn't agree more!!!

  • @Tutkumsdream
    @Tutkumsdream 11 years ago

    Thanks to him, I passed Linear Algebra. I watched his videos for 4 days before the final exam and got a 74 on it. If I hadn't been able to watch Dr. Strang's lectures, I would probably have failed...

  • @snoefnone9647
    @snoefnone9647 7 months ago

    For some reason I thought you were saying Dr. Strange's lectures!

  • @kayfouroneseven
    @kayfouroneseven 12 years ago

    This is a bazillion times more straightforward and clear than the lectures I pay for at my university. :( I appreciate this being online.

  • @bsmichael9570
    @bsmichael9570 8 months ago

    He tells it like a story. It’s like he’s taking us all on a journey. You can’t wait to see the next episode.

  • @apocalypse2004
    @apocalypse2004 7 years ago

    I think Strang leaves out a key point in the difference equation example, which is that the n unique eigenvectors form a basis for R^n, which is why u0 can be expressed as a linear combination of the eigenvectors. (See the sketch at the end of this thread.)

  • @alessapiolin
    @alessapiolin 7 years ago

    thanks!

  • @wontbenice
    @wontbenice 6 years ago

    I was totally confused until you chimed in. Thx!

  • @seanmcqueen8498
    @seanmcqueen8498 6 years ago

    Thank you for this comment!

  • @arsenron
    @arsenron 6 years ago

    In my opinion it's so obvious that it isn't worth stopping on.

  • @dexterod
    @dexterod 5 years ago

    I think Strang assumed that A has n independent eigenvectors since most matrices do not have repeated eigenvalues.
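
A quick numerical sketch of the point raised in this thread: with n distinct eigenvalues the eigenvectors form a basis, so any u0 can be expanded in them. A minimal example, assuming numpy (the 2x2 matrix is the Fibonacci matrix from the lecture):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])       # Fibonacci matrix; its eigenvalues are distinct

lam, S = np.linalg.eig(A)        # columns of S are the eigenvectors x1, x2
u0 = np.array([1.0, 0.0])        # any starting vector

c = np.linalg.solve(S, u0)       # coefficients in u0 = c1*x1 + c2*x2, i.e. S c = u0
assert np.allclose(S @ c, u0)    # the expansion reproduces u0
print(c)
```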

  • @jollysan3228
    @jollysan3228 9 years ago

    I agree. "Just one small correction at 32:30: It should have been S * LAMBDA^100 * c instead of LAMBDA^100 * S * c." (See the numerical check at the end of this thread.)

  • @Slogan6418
    @Slogan6418 4 years ago

    thank you

  • @ozzyfromspace
    @ozzyfromspace 4 years ago

    The sad thing was, a few moments later he was struggling to explain things because, even though he hadn't pinned down the error, he somehow knew that something wasn't quite right. But he obviously had the core idea nailed.

  • @alexandresoaresdasilva1966
    @alexandresoaresdasilva1966 4 years ago

    Thank you so much, I was about to post asking about this.

  • @user-px8iy5jj8q
    @user-px8iy5jj8q 4 years ago

    I was stuck on this for like 10 minutes, until I saw the comments here...

  • @maitreyverma2996
    @maitreyverma2996 4 years ago

    Perfect. I was about to write the same.
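
The correction in this thread is easy to verify numerically; a small sketch, assuming numpy (k = 10 keeps the numbers readable, but any power works):

```python
import numpy as np

A  = np.array([[1.0, 1.0],
               [1.0, 0.0]])
u0 = np.array([1.0, 0.0])
k  = 10

lam, S = np.linalg.eig(A)
Lam_k  = np.diag(lam ** k)           # Lambda^k
c      = np.linalg.solve(S, u0)      # u0 = S c

exact = np.linalg.matrix_power(A, k) @ u0
print(np.allclose(exact, S @ Lam_k @ c))   # True:  S Lambda^k c
print(np.allclose(exact, Lam_k @ S @ c))   # False: Lambda^k S c (the board version)
```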

  • @eye2eyeerigavo777
    @eye2eyeerigavo777 4 years ago

    Math surprises you every time... 🤔 Never thought that the connections between the rate of growth in system dynamics, the Fibonacci series, and diagonalization with independent eigenvectors would finally boil down to the GOLDEN RATIO as an eigenvalue at the end! 😳

  • @albertacristie99
    @albertacristie99 14 years ago

    This is magnificent!! I have no words to express how thankful I am that this video is out there.

  • @georgesadler7830
    @georgesadler7830 3 years ago

    From this latest lecture, I am learning more about eigenvalues and eigenvectors in relation to the diagonalization of a matrix. Dr. Strang continues to increase my knowledge of linear algebra with these amazing lectures.

  • @cozmo4825
    @cozmo4825 1 year ago

    Thank you, Prof. Strang; you really made this course clear to understand. The way you teach these topics is superior, and what is really special is that you put yourself in the position of your students, answering their questions before they even ask. That is a true skill, acquired not only through decades of teaching but by having the mindset of a true teacher. Words can't describe how good these lectures are; they are a true form of art. And thanks a lot to MIT OpenCourseWare for providing these lectures in good quality and free of charge. Hopefully one day I will pursue a master's degree in electrical engineering at MIT.

  • @christoskettenis880
    @christoskettenis880 7 months ago

    This professor's explanations of all those abstract theorems and opaque methodologies are simply brilliant.

  • @eroicawu
    @eroicawu 14 years ago

    It's getting more and more interesting when differential equations are involved!

  • @florianwicher
    @florianwicher 6 years ago

    Really happy this is online! Thank you Professor :)

  • @ccamii__
    @ccamii__ 1 year ago

    Absolutely amazing! This lecture really helped me better understand the ideas about linear algebra I already had.

  • @BirnieMac1
    @BirnieMac1 5 months ago

    You know you're in for some shenanigans when they pull out the "little trick." Professor Gilbert is an incredible teacher; I struggled with eigenvalues and eigenvectors in a previous course, and this series of lectures has really helped me understand them better. Love your work, Professor Gilbert.

  • @kanikabagree1084
    @kanikabagree1084 4 years ago

    This teacher made me fall in love with linear algebra, thank you ❤️

  • @meetghelani5222
    @meetghelani5222 7 months ago

    Thank you for existing, MIT OCW and Prof. Gilbert Strang.

  • @Zumerjud
    @Zumerjud 9 years ago

    This is so beautiful!

  • @SimmySimmy
    @SimmySimmy 5 years ago

    Through a single matrix transformation, the whole subspace expands or shrinks at the rate of the eigenvalues in the directions of the eigenvectors. Suppose you can decompose a vector in this subspace into a linear combination of the eigenvectors; then after many repetitions of the same transformation, a random vector will ultimately land along the eigenvector with the largest eigenvalue. (See the sketch at the end of this thread.)

  • @MsAlarman
    @MsAlarman 2 years ago

    Mindbending
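
What this comment describes is essentially power iteration; a minimal sketch, assuming numpy (the matrix and starting vector are hypothetical):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # any matrix with a dominant eigenvalue

v = np.array([1.0, -0.7])         # a "random" starting vector
for _ in range(50):               # apply the same transformation repeatedly
    v = A @ v
    v /= np.linalg.norm(v)        # renormalize so v doesn't blow up

# v now points along the eigenvector with the largest eigenvalue, and the
# componentwise ratio (A v) / v approximates that eigenvalue.
print(v, (A @ v) / v)
```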

  • @RolfBazuin
    @RolfBazuin 11 years ago

    Who would have guessed: when this guy explains it, it almost sounds easy! You, dear Dr. Strang, are a master at what you do...

  • @rolandheinze7182
    @rolandheinze7182 5 years ago

    A hard lecture to get through, personally, but it does illustrate some of the cool machinery for applying eigenvectors.

  • @ozzyfromspace
    @ozzyfromspace 4 years ago

    For the curious: F_100 = (a^99 - b^99) * b/sqrt(5) + a^99, where a = (1 + sqrt(5))/2 and b = (1 - sqrt(5))/2 are the two eigenvalues of our system of difference equations. Numerically, F_100 ≈ 3.542248482 * 10^20; it's a very large number that grows like ~1.618^k 😲 Overall, great lecture, Professor Strang! Thank you for posting, MIT OCW ☺️
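
The numbers above check out; a quick sketch comparing the exact F_100 (Python integers) with the eigenvalue formula F_k = (a^k - b^k)/sqrt(5):

```python
from math import sqrt

# Exact F_100 by iteration, seeds F_0 = 0, F_1 = 1
f_prev, f = 0, 1
for _ in range(99):
    f_prev, f = f, f + f_prev
print(f)                            # 354224848179261915075, about 3.542e20

# Closed-form (eigenvalue) version, in floating point
a = (1 + sqrt(5)) / 2               # dominant eigenvalue, ~1.618
b = (1 - sqrt(5)) / 2               # other eigenvalue, ~-0.618
print((a**100 - b**100) / sqrt(5))  # ~3.542248481792619e+20
```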

  • @user-ld6fq4wi5e
    @user-ld6fq4wi5e 7 months ago

    I learned about the Fibonacci sequence in high school, and it is so good to have a new perspective on this magical sequence. I think the significance of learning lies in the collection of new perspectives. 😀

  • @tomodren
    @tomodren 12 years ago

    Thank you for posting this. These videos will allow me to pass my class!

  • @sathviktummala5480
    @sathviktummala5480 3 years ago

    44:00 well that's an outstanding move

  • @dennisyangji
    @dennisyangji 14 years ago

    A great lecture showing us the wonderful secret behind linear algebra

  • @Mohamed1992able
    @Mohamed1992able 12 years ago

    A big thanks to this prof for his efforts to give us courses on linear algebra.

  • @alexspiers6229
    @alexspiers6229 4 months ago

    This is one of the best in the series

  • @uzferry5524
    @uzferry5524 1 year ago

    Bruh, the Fibonacci example just blew my mind. Crazy how linear algebra just works like that!!

  • @go_all_in_777
    @go_all_in_777 4 months ago

    At 28:07, u_k = (A^k)u_0 can also be written as u_k = S*(Lambda^k)*(S^-1)*u_0. Also, we can write u_0 = S*c, as explained at 30:00. Therefore, u_k = S*(Lambda^k)*(S^-1)*S*c = S*(Lambda^k)*c.

  • @jeanpierre-st7rl
    @jeanpierre-st7rl 4 months ago

    Hi, at 29:46: u0 = c1x1 + c2x2 + c3x3 + ... Is u0 a vector? If so, how can u0 be split into a combination of eigenvectors? What are the ci? If you have any info please let me know. Thanks.
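
To the question above: u0 is just a vector (in the lecture's Fibonacci example, u0 = (F_1, F_0) = (1, 0)), and the c_i are its coordinates in the eigenvector basis, found by solving S c = u0. A minimal sketch, assuming numpy:

```python
import numpy as np

A  = np.array([[1.0, 1.0],
               [1.0, 0.0]])
u0 = np.array([1.0, 0.0])          # u0 = (F_1, F_0), an ordinary vector

lam, S = np.linalg.eig(A)          # columns of S: eigenvectors x1, x2
c = np.linalg.solve(S, u0)         # the c_i: u0 = c1*x1 + c2*x2

k  = 12
uk = S @ np.diag(lam ** k) @ c     # u_k = S Lambda^k c
print(uk)                          # ~ (233, 144) = (F_13, F_12)
```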

  • @Hindusandaczech
    @Hindusandaczech 13 years ago

    Bravo!!! Very much the best and premium stuff.

  • @kunleolutomilayo4018
    @kunleolutomilayo4018 5 years ago

    Thank you, Prof. Thank you, MIT.

  • @amyzeng7130
    @amyzeng7130 2 years ago

    What a brilliant lecture!!!

  • @aattoommmmable
    @aattoommmmable 13 years ago

    the lecture and the teacher of my life!

  • @serden8804
    @serden8804 4 years ago

    Bro, are you still alive?

  • @ricardocesargomes7274
    @ricardocesargomes7274 7 years ago

    Thanks for uploading!

  • @cuinuc
    @cuinuc 14 years ago

    I love Professor Strang's great lectures. Just one small correction at 32:30: It should have been S * LAMBDA^100 * c instead of LAMBDA^100 * S * c.

  • @starriet
    @starriet 2 years ago

    Nice catch!

  • @jeffery777
    @jeffery777 2 years ago

    haha I think so

  • @eyuptarkengin816
    @eyuptarkengin816 6 months ago

    Yeah, I thought of the same thing and scrolled down to the comments for approval. Thanks, mate :D

  • @dwijdixit7810
    @dwijdixit7810 1 year ago

    33:40 Correction: the eigenvalue matrix should multiply S from the right. That correction has been made in the book; it probably slipped past Prof. Strang in the flow.

  • @syedsheheryarbokhari2780
    @syedsheheryarbokhari2780 10 months ago

    There is a small writing mistake at 32:30 by Prof. Strang. He writes (eigenvalue matrix)^100 multiplying (eigenvector matrix) multiplying the c's (constants). It ought to be (eigenvector matrix) multiplying (eigenvalue matrix)^100 multiplying the c's. At the end of the lecture Professor Strang does state the correct formula, but it is easy to miss.

  • @clutterbrainx
    @clutterbrainx 10 months ago

    Yeah I was confused for a very long time there

  • @coreconceptclasses7494
    @coreconceptclasses7494 3 years ago

    I got 70 out of 75 on my linear algebra final exam. Thanks, MIT...

  • @abdulghanialmasri5550
    @abdulghanialmasri5550 2 years ago

    The best math teacher ever.

  • @shadownik2327
    @shadownik2327 6 months ago

    Now I get it. It's like breaking the thing we want to transform (a vector or matrix, really a system) into little parts and then transforming them individually, because that's easier: each part gets transformed along its own fixed direction, and then we add all those pieces back up. The eigenvectors tell us how to make the pieces, and the eigenvalues tell us how the given matrix or system scales them. Wow, thanks! Something clicked in my mind and became very simple. Basically this is finding the easiest way to apply the transformation. Thanks to @MIT and Professor Strang for making this available online for free.

  • @veronicaecheverria594
    @veronicaecheverria594 3 years ago

    What a great professor!!!

  • @eugenek951
    @eugenek951 6 months ago

    He is my linear algebra super hero!🙂

  • @jamesmcpherson3924
    @jamesmcpherson3924 3 years ago

    I had to pause to figure out how he got the eigenvectors at the end. Plugging in phi works, but it wasn't until I watched again that I noticed he was pointing to the lambda^2 - lambda - 1 = 0 relationship to read off the vector.
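
For anyone pausing at the same spot, the step can be written out in one line of algebra; a sketch (with A the Fibonacci matrix and lambda either eigenvalue):

```latex
(A - \lambda I)\begin{pmatrix}\lambda\\ 1\end{pmatrix}
= \begin{pmatrix}1-\lambda & 1\\ 1 & -\lambda\end{pmatrix}
  \begin{pmatrix}\lambda\\ 1\end{pmatrix}
= \begin{pmatrix}\lambda - \lambda^2 + 1\\ 0\end{pmatrix}
= \begin{pmatrix}0\\ 0\end{pmatrix},
\quad\text{since } \lambda^2 = \lambda + 1 .
```

So x = (lambda, 1) is an eigenvector for each root of lambda^2 - lambda - 1 = 0, which is the relationship Strang was pointing at.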

  • @ax2kool
    @ax2kool 12 years ago

    That was amazing and awe-inspiring. :)

  • @neoneo1503
    @neoneo1503 3 years ago

    A*S = S*Lambda (using the linear-combination, column-at-a-time view of matrix multiplication, as in Ax1 = b1). That is brilliant and clear! Thanks! (See the sketch at the end of this thread.)

  • @neoneo1503
    @neoneo1503 3 years ago

    Also expressing the states u_0 through u_k as linear combinations of eigenvectors (at 30:00 and 50:00).

  • @fragileomniscience7647
    @fragileomniscience7647 3 years ago

    Well, if you interpret it that way, it's just a change of basis from the standard basis onto the eigenvector basis (up to isomorphism; then you additionally multiply the transforms in the alternate basis). Provided, of course, that either the characteristic polynomial factors into distinct roots, or that the geometric and algebraic multiplicities match (because then the eigenspaces together span the whole vector space up to isomorphism; if they didn't, you'd only have a generating system for a subspace). For anyone who wanted one more run-through.

  • @neoneo1503
    @neoneo1503 3 years ago

    @@fragileomniscience7647 Thanks! =)
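
A tiny numerical illustration of the column-at-a-time view mentioned in this thread, assuming numpy (the matrix is hypothetical):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # any diagonalizable matrix

lam, S = np.linalg.eig(A)
Lam = np.diag(lam)

print(np.allclose(A @ S, S @ Lam))    # A S = S Lambda, as one matrix equation
for i in range(2):                    # ...and column by column: A x_i = lambda_i x_i
    print(np.allclose(A @ S[:, i], lam[i] * S[:, i]))
```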

  • @Afnimation
    @Afnimation 11 years ago

    Well, I was impressed at the beginning, but when he stated the second eigenvalue I realized it is just the golden ratio... That doesn't diminish him; he's great!

  • @ozzyfromspace
    @ozzyfromspace 4 years ago

    Did we ever prove that if the eigenvalues are distinct, the eigenvectors are linearly independent? I ask because at ~32:00, taking u_0 = c1*x1 + c2*x2 + ... + cn*xn requires the eigenvectors to form a basis for an n-dimensional vector space (i.e., span the column space of an invertible matrix). It feels right, but I have no solid background for how to think about it. (A proof sketch follows this thread.)

  • @roshinis9986
    @roshinis9986 8 months ago

    The idea is easy in 2D. If you have two distinct eigenvalues and their corresponding eigenvectors, you don't just have one eigenvector per eigenvalue; the whole span of that vector (its multiples, forming a line) consists of eigenvectors for that eigenvalue. If the original eigenvectors were dependent, they would lie on the same line, making it impossible for them to be scaled by two distinct eigenvalues simultaneously. I haven't yet been able to extend this intuition to 3 or higher dimensions, though, since there dependence need not mean lying on the same line.

  • @jeanpierre-st7rl
    @jeanpierre-st7rl 4 months ago

    @roshinis9986 Hi, at 29:46: u0 = c1x1 + c2x2 + c3x3 + ... Is u0 a vector? If so, how can u0 be split into a combination of eigenvectors? What are the ci? If you have any info please let me know. Thanks.
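
A sketch of the standard argument for the two-eigenvalue case asked about in this thread (induction on the number of eigenvalues extends it to any dimension, which also settles the 3-D worry above):

```latex
\text{Suppose } c_1 x_1 + c_2 x_2 = 0,\quad A x_i = \lambda_i x_i,\quad \lambda_1 \neq \lambda_2 .\\
\text{Apply } A:\qquad c_1 \lambda_1 x_1 + c_2 \lambda_2 x_2 = 0 .\\
\text{Subtract } \lambda_2 \times \text{the first equation:}\qquad c_1(\lambda_1 - \lambda_2)\,x_1 = 0 .\\
\text{Since } \lambda_1 \neq \lambda_2 \text{ and } x_1 \neq 0,\ c_1 = 0, \text{ and then } c_2 = 0 .
```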

  • @cooperxie
    @cooperxie 11 years ago

    Agree! That's what I plan to use in my teaching.

  • @benzhang7261
    @benzhang7261 3 years ago

    Master Yoda passed on what he has learned, by way of Fibonacci and 1.618.

  • @cecilimiao
    @cecilimiao 14 years ago

    @cuinuc I think they are actually the same, because LAMBDA is a diagonal matrix, you can have a try.

  • @nguyenbaodung1603
    @nguyenbaodung1603 3 years ago

    I read something on SVD without even knowing about eigenvalues and eigenvectors, then watched a YouTube video explaining that V is actually the eigenvector matrix of A^T A. Which is extremely insane now that I've seen this video, oh my goodness. Even without having watched your SVD lecture yet, I can already tell the precise concept of it. Oh my goodness, math is so perfect!!
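
The relationship described here can be checked in a few lines; a sketch assuming numpy (eigenvectors are only determined up to sign, so compare absolute dot products):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))      # any real matrix

U, s, Vt = np.linalg.svd(A)          # rows of Vt are the right singular vectors
w, V2 = np.linalg.eigh(A.T @ A)      # eigen-decomposition of A^T A (ascending order)

# Singular values squared = eigenvalues of A^T A
print(np.allclose(np.sort(s**2), np.sort(w)))          # True

# Each right singular vector matches some eigenvector of A^T A up to sign
for v in Vt:
    print(np.isclose(np.max(np.abs(V2.T @ v)), 1.0))   # True, True, True
```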

  • @SamSarwat90
    @SamSarwat90 6 years ago

    I love you professor !!!

  • @blondii0072
    @blondii0072 12 years ago

    Beautiful lecture. Thanks

  • @mospehraict
    @mospehraict 13 years ago

    @PhilOrzechowski he does it to turn a second-order difference equation into a first-order system of difference equations.

  • @healthfreak5438
    @healthfreak5438 9 years ago

    Why are we representing the u_k vector as a combination of eigenvectors in the Fibonacci sequence problem?

  • @user-pd1sx9tx4q
    @user-pd1sx9tx4q 3 years ago

    MIT, thank you!

  • @davidsfc9
    @davidsfc9 11 years ago

    Great lecture!

  • @shavuklia7731
    @shavuklia7731 7 years ago

    Fantastic! Thanks for uploading.

  • @zionen01
    @zionen01 14 years ago

    Great stuff. I was able to do my homework with this lecture. I will definitely be getting Strang's book.

  • @dalisabe62
    @dalisabe62 4 years ago

    The golden ratio arose from the Fibonacci sequence and has nothing to do with eigenvectors or eigenvalues as such. The beauty of using the eigenvectors and eigenvalues of a matrix, though, is that it limits the effect of the transformation to a change in magnitude only. That reduces a dynamic system, such as population growth as a function of several variables, to a matrix computation, without worrying about the change of direction or rotation typically associated with matrix transformations. Since eigenvectors and eigenvalues change the magnitude of the parameter vector only, the idea of employing this eigen-transformation is quite genius. The same technique could be used for any dynamic system that can be modeled as a matrix transformation producing a change in magnitude only.

  • @Arycke
    @Arycke 9 months ago

    Hence the title of his *example* as "Fibonacci Example." Nowhere was it stated explicitly that the golden ratio didn't arise from the Fibonacci sequence, so I don't see where you got that from. The example has a lot to do with eigenvalues and eigenvectors by design, and it uses a simple recurrence relation to show a use case. The Fibonacci sequence isn't unique anyway.

  • @praduk
    @praduk 14 years ago

    Fibonacci numbers being solved for as an algebraic equation with linear algebra was pretty cool.

  • @eren96lmn
    @eren96lmn 8 years ago

    43:36 that moment when your professor's computational abilities go far beyond standard human capabilities

  • @BalerionFyre
    @BalerionFyre 8 years ago

    Yeah wtf? How did he do that in his head?? lol

  • @BalerionFyre
    @BalerionFyre 8 years ago

    Wait a minute! He didn't do anything special. 1.618... is the golden ratio! He just knew the first 4 digits. Damn, that's a little anticlimactic. Bummer.

  • @AdrianVrabie
    @AdrianVrabie 8 years ago

    +Stephen Lovejoy Damn! :D Wow! AWESOME! I have no words! Nice spot! I actually checked it in Octave and was amazed the prof could do it in his head. But I guess he knew the Fibonacci sequence is related to the golden ratio.

  • @IJOHN84
    @IJOHN84 5 years ago

    All students should know the solution to that golden quadratic by heart.

  • @ozzyfromspace
    @ozzyfromspace 4 years ago

    Fun fact, since we're all talking about the golden ratio: the Fibonacci sequence isn't that special. Any sequence F_(k+2) = F_(k+1) + F_k, for any seeds F_0 = a and F_1 = b != -a, generates a sequence that grows at the rate (1+sqrt(5))/2, your golden ratio. Another fun way to check this: take the limit of the ratio of consecutive terms of your arbitrary sequence with your preferred software :) edit: that's a great excuse to write a bit of code lol
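
Taking the comment up on its own suggestion, a sketch (the seeds 3 and -1 are arbitrary choices):

```python
# Any seeds F_0 = a, F_1 = b give the same growth rate, the golden ratio.
a, b = 3, -1                 # arbitrary seeds
prev, cur = a, b
for _ in range(60):
    prev, cur = cur, cur + prev
print(cur / prev)            # ~1.618033988749..., the golden ratio
```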

  • @jasonhe6947
    @jasonhe6947 4 years ago

    Absolutely a brilliant example of how to apply eigenvalues to a real-world problem.

  • @gianlucacococcia2384
    @gianlucacococcia2384 4 years ago

    Can I ask why A times x1 is lambda1 times x1?

  • @gianlucacococcia2384
    @gianlucacococcia2384 4 years ago

    Zhixun He

  • @noorceen
    @noorceen 12 years ago

    thank you :)) you are amazing

  • @hektor6766
    @hektor6766 5 years ago

    You can hear the students chuckling as they recognized the Golden Ratio. Didn't quite recognize it as (1 + root 5)/2.

  • @bastudil94
    @bastudil94 10 years ago

    There is a MISTAKE in the formula at minute 32:31. It must be S(Λ^100)c in order to work as it is supposed to. Nevertheless, it is an excellent lecture; thanks a lot. :)

  • @YaguangLi
    @YaguangLi 9 years ago

    Yes, I am also confused by this mistake.

  • @sammao8478
    @sammao8478 9 years ago

    Yaguang Li, I agree with you.

  • @AdrianVrabie
    @AdrianVrabie 8 years ago

    +Bryan Astudillo Carpio why not S(Λ^100)S^{-1}c ???

  • @apocalypse2004
    @apocalypse2004 7 years ago

    u0 is Sc, so S inverse cancels out with the S

  • @daiz9109
    @daiz9109 7 years ago

    You're right... it confused me too...

  • @kebabsallad
    @kebabsallad 13 years ago

    @PhilOrzechowski, he says that he just adds it to create a system of equations.

  • @shamsularefinsajib7778
    @shamsularefinsajib7778 11 years ago

    Gilbert Strang, a great math teacher...

  • @chiaochao9550
    @chiaochao9550 3 years ago

    46:08 It should be: F_100 is approximately c_1 * lambda1^100 * x_1. The professor left out x_1 here. But if you take the relevant component of x_1 to be 1 (which is the case here), then it is correct.

  • @starriet
    @starriet 2 years ago

    Not actually. F_100 is a number and x_1 is a vector.

  • @dexterod
    @dexterod 8 years ago

    I'd say if you play this video at speed 1.5, it's even more awesome!

  • @abdulbasithashraf5480
    @abdulbasithashraf5480 5 years ago

    Trueee

  • @ozzyfromspace
    @ozzyfromspace 4 years ago

    1x all the way. I savor the learning ☺️

  • @wendywang4232
    @wendywang4232 12 years ago

    Something is wrong in this lecture: at 32:39 it should be A^100 u_0 = S M^100 c (here I use M for the diagonal eigenvalue matrix). The professor wrote A^100 u_0 = M^100 S c, which is not correct.

  • @Stoikpilled
    @Stoikpilled 14 years ago

    awesome!! Greetings from Peru

  • @joe130l
    @joe130l 3 years ago

    So it seems like the professor emphasized the importance of the eigenvalues here, which is nice. But are the eigenvectors of any importance? What's a good example of the eigenvectors mattering?

  • @alijoueizadeh8477
    @alijoueizadeh8477 5 years ago

    Thank you.

  • @phononify
    @phononify 10 months ago

    Very nice discussion about Fibonacci... great!

  • @thomassun3046
    @thomassun3046 1 month ago

    Here comes a question: how is u0 equal to c1x1 + c2x2 + ... + cnxn at 29:50? I'm confused; could anyone explain it to me?

  • @codeo6246
    @codeo6246 2 years ago

    Can't you also use the determinant to figure out that A^k → 0 as k → ∞? I.e., if det A < 1, then A^k → 0?
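
Careful: det A < 1 is not enough. The determinant is the product of the eigenvalues, so it can be small while one eigenvalue still exceeds 1; A^k → 0 requires all |lambda_i| < 1. A quick counterexample, assuming numpy:

```python
import numpy as np

A = np.diag([2.0, 0.1])                  # det A = 0.2 < 1, but lambda_1 = 2
print(np.linalg.det(A))                  # 0.2
print(np.linalg.matrix_power(A, 20))     # top-left entry is 2^20: A^k blows up
```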

  • @technoshrink
    @technoshrink 9 years ago

    u0 == "you know it". First time I've heard his Boston accent c:

  • @utxeee
    @utxeee 6 years ago

    So, the second component of u(k+1) is useless, right? The actual value is given by the first component.

  • @thovinh5386
    @thovinh5386 5 years ago

    Yep, you can use u(k) = u(k) and it still works.

  • @zyctc000
    @zyctc000 7 months ago

    If anyone ever asks you why Fibonacci and the golden ratio phi are connected, point them to this video. Thank you, Dr. Strang.

  • @jojowasamanwho
    @jojowasamanwho 1 year ago

    19:21 I would sure like to see the proof that if there are no repeated eigenvalues, then there are certain to be n linearly independent eigenvectors. (A sketch appears earlier in these comments.)

  • @theshreyansjain
    @theshreyansjain 1 year ago

    Is there an error at 32:30? Shouldn't S be multiplied before (lambda matrix)^100?

  • @gomasaanjanna2897
    @gomasaanjanna2897 3 years ago

    I am from India. I love your teaching.

  • @jingwufang1796
    @jingwufang1796 9 years ago

    Great professor

  • @drhf1214
    @drhf1214 7 years ago

    I'm still a little confused: what is u-naught supposed to represent? Is it just a linear combination of the eigenvectors?

  • @lucasm4299
    @lucasm4299 6 years ago

    Nicole O Yes. They form a basis

  • @leothegreat3
    @leothegreat3 12 years ago

    thank you

  • @iDiAnZhu
    @iDiAnZhu 11 years ago

    At around 32:45, Prof. Strang writes Lambda^100*S*c. Notation-wise, shouldn't this be S*Lambda^100*c?

  • @sudipandatta5371
    @sudipandatta5371 3 years ago

    But will it be easy to find the c vector in c1x1 + c2x2 + ... + cnxn, where the xi are eigenvectors, when n is very big?

  • @jeanpierre-st7rl
    @jeanpierre-st7rl 4 months ago

    Hi, at 29:46: u0 = c1x1 + c2x2 + c3x3 + ... Is u0 a vector? If so, how can u0 be split into a combination of eigenvectors? What are the ci? If you have any info please let me know. Thanks.

  • @xl000
    @xl000 4 years ago

    Is there a practical use for matrix diagonalization, or is it just a cool trick, like a three-ball Mills Mess or sword swallowing?

  • @gianlucacococcia2384
    @gianlucacococcia2384 4 years ago

    Can I ask why A times x1 is lambda1 times x1?

  • @maoqiutong
    @maoqiutong 11 months ago

    32:41 There is a slight error here. The result Λ^100 * S * C may be wrong. I think it should be S * Λ^100 * C.

  • @mdrumon319
    @mdrumon319 2 months ago

    I don't get how Au = ΛSc could be right. Shouldn't Λ be to the right of S? Is there anyone who can help?

  • @ashutoshtiwari4398
    @ashutoshtiwari4398 5 years ago

    Do all matrices have eigenvalues and eigenvectors (including complex eigenvalues)?

  • @rolandheinze7182
    @rolandheinze7182 5 years ago

    Yes, square matrices do, but the eigenvalues are not always real.

  • @dadadada2367
    @dadadada2367 11 years ago

    the best of the best

  • @khanhdovanit
    @khanhdovanit 3 years ago

    15:02 interesting information hidden inside a matrix: its eigenvalues

  • @anonymous.youtuber
    @anonymous.youtuber 3 years ago

    Just wondering... what keeps us from calling the eigenvector matrix E instead of S? Is E already used for something else?

  • @abdulrhmanmadi7392
    @abdulrhmanmadi7392 3 years ago

    Yes, it is used for the elimination matrix.

  • @PaulHobbs23
    @PaulHobbs23 12 years ago

    @lolololort 1/2(1 + sqrt(5)) is also the golden ratio! Math is amazing =] I'm sure the professor knew the answer and didn't calculate it in his head on the spot.

  • @richarddow8967
    @richarddow8967 1 year ago

    Beautifully simple how that Fibonacci example worked out.

  • @abhi220
    @abhi220 7 years ago

    I have a doubt about the difference equations part. He writes u_0 as a combination of eigenvectors of A. Why should this be true?

  • @olfchandan
    @olfchandan 7 years ago

    The eigenvectors span the entire space (remember, S is a square invertible matrix). So u0 will be a linear combination of the eigenvectors.

  • @suziiemusic
    @suziiemusic 7 years ago

    A set of n independent eigenvectors, each with n components, is a basis for R^n, and therefore any vector in R^n (including u0) can be written as a linear combination of these n eigenvectors. We could choose any other set of n independent vectors as a basis and do the same thing. The "standard" basis would be the columns of the identity matrix, which in 3 dimensions correspond to the x, y, and z axes.

  • @ashutoshtiwari4398
    @ashutoshtiwari4398 5 years ago

    Why do skew-symmetric matrices have zero or purely imaginary eigenvalues?

  • @sidaliu8989
    @sidaliu8989 5 years ago

    By the definition of a skew-symmetric matrix (A^T = -A), all entries on the diagonal must be 0. So in the 2x2 case the characteristic equation is lambda^2 + b^2 = 0 (the trace is zero and the determinant is a square), and this gives purely imaginary solutions when b^2 > 0.
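
A cleaner route than the characteristic polynomial, sketched for any dimension (x* denotes the conjugate transpose of x):

```latex
\text{Let } A^T = -A \text{ (real)},\quad Ax = \lambda x,\quad x \neq 0 .\\
x^* A x = \lambda\, x^* x,\qquad
\overline{x^* A x} = (x^* A x)^* = x^* A^T x = -\,x^* A x .\\
\text{So } x^* A x \text{ is purely imaginary, while } x^* x > 0 \text{ is real;}\\
\text{hence } \lambda = \dfrac{x^* A x}{x^* x} \text{ is purely imaginary or } 0 .
```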