Computing the Singular Value Decomposition | MIT 18.06SC Linear Algebra, Fall 2011
Computing the Singular Value Decomposition
Instructor: Ben Harris
View the complete course: ocw.mit.edu/18-06SCF11
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
Comments: 263
5 min of this video taught me more than two lectures, two chapters of my book, one office hour with the TA, and 4 hours trying to figure it out on my own.
@jugsma6676
7 years ago
same here, :))) , haha
@TheR971
6 years ago
nice thought!
@sunghunet
6 years ago
Yes. It saved me time in understanding the difficult process of SVD. Very helpful to me.
@oguztopcu9048
6 years ago
dude you can accomplish anything by watching 2 youtube videos.
@deepgee9214
5 years ago
Hahaha 😂🤣
I vocally say out loud 'Minus lambda' at 3:44, he then turns around and says 'minus lambda, thank you'. You're welcome.
7:24 No need to look for me man, I'm right here.
You explained singular value decomposition better in 11 minutes than my linear algebra professor did in 50. Thanks.
@carl6167
5 years ago
I am afraid, mate, that then you understand nothing but how to compute it... See, SVD is far more than that: it is a way of decomposing a homomorphism into three separate maps. One rotates in the domain space, one scales vectors accordingly when bringing them from one dimension to another, and then the vector gets rotated once again in the output space (unitary matrices can be decomposed into rotation matrices).
@sathasivamk1708
4 years ago
@@carl6167 I don't think the first one necessarily rotates; orthogonal doesn't mean rotation, it can be a reflection/inversion too.
I'm always amazed by MIT OCW videos. The way they teach is just ideal: board writing that is clear and large enough, systematic explanation, and comfortable to follow.
I sincerely thank MIT for giving me this wonderful lecture for free. It's really helpful for learning SVD.
This is the best svd tutorial I could wish for, thanks for making it easy
MIT OCW is the big reason I pass courses like Linear Algebra at my university... THANKS MIT_OCW
Thank you, most examples I found for this were simple examples, this helped me figure out the more complex problems.
Explained a complicated problem in a simple way. Amazing work.
Thanks for this, Mclovin.
This is a bit confusing. If anyone wants to know how to find U, have a look at my working. (S = sigma.)

Firstly, when calculating the eigenvector for the eigenvalue 20, swap the signs around so that the vector is (3/root10, -1/root10). Note that this is also a correct eigenvector for the given matrix and its eigenvalue 20; you can find out why by reading up on eigendecomposition.

Secondly, swap the order of the values in S so that they go from high to low when read left to right (this is the conventional format of the sigma matrix), and swap the columns of V to match, so each singular value still lines up with its eigenvector.

Now for U: I'm not sure why he's done the unit-length thing, and I can't even see how he got his final answer from it. Anyway, we know that CV = US, which means CVS^-1 = USS^-1. Since S is diagonal and invertible, SS^-1 = I, the identity matrix, i.e. it can be dropped. So now we have CVS^-1 = U. To find S^-1: since S is diagonal, just invert the values along the diagonal, i.e. any diagonal value x becomes 1/x. Now multiply your CV by your S^-1 and you should get the same result for U as in the video, but with the columns swapped around, i.e. in the conventional format.
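For anyone who wants to check this recipe numerically, here is a minimal numpy sketch. The matrix C = [[5, 5], [-1, 7]] is an assumption inferred from the eigenvalues 20 and 80 and the CV values quoted in this thread:

```python
import numpy as np

# Assumed example matrix from the video, inferred from the eigenvalues
# 20 and 80 discussed throughout this thread.
C = np.array([[5.0, 5.0], [-1.0, 7.0]])

# Eigendecomposition of C^T C: eigenvalues are sigma^2, eigenvectors fill V.
evals, V = np.linalg.eigh(C.T @ C)      # eigh returns ascending order
order = np.argsort(evals)[::-1]         # conventional SVD order: largest first
evals, V = evals[order], V[:, order]

Sigma = np.diag(np.sqrt(evals))
U = C @ V @ np.linalg.inv(Sigma)        # U = C V S^-1, as the comment derives

assert np.allclose(U @ Sigma @ V.T, C)  # reconstruction holds
assert np.allclose(U.T @ U, np.eye(2))  # U comes out orthogonal automatically
```

Because U is derived from C, V and Sigma, the reconstruction works no matter which sign convention `eigh` happens to pick for the eigenvectors.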
@cdahdude51
5 years ago
Excellent, thanks. Was wondering about how he calculated U
@Yustynn
5 years ago
Regarding getting U after he retrieves US, there are 2 ways. Both rely on the following: S, being a diagonal matrix, stretches U, with each column scaled by the corresponding diagonal entry of S.

Method 1 (need to know the values of S): divide each column j by the corresponding S_jj value.

Method 2 (no need to know the values of S): observe that (i) S is by definition diagonal, so all that happened to U was a scaling of each column vector, and (ii) U is by definition orthogonal, so each column vector must have magnitude 1. So just normalize all the columns.

He made a mistake: he put the minus sign on U_21 instead of U_11.
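Method 2 is a one-liner in numpy. As above, the example matrix C = [[5, 5], [-1, 7]] is an assumption based on the numbers quoted in the thread:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])   # assumed example from the video
_, V = np.linalg.eigh(C.T @ C)            # columns of V: eigenvectors of C^T C
US = C @ V                                # this is U*Sigma: U's columns scaled by sigma_j

# Method 2: since U must be orthogonal, just normalize each column of US.
U = US / np.linalg.norm(US, axis=0)
assert np.allclose(U.T @ U, np.eye(2))    # the columns are orthonormal
```

The column norms of US are exactly the singular values, which is why dividing by them recovers an orthogonal U.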
@forthrightgambitia1032
3 years ago
When he gets the eigenvectors, did he just have them prepared, or is there a mental trick to get the vectors he saw without Gaussian elimination?
@KnowledgeGuide859
2 years ago
Excellent. I spent an hour wondering how he calculated U. Thanks again.
@kaloyank6605
A year ago
Thank you so much, now I can finally finish my assignment.
Gotta save my life for the rest of the quarter!!! So lucky to find this tutorial right before the midterm tomorrow LOL
This video is really great! Thank you for walking through this example so clearly, I really get it now! :)
@eevibessite
A year ago
kzread.info/dash/bejne/YnWczJqMfKfNmco.html
There is a problem: the \Sigma matrix must have positive decreasing values, so it can't be diag([2*sqrt(5), 4*sqrt(5)]); it should be \Sigma = diag([4*sqrt(5), 2*sqrt(5)]), with the columns of U and V reordered to match. Indeed, if you perform the matrix product of your U, Sigma and V', you don't get C but obtain [-1 7; 5 5] instead.
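A quick numerical illustration of the ordering point: reordering Sigma alone breaks the factorization, because the columns of U and the rows of V^T must be permuted together with it. (C = [[5, 5], [-1, 7]] is an assumption inferred from the values quoted in this thread.)

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])   # assumed example from the video
U, s, Vt = np.linalg.svd(C)
assert s[0] >= s[1]                       # numpy returns singular values descending

# Swapping the singular values without also permuting U's columns and
# V^T's rows does NOT give back C:
assert not np.allclose(U @ np.diag(s[::-1]) @ Vt, C)

# With everything consistently ordered, the reconstruction works:
assert np.allclose(U @ np.diag(s) @ Vt, C)
```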
Great way of teaching!! You've taught me SVD in 11 minutes. A few errors at the end, but that's understandable.
Appreciation! You saved my life!
Best explanation ever ! Finally I got it😍😍 Thanks
Singular values need to be ordered decreasingly. When you write sigma, shouldn't the 1st and 4th values be switched?
To find U, it would've been a lot easier to multiply to the right of each side by A^T so this time you get another diagonalization equation for AA^T and so all you need to do is find the eigenvectors of AA^T. Much faster imo
Thank you for this video! Very nicely done! Just one correction: U11 and U21 seems to be should be swapped)
Thank you for the lecture!
Yes. Notice that it's simple to invert sigma, since it is a diagonal matrix.
v1 is the first eigenvector (corresponding to the first eigenvalue) after it is normalized, etc.
Good job. Very helpful.
Sometimes the old ways are better. All this new fangled technology and we forget how to teach along the way. Blackboard and chalk is the way to go!
V1 and V2 are orthonormal vectors, via the Gram-Schmidt process... :)
Excellent video. Quite clear.
very short, fast, and easily understandable
Great example, thank you very much.
You're the real MVP man
I just wanted to let you know that either AA* or A*A works in the SVD, and sometimes the other one results in a far easier computation of the SVD.
saved me a lot of time!
Note to viewers: the sigma is wrong. The elements s in sigma should satisfy s1 > s2.
I think the V and the Sigma need to be ordered with the largest singular vector/value on the left?
@kanakambaran
4 years ago
Yeah, this is what I thought as well. But if you do that and rearrange V also, you don't get the original matrix when you multiply the 3 components, so there must be some reason why he has put it like that.
@heathmatthews2643
3 years ago
They don't need to be, but it's super useful for things like Principal Component Analysis, so arrange the sigma from greatest to least :)
Thank you! You are a genius!
Nice, both the video and the way you raise your eyebrow in the last second.
This was very helpful indeed.
Thank you for another method to get U.
Thank you, It was very helpful.
To find U it is simple. The professor found U*Sigma from CV, didn't he? So to find U we solve U*Sigma = CV:

[a11 a12; a21 a22] * [2*sqrt(5) 0; 0 4*sqrt(5)] = [-sqrt(10) 2*sqrt(10); sqrt(10) 2*sqrt(10)]

So we have a11 = -sqrt(10)/(2*sqrt(5)) = -1/sqrt(2), a12 = 2*sqrt(10)/(4*sqrt(5)) = 1/sqrt(2), a21 = sqrt(10)/(2*sqrt(5)) = 1/sqrt(2), a22 = 2*sqrt(10)/(4*sqrt(5)) = 1/sqrt(2).
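A quick sanity check of the arithmetic in this comment; note that sqrt(10)/(2*sqrt(5)) = 1/sqrt(2), which is the same number as sqrt(2)/2:

```python
import math

# The surd simplifications used above: sqrt(10)/(2*sqrt(5)) = 1/sqrt(2),
# and 2*sqrt(10)/(4*sqrt(5)) = 1/sqrt(2) as well.
assert math.isclose(math.sqrt(10) / (2 * math.sqrt(5)), 1 / math.sqrt(2))
assert math.isclose(2 * math.sqrt(10) / (4 * math.sqrt(5)), 1 / math.sqrt(2))

# And 1/sqrt(2) and sqrt(2)/2 are two spellings of the same number.
assert math.isclose(1 / math.sqrt(2), math.sqrt(2) / 2)
```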
@SKCSK792
9 years ago
Perfect
@plavix221
8 years ago
+Luis Lobo Isn't the order of the matrices in the first equation, CtC, wrong? I don't get why V is at the front and Vt at the end. I thought matrix multiplication is not commutative... Hope someone can help me out.
@GasMaskedMidget
8 years ago
This still doesn't make sense to me. How is -sqrt(10)/(2*sqrt(5)) not equal to -sqrt(2)/2???
@danielwenhao
7 years ago
I just came across your comment; hope you've already solved your problem, but I'll give you the reason behind it with this link :) www.wikihow.com/Divide-Square-Roots
Yeah, makes sense. I haven't had to calculate this problem, I only understand the two methods, but I see why you need to be careful. We mainly used Schur's triangularization/decomposition, which more or less does the same thing, since you're making U a unitary matrix and will get an upper triangular matrix, or, if diagonalizable, sigma in this case. But this seems to have a little more freedom, since U and V* are not the same as U and U*. Excuse any typos, I'm on my phone and autocorrect sucks.
At 5:08, I don't understand how you get -3 and 1. Then again at 6:04, how do you get 1 and 3?
One of the best ways to teach people is to show them an example. I cannot find an example like this in my LA class; the teacher just uses MATLAB to calculate the U, Sigma, V. Thanks to this man. He must have earned his PhD already, I hope.
I’m going to be completely honest. You are exactly like me! I am a masters recipient, but I still get confused multiplying matrices in my head because it’s just so much to keep track of.
Thank you! Super helpful.
There is a much simpler way of doing this, with fewer errors, on one of the MIT pages on SVD decomposition. U consists of the unit eigenvectors of C Ctranspose, and V transpose of the unit eigenvectors of Ctranspose C. The eigenvalues are the same for both matrices. Just find your diagonal matrix of eigenvalues and you are done.
@dorsaghasri5760
3 months ago
Can you please put the link?
Wonderful hand waving at 1:57
Really enjoyed!!
Last-minute mistake: he put the wrong signs in the u11 and u21 positions. Correction: u11 = -1/root(2), u21 = 1/root(2). Correct me if I am going wrong somewhere.
@GarbageMan144
3 years ago
No, you're right. These mistakes can be so annoying sometimes.
@dpccn6969
3 years ago
I think the same as you.
Thank you Sir, it is easy to understand...
I thought that for the sigma matrix the singular values were listed in descending order, so it should be sqrt(80) then sqrt(20). Is this true, or does it matter?
Great tutorial
You can find an SVD of a non-square matrix, even though you cannot find the determinant of a non-square matrix, because you take C^TC, which is a square matrix. If you exclude the zero eigenvalues and the eigenvectors you obtain for them, you get the SVD of the non-square matrix.
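For the non-square case, numpy's "thin" SVD does exactly this trimming of the null directions. The 2x3 matrix below is a made-up example, not from the video:

```python
import numpy as np

# A hypothetical 2x3 matrix, just to illustrate the non-square case.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# full_matrices=False drops the directions belonging to zero singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert U.shape == (2, 2) and Vt.shape == (2, 3)
assert np.allclose(U @ np.diag(s) @ Vt, A)   # thin SVD still reconstructs A
```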
U is wrong. The first entry is negative, not the third.
@ortollj4591
4 years ago
I highly recommend watching this other video about SVD, very well explained: kzread.info/dash/bejne/d6Ofrpl-ZtKXcag.html Yes, a little sign error in the U matrix: sagecell.sagemath.org/?q=yuwvgr
@fxelixe
4 years ago
Thank you, finally I can go to sleep. He didn't seem to notice it at all.
@mdichathuranga1
3 years ago
@@ortollj4591 thanks for sharing this link, this seems the clearest explanation I've found so far.
Very helpful. You delivered it very nicely, stay blessed :) ... But I am confused about the V1 and V2 part: how did you get a vector like -3 and 1?
@Sammyd123samuel
7 months ago
Part of finding eigenvectors: set (CtC - 20I)v = 0 and v will be (-3, 1).
simple, but very good.
Great video!
Why did he swap the negative sign in the last section when finding U (in column 1)? -(1/2) changed to +(1/2) and +(1/2) changed to -(1/2) in column 1. I got confused in the last section, please help, thank you.
@ausamahassan9559
4 years ago
You are right, though this is too late a reply!! u11 should be negative rather than u21. You can check by multiplying U(Sigma)V^T to get C.
Thank you very much
Thank you sooooooo much
The sigma matrix values should have been switched so that a11 >= a22; that's what SVD requires. Correct me if I am wrong.
Directly computing CC* and C*C will definitely work, but an important issue arises: the non-uniqueness of eigenvectors! Since negating any eigenvector also gives a valid solution, the U and V obtained from CC* and C*C might not be able to reconstruct the original C. However, if one uses the 2nd equation mentioned in this video, this potential issue is avoided, because U is derived from C, V and sigma!
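This sign point is easy to demonstrate: flip the sign of one eigenvector of C^T C (an equally valid choice) and note that deriving U from C, V and sigma absorbs whichever sign was picked. As elsewhere, C = [[5, 5], [-1, 7]] is an assumption inferred from the thread:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])    # assumed example from the video
w, V = np.linalg.eigh(C.T @ C)
sigma = np.sqrt(w)

# -v is just as valid an eigenvector as v, so flip one column's sign:
V_flipped = V * np.array([-1.0, 1.0])

# Deriving U as u_i = C v_i / sigma_i makes U consistent with whichever
# sign choice was made for V, so the reconstruction works either way.
for Vx in (V, V_flipped):
    U = (C @ Vx) / sigma                   # divides column i by sigma_i
    assert np.allclose(U @ np.diag(sigma) @ Vx.T, C)
```

Had U been computed independently from CC^T, its columns could come out with signs that do not match V, and the product would then fail to reconstruct C.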
@kuzminkg
A year ago
Underrated comment! Indeed, the demonstrated method finds the matrix U that corresponds to the previously found matrix V.
God, I love MIT.
Excellent effort by this young man! But I have a little confusion: if the eigenvectors V1 and V2 do not happen to be orthonormal to each other, then what should we do, since we always require V to be orthogonal?
@hoangminhle8485
A year ago
The eigenvectors V1 and V2 are eigenvectors of the symmetric matrix Ctranspose.C; therefore, V1 and V2 are always orthogonal.
When finding the eigenvectors for a given eigenvalue, why is it sufficient to find a single element of the nullspace, instead of the whole nullspace?
In the U matrix, the minus sign should move from u21 to u11.
You are a born teacher. Justified, because you are a student of Mr. Gilbert.
thanx a ton !
Sooo helpful!
@7:04 Are you sure that your diagonal matrix should be like that? I think the diagonal elements have to be monotonically decreasing from upper left to lower right, i.e. it has to be [sqrt(80) 0; 0 sqrt(20)].
@peter2485
2 years ago
You are right. The order has to decrease from upper left to lower right, so v_1 and v_2 need to be swapped around, and the same with \sigma_1 and \sigma_2.
Is it necessary for the diagonal of the sigma matrix to be the square roots of the eigenvalues of A*A? If I try to find U, then I need to go with AA*, which means the diagonal elements of the sigma matrix should now contain the square roots of AA*'s eigenvalues.
In the final step there is a mistake: the first column of U has the sign of an element reversed.
@rohitkumarsingh-iw7rf
4 years ago
Hey, brother Ved... you too!
Really helpful!! Tysm
You should go over the case when some of the eigenvalues are 0.
@andibritz
7 years ago
eigenvalues can never be trivial
@carl6167
5 years ago
Well, you just subtract 0 from the starting matrix, so you basically just compute Ker(C^TC).
How did the negative sign in U move from the 1st element to the 3rd? I didn't understand that part.
I really thought he was going to leave for a whole minute.
This is the closest I've come to understanding the process of SVD to date! So thank you for that, but it would be great if you didn't use such a convenient example for the V matrix. Does anybody know of a video that uses algebra? Also, the identity matrix is Sigma*Sigma^T, right?
@hminhph
8 years ago
+skilstopaybils Sigma*Sigma^T equals a matrix with the eigenvalues of (C^T)*C on the diagonal; in this case it's 20 and 80 on the diagonal.
What if C is not a square matrix? You need to show that you have to find C^TC and CC^T, then find their corresponding eigenvectors.
A Singular Value Decomposition (SVD) computation is easy to screw up. It's wise to take this video reciter's (Ben Harris') advice: please try a few SVD computations on simple 2x2 matrices. Once you have an answer, check that it works. This will help demystify SVD. This link may help: http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-29-singular-value-decomposition/
Quick question! Is it fair to say that the left singular vectors reflect the distribution (statistical distribution) of the column vectors?!
Shouldn't you multiply U(Sigma) by sigma^-1 to find U at the end?
Perhaps there's a faster way. Let's write C' for the transpose of C and S for the sigma matrix. Use the two equations: C'C = V S'S V' and CC' = U SS' U'. You can compute S'S by finding the eigenvalues of C'C, and CC' happens to have the same eigenvalues. So after finding V, instead of using the second equation from the video, you can just find the eigenvectors of CC' by solving (CC' - 20I)u = 0 and (CC' - 80I)u = 0. You'll find the vectors of U faster and without inverting S. Also, by factoring out 1/sqrt(10), it's easier to compute.
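The shared-eigenvalue claim is easy to verify numerically (again assuming the example matrix is C = [[5, 5], [-1, 7]], inferred from the thread); the sign caveat raised elsewhere in this thread still applies when pairing eigenvectors of the two Gram matrices:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])    # assumed example from the video
wv = np.linalg.eigvalsh(C.T @ C)           # eigenvalues of C'C ...
wu = np.linalg.eigvalsh(C @ C.T)           # ... equal the eigenvalues of CC'
assert np.allclose(np.sort(wv), np.sort(wu))
assert np.allclose(np.sort(wv), [20.0, 80.0])   # the 20 and 80 from the video
```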
I think he skipped the step where the product CV is multiplied by the inverse of the sigma matrix.
I think he made a mistake when writing the final V. It should be written so that the eigenvector of the largest eigenvalue is column one, the eigenvector of the next largest eigenvalue is column two, and so forth, until the eigenvector of the smallest eigenvalue is the last column of the matrix. Right?
Can anyone explain to me precisely what is happening at 10:00? How does he get this first matrix? What is he dividing it by? I'm a bit confused.
@incognito5438
4 years ago
I am also confused about that moment :(
I'm a beginner at SVD. I would like to know how to calculate the sigma matrix and the matrix U.
great video
For a rectangular matrix, is sigma still symmetric?
Thanks for the video. Kinda confused about the signs in the final answer for U, though. Did we need to change the signs of a11 and a21?
@giovanni-cx5fb
7 years ago
No, he just screwed up.
@moshiurchowdhury3673
6 years ago
He just screwed up at the end in the U matrix! The minus sign should be in front of u11 instead of u21.
EXCELLENT!!
nice video, thanks
Dude, when I compute the U Sigma V^T you have, I don't get the A matrix back.
At 5:08, can you explain how you calculated the value of V1?
@MrCorpsy6
4 years ago
The second row is 3 times the first one, so you drop it and find that y = -1/3 x (or alternatively, x = -3y); therefore you can take v1 = (-3, 1) (or v1 = (3, -1)). Basically, you're solving the linear system C^TC * X = 20X, where X = (x, y).
In which cases will you arrive at a complex U?
Nice job, dude. :)
Thanks buddy
thank u
formula to find v1 and v2???