Google and eigenvalues
We describe the PageRank algorithm, one of the algorithms Google used for its search engine. We rank the websites using an importance vector and model the system as a Markov chain, using matrices. Then we diagonalize the matrix using linear algebra, more precisely eigenvalues and eigenvectors. Finally, by taking the limit of powers of the matrix, we can determine which website reigns supreme. This probably has applications in data science and neural networks as well.
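The whole idea can be sketched in a few lines of Python: build a column-stochastic link matrix A, then repeatedly apply it to an initial importance vector. The 4-site link graph below is a made-up example for illustration, not necessarily the one used in the video.

```python
# PageRank sketch via power iteration. A[i][j] is the probability of
# jumping from site j to site i; each site splits its weight evenly
# among the sites it links to, so every column of A sums to 1.

def mat_vec(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def pagerank(A, v, steps=100):
    """Power iteration: v <- A v converges to the stationary distribution."""
    for _ in range(steps):
        v = mat_vec(A, v)
    return v

# Made-up 4-site link graph: site 1 links to 2, 3, 4; site 2 to 1 and 4;
# site 3 only to 1; site 4 to 2 and 3.
A = [
    [0,     1 / 2, 1, 0],
    [1 / 3, 0,     0, 1 / 2],
    [1 / 3, 0,     0, 1 / 2],
    [1 / 3, 1 / 2, 0, 0],
]
v0 = [1 / 4] * 4            # start with a uniform distribution
v_inf = pagerank(A, v0)     # approximately [1/3, 2/9, 2/9, 2/9]
# A applied to v_inf gives back (approximately) v_inf: an eigenvector
# with eigenvalue 1, and its entries rank the sites by importance.
```

Here site 1 comes out on top because site 3 sends it all of its weight and site 2 sends half.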
Matrix Limits and Markov Chains: • Matrix Limits and Mark...
Diagonalization: • Diagonalize 3x3 matrix
Subscribe to my channel: / drpeyam
TikTok: / drpeyam
Instagram: / peyamstagram
Twitter: / drpeyam
Teespring merch: teespring.com/stores/dr-peyam
Comments: 104
"80% of mathematics is linear algebra" -Raoul Bott
@jewulo
A year ago
How can this be true?
@adamfattal468
A year ago
What about non-linear algebra?
@thesenate8268
A year ago
@@adamfattal468 it must be the 20% of mathematics, according to my calculations
@adamfattal468
A year ago
@@thesenate8268 I wonder what rigorous methodology is behind this result
@amo6139
A year ago
@@adamfattal468 linear algebra
Dr Peyam! I have my Linear Algebra exam this week! You've helped me revise a lot and understand many of the concepts! I'm so grateful for you and grateful you exist. You're a wonderful person and such an empathetic teacher! 💛
@drpeyam
A year ago
Thank you!!!
this is awesome, I just finished linear algebra 1, and seeing how concepts like eigenvalues interact in the real world is super cool
Learning about this in my data science classes. Truly exciting stuff!
Classic Markov chain problem!! For those who don't understand why 1 is always an eigenvalue of a Markov matrix: consider Av = cv for some constant c and some Markov vector v (and, of course, the Markov matrix A); take A to be a 2x2 matrix for simplicity. Now multiply both sides by the row vector k = [1 1], so kAv = ckv, which gives c = 1.
@thesharpshooter157
A year ago
Building on this and explaining all the steps for anyone who is stuck. Notice that v is itself a probability distribution across states (websites in this case), which is why transpose(k)*v = 1 (the sum of a probability distribution is 1 by definition). Therefore the right-hand side reduces to c*1 = c. Now for the left-hand side transpose(k)*A*v: notice that A is a square matrix where each column is a probability distribution across states (by definition), so transpose(k)*A = [1 1] (in this case; in general it is a vector of ones). Then transpose(k)*A*v = transpose(k)*v = 1, so c = 1. Therefore any probability vector v that is an eigenvector of A (such as v_infinity, the stationary distribution of the Markov matrix A) has eigenvalue 1.
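The k = [1 1 ... 1] trick above is also easy to verify numerically; a small sketch, using a made-up 3x3 Markov matrix:

```python
# For a column-stochastic A, the row vector k = [1, 1, ..., 1] satisfies
# k A = k, because each entry of k A is a column sum of A. So 1 is an
# eigenvalue of A^T, and since A and A^T have the same eigenvalues,
# 1 is an eigenvalue of A as well.
A = [
    [0.5, 0.2, 0.3],
    [0.1, 0.7, 0.3],
    [0.4, 0.1, 0.4],
]  # every column sums to 1

n = len(A)
kA = [sum(A[i][j] for i in range(n)) for j in range(n)]  # k times A
# kA equals [1, 1, 1] up to floating-point error
```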
@randyt700
A year ago
Sir, this is a Wendy's
What a joy to see math so clearly explained. That was awesome! You capture the essence of it really well.
I remember encountering your videos back in high school and understanding only like 2% of them. I still don't understand lots, but I'm glad to know that you're still going strong educating people through the internet
@drpeyam
A year ago
Thank you!!!
Good timing! I just gave my students a project on this topic!
We actually learned about this the last day of my linear algebra class! (Last year)
Great video, all your videos in fact. It's good to see someone explaining how this works in an easy way. I've been following you on YT for years and you keep up the quality and motivation, keep it up. There is a book called Google's PageRank and Beyond: The Science of Search Engine Ranking. I read it a while back; it discusses some methods to find eigenvalues faster, optimize the algorithms, etc. Sometimes we don't appreciate it, but it is important to study and develop, because it is our daily working tool, at least in modern society. A hug, Doc, and thanks for the video ;0
Abstractions are so cool! How they translate to real world ideas is even cooler!
@MyOneFiftiethOfADollar
A year ago
Not too surprising since many abstractions originate as a special case of a "real world idea"
Great video as always.
I have two questions... 1. What assures us that the other elements of D are less than 1? 2. What is the name of the theorem mentioned at 10:04? PS: great video :)
@mltyblnd
2 months ago
Because we're dealing with fractions whose absolute value is less than 1, raising them to the n-th power as n goes to infinity sends them suuuper close to 0, so those numbers won't matter
This is amazing!!!! Please make more videos like this where you explain how these technologies use mathematics 🤩🤩🤩🤩🤩🤩
5:30, and that's how the power method works for the dominant eigenvalue and corresponding eigenvector
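For anyone curious, the power method mentioned here can be sketched as follows; the 2x2 matrix is a made-up example whose eigenvalues are 3 and 1:

```python
# Power method: repeatedly apply A and renormalize. The vector lines up
# with the dominant eigenvector, and the normalization factor approaches
# the absolute value of the dominant eigenvalue.
A = [[2.0, 1.0],
     [1.0, 2.0]]       # eigenvalues 3 (eigenvector [1, 1]) and 1 ([1, -1])

v = [1.0, 0.0]
eigenvalue = 0.0
for _ in range(50):
    w = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]
    eigenvalue = max(abs(x) for x in w)   # infinity norm of A v
    v = [x / eigenvalue for x in w]       # renormalize to avoid overflow
# eigenvalue converges to 3 and v to [1.0, 1.0], the dominant pair
```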
4:31 memerable MAGIC
I've seen a similar analysis applied to the game of Monopoly. There was an article about that years ago in Scientific American.
You are the happiest mathematician on YouTube! This is awesome
@drpeyam
A year ago
Thank youuuu
Love to see the math I learned applied! I study mech engineering, not computer science, but we also have some lin algebra. In mech engineering, though, it feels like the only reason to study linear algebra for 2 semesters is to solve differential equation systems (except for statistics maybe). So seeing linear algebra being useful by itself is pretty cool!
@zb0921
A year ago
It’s used very frequently in multibody system dynamics when dealing with positions or forces of rigid bodies in different orientations
4:24 „… the same Spiel with….“. 🤗
Love Markov matrices...
I really like the linear algebra problems
@drpeyam
A year ago
Sameeee
Beautiful
Cool system, and even cooler video! As a physics student, as soon as you said transition matrix, I got flashbacks of hopping probabilities from quantum classes D: (also, go bears!!)
I understand it is a Markov chain problem, just like the weather prediction models. What I am curious about is the astounding similarity between this and the functionality of a Moore-type finite state machine. Instead of states we have probabilities here.
This is great!! How do we know the diagonalized matrix will have rapidly decaying eigenvalues?
Love your energy! Btw what exactly do the connections between the states/ websites mean in the real world? You mentioned website can link to other websites, but that is very rare right?
@dudeabideth4428
A year ago
The count of websites linking to a site is what's used, at least that was true in the initial days. Websites commonly link to others, right?
This is really cool, but not having used linear algebra in a few years, I can just barely remember "yeah, eigenproblem, we did that"; the details are murky. I didn't even remember that matrix multiplication is associative :D I thought you had to do it from right to left
Is there a case where the transition matrix is not diagonalizable?
Any explanation of why the largest eigenvalue is always 1 and the rest are less than 1 but not negative?
give this man a diamond medal...
@drpeyam
A year ago
Awwwww
Brilliant idea, although you skipped some ideas that we hope to see in your next videos
@drpeyam
A year ago
Check out my playlist
@reu.mathematicsacademy8566
A year ago
@@drpeyam thank you Dr
I have a question: how does one prevent someone from abusing this algorithm by creating a bunch of dummy websites that link to your desired website, thus unfairly weighting the probability of landing on it?
@l_szabi
A year ago
In reality, Google doesn't just count how many websites link to your website; they count how many people actually visit your website coming from other sites. They can do this because they have their analytics code on almost all websites today. Of course this can be (and sometimes is) abused by spamming links etc., but there are also countermeasures. In other words, this is much, much more complicated than shown here.
@profesorjan7614
A year ago
@@l_szabi thanks!
I guess, Google can estimate the transition probabilities.
In this specific example the original matrix A has trace equal to zero, therefore the sum of the eigenvalues in D has to be equal to zero as well. So, with the exception of the first one, which equals 1, the other eigenvalues can be negative, with absolute values less than 1. Correct?
@drpeyam
A year ago
It’s possible
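For a 2x2 Markov matrix this reasoning can be made explicit: the eigenvalues are 1 and trace − 1, so they always sum to the trace, and a small trace forces the second eigenvalue to be negative. A quick check with made-up numbers:

```python
# A 2x2 column-stochastic matrix has the form [[a, 1-b], [1-a, b]].
# Its characteristic polynomial is x^2 - (a+b) x + (a+b-1), which factors
# as (x - 1)(x - (a + b - 1)): eigenvalues 1 and trace - 1.
a, b = 0.3, 0.5
A = [[a, 1 - b],
     [1 - a, b]]
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# product of the roots 1 * (trace - 1) must equal the determinant:
assert abs(det - (trace - 1)) < 1e-12
# the second eigenvalue here is trace - 1 = -0.2: negative, |value| < 1
```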
Excellent! Very nice explanation. But I was wondering: what if the matrix is not diagonalizable? Do you then use the Jordan normal form (generalized eigenvectors)?
@drpeyam
A year ago
Yes you would use the Jordan normal form
I miss the "chenlu" t-shirt. I want one, but sadly it's hard to ship to Indonesia 😅
At the beginning of the video the ranking of website importance was 3>4>1>2 and then at the end the video the inferred ranking was 1>3>4>2. Am I missing something?
Great video, but why does the sum of the terms of the final probability vector not equal 1?
@drpeyam
A year ago
It should, up to approximation
As a CS major, it's really cool seeing what seems to be a pointless class being shown in real examples at the same time.
I lost you at A equals PDP inverse . Haven’t taken a linear algebra class officially. Any simple books or lectures?
@drpeyam
A year ago
Check out my playlists!
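For anyone stuck at that step, here is a small sketch of what A = P D P^{-1} buys you (the 2x2 Markov matrix below is a made-up example, not the 4x4 one from the video):

```python
# Diagonalization in action: if A = P D P^{-1}, then A^n = P D^n P^{-1},
# and D^n is trivial to compute: just raise the diagonal entries
# (the eigenvalues) to the n-th power.

def mat_mul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.75, 0.25],
     [0.25, 0.75]]          # Markov matrix with eigenvalues 1 and 0.5
P = [[1.0, 1.0],
     [1.0, -1.0]]           # columns: eigenvectors [1, 1] and [1, -1]
D = [[1.0, 0.0],
     [0.0, 0.5]]            # eigenvalues on the diagonal
P_inv = [[0.5, 0.5],
         [0.5, -0.5]]

n = 30
Dn = [[D[0][0] ** n, 0.0],
      [0.0, D[1][1] ** n]]               # D^n: power the diagonal only
An = mat_mul(mat_mul(P, Dn), P_inv)      # A^n = P D^n P^{-1}
# The 0.5^30 term has essentially died out, so An is close to
# [[0.5, 0.5], [0.5, 0.5]], the limiting matrix.
```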
How is v0 computed in realistic settings?
@drpeyam
A year ago
The result is independent of v0 actually
Changing two letters of an exceptionally copyrighted symbol to Lambda should keep you out of troubles. 😬
Why is the sum norm chosen and not the l^2 norm, or any other norm for that matter?
@drpeyam
A year ago
They’re all equivalent actually
@ArturQML
A year ago
@@drpeyam For finite-dimensional spaces, right, or does it hold for all spaces? And even though they are equivalent, the "normalization" factor is still different; depending on what you're measuring, that scale factor can make things weird. Or is what I'm saying totally wrong?
As a fellow Golden Bear I can confirm that Cal shirts never fit 😆
@drpeyam
A year ago
😂😂
Just graduated and I don’t want to see any more of stochastic processes 😂
Nice; but do you have a rounding error? The sum 0.38 + 0.12 + 0.29 + 0.19 is just 0.98, and I believe it's necessary to have at least 5 values (rather than just 4) to fall that short of 1.00 under proper rounding. With 4 values the required average error is 0.02/4 = 0.005, so at least 1 must round up; whereas with 5 values the average error is just 0.004, and so all can round down.
@drpeyam
A year ago
Omg 😂
@pietergeerkens6324
A year ago
@@drpeyam All through middle school, and into high school, I was leaving careless errors in my wake like a cruise ship - until I determined after Grade 10 to stop doing so. Since then, it's become a bit of an OCD thing to search them out - just to keep in practice I guess. I hope you don't mind.
@andre-z
A year ago
I checked the last matrix more rigorously than Dr. Peyam did, and here is what I found. The first entry is 12/31 ≈ 0.3871; correctly rounded, that is 0.39. The second entry is 4/31 ≈ 0.1290; correctly rounded, 0.13. The third entry is 9/31 ≈ 0.2903, rounded 0.29. The fourth entry is 6/31 ≈ 0.1935, rounded 0.19. And 0.39 + 0.13 + 0.29 + 0.19 = 1.00. If students aren't allowed to be careless with rounding rules, why should the Doctor allow himself something that is pedagogically unacceptable for students? Mr. Peyam, please apply the rules of math correctly, even a minor rule like correct rounding.
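The rounding can also be double-checked with exact arithmetic in Python, using the fractions quoted in the comment above:

```python
# Verify the comment's fractions and their roundings exactly.
from fractions import Fraction

entries = [Fraction(12, 31), Fraction(4, 31), Fraction(9, 31), Fraction(6, 31)]
assert sum(entries) == 1                    # the exact entries sum to 1

rounded = [round(float(x), 2) for x in entries]
print(rounded)   # [0.39, 0.13, 0.29, 0.19] -- and these sum to 1.00
```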
In order for this algorithm to work, we absolutely need the matrix to behave in such a way that all its eigenvalues have absolute value at most 1 (otherwise it will diverge when raised to the infinite power), and at least one eigenvalue has to be exactly 1 (otherwise the matrix will converge to 0). So why can we guarantee these two properties of A?
@drpeyam
A year ago
I think it's because the column sums are 1
@TT-cf7xl
A year ago
You cannot always guarantee it. In general you need to adjust A using a damping factor, and then the two properties are guaranteed.
@TT-cf7xl
A year ago
The relevant theorem is called the Perron-Frobenius Theorem.
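A sketch of that damping-factor adjustment (the 3-site matrix is made up; d = 0.85 is the value commonly quoted for PageRank):

```python
# Damping-factor fix: G = d*A + (1-d)/n * J, where J is the all-ones
# matrix. G has strictly positive entries, so by the Perron-Frobenius
# theorem its largest eigenvalue is simple (and equals 1, since G is
# still column-stochastic), and power iteration converges no matter
# how A is wired.
d = 0.85                       # damping factor commonly quoted for PageRank
A = [[0.0, 0.5, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 0.5, 0.0]]          # made-up link matrix; columns sum to 1
n = len(A)
G = [[d * A[i][j] + (1 - d) / n for j in range(n)] for i in range(n)]

# G is still column-stochastic, and now every entry is > 0:
assert all(abs(sum(G[i][j] for i in range(n)) - 1) < 1e-12 for j in range(n))
assert all(G[i][j] > 0 for i in range(n) for j in range(n))
```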
Steve Strogatz did an oral explanation here: kzread.info/dash/bejne/c4yfk5SzXbGtgKw.html. It is nice to see some low-level calculations. Page and Brin did well out of linear algebra!!
you know our λ as loo, leeee, lemon, λέμον - smart
I was expecting the diagonal entries to be all ones and not zeros... You are already in the website so it's weird to have zeros
@drpeyam
A year ago
But it doesn’t link to itself though
Hey, why are the eigenvalues of the matrix so small? Another thing that bothers me: it is assumed that if a website has many links to other websites, it is relevant. But isn't it the case that these links also have to be clicked? I.e., there would still have to be a weighting for each connection showing how often it is used.
@vivvpprof
A year ago
Their absolute values are less than 1, so raised to an infinite power they go to zero.
....no magic @ 3:02 ???
@drpeyam
A year ago
Thank you hahaha, I forgot to cut that part out
I wished that you did a prank on us and called the algorithm PRank instead of PageRank.
Markov chain...
this seems familiar...
@drpeyam
A year ago
Hahaha I wonder why 😉
Google-schmoogle
Sorry to say, but it is the truth 💔💔💔... I am from Asia, where there is only theory, not real-life examples... I am a university student, and only today do I know what exactly a vector is... In my past student career I hated vectors a lot, though I love mathematics 💔💔💔
I don't get how peanuts got into the equation
Now if only you could crack the KZread algorithm ... 🤔