Google and eigenvalues

We describe the PageRank algorithm, which was one of the algorithms Google used for its search engine. We rank the websites using an importance vector and write the system as a Markov chain, using matrices. Then we diagonalize the matrix using linear algebra, more precisely eigenvalues and eigenvectors. Finally, by taking limits of powers of the matrix, we can determine which website reigns supreme. This probably has applications in data science and neural networks as well.
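For anyone who wants to experiment, here is a minimal sketch in Python of the limiting process described above. The 4-site link matrix and the NumPy usage are illustrative assumptions, not the exact example from the video:

```python
import numpy as np

# Column-stochastic "link" matrix: entry A[i, j] is the probability of
# jumping from website j to website i, so every column sums to 1.
# (This particular matrix is made up for illustration.)
A = np.array([
    [0,   1/2, 1/3, 0],
    [1/3, 0,   1/3, 0],
    [1/3, 1/2, 0,   1],
    [1/3, 0,   1/3, 0],
])

# Start from any probability vector; the limit does not depend on it.
v = np.full(4, 1/4)

# Applying A over and over approximates lim A^n v, which is the
# eigenvector of A with eigenvalue 1 (the "importance" vector).
for _ in range(100):
    v = A @ v

print(np.round(v, 2))  # larger entry = more important website
```

The same limit can also be computed from the diagonalization A = PDP^(-1), since A^n = P D^n P^(-1) and every diagonal entry of D other than the eigenvalue 1 dies off as n grows.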
Matrix Limits and Markov Chains: • Matrix Limits and Mark...
Diagonalization: • Diagonalize 3x3 matrix
Subscribe to my channel: / drpeyam
TikTok: / drpeyam
Instagram: / peyamstagram
Twitter: / drpeyam
Teespring merch: teespring.com/stores/dr-peyam

Comments: 104

  • @mastershooter64
    @mastershooter64 a year ago

    "80% of mathematics is linear algebra" -Raoul Bott

  • @jewulo
    @jewulo a year ago

    How can this be true?

  • @adamfattal468
    @adamfattal468 a year ago

    What about non-linear algebra?

  • @thesenate8268
    @thesenate8268 a year ago

    @@adamfattal468 it must be the 20% of mathematics, according to my calculations

  • @adamfattal468
    @adamfattal468 a year ago

    @@thesenate8268 I wonder what rigorous methodology is behind this result

  • @amo6139
    @amo6139 a year ago

    @@adamfattal468 linear algebra

  • @lizziethelemon
    @lizziethelemon a year ago

    Dr Peyam! I have my Linear Algebra exam this week! You've helped me revise a lot and understand many of the concepts! I'm so grateful for you and grateful you exist. You're a wonderful person and such an empathetic teacher! 💛

  • @drpeyam
    @drpeyam a year ago

    Thank you!!!

  • @falconajc4113
    @falconajc4113 a year ago

    this is awesome, I just finished linear algebra 1, and seeing how concepts like eigenvalues interact in the real world is super cool

  • @hungryplate400
    @hungryplate400 a year ago

    Learning about this in my data science classes. Truly exciting stuff!

  • @HelloWorld-dq5pn
    @HelloWorld-dq5pn a year ago

    Classic Markov chain problem!! For those who don't understand why 1 is always an eigenvalue of a Markov matrix, consider Av = cv for some constant c and some Markov vector v (and of course, the Markov matrix A); take A to be a 2x2 matrix for simplicity. Now multiply both sides by the row vector k = [1 1]: the left side kAv equals kv (since the columns of A sum to 1, kA = k), and the right side is c·kv, so kv = c·kv, which gives c = 1.

  • @thesharpshooter157
    @thesharpshooter157 a year ago

    Building on this and explaining all the steps for anyone who is stuck. Notice that v is itself a probability distribution across states (websites in this case), which is why kv = 1: the entries of a probability distribution sum to 1 by definition. Therefore the right-hand side c·kv reduces to c·1 = c. For the left-hand side kAv, notice that A is a square matrix where each column is a probability distribution across states (by definition), so kA = [1 1] = k (in this case; in general it is a row of ones of length n). Then kAv = kv = 1. Setting the two sides equal gives c = 1, so the eigenvalue attached to such a v (which we call v_infinity, also called the stationary distribution of the Markov matrix A) is 1.
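A quick numerical check of the claim in this thread; the small column-stochastic matrix below is made up purely for illustration:

```python
import numpy as np

# Any Markov (column-stochastic) matrix: non-negative entries, columns sum to 1.
A = np.array([
    [0.0, 0.5, 0.2],
    [0.7, 0.0, 0.3],
    [0.3, 0.5, 0.5],
])

# The row vector of ones is fixed by A from the left, because each column sums to 1.
k = np.ones(3)
print(np.allclose(k @ A, k))        # True: kA = k

# Consequently 1 appears among the eigenvalues, and none exceeds 1 in modulus.
eigenvalues = np.linalg.eigvals(A)
print(np.round(eigenvalues, 3))
print(np.max(np.abs(eigenvalues)))  # ≈ 1.0
```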

  • @randyt700
    @randyt700 a year ago

    Sir, this is a Wendy's

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w a year ago

    what a joy to see math so clearly explained. that was awesome! you basically capture the essence of it really well.

  • @bagusamartya5325
    @bagusamartya5325 a year ago

    I remember encountering your videos back in high school and understanding only like 2% of them. I still don't understand lots, but I'm glad to know that you're still going strong educating people through the internet

  • @drpeyam
    @drpeyam a year ago

    Thank you!!!

  • @DrWeselcouch
    @DrWeselcouch a year ago

    Good timing! I just gave my students a project on this topic!

  • @colleen9493
    @colleen9493 a year ago

    We actually learned about this the last day of my linear algebra class! (Last year)

  • @gatitoconsueter
    @gatitoconsueter a year ago

    Great video, all your videos in fact. It's good to see someone explaining in an easy way how this works. I've been following you on YT for years and you keep up the quality and motivation; keep it up. There is a book called Google's PageRank and Beyond: The Science of Search Engine Ranking. I read it a while back, and it discussed some methods to find eigenvalues faster, optimize the algorithms, etc. Sometimes we don't appreciate it, but it is important to study and develop this because it is our daily working tool, at least in modern society. A hug, Doc, and thanks for the video... ;0

  • @titian5298
    @titian5298 a year ago

    Abstractions are so cool! How they translate to real world ideas is even cooler!

  • @MyOneFiftiethOfADollar
    @MyOneFiftiethOfADollar a year ago

    Not too surprising since many abstractions originate as a special case of a "real world idea"

  • @dominicellis1867
    @dominicellis1867 a year ago

    Great video as always.

  • @Ryokusei2
    @Ryokusei2 a year ago

    I have two questions... 1. What assures us that the other elements of D are less than 1? 2. What is the name of the theorem mentioned in 10:04? PS: great video :)

  • @mltyblnd
    @mltyblnd 2 months ago

    Because we're dealing with numbers whose absolute value is less than 1, and when you raise such a number to higher and higher powers it gets super close to 0, so those entries won't matter.

  • @shamaldesilva9533
    @shamaldesilva9533 a year ago

    This is amazing!!!! Please make more videos like this where you explain how these technologies use mathematics 🤩🤩🤩🤩🤩🤩

  • @holyshit922
    @holyshit922 a year ago

    5:30, and that's how the power method works for the dominant eigenvalue and corresponding eigenvector

  • @user-vq8on7dh1y
    @user-vq8on7dh1y a year ago

    4:31 memerable MAGIC

  • @Enigma758
    @Enigma758 a year ago

    I've seen a similar analysis applied to the game of Monopoly. There was an article about that years ago in Scientific American.

  • @igorkuivjogifernandes3012
    @igorkuivjogifernandes3012 a year ago

    You are the happiest mathematician on YouTube! This is awesome

  • @drpeyam
    @drpeyam a year ago

    Thank youuuu

  • @tristanmoller9498
    @tristanmoller9498 a year ago

    Love to see the math I learned applied! I study mech engineering not computer science but we also have some lin algebra. But in mech engineering, it feels like the only reason to study linear algebra for 2 semesters is to solve differential equation systems (except for statistics maybe). So seeing linear algebra being useful by itself is pretty cool!

  • @zb0921
    @zb0921 a year ago

    It’s used very frequently in multibody system dynamics when dealing with positions or forces of rigid bodies in different orientations

  • @moinauch3041
    @moinauch3041 a year ago

    4:24 „… the same Spiel with….“. 🤗

  • @user-bk2fo7ny9s
    @user-bk2fo7ny9s a year ago

    Love Markov matrices...

  • @garyhuntress6871
    @garyhuntress6871 a year ago

    I really like the linear algebra problems

  • @drpeyam
    @drpeyam a year ago

    Sameeee

  • @haritssyah7434
    @haritssyah7434 a year ago

    Beautiful

  • @uraniumra
    @uraniumra a year ago

    Cool system, and even cooler video! As a physics student, as soon as you said transition matrix, I got flashbacks of hopping probabilities from quantum classes D: (also, go bears!!)

  • @sheharyarkhan9453
    @sheharyarkhan9453 a year ago

    I understand it is a Markov chain problem, just like the weather prediction models. What I am curious about is the astounding similarity between this and the functionality of a Moore-type finite state machine. Instead of states we have probabilities here.

  • @tannersundwall12
    @tannersundwall12 8 months ago

    this is great!! how do we know the diagonalized matrix will have rapidly decaying eigenvalues?

  • @wokeclub1844
    @wokeclub1844 a year ago

    Love your energy! Btw what exactly do the connections between the states/websites mean in the real world? You mentioned a website can link to other websites, but that is very rare, right?

  • @dudeabideth4428
    @dudeabideth4428 a year ago

    The count of websites having links to others is what is used, at least that was true in the early days. Websites commonly link to others, right?

  • @vivvpprof
    @vivvpprof a year ago

    This is really cool, but not having used algebra in a few years I can just barely remember "yeah, eigenproblem, we did that"; the details are murky. I didn't even remember that matrix multiplication is associative :D I thought you had to do this from right to left

  • @weliveinasociety420
    @weliveinasociety420 a year ago

    Is there a case where the transition matrix is not diagonalizable?

  • @yackawaytube
    @yackawaytube 4 months ago

    Any explanation on why the largest eigenvalue is always 1 and the rest are less than 1 but not negative?

  • @yassinetiaret505
    @yassinetiaret505 a year ago

    give this man a diamond medal...

  • @drpeyam
    @drpeyam a year ago

    Awwwww

  • @reu.mathematicsacademy8566
    @reu.mathematicsacademy8566 a year ago

    Brilliant idea, although you skipped some ideas that we hope to see in your next videos

  • @drpeyam
    @drpeyam a year ago

    Check out my playlist

  • @reu.mathematicsacademy8566
    @reu.mathematicsacademy8566 a year ago

    @@drpeyam thank you Dr

  • @profesorjan7614
    @profesorjan7614 a year ago

    I have a question: how does one prevent someone from abusing this algorithm by creating a bunch of dummy websites that link to your desired website, thus unfairly weighting the probabilities of landing on it?

  • @l_szabi
    @l_szabi a year ago

    In reality, Google doesn't count how many websites link to your website; they count how many people actually visit your website coming from other sites. They can do this because they have their analytics code on almost all websites today. Of course this can be (and sometimes is) abused by spamming links etc., but there are also countermeasures. In other words, this is much, much more complicated than shown here.

  • @profesorjan7614
    @profesorjan7614 a year ago

    @@l_szabi thanks!

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w a year ago

    I guess, Google can estimate the transition probabilities.

  • @sigfridsixsis3255
    @sigfridsixsis3255 a year ago

    In this specific example the original matrix A has trace equal to zero, therefore the sum of the eigenvalues in D has to be zero as well. So, with the exception of the first one, which is equal to 1, all the other eigenvalues can be negative, with absolute values less than 1. Correct?

  • @drpeyam
    @drpeyam a year ago

    It’s possible

  • @speedbird7587
    @speedbird7587 a year ago

    Excellent! Very nice explanation, but I was wondering: what if the matrix is not diagonalizable? Do you then use the Jordan normal form (generalized eigenvectors)?

  • @drpeyam
    @drpeyam a year ago

    Yes you would use the Jordan normal form

  • @ekadria-bo4962
    @ekadria-bo4962 a year ago

    I miss the "chenlu" t-shirt. I want one, but sadly it's hard to ship to Indonesia 😅

  • @leukosnanos
    @leukosnanos a year ago

    At the beginning of the video the ranking of website importance was 3>4>1>2, and then at the end of the video the inferred ranking was 1>3>4>2. Am I missing something?

  • @Simon-cz1jg
    @Simon-cz1jg a year ago

    Great video, but why does the sum of the terms of the final probability vector not equal 1?

  • @drpeyam
    @drpeyam a year ago

    It should, up to approximation

  • @liamwelsh5565
    @liamwelsh5565 a year ago

    As a CS major, it's really cool to see what seems like a pointless class actually being used in real examples.

  • @dudeabideth4428
    @dudeabideth4428 a year ago

    I lost you at A equals PDP inverse . Haven’t taken a linear algebra class officially. Any simple books or lectures?

  • @drpeyam
    @drpeyam a year ago

    Check out my playlists!

  • @christiancahig2269
    @christiancahig2269 a year ago

    How is v0 computed in realistic settings?

  • @drpeyam
    @drpeyam a year ago

    The result is independent of v0 actually

  • @MyOneFiftiethOfADollar
    @MyOneFiftiethOfADollar a year ago

    Changing two letters of an exceptionally copyrighted symbol to Lambda should keep you out of trouble. 😬

  • @ArturQML
    @ArturQML a year ago

    Why is the sum norm chosen and not the l^2 norm, or any other norm for that matter?

  • @drpeyam
    @drpeyam a year ago

    They’re all equivalent actually

  • @ArturQML
    @ArturQML a year ago

    @@drpeyam For finite-dimensional spaces, right? Or does it hold for all spaces? And even though they are equivalent, the "normalization" factor is still different; depending on what you're measuring, this scale factor can make things weird. Or is what I'm saying totally wrong?

  • @manonamission2000
    @manonamission2000 a year ago

    As a fellow Golden Bear I can confirm that Cal shirts never fit 😆

  • @drpeyam
    @drpeyam a year ago

    😂😂

  • @kriszhli
    @kriszhli a year ago

    Just graduated and I don't want to see any more stochastic processes 😂

  • @pietergeerkens6324
    @pietergeerkens6324 a year ago

    Nice; but do you have a rounding error? The sum 0.38 + 0.12 + 0.29 + 0.19 is just 0.98, and I believe it's necessary to have at least 5 values (rather than just 4) to fall that short of 1.00 under proper rounding. With 4 values the average error is 0.02/4 = 0.005, so at least 1 must round up; with 5 values the average error is just 0.004, so all can round down.

  • @drpeyam
    @drpeyam a year ago

    Omg 😂

  • @pietergeerkens6324
    @pietergeerkens6324 a year ago

    @@drpeyam All through middle school, and into high school, I was leaving careless errors in my wake like a cruise ship - until I determined after Grade 10 to stop doing so. Since then, it's become a bit of an OCD thing to search them out - just to keep in practice I guess. I hope you don't mind.

  • @andre-z
    @andre-z a year ago

    I checked the last matrix more rigorously than Dr. Peyam did, and here is what I found. The first entry is 12/31 ≈ 0.3871, correctly rounded to 0.39. The second entry is 4/31 ≈ 0.1290, rounded to 0.13. The third entry is 9/31 ≈ 0.2903, rounded to 0.29. The fourth entry is 6/31 ≈ 0.1935, rounded to 0.19. And 0.39 + 0.13 + 0.29 + 0.19 = 1.00. If students aren't allowed to make mistakes in applying the rounding rules, why should the Doctor allow himself something that is morally and pedagogically unacceptable for students? Mr. Peyam, please use the rules of math correctly, even when it's a rule as minor as the rule of correct rounding.

  • @Friek555
    @Friek555 a year ago

    In order for this algorithm to work, we absolutely need our matrix to behave in such a way that all its eigenvalues have absolute value at most 1 (otherwise it will diverge when raised to infinity), and at least one of the eigenvalues has to be exactly 1 (otherwise the matrix will converge to 0). So why can we guarantee these two properties of A?

  • @drpeyam
    @drpeyam a year ago

    I think it's because the sums are 1

  • @TT-cf7xl
    @TT-cf7xl a year ago

    You cannot always guarantee it. In general you need to adjust A using a damping factor, and then the two properties are guaranteed.

  • @TT-cf7xl
    @TT-cf7xl a year ago

    The relevant theorem is called the Perron-Frobenius Theorem.
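For anyone curious, here is a minimal sketch of the damping-factor adjustment mentioned in this thread; the toy link matrix and the damping value 0.85 are assumptions for illustration. Blending the link matrix with the uniform matrix makes every entry strictly positive, so the Perron-Frobenius theorem gives exactly the two properties asked about above: 1 is a simple eigenvalue, and every other eigenvalue is strictly smaller in modulus.

```python
import numpy as np

def google_matrix(A, d=0.85):
    """Damped matrix G = d*A + (1-d)/n * (all-ones matrix).

    If A is column-stochastic, G is column-stochastic and strictly positive,
    so 1 is a simple eigenvalue and all other eigenvalues are strictly
    smaller in modulus, which makes the limit/power iteration converge.
    """
    n = A.shape[0]
    return d * A + (1 - d) / n * np.ones((n, n))

# Toy column-stochastic link matrix (illustrative only).
A = np.array([
    [0.0, 0.5, 1.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0],
])

G = google_matrix(A)
moduli = sorted(np.abs(np.linalg.eigvals(G)), reverse=True)
print(np.round(moduli, 3))  # 1.0 first, all the rest strictly below 1
```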

  • @peterhall6656
    @peterhall6656 a year ago

    Steve Strogatz did an oral explanation here: kzread.info/dash/bejne/c4yfk5SzXbGtgKw.html. It is nice to see some low-level calculations. Page and Brin did well out of linear algebra!!

  • @erimosini
    @erimosini a year ago

    you know our λ as loo, leeee, lemon, λέμον - smart

  • @MrLeonardo9
    @MrLeonardo9 a year ago

    I was expecting the diagonal entries to be all ones and not zeros... You are already on the website, so it's weird to have zeros

  • @drpeyam
    @drpeyam a year ago

    But it doesn’t link to itself though

  • @georgfolwerk2531
    @georgfolwerk2531 a year ago

    Hey, why are the eigenvalues of the Matrix so small? Another thing that bothers me is that it is assumed that if a website has many links to other websites, it is relevant. But isn't it the case that these links also have to be clicked? I.e. there would still have to be a weighting for each connection that shows how often this connection is used.

  • @vivvpprof
    @vivvpprof a year ago

    They're 0 < x < 1, so raised to an infinite power they go to zero.

  • @pedllz
    @pedllz a year ago

    ....no magic @ 3:02 ???

  • @drpeyam
    @drpeyam a year ago

    Thank you hahaha, I forgot to cut that part out

  • @josephvanname3377
    @josephvanname3377 a year ago

    I wished that you did a prank on us and called the algorithm PRank instead of PageRank.

  • @birdie123
    @birdie123 a year ago

    Markov chain...

  • @keys
    @keys a year ago

    this seems familiar...

  • @drpeyam
    @drpeyam a year ago

    Hahaha I wonder why 😉

  • @Halleluyah83
    @Halleluyah83 a year ago

    Google-schmoogle

  • @arslanaliwarsi6309
    @arslanaliwarsi6309 a year ago

    Sorry to say, but it is the truth 💔💔💔... I am from Asia, where there is only theory, not real-life examples... I am a university student and only today do I know what exactly a vector is... In my past student career I hated vectors a lot, though I love mathematics 💔💔💔

  • @alcoholrelated4529
    @alcoholrelated4529 a year ago

    I don't get how peanuts got into the equation

  • @nHans
    @nHans a year ago

    Now if only you could crack the KZread algorithm ... 🤔