Machine Learning Lecture 26 "Gaussian Processes" -Cornell CS4780 SP17

Cornell class CS4780. (Online version: tinyurl.com/eCornellML )
GPyTorch GP implementation: gpytorch.ai/
Lecture Notes:
www.cs.cornell.edu/courses/cs4...
Small corrections:
Minute 14: it should be P(y,w|x,D) and not P(y|x,w,D); sorry about that typo.
Also, the variance term at 40:20 should be K** - K* K^-1 K*.
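
For reference, the full corrected posterior written out (a restatement of the two fixes above, not new material from the lecture): with train kernel matrix K, train/test kernel matrix K_*, test kernel matrix K_**, and label noise sigma^2,

    \mu_*    = K_*^{\top} (K + \sigma^2 I)^{-1} y
    \Sigma_* = K_{**} - K_*^{\top} (K + \sigma^2 I)^{-1} K_*

(In the noise-free case written on the board, sigma^2 = 0; the transposes depend on whether K_* is oriented train-by-test or test-by-train.)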

Comments: 102

  • @pandasstory
    4 years ago

    I got my first data science internship after watching all the lectures, and now I'm revisiting them during the quarantine and still benefiting a lot. This whole series is a legend. Thank you so much, Professor Kilian! Stay safe and healthy!

  • @kilianweinberger698
    4 years ago

    Awesome! I am happy they are useful to you!

  • @jiahao2709
    4 years ago

    He is the most interesting ML professor that I have ever seen on the Internet.

  • @horizon2reach561
    4 years ago

    There are no words to describe the power of the intelligence in this lecture, thanks a lot for sharing it.

  • @karl-henridorleans5081
    4 years ago

    8 hours of scraping the internet, but the 9th was the successful one. You, sir, have explained and answered all the questions I had on the subject, and raised much more interesting ones. Thank you very much!

  • @TeoChristopher
    4 years ago

    Best prof that I've experienced so far. I love the way he tries to build sensible intuition behind the math. FYI, love the sense of humour.

  • @damian_smith
    2 months ago

    Loved that "the answer will always be Gaussian, the whole lecture!" moment.

  • @rshukla64
    5 years ago

    That was a truly amazing lecture from an intuitive teaching perspective. I LOVE THE ENERGY!

  • @miguelalfonsomendez2224
    3 years ago

    amazing lecture in every possible aspect: bright, funny, full of energy... a true inspiration!

  • @ikariama100
    A year ago

    Currently writing my master's thesis working with Bayesian optimization, thank god I found this video!

  • @gareebmanus2387
    3 years ago

    Thanks for sharing the excellent lecture. @27:00 About the house's price: the contour plot was always drawn in the first quadrant, but the Gaussian contours should have been extended over the entire plane. This actually is a drawback of the Gaussian: while we know that the house's price can't be negative, and we do not wish to consider the negative range in our model at all, we can't avoid it: the Gaussian would allow non-zero probability for the negative price intervals as well.

  • @jiageng1997
    2 years ago

    Exactly, I was so confused why he drew it as a peak rather than a ridge.

  • @saikumartadi8494
    4 years ago

    The explanation was great! Thanks a lot. It would be great if you uploaded videos of the other courses you taught at Cornell, because not everyone is lucky enough to get a teacher like you :)

  • @yibinjiang9009
    3 years ago

    The best GP lecture I've found. Simple enough and makes sense.

  • @kiliandervaux6675
    3 years ago

    The comparison with the house prices to explain the covariance was very pertinent. I never heard it anywhere else. Thanks!

  • @kilianweinberger698
    2 years ago

    From one Kilian to another! :-)

  • @rossroessler5159
    6 months ago

    Thank you so much for the incredible lecture and for sharing the content on KZread! I'm a first year Master's student and this is really helping me self study a lot of the content I didn't learn in undergrad. I hope I can be a professor like this one day.

  • @CibeSridharanK
    4 years ago

    Awesome explanation. That house example explains it in very layman's terms.

  • @rajm3496
    4 years ago

    Very intuitive and easy to follow. Loved it!

  • @benoyeremita1359
    A year ago

    Sir your lectures are really amazing, you give so many insights I would've never thought of. Thank you

  • @ylee5269
    5 years ago

    Thanks for such a good lecture and nice explanation. I was struggling to understand Gaussian processes for a while until I saw your video.

  • @George-lt6jy
    3 years ago

    This is a great lecture, thanks for sharing it. I also appreciate that you took the time to add the lecture corrections.

  • @htetnaing007
    2 years ago

    People like this are truly a gift to mankind!

  • @alvarorodriguez1592
    4 years ago

    Hooray! Gaussian processes for dummies! Exactly what I was looking for. Thank you very much.

  • @prizmaweb
    5 years ago

    This is a more intuitive explanation than the Sheffield summer school GP videos

  • @salmaabdelmonem7482
    4 years ago

    the best GP lecture ever, impressive work (Y)

  • @mostofarafiduddin9361
    3 years ago

    Best lecture on GPs! Thanks.

  • @massisenergy
    4 years ago

    It might have only 112 likes and ~5000 views at the moment I comment, but it will have a profound influence on the people who watched it, and it will stick in their minds!

  • @danielism8721
    4 years ago

    AMAZING LECTURER

  • @rohit2761
    2 years ago

    Kilian is an ML god. Why do crappy lectures get so many views while this gold playlist gets so few? Part of me hopes people don't find it, to keep the competition down. But still, Kilian is a god and this is a gold series. Please upload deep learning as well.

  • @erenyeager4452
    3 years ago

    I love you. Thank you for explaining why you can model it as a Gaussian.

  • @parvanehkeyvani3852
    A year ago

    Amazing, I really love the teacher's energy.

  • @DJMixomnia
    4 years ago

    Thanks Kilian, this was really insightful!

  • @fierydino9402
    4 years ago

    Thank you so much for this clear lecture :D It helped me a lot!!

  • @naifalkhunaizi4372
    2 years ago

    Professor Kilian, you are truly an amazing professor.

  • @jaedongtang37
    5 years ago

    Really nice explanation.

  • @laimeilin6708
    4 years ago

    Woo, these are Andrew Ng-level explanations!! Thank you for making these videos. :)

  • @gyeonghokim
    A year ago

    Such a wonderful lecture!

  • @tintin924
    4 years ago

    Best lecture on Gaussian Processes

  • @galexwong3368
    5 years ago

    Really awesome teaching

  • @hamade7997
    A year ago

    Insane lecture. This helped so much, thank you.

  • @siyuanma2323
    4 years ago

    Looooove this lecture!

  • @clementpeng
    3 years ago

    amazing explanation!

  • @jiahao2709
    5 years ago

    Your lecture is really, really good! I have a question here: if the input also has noise, how can we use Bayesian linear regression? Most books mention Gaussian noise in the label, but I think it is also quite possible to have some noise in the input X.

  • @preetkhaturia7408
    3 years ago

    Thank you for an amazing lecture, sir!! :)

  • @sarvasvarora
    3 years ago

    "What the bleep" HAHAH, it was genuinely interesting to look at regression from this perspective!

  • @rorschach3005
    3 years ago

    Really insightful lecture series, and I have to say I gained a lot from it. An important correction for the beginning: sums and products of normal random variables are not always normal. The sum of two Gaussians is Gaussian only if they are independent or jointly normal. No such rule exists for products, as far as I remember.

  • @kilianweinberger698
    3 years ago

    Yes, that came out wrong. What I wanted to say is that the product of two normal PDFs is proportional to a normal PDF (which is something that comes up a lot in Bayesian statistics).

  • @rorschach3005
    3 years ago

    @kilianweinberger698 Thanks for replying. I am not sure that I understand what you meant by proportional to a normal. The product of two normals is generally in the form of a combination of chi-square variables: XY = ((X+Y)^2 - (X-Y)^2)/4. Please correct me if I am missing something.

  • @fowlerj111
    A year ago

    @rorschach3005 I had the same reaction, and I think I've resolved it. "Product of Gaussians" can be interpreted in two different ways. You and I considered the distribution of z where z = x*y and x and y are Gaussian; by this definition, z is definitely not Gaussian. KW is saying that if you define the pdf of z to be the product of the pdfs of x and y, normalized, then z is Gaussian. This is the property exploited in the motivating integral - note that probability densities are multiplied, but actual random variables are never multiplied.
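
    A quick numerical check of that distinction (a hypothetical numpy sketch, not from the lecture; all names are made up): the renormalized product of two Gaussian densities matches a Gaussian density, while the product z = x*y of two Gaussian random variables has kurtosis around 9 instead of 3, so it cannot be Gaussian.

    import numpy as np

    # (1) Product of two Gaussian PDFs, renormalized: again a Gaussian PDF.
    #     Closed form: 1/s^2 = 1/s1^2 + 1/s2^2,  m = s^2 * (m1/s1^2 + m2/s2^2).
    m1, s1, m2, s2 = 0.0, 1.0, 2.0, 0.5
    grid = np.linspace(-5.0, 5.0, 2001)
    pdf = lambda x, m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    prod = pdf(grid, m1, s1) * pdf(grid, m2, s2)
    prod /= prod.sum() * (grid[1] - grid[0])               # renormalize so it integrates to 1
    s_new = 1.0 / np.sqrt(1.0 / s1**2 + 1.0 / s2**2)       # closed-form parameters
    m_new = s_new**2 * (m1 / s1**2 + m2 / s2**2)
    print(np.abs(prod - pdf(grid, m_new, s_new)).max())    # ~0: it is a Gaussian density

    # (2) Product of two Gaussian RANDOM VARIABLES: not Gaussian.
    rng = np.random.default_rng(0)
    z = rng.normal(size=200_000) * rng.normal(size=200_000)
    print(((z - z.mean()) ** 4).mean() / z.var() ** 2)     # kurtosis ~9 (a Gaussian has 3)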

  • @Higgsinophysics
    2 years ago

    Brilliant and interesting!

  • @Ankansworld
    3 years ago

    What a teacher!!

  • @logicboard7746
    2 years ago

    The last demo was great for understanding GPs.

  • @iusyiftgkl7346u
    4 years ago

    Thank you so much!

  • @CalvinJKu
    3 years ago

    Hypest GP lecture ever LOL

  • @kevinshao9148
    4 months ago

    Thanks for the brilliant lecture! One confusion, if I may: at 39:18 you change the conditional probability P(y1...yn | x1...xn) based on data D to P(y1...yn, y_test | x1...xn, x_test). My questions: 1) Before the test data point arrives, do we already have a joint distribution P(y1...yn, x1...xn) based on D? 2) Once the test point comes in, do we need to form another Gaussian N(mean, variance) for (y1...yn, x1...xn, y_test, x_test)? If so, how do we get the covariance term between the test data point and each training data point? So basically, for prediction with a new x_test, what exact parameters do we get for the y_test distribution (how do we get the mean and variance)? Many thanks!

  • @CibeSridharanK
    4 years ago

    18:08 I have a doubt: we are not constructing a single line, instead we are comparing with every possible line nearby. Does that mean we are indirectly accounting for w using the covariance matrix?

  • @SubodhMishrasubEE
    3 years ago

    The professor's throat is unable to keep up with his excitement!

  • @franciscos.2301
    3 years ago

    *Throat clearing sounds*

  • @vishaljain4915
    A month ago

    What was the question at 14:30, does anyone know? Brilliant lecture - easily a new all-time favourite.

  • @dr.vinodkumarchauhan3454
    2 years ago

    Beautiful

  • @pratyushkumar9037
    4 years ago

    Professor Kilian, I don't understand how you got mean = K* K^-1 Y and variance = K** - K* K^-1 K* for the normal distribution?

  • @kilianweinberger698
    3 years ago

    It is just the conditional distribution of a multivariate Gaussian (see e.g. en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions ; here Sigma is our K).
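
    For what it's worth, here is a minimal numpy sketch of that conditioning formula (illustration only; the RBF kernel choice and the names rbf_kernel / gp_posterior are assumptions, not lecture code). It computes mean = K_*^T K^-1 y and covariance = K_** - K_*^T K^-1 K_*:

    import numpy as np

    def rbf_kernel(A, B, lengthscale=1.0):
        """k(a, b) = exp(-||a - b||^2 / (2 * lengthscale^2))."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    def gp_posterior(X_train, y_train, X_test, noise=1e-6):
        K    = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))  # K (plus jitter)
        K_s  = rbf_kernel(X_train, X_test)                                   # K_*
        K_ss = rbf_kernel(X_test, X_test)                                    # K_**
        alpha = np.linalg.solve(K, y_train)        # K^{-1} y
        V     = np.linalg.solve(K, K_s)            # K^{-1} K_*
        mean = K_s.T @ alpha                       # K_*^T K^{-1} y
        cov  = K_ss - K_s.T @ V                    # K_** - K_*^T K^{-1} K_*
        return mean, cov

    # Tiny 1-D usage example:
    X  = np.array([[-2.0], [0.0], [1.5]])
    y  = np.sin(X).ravel()
    Xs = np.linspace(-3.0, 3.0, 5)[:, None]
    mu, cov = gp_posterior(X, y, Xs)
    print(mu, np.diag(cov))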

  • @dheerajbaby
    3 years ago

    Thanks for a great lecture. I am a bit confused about the uncertainty estimates. How can we formally argue that the posterior variance at any point is telling us something really useful? For example, let's say we consider a simple setup where the training data is generated as y_i = f(x_i) + N(0, sigma^2), i = 1,...,n, and f is a sample path of the GP(0,k). Then is it possible to construct a high-probability confidence band that traps the ground truth f_i using the posterior covariance and mean functions? After all, if I understood correctly, the main plus point of GP regression over kernel ridge regression is the posterior covariance.

  • @dheerajbaby
    3 years ago

    I actually found all my questions answered in this paper: arxiv.org/pdf/0912.3995.pdf , which is the test-of-time paper at ICML 2020.

  • @yuanchia-hung8613
    3 years ago

    These lectures definitely have some problems... I have no idea why they are even more interesting than Netflix series lol

  • @sulaimanalmani
    3 years ago

    Before starting the lecture, I thought this must be an exaggeration, but after watching it, this is actually true!

  • @vikramnanda2833
    A year ago

    Which course should I take to learn data science or machine learning?

  • @ejomaumambala5984
    4 years ago

    Great lectures! Really enjoyable. There's an important mistake at 40:20, I think? The variance is not K** K^-1 K*, as Kilian wrote, but rather K** - K* K^-1 K*.

  • @kilianweinberger698
    4 years ago

    Yes, good catch! Thanks for pointing this out. Luckily it is correct in the notes: www.cs.cornell.edu/courses/cs4780/2018fa/lectures/lecturenote15.html

  • @yannickpezeu3419
    3 years ago

    Thanks

  • @DrEhrfurchtgebietend
    4 years ago

    It is worth pointing out that while there is no specific model, there is an analytic model being assumed. In this case he assumed a linear model.

  • @zhongyuanchen8424
    3 years ago

    Why is the integral over w of P(y|x,w)P(w|D) equal to P(y|x,D)? Is it because P(w|D) = P(w|D,x)?

  • @kilianweinberger698
    3 years ago

    P(y|x,w)P(w|D) = P(y,w|x,D). If you now integrate out w, you obtain P(y|x,D). (Here x is the test point, and D is the training data.) If you want to make it clearer, you can also use the following intermediate step: P(y|x,w) = P(y|x,w,D). You can condition on D here because y is conditionally independent of D when x and w are given. For the same reason you can write P(w|D) = P(w|D,x), as w does not depend on the test point x (it is only fitted on the training data). Hope this helps.
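
    Writing the same steps out as one chain (a LaTeX restatement of the reply above, nothing new):

    p(y \mid x, D)
      = \int p(y, w \mid x, D)\, dw
      = \int p(y \mid x, w, D)\, p(w \mid x, D)\, dw
      = \int p(y \mid x, w)\, p(w \mid D)\, dw,

    where the last step uses y \perp D \mid (x, w) for the first factor and w \perp x \mid D for the second.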

  • @mutianzhu5128
    4 years ago

    I think there is a typo at 40:18 for the variance.

  • @ejomaumambala5984
    4 years ago

    Yes, I agree. The variance is not K** K^-1 K*, as Kilian wrote, but rather K** - K* K^-1 K*.

  • @christiansetzkorn6241
    2 years ago

    Sorry, but why a correlation of 10 for the POTUS example? Correlation can only be -1 ... 1?!

  • @zvxcvxcz
    3 years ago

    Really making concrete what I've known about ML for some time. There is no such thing as ML, it is all just glorified correlation :P

  • @hossein_haeri
    3 years ago

    What exactly is K**? Isn't it always ones(m,m)?

  • @kilianweinberger698
    3 years ago

    No, it depends on the kernel function. But it is the inner product of the test point(s) with itself/themselves.
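
    A tiny numpy illustration of that point (a hypothetical example, not from the course): K_** is the kernel evaluated among the test points themselves, so it is all ones only for special kernels and inputs.

    import numpy as np

    X_test = np.array([[0.0, 1.0],
                       [2.0, -1.0]])               # m = 2 test points

    K_ss_linear = X_test @ X_test.T                # linear kernel: plain inner products
    d2 = ((X_test[:, None, :] - X_test[None, :, :]) ** 2).sum(-1)
    K_ss_rbf = np.exp(-0.5 * d2)                   # RBF kernel, lengthscale 1

    print(K_ss_linear)   # [[ 1. -1.] [-1.  5.]]  -- clearly not ones(m, m)
    print(K_ss_rbf)      # ones on the diagonal, exp(-4) off the diagonal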

  • @gregmakov2680
    A year ago

    Hahaha, any student who can understand this lecture is a genius :D:D:D:D it mixes everything together :D:D so confusing.

  • @namlehai2737
    7 months ago

    Lots of people do, actually.

  • @vatsan16
    4 years ago

    "One line of julia... two lines of python!!" whats with all the python hate professor? :P

  • @zvxcvxcz
    3 years ago

    Oh come on, two isn't so bad, do you know how many it is in assembly? :P

  • @arihantjha4746
    3 years ago

    Since p(xi,yi;w) = p(yi|xi;w)p(xi), and during MLE and MAP we ignore p(xi) because it is independent of w, we get the likelihood function (the product over i = 1 to n of p(yi|xi;w)). But here, why do we simply start with P(D;w) equal to the likelihood function? Shouldn't P(D;w) be the product over i of p(yi|xi;w) p(xi), where p(xi) is some arbitrary distribution (it is independent of w and no assumptions are made about it), while p(yi|xi;w) is Gaussian? Since only multiplying a Gaussian with a Gaussian gives us a Gaussian, how is the answer a Gaussian when p(xi) is not Gaussian? Ignoring p(xi) during MLE and MAP makes a lot of sense since it is independent of theta, but why isn't it included when writing P(D;w) in the first place? Do we just assume that, since the xi are given to us and we don't model p(xi), p(xi) is a constant for all xi? Can anyone help? Also, thank you for the lectures, Prof.

  • @kilianweinberger698
    3 years ago

    The trick is that P(D;w) is inside a maximization with respect to the parameters w. Because P(x_i) is independent of w, it is just a constant we can drop: max_w P(D;w) = max_w ∏_i P(x_i,y_i;w) = max_w (∏_i P(y_i|x_i;w)) · (∏_i P(x_i)). The last term is a multiplicative constant that you can pull out of the maximization and drop, as it won't affect your choice of w. (Here ∏ is the product symbol.)

  • @sandipua8586
    5 years ago

    Thanks for the content but please calm down, I'm getting a heart attack

  • @nichenjie
    5 years ago

    Learning GPs is so frustrating T.T

  • @jzinou1779
    5 years ago

    lol

  • @sekfook97
    2 years ago

    Just learned that they used Gaussian processes to search for the airplane in the ocean. Btw, I am from Malaysia.

  • @maxfine3299
    A month ago

    the Donald Trump bits were very funny!

  • @zhuyixue4979
    4 years ago

    aha moment: 11:15 to 11:25

  • @busTedOaS
    3 years ago

    ERRM

  • @bnouadam
    4 years ago

    This guy has absolutely no charisma and has a controlling attitude. His tone is not fluent.

  • @prathikshaav9461
    4 years ago

    Just binge-watching your course, I love it... Is there a link to the homeworks, exams and solutions? It would be helpful.

  • @kilianweinberger698
    4 years ago

    Past 4780 exams are here: www.dropbox.com/s/zfr5w5bxxvizmnq/Kilian past Exams.zip?dl=0
    Past 4780 homeworks are here: www.dropbox.com/s/tbxnjzk5w67u0sp/Homeworks.zip?dl=0