Machine learning - Introduction to Gaussian processes

Introduction to Gaussian process regression.
Slides available at: www.cs.ubc.ca/~nando/540-2013/...
Course taught in 2013 at UBC by Nando de Freitas

Comments: 159

  • @augustasheimbirkeland4496 · 2 years ago

    5 minutes in and it's already better than all 3 hours in class earlier today!

  • @erlendlangseth4672 · 6 years ago

    Thanks, this helped me a lot. By the time you got to the hour mark, you had covered sufficient ground for me to finally understand Gaussian processes!

  • @sarnathk1946 · 6 years ago

    This is indeed an awesome lecture! I liked the way the complexity is built up slowly over the lecture. Thank you very much!

  • @SijinSheung · 5 years ago

    This lecture is so amazing! The hand-drawing part is really helpful for building up intuition regarding GPs. This is a life-saving video for my finals. Many thanks!

  • @DistortedV12 · 4 years ago

    Finally! This is gold for beginners like me! Thank you Nando!! Saw you on the committee at the MIT defense, great questions!

  • @sourabmangrulkar9105 · 4 years ago

    The way you started from the basics and built on them to explain Gaussian processes is very easy to understand. Thank you :)

  • @maratkopytjuk3490 · 8 years ago

    Thank you! I tried to understand GPs via papers, but only you could help me build up an understanding of the idea. It's great that you took the time to explain the Gaussian distribution and the important operations! You're the best!

  • @MrEdnz · 2 years ago

    Learning a new subject via papers isn't very helpful indeed :) They expect you to understand the basic principles of GPs. However, lectures like these and books start with the basic principles 💪🏻

  • @fuat7775 · 1 year ago

    This is absolutely the best explanation of the Gaussian!

  • @jingjingjiang6403 · 6 years ago

    Thank you for sharing this wonderful lecture! Gaussian processes were so confusing when they were taught at my university. Now they are crystal clear!

  • @francescocanonaco5988 · 5 years ago

    I tried to understand GPs via blog articles, papers, and a lot of videos. Best video ever on GPs! Thank you!

  • @user-oc5gk7yn6o · 4 years ago

    I've found so many lectures for understanding Gaussian processes. So far you are the only one who I think can make me understand them. Thanks a lot, man

  • @huitanmao5267 · 7 years ago

    Very clear lectures! Thanks for making them publicly available!

  • @dennisdoerrich3743 · 6 years ago

    Wow, you saved my life with this genius lecture! GPs are a pretty abstract idea, and it's nice that you walk one through from scratch!

  • @life99f · 2 years ago

    I feel so fortunate to have found this video. It's like walking in a fog and finally being able to see things clearly.

  • @daesoolee1083 · 2 years ago

    The best tutorial for GP among all the materials I've checked.

  • @ziangxu7751 · 3 years ago

    What an amazing lecture. It is much clearer than lectures taught in my university.

  • @LynN-he7he · 3 years ago

    Thank you, thank you, thank you!! I was stuck on a homework problem, still figuring out what it means to be a testing vs. training data set and how they play a role in the Gaussian kernel function. I was stuck for the last 3 days, and your video from about the 45-minute to the 1-hour mark made the lightbulb go off!
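
    For readers stuck on the same point, a minimal sketch (made-up inputs, assuming a squared-exponential kernel; rbf_kernel is a hypothetical helper) of how the training and test inputs each fill a block of the joint covariance matrix the lecture partitions:

        import numpy as np

        def rbf_kernel(a, b, ell=1.0):
            # squared-exponential kernel: k(a, b) = exp(-(a - b)^2 / (2 ell^2))
            d = a[:, None] - b[None, :]
            return np.exp(-0.5 * (d / ell) ** 2)

        X_train = np.array([-2.0, 0.0, 1.5])   # training inputs (made up)
        X_test = np.linspace(-3.0, 3.0, 50)    # test inputs (made up)

        K = rbf_kernel(X_train, X_train)       # train-train block, 3x3
        K_s = rbf_kernel(X_train, X_test)      # train-test block, 3x50
        K_ss = rbf_kernel(X_test, X_test)      # test-test block, 50x50
        # The joint prior covariance is the partitioned matrix
        # [[K, K_s], [K_s.T, K_ss]], which the predictive equations condition on.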

  • @xingtongliu1636 · 5 years ago

    This becomes very easy to understand with your thorough explanation. Thank you very much!

  • @pradeepprabakarravindran615 · 11 years ago

    Thank you! Your videos are so much more awesome than any ML lecture series I have seen so far! -- Grad student from CMU

  • @Ricky-Noll · 3 years ago

    One of the all-time best videos on YouTube

  • @dieg3005 · 8 years ago

    Thank you very much Prof. de Freitas, excellent introduction

  • @marcyaudreydemafonangmo6608 · 8 months ago

    This lecture is amazing Professor. From the bottom of my heart, I say thank you.

  • @woo-jinchokim6441 · 7 years ago

    By far the best-structured lecture on Gaussian processes. Love it :D

  • @DanielRodriguez-or7sk · 4 years ago

    Thank you so much Professor De Freitas. What a clear explanation of GP

  • @emrecck · 3 years ago

    That was a great lecture, Mr. de Freitas, thank you very, very much! I watched it to study for my Computational Biology course, and it really helped.

  • @Jacob011 · 10 years ago

    Absolutely superb lecture! Everything is clearly explained even with source code.

  • @dwhdai · 4 years ago

    wow, this is probably the best lecture I've ever watched. on any topic.

  • @MB-pt8hi · 5 years ago

    Very good lecture, full of intuitive examples which deepen the understanding. Thanks a lot

  • @austenscruggs8726 · 2 years ago

    This is an amazing video! Clear and digestible.

  • @bluestar2253 · 3 years ago

    One of the best teachers in ML out there!

  • @jinghuizhong · 9 years ago

    The lecture is quite clear, and it gave me insight into the key ideas of Gaussian processes. Many thanks!

  • @sumantamukherjee1952 · 9 years ago

    Lucidly explained. Great video

  • @MattyHild · 4 years ago

    FYI, the notation at 22:05 is wrong: since he selected an x1 to condition on, he should be computing mu_{2|1}, but he is computing mu_{1|2}.
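
    For reference, the two conditionals of a jointly Gaussian (x1, x2) are the standard identities (the fix is just a swap of indices):

        \mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2), \qquad
        \Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}

        \mu_{2|1} = \mu_2 + \Sigma_{21}\Sigma_{11}^{-1}(x_1 - \mu_1), \qquad
        \Sigma_{2|1} = \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}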

  • @KhariSecario · 2 years ago

    Here I am in 2021, yet your explanation is the easiest one to understand from all the sources I gathered! Thank you very much 😍

  • @matej6418 · 1 year ago

    me in 2023, still the same

  • @quantum01010101 · 4 years ago

    That is clear and flows naturally, Thank you very much.

  • @Gouda_travels · 2 years ago

    After one hour of smooth explanation, he says "and this brings us to Gaussian processes" :)

  • @HarpreetSingh-ke2zk · 2 years ago

    I started learning about multivariate Gaussian processes in 2011, but it's a shame that I only got to this video as 2021 was ending. He explains things in a way that even a layperson could grasp: first the meaning of the concepts, then an example with data, and lastly the theoretical representation. Typically, presenters and writers of mathematics avoid using data to provide examples. I'm always on the lookout for lectures like these, where the theoretical understanding is demonstrated through examples or data. Too often the concepts themselves are not difficult to grasp, but the presenter/writer makes us dig deep to unpack complex notation without providing any examples.

  • @bottomupengineering · 4 months ago

    Great explanation and pace. Very legit.

  • @saminebagheri4175 · 7 years ago

    amazing lecture.

  • @sanjanavijayshankar5508 · 4 years ago

    Brilliant lecture. One could not have taught GPs better.

  • @sak02010 · 5 years ago

    thanks a lot prof. Very clean and easy to understand explanation.

  • @darthyzhu5767 · 8 years ago

    really clear and comprehensive. thanks so much.

  • @oliverxie9559 · 3 years ago

    Really great video for reading Gaussian Processes for Machine Learning!

  • @EbrahimLPatel · 8 years ago

    Excellent introduction to the subject! Thank you :)

  • @adrianaculebro9176 · 5 years ago

    Finally understood how this idea is explained and applied using mathematical language

  • @pankayarajpathmanathan7009 · 6 years ago

    The best lecture on Gaussian processes

  • @afish3356 · 3 years ago

    An extremely good lecture! Thank you for recording this :) :)

  • @homtom2 · 8 years ago

    This helped me so much! Thanks!

  • @taygunkekec9616 · 9 years ago

    Very clearly explained. The dependencies for learning the framework are given concisely and incrementally, while the details that make the framework harder to understand are carefully kept out of the way (you will understand what I mean if you try to dig through Rasmussen's book on GPs).

  • @jx4864 · 2 years ago

    After 30 minutes, I am sure that he is one of the top 10 teachers in my life

  • @rsilveira79 · 5 years ago

    Awesome lecture, very well explained!

  • @richardbrown2565 · 3 years ago

    Great explanation. I wish that the title mentioned that it was part one of two, so that I would have known it was going to take twice as long.

  • @pattiknuth4822 · 3 years ago

    Extremely good lecture. Well done.

  • @akshayc113 · 9 years ago

    Thanks a lot, Prof. Just a minor correction for the people following the lectures: you made a mistake while writing out the formulae at 22:10. You were writing out the mean and variance of P(X1|X2), whereas the diagram was for finding P(X2|X1). Since this is symmetric, you can just get them by the appropriate replacements, but I'm letting slightly confused people know.

  • @charlsmartel · 8 years ago

    +akshayc113 I think all that should change is the formula for the given graphs. It should read: mu_{2|1} = mu_2 + Sigma_21 Sigma_11^{-1} (x_1 - mu_1). Everything else can stay the same.

  • @tobiaspahlberg1506 · 8 years ago

    I think he actually meant to draw x_1 where x_2 is in the diagram. This switch would agree with the KPM formulae on the next slide.

  • @maudentable · 3 years ago

    a master doing his work

  • @yousufhussain9530 · 8 years ago

    Amazing lecture!

  • @turkey343434 · 4 years ago

    Gaussian processes start at 1:01:15

  • @hohinng8644 · 1 year ago

    pin this

  • @SimoneIovane · 5 years ago

    Great lesson! Thank you!

  • @niqodea · 4 years ago

    BEAST MODE teaching

  • @dhruv385 · 5 years ago

    Wow! Great Lecture!

  • @haunted2097 · 10 years ago

    well done! Very intuitive!

  • @malharjajoo7393 · 4 years ago

    Basic summary of the lecture video:
    1) Recap of the multivariate Normal/Gaussian distribution (MVN), with some material on conditional probability.
    2) How sampling can be done from univariate/multivariate Gaussian distributions (see the sketch below).
    3) 39:00 - Introduction to Gaussian Processes (GPs). It is important to note that a GP is considered a Bayesian non-parametric approach/model.
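
    On point 2, a minimal sketch (with made-up numbers) of the standard way to sample from a multivariate Gaussian using a Cholesky factor:

        import numpy as np

        rng = np.random.default_rng(0)
        mu = np.array([0.0, 1.0])            # mean vector (made up)
        Sigma = np.array([[1.0, 0.8],
                          [0.8, 1.0]])       # covariance matrix, positive definite

        L = np.linalg.cholesky(Sigma)        # Sigma = L @ L.T
        z = rng.standard_normal((2, 5))      # independent standard normal draws
        samples = mu[:, None] + L @ z        # each column is one draw from N(mu, Sigma)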

  • @kiliandervaux6675 · 3 years ago

    Thank you so much for this amazing lecture. I wanted to applaud at the end but I realised I was in front of my computer.

  • @liamdavey8726 · 6 years ago

    Great Teacher! Thanks!

  • @kevinzhang4692 · 2 years ago

    Thank you! It is a wonderful lecture

  • @stefansgrab · 7 years ago

    Chapeau, good lecture!

  • @gourv7ghoshal · 6 years ago

    Thank you for sharing this video, it was really helpful

  • @bertobertoberto3 · 9 years ago

    Round of Applause

  • @FariborzGhavamian · 7 years ago

    great lecture !

  • @ahaaha8462 · 4 years ago

    amazing lecture, thanks a lot

  • @jhn-nt · 2 years ago

    Great lecture!

  • @brianstampe7056 · 4 years ago

    Very helpful. Thanks!

  • @yunlongsong7618 · 4 years ago

    Great lecture. Thanks.

  • @bingtingwu8620 · 1 year ago

    Thanks!!! Easy to understand👍👍👍

  • @malharjajoo7393 · 4 years ago

    1:04:08 - It would be good to emphasize that the test set is actually used for generating the prior ... I had a hard time making sense of it because the test set is usually provided separately (but in this case we are generating it !!)
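
    A minimal sketch of that point (assuming the squared-exponential kernel from the demo): the prior covariance, and hence the prior function draws, are built entirely from test inputs we choose ourselves, before any training data enters:

        import numpy as np

        n = 50
        X_test = np.linspace(-5.0, 5.0, n)     # we generate the test inputs ourselves
        d = X_test[:, None] - X_test[None, :]
        K_ss = np.exp(-0.5 * d ** 2)           # prior covariance at the test inputs

        L = np.linalg.cholesky(K_ss + 1e-10 * np.eye(n))  # small jitter for numerical stability
        f_prior = L @ np.random.randn(n, 3)    # three function draws from the prior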

  • @dracleirbag5838 · 2 years ago

    I like the way you teach

  • @zhijianli8975 · 7 years ago

    Great lecture

  • @user-ym7rp9pf6y · 3 years ago

    Awesome explanation. thanks

  • @katerinapapadaki4810 · 5 years ago

    Thanks for the helpful lecture! The only thing I want to point out is that if you put labels on the axes of your plots, it would be easier for the listener to understand from the beginning what you are describing.

  • @xesan555 · 7 years ago

    Nando, you are wonderful...

  • @redberries8039 · 3 years ago

    This was a good explanation.

  • @crestz1 · 2 months ago

    Amazing lecturer

  • @buoyrina9669 · 5 years ago

    You are the best

  • @adamtran5747 · 1 year ago

    Love the content.

  • @huuducdo143 · 5 months ago

    Hello Nando, thank you for your excellent course. Following the bell example, the mu_{1|2} and sigma_{1|2} you wrote should be for the case where we are given X2 = x2 and try to find the distribution of X1 given X2 = x2. Am I correct? Other understandings are welcome. Thanks a lot!

  • @itai19 · 3 years ago

    Thanks for the lecture. I have a problem with the discussion around 11:00 - from my understanding, a spherical case does represent some correlation between X and Y, as X is a sub-component of the max-radius calculation, meaning a larger x leads to smaller possible values of y (or at least lower probability for higher values). In other words, the covariance can be approximated by something like E[x*sqrt(r^2 - x^2)]. Are we saying that this ends up being zero, i.e. that correlation is unable to express such a dependency? My intuition currently takes a square to express 0 correlation.
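
    A quick Monte Carlo check of this intuition (points drawn uniformly in the unit disc; not from the lecture): X and Y do constrain each other, yet their covariance, which is all the Gaussian's Sigma records, still comes out zero:

        import numpy as np

        rng = np.random.default_rng(0)
        pts = rng.uniform(-1.0, 1.0, size=(200000, 2))
        pts = pts[(pts ** 2).sum(axis=1) <= 1.0]         # keep points inside the unit disc

        x, y = pts[:, 0], pts[:, 1]
        print(np.cov(x, y)[0, 1])                        # ~0: uncorrelated
        print(np.corrcoef(np.abs(x), np.abs(y))[0, 1])   # <0: yet clearly dependent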

  • @Raven-bi3xn · 3 years ago

    Am I correct to think that the "f" notation at 30:30 is not the same "f" as at 1:01:30? In the latter case, each f consists of all 50 of the f distributions exemplified in the former case? If that understanding is correct, then in sampling from the GP, each sample is a 50-by-1 vector drawn from the 50-D multivariate Gaussian distribution. This 50-by-1 vector is what Dr. Nando refers to with "distribution over functions". In other words, given the definition of a stochastic process as "indexed random variables", each random variable of the GP is drawn from a multivariate Gaussian distribution. In that viewpoint, each "indexed" random variable is a function at 1:01:30. This lecture from 2013 is truly an amazing resource.

  • @kambizrakhshan3248 · 3 years ago

    Thank you!

  • @swarnendusekharghosh9539 · 3 years ago

    Thank you, sir, for a clear explanation

  • @Romis008 · 6 years ago

    Fantastic

  • @philwebb59 · 2 years ago

    1:05:58 Analog computers existed way before the first digital circuits. A WWII vintage electrical analog computer, for example, consisted of banks of op amps, configured as integrators and differentiators.

  • @ratfuk9340 · 3 months ago

    Thank you for this

  • @abhishekparida22 · 4 years ago

    Thank you for the lecture; I appreciate the way you presented it, spending a reasonable amount of time explaining the multivariate Gaussian distribution and building up from basics. My question is the following: if I happen to anticipate that the underlying distribution is, say, Poisson instead of Gaussian, what would be the appropriate changes? (My understanding is that it's the likelihood which is modified, but I'm not sure!) Would it still be called a Gaussian process (or a Poisson process)?

  • @dakshinshiva · 3 years ago

    The best 👍

  • @terrynichols-noaafederal9537 · 4 months ago

    For the noisy GP case, we assume the noise covariance is sigma^2 times the identity matrix, i.e. i.i.d. noise. What if the noise is correlated? Can we incorporate the true covariance matrix?
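
    Not covered in the lecture, but in the standard treatment nothing requires the noise covariance to be diagonal: a known noise covariance Sigma_n can simply replace sigma^2 I in the predictive equations,

        \bar{f}_* = K_*^{\top}(K + \Sigma_n)^{-1} y, \qquad
        \operatorname{cov}(f_*) = K_{**} - K_*^{\top}(K + \Sigma_n)^{-1} K_*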

  • @minglee5164 · 5 years ago

    UBC amazing

  • @heyjianjing · 3 years ago

    Around 56:00, I don't think we should omit the conditioning sign on mu*: it is conditioned on f, i.e. E(f*|f), not E(f*); otherwise the expected value of f* alone would just be zero.
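
    Written out in full for the noise-free case, the slide's quantities are indeed the conditional moments (the unconditional prior mean of f_* is zero):

        \mathbb{E}[f_* \mid f] = K_*^{\top} K^{-1} f, \qquad
        \operatorname{cov}(f_* \mid f) = K_{**} - K_*^{\top} K^{-1} K_*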

  • @WWTMA · 6 years ago

    Very Good

  • @ho4040 · 2 years ago

    Holy shit...what a good lecture

  • @JaysonSunshine · 7 years ago

    Correct me if I am wrong, but isn't the whole cluster of examples starting at 36:35 flawed? Nando shows three points in a single dimension, x1, x2, x3, and their corresponding f-values, f1, f2, f3. It seems these points are three samples from a univariate normal distribution with a scalar variance, rather than what he shows, i.e. a vector in R^3 with a 3x3 covariance matrix.

  • @JaysonSunshine · 7 years ago

    On further reflection, perhaps you're doing a non-parametric approach in which you assign a Gaussian per point... Since the distribution you're forming is empirical, it seems it would be more precise to say the mean vector of the f-distribution is [f1, f2, f3], yes?

  • @DESYAAR · 6 years ago

    I agree. That took me a while as well.