Nando de Freitas

I am a machine learning professor at UBC. I am making my lectures available to the world with the hope that this will give more folks out there the opportunity to learn some of the wonderful things I have been fortunate to learn myself. Enjoy.

Comments

  • @user-nr3ej2ud5j · a month ago

    Isn't the right-side formula at 22:19 for x1|x2, not for x2|x1?
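
    For reference, the standard Gaussian conditioning identity (a textbook result, quoted here to help check the slide): if (x_1, x_2) is jointly Gaussian with mean blocks \mu_1, \mu_2 and covariance blocks \Sigma_{11}, \Sigma_{12}, \Sigma_{21}, \Sigma_{22}, then

        p(x_1 \mid x_2) = \mathcal{N}\big( \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),\; \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} \big),

    so a mean that starts at \mu_1 and is corrected by (x_2 - \mu_2) belongs to the distribution of x_1 given x_2.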

  • @Sheriff_Schlong · 2 months ago

    At 1:02:40 I knew this teacher was a legend. Eleven years late, and I'm still able to gain much valuable knowledge from these lectures!

  • @crestz1 · 2 months ago

    Beautifully linked the idea of maximising the likelihood by illustrating the 'green line' at 51:41.

  • @crestz1 · 2 months ago

    Amazing lecturer

  • @forughghadamyari8281 · 3 months ago

    Hi, thanks for the wonderful videos. Please recommend a book to study alongside this course.

  • @ratfuk9340 · 3 months ago

    Thank you for this

  • @bottomupengineering · 4 months ago

    Great explanation and pace. Very legit.

  • @terrynichols-noaafederal9537 · 4 months ago

    For the noisy GP case, we assume the noise covariance is sigma^2 times the identity matrix, which assumes i.i.d. noise. What if the noise is correlated? Can we incorporate the true covariance matrix?
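
    In principle, yes: sigma^2 * I is just one choice of noise covariance added to the kernel matrix, and any symmetric positive-definite matrix can take its place. A minimal sketch of that idea (an editor's illustration with a squared-exponential kernel, not code from the lecture):

      import numpy as np

      def sq_exp_kernel(A, B, length_scale=1.0, variance=1.0):
          # Squared-exponential kernel matrix between rows of A and rows of B.
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return variance * np.exp(-0.5 * d2 / length_scale**2)

      def gp_posterior(X, y, Xstar, noise_cov):
          # GP posterior mean/covariance with an arbitrary noise covariance
          # on the training targets. noise_cov = sigma2 * np.eye(len(y))
          # recovers the usual i.i.d. case.
          K = sq_exp_kernel(X, X) + noise_cov      # noisy training covariance
          Ks = sq_exp_kernel(X, Xstar)             # train/test cross-covariance
          Kss = sq_exp_kernel(Xstar, Xstar)        # test covariance
          L = np.linalg.cholesky(K)                # stable solves via Cholesky
          alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
          mean = Ks.T @ alpha
          v = np.linalg.solve(L, Ks)
          cov = Kss - v.T @ v
          return mean, cov

    The only requirement is that the chosen noise covariance keeps K positive definite, so the Cholesky factorization still succeeds.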

  • @m0tivati0n71 · 4 months ago

    Still great in 2023

  • @huuducdo143 · 5 months ago

    Hello Nando, thank you for your excellent course. Following the bell example, the mu_12 and sigma_12 you wrote should be for the case where we are given X2 = x2 and try to find the distribution of X1 given X2 = x2. Am I correct? Other interpretations are welcome. Thanks a lot!

  • @newbie8051 · 6 months ago

    It amazes me that people were discussing these topics when I was still learning about the water cycle, lol.

  • @ScieLab · 7 months ago

    Hi Nando, is it possible to access the code that you mentioned in the lecture?

  • @S25plus · 7 months ago

    Thanks, Prof. de Freitas, this is extremely helpful.

  • @TheDeatheater3 · 8 months ago

    super good

  • @marcyaudreydemafonangmo6608 · 8 months ago

    This lecture is amazing Professor. From the bottom of my heart, I say thank you.

  • @concoursmaths8270 · 9 months ago

    Professor Nando, thank you so much!!

  • @bodwiser100 · 10 months ago

    One thing that remained confusing for me for a long time, and which I don't think he clarified in the video, is that N and the summation from i = 1 to N do not refer to the number of data points in our dataset but to the number of times we run the Monte Carlo simulation.
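
    A toy sketch of that distinction (an editor's example, not from the lecture): N below is the number of Monte Carlo draws, and no dataset appears at all.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000                                       # number of MC draws, not data points
      samples = rng.normal(loc=0.0, scale=1.0, size=N)  # x_i ~ N(0, 1)
      estimate = (samples ** 2).mean()                  # (1/N) * sum_i g(x_i) with g(x) = x^2
      print(estimate)                                   # ~1.0, the true E[X^2]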

  • @truongdang8790 · 11 months ago

    Amazing example!

  • @guliyevshahriyar · a year ago

    Thank you very much.

  • @bingtingwu8620 · a year ago

    Thanks!!! Easy to understand👍👍👍

  • @subtlethingsinlife · a year ago

    He is a hidden gem. I have gone through a lot of his videos; they are great at removing jargon and bringing clarity.

  • @fuat7775 · a year ago

    This is absolutely the best explanation of the Gaussian!

  • @nikolamarkovic9906 · a year ago

    49:40 — p. 46

  • @el-ostada5849 · a year ago

    Thank you for everything you have given to us.

  • @charlescoult · a year ago

    This was an excellent lecture. Thank you.

  • @cryptogoth · a year ago

    Great lecture, abrupt ending. I believe this is the short (but dense) book on decision forests by Criminisi that was mentioned: www.microsoft.com/en-us/research/wp-content/uploads/2016/02/CriminisiForests_FoundTrends_2011.pdf

  • @chenqu773 · a year ago

    It looks like the axis notation in the graph on the right side of the presentation, at around 20:39, is not correct. It should probably be x1 on the x-axis; i.e., it would make sense if μ12 referred to the mean of variable x1 rather than x2, judging from the equation shown on the next slide.

  • @kianbehdad · a year ago

    You can only "die" once. That is how I remember that "die" is singular :D

  • @hohinng8644 · a year ago

    The use of notation at 23:00 is confusing to me.

  • @rikki146 · a year ago

    Learning advanced ML concepts for free! What a time to be alive. Thanks a lot for the vid!

  • @marouanbelhaj7881 · a year ago

    To this day, I keep coming back to your videos to refresh ML concepts. Your courses are a Masterpiece!

  • @emmanuelonyekaezeoba6346 · a year ago

    Very elaborate and simple presentation. Thank you.

  • @Gouda_travels · a year ago

    This is when it got really interesting, at 22:02: typically, I'm given points and I am trying to learn the mu's and the sigma's.

  • @MrStudent1978 · a year ago

    1:12:24 What is mu(x)? Is that different from mu?
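
    A note that may help: in GP regression, mu(x) usually denotes the posterior mean as a function of the test input x, whereas a bare mu is typically the constant prior mean. Under a zero prior mean and i.i.d. noise, the textbook form is

        \mu(x) = k(x, X)\,[K + \sigma^2 I]^{-1} y ,

    i.e., a weighted combination of the training targets that varies with x.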

  • @ahmed_mohammed_1 · a year ago

    I wish I had discovered your courses a bit earlier.

  • @adamtran5747 · a year ago

    Love the content. <3

  • @michaelcao9483 · 2 years ago

    Thank you! Really great explanation!!!

  • @augustasheimbirkeland4496 · 2 years ago

    Five minutes in and it's already better than all three hours of class earlier today!

  • @truptimohanty9386 · 2 years ago

    This is the best video for understanding Bayesian optimization. It would be a great help if you could post a video on multi-objective Bayesian optimization, specifically on expected hypervolume improvement. Thank you!
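
    For anyone waiting on such a video: expected hypervolume improvement generalizes the single-objective expected improvement (EI) acquisition to the dominated hypervolume of a Pareto front. A minimal sketch of plain EI for maximization (an editor's illustration, given the GP posterior mean and standard deviation at candidate points):

      import numpy as np
      from scipy.stats import norm

      def expected_improvement(mu, sigma, best_so_far, xi=0.01):
          # EI acquisition for maximization; mu and sigma are the GP
          # posterior mean and std at the candidate points, and xi is an
          # optional exploration margin.
          sigma = np.maximum(sigma, 1e-12)   # guard against zero std
          z = (mu - best_so_far - xi) / sigma
          return (mu - best_so_far - xi) * norm.cdf(z) + sigma * norm.pdf(z)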

  • @htetnaing007 · 2 years ago

    Don't stop sharing this knowledge, for it is vital to the progress of humankind!

  • @jeffreycliff922 · 2 years ago

    Access to the source code to do this would be useful.

  • @gottlobfreige1075 · 2 years ago

    So, basically, it's partial derivatives?

  • @gottlobfreige1075 · 2 years ago

    I don't understand; it's basically a lot of derivatives within the layers, correct?
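
    Essentially yes: backpropagation is the chain rule for partial derivatives applied layer by layer. For a two-layer example (a standard identity, not notation from this lecture), with a loss L = l(f_2(f_1(x; w_1); w_2)),

        \frac{\partial L}{\partial w_2} = \frac{\partial l}{\partial f_2}\frac{\partial f_2}{\partial w_2}, \qquad
        \frac{\partial L}{\partial w_1} = \frac{\partial l}{\partial f_2}\frac{\partial f_2}{\partial f_1}\frac{\partial f_1}{\partial w_1},

    where the shared factor \partial l / \partial f_2 is computed once and reused as the gradient flows backward.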

  • @jx4864 · 2 years ago

    After 30 minutes, I am sure that he is a top-10 teacher in my life.

  • @cicik57 · 2 years ago

    The best way to explain the gamma function is that it is a continuous factorial. You should point out that the P(theta) you write is a probability DENSITY function here.
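
    Concretely, the identity behind "continuous factorial" is

        \Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\, dt, \qquad \Gamma(n) = (n-1)! \text{ for integers } n \ge 1.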

  • @jhn-nt · 2 years ago

    Great lecture!

  • @gottlobfreige1075 · 2 years ago

    How do you understand the math part in depth? Anyone? Help me!

  • @xinking2644 · 2 years ago

    Is there a mistake at 21:58? Shouldn't it be conditioned on x1 instead of x2?

  • @FabulusIdiomas · 2 years ago

    People are scared because your explanation sucks. You should do a better job as a teacher.

  • @austenscruggs8726 · 2 years ago

    This is an amazing video! Clear and digestible.