I am a machine learning professor at UBC. I am making my lectures available to the world with the hope that this will give more folks out there the opportunity to learn some of the wonderful things I have been fortunate to learn myself. Enjoy.
Isn't the right-side formula at 22:19 for x1|x2, not for x2|x1?
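For anyone puzzling over this and the similar comments further down: the standard bivariate-Gaussian conditioning identity can be checked numerically. A minimal pure-Python sketch (variable names are mine, not the slide's):

```python
# Conditioning a bivariate Gaussian on X2 = x2 (standard identity):
#   mean of X1 | X2 = x2 :  mu1 + (s12 / s22) * (x2 - mu2)
#   var  of X1 | X2 = x2 :  s11 - s12**2 / s22
# mu1, mu2 are the marginal means; s11, s12, s22 the covariance entries.
def conditional_gaussian(mu1, mu2, s11, s12, s22, x2):
    mean = mu1 + (s12 / s22) * (x2 - mu2)
    var = s11 - s12 ** 2 / s22
    return mean, var

# Zero means, unit variances, correlation 0.8, observe x2 = 1.0:
mean, var = conditional_gaussian(0.0, 0.0, 1.0, 0.8, 1.0, 1.0)
# mean = 0.8, var = 0.36 (up to float rounding)
```

Swapping which variable you condition on swaps the roles of the means and of s11/s22, which is exactly the distinction the timestamped formula hinges on.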
@Sheriff_Schlong · 2 months ago
At 1:02:40 I knew this teacher was a legend. 11 years late and still able to gain much valuable knowledge from these lectures!
@crestz1 · 2 months ago
beautifully linked the idea of maximising likelihood by illustrating the 'green line' @ 51:41
@crestz1 · 2 months ago
Amazing lecturer
@forughghadamyari8281 · 3 months ago
Hi, thanks for the wonderful videos. Please recommend a book to study for this course.
@ratfuk9340 · 3 months ago
Thank you for this
@bottomupengineering · 4 months ago
Great explanation and pace. Very legit.
@terrynichols-noaafederal9537 · 4 months ago
For the noisy GP case, we assume the noise is sigma^2 * the identity matrix, which assumes iid. What if the noise is correlated, can we incorporate the true covariance matrix?
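One way, sketched under the assumption that the full noise covariance Sigma_n is known and positive definite: the standard GP equations change only by replacing sigma^2 * I with Sigma_n in the training covariance. A toy two-point, pure-Python illustration (kernel, data, and noise values are mine):

```python
import math

# GP regression with *correlated* observation noise: use Ky = K + Sigma_n
# instead of Ky = K + sigma^2 * I. Two training points keep the 2x2
# inverse writable by hand. Squared-exponential kernel:
def k(a, b):
    return math.exp(-0.5 * (a - b) ** 2)

X = [0.0, 1.0]
y = [0.0, math.sin(1.0)]
Sigma_n = [[0.10, 0.04],          # off-diagonal 0.04 = correlated noise
           [0.04, 0.10]]

# Ky = K + Sigma_n, inverted directly (2x2 cofactor formula)
Ky = [[k(X[0], X[0]) + Sigma_n[0][0], k(X[0], X[1]) + Sigma_n[0][1]],
      [k(X[1], X[0]) + Sigma_n[1][0], k(X[1], X[1]) + Sigma_n[1][1]]]
det = Ky[0][0] * Ky[1][1] - Ky[0][1] * Ky[1][0]
Ky_inv = [[ Ky[1][1] / det, -Ky[0][1] / det],
          [-Ky[1][0] / det,  Ky[0][0] / det]]

xs = 0.5                           # test input
ks = [k(X[0], xs), k(X[1], xs)]
alpha = [Ky_inv[0][0] * y[0] + Ky_inv[0][1] * y[1],
         Ky_inv[1][0] * y[0] + Ky_inv[1][1] * y[1]]
mean = ks[0] * alpha[0] + ks[1] * alpha[1]   # posterior mean at xs
var = k(xs, xs) - (ks[0] * (Ky_inv[0][0] * ks[0] + Ky_inv[0][1] * ks[1])
                 + ks[1] * (Ky_inv[1][0] * ks[0] + Ky_inv[1][1] * ks[1]))
```

So yes: as long as you know (or can estimate) the true noise covariance, it slots straight into the posterior equations.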
@m0tivati0n71 · 4 months ago
Still great in 2023
@huuducdo143 · 5 months ago
Hello Nando, thank you for your excellent course. Following the bell example, the mu12 and sigma12 you wrote should be for the case where we are given X2=x2 and try to find the distribution of X1 given X2=x2. Am I correct? Other interpretations are welcome. Thanks a lot!
@newbie8051 · 6 months ago
It amazes me that people were discussing these topics when I was studying about the water-cycle lol.
@ScieLab · 7 months ago
Hi Nando, is it possible to access the codes that you have mentioned in the lecture?
@S25plus · 7 months ago
Thanks prof. Freitas, this is extremely helpful
@TheDeatheater3 · 8 months ago
super good
@marcyaudreydemafonangmo6608 · 8 months ago
This lecture is amazing Professor. From the bottom of my heart, I say thank you.
@concoursmaths8270 · 9 months ago
professor Nando, thank you a lot!!
@bodwiser100 · 10 months ago
One thing that remained confusing for me for a long time, and which I don't think he clarified in the video, was that the N in the summation from i = 1 to i = N does not refer to the number of data points in our dataset but to the number of times we run the Monte Carlo simulation.
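That reading can be checked on a toy problem: below, N is the number of Monte Carlo draws (the N in the 1/N sum), not a dataset size. A minimal sketch estimating E[X^2] for X ~ Uniform(0, 1), whose true value is 1/3:

```python
import random

# Monte Carlo estimate of E[X^2], X ~ Uniform(0, 1). True value: 1/3.
# N counts simulated draws, i.e. the i = 1..N in the averaged sum.
random.seed(0)                 # fixed seed for reproducibility
N = 100_000
estimate = sum(random.random() ** 2 for _ in range(N)) / N
```

Increasing N shrinks the estimator's standard error at the usual 1/sqrt(N) rate, independent of any dataset.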
@truongdang8790 · 11 months ago
Amazing example!
@guliyevshahriyar · 1 year ago
Thank you very much.
@bingtingwu8620 · 1 year ago
Thanks!!! Easy to understand👍👍👍
@subtlethingsinlife · 1 year ago
He is a hidden gem. I have gone through a lot of his videos; they are great in terms of removing jargon and bringing clarity.
@fuat7775 · 1 year ago
This is absolutely the best explanation of the Gaussian!
@nikolamarkovic9906 · 1 year ago
49:40 p. 46
@el-ostada5849 · 1 year ago
Thank you for everything you have given to us.
@charlescoult · 1 year ago
This was an excellent lecture. Thank you.
@cryptogoth · 1 year ago
Great lecture, abrupt ending. I believe this is the short (but dense) book mentioned by Criminisi about decision forests www.microsoft.com/en-us/research/wp-content/uploads/2016/02/CriminisiForests_FoundTrends_2011.pdf
@chenqu773 · 1 year ago
It looks like the axis notation in the graph on the right side of the presentation, @ around 20:39, is not correct. It should probably be x1 on the x-axis; i.e., it would make sense if μ12 referred to the mean of variable x1 rather than x2, judging from the equation shown on the next slide.
@kianbehdad · 1 year ago
You can only "die" once. That is how I remember die is singular :D
@hohinng8644 · 1 year ago
The use of notation at 23:00 is confusing for me
@rikki146 · 1 year ago
Learning advanced ml concepts for free! What a time to be alive. Thanks a lot for the vid!
@marouanbelhaj7881 · 1 year ago
To this day, I keep coming back to your videos to refresh ML concepts. Your courses are a Masterpiece!
@emmanuelonyekaezeoba6346 · 1 year ago
Very elaborate and simple presentation. Thank you.
@Gouda_travels · 1 year ago
This is when it got really interesting @ 22:02: typically, I'm given points and I am trying to learn the mu's and the sigma's.
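For a 1-D Gaussian, learning the mu's and sigma's from given points has a closed-form maximum-likelihood answer: the sample mean and the (biased) sample variance. A minimal sketch with made-up data:

```python
# Maximum-likelihood fit of a 1-D Gaussian to observed points:
#   mu_hat     = (1/N) * sum(x_i)
#   sigma2_hat = (1/N) * sum((x_i - mu_hat)^2)   (the biased MLE, note 1/N)
data = [1.0, 2.0, 3.0, 4.0]   # toy observations
N = len(data)
mu_hat = sum(data) / N
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / N
# mu_hat = 2.5, sigma2_hat = 1.25
```

These are exactly the values that maximize the Gaussian likelihood of the data, which is what the green-line picture in the lecture is illustrating.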
@MrStudent1978 · 1 year ago
1:12:24 What is mu(x)? Is that different from mu?
@ahmed_mohammed_1 · 1 year ago
I wish I had discovered your courses a bit earlier.
@adamtran5747 · 1 year ago
Love the content. <3
@michaelcao9483 · 2 years ago
Thank you! Really great explanation!!!
@augustasheimbirkeland4496 · 2 years ago
5 minutes in and it's already better than all 3 hours of class earlier today!
@truptimohanty9386 · 2 years ago
This is the best video for understanding Bayesian optimization. It would be a great help if you could post a video on multi-objective Bayesian optimization, specifically on expected hypervolume improvement. Thank you!
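Not the hypervolume variant, but for reference the scalar expected-improvement acquisition used in standard Bayesian optimization fits in a few lines (maximization convention; mu and sigma are the GP posterior mean and standard deviation at a candidate point):

```python
import math

# Expected improvement over the incumbent f_best (maximization):
#   z  = (mu - f_best - xi) / sigma
#   EI = (mu - f_best - xi) * Phi(z) + sigma * phi(z)
# xi is an optional exploration margin.
def expected_improvement(mu, sigma, f_best, xi=0.0):
    if sigma == 0.0:
        return max(mu - f_best - xi, 0.0)
    z = (mu - f_best - xi) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # normal cdf
    return (mu - f_best - xi) * Phi + sigma * phi
```

Candidates with a high posterior mean or high posterior uncertainty both score well, which is the explore/exploit trade-off the lecture describes.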
@htetnaing007 · 2 years ago
Don't stop sharing this knowledge, for it is vital to the progress of humankind!
@jeffreycliff922 · 2 years ago
access to the source code to do this would be useful
@gottlobfreige1075 · 2 years ago
So, basically, it's partial derivatives?
@gottlobfreige1075 · 2 years ago
I don't understand; it's basically a lot of derivatives within the layers, correct?
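Essentially yes: backpropagation is the chain rule applied layer by layer, each layer contributing one local derivative to the product. A tiny hand-written sketch on a toy two-weight "network" (names and numbers are mine):

```python
# Toy network y = w2 * relu(w1 * x). Backprop multiplies the local
# derivative of each operation, from the output back to each weight.
def forward_backward(x, w1, w2):
    # forward pass
    a = w1 * x
    h = max(a, 0.0)                   # relu
    y = w2 * h
    # backward pass (chain rule, one factor per op)
    dy_dh = w2                        # d(w2*h)/dh
    dh_da = 1.0 if a > 0 else 0.0     # relu derivative
    dy_dw1 = dy_dh * dh_da * x        # chain: dy/dh * dh/da * da/dw1
    dy_dw2 = h                        # d(w2*h)/dw2
    return y, dy_dw1, dy_dw2

y, g1, g2 = forward_backward(2.0, 3.0, 4.0)
# y = 24.0, g1 = 8.0, g2 = 6.0
```

Deeper networks just have more factors in the product; frameworks automate the bookkeeping, but the math is this.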
@jx4864 · 2 years ago
After 30 mins, I am sure that he is a top-10 teacher in my life.
@cicik57 · 2 years ago
The best way to explain the gamma function is that it is a continuous factorial. You should point out that the P(theta) you write is a probability DENSITY function here.
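On the continuous-factorial point, Python's math.gamma shows it directly: Gamma(n) = (n - 1)! at positive integers, with values like Gamma(1/2) = sqrt(pi) interpolating in between.

```python
import math

# Gamma extends the factorial: Gamma(n) == (n - 1)! for positive integers.
ints_match = all(abs(math.gamma(n) - math.factorial(n - 1)) < 1e-6
                 for n in range(1, 10))

# Between the integers it is smooth; the classic half-integer value:
half = math.gamma(0.5)   # equals sqrt(pi)
```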
@jhn-nt · 2 years ago
Great lecture!
@gottlobfreige1075 · 2 years ago
How do you understand the math part with depth? Anyone? Help me!
@xinking2644 · 2 years ago
Is there a mistake at 21:58? Should it be conditioned on x1 instead of x2?
@FabulusIdiomas · 2 years ago
People are scared because your explanation sucks. You should do a better job as a teacher.
This is an amazing video! Clear and digestible.