Understanding Multivariate Gaussian Distribution (Machine Learning Fundamentals)
#gaussiandistribution #machinelearning #statistics
In this video, we will understand the intuition and maths behind the Multivariate Gaussian/Normal Distribution. We will walk through the Stanford CS229 course document.
⏩ OUTLINE:
0:00 - Introduction and Formula breakdown
2:35 - Relationship between Multivariate Gaussians and Univariate Gaussians
6:00 - Covariance Matrix
8:38 - Diagonal Covariance Matrix
11:10 - Shape of Isocontours
12:35 - Heatmap density view
⏩ Document: cs229.stanford.edu/section/gau...
⏩ Organisation: Stanford University
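The density formula broken down in the intro can be sketched numerically. The snippet below is my own minimal sketch (the function name is not from the video); it evaluates the k-dimensional Gaussian density and checks the video's point that, with an identity covariance, the multivariate density is just a product of univariate Gaussians:

```python
import numpy as np

def multivariate_gaussian_pdf(x, mu, sigma):
    """Density of a k-dimensional Gaussian N(mu, sigma) at point x."""
    k = mu.shape[0]
    diff = x - mu
    norm_const = 1.0 / np.sqrt((2 * np.pi) ** k * np.linalg.det(sigma))
    exponent = -0.5 * diff @ np.linalg.inv(sigma) @ diff
    return norm_const * np.exp(exponent)

# With identity covariance, the 2-D density at the mean equals the product
# of two standard univariate densities at 0: (1/sqrt(2*pi))^2 = 1/(2*pi).
mu = np.zeros(2)
sigma = np.eye(2)
peak = multivariate_gaussian_pdf(np.zeros(2), mu, sigma)
print(peak)  # 1/(2*pi) ≈ 0.15915
```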
*********************************************
If you want to support me financially (which is totally optional and voluntary :) ❤️
You can consider buying me a chai (because I don't drink coffee :) ) at www.buymeacoffee.com/TechvizC...
*********************************************
⏩ YouTube - / @techvizthedatascienceguy
⏩ Blog - prakhartechviz.blogspot.com
⏩ LinkedIn - / prakhar21
⏩ Medium - / prakhar.mishra
⏩ GitHub - github.com/prakhar21
*********************************************
Tools I use for making videos :)
⏩ iPad - tinyurl.com/y39p6pwc
⏩ Apple Pencil - tinyurl.com/y5rk8txn
⏩ GoodNotes - tinyurl.com/y627cfsa
#techviz #datascienceguy #ml #stats #distribution #multivariategaussian #gaussian
Comments: 20
I really like how you are making an effort to explain the intuition behind the concepts by explaining the meanings within the equations. Only a few ppl do that.
@TechVizTheDataScienceGuy
2 years ago
Thank you so much for appreciating the effort :)
Love the explanation! Thank you!!
Really a great explanation. I really liked the part where you related the formulation of the Gaussian curve to the iso-contours.
@TechVizTheDataScienceGuy
2 years ago
Thank you 😊
what a great video
I'm an undergrad at Berkeley currently taking ML and you just saved my ass on this midterm
@TechVizTheDataScienceGuy
1 year ago
:D
Nice content
Some input on how the eigenvalues of the covariance matrix would give a better intuition for the ellipses.
@TechVizTheDataScienceGuy
3 years ago
That’s a good point to discuss; I suppose I missed it in the video. Consider a diagonal covariance matrix (i.e. the off-diagonal values are zero): its eigenvalues are simply the per-variable variances. If it’s 2x2, we get 2 eigenvalues and their corresponding eigenvectors. The eigenvectors act as the two principal components, pointing along the directions of variance (i.e. the major and minor axes), with the spread given by the respective eigenvalues. A diagonal covariance therefore means an axis-aligned ellipse in 2D (an ellipsoid, etc., in higher dimensions). For a non-diagonal matrix the same concept holds, but the ellipse is no longer axis-aligned because of the covariance between variables, and the eigenvalues must be computed explicitly rather than read off the diagonal.
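The reply above can be checked directly with NumPy (a quick sketch of my own, with covariance values chosen just for illustration):

```python
import numpy as np

# Diagonal covariance: eigenvalues are just the per-variable variances,
# and the eigenvectors are the coordinate axes (axis-aligned ellipse).
diag_cov = np.diag([4.0, 1.0])
vals, vecs = np.linalg.eigh(diag_cov)
print(vals)  # [1. 4.] — exactly the variances, in ascending order

# Non-diagonal covariance: eigenvectors tilt, giving a rotated ellipse.
full_cov = np.array([[2.0, 1.2],
                     [1.2, 2.0]])
vals2, vecs2 = np.linalg.eigh(full_cov)
print(vals2)        # [0.8 3.2] — spread along minor/major axes
print(vecs2[:, 1])  # major-axis direction, here a 45-degree tilt
```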
Great video thank you so much sir
@TechVizTheDataScienceGuy
3 years ago
Glad you liked it.
@shashankkumar7920
3 years ago
@@TechVizTheDataScienceGuy Sir, I'm doing an M.Tech in a machine-learning-related field at IIT Delhi... your videos are very helpful 🙏
👌
nice
👍👍
nice explanation
@TechVizTheDataScienceGuy
3 years ago
Thanks Prabir.
At 11:55 I think the whole term is supposed to be r1 squared, not the square root of r1. Btw, wonderful explanation 👍
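For what it's worth, the isocontour condition is (x − μ)ᵀ Σ⁻¹ (x − μ) = r², so a point on the semi-axis of the ellipse does give r squared, not a square root. A quick numeric check (my own sketch, values chosen for illustration):

```python
import numpy as np

# Diagonal covariance with standard deviations sigma1, sigma2.
sigma1, sigma2, r = 2.0, 1.0, 3.0
sigma_inv = np.diag([1.0 / sigma1**2, 1.0 / sigma2**2])

# A point on the ellipse's semi-major axis: x = (r * sigma1, 0).
x = np.array([r * sigma1, 0.0])
quad_form = x @ sigma_inv @ x
print(quad_form)  # 9.0, i.e. r squared — not sqrt(r)
```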