Stanford CS236: Deep Generative Models I 2023 I Lecture 5 - VAEs

For more information about Stanford's Artificial Intelligence programs visit: stanford.io/ai
To follow along with the course, visit the course website:
deepgenerativemodels.github.io/
Stefano Ermon
Associate Professor of Computer Science, Stanford University
cs.stanford.edu/~ermon/
Learn more about the online course and how to enroll: online.stanford.edu/courses/c...
To view all online courses and programs offered by Stanford, visit: online.stanford.edu/

Comments: 4

  • @dohyun0047
    23 days ago

    @56:38 The class atmosphere around this point is really nice lol

  • @CPTSMONSTER
    28 days ago

    29:30 Infinite number of latent variables z
    30:10 Finite Gaussians: parameters can be chosen arbitrarily (lookup tables)
    30:30 Infinite Gaussians: not arbitrary, parameters chosen by feeding z through a neural network
    39:30 Parameters of the infinite Gaussian model
    40:30? Positive semi-definite covariance matrix
    41:30? Latent variable represented by an obscured part of the image
    50:00 Number of latent variables (binary variables, Bernoulli)
    52:00 Naive Monte Carlo approximation of the likelihood for partially observed data
    1:02:30? Modify the learning objective to do semi-supervised learning
    1:04:00 Importance sampling with Monte Carlo (sketched after these notes)
    1:07:00? Unbiased estimator; is q(z^(j)) supposed to be maximized?
    1:09:00 Biased estimator when computing the log-likelihood; proof by Jensen's inequality for concave functions (log is concave)
    1:14:30 Summary: log p_theta(x) is desired. Conditioned on latent variables z, with infinite Gaussians it is intractable. Do importance sampling with Monte Carlo. The base case k=1 shows a biased estimator for log p_theta(x). Jensen's inequality yields the ELBO. Optimize by choosing q.
    1:17:00 KUBO and other techniques for an upper bound; much trickier to get an upper bound
    1:18:40? Entropy, and equality when q is the posterior distribution
    1:19:40? E step of the EM algorithm
    1:20:30? Loop when training? x to z and z to x
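    A rough numeric sketch of the estimators referenced in these notes: naive Monte Carlo (52:00), importance sampling (1:04:00), and the Jensen-inequality bias of the log-estimate (1:09:00). The 1-D Gaussian model, the proposal q, and the sample counts are illustrative assumptions, chosen so that log p(x) has a closed form to compare against.

    # Illustrative sketch (not from the lecture): z ~ N(0,1), x|z ~ N(z,1),
    # so the true marginal is p(x) = N(0, 2) and log p(x) is known exactly.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    x = 1.5
    true_log_px = norm.logpdf(x, loc=0.0, scale=np.sqrt(2.0))

    k, n_repeats = 50, 2000   # samples per estimate, number of repeated estimates

    # Naive Monte Carlo (52:00): p(x) ~= (1/k) sum_j p(x|z_j), with z_j ~ p(z)
    z = rng.normal(0.0, 1.0, size=(n_repeats, k))
    naive = norm.pdf(x, loc=z, scale=1.0).mean(axis=1)

    # Importance sampling (1:04:00): z_j ~ q(z), reweight by p(z_j)/q(z_j).
    # Here q is a made-up proposal close to the true posterior p(z|x).
    q_mean, q_std = x / 2.0, 0.75
    zq = rng.normal(q_mean, q_std, size=(n_repeats, k))
    w = norm.pdf(zq, 0.0, 1.0) / norm.pdf(zq, q_mean, q_std)
    importance = (w * norm.pdf(x, loc=zq, scale=1.0)).mean(axis=1)

    print("true log p(x):               ", true_log_px)
    print("mean naive estimate of p(x): ", naive.mean())       # unbiased for p(x)
    print("mean IS estimate of p(x):    ", importance.mean())  # unbiased, lower variance
    # Taking the log first gives a downward-biased estimate of log p(x), by
    # Jensen's inequality for the concave log (1:09:00); this is the ELBO idea.
    print("mean log(naive estimate):    ", np.log(naive).mean())
    print("mean log(IS estimate):       ", np.log(importance).mean())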

  • @chongsun7872
    14 days ago

    @37:30 Why is p(x|z), the nonlinear transformation of an iid Gaussian distribution, another Gaussian? Can anyone explain this to me, please?

  • @artemkondratyev2805
    19 hours ago

    The way I understand it:
    - You make a modeling assumption about the distribution p(Z). It can be Gaussian, categorical, or anything else, but it's convenient to pick something "easy" like a Gaussian.
    - You make a modeling assumption about p(X|Z). Again, it can be Gaussian, Exponential, or anything else; it doesn't have to be the same family as p(Z), and again it's convenient to pick something "easy".
    - You make a final modeling assumption that the parameters (theta) of p(X|Z) depend on Z in some unknown and complicated way, i.e. theta = f(z), where f is some complex non-linear transformation (which you approximate with a neural network).

    So you don't really transform the distribution p(Z) into p(X|Z); you transform values of Z into the parameters theta of p(X|Z). All of this is just modeling assumptions. Then you hope that your assumptions match reality, and that you can find such Z and such a transformation f(z) = theta that p(X) is well approximated via p(X) = sum over z of [p(X|Z) * p(Z)] (the law of total probability). (A short numeric sketch of theta = f(z) follows below.)
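    A tiny sketch of that last point: z itself stays Gaussian, and a network f maps each sampled z to the parameters (mu, sigma) of p(x|z); nothing about p(x|z) is obtained by "transforming" the density of z. The shapes, layer sizes, and random weights below are made-up stand-ins for a trained decoder, not the lecture's exact model.

    # Illustrative sketch: theta = f(z), where f is an arbitrary nonlinear map
    # (here a random two-layer network standing in for a trained decoder).
    import numpy as np

    rng = np.random.default_rng(0)
    z_dim, h_dim, x_dim = 2, 16, 4

    W1, b1 = rng.normal(size=(h_dim, z_dim)), np.zeros(h_dim)
    W2, b2 = rng.normal(size=(2 * x_dim, h_dim)), np.zeros(2 * x_dim)

    def decoder(z):
        # f(z) -> parameters (mu, sigma) of p(x|z); the nonlinearity lives here.
        h = np.tanh(W1 @ z + b1)
        out = W2 @ h + b2
        mu, log_sigma = out[:x_dim], out[x_dim:]
        return mu, np.exp(log_sigma)

    # Prior assumption: z ~ N(0, I). Sampling x means: draw z, compute theta = f(z),
    # then draw x ~ N(mu(z), diag(sigma(z)^2)). Each fixed z gives a Gaussian p(x|z),
    # but the marginal p(x), a mixture over infinitely many z, is generally not Gaussian.
    z = rng.normal(size=z_dim)
    mu, sigma = decoder(z)
    x = mu + sigma * rng.normal(size=x_dim)
    print("z:", z)
    print("mu(z):", mu)
    print("sigma(z):", sigma)
    print("sampled x:", x)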
