MIT 6.S191: Deep Generative Modeling

Science & Technology

MIT Introduction to Deep Learning 6.S191: Lecture 4
Deep Generative Modeling
Lecturer: Ava Amini
New 2024 Edition
For all lectures, slides, and lab materials: introtodeeplearning.com
Lecture Outline
0:00 - Introduction
6:10 - Why care about generative models?
8:16 - Latent variable models
10:50 - Autoencoders
17:02 - Variational autoencoders
23:25 - Priors on the latent distribution
32:31 - Reparameterization trick
34:36 - Latent perturbation and disentanglement
37:40 - Debiasing with VAEs
39:37 - Generative adversarial networks
42:09 - Intuitions behind GANs
44:57 - Training GANs
48:28 - GANs: Recent advances
50:57 - CycleGAN for unpaired translation
55:03 - Diffusion model sneak peek
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!

Comments: 21

  • @freddybrou405 · a month ago

    Thank you so much for the course. So interesting.

  • @erikkim4739 · a month ago

    so excited for this!

  • @ML-DS-AI-Projects · 27 days ago

    First, thank you Alexander and Ava for sharing the knowledge. After watching these videos, I realized that learning machine learning is not just a skill; teaching it is a much bigger skill.

  • @pradyumnanimbkar8011 · 24 days ago

    Cool and well-sorted.

  • @civilengineeringonlinecour7143 · a month ago

    Awesome lecture. 🎉

  • @arpandas2758 · a month ago

    Thank you for the amazing content. Please add the slides for this lecture to the website; they're still not there. Cheers :)

  • @shakshamkarki7061 · 29 days ago

    Not an MITian, but learning at MIT.

  • @4threich166 · a month ago

    Beauty with brains ❤

  • @4threich166 · a month ago

    Queen

  • @ahmedelsafty6654 · 24 days ago

    First, thank you Ava for sharing the knowledge. I'm not able to understand why the standard autoencoder performs a deterministic operation.

  • @akshay5011 · 6 hours ago

    I guess it's because once training is done, the neural network weights are fixed; there is no backpropagation involved after training, so the weights can't change. Thus, for every input you get the same output, since the learned function doesn't involve any probabilistic element.
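    The point in the reply above can be sketched with a toy, pure-Python example (the weights and shapes here are made-up values, not the lecture's code): a trained standard autoencoder is a fixed function, so the same input always maps to the same output, while a VAE-style encoder predicts a distribution and samples from it.

    ```python
    import random

    # Toy "trained" autoencoder: after training, the learned weights are
    # fixed constants, so encoding/decoding is a deterministic function.
    w_enc, w_dec = 0.5, 2.0          # assumed fixed learned weights

    def autoencode(x):
        z = w_enc * x                # deterministic encoding
        return w_dec * z             # deterministic reconstruction

    print(autoencode(3.0) == autoencode(3.0))  # True: no randomness involved

    # VAE-style encoder: outputs a latent distribution (mean, std) and
    # *samples* from it, so two passes over the same input generally differ.
    def vae_encode(x, rng):
        mu, sigma = w_enc * x, 0.1   # predicted latent mean and std (toy values)
        return mu + sigma * rng.gauss(0.0, 1.0)  # reparameterization: mu + sigma * eps

    rng = random.Random(42)
    z1, z2 = vae_encode(3.0, rng), vae_encode(3.0, rng)
    print(z1 == z2)                  # False: sampling makes it stochastic
    ```

    This is exactly the distinction the lecture draws between autoencoders and variational autoencoders: determinism comes from the fixed learned function, and the VAE reintroduces randomness through latent sampling.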

  • @catalinmanea1560 · a month ago

    Awesome, many thanks for your initiative! Keep up the great work.

  • @genkideska4486 · a month ago

    5 mins more let's gooooo

  • @geoffreyporto · a month ago

    I have a dataset of 120 cell phone photographs of the skin of dogs with 12 types of skin diseases, 10 images per disease. What type of Generative Adversarial Network (GAN) is best suited to augment my dataset with quality so I can train my DL model? DCGAN, ACGAN, StyleGAN3, CGAN?

  • @TechWithAbee · a month ago

    just try them out

  • @faridsaud6567 · a month ago

    Try fine-tuning the models with your data.

  • @aurabless7552 · a month ago

    When GPT-4o lectures :D

  • @Lima3578user · a month ago

    Spellbound by the lecture, great insights. Is she Indian?

  • @dragonartgroup6982 · 10 days ago

    She's Persian

  • @gapcreator726 · a month ago

    Nice teaching, Amini ❤ and your curly hair is nice 😮
