Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)

Science & Technology

Generative adversarial networks (GANs) are a recently introduced class of generative models designed to produce realistic samples. This tutorial is intended to be accessible to an audience with no prior experience with GANs, and should prepare the audience to make original research contributions, whether applying GANs or improving the core GAN algorithms. GANs are universal approximators of probability distributions. Such models generally have an intractable log-likelihood gradient and require approximations such as Markov chain Monte Carlo or variational lower bounds to make learning feasible. GANs avoid both classes of approximation. The learning process is a game between two adversaries: a generator network that attempts to produce realistic samples, and a discriminator network that attempts to identify whether samples originated from the training data or from the generative model. At the Nash equilibrium of this game, the generator reproduces the data distribution exactly, and the discriminator cannot distinguish model samples from training data. Both networks can be trained using stochastic gradient descent, with exact gradients computed by backpropagation.
Topics include:
- An introduction to the basics of GANs.
- A review of work applying GANs to large image generation.
- Extending the GAN framework to approximate maximum likelihood, rather than minimizing the Jensen-Shannon divergence.
- Improved model architectures that yield better learning in GANs.
- Semi-supervised learning with GANs.
- Research frontiers, including guaranteeing convergence of the GAN game.
- Other applications of adversarial learning, such as domain adaptation and privacy.
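The two-player game described in the abstract can be sketched in a few dozen lines. The following is a toy illustration, not code from the tutorial: the data distribution N(2, 1), the one-parameter shift generator, the logistic discriminator, and the hand-derived gradients are all my own simplifications. A real GAN uses deep networks for both players and computes these gradients by backpropagation.

```python
# Toy GAN on 1-D data: the "data" distribution is N(2, 1), the generator
# shifts standard normal noise by a learned offset theta (G(z) = z + theta),
# and the discriminator is a logistic classifier D(x) = sigmoid(a*x + b).
# Both players take alternating SGD steps; gradients are written by hand.
import math
import random

random.seed(0)

MU = 2.0            # mean of the toy data distribution
theta = 0.0         # generator parameter
a, b = 0.1, 0.0     # discriminator parameters
lr = 0.05
batch = 32

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

for step in range(4000):
    real = [random.gauss(MU, 1.0) for _ in range(batch)]
    fake = [random.gauss(0.0, 1.0) + theta for _ in range(batch)]

    # Discriminator: ascend E[log D(x)] + E[log(1 - D(G(z)))]
    da = db = 0.0
    for x in real:
        d = sigmoid(a * x + b)
        da += (1.0 - d) * x      # d/da log D(x)
        db += 1.0 - d            # d/db log D(x)
    for g in fake:
        d = sigmoid(a * g + b)
        da -= d * g              # d/da log(1 - D(g))
        db -= d                  # d/db log(1 - D(g))
    a += lr * da / (2 * batch)
    b += lr * db / (2 * batch)

    # Generator: ascend the non-saturating objective E[log D(G(z))];
    # dG/dtheta = 1, so the chain rule gives (1 - D(g)) * a.
    dtheta = 0.0
    for g in fake:
        d = sigmoid(a * g + b)
        dtheta += (1.0 - d) * a
    theta += lr * dtheta / batch

print(round(theta, 2))  # theta ends up close to 2.0, the data mean
```

At convergence the generator has matched the data mean (the only mismatched statistic in this toy setup), and the discriminator's slope `a` collapses toward 0, i.e. D outputs roughly 1/2 everywhere, exactly the equilibrium the abstract describes.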

Comments: 108

  • @RATMCU · 4 years ago

    even the lecture on GAN had an Adversary

  • @luisluiscunha · 1 year ago

    ahahah

  • @Prithviization · 5 years ago

    Schmidhuber vs Goodfellow @ 1:03:00

  • @sudharsank5780 · 5 years ago

    God bless you. I came here for that!

  • @wennianyu5134 · 4 years ago

    @@sudharsank5780 +1

  • @donniedorko3336 · 4 years ago

    @@markmcelroy1872 and after I spent so much brainpower trying to follow the question smh

  • @donniedorko3336 · 4 years ago

    @nerd I'm sorry, I don't understand.

  • @niladrishekhardutt · 3 years ago

    *Grabs popcorn*

  • @dasayan05 · 4 years ago

    Schmidhuber: Can I ask a question ? GoodFellow: Oh, You_Again ?

  • @TheAntrikooos · 4 years ago

    An adversarial interaction about adversarial methods.

  • @christophschmidl · 5 years ago

    Jürgen Schmidhuber starts talking at 1:02:59.

  • @mranonymous8815 · 4 years ago

    Watch from 1:03:00 to 1:05:04 at double speed. The art of schmidhubering vs the art of patience while being schmidhubered.

  • @RATMCU · 4 years ago

    He took the time to answer the question for a bigger audience than Schmidhuber, directing them to the places where it matters, and saved his presentation time as well.

  • @TapabrataGhosh · 5 years ago

    1:03:00 is what you're all here for

  • @eugenedsky3264 · 4 years ago

    Learning Finite Automaton Simplifier: drive _ google _ com/file/d/1GSv89tiQmPDcnFEu4n4CqfaJcUJxVmL5KrSCJ047g4o/edit

  • @y.o.2478 · 2 years ago

    So you come to a video about an amazing technology just for petty drama?

  • @olarenee208 · 2 years ago

    @@y.o.2478 yes

  • @adamtran5747 · 1 year ago

    @@y.o.2478 1000%

  • @carlossegura403 · 1 year ago

    @@y.o.2478 Indeed.

  • @MunkyChunk · 2 years ago

    Very well organised lecture, thank you Ian!

  • @svenjaaunes2507 · 4 years ago

    So many people in the wild came up with this idea of training two networks against each other. Some lacked the deeper knowledge to continue; some tackled specific problems instead of generalizing. Ian Goodfellow is just another person who came up with this exact idea (reportedly in a bar, though that detail may be wrong; in a casual context for sure). Schmidhuber's paper is based on this exact idea too. Goodfellow should acknowledge the overwhelming overlap, let alone similarity, but he doesn't, because if he did, the attention would shift away from him to the earlier, identical work. He probably wasn't even aware of Schmidhuber's paper, since it dates back more than 20 years. That doesn't justify not giving proper credit, though. Goodfellow simply popularized GANs; he certainly didn't invent them. In fact, I bet there were others before Schmidhuber who came up with this core idea but just didn't keep at it.

  • @ruchirjain1163 · 2 years ago

    i came here just to understand GANs a bit better, didn't realize i would strike gold at 1:03:00

  • @kanishktantia7899 · 1 year ago

    Can you share your study plan, as in how you got there?

  • @ruchirjain1163 · 1 year ago

    @@kanishktantia7899 just watch a couple of videos on this, derive the min-max loss at least once on paper, and see a basic implementation of a GAN from scratch. I was doing it just to make a ppt on a research paper (StyleGAN2), so this was more than enough for me. Same goes for any other topic in DL. I was just dipping my feet into the deep sea of DL; I ain't touching it again.

  • @kanishktantia7899 · 1 year ago

    @@ruchirjain1163 Sure, that might help. I'm looking for higher-study opportunities abroad in this space. Can you help me in any capacity?

  • @ruchirjain1163 · 1 year ago

    @@kanishktantia7899 it depends, mate. I would say you should go for a thesis if your university offers that.

  • @kanishktantia7899 · 1 year ago

    @@ruchirjain1163 I graduated last year and am working these days, not currently enrolled at any university. That's exactly what I'm looking for: any lab or university in the US, or maybe somewhere else.

  • @sairocks128 · 6 years ago

    Thank you very much for uploading.

  • @masisgroupmarinesoftintell3299 · 3 years ago

    Thank you for uploading the wonderful video! The explanations were really clear.

  • @neuron8186 · 3 years ago

    Really good fellow

  • @wenboma4398 · 5 years ago

    Good speech, thx

  • @architjain6749 · 4 years ago

    I don't know if it's just me, but I enjoyed and understood Sir Jürgen Schmidhuber more than Sir Ian Goodfellow. Will definitely go check out his contributions.

  • @kanishktantia7899 · 1 year ago

    How do you study? Can you share your plan with me?

  • @backnforth8401 · 4 years ago

    1:03:00 what a way of getting called out

  • @Peace_in_you · 4 years ago

    I have a question: is this kind of network only good for the same data as the training data?

  • @bitbyte8177 · 4 years ago

    No

  • @busTedOaS · 3 years ago

    The generator, if successful, will recover the training distribution, and nothing else, if that answers the question.

  • @yoloswaggins2161 · 4 years ago

    If only Schmidhuber had more friends in the field he wouldn't be outmaneuvered in this manner.

  • @hummingbird7579 · 2 years ago

    It should not matter if someone has friends or not within the field!!!

  • @yoloswaggins2161 · 2 years ago

    @@hummingbird7579 It shouldn't but it does!

  • @hummingbird7579 · 2 years ago

    @@yoloswaggins2161 History has shown time and time again... you are right. It's really a shame.

  • @g.l.5072 · 1 year ago

    I don't think people realize this is one of the most important lectures of the past 100 years... GANs... they will be everywhere soon.

  • @tobiasarndt5640 · 1 year ago

    who is using gans?

  • @togo7022 · 5 months ago

    gans are dead lol

  • @harshinisewani5095 · 3 years ago

    Can someone give more insight into the exercises?

  • @cueva_mc · 3 years ago

    Can someone explain in English the reason for the confrontation?

  • @Karl_Squell · 3 years ago

    stats.stackexchange.com/questions/251460/were-generative-adversarial-networks-introduced-by-j%c3%bcrgen-schmidhuber/301280#301280

  • @AvielLivay · 1 year ago

    13:24 when searching for the best theta, Ian sums over the log of the probabilities rather than over the probabilities. Why?

  • @piclkesthedrummer6439 · 1 year ago

    If it's still relevant, I'll try to help. This is maximum likelihood estimation: the objective is the product of the estimated probabilities over the training dataset. Since the derivative of a product is awkward to work with, we take the log, which turns the product into a sum of logs. Because log is monotonically increasing, this doesn't change the location of the maximum, and it's easier to differentiate. Hope it helped.
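    The point in the reply above is easy to check numerically (illustrative numbers, not from the video): the product of many per-example probabilities underflows to zero in floating point, while the sum of their logs stays finite, and since log is monotonic both have the same maximizing theta.

    ```python
    # Product of many probabilities vs. sum of their logs.
    # The product underflows double precision; the log-sum does not.
    import math
    import random

    random.seed(1)
    # Toy per-example likelihoods (stand-ins for p_model(x_i; theta)).
    probs = [random.uniform(0.1, 0.9) for _ in range(2000)]

    product = 1.0
    for p in probs:
        product *= p          # shrinks toward zero, eventually underflows

    log_sum = sum(math.log(p) for p in probs)  # stays finite and usable

    print(product)   # 0.0 -- underflowed, useless for optimization
    print(log_sum)   # a large negative but finite number
    ```

    This is why likelihood-based training, including the maximum likelihood discussion at 13:24, always works with log-probabilities.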

  • @svenjaaunes2507 · 4 years ago

    predictability minimization is almost literally the same thing as GANs

  • @busTedOaS · 3 years ago

    almost literally kind of exactly vaguely the same thing.

  • @OttoFazzl · 4 years ago

    I think he misspoke at 31:39: he said that we want x to have a higher dimension than z. Instead, z should have a higher dimension than x, to provide full support over the space of x and avoid learning a lower-dimensional manifold.

  • @busTedOaS · 3 years ago

    In practice it's almost never set up that way: z has about 100 entries while x is 500x500x3 or something similar. If it were the other way around, the noise input would have superfluous entries, which I believe is what he means by "learning lower-dimensional manifolds". However, to perfectly reconstruct the training distribution, I agree it makes sense to have it exactly that way, and I'm confused by how he worded that whole part.

  • @akshayshrivastava97 · 3 years ago

    yeah, that confused me too. Good to see someone agrees that it should be the other way round.

  • @Marcos10PT · 4 years ago

    Schmidhuber has a point

  • @hummingbird7579 · 2 years ago

    I feel bad for him. He deserves more recognition.

  • @dibyaranjanmishra4272 · 5 years ago

    THERE IS NO BETTER LECTURE ON INTRODUCTION TO GAN. PERIOD.

  • @robbiedozier2840 · 3 years ago

    THANKS FOR SHARING

  • @user-pz7sl4qq9v · 2 years ago

    Well well well, another prodigy from Stanford who has made AI more complicated and sophisticated for the greater good of humanity 😔... Isn't GAN what's conducting the war now lol 😂. I remember the movie WARGAMES.

  • @nikhilmuthukrishnan7222 · 5 years ago

    Pure genius. Is it available in R?

  • @EB3103 · 2 years ago

    This Ian kid is rude and not so Goodfellow. Schmidhuber just politely asked a question and got attacked.

  • @EB3103 · 2 years ago

    Also, Schmidhuber could be his father; he should show a little more respect.

  • @NavinF · 2 years ago

    @@EB3103 Ok boomer

  • @Nickyreaper2008 · 2 years ago

    We're talking about a guy who says he invented "generative adversarial networks" when the paper clearly lists seven other people, university staff and more, working on the project. Of course he's gonna talk like that.

  • @floydamide · 1 month ago

    @@Nickyreaper2008 He also likes to call himself "The industry lead"

  • @pranav7471 · 4 years ago

    Such a fake. Schmidhuber is the true creator of GANs, and Goodfellow had the nerve to shut him up.

  • @busTedOaS · 3 years ago

    Two people inventing more or less the same thing independent of each other has happened many times in history, for example Newton's method or Darwin's theory of natural selection. Calling either of them fake is rather presumptuous.

  • @pranav7471 · 3 years ago

    @@busTedOaS bro, Goodfellow came decades after this guy; that's not called simultaneous invention

  • @busTedOaS · 3 years ago

    @@pranav7471 That's why I said independent, not simultaneous. Bro.

  • @pranav7471 · 3 years ago

    @@busTedOaS I completely understand that, but you need to credit the first creator too; that's the only problem I have. As far as I know, Goodfellow was the first one to make adversarial networks work. Schmidhuber worked with 1000x worse hardware and wasn't able to get any real results, so it was just a cool idea with nothing solid backing it up. Thus Goodfellow deserves credit, but not all of it. Even after all these arguments, Goodfellow refused to acknowledge the clear similarity between the works and cite it, which is blatantly unethical from an academic standpoint.

  • @busTedOaS · 3 years ago

    @@pranav7471 Schmidhuber had modern hardware in 2014, plus, presumably, years of experience with adversarial models ahead of Goodfellow. I don't see any disadvantage for Schmidhuber there. I agree that one should cite related work, and Goodfellow does this consistently; in fact that's what he did right before the confrontation. Why would he specifically ignore Schmidhuber's work while citing many other works with even stronger similarities? The reasonable explanations I can come up with are 1) personal spite, or 2) he honestly thinks the techniques are sufficiently different.

  • @akshayshrivastava97 · 3 years ago

    Ok, people like Dr. Schmidhuber need to understand that a tutorial is not the place for this kind of discussion. It could easily have been taken offline. Also, once the presenter says they have no desire to discuss it at that moment, back off. It's conceited and self-important to think your argument with the presenter is more important than everyone else, who paid for NIPS and had been looking forward to this tutorial, getting their time and money's worth. That said, I'm not downplaying what Dr. Schmidhuber was trying to point out, simply saying it could have been discussed differently and elsewhere.

  • @MrMSS22 · 3 years ago

    If Schmidhuber had raised his concern only offline, it would not have come into the focus of academic publicity the way it did. It's reasonable to assume the latter was his intention, so it didn't matter that Goodfellow's presentation was a tutorial.

  • @youtubeadventurer1881 · 5 years ago

    Why is he obfuscating everything with needless mathematical jargon that most ML researchers won't understand? This stuff actually isn't so complicated that you need a degree in mathematics to understand it. You can understand it on a deep level with only high school mathematics if it is explained properly.

  • @teckyify · 4 years ago

    Oh sorry, what would you like to talk about instead? Visual Studio Code themes or the latest JavaScript framework? Any ML course at university is heavy on math, Einstein.

  • @OttoFazzl · 4 years ago

    Engineering details are straightforward, however, this is a NIPS lecture, they have to have theoretical justification about how it works. If you want to engineer a working system you don't need all this, I agree with that.

  • @robbiedozier2840 · 3 years ago

    Man, it’s almost like Computer Science is a subset of Mathematics...

  • @robbiedozier2840 · 3 years ago

    @@sZlMu2vrIDQZBNke8ENmEKvzoZ lmao

  • @judedavis92 · 2 years ago

    GO write your HTML code, kid.
