Unsupervised Deep Learning - Google DeepMind & Facebook Artificial Intelligence NeurIPS 2018

Science & Technology

Presented by Alex Graves (Google DeepMind) and Marc'Aurelio Ranzato (Facebook)
Presented December 3rd, 2018
This tutorial on unsupervised deep learning covers in detail the approach of simply 'predicting everything' in the data, typically with a probabilistic model, which can be seen through the lens of the Minimum Description Length (MDL) principle as an effort to compress the data as compactly as possible.
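
As a rough illustration of that compression view (a minimal sketch with a hypothetical bigram model and toy data, not anything from the tutorial itself): under an optimal entropy coder, a symbol the model assigns probability p costs -log2(p) bits, so a model that predicts the data well also compresses it well.

```python
import math

def description_length_bits(sequence, model_prob):
    """Total code length of `sequence` under the model, in bits.

    Each symbol costs -log2 p(symbol | previous symbol); the first
    symbol is treated as given. Under an optimal entropy coder this
    is the length of the compressed sequence, so minimising it is
    the same as maximising the model's predictive likelihood (MDL).
    """
    return sum(-math.log2(model_prob(prev, x))
               for prev, x in zip(sequence, sequence[1:]))

def bigram_prob(prev, x):
    """A hypothetical bigram model over the alphabet {a, b}."""
    table = {('a', 'a'): 0.9, ('a', 'b'): 0.1,
             ('b', 'a'): 0.5, ('b', 'b'): 0.5}
    return table[(prev, x)]

print(description_length_bits("aaaaabaa", bigram_prob))  # ~5.1 bits
```
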
Alex Graves is a research scientist at DeepMind. He did a BSc in Theoretical Physics at Edinburgh and obtained a PhD in AI under Jürgen Schmidhuber at IDSIA. He was also a postdoc at TU Munich and under Geoffrey Hinton at the University of Toronto.

Comments: 26

  • @mohammadkhalooei637 · 5 years ago

    Thank you so much for your interesting presentation!

  • @sofdff · 9 days ago

    Amazing

  • @bingeltube · 5 years ago

    Highly recommended! Talks by two very renowned researchers.

  • @zkzhao279 · 5 years ago

    Slides: ranzato.github.io/

  • @siegmeyer995 · 5 years ago

    Really useful! Thank you

  • @Troyster94806 · 5 years ago

    Maybe it's possible to use narrow AI to figure the optimum method of unsupervised learning for us.

  • @kazz811 · 5 years ago

    Great talks, but I wish Alex Graves had paced his talk better to focus on the interesting stuff instead of the more well-known ideas.

  • @machinistnick2859 · 3 years ago

    thank god

  • @messapatingy · 4 years ago

    What is density modelling?

  • @SudhirPratapYadav · 2 years ago

    "Modelling" means building a model from data that can predict it; "density" here means a probability density function. So density modelling is estimating, from data, the probability distribution of the thing being modelled.
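
(As a minimal illustration of the reply above: a hypothetical 1-D example, fitting the simplest density model, a Gaussian, by maximum likelihood; the data and numbers are made up.)

```python
import numpy as np

# Samples from some unknown source; density modelling means estimating
# the probability density function that generated them.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=1000)

# Maximum-likelihood fit of a Gaussian density model.
mu, sigma = data.mean(), data.std()

def density(x):
    """Estimated pdf p(x) under the fitted Gaussian."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

print(density(2.0))   # high density near where the data concentrates
print(density(10.0))  # essentially zero far from the data
```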

  • @reinerwilhelms-tricarico344 · 3 years ago

    0.5 < P(the cat sat on the mat | google talk) < 1

  • @AnimeshSharma1977 · 5 years ago

    getting the metric right seems like feature engineering...

  • @vsiegel · 2 years ago

    He does not fully understand, I think.

  • @jabowery · 5 years ago

    About 17 minutes in, I had to stop listening because I felt like I had lost about a standard deviation of IQ. Hasn't this guy ever heard of Solomonoff induction? Hasn't he ever talked to Shane Legg? The intrinsic motivation is lossless compression, and if the agent is active, the decision-theoretic utility determines the explore/exploit tradeoff, as in AIXI. If passive, it just compresses whatever it's given as data.

  • @theJACKATIC · 5 years ago

    That's Alex Graves... well renowned at DeepMind. He's published papers with Shane Legg.

  • @webxhut · 5 years ago

    Fish!

  • @jabowery · 5 years ago

    @@theJACKATIC I listened to the rest and he did, finally, bring in compression as one would expect of someone with his background. And it does appear important. His presentation threw me off. At a meta level, he really should start with the "high level coding" of his presentation: Describe the space in terms of AIXI's unification of Solomonoff Induction and Sequential Decision Theory before breaking down into his 2x2 taxonomy. That way it would be clear that "unsupervised learning" is simply lossless compression toward Solomonoff Induction's use of the KC program's "latent representations". He appears to have his head so far into the techniques of lossless compression that he elides the "top down" definition of AGI as the start of his "high level".
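
(To make the comment's compression point concrete: the compressed size from any lossless compressor is a computable upper bound on Kolmogorov complexity, so shorter codes signal discovered structure. A minimal sketch using off-the-shelf zlib rather than a learned model; the strings are purely illustrative.)

```python
import os
import zlib

def compressed_bits(data: bytes) -> int:
    """Upper bound on the data's Kolmogorov complexity, in bits."""
    return 8 * len(zlib.compress(data, 9))

structured = b"the cat sat on the mat " * 100   # highly regular
random_like = os.urandom(len(structured))       # no structure to find

print(compressed_bits(structured))   # small: the repetition is modelled away
print(compressed_bits(random_like))  # near-incompressible
```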

  • @coolmechelugwu7305 · 5 years ago

    @@jabowery Some people are not so advanced in this field, and starting from the known and moving to the unknown is a great technique for passing on knowledge. Great presentation🙋

  • @jabowery · 5 years ago

    @@coolmechelugwu7305 Solomonoff Induction is just Ockham's Razor for the Turing Age, so there's no real challenge in coming up with an exoteric framing. Sequential Decision Theory can be framed quite simply as well: if you know the outcome of all choices available to you (provided by Solomonoff Induction), decisions become trivial. The reason I'm hammering on this is that the failure to understand lossless compression's value as the intrinsic utility function of unsupervised learning has untold opportunity costs to society: the enormous resources poured not only into the social sciences but into social "experiments" conducted on vast populations without any serious notion of "informed consent" should be informed by the lossless compression of a wide range of longitudinal social data. Google DeepMind should be at the forefront of this given its background and Google's resources. See this question I put to Kaggle: www.kaggle.com/general/37155#post207935
