Yann LeCun: "Energy-Based Self-Supervised Learning"

Science & Technology

Machine Learning for Physics and the Physics of Learning 2019
Workshop IV: Using Physical Insights for Machine Learning
"Energy-Based Self-Supervised Learning"
Yann LeCun - Courant Institute of Mathematical Sciences, New York University & Facebook AI Research
Institute for Pure and Applied Mathematics, UCLA
November 18, 2019
For more information: www.ipam.ucla.edu/mlpws4

Comments: 23

  • @CosmiaNebula · 3 years ago

    28:24 "probability is derived from energy" probably refers to statistical mechanics, where any energy function on the possible states of a system defines a probability distribution on these states (Boltzmann distribution).

  • @imrematajz1624 · 3 years ago

    Try again at 0.75x normal speed... it makes a huge difference in comprehension! His mind is hyper-fast. And I am not a robot :-)

  • @whatsinthepapers6112 · 4 years ago

    Sounds like we all need to put more energy into Energy-based models

  • @visuality2541 · 4 years ago

    this is gold

  • @CristianGarcia · 4 years ago

    I think the Contrastive Predictive Coding paper achieves similar kinds of results for images and audio as the ones presented for text.
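
    For reference, a minimal sketch of the InfoNCE-style contrastive objective that
    CPC-like models optimize (the function name, tensor shapes, and temperature below
    are illustrative assumptions, not the paper's code):

        import torch
        import torch.nn.functional as F

        def info_nce_loss(context, positive, negatives, temperature=0.1):
            """Score the true target representation against negatives (CPC-style)."""
            pos = (context * positive).sum(dim=-1, keepdim=True)     # (B, 1)
            neg = torch.einsum('bd,bkd->bk', context, negatives)     # (B, K)
            logits = torch.cat([pos, neg], dim=1) / temperature      # (B, 1+K)
            target = torch.zeros(logits.size(0), dtype=torch.long)   # positive = class 0
            return F.cross_entropy(logits, target)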

  • @WeidiXie · 4 years ago

    And it actually also works on videos: arxiv.org/abs/1909.04656 kzread.info/dash/bejne/Zmd_q6qOpqvQcpM.html

  • @robbiero368 · 4 years ago

    So actually it takes us months and millions of examples to learn anything too, then; but what we learn first can be transferred to many things later.

  • @minhvu8909 · 4 years ago

    The slides: helper.ipam.ucla.edu/publications/mlpws4/mlpws4_15927.pdf

  • @ImGonnaShitYourPants · 2 years ago

    KZread, just because I watched 45 minutes of this while asleep, because I had a free Shrek movie in 4K UHD on with autoplay, doesn't mean I want it in my recommendations every time I log into KZread.

  • @robbiero368 · 4 years ago

    For images, would it not make more sense to just predict the label for the missing "thing" rather than the actual pixels? How many humans could do that, after all?

  • @robbiero368 · 4 years ago

    Actually, that's not true, is it? Our visual system is constantly replacing or imagining missing data.

  • @snippletrap · 4 years ago

    The ridge at 41:20, and the ambiguity it implies, calls to mind the gestalt idea of "multistability".

  • @christianleininger2954 · 2 years ago

    2:44 he says humans reach that level in 15 minutes of play, and that's after at least 10 years of being alive learning how the world works (physics, and predicting the future in their minds).

  • @snippletrap · 4 years ago

    The Chomskyans are right in part, for the same reason that LeCun mentions in the beginning of the lecture. What LeCun calls poor "sample efficiency" is what Chomsky calls "the poverty of the stimulus". Children require far less training data.

  • @agiisahebbnnwithnoobjectiv228 · 3 years ago

    The objective function of animal brains and therefore Human Level A.I. is impact maximization. You were chosen to receive this message. Help spread the word.

  • @ephi124 · 4 years ago

    "Babies learn by observation with little interaction", yes and that's because they inherit such capability from their parents: their neurons are already fine-tuned to have those features and the question is how do we enforce these in our ML models?

  • @Rishabhshukla13 · 4 years ago

    I guess pre-training is equivalent to that. So are genetic algorithms (in a different way, though).

  • @ephi124 · 4 years ago

    @@Rishabhshukla13 Which tells me our approaches to mimicking biological neurons have been a fiasco. Like he said, the way humans learn so quickly is neither supervised nor reinforced, but pre-training is. The only choice we have is to understand biological neurons (not superficially) and how evolution works, and see if we have the resources to replicate them. And I'm not even sure it is necessary to mimic biology in order to build intelligent machines.

  • @vast634 · 3 years ago

    @@ephi124 Neurons always work in groups within a cortical column. Artificial NNs treat them as singular logic elements. That is too fine-grained, and it is not their job in biology. The whole column is the logical element, not the single neuron.

  • @_chip · 4 years ago

    Why does he call his cost function an energy function? Isn’t that just a synonym?

  • @christoferberruzchungata2722 · 4 years ago

    Because his loss IS BASED on the concept of how an energy function should behave. Not all loss functions are inspired by energy functions. I believe he emphasizes the "energy-based" idea to make the strong point that he is borrowing the concept from physics and natural systems.
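
    A minimal sketch of that distinction, assuming a squared-error energy and a
    hinge-style contrastive loss (both choices are illustrative): the energy scores
    how compatible a pair (x, y) is, and the loss is then defined on top of the
    energy to shape it during training.

        import torch

        def energy(model, x, y):
            """Energy: scalar compatibility score; low means x and y go well together."""
            return ((model(x) - y) ** 2).sum(dim=-1)

        def contrastive_loss(model, x, y_good, y_bad, margin=1.0):
            """Loss built on the energy: pull E(x, y_good) down and push E(x, y_bad)
            up until it exceeds E(x, y_good) by at least `margin`."""
            e_good = energy(model, x, y_good)
            e_bad = energy(model, x, y_bad)
            return (e_good + torch.relu(margin + e_good - e_bad)).mean()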

  • @johnjewell5008 · 3 years ago

    I am all for asking questions, but when one of the premier AI researchers in the world is giving a talk, probably avoid asking about basic details of Transformers, especially when they are not the main focus of the talk. Hahaha, this made me cringe a bit.

  • @agiisahebbnnwithnoobjectiv228 · 3 years ago

    This is never gonna work
