On Characterizing the Capacity of Neural Networks using Algebraic Topology

Science & Technology

The learnability of different neural architectures can be characterized directly by computable measures of data complexity. In this talk, we reframe the problem of architecture selection as understanding how data determines the most expressive and generalizable architectures suited to that data, beyond inductive bias. After suggesting algebraic topology as a measure for data complexity, we show that the power of a network to express the topological complexity of a dataset in its decision region is a strictly limiting factor in its ability to generalize. We then provide the first empirical characterization of the topological capacity of neural networks. Our empirical analysis shows that at every level of dataset complexity, neural networks exhibit topological phase transitions. This observation allows us to connect existing theory to empirically driven conjectures on the choice of architectures for fully-connected neural networks. Finally, we provide some first steps in building a general theory of neural homology.
See more at www.microsoft.com/en-us/resea...
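The abstract measures data complexity via homology: the k-th Betti number counts k-dimensional holes (β₀ = connected components, β₁ = independent loops). As a minimal illustrative sketch (not code from the talk), here is how β₀ and β₁ can be computed for a graph, i.e. a 1-dimensional complex, using union-find and the Euler characteristic; the function name `betti_graph` is hypothetical:

```python
def betti_graph(vertices, edges):
    """Return (beta_0, beta_1) for a graph given as vertex and edge lists."""
    # Union-find to count connected components (beta_0).
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    b0 = len({find(v) for v in vertices})
    # For a graph, the Euler characteristic is V - E = beta_0 - beta_1,
    # so beta_1 = E - V + beta_0 (number of independent cycles).
    b1 = len(edges) - len(vertices) + b0
    return b0, b1

# A 4-cycle: one component, one loop.
print(betti_graph([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (1, 1)
# Two isolated points: two components, no loops.
print(betti_graph([0, 1], []))  # (2, 0)
```

For higher-dimensional holes in point-cloud data (as in the talk's decision regions), one would instead build a simplicial complex and run a persistent-homology library.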

Comments: 26

  • @kadentaylor8503 · 3 years ago

    Wow, what a patient presenter. When the guy said, "No other interpretation type-checks, so this has to be the right interpretation", I wanted to slap him through the screen and go, "He knows dude, he wrote it!" That kind of intellectual gatekeeping crap grinds my gears.

  • @tigeruby · 6 years ago

    one of the more fresh and insightful talks i've seen in a while when it comes to deep learning/neural network architectures.

  • @grauf0x · 4 years ago

    Awesome 15 minute presentation, could do without the peanut gallery making it endless

  • @AnupreetBhuyar · 1 year ago

    Amazing approach towards architecture selection, would learn more about it. Thank you so much!

  • @HamidKarimiDS · 5 years ago

    Nice talk. Why so many interruptions? For God's sake. It is not his Ph.D. defense exam! I wish they had left their questions to the end!

  • @FrankBria · 4 years ago

    I used to call that "Whack a Quant" when I presented in front of groups like that. :)

  • @FrankBria · 4 years ago

    Is it clear that a combination of layers and hidden units is homology-invariant? In other words, can we be sure that under continuous deformation, these neural networks perform similarly?

  • @FrankBria · 4 years ago

    In other words, he's asserting a map T: Z^2 -> Z^inf where the (a,b) represent a neural network with a layers and b hidden units, and the range is the infinite n-tuple of homological dimensions. Is T homologically invariant? It seems like you'd have to prove something about the open sets in the data set, etc. It seems like the definition of open set in the data set would be very important.

  • @ahme0307 · 6 years ago

    Nice presentation, Guss... but too many interruptions

  • @BabaYaraMUFC · 6 years ago

    Really cool paper.

  • @socratesantypas1424 · 6 years ago

    Is that a Stokes Theorem tattoo?

  • @TheAlphazeta09 · 6 years ago

    Nice Nice Nice :)

  • @abiolalapite2704 · 6 years ago

    An interesting talk, marred by an excessive number of interruptions of the speaker by audience members who all seem to be obsessed with proving to the world how clever they are, but who for the most part end up establishing the opposite through their mostly irrelevant segues. Microsoft should do a better job of reining in such audience interruptions, at least by confining them to the end of the talks, so we can get less disjointed presentations that are easier to follow. In addition, whoever filmed this needs to learn the basics of colour-balance, as far too often the slides were made difficult to read because of an unpleasant blue cast.

  • @cadebruce4401 · 3 years ago

    I thought that the questions/comments were really good.

  • @cadebruce4401 · 3 years ago

    nvm I had not gone all the way through the video lol

  • @levizhou6726 · 2 years ago

    Naive.

  • @isleofdeath · 3 years ago

    I guess most of the audience had some coursera DataScience course done and never really studied math, computer science etc. At least judging by their questions and behaviour. Incredible that the presenter has to repeat 3 times that the capability of getting the homology is the target at 30:00 to 30:40...

  • @StephenPaulKing · 3 years ago

    How many holes are in a disconnected space?

  • @StephenPaulKing · 3 years ago

    One? None?

  • @axe863 · 7 months ago

    Moving away from explainability/feature engineering and knockoff construction is problematic in terms of overfitting risk, even if said overfitting is harder to uncover. More modern approaches are causality gnostic/OOD generalizable and have explainability. Complicated nonstationarities are the rule, not the exception.

  • @superjaykramer · 6 years ago

    And the conclusion is???

  • @steliostoulis1875 · 5 years ago

    What do you mean by conclusion? This isn't a fairytale.

  • @Estoniran · 6 years ago

    Poor guy lol the audience is ruthless

  • @alexandergallandt6521 · 6 years ago

    COMPUTAH

  • @alexandergallandt6521 · 6 years ago

    How fascinating I love bitcoin!!!
