A shallow grip on neural networks (What is the "universal approximation theorem"?)

Science & Technology

The "universal approximation theorem" is a catch-all term for a bunch of theorems regarding the ability of the class of neural networks to approximate arbitrary continuous functions. How exactly (or approximately) can we go about doing so? Fortunately, the proof of one of the earliest versions of this theorem comes with an "algorithm" (more or less) for approximating a given continuous function to whatever precision you want.
(I have never formally studied neural networks.... is it obvious? 👉👈)
The original manga:
[LLPS93] M. Leshno, V.Y. Lin, A. Pinkus, S. Schocken (1993). Multilayer feedforward networks with a non-polynomial activation function can approximate any function. Neural Networks, 6(6):861-867.
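
(Side note, not from the paper or the video: if you want to poke at the statement numerically, here is a tiny NumPy sketch that fits a one-hidden-layer network with a non-polynomial activation (tanh) to a continuous function on a closed interval. The target function, the width, and the random-feature least-squares fit are all just illustrative choices.)

```python
import numpy as np

# Illustrative sketch (not the construction from the paper): approximate a
# continuous function f on a closed interval by a one-hidden-layer network
#     N(x) = sum_i c_i * tanh(w_i * x + b_i)
# with a non-polynomial activation.  The inner weights/biases are sampled at
# random; only the outer weights c_i are fit, by least squares on a dense grid.

rng = np.random.default_rng(0)

def f(x):
    # The continuous target on [0, 2*pi]; any continuous function works.
    return np.sin(x) + 0.3 * np.abs(x - np.pi)

n_hidden = 200                                  # hidden-layer width
xs = np.linspace(0.0, 2.0 * np.pi, 1000)        # dense grid on the interval

w = rng.normal(scale=2.0, size=n_hidden)                # inner weights
b = rng.uniform(-2.0 * np.pi, 2.0 * np.pi, n_hidden)    # biases

H = np.tanh(np.outer(xs, w) + b)                # hidden activations, shape (1000, n_hidden)
c, *_ = np.linalg.lstsq(H, f(xs), rcond=None)   # outer weights via least squares

sup_err = np.max(np.abs(f(xs) - H @ c))
print(f"max |f - network| on the grid: {sup_err:.4f}")
```

Increasing n_hidden drives the sup error on the grid down, which is the flavour of the theorem; the theorem itself is an existence statement about approximation in the sup norm, not a training recipe.
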
________________
Timestamps:
00:00 - Intro (ABCs)
01:08 - What is a neural network?
02:37 - Universal Approximation Theorem
03:37 - Polynomial approximations
04:26 - Why neural networks?
05:00 - How to approximate a continuous function
05:55 - Step 1 - Monomials
07:07 - Step 2 - Polynomials
07:33 - Step 3 - Multivariable polynomials (buckle your britches)
09:35 - Step 4 - Multivariable continuous functions
09:47 - Step 5 - Vector-valued continuous functions
10:20 - Thx 4 watching

Comments: 33

  • @connor9024 · 1 month ago

    It’s t-22 hours until my econometrics final, I have been studying my ass off, I’m tired, I have no idea what this video is even talking about, I’m hungry and a little scared.

  • @SheafificationOfG · 29 days ago

    Did you pass? (Did the universal approx thm help?)

  • @davidebic · 21 days ago

@SheafificationOfG Using this theorem he could create a neural network that approximates test answers to an arbitrarily good degree, thus getting an A-.

  • @henriquenoronha1392 · 22 days ago

    Came for the universal approximation theorem, stayed for the humor (after the first pump up I didn't understand a word). Great video!

  • @raneena5079 · 1 month ago

    super underrated channel

  • @gbnam8 · 1 month ago

    as someone who is really interested in pure maths, i think that youtube should really have more videos like these, keep it up!

  • @SheafificationOfG · 1 month ago

    Happy to do my part 🫡

  • @dinoscheidt · 20 days ago

5:07 phew, this channel is gold. Basic enough that I understand what's going on as an applied ML engineer, and smart enough that I feel like I would learn something. Subscribed.

  • @neelg7057 · 1 month ago

    I have no idea what I just watched

  • @raspberryspicelatte65 · 20 days ago

    Did not expect to see a Jim's Big Ego reference here

  • @antarctic214 · 19 days ago

Re 6:20: the secant line approximation converges at least pointwise, but for the theorem we need uniform (sup-norm) convergence, and I don't see why that holds for the secant approximation.

  • @SheafificationOfG · 19 days ago

    Good catch! The secret sauce here is that we're using a smooth activation function, and we're only approximating the function over a closed interval. For a smooth function f(x), the absolute difference between df/dx at x and a secant line approximation (of width h) is bounded by M*h/2, where M is a bound on the absolute value of the second derivative of f(x) between x and x+h [this falls out of the Lagrange form of the error in Taylor's Theorem]. If x is restricted to a closed interval, we can choose the absolute bound M of the second derivative to be independent of x (and h, if h is bounded), and this gives us a uniform bound on convergence of the secant lines.
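
For reference, here is the estimate from the reply above written out (a minimal sketch, assuming f is twice continuously differentiable on the closed interval and M bounds |f''| there):

```latex
% Taylor's theorem with the Lagrange form of the remainder:
% for some \xi between x and x+h,
%   f(x+h) = f(x) + f'(x)\,h + \tfrac{1}{2} f''(\xi)\,h^2 .
% Dividing by h and subtracting f'(x) gives the secant-line error:
\left| \frac{f(x+h) - f(x)}{h} - f'(x) \right|
  = \tfrac{1}{2}\,\lvert f''(\xi)\rvert\, h
  \le \frac{M h}{2}.
% Since M can be chosen independently of x on the closed interval,
% the right-hand side is a uniform bound, so the secant approximations
% converge to f' in the sup norm as h -> 0.
```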

  • @Baer2 · 1 month ago

I don't think I'm part of the target group for this video (I have no idea what the fuck you are talking about), but it was still entertaining and allowed me to feel smart whenever I was able to make sense of anything (I know what f(x) means), so have a like and a comment, and good luck with your future math endeavors!!

  • @SheafificationOfG · 1 month ago

    Haha thanks, I really appreciate the support! The fact that you watched it makes you part of the target group. Exposure therapy is a pretty powerful secret ingredient in math.

  • @kennycommentsofficial · 20 days ago

@6:22 missed opportunity for the canonical "recall from grade school" joke

  • @SheafificationOfG · 20 days ago

    😔

  • @smellslikeupdog80 · 20 days ago

    your linguistic articulation is extremely specific and 🤌🤌🤌

  • @decare696 · 1 month ago

    This is by far the math channel with the best jokes. Sadly, I don't know any Chinese, so I couldn't figure out who 丩的層化 is. Best any translator would give me was "Stratification of ???"...

  • @SheafificationOfG · 1 month ago

    Haha, thanks! 層化 can mean "sheafification" and ㄐ is zhuyin for the sound "ji"

  • @akhiljalan11 · 20 days ago

    Great content

  • @98danielray · 28 days ago

    I suppose I can show the last equality of 9:04 using induction on monomial operators?

  • @SheafificationOfG · 28 days ago

    Yep! Linearity of differentiation allows you to assume q is just a k-th order mixed partial derivative, and then you can proceed by induction on k.

  • @gabrielplzdks3891 · 1 month ago

Any videos coming about Kolmogorov-Arnold networks?

  • @SheafificationOfG · 1 month ago

    I didn't know about those prior to reading your comment, but they look really interesting! Might be able to put something together in the future; stay tuned.

  • @korigamik · 26 days ago

    Can you share the pdf of the notes you show in the video?

  • @SheafificationOfG · 26 days ago

If you're talking about the source of the proof I presented, the paper is in the description: M. Leshno, V.Y. Lin, A. Pinkus, S. Schocken (1993). Multilayer feedforward networks with a non-polynomial activation function can approximate any function. Neural Networks, 6(6):861-867. If you're talking about the rest, I actually just generated LaTeX images containing specifically what I presented; they didn't come from a complete document. I might *write* such documents down the road for my videos, but that's heavily dependent on the disposable time I have, and on general demand.

  • @korigamik · 26 days ago

@SheafificationOfG Then demand there is. I love well-written explanations to read, not just watch.

  • @SheafificationOfG · 26 days ago

@korigamik I'll keep debating whether to write up supplementary material for my videos (though I don't want to make promises). In the meantime, though, I highly recommend reading the reference I cited: it's quite well-written (and, of course, the argument is more complete).

  • @noahgeller392 · 22 days ago

    banger vid

  • @SheafificationOfG · 22 days ago

    thanks fam

  • @kuzuma4523 · 1 month ago

Okay, but does the manga have good applications? Does it train faster or something? 😊 (Please help me, I like mathing but the world is corrupting me with its engineering)

  • @SheafificationOfG · 1 month ago

The manga is an enjoyable read (though it's an old paper), but it doesn't say anything about how well neural networks train; it's only concerned with the capacity of shallow neural networks for approximating continuous functions (that we already "know" and aren't trying to "learn"). In particular, it says nothing about training a neural network with tunable parameters (and fixed size). (I feel your pain, though; that's what brought me to make a channel!)

  • @kuzuma4523 · 1 month ago

@SheafificationOfG Fair enough. I'll still give it a read. Also, thanks for the content; it felt like fresh air watching high-level maths with comedy; I shall therefore use the successor function on your sub count.
