Lecture 7 - Deep Learning Foundations: Neural Tangent Kernels

Course Webpage: www.cs.umd.edu/class/fall2020/...

Comments: 29

  • @TheAIEpiphany
    @TheAIEpiphany 2 years ago

    Cool video, thanks!
    00:00:00 Intro: linear regression
    00:23:55 NTKs start here
    01:01:33 Link between NNs and ODEs (ordinary differential equations)

  • @debadeepta
    @debadeepta 3 years ago

    Really nice lecture! I was looking to quickly learn NTKs before diving deep into the original papers and this really helped.

  • @zl7460
    @zl7460 2 years ago

    +1. Most well-explained DL lecture I've seen in a long time

  • @user-mm2xj2wj8w
    @user-mm2xj2wj8w 3 years ago

    Awesome lesson! Straight and clear!

  • @itachi7243456
    @itachi7243456 3 years ago

    These are fantastic, thanks!

  • @StratosFair
    @StratosFair 2 years ago

    Incredibly clear lecture; it allowed me to fill the gaps in my understanding of NTK. Thank you, professor!

  • @sikun7894
    @sikun7894 3 years ago

    Thank you so much for sharing these lectures! Really useful

  • @joonho0
    @joonho0 3 years ago

    Thanks a lot for sharing this lecture!

  • @dv019
    @dv019 3 years ago

    Great video, thank you! To the student asking about kernels: the word is overloaded. In linear algebra it means the set of all vectors mapped to 0 by a linear transformation. Sometimes Green's functions in PDEs are called integral kernels. In general, a kernel is "the central or most important part of something". I don't like how overloaded the word is either, but c'est la vie.

  • @DarkNinja-24
    @DarkNinja-24 1 year ago

    Beautiful explanation!

  • @weisenjiang9179
    @weisenjiang9179 3 years ago

    Great intro to NTK, benefited me a lot

  • @yuwu7547
    @yuwu7547 2 years ago

    Very useful and easy-to-follow lecture. Thanks a lot!

  • @nhl8586
    @nhl8586 2 years ago

    Super useful for understanding NTK in 15 mins!

  • @AyushSharma-ie7tj
    @AyushSharma-ie7tj 1 year ago

    Really nice lecture with a very even pace. Thank you for sharing.

  • @tanchienhao
    @tanchienhao 2 years ago

    Thanks for the awesome lectures!!

  • @mstislavmaslennikov326
    @mstislavmaslennikov326 2 years ago

    The lecturer is imho doing a great job explaining difficult material!

  • @da_lime
    @da_lime 2 years ago

    Awesome, thanks!

  • @sinaasadiyan
    @sinaasadiyan 1 year ago

    Great explanation, just subscribed!

  • @chenamora1653
    @chenamora1653 3 years ago

    So amazing

  • @ihany9061
    @ihany9061 2 years ago

    lifesaver!

  • @vi5hnupradeep
    @vi5hnupradeep 2 years ago

    Thank you so much!

  • @yuzhema2506
    @yuzhema2506 2 years ago

    Thanks for the nice lecture! One question: the bias term in the Taylor approximation seems to depend on x, which means the bias term varies across inputs x. This differs from the traditional kernel view, where the bias term is the same for every transformed input phi(x). In other words, for the NTK, the inputs in the transformed space do not strictly follow the same linear model. How should we interpret this deviation? Thanks
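
A worked form of the expansion being asked about (a sketch using the lecture's f(w, x) and w_0 notation, not taken from the slides):

```latex
% Linearization of the network around its initialization w_0
\[
  f(w, x) \;\approx\; f(w_0, x) \;+\; \big\langle \nabla_w f(w_0, x),\; w - w_0 \big\rangle
\]
% Tangent feature map and the induced (empirical) NTK
\[
  \phi(x) = \nabla_w f(w_0, x), \qquad
  K(x, x') = \big\langle \phi(x), \phi(x') \big\rangle
\]
% The leading term f(w_0, x) is an input-dependent offset: the linearized
% model is affine (not purely linear) in \phi(x), and only the gradient
% term enters the kernel.
```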

  • @sayeedchowdhury11
    @sayeedchowdhury11 2 years ago

    Thanks for the nice lecture. I have a query: we're evaluating the gradient at w0; does that mean the kernel is computed from gradients of an untrained NN that has just been initialized? I.e., is f(w, x) a trained NN or just an initialized one?
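
A minimal code sketch of that setup (a hypothetical two-layer network written in JAX, not the lecture's code): the kernel is formed from gradients taken at randomly initialized weights w_0, before any training.

```python
# Empirical NTK evaluated at initialization: K(x1, x2) = <grad_w f(w0, x1), grad_w f(w0, x2)>
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def init_params(key, d_in, width):
    # Random (untrained) weights w_0 for a two-layer scalar-output network.
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (width, d_in)) / jnp.sqrt(d_in),
        "W2": jax.random.normal(k2, (width,)) / jnp.sqrt(width),
    }

def f(params, x):
    # f(w, x): hidden layer with tanh, then a linear readout.
    return params["W2"] @ jnp.tanh(params["W1"] @ x)

def empirical_ntk(params, x1, x2):
    # Flatten the parameter gradients and take their inner product.
    g1, _ = ravel_pytree(jax.grad(f)(params, x1))
    g2, _ = ravel_pytree(jax.grad(f)(params, x2))
    return g1 @ g2

key = jax.random.PRNGKey(0)
w0 = init_params(key, d_in=3, width=1024)   # w_0: just initialized, never trained
x1, x2 = jnp.ones(3), jnp.arange(3.0)
print(empirical_ntk(w0, x1, x2))
```

In the wide-network regime the lecture discusses, this kernel computed at initialization stays approximately constant during training, which is why evaluating the gradients at w_0 is enough.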

  • @MetaOptimizer
    @MetaOptimizer 2 years ago

    41:07 Should we think of the large width (m) in the empirical observations as an extremely large network such as GPT-3? In other words, can "the width of parameters" be interpreted as "the number of trainable parameters"? Thanks for your valuable lecture :)

  • @meghbhalerao5208
    @meghbhalerao5208 2 years ago

    If I understand correctly, the NTK is derived only for the quadratic MSE loss, right? Can it be generalized to other loss functions?

  • @chongyizheng7758
    @chongyizheng7758 3 years ago

    Question about the first-order Taylor approximation of the neural network: why is the first term f(w_0, x) not included in the kernel function, given that it is nonlinear w.r.t. x?

  • @ramanasubramanyam1110
    @ramanasubramanyam1110 3 years ago

    The first derivative is included (and called the NTK) because it resembles the operation of a kernel on an input, i.e., a transformation function mapping to a higher dimension

  • @chongyizheng7758
    @chongyizheng7758 3 years ago

    @ramanasubramanyam1110 Thanks for your reply, but I don't think that's what I was asking. Let me clarify: my question is about the constant (first) term f(w_0, x) at 41:16, not the derivative (second) term in the equation. f(w_0, x) also seems to depend nonlinearly on x, so why is it excluded from the definition of the NTK?

  • @hw1451
    @hw1451 2 years ago

    I think since it's a constant, we can always subtract it from y.
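
A short worked version of that point (same notation as the linearization sketch above; assuming the squared-loss regression setup from the lecture):

```latex
% With the linearization f(w, x) \approx f(w_0, x) + <\phi(x), w - w_0>,
% the term f(w_0, x) is constant in w, so it can be folded into the target:
\[
  \tilde{y} \;=\; y - f(w_0, x)
  \quad\Longrightarrow\quad
  \tilde{y} \;\approx\; \big\langle \phi(x),\; w - w_0 \big\rangle,
\]
% which is a standard linear model in the tangent features
% \phi(x) = \nabla_w f(w_0, x); the kernel K(x, x') = <\phi(x), \phi(x')>
% is unchanged by this shift.
```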