Steve Brunton

A Neural Network Primer
Comments

  • @anthonymiller6234
    7 hours ago

    Awesome video and very helpful. Thanks

  • @kambizmerati1119
    10 hours ago

    Amazing lectures.

  • @codybarton2090
    1 day ago

The picture at @3:35 looks like witchcraft lol, how do you keep track of that much data?

  • @codybarton2090
    1 day ago

Thank you too, great video. Would they be building a quantum computer to be a single one of those dots, to read internet transaction logs based on web page dynamics and filter and feed data across apps?

  • @emmanueld92
    1 day ago

    Very interesting, thank you. What’s the difference between ESC and computing partial derivatives of J vs U?

  • @Tom-sp3gy
    1 day ago

You are the best ever!

  • @kepler_22b83
    1 day ago

So basically raising awareness that there are better approximations to "residual" integration. Thanks for the reminder. From my course on numerical computation, using better integrators is actually better than making smaller time steps, raising the possible accuracy given some limited number of bits for your floating-point numbers.
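The point above can be illustrated with a quick sketch (my own toy example, not from the video): integrating dx/dt = -x to t = 1 with the same step size, fourth-order Runge-Kutta is far more accurate than forward Euler, so raising the order beats shrinking the step.

```python
import numpy as np

def euler(f, x0, dt, n):
    # forward Euler: x_{k+1} = x_k + dt * f(x_k)
    x = x0
    for _ in range(n):
        x = x + dt * f(x)
    return x

def rk4(f, x0, dt, n):
    # classic fourth-order Runge-Kutta step
    x = x0
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

f = lambda x: -x
exact = np.exp(-1.0)  # x(1) for dx/dt = -x, x(0) = 1
err_euler = abs(euler(f, 1.0, 0.1, 10) - exact)
err_rk4 = abs(rk4(f, 1.0, 0.1, 10) - exact)
print(err_euler, err_rk4)  # RK4 error is orders of magnitude smaller
```

With dt = 0.1 the Euler error is around 2e-2 while RK4 is below 1e-5, consistent with their O(dt) vs O(dt^4) global accuracy.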

  • @piusmurimikangai8401
    1 day ago

    I have officially joined "the control boot camp" 😄

  • @SergeyPopach
    1 day ago

it turned out that we got a vector space with an orthonormal basis of infinite dimension, with an infinite number of eigenfunctions and their corresponding eigenvalues… just like in quantum physics

  • @zlackoff
    1 day ago

    Euler integration got dumped on so hard in this video

  • @amortalbeing
    1 day ago

Thanks a lot, this was great. But how do you do this on images?

  • @prikarsartam
    2 days ago

    If I have a very large video feed, isn't doing singular value decomposition extremely computationally expensive?

  • @Eigensteve
    2 days ago

    You can always do a randomized SVD to make it faster
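For readers curious what that reply means in practice, here is a minimal sketch of a randomized SVD in the style of Halko et al. (my own illustration; the rank `k` and oversampling `p` are assumed values, not from the video). The idea is to sketch the column space with a random test matrix, then do a small exact SVD in that subspace.

```python
import numpy as np

def randomized_svd(X, k, p=10, seed=0):
    # Approximate the top-k SVD of X using a random range sketch.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    P = rng.standard_normal((n, k + p))   # Gaussian test matrix
    Z = X @ P                             # sample the column space of X
    Q, _ = np.linalg.qr(Z)                # orthonormal basis for that sample
    Y = Q.T @ X                           # project X into the small subspace
    Uy, S, Vt = np.linalg.svd(Y, full_matrices=False)
    U = Q @ Uy                            # lift left vectors back up
    return U[:, :k], S[:k], Vt[:k, :]

# Test on an exactly rank-5 matrix, e.g. a tall "video feed" snapshot matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 200))
U, S, Vt = randomized_svd(X, k=5)
err = np.linalg.norm(X - U @ np.diag(S) @ Vt) / np.linalg.norm(X)
print(err)  # near machine precision, since X is exactly rank 5
```

The expensive full SVD on the 1000x200 matrix is replaced by a QR on a 1000x15 sketch plus an SVD of a 15x200 matrix, which is the source of the speedup for large video feeds.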

  • @HD-qq3bn
    2 days ago

I have studied neural ODEs for quite a long time and found they are good for initial value problems; however, for external-input problems they are really hard to train.

  • @etiennetienne
    2 days ago

I would vote for more details on the adjoint part. It is not very clear to me how to use AD for df/dx(t) now that x changes continuously (or do we select a clever integrator during training?).

  • @Heliosnew
    2 days ago

Nice presentation Steve! I just gave a very similar presentation on Neural ODEs a week prior. Would like to see them used one day for audio compression. Keep up the content!

  • @codybarton2090
    3 days ago

Wonder how AI is gonna use this?

  • @codybarton2090
    3 days ago

    Loved the video ❤️❤️

  • @smeetsv103
    3 days ago

If you only have access to the x-data and numerically differentiate to obtain dx/dt to train the Neural ODE, how does this noise propagate into the final solution? Does it act as regularisation?

  • @as-qh1qq
    3 days ago

    Amazing review. Engaging and sharp

  • @astledsa2713
    3 days ago

    Love your content ! Went through the entire complex analysis videos, and now gonna go through this one as well !

  • @1.4142
    3 days ago

    multi flashbacks

  • @Sagitarria
    3 days ago

this was so well done. a good complement is 3Blue1Brown's videos on convolutions

  • @ricardoceballosgarzon6100
    3 days ago

    Interesting...

  • @edwardgongsky8540
    3 days ago

I finally understand what 'e' is! Thanks, professor!

  • @edwardgongsky8540
    3 days ago

    Damn I'm still going through the ode and dynamical systems course, this new material seems interesting AF though

  • @daniellu9499
    3 days ago

very interesting course, love such a great video...

  • @anonym9323
    3 days ago

Does someone have an example repository or library so I can play with it?

  • @devinbae9914
    3 days ago

    Maybe in the Neural ODE paper?

  • @The018fv
    3 days ago

    Is there a model that can do integro-differential equations?

  • @codybarton2090
    3 days ago

    I love it great video

  • @hyperplano
    3 days ago

    So if I understand correctly, ODE networks fit a vector field as a function of x by optimizing the entire trajectory along that field simultaneously, whereas the residual network optimizes one step of the trajectory at a time?
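That reading can be made concrete with a toy sketch (my own, not from the video): a residual block computes one explicit Euler step x_{k+1} = x_k + dt * f(x_k) of a learned vector field, while a neural ODE's forward pass integrates that same field over the whole trajectory before the loss is evaluated. Here f is a stand-in linear "learned" field.

```python
import numpy as np

def f(x, W=np.array([[0.0, 1.0], [-1.0, 0.0]])):
    # stand-in "learned" vector field: a linear rotation
    return W @ x

def resnet_forward(x, dt, n_layers):
    # n_layers residual blocks == n_layers explicit Euler steps
    for _ in range(n_layers):
        x = x + dt * f(x)
    return x

def node_forward(x, t_final, n_steps):
    # "continuous-depth" forward pass: integrate f from 0 to t_final
    # (here with the same Euler scheme, to make the equivalence exact)
    dt = t_final / n_steps
    for _ in range(n_steps):
        x = x + dt * f(x)
    return x

x0 = np.array([1.0, 0.0])
# 10 residual blocks with dt = 0.1 trace the same path as integrating to t = 1
print(np.allclose(resnet_forward(x0, 0.1, 10), node_forward(x0, 1.0, 10)))
```

The difference in training is where the gradient comes from: the neural ODE backpropagates through (or adjoint-solves) the whole integrated trajectory, rather than treating each step's output as a separate layer activation.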

  • @erikkhan
    3 days ago

Hi Professor, what are some prerequisites for this course?

  • @joshnicholson6194
    4 days ago

    Very cool!

  • @smustavee
    4 days ago

    I have been playing with NODEs for a few weeks now. The video is really helpful and intuitive. Probably it is the clearest explanation I have heard so far. Thank you, Professor.

  • @topamazinggadgetsoftrendin2916
    4 days ago

    Very interesting

  • @moonice1194
    4 days ago

can you share slides/summaries?

  • @amortalbeing
    4 days ago

It was great, thanks to you and your mother for this amazing explanation

  • @amortalbeing
    4 days ago

    Thanks good

  • @muthukamalan.m6316
    4 days ago

    wonderful content, any code sample would be helpful

  • @MariaHeger-tb6cv
    4 days ago

    I was thinking about your comment that rules of physics become expressions to be optimized. Unfortunately, I think that they are absolute rules that should be enforced at every stage of the process. Maybe only at the last step? It’s like allowing an accountant to have errors knowing that the overall performance is better?

  • @evanparshall1323
    4 days ago

    Great Video! This derivation hinges on the fact that H=OC and the SVD of H=USV*. You make the assumption that O = US^.5 and C = S^.5V* in order to estimate A, B, and C. This assumption does not seem trivial to me. Why do you assume this? Thank you Steve!

  • @adamhuang1416
    5 days ago

I think an error can be found at around 21:50: there is no rho since it's the kinematic eddy viscosity. Congrats on the great video! :)

  • @student99bg
    5 days ago

Where are the other videos? This video looks like it is out of context; there is no derivation of anything in it

  • @mysillyusername
    5 days ago

    Beware: always say an "m by n matrix" rather than the sloppy "m times n matrix". The difference matters at minute 1:43!

  • @harikrishnanb7273
    6 days ago

Is the rate of change multiplication or addition?

  • @hsenagrahdeers
    7 days ago

    Why doesn't this work with the general Taylor series expansions of e^ix, sin(x), and cos(x) although it works perfectly with the Maclaurin series expansion? How do we generalize it then?
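One clarification on the question above (my own note, not from the video): the Maclaurin series *is* the Taylor series centered at a = 0, and expanding e^{ix} about any other center a gives the same function, since e^{ix} = e^{ia} * e^{i(x-a)}. A quick numerical check:

```python
import numpy as np
from math import factorial

def taylor_exp_ix(x, a=0.0, terms=30):
    # Taylor expansion of e^{ix} about center a:
    # e^{ix} = e^{ia} * sum_n (i (x - a))^n / n!
    return np.exp(1j * a) * sum(
        (1j * (x - a)) ** n / factorial(n) for n in range(terms)
    )

x = 1.2
direct = np.cos(x) + 1j * np.sin(x)          # Euler's formula
print(abs(taylor_exp_ix(x, a=0.0) - direct))  # Maclaurin case (a = 0)
print(abs(taylor_exp_ix(x, a=0.7) - direct))  # general Taylor center
```

Both truncated expansions agree with cos(x) + i sin(x) to machine precision, so the generalization is just a matter of carrying the e^{ia} prefactor and expanding in (x - a).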

  • @user-zc4mg1pi6w
    7 days ago

What does PDE stand for?

  • @luc423
    7 days ago

How did you make that video? With glass?

  • @ninafrd9913
    7 days ago

Hi, I love your work. What would you use to tune a PID for unknown system dynamics, if not genetic algorithms?

  • @arnold-pdev
    7 days ago

    PINNs have to be one of the most over-hyped ML concepts... and that's stiff competition.

  • @arnold-pdev
    7 days ago

    On one level, it's an unprincipled way of doing data assimilation. On another level, it's an unprincipled way of doing numerical integration. Yawn. Great vid tho!