The picture at @3:35 looks like witchcraft lol, how do you keep track of that much data?
@codybarton2090 · 1 day ago
Thank you, great video. Would they be building a quantum computer to be a single one of those dots, to read internet transaction logs based on web page dynamics to filter and feed data across apps?
@emmanueld92 · 1 day ago
Very interesting, thank you. What’s the difference between ESC and computing partial derivatives of J vs U?
@Tom-sp3gy · 1 day ago
You are the best ever!
@kepler_22b83 · 1 day ago
So basically raising awareness that there are better approximations to "residual" integration. Thanks for the reminder. From my course on numerical computation, using better integrators is actually better than making smaller time steps, raising the achievable accuracy given some limited number of bits for your floating point numbers.
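The trade-off this comment describes (a higher-order integrator beats a smaller step size) can be sketched in a few lines; the test problem dx/dt = x and the step counts below are arbitrary choices for illustration, not anything from the video:

```python
import numpy as np

def euler(f, x0, t1, n):
    # Fixed-step forward Euler integration of dx/dt = f(x) from 0 to t1.
    x, h = x0, t1 / n
    for _ in range(n):
        x = x + h * f(x)
    return x

def rk4(f, x0, t1, n):
    # Classic fourth-order Runge-Kutta with the same fixed step layout.
    x, h = x0, t1 / n
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

f = lambda x: x                 # dx/dt = x, exact solution x(t) = e^t
exact = np.e                    # x(1) for x(0) = 1
err_euler = abs(euler(f, 1.0, 1.0, 1000) - exact)  # 1000 small steps
err_rk4   = abs(rk4(f, 1.0, 1.0, 10) - exact)      # 10 large steps
print(err_euler, err_rk4)  # RK4 with 100x fewer steps is still far more accurate
```

Fewer, larger steps with a higher-order method also means fewer floating-point operations accumulating round-off, which is the finite-precision point the comment makes.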
@piusmurimikangai8401 · 1 day ago
I have officially joined "the control boot camp" 😄
@SergeyPopach · 1 day ago
It turned out that we got a vector space with an orthonormal basis of infinite dimension, which has an infinite number of eigenfunctions and their corresponding eigenvalues… just like in quantum physics.
@zlackoff · 1 day ago
Euler integration got dumped on so hard in this video
@amortalbeing · 1 day ago
Thanks a lot, this was great. But how do you do this on images?
@prikarsartam · 2 days ago
If I have a very large video feed, isn't doing singular value decomposition extremely computationally expensive?
@Eigensteve · 2 days ago
You can always do a randomized SVD to make it faster
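A minimal sketch of that randomized-SVD idea (in the spirit of the Halko–Martinsson–Tropp algorithm; the oversampling value and the rank-5 test matrix are illustrative assumptions): project the matrix onto a random low-dimensional subspace, then take an exact SVD of the small projected matrix.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, rng=None):
    # Approximate rank-k SVD of A via random range sketching.
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal basis for range(A)
    B = Q.T @ A                                       # small (k+p) x n matrix
    Ub, S, Vt = np.linalg.svd(B, full_matrices=False) # cheap exact SVD
    return (Q @ Ub)[:, :k], S[:k], Vt[:k, :]

# Exactly rank-5 test matrix, so a rank-5 randomized SVD recovers it well.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))
U, S, Vt = randomized_svd(A, k=5, rng=0)
err = np.linalg.norm(A - (U * S) @ Vt) / np.linalg.norm(A)
print(err)  # near machine precision for an exactly rank-5 matrix
```

The expensive SVD is only ever taken of the small sketched matrix, which is why this scales to large video feeds with low-rank structure.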
@HD-qq3bn · 2 days ago
I have studied neural ODEs for quite a long time, and found they are good for initial value problems; however, for problems with external inputs they are really hard to train.
@etiennetiennetienne · 2 days ago
I would vote for more details on the adjoint part. It is not very clear to me how to use AD for df/dx(t) now that x changes continuously (or do we select a clever integrator during training?).
@Heliosnew · 2 days ago
Nice presentation Steve! I gave a very similar presentation on neural ODEs just a week prior. Would like to see it used one day for audio compression. Keep up the content!
@codybarton2090 · 3 days ago
Wonder how AI is gonna use this?
@codybarton2090 · 3 days ago
Loved the video ❤️❤️
@smeetsv103 · 3 days ago
If you only have access to the x data and numerically differentiate it to obtain dx/dt to train the neural ODE, how does this noise propagate into the final solution? Does it act as regularisation?
@as-qh1qq · 3 days ago
Amazing review. Engaging and sharp
@astledsa2713 · 3 days ago
Love your content! Went through the entire complex analysis videos, and now gonna go through this one as well!
@1.4142 · 3 days ago
multi flashbacks
@Sagitarria · 3 days ago
This was so well done. A good complement is 3Blue1Brown's videos on convolutions.
@ricardoceballosgarzon6100 · 3 days ago
Interesting...
@edwardgongsky8540 · 3 days ago
I finally understand what 'e' is! Thanks, professor!
@edwardgongsky8540 · 3 days ago
Damn, I'm still going through the ODE and dynamical systems course, but this new material seems interesting AF though.
@daniellu9499 · 3 days ago
Very interesting course, love such great videos...
@anonym9323 · 3 days ago
Does someone have an example repository or library so I can play with it?
@devinbae9914 · 3 days ago
Maybe in the Neural ODE paper?
@The018fv · 3 days ago
Is there a model that can do integro-differential equations?
@codybarton2090 · 3 days ago
I love it, great video.
@hyperplano · 3 days ago
So if I understand correctly, ODE networks fit a vector field as a function of x by optimizing the entire trajectory along that field simultaneously, whereas the residual network optimizes one step of the trajectory at a time?
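That reading can be sketched in code. Here f is a made-up, fixed vector field standing in for a trained network (the tanh layer and weight matrix W are hypothetical): a residual block is one forward-Euler step of size 1 through that field, while the neural-ODE view integrates the same field with many small steps.

```python
import numpy as np

W = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def f(x):
    # A fixed "learned" vector field f(x) = tanh(W x); in a neural ODE,
    # W would be trained by backpropagating through the whole trajectory.
    return np.tanh(W @ x)

def resnet_forward(x, depth):
    # Residual network: each layer is one discrete update x <- x + f(x),
    # i.e. forward Euler with step size h = 1.
    for _ in range(depth):
        x = x + f(x)
    return x

def node_forward(x, t1, n):
    # Neural-ODE view: the same field integrated continuously in time
    # (small fixed Euler steps here as a stand-in for a real ODE solver).
    h = t1 / n
    for _ in range(n):
        x = x + h * f(x)
    return x
```

With t1 equal to the depth and one step per unit time, node_forward reproduces resnet_forward exactly, which is the sense in which a resnet is a discretized ODE; training then fits the field f along the whole trajectory rather than one layer's step at a time.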
@erikkhan · 3 days ago
Hi Professor, what are some prerequisites for this course?
@joshnicholson6194 · 4 days ago
Very cool!
@smustavee · 4 days ago
I have been playing with NODEs for a few weeks now. The video is really helpful and intuitive. Probably it is the clearest explanation I have heard so far. Thank you, Professor.
@topamazinggadgetsoftrendin2916 · 4 days ago
Very interesting
@moonice1194 · 4 days ago
Can you share slides/summaries?
@amortalbeing · 4 days ago
It was great, thanks to you and your mother for this amazing explanation.
@amortalbeing · 4 days ago
Thanks good
@muthukamalan.m6316 · 4 days ago
Wonderful content; any code sample would be helpful.
@MariaHeger-tb6cv · 4 days ago
I was thinking about your comment that rules of physics become expressions to be optimized. Unfortunately, I think that they are absolute rules that should be enforced at every stage of the process. Maybe only at the last step? It’s like allowing an accountant to have errors knowing that the overall performance is better?
@evanparshall1323 · 4 days ago
Great video! This derivation hinges on the fact that H = OC and the SVD H = USV*. You make the assumption that O = US^(1/2) and C = S^(1/2)V* in order to estimate A, B, and C. This assumption does not seem trivial to me. Why do you assume this? Thank you Steve!
@adamhuang1416 · 5 days ago
I think there is an error at around 21:50: there should be no rho, since it's the kinematic eddy viscosity. Congrats on that great video! :)
@student99bg · 5 days ago
Where are the other videos? This video looks like it is out of context; there is no derivation of anything in it.
@mysillyusername · 5 days ago
Beware: always say an "m by n matrix" rather than the sloppy "m times n matrix". The difference matters at minute 1:43!
@harikrishnanb7273 · 6 days ago
Is the rate of change multiplication or addition?
@hsenagrahdeers · 7 days ago
Why doesn't this work with the general Taylor series expansions of e^ix, sin(x), and cos(x) although it works perfectly with the Maclaurin series expansion? How do we generalize it then?
@user-zc4mg1pi6w · 7 days ago
What does PDE stand for?
@luc423 · 7 days ago
How did you make that video? With glass?
@ninafrd9913 · 7 days ago
Hi, I love your work. What would you use to tune a PID for unknown system dynamics, if not genetic algorithms?
@arnold-pdev · 7 days ago
PINNs have to be one of the most over-hyped ML concepts... and that's stiff competition.
@arnold-pdev · 7 days ago
On one level, it's an unprincipled way of doing data assimilation. On another level, it's an unprincipled way of doing numerical integration. Yawn. Great vid tho!
Comments
Awesome video and very helpful. Thanks
Amazing lectures.