Deep Learning to Discover Coordinates for Dynamics: Autoencoders & Physics-Informed Machine Learning

Science & Technology

Joint work with Nathan Kutz: @nathankutzuw
Discovering physical laws and governing dynamical systems is often enabled by first learning a new coordinate system where the dynamics become simple. This is true for the heliocentric Copernican system, which enabled Kepler's laws and Newton's F=ma, for the Fourier transform, which diagonalizes the heat equation, and many others. In this video, we discuss how deep learning is being used to discover effective coordinate systems where simple dynamical systems models may be discovered.
Citable link for this video at: doi.org/10.52843/cassyni.4zpjhl
@eigensteve on Twitter
eigensteve.com
databookuw.com
Some useful papers:
www.pnas.org/content/116/45/2... [SINDy + Autoencoders]
www.nature.com/articles/s4146... [Koopman + Autoencoders]
arxiv.org/abs/2102.12086 [Koopman Review Paper]
This video was produced at the University of Washington
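As a rough illustration of the workflow sketched in the description and the SINDy + autoencoder paper above (this toy is my own construction, not the paper's implementation; it swaps the deep autoencoder for a linear PCA "encoder"): find low-dimensional coordinates via the SVD, then fit a sparse dynamics model in those coordinates with sequential thresholded least squares.

```python
import numpy as np

# Toy sketch: discover simple (linear) dynamics in learned coordinates.
# A hidden 2-D oscillator is observed through a 10-D embedding;
# PCA recovers latent coordinates, and a SINDy-style sparse regression
# recovers the dynamics z_dot = f(z).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
dt = t[1] - t[0]
Z = np.column_stack([np.cos(2 * t), np.sin(2 * t)])   # hidden state
W, _ = np.linalg.qr(rng.normal(size=(10, 2)))         # embedding map
X = Z @ W.T                                           # observations (2000 x 10)

# Linear "encoder": project onto the top two right singular vectors
U, s, Vt = np.linalg.svd(X, full_matrices=False)
z = X @ Vt[:2].T                                      # latent coordinates
z_dot = np.gradient(z, dt, axis=0)                    # numerical derivative

# Candidate library Theta(z) = [1, z1, z2, z1^2, z1*z2, z2^2]
Theta = np.column_stack([np.ones(len(z)), z[:, 0], z[:, 1],
                         z[:, 0]**2, z[:, 0] * z[:, 1], z[:, 1]**2])

# Sequential thresholded least squares (the SINDy regression step)
Xi = np.linalg.lstsq(Theta, z_dot, rcond=None)[0]
for _ in range(10):
    small = np.abs(Xi) < 0.2
    Xi[small] = 0.0
    for k in range(Xi.shape[1]):
        big = ~small[:, k]
        if big.any():
            Xi[big, k] = np.linalg.lstsq(Theta[:, big], z_dot[:, k],
                                         rcond=None)[0]

# Only the two linear terms should survive the thresholding,
# recovering a simple linear oscillator in the learned coordinates.
residual = np.linalg.norm(Theta @ Xi - z_dot) / np.linalg.norm(z_dot)
```

In the SINDy + autoencoder paper the linear encoder is replaced by a deep autoencoder, and the sparse-dynamics loss is trained jointly with the reconstruction loss.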

Comments: 108

  • @liamtsai2179
    @liamtsai2179 · 2 years ago

    The YT algorithm does know where to take me; I never thought I'd sit through a lecture in my leisure time fully engaged. Very well done!

  • @AICoffeeBreak
    @AICoffeeBreak · 2 years ago

    Knowing a lot about autoencoders already, it is useful to see how they start to dissipate into other research areas, like physics (my favorite!). Great to see a good explanation of ML as a tool for further discovery. Thanks for this video!

  • @wibulord926

    @wibulord926 · 1 year ago

    Can't believe I'm seeing you here; your videos are helpful too, thank you a lot.

  • @aidankennedy6973
    @aidankennedy6973 · 2 years ago

    Incredible work your team is doing. So much to think about, with incredibly wide ranging applications

  • @marioskokmotos8274
    @marioskokmotos8274 · 2 years ago

    Awesome work! Thanks for sharing in such a digestible way! I feel we cannot even start to imagine in how many different fields this approach could be used.

  • @gammaian
    @gammaian · 2 years ago

    Your channel is incredible Prof. Brunton, thank you for your work! There is so much value here

  • @HeitorvitorC
    @HeitorvitorC · 2 years ago

    Thank you for your videos, Steve! Also, your gesticulation eases the complexity of your talk significantly. Keep up with the good work!

  • @jimlbeaver
    @jimlbeaver · 2 years ago

    This is the most amazing stuff you guys have come up with so far!!! Awesome… great job.

  • @albertocaballero7922
    @albertocaballero7922 · 2 years ago

    Awesome work. I can't believe I understood most of this topic. One of the best explanations I have seen so far.

  • @jessegibson3548
    @jessegibson3548 · 2 years ago

    Thank you for this vid. Really great content you are putting out for the community Steve.

  • @danberm1755
    @danberm1755 · 1 year ago

    Fantastic discussion! Love that you cover the complexities so in-depth.

  • @doganbirol13
    @doganbirol13 · 2 years ago

    I might just have found my research topic for my master's. Fascinating, thanks. Besides that, the quality of the video deserves remarks: Dark background which is good for eyes, persistently high quality graphics, and a narrator who does his best to create understanding with a decent use of English.

  • @Ejnota
    @Ejnota · 2 years ago

    How much I love these videos and the quality of the software they use!

  • @lablive
    @lablive · 2 years ago

    I'm lucky to have come across this work, positioned between the 3rd and 4th science paradigms. As mentioned at the end of this video, I think the key to interpretability is to take advantage of inductive biases, expressed as existing models or algorithms for forward/inverse problems, to design the encoder, decoder, and loss function.

  • @__-op4qm
    @__-op4qm · 2 years ago

    Very kindly structured explanations like this can make everyone feel welcome and interested. This is exactly why I subscribed to this channel almost 2 years ago; all the videos are very inviting and welcoming, and by the end leave a calm sense of curiosity balanced with a pinch of reassurance, free of any unnecessary panic. In other places these subjects are often presented with a thick padding of jargon and dry math abstractions, but not here. Here the explanations are distilled into a sparse latent form without loss of generality, and with a clear reminder of the real-life value of these methods.

  • @iestynne
    @iestynne · 2 years ago

    This was a super interesting one. Thank you very much for another engaging whirlwind tour through recent advances in computer science! :)

  • @diegocalanzone655
    @diegocalanzone655 · 2 years ago

    Brought here by the YT algorithm while finishing my BS thesis on non-physics-informed autoencoders that learn from the Shallow Water Equations. I will definitely dedicate further study to the lecture content. Thanks!

  • @peilanhsu
    @peilanhsu · 29 days ago

    Such a gem of a video! Thank you!!

  • @MaxHaydenChiz
    @MaxHaydenChiz · 2 years ago

    This is a really good video. Really well explained and it let me see how your field was using this tech. Thanks for posting it. It sounds like you are doing a lot of interesting research. I'll keep an eye on your channel now that the algorithm recommended it to me.

  • @skeletonrowdie1768
    @skeletonrowdie1768 · 2 years ago

    thanks so much! this definitely helped me get into deep learning dynamical systems. I am working on a problem where I want to classify the state of a viral particle near a membrane. I transformed a lot of simulation frames into structural descriptors. I am at the point where I need to decide on an architecture and loss functions to learn. I have begun naively with a dense neural network. This however seems very interesting, not directly but it could be another input for the DNN. The Z could be describing certain constant dynamics surrounding the viral particle which could help classify the state. Anyway, thanks a lot!

  • @PedrossaurusRex
    @PedrossaurusRex · 2 years ago

    Amazing lecture!

  • @AliRashidi97
    @AliRashidi97 · 2 years ago

    Great lecture . Thanks a lot 🙏

  • @drskelebone
    @drskelebone · 2 years ago

    I will always love that the simple solution was just returned as the simple solution. :D

  • @jeroenritmeester73
    @jeroenritmeester73 · 2 years ago

    Hi Steve, very interesting video. One remark on the slides: I tend to watch videos with closed captions despite having average hearing, because it helps me keep track of what you're saying. I can imagine that people with hearing impairments will also do this, but sometimes elements on your slides overlap with YouTube's space for subtitles, like the derivative at 1:45. Perhaps this is something you could take into account, particularly for slides that do not contain many different elements and allow for scaling. Thanks again.

  • @jinghangli623
    @jinghangli623 · 1 year ago

    I've been looking for some insights on how to leverage deep learning to optimize our MRI transmit coil. This has been extremely helpful

  • @weeb3277
    @weeb3277 · 2 years ago

    Very esoteric video. I like. 👍

  • @dr.mikeybee
    @dr.mikeybee · 2 years ago

    I've just been learning about how to use PCA to reduce dimensionality. Now I see one can go further and learn the meaning of the linear combination at the bottleneck. I don't really understand how one can use additional loss functions to find that meaning, but now I know it can be found. I'll need to think about it. Thank you.
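A minimal numpy sketch of the point in this comment (my own toy, not from the video): a linear autoencoder with a rank-2 bottleneck is exactly PCA, and additional loss terms (e.g. on the latent dynamics) would simply be added to this reconstruction error when training a nonlinear network.

```python
import numpy as np

# PCA as a linear "autoencoder": the encoder projects onto the top
# principal directions, the decoder maps back to the original space.
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))           # hidden 2-D structure
X = latent @ rng.normal(size=(2, 20))        # observed in 20 dimensions

mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:2]                    # principal directions = linear "encoder"
z = (X - mu) @ P.T            # bottleneck coordinates (500 x 2)
X_hat = z @ P + mu            # linear "decoder" reconstruction

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Since the toy data are exactly rank 2, the reconstruction error is essentially zero; the "meaning" of the bottleneck comes from whatever extra losses constrain z during training.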

  • @ArxivInsights
    @ArxivInsights · 2 years ago

    Fantastic video!!

  • @zhanzo
    @zhanzo · 2 years ago

    I wish I was able press the like button more than once.

  • @alfcnz
    @alfcnz · 2 years ago

    Cool, nice lecture! 🤓🤓🤓

  • @Eigensteve

    @Eigensteve · 2 years ago

    Thanks!

  • @ernstuzhansky
    @ernstuzhansky · 5 months ago

    This is very cool!

  • @AllanMedeiros
    @AllanMedeiros · 2 years ago

    Fantastic!

  • @netoskin
    @netoskin · 1 year ago

    Amazing!!

  • @AA-gl1dr
    @AA-gl1dr · 2 years ago

    Thank you so much!

  • @krishnaaditya2086
    @krishnaaditya2086 · 2 years ago

    Awesome Thanks!

  • @joseantoniogambin9609
    @joseantoniogambin9609 · 2 years ago

    Awesome!

  • @eerturk
    @eerturk · 2 years ago

    Thank you.

  • @leonardromano1491
    @leonardromano1491 · 2 years ago

    Nice video! I am very new to this subject (In fact this is the first video I have seen about it), but it seems that essentially what you do is derive dynamics from an action principle (minimizing the generalized loss functional) and so any partially known physics I suppose would just be incorporated by Lagrange multipliers. About the two different approaches for linearisation (going to higher and lower dimension), I think that both are physically motivated. You can definitely expect dynamics to become more linear if you go to higher dimension too. Think about thermodynamics: You can either try to describe average degrees of freedom like entropy, heat, etc. which would follow easy laws, but at the same time you could try and describe the system by describing each individual particle. It wouldn't really be feasible, but it's not unlikely that the dynamics can be described from a simple possibly linear law (like a box full of free collisionless particles in a homogeneous gravitational field).

  • @spencermarkowitz2699
    @spencermarkowitz2699 · 1 year ago

    so amazing

  • @rockapedra1130
    @rockapedra1130 · 11 months ago

    Nice but would love to see some demos of the results. For example, the equation of the pendulum, the reconstruction from the found dynamics and comparison between the two.

  • @andersonmeneses3599
    @andersonmeneses3599 · 2 years ago

    Thanks! 👍🏼

  • @FromaGaluppo
    @FromaGaluppo · 2 years ago

    Amazing

  • @have_a_nice_day399
    @have_a_nice_day399 · 2 years ago

    Thank you for the amazing video. Would you please give a few simple examples and explain step by step how to use these machine learning algorithms?

  • @vitorbortolin6810
    @vitorbortolin6810 · 2 years ago

    Great!

  • @johnsalkeld1088
    @johnsalkeld1088 · 2 years ago

    The linear areas seem to be a maximization of the neighbourhoods implied by the implicit function theorem. I am probably wrong; it was 1987 when I studied this.

  • @SaonCrispimVieira
    @SaonCrispimVieira · 2 years ago

    Professor Brunton, thanks to you and your teammates for the amazing content. I think it is desirable to correct the pendulum videos, because the images are affected by an affine transformation due to lens distortion; looking at the bottom line of the video you can see how distorted it is. There are libraries to identify the parameters of the camera's affine transformation using a chessboard, by tracking the distortion of the corner coordinates.

  • @alfcnz

    @alfcnz · 2 years ago

    You can easily factor the affine transformation in the encoder (and the inverse one in the decoder). You don't always have access to distortion correction settings, and as long as you've been using the same capturing equipment, you will be able to factor such transformations during training.

  • @SaonCrispimVieira

    @SaonCrispimVieira · 2 years ago

    @alfcnz Professor Canziani, it's amazing to have your answer here; in a way I'm your virtual machine learning student on YouTube! Thanks a lot to you and your teammates for the amazing content. I totally agree, especially since a linear transformation would be easily understood by the network. My biggest concern is that this distortion could be wrongly treated as part of the problem's physics, when it is more of an observational error, especially when linearity is enforced in the dynamics discovery.

  • @maythesciencebewithyou

    @maythesciencebewithyou · 2 years ago

    @alfcnz But if you trained it on distorted image data, wouldn't it make a false correction to undistorted image data?

  • @SaonCrispimVieira

    @SaonCrispimVieira · 2 years ago

    @iestynne It is not difficult to calibrate the camera!

  • @johnsalkeld1088
    @johnsalkeld1088 · 2 years ago

    Do you have your presentation available online? Or links to the arXiv pages for the papers referenced? I would love to read them.

  • @mattkafker8400
    @mattkafker8400 · 2 years ago

    Tremendous video!

  • @rrr33ppp000
    @rrr33ppp000 · 2 years ago

    YES

  • @user-uy6bo6il4q
    @user-uy6bo6il4q · 11 months ago

    I tried to use an autoencoder to do anomaly detection for an anti-fraud task in social media. It's a good way to do information compression, but I never thought it could be used for model discovery in science! AI will change the game of science research today!

  • @majstrstych15
    @majstrstych15 · 2 years ago

    Hey Steve, your videos are great! I want to ask how balanced model reduction can be used in the deep learning autoencoder. I'm asking because with balanced model reduction you are able to find the coordinate transformation that equalizes and diagonalizes the Gramians, but this transformation could turn out to be dense and non-interpretable, right? Could you please explain what the advantage of combining these two would be? Thanks, your big fan!

  • @niccologiovenali7597
    @niccologiovenali7597 · 1 year ago

    you are the best

  • @marjankrebelj4007
    @marjankrebelj4007 · 2 years ago

    I saw the thumbnail and the title and I assumed this was a course on encoding audio (dynamics) for movie editing. :)

  • @weert7812
    @weert7812 · 2 years ago

    Do you know of any Jupyter notebook examples, in say Keras or PyTorch, that show how to do this?

  • @veil6666
    @veil6666 · 2 years ago

    Just curious whether your usage of the term "lift" is related to the topological/categorical use of that term? Specifically whenever there is a morphism f: X -> Y and g: Z -> Y then a lift is a map h: X -> Z such that f = gh (i.e. the diagram commutes). I think the analogy works: Let X be the original data space, Z the latent space, and Y = X. The composition gh is a map X -> Z -> X, if we set f = the identity on X, then h and g are the encoder and decoder, then f ≈ gh expresses the reconstruction objective.

  • @meetplace
    @meetplace · 7 months ago

    @3:30 If Steve Brunton says something is "a difficult task", you can be sure it really is a difficult task! :D

  • @beauzeta1342
    @beauzeta1342 · 4 months ago

    Thank you professor for the very inspiring video! At 12:05, can we say something about the uniqueness of the representation transform phi and psi? Or they may not be unique at all, and may depend on how we train the network?

  • @haydergfg6702
    @haydergfg6702 · 2 years ago

    Thank you a lot. I hope you share how to apply this with code.

  • @frankdelahue9761
    @frankdelahue9761 · 2 years ago

    Deep learning is revolutionizing engineering, along with Exascale supercomputing.

  • @radenmuaz7125
    @radenmuaz7125 · 2 years ago

    How do you deal with an external control input u(t) for control problems and robots? These are sometimes called exogenous inputs.

  • @kawingchan
    @kawingchan · 2 years ago

    Many nonlinear systems exhibit chaos (divergence in the "original" coordinates if two systems have a tiny difference in their initial conditions). I would be interested to see whether the "recovered" x_hat also reproduces the chaotic behavior with that same Lyapunov exponent, and also what happens to the latent z's.

  • @hfkssadfrew

    @hfkssadfrew · 2 years ago

    To the first question: they do. This was validated in the 1990s-2000s, when numerous engineers and mathematicians experimented with shallow neural networks. To the second, I don't have an answer.

  • @vyacheslavboyko6114
    @vyacheslavboyko6114 · 2 years ago

    23:32 sounds interesting. So you are saying this is a way to learn the linearizing transform for the convective term of the Navier-Stokes equation? How do you even know whether, after training the network, we end up with a meaningful solution?

  • @iestynne

    @iestynne · 2 years ago

    You might not. Sara Hooker has recently been arguing that properties like accuracy and interpretability (among others) may directly conflict, so the better one is, the worse the others are. You might have to sacrifice a 'meaningful' solution for an accurate one.

  • @drskelebone
    @drskelebone · 2 years ago

    Is Steve quiet for everyone? I've been in conferences all week, so I might be set up wrong, but I had to reverse twice to get a clean vocal.

  • @jeroenritmeester73

    @jeroenritmeester73 · 2 years ago

    It's fine for me on mobile

  • @user255

    @user255 · 2 years ago

    I had to turn the volume up quite high, but now I can hear it just fine.

  • @toastyPredicament
    @toastyPredicament · 2 years ago

    No this is good

  • @tharunsankar4926
    @tharunsankar4926 · 2 years ago

    How would we train a network like this though?

  • @yoavzack
    @yoavzack · 2 years ago

    Imagine using this to represent a human brain in a low-dimensional space.

  • @__-op4qm

    @__-op4qm · 2 years ago

    probably boils down to 2D ('amount of tasty pizza' x 'amount of tasty bacon') quite precisely. [If even one training example involves brain data in response to pineapple pizza, the gradient instantly explodes, coffee levitates onto keyboard and alien police come to remove pineapple away from pizza, just in time before a black hole forms turning milky-way into a Lorenz attractor.]

  • @JohnWasinger
    @JohnWasinger · 2 years ago

    Singular Value Decomposition / Principal Component Analysis / Proper Orthogonal Decomposition (field? / field? / field?)

  • @zeydabadi

    @zeydabadi · 2 years ago

    Am I right that he implied that all those three are the same?

  • @JohnWasinger

    @JohnWasinger · 2 years ago

    @zeydabadi You're right, they are. I was wondering if certain fields prefer one term over another.

  • @prikarsartam
    @prikarsartam · 9 days ago

    If I have a very large video feed, isn't doing singular value decomposition extremely computationally expensive?

  • @Eigensteve

    @Eigensteve · 9 days ago

    You can always do a randomized SVD to make it faster
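A minimal sketch of what a randomized SVD looks like (my own illustration of the standard approach of Halko, Martinsson, and Tropp; libraries such as scikit-learn ship tuned versions with power iterations for slowly decaying spectra): compress the matrix with a random projection first, then take a cheap exact SVD of the small sketch.

```python
import numpy as np

# Randomized SVD: sketch the column space of a large matrix with a
# random projection, then do an exact SVD on the small projected matrix.
def randomized_svd(X, rank, n_oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(X.shape[1], rank + n_oversample))
    Q, _ = np.linalg.qr(X @ Omega)        # orthonormal basis for range(X)
    Uh, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    return (Q @ Uh)[:, :rank], s[:rank], Vt[:rank]

# Synthetic low-rank "video": 5000 frames of 200 pixels, exact rank 5
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 5)) @ rng.normal(size=(5, 200))
U, s, Vt = randomized_svd(X, rank=5)
rel_err = np.linalg.norm(X - (U * s) @ Vt) / np.linalg.norm(X)
```

Because the expensive exact SVD only ever sees the (rank + oversample)-sized sketch, the cost scales with the target rank rather than with the full matrix dimensions.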

  • @Anujkumar-my1wi
    @Anujkumar-my1wi · 2 years ago

    On Wikipedia, state variables are referred to as the variables that describe the mathematical state of the system, and the state as something that describes the system. But isn't the state the minimum set of variables that describes the system? Wikipedia article: en.wikipedia.org/wiki/State_variable. Also, I want to ask: is there any difference between the configuration of a system and the state of a system?

  • @vg5028

    @vg5028 · 2 years ago

    Yes, your understanding of state variables is correct. Sometimes it's useful to make a distinction between state variables and a "minimum set" of state variables: state variables are anything that gives you information about the state of the system; the set doesn't always have to be minimal. In my experience "configuration" and "state" are similar terms, but I could be wrong about that.

  • @Anujkumar-my1wi

    @Anujkumar-my1wi · 2 years ago

    @vg5028 Yes, but isn't the state usually referred to as the minimum set of variables that completely describes the system (that minimum set being the state variables)? On Wikipedia, though, the state is described as something that describes the system, and state variables as something that describes the state of the system. Isn't the state defined as the minimum set of variables, i.e., the state variables?

  • @Anujkumar-my1wi

    @Anujkumar-my1wi · 2 years ago

    @vg5028 Well, my question is why the definition of state differs between this MIT handout, web.mit.edu/2.14/www/Handouts/StateSpace.pdf, and this Wikipedia article, en.wikipedia.org/wiki/State_variable.

  • @hfkssadfrew

    @hfkssadfrew · 2 years ago

    You asked a GREAT question. Think about this: you have a system with 2 state variables; one always stays around 0.00001, the other ranges from about -1 to 1. So you will tend to believe this system is approximately 1-D. Mathematically, your understanding is 100% right: it has 2 degrees of freedom and no fewer, but treating it as 1-D makes life much easier if you are in the business of modeling and control!

  • @Anujkumar-my1wi

    @Anujkumar-my1wi · 2 years ago

    @hfkssadfrew What I am asking is what 'state' means: is it referring to the condition of the system, or to the mathematical description of the system?

  • @marku7z
    @marku7z · 2 years ago

    How do I compute x-dot in the case where x are pixels?

  • @__-op4qm

    @__-op4qm · 2 years ago

    Probably for each pixel separately, in 1-D, by a simple finite-difference gradient dx/dt, because the joint underlying function over all pixels is unknown (the neural network needs to learn those correlations from examples).
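The per-pixel suggestion in the reply above can be sketched as follows (a toy example of mine, not from the video): differentiate each pixel's time series with finite differences along the time axis.

```python
import numpy as np

# Toy "video" whose pixels all follow sin(t); x_dot is computed per
# pixel by central finite differences along the time axis.
T, H, W = 100, 8, 8
t = np.linspace(0, 2 * np.pi, T)
frames = np.sin(t)[:, None, None] * np.ones((T, H, W))

dt = t[1] - t[0]
x_dot = np.gradient(frames, dt, axis=0)   # shape (T, H, W)
# interior frames of x_dot should approximate cos(t) at every pixel
```

In practice one would denoise first (or use a smoothed/total-variation derivative), since finite differences amplify pixel noise.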

  • @MrHardgabi
    @MrHardgabi · 1 year ago

    Wow, cool but complex; not sure if it could be simplified a bit.

  • @NozaOz
    @NozaOz · 2 years ago

    Could someone help me? I’m a student fresh out of high school, I’ve got an Australian-HSC-education in Chemistry, physics and extension 2 maths, I intend on studying physics at university and possibly getting a minor in CS to give me the marketable skills. I’m currently just doing simple things like a code academy course on Python and likely the machine learning skill path. From where I am now, where do I go to understand this video?

  • @mohdnazarudin2636

    @mohdnazarudin2636 · 2 years ago

    To understand the video, coding is useless; it is not going to help. You need to understand linear algebra, dynamical systems or ODEs/PDEs, and also the math behind neural networks. Take courses in those subjects.

  • @huyvuquang2041
    @huyvuquang2041 · 1 year ago

    Does anybody else feel like me: learning math and science with Harrison Wells?

  • @user-gj6cw6yc8s
    @user-gj6cw6yc8s · 3 months ago

    😊 I don't know if computers are capable of deep learning Like I just explained our type of learning It don't come from all your Function boards The details that you place in it are your details I can't live your life my friend And your computer will never know what I'm trying to say Unless we were being straight but you don't have a straight life I doubt you make a completely straight computer ...😊 It's personal To understand your construction modeling You see the thing about my life it is not orchestrated by your construction modeling 😊 Even if I had my own chance ... Sometimes the facts ain't even facts... if it ain't even there What could be what won't be That's really not your prediction 😊 Unless it's within your case to understand 😮 Most people don't have these matters and they only predict 😊 Try to be the cause and effect of them Before you predict in the middle of them .... Even if predictions are such outcasts 😊 Even the teacher's pet taught us that ... I won't even use the word persuasions ..... You see a computer has to modify itself to each and every case of individual and the life and standards that they have to live by To understand them You will never help them By a parents point of view You got to take the strong considerations of their wrongs .... Their point of views Were there aiming what they can what they can't I don't need a computer that says well I can't do that I won't learn that 😊 That's what my professor at MIT told me If I can't do that I won't work on that 😊 I said okay you will give me a computer just the same ..... 
😊 Logically I am correct But like I said that's a prediction I am careful about my predictions Because what is important to you is the same that is important to me it's just not important to you to give it to me as much as it was important to just keep it to yourself 😊 I'm a man of discoveries and I can't help but run my mouth 😮 But you're a man with a job and you got nothing else to learn ....😊 We did meet in the middle 😮 I can't help it you're going the other way 😊 Maybe I'm stupid Look we met back in the middle 😢 Call it even damn it

  • @__--JY-Moe--__
    @__--JY-Moe--__ · 2 years ago

    wow! this is so fun! I think I made it 2 somewhere, in this switchboard of bowties! I don't know whether 2 call this ''at&t,how can I help U"! or. land of confusion, in deep thought flow's? ha..ha.. yes, my attempt @ humor! thanks so much 4 the lesson! totally love this! good luck!

  • @user-gj6cw6yc8s
    @user-gj6cw6yc8s · 3 months ago

    You got to be worried about the wrong point of view you feed a computer 😊 As a human we don't make the mistakes 😊 We necessarily know or know what we need or what is needed to be added ....😊 Sometimes no potential strains there 😊 Sometimes we don't have such qualifications as a qualification 😮 Even if you are not qualified a human will work you into qualified Leave it up to a computer 😊 You won't be qualified for s***

  • @Tyler-pj3tg
    @Tyler-pj3tg · 1 year ago

    AI to learn how many black shirts Steve Brunton has

  • @ArbaouiBillel
    @ArbaouiBillel · 2 years ago

    AI has gone through a number of AI winters because people claimed things they couldn't deliver

  • @laxibkamdi1687
    @laxibkamdi1687 · 2 years ago

    Sounds really hard.

  • @tag_of_frank
    @tag_of_frank · 2 years ago

    First 9 minutes can be summarized with this sentence: "There exists a neural network which can perform SVD."

  • @hfkssadfrew

    @hfkssadfrew · 2 years ago

    Lol. You can say “there exists a polynomial which can approximately perform any operation”. If you think so, then you still don’t get the point.

  • @tag_of_frank

    @tag_of_frank · 2 years ago

    @hfkssadfrew I think the point is after minute 9.

  • @user-gj6cw6yc8s
    @user-gj6cw6yc8s · 3 months ago

    😊 next thing you know we got crooked computers 😊 Last time I checked there's not a f****** game on this computer that the game does not f****** cheat or can it play f****** digitally Fair 😊 Ever since they made one f****** computer program You can never trust a f****** poker cards ever again 😊 I don't want to play with your computer 😊 For one it does not know how to f****** shuffle 😊 And for two it don't know how to stop looking at my f****** cards

  • @gtsmeg3474
    @gtsmeg3474 · 2 years ago

    audio is sooo low WTF

  • @nerdomania24
    @nerdomania24 · 2 years ago

    Inventing my own math from the ground up, I have no problem with physical systems and AI; you just have to make metrics emergent from a sack of an infinite number of differential forms and pick one until the metric of self-manifestation isn't statistically correlated.
