Physics-Informed Neural Networks in Julia

PINNs are a deep-learning approach to solving partial differential equations by minimizing the PDE residual. They require (higher-order) derivatives of the MLP's output with respect to its input, which we derive manually in this video. Here is the code: github.com/Ceyron/machine-lea...
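As a taste of what the video builds up, here is a hedged minimal sketch of the core idea (all names and sizes are illustrative, not the repository's exact code): a shallow sigmoid MLP whose second input-derivative is written out by hand via the chain rule, plugged into a Poisson residual loss.

```julia
using LinearAlgebra  # for dot

σ(z)   = 1 / (1 + exp(-z))
dσ(z)  = σ(z) * (1 - σ(z))        # σ'(z)  = σ (1 - σ)
ddσ(z) = dσ(z) * (1 - 2σ(z))      # σ''(z) = σ (1 - σ)(1 - 2σ)

W1, b1 = randn(16), randn(16)     # hidden layer (width 16 is illustrative)
W2, b2 = randn(16), randn()       # linear output layer

u(x)   = dot(W2, σ.(W1 .* x .+ b1)) + b2
# With z = W1 x + b1, the chain rule gives d²u/dx² = Σⱼ W2ⱼ σ''(zⱼ) W1ⱼ²
ddu(x) = dot(W2, ddσ.(W1 .* x .+ b1) .* W1 .^ 2)

f(x) = -π^2 * sin(π * x)          # forcing whose exact solution is sin(πx)
residual_loss(xs) = sum(x -> (ddu(x) - f(x))^2, xs) / length(xs)
```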
-------
👉 This educational series is supported by the world-leaders in integrating machine learning and artificial intelligence with simulation and scientific computing, Pasteur Labs and Institute for Simulation Intelligence. Check out simulation.science/ for more on their pursuit of 'Nobel-Turing' technologies (arxiv.org/abs/2112.03235 ), and for partnership or career opportunities.
-------
📝 : Check out the GitHub Repository of the channel, where I upload all the handwritten notes and source-code files (contributions are very welcome): github.com/Ceyron/machine-lea...
📢 : Follow me on LinkedIn or Twitter for updates on the channel and other cool Machine Learning & Simulation stuff: / felix-koehler and / felix_m_koehler
💸 : If you want to support my work on the channel, you can become a Patron here: / mlsim
🪙: Or you can make a one-time donation via PayPal: www.paypal.com/paypalme/Felix...
-------
⚙️ My Gear:
(Below are affiliate links to Amazon. If you decide to purchase the product or something else on Amazon through this link, I earn a small commission.)
- 🎙️ Microphone: Blue Yeti: amzn.to/3NU7OAs
- ⌨️ Logitech TKL Mechanical Keyboard: amzn.to/3JhEtwp
- 🎨 Gaomon Drawing Tablet (similar to a WACOM Tablet, but cheaper, works flawlessly under Linux): amzn.to/37katmf
- 🔌 Laptop Charger: amzn.to/3ja0imP
- 💻 My Laptop (generally I like the Dell XPS series): amzn.to/38xrABL
- 📱 My Phone: Fairphone 4 (I love the sustainability and repairability aspect of it): amzn.to/3Jr4ZmV
If I had to purchase these items again, I would probably change the following:
- 🎙️ Rode NT: amzn.to/3NUIGtw
- 💻 Framework Laptop (I do not get a commission here, but I love the vision of Framework. It will definitely be my next Ultrabook): frame.work
As an Amazon Associate I earn from qualifying purchases.
-------
Timestamps:
00:00 Introduction
00:24 What is a PINN?
00:54 Interpretation of the Poisson problem
01:32 Informing neural network of the physics
03:14 Problem with automatic differentiation
04:15 Manual differentiation of a shallow MLP
07:29 Batched Execution of the neural network
08:36 Imports
08:59 Constants
09:59 Forcing Function & Analytical Solution
10:30 Setting the random seed
10:41 Sigmoid activation function
10:52 Initialize weights & bias of the neural network
13:51 Forward/Primal pass of the network
15:02 Plot initial prediction & analytical solution
18:31 Manual input-output differentiation
23:36 Check correctness with automatic differentiation
26:13 Randomly draw collocation points
28:30 Implement forward loss function
33:11 Testing the outer autodiff call
35:49 Training loop
38:34 Loss plot
39:09 Final PINN prediction
40:03 Summary
42:35 Outro
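
The timestamps above trace the implementation step by step; as rough orientation, the overall training loop has this shape (a hedged sketch with hypothetical names, not the repository code; `residual_loss(params, xs)` is assumed to evaluate the PDE residual at collocation points using manual input-derivatives, so Zygote only supplies the outer, first-order parameter gradient):

```julia
using Zygote

function train!(params::Vector{Float64}, xs; lr = 1e-3, epochs = 10_000)
    losses = Float64[]
    for _ in 1:epochs
        # Outer autodiff call: first-order reverse mode over the parameters
        loss, back = Zygote.pullback(p -> residual_loss(p, xs), params)
        grad = back(1.0)[1]
        params .-= lr .* grad      # plain gradient-descent update
        push!(losses, loss)
    end
    return losses
end
```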

Comments: 25

  • @ziweiyang2673 (a month ago)

    Very nice video, truly showing the potential of Julia for SciML! I'm curious: have you compared this Julia implementation with JAX? It seems much faster than training in JAX. However, I'm also wondering what happens if I need a deeper MLP rather than a one-layer net, which is the most common situation in ML. And what about high-dimensional data rather than 1D? Does that also increase the complexity of using Julia?

  • @josep5840 (2 months ago)

    Hi, thanks for the video. I am kind of confused: why do you define the differentiation manually if you can use Zygote.gradient?

  • @MachineLearningSimulation (2 months ago)

    You're very welcome 🤗 Thanks a lot for the comment. Back when I created the video (I believe it's still the same today), Zygote.jl could not efficiently do higher-order autodiff. It works, but since its source-code rewriting model is inefficient for nested application (long compile times and slow execution), I decided against it.
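    For reference, first-order checks like the one done in the video (at 23:36) are unproblematic; a minimal sketch with an illustrative function, not the video's network:

```julia
using Zygote

g(x) = sin(3x)                        # toy stand-in for the forward pass
dg_manual(x) = 3 * cos(3x)            # hand-derived derivative
dg_ad(x) = Zygote.gradient(g, x)[1]   # first-order reverse mode works fine

@assert dg_manual(0.7) ≈ dg_ad(0.7)   # trouble only starts when nesting this
```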

  • @nicklam3594 (9 months ago)

    With regard to boundary conditions (BCs): whether Dirichlet or Neumann, you can always use a function composition that a priori satisfies the BCs and get much faster PINN training.

  • @MachineLearningSimulation (5 months ago)

    Thanks for the remark. In case anyone is interested in this further, you can, for instance, find a tutorial here: compphysics.github.io/CompSciProgram/doc/pub/week6/html/week6-bs.html#plans-for-february-13-17-2023
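    For illustration, a hedged sketch of such a composition for the homogeneous Dirichlet case (the placeholder network and all names are hypothetical):

```julia
# Multiplying the raw network by x(1 - x) makes u(0) = u(1) = 0 hold for ANY
# parameters, so the boundary loss term disappears and only the PDE residual
# needs to be trained.
a, b, c = randn(8), randn(8), randn(8)       # placeholder network parameters
raw_net(x) = sum(c .* tanh.(a .* x .+ b))    # unconstrained scalar MLP
u_hard(x) = x * (1 - x) * raw_net(x)         # satisfies the BCs a priori
```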

  • @michaelpieters1844 (3 months ago)

    @MachineLearningSimulation Thank you for that link! Some really useful stuff there from Oslo University.

  • @pietheijn-vo1gt (9 months ago)

    Great video. Does your network have to make use of differential equations for it to fit the definition of a PINN? If I embed physics into my NN in some other way (not using differential equations), can it still be called a PINN?

  • @MachineLearningSimulation (8 months ago)

    Thanks for the kind comment and the great question 😊 It's a bit unfortunate, but I think due to the popularity of PINNs, people would probably misunderstand you if you did something different. In my opinion, a PINN should therefore always be a coordinate network trained with continuous residual information.

  • @user-ks2iu7xj7d (8 months ago)

    Hi Felix K. You mention that Julia does not support multiple levels of autodiff well. I am curious because I am starting a PINN project in a scientific context and am unsure which framework to choose (I am working with 2D and potentially 3D slow fluid flow). Do you have a recommendation for which framework (TensorFlow, JAX, Julia, etc.) is best suited for PINN work?

  • @MachineLearningSimulation (8 months ago)

    Great question! In short: I would recommend JAX because (as of now) it is the DL framework with the richest feature set and the most mature autodiff engine. It has the unfair advantage of many Google engineers working full-time on it. You might also find Patrick Kidger's thoughts helpful: kidger.site/thoughts/jax-vs-julia/ I agree on many points. Regarding Julia: there are some ways to hack around this and get multiple levels of autodiff working (like using ReverseDiff.jl instead of Zygote.jl, or obtaining the network derivatives with ForwardDiff.jl, which then unfortunately no longer allows the last pullback to be done by Zygote...), but even when they work, I experienced them to be rather slow in comparison to JAX. The first-choice package for PINNs in Julia would be NeuralPDE.jl (github.com/SciML/NeuralPDE.jl ). I haven't used it myself yet, but from what I can see in the documentation, it only supports grid-based training (getting the network derivatives on a grid) or an approximation by finite differences. Certainly, this should also give you a valid PINN, but it's not as intuitive as hierarchical autodiff. Hope that helps 😊
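    To make the ForwardDiff route concrete, here is a minimal sketch (illustrative network and parameter sizes, not the video's code) of nesting ForwardDiff.derivative to obtain the second input-derivative of a scalar network:

```julia
using ForwardDiff

# Tiny scalar MLP (hypothetical parameters/sizes, for illustration only)
W1, b1 = randn(10), randn(10)
W2, b2 = randn(10), randn()

σ(z) = 1 / (1 + exp(-z))
u(x) = sum(W2 .* σ.(W1 .* x .+ b1)) + b2

du(x)  = ForwardDiff.derivative(u, x)    # u'(x)
ddu(x) = ForwardDiff.derivative(du, x)   # u''(x) via nested dual numbers

u(0.5), du(0.5), ddu(0.5)
```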

  • @user-ks2iu7xj7d (8 months ago)

    @MachineLearningSimulation Thanks for the reply, this is great info. Thank you so much! I will have a look at JAX, I think. Julia is such a neat language, so hopefully they will get there :)

  • @berntlie6799 (8 months ago)

    Very nice presentation of the basic ideas of PINNs. It seems to me that this is exactly the same as the Weighted Residual Method with "collocation weights" from the 1970s and 1980s (possibly earlier?), where the "neural network" (your 1-hidden-layer ansatz) is what is there called the "trial solution". I guess this is named "spectral methods" in some fields... In the Weighted Residual Method, if one uses a trial solution that is a parameter-linear combination of finite-support basis functions, the result is a Finite Element Method; one variant is the Galerkin method. For dynamic systems, a common approach is to specify trial solutions in the spatial directions and then let the weights be functions of time. The WRM then leads to ordinary differential equations in time for the weights. Question: how are PINNs used for dynamic systems? Do you do the same as in WRM, e.g., using feed-forward NNs as "trial solutions" in space with time-varying weights?

  • @MachineLearningSimulation (8 months ago)

    Thanks for the kind comment and for adding your thoughts 😊 I also view PINNs as a global ansatz function for the continuous (initial-)boundary value problem. Regarding problems with time dependency: the original PINN paper by Raissi et al. already covers transient problems by making the time dimension an additional input of the PINN network. For example, for a Burgers equation in 1D, we would have a two-dimensional input, and the network maps to a 1D output. I would say this is the most direct incorporation of time dependency, and it certainly still fits the PINN framework of learning a solution to the continuous problem. There are some issues with learning the temporal coherence, which recent papers have tried to address. An alternative would be to fix a temporal mesh and learn a new space-continuous solution for each time step, but that requires re-training a PINN for each step.
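    For instance, a minimal sketch of such a space-time coordinate network (all parameter names and sizes are illustrative):

```julia
# Time is just another coordinate: u(t, x) is one network over the joint input
W1, b1 = randn(16, 2), randn(16)   # first layer consumes the 2-vector [t, x]
W2, b2 = randn(16), randn()

u(t, x) = sum(W2 .* tanh.(W1 * [t, x] .+ b1)) + b2   # scalar space-time ansatz
```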

  • @MrThirupathit (9 months ago)

    Nice Video 😊

  • @MachineLearningSimulation (8 months ago)

    Thank you! Cheers! 😊

  • @ahmedshakiraliali397 (9 months ago)

    Could you please provide an example of how to solve the 1D heat diffusion equation with a heterogeneous domain?

  • @MachineLearningSimulation (8 months ago)

    Hi, thanks for the comment 😊 For now, I would like to stick to simple examples to reach the widest audience possible. Hope you can understand that.

  • @Am-pe4iy (9 months ago)

    Is there a reason why you use Jupyter over Pluto?

  • @MachineLearningSimulation (8 months ago)

    I don't have much experience with Pluto (yet). Let's see if I can use it for the next Julia video 😊

  • @beerazzkhadka9027 (9 months ago)

    Can you make a video on Python?

  • @MachineLearningSimulation (8 months ago)

    There is a video with a very similar setup, but implemented in JAX: kzread.info/dash/bejne/X5iFqNSxftjeqdY.html Hope that helps 😊

  • @shocklab (9 months ago)

    x in [0,1], surely?

  • @MachineLearningSimulation (8 months ago)

    Hi, can you elaborate on what you mean by the question? 😊

  • @shocklab (8 months ago)

    My apologies, yes, that was very unclear! At 2:19 or thereabouts, you write down the differential equation and say that x is in (0,1), i.e., not including 0 and 1. However, I realise that because you are giving the boundary conditions at {0,1} and presumably demanding that the function be continuously differentiable, it's probably fine.