
Physics-Informed-Neural-Networks (PINNs)

PINNs were proposed by Raissi et al. in [1] to solve PDEs by incorporating the physics (i.e., the PDE) and the boundary conditions into the loss function. The loss is the mean-squared error of the PDE and boundary-condition residuals evaluated at 'collocation points' distributed across the domain.
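
As a minimal illustration of this loss (a sketch, not the repository's exact implementation), the snippet below assembles the composite PDE + boundary loss for the Burgers' equation u_t + u*u_x - nu*u_xx = 0 in PyTorch; the network architecture and variable names are placeholders.

    import math
    import torch
    import torch.nn as nn

    nu = 0.01 / math.pi  # viscosity used in the Burgers' benchmark

    # Simple fully-connected surrogate u(x, t); layer sizes are illustrative.
    net = nn.Sequential(nn.Linear(2, 20), nn.Tanh(),
                        nn.Linear(20, 20), nn.Tanh(),
                        nn.Linear(20, 1))

    def pinn_loss(xt_coll, xt_bc, u_bc):
        # PDE residual at the collocation points
        xt = xt_coll.clone().requires_grad_(True)
        u = net(xt)
        grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
        u_x, u_t = grads[:, 0:1], grads[:, 1:2]
        u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
        residual = u_t + u * u_x - nu * u_xx

        # Boundary/initial-condition residual at points with known values
        mse_pde = residual.pow(2).mean()
        mse_bc = (net(xt_bc) - u_bc).pow(2).mean()
        return mse_pde + mse_bc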

PINNs are summarised in the following schematic:

This repository currently contains implementations of PINNs in TensorFlow 2 and PyTorch for the Burgers' and Helmholtz PDEs.

Work is in progress to incorporate SIREN (NeurIPS 2020).

Installation

TensorFlow

pip install numpy==1.19.2 scipy==1.5.3 tensorflow==2.0.0 matplotlib==3.3.2 pydoe==0.3.8 seaborn==0.9.0

PyTorch

pip install numpy==1.19.2 scipy==1.5.3 matplotlib==3.3.2 pydoe==0.3.8 torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

For GPU installations, check for compatible PyTorch versions on the official website.

NOTE: Newer versions of seaborn do not support sns.distplot and can be problematic when plotting gradient histograms.
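
If a newer seaborn happens to be installed, a possible workaround (a sketch, not part of this repository) is to replace sns.distplot with sns.histplot, its successor from seaborn 0.11 onwards:

    import numpy as np
    import seaborn as sns
    import matplotlib.pyplot as plt

    grads = np.random.randn(1000)   # placeholder for a flattened gradient array

    # sns.distplot(grads)           # works with the pinned seaborn==0.9.0
    sns.histplot(grads, kde=True, stat="density")  # equivalent call in seaborn >= 0.11
    plt.show()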

Work Summary

  1. Solving stiff PDEs with the L-BFGS optimizer

PINNs trained with the L-BFGS optimizer are compared against the Adam optimizer to examine the gradient imbalance reported in [2] for stiff PDEs. The gradient imbalance was observed to be less stark with L-BFGS; however, the convergence of PINNs is still slow due to the ill-conditioning of the optimization landscape.
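
For reference, a minimal sketch of L-BFGS training in PyTorch is given below; `net` and `pinn_loss` are assumed to be defined as in the loss sketch above, and the training tensors (`xt_coll`, `xt_bc`, `u_bc`) are placeholders.

    import torch

    # torch.optim.LBFGS requires a closure that re-evaluates the loss and its gradients.
    optimizer = torch.optim.LBFGS(net.parameters(), lr=1.0, max_iter=500,
                                  history_size=50, line_search_fn="strong_wolfe")

    def closure():
        optimizer.zero_grad()
        loss = pinn_loss(xt_coll, xt_bc, u_bc)
        loss.backward()
        return loss

    optimizer.step(closure)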

  2. Bottom-up learning in PINNs

It was reported in [3] that PINNs tend to learn all spectral frequencies of the solution simultaneously due to the presence of derivatives in the loss function. To understand whether there are any other changes in the learning mechanics of PINNs, bottom-up learning was reinvestigated. Bottom-up learning means that the lower layers, i.e., layers close to the input, converge faster than the upper layers, i.e., layers closer to the output. A heuristic proof of bottom-up learning was given in [4]; the same methodology is followed here while training the PINN to solve the Burgers' PDE. No change in this mechanism was observed, confirming that PINNs also learn bottom-up. A video of this observation can be found here: https://youtu.be/LmaSPoBVOrA.
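
One possible way to collect the per-layer activations needed for this kind of SVCCA analysis (a sketch assuming a plain fully-connected PINN; all names and sizes are placeholders) is to register forward hooks and evaluate the network on a fixed probe set at each training checkpoint:

    import torch
    import torch.nn as nn

    class MLP(nn.Module):
        def __init__(self):
            super().__init__()
            self.hidden = nn.ModuleList([nn.Linear(2, 20), nn.Linear(20, 20), nn.Linear(20, 20)])
            self.out = nn.Linear(20, 1)

        def forward(self, x):
            for layer in self.hidden:
                x = torch.tanh(layer(x))
            return self.out(x)

    model = MLP()
    activations = {}

    def make_hook(name):
        def hook(module, inputs, output):
            activations[name] = output.detach().clone()  # stored for later SVCCA comparison
        return hook

    for i, layer in enumerate(model.hidden):
        layer.register_forward_hook(make_hook(f"hidden_{i}"))

    probe = torch.rand(256, 2)  # fixed probe points reused at every checkpoint
    _ = model(probe)            # comparing activations across checkpoints indicates per-layer convergence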

  3. Transfer Learning in PINNs

The effect of transfer learning in PINNs was studied to understand its influence on the solution error. The general observation was that transfer learning helps find better local minima than a random Xavier initialization.
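
As a rough illustration (a sketch only; the checkpoint file name and architecture are hypothetical, not taken from this repository), the two initializations being compared could look as follows in PyTorch:

    import torch
    import torch.nn as nn

    def xavier_init(module):
        # Random Xavier (Glorot) initialization for the linear layers
        if isinstance(module, nn.Linear):
            nn.init.xavier_normal_(module.weight)
            nn.init.zeros_(module.bias)

    net_random = nn.Sequential(nn.Linear(2, 20), nn.Tanh(), nn.Linear(20, 1))
    net_random.apply(xavier_init)

    net_transfer = nn.Sequential(nn.Linear(2, 20), nn.Tanh(), nn.Linear(20, 1))
    net_transfer.load_state_dict(torch.load("pinn_pretrained.pt"))  # hypothetical checkpoint file

    # Both networks are then trained on the new problem; only the initialization differs.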

Bibliography

[1] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. 2017. http://arxiv.org/pdf/1711.10561v1

[2] Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and Mitigating Gradient Pathologies in Physics-Informed Neural Networks. 2020. https://arxiv.org/pdf/2001.04536.pdf

[3] Lu Lu et al. DeepXDE: A deep learning library for solving differential equations. 2019. https://arxiv.org/abs/1907.04502

[4] Maithra Raghu et al. SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. 2017. http://arxiv.org/pdf/1706.05806v2

[5] Levi McClenny and Ulisses Braga-Neto. Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism. 2020. https://arxiv.org/abs/2009.04544


physics-informed-neural-networks's Issues

Helmholtz PyTorch prediction problem

I tried the TensorFlow version of the Helmholtz problem and it was working correctly.

But when I tried the PyTorch version, the result was bad no matter what I did (changing the layers or the number of iterations). Are there any parameters that could cause this problem? I checked the code and I think there is nothing wrong.

Solving non-linear diffusion equation

Hi, I saw your code, which is very helpful. Currently, I am solving a 1D non-linear diffusion equation. I have successfully solved the linear equation using the PINN model. For the non-linear equation, when I am training my network, it shows NaN values after 1 iteration. I am using this piece of code for the internal (collocation) data.

def loss_initernal(self, x_train):
    g = x_train.clone()
    g.requires_grad = True
    u = self.forward(g)
    u_g = gradients(u, g)[0]
    u_x, u_t = u_g[:, [0]], u_g[:, [1]]
    D = 1.1 * 10**(-9) * (self.forward(g))**1.2

    u_xx = gradients(D, g)[0]
    u_xx = u_xx[:, [0]]
    pde = u_t - u_xx
    loss_pde = pde.pow(2).mean()
    print("D", D, "u_x", u, "loss", loss_pde)
    return loss_pde

def gradients(outputs, inputs):
    return torch.autograd.grad(outputs, inputs, grad_outputs=torch.ones_like(outputs), create_graph=True)

While running this, NaN appears after 1 iteration. Can you tell me what I am doing wrong?

The two datasets are actually the same.

Hi omniscientoctopus,

Thank you for this great implementation.

I tried to use your datasets for an inverse problem on Burgers' equation; they look exactly the same when I plot them. I would expect a difference in the solution of the PDE once different values of nu are used.

Secondly, in my inverse problem, I tried to identify the parameter nu of the PDE as well as the solution of the PDE. Unfortunately, the NN can't converge towards the desired value, say nu = 0.01/pi.

Looking forward to your feedback. Best.
