
rmojgani / physicsawareae

16 stars · 6 watchers · 4 forks · 57.45 MB

The unsupervised learning problem trains a diffeomorphic spatio-temporal grid that registers the output sequence of the PDEs onto a non-uniform parameter/time-varying grid, such that the Kolmogorov n-width of the mapped data on the learned grid is minimized.

Languages: MATLAB 0.50% · M 0.01% · Jupyter Notebook 99.31% · Python 0.13% · Shell 0.05%
auto-encoder dimensionality-reduction rnn lstm svd model-order-reduction reduced-order-models rom manifold-learning rank

physicsawareae's Introduction

[Physics-aware] low-rank registration-based manifold[/auto-encoder] for convection-dominated PDEs


Introduction

We design a physics-aware auto-encoder to specifically reduce the dimensionality of solutions arising from convection-dominated nonlinear physical systems. Although existing nonlinear manifold learning methods seem to be compelling tools for reducing the dimensionality of data characterized by a large Kolmogorov n-width, they typically lack a straightforward mapping from the latent space back to the high-dimensional physical space. Moreover, the realized latent variables are often hard to interpret. Therefore, many of these methods are often dismissed in the reduced-order modeling of dynamical systems governed by partial differential equations (PDEs). Accordingly, we propose an auto-encoder-type nonlinear dimensionality reduction algorithm. The unsupervised learning problem trains a diffeomorphic spatio-temporal grid that registers the output sequence of the PDEs onto a non-uniform parameter/time-varying grid, such that the Kolmogorov n-width of the mapped data on the learned grid is minimized. We demonstrate the efficacy and interpretability of our approach, which separates convection/advection from diffusion/scaling, on various manufactured and physical systems.
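A minimal sketch of this learning problem is written below in assumed notation; the exact grid parameterization, norm, and constraints used in this repository may differ.

```latex
% Sketch of the unsupervised learning problem (assumed notation, not verbatim from the paper/code):
%   U         : snapshot matrix of the PDE solution on the uniform reference grid x
%   \tilde{X} : learned diffeomorphic, parameter/time-varying grid
%   \tilde{U} : snapshots interpolated from the uniform grid onto \tilde{X}
%   r         : target rank, a proxy for the Kolmogorov n-width of the mapped data
\begin{aligned}
\min_{\tilde{X}} \quad & \left\| \tilde{U} - \operatorname{SVD}_r\!\left(\tilde{U}\right) \right\|_F ,
\qquad \tilde{U} = \mathcal{I}\!\left(U;\ x \rightarrow \tilde{X}\right) ,\\
\text{s.t.} \quad & \partial_x \tilde{X} > 0
\quad \text{(monotonicity, so that } \tilde{X} \text{ remains a diffeomorphism of } x\text{)} .
\end{aligned}
```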

Method

Video-description

FAQ

  • Why is the method considered physics-aware?

    • The existence of a low-rank time/parameter-varying grid that minimizes the Kolmogorov n-width of the snapshots is a conjecture based on the physics of many convection-dominated flows, and on the possibility of a low-rank approximation of the characteristic lines of hyperbolic PDEs; see Sec. 3.1 of [1].
  • Why is the method considered an auto-encoder?

    • We make a one-to-one comparison between the traditional definition of a neural-network-based auto-encoder and the proposed approach; a toy numerical sketch of this comparison is given after this FAQ.

      An auto-encoder consists of an encoder that maps the input to a compressed code in a feature space, and a decoder that maps that code back to a reconstruction of the input.

      Our proposed low-rank registration-based method acts as an auto-encoder in which the encoder and decoder are the mappings to and from the time/parameter-varying grid, the code is the data interpolated onto that grid, and the feature space is the space of time/parameter-varying grids. The feature space is compressed because the mapped data and the learned grid are of lower rank (lower dimensionality) than the input snapshots on the original uniform grid.

  • How does the method handle noisy data?

    • The SVD (singular value decomposition) and truncation at the heart of the algorithm act as a filter that removes the low-energy features of the data, i.e., noise is filtered out by the truncated SVD.
  • What interpolation scheme to use?

    • Use any off-the-shelf interpolation scheme, e.g., linear, cubic, or spline. High-order interpolation schemes only become advantageous at higher reconstruction ranks, because of the local aliasing of high-wave-number bases (features) on the coarsened grid. Since we are often interested in a low-rank reconstruction, linear interpolation is sufficient.
  • What optimization scheme to use?

    • Use any optimization method that can handle nonlinear constraints.
  • What's the complexity of the optimization problem?

    • The per-iteration cost of the optimization grows with the size of the snapshot data; however, the cost can be reduced by down-sampling the snapshots and the grid while still evaluating the cost function on the fine grid. The number of iterations required for convergence is problem dependent.
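The auto-encoder analogy and the SVD-truncation answer above can be illustrated with the small, self-contained Python sketch below. It is not the repository's implementation: the advecting pulse, the hand-prescribed shifted grid, and all variable names are illustrative assumptions, whereas the actual method learns the grid by solving a constrained optimization problem.

```python
import numpy as np

# Toy setup: an advecting Gaussian pulse, which has a large Kolmogorov n-width on a
# fixed (Eulerian) grid but becomes rank-1 once registered onto a grid that follows
# the characteristics.
nx, nt, c = 256, 64, 1.0                       # grid sizes and advection speed (assumed)
x = np.linspace(0.0, 1.0, nx, endpoint=False)  # uniform periodic reference grid
t = np.linspace(0.0, 0.5, nt)
U = np.array([np.exp(-((((x - c * tk) % 1.0) - 0.5) ** 2) / 0.002) for tk in t]).T  # (nx, nt)

# "Encoder": interpolate each snapshot onto a time-varying grid (here prescribed
# analytically; in this repository the grid is learned by the optimizer).
Xt = (x[:, None] + c * t[None, :]) % 1.0
Ut = np.stack([np.interp(Xt[:, k], x, U[:, k], period=1.0) for k in range(nt)], axis=1)

# "Code": SVD truncation of the mapped snapshots -- also the noise filter from the FAQ.
r = 1
W, s, Vh = np.linalg.svd(Ut, full_matrices=False)
Ut_r = (W[:, :r] * s[:r]) @ Vh[:r, :]

# "Decoder": interpolate back from the time-varying grid to the uniform grid.
U_rec = np.empty_like(U)
for k in range(nt):
    order = np.argsort(Xt[:, k])
    U_rec[:, k] = np.interp(x, Xt[order, k], Ut_r[order, k], period=1.0)

print("2nd singular value, fixed grid   :", np.linalg.svd(U, compute_uv=False)[1])
print("2nd singular value, learned grid :", s[1])
print("max reconstruction error (r = 1) :", np.abs(U - U_rec).max())
```

For this toy problem the mapped snapshots are numerically rank-1, so a rank-1 truncation followed by the decoding step reconstructs the advecting pulse to within interpolation error, which a rank-1 truncation on the fixed grid cannot do.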

Requirements

Experiments

Rotating A

Figures: rotating A, rotated A, location.

Open MATLAB

matlab -nodisplay -nosplash

Run the manifold learning

run main_rotatingA.m

Evaluate the snapshots on the learned manifold

run post_process.m

Two-dimensional fluid flows

Figures: 2D fluid flows (2D Riemann).

Open MATLAB

matlab -nodisplay -nosplash

Run the manifold learning

run main_opt_config03.m
run main_opt_config12.m

Evaluate the snapshots on the learned manifold

main_solve_config03.m
main_solve_config12.m

Physics-aware auto-encoder in an LSTM architecture

Figure: LSTM architecture.

  • The notebooks are Google Colab ready; make sure the *.py, *.pkl, and *.h5 files are in the same directory as the notebooks (*.ipynb). A minimal setup sketch is given below.
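One possible way to arrange this in Colab is to clone the repository into the runtime so the helper files sit next to the notebook. This is only a sketch: the clone URL and directory name are inferred from the repository name above and may need adjusting, and the notebooks may expect the files in a different sub-folder.

```python
# Minimal Colab setup sketch (assumption: the required *.py/*.pkl/*.h5 files live in
# this repository; the URL is inferred from the repository name and may need adjusting).
import os
import subprocess

subprocess.run(["git", "clone", "https://github.com/rmojgani/physicsawareae.git"], check=True)
os.chdir("physicsawareae")      # run the notebook from the cloned folder
print(sorted(os.listdir(".")))  # check that the .py, .pkl and .h5 files are present
```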

Burgers' equation Open In Colab

Run the manifold learning and train the LSTM with the physics-aware registration-based auto-encoder

main_manifold_burgers.ipynb

Run the manifold learning and train the LSTM with the neural-network-based auto-encoder

main_eulerian_burgers.ipynb

Wave equation Open In Colab

Run the manifold learning and train the LSTM with the physics-aware registration-based auto-encoder

main_manifold_wave_small.ipynb

Run the manifold learning and train the LSTM with the neural-network-based auto-encoder

main_eulerian_wave_small.ipynb
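As a rough illustration of the idea behind these notebooks (not their actual architecture, data, or hyper-parameters), the sketch below trains a small Keras LSTM to propagate low-rank temporal coefficients, i.e. the kind of latent "code" produced by the registration-based auto-encoder; all names, shapes, and settings are assumptions.

```python
# Hedged sketch: an LSTM propagating low-rank temporal coefficients (the latent "code"
# of a registration-based auto-encoder). Synthetic data and hyper-parameters are
# illustrative assumptions, not the settings used in the repository's notebooks.
import numpy as np
import tensorflow as tf

r, nt, lookback = 2, 200, 10

# Stand-in for the rank-r temporal coefficients of the mapped snapshots.
a = np.stack([np.sin(0.05 * (k + 1) * np.arange(nt)) for k in range(r)], axis=1)  # (nt, r)

# Build (lookback window) -> (next step) training pairs.
X = np.stack([a[i:i + lookback] for i in range(nt - lookback)])  # (samples, lookback, r)
y = a[lookback:]                                                 # (samples, r)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(lookback, r)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(r),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=16, verbose=0)

# One-step prediction in the latent (registered) space; the auto-encoder's decoder
# would then map the predicted state back to the physical grid (see the FAQ above).
pred = model.predict(a[None, :lookback], verbose=0)[0]
print("one-step latent prediction:", pred)
```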

Citation

The low-rank registration-based auto-encoder/manifold [2] builds on the idea of low-rank Lagrangian bases first introduced in [1]. Entries are listed chronologically.

physicsawareae's People

Contributors

rmojgani


physicsawareae's Issues

Missing fd_normal

Dear Rambod,

Thank you for providing the codes. I was running the code main_opt_config03 for 2DRiemann, but unfortunately, I get the following error:

The code is initiated on Serial 
ssssssssssssssssssssssssssssssssssssssssssssss
Unrecognized function or variable 'fd_normal'.

Error in main (line 48)
Dx = sparse(fd_normal(Nx,order_D1,x,1));

Error in main_opt (line 10)
main

Error in main_opt_config03 (line 32)
main_opt

Can you please help me with this? I am running the code on Matlab2020b.

Thank you in advance!

Best,
Pawan
