varnet's Introduction

VarNet Library

Variational Neural Networks for the Solution of Partial Differential Equations

Authors: Reza Khodayi-mehr and Michael M. Zavlanos
reza.khodayi.mehr(at)duke.edu
http://people.duke.edu/~rk157/
Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27708, USA.

Copyright (c) 2019 Reza Khodayi-mehr - licensed under the MIT License
For a full copyright statement see the accompanying LICENSE.md file.

For theoretical derivations as well as numerical experiment results, see:
Reza Khodayi-mehr and Michael M. Zavlanos. VarNet: Variational neural networks for the solution of partial differential equations, 2019. [Online]. Available: https://arxiv.org/pdf/1912.07443.pdf

To examine the functionalities of the VarNet library, see the accompanying Operator files.

The code is fully functional with the following module versions:
- python: 3.6.7
- tensorflow: 1.10.0
- numpy: 1.16.4
- scipy: 1.2.1
- matplotlib: 3.0.3
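The pins above can be sanity-checked against an installed environment with a small version-comparison helper. This is a sketch: only the version numbers come from the list above; the `TESTED` dictionary and helper names are ours.

```python
# Sketch: compare an installed version string against the tested pins above.
# Only the version numbers are from the README; the helpers are hypothetical.
TESTED = {
    "python": "3.6.7",
    "tensorflow": "1.10.0",
    "numpy": "1.16.4",
    "scipy": "1.2.1",
    "matplotlib": "3.0.3",
}

def parse(version):
    """Turn '1.16.4' into (1, 16, 4) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def at_least(installed, name):
    """True if `installed` meets or exceeds the tested pin for `name`."""
    return parse(installed) >= parse(TESTED[name])
```

Note that these are the versions the code was verified against, not hard minimums; newer TensorFlow 1.x releases (e.g. the 1.15.2 seen in the issue below) may also work.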

varnet's People

Contributors

rizaxudayi

varnet's Issues

The loss decreases too slowly after retraining

Hi,
I ran "Operator_2Dt.py" and got the following in the terminal:

Training weight information:
	boundary condition loss value: 0.1557
	initial condition loss value: 0.0051
	integral loss value: 0.0005
	requested weight on each term: [5, 1, 1]
	corresponding training weights: [6376349.4644 1275269.8929 1275269.8929]


Epoch  1: loss = 1000000.00000
Epoch  2: loss = 945083.43750
Epoch  3: loss = 900038.37500
Epoch  4: loss = 864876.43750
Epoch  5: loss = 839367.06250
Epoch  6: loss = 822930.62500
Epoch  7: loss = 814537.93750
Epoch  8: loss = 812679.68750
Epoch  9: loss = 815441.00000
Epoch 10: loss = 820707.12500
Epoch 11: loss = 826476.18750
Epoch 12: loss = 831175.81250
Epoch 13: loss = 833848.68750
Epoch 14: loss = 834163.50000
Epoch 15: loss = 832290.68750
Epoch 16: loss = 828728.81250
Epoch 17: loss = 824136.18750
Epoch 18: loss = 819195.00000
Epoch 19: loss = 814513.50000
Epoch 20: loss = 810559.00000
Epoch 21: loss = 807621.06250
Epoch 22: loss = 805800.37500
Epoch 23: loss = 805019.62500
Epoch 24: loss = 805059.87500
Epoch 25: loss = 805612.12500
Epoch 26: loss = 806338.56250
Epoch 27: loss = 806933.31250
Epoch 28: loss = 807168.31250
Epoch 29: loss = 806920.43750
Epoch 30: loss = 806174.06250
Epoch 31: loss = 805007.06250
Epoch 32: loss = 803560.18750
Epoch 33: loss = 802003.43750
Epoch 34: loss = 800501.18750
Epoch 35: loss = 799184.87500
Epoch 36: loss = 798134.43750
Epoch 37: loss = 797371.50000
Epoch 38: loss = 796863.43750
Epoch 39: loss = 796537.93750
Epoch 40: loss = 796300.50000
Epoch 41: loss = 796057.37500
Epoch 42: loss = 795732.12500
Epoch 43: loss = 795278.00000
Epoch 44: loss = 794683.12500
Epoch 45: loss = 793966.37500
Epoch 46: loss = 793169.50000
Epoch 47: loss = 792344.56250
Epoch 48: loss = 791541.81250
Epoch 49: loss = 790799.18750
Epoch 50: loss = 790137.18750
Epoch 51: loss = 789556.50000
Epoch 52: loss = 789040.81250
Epoch 53: loss = 788563.37500
Epoch 54: loss = 788094.06250
Epoch 55: loss = 787606.56250
Epoch 56: loss = 787083.00000
Epoch 57: loss = 786517.43750
Epoch 58: loss = 785914.81250
Epoch 59: loss = 785287.68750
Epoch 60: loss = 784653.00000
Epoch 61: loss = 784026.50000
Epoch 62: loss = 783420.06250
Epoch 63: loss = 782839.25000
Epoch 64: loss = 782282.68750
Epoch 65: loss = 781744.00000
Epoch 66: loss = 781213.62500
Epoch 67: loss = 780682.12500
Epoch 68: loss = 780142.43750
Epoch 69: loss = 779590.25000
Epoch 70: loss = 779026.06250
Epoch 71: loss = 778453.37500
Epoch 72: loss = 777877.31250
Epoch 73: loss = 777303.43750
Epoch 74: loss = 776736.12500
Epoch 75: loss = 776177.56250
Epoch 76: loss = 775627.75000
Epoch 77: loss = 775084.50000
Epoch 78: loss = 774544.87500
Epoch 79: loss = 774005.68750
Epoch 80: loss = 773464.31250
Epoch 81: loss = 772920.31250
Epoch 82: loss = 772373.37500
Epoch 83: loss = 771825.25000
Epoch 84: loss = 771277.68750
Epoch 85: loss = 770732.50000
Epoch 86: loss = 770190.93750
Epoch 87: loss = 769653.62500
Epoch 88: loss = 769120.06250
Epoch 89: loss = 768589.50000
Epoch 90: loss = 768060.75000
Epoch 91: loss = 767532.87500
Epoch 92: loss = 767005.37500
Epoch 93: loss = 766478.25000
Epoch 94: loss = 765951.75000
Epoch 95: loss = 765426.68750
Epoch 96: loss = 764903.68750
Epoch 97: loss = 764383.31250
Epoch 98: loss = 763865.81250
Epoch 99: loss = 763351.06250
Epoch 100: loss = 762838.93750
Epoch 200: loss = 727092.75000
WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/tensorflow_core/python/training/saver.py:963: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
Epoch 300: loss = 719476.12500
Epoch 400: loss = 717680.43750
Epoch 500: loss = 716005.93750

Then I stopped training after 500 epochs. Next, I resumed training by calling:

VarNet_2d.loadModel(None,folderpath)
VarNet_2d.train(folderpath, weight=[5, 1, 1], smpScheme='uniform')
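Resuming as above reloads the saved network weights; if the optimizer's internal state is not also restored, the first epochs after a restart behave like a cold start. A minimal NumPy sketch of the per-variable state an Adam-style optimizer carries (hypothetical, not VarNet's implementation) illustrates what would need to be checkpointed:

```python
import numpy as np

# Hedged sketch (not VarNet's actual code): with Adam, the moment
# estimates (m, v) and step counter t drive the effective step size.
# Resetting them to zero on restart slows early progress.
class AdamState:
    def __init__(self, shape):
        self.m = np.zeros(shape)  # first-moment estimate
        self.v = np.zeros(shape)  # second-moment estimate
        self.t = 0                # step counter (for bias correction)

def adam_step(w, grad, s, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; mutates the state `s`, returns the new weights."""
    s.t += 1
    s.m = b1 * s.m + (1 - b1) * grad
    s.v = b2 * s.v + (1 - b2) * grad**2
    m_hat = s.m / (1 - b1**s.t)
    v_hat = s.v / (1 - b2**s.t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)
```

In TensorFlow 1.x, `tf.train.Saver` checkpoints include these optimizer slot variables along with the weights, so whether they survive a restart depends on how `loadModel` rebuilds the graph.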

However, the loss starts back at its epoch-1 value and decreases very slowly, specifically:

Training weight information:
	boundary condition loss value: 0.1133
	initial condition loss value: 0.0031
	integral loss value: 0.0012
	requested weight on each term: [5, 1, 1]
	corresponding training weights: [8757908.6151 1751581.723  1751581.723 ]


Epoch  1: loss = 1000000.06250
Epoch  2: loss = 999273.68750
Epoch  3: loss = 998952.81250
Epoch  4: loss = 997740.68750
Epoch  5: loss = 996759.37500
Epoch  6: loss = 996656.37500
Epoch  7: loss = 995764.18750
Epoch  8: loss = 994928.68750
Epoch  9: loss = 994668.25000
Epoch 10: loss = 994174.87500
Epoch 11: loss = 993411.50000
Epoch 12: loss = 992953.56250
Epoch 13: loss = 992683.43750
Epoch 14: loss = 992186.12500
Epoch 15: loss = 991655.18750
Epoch 16: loss = 991368.56250
Epoch 17: loss = 991117.31250
Epoch 18: loss = 990718.18750
Epoch 19: loss = 990375.06250
Epoch 20: loss = 990195.56250
Epoch 21: loss = 989987.50000
Epoch 22: loss = 989703.81250
Epoch 23: loss = 989512.12500
Epoch 24: loss = 989407.25000
Epoch 25: loss = 989247.50000
Epoch 26: loss = 989073.75000
Epoch 27: loss = 988989.18750
Epoch 28: loss = 988923.93750
Epoch 29: loss = 988811.18750
Epoch 30: loss = 988729.75000
Epoch 31: loss = 988702.75000
Epoch 32: loss = 988651.06250
Epoch 33: loss = 988584.50000
Epoch 34: loss = 988561.87500
Epoch 35: loss = 988545.31250
Epoch 36: loss = 988500.50000
Epoch 37: loss = 988473.12500
Epoch 38: loss = 988466.43750
Epoch 39: loss = 988438.06250
Epoch 40: loss = 988406.62500
Epoch 41: loss = 988395.62500
Epoch 42: loss = 988374.37500
Epoch 43: loss = 988340.81250
Epoch 44: loss = 988321.31250
Epoch 45: loss = 988299.81250
Epoch 46: loss = 988265.00000
Epoch 47: loss = 988237.43750
Epoch 48: loss = 988212.62500
Epoch 49: loss = 988176.68750
Epoch 50: loss = 988143.93750
Epoch 51: loss = 988115.56250
Epoch 52: loss = 988079.00000
Epoch 53: loss = 988043.75000
Epoch 54: loss = 988012.93750
Epoch 55: loss = 987976.62500
Epoch 56: loss = 987940.68750
Epoch 57: loss = 987908.93750
Epoch 58: loss = 987873.31250
Epoch 59: loss = 987838.18750
Epoch 60: loss = 987806.25000
Epoch 61: loss = 987771.87500
Epoch 62: loss = 987737.68750
Epoch 63: loss = 987706.18750
Epoch 64: loss = 987672.62500
Epoch 65: loss = 987639.62500
Epoch 66: loss = 987608.31250
Epoch 67: loss = 987575.31250
Epoch 68: loss = 987543.06250
Epoch 69: loss = 987511.68750
Epoch 70: loss = 987478.81250
Epoch 71: loss = 987446.68750
Epoch 72: loss = 987414.68750
Epoch 73: loss = 987381.68750
Epoch 74: loss = 987349.31250
Epoch 75: loss = 987316.50000
Epoch 76: loss = 987283.00000
Epoch 77: loss = 987250.00000
Epoch 78: loss = 987216.31250
Epoch 79: loss = 987182.18750
Epoch 80: loss = 987148.25000
Epoch 81: loss = 987113.68750
Epoch 82: loss = 987078.93750
Epoch 83: loss = 987044.18750
Epoch 84: loss = 987008.68750
Epoch 85: loss = 986973.43750
Epoch 86: loss = 986937.75000
Epoch 87: loss = 986901.75000
Epoch 88: loss = 986865.75000
Epoch 89: loss = 986829.25000
Epoch 90: loss = 986792.62500
Epoch 91: loss = 986755.93750
Epoch 92: loss = 986719.00000
Epoch 93: loss = 986681.62500
Epoch 94: loss = 986644.37500
Epoch 95: loss = 986606.62500
Epoch 96: loss = 986568.87500
Epoch 97: loss = 986530.75000
Epoch 98: loss = 986492.37500
Epoch 99: loss = 986453.87500
Epoch 100: loss = 986415.18750

Thanks,
