
sciml / neuralpde.jl — 923 stars · 36 watchers · 196 forks · 528.81 MB

Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation

Home Page: https://docs.sciml.ai/NeuralPDE/stable/

License: Other

Language: Julia (100.0%)
Topics: differential-equations, differentialequations, neural-differential-equations, scientific-ml, scientific-ai, partial-differential-equations, ordinary-differential-equations, neural-network, neural-networks, ode

neuralpde.jl's Introduction

NeuralPDE

Chat: #sciml-bridged on https://julialang.zulipchat.com

NeuralPDE.jl is a solver package which consists of neural network solvers for partial differential equations using physics-informed neural networks (PINNs). This package utilizes neural stochastic differential equations to solve PDEs with greatly increased generality compared to classical methods.

Installation

Assuming that you already have Julia correctly installed, it suffices to install NeuralPDE.jl in the standard way, that is, by typing ] add NeuralPDE. Note: to exit the Pkg REPL mode, just press Backspace or Ctrl + C.
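
Equivalently, NeuralPDE.jl can be installed non-interactively via the package manager API:

using Pkg
Pkg.add("NeuralPDE")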

Tutorials and Documentation

For information on using the package, see the stable documentation. Use the in-development documentation for the unreleased features.

Features

  • Physics-Informed Neural Networks for ODE, SDE, RODE, and PDE solving
  • Ability to define extra loss functions to mix xDE solving with data fitting (scientific machine learning)
  • Automated construction of Physics-Informed loss functions from a high level symbolic interface
  • Sophisticated techniques like quadrature training strategies, adaptive loss functions, and neural adapters to accelerate training
  • Integrated logging suite for handling connections to TensorBoard
  • Handling of (partial) integro-differential equations and various stochastic equations
  • Specialized forms for solving ODEProblems with neural networks (a sketch follows this list)
  • Compatibility with Flux.jl and Lux.jl for all of the GPU-powered machine learning layers available from those libraries
  • Compatibility with NeuralOperators.jl for mixing DeepONets and other neural operators (Fourier Neural Operators, Graph Neural Operators, etc.) with physics-informed loss functions
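
For example, the specialized ODE form mentioned above can be used as follows. This is a minimal sketch assuming a recent NeuralPDE release; see the ODE tutorial in the docs for details:

using NeuralPDE, OrdinaryDiffEq, Lux, OptimizationOptimisers

# Solve u' = cos(2πt), u(0) = 0 on t ∈ (0, 1) with the NNODE solver
linear(u, p, t) = cos(2π * t)
prob = ODEProblem(linear, 0.0f0, (0.0f0, 1.0f0))

chain = Lux.Chain(Dense(1, 5, Lux.σ), Dense(5, 1))
alg = NNODE(chain, OptimizationOptimisers.Adam(0.01))
sol = solve(prob, alg, verbose = true, maxiters = 2000, saveat = 0.01)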

Example: Solving 2D Poisson Equation via Physics-Informed Neural Networks

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimisers
import ModelingToolkit: Interval, infimum, supremum

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

# 2D PDE
eq = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sin(pi * x) * sin(pi * y)

# Boundary conditions
bcs = [u(0, y) ~ 0.0, u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0, u(x, 1) ~ 0.0]
# Space and time domains
domains = [x ∈ Interval(0.0, 1.0),
    y ∈ Interval(0.0, 1.0)]
# Discretization
dx = 0.1

# Neural network
dim = 2 # number of dimensions
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))

discretization = PhysicsInformedNN(chain, QuadratureTraining())

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)

callback = function (p, l)
    println("Current loss is: $l")
    return false
end

res = Optimization.solve(prob, ADAM(0.1); callback = callback, maxiters = 4000)
prob = remake(prob, u0 = res.minimizer)
res = Optimization.solve(prob, ADAM(0.01); callback = callback, maxiters = 2000)
phi = discretization.phi

And some analysis:

xs, ys = [infimum(d.domain):(dx / 10):supremum(d.domain) for d in domains]
analytic_sol_func(x, y) = (sin(pi * x) * sin(pi * y)) / (2pi^2)

u_predict = reshape([first(phi([x, y], res.minimizer)) for x in xs for y in ys],
    (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys],
    (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)

using Plots
p1 = plot(xs, ys, u_real, linetype = :contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype = :contourf, title = "predict");
p3 = plot(xs, ys, diff_u, linetype = :contourf, title = "error");
plot(p1, p2, p3)


Citation

If you use NeuralPDE.jl in your research, please cite this paper:

@article{zubov2021neuralpde,
  title={NeuralPDE: Automating Physics-Informed Neural Networks (PINNs) with Error Approximations},
  author={Zubov, Kirill and McCarthy, Zoe and Ma, Yingbo and Calisto, Francesco and Pagliarino, Valerio and Azeglio, Simone and Bottero, Luca and Luj{\'a}n, Emmanuel and Sulzer, Valentin and Bharambe, Ashutosh and others},
  journal={arXiv preprint arXiv:2107.09443},
  year={2021}
}

neuralpde.jl's People

Contributors

akaysh, albheim, arnostrouwen, ashutosh-b-b, astitvaaggarwal, ayushinav, chrisrackauckas, christopher-dg, dependabot[bot], github-actions[bot], kanav99, killah-t-cell, kirillzubov, ldeso, maleadt, mkg33, paniash, ranocha, rohitrathore1, sathvikbhagavan, sdesai1287, shashi, thazhemadam, vaibhavdixit02, xinyu-li-1997, xtalax, yichengdwu, yingboma, zoemcc, zzj0402


neuralpde.jl's Issues

Array error in 2D poisson example

Hi,
I'm trying to run the 2D poisson example but always get a BoundsError when running it. Any ideas?
Thanks.

julia> phi, res = solve(prob, alg, verbose=true, maxiters=5000)
ERROR: BoundsError: attempt to access 1-element Array{Int64,1} at index [2]
Stacktrace:
[1] getindex at ./array.jl:788 [inlined]
[2] adjoint at /Users/ralph/.julia/packages/Zygote/1GXzF/src/lib/array.jl:37 [inlined]
[3] _pullback at /Users/ralph/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:47 [inlined]
[4] literal_getindex at /Users/ralph/.julia/packages/Zygote/1GXzF/src/lib/lib.jl:94 [inlined]
[5] #185 at /Users/ralph/.julia/packages/NeuralPDE/RrGzl/src/pinns_pde_solve.jl:146 [inlined]
[6] _pullback(::Zygote.Context, ::NeuralPDE.var"#185#187", ::NeuralPDE.var"#149#162"{FastChain{Tuple{FastDense{typeof(σ),DiffEqFlux.var"#initial_params#89"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}},FastDense{typeof(σ),DiffEqFlux.var"#initial_params#89"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}},FastDense{typeof(identity),DiffEqFlux.var"#initial_params#89"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}}}}}, ::Array{Int64,1}, ::Array{Float32,1}, ::NeuralPDE.var"#154#168"{Array{Array{Float64,1},1}}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[7] #157 at /Users/ralph/.julia/packages/NeuralPDE/RrGzl/src/pinns_pde_solve.jl:428 [inlined]
[8] _pullback(::Zygote.Context, ::NeuralPDE.var"#157#171"{Array{Function,1},Int64}, ::Array{Int64,1}, ::Array{Float32,1}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[9] inner_loss at /Users/ralph/.julia/packages/NeuralPDE/RrGzl/src/pinns_pde_solve.jl:442 [inlined]
[10] _pullback(::Zygote.Context, ::NeuralPDE.var"#inner_loss#175", ::NeuralPDE.var"#157#171"{Array{Function,1},Int64}, ::Array{Int64,1}, ::Array{Float32,1}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[11] #160 at ./none:0 [inlined]
[12] _pullback(::Zygote.Context, ::NeuralPDE.var"#160#178"{NeuralPDE.var"#157#171"{Array{Function,1},Int64},Array{Float32,1},NeuralPDE.var"#inner_loss#175"}, ::Array{Int64,1}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[13] MappingRF at ./reduce.jl:90 [inlined]
[14] _pullback(::Zygote.Context, ::Base.MappingRF{NeuralPDE.var"#160#178"{NeuralPDE.var"#157#171"{Array{Function,1},Int64},Array{Float32,1},NeuralPDE.var"#inner_loss#175"},Base.MappingRF{typeof(abs2),Base.BottomRF{typeof(Base.add_sum)}}}, ::Base._InitialValue, ::Array{Int64,1}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[15] _foldl_impl at ./reduce.jl:55 [inlined]
[16] _pullback(::Zygote.Context, ::typeof(Base._foldl_impl), ::Base.MappingRF{NeuralPDE.var"#160#178"{NeuralPDE.var"#157#171"{Array{Function,1},Int64},Array{Float32,1},NeuralPDE.var"#inner_loss#175"},Base.MappingRF{typeof(abs2),Base.BottomRF{typeof(Base.add_sum)}}}, ::Base._InitialValue, ::Array{Any,1}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[17] foldl_impl at ./reduce.jl:45 [inlined]
[18] _pullback(::Zygote.Context, ::typeof(Base.foldl_impl), ::Base.MappingRF{NeuralPDE.var"#160#178"{NeuralPDE.var"#157#171"{Array{Function,1},Int64},Array{Float32,1},NeuralPDE.var"#inner_loss#175"},Base.MappingRF{typeof(abs2),Base.BottomRF{typeof(Base.add_sum)}}}, ::NamedTuple{(),Tuple{}}, ::Array{Any,1}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[19] mapfoldl_impl at ./reduce.jl:41 [inlined]
[20] #mapfoldl#189 at ./reduce.jl:157 [inlined]
[21] mapfoldl at ./reduce.jl:157 [inlined]
[22] #mapreduce#193 at ./reduce.jl:283 [inlined]
[23] mapreduce at ./reduce.jl:283 [inlined]
[24] sum at ./reduce.jl:486 [inlined]
[25] #159 at ./none:0 [inlined]
[26] MappingRF at ./reduce.jl:90 [inlined]
[27] _pullback(::Zygote.Context, ::Base.MappingRF{NeuralPDE.var"#159#177"{Array{Float32,1},NeuralPDE.var"#inner_loss#175"},Base.BottomRF{typeof(Base.add_sum)}}, ::Base._InitialValue, ::Tuple{NeuralPDE.var"#157#171"{Array{Function,1},Int64},Array{Any,1}}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[28] _foldl_impl at ./reduce.jl:55 [inlined]
[29] _pullback(::Zygote.Context, ::typeof(Base._foldl_impl), ::Base.MappingRF{NeuralPDE.var"#159#177"{Array{Float32,1},NeuralPDE.var"#inner_loss#175"},Base.BottomRF{typeof(Base.add_sum)}}, ::Base._InitialValue, ::Base.Iterators.Zip{Tuple{Array{Any,1},Array{Any,1}}}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[30] foldl_impl at ./reduce.jl:45 [inlined]
[31] _pullback(::Zygote.Context, ::typeof(Base.foldl_impl), ::Base.MappingRF{NeuralPDE.var"#159#177"{Array{Float32,1},NeuralPDE.var"#inner_loss#175"},Base.BottomRF{typeof(Base.add_sum)}}, ::NamedTuple{(),Tuple{}}, ::Base.Iterators.Zip{Tuple{Array{Any,1},Array{Any,1}}}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[32] mapfoldl_impl at ./reduce.jl:41 [inlined]
[33] #mapfoldl#189 at ./reduce.jl:157 [inlined]
[34] mapfoldl at ./reduce.jl:157 [inlined]
[35] #mapreduce#193 at ./reduce.jl:283 [inlined]
[36] mapreduce at ./reduce.jl:283 [inlined]
[37] sum at ./reduce.jl:486 [inlined]
[38] sum at ./reduce.jl:503 [inlined]
[39] loss_boundary at /Users/ralph/.julia/packages/NeuralPDE/RrGzl/src/pinns_pde_solve.jl:446 [inlined]
[40] loss at /Users/ralph/.julia/packages/NeuralPDE/RrGzl/src/pinns_pde_solve.jl:450 [inlined]
[41] _pullback(::Zygote.Context, ::NeuralPDE.var"#loss#179"{Int64,Int64,NeuralPDE.var"#loss_domain#173"{Array{Any,1},NeuralPDE.var"#inner_loss_domain#172"{NeuralPDE.var"#156#170"{NeuralPDE.var"#181#183"}}},NeuralPDE.var"#loss_boundary#176"{Array{Any,1},Array{Any,1},NeuralPDE.var"#inner_loss#175"}}, ::Array{Float32,1}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[42] adjoint(::Zygote.Context, ::typeof(Core._apply_iterate), ::typeof(iterate), ::Function, ::Tuple{Array{Float32,1}}, ::Tuple{}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/lib/lib.jl:179
[43] _pullback(::Zygote.Context, ::typeof(Core._apply_iterate), ::typeof(iterate), ::Function, ::Tuple{Array{Float32,1}}, ::Tuple{}) at /Users/ralph/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:47
[44] #33 at /Users/ralph/.julia/packages/DiffEqFlux/FZMwP/src/train.jl:99 [inlined]
[45] _pullback(::Zygote.Context, ::DiffEqFlux.var"#33#38"{Tuple{},NeuralPDE.var"#loss#179"{Int64,Int64,NeuralPDE.var"#loss_domain#173"{Array{Any,1},NeuralPDE.var"#inner_loss_domain#172"{NeuralPDE.var"#156#170"{NeuralPDE.var"#181#183"}}},NeuralPDE.var"#loss_boundary#176"{Array{Any,1},Array{Any,1},NeuralPDE.var"#inner_loss#175"}},Array{Float32,1}}) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface2.jl:0
[46] pullback(::Function, ::Zygote.Params) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface.jl:172
[47] gradient(::Function, ::Zygote.Params) at /Users/ralph/.julia/packages/Zygote/1GXzF/src/compiler/interface.jl:53
[48] macro expansion at /Users/ralph/.julia/packages/DiffEqFlux/FZMwP/src/train.jl:98 [inlined]
[49] macro expansion at /Users/ralph/.julia/packages/ProgressLogging/BBN0b/src/ProgressLogging.jl:328 [inlined]
[50] (::DiffEqFlux.var"#32#37"{NeuralPDE.var"#161#180"{Float32,Bool},Int64,Bool,Bool,NeuralPDE.var"#loss#179"{Int64,Int64,NeuralPDE.var"#loss_domain#173"{Array{Any,1},NeuralPDE.var"#inner_loss_domain#172"{NeuralPDE.var"#156#170"{NeuralPDE.var"#181#183"}}},NeuralPDE.var"#loss_boundary#176"{Array{Any,1},Array{Any,1},NeuralPDE.var"#inner_loss#175"}},Array{Float32,1},Zygote.Params})() at /Users/ralph/.julia/packages/DiffEqFlux/FZMwP/src/train.jl:43
[51] maybe_with_logger(::DiffEqFlux.var"#32#37"{NeuralPDE.var"#161#180"{Float32,Bool},Int64,Bool,Bool,NeuralPDE.var"#loss#179"{Int64,Int64,NeuralPDE.var"#loss_domain#173"{Array{Any,1},NeuralPDE.var"#inner_loss_domain#172"{NeuralPDE.var"#156#170"{NeuralPDE.var"#181#183"}}},NeuralPDE.var"#loss_boundary#176"{Array{Any,1},Array{Any,1},NeuralPDE.var"#inner_loss#175"}},Array{Float32,1},Zygote.Params}, ::Nothing) at /Users/ralph/.julia/packages/DiffEqBase/ytJuW/src/utils.jl:259
[52] sciml_train(::Function, ::Array{Float32,1}, ::ADAM, ::Base.Iterators.Cycle{Tuple{DiffEqFlux.NullData}}; cb::Function, maxiters::Int64, progress::Bool, save_best::Bool) at /Users/ralph/.julia/packages/DiffEqFlux/FZMwP/src/train.jl:42
[53] solve(::NNPDEProblem{NeuralPDE.var"#181#183",Function,Array{Array{Any,1},1}}, ::NNDE{FastChain{Tuple{FastDense{typeof(σ),DiffEqFlux.var"#initial_params#89"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}},FastDense{typeof(σ),DiffEqFlux.var"#initial_params#89"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}},FastDense{typeof(identity),DiffEqFlux.var"#initial_params#89"{typeof(Flux.glorot_uniform),typeof(Flux.zeros),Int64,Int64}}}},ADAM,Array{Float32,1},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}}; timeseries_errors::Bool, save_everystep::Bool, adaptive::Bool, abstol::Float32, verbose::Bool, maxiters::Int64) at /Users/ralph/.julia/packages/NeuralPDE/RrGzl/src/pinns_pde_solve.jl:456

Parameters identification

Hi,

First of all, congratulations on developing the NeuralPDE package! I have been playing around with the different examples and tutorials, and although I am new to Julia, I have been able to solve some PDEs that interest me.

I would now like to do parameter identification, for instance using PINNs to identify the value of an unknown coefficient in a PDE from measured data. Is it possible to solve this kind of problem?

Best,

Lucas
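
(For reference: yes, this is supported as an inverse problem. Below is a rough, hedged sketch; the toy equation and the measurement arrays ts/u_data are hypothetical, and the exact signature of additional_loss has changed across NeuralPDE versions, so consult the parameter-estimation tutorial in the docs for the authoritative version.)

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL
import ModelingToolkit: Interval

@parameters t p
@variables u(..)
Dt = Differential(t)
eq = Dt(u(t)) ~ p * u(t)                  # p is the unknown coefficient
bcs = [u(0.0) ~ 1.0]
domains = [t ∈ Interval(0.0, 1.0)]

chain = Lux.Chain(Dense(1, 16, Lux.σ), Dense(16, 1))

ts = 0.0:0.1:1.0                          # hypothetical measurement times
u_data = exp.(1.5 .* ts)                  # hypothetical data (true p = 1.5)

# Extra loss fitting the network output to the measured data; phi and θ
# are supplied by NeuralPDE when it calls additional_loss.
function additional_loss(phi, θ, p_)
    return sum(abs2(first(phi([tᵢ], θ)) - uᵢ) for (tᵢ, uᵢ) in zip(ts, u_data)) / length(ts)
end

discretization = PhysicsInformedNN(chain, QuadratureTraining();
    param_estim = true, additional_loss = additional_loss)
@named pde_system = PDESystem(eq, bcs, domains, [t], [u(t)], [p];
    defaults = Dict(p => 1.0))
prob = discretize(pde_system, discretization)
res = Optimization.solve(prob, BFGS(); maxiters = 1000)
p_identified = res.u[end]                 # the PDE parameter is appended last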

Error in NNRODE test routine

Running ]test NeuralPDE, I get the following error:

NNRODE: Test Failed at /home/ah/.julia/packages/NeuralPDE/xidim/test/NNRODE_tests.jl:33
  Expression: err < 0.4
   Evaluated: 0.541088f0 < 0.4
Stacktrace:
 [1] top-level scope at /home/ah/.julia/packages/NeuralPDE/xidim/test/NNRODE_tests.jl:33
 [2] include(::Module, ::String) at ./Base.jl:377
 [3] include(::String) at /home/ah/.julia/packages/SafeTestsets/A83XK/src/SafeTestsets.jl:23
 [4] top-level scope at /home/ah/.julia/packages/NeuralPDE/xidim/test/runtests.jl:13
 [5] top-level scope at /build/julia-jWOAej/julia-1.4.2+dfsg/usr/share/julia/stdlib/v1.4/Test/src/Test.jl:1113
 [6] top-level scope at /home/ah/.julia/packages/NeuralPDE/xidim/test/runtests.jl:13
Test Summary: | Pass  Fail  Total
NNRODE        |    1     1      2
ERROR: LoadError: Some tests did not pass: 1 passed, 1 failed, 0 errored, 0 broken.
in expression starting at /home/ah/.julia/packages/NeuralPDE/xidim/test/runtests.jl:10
ERROR: Package NeuralPDE errored during testing

which indicates poor convergence.

On the other hand, the previous tests passed without problems. All tests seem to run on one core without GPU ...

periodic boundary conditions

Hello,

How can we define periodic boundary conditions in NeuralPDE.jl? For example, the 2D Poisson equation with periodic boundary conditions:
[https://fenicsproject.org/docs/dolfin/1.4.0/python/demo/documented/periodic/python/documentation.html]

Thanks in advance!
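
(For reference, since boundary conditions are given symbolically, periodic conditions can be written directly as constraints relating opposite boundaries. A sketch, reusing the 2D Poisson setup from the README example above:)

bcs = [u(0, y) ~ u(1, y),    # periodic in x
       u(x, 0) ~ u(x, 1)]    # periodic in y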

Failed to precompile Zygote

@ChrisRackauckas I have been trying for some time to run your tests locally and I keep bumping up against the precompilation error below. Could you give an indication of the correct configuration of Zygote and its dependencies to get past this point?

using Flux, Zygote, LinearAlgebra, Statistics
using Test, NeuralNetDiffEq
println("NNPDEHAN_tests")

Info: Precompiling Flux [587475ba-b771-5e3f-ad9e-33799f191a9c]
└ @ Base loading.jl:1260
┌ Warning: Package Zygote does not have ArrayLayouts in its dependencies:
│ - If you have Zygote checked out for development and have
│ added ArrayLayouts as a dependency but haven't updated your primary
│ environment's manifest file, try Pkg.resolve().
│ - Otherwise you may need to report an issue with Zygote
└ Loading ArrayLayouts into Zygote from project dependency, future warnings for Zygote are suppressed.
WARNING: could not import NNlib.batched_mul into Zygote
WARNING: could not import NNlib.batched_adjoint into Zygote
ERROR: LoadError: LoadError: UndefVarError: batched_mul not defined
Stacktrace:
[1] top-level scope at C:\Users\Denis\.julia\packages\ZygoteRules\6nssF\src\adjoint.jl:45
[2] include(::Module, ::String) at .\Base.jl:377
[3] include(::String) at C:\Users\Denis\.julia\packages\Zygote\YeCEW\src\Zygote.jl:1
[4] top-level scope at C:\Users\Denis\.julia\packages\Zygote\YeCEW\src\Zygote.jl:30
[5] include(::Module, ::String) at .\Base.jl:377
[6] top-level scope at none:2
[7] eval at .\boot.jl:331 [inlined]
[8] eval(::Expr) at .\client.jl:449
[9] top-level scope at .\none:3
in expression starting at C:\Users\Denis\.julia\packages\Zygote\YeCEW\src\lib\nnlib.jl:80
in expression starting at C:\Users\Denis\.julia\packages\Zygote\YeCEW\src\Zygote.jl:30
ERROR: LoadError: Failed to precompile Zygote [e88e6eb3-aa80-5325-afca-941959d7151f] to C:\Users\Denis\.julia\compiled\v1.4\Zygote\4kbLI_rTblq.ji.
Stacktrace:
[1] error(::String) at .\error.jl:33
[2] compilecache(::Base.PkgId, ::String) at .\loading.jl:1272
[3] _require(::Base.PkgId) at .\loading.jl:1029
[4] require(::Base.PkgId) at .\loading.jl:927
[5] require(::Module, ::Symbol) at .\loading.jl:922
[6] include(::Module, ::String) at .\Base.jl:377
[7] top-level scope at none:2
[8] eval at .\boot.jl:331 [inlined]
[9] eval(::Expr) at .\client.jl:449
[10] top-level scope at .\none:3
in expression starting at C:\Users\Denis\.julia\packages\Flux\Fj3bt\src\Flux.jl:6
Failed to precompile Flux [587475ba-b771-5e3f-ad9e-33799f191a9c] to C:\Users\Denis\.julia\compiled\v1.4\Flux\QdkVy_rTblq.ji.

Stacktrace:
[1] error(::String) at .\error.jl:33
[2] compilecache(::Base.PkgId, ::String) at .\loading.jl:1272
[3] _require(::Base.PkgId) at .\loading.jl:1029
[4] require(::Base.PkgId) at .\loading.jl:927
[5] require(::Module, ::Symbol) at .\loading.jl:922
[6] top-level scope at In[3]:1

Cyclops Error with Julia 1.1

(v1.1) pkg> add NeuralPDE
  Updating registry at `~/.julia/registries/General`
  Updating git-repo `https://github.com/JuliaRegistries/General.git`
 Resolving package versions...
ERROR: Unsatisfiable requirements detected for package NeuralPDE [315f7962]:
 NeuralPDE [315f7962] log:
 ├─possible versions are: [2.0.0, 2.1.0, 2.2.0, 2.3.0] or uninstalled
 ├─restricted to versions * by an explicit requirement, leaving only versions [2.0.0, 2.1.0, 2.2.0, 2.3.0]
 ├─restricted by compatibility requirements with CuArrays [3a865a2d] to versions: [2.1.0, 2.2.0, 2.3.0] or uninstalled, leaving only versions: [2.1.0, 2.2.0, 2.3.0]
 │ └─CuArrays [3a865a2d] log:
 │   ├─possible versions are: [0.2.1, 0.3.0, 0.4.0, 0.5.0, 0.6.0-0.6.2, 0.7.0-0.7.3, 0.8.0-0.8.1, 0.9.0-0.9.1, 1.0.0-1.0.2, 1.1.0, 1.2.0-1.2.1, 1.3.0, 1.4.0-1.4.7, 1.5.0, 1.6.0, 1.7.0-1.7.3, 2.0.0-2.0.1, 2.1.0, 2.2.0-2.2.2] or uninstalled
 │   └─restricted to versions 1.2.1 by an explicit requirement, leaving only versions 1.2.1
 └─restricted by compatibility requirements with Flux [587475ba] to versions: uninstalled — no versions left
   └─Flux [587475ba] log:
     ├─possible versions are: [0.4.1, 0.5.0-0.5.4, 0.6.0-0.6.10, 0.7.0-0.7.3, 0.8.0-0.8.3, 0.9.0, 0.10.0-0.10.4, 0.11.0-0.11.1] or uninstalled
     └─restricted to versions 0.9.0 by an explicit requirement, leaving only versions 0.9.0

Error while trying to run code from Readme

Hello!
First of all, I want to thank you for developing such a powerful tool for solving differential equations.
I'm trying to get into it by running the example code from the repository README, but failing to do so.
The run gives me an error when invoking the solve function:
ERROR: Use broadcasting (σ.(x)) to apply activation functions to arrays.

Stacktrace:

 [1] #evaluate_call_recurse!#48(::Bool, ::typeof(JuliaInterpreter.evaluate_call_recurse!), ::Any, ::JuliaInterpreter.Frame, ::Expr) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:213
 [2] evaluate_call_recurse! at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:202 [inlined]
 [3] eval_rhs(::Any, ::JuliaInterpreter.Frame, ::Expr) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:387
 [4] step_expr!(::Any, ::JuliaInterpreter.Frame, ::Any, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:527
 [5] step_expr!(::Any, ::JuliaInterpreter.Frame, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:566
 [6] finish!(::Any, ::JuliaInterpreter.Frame, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\commands.jl:14
 [7] finish_and_return! at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\commands.jl:29 [inlined]
 [8] #evaluate_call_recurse!#48(::Bool, ::typeof(JuliaInterpreter.evaluate_call_recurse!), ::Any, ::JuliaInterpreter.Frame, ::Expr) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:240
 ... (the last 7 lines are repeated 3 more times)
 [30] evaluate_call_recurse! at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:202 [inlined]
 [31] eval_rhs(::Any, ::JuliaInterpreter.Frame, ::Expr) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:387 [32] step_expr!(::Any, ::JuliaInterpreter.Frame, ::Any, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:522
 [33] step_expr!(::Any, ::JuliaInterpreter.Frame, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\interpret.jl:566
 [34] finish!(::Any, ::JuliaInterpreter.Frame, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\commands.jl:14
 [35] finish_and_return!(::Any, ::JuliaInterpreter.Frame, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\commands.jl:29
 [36] finish_stack!(::Any, ::JuliaInterpreter.Frame, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\commands.jl:59
 [37] #debug_command#70(::Nothing, ::typeof(JuliaInterpreter.debug_command), ::Any, ::JuliaInterpreter.Frame, ::Symbol, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\commands.jl:449
 [38] debug_command(::Any, ::JuliaInterpreter.Frame, ::Symbol, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\JuliaInterpreter\VxcSp\src\commands.jl:391
 [39] (::Atom.JunoDebugger.var"#52#54"{Bool,Bool,Bool})() at C:\Users\Gavrilov.AIU\.julia\packages\Atom\N5oSJ\src\debugger\stepper.jl:151
 [40] evalscope(::Atom.JunoDebugger.var"#52#54"{Bool,Bool,Bool}) at C:\Users\Gavrilov.AIU\.julia\packages\Atom\N5oSJ\src\debugger\stepper.jl:392
 [41] #startdebugging#51(::Bool, ::Bool, ::typeof(Atom.JunoDebugger.startdebugging), ::JuliaInterpreter.Frame, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\Atom\N5oSJ\src\debugger\stepper.jl:149
 [42] #startdebugging at .\none:0 [inlined]
 [43] (::Base.var"#inner#2"{Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol,Symbol},NamedTuple{(:istoplevel, :toggle_ui),Tuple{Bool,Bool}}},typeof(Atom.JunoDebugger.startdebugging),Tuple{JuliaInterpreter.Frame,Bool}})() at .\essentials.jl:712
 [44] #invokelatest#1(::Base.Iterators.Pairs{Symbol,Bool,Tuple{Symbol,Symbol},NamedTuple{(:istoplevel, :toggle_ui),Tuple{Bool,Bool}}}, ::typeof(Base.invokelatest), ::Any, ::Any, ::Vararg{Any,N} where N) at .\essentials.jl:713
 [45] (::Base.var"#kw##invokelatest")(::NamedTuple{(:istoplevel, :toggle_ui),Tuple{Bool,Bool}}, ::typeof(Base.invokelatest), ::Function, ::JuliaInterpreter.Frame, ::Vararg{Any,N} where N) at .\none:0
 [46] (::Atom.JunoDebugger.var"#47#49"{String,String,Bool,Int64})() at C:\Users\Gavrilov.AIU\.julia\packages\Atom\N5oSJ\src\debugger\stepper.jl:112
 [47] hideprompt(::Atom.JunoDebugger.var"#47#49"{String,String,Bool,Int64}) at C:\Users\Gavrilov.AIU\.julia\packages\Atom\N5oSJ\src\repl.jl:122
 [48] debug_file(::String, ::String, ::String, ::Bool, ::Int64) at C:\Users\Gavrilov.AIU\.julia\packages\Atom\N5oSJ\src\debugger\stepper.jl:83
 [49] debug_file(::String, ::String, ::String, ::Bool) at C:\Users\Gavrilov.AIU\.julia\packages\Atom\N5oSJ\src\debugger\stepper.jl:81
 [50] handlemsg(::Dict{String,Any}, ::String, ::Vararg{Any,N} where N) at C:\Users\Gavrilov.AIU\.julia\packages\Atom\N5oSJ\src\comm.jl:166

My dev environment:

  • julia 1.3.1 x64
  • NeuralNetDiffEq.jl package installed via package manager from master branch with all its dependencies

What I tried:

  • Create TerminalPDEProblem with the following parameters: TerminalPDEProblem(g, f, μ, σ., X0, tspan). This gives a syntax error.
  • Define sigma as σ.(X,p,t) = Diagonal(sigma*X.data). This didn't help either; it gives ERROR: UndefVarError: X not defined at runtime.

I guess I'm missing something in the code, but since I'm new to Julia, I cannot figure it out.
Any help would be appreciated.

Sincerely,
Alex Gavrilov

GPU training does not work with at least Flux optimizers

Consider an example from the docs demonstrating GPU training: https://neuralpde.sciml.ai/dev/examples/pinns_example/#Example-8-:-Fokker-Planck-equation-with-GPU-acceleration-1

If we call it with one of Flux's optimizers, for example GalacticOptim.solve(prob, ADAM()), then that dispatches to this implementation, which won't run on the GPU. We can see it from:

julia> _loss = function(θ)
           x = prob.f.f(θ, prob.p)
       end
#15 (generic function with 1 method)
julia> θ = copy(prob.x)   ### still on CPU ###
193-element Array{Float32,1}:
  
julia> gs = Flux.Zygote.gradient(ps = Flux.params(θ)) do
         x = _loss(θ)
         first(x)
       end
Grads(...)
julia> gs[θ]    ### still on CPU ###
193-element Array{Float64,1}:

Test Errors?

I ran: https://github.com/JuliaDiffEq/NeuralNetDiffEq.jl/blob/master/test/NNPDENS_tests.jl

For "Black-Scholes-Barenblatt equation"

ans = solve(prob, pdealg, verbose=true, maxiters=250, trajectories=m,
alg=EM(), dt=dt, pabstol = 1f-6)

It says:
MethodError: no method matching Float32(::Tracker.TrackedReal{Float32})
Closest candidates are:
Float32(::Real, !Matched::RoundingMode) where T<:AbstractFloat at rounding.jl:200
Float32(::T) where T<:Number at boot.jl:718
Float32(!Matched::Int8) at float.jl:60

"Nonlinear Black-Scholes Equation with Default Risk"

@time ans = solve(prob, pdealg, verbose=true, maxiters=200, trajectories=m,
                            alg=EM(), dt=dt, pabstol = 1f-6)

Says:
MethodError: vcat(::TrackedArray{…,Array{Float32,1}}, ::Array{Tracker.TrackedReal{Float32},1}) is ambiguous. Candidates:

Rename?

I am wondering if this package should get a snappier name. NeuralPDE.jl? PhysicsInformedML.jl?

Matrix PDEs

Any chance NeuralPDE.jl can eventually be used with (complex) matrix PDEs?

Currently,

eq  = Dxx(u(x,y,θ)) + Dyy(u(x,y,θ)) ~ -sin(pi*x)*sin(pi*y)*[0 1; 0 1]

gives an error

MethodError: no method matching ~(::Operation, ::Array{Operation,2})

Closest candidates are:

~(::Expression, !Matched::Expression) at /home/user/.julia/packages/ModelingToolkit/jYRzA/src/equations.jl:38

~(::Expression, !Matched::Number) at /home/user/.julia/packages/ModelingToolkit/jYRzA/src/equations.jl:39

~(::Number, !Matched::Expression) at /home/user/.julia/packages/ModelingToolkit/jYRzA/src/equations.jl:40

Stacktrace:

[1] top-level scope at In[17]:7

I have two questions:

  • I assume this is a limitation of ModelingToolkit, but maybe there is an alternative way to define the PDEs with NeuralPDE.jl as a workaround? (A sketch of one such workaround follows this list.)
  • After a workaround or a proper fix in ModelingToolkit, will NeuralPDE.jl still require additional work to support a complex matrix PDE?
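
A sketch of the component-wise workaround mentioned above (hypothetical variable names, using the current θ-free symbolic syntax and the Dxx/Dyy definitions from the README example): expand the matrix equation into one scalar PDE per entry, since PDESystem accepts systems of equations. For the right-hand side -sin(pi*x)*sin(pi*y)*[0 1; 0 1], only the (1,2) and (2,2) entries are nonzero:

@variables u12(..) u22(..)   # the nonzero entries of the 2×2 matrix
eqs = [Dxx(u12(x, y)) + Dyy(u12(x, y)) ~ -sin(pi * x) * sin(pi * y),
       Dxx(u22(x, y)) + Dyy(u22(x, y)) ~ -sin(pi * x) * sin(pi * y)]
# the zero entries satisfy the same equation with a zero right-hand side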

Tests sometimes fail

It seems tests for this package sometimes fail spuriously:

Current loss is: 2.334692289916828 (tracked)
Current loss is: 1.9999000160857503 (tracked)
Current loss is: 1.8632511709577806 (tracked)
Current loss is: 2.0558248507233574 (tracked)
Current loss is: 1.6692429166434624 (tracked)
Current loss is: 1.9107033348359952 (tracked)
Hamilton Jacobi Bellman Equation
error_l2 = 20.24199802266439

NNPDEHan: Test Failed at /home/pkgeval/.julia/packages/NeuralNetDiffEq/BKkcz/test/NNPDEHan_tests.jl:211
  Expression: error_l2 < 0.2
   Evaluated: 20.24199802266439 < 0.2
Stacktrace:
 [1] top-level scope at /home/pkgeval/.julia/packages/NeuralNetDiffEq/BKkcz/test/NNPDEHan_tests.jl:211
 [2] include at ./boot.jl:328 [inlined]
 [3] include_relative(::Module, ::String) at ./loading.jl:1105
 [4] include at ./Base.jl:31 [inlined]
 [5] include(::String) at /home/pkgeval/.julia/packages/SafeTestsets/A83XK/src/SafeTestsets.jl:23
 [6] top-level scope at /home/pkgeval/.julia/packages/NeuralNetDiffEq/BKkcz/test/runtests.jl:5
 [7] top-level scope at /workspace/srcdir/julia/usr/share/julia/stdlib/v1.3/Test/src/Test.jl:1107
 [8] top-level scope at /home/pkgeval/.julia/packages/NeuralNetDiffEq/BKkcz/test/runtests.jl:5

from https://github.com/maleadt/BasePkgEvalReports/blob/df660562b2dd1fc99b44e1ea582934701d8d421f/8639e57_vs_46ce4d7/logs/NeuralNetDiffEq/1.3.1-pre-8639e5771d.log#L2063-L2083

It would be good if that could be fixed because it is contributing some noise to the automatic tests we run on packages for different Julia versions.

Push nnode further

We should get some examples showing bigger neural networks training things like Lotka-Volterra. Might need GPUs.

Finite differences performance

Going back to this thread:
https://discourse.julialang.org/t/diffeqflux-autodifferentiating-inside-loss-function/47203

I've been cannibalizing some code from NeuralPDE to perform numerical differentiation inside a loss function for scientific machine learning:

function get_derivative_simplified(order, phi, x, θ)
    _derivative = (x, θ, order) -> begin
        if order == 1
            return (phi(x + cbrt(eps(Float32)), θ)[1] -
                    phi(x - cbrt(eps(Float32)), θ)[1]) / (2 * cbrt(eps(Float32)))
        else
            return (_derivative(x + cbrt(eps(Float32)), θ, order - 1) -
                    _derivative(x - cbrt(eps(Float32)), θ, order - 1)) / (2 * cbrt(eps(Float32)))
        end
    end
    return _derivative(x, θ, order)
end

And then in the loss function:

function loss(p)    
    df(x)=get_derivative_simplified(1,NN,x,p)
    du_NN = [df(x) for x in collect(xsteps)]
    mean(broadcast(-, du_true, du_NN))
end

This approach gave a good result, but made the whole training take approximately 30% longer than just explicitly writing the finite differences inside the loss function:

function loss(p)
    f(x) = NN(x, p)[1]
    df(x) = (f(x + cbrt(eps(Float32))) - f(x - cbrt(eps(Float32)))) / (2 * cbrt(eps(Float32)))
    du_NN = [df(x) for x in collect(xsteps)]
    mean(broadcast(-, du_true, du_NN))
end

For my actual project (where I also need second-order derivatives), the difference was more like 50%. My guess is that this is due to the recursion and conditionals in the get_derivative_simplified function.

@ChrisRackauckas comment:

"It’ll depend on the neural network size, with larger NNs seeing less overhead. This overhead can be completely removed though by using an iterator form for the loss function instead of building a bunch of arrays, and making sure everything is concretely typed. I’ll talk with @KirillZubov about it."

Pricing options using NeuralNetDiffEq

Hi, guys!
I'm trying to implement option pricing on some abstract underlying asset, using an ANN approximation for each time period with different initial states x0 (the price of the underlying asset). The option has only one underlying asset, so it can also be priced with the Black-Scholes method, which gives me an expected value to compare against.

In terms of NeuralNetDiffEq, the following TerminalPDEProblem is under consideration:

d = 1
tspan = (0.0f0, 1.0f0)
dt = 0.1
m = 70 # number of trajectories (batch size)
r = 0.05f0
sigma = 0.04f0
strikePrice = 28
g(X) = max(sum(X .- strikePrice), 0)
f(X, u, σᵀ∇u, p, t) = r .* u
μ_f(X, p, t) = r .* X
σ_f(X, p, t) = Diagonal(0.04f0* X) #Matrix d x d
prob = TerminalPDEProblem(g, f, μ_f, σ_f, x0, tspan)
hls = 100 #hide layer size
opt = Flux.ADAM(0.0001)

I have a collection of prices for the underlying asset as follows (a collection of initial states x0):
assetCostPerDayCollection = [36.6f0, 37.2f0, 37.5f0, 39.2f0, 40.2f0, 40.0f0, 39.7f0]

And a collection of expected option prices as follows:
blackSholesOptionCostPerDay = [ 9.965574f0, 10.565577f0, 10.865576f0, 12.565577f0, 13.565577f0, 13.365576f0, 13.065577f0 ]

Then I'm running a solution for each x0:

opt = Flux.ADAM(0.0001)
u0 = Flux.Chain(Dense(d, hls, relu),
        Dense(hls, hls, relu),
        Dense(hls, 1))
σᵀ∇u = Flux.Chain(
        Dense(d + 1, hls, relu),
        Dense(hls, hls, relu),
        # Dense(hls, hls, relu), # if turned on we get the same result, but with more time
        Dense(hls, d),
    )
pdealg = NNPDENS(u0, σᵀ∇u, opt = opt)
evaluationTime = @elapsed answer = solve(
        prob,
        pdealg,
        verbose = true,
        maxiters = 300,
        trajectories = m,
        alg = EM(),
        dt = dt,
        pabstol = 1f-6,
    )

After running solve for the problem for each x0 as written above, I obtain the following results for each period of time:
Option prices: Any[11.175135f0, 11.756061f0, 12.0082035f0, 13.761024f0, 15.023212f0,14.787739f0, 14.440373f0]

The solution gives me the following plot: [plot of ANN-predicted vs. expected option prices]

Actually, it looks like the ANN approximation is close to the expected values in terms of the trend of the price, but it has a strange bias of approximately 1.2846 in each time frame.
Do you have any idea why this may be happening? Is that something I should expect, or is there a problem in my code?

What I tried in order to make this situation better:

  • increase iterations number (up to 1000)
  • increase batch size (up to 100)
  • decrease learning rate (down to 0.00001)
  • change discretization step (down to 0.001)
  • change ANN architecture (reducing the number of layers, changing activation function on layers)

None of these seems to make a significant change in how the solution looks in the end.
I was not able to obtain any major improvement.

Any help is appreciated.
Sincerely, Alex

third derivative

How can we define a third derivative (2D/3D) in NeuralPDE?

Thanks for your support in advance!

Best regards!
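
(For reference, higher-order and mixed derivatives compose in the symbolic interface, just like the Differential(x)^2 in the README example. A small sketch:)

using ModelingToolkit
@parameters x y
@variables u(..)
Dxxx = Differential(x)^3    # ∂³/∂x³
Dx = Differential(x)
Dyy = Differential(y)^2
eq = Dxxx(u(x, y)) + Dx(Dyy(u(x, y))) ~ 0   # a third and a mixed third derivative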

Slight refactoring of PINNs

It differs slightly from the intended interface. It would be good to get this corrected before doing more development. The discretize function is intended to produce a common numerical problem so that alternative numerical tools can be used. Here, the separation of concerns is broken: there is only one solver available, because discretize creates an uncommon NNPDEProblem. What it should instead do is build the loss function in the discretization and then use that to build an OptimizationProblem. That optimization problem could then be accessed, used, and solved by users of GalacticOptim.jl or any other package, since it would just have prob.f and prob.u0 for the optimization.

@KirillZubov
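
(Note: this is now the design shown in the README example above, where discretize(pde_system, discretization) returns an OptimizationProblem that is solved with Optimization.solve.)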

Make a real documentation

Now that DiffEqFlux, ModelingToolkit, Surrogates, etc. all have one, NeuralNetDiffEq needs to get one.

Remaining documentation changes:

  • Split the tutorials into separate ones on the sidebar. Have "copy-pastable code" on top, explained code below.
  • Tutorials for the neural network ODE solver by using the tests
  • More explanation in the deep BSDE methods as to where they come from and what they solve
  • Separate out reference documentation
  • For the reference documentation, convert some of the stuff from the README into docstrings, and use the docstrings in the documentation

#86 for more details. I think that's what's needed to get it released, and then we can keep going from there!

Improving derivatives parser

At the moment, the parser only supports expressions in which the derivatives apply directly to the neural network function.

@parameters x y θ
@variables u(..)
@derivatives Dx'~x
@derivatives Dy'~y
@derivatives Dyy''~y
expr1 = Dyy(u(x,y,θ))
#or
expr2 = Dx(Dy(u(x,y,θ)))

The goal is to support derivative expressions of a wider type, with inner dependence on functions and constants.

Sample:

@variables  f(x,y) 
C = 1

expr1 = Dyy(f(x,y))
expr2 = Dyy(C*u(x,y,θ))
expr3 = Dyy(C*f(x,y)*u(x,y,θ))
expr4 = Dx(C*f(x,y)*u(x,y,θ) * Dy(u(x,y,θ)))
...

or

@variables x, r, t
@variables cᵉ(..)
@variables Dᵉ_eff(cᵉ(t,x)) 
@variables κ_eff(cᵉ(t,x)), σ_eff(x)
@derivatives Dx'~x
@derivatives Dr'~r
Υ = 1.0
I(t) = 1.0  

cᵉ_expr = Dx(Dᵉ_eff * Dx(cᵉ(t,x)))
cˢ_expr = Dr(r^2 * Dˢ_eff * Dr(cˢ(t,x)))
Φᵉ_expr1 = Dx(κ_eff * Dx(Φᵉ(t,x)))
Φᵉ_expr2 = Dx(Υ * κ_eff * Dx(log(cᵉ(t,x))))

Method error

https://github.com/SciML/NeuralNetDiffEq.jl/blob/80045e0c6332e1d3a160b7b42d7390c457abaa31/test/NNODE_tests.jl#L16

opt = BFGS()
sol = solve(prob, NeuralNetDiffEq.NNODE(chain,opt), dt=1/20f0, verbose = true,
            abstol=1e-10, maxiters = 200)
MethodError: no method matching apply!(::BFGS{LineSearches.InitialStatic{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Nothing,Flat}, ::Array{Float32,2}, ::Array{Float64,2})
Closest candidates are:
  apply!(!Matched::Descent, ::Any, ::Any) at C:\Users\TeAmp0is0N\.julia\packages\Flux\Fj3bt\src\optimise\optimisers.jl:40
  apply!(!Matched::Momentum, ::Any, ::Any) at C:\Users\TeAmp0is0N\.julia\packages\Flux\Fj3bt\src\optimise\optimisers.jl:70
  apply!(!Matched::Nesterov, ::Any, ::Any) at C:\Users\TeAmp0is0N\.julia\packages\Flux\Fj3bt\src\optimise\optimisers.jl:103
  ...

Stacktrace:
 [1] update!(::BFGS{LineSearches.InitialStatic{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Nothing,Flat}, ::Array{Float32,2}, ::Array{Float64,2}) at C:\Users\TeAmp0is0N\.julia\packages\Flux\Fj3bt\src\optimise\train.jl:25
 [2] update!(::BFGS{LineSearches.InitialStatic{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Nothing,Flat}, ::Zygote.Params, ::Zygote.Grads) at C:\Users\TeAmp0is0N\.julia\packages\Flux\Fj3bt\src\optimise\train.jl:31
 [3] macro expansion at C:\Users\TeAmp0is0N\.julia\packages\Flux\Fj3bt\src\optimise\train.jl:92 [inlined]
 [4] macro expansion at C:\Users\TeAmp0is0N\.julia\packages\Juno\tLMZd\src\progress.jl:134 [inlined]
 [5] train!(::NeuralNetDiffEq.var"#4#13"{ODEFunction{false,var"#3#4",LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},DiffEqBase.NullParameters,StepRangeLen{Float32,Float64,Float64},NeuralNetDiffEq.var"#phi#11"{Float32,Tuple{Float32,Float32},Chain{Tuple{Dense{typeof(σ),Array{Float32,2},Array{Float32,1}},Dense{typeof(identity),Array{Float32,2},Array{Float32,1}}}}}}, ::Zygote.Params, ::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{}}}, ::BFGS{LineSearches.InitialStatic{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Nothing,Flat}; cb::NeuralNetDiffEq.var"#9#18"{Float64,Bool}) at C:\Users\TeAmp0is0N\.julia\packages\Flux\Fj3bt\src\optimise\train.jl:81
 [6] solve(::ODEProblem{Float32,Tuple{Float32,Float32},false,DiffEqBase.NullParameters,ODEFunction{false,var"#3#4",LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem}, ::NNODE{Chain{Tuple{Dense{typeof(σ),Array{Float32,2},Array{Float32,1}},Dense{typeof(identity),Array{Float32,2},Array{Float32,1}}}},BFGS{LineSearches.InitialStatic{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Nothing,Flat}}; dt::Float32, timeseries_errors::Bool, save_everystep::Bool, adaptive::Bool, abstol::Float64, verbose::Bool, maxiters::Int64) at C:\Users\TeAmp0is0N\.julia\packages\NeuralNetDiffEq\BNep9\src\ode_solve.jl:55
 [7] top-level scope at In[6]:2

Run solve on vectors in NNODE_tests.jl

Hi there,

I just tried the test cases in NNODE_tests.jl and the illustrative codes work perfectly.
However, when I tried a real row vector (as in an MNIST perceptron), it gives a dimension mismatch error from Zygote.

The codes are:

linear = (u, p, t) -> @. t - u*t
tspan = (0.0f0, 1.0f0)
u0 = [0.0f0 0.0f0]
prob = ODEProblem(linear, u0 ,tspan)
chain = Flux.Chain(Dense(1, 5, σ), Dense(5, 1))

opt = Flux.ADAM(0.1, (0.9, 0.95))
sol = solve(prob, NeuralNetDiffEq.NNODE(chain, opt), dt=1/50f0, verbose=true, abstol=1e-10, maxiters=10)

And the errors are:

DimensionMismatch("dimensions must match")

Stacktrace:
 [1] promote_shape at ./indices.jl:159 [inlined]
 [2] promote_shape at ./indices.jl:145 [inlined]
 [3] +(::Array{Float32,2}, ::Array{Float32,1}) at ./arraymath.jl:45
 [4] adjoint at /home/tianbai/.julia/packages/Zygote/KNUTW/src/lib/array.jl:773 [inlined]
 [5] _pullback(::Zygote.Context, ::typeof(+), ::Array{Float32,2}, ::Array{Float32,1}) at /home/tianbai/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:47
 [6] #6 at /home/tianbai/.julia/packages/NeuralNetDiffEq/skSLw/src/ode_solve.jl:63 [inlined]
 [7] _pullback(::Zygote.Context, ::NeuralNetDiffEq.var"#6#17"{Array{Float32,2},Tuple{Float32,Float32},Flux.var"#12#14"{Chain{Tuple{Dense{typeof(σ),Array{Float32,2},Array{Float32,1}},Dense{typeof(identity),Array{Float32,2},Array{Float32,1}}}}}}, ::Float32, ::Array{Float32,1}) at /home/tianbai/.julia/packages/Zygote/KNUTW/src/compiler/interface2.jl:0
 [8] _pullback(::Zygote.Context, ::NeuralNetDiffEq.var"#9#20", ::Float32, ::Array{Float32,1}) at /home/tianbai/.julia/packages/NeuralNetDiffEq/skSLw/src/ode_solve.jl:70
 [9] _pullback(::Zygote.Context, ::NeuralNetDiffEq.var"#inner_loss#21"{ODEFunction{false,var"#63#64",LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},DiffEqBase.NullParameters}, ::Float32, ::Array{Float32,1}) at /home/tianbai/.julia/packages/NeuralNetDiffEq/skSLw/src/ode_solve.jl:74
 [10] #10 at ./none:0 [inlined]
...

Instead of using a loop to iterate over every scalar in the vector, I wonder if there is a smarter way to handle this?

Thx : )

Fail to run the example

I just followed the tutorial with a fresh install of Julia 1.5.2 (macOS) and added NeuralPDE.

After running the example, I came across an error.

The error log is:

ERROR: LoadError: UndefVarError: Flux not defined
Stacktrace:
 [1] top-level scope at xxx/example.jl:22
 [2] include(::Function, ::Module, ::String) at ./Base.jl:380
 [3] include(::Module, ::String) at ./Base.jl:368
 [4] exec_options(::Base.JLOptions) at ./client.jl:296
 [5] _start() at ./client.jl:506
in expression starting at xxx/example.jl:22


Investigate physics-informed neural network (PINN) training strategies

#46 shows that the basic strategy of training on a fixed training grid doesn't work so well even on a 1-dimensional PINN (an ODE) if the nonlinearity is strong enough. To improve the training process, we should look at changing the strategies a bit. Here are some things that come to mind:

  • Random sampling in the domain, making a stochastic loss function to prevent local minima
  • Random minibatching
  • Starting the training with a small time interval, and successively growing it (user side)
  • Growing quasi-Monte Carlo sampling (https://github.com/JuliaDiffEq/QuasiMonteCarlo.jl) (done #189)

A new type should be added to the alg to allow dispatching on strategy.
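
(Follow-up: these now exist as dispatchable training-strategy types passed to the discretizer. A hedged sketch of current usage, assuming chain is defined as in the README example:)

discretization = PhysicsInformedNN(chain, GridTraining(0.1))        # fixed grid
discretization = PhysicsInformedNN(chain, StochasticTraining(128))  # random sampling
discretization = PhysicsInformedNN(chain, QuasiRandomTraining(100)) # quasi-Monte Carlo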

Paper Translations

Lagaris, Isaac E., Aristidis Likas, and Dimitrios I. Fotiadis. "Artificial neural networks for solving ordinary and partial differential equations." IEEE Transactions on Neural Networks 9, no. 5 (1998): 987-1000.

Just to log some details;

$$N(x)=\sum_{i=1}^{H} v_i \,\sigma\!\left(\sum_{j=1}^{N} w_{ij} x_j + u_i\right) + k$$

So in the network defined:

  • $v_i$ = elements of P[3]
  • $w_{ij}$ = elements of P[1]
  • $u_i$ = elements of P[2]
  • $k$ = the element of P[4]
  • P[4] is not in the paper, but doesn't affect the derivative

PINNs solver applied to existing problem tests

@KirillZubov and @ashutosh-b-b The PINN solver is a major step forward in terms of readability of the PDE problem investigated. It would be extremely useful to apply/merge it to the existing test problems in NeuralNetDiffEq. Also, in the test problems, now that the backward Kolmogorov equation is up and running, the Fokker-Planck (FP) equation is an obvious addition, and then the Mean Field Game, which is a system of HJB and FP equations.

X.data

I am trying to use NeuralNetDiffEq to solve a PDE.

I am not sure what X.data means when defining the volatility function σ
For instance, in the first Black-Scholes-Barenblatt example, we have

σ(X,p,t) = Diagonal(sigma*X.data) #Matrix d x d

When I try

σ(X,p,t) = Diagonal(sigma*X) #Matrix d x d

I get the following error

ERROR: MethodError: no method matching Float32(::Tracker.TrackedReal{Float32})
Closest candidates are:
  Float32(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:194
  Float32(::T<:Number) where T<:Number at boot.jl:718
  Float32(::Int8) at float.jl:60
  ...
Stacktrace:
 [1] convert(::Type{Float32}, ::Tracker.TrackedReal{Float32}) at ./number.jl:7
 [2] setindex!(::Array{Float32,1}, ::Tracker.TrackedReal{Float32}, ::Int64) at ./array.jl:766
 [3] copyto! at ./multidimensional.jl:488 [inlined]
 [4] Array{Float32,1}(::TrackedArray{…,Array{Float32,1}}) at ./array.jl:482
 [5] sparse at /Users/sabae/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.2/SparseArrays/src/sparsematrix.jl:733 [inlined]
 [6] (::getfield(SparseArrays, Symbol("##52#53")))(::Diagonal{Float32,TrackedArray{…,Array{Float32,1}}}) at /Users/sabae/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.2/SparseArrays/src/sparsevector.jl:1050
 [7] map at ./tuple.jl:140 [inlined]
 [8] vcat(::Diagonal{Float32,TrackedArray{…,Array{Float32,1}}}, ::Adjoint{Float32,Array{Float32,1}}) at /Users/sabae/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.2/SparseArrays/src/sparsevector.jl:1050
 [9] _forward at /Users/Matthieu/.julia/packages/Tracker/SAr25/src/lib/array.jl:191 [inlined]
 [10] #track#1 at /Users/Matthieu/.julia/packages/Tracker/SAr25/src/Tracker.jl:51 [inlined]
 [11] track at /Users/Matthieu/.julia/packages/Tracker/SAr25/src/Tracker.jl:51 [inlined]
 [12] vcat(::Diagonal{Float32,TrackedArray{…,Array{Float32,1}}}, ::TrackedArray{…,Adjoint{Float32,Array{Float32,1}}}) at /Users/Matthieu/.julia/packages/Tracker/SAr25/src/lib/array.jl:180
 [13] (::getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}})(::TrackedArray{…,Array{Float32,1}}, ::DiffEqBase.NullParameters, ::Float32) at /Users/Matthieu/.julia/packages/NeuralNetDiffEq/BKkcz/src/pde_solve_ns.jl:45
 [14] perform_step!(::StochasticDiffEq.SDEIntegrator{EM{true},false,TrackedArray{…,Array{Float32,1}},Tracker.TrackedReal{Float32},Float32,DiffEqBase.NullParameters,Tracker.TrackedReal{Float32},Float32,Tracker.TrackedReal{Float32},DiffEqNoiseProcess.NoiseProcess{Tracker.TrackedReal{Float32},2,Float32,TrackedArray{…,Array{Float32,1}},Nothing,Nothing,typeof(DiffEqNoiseProcess.WHITE_NOISE_DIST),typeof(DiffEqNoiseProcess.WHITE_NOISE_BRIDGE),false,DataStructures.Stack{Tuple{Float32,TrackedArray{…,Array{Float32,1}},Nothing}},ResettableStacks.ResettableStack{Tuple{Float32,TrackedArray{…,Array{Float32,1}},Nothing},false},DiffEqNoiseProcess.RSWM{:RSwM1,Float64},RandomNumbers.Xorshifts.Xoroshiro128Plus},TrackedArray{…,Array{Float32,1}},RODESolution{Tracker.TrackedReal{Float32},2,Array{TrackedArray{…,Array{Float32,1}},1},Nothing,Nothing,Array{Float32,1},DiffEqNoiseProcess.NoiseProcess{Tracker.TrackedReal{Float32},2,Float32,TrackedArray{…,Array{Float32,1}},Nothing,Nothing,typeof(DiffEqNoiseProcess.WHITE_NOISE_DIST),typeof(DiffEqNoiseProcess.WHITE_NOISE_BRIDGE),false,DataStructures.Stack{Tuple{Float32,TrackedArray{…,Array{Float32,1}},Nothing}},ResettableStacks.ResettableStack{Tuple{Float32,TrackedArray{…,Array{Float32,1}},Nothing},false},DiffEqNoiseProcess.RSWM{:RSwM1,Float64},RandomNumbers.Xorshifts.Xoroshiro128Plus},SDEProblem{TrackedArray{…,Array{Float32,1}},Tuple{Float32,Float32},false,DiffEqBase.NullParameters,Nothing,SDEFunction{false,getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},Nothing,TrackedArray{…,Array{Float32,2}}},EM{true},StochasticDiffEq.LinearInterpolationData{Array{TrackedArray{…,Array{Float32,1}},1},Array{Float32,1}},DiffEqBase.DEStats},StochasticDiffEq.EMConstantCache,SDEFunction{false,getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, 
Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},StochasticDiffEq.SDEOptions{Float32,Float32,typeof(DiffEqBase.ODE_DEFAULT_NORM),CallbackSet{Tuple{},Tuple{}},typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN),typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE),typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK),DataStructures.BinaryHeap{Float32,DataStructures.LessThan},Nothing,Nothing,Int64,Tracker.TrackedReal{Float32},Tracker.TrackedReal{Float32},Tracker.TrackedReal{Float32},Array{Float32,1},Array{Float32,1},Array{Float32,1}},Nothing,Tracker.TrackedReal{Float32},Nothing}, ::StochasticDiffEq.EMConstantCache, ::Function) at /Users/Matthieu/.julia/packages/StochasticDiffEq/3EqDI/src/perform_step/low_order.jl:13
 [15] perform_step! at /Users/Matthieu/.julia/packages/StochasticDiffEq/3EqDI/src/perform_step/low_order.jl:2 [inlined]
 [16] solve!(::StochasticDiffEq.SDEIntegrator{EM{true},false,TrackedArray{…,Array{Float32,1}},Tracker.TrackedReal{Float32},Float32,DiffEqBase.NullParameters,Tracker.TrackedReal{Float32},Float32,Tracker.TrackedReal{Float32},DiffEqNoiseProcess.NoiseProcess{Tracker.TrackedReal{Float32},2,Float32,TrackedArray{…,Array{Float32,1}},Nothing,Nothing,typeof(DiffEqNoiseProcess.WHITE_NOISE_DIST),typeof(DiffEqNoiseProcess.WHITE_NOISE_BRIDGE),false,DataStructures.Stack{Tuple{Float32,TrackedArray{…,Array{Float32,1}},Nothing}},ResettableStacks.ResettableStack{Tuple{Float32,TrackedArray{…,Array{Float32,1}},Nothing},false},DiffEqNoiseProcess.RSWM{:RSwM1,Float64},RandomNumbers.Xorshifts.Xoroshiro128Plus},TrackedArray{…,Array{Float32,1}},RODESolution{Tracker.TrackedReal{Float32},2,Array{TrackedArray{…,Array{Float32,1}},1},Nothing,Nothing,Array{Float32,1},DiffEqNoiseProcess.NoiseProcess{Tracker.TrackedReal{Float32},2,Float32,TrackedArray{…,Array{Float32,1}},Nothing,Nothing,typeof(DiffEqNoiseProcess.WHITE_NOISE_DIST),typeof(DiffEqNoiseProcess.WHITE_NOISE_BRIDGE),false,DataStructures.Stack{Tuple{Float32,TrackedArray{…,Array{Float32,1}},Nothing}},ResettableStacks.ResettableStack{Tuple{Float32,TrackedArray{…,Array{Float32,1}},Nothing},false},DiffEqNoiseProcess.RSWM{:RSwM1,Float64},RandomNumbers.Xorshifts.Xoroshiro128Plus},SDEProblem{TrackedArray{…,Array{Float32,1}},Tuple{Float32,Float32},false,DiffEqBase.NullParameters,Nothing,SDEFunction{false,getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},Nothing,TrackedArray{…,Array{Float32,2}}},EM{true},StochasticDiffEq.LinearInterpolationData{Array{TrackedArray{…,Array{Float32,1}},1},Array{Float32,1}},DiffEqBase.DEStats},StochasticDiffEq.EMConstantCache,SDEFunction{false,getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},StochasticDiffEq.SDEOptions{Float32,Float32,typeof(DiffEqBase.ODE_DEFAULT_NORM),CallbackSet{Tuple{},Tuple{}},typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN),typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE),typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK),DataStructures.BinaryHeap{Float32,DataStructures.LessThan},Nothing,Nothing,Int64,Tracker.TrackedReal{Float32},Tracker.TrackedReal{Float32},Tracker.TrackedReal{Float32},Array{Float32,1},Array{Float32,1},Array{Float32,1}},Nothing,Tracker.TrackedReal{Float32},Nothing}) at /Users/Matthieu/.julia/packages/StochasticDiffEq/3EqDI/src/solve.jl:401
 [17] #__solve#39(::Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}}, ::typeof(DiffEqBase.__solve), ::SDEProblem{TrackedArray{…,Array{Float32,1}},Tuple{Float32,Float32},false,DiffEqBase.NullParameters,Nothing,SDEFunction{false,getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},Nothing,TrackedArray{…,Array{Float32,2}}}, ::EM{true}, ::Array{Any,1}, ::Array{Any,1}, ::Nothing, ::Type{Val{true}}) at /Users/Matthieu/.julia/packages/StochasticDiffEq/3EqDI/src/solve.jl:7
 [18] #__solve at ./none:0 [inlined] (repeats 5 times)
 [19] #solve#386(::Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}}, ::typeof(solve), ::SDEProblem{TrackedArray{…,Array{Float32,1}},Tuple{Float32,Float32},false,DiffEqBase.NullParameters,Nothing,SDEFunction{false,getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},Nothing,TrackedArray{…,Array{Float32,2}}}, ::EM{true}) at /Users/Matthieu/.julia/packages/DiffEqBase/pqp0B/src/solve.jl:39
 [20] (::getfield(DiffEqBase, Symbol("#kw##solve")))(::NamedTuple{(:dt,),Tuple{Float64}}, ::typeof(solve), ::SDEProblem{TrackedArray{…,Array{Float32,1}},Tuple{Float32,Float32},false,DiffEqBase.NullParameters,Nothing,SDEFunction{false,getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},Nothing,TrackedArray{…,Array{Float32,2}}}, ::EM{true}) at ./none:0
 [21] (::getfield(NeuralNetDiffEq, Symbol("##37#45")){Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}},Tuple{EM{true}}})(::Int64) at /Users/Matthieu/.julia/packages/NeuralNetDiffEq/BKkcz/src/pde_solve_ns.jl:52
 [22] iterate at ./generator.jl:47 [inlined]
 [23] _collect(::UnitRange{Int64}, ::Base.Generator{UnitRange{Int64},getfield(NeuralNetDiffEq, Symbol("##37#45")){Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}},Tuple{EM{true}}}}, ::Base.EltypeUnknown, ::Base.HasShape{1}) at ./array.jl:619
 [24] collect_similar at ./array.jl:548 [inlined]
 [25] map at ./abstractarray.jl:2073 [inlined]
 [26] (::getfield(NeuralNetDiffEq, Symbol("##neural_sde#36#44")){Int64,Int64})(::Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}}, ::getfield(NeuralNetDiffEq, Symbol("#neural_sde#43")){getfield(NeuralNetDiffEq, Symbol("##neural_sde#36#44")){Int64,Int64}}, ::TrackedArray{…,Array{Float32,1}}, ::Function, ::Function, ::Tuple{Float32,Float32}, ::EM{true}) at /Users/Matthieu/.julia/packages/NeuralNetDiffEq/BKkcz/src/pde_solve_ns.jl:51
 [27] (::getfield(NeuralNetDiffEq, Symbol("#kw##neural_sde#43")))(::NamedTuple{(:dt,),Tuple{Float64}}, ::getfield(NeuralNetDiffEq, Symbol("#neural_sde#43")){getfield(NeuralNetDiffEq, Symbol("##neural_sde#36#44")){Int64,Int64}}, ::TrackedArray{…,Array{Float32,1}}, ::Function, ::Function, ::Tuple{Float32,Float32}, ::EM{true}) at ./none:0
 [28] (::getfield(NeuralNetDiffEq, Symbol("##38#46")){EM{true},Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}},Tuple{Float32,Float32},getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}}})(::TrackedArray{…,Array{Float32,1}}) at /Users/Matthieu/.julia/packages/NeuralNetDiffEq/BKkcz/src/pde_solve_ns.jl:57
 [29] (::getfield(NeuralNetDiffEq, Symbol("#predict_n_sde#47")){Array{Float32,1},Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},getfield(NeuralNetDiffEq, Symbol("##38#46")){EM{true},Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}},Tuple{Float32,Float32},getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}}}})() at /Users/Matthieu/.julia/packages/NeuralNetDiffEq/BKkcz/src/pde_solve_ns.jl:62
 [30] (::getfield(NeuralNetDiffEq, Symbol("#loss_n_sde#48")){typeof(g),getfield(NeuralNetDiffEq, Symbol("#predict_n_sde#47")){Array{Float32,1},Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},getfield(NeuralNetDiffEq, Symbol("##38#46")){EM{true},Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}},Tuple{Float32,Float32},getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}}}}})() at /Users/Matthieu/.julia/packages/NeuralNetDiffEq/BKkcz/src/pde_solve_ns.jl:66
 [31] #15 at /Users/Matthieu/.julia/packages/Flux/qXNjB/src/optimise/train.jl:72 [inlined]
 [32] gradient_(::getfield(Flux.Optimise, Symbol("##15#21")){getfield(NeuralNetDiffEq, Symbol("#loss_n_sde#48")){typeof(g),getfield(NeuralNetDiffEq, Symbol("#predict_n_sde#47")){Array{Float32,1},Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},getfield(NeuralNetDiffEq, Symbol("##38#46")){EM{true},Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}},Tuple{Float32,Float32},getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}}}}},Tuple{}}, ::Tracker.Params) at /Users/Matthieu/.julia/packages/Tracker/SAr25/src/back.jl:97
 [33] #gradient#24(::Bool, ::typeof(Tracker.gradient), ::Function, ::Tracker.Params) at /Users/Matthieu/.julia/packages/Tracker/SAr25/src/back.jl:164
 [34] gradient at /Users/Matthieu/.julia/packages/Tracker/SAr25/src/back.jl:164 [inlined]
 [35] macro expansion at /Users/Matthieu/.julia/packages/Flux/qXNjB/src/optimise/train.jl:71 [inlined]
 [36] macro expansion at /Users/Matthieu/.julia/packages/Juno/oLB1d/src/progress.jl:134 [inlined]
 [37] #train!#12(::getfield(NeuralNetDiffEq, Symbol("##40#50")){Bool,Float32,Bool,Array{Float32,1},Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},getfield(NeuralNetDiffEq, Symbol("#loss_n_sde#48")){typeof(g),getfield(NeuralNetDiffEq, Symbol("#predict_n_sde#47")){Array{Float32,1},Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},getfield(NeuralNetDiffEq, Symbol("##38#46")){EM{true},Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}},Tuple{Float32,Float32},getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}}}}},Array{Float32,1}}, ::typeof(Flux.Optimise.train!), ::Function, ::Tracker.Params, ::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{}}}, ::ADAM) at /Users/Matthieu/.julia/packages/Flux/qXNjB/src/optimise/train.jl:69
 [38] (::getfield(Flux.Optimise, Symbol("#kw##train!")))(::NamedTuple{(:cb,),Tuple{getfield(NeuralNetDiffEq, Symbol("##40#50")){Bool,Float32,Bool,Array{Float32,1},Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},getfield(NeuralNetDiffEq, Symbol("#loss_n_sde#48")){typeof(g),getfield(NeuralNetDiffEq, Symbol("#predict_n_sde#47")){Array{Float32,1},Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},getfield(NeuralNetDiffEq, Symbol("##38#46")){EM{true},Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}},Tuple{Float32,Float32},getfield(NeuralNetDiffEq, Symbol("#F#41")){typeof(f),typeof(μ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}},getfield(NeuralNetDiffEq, Symbol("#G#42")){typeof(σ),Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}}}}}},Array{Float32,1}}}}, ::typeof(Flux.Optimise.train!), ::Function, ::Tracker.Params, ::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{}}}, ::ADAM) at ./none:0
 [39] #solve#35(::Bool, ::Int64, ::Int64, ::EM{true}, ::Float32, ::Bool, ::Base.Iterators.Pairs{Symbol,Float64,Tuple{Symbol},NamedTuple{(:dt,),Tuple{Float64}}}, ::typeof(solve), ::TerminalPDEProblem{typeof(g),typeof(f),typeof(μ),typeof(σ),Array{Float32,1},Float32,Nothing}, ::NNPDENS{Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},ADAM}) at /Users/Matthieu/.julia/packages/NeuralNetDiffEq/BKkcz/src/pde_solve_ns.jl:78
 [40] (::getfield(DiffEqBase, Symbol("#kw##solve")))(::NamedTuple{(:verbose, :maxiters, :trajectories, :alg, :dt, :pabstol),Tuple{Bool,Int64,Int64,EM{true},Float64,Float32}}, ::typeof(solve), ::TerminalPDEProblem{typeof(g),typeof(f),typeof(μ),typeof(σ),Array{Float32,1},Float32,Nothing}, ::NNPDENS{Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},Chain{Tuple{Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(relu),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}},Dense{typeof(identity),TrackedArray{…,Array{Float32,2}},TrackedArray{…,Array{Float32,1}}}}},ADAM}) at ./none:0
 [41] top-level scope at REPL[51]:1
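
For reference, frames [39] and [40] show the originating call: solve on a TerminalPDEProblem with the NNPDENS algorithm and the keyword arguments (verbose, maxiters, trajectories, alg, dt, pabstol). The sketch below reconstructs the kind of script that produces a trace like this; the coefficient functions (g, f, μ, σ), the dimension d, the hidden-layer size, and all numeric values are placeholder assumptions inferred from the type signatures, not taken from the original report.

using NeuralNetDiffEq, Flux, StochasticDiffEq, LinearAlgebra

d = 100                                  # spatial dimension (assumed)
X0 = fill(0.0f0, d)                      # Float32 initial state, matching Array{Float32,1} in the trace
tspan = (0.0f0, 1.0f0)                   # Float32 time span, matching Tuple{Float32,Float32}

g(X) = sum(X .^ 2)                       # terminal condition (placeholder)
f(X, u, σᵀ∇u, p, t) = 0.0f0              # nonlinear term (placeholder)
μ(X, p, t) = zero(X)                     # drift (placeholder)
σ(X, p, t) = Diagonal(ones(Float32, d))  # diffusion (placeholder)

prob = TerminalPDEProblem(g, f, μ, σ, X0, tspan)

hls = 10 + d                             # hidden-layer size (assumed)
# A three-layer u0 network and a four-layer σᵀ∇u network, matching the
# Chain layer counts visible in the trace signatures.
u0 = Flux.Chain(Dense(d, hls, relu),
                Dense(hls, hls, relu),
                Dense(hls, 1))
σᵀ∇u = Flux.Chain(Dense(d + 1, hls, relu),
                  Dense(hls, hls, relu),
                  Dense(hls, hls, relu),
                  Dense(hls, d))
pdealg = NNPDENS(u0, σᵀ∇u, opt = ADAM(0.01))

# Keyword set reconstructed from frame [40]. Note the Float64 dt against the
# Float32 tspan, which is what the NamedTuple{(:dt,),Tuple{Float64}} entries
# throughout the trace reflect.
ans = solve(prob, pdealg, verbose = true, maxiters = 150, trajectories = 100,
            alg = EM(), dt = 0.2, pabstol = 1.0f-6)

Because the networks are Tracker-tracked Flux chains (TrackedArray parameters), the Euler-Maruyama integration itself runs on tracked types, which is why every frame above carries Tracker.TrackedReal and TrackedArray in its signature.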
