juliareach / ClosedLoopReachability.jl
Reachability analysis for closed-loop control systems in Julia
Home Page: https://juliareach.github.io/ClosedLoopReachability.jl/
License: MIT License
Rename all .jld2 files such that they match the corresponding .mat files.
Add a README.md in all models that contains: where the original model (.mat) was taken.

using NeuralNetworkAnalysis
controller = read_nnet(@modelpath("Single-Pendulum", "controller_single_pendulum.nnet"));
const m = 0.5;
const L = 0.5;
const c = 0.0;
const g = 1.0;
const gL = g/L;
const mL = 1/(m*L^2);
function single_pendulum!(dx, x, params, t)
dx[1] = x[2]
dx[2] = gL * sin(x[1]) + mL*(x[3] - c*x[2])
dx[3] = zero(x[1])
return dx
end
# X0 = Hyperrectangle([1.1, 0.1], [0.1, 0.1]); # full initial states
X0 = Singleton([1.2, 0.2]); # subset of initial states
U0 = ZeroSet(1);
ivp = @ivp(x' = single_pendulum!(x), dim: 3, x(0) ∈ X0 × U0);
vars_idx = Dict(:state_vars=>1:2, :control_vars=>3);
period = 0.05;
T = 0.1;
prob = ControlledPlant(ivp, controller, vars_idx, period);
alg = TMJets(abs_tol=1e-10, orderT=10, orderQ=3);
alg_nn = BoxSolver();
@time sol = NeuralNetworkAnalysis.solve(prob, T=T, alg_nn=alg_nn, alg=alg);
solz = overapproximate(sol, Zonotope);
using DifferentialEquations
sim = simulate(prob, T=T; trajectories=1)
using Plots
vars = (0, 1)
plot()
plot!(solz, vars=vars, lab="")
xl = xlims()
yl = ylims()
for simulation in trajectories(sim)
for piece in simulation
plot!(piece, vars=vars, color=:red, lab="")
end
end
xlims!(xl)
ylims!(yl)
plot!()
Constructing matrices in @taylorize is not supported. Hence the model does not compile.
Toy example:
julia> @taylorize function test!(dx, x, p, t)
A = [1.0 0.0; 0.0 1.0] # not supported
dx = A * x
end
ERROR: LoadError: MethodError: no method matching parse!(::Espresso.ExGraph, ::Espresso.ExH{:vcat})
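A workaround sketch (not part of the package; test_scalar! is a hypothetical name) is to avoid constructing the matrix inside the function and instead inline the product A * x entry by entry as scalar operations, which @taylorize can process:

```julia
using ReachabilityAnalysis  # provides @taylorize

# Hypothetical workaround sketch: inline the constant matrix
# A = [1.0 0.0; 0.0 1.0] as scalar updates instead of constructing it.
@taylorize function test_scalar!(dx, x, p, t)
    dx[1] = 1.0 * x[1] + 0.0 * x[2]
    dx[2] = 0.0 * x[1] + 1.0 * x[2]
    return dx
end
```

This keeps the right-hand side within the subset of expressions that the @taylorize macro can rewrite.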
I suggest that we remove this method (later). It is probably a leftover from an early version.
Originally posted by @schillic in #176 (comment)
We should just check whether this method is used in any model.
Some models normalize the output of the controller with a translation. This amount should be part of the model description (i.e., ControlledPlant). It needs to be incorporated in solve and simulate.
ref: https://github.com/Verisig/verisig
... there are some issues with the LaTeX output (see https://juliareach.github.io/NeuralNetworkAnalysis.jl/dev/models/ACC/#Specifications-1).
References to the original papers are missing in the Examples.
Ref Algorithm 1 in http://www.verivital.com/research/tran2019fm.pdf
Currently we hard-code an empty label in plot_simulation!, and if a label is passed, it is just ignored. We probably only want to add the label for the first trajectory/piece because there are many pieces involved.
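A possible fix, shown here only as a sketch (label is a hypothetical user-supplied argument), is to attach the label to the first plotted piece and suppress it for all later pieces:

```julia
# Sketch: label only the first trajectory piece; later pieces get an
# empty label, so the legend shows a single entry per simulation run.
first_piece = true
for simulation in trajectories(sim)
    for piece in simulation
        plot!(piece, vars=vars, color=:red, lab=(first_piece ? label : ""))
        first_piece = false
    end
end
```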
I think the contents of mat2NV.jl can be refactored to a new file src/utils.jl. The function(s) should be added to the docs as well.
In the current master I'm getting:
$ julia --color=yes docs/make.jl
┌ Warning: ignoring ACC
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
┌ Warning: ignoring Non-linear Cart-Pole
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
┌ Warning: ignoring Sherlock-Benchmark-10
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
┌ Warning: ignoring Sherlock-Benchmark-7
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
┌ Warning: ignoring Sherlock-Benchmark-9
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
[ Info: generating plain script file from `~/.julia/dev/NeuralNetworkAnalysis/models/mat2NV.jl`
[ Info: writing result to `~/.julia/dev/NeuralNetworkAnalysis/docs/src/models/mat2NV.jl`
[ Info: generating markdown page from `~/.julia/dev/NeuralNetworkAnalysis/models/mat2NV.jl`
[ Info: writing result to `~/.julia/dev/NeuralNetworkAnalysis/docs/src/models/mat2NV.md`
[ Info: generating notebook from `~/.julia/dev/NeuralNetworkAnalysis/models/mat2NV.jl`
[ Info: executing notebook `mat2NV.ipynb`
[ Info: writing result to `~/.julia/dev/NeuralNetworkAnalysis/docs/src/models/mat2NV.ipynb`
[ Info: SetupBuildDirectory: setting up build directory.
ERROR: LoadError: 'models/Sherlock-Benchmark-7.md' is not an existing page!
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] walk_navpages(::Bool, ::String, ::String, ::Array{Any,1}, ::Documenter.Documents.NavNode, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:176
 [3] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:191 [inlined]
 [4] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:193 [inlined]
 [5] (::Documenter.Builder.var"#1#2"{Documenter.Documents.NavNode,Documenter.Documents.Document})(::Pair{String,String}) at ./none:0
 [6] iterate at ./generator.jl:47 [inlined]
 [7] collect(::Base.Generator{Array{Any,1},Documenter.Builder.var"#1#2"{Documenter.Documents.NavNode,Documenter.Documents.Document}}) at ./array.jl:665
 [8] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:194 [inlined]
 [9] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:181 [inlined]
 [10] walk_navpages(::String, ::Array{Any,1}, ::Nothing, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:190
 [11] walk_navpages(::Pair{String,Any}, ::Nothing, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:193
 [12] (::Documenter.Builder.var"#1#2"{Nothing,Documenter.Documents.Document})(::Pair{String,Any}) at ./none:0
 [13] iterate at ./generator.jl:47 [inlined]
 [14] collect_to!(::Array{Documenter.Documents.NavNode,1}, ::Base.Generator{Array{Any,1},Documenter.Builder.var"#1#2"{Nothing,Documenter.Documents.Document}}, ::Int64, ::Int64) at ./array.jl:711
 [15] collect_to_with_first! at ./array.jl:689 [inlined]
 [16] collect(::Base.Generator{Array{Any,1},Documenter.Builder.var"#1#2"{Nothing,Documenter.Documents.Document}}) at ./array.jl:670
 [17] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:194 [inlined]
 [18] runner(::Type{Documenter.Builder.SetupBuildDirectory}, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:146
 [19] dispatch(::Type{Documenter.Builder.DocumentPipeline}, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Utilities/Selectors.jl:167
 [20] #2 at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Documenter.jl:237 [inlined]
 [21] cd(::Documenter.var"#2#3"{Documenter.Documents.Document}, ::String) at ./file.jl:104
 [22] #makedocs#1 at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Documenter.jl:236 [inlined]
 [23] top-level scope at /home/mforets/.julia/dev/NeuralNetworkAnalysis/docs/make.jl:9
 [24] include(::Module, ::String) at ./Base.jl:377
 [25] exec_options(::Base.JLOptions) at ./client.jl:288
 [26] _start() at ./client.jl:484
in expression starting at /home/mforets/.julia/dev/NeuralNetworkAnalysis/docs/make.jl:9
Continuation of #112 for the 2D case. It should be straightforward, using LazySets.area.
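For reference, a minimal sketch of how LazySets.area behaves on a simple 2D box (assuming the current LazySets API; the expected value follows from the side lengths):

```julia
using LazySets

# Area of a 2D box with side lengths 2 and 3
H = Hyperrectangle(low=[0.0, 0.0], high=[2.0, 3.0])
a = area(H)  # should equal 2 * 3 = 6
```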
VertexSolver currently does not output an overapproximation in general (thanks @mforets!).
We need to add the intersection with the positive orthant. And then it is not sufficient to do this for the whole network but we need to repeat this step for each layer.
So the full operation per layer is this:
X' := intersection(convex_hull(X, VPolytope(rectify(vertices_list(X)))), rectify(box_approximation(X)))
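In LazySets terms, the per-layer operation above could look roughly as follows (an untested sketch; rectify_layer is a hypothetical helper, and the lazy set types are used to sidestep concrete-operation requirements):

```julia
using LazySets

# Untested sketch of the per-layer overapproximation:
# X' := (convex hull of X and its rectified vertices) ∩ rectify(box(X))
function rectify_layer(X)
    Vr = [max.(v, 0.0) for v in vertices_list(X)]  # rectified vertices
    CH = ConvexHull(X, VPolytope(Vr))              # lazy convex hull
    B = rectify(box_approximation(X))              # rectified box approximation
    return Intersection(CH, B)                     # lazy intersection
end
```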
See e.g. here.
Moreover, we can also add buttons to directly launch the example in MyBinder (see button on top here: https://fredrikekre.github.io/Literate.jl/v2/generated/example/).
The reason for having added the overapproximate is related to #89: Projection results in a LinearMap{CartesianProduct}, which is much less efficient to handle. We should at least use a better concrete projection here because we know by construction that the projection will result in the first component.
This solver can be used as a basis to refine the output set with branch-and-prune strategies (https://github.com/Kolaru/BranchAndPrune.jl).
Originally posted by @mforets in #128 (comment)
I will soon propose a reordering of the code that will make this step unnecessary.
Originally posted by @schillic in #176 (comment)
The pattern is:

A
for i = 1:NSAMPLES
    B
    if i < NSAMPLES
        A
    end
end

Instead it can just be:

for i = 1:NSAMPLES
    A
    B
end
If P₀ and U₀ are AbstractZonotopes, we can use cartesian_product here. But that fails with a LinearAlgebra.SingularException(5) in _overapproximate_hparallelotope.
see: https://github.com/sisl/NNet
After installing the Python package onnx and downloading the GitHub package NNet, the following works on my machine:
import sys
sys.path.append('/home/mforets/Tools/NNet')
from NNet.python.nnet import *
from NNet.converters.onnx2nnet import onnx2nnet
from onnx import *
## Script showing how to run onnx2nnet
# Min and max values used to bound the inputs
inputMins = [0.0,-3.141593,-3.141593,100.0,0.0]
inputMaxes = [60760.0,3.141593,3.141593,1200.0,1200.0]
# Mean and range values for normalizing the inputs and outputs. All outputs are normalized with the same value
means = [1.9791091e+04,0.0,0.0,650.0,600.0,7.5188840201005975]
ranges = [60261.0,6.28318530718,6.28318530718,1100.0,1200.0,373.94992]
# ONNX file to convert to .nnet file
onnxFile = '/home/mforets/Tools/NNet/nnet/TestNetwork2.onnx'
# Convert the file
onnx2nnet(onnxFile, inputMins, inputMaxes, means, ranges)
The specifications are missing for the TORA model (https://juliareach.github.io/NeuralNetworkAnalysis.jl/dev/models/Sherlock-Benchmark-9/).
I was comparing the results for the SinglePendulum benchmark with Ai2 and VertexSolver(x -> overapproximate(x, Interval)). The latter gives a strict subset for the initial inputs. So the cpost starts from a strict subset (Q₀ in line 143 below). But then when plotting the solution at the time point (X at line 148 below), the result is not a subset. I plotted the projections to x1 and x2 below (blue = Ai2, orange = VertexSolver).
It is not completely clear whether that is really a bug because our zonotope overapproximation of a Taylor model may not be monotonic. But since the input has the same type and just tighter bounds, I still find that suspicious and tentatively mark this as a bug.
Currently the control inputs are modeled as state variables. This requires some boilerplate code and does not seem necessary. I suggest to change this.
The periodic control requires computing precise sets at time points. Currently solve uses the last reach set, which covers a time interval and is hence quite a coarse approximation. This is probably something to resolve in ReachabilityAnalysis. Note that the dynamics are deterministic, which we should be able to exploit.
Here is an example of the SinglePendulum model with restricted initial states. The first flowpipe (until t = 0.05) is quite precise, but the second flowpipe is already coarser. This then also leads to coarser control inputs from the neural network.
julia> X0 = Hyperrectangle([1.0, 0.1], [0.0, 0.1]);
controls 1 solve: [-0.67564, -0.543037]
controls 1 simulate: [-0.668787, -0.54751]
controls 2 solve: [-0.609227, -0.42013]
controls 2 simulate: [-0.550897, -0.458256]
solver = Ai2();
split_fun = X -> split(box_approximation(X), 4 * ones(Int, dim(X)));
merge_fun1 = X -> convert(Interval, box_approximation(X)); # takes the box around all results
alg_nn1 = SplitSolver(solver=solver, split_fun=split_fun, merge_fun=merge_fun1);
sol1 = solve(prob, T=T, alg_nn=alg_nn1, alg=alg);
merge_fun2 = X -> convert(Interval, X.array[1]); # only takes the first result
alg_nn2 = SplitSolver(solver=solver, split_fun=split_fun, merge_fun=merge_fun2);
sol2 = solve(prob, T=T, alg_nn=alg_nn2, alg=alg);
Clearly sol1 should be a superset of sol2 because the inputs (the result of merge_fun·) are a superset. But after a few steps, the results are incomparable for the TORA model. These are the outputs of the network:
sol1:
X1 = Interval([10.0874, 10.0919])
X2 = Interval([9.04627, 9.07028])
X3 = Interval([8.81513, 9.21949])
X4 = Interval([10.7785, 11.8])
X5 = Interval([9.44153, 12.6363])
sol2:
Y1 = Interval([10.0898, 10.091])
Y2 = Interval([9.0592, 9.06251])
Y3 = Interval([9.04095, 9.46076])
Y4 = Interval([10.6961, 11.1838])
Y5 = Interval([10.4556, 11.835])
see discussion here
Add a function toy_models(n, m; ivp=...) that returns a list of control problems (described below). ivp should be a default random IVP of dimension n with m control inputs.
Investigate the precision of the individual steps of the set propagation.
For affine dynamics we know how to do (almost error-free) set propagation.
EDIT: Since we currently deal with nonlinear plants only, I take that back. In #104 I create a linear model with a BlackBoxContinuousSystem
such that we can use our standard algorithm.
simulate should take a parameter to optionally include the vertices of X0. This should give a better idea about the boundaries.
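A sketch of how the starting points could be collected (initial_points is a hypothetical helper; LazySets.sample and vertices_list are assumed available, and X0 is assumed polytopic):

```julia
using LazySets

# Sketch: combine random samples from X0 with all vertices of X0,
# so that simulations also start from the extreme points of the set.
function initial_points(X0, ntraj)
    pts = sample(X0, ntraj)          # random points inside X0
    append!(pts, vertices_list(X0))  # add the vertices of X0
    return pts
end
```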