ClosedLoopReachability.jl's Issues

Rename jld2 files

  • Rename all .jld2 files such that they match the corresponding .mat files.

  • Add a README.md to every model directory that contains:

    • A link to the repository from which the controller (.mat file) was taken.
    • The commit that was used.

Mismatch with simulation

[plot attachment: pendulum2]

using NeuralNetworkAnalysis

controller = read_nnet(@modelpath("Single-Pendulum", "controller_single_pendulum.nnet"));

const m = 0.5;   # mass
const L = 0.5;   # pendulum length
const c = 0.0;   # friction coefficient
const g = 1.0;   # gravitational constant
const gL = g/L;
const mL = 1/(m*L^2);

# x[1]: angle, x[2]: angular velocity, x[3]: control input (constant between control periods)
function single_pendulum!(dx, x, params, t)
    dx[1] = x[2]
    dx[2] = gL * sin(x[1]) + mL*(x[3] - c*x[2])
    dx[3] = zero(x[1])
    return dx
end

# X0 = Hyperrectangle([1.1, 0.1], [0.1, 0.1]);  # full initial states
X0 = Singleton([1.2, 0.2]);  # subset of initial states
U0 = ZeroSet(1);
ivp = @ivp(x' = single_pendulum!(x), dim: 3, x(0) ∈ X0 × U0);
vars_idx = Dict(:state_vars=>1:2, :control_vars=>3);
period = 0.05;
T = 0.1;
prob = ControlledPlant(ivp, controller, vars_idx, period);

alg = TMJets(abs_tol=1e-10, orderT=10, orderQ=3);
alg_nn = BoxSolver();
@time sol = NeuralNetworkAnalysis.solve(prob, T=T, alg_nn=alg_nn, alg=alg);
solz = overapproximate(sol, Zonotope);

using DifferentialEquations
sim = simulate(prob, T=T; trajectories=1)

using Plots
vars = (0, 1)
plot()
plot!(solz, vars=vars, lab="")
xl = xlims()
yl = ylims()
for simulation in trajectories(sim)
    for piece in simulation
        plot!(piece, vars=vars, color=:red, lab="")
    end
end
xlims!(xl)
ylims!(yl)
plot!()

Airplane model constructs matrices in taylorize

Constructing matrices in @taylorize is not supported. Hence the model does not compile.

Toy example:

julia> @taylorize function test!(dx, x, p, t)
           A = [1.0 0.0; 0.0 1.0]  # not supported
           dx = A * x
       end
ERROR: LoadError: MethodError: no method matching parse!(::Espresso.ExGraph, ::Espresso.ExH{:vcat})
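
A possible workaround, as a sketch (not verified against the actual Airplane model): keep the matrix coefficients as scalar constants outside the function and write the dynamics element-wise, which stays within the expression subset that @taylorize can parse.

# scalar constants instead of a matrix literal
const a11 = 1.0; const a12 = 0.0;
const a21 = 0.0; const a22 = 1.0;

@taylorize function test_workaround!(dx, x, p, t)
    # element-wise form of dx = A * x
    dx[1] = a11 * x[1] + a12 * x[2]
    dx[2] = a21 * x[1] + a22 * x[2]
    return dx
end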

Add normalization of control inputs to models

Some models normalize the output of the controller with a translation. This offset should become part of the model description (i.e., of ControlledPlant) and needs to be incorporated both in solve and in simulate. A sketch follows below.
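
A minimal sketch of one way to carry this information (the wrapper type, its fields, and the evaluation function below are assumptions, not the existing API; it also assumes the network object is callable):

# hypothetical wrapper pairing a controller network with the translation of its output
struct NormalizedController{N, T<:Real}
    network::N         # the neural-network controller
    offset::Vector{T}  # translation added to the raw network output
end

# evaluating the controller applies the network and then the translation;
# both solve and simulate would call this instead of the raw network
control_output(c::NormalizedController, u) = c.network(u) .+ c.offset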

Add label for plot_simulation!

Currently we hard-code an empty label in plot_simulation!, and if a label is passed, it is simply ignored. We probably only want to attach the label to the first trajectory/piece, because many pieces are involved; see the sketch below.
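
A sketch of the intended behavior, based on the plotting loop from the simulation example above (the function name and the way the user label lab is forwarded are assumptions):

using Plots

function plot_simulation_sketch!(sim, lab; vars=(0, 1))
    first_piece = true
    for simulation in trajectories(sim)
        for piece in simulation
            # only the first piece carries the user label, so the legend shows a single entry
            plot!(piece, vars=vars, color=:red, lab=(first_piece ? lab : ""))
            first_piece = false
        end
    end
    return plot!()
end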

Refactor readnn

I think the contents of mat2NV.jl can be refactored into a new file src/utils.jl. The function(s) should also be added to the docs.

Fix autogenerated examples

On the current master I'm getting:

$ julia --color=yes docs/make.jl

┌ Warning: ignoring ACC
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
┌ Warning: ignoring Non-linear Cart-Pole
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
┌ Warning: ignoring Sherlock-Benchmark-10
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
┌ Warning: ignoring Sherlock-Benchmark-7
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
┌ Warning: ignoring Sherlock-Benchmark-9
└ @ Main ~/.julia/dev/NeuralNetworkAnalysis/docs/generate.jl:18
[ Info: generating plain script file from `~/.julia/dev/NeuralNetworkAnalysis/models/mat2NV.jl`
[ Info: writing result to `~/.julia/dev/NeuralNetworkAnalysis/docs/src/models/mat2NV.jl`
[ Info: generating markdown page from `~/.julia/dev/NeuralNetworkAnalysis/models/mat2NV.jl`
[ Info: writing result to `~/.julia/dev/NeuralNetworkAnalysis/docs/src/models/mat2NV.md`
[ Info: generating notebook from `~/.julia/dev/NeuralNetworkAnalysis/models/mat2NV.jl`
[ Info: executing notebook `mat2NV.ipynb`
[ Info: writing result to `~/.julia/dev/NeuralNetworkAnalysis/docs/src/models/mat2NV.ipynb`
[ Info: SetupBuildDirectory: setting up build directory.
ERROR: LoadError: 'models/Sherlock-Benchmark-7.md' is not an existing page!
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] walk_navpages(::Bool, ::String, ::String, ::Array{Any,1}, ::Documenter.Documents.NavNode, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:176
 [3] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:191 [inlined]
 [4] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:193 [inlined]
 [5] (::Documenter.Builder.var"#1#2"{Documenter.Documents.NavNode,Documenter.Documents.Document})(::Pair{String,String}) at ./none:0
 [6] iterate at ./generator.jl:47 [inlined]
 [7] collect(::Base.Generator{Array{Any,1},Documenter.Builder.var"#1#2"{Documenter.Documents.NavNode,Documenter.Documents.Document}}) at ./array.jl:665
 [8] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:194 [inlined]
 [9] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:181 [inlined]
 [10] walk_navpages(::String, ::Array{Any,1}, ::Nothing, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:190
 [11] walk_navpages(::Pair{String,Any}, ::Nothing, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:193
 [12] (::Documenter.Builder.var"#1#2"{Nothing,Documenter.Documents.Document})(::Pair{String,Any}) at ./none:0
 [13] iterate at ./generator.jl:47 [inlined]
 [14] collect_to!(::Array{Documenter.Documents.NavNode,1}, ::Base.Generator{Array{Any,1},Documenter.Builder.var"#1#2"{Nothing,Documenter.Documents.Document}}, ::Int64, ::Int64) at ./array.jl:711
 [15] collect_to_with_first! at ./array.jl:689 [inlined]
 [16] collect(::Base.Generator{Array{Any,1},Documenter.Builder.var"#1#2"{Nothing,Documenter.Documents.Document}}) at ./array.jl:670
 [17] walk_navpages at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:194 [inlined]
 [18] runner(::Type{Documenter.Builder.SetupBuildDirectory}, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Builder.jl:146
 [19] dispatch(::Type{Documenter.Builder.DocumentPipeline}, ::Documenter.Documents.Document) at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Utilities/Selectors.jl:167
 [20] #2 at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Documenter.jl:237 [inlined]
 [21] cd(::Documenter.var"#2#3"{Documenter.Documents.Document}, ::String) at ./file.jl:104
 [22] #makedocs#1 at /home/mforets/.julia/packages/Documenter/QQWIJ/src/Documenter.jl:236 [inlined]
 [23] top-level scope at /home/mforets/.julia/dev/NeuralNetworkAnalysis/docs/make.jl:9
 [24] include(::Module, ::String) at ./Base.jl:377
 [25] exec_options(::Base.JLOptions) at ./client.jl:288
 [26] _start() at ./client.jl:484
in expression starting at /home/mforets/.julia/dev/NeuralNetworkAnalysis/docs/make.jl:9

Make VertexSolver sound

VertexSolver currently does not output an overapproximation in general (thanks @mforets!).
We need to add the intersection with the positive orthant. Moreover, it is not sufficient to do this once for the whole network; the step has to be repeated for each layer.

So the full operation per layer is this:
X' := intersection(convex_hull(X, VPolytope(rectify(vertices_list(X)))), rectify(box_approximation(X)))
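
A sketch of this per-layer operation with LazySets (the function name is mine; the lazy ConvexHull and Intersection types are used, the box approximation is rectified by hand, and X is assumed to be a polytopic set):

using LazySets

function sound_layer_output(X)
    # polytope spanned by the rectified vertices of X
    Vrect = VPolytope([max.(v, 0.0) for v in vertices_list(X)])
    # rectified box approximation of X
    B = box_approximation(X)
    Brect = Hyperrectangle(low=max.(low(B), 0.0), high=max.(high(B), 0.0))
    # X' = CH(X, Vrect) ∩ Brect, here kept as lazy operations
    return Intersection(ConvexHull(X, Vrect), Brect)
end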

Do not overapproximate inputs in simulate

The reason for having added the overapproximation is related to #89: the projection results in a LinearMap{CartesianProduct}, which is much less efficient to handle. We should at least use a better concrete projection here, because we know by construction that the projection yields exactly the first component.
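
A sketch of such a cheaper projection (assuming the set is a binary CartesianProduct whose first factor contains exactly the state variables):

using LazySets

# instead of a generic projection (which wraps the set in a LinearMap),
# simply return the first factor of the Cartesian product
project_states(cp::CartesianProduct) = cp.X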

Conversion from onnx to nnet format

see: https://github.com/sisl/NNet


After installing the Python package onnx and downloading the GitHub package NNet, the following works on my machine:

import sys
sys.path.append('/home/mforets/Tools/NNet')
from NNet.python.nnet import *

from NNet.converters.onnx2nnet import onnx2nnet
from onnx import *

## Script showing how to run onnx2nnet
# Min and max values used to bound the inputs
inputMins  = [0.0,-3.141593,-3.141593,100.0,0.0]
inputMaxes = [60760.0,3.141593,3.141593,1200.0,1200.0]

# Mean and range values for normalizing the inputs and outputs. All outputs are normalized with the same value
means  = [1.9791091e+04,0.0,0.0,650.0,600.0,7.5188840201005975]
ranges = [60261.0,6.28318530718,6.28318530718,1100.0,1200.0,373.94992]

# ONNX file to convert to .nnet file
onnxFile = '/home/mforets/Tools/NNet/nnet/TestNetwork2.onnx'

# Convert the file
onnx2nnet(onnxFile, inputMins, inputMaxes, means, ranges)
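
Assuming onnx2nnet writes the .nnet file next to the .onnx file with the same base name, the result can then be loaded on the Julia side as in the pendulum example above:

controller = read_nnet("/home/mforets/Tools/NNet/nnet/TestNetwork2.nnet")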

Solve from a subset does not result in a subset

I was comparing the results for the SinglePendulum benchmark with Ai2 and VertexSolver(x -> overapproximate(x, Interval)). For the initial inputs, the latter gives a strict subset. So the cpost starts from a strict subset (Q₀ in line 143 below). But when plotting the solution at the time point (X in line 148 below), the result is not a subset. The projections to x1 and x2 are plotted below (blue = Ai2, orange = VertexSolver).

https://github.com/JuliaReach/NeuralNetworkAnalysis.jl/blob/122ec2278f346a64f81d04ea9200807c23662c25/src/solve.jl#L143-L148

[plot attachment: sols]

It is not completely clear whether this is really a bug, because our zonotope overapproximation of a Taylor model may not be monotonic. But since the input has the same type and just tighter bounds, I still find this suspicious and tentatively mark it as a bug.

Remove control inputs from states

Currently the control inputs are modeled as state variables. This requires some boilerplate code and does not seem necessary. I suggest changing this.

More precise reachability result at time point

Periodic control requires computing precise sets at time points. Currently solve uses the last reach set, which covers a time interval and is hence quite a coarse approximation. This is probably something to resolve in ReachabilityAnalysis. Note that the dynamics are deterministic, which we should be able to exploit.

Here is an example of the SinglePendulum model with restricted initial states. The first flowpipe (until t = 0.05) is quite precise but the second flowpipe is coarser already. This then also leads to coarser control inputs from the neural network.

julia> X0 = Hyperrectangle([1.0, 0.1], [0.0, 0.1]);

controls 1 solve: [-0.67564, -0.543037]
controls 1 simulate: [-0.668787, -0.54751]
controls 2 solve: [-0.609227, -0.42013]
controls 2 simulate: [-0.550897, -0.458256]

[plot attachments: singlependulum, singlependulum2]

Inconsistent result from input subset

# prob, T and alg are set up as in the TORA model (see below)
solver = Ai2();
split_fun = X -> split(box_approximation(X), 4 * ones(Int, dim(X)));

merge_fun1 = X -> convert(Interval, box_approximation(X));  # takes the box around all results
alg_nn1 = SplitSolver(solver=solver, split_fun=split_fun, merge_fun=merge_fun1);
sol1 = solve(prob, T=T, alg_nn=alg_nn1, alg=alg);

merge_fun2 = X -> convert(Interval, X.array[1]);  # only takes the first result
alg_nn2 = SplitSolver(solver=solver, split_fun=split_fun, merge_fun=merge_fun2);
sol2 = solve(prob, T=T, alg_nn=alg_nn2, alg=alg);

Clearly sol1 should be a superset of sol2 because its inputs (the results of merge_fun1 resp. merge_fun2) are a superset. But after a few steps the results are incomparable for the TORA model. These are the outputs of the network:

sol1:

X1 = Interval([10.0874, 10.0919])
X2 = Interval([9.04627, 9.07028])
X3 = Interval([8.81513, 9.21949])
X4 = Interval([10.7785, 11.8])
X5 = Interval([9.44153, 12.6363])

sol2:

Y1 = Interval([10.0898, 10.091])
Y2 = Interval([9.0592, 9.06251])
Y3 = Interval([9.04095, 9.46076])
Y4 = Interval([10.6961, 11.1838])
Y5 = Interval([10.4556, 11.835])

Parametric family of toy models with random dynamics

Proposal

Add a function toy_models(n, m; ivp=...) that returns a list of control problems (described below). ivp should default to a random IVP with n state variables and m control inputs. A sketch of the proposed function follows the list of control problems below.

Purpose

Investigate the precision of the individual steps of the set propagation.

For affine dynamics we know how to do (almost error-free) set propagation.
EDIT: Since we currently deal with nonlinear plants only, I take that back. In #104 I create a linear model with a BlackBoxContinuousSystem such that we can use our standard algorithm.

Control problems

  1. controller = identity
    Here we can see the influence of interrupting and re-discretizing.
  2. controller = 2 layers with a nonlinear activation function
    Here we can see the influence of a nonlinear activation function.
  3. controller = 5 layers with nonlinear activation functions
    Here we can see the influence of a somewhat realistic network.
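
A sketch of the proposed function (the helpers random_affine_ivp, identity_controller, and random_network are hypothetical; only the ControlledPlant constructor is taken from the examples above):

function toy_models(n, m; ivp=random_affine_ivp(n, m))  # random_affine_ivp is hypothetical
    controllers = [identity_controller(n, m),        # 1. identity
                   random_network(n, m, layers=2),   # 2. two layers, nonlinear activation
                   random_network(n, m, layers=5)]   # 3. five layers, nonlinear activations
    vars_idx = Dict(:state_vars => 1:n, :control_vars => n+1:n+m)
    period = 0.1  # arbitrary default sampling period
    return [ControlledPlant(ivp, c, vars_idx, period) for c in controllers]
end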
