juliaintervals / IntervalLinearAlgebra.jl
Linear algebra done rigorously
License: MIT License
Mention in the contribution guidelines, under the "contributing to documentation" section, that if you add a new page to the documentation you also need to update the website structure in docs/make.jl
Given a square scalar linear system Ax = b, implement verified_linear_solver(A, b)
(feel free to suggest other names), which gives a rigorous bound on the solution x. Use the epsilon inflation method described in section 5.5 of the Horáček paper, which is algorithm 10.7 in Rump's paper "Verification methods: Rigorous results using floating-point arithmetic".
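As a rough illustration of the epsilon-inflation idea, here is a sketch using a toy interval type without directed rounding, so it is NOT rigorous; all names (`Iv`, `isint`, `inflate`, `verified_linear_solver`) are hypothetical, not the package API. It follows the usual residual iteration: precondition with an approximate inverse R, then inflate the candidate enclosure until it lands in the interior of the previous one.

```julia
using LinearAlgebra

# Toy interval type; a real implementation would use IntervalArithmetic.jl
# with outward rounding, which is what makes the result rigorous.
struct Iv
    lo::Float64
    hi::Float64
end
Iv(x::Real) = Iv(float(x), float(x))

Base.:+(a::Iv, b::Iv) = Iv(a.lo + b.lo, a.hi + b.hi)
function Base.:*(a::Iv, b::Iv)
    p = (a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi)
    return Iv(minimum(p), maximum(p))
end

# strict interior check: both inequalities must be strict
isint(x::Iv, y::Iv) = y.lo < x.lo && x.hi < y.hi
# epsilon inflation: widen relatively by ε plus a tiny absolute amount
inflate(x::Iv; ε=0.1) = Iv(x.lo - ε * (x.hi - x.lo) - 1e-12,
                           x.hi + ε * (x.hi - x.lo) + 1e-12)

# interval matrix-vector product
ivmul(A, x) = [sum(A[i, j] * x[j] for j in axes(A, 2)) for i in axes(A, 1)]

function verified_linear_solver(A::Matrix{Float64}, b::Vector{Float64}; maxiter=20)
    x̃ = A \ b                 # approximate solution
    R = inv(A)                 # approximate inverse, used as preconditioner
    z = Iv.(R * (b - A * x̃))  # enclosure of the residual R*(b - A*x̃)
    C = Iv.(I - R * A)         # iteration matrix I - R*A
    X = z
    for _ in 1:maxiter
        Y = inflate.(X)        # ε-inflation of the candidate enclosure
        X = z .+ ivmul(C, Y)
        # success: X in the interior of Y implies x ∈ x̃ + X
        all(isint.(X, Y)) && return [Iv(x̃[i] + X[i].lo, x̃[i] + X[i].hi)
                                     for i in eachindex(x̃)]
    end
    error("no verified enclosure found")
end
```

With IntervalArithmetic.jl one would use proper outward rounding and `isinterior`; the iteration structure stays the same.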
Time to start thinking about the first release. This is a meta-issue listing things to do before the release:
DOCUMENTATION
FEATURES
OTHER
I set up the deploy key and the secret called DOCUMENTER_KEY, but it doesn't work. I'll go through the instructions again and see if I missed something.
Currently it is called as oettli(A, b, X, vars; tol=0.01), but the list of variables vars is used only internally to construct the separators, so the user should not need to pass it manually: the function should generate the list of variables internally. It should do something similar to @polyvar x[1:10] in DynamicPolynomials.jl, but probably simpler and cheaper.
This is small and low-priority (but also easy to do): it would be nice to standardise the references in the references page to follow a common standard (e.g. APA).
Normal interval linear systems (the main functionality of this package currently) are of limited use. In most (probably all) real applications, you have parametric interval linear systems (PILS, like the beer 🍺 ), that is, a system of the form
A(p)x = b(p)
where p
is a vector of intervals (ranges for each parameter). Treating PILS like normal linear systems gives poor results because of the dependency problem, etc. The next big milestone of this package is to take parametric interval linear systems seriously and produce a state-of-the-art toolset for them. This would greatly increase the uniqueness and value of the package. This meta-issue collects different scenarios and references related to PILS.
Let us first focus on symmetric and linear PILS. In a linear PILS we have
A(p) = A0 + A1*p1 + A2*p2 + ... + An*pn
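A minimal sketch of how such an affine parametric matrix could be stored and evaluated (names are hypothetical, not the package API; kept generic over the matrix type so the same code would work with Float64 or Interval entries):

```julia
# Hypothetical representation of a linear parametric matrix
# A(p) = A0 + A1*p1 + ... + An*pn.
struct AffineParametricMatrix{MT<:AbstractMatrix}
    A0::MT
    coeffs::Vector{MT}   # A1, ..., An
end

# evaluate A(p) for a concrete (or interval) parameter vector p
function (A::AffineParametricMatrix)(p::AbstractVector)
    length(p) == length(A.coeffs) ||
        throw(DimensionMismatch("expected $(length(A.coeffs)) parameters"))
    M = copy(A.A0)
    for (Ai, pi) in zip(A.coeffs, p)
        M .+= pi .* Ai   # accumulate Ai * pi
    end
    return M
end

A = AffineParametricMatrix([1.0 0.0; 0.0 1.0], [[0.0 1.0; 1.0 0.0]])
A([2.0])   # == [1.0 2.0; 2.0 1.0]
```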
e.g.
solve(A::AbstractMatrix{T}, b::AbstractVector{T}) where {T<:Real} = solve(Interval.(A), Interval.(b))
The first GSoC blog post is a good example to put in the documentation. I think in its current form the blog post should go under Explanations; however, I can adapt a subset of it into a tutorial as well.
Currently the CI is testing on all main OSes (Windows, Linux, macOS) and on both 64- and 32-bit architectures (no 32-bit for macOS), for the latest stable and nightly, giving in total 3 × 2 × 2 − 1 = 11 checks plus the documentation.
I wonder whether this is necessary or whether it is overkill. Maybe test just the 3 OSes on 64 bits (latest stable and nightly)?
The function list_orthants at the moment returns a vector of vectors, hence it allocates and is not very efficient. It should return an iterator, similar to what DiagDirection in LazySets.jl does (cc @mforets). Since LazySets.jl is an optional dependency used only in LinearOettliPrager, it would be good to have a more efficient version of list_orthants in IntervalLinearAlgebra.jl itself. I'm not sure what would be a smart way to go; maybe define
struct Orthants
n::Int
end
and implement the iterators interface for that
The task is to extend perf/benchmarks.jl
with more Ax = b
instances. Maybe check Horacek's thesis.
Let us first focus on symmetric and linear PILS. In a linear PILS we have
A(p) = A0 + A1*p1 + A2*p2 + ... + An*pn
A few alternatives off the top of my head:
struct IntervalAffineArray{T, MT<:AbstractVecOrMat{T}}
coeffs::Vector{MT}
params::Vector{Interval{T}}
end
using Symbolics.jl
struct IntervalParametricArray{T, MT <: AbstractVecOrMat{Expr}}
A::MT
params::Vector{Interval{T}}
end
I guess this would generalize to more complex expressions, but it would require analyzing the expressions in the matrix to figure out the case, i.e. for the linear case we would need a function
is_affine_multivariate_polynomial(ex::Expr)::Bool
e.g.
is_affine_multivariate_polynomial(x^2+ y) == false
is_affine_multivariate_polynomial(x + y + z + 1) == true
is_affine_multivariate_polynomial(x*y + x + y) == false
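One possible (incomplete) sketch of such a check on plain Julia `Expr`s, assuming "affine" means a sum of terms, each with at most one non-constant factor; the traversal below is an illustration, not a full implementation:

```julia
# Rough affinity check: an expression is affine if it is a constant, a bare
# variable, a sum/difference of affine terms, or a product with at most one
# non-constant factor (which must itself be affine).
is_affine_multivariate_polynomial(x::Number) = true
is_affine_multivariate_polynomial(x::Symbol) = true   # a bare variable
function is_affine_multivariate_polynomial(ex::Expr)
    ex.head == :call || return false
    op = ex.args[1]
    args = ex.args[2:end]
    if op == :+ || op == :-
        return all(is_affine_multivariate_polynomial, args)
    elseif op == :*
        others = filter(a -> !(a isa Number), args)
        return length(others) <= 1 &&
               all(is_affine_multivariate_polynomial, others)
    else
        return false   # ^, sin, etc. break affinity
    end
end

is_affine_multivariate_polynomial(:(x + y + z + 1))  # true
is_affine_multivariate_polynomial(:(x^2 + y))        # false
is_affine_multivariate_polynomial(:(x*y + x + y))    # false
```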
Maybe, if we are planning to focus on linear PILS at the moment, it could be good to go for option 1 (unless that is worse than option 2 anyway).
in both cases, the solve interface could be
AbstractParametricIntervalLinearSolver <: AbstractIntervalLinearSolver
solve(Ap, bp, method::AbstractParametricIntervalLinearSolver) = ....
The approach to compute the spectral decomposition of interval matrices described here could be worth implementing; it would be interesting to study how large the intervals can get before the method fails.
see here: https://juliaintervals.github.io/IntervalLinearAlgebra.jl/dev/
I think the problem is in how Documenter manipulates the @raw html
block in the documentation
Implement the algorithms described in this paper to find a bound on the eigenvalue set of interval matrices.
I'm thinking of an interface like:
function eigvals(A::AbstractMatrix{IntervalOrComplexInterval}, method=someDefaultMethod)
.....
end
function eigvals(A::SymmetricMatrix{IntervalOrComplexInterval}, method=someDefaultMethodForSymmetricMatrices)
....
end
For a start, the methods could be:
In a first version, it could be non-rigorous (solving real symmetric eigenvalue problems with eigen). Later, it could have both rigorous and non-rigorous versions (after #68 is merged).
At the moment Gaussian elimination supports only mutable arrays (because it calls rref). It would be nice to also support static arrays; probably the static matrix should be converted to a mutable one before calling rref.
A = @SMatrix [2..4 -1..2;-2..1 2..4]
b = @SVector [-2..2, -2..2]
solve(A, b, GaussianElimination)
Implement algorithms to classify different families of interval matrices, for a start at least the following:
DL (for PR): 23.6
It's a minor thing, but adding custom printing to the algorithm structures makes it easier to know what the parameters represent. Example here.
Please also note that the algorithm parameters should also be mentioned in their respective docstrings.
? Jacobi
search: Jacobi
Solves the linear system using the Jacobi method. See section 5.7.4 of [1] (page 52)
It is not clear what is [1], but we can add a hyperlink to the reference once the docs are setup.
function solve(A, b, solver::AbstractIterativeSolver, X=enclose(A, b), precondition=_default_precondition(A, b))
......
A minimal working example of the package functions applied to a linear static FEM problem. I offer myself as contributor!
... I think that should be mentioned somewhere in the README / docs.
Similar to the tutorial for linear systems.
I think it would be a good idea to ship a version of OpenBLAS with the CONSISTENT_FPCSR=1
flag enabled together with the library as an Artifact, or compile during installation.
The main reason is that the system (or Julia) OpenBLAS distribution may not have this flag enabled.
While Julia may be started with only one thread, OpenBLAS may (unless explicitly told otherwise) run with multiple threads enabled and have a different rounding mode on each thread.
Currently, a fix that allows consistent rounding is to start Julia with
OPENBLAS_NUM_THREADS=1
but this affects performance.
See
Julia Threads + BLAS Threads
Using directed rounding in Octave/Matlab
At least for the time being, should we reexport IntervalArithmetic.jl? This way it would be enough to do
julia> using IntervalLinearAlgebra
instead of
julia> using IntervalLinearAlgebra, IntervalArithmetic
Add the Hertz method to compute the exact hull of the eigenvalues of symmetric interval matrices. This has exponential complexity; the alternative is the Rohn method (currently used), which is faster but can return a strictly larger enclosure of the eigenvalues.
struct Hertz end
struct Rohn end
function eigenbox(A::Symmetric, ::Type{Hertz}) end
function eigenbox(A::Symmetric, ::Type{Rohn}) end
function eigenbox(A, method)
# construct symmetric interval eigenvalue problem
eigenbox(As, method)
end
The current type system for the solvers (including the changes in the latest PRs)
LinearSolver
I think there's room for improvement; here's a proposal:
AbstractIntervalLinearSolver
Krawczyk
(exported in IntervalRootFinding.jl). What do you think? Feel free to comment with other proposals or suggestions.
Multiplying an interval matrix by a floating-point matrix uses the generic method in LinearAlgebra.jl instead of the Rump multiplication in the package:
julia> A = [1..2 3..4;5..6 7..8]
2×2 Matrix{Interval{Float64}}:
[1, 2] [3, 4]
[5, 6] [7, 8]
julia> B = [1 2;3 4]
2×2 Matrix{Int64}:
1 2
3 4
julia> @which A*B
*(A::AbstractMatrix{T} where T, B::AbstractMatrix{T} where T) in LinearAlgebra at C:\Users\lucaa\AppData\Local\Programs\Julia-1.6.1\share\julia\stdlib\v1.6\LinearAlgebra\src\matmul.jl:151
It should dispatch to the method in IntervalLinearAlgebra.jl. It should be fixable by just adding
@eval *(A::AbstractMatrix{Interval{T}} where T, B::AbstractMatrix{T} where T) =
*($type, A, B)
@eval *(A::AbstractMatrix{T} where T, B::AbstractMatrix{Interval{T}} where T) =
*($type, A, B)
in the definition of set_multiplication_mode
Julia Version 1.6.1
Commit 6aaedecc44 (2021-04-23 05:59 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-11.0.1 (ORCJIT, skylake)
Environment:
JULIA_EDITOR = code.cmd -g
JULIA_NUM_THREADS =
julia> A = @SMatrix [2..4 -2..1; -1..2 2..4]
julia> b = @SVector [-2..2, -2..2];
julia> Am = Matrix(A)
julia> bm = Vector(b);
julia> solve(A, b, Jacobi())
solve(A, b, Jacobi()) = Interval{Float64}[[-14.0001, 14.0001], [-14.0001, 14.0001]]
julia> solve(Am, bm, Jacobi())
solve(Am, bm, Jacobi()) = Interval{Float64}[[-45.7501, 45.7501], [-45.0795, 45.0795]]
Interface to compute the determinant of the interval matrix, implementing some of the algorithms described here: https://arxiv.org/abs/1809.03736
Instead of having all tests in one file, they should be split into several files. Ideally, the structure of test should reflect the structure of src.
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml
to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix
on this issue.
I'll open a PR within a few hours, please be patient!
Now that we have different preconditioning mechanisms and solvers, it's time to polish the generic solve interface.
oettli: I guess that should also be a solver like the others (#41).
function _default_precondition(A, method::AbstractSolver)
# ....
end
# specific default preconditioning methods
function _default_precondition(A, method::GaussElimination)
if is_sdd_matrix(A) || is_M_matrix(A)
return NoPrecondition()
else
return InverseMidpoint()
end
end
function _default_solver()
GaussElimination()
end
function solve(A, b,
solver::AbstractSolver=_default_solver(),
precondition::AbstractPrecondition=_default_precondition(A, solver))
# precondition
Ap, bp = precondition(A, b)
# compute solution
return solver(Ap, bp)
end
I'm new to interval computing. I'm a PhD student at Imperial College London. I have a suggestion, and I'm wondering how you handle this.
Certain operations on matrices are uncomputable. For instance, eigendecomposition of even "nice" matrices like symmetric ones is not computable. What "uncomputable" means in practice is that forwards numerical stability is not attainable. The notion of backwards numerical stability may still be attainable and can suffice for certain use cases. Unfortunately, interval arithmetic in the naive sense cannot handle backwards stability, but this limitation can be bypassed.
My suggestion concerns eigendecomposition. The API I propose has three functions:
eigendecomp : Mat(Complex) -> (Mat(Complex), DiagMat(Complex))
inv_eigendecomp : (Mat(Complex), DiagMat(Complex)) -> Mat(Complex)
lift : (Complex -> Complex) -> ((Mat(Complex), DiagMat(Complex)) -> (Mat(Complex), DiagMat(Complex)))
eigendecomp(M) should produce a pair of matrices (P, D) such that the matrix P is exact (not interval-valued!) and D is interval-valued. P and D should be chosen so that P D P^-1 contains M as snugly as possible. The actual eigenvalues and eigenvectors of M don't matter, because the eigenvectors of M in particular are not computable.
I also suggest that the purpose of eigendecomposition is to lift functions of type Complex -> Complex
to complex matrices. I suspect that the eigenvalues and eigenvectors are perhaps not entirely relevant, in practice.
At the moment, using complex interval matrices falls back to the "default" multiplication in LinearAlgebra.jl instead of using Rump fast multiplication, which should be added for complex interval matrices.
julia> typeof(A)
Matrix{Complex{Interval{Float64}}} (alias for Array{Complex{Interval{Float64}}, 2})
julia> @which A * A
*(A::AbstractMatrix{T} where T, B::AbstractMatrix{T} where T) in LinearAlgebra at C:\Users\lucaa\AppData\Local\Programs\Julia-1.6.1\share\julia\stdlib\v1.6\LinearAlgebra\src\matmul.jl:151
julia> typeof(P)
Matrix{Interval{Float64}} (alias for Array{Interval{Float64}, 2})
julia> typeof(D)
Diagonal{ComplexF64, Vector{ComplexF64}}
julia> P * D
ERROR: MethodError: no method matching *(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::Matrix{Interval{Float64}})
Closest candidates are:
*(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::AbstractArray{Interval{T}, 2}, ::AbstractArray{Interval{T}, 2}) where T<:Real at c:\Users\lucaa\projects\IntervalLinearAlgebra\src\multiplication.jl:46
*(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::AbstractArray{Interval{T}, 2}, ::AbstractMatrix{T}) where T<:Real at c:\Users\lucaa\projects\IntervalLinearAlgebra\src\multiplication.jl:101
*(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::AbstractMatrix{T}, ::AbstractArray{Interval{T}, 2}) where T<:Real at c:\Users\lucaa\projects\IntervalLinearAlgebra\src\multiplication.jl:76
...
Stacktrace:
[1] *(::IntervalLinearAlgebra.MultiplicationType{:fast}, ::Matrix{Interval{Float64}}, ::Diagonal{ComplexF64, Vector{ComplexF64}})
@ Base .\operators.jl:560
[2] *(A::Matrix{Interval{Float64}}, B::Diagonal{ComplexF64, Vector{ComplexF64}})
@ IntervalLinearAlgebra c:\Users\lucaa\projects\IntervalLinearAlgebra\src\multiplication.jl:37
[3] top-level scope
@ REPL[86]:1
The product should use the multiplication algorithm defined in the package.
I am starting to think I should define an IntervalMatrix type here and define the operations on it; this would solve this issue, #72, and other possible issues that I haven't noticed yet.
For example, sometimes preconditioning by Diagonal(Ac)^-1
gives better results than Ac^-1
. It might be good to offer different preconditioning mechanisms.
I don't know whether there are some criteria or heuristic techniques to choose the preconditioning method for a given system.
The documentation build freezes on Ubuntu and on Arch. The build folder is not filled with the files; only image files are copied to the assets and applications folders.
(@v1.7) pkg> activate .
Activating project at `~/work/IntervalLinearAlgebra.jl`
julia> include("docs/make.jl")
[ Info: generating markdown page from `~/work/IntervalLinearAlgebra.jl/docs/literate/applications/FEM_example.jl`
[ Info: writing result to `~/work/IntervalLinearAlgebra.jl/docs/src/applications/FEM_example.md`
[ Info: SetupBuildDirectory: setting up build directory.
[ Info: Doctest: running doctests.
[ Info: ExpandTemplates: expanding markdown templates.
No documentation HTML files are generated in the build folder.
(IntervalLinearAlgebra) pkg> st
Project IntervalLinearAlgebra v0.1.4
Status `~/work/IntervalLinearAlgebra.jl/Project.toml`
[38540f10] CommonSolve v0.2.0
[d1acc4aa] IntervalArithmetic v0.20.3
[189a3867] Reexport v1.2.2
[ae029012] Requires v1.3.0
[90137ffa] StaticArrays v1.3.3
[37e2e46d] LinearAlgebra
julia> versioninfo()
Julia Version 1.7.0
Commit 3bf9d17731 (2021-11-30 12:12 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
I am currently procrastinating on my thesis by trying to update IntervalRootFinding.jl to the newest change in IntervalArithmetic.
It appears that some basic LinearAlgebra operations do not work. In particular, det and inv of interval matrices use forbidden operations (isfinite and <). Would this package be a good place to have those? IntervalRootFinding would then depend on IntervalLinearAlgebra (which would make sense; we already have our own version of Gauss elimination for some reason).
DL (for PR): 27.06
Is there a way to solve the case where $ A \in \mathbb{R}^{m \times n}, m > n $ is not a square matrix (the number of rows is greater than the number of columns)?
using IntervalLinearAlgebra, LazySets, Plots
A = [2..4 -1..1;-1..1 2..4; -0.5..0.5 1..2]
b = [-2..2; -1..1; -0.1..0.1]
Xenclose = solve(A, b)
polytopes = solve(A, b, LinearOettliPrager())
Is there any relevant literature to refer to?
At the moment each docstring has its own style.
Add references.md under docs and use @refs in docstrings (#40)
!!! note in the homepage is not rendered correctly (#50)
docs stable badge (#40)
Verified floating-point algorithms (such as epsilon inflation) need to check that a vector is in the interior of another. Currently this is done by all(x .⊂ y), but that only checks for a proper subset. Use all(isinterior.(x, y)) instead.
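To illustrate the difference with a minimal stand-in interval type (not the IntervalArithmetic.jl implementation): a proper subset may still share an endpoint with the enclosing interval, while the interior check requires strict inequalities on both sides.

```julia
# Minimal interval stand-in to contrast ⊊ (proper subset) with isinterior.
struct Iv
    lo::Float64
    hi::Float64
end

# proper subset: contained but not equal — endpoints may still touch
proper_subset(x::Iv, y::Iv) =
    y.lo <= x.lo && x.hi <= y.hi && (x.lo, x.hi) != (y.lo, y.hi)

# interior: strictly contained on both sides, as ε-inflation requires
isinterior(x::Iv, y::Iv) = y.lo < x.lo && x.hi < y.hi

x, y = Iv(0.0, 1.0), Iv(0.0, 2.0)
proper_subset(x, y)  # true: x ⊊ y although the lower bounds coincide
isinterior(x, y)     # false: the shared endpoint is not in the interior
```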
Julia Version 1.6.1
Commit 6aaedecc44 (2021-04-23 05:59 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-11.0.1 (ORCJIT, skylake)
Environment:
JULIA_EDITOR = code.cmd -g
See related issue in IntervalArithmetic.jl here