jutho / KrylovKit.jl
Krylov methods for linear problems, eigenvalues, singular values and matrix functions
License: Other
Hi, is there a way to output all the eigenvectors corresponding to the same eigenvalue? Currently, I get
Invariant subspace of dimension 1 (up to requested tolerance `tol = 1.0e-12`), which is smaller than the number of requested eigenvalues (i.e. `howmany == 2`); setting `howmany = 1`.
Is there a way around this, or is this a limitation of iterative methods in general?
Thanks
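A possible workaround sketch (my own, not a KrylovKit feature; every name below is a hypothetical placeholder): once one eigenvector of the degenerate eigenvalue has been found, deflate it from a Hermitian map by shifting it out of the requested part of the spectrum and solve again from a fresh random start.
using KrylovKit, LinearAlgebra
# Hypothetical Hermitian matrix with a doubly degenerate largest eigenvalue
U = qr(randn(50, 50)).Q
A = Symmetric(U * Diagonal([5.0; 5.0; randn(48)]) * U')
vals1, vecs1, _ = eigsolve(A, randn(50), 1, :LR)
v1 = vecs1[1]
# Hotelling-style deflation: shift the found eigenvector far down the spectrum,
# then search for :LR again from a new random starting vector.
deflated(x) = A * x - 2 * vals1[1] * v1 * dot(v1, x)
vals2, vecs2, _ = eigsolve(deflated, randn(50), 1, :LR; ishermitian=true)
# expected: vals2[1] ≈ vals1[1], with vecs2[1] spanning the remaining direction of the degenerate eigenspace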
Consider diagonalisation of a real symmetric matrix in the following MWE:
import BandedMatrices as bm
import KrylovKit as kk
import LinearAlgebra as la
function test_krylov(a, b, N, n_j)
h = bm.BandedMatrix(bm.Zeros(2n_j + 1, 2n_j + 1), (2N, 2N))
h[bm.band(0)] .= [(2j/N)^2 + (a + b)/2 for j = -n_j:n_j]
h[bm.band(-2N)] .= h[bm.band(2N)] .= a / 4
h[bm.band(-N)] .= h[bm.band(N)] .= b / 4
kk.eigsolve(h, 14*2N, :SR; krylovdim=120) # sometimes triggers `ERROR: LinearAlgebra.LAPACKException(22)`
# H = Array(h); la.eigvals(H) # always succeeds
end
test_krylov(-2000, 3, 4, 112)
Approximately once in 20 runs, eigsolve triggers LinearAlgebra.LAPACKException(22):
ERROR: LinearAlgebra.LAPACKException(22)
Stacktrace:
[1] chklapackerror(ret::Int64)
@ LinearAlgebra.LAPACK \julia\stdlib\v1.7\LinearAlgebra\src\lapack.jl:43
[2] stegr!(dv::Vector{Float64}, ev::Vector{Float64}, Z::SubArray{Float64, 2, Matrix{Float64}, Tuple{UnitRange{Int64}, UnitRange{Int64}}, false})
@ KrylovKit \packages\KrylovKit\kWdb6\src\dense\linalg.jl:482
[3] tridiageigh!
@ \packages\KrylovKit\kWdb6\src\dense\linalg.jl:115 [inlined]
[4] eigsolve(A::BandedMatrices.BandedMatrix{Float64, Matrix{Float64}, Base.OneTo{Int64}}, x₀::Vector{Float64}, howmany::Int64, which::Symbol, alg::KrylovKit.Lanczos{KrylovKit.ModifiedGramSchmidt2, Float64})
@ KrylovKit \packages\KrylovKit\kWdb6\src\eigsolve\lanczos.jl:49
[5] #eigsolve#38
@ \packages\KrylovKit\kWdb6\src\eigsolve\eigsolve.jl:202 [inlined]
[6] #eigsolve#36
@ \packages\KrylovKit\kWdb6\src\eigsolve\eigsolve.jl:176 [inlined]
[7] test_krylov(a::Int64, b::Int64, N::Int64, n_j::Int64)
@ Main \KrylovTest.jl:12
[8] top-level scope
@ \KrylovTest.jl:16
When eigsolve does succeed, it always converges to the same result, which coincides with the one returned by LinearAlgebra.eigvals. Perhaps the problem is caused by the groups (of size 2N) of very closely spaced eigenvalues of the constructed matrix.
The problem also shows up for, e.g., krylovdim=40 (with howmany=30), provided the matrix is kept the same size as in the example above (where n_j=112).
Tested on Julia 1.7.2 and KrylovKit 0.5.4.
Hi,
I am not sure this is really an issue so feel free to close it.
I am using Newton-GMRES and I was wondering:
Thank you for your help,
Best regards
Hi,
I am using KK on a GPU with very limited memory. I would like to ask you a few questions to optimize my code:
If you have suggestions, please do not hesitate,
Thank you
Best regards.
Hello,
First thanks for the package. There seems to be a bug in the following:
julia> linsolve(identity,rand(10))
([-3.411259393663508e-16, -1.705629696831754e-16, -2.1320371210396926e-16, -1.3325232006498079e-17, -2.1320371210396926e-16, -6.822518787327016e-16, -4.264074242079385e-16, -6.822518787327016e-16, -3.411259393663508e-16, -2.1320371210396925e-17], ConvergenceInfo: no converged values after 100 iterations and 102 applications of the linear map;
norms of residuals are given by (1.5362946534167095,).
)
julia> linsolve(x->copy(x),rand(10))
([0.45405764157553313, 0.9479983088580736, 0.6357434060853168, 0.9590624031355045, 0.37730901182712784, 0.09497638177435319, 0.5263104518163659, 0.19843955874258803, 0.7991780212359407, 0.6081845420845212], ConvergenceInfo: one converged value after 1 iterations and 3 applications of the linear map;
norms of residuals are given by (2.3055512673781017e-16,).
)
Given that the result is correct for the second form but not the first, I suspect an aliasing bug but I don't know the package well enough to track it down :)
I couldn't find any warnings against aliasing in the documentation either...
Cheers!
With v0.5, there is a method error if the initial vector is an AbstractVector other than an Array, e.g.
using KrylovKit
n = 10
KrylovKit.eigsolve(rand(n, n), 1:n, 1, :LM)
# ERROR: MethodError: no method matching KrylovKit.ArnoldiFactorization(::Int64, ::KrylovKit.OrthonormalBasis{StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}}, ::Array{Float64,1}, ::Array{Float64,1})
KrylovKit.eigsolve(rand(n, n), collect(1:n), 1, :LM)
# works
I am trying to obtain the smallest eigenvalue of a real symmetric matrix. Here is my code:
using LinearAlgebra,KrylovKit,Arpack
A = rand(Float64,(10,10)) .- one(Float64)/2;
A=(A+A')/2;
eigsolve(A,1,:SR;krylovdim =4, maxiter =100, tol = precision(Float64))[1]
1-element Vector{Float64}:
-0.02558177613350929
data = eigen(A);
data.values[1]
-0.978221881847597
Clearly I get the wrong answer. Compare with the Arpack result:
eigs(A,nev=1,which=:SR,maxiter=2)[1]
1-element Vector{Float64}:
-0.9782218818475978
Probably my inputs are not correct; if so, I would appreciate some explicit examples of how to use KrylovKit.
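A possible explanation, offered tentatively: precision(Float64) evaluates to 53 (the number of significand bits), so it is not a sensible convergence tolerance, and krylovdim = 4 is very small even for a 10x10 matrix. With the default keyword arguments the Krylov dimension exceeds the matrix size, and the extreme eigenvalue should agree with eigen:
using LinearAlgebra, KrylovKit
A = rand(Float64, (10, 10)) .- one(Float64)/2
A = (A + A')/2
vals, vecs, info = eigsolve(A, 1, :SR)   # defaults (krylovdim = 30, tol = 1e-12 at the time of writing)
vals[1] ≈ eigen(A).values[1]             # expected to hold with the default settings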
I had some issues where A * x_init = 0, after which normalization introduced NaN values.
This should probably throw an error?
The following example should return only the lowest eigenvalue according to the docstring, but returns the full spectrum. This behavior also applies to SparseMatrixCSC input.
julia> eigsolve(randn(4,4), 1, :SR)
(Complex{Float64}[-2.312+0.0im, -0.462406+1.1235im, -0.462406-1.1235im, 2.3231+0.0im], Array{Complex{Float64},1}[[-0.100722+0.0im, -0.827104+0.0im, -0.519301+0.0im, 0.189947+0.0im], [-0.872454-0.0771637im, 0.0691546-0.482002im, 0.415766-0.00552139im, 0.280353-0.280638im], [-0.872454+0.0771637im, 0.0691546+0.482002im, 0.415766+0.00552139im, 0.280353+0.280638im], [-0.12619+0.0im, 0.521181+0.0im, -0.999292+0.0im, -0.192739+0.0im]], ConvergenceInfo: 4 converged values after 1 iterations and 4 applications of the linear map;
norms of residuals are given by (3.766032907369262e-32, 5.7580543111933805e-33, 5.7580543111933805e-33, 3.955301283972102e-33).
)
help?> KrylovKit.eigsolve
eigsolve(A::AbstractMatrix, [howmany = 1, which = :LM, T = eltype(A)]; kwargs...)
eigsolve(f, n::Int, [howmany = 1, which = :LM, T = Float64]; kwargs...)
eigsolve(f, x₀, [howmany = 1, which = :LM]; kwargs...)
eigsolve(f, x₀, howmany, which, algorithm)
Compute howmany eigenvalues from the linear map encoded in the matrix A or by the function f. Return eigenvalues, eigenvectors and a ConvergenceInfo structure.
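As far as I understand (a hedged side note), eigsolve may return more converged values than howmany, especially when the Krylov subspace spans the whole space, as it does here for a 4x4 matrix with the default krylovdim; the requested value is simply the first entry:
vals, vecs, info = eigsolve(randn(4, 4), 1, :SR)
vals[1]   # the requested :SR eigenvalue; any extra converged values follow in vals[2:end]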
Hi,
The downside of having a good package is that people want extensions...
I was wondering how difficult it would be to make your GMRES work with CuArrays? I would say that only ArnoldiFactorization.V should stay on the GPU, plus the current x in GMRES.
Have you thought about this?
Thank you for your help,
Best regards
Consider the following matrix:
>>> A = zeros(15,15);
>>> A[[ 2, 16, 18, 32, 49, 50, 51, 52, 56, 64, 66, 68, 72,
79, 80, 84, 88, 94, 98, 99, 101, 110, 112, 113, 114, 117,
126, 127, 128, 133, 149, 154, 157, 162, 163, 170, 173, 176, 178,
186, 189, 191, 192, 193, 205, 210, 224]] .= 1;
>>> A[[ 3, 31, 150, 220]] .= -1;
The eigenvalues of this matrix are degenerate, yet eigsolve is unable to resolve this degeneracy and instead returns eigenvalues with multiplicity one:
>>> eigvals(A)'
[-2.0 -2.0 -2.0 -1.73205 -1.73205 -1.37228 1.0 1.0 1.0 ... 1.0 1.73205 1.73205 4.37228]
>>> evals_k, evecs_k, info = KrylovKit.eigsolve(A, 3, :SR);
>>> evals_k'
[ -2.0 -1.73205 -1.37228 1.0 1.73205 4.37228]
This is independent of which. ARPACK (tested using scipy.sparse.linalg.eigsh) instead correctly resolves the degeneracy:
>>> import scipy.sparse.linalg
>>> scipy.sparse.linalg.eigsh(A, k=3, which='SA')[0]
array([-2., -2., -2.])
Hi,
Is it possible to pass a custom dot product to eigsolve?
This would be useful for me because I know a dot product with respect to which my operator is Hermitian.
Thank you for your help,
Best
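A workaround sketch of my own (not a package feature; every matrix below is a hypothetical placeholder): if the custom inner product is ⟨x, y⟩_M = x' * M * y with M Hermitian positive definite, an M-Hermitian operator A can be symmetrized as B = M^(1/2) * A * M^(-1/2) and handed to eigsolve with the standard inner product.
using KrylovKit, LinearAlgebra
n = 100
M = Symmetric(rand(n, n) + n * I)    # hypothetical SPD matrix defining the inner product
S = Symmetric(rand(n, n))
A = M \ Matrix(S)                    # such an A is self-adjoint w.r.t. ⟨x, y⟩_M
Msqrt = sqrt(M)                      # symmetric square root of M
B(x) = Msqrt * (A * (Msqrt \ x))     # B = M^(1/2) A M^(-1/2), Hermitian in the usual sense
vals, ws, info = eigsolve(B, rand(n), 3, :LR; ishermitian=true)
vs = [Msqrt \ w for w in ws]         # eigenvectors of A in the original coordinates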
Hi,
This MWE shows the problem; it uses Arnoldi.
Thanks for your help,
Bests,
PS: I am on KrylovKit v0.5.2 #master
using KrylovKit
# note: TY (the element type) is not defined in this snippet
N = 100
A = rand(TY, (N, N)) .- one(TY)/2
A = (A + A')/2
Af = x -> A * x
eigsolve(Af, rand(N), 3; verbosity=2, issymmetric=true)
Sir,
I am struggling with the geneigsolve method of KrylovKit. With the Clement matrix C of size 101×101, whose lowest eigenvalue is -100, eigsolve works:
eigsolve(C, 2, :SR, eltype(C))
(Complex{Float64}[-100.00000000000057 + 0.0im, -97.9999999999994 + 0.0im], Array{Complex{Float64},1}[[-5.934823977225039e-17 + 0.0im, 3.320208037214963e-17 + 0.0im, 4.9512960751656277e-17 + 0.0im, -1.0008367950905649e-16 + 0.0im, 1.9294724938278272e-16 + 0.0im, -1.567962742202036e-17 + 0.0im, -3.9153858197655655e-17 + 0.0im, 3.1427591972433225e-18 + 0.0im, -7.05722251471179e-17 + 0.0im, 1.7906628918386472e-16 + 0.0im … 5.564153874360254e-16 + 0.0im, -4.962583320636797e-17 + 0.0im, -2.350694315844001e-16 + 0.0im, 1.7194234008369993e-16 + 0.0im, -1.4898044751323327e-16 + 0.0im, 2.1723558020234293e-16 + 0.0im, -1.6117744893327697e-16 + 0.0im, 1.2940048266310558e-16 + 0.0im, 4.700943375695305e-17 + 0.0im, -9.93535915208007e-17 + 0.0im], [-2.125811387704412e-18 + 0.0im, -2.1103217340835757e-17 + 0.0im, 4.0638801159622474e-17 + 0.0im, 3.0122951033892746e-17 + 0.0im, -7.833453391488912e-17 + 0.0im, 2.2458218145930223e-17 + 0.0im, 1.3680484459243355e-16 + 0.0im, -1.811268606205335e-16 + 0.0im, 3.0930102966243023e-16 + 0.0im, -7.470590659650482e-16 + 0.0im … -8.288981774225782e-16 + 0.0im, 2.856106009589991e-16 + 0.0im, -3.289414921445156e-16 + 0.0im, 3.156433357758001e-16 + 0.0im, 1.5492772606972423e-17 + 0.0im, -1.0742140666038115e-16 + 0.0im, 5.890215683339088e-17 + 0.0im, 5.669760393049083e-17 + 0.0im, -3.249941094882854e-17 + 0.0im, -4.870621770281174e-18 + 0.0im]], ConvergenceInfo: 2 converged values after 13 iterations and 171 applications of the linear map;
norms of residuals are given by (4.9392800197204625e-15, 3.152148007644767e-13).
)
But when I try geneigsolve, I get an error message that I don't understand; see below:
geneigsolve((C,I), 1, :SR, promote_type(eltype(C),eltype(I)))
ERROR: TypeError: in fieldtype, expected DataType, got Type{Union{}}
Stacktrace:
[1] tuple_type_head(::Type{T} where T) at ./essentials.jl:208
[2] geneigsolve(::Tuple{Tridiagonal{Float64,Array{Float64,1}},UniformScaling{Bool}}, ::Array{Float64,1}, ::Int64, ::Symbol; kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /Users/etiennebernard/.julia/packages/KrylovKit/OLgKs/src/eigsolve/geneigsolve.jl:148
[3] geneigsolve(::Tuple{Tridiagonal{Float64,Array{Float64,1}},UniformScaling{Bool}}, ::Array{Float64,1}, ::Int64, ::Symbol) at /Users/etiennebernard/.julia/packages/KrylovKit/OLgKs/src/eigsolve/geneigsolve.jl:146
[4] geneigsolve(::Tuple{Tridiagonal{Float64,Array{Float64,1}},UniformScaling{Bool}}, ::Int64, ::Symbol, ::Type{T} where T; kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /Users/etiennebernard/.julia/packages/KrylovKit/OLgKs/src/eigsolve/geneigsolve.jl:139
[5] geneigsolve(::Tuple{Tridiagonal{Float64,Array{Float64,1}},UniformScaling{Bool}}, ::Int64, ::Symbol, ::Type{T} where T) at /Users/etiennebernard/.julia/packages/KrylovKit/OLgKs/src/eigsolve/geneigsolve.jl:139
[6] top-level scope at REPL[32]:1
Please, could you explain how to fix it?
Sincerely,
Etienne
Hi,
I am still on KK 0.4.2. Somehow, this used to work but does not anymore:
using KrylovKit
using CuArrays # version 0.2.2
CuArrays.allowscalar(false)
import LinearAlgebra: mul!, axpby!
mul!(x::CuArray, y::CuArray, α::T) where {T <: Number} = (x .= α .* y)
mul!(x::CuArray, y::T, α::CuArray) where {T <: Number} = (x .= α .* y)
axpby!(a::T, X::CuArray, b::T, Y::CuArray) where {T <: Number} = (Y .= a .* X .+ b .* Y)
L = x->x
vals, vec, info = KrylovKit.eigsolve(L, CuArray{Float64}(rand(Float64, 1024)), 10, :LM, tol = 1e-10, maxiter = 40, verbosity = 2, issymmetric = true, krylovdim = max(40, 10 + 10))
It returns the following error. I'd say the IndexStyle is off. Do you have an idea what's wrong?
Thank you,
Best
Romain
julia> vals, vec, info = KrylovKit.eigsolve(L, CuArray{Float64}(rand(Float64, 1024)), 10, :LM, tol = 1e-10, maxiter = 40, verbosity = 2, issymmetric = true, krylovdim = max(40, 10 + 10))
[ Info: Lanczos eigsolve in iter 1: 10 values converged, normres = (6.31e-17, 3.49e-17, 2.82e-19, 8.34e-17, 9.53e-17, 9.82e-17, 9.20e-17, 7.74e-17, 5.59e-17, 2.93e-17)
ERROR: TaskFailedException:
scalar getindex is disallowed
Stacktrace:
[1] error(::String) at ./error.jl:33
[2] assertscalar(::String) at /user/rveltz/home/.julia/packages/GPUArrays/JqOUg/src/host/indexing.jl:41
[3] getindex(::CuArray{Float64,1,Nothing}, ::Int64) at /user/rveltz/home/.julia/packages/GPUArrays/JqOUg/src/host/indexing.jl:96
[4] macro expansion at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/orthonormal.jl:125 [inlined]
[5] macro expansion at ./simdloop.jl:77 [inlined]
[6] unproject_linear_kernel!(::CuArray{Float64,1,Nothing}, ::KrylovKit.OrthonormalBasis{CuArray{Float64,1,Nothing}}, ::SubArray{Float64,1,Array{Float64,2},Tuple{UnitRange{Int64},Int64},true}, ::UnitRange{Int64}, ::Int64, ::Int64, ::Base.OneTo{Int64}) at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/orthonormal.jl:124
[7] macro expansion at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/orthonormal.jl:116 [inlined]
[8] (::KrylovKit.var"#36#threadsfor_fun#11"{Int64,Int64,Base.OneTo{Int64},Int64,CuArray{Float64,1,Nothing},SubArray{Float64,1,Array{Float64,2},Tuple{UnitRange{Int64},Int64},true},KrylovKit.OrthonormalBasis{CuArray{Float64,1,Nothing}},Int64,StepRange{Int64,Int64}})(::Bool) at ./threadingconstructs.jl:61
[9] (::KrylovKit.var"#36#threadsfor_fun#11"{Int64,Int64,Base.OneTo{Int64},Int64,CuArray{Float64,1,Nothing},SubArray{Float64,1,Array{Float64,2},Tuple{UnitRange{Int64},Int64},true},KrylovKit.OrthonormalBasis{CuArray{Float64,1,Nothing}},Int64,StepRange{Int64,Int64}})() at ./threadingconstructs.jl:28
Stacktrace:
[1] wait(::Task) at ./task.jl:251
[2] macro expansion at ./threadingconstructs.jl:69 [inlined]
[3] unproject_linear_multithreaded!(::CuArray{Float64,1,Nothing}, ::KrylovKit.OrthonormalBasis{CuArray{Float64,1,Nothing}}, ::SubArray{Float64,1,Array{Float64,2},Tuple{UnitRange{Int64},Int64},true}, ::Int64, ::Int64, ::Base.OneTo{Int64}) at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/orthonormal.jl:115
[4] unproject!(::CuArray{Float64,1,Nothing}, ::KrylovKit.OrthonormalBasis{CuArray{Float64,1,Nothing}}, ::SubArray{Float64,1,Array{Float64,2},Tuple{UnitRange{Int64},Int64},true}, ::Int64, ::Int64, ::Base.OneTo{Int64}) at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/orthonormal.jl:89
[5] unproject! at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/orthonormal.jl:88 [inlined]
[6] mul! at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/orthonormal.jl:58 [inlined]
[7] * at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/orthonormal.jl:56 [inlined]
[8] #44 at ./none:0 [inlined]
[9] iterate at ./generator.jl:47 [inlined]
[10] collect(::Base.Generator{KrylovKit.ColumnIterator{SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false},Base.OneTo{Int64}},KrylovKit.var"#44#47"{KrylovKit.OrthonormalBasis{CuArray{Float64,1,Nothing}}}}) at ./array.jl:622
[11] eigsolve(::Function, ::CuArray{Float64,1,Nothing}, ::Int64, ::Symbol, ::Lanczos{ModifiedGramSchmidt2,Float64}) at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/eigsolve/lanczos.jl:146
[12] #eigsolve#41(::Base.Iterators.Pairs{Symbol,Real,NTuple{5,Symbol},NamedTuple{(:tol, :maxiter, :verbosity, :issymmetric, :krylovdim),Tuple{Float64,Int64,Int64,Bool,Int64}}}, ::typeof(eigsolve), ::Function, ::CuArray{Float64,1,Nothing}, ::Int64, ::Symbol) at /user/rveltz/home/.julia/packages/KrylovKit/oJ59b/src/eigsolve/eigsolve.jl:163
[13] (::KrylovKit.var"#kw##eigsolve")(::NamedTuple{(:tol, :maxiter, :verbosity, :issymmetric, :krylovdim),Tuple{Float64,Int64,Int64,Bool,Int64}}, ::typeof(eigsolve), ::Function, ::CuArray{Float64,1,Nothing}, ::Int64, ::Symbol) at ./none:0
[14] top-level scope at none:0
Sometimes svdsolve throws a DomainError on v0.5.4 and now v0.6.0:
using LinearAlgebra, KrylovKit, StableRNGs
m, n = 4100, 4200
X = randn(StableRNG(1234), m, n)
# should work for r <= 4096 == BLOCKSIZE
r = 10
svdsolve(X, m, r, :LR, Float64, krylovdim=r);
# this should throw the DomainError; takes a long time
r = 4097 # BLOCKSIZE + 1
svdsolve(X, m, r, :LR, Float64, krylovdim=r);
The stacktrace (see below) hints at a prevpow call that may be the source of the problem; see for example https://github.com/Jutho/KrylovKit.jl/blob/v0.6.0/src/orthonormal.jl#L167. My guess is that we somehow end up with prevpow(2, 0), which throws the error.
I did not study the multithreaded code closely, but would adding something like
prevpow(2, max(1, div(BLOCKSIZE, n)))
be sufficient? It certainly avoids the error, but I am not sure about the impact on performance. Admittedly, this is a highly contrived edge case. I can't recall how I came across this error in the first place a few months ago; my example was constructed after peeking at the multithreaded code.
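As a quick standalone check of the prevpow hypothesis (plain Julia, independent of KrylovKit):
julia> prevpow(2, 0)
ERROR: DomainError with 0:
`x` must be ≥ 1.
julia> prevpow(2, max(1, 0))
1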
Stacktrace:
ERROR: DomainError with 0:
`x` must be ≥ 1.
Stacktrace:
[1] prevpow(a::Int64, x::Int64)
@ Base ./intfuncs.jl:479
[2] unproject_linear_multithreaded!(y::Vector{Float64}, b::KrylovKit.OrthonormalBasis{Vector{Float64}}, x::SubArray{Float64, 1, Matrix{Float64}, Tuple{UnitRange{Int64}, Int64}, true}, α::Int64, β::Int64, r::Base.OneTo{Int64})
@ KrylovKit ~/.julia/packages/KrylovKit/diNbc/src/orthonormal.jl:167
[3] unproject!(y::Vector{Float64}, b::KrylovKit.OrthonormalBasis{Vector{Float64}}, x::SubArray{Float64, 1, Matrix{Float64}, Tuple{UnitRange{Int64}, Int64}, true}, α::Int64, β::Int64, r::Base.OneTo{Int64})
@ KrylovKit ~/.julia/packages/KrylovKit/diNbc/src/orthonormal.jl:135
[4] unproject!
@ ~/.julia/packages/KrylovKit/diNbc/src/orthonormal.jl:134 [inlined]
[5] mul!
@ ~/.julia/packages/KrylovKit/diNbc/src/orthonormal.jl:61 [inlined]
[6] *
@ ~/.julia/packages/KrylovKit/diNbc/src/orthonormal.jl:59 [inlined]
[7] #69
@ ./none:0 [inlined]
[8] iterate
@ ./generator.jl:47 [inlined]
[9] collect(itr::Base.Generator{KrylovKit.ColumnIterator{SubArray{Float64, 2, Matrix{Float64}, Tuple{UnitRange{Int64}, UnitRange{Int64}}, false}, Base.OneTo{Int64}}, KrylovKit.var"#69#73"{KrylovKit.OrthonormalBasis{Vector{Float64}}}})
@ Base ./array.jl:724
[10] svdsolve(A::Matrix{Float64}, x₀::Vector{Float64}, howmany::Int64, which::Symbol, alg::GKL{ModifiedGramSchmidt2, Float64})
@ KrylovKit ~/.julia/packages/KrylovKit/diNbc/src/eigsolve/svdsolve.jl:274
[11] #svdsolve#68
@ ~/.julia/packages/KrylovKit/diNbc/src/eigsolve/svdsolve.jl:127 [inlined]
[12] svdsolve(f::Matrix{Float64}, n::Int64, howmany::Int64, which::Symbol, T::Type; kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:krylovdim,), Tuple{Int64}}})
@ KrylovKit ~/.julia/packages/KrylovKit/diNbc/src/eigsolve/svdsolve.jl:119
[13] top-level scope
@ REPL[16]:1
Julia command:
julia -t 10
Version info:
julia> versioninfo()
Julia Version 1.7.3
Commit 742b9abb4d (2022-05-06 12:58 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i9-10900KF CPU @ 3.70GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-12.0.1 (ORCJIT, skylake)
Environment:
(@v1.7) pkg> activate --temp
Activating new project at `/tmp/jl_NkV4GT`
(jl_NkV4GT) pkg> add LinearAlgebra KrylovKit StableRNGs
Resolving package versions...
Updating `/tmp/jl_NkV4GT/Project.toml`
[0b1a1467] + KrylovKit v0.6.0
[860ef19b] + StableRNGs v1.0.0
[37e2e46d] + LinearAlgebra
Updating `/tmp/jl_NkV4GT/Manifest.toml`
[79e6a3ab] + Adapt v3.5.0
[d360d2e6] + ChainRulesCore v1.15.7
[34da2185] + Compat v4.6.0
[46192b85] + GPUArraysCore v0.1.3
[0b1a1467] + KrylovKit v0.6.0
[860ef19b] + StableRNGs v1.0.0
[56f22d72] + Artifacts
[2a0f44e3] + Base64
[ade2ca70] + Dates
[b77e0a4c] + InteractiveUtils
[8f399da3] + Libdl
[37e2e46d] + LinearAlgebra
[56ddb016] + Logging
[d6f4376e] + Markdown
[de0858da] + Printf
[9a3f8284] + Random
[ea8e919c] + SHA
[9e88b42a] + Serialization
[2f01184e] + SparseArrays
[8dfed614] + Test
[cf7118a7] + UUIDs
[4ec0a83e] + Unicode
[e66e0078] + CompilerSupportLibraries_jll
[4536629a] + OpenBLAS_jll
[8e850b90] + libblastrampoline_jll
Feel free to reopen and share the matrices.
Originally posted by @Jutho in #18 (comment)
The matrices can be found at:
size(M) = (2187, 2187): https://www.dropbox.com/s/oneuns05l3e3rva/unit_cube_8.zip?dl=0
size(M) = (14739, 14739): https://www.dropbox.com/s/2sdyhaxk2pd0axl/unit_cube_16.zip?dl=0
Simulation run as
neigvs=20
OmegaShift = (0.01*2*pi)^2;
d, v, info = geneigsolve((K+OmegaShift*M, M), neigvs, :SR; krylovdim = 2*neigvs, issymmetric = true, verbosity = 1)
Fails with
┌ Warning: GolubYe eigsolve finished without convergence after 100 iterations:
│ * 1 eigenvalues converged
│ * norm of residuals = (7.66211503629717e-13, 0.7585108726412924, 0.7029803176625605, 3.870821453942646, 14.613356935519448, 21.74815548797692, 33.786265705509045, 50.072831967271455, 65.11342753900738, 77.44873911331082, 75.66586526202666, 96.08167739154138, 103.36719352447871, 135.2866204761907, 143.3234236856939, 147.71528778473464, 185.40110483665762, 190.187097932011, 229.1869935974752, 254.63625652304694)
│ * number of operations = 4140
and so on.
I tried
eval, evec, info = geneigsolve((K, M), neigvs, :SR; krylovdim = neigvs, issymmetric = true, verbosity = 1)
Both K and M are positive definite, about 50,000 x 50,000. neigvs = 150. KrylovKit complained that krylovdim was insufficient, hence I increased it to the number of eigenvalues.
For these matrices Arpack solves the problem in 40 seconds. KrylovKit prints:
info = ConvergenceInfo: no converged values after 100 iterations and 15250 applications of the linear map;
norms of residuals are given by (0.002396873417046488, 3.248835060602836e7, 2.7753093442263037e8, 4.9315321722871387e8, 7.748594302081199e8, 8.547390494758842e8, 3.4130731195272117e9, 2.4306401956439214e9, 5.244045970318826e9, 5.323371454557005e9, 8.310185004644285e9, 1.0643108361077374e10, 1.2894525228928179e10, 1.555410215182708e10, 1.8052112354689373e10, 2.2001736414298813e10, 2.3379073914942333e10, 2.3999892493262e10, 2.63218402446868e10, 3.115759149886282e10, 3.774859397558686e10, 3.699561047627666e10, 4.0447127619959496e10, 4.181580879356961e10, 5.275823985974196e10,
4.768126278527812e10, 4.882031555572997e10, 5.330862193819132e10, 5.353485665490572e10, 5.6667741329282135e10, 5.733496435968245e10, 6.3448545928700806e10, 6.5015631900482185e10, 6.7404750475145256e10, 6.992019932905296e10, 7.344382026718034e10, 7.623674909270993e10, 8.279887125787743e10, 8.161285815088121e10, 8.750864463221799e10, 9.200200109475017e10, 9.942697543438498e10, 1.0421345396558173e11, 1.0849542693004073e11, 1.140218636996058e11, 1.185705367155918e11, 1.2024562750017061e11, 1.2562159433846292e11, 1.3631432243966956e11, 1.3728760123105396e11, 1.4308825857752185e11, 1.5033458874452377e11, 1.5793654872723734e11, 1.6031270399887112e11, 1.6423326876692508e11, 1.604821916559468e11, 1.686614555984489e11, 1.7345671335089658e11, 1.7434602117846674e11, 1.3436876346820663e11, 1.6606297010356393e11, 1.6658326554228894e11, 1.6418416894572073e11, 1.5374871316269406e11, 1.783107828047467e11, 1.6815619189843665e11, 1.6351392732242627e11, 1.6509436136073273e11, 1.601023400072549e11, 1.6456769821694348e11, 1.6047541818201074e11, 1.4038081600641678e11, 1.3720794169566965e11, 1.631760242331073e11, 1.4234108793806427e11, 1.8020445089496085e11, 1.51720828119771e11, 1.8464034968520828e11, 1.660626475659846e11, 1.6632630341921048e11, 1.6123468785146686e11, 1.377808422913065e11, 1.6198876300644772e11, 1.611275417938122e11, 1.3061449142377744e11, 1.6782364215299008e11, 1.5342449592515982e11, 1.735252239537415e11, 1.4547049114227682e11, 1.3628003879957086e11, 1.5560568203321463e11, 1.3683976181544379e11, 1.7975223317252618e11, 1.476248988115132e11, 1.8047591240991754e11, 1.6594854036829172e11, 1.4595864416503073e11, 1.4292860766728033e11, 1.888523385252819e11, 1.559656863079164e11, 1.8187799044350156e11, 1.783649242150821e11, 1.408815371242372e11, 1.3834326675060028e11, 1.7525667949945404e11, 1.5796171843597052e11, 2.0585757711743658e11, 1.3665030858602708e11, 1.3147173721526407e11, 1.7053953353792828e11, 2.065389321386339e11, 2.0798016142978784e11, 1.406676404428412e11, 1.3574961066155034e11, 1.4856818470185443e11, 1.3758794314976102e11, 2.0186502504187262e11, 1.526715726440403e11, 1.422192579326239e11, 1.4466740543047934e11, 1.202792375613822e11, 2.2853924388476486e11, 1.2563885264359932e11, 1.3320393849570345e11, 2.260546376076804e11, 1.2962041331021895e11, 1.2166448213725748e11, 1.2915512375891316e11, 1.1976672844963583e11, 1.2648989380452153e11, 1.3449121537950958e11, 1.4399992971674252e11, 1.5729075967673212e11, 1.8370733318267642e11, 1.2705222336777429e11, 1.3261045192382887e11, 1.2898933300171616e11, 1.4691808009139304e11, 1.3494349702638138e11, 1.6269401589458328e11, 1.717339010561835e11, 1.2377789782210353e11, 1.377322886081942e11, 1.210614217886396e11, 8.63304435006993e10, 1.6828500315167252e11, 1.5807734611546933e11, 2.1371191932422336e11, 1.8572763563475046e11, 1.3215581179489737e11).
after about 256 seconds.
Any idea what I am doing wrong?
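An editorial suggestion only, not advice from the package author: for the smallest generalized eigenvalues of K v = λ M v with K and M positive definite, a shift-and-invert transform often converges much faster than working on (K, M) directly, because the wanted eigenvalues of the inverted operator are large and well separated. A rough sketch, assuming K and M are the sparse matrices from the report and neigvs is as above:
using KrylovKit, LinearAlgebra, SparseArrays
sigma = (0.01 * 2 * pi)^2                   # same shift as OmegaShift above
F = cholesky(Symmetric(K + sigma * M))      # factor once; assumes K + sigma*M is sparse SPD
d, v, info = eigsolve(x -> F \ (M * x), size(M, 1), neigvs, :LM; krylovdim = 2 * neigvs)
lambda = 1 ./ d .- sigma                    # map back: mu = 1/(lambda + sigma); values are real up to roundoff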
Hey, I saw that you added some phiv calculations and was wondering if you tested against ExponentialUtilities.jl to see if there's anything we should upstream for use in DifferentialEquations.jl. FWIW we haven't seen fantastic results of exponential integrators against well-tuned IMEX methods, and I'm curious to find out if there are any possible improvements to the standard Krylov phiv methods.
Hi,
I am using your package and I get the following error message on this simple example. However, the fill! method does not seem to be required by your docs. In essence, my "issue" is the same as #25
using KrylovKit
using PseudoArcLengthContinuation
function f(dx::BorderedArray)
out = copy(dx)
out.u .*= 2
out.p .*= 2
return out
end
KrylovKit.eigsolve(f, BorderedArray(rand(100),rand(2)), 2, :LR; verbosity = 2, krylovdim = 50, maxiter = 20 )
returns
[ Info: Arnoldi schursolve in iter 1: 5 values converged, normres = (1.90e-35, 8.52e-21)
ERROR: MethodError: no method matching fill!(::BorderedArray{Array{Complex{Float64},1},Array{Complex{Float64},1}}, ::Complex{Float64})
Closest candidates are:
fill!(::Array{T,N} where N, ::Any) where T at array.jl:308
fill!(::BitArray, ::Any) at bitarray.jl:351
fill!(::SubArray{Bool,#s627,#s626,Tuple{AbstractUnitRange{Int64}},L} where L where #s626<:BitArray where #s627, ::Any) at multidimensional.jl:1268
...
Stacktrace:
[1] unproject!(::BorderedArray{Array{Complex{Float64},1},Array{Complex{Float64},1}}, ::KrylovKit.OrthonormalBasis{BorderedArray{Array{Float64,1},Array{Float64,1}}}, ::SubArray{Complex{Float64},1,Array{Complex{Float64},2},Tuple{Base.Slice{Base.OneTo{Int64}},Int64},true}, ::Int64, ::Int64, ::Base.OneTo{Int64}) at /Users/rveltz/.julia/packages/KrylovKit/mHobB/src/orthonormal.jl:94
[2] unproject! at /Users/rveltz/.julia/packages/KrylovKit/mHobB/src/orthonormal.jl:88 [inlined]
[3] mul! at /Users/rveltz/.julia/packages/KrylovKit/mHobB/src/orthonormal.jl:58 [inlined]
[4] * at /Users/rveltz/.julia/packages/KrylovKit/mHobB/src/orthonormal.jl:56 [inlined]
[5] #56 at ./none:0 [inlined]
[6] iterate at ./generator.jl:47 [inlined]
[7] collect(::Base.Generator{KrylovKit.ColumnIterator{Array{Complex{Float64},2},Base.OneTo{Int64}},KrylovKit.var"#56#59"{KrylovKit.OrthonormalBasis{BorderedArray{Array{Float64,1},Array{Float64,1}}}}}) at ./array.jl:622
[8] eigsolve(::Function, ::BorderedArray{Array{Float64,1},Array{Float64,1}}, ::Int64, ::Symbol, ::Arnoldi{ModifiedGramSchmidt2,Float64}) at /Users/rveltz/.julia/packages/KrylovKit/mHobB/src/eigsolve/arnoldi.jl:123
[9] #eigsolve#41 at /Users/rveltz/.julia/packages/KrylovKit/mHobB/src/eigsolve/eigsolve.jl:163 [inlined]
[10] (::KrylovKit.var"#kw##eigsolve")(::NamedTuple{(:verbosity, :krylovdim, :maxiter),Tuple{Int64,Int64,Int64}}, ::typeof(eigsolve), ::Function, ::BorderedArray{Array{Float64,1},Array{Float64,1}}, ::Int64, ::Symbol) at ./none:0
[11] top-level scope at none:0
Hi,
I am wondering whether this line is a mistake. It corresponds to mul!(w, α, v) and not to the definition in your docs: "mul!(w, v, α): out of place scalar multiplication; multiply vector v with scalar α and store the result in w".
The tag name "V0.3.3" is not of the appropriate SemVer form (vX.Y.Z).
cc: @Jutho
Am I doing something wrong?
julia> KrylovKit.eigsolve(rand(100,100), howmany=2)
ERROR: MethodError: no method matching eigselector(::Matrix{Float64}, ::Type{Float64}; howmany=2)
Closest candidates are:
eigselector(::AbstractMatrix, ::Type; issymmetric, ishermitian, krylovdim, maxiter, tol, orth, eager, verbosity) at C:\Users\julio\.julia\packages\KrylovKit\kWdb6\src\eigsolve\eigsolve.jl:237 got unsupported keyword argument "howmany"
eigselector(::Any, ::Type; issymmetric, ishermitian, krylovdim, maxiter, tol, orth, eager, verbosity) at C:\Users\julio\.julia\packages\KrylovKit\kWdb6\src\eigsolve\eigsolve.jl:205 got unsupported keyword argument "howmany"
Stacktrace:
[1] kwerr(::NamedTuple{(:howmany,), Tuple{Int64}}, ::Function, ::Matrix{Float64}, ::Type)
@ Base .\error.jl:165
[2] eigsolve(f::Matrix{Float64}, x₀::Vector{Float64}, howmany::Int64, which::Symbol; kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:howmany,), Tuple{Int64}}})
@ KrylovKit C:\Users\julio\.julia\packages\KrylovKit\kWdb6\src\eigsolve\eigsolve.jl:184
[3] eigsolve(A::Matrix{Float64}, howmany::Int64, which::Symbol, T::Type; kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:howmany,), Tuple{Int64}}})
@ KrylovKit C:\Users\julio\.julia\packages\KrylovKit\kWdb6\src\eigsolve\eigsolve.jl:176
[4] top-level scope
@ REPL[27]:1
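For what it is worth, per the docstring quoted further up, howmany is a positional argument rather than a keyword, so the call would presumably be written as:
KrylovKit.eigsolve(rand(100, 100), 2)        # howmany = 2 as a positional argument
KrylovKit.eigsolve(rand(100, 100), 2, :LM)   # optionally followed by `which`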
Thanks a lot for the very nice package.
Although it seems that your package is faster and more robust than eigs, it allocates significantly more memory than eigs. For example, performing Arnoldi with eigsolve on a 1000 x 1000 matrix allocates ~20x more memory than eigs:
X = randn(1000, 1000);
@allocated eigs(X, nev=1)
@allocated eigsolve(X, 1)
while the number of iterations/matrix multiplications is comparable for the two functions.
On a related note, it would be nice to allow for functions f(y, x) that mutate y.
I am happy to contribute if you provide guidance.
I'm opening this issue to reflect strong interest in this planned functionality. It is crucial e.g. to build an efficient shift-and-invert scheme on top of KrylovKit that can compete with Arpack (which is far more fragile than KrylovKit in my experience, and of course not pure Julia).
Currently, when passing a function to, say, eigsolve, it is assumed that the function f(v) implements some linear map A * v, allocating a new vector for each call. The low-hanging fruit here is to allow f!(w, v) with a preallocated vector w, much like LinearMaps does. It would actually be nice to integrate KrylovKit with LinearMaps.
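In the meantime, a trivial workaround sketch (my own, not package functionality; the kernel below is a hypothetical placeholder): wrap an in-place kernel into the allocating f(v) form that eigsolve currently expects.
using KrylovKit
f!(w, v) = (w .= 2 .* v; w)     # hypothetical in-place linear map
f(v) = f!(similar(v), v)        # allocating wrapper with the signature eigsolve expects
vals, vecs, info = eigsolve(f, rand(50), 1, :LM)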
I am trying to get geneigsolve working, but having no luck.
For instance, the trivial example
using KrylovKit
using LinearAlgebra
A = Matrix(1I, 5,5)
B = Matrix(1I, 5,5)
@show KrylovKit.geneigsolve((A,B))
generates the following error:
And more complicated versions seem to generate similar errors (it complains that the operators are not Hermitian or positive definite even when they are).
Am I missing something? I see the tests for geneigsolve don't seem to have such an issue.
eigsolve(epsfun, length(scfres.ρ.real), 1, :SR, verbosity=3, tol=1e-4)
[ Info: Arnoldi iteration step 1: normres = 0.016207911009266315
[ Info: Arnoldi iteration step 2: normres = 0.5590671934901157
[ Info: Arnoldi iteration step 3: normres = 0.905377226214006
[ Info: Arnoldi iteration step 4: normres = 0.23298405811493555
[ Info: Arnoldi iteration step 5: normres = 0.15201652536133126
[ Info: Arnoldi iteration step 6: normres = 0.592603021837977
[ Info: Arnoldi iteration step 7: normres = 0.12176731843540282
[ Info: Arnoldi iteration step 8: normres = 0.09097000303823394
[ Info: Arnoldi iteration step 9: normres = 0.04323381192254804
[ Info: Arnoldi iteration step 10: normres = 0.07849715070693251
[ Info: Arnoldi iteration step 11: normres = 0.129007401971026
[ Info: Arnoldi iteration step 12: normres = 0.7133902458129476
[ Info: Arnoldi iteration step 13: normres = 0.05367711291850232
[ Info: Arnoldi iteration step 14: normres = 0.03533800454486114
[ Info: Arnoldi iteration step 15: normres = 0.7191209788317857
[ Info: Arnoldi iteration step 16: normres = 0.08411095972078231
[ Info: Arnoldi iteration step 17: normres = 0.06905579200587413
[ Info: Arnoldi iteration step 18: normres = 0.03099370046380324
[ Info: Arnoldi iteration step 19: normres = 0.17225062546432904
[ Info: Arnoldi iteration step 20: normres = 0.28845620299403657
[ Info: Arnoldi iteration step 21: normres = 0.15783635449059297
[ Info: Arnoldi iteration step 22: normres = 0.03996345360229037
[ Info: Arnoldi iteration step 23: normres = 0.025087829332985617
[ Info: Arnoldi iteration step 24: normres = 0.020227416987651632
[ Info: Arnoldi iteration step 25: normres = 0.12157529694425907
[ Info: Arnoldi iteration step 26: normres = 0.02732767047408143
[ Info: Arnoldi iteration step 27: normres = 0.08297860675872647
[ Info: Arnoldi iteration step 28: normres = 0.04088602000015171
[ Info: Arnoldi iteration step 29: normres = 0.015330135139814571
[ Info: Arnoldi iteration step 30: normres = 0.029899360771800297
[ Info: Arnoldi schursolve in iter 1: 1 values converged, normres = (1.34e-05)
┌ Info: Arnoldi eigsolve finished after 1 iterations:
│ * 1 eigenvalues converged
│ * norm of residuals = (1.3359144756588437e-5,)
└ * number of operations = 30
The normres printouts here are strange: they can't be residuals, right?
Hi,
I am seeing the following error and I cannot debug it. Is it an issue with your code?
Note that my operators are real. Providing a MWE will prove difficult as I have a massive project :(
ERROR: MethodError: no method matching complex(::Type{Any})
Closest candidates are:
complex(::Complex) at complex.jl:158
complex(::Real) at complex.jl:159
complex(::Real, ::Real) at complex.jl:160
...
Stacktrace:
[1] (::getfield(KrylovKit, Symbol("##45#48")){Array{Any,1}})(::SubArray{Complex{Float64},1,Array{Complex{Float64},2},Tuple{Base.Slice{Base.OneTo{Int64}},Int64},true}) at ./none:0
[2] collect(::Base.Generator{KrylovKit.ColumnIterator{Array{Complex{Float64},2},Base.OneTo{Int64}},getfield(KrylovKit, Symbol("##45#48")){Array{Any,1}}}) at ./generator.jl:47
[3] eigsolve(::Function, ::Array{Float64,1}, ::Int64, ::ClosestTo{Float64}, ::Arnoldi{ModifiedGramSchmidt2,Float64}) at /Users/rveltz/.julia/packages/KrylovKit/UgmTa/src/eigsolve/arnoldi.jl:113
[4] top-level scope at util.jl:156
Observed on master and Julia 1.5, svdsolve does not seem to work with BigFloats:
julia> using KrylovKit
julia> direction = rand(BigFloat, 4, 3)
julia> svdsolve(direction)
ERROR: MethodError: no method matching bidiagsvd!(::LinearAlgebra.Bidiagonal{BigFloat,SubArray{BigFloat,1,Array{BigFloat,1},Tuple{UnitRange{Int64}},true}}, ::SubArray{BigFloat,2,Array{BigFloat,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false}, ::SubArray{BigFloat,2,Array{BigFloat,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false})
Stacktrace:
[1] svdsolve(::Array{BigFloat,2}, ::Array{BigFloat,1}, ::Int64, ::Symbol, ::GKL{ModifiedGramSchmidt2,Float64}) at /home/mbesancon/.julia/packages/KrylovKit/OLgKs/src/eigsolve/svdsolve.jl:155
[2] svdsolve(::Array{BigFloat,2}, ::Array{BigFloat,1}, ::Int64, ::Symbol; kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/mbesancon/.julia/packages/KrylovKit/OLgKs/src/eigsolve/svdsolve.jl:109
[3] svdsolve(::Array{BigFloat,2}, ::Array{BigFloat,1}, ::Int64, ::Symbol) at /home/mbesancon/.julia/packages/KrylovKit/OLgKs/src/eigsolve/svdsolve.jl:106
[4] svdsolve(::Array{BigFloat,2}, ::Int64, ::Symbol, ::Type{T} where T; kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/mbesancon/.julia/packages/KrylovKit/OLgKs/src/eigsolve/svdsolve.jl:101
[5] svdsolve(::Array{BigFloat,2}, ::Int64, ::Symbol, ::Type{T} where T) at /home/mbesancon/.julia/packages/KrylovKit/OLgKs/src/eigsolve/svdsolve.jl:101 (repeats 2 times)
[6] top-level scope at REPL[4]:1
Hi!
Here's the issue:
using CUDA
using KrylovKit
linsolve(CuArray([1.0 2.0; 3.0 4.0]),CuArray([1.0,2.0]))
┌ Warning: Performing scalar operations on GPU arrays: This is very slow, consider disallowing these operations with `allowscalar(false)`
└ @ GPUArrays /pool/aerospace/Tools/julia-1.5.3/depot/packages/GPUArrays/WV76E/src/host/indexing.jl:43
This seems to fix it:
LinearAlgebra.mul!(x::CuArray{T,1},y::CuArray{T,1},α) where T = x .= α .* y;
Maybe something can be done about it in the code? I don't think this is an issue with LinearAlgebra as:
using LinearAlgebra
a=CuArray([1.0 2.0; 3.0 4.0]);
b=CuArray([1.0,2.0]);
c=CuArray([0.0,0.0]);
mul!(c,a,b)
works.
KrylovKit.jl uses an absolute convergence criterion norm(residual) < tol. Some packages, like ArnoldiMethod.jl and Arpack.jl, use a slightly different criterion norm(residual) < tol * |λ|, where λ is the eigenvalue associated with the residual vector (modulo some absolute tolerance for when |λ| ≈ 0). To me, this makes a lot of sense, as it makes convergence of an eigenpair invariant to the overall scale of A, inversion of A, and the size of A on the eigenspace in question. I suppose you could think of it as setting a relative tolerance for eigenvalues, or equivalently an absolute tolerance for eigenvector directions*. It's particularly relevant to my current application, where I'm solving for the smallest eigenvalue of Hermitian positive definite matrices and the numbers can get quite small (or large if I invert).
Is modifying the convergence criterion something you might consider for KrylovKit.jl?
*The intuition I'm describing here applies to symmetric/Hermitian problems, where Schur vectors and eigenvectors are the same, so norm(residual) = ‖Ax - λx‖₂ with normalized x. Not sure exactly how the intuition translates to Schur vector residuals.
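One way to approximate a relative criterion with the current interface (a workaround sketch, not a recommendation; the matrix is a hypothetical stand-in): run a cheap pass with a loose tolerance to estimate the target eigenvalue, then rescale the absolute tol.
using KrylovKit, LinearAlgebra
A = Symmetric(rand(200, 200) + 200I)   # hypothetical HPD matrix
rel_tol = 1e-8
rough, _, _ = eigsolve(A, rand(200), 1, :SR; tol = 1e-2)
vals, vecs, info = eigsolve(A, rand(200), 1, :SR; tol = rel_tol * abs(rough[1]))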
When testing this package against the upcoming 1.3 release, the following test error occurred:
Error During Test at /root/.julia/packages/KrylovKit/kSkwM/test/linalg.jl:57
Got exception outside of a @test
MethodError: no method matching hschur!(::UpperHessenberg{Float32,Array{Float32,2}})
...
Stacktrace:
[1] top-level scope at /root/.julia/packages/KrylovKit/kSkwM/test/linalg.jl:61
[2] top-level scope at /workspace/srcdir/julia/usr/share/julia/stdlib/v1.3/Test/src/Test.jl:1180
[3] top-level scope at /root/.julia/packages/KrylovKit/kSkwM/test/linalg.jl:57
[4] top-level scope at /workspace/srcdir/julia/usr/share/julia/stdlib/v1.3/Test/src/Test.jl:1107
[5] top-level scope at /root/.julia/packages/KrylovKit/kSkwM/test/linalg.jl:57
[6] include at ./boot.jl:328 [inlined]
[7] include_relative(::Module, ::String) at ./loading.jl:1105
[8] include(::Module, ::String) at ./Base.jl:31
[9] include(::String) at ./client.jl:432
[10] top-level scope at /root/.julia/packages/KrylovKit/kSkwM/test/runtests.jl:19
[11] include at ./boot.jl:328 [inlined]
[12] include_relative(::Module, ::String) at ./loading.jl:1105
[13] include(::Module, ::String) at ./Base.jl:31
[14] include(::String) at ./client.jl:432
[15] top-level scope at none:6
[16] eval(::Module, ::Any) at ./boot.jl:330
[17] exec_options(::Base.JLOptions) at ./client.jl:271
[18] _start() at ./client.jl:468
This is likely due to the PR JuliaLang/julia#31853 introducing the new UpperHessenberg type returned from some functions. Adding a specialization here might be enough to fix it.
I am trying to compute eigenvalues of matrices with extended precision, but the function eigsolve returns an error. For example, with the following matrix
using LinearAlgebra, LinearMaps, Quadmath, KrylovKit
T = Complex{Float128}
N = 32
A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1))
the matrix version gives
julia> vals, vecs, info = eigsolve(A, 3, :LM)
ERROR: LoadError: MethodError: no method matching hschur!(::SubArray{Complex{Float128}, 2, Matrix{Complex{Float128}}, Tuple{UnitRange{Int64}, UnitRange{Int64}}, false}, ::SubArray{Complex{Float128}, 2, Matrix{Complex{Float128}}, Tuple{UnitRange{Int64}, UnitRange{Int64}}, false})
and the matrix-free version gives the same error
julia> A_map = LinearMap{T}(x -> A * x, N);
julia> vals, vecs, info = eigsolve(A_map, N, 3, :LM)
ERROR: MethodError: no method matching hschur!(::SubArray{Complex{Float128}, 2, Matrix{Complex{Float128}}, Tuple{UnitRange{Int64}, UnitRange{Int64}}, false}, ::SubArray{Complex{Float128}, 2, Matrix{Complex{Float128}}, Tuple{UnitRange{Int64}, UnitRange{Int64}}, false})
From this discussion on julialang, I have been advised to open this issue.
Additional information:
Replacing Float128 by Double64 from the DoubleFloats.jl package, or by Float64x2 from the MultiFloats.jl package, gives the same error.
Using GenericLinearAlgebra, GenericSchur still yields the error.
Julia version 1.6.2 and KrylovKit v0.5.3.
Hi,
I have become addicted to your package. The downside is that I would like an additional feature ;)
I use Arnoldi iterations for computing the spectrum and the convergence is super slow.
So, I was wondering how difficult it would be for you to add support for preconditioning?
Thank you very much for your help,
Best regards
Using the exponentiate interface to expintegrator, I have been finding that the tol keyword does not behave as expected. Please see the minimal sample code to reproduce below.
I was expecting that by setting tol, the number of operations numops would adapt depending on tol and the total time t. Instead, I am finding that the only thing which seems to reduce numops is lowering krylovdim to a value around 10 or so. The error I find after the code runs, by comparing the returned vector to an exact solution, is usually extremely small (< 1E-15) regardless of how high I make the tol parameter (say if I set it to 1E-3 or even 0.1).
Code to reproduce – the main thing I've been doing is adjusting t from 0.1 up to 1.0 while raising and lowering tol, and I see the behavior described above (the number of applications of the linear map, numops, is something like 22). Lowering krylovdim to 10 does make numops smaller while still giving an accurate result.
using Random
import KrylovKit: exponentiate
using Printf
using LinearAlgebra
let
Random.seed!(1)
# --------------------
# Adjustable Parameters:
# Problem definition
L = 20 # size of matrix
t = 0.1 # time to evolve to
ElT = Float64 # element type
# exponentiate keyword args
tol = 1E-5
krylovdim = 30
maxiter = 100
#---------------------
# Make a random Hermitian matrix H
M = randn(ElT,L,L)
H = (M + M')/2
# Random starting vector (t=0)
x0 = randn(ElT,L)
# Define linear map A
count = 0
function A(x)
count += 1
return H*x
end
# Compute result from exponentiate
xt_exp,info = exponentiate(A,-im*t,x0; tol, krylovdim, maxiter, ishermitian=true, verbosity=100)
# Compute exact result
xt = exp(-im*H*t)*x0
println("exponentiate info:")
@printf(" converged = %s\n",info.converged)
@printf(" residual = %.4E\n",info.residual)
@printf(" normres = %.4E\n",info.normres)
@printf(" numiter = %d\n",info.numiter)
@printf(" numops = %d (<- # times linear map A was applied)\n",info.numops)
@assert count == info.numops # check that exponentiate is reporting this correctly
@printf("Actual error |xt_exp - xt| = %.4E\n",norm(xt_exp - xt))
return
end
Typical output for the above settings:
[ Info: Lanczos iteration step 1: normres = 3.549103723766127
[ Info: Lanczos iteration step 2: normres = 2.3614405541788916
[ Info: Lanczos iteration step 3: normres = 3.588689632432383
[ Info: Lanczos iteration step 4: normres = 2.7584942917329247
[ Info: Lanczos iteration step 5: normres = 4.132375600781351
[ Info: Lanczos iteration step 6: normres = 2.622109652985752
[ Info: Lanczos iteration step 7: normres = 2.6603824297597125
[ Info: Lanczos iteration step 8: normres = 3.014748524743459
[ Info: Lanczos iteration step 9: normres = 3.0273258357955317
[ Info: Lanczos iteration step 10: normres = 1.982100738953418
[ Info: Lanczos iteration step 11: normres = 2.409275684096494
[ Info: Lanczos iteration step 12: normres = 1.6056315015783447
[ Info: Lanczos iteration step 13: normres = 1.119638147747482
[ Info: Lanczos iteration step 14: normres = 1.1900999270425054
[ Info: Lanczos iteration step 15: normres = 0.8606215889963659
[ Info: Lanczos iteration step 16: normres = 1.2985006799870673
[ Info: Lanczos iteration step 17: normres = 0.27409751590136305
[ Info: Lanczos iteration step 18: normres = 0.7444688486178209
[ Info: Lanczos iteration step 19: normres = 0.049873174694519816
[ Info: Lanczos iteration step 20: normres = 2.1595912431228573e-31
[ Info: expintegrate finished after 1 iterations: total error = 9.039173795164503e-67
exponentiate info:
converged = 1
residual = 0.0000E+00
normres = 9.0392E-67
numiter = 1
numops = 22 (<- # times linear map A was applied)
Actual error |xt_exp - xt| = 8.0934E-16
Thanks for this great package! 👏
I am using the exponentiate method to solve coupled linear homogeneous ODEs on matrices, for example dX/dt = X + Y, dY/dt = 2X - Y (as in the minimal example below).
It turns out that using a vector for the x argument of exponentiate raises an error, contrary to what's mentioned in the documentation:
The linear map A can be an AbstractMatrix (dense or sparse) or a general function or callable object that implements the action of the linear map on a vector. If A is an AbstractMatrix, x is expected to be an AbstractVector, otherwise x can be of any type that behaves as a vector and supports the required methods (see KrylovKit docs).
I am probably missing something obvious, here's a minimal example:
> X0 = [1.0 2.0; 3.0 4.0]
> Y0 = [0.0 1.0; 1.0 2.0]
> x0 = [X0, Y0]
> println(typeof(x0))
Vector{Matrix{Float64}}
> A(x) = [x[1] + x[2], 2 * x[1] - x[2]]
> exponentiate(A, 1.0, x0)
UndefRefError: access to undefined reference
...
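A workaround sketch of my own (not an official recommendation; A_packed is my name for the repacked map): pack the two matrices into one ordinary Float64 vector, so that similar, fill! and friends are well defined for the Krylov vectors, and unpack afterwards.
using KrylovKit
X0 = [1.0 2.0; 3.0 4.0]
Y0 = [0.0 1.0; 1.0 2.0]
x0 = vcat(vec(X0), vec(Y0))
function A_packed(x)
    X = reshape(x[1:4], 2, 2)
    Y = reshape(x[5:8], 2, 2)
    return vcat(vec(X + Y), vec(2X - Y))
end
xt, info = exponentiate(A_packed, 1.0, x0)
Xt, Yt = reshape(xt[1:4], 2, 2), reshape(xt[5:8], 2, 2)   # unpack the result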
Hi,
This is not really an issue but a suggestion...
Can you provide a way to monitor the progress of the computations, like a verbose mode in eigsolve...
Thank you for your help and for your great package!
Someone on Slack ran into this issue when they accidentally ran
exponentiate(::SparseMatrixCSC{ComplexF32, Int64}, 1, ::Vector{Complex})
instead of using a concrete type.
Here's a MWE:
#+begin_src julia
using KrylovKit, SparseArrays
let H = sprand(1000, 1000, 0.001), v = zeros(Complex, 1000)
v[1] = 1
exponentiate(H, 1, v)
end
#+end_src
#+RESULTS:
UndefRefError: access to undefined reference
Stacktrace:
[1] getindex
@ ./essentials.jl:13 [inlined]
[2] macro expansion
@ ~/julia/usr/share/julia/stdlib/v1.9/LinearAlgebra/src/generic.jl:183 [inlined]
[3] macro expansion
@ ./simdloop.jl:77 [inlined]
[4] rmul!(X::Vector{Complex}, s::Bool)
@ LinearAlgebra ~/julia/usr/share/julia/stdlib/v1.9/LinearAlgebra/src/generic.jl:182
[5] expintegrator
@ ~/.julia/packages/KrylovKit/diNbc/src/matrixfun/expintegrator.jl:106 [inlined]
[6] expintegrator(::SparseMatrixCSC{Float64, Int64}, ::Int64, ::Vector{Complex}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ KrylovKit ~/.julia/packages/KrylovKit/diNbc/src/matrixfun/expintegrator.jl:102
[7] expintegrator
@ ~/.julia/packages/KrylovKit/diNbc/src/matrixfun/expintegrator.jl:98 [inlined]
[8] #exponentiate#82
@ ~/.julia/packages/KrylovKit/diNbc/src/matrixfun/exponentiate.jl:79 [inlined]
[9] exponentiate(A::SparseMatrixCSC{Float64, Int64}, t::Int64, v::Vector{Complex})
@ KrylovKit ~/.julia/packages/KrylovKit/diNbc/src/matrixfun/exponentiate.jl:79
[10] top-level scope
@ In[17]:4
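For reference, the same call with a concrete element type, which is presumably what was intended and should avoid the UndefRefError:
using KrylovKit, SparseArrays
H = sprand(1000, 1000, 0.001)
v = zeros(ComplexF64, 1000)    # concrete eltype instead of the abstract `Complex`
v[1] = 1
exponentiate(H, 1, v)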
Hi Jutho!
I'm not sure if this is a bug, but I noticed that eigsolve() does not always return normalized eigenvectors. Are the eigenvectors supposed to be normalized? I couldn't see anything about this in my cursory glance at the docs.
Anyway, I am giving eigsolve() a real, nonsymmetric sparse matrix (Julia's built-in sparse type) and the eigenvectors have norm 1 or sqrt(2). If I make the element type of the matrix complex (without changing the values), all eigenvectors have norm 1.
For more detail, I was just diagonalizing the Ising spin chain (as one does...) via T*H, where T is a translation by one site and H is the Hamiltonian. The translation-invariant states (with real eigenvalues) had norm 1, the others had norm sqrt(2).
Ash
PS: Thanks for all the cool code you're putting out at the moment. Much appreciated! 🥇
I notice that sometimes eigsolve returns incorrect results. Below you can find a minimal example:
Download error.jld2 and run:
using JLD2
using KrylovKit
using Arpack
@load "error.jld2" A
println("Real part of rightmost eigevalue")
println(string("Direct: ", maximum(real(eigvals(A)))))
λ, V, info = eigsolve(A, howmany=1, which=:LR)
println(string("KrylovKit: ", maximum(real(λ))))
λ, V, _ = eigs(A, nev=1, which=:LR)
println(string("Arpack: ", maximum(real(λ))))
which produces the following output
Real part of rightmost eigevalue:
Direct: 0.15369067701327851
KrylovKit: -10.02637671173757
Arpack: 0.15369067701327957
on:
Julia version: 1.0.1
KrylovKit.jl version: latest master (28d74fd)
Apologies for reporting issues without participating in fixing them. I hope that, in the future, I will be able to help more.
Hi,
I'm a big fan of the package. Being able to solve eigenvalue problems without explicit vectors and matrices is awesome. Thank you very much!
I was wondering about the possibility to support the LOBPCG (locally optimised block preconditioned conjugate gradient) method in KrylovKit. See https://en.wikipedia.org/wiki/LOBPCG
My use case is that I would like to solve really large matrix-free problems (exact diagonalisation of some quantum Hamiltonian; matrix elements are computed on-the-fly) with (MPI-)distributed "vectors" that are indexed in a fancy way. KrylovKit works very well, but memory usage becomes a bottleneck. I read about LOBPCG and it seems like it might have an advantage over Lanczos/Arnoldi in requiring less memory:
It also seems to be good with clustered/degenerate eigenvalues when the block size is large enough, which might help with #57.
A Julia implementation of LOBPCG is available in IterativeSolvers.jl. It requires actual Vectors, though - hence my suggestion to add the method to KrylovKit.jl.
Fwiw, testing on a 90_000 x 90_000 (ordinary) sparse matrix, KrylovKit's eigsolve was slightly faster (but with ~20x more memory allocated: 214.506 MiB) than lobpcg without preconditioning (10.034 MiB allocated). With a simple diagonal preconditioner, lobpcg was about twice as fast.
GMRES often fails to converge when restart is less than the problem dimension, but will return without indicating failure.
This failure (likely due to stagnation) seems to depend on the absolute difference between the problem dimension and the Krylov subspace dimension (restart), i.e. roughly 30% of the time GMRES will fail silently on a problem of dimension 25 if restart = 24, and will still fail 30% of the time with a problem of dimension 100 if restart = 99. For a difference in dimension of 5, the test below returns an incorrect value more than 90% of the time.
Reproducing code:
import KrylovKit: linsolve, GMRES
import LinearAlgebra: cond, norm
for problem_dim in [25, 100]
for excess_dim in [0, 1, 5]
succeeded = 0
restart = problem_dim - excess_dim
for i = 1:100
condition_number = Inf
matrix = nothing
while condition_number > 1000
matrix = randn(Float64, (problem_dim, problem_dim))
condition_number = cond(matrix)
end
true_vec = randn(Float64, (problem_dim,))
b = matrix * true_vec
x_0 = zero(b)
julia_soln, _ = linsolve(x->matrix*x, b, x_0, GMRES(krylovdim=restart))
residual_norm = norm(julia_soln - true_vec)
succeeded += (residual_norm < 1) # very loose tolerance!
end
println("problem dim: $problem_dim. restart dim: $restart. succeeded: $succeeded")
end
end
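For reference, linsolve also returns a ConvergenceInfo whose converged and normres fields can be inspected, though whether it actually flags these stagnated runs is precisely what is at issue above:
julia_soln, info = linsolve(x -> matrix * x, b, x_0, GMRES(krylovdim=restart))
info.converged == 1 || @warn "linsolve reports non-convergence" info.normres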
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!
Otherwise the package does not work on 1.8 or master, lacking ae53a89.
When using Arnoldi on some (badly scaled) problems, trexc! throws LAPACKException(1) with the following stacktrace:
[1] trexc!(::Int64, ::Int64, ::SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false}, ::SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false}) at /Users/nrontsis/KrylovKit.jl/src/dense/linalg.jl:415
[2] permuteschur!(::SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false}, ::SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false}, ::Array{Int64,1}) at /Users/nrontsis/KrylovKit.jl/src/dense/linalg.jl:335
[3] _schursolve(::Function, ::Array{Float64,1}, ::Int64, ::Symbol, ::Arnoldi{ModifiedGramSchmidt2,Float64}) at /Users/nrontsis/KrylovKit.jl/src/eigsolve/arnoldi.jl:184
[4] eigsolve(::Function, ::Array{Float64,1}, ::Int64, ::Symbol, ::Arnoldi{ModifiedGramSchmidt2,Float64}) at /Users/nrontsis/KrylovKit.jl/src/eigsolve/arnoldi.jl:108
[5] #eigsolve#41 at /Users/nrontsis/KrylovKit.jl/src/eigsolve/eigsolve.jl:163 [inlined]
[6] #eigsolve at ./none:0 [inlined]
[7] #eigsolve#40 at /Users/nrontsis/KrylovKit.jl/src/eigsolve/eigsolve.jl:146 [inlined]
[8] #eigsolve at ./none:0 [inlined] (repeats 2 times)
My understanding is that trexc! reorders a Schur factorization. However, the Q matrix passed to trexc! is not orthogonal. Is this expected?
Julia version: 1.0.1
KrylovKit.jl version: latest master (28d74fd)
Hi Jutho,
Thanks for the nice package. I wanted to point out a small bug I came across when requesting multiple degenerate eigenvalues:
using KrylovKit
using LinearAlgebra
eigs = [1; 1; 0; 0]
U = qr(randn(4, 4)).Q
M = U*Diagonal(eigs)*U'
@show eigsolve(Hermitian(M), 2, :LM)
@show eigsolve(Hermitian(M), 2, :SR)
results in:
eigsolve(Hermitian(M), 2, :LM) = ([1.0000000000000004, -5.204170427930421e-18], [[-0.7528788248393644, -0.11875011251302399, 0.4496317483253351, 0.4657286514533373], [-0.45690159127657226, -0.5426827326209265, -0.21223585437236434, -0.6720805976390425]], ConvergenceInfo: 2 converged values after 1 iterations and 2 applications of the linear map;
norms of residuals are given by (2.5979283617878776e-16, 2.3812210978860583e-17).
)
eigsolve(Hermitian(M), 2, :SR) = ([2.7755575615628914e-17, 1.0000000000000004], [[-0.4451190535974025, -0.5638946680346222, -0.20606235129617, -0.6644020912557167], [0.5867198946724742, 0.19434763988529719, 0.3940429084822651, -0.6802344789419107]], ConvergenceInfo: 2 converged values after 1 iterations and 2 applications of the linear map;
norms of residuals are given by (1.5430643551413483e-16, 1.960324084219161e-16).
)
Asking for 1, 3 or 4 eigenvectors appears to work. This is on v0.5.2.
Nothing urgent, thought I'd bring it to your attention.
-Matt
Hi,
I was trying to use geneigsolve to solve a generalized eigenvalue problem. So I did a simple test, but it does not seem to work properly.
A = rand(10,10); A = A*A'; B = rand(10,10); B = B*B'; geneigsolve(A,B,isposdef=true,ishermitian=true,issymmetric=true)
Then it gives me error:
ERROR: MethodError: objects of type Array{Float64,2} are not callable
Use square brackets [] for indexing an Array.
Is it a bug or did I do something wrong? Thanks very much.
Best regards
Wangwei
I suggest to add KrylovKit.jl to https://github.com/JuliaLinearAlgebra/
I am using KrylovKit to solve for the eigenvalue with the largest real part, as well as its left and right eigenvectors.
Is there a way for KrylovKit.eigsolve to return both the left and right eigenvectors? For now I call eigsolve twice, once on the matrix and a second time on its transpose, but there must be a better way.
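A sketch of the two-call approach described above (the biorthonormalization step is my own addition; the matrix is a hypothetical placeholder):
using KrylovKit, LinearAlgebra
A = rand(100, 100)                           # positive entries, so the dominant eigenvalue is real (Perron-Frobenius)
valsR, rightvecs, _ = eigsolve(A, 1, :LR)    # right eigenvectors of A
valsL, leftvecs, _  = eigsolve(A', 1, :LR)   # right eigenvectors of A' are left eigenvectors of A
v, w = rightvecs[1], leftvecs[1]
v = v / dot(w, v)                            # biorthonormalize so that w' * v = 1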