
iterativesolvers.jl's Introduction

Iterative Solvers


IterativeSolvers is a Julia package that provides iterative algorithms for solving linear systems, eigensystems, and singular value problems.


Installing

To install the package, open the package manager in the REPL via ] and run

pkg> add IterativeSolvers

iterativesolvers.jl's People

Contributors

andreasnoack, ararslan, chrisrackauckas, deltaeecs, dependabot[bot], femtocleaner[bot], forgotten, fredrikekre, github-actions[bot], h-larsson, haampie, iainnz, jasonpries, jiahao, jkrch, kshyatt, lopezm94, lostella, mohamed82008, mschauer, platawiec, ranocha, shivin9, spaette, stevengj, timholy, tkelman, tkonolige, vincentcp, viralbshah


iterativesolvers.jl's Issues

Ambiguous definition of *

When loading IterativeSolvers (in julia v0.3.5), the following warning is thrown:

julia> using IterativeSolvers
Warning: New definition 
    *(Any,srft) at /home/jaak/.julia/v0.3/IterativeSolvers/src/rlinalg.jl:301
is ambiguous with: 
    *(AbstractMatrixFcn{T},Any) at /home/jaak/.julia/v0.3/IterativeSolvers/src/common.jl:113.
To fix, define 
    *(AbstractMatrixFcn{T},srft)
before the new definition.

Two Possibilities for Speedup in CG

Hi there!

While playing around with our implementation of CG ( https://github.com/lruthotto/SimpleIterativeSolvers.jl ), we've discovered two possibilities for speeding up iterative methods for sparse matrices, which I would like to share here.

  1. To avoid allocating large vectors, we switched to using A_mul_B! directly. In our (limited) experience this can save around 10-15% of computation time.

  2. When profiling our code we've found that the following operations

x += alpha*p    # equivalent to: BLAS.axpy!(n,alpha,p,1,x,1) 

or

r -= alpha*Ap   # equivalent to: BLAS.axpy!(n,-alpha,Ap,1,r,1)

or

p = z + beta*p  # equivalent to  BLAS.axpy!(n,1.0,z,1,BLAS.scal!(n,beta,p,1),1)

take a lot of time. However, for some strange reason I could so far only replace the first one; otherwise I run into numerical instabilities. If anybody can explain this to me, I would be very happy. Anyway, just replacing the first one gives another 10-15% speedup; see the in-place sketch below.
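For concreteness, the in-place pattern looks roughly like this (a sketch in the v0.3/v0.4-era API the post refers to; rho is the usual dot(r, r) captured before r is updated):

Ap = similar(b)                       # preallocate once, outside the loop
# inside each CG iteration:
A_mul_B!(Ap, A, p)                    # Ap = A*p without allocating
alpha = rho / dot(p, Ap)
BLAS.axpy!(n, alpha, p, 1, x, 1)      # x += alpha*p
BLAS.axpy!(n, -alpha, Ap, 1, r, 1)    # r -= alpha*Ap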

I hope this remark is useful in improving the CG in this package. If you are interested in more detail, I put a small comparison of the CG implementations in our git repository (benchmark/benchmarkCG.jl).

Cheers,
Lars

cg returns incorrect result?

Hi all

I'm trying to use this package, and below are my experiments:

using IterativeSolvers
julia> A = rand(10,10);

julia> b = rand(10);

julia> x = cg(A,b)[1];

julia> A*x-b
10-element Array{Float64,1}:
 -28.5586
 -86.5707
 -55.2936
 -85.9302
 -63.0426
 -26.0243
 -86.1318
 -26.6518
 -69.1788
  27.4126

Is there anything wrong? The result is a little confusing; it should be nearly zero, I think. Any explanation would be helpful, thanks!
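For what it's worth, cg assumes A is symmetric positive definite, and a random dense matrix is almost surely neither, so the iteration is not expected to converge here. A minimal well-posed variant of the experiment (A'*A + 10*eye(10) is just one way to construct an SPD matrix):

using IterativeSolvers
A = rand(10,10);
As = A'*A + 10*eye(10);  # symmetric positive definite by construction
b = rand(10);
x = cg(As, b)[1];
norm(As*x - b)           # now near zero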

svdl gives inaccurate results on top singular value/vector

For small problem sizes, svdl gives very inaccurate estimates of the top singular vector and value when only the top singular triplet is requested:

julia> A = randn(10,12);

julia> u,s,v = svd(A);

julia> usv, pf = svdl(A, 1, vecs=:both);

julia> usv.S - s[1]
1-element Array{Float64,1}:
-0.254325

julia> usv.U - u[:,1]
10x1 Array{Float64,2}:
-0.267003
0.0976195
0.103981
0.104208
0.053799
-0.0442081
-0.0124698
0.0776354
0.0219778
0.091303
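A workaround that often helps Lanczos-type methods (not verified against svdl's internals) is to request a few extra triplets and keep only the top one:

usv, pf = svdl(A, 3, vecs=:both)  # ask for 3 triplets, use only the first
usv.S[1] - s[1]                   # typically far smaller now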

Roadmap to accepting LinearMaps as arguments

@Jutho's LinearMaps.jl offers an elegant uniform interface to linear operations as represented by dense matrices, sparse matrices and functions (and possibly other things I haven't looked at yet).

It would be nice to accept LinearMaps as input to the iterative algorithms here, which would eliminate checks in the algorithms for whether to apply or multiply.

Realistically I think it would make sense to simply deprecate the checks for Function inputs and assume that premultiplication is supported.

A wrinkle in this scheme would be how to specify preconditioning operations. Currently M \ b is used in code that specifies preconditioning using a matrix M.
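For a sense of the user side, a sketch with LinearMaps.jl's function-map constructor (the cg call is the aspiration of this issue, not something that works today):

using LinearMaps

f!(y, x) = (cumsum!(y, x); y)                   # some in-place linear action
L = LinearMap{Float64}(f!, 10; ismutating=true) # carries eltype and size
# x = cg(L, b)                                  # what this issue proposes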

Incorrect `gmres`?

In the following example

import IterativeSolvers
n=30;
A = randn(n,n); A= A.+A'; 
x = rand(n);
b = A*x; 
A*IterativeSolvers.gmres(A, b) .- b

It only works for n less than 20. This is the case for both v0.2.2 and master.

Am I using it the wrong way?

Integration with Plots.jl?

I think it would be interesting to supply a recipe for the ConvergenceHistory type. This recipe would make it easy to plot the convergence history via plot(ch). The only dependency would be RecipesBase which is really small, and recipe codes are usually short as well. It would be a small convenience change that could help the Julia ecosystem work more fluidly.
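Such a recipe could be as small as this sketch (it assumes ConvergenceHistory stores its residual norms in a field named residuals):

using IterativeSolvers, RecipesBase

@recipe function f(ch::ConvergenceHistory)
    yscale --> :log10
    xguide --> "iteration"
    yguide --> "residual norm"
    ch.residuals
end

# afterwards, plot(ch) just works once Plots is loaded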

don't overtype

The arguments should not be AbstractMatrix types, since that implies access to the elements. The argument should be untyped ("duck-typed"), and work for any type T supporting *(A::T, v::AbstractVector) and possibly size(A::T) (although for linear solvers this is not needed, since the size can be inferred from the right-hand-side vector).
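For illustration, a fully duck-typed operator needs nothing more than those methods (ScaledIdentity is a made-up example type):

struct ScaledIdentity
    c::Float64
end
Base.:*(S::ScaledIdentity, v::AbstractVector) = S.c * v
# size is only needed when the dimension cannot be inferred from b,
# e.g. for eigenproblems.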

Support for non-Vector v

Is it the case that while the linear operator may be a custom type, v must be a vector? I am implementing a shared-memory code that would be much more efficient if I were able to iterate between a custom type and a v::SharedArray rather than a v::Vector.

If this is not possible, is it something that the developers are interested in implementing? It is possible that I could work on it.

What is the status of eigensolvers in this package?

I would like to use this package for eigenproblems. Which eigenfactorizations are working? Which are tested? I see a branch arpackgobyebye which looks like it has both ERAM and IRAM, but it hasn't been updated or merged since 2014.

Implementation roadmap

A comprehensive listing of methods in the references.
Please remove any methods that are impractical or obsolete and add methods worth implementing to the list.

Linear systems

  • Stationary methods
    • Jacobi (LT)
    • Gauss-Seidel (LT)
    • Successive overrelaxation, SOR (LT)
    • Symmetric SOR (LT)
  • Nonstationary methods
    • GMRES (LT, Saa)
    • MINRES (LT)
    • Conjugate Gradients, CG (LT)
    • LSQR
    • LSMR
    • BiCGStab(l) (Sle)
    • Chebyshev iteration (LT)
    • Quasi-Minimal Residual, QMR (LT)
    • DQGMRES

(Generalized) eigenproblems

  • Without subspace
    • (Shift-and-inverted) power iteration (ET)
    • Rayleigh Quotient Iteration [1]
  • Krylov subspace methods
    • Implicitly restarted Lanczos (ET)
    • Implicitly restarted Arnoldi (ET)
  • Other subspace methods
    • Jacobi-Davidson method (ET)
    • LOBPCG
    • Accelerated Rayleigh Quotient Iteration [1]
    • (Shift-and-inverted) subspace iteration (ET)

Singular value decomposition

  • Golub-Kahan-Lanczos (ET)

References

[1] Low-priority methods: they are quite expensive per iteration.

Docs incorrect for `cg`

From the README:

All linear solvers have a common function declaration
solver(A, b::Vector, [x, Pl, Pr, varargs...]; tol::Real, maxiter::Int, kwargs...)

cg starts from zero and doesn't take the x input---if you try to supply it, you get a cryptic error:

julia> cg(A, rand(5), rand(5))
ERROR: DimensionMismatch("dot product arguments have lengths 5 and 1")
 in dot at linalg/blas.jl:153
 in dot at linalg/generic.jl:286
 in cg! at /home/tim/.julia/v0.4/IterativeSolvers/src/cg.jl:18
 in cg! at /home/tim/.julia/v0.4/IterativeSolvers/src/cg.jl:8
 in cg at /home/tim/.julia/v0.4/IterativeSolvers/src/cg.jl:3

Is starting at zero important for cg, or should one be able to supply x?
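The cg! visible in the stack trace hints at the intended spelling for a warm start; a sketch (assuming cg! takes the iterate as its first argument, as the trace suggests; eye is the v0.4-era identity):

B = rand(5,5); A = B'*B + 5*eye(5)   # SPD, so cg applies
b = rand(5)
x0 = rand(5)
cg!(x0, A, b)                        # iterate in place from the supplied x0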

https://github.com/mschauer/IDRsSolver.jl

@astudillor and I ported a basic Induced Dimension Reduction solver to Julia. @tkelman mentioned IterativeSolvers, and medium-term I think this could be incorporated into this package. Anyway, I harmonized the interface mostly, modulo issues of how generic linear operators are treated.

Stack overflow in cg

Just tried out the test script for CG. It returns a stack overflow.

ERROR: stack overflow

 in KrylovSubspace at /Users/larsruthotto/Dropbox/Julia/Modules/IterativeSolvers/src/krylov.jl:25 (repeats 80000 times)
while loading /Users/larsruthotto/Dropbox/Julia/Modules/IterativeSolvers/test/cg.jl, in expression starting on line 8

I guess this is associated with the KrylovSubspace type. Since we do not understand types well enough to see their benefits, @ehaber99 and I created a new package called SimpleIterativeSolvers (https://github.com/lruthotto/SimpleIterativeSolvers.jl) in which we have so far managed to avoid creating our own types.

Slow convergence of Chebyshev iteration

I implemented the Chebyshev iteration algorithm from Linear Templates, but it's not clear to me why its convergence is so slow. The tests for it require an extremely large number of iterations to achieve minimal convergence, even when the exact range of eigenvalues is known.

It may be worth implementing a more sophisticated form of the Chebyshev iteration that uses higher-order Chebyshev recurrences.
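For reference, the textbook recurrence is short; a sketch following Saad's Iterative Methods for Sparse Linear Systems (Alg. 12.1), assuming A is SPD with eigenvalues in [lmin, lmax] and a zero initial guess:

function chebyshev_ref(A, b, lmin, lmax; maxiter=100)
    θ = (lmax + lmin)/2; δ = (lmax - lmin)/2
    σ = θ/δ
    ρ = 1/σ
    x = zeros(eltype(b), length(b))
    r = copy(b)               # r = b - A*x with x = 0
    d = r/θ                   # first search direction
    for k = 1:maxiter
        x += d
        r -= A*d
        ρ_new = 1/(2σ - ρ)
        d = ρ_new*ρ*d + (2ρ_new/δ)*r
        ρ = ρ_new
    end
    x
end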

How to supply custom MatVec function to eigvals_lanczos?

After reading the documentation and the source code (common.jl, krylov.jl, and lanczos.jl), I still have no clue how to supply a custom matrix-vector multiplication function to eigvals_lanczos.

I only figured out how to define linear operators as functions, such as A = MatrixFcn{Float64}(4, 4, mulbyA!). Although this works as matrix-vector multiplication, when passed to eigvals_lanczos(A) it throws the error message:
"KrylovSubspace{T,OpT} has no method matching KrylovSubspace{T,OpT}(::MatrixFcn{Float64}, ::Int64)".

I would greatly appreciate any help with this.

gmres fails to achieve tolerance

I compare the gmres in this package to the one in SimpleIterativeSolvers (https://github.com/lruthotto/SimpleIterativeSolvers.jl)

using IterativeSolvers
using SimpleIterativeSolvers

A  = sprandn(10000,10000,.01) + 10*speye(10000);
rhs = randn(10000);
x1=IterativeSolvers.gmres(A,rhs,restart=5,max_it=1000,tol=1e-1);
x2=SimpleIterativeSolvers.gmres(A,rhs ,5,tol=1e-1,maxIter=1000);

println("resnorm IterativeSolvers.gmres: ", norm(A*x1-rhs)/norm(rhs))
println("resnorm SimpleIterativeSolvers.gmres: ", norm(A*x2[1]-rhs)/norm(rhs))

resnorm IterativeSolvers.gmres: 9.05749960154009e6
resnorm SimpleIterativeSolvers.gmres: 0.09998687461581612

I actually tried this a number of times, but could not get satisfactory results from IterativeSolvers.gmres. Is it a bug or my fault?

orthogonalization methods

Just some minor nitpicking:

The authors of the paper you linked to are not saying that MGS and Householder QR are numerically equivalent. But I guess implementing the orthogonalization with Householder would be a bit tedious.

Since classical GS is less stable than MGS it is not really necessary to have it as an option.

Strange error in jacobi!

Hi, while trying this package with some abstract matrices, I encountered the following error

julia> IterativeSolvers.jacobi(K,R)
ERROR: LoadError: UndefVarError: a not defined
 in jacobi!(::Array{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},1}, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}) at /Users/rgates/.julia/v0.5/IterativeSolvers/src/stationary.jl:15
 in #jacobi#22(::Array{Any,1}, ::Function, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}) at /Users/rgates/.julia/v0.5/IterativeSolvers/src/stationary.jl:9
 in jacobi(::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}) at /Users/rgates/.julia/v0.5/IterativeSolvers/src/stationary.jl:9
 in include_from_node1(::String) at ./loading.jl:488
 in include_from_node1(::String) at /Applications/Julia.app/Contents/Resources/julia/lib/julia/sys.dylib:?
while loading /Users/rgates/Documents/GitHub/ElementalExt.jl/test/testassembler.jl, in expression starting on line 829

Debugging line 15 with Gallium gives:

1|julia > (plot & !log) && error("Can't plot when log keyword is false")
ERROR: UndefVarError: plot not defined
 in _step_expr(::ASTInterpreter.Interpreter) at /Users/rgates/.julia/v0.5/ASTInterpreter/src/ASTInterpreter.jl:653
 in step_expr at /Users/rgates/.julia/v0.5/ASTInterpreter/src/ASTInterpreter.jl:666 [inlined]
 in #finish!#81(::Bool, ::Bool, ::Function, ::ASTInterpreter.Interpreter) at /Users/rgates/.julia/v0.5/ASTInterpreter/src/ASTInterpreter.jl:1742
 in eval_in_interp(::ASTInterpreter.Interpreter, ::Expr, ::JuliaParser.Tokens.SourceExpr, ::String) at /Users/rgates/.julia/v0.5/ASTInterpreter/src/ASTInterpreter.jl:1299
 in eval_code(::ASTInterpreter.InterpreterState, ::String) at /Users/rgates/.julia/v0.5/ASTInterpreter/src/ASTInterpreter.jl:1551
 in (::ASTInterpreter.##72#78{ASTInterpreter.InterpreterState,Base.LineEdit.Prompt})(::Base.LineEdit.MIState, ::Base.AbstractIOBuffer{Array{UInt8,1}}, ::Bool) at /Users/rgates/.julia/v0.5/ASTInterpreter/src/ASTInterpreter.jl:1676
 in run_interface(::Base.Terminals.TTYTerminal, ::Base.LineEdit.ModalInterface) at ./LineEdit.jl:1579
 in run_interface(::Base.Terminals.TTYTerminal, ::Base.LineEdit.ModalInterface) at /Applications/Julia.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 in RunDebugREPL(::ASTInterpreter.Interpreter) at /Users/rgates/.julia/v0.5/ASTInterpreter/src/ASTInterpreter.jl:1715
 in (::Gallium.##112#114)(::IterativeSolvers.#jacobi!, ::Array{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},1}, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}) at ./<missing>:0
 in #jacobi#22(::Array{Any,1}, ::Function, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}) at /Users/rgates/.julia/v0.5/IterativeSolvers/src/stationary.jl:9
 in jacobi(::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}, ::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64}) at /Users/rgates/.julia/v0.5/IterativeSolvers/src/stationary.jl:9

1|debug > fr v
[1] jacobi!(x, A, b) at /Users/rgates/.julia/v0.5/IterativeSolvers/src/stationary.jl:15
  | #self#::IterativeSolvers.#jacobi! = IterativeSolvers.jacobi!
  | x::Array{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},1} = <suppressed 5177 bytes of output>
  | A::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64} = <suppressed 43813 bytes of output>
  | b::SparseMatrixCSC{ElementalExt.DistributedBlockMatrices.Block{Array{Float64,2}},Int64} = <suppressed 6423 bytes of output>

Where are the default arguments and why does this appear to occur only with my matrix type? IterativeSolvers passed all of its tests.

Simple example for Matrix-free eigenvalues

Hi,

Would it be possible to provide a simple example of a matrix-free eigenvalue problem?

For example, the following does not work:

using IterativeSolvers
function shiftback!(output, b)
    n = length(b)
    length(output) == n || throw(DimensionMismatch())
    for i = 1:n-1
        output[i+1] = b[i]
    end
    output[1] = b[n]
    output
end
A = IterativeSolvers.MatrixFcn{Int}(15, 15, shiftback!)
IterativeSolvers.eigvals_power(A)

Thank you,

compatibility with MPI

Hey, I just wanted to ask about the feasibility of using this library as a replacement for a large-scale parallel solver like PETSc or hypre. Looking at the code for cg.jl it seems like all that is required would be to 1) implement a KrylovSubspace type with a nextvec method, and 2) implement a parallel version of dot. Has anyone used this library with MPI.jl or DistributedArrays.jl?
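For what it's worth, the reduction is the easy part; a sketch with MPI.jl (pardot is a hypothetical helper, and each rank is assumed to hold its own slice of the vectors):

using MPI, LinearAlgebra

pardot(xlocal, ylocal, comm) = MPI.Allreduce(dot(xlocal, ylocal), +, comm)

# every dot(x, y) inside CG becomes pardot(xlocal, ylocal, comm);
# the matvec then only needs whatever halo exchange A itself requires.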

Support for block Krylov methods

I was wondering whether you think it's necessary to support arbitrarily sized inputs. For example, instead of just allowing matrix-vector products, allow matrix-matrix products in the normal interpretation. This can come up in solving PDEs (see DifferentialEquations.jl). Currently the issue with the CG and GMRES functions is due to their use of the dot function, and I could change it up if people think this is the right way to go; a sketch of the generalized inner product follows below.
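For block right-hand sides X and Y of size n×s, the scalar dot generalizes to an s×s block inner product, along these lines:

blockdot(X::AbstractMatrix, Y::AbstractMatrix) = X'*Y   # s×s block "dot"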

Restriction on square linear systems

The README mentions:

operator A that maps n-dimensional vectors to n-dimensional vectors...

Is this correct and the package is restricted to square linear systems? Or are non-square systems also within the scope of this package?

ERROR: MethodError: `one` has no method matching one(::Type{Any}

I'm getting an error message from IterativeSolvers.jl that I don't understand. I've pasted a minimal working example below. On the last line, I call cg with a data type that supports right multiplication, and get the error

julia> cg(T_fft, z)
ERROR: MethodError: `one` has no method matching one(::Type{Any})
 in cg at /home/jeff/.julia/v0.4/IterativeSolvers/src/cg.jl:3 (repeats 2 times)

Here is the full example:

using IterativeSolvers

n = 10
v = randn(n);
v[1] += 1.

T = Array(Float64, n, n);
for i in 1:n, j in 1:n
    T[j, i] = v[abs(i - j) + 1]
end

type SymmetricToeplitz
    v_fft::Vector{Complex{Float64}}

    SymmetricToeplitz(v::Vector{Float64}) = begin
        v2 = [v; v[n:-1:2]];
        new(fft(v2))
    end
end

import Base.*

function *(mat::SymmetricToeplitz, x::Vector{Float64})
    n = length(x)
    x2 = [x; zeros(n-1)]
    real(ifft(mat.v_fft .* fft(x2)))[1:n]
end

T_fft = SymmetricToeplitz(T[:,1])

z = randn(n);

T * z  # works
T_fft * z  # works and matches previous result
T^-1 * z  # works
cg(T, z)  # works and matches previous results
cg(T_fft, z)  # ERROR: MethodError: `one` has no method...

Thanks in advance for any assistance!
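The failure likely comes from eltype(T_fft) falling back to Any, which makes one(Any) blow up inside cg's type computations; a hedged workaround is to declare the missing traits for the wrapper (using the field layout from the example above, where v_fft has length 2n-1):

Base.eltype(::Type{SymmetricToeplitz}) = Float64
Base.size(mat::SymmetricToeplitz, i::Int) = (length(mat.v_fft) + 1) ÷ 2  # = n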

GSOC 14

Hi
I'm an MS student of applied stochastics with a BS in pure math and would like to pick this as a GSOC project. As a first try, I implemented the bicgstab algorithm:
https://github.com/krz/julia-numalg/blob/master/bicgstab.jl
It is working but not ready for commit as I don't really have the hang of the Krylov type yet. Let me know what you think.

Travis integration

Now that we have some code and tests here, we should integrate travis-ci.

GMRES docs need to explain macro- and microiterations better

According to the docs, for every solver:

  • maxiter is the maximum number of allowed iterations, which conventionally defaults to length(b)

However this is not true (at least for gmres). Indeed, the default number of iterations is 1, which obviously very rarely converges (from https://github.com/JuliaLang/IterativeSolvers.jl/blob/6ab23802336ec7883965c93c6ff23ca33a735e29/src/gmres.jl):

gmres(A, b, Pl=1, Pr=1;
      tol=sqrt(eps(typeof(real(b[1])))), maxiter::Int=1, restart::Int=min(20,length(b))) =
    gmres!(zerox(A,b), A, b, Pl, Pr; tol=tol, maxiter=maxiter, restart=restart)
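In this version maxiter counts outer (restarted) cycles, so the default of 1 permits at most restart = min(20, length(b)) Arnoldi steps in total. A realistic call therefore presumably needs something like:

x = IterativeSolvers.gmres(A, b; tol=1e-8, maxiter=length(b), restart=20)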

Use correct tolerances on the solution vector in tests

The tests occasionally fail on random inputs. This is almost certainly due to not using the proper inequalities on the error bounds of the solution vector and/or residual vector. For many of these algorithms, these inequalities are known (or at least, for special cases), and we should use the known inequalities to test for correct convergence.
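For CG on an SPD matrix with condition number κ, the classical bound is ||x_k - x*||_A <= 2*((√κ - 1)/(√κ + 1))^k * ||x_0 - x*||_A; a test sketch built on it (current keyword-style cg call assumed):

using LinearAlgebra, IterativeSolvers

B = randn(50,50); A = B'*B + 50I    # SPD with moderate condition number
xstar = randn(50); b = A*xstar
k = 25
xk = cg(A, b; maxiter=k)
κ = cond(A)
Anorm(e) = sqrt(dot(e, A*e))
# x_0 = 0, so the initial error is xstar itself:
@assert Anorm(xk - xstar) <= 2*((√κ - 1)/(√κ + 1))^k * Anorm(xstar)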

Incorrect number of singular values

When working with small matrices, for example the 3 by 3 matrix in
svdl(sparse([3,3,2,2],[1,2,2,3],[1.0,2.0,3.0,2.4])), svdl will return 6 singular values instead of 3.

incorrect singular values from svdl?

svdl returns radically different answers than svd for the case generated by the test code below, assuming that the two algorithms should agree.

using IterativeSolvers
M = zeros(110,22);
M[:,1] = ones(110); M[:,2] = repeat(1:11,outer=10);
offset = 0; indx = 1:11;
for i = 3:2:21
	M[offset+indx,i] = ones(11);
	M[offset+indx,i+1] = indx;
	offset += 11;
end
svd(M)[2]
svdl(M)[1]
svdl(M,22)[1]
svdl(sparse(M),22)[1]

API for \

It would be nice to have IterativeSolvers.jl match the interface of other linear solvers so that \ will solve using one of these methods. A discussion on this can be found here:

https://discourse.julialang.org/t/unified-interface-for-linear-solving/699/5

If I'm not mistaken, all that would need to be done is the creation of types like K = CGSetup(A,opts...) (name pending of course) with a dispatch on \ which will call the correct iterative solver. This would then match using factorizations:

K = lufact(A)
x = K \ b

vs

K = CGSetup(A,args...)
x = K \ b

and will make it easy for package developers to incorporate IterativeSolvers.jl as a replacement for direct methods (without even requiring it as a dependency).
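A sketch of the proposed wrapper (CGSetup is the issue's placeholder name, not an existing type):

using IterativeSolvers

struct CGSetup{TA,TK}
    A::TA
    kwargs::TK
end
CGSetup(A; kwargs...) = CGSetup(A, kwargs)
Base.:\(K::CGSetup, b::AbstractVector) = cg(K.A, b; K.kwargs...)

# K = CGSetup(A; maxiter=1000)
# x = K \ b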

Ambiguity in README regarding form of linear operator

README states that "A is not explicitly typed, but must either be a KrylovSubspace or support multiplication * or function composition (apply) that behave as necessary to produce the correct mapping on the vector space".

This led me to believe that I could do something such as the following:

J = rand(10,100); b = rand(100);
A(x) = J'*(J*x)
cg(A, b)

But this fails as the package will eventually call eltype on the function A.

I have no problems using the solvers with an instance of MatrixFcn, but would like to understand if I have misunderstood the quoted text, or if it should be modified?

Return type of solvers?

The purpose of this issue is to discuss what should be returned by iterative solvers. Obviously the solution vector should be returned, but we should agree on what other data should be returned as a general principle.

There is a case to be made that it is worth returning additional data about the convergence of the solution and/or the iteration history. Examples of such data are:

  • isconverged flag (should be Bool, not numeric)
  • number of iterations taken
  • residual vector
  • values of convergence criteria, e.g. norm of residual vector
  • iteration history of the residual vector

Furthermore, should such additional information be returned as separate quantities in a tuple following the solution vector, or should everything be packaged into a new type?
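If packaging wins out, the container could be as simple as this illustrative sketch (names not final):

struct SolveInfo{T}
    isconverged::Bool
    iters::Int
    resnorm::T            # final value of the convergence criterion
    residuals::Vector{T}  # per-iteration residual-norm history
end

# solver(A, b) -> (x, info::SolveInfo)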

LSQR not working for complex numbers

I tried to use the lsqr method on a complex system of equations and it did not work because several functions are not available for complex numbers (eps, isless).

cg with a matrix-vector handle

Is it possible to do cg with a matrix-vector function? For example,

B = randn(3,3)
C(x) = B*x
x = cg( C, randn(3))
LoadError: MethodError: `one` has no method matching one(::Type{Any})
while loading In[204], in expression starting on line 3

 in cg at /home/juser/.julia/v0.4/IterativeSolvers/src/cg.jl:3 (repeats 2 times)
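As with the one(Any) report above, a bare Function carries no element type; wrapping it in MatrixFcn (constructor form as used elsewhere in these issues) sidesteps the error:

B = randn(3,3)
mulbyB!(y, x) = A_mul_B!(y, B, x)               # v0.4-era in-place matvec
Bop = IterativeSolvers.MatrixFcn{Float64}(3, 3, mulbyB!)
x = cg(Bop, randn(3))                           # note: cg still expects B to be SPD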

Reducing memory allocation

I completed a first pass through the code of this package. I would like to comment that, if speed/efficiency is going to be high on the focus list, excessive memory allocation should be avoided. One example is the orthogonalize routine, which will be the backbone of many solvers and will be called many times:

    elseif method == :ModifiedGramSchmidt || method== :Householder
        #The numerical equivalence of ModifiedGramSchmidt and Householder was
        #established in doi:10.1137/0613015
        cs = zeros(T, p)
        for (i, e) in enumerate(Kk)
            cs[i] = dot(v, e)
            v-= cs[i] * e
        end
    else

I strongly believe the v -= cs[i] * e line should call Base.LinAlg.BLAS.axpy! instead; see the sketch below.
This will of course modify v in place, so that orthogonalize should become orthogonalize!, which I think is fine. I would be happy to make this change if everybody agrees.
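Concretely, the loop would become (in-place sketch; v is mutated, hence the rename):

        for (i, e) in enumerate(Kk)
            cs[i] = dot(v, e)
            Base.LinAlg.BLAS.axpy!(-cs[i], e, v)   # v -= cs[i]*e, no allocation
        end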

On a side note, even though ModifiedGramSchmidt is numerically equivalent to some form of Householder orthogonalization applied to the matrix stacked with zeros on top of it (which is useful for getting good error bounds and better insight into how to modify existing algorithms), this doesn't imply that it is equivalent to Householder applied to the original matrix, nor that things like a second reorthogonalization round in MGS are useless. (Although I certainly agree that MGS is sufficient for many algorithms which only use small Krylov subspaces.)

Randomized algorithms roadmap

This issue tracks the implementation of randomized algorithms that have appeared in the literature.

Dixon, 1983 doi:10.1137/0720053:

  • Randomized condition number estimate (implemented as rcond)
  • Randomized extremal eigenvalue estimates (implemented as reigmax and reigmin)

Cheng, Gimbutas, Martinsson and Rokhlin, 2005 doi:10.1137/030602678

  • Interpolative decomposition (the four-term variant)

Liberty, Woolfe, Martinsson, Rokhlin, Tygert, 2007 doi:10.1073/pnas.0709640104

  • Randomized norm estimate (implemented as rnorms)
  • Interpolative decomposition (the two-term variant; currently a very hacky version is implemented in idfact)

Strohmer and Vershynin, 2009 doi:10.1007/s00041-008-9030-4:

  • randomized Kaczmarz algorithm

Halko, Martinsson and Tropp 2011 doi:10.1137/090771806

  • Lemma 4.1: Randomized norm estimate (implemented as rnorm)
  • Algorithm 4.1: randomized range finder (implemented as rrange)
  • Algorithm 4.2: adaptive randomized range finder (implemented as rrange_adaptive)
  • Algorithm 4.4: randomized subspace iteration (implemented as rrange_si)
  • Algorithm 4.5: fast randomized range finder (implemented as rrange_f)
  • Variant of Algorithm 4.5 using randomized Givens rotations
  • Algorithm 5.1: direct SVD (implemented as svdfact_restricted)
  • Algorithm 5.2: SVD via row extraction (implemented as svdfact_re)
  • Algorithm 5.3: direct eigenvalue decomposition (implemented as eigfact_restricted)
  • Algorithm 5.4: eigenvalue decomposition via row extraction (implemented as eigfact_re)
  • Algorithm 5.5: eigenvalue decomposition via Nyström method (implemented as eigfact_nystrom)
  • Algorithm 5.6: eigenvalue decomposition in one pass for Hermitian matrices (implemented as eigfact_onepass)
  • Algorithm 5.6: eigenvalue decomposition in one pass for non-Hermitian matrices (currently implemented in a hacky way in eigfact_onepass)

Meng, Saunders and Mahoney, 2014 doi:10.1137/120866580

  • LSRN - least squares solver based on randomized normal projections
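For orientation, Algorithm 4.1 (the randomized range finder) is only a few lines; a reference sketch in current Julia, not the package's rrange:

using LinearAlgebra

function rrange_ref(A, k::Int)
    Ω = randn(size(A, 2), k)   # Gaussian test matrix
    Y = A*Ω                    # sample the range of A
    Matrix(qr(Y).Q)            # orthonormal basis, size(A,1) × k
end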

Missing dependency on Docile

It seems that IterativeSolvers depends on Docile but does not declare it (at least Pkg.clone("...iterativeSolvers.jl.git") does not install it), and loading it later gives the following error in julia v0.3.5.

ERROR: Docile not found
 in require at loading.jl:47
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:51
while loading /home/jaak/.julia/v0.3/IterativeSolvers/src/IterativeSolvers.jl, in expression starting on line 6

rsvd giving wrong result when only one value is requested

For more than one value, things seem to be fine. See

julia> srand(123);

julia> A = randn(10,2)*randn(2,10);

julia> [IterativeSolvers.rsvdvals(A, 1, 10)[1] - svdvals(A)[1] for i = 1:10]
10-element Array{Any,1}:
 -0.100009   
 -1.1555     
 -1.00972    
 -0.581412   
 -0.935978   
 -0.0789618  
 -0.0809019  
 -0.407415   
 -0.000799146
 -1.03752    

julia> [IterativeSolvers.rsvdvals(A, 2, 10)[1] - svdvals(A)[1] for i = 1:10]
10-element Array{Any,1}:
  0.0        
 -3.55271e-15
  0.0        
 -1.77636e-15
  0.0        
  1.77636e-15
  0.0        
 -3.55271e-15
  0.0        
  0.0  

Change description

The current repository description reads "Implement Arnoldi and Lanczos methods for svds and eigs".
A better title would be something like the first sentence in the README "IterativeSolvers is a Julia package that provides flexible iterative algorithms for linear algebra."

Package ambiguity

Hello,

There is an ambiguity with Compose's measure.jl:

WARNING: New definition 
    *(#T<:Any, Type{Measures.Length{:cx, T<:Any}}) at /Users/Romain/.julia/v0.4/Compose/src/measure.jl:13
is ambiguous with: 
    *(IterativeSolvers.AbstractMatrixFcn, Any) at /Users/Romain/.julia/v0.4/IterativeSolvers/src/common.jl:118.
To fix, define 
    *(_<:IterativeSolvers.AbstractMatrixFcn, Type{Measures.Length{:cx, T<:Any}})
before the new definition.
WARNING: New definition 
    *(#T<:Any, Type{Measures.Length{:cy, T<:Any}}) at /Users/Romain/.julia/v0.4/Compose/src/measure.jl:14
is ambiguous with: 
    *(IterativeSolvers.AbstractMatrixFcn, Any) at /Users/Romain/.julia/v0.4/IterativeSolvers/src/common.jl:118.
To fix, define 
    *(_<:IterativeSolvers.AbstractMatrixFcn, Type{Measures.Length{:cy, T<:Any}})

Br,

Romain.

tests pass only on master, tag new version?

First of all, thank you very much for this package!
I noticed that the latest tagged version 0.2.2 fails on julia 0.5, while the tests on master pass. May I suggest tagging a new version?
Kind regards

lsqr() bug and/or unintuitive behavior

See below

module dev

using IterativeSolvers

srand(224)
aa=sprand(10,10,.5)
bb=rand(10)

ll=lsqr(aa,bb,maxiter=100)
println(norm(aa*ll[1]-bb))          #1.426850749393321e-9
println(ll[2].isconverged)          #false
println(norm(ll[2].residuals))    #1.2050621207683134

end #module

New API name conventions

With the new API coming, the names of some methods and keyword arguments are going to change. These are all preliminary, so feel free to share alternatives. Here are some of the changes:

Method names:
  • eigvals_power => powm
  • eigvals_ii => invpowm
  • eigvals_rqi => rqi
  • gauss_seidel => gs
  • eigvals_lanczos => eiglancz
Keyword arguments:

In the new API all the optional positional arguments are now going to be keyword arguments. Some of the new keyword names:

  • shift: For inverse iteration and Rayleigh quotient iteration
  • nsv: For number of singular values requested in svdl
  • The rest have the same name but are now keywords
Dictionary keys:

It is also worth discussing the name conventions in ConvergenceHistory, since it will most likely have an inner dictionary for tolerances and the rest of the information. Some names for tolerances could be reltol, abstol and so on, instead of atol, btol, ctol or other non-explicit naming; the same goes for norms (resnorm, Anorm, etc.). If two of them repeat, they will get a number as a suffix.

@lruthotto @dpo

Preconditioner: P^{-1} and not P

In some applications (e.g., the Boundary Element Method, BEM), it is possible to directly build an approximation of the inverse, and then Pl\(A*(Pr\x)) (e.g. l.81 in gmres.jl) does not make sense.

For example, the classical operator preconditioning of the integral single layer operator V is the integral double layer W, i.e., the preconditioned system reads W*V*x = W*b, and both are dense operators. Therefore it is not possible to use this version of gmres, since one would have to provide inv(W), which is clearly not accessible.

It seems more coherent to take a linear operator as the preconditioner, such that Ml*(A*(Mr*x)) approximates the identity, rather than Pl\(A*(Pr\x)). Otherwise it is confusing.

The user should be free to evaluate the result of P\ however he wants, and should only have to provide that action, not P itself. The use of \ is restrictive.
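Until the interface changes, a thin adapter can express "I already have the inverse" in the P\x convention the solvers use (InverseOp is a hypothetical name):

using LinearAlgebra

struct InverseOp{T}
    W::T   # operator approximating inv(A); applied by multiplication
end
Base.:\(P::InverseOp, x) = P.W*x
LinearAlgebra.ldiv!(y, P::InverseOp, x) = mul!(y, P.W, x)

# e.g. gmres(A, b; Pl = InverseOp(W)) then applies W by multiplication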
