
sparse-linear-algebra's Introduction

sparse-linear-algebra

Numerical computation in native Haskell


This library provides common numerical analysis functionality, without requiring any external bindings. It aims to serve as an experimental platform for scientific computation in a purely functional setting.

State of the library

Mar 14, 2018: Mostly functional, but there are still a few (documented) bugs. Complex number support is still incomplete, so users are advised not to rely on it for the time being. The issues related to Complex number handling are tracked in #51, #12 and #30.

News

Oct 7, 2017: The library is evolving in a number of ways, to reflect performance observations and user requests:

  • The typeclasses and instances for primitive types will be split out into sparse-linear-algebra-core, along with a typeclass-oriented reformulation of the numerical algorithms that used to depend on the nested IntMap representation. This will let other developers build on top of this library, in the spirit of vector-space and linear.

  • The vector-based backend is being reworked.

  • An accelerate-based backend is under development [6, 7].

Contents

  • Iterative linear solvers (<\>)

    • Generalized Minimal Residual (GMRES) (non-Hermitian systems)

    • BiConjugate Gradient (BCG)

    • Conjugate Gradient Squared (CGS)

    • BiConjugate Gradient Stabilized (BiCGSTAB) (non-Hermitian systems)

    • Moore-Penrose pseudoinverse (pinv) (rectangular systems)

  • Direct linear solvers

    • LU-based (luSolve); forward and backward substitution (triLowerSolve, triUpperSolve)

  • Matrix factorization algorithms

    • QR (qr)

    • LU (lu)

    • Cholesky (chol)

    • Arnoldi iteration (arnoldi)

  • Eigenvalue algorithms

    • QR (eigsQR)

    • QR-Arnoldi (eigsArnoldi)

  • Utilities: vector and matrix norms, matrix condition number, Givens rotation, Householder reflection

  • Predicates: matrix orthogonality test (A^T A ~= I)

Under development

  • Eigenvalue algorithms

    • Rayleigh quotient iteration (eigRayleigh)

  • Matrix factorization algorithms

    • Golub-Kahan-Lanczos bidiagonalization (gklBidiag)

    • Singular value decomposition (SVD)

  • Iterative linear solvers

    • Transpose-Free Quasi-Minimal Residual (TFQMR)

Examples

The module Numeric.LinearAlgebra.Sparse contains the user interface.

Creation of sparse data

The fromListSM function creates a sparse matrix from a collection of its entries in (row, column, value) format. This is its type signature:

fromListSM :: Foldable t => (Int, Int) -> t (IxRow, IxCol, a) -> SpMatrix a

and, in a running GHCi session (whose prompt is denoted from now on by λ>), you can try something like this:

λ> amat = fromListSM (3,3) [(0,0,2),(1,0,4),(1,1,3),(1,2,2),(2,2,5)] :: SpMatrix Double

Similarly, fromListSV is used to create sparse vectors:

fromListSV :: Int -> [(Int, a)] -> SpVector a

Alternatively, the user can copy the contents of a list to a (dense) SpVector using

fromListDenseSV :: Int -> [a] -> SpVector a
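
For example, the right-hand side vector used in the linear-system example further down can be built either way; the two expressions below should produce the same vector:

λ> bs = fromListSV 3 [(0,3),(1,2),(2,5)] :: SpVector Double
λ> bd = fromListDenseSV 3 [3,2,5] :: SpVector Double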

Displaying sparse data

Both sparse vectors and matrices can be pretty-printed using prd:

λ> prd amat

( 3 rows, 3 columns ) , 5 NZ ( density 55.556 % )

2.00   , _      , _      
4.00   , 3.00   , 2.00   
_      , _      , 5.00       

Note (sparse storage): sparse data should only contain non-zero entries, so as not to waste memory and computation.

Note (approximate output): prd rounds the results to two significant digits, and switches to scientific notation for large or small values. Moreover, values which are indistinguishable from 0 (see the Numeric.Eps module) are printed as _.

Matrix factorizations, matrix product

There are a few common matrix factorizations available; in the following example we compute the LU factorization of matrix amat and verify it with the matrix-matrix product ## of its factors:

λ> (l, u) <- lu amat
λ> prd $ l ## u

( 3 rows, 3 columns ) , 9 NZ ( density 100.000 % )

2.00   , _      , _      
4.00   , 3.00   , 2.00   
_      , _      , 5.00       

Notice that the result is dense, i.e. certain entries are numerically zero but have been inserted into the result along with all the others (thus taking up memory!). To preserve sparsity, we can use a sparsifying matrix-matrix product #~#, which filters out all the elements x for which |x| <= eps, where eps (defined in Numeric.Eps) depends on the numerical type used (e.g. it is 10^-6 for Floats and 10^-12 for Doubles).

λ> prd $ l #~# u

( 3 rows, 3 columns ) , 5 NZ ( density 55.556 % )

2.00   , _      , _      
4.00   , 3.00   , 2.00   
_      , _      , 5.00 

A matrix is transposed using the transpose function.
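
For instance, pretty-printing the transpose of the matrix defined above should produce roughly the following (with prd's usual approximate formatting):

λ> prd $ transpose amat

( 3 rows, 3 columns ) , 5 NZ ( density 55.556 % )

2.00   , 4.00   , _      
_      , 3.00   , _      
_      , 2.00   , 5.00   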

Sometimes we need to compute matrix-matrix transpose products, which is why the library offers the infix operators #^# (i.e. matrix transpose * matrix) and ##^ (matrix * matrix transpose):

λ> amat' = amat #^# amat
λ> prd amat'

( 3 rows, 3 columns ) , 9 NZ ( density 100.000 % )

20.00  , 12.00  , 8.00   
12.00  , 9.00   , 6.00   
8.00   , 6.00   , 29.00      


λ> lc <- chol amat'
λ> prd $ lc ##^ lc

( 3 rows, 3 columns ) , 9 NZ ( density 100.000 % )

20.00  , 12.00  , 8.00   
12.00  , 9.00   , 10.80  
8.00   , 10.80  , 29.00      

In the last example we have also shown the Cholesky decomposition (M = L L^T where L is a lower-triangular matrix), which is only defined for symmetric positive-definite matrices.

Linear systems

Large sparse linear systems are best solved with iterative methods. sparse-linear-algebra provides a selection of these via the <\> operator (inspired by MATLAB's "backslash"; currently GMRES is used as the default method):

λ> b = fromListDenseSV 3 [3,2,5] :: SpVector Double
λ> x <- amat <\> b
λ> prd x

( 3 elements ) ,  3 NZ ( density 100.000 % )

1.50   , -2.00  , 1.00      

The result can be verified by computing the matrix-vector action amat #> x, which should (ideally) be very close to the right-hand side b:

λ> prd $ amat #> x

( 3 elements ) ,  3 NZ ( density 100.000 % )

3.00   , 2.00   , 5.00       

The library also provides a forward-backward substitution solver (luSolve) based on a triangular factorization of the system matrix (usually LU). This should be the preferred method for solving smaller, dense systems. Using the LU factors computed previously, we can cross-check the two solution methods:

λ> x' <- luSolve l u b
λ> prd x'

( 3 elements ) ,  3 NZ ( density 100.000 % )

1.50   , -2.00  , 1.00     

License

GPL3, see LICENSE

Credits

Inspired by

References

[1] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., 2000

[2] G.H. Golub and C.F. Van Loan, Matrix Computations, 3rd ed., 1996

[3] T.A. Davis, Direct Methods for Sparse Linear Systems, 2006

[4] L.N. Trefethen, D. Bau, Numerical Linear Algebra, SIAM, 1997

[5] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in Fortran 77, 2nd ed., 1992

[6] M. M. T. Chakravarty, et al., Accelerating Haskell array codes with multicore GPUs - DAMP'11

[7] accelerate

sparse-linear-algebra's People

Contributors

gregoryschwartz, jaxan, ocramz


sparse-linear-algebra's Issues

Define typeclass for linear solvers:

E.g. by using a phantom type. The code below the comment by jle does not work since GHC cannot discriminate between instances (needs AllowAmbiguousTypes, which is to be avoided).

-- <jle`> class Solver t where method :: Op' t Foo -> Op' t Foo -> Op' t Foo
--                                      [12:46]
-- <jle`> newtype Op' t a = O (Op a)


class LinearSolver t a where
  linearSolve :: SpMatrix a -> SpVector a -> SpVector a -> SpVector a

data CGS_'
data BiCGSTAB_'

instance LinearSolver CGS_' Double where
  linearSolve aa b x0 = _x $ cgs aa b x0 x0

instance LinearSolver BiCGSTAB_' Double where
  linearSolve aa b x0 = _xBicgstab $ bicgstab aa b x0 x0 
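
A minimal sketch of one possible workaround, assuming the cgs/bicgstab solvers and the _x/_xBicgstab accessors used above: passing a Proxy value for the method tag keeps instance selection unambiguous without AllowAmbiguousTypes (the Proxy-based signature is hypothetical, not the library's API).

import Data.Proxy (Proxy(..))

class LinearSolver t a where
  -- the Proxy fixes the method tag t at the call site
  linearSolve :: Proxy t -> SpMatrix a -> SpVector a -> SpVector a -> SpVector a

data CGS_'
data BiCGSTAB_'

instance LinearSolver CGS_' Double where
  linearSolve _ aa b x0 = _x $ cgs aa b x0 x0

instance LinearSolver BiCGSTAB_' Double where
  linearSolve _ aa b x0 = _xBicgstab $ bicgstab aa b x0 x0

-- usage: linearSolve (Proxy :: Proxy CGS_') aa b x0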

Incorrect bounds checking.

Prelude Data.Sparse.Common> x = (fromListSV 4 [(2,3)] :: SpVector Double)
Prelude Data.Sparse.Common> y = (fromListSV 4 [(0,3)] :: SpVector Double)
Prelude Data.Sparse.Common> fromRowsL [x, y]
*** Exception: insertRowSM : incompatible dimensions (2,4)
CallStack (from HasCallStack):
  error, called at src/Data/Sparse/Common.hs:79:17 in sparse-linear-algebra-0.2.9.8-GSb8IwES68r5Fh6hC5eCZ9:Data.Sparse.Common

Here, x and y are rows with the same dimension, so from the name I would think that fromRowsL [x, y] would result in a 2x4 matrix. Further evidence:

Prelude Data.Sparse.Common> m
SM ( 2 rows, 4 columns ) , 8 NZ ( density 100.000 % ) [(0,[(0,0.0),(1,0.0),(2,3.0),(3,0.0)]),(1,[(0,3.0),(1,0.0),(2,0.0),(3,0.0)])]
Prelude Data.Sparse.Common> fromRowsL $ toRowsL m
*** Exception: insertRowSM : incompatible dimensions (2,4)
CallStack (from HasCallStack):
  error, called at src/Data/Sparse/Common.hs:79:17 in sparse-linear-algebra-0.2.9.8-GSb8IwES68r5Fh6hC5eCZ9:Data.Sparse.Common

Both toColsL and fromColsL seem to work.

Documentation

  • Convergence plots
  • Inline examples
  • Tutorial module in the style of pipes
  • More examples in the README, in particular re. eigensolvers (once #12 is complete)
  • Developer notes (where to find stuff etc.)
  • Doctests

Remove vector-space dependency

It depends on MemoTrie, which is not updated often and currently fails to build with GHC 8.2.1.
This in turn prevents sparse-linear-algebra from being included in Stackage LTS.

Zippers for updating matrix factorizations

Note:

complex-valued Cholesky fails

As shown in one of the test cases, the Cholesky factorisation fails for complex vector spaces. It may be a consequence of issue #50.

Convergence as a `State` combinator

  • Linear solvers: the squared difference of the last two temporary solutions does not seem sufficient to assess convergence.
  • There is a bug in modifyInspectN.
  • Worked around with a fixed iteration budget; not a real solution, though.

CSR backend

I already have some parts of a CSR and CSB (compressed sparse blocks) implementation.

The benefits are unclear at this stage, but algorithms that require only mat-vec (and not mat-transpose-vec) products, such as BiCGSTAB, might benefit from it; a sketch of such a representation follows below.
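
For reference, a minimal sketch of what a CSR representation could look like; the type and field names are illustrative only, not the library's internal types:

import qualified Data.Vector as V

-- Compressed sparse row: all non-zeros stored row-major, with one offset per row.
data CsrMatrix a = CsrMatrix
  { csrNRows   :: !Int
  , csrNCols   :: !Int
  , csrValues  :: V.Vector a    -- non-zero values, concatenated row by row
  , csrColIxs  :: V.Vector Int  -- column index of each stored value
  , csrRowPtrs :: V.Vector Int  -- row i occupies positions [csrRowPtrs ! i, csrRowPtrs ! (i+1))
  }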

Improve pretty printing

We could have a pprintWith that accepts options such as the numerical precision of the floating-point display (supplied to printf).
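
A minimal sketch of what such an options type might look like; PPrintOptions, ppEntry and the field names are hypothetical, not part of the current API:

import Text.Printf (printf)

-- Options controlling how a single numerical entry is rendered.
data PPrintOptions = PPrintOptions
  { pprintDigits :: Int  -- significant digits handed to printf
  , pprintWidth  :: Int  -- minimum field width per entry
  }

-- Format one entry; a pprintWith for vectors and matrices would map this over
-- all entries, printing "_" for values indistinguishable from zero (cf. Numeric.Eps).
ppEntry :: PPrintOptions -> Double -> String
ppEntry (PPrintOptions d w) x = printf "%*.*g" w d x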

unnecessary insertion

in src/Math/Linear/Sparse/IntMap.hs:

insertIM2 :: IM.Key -> IM.Key -> a -> IM.IntMap (IM.IntMap a) -> IM.IntMap (IM.IntMap a)
insertIM2 i j x imm = IM.insert i (IM.insert j x ro) imm
  where ro = maybe (IM.singleton j x) (IM.insert j x) (IM.lookup i imm)

Don't you insert twice here? It probably makes it slower.
Maybe something like this?

insertIM2 :: IM.Key -> IM.Key -> a -> IM.IntMap (IM.IntMap a) -> IM.IntMap (IM.IntMap a)
insertIM2 i j x imm = IM.insert i ro imm 
  where ro = maybe (IM.singleton j x) (IM.insert j x) (IM.lookup i imm)

CSC.hs doesn't build with current Stackage nightly (GHC 8.2)

/home/travis/build/ocramz/sparse-linear-algebra/src/Data/Sparse/Internal/CSC.hs:88:16: error:
    • Couldn't match type ‘a’ with ‘Int’
      ‘a’ is a rigid type variable bound by
        the type signature for:
          extractColCSC :: forall a. CscMatrix a -> Int -> SVector a
        at src/Data/Sparse/Internal/CSC.hs:82:1-48
      Expected type: V.Vector Int
        Actual type: V.Vector a
    • In the first argument of ‘trimf’, namely ‘x’
      In the expression: trimf x
      In an equation for ‘vals’: vals = trimf x
    • Relevant bindings include
        x :: V.Vector a (bound at src/Data/Sparse/Internal/CSC.hs:83:34)
        extractColCSC :: CscMatrix a -> Int -> SVector a
          (bound at src/Data/Sparse/Internal/CSC.hs:83:1)
   |
88 |   vals = trimf x
   |                ^

Cannot get the <\> operator to work

I have just started to learn Haskell and I am trying to build a simple tool for analysing a beam on multiple spring supports. I have been able to get luSolve to work, but not <\>.

I keep getting this error:

*** Exception: givens: no compatible rows for entry (3,2)
CallStack (from HasCallStack):
error, called at src\Numeric\LinearAlgebra\Sparse.hs:168:28 in sparse-linear-algebra-0.2.2.0-CbAalqT73THJ7zUSs35jqX:Numeric.LinearAlgebra.Sparse

My code

import System.IO
import Numeric.LinearAlgebra.Sparse
import Numeric.LinearAlgebra.Class
import Data.Sparse.SpMatrix
import Data.Sparse.Common

eMod = 210000.0



k 0 0 i l = 12* eMod * i /(l**3)
k 0 1 i l = 6 * eMod * i /(l**2)
k 0 2 i l = (- (k 0 0 i l))
k 0 3 i l = k 0 1 i l 
k 1 0 i l = k 0 1 i l 
k 1 1 i l = 4 * eMod *i / l
k 1 2 i l = - k 0 1 i l 
k 1 3 i l = 2* eMod * i /l
k 2 0 i l = - k 0 0 i l
k 2 1 i l = - k 0 1 i l
k 2 2 i l = k 0 0 i l
k 2 3 i l = - k 0 1 i l
k 3 0 i l = k 0 1 i l 
k 3 1 i l = k 1 3 i l
k 3 2 i l = - k 0 1 i l
k 3 3 i l = k 1 1 i l
k _ _ _ _ = error "Index out of bounds: Local Stiffness Matrix is only 4x4"


localStiffness (i1:[]) (l1:[]) 0 = [ (a , b, (k a b i1 l1) ) | a <- [0..3], b <- [0..3], not(a>1 && b> 1)]
localStiffness (i1:i2:[]) (l1:l2:[]) rn = 
    firstQ ++ rest
    where 
        firstQ = [ (rn + a, rn + b, ((k (a+2) (b+2) i1 l1) + (k a b i2 l2) )) | a <- [0..1], b <- [0..1]]
        rest = [ (rn + a, rn + b, ( k a b i2 l2 )) | a <- [0..3], b <- [0..3], not(a<2 && b<2)]


localStiffness (i1:i2:it) (l1:l2:lt) rn = 
    firstQ ++ sndTrdQ
    where 
        firstQ = [ (rn + a, rn + b, ((k (a+2) (b+2) i1 l1) + (k a b i2 l2) )) | a <- [0..1], b <- [0..1]]
        sndTrdQ = [ (rn + a, rn + b, ( k a b i2 l2 )) | a <- [0..3], b <- [0..3], not(a<2 && b<2), not(b>1 && a >1)]

createStiffnessMatrix (i1:i2:iTail) (l1:l2:lTail) [] rn = 
   createStiffnessMatrix (i1:i2:iTail) (l1:l2:lTail) (localStiffness [i1] [l1] 0) 2
createStiffnessMatrix (i1:i2:[]) (l1:l2:[]) matrix rn =
    fromListSM (rn+4,rn+4) ( matrix ++ localStiffness [i1,i2] [l1,l2] (rn))
createStiffnessMatrix (i1:iTail) (l1:lTail) matrix rn = 
   createStiffnessMatrix (iTail) (lTail) ((localStiffness (i1:iTail) (l1:lTail) rn)++ matrix) (rn+2)

   
insertSprings :: SpMatrix Double -> [(Int, Int, Double)] -> SpMatrix Double
insertSprings kMat springs = 
        foldl (addSpring) kMat springs
        where
                addSpring k (a, b, v) = insertSpMatrix a b ( ( k @@! (a,b) ) + v) k 


main = do
        let l=replicate 500 100
        let i=replicate 500 400000000
        let s = [(x,x,2000)|x<- [200,400..1000]]
        let k=createStiffnessMatrix i l [] 0
        let knew = insertSprings  k s 
        let f= [ (x , 0) | x<-[0..9]] ++ [(1000,5000.0),(1001,819000000.0)]
        let fVec = (fromListSV 1002 f)
        --let res= (\(lo, up)-> luSolve lo up (dropSV 2 fVec)) $ lu (extractSubmatrixSM (\x -> x-2) (\x -> x-2) knew (2,1001) (2,1001))
        let res = (extractSubmatrixSM (\x -> x-2) (\x -> x-2) knew (2,1001) (2,1001)) <\> (dropSV 2 fVec)
        let myres = map (\x -> lookupSV x res) [198,398..998]
        print myres

Parallelism

  • Data parallelism for heavy operations such as mat-vec (which is currently implemented with two nested fmaps, so divide and conquer is straightforward); see the sketch after this list.
  • Investigate monad-par for multicore parallelism
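
A minimal sketch of the first point, assuming the nested-IntMap row representation and the parallel package; matVecPar is illustrative, not the library's implementation:

import qualified Data.IntMap.Strict as IM
import Control.Parallel.Strategies (using, parTraversable, rseq)

-- Evaluate each row's dot product with the vector in parallel.
matVecPar :: IM.IntMap (IM.IntMap Double) -> IM.IntMap Double -> IM.IntMap Double
matVecPar rows v = IM.map rowDot rows `using` parTraversable rseq
  where
    rowDot row = IM.foldrWithKey (\j x acc -> acc + x * IM.findWithDefault 0 j v) 0 row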

`fromList` fold/overwrite behaviour toggle

Generalize fromListSV/fromListSM to accept a parameter to either overwrite or apply some fold over repeated elements.
The current behaviour is overwriting.

Example of new behaviour:

fromListSV (FoldRepeated (+) 0) [(1, 1), (2, 3), (2, 5)] == fromListSV OverwriteRepeated [(1, 1), (2, 8)]
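
A minimal sketch of what the new parameter could look like, implemented here on top of the existing fromListSV by pre-combining duplicates; DuplicatePolicy and fromListSVWith are hypothetical names:

import qualified Data.Map.Strict as M

-- How repeated indices in the input list are handled.
data DuplicatePolicy a
  = OverwriteRepeated             -- last occurrence wins (current behaviour)
  | FoldRepeated (a -> a -> a) a  -- combine repeated entries, starting from a seed

fromListSVWith :: DuplicatePolicy a -> Int -> [(Int, a)] -> SpVector a
fromListSVWith pol n xs = fromListSV n (M.toList (combine pol))
  where
    combine OverwriteRepeated  = M.fromList xs
    combine (FoldRepeated f z) = M.map (`f` z) (M.fromListWith f xs)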

Read instance.

It would be nice to have a Read instance such that we can show the vector or matrix as part of another structure and read it back.

Instances missing.

I cannot find instances of InnerSpace and AdditiveGroup for SpVector, and there may be other missing instances for both SpVector and SpMatrix as well. They were present in previous versions, but not in the latest. This prevents me from adding vectors or computing dot products.

Add `accelerate` support

  • Implement accelerate-specific algorithms (e.g. matvec, vecmat, matmat, ...)
  • Implement the above in terms of sparse-linear-algebra-core classes (depends on #28)

Decouple algorithms from implementation

Currently, Data.Sparse.Common is the interface to the sparse data structures. It gathers SpMatrix, SpVector and IntMap2 functionality, and re-exports all of it.
If we drop the as X in the line import Data.Sparse.IntMap2.IntMap2 as X, thus omitting its re-export, a number of algorithms can't find their implementation:

src/Numeric/LinearAlgebra/Sparse.hs:410:13: error: …
    • Variable not in scope:
        lookupIM2 :: IxRow -> IxCol -> IM.IntMap (IM.IntMap a) -> Maybe a1
    • Perhaps you meant ‘lookupSM’ (imported from Data.Sparse.Common)
src/Numeric/LinearAlgebra/Sparse.hs:412:20: error: …
    • Variable not in scope:
        ifilterIM2
          :: (IxRow -> IxCol -> t0 -> Bool)
             -> IM.IntMap (IM.IntMap a) -> IM.IntMap a0
    • Perhaps you meant one of these:
        ‘ifilterSM’ (imported from Data.Sparse.Common),
        ‘ifilterSV’ (imported from Data.Sparse.Common),
        ‘filterSM’ (imported from Data.Sparse.Common)
src/Numeric/LinearAlgebra/Sparse.hs:436:25: error: …
    • Couldn't match expected type ‘(Rows, Cols)’
                  with actual type ‘FDSize f’
    • In the first argument of ‘SM’, namely ‘(dim m)’
      In the expression: SM (dim m)
      In the expression: SM (dim m) $ ifilterIM2 f (dat m)
    • Relevant bindings include
        m :: f a4
          (bound at /Users/ocramz/Dropbox/RESEARCH/Haskell/sparse-linear-algebra/src/Numeric/LinearAlgebra/Sparse.hs:436:14)
        sparsifyLU :: f a4 -> SpMatrix a2 -> SpMatrix a3
          (bound at /Users/ocramz/Dropbox/RESEARCH/Haskell/sparse-linear-algebra/src/Numeric/LinearAlgebra/Sparse.hs:436:3)
src/Numeric/LinearAlgebra/Sparse.hs:436:34: error: …
    • Variable not in scope:
        ifilterIM2
          :: (IxRow -> IxCol -> t1 -> Bool)
             -> HDData f a4 -> IM.IntMap (IM.IntMap a3)
    • Perhaps you meant one of these:
        ‘ifilterSM’ (imported from Data.Sparse.Common),
        ‘ifilterSV’ (imported from Data.Sparse.Common),
        ‘filterSM’ (imported from Data.Sparse.Common)
Compilation failed.

This is wrong, because it pins the algorithm implementations to the data structure used. We need a typeclass interface; a sketch of one possibility follows.
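
A minimal sketch of the kind of class that could hide the nested-IntMap functions behind an interface; the class name and methods below are hypothetical, modelled on the operations the error messages complain about (lookupIM2, ifilterIM2, dim):

-- A backend-agnostic interface to sparse matrix containers.
class SparseContainer m where
  scLookup  :: IxRow -> IxCol -> m a -> Maybe a
  scIFilter :: (IxRow -> IxCol -> a -> Bool) -> m a -> m a
  scDim     :: m a -> (Rows, Cols)

-- The IntMap-of-IntMaps backend would provide one instance; CSR/CSC or
-- accelerate-based backends could provide others, leaving the algorithms in
-- Numeric.LinearAlgebra.Sparse unchanged.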
