primme / primme

PReconditioned Iterative MultiMethod Eigensolver for solving symmetric/Hermitian eigenvalue problems and singular value problems

Home Page: http://www.cs.wm.edu/~andreas/software/

License: Other

Languages: C 82.78%, C++ 6.08%, Fortran 3.84%, MATLAB 2.31%, Cython 1.81%, R 1.15%, Python 1.05%, Makefile 0.58%, Shell 0.39%
Topics: eigenvalues, high-performance, singular-value-decomposition

primme's Introduction

PRIMME: PReconditioned Iterative MultiMethod Eigensolver

PRIMME, pronounced as prime, is a high-performance library for computing a few eigenvalues/eigenvectors and singular values/vectors. PRIMME is especially optimized for large, difficult problems. Real symmetric and complex Hermitian problems are supported, both standard A x = \lambda x and generalized A x = \lambda B x, as well as standard eigenvalue problems with a normal matrix. It can find the largest, smallest, or interior singular values/eigenvalues, and can use preconditioning to accelerate convergence. PRIMME is written in C99, and complete interfaces are provided for Fortran, MATLAB, Python, and R.

Making and Linking

To build the static and the shared library, type:

make lib     #  builds lib/libprimme.a
make solib   #  builds lib/libprimme.so (or lib/libprimme.dylib)

The usual make flags are supported:

  • CC, compiler program such as gcc, clang or icc
  • CFLAGS, compiler options such as -g or -O3
  • CUDADIR, directory of CUDA installation (optional)
  • MAGMADIR, directory of MAGMA installation (optional)
  • PRIMME_WITH_HALF, activates support for half precision if set to yes; a compiler supporting __fp16 is required, e.g., clang.

The flags can be set by customizing Make_flags or passed directly on the command line:

make lib CC=clang CFLAGS='-O3'

To build the external interfaces, just do:

make matlab       # Set MATLAB=/path/Matlab/bin/matlab MEX=/path/Matlab/bin/mex if needed
make matlab-cuda  # requires CUDADIR and MAGMADIR to be set
make octave
make python
make R_install

Packages of the released version are provided for R (see R PRIMME):

install.packages('PRIMME')

and Python (see Python primme):

pip install numpy   # if numpy is not installed yet
pip install scipy   # if scipy is not installed yet
pip install future  # if using python 2
conda install mkl-devel # if using Anaconda Python distribution
pip install primme

C Library Interface

To compute a few eigenvalues and eigenvectors of a real symmetric matrix, call:

int dprimme(double *evals, double *evecs, double *resNorms,
            primme_params *primme);

The call arguments are:

  • evals, array to return the found eigenvalues;
  • evecs, array to return the found eigenvectors;
  • resNorms, array to return the residual norms of the found eigenpairs; and
  • primme, structure that specifies the matrix problem, which eigenvalues are wanted, and several method options.

To compute a few singular values and vectors of a matrix, call:

int dprimme_svds(double *svals, double *svecs, double *resNorms,
            primme_svds_params *primme_svds);

The call arguments are:

  • svals, array to return the found singular values;
  • svecs, array to return the found singular vectors;
  • resNorms, array to return the residual norms of the triplets; and
  • primme_svds, structure that specifies the matrix problem, which singular values are wanted, and several method options.

Versions for half, single, and complex arithmetic are also available. See the documentation in the readme.txt file and in the doc directory; it is also available online. The examples directory has several self-contained examples in C, C++, and F77, some of them using PETSc and MAGMA.
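
As a minimal illustration, here is a sketch modeled on the programs in the examples directory (the 1-D Laplacian operator and all names in main are ours, not from the distribution). It computes the two smallest eigenvalues with dprimme; the matrix-vector product follows the matrixMatvec callback convention declared in primme.h:

#include <stdio.h>
#include <stdlib.h>
#include "primme.h"

/* y = A*x for a 1-D Laplacian; the signature follows primme_params::matrixMatvec */
static void laplacian_matvec(void *x, PRIMME_INT *ldx, void *y, PRIMME_INT *ldy,
                             int *blockSize, primme_params *primme, int *ierr) {
   double *xv = (double *)x, *yv = (double *)y;
   for (int b = 0; b < *blockSize; b++) {
      for (PRIMME_INT i = 0; i < primme->n; i++) {
         double v = 2.0 * xv[*ldx * b + i];
         if (i > 0)              v -= xv[*ldx * b + i - 1];
         if (i < primme->n - 1)  v -= xv[*ldx * b + i + 1];
         yv[*ldy * b + i] = v;
      }
   }
   *ierr = 0;  /* success */
}

int main(void) {
   primme_params primme;
   primme_initialize(&primme);
   primme.n = 1000;                    /* matrix dimension */
   primme.numEvals = 2;                /* number of wanted eigenpairs */
   primme.target = primme_smallest;    /* seek the smallest eigenvalues */
   primme.eps = 1e-9;                  /* convergence tolerance */
   primme.matrixMatvec = laplacian_matvec;
   primme_set_method(PRIMME_DYNAMIC, &primme);

   double *evals  = malloc(sizeof(double) * primme.numEvals);
   double *evecs  = malloc(sizeof(double) * primme.n * primme.numEvals);
   double *rnorms = malloc(sizeof(double) * primme.numEvals);

   int ret = dprimme(evals, evecs, rnorms, &primme);
   if (ret != 0) fprintf(stderr, "dprimme returned error %d\n", ret);
   for (int i = 0; i < primme.initSize; i++)  /* initSize: converged pairs on exit */
      printf("eval[%d] = %.12g  (rnorm %.2e)\n", i, evals[i], rnorms[i]);

   primme_free(&primme);
   free(evals); free(evecs); free(rnorms);
   return ret;
}

Compile and link it against the library built above, for instance with cc ex.c -Iinclude -Llib -lprimme -llapack -lblas -lm.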

Citing this code

Please cite (bibtex):

  • A. Stathopoulos and J. R. McCombs, PRIMME: PReconditioned Iterative MultiMethod Eigensolver: Methods and software description, ACM Transactions on Mathematical Software, Vol. 37, No. 2, (2010), 21:1--21:30.
  • L. Wu, E. Romero and A. Stathopoulos, PRIMME_SVDS: A High-Performance Preconditioned SVD Solver for Accurate Large-Scale Computations, SIAM J. Sci. Comput., Vol. 39, No. 5, (2017), S248--S271.

More information on the algorithms and research that led to this software can be found in the following papers. The work has been supported by a number of grants from the National Science Foundation.

  • A. Stathopoulos, Nearly optimal preconditioned methods for Hermitian eigenproblems under limited memory. Part I: Seeking one eigenvalue, SIAM J. Sci. Comput., Vol. 29, No. 2, (2007), 481--514.
  • A. Stathopoulos and J. R. McCombs, Nearly optimal preconditioned methods for Hermitian eigenproblems under limited memory. Part II: Seeking many eigenvalues, SIAM J. Sci. Comput., Vol. 29, No. 5, (2007), 2162--2188.
  • J. R. McCombs and A. Stathopoulos, Iterative Validation of Eigensolvers: A Scheme for Improving the Reliability of Hermitian Eigenvalue Solvers, SIAM J. Sci. Comput., Vol. 28, No. 6, (2006), 2337--2358.
  • A. Stathopoulos, Locking issues for finding a large number of eigenvectors of Hermitian matrices, Tech Report WM-CS-2005-03, July 2005.
  • L. Wu and A. Stathopoulos, A Preconditioned Hybrid SVD Method for Accurately Computing Singular Triplets of Large Matrices, SIAM J. Sci. Comput., Vol. 37, No. 5, (2015), S365--S388.

License Information

PRIMME is licensed under the BSD 3-clause license. The Python and MATLAB interfaces have BSD-compatible licenses. Source code under tests is compatible with LGPLv3. Details can be found in COPYING.txt.

Contact Information

For reporting bugs or questions about functionality, contact Andreas Stathopoulos by email: andreas at cs.wm.edu. Further information can be found at http://www.cs.wm.edu/~andreas/software.

Support

  • National Science Foundation through grants CCF 1218349, ACI SI2-SSE 1440700, and NSCI 1835821
  • Department of Energy through grant Exascale Computing Project 17-SC-20-SC

primme's People

Contributors

andreasnoack, duguxy, eromero-vlc, jcmgray, jornada, jxy, kelvinabrokwa, mrakitin, otbrown, ralphas, robertrueger, stathopoulos

primme's Issues

Support for time limit

I am trying to find a way to halt iterations of dprimme if a maximum time is reached.

I have tried using primme.monitorFun(...): According to the documentation, I should be able to set ierr to a nonzero value to stop the current call to primme, but doing so triggers an assertion error:

include/../eigs/auxiliary_eigs_normal.c:485: int monitorFun_dprimme(dummy_type_dprimme *, int, int *, int *, int, dummy_type_dprimme *, int, dummy_type_dprimme *, int, int *, dummy_type_dprimme *, int, dummy_type_dprimme, const char *, double, primme_event, double, primme_context): 
Assertion `__err==0' failed.

Is there some other way to achieve this?

In terms of a feature request, I think ideally this could be introduced as a field double primme_params::maxTime which enables a time limit if set to a positive value (in seconds).
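
For reference, the monitorFun attempt described above would look roughly like the sketch below. The callback signature is copied from primme.h of PRIMME 3.x (check it against your version); the globals are ours. As reported, returning a nonzero *ierr currently trips the assertion instead of stopping the solver cleanly.

#include <time.h>
#include "primme.h"

static time_t start_time;          /* set just before calling dprimme */
static double max_seconds = 60.0;  /* hypothetical limit */

static void time_limit_monitor(void *basisEvals, int *basisSize, int *basisFlags,
      int *iblock, int *blockSize, void *basisNorms, int *numConverged,
      void *lockedEvals, int *numLocked, int *lockedFlags, void *lockedNorms,
      int *inner_its, void *LSRes, const char *msg, double *elapsedTime,
      primme_event *event, struct primme_params *primme, int *ierr) {
   /* Per the documentation, a nonzero *ierr should stop the current call */
   *ierr = (difftime(time(NULL), start_time) > max_seconds) ? 1 : 0;
}

It would be installed with primme.monitorFun = time_limit_monitor; and start_time = time(NULL); just before the dprimme call.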

Tests fail with dynamic library

Simply

make solib test

fails on Ubuntu 16.10 with

andreasnoack@anubis:~/.julia/v0.7/Primme/deps/primme$ make test
------------------------------------------------
 Test C examples
------------------------------------------------
make[1]: Entering directory '/home/andreasnoack/.julia/v0.7/Primme/deps/primme/examples'
cc -O -fPIC -DNDEBUG -DF77UNDERSCORE  -I../include -c ex_eigs_dseq.c -o ex_eigs_dseq.o
cc -o ex_eigs_dseq ex_eigs_dseq.o -L../lib -I../include -lprimme -llapack -lblas -lm -gfortran
cc -O -fPIC -DNDEBUG -DF77UNDERSCORE  -I../include -c ex_eigs_zseq.c -o ex_eigs_zseq.o
cc -o ex_eigs_zseq ex_eigs_zseq.o -L../lib -I../include -lprimme -llapack -lblas -lm -gfortran
cc -O -fPIC -DNDEBUG -DF77UNDERSCORE  -I../include -c ex_svds_dseq.c -o ex_svds_dseq.o
cc -o ex_svds_dseq ex_svds_dseq.o -L../lib -I../include -lprimme -llapack -lblas -lm -gfortran
cc -O -fPIC -DNDEBUG -DF77UNDERSCORE  -I../include -c ex_svds_zseq.c -o ex_svds_zseq.o
cc -o ex_svds_zseq ex_svds_zseq.o -L../lib -I../include -lprimme -llapack -lblas -lm -gfortran
=========== Executing ./ex_eigs_dseq
./ex_eigs_dseq: error while loading shared libraries: libprimme.so: cannot open shared object file: No such file or directory
=========== Executing ./ex_eigs_zseq
./ex_eigs_zseq: error while loading shared libraries: libprimme.so: cannot open shared object file: No such file or directory
=========== Executing ./ex_svds_dseq
./ex_svds_dseq: error while loading shared libraries: libprimme.so: cannot open shared object file: No such file or directory
=========== Executing ./ex_svds_zseq
./ex_svds_zseq: error while loading shared libraries: libprimme.so: cannot open shared object file: No such file or directory
Some tests fail. Please consider to send us the file
examples/tests.log if the software doesn't work as expected.
Makefile:56: recipe for target 'test_examples_C' failed
make[1]: *** [test_examples_C] Error 1
make[1]: Leaving directory '/home/andreasnoack/.julia/v0.7/Primme/deps/primme/examples'
makefile:29: recipe for target 'test' failed
make: *** [test] Error 2

If I delete the .so (such that the static library is loaded) then it works.

I also think the .so is not correctly linked against LIBS, so when it is dlopened, the LAPACK symbols are not found.

Accessing 'ShiftsForPreconditioner' Field in Python `primme.eigsh`

Hello everyone!
I was trying to use primme.eigsh with extra arguments via **kargs. When I added 'ShiftsForPreconditioner' as an argument, I got the following error:

Traceback (most recent call last):
  File "primme.pyx", line 555, in primme.eigsh
  File "primme.pyx", line 156, in primme.__primme_params_set
ValueError: Invalid field 'b'ShiftsForPreconditioner''

Backtracking through the source code, I noticed that in /src/eigs/primme_interface.c's primme_member_info routine, for 'ShiftsForPreconditioner' the value of arity is set to 0 (Line 1592-1594).

  case PRIMME_targetShifts:
  case PRIMME_ShiftsForPreconditioner:
     if (type) *type = primme_double;
     if (arity) *arity = 0;
     break;

But the Python Interface's primme.pyx requires it to be 1 (Line 155-156)

    if r != 0  or arity != 1:
        raise ValueError("Invalid field '%s'" % field_)

I believe that this is the cause of the error.
Is this behaviour intentional? Is it possible to use 'ShiftsForPreconditioner' from the Python interface?

Also, can I request a change in the documentation of primme.eigsh() in the Python interface to include a description of **kargs? I did not notice its existence for a long time!

Eigenvalue near zero

Hi,
I have a 40000 by 40000 positive semi-definite matrix, and I want to calculate the smallest eigenvalue, which is close to zero, and the corresponding eigenvector, but it is taking too much time. For the largest eigenvalue, however, it takes only a few minutes. Can you please suggest how I can get the eigenvalue and corresponding eigenvector near zero efficiently? Here is my Python code:

from scipy import sparse
from scipy.sparse import rand
import primme
import numpy as np
import scipy.sparse
import time
n=40000
A=sparse.csc_matrix((n,n), dtype=complex)
B=sparse.csc_matrix((n,n), dtype=complex)
A = rand(n,n, density=.1, format="csc", random_state=42) + rand(n,n, density=.1, format="csc", random_state=42)*complex(0,1)
start=time.time()
B = A*A.conj().T
print(time.time()-start)
start=time.time()
vals, vecs = primme.eigsh(B, 1,which='SA')
print(vals)
print('time')
print(time.time()-start)

Compile error with MATLAB

When I run the following command in MATLAB from the directory 'primme-master/matlab':

make

it reports the following error:

Error using mex
MEX cannot find library 'm' specified with the -l option.
 MEX looks for a file with one of the names:
 libm.lib
 m.lib
 Please specify the path to this library with the -L option.


Error in make (line 16)
         mex -O -I../include -I../src/include -DF77UNDERSCORE -DPRIMME_BLASINT_SIZE=64 ../src/eigs/auxiliary_eigs.c ../src/eigs/auxiliary_eigs_normal.c
         ../src/eigs/convergence.c ../src/eigs/correction.c ../src/eigs/factorize.c ../src/eigs/init.c ../src/eigs/inner_solve.c ../src/eigs/main_iter.c
         ../src/eigs/ortho.c ../src/eigs/primme_c.c ../src/eigs/primme_f77.c ../src/eigs/primme_interface.c ../src/eigs/restart.c
         ../src/eigs/solve_projection.c ../src/eigs/update_projection.c ../src/eigs/update_W.c ../src/linalg/auxiliary.c ../src/linalg/blaslapack.c
         ../src/linalg/magma_wrapper.c ../src/linalg/memman.c ../src/linalg/wtime.c ../src/svds/primme_svds_c.c ../src/svds/primme_svds_f77.c
         ../src/svds/primme_svds_interface.c primme_mex.cpp  -largeArrayDims '-LC:\Program Files\MATLAB\R2018a\extern\lib\win64\mingw64' -lmwlapack -lmwblas
         -lm -output primme_mex

but I can't find where libm.lib or m.lib is. Could anyone help me?

Possible integer overflow in GEMV

I just ran into an issue that dprimme crashed with the following error:

 ** On entry to DGEMV  parameter number  6 had an illegal value

PRIMME is using Fortran interface to BLAS, so parameter number 6 is lda in

dgemv (trans, m, n, alpha, a, lda, x, incx, beta, y, incy)

The dimension of the problem is 3204236779, which is more than can fit into a 32-bit int (PRIMME was compiled with PRIMME_INT_SIZE=64 and PRIMME_BLASINT_SIZE=32). Looking at the code:

XGEMV(transa, &lm, &ln, (const BLASSCALAR *)&alpha, (const BLASSCALAR *)a,

gemv is applied multiple times and care is taken to properly split m into smaller chunks, but lda can still overflow.

Am I missing something or is this a bug?

Thanks!
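
To make the hazard concrete, here is a standalone sketch (hypothetical helper types and names, not PRIMME code) of the range check that a 64-bit leading dimension needs before being cast to a 32-bit BLAS integer:

#include <stdint.h>
#include <stdio.h>

typedef int64_t primme_int_t;  /* stands in for PRIMME_INT with PRIMME_INT_SIZE=64 */
typedef int32_t blas_int_t;    /* stands in for PRIMME_BLASINT_SIZE=32 */

/* Returns 0 and writes *out on success; -1 if the value does not fit. */
static int to_blas_int(primme_int_t n, blas_int_t *out) {
   if (n < 0 || n > INT32_MAX) return -1;
   *out = (blas_int_t)n;
   return 0;
}

int main(void) {
   primme_int_t lda = 3204236779LL;  /* the dimension from this report */
   blas_int_t blda;
   if (to_blas_int(lda, &blda) != 0)
      fprintf(stderr, "lda %lld does not fit in a 32-bit BLAS integer\n",
              (long long)lda);
   return 0;
}

Splitting m into chunks bounds the row count passed to each dgemv call, but the leading dimension of the original array stays 3204236779, so the cast of lda itself must be guarded.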

Static libprimme.a gets built, but linking is done to dynamic lib, which does not exist: pass ../src/primme/libprimme.a instead

--->  Testing primme
Executing:  cd "/opt/local/var/macports/build/_opt_PPCRosettaPorts_math_primme/primme/work/primme-3.2" && /usr/bin/make test CC="/opt/local/var/macports/build/_opt_PPCRosettaPorts_math_primme/primme/work/compwrap/cc/usr/bin/gcc-4.2" CXX="/opt/local/var/macports/build/_opt_PPCRosettaPorts_math_primme/primme/work/compwrap/cxx/usr/bin/g++-4.2" OBJC="/opt/local/var/macports/build/_opt_PPCRosettaPorts_math_primme/primme/work/compwrap/objc/usr/bin/gcc-4.2" OBJCXX="/opt/local/var/macports/build/_opt_PPCRosettaPorts_math_primme/primme/work/compwrap/objcxx/usr/bin/g++-4.2" INSTALL="/usr/bin/install -c" 
------------------------------------------------
 Test C examples                                
------------------------------------------------
/opt/local/var/macports/build/_opt_PPCRosettaPorts_math_primme/primme/work/compwrap/cc/usr/bin/gcc-4.2 -Os -arch ppc -arch ppc -DF77UNDERSCORE  -I../include -c ex_eigs_dseq.c -o ex_eigs_dseq.o
/opt/local/var/macports/build/_opt_PPCRosettaPorts_math_primme/primme/work/compwrap/cc/usr/bin/gcc-4.2 -o ex_eigs_dseq ex_eigs_dseq.o  -I../include ../lib/libprimme.a -llapack -lblas -lm -L/opt/local/lib -Wl,-headerpad_max_install_names -arch ppc -arch ppc 
powerpc-apple-darwin10-gcc-4.2.1: ../lib/libprimme.a: No such file or directory
make[1]: *** [ex_eigs_dseq] Error 1
make: *** [test] Error 2

MATLAB's EIGS vs PRIMME_EIGS

Dear developers,

I'm planning to use PRIMME to calculate 10 interior eigenvalues and eigenvectors of large sparse matrices (~8*10^5 x 8*10^5) in MATLAB. However, I found that PRIMME is ~15 times slower on moderately small matrices (3K x 3K) than the standard eigs function; eigs performs the task in ~0.05 sec, while GD_Olsen_plusK takes 14 seconds (on 8 cores, 3 GHz each).

So, is it possible to speed up the calculation process?

load H.mat;

[~,Emin]=eigs(H,1,'sa');   
[~,Emax]=eigs(H,1,'la');

 e=(Emax-Emin)/2+Emin;
 
tic
  opts = struct();
D = primme_eigs(H,10,e,opts,'JD_Olsen_plusK' );  
toc
%%
tic
[XX EE]=eigs(H,10,e);
toc

Thank you,

Best regards,

Murod

Parallelization in Python

Although the original publication states, I think, that the Python version features parallelization out of the box, it being only a wrapper for the actual code, I find no remark in the documentation about the steps to be taken. Am I just missing something, or is it not supported yet?

eigsh (python): return_eigenvectors has no effect

Currently, setting return_eigenvectors=False returns the eigenvectors regardless. It would be an easy fix to simply not return them (for compatibility with scipy); not sure if one might save some computation too? Happy to submit a PR.

Performance of PRIMME

My goal is to diagonalize a real symmetric matrix with around a few million to a few tens of millions of rows. The matrix is very sparse and is made up of several blocks; its structure is most similar to a tridiagonal Toeplitz matrix, except that the elements are not numbers but tridiagonal Toeplitz matrices themselves. Since practically this whole structure is filled with the same block, just with different coefficients, only this one block is stored in memory instead of the whole matrix, and I implemented logic to unfold the matrix-vector operation accordingly.

PRIMME seems to work fine with smaller matrices, and unit tests cover the code quite well, but if I put in the desired matrix size, it gets really slow. For a matrix of about 4 million rows, it still hadn't finished after 36 hours of iteration. I'm monitoring the convergence, and numConverged often changes between 1 and -1 but doesn't get greater.

My settings:
method: PRIMME_DYNAMIC
target: closest_geq (I must get the middle of the spectrum)
targetShifts: -1 (only one value)
numEvals: 5
eps: 1e-3
Calling dprimme for diagonalization.

CPU: Intel Core i7-8700, 3.20GHz
I'm using 10 threads, which all carry out different parts of the same matrix-vector action. The binary is compiled with -O3 and AVX2 enabled; I can do about 15-20 matrix-vector multiplications per minute with matrices of 4-7 million rows.

I haven't encountered a matrix of this size before, and I don't have any intuition about how well my setup could perform. Therefore, I'm assuming I can tweak PRIMME so that it diagonalizes faster, but I can't seem to find the right way. Could you give me some hints about how to approach this?

How to limit memory usage in Python Interface

I am calling primme like this:

primme.eigsh(H, k=1, which='SA', v0 = psi, tol=1.e-15, return_eigenvectors=True)

where H is a real symmetric sparse matrix and a subclass of scipy.sparse.linalg.LinearOperator.

I think I really need this low tolerance for my algorithm to work properly. The program runs in an environment with limited memory, and in some cases the above call to eigsh allocates so much memory that it crashes.

If I try to limit the number of basis vectors by passing ncv = 8 to eigsh, the memory requirement is lower; however, sometimes it gets stuck (does not converge).

If the parameter maxiter = 500 is also passed to eigsh, then it definitely returns after 500 matvecs; however, in some cases it does not return an approximate eigenvalue/eigenvector pair.

Is the behavior outlined above intended for eigsh?
How should I call eigsh to limit memory usage?

Reduce memory usage for `locking==false`

In Section 3.4.1 (Computational Requirements: Memory) of your paper "PRIMME: PReconditioned Iterative MultiMethod Eigensolver: Methods and software description" it is mentioned that the amount of allocated memory can be reduced when locking is disabled.

In the code I see that V and W are allocated unconditionally, so it seems like there is currently no way to reduce the amount of used memory.

When diagonalizing very big matrices (which are then never actually stored in memory), getting rid of numEvals vectors is a very desirable feature. Are there any plans to implement it? Would you accept PRs in this direction?

Thanks!

Unable to install primme in python

I have tried different versions of primme, but I still cannot install it on my computer.

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Processing c:\users\xxx\downloads\primme-2.2.zip
ERROR: Command errored out with exit status 1:
command: 'c:\anaconda3\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\xxx\AppData\Local\Temp\pip-req-build-5v1x_2yg\setup.py'"'"'; file='"'"'C:\Users\xxx\AppData\Local\Temp\pip-req-build-5v1x_2yg\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info
cwd: C:\Users\xxx\AppData\Local\Temp\pip-req-build-5v1x_2yg
Complete output (5 lines):
Traceback (most recent call last):
File "", line 1, in
File "c:\anaconda3\lib\tokenize.py", line 452, in open
buffer = _builtin_open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\xxx\AppData\Local\Temp\pip-req-build-5v1x_2yg\setup.py'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

user-defined matvec product for python version

Dear primme dev team,

I was using release version 2.2. With it, I was able to override the function eigsh in the file Primme.py so that it was possible to use a user-defined matrix-vector product, instead of providing the matrix explicitly, which can be quite huge and memory-consuming.

My question is: now I would like to catch up with the master branch of primme; however, I notice that the Python version is now wrapped using Cython. Is it possible to provide a user-defined matvec feature in a future release?

best

Installation Error

Could somebody explain how to fix this installation error?
As the author recommended in #27, I downloaded the package primme-3.1.0.tar.gz from https://pypi.org/project/primme/#files
[Error]
.........
ERROR: Failed building wheel for primme
..........
fatal error LNK1181: cannot open input file 'lapack.lib'
..........
ERROR: Command errored out with exit status 1: 'C:\Python36\python3.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\pygab\AppData\Local\Temp\pip-req-build-n0b8nlzv\setup.py'"'"'; file='"'"'C:\Users\pygab\AppData\Local\Temp\pip-req-build-n0b8nlzv\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\pygab\AppData\Local\Temp\pip-record-hyrlexyo\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Python36\Include\primme' Check the logs for full command output.

libprimme.so does not have a SONAME

Error: /usr/local/lib/python2.7/site-packages/_Primme.so is linked to /usr/local/lib/libprimme.so which does not have a SONAME.  math/primme needs to be fixed.

version 2.1

unable to install primme in matlab @ windows 10

Matlab2018b, Windows10, MEX configured to use 'MinGW64 Compiler (C)'

Running make.m gives me the error:

MEX cannot find library 'mwlapack', specified with the -l option.
MEX searched for a file with one of the following names:
libmwlapack.lib
mwlapack.lib
Verify the library name is correct. If the library is not
on the existing path, specify the path with the -L option.

I'm sure the libs are available, as the MathWorks mex examples run through fine.

Fails to compile with python3.8

Hi,
using make python CC="gcc"
I get tons of errors similar to

primme.c:353:11: error: too many arguments to function ‘PyCode_New’
  353 |           PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
      |           ^~~~~~~~~~
primme.c:57056:15: note: in expansion of macro ‘__Pyx_PyCode_New’
57056 |     py_code = __Pyx_PyCode_New(
      |               ^~~~~~~~~~~~~~~~
In file included from /usr/include/python3.8/compile.h:5,
                 from /usr/include/python3.8/Python.h:138,
                 from primme.c:39:
/usr/include/python3.8/code.h:122:28: note: declared here
  122 | PyAPI_FUNC(PyCodeObject *) PyCode_New(
      |                            ^~~~~~~~~~

I use gcc 9.2.0 and python 3.8.1

Build fails

In file included from primme/src/svds/primme_svds_interfacedouble.c:3:
In file included from primme/src/svds/primme_svds_interface.c:41:
In file included from primme/src/include/numerical.h:36:
primme/src/include/template.h:106:4: error: "An arithmetic should be selected, please define one of USE_DOUBLE, USE_DOUBLECOMPLEX, USE_FLOAT or USE_FLOATCOMPLEX."
#  error "An arithmetic should be selected, please define one of USE_DOUBLE, USE_DOUBLECOMPLEX, USE_FLOAT or USE_FLOATCOMPLEX."
   ^
In file included from primme/src/svds/primme_svds_interfacedouble.c:3:
In file included from primme/src/svds/primme_svds_interface.c:41:
In file included from primme/src/include/numerical.h:36:
In file included from primme/src/include/template.h:372:
/usr/include/malloc.h:3:2: error: "<malloc.h> has been replaced by <stdlib.h>"
#error "<malloc.h> has been replaced by <stdlib.h>"
 ^

Please note that the <malloc.h> header has been discontinued; see here for Linux: https://linux.die.net/man/3/malloc

OS: FreeBSD 11.2

PRIMME and GPU support

According to the documentation, GPU support is realized through the MAGMA library. I'm not really familiar with it, but from their website it seems they put more emphasis on CUDA support, and therefore on NVIDIA devices only.

Is there a way around your MAGMA interface so that I can use any GPU API for eigenvalue problems? My only goal is to use lower-level instructions in my matvec (possibly OpenCL, but that's still flexible) and avoid copying to/from the GPU in every call, so PRIMME should work with my vectors on the GPU.

I'm still a newbie in GPU programming, so maybe I'm missing something here. What is the main concept behind your GPU support? The given example is built specifically around MAGMA, and at this point I don't see what is behind this choice.

Bug: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject

Hi there,

I'm switching the package manager from pip + requirements.txt to poetry for my project, and I found that this led to a ValueError when importing primme, while everything was fine if I just installed primme with pip. Though I'm not really certain this issue is not poetry-related.

P.S. Both the pip and poetry installations were tested within Docker, though I think this is not related...

Thanks!

Error Logs

>>> import numpy as np
>>> import primme
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "primme.pyx", line 1, in init primme
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject

Environment

  • Docker: 20.10.14, build a224086
  • Base Docker image: python:3.10
  • Poetry: 1.2.0b1
  • numpy: 1.21.2
  • scipy: 1.8.0
  • primme: 3.2.1

Dockerfile

FROM python:3.10

ARG WORKDIR=/home
ENV PYTHONPATH="${PYTHONPATH}:$WORKDIR" \
    PATH="/root/.local/bin:$PATH"
WORKDIR $WORKDIR
COPY . $WORKDIR

RUN apt update && \
    apt-get install -y --no-install-recommends \
      gfortran libblas-dev liblapack-dev

RUN curl -sSL https://install.python-poetry.org | POETRY_VERSION=1.2.0b1 python3 - && \
    poetry config virtualenvs.create false --local && \
    poetry install

pyproject.toml

[tool.poetry]
name = "my_project"
version = "0.0.1"

[tool.poetry.dependencies]
python = "~3.10"
numpy = "1.21.2"
scipy = "1.8.0"
primme = "3.2.1"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

When finding the lowest few eigenvalues, the highest eigenvalues found are inaccurate

Hello,

When calculating the lowest few eigenvalues (by specifying which='SA') of a Hermitian matrix, I have found that the highest 5%–25% of the computed eigenvalues are inaccurate (see attached image). For this plot, I used a tolerance of tol=1e-10. It is my understanding that this roughly means that

| (λ_actual - λ_approx) / λ_actual | < 1e-10 ,

where λ_actual is the exact eigenvalue and λ_approx is the approximation of this eigenvalue.

Is this interpretation incorrect? Is this behavior expected?

Thanks!

(Software specifications: I am using PRIMME version 3.0.3 in Python 3.7.3.)


LAPACK calculated eigenvector signs differ between processes when running in parallel

When the small eigenvalue problem is solved in solve_H_d.c, the signs of the resulting eigenvectors are arbitrary. This is in itself not an issue, as the sign of an eigenvector is not well defined: if v is an eigenvector, so is -v. However, when running in parallel the signs of the eigenvectors can differ between the different processes. This is not a problem during the calculation, but if the complete eigenvectors are assembled in the end from the parts of the basis residing on the different processes, the sign of each part is essentially random, leading to wrong results.

Take as an example a 4x4 matrix for which an eigenvector is (a,b,c,d) or equivalently (-a,-b,-c,-d). Primme running with two processes might produce (a,b,-c,-d) as an eigenvector if the signs of eigenvectors of the small eigenvalue problem do not match between processes due to some randomness of the underlying dsyev.

This does not seem to happen on most platforms and LAPACK libraries, but we have observed the issue on Win32 with the MKL library: Diagonalizing the same matrix gives eigenvectors with different signs for the different processes running primme in parallel.

Here are three workarounds off the top of my head:

  1. Introduce an additional function pointer for the diagonalization instead of calling dsyev. The primme user can then set it and handle the problem outside of primme. This is what we are currently doing in ADF, where the dsyev call is replaced with ScaLAPACK (which ensures a consistent sign across all processes). Code for this workaround (only real matrices) can be found here.
  2. Communicate the eigenvectors from procID=0 to all other processes. This can be done (though not super efficiently) using the globalSumDouble function that has to be set for parallel runs anyway. Code for this workaround (only real matrices) can be found here.
  3. Come up with some sign convention for the eigenvectors of the small eigenvalue problem, something like "largest element positive" (a minimal sketch follows this list). This probably works for most cases, but I have doubts whether it can work in general: it seems like partitioning space into two parts, one of which would have the sign flipped, would always introduce some border at which the sign is random if there is even the slightest noise on the eigenvector ...
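
For concreteness, the sign convention of workaround 3 might look like the following sketch (a hypothetical helper, not PRIMME or ADF code; as noted above, it is fragile when the two largest magnitudes are nearly tied):

#include <math.h>
#include <stdio.h>

/* Make the largest-magnitude entry of a real eigenvector positive. */
static void fix_sign(double *v, int n) {
   int imax = 0;
   for (int i = 1; i < n; i++)
      if (fabs(v[i]) > fabs(v[imax])) imax = i;
   if (v[imax] < 0.0)
      for (int i = 0; i < n; i++) v[i] = -v[i];
}

int main(void) {
   double v[4] = {0.1, -0.9, 0.3, 0.2};  /* toy eigenvector */
   fix_sign(v, 4);
   printf("%g %g %g %g\n", v[0], v[1], v[2], v[3]);  /* prints -0.1 0.9 -0.3 -0.2 */
   return 0;
}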

We currently use primme 1.2 with workaround 1. in ADF. It would however be nice if this problem could be solved upstream.

PRIMME is not on CRAN

It appears PRIMME is gone from CRAN due to some unresolved problems. PRIMME is the driving engine of our SpectralTAD package and of the TADCompare package that depends on SpectralTAD. Both are broken for the moment.

Per Bioconductor package guidelines, all dependencies must satisfy "All packages must be available via Bioconductor or CRAN; users and the automated build system have no way to install packages from other sources.".

Are there any plans to put PRIMME back on CRAN? Or, any other advice?
CC @cresswellkg

Use of educated guesses

I'm using the CRAN version of R package {PRIMME}.
My use-case would be an iterative computation of SVD while removing a small set of variables (or samples).

E.g. imagine I have already computed

set.seed(1)
n <- 2e3
p <- 20e3
U <- matrix(0, n, 10); U[] <- rnorm(length(U))
V <- matrix(0, p, 10); V[] <- rnorm(length(V))
X <- tcrossprod(U, V) + 5 * matrix(rnorm(n * p), n, p)
system.time(svd2 <- PRIMME::svds(X, 10, isreal = TRUE))

But then, I want to recompute the SVD without, say, the first 100 variables; I tried

ind <- -(1:100)
X2 <- X[, ind]
u0 <- svd2$u
system.time(
  svd2.1 <- PRIMME::svds(X2, 10, isreal = TRUE))
system.time(
  svd2.2 <- PRIMME::svds(X2, 10, isreal = TRUE, u0 = u0, v0 = V[ind, ]))
system.time(
  svd2.3 <- PRIMME::svds(X2, 10, isreal = TRUE, u0 = u0, v0 = svd(V[ind, ])$u))
system.time(
  svd2.4 <- PRIMME::svds(X2, 10, isreal = TRUE, u0 = u0, 
                         v0 = crossprod(X2, sweep(u0, 2, svd2$d, '/'))))

You can see that the last v0 I tried is very close to the result (without accounting for the sign): plot(svd2.1$v, crossprod(X2, sweep(u0, 2, svd2$d, '/')), pch = 20).

All educated guesses result in a similar or larger computation time than using no educated guess at all.
Am I doing something wrong? Any advice on this would be much appreciated.

Parallel programming

Reading through the documentation of the C interface, it seems that parallelization is only supported for cases where the task is distributed among different processes. In my particular problem, the execution happens on a single machine; therefore, distribution among threads seems more effective (there's a lot to do in the initialization phase). Is there any good practice to achieve this, or is there a reason why you only support distribution among processes? I'm new to parallel programming; possibly I'm asking something trivial and/or nonsense.

One lib archive or two?

Hello,

I've been using v1.1 for years, and just tried out v1.2.1 today. I had some trouble linking until I noticed that make libz now splits routines between libprimme.a and libzprimme.a rather than putting everything into libzprimme.a as I was used to. I'm mentioning this in an issue rather than a pull request since I'm not sure whether it's a bug or a new design paradigm... However, the fact that ZSRC routines go into libprimme.a while the COMMONSRC routines go into libzprimme.a makes me suspect the former.

Another reason I didn't bother with a pull request is that it's a simple one-character fix if it is indeed something to be fixed: Change line 38 of PRIMMESRC/Makefile from
make -C ZSRC lib;
to
make -C ZSRC libz;
(and similarly for libd on line 28).

About 'maxiter' in Python Interface

In the Python interface's eigsh(), I wanted to use the maxiter parameter to instruct PRIMME to stop after reaching the specified number of outer iterations. I expected to get the eigenpairs computed up to that point, and the stats to know which of them are marked as converged via stats['hist']['nconv']. But adding the parameter raises a PrimmeError exception and does not return anything. Is it possible to get the eigenpairs (converged or not) that the routine has computed up to that point?

Feature Request: A different `convTest`

I have been working on a Python code to determine the solution of the Kohn-Sham equations, which requires self-consistent field (SCF) iteration routines. The code is based on QuantumESPRESSO's PWscf module, which uses the classical Davidson method to determine the eigenstates, and I have used PRIMME eigsh()'s PRIMME_GD_plusK method for the same.
On analyzing the performance of my code, I noticed that many more calls were made to the matrixMatvec function. The difference, I believe, is due to the way each code determines convergence: while PRIMME uses the 2-norm of the residual, the QE code uses the difference in the eigenvalues between iterations.

I think this convergence criterion is motivated by the following reasons:

  1. As this is an SCF iteration, the accuracy of the eigenvectors for the first few iterations may not be as important.
  2. The vector elements are Fourier components, while the SCF calculation's convergence involves comparing input and output quantities in real-grid. So, the 2-norm may not be an accurate measure of the error in this application.

I do not see a way, from the documentation, to construct a convTest that implements this. It requires knowing which eigenpair is being checked for convergence, which is not available in any interface. Although it may be possible to implement a convTest with associated memory and logic to infer the eigenpair from the input, the implementation would not be clean or reliable, especially for degenerate states.
I would like to know your thoughts regarding the discrepancy in performance observed here and the feasibility of the solution proposed above.
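
For reference, this is the shape of the user convergence test in the C interface, with the signature copied from primme.h of PRIMME 3.x (the body is a plain residual-norm test of our own, not the eigenvalue-difference criterion requested above, which indeed cannot be expressed here without knowing the eigenpair index):

#include "primme.h"

/* Note there is no eigenpair index argument, which is the limitation
 * discussed above. eval, evec, and rNorm may be NULL depending on the
 * context of the call, so a real test must guard against that. */
static void my_convtest(double *eval, void *evec, double *rNorm, int *isConv,
                        primme_params *primme, int *ierr) {
   *isConv = (rNorm != NULL &&
              *rNorm < 1e-4 * primme->stats.estimateLargestSVal);
   *ierr = 0;  /* success */
}

It would be installed with primme.convTestFun = my_convtest; before calling dprimme.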

Tests fail on FreeBSD

===>   py27-primme-2.1.5.20181010 depends on file: /usr/local/bin/python2.7 - found
[... long run of passing tests elided ...] F.FF................F.....F.F....FF.FF...FFFFFF.Assertion failed: (0 <= perm_[tempIndex] && perm_[tempIndex] < n), function permute_vecs_cprimme, file linalg/auxiliary.c, line 330.
*** Signal 6

The shared library needs -fPIC

Otherwise, the link fails:

/usr/bin/ld: eigs/auxiliary_eigsfloat.o: relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC

Uncounted matrix-vector multiplications with blocksize 1 in the beginning of a calculation

When calculating only a small number of eigenvectors using the JD_Olsen_plusK method (I didn't test others), primme seems to do a number of matrix-vector multiplications with blocksize 1 at the beginning of a calculation, which are not counted in stats.numMatvecs.

I noticed this because our routine that does the matrix-vector multiplication also prints some information about the progress of the calculation. At the beginning of every matrix-vector multiplication, it prints stats.numOuterIterations, stats.numMatvecs, and the number of converged eigenvectors (stored in initSize).

Here is some example output for the diagonalization of a 11664x11664 matrix for which the lowest 14 eigenvectors were calculated. maxBlockSize was set to 14, maxBasisSize was set to 140 and maxRestartSize was automatically set to 56 by primme.

Cycle:     0, MatVecs:      0 (BLK:  14), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     14 (BLK:   1), Conv:     0 /    14
Cycle:     0, MatVecs:     42 (BLK:  14), Conv:     0 /    14
Cycle:     1, MatVecs:     56 (BLK:  14), Conv:     0 /    14
Cycle:     2, MatVecs:     70 (BLK:  14), Conv:     0 /    14
Cycle:     3, MatVecs:     84 (BLK:  14), Conv:     0 /    14
Cycle:     4, MatVecs:     98 (BLK:  14), Conv:     0 /    14
Cycle:     5, MatVecs:    112 (BLK:  14), Conv:     0 /    14
Cycle:     6, MatVecs:    126 (BLK:  14), Conv:     0 /    14
Cycle:     7, MatVecs:    140 (BLK:  14), Conv:     0 /    14
Cycle:     8, MatVecs:    154 (BLK:  14), Conv:     0 /    14
Cycle:     9, MatVecs:    168 (BLK:   8), Conv:     0 /    14
Cycle:    10, MatVecs:    176 (BLK:   4), Conv:     0 /    14
Cycle:    11, MatVecs:    180 (BLK:   4), Conv:     0 /    14
Cycle:    12, MatVecs:    184 (BLK:   4), Conv:     0 /    14
Cycle:    13, MatVecs:    188 (BLK: ---), Conv:    14 /    14

(Note that the last line was printed after primme finished.)

So after the first matrix-vector multiplication with blocksize 14 there is a series of matrix-vector multiplications with blocksize 1 which do not count towards the total number of matrix-vector multiplications.

I found this rather surprising. It also seems quite inefficient to me in cases where there is a certain constant-time overhead in starting a matrix-vector multiplication, which is only amortized with a large blocksize. Is this working as intended?

MATLAB primme_svds/record_history error

An error occurs in MATLAB primme_svds() with the following minimal working example:

>> A = randn(1000,1000);
>> opts = struct();
>> opts.tol = 1e-10;
>> [U,S,V,R,STAT,HIST]=primme_svds(A,10,'L',opts);
Operands to the || and && operators must be convertible to logical scalar values.
Error in primme_svds/record_history (line 635)
      if stage == 0 && methodStage2 ~= 0
Error in primme_svds>@(a0,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13)record_history(a0,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13) (line 528)
               @(a0,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13)record_history(a0,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13));
Error in primme_svds (line 560)
      [ierr, svals, norms, svecsl, svecsr] = primme_mex(xprimme_svds, init{1}, ... 

It appears methodStage2 returns a vector, and the if statement cannot parse this (MATLAB 2020a). A workaround is to change methodStage2 ~= 0 to any(methodStage2 ~= 0).

installation error

The requirements have already been satisfied for future, numpy, and scipy, and I have successfully installed mkl through conda, but it reports an error when building the wheel for primme (setup.py), with error: subprocess-exited-with-error

(Attached: screenshot of the error output.)

int32 / int64 test in primme.h

Hello,
I noticed an error when using 32-bit integers, caused by the test on PRIMME_INT_SIZE in the file primme.h:

l.74 : #elif PRIMME_INT == 32 should probably be #elif PRIMME_INT_SIZE == 32

Regards,
Vincent Le Bris

Python `primme.eigsh` parallelization support

I have a huge symmetric real sparse matrix for which I need to solve an eigenproblem. I am using Python 3.

However, among the input parameters of primme.eigsh, there is nothing to specify parallelization. Does it automatically support parallel computing? Or is there somewhere else I can enforce parallel computing when running primme.eigsh? Thanks.

primme_c.c:372: wrapper_dprimme: Assertion `__err==0' failed

I get the error above if I choose the PRIMME_LOBPCG_OrthoBasis or PRIMME_LOBPCG_OrthoBasis_Window method.
In the case of PRIMME_LOBPCG_OrthoBasis, this is the output of primme_display_params right before the call to dprimme:

// ---------------------------------------------------
// primme configuration
// ---------------------------------------------------
primme.n = 1155
primme.nLocal = -1
primme.numProcs = 1
primme.procID = 0

// Output and reporting
primme.printLevel = 5

// Solver parameters
primme.numEvals = 5
primme.aNorm = 0.000000e+00
primme.BNorm = 0.000000e+00
primme.invBNorm = 0.000000e+00
primme.eps = 1.000000e-04
primme.maxBasisSize = 3
primme.minRestartSize = 1
primme.maxBlockSize = 1
primme.maxOuterIterations = 2147483647
primme.maxMatvecs = 100000
primme.target = primme_closest_geq
primme.projection.projection = primme_proj_default
primme.initBasisMode = primme_init_random
primme.numTargetShifts = 1
primme.targetShifts = -1.000000e+00
primme.dynamicMethodSwitch = 0
primme.locking = 0
primme.initSize = 0
primme.numOrthoConst = 0
primme.ldevecs = -1
primme.ldOPs = -1
primme.iseed = -1 -1 -1 -1
primme.restarting.maxPrevRetain = 1

// Correction parameters
primme.correction.precondition = 0
primme.correction.robustShifts = 0
primme.correction.maxInnerIterations = 0
primme.correction.relTolBase = 0
primme.correction.convTest = primme_adaptive_ETolerance

// projectors for JD cor.eq.
primme.correction.projectors.LeftQ = 0
primme.correction.projectors.LeftX = 0
primme.correction.projectors.RightQ = 0
primme.correction.projectors.SkewQ = 0
primme.correction.projectors.RightX = 1
primme.correction.projectors.SkewX = 0
// ---------------------------------------------------

Are my settings wrong or is this a bug?

Cannot compile ex_eigs_dseqf90.f90 example

Hello,
I'm trying to compile the ex_eigs_dseqf90.f90 example program. I've tried to compile it using
ifort -I../include -c ex_eigs_dseqf90.f90 -o ex_eigs_dseqf90.o
but I got numerous compilation errors.
I believe there are some issues with the *_f90.inc files.

Compiler: ifort version 19.0.3.199
OS: SLES 15

printLevel

I am using primme.eigsh in Python. For a specific case, I encountered the following error:

PrimmeError: PRIMME error -40: some LAPACK function performing a factorization returned an error code; set 'printLevel' > 0 to see the error code and the call stack

I want to know what the problem with my matrix is. What is the exact way to set printLevel in the Python version of PRIMME?

Installing R package from source fails

I am having an installation issue.

On R 4.1.0 and 4.2.0 on CentOS 7.6.1810 installing PRIMME 3.2-3 fails with the following error:

* installing *source* package ‘PRIMME’ ...
** package ‘PRIMME’ successfully unpacked and MD5 sums checked
** using staged installation
** libs
g++ -std=gnu++14 -I"/hpc/packages/minerva-centos7/R/4.2.0/lib64/R/include" -DNDEBUG  -I'/hpc/packages/minerva-centos7/rpackages/4.2.0/site-library/Rcpp/include' -I'/hpc/packages/minerva-centos7/R/4.2.0/lib64/R/library/Matrix/include' -I/usr/local/include  -I../inst/include  -DPRIMME_INT_SIZE=0 -DF77UNDERSCORE -DUSE_XHEEV -DUSE_ZGESV -DUSE_XHEGV -DPRIMME_INT_SIZE=0 -DPRIMME_WITHOUT_FLOAT -DPRIMME_BLAS_RCOMPLEX -fpic  -O3 -fopenmp  -c RcppExports.cpp -o RcppExports.o
gcc -I"/hpc/packages/minerva-centos7/R/4.2.0/lib64/R/include" -DNDEBUG  -I'/hpc/packages/minerva-centos7/rpackages/4.2.0/site-library/Rcpp/include' -I'/hpc/packages/minerva-centos7/R/4.2.0/lib64/R/library/Matrix/include' -I/usr/local/include   -fpic  -I/hpc/packages/minerva-centos7/pcre2/10.35/include -O3 -fopenmp  -c init.c -o init.o
g++ -std=gnu++14 -I"/hpc/packages/minerva-centos7/R/4.2.0/lib64/R/include" -DNDEBUG  -I'/hpc/packages/minerva-centos7/rpackages/4.2.0/site-library/Rcpp/include' -I'/hpc/packages/minerva-centos7/R/4.2.0/lib64/R/library/Matrix/include' -I/usr/local/include  -I../inst/include  -DPRIMME_INT_SIZE=0 -DF77UNDERSCORE -DUSE_XHEEV -DUSE_ZGESV -DUSE_XHEGV -DPRIMME_INT_SIZE=0 -DPRIMME_WITHOUT_FLOAT -DPRIMME_BLAS_RCOMPLEX -fpic  -O3 -fopenmp  -c primmeR.cpp -o primmeR.o
make[1]: Entering directory `/tmp/RtmpMyo4pV/R.INSTALL3165311191421/PRIMME/src/primme'
g++ -std=gnu++14 -I/hpc/packages/minerva-centos7/R/4.2.0/lib64/R/include -DNDEBUG -O3 -fopenmp  -fpic -I../inst/include  -DPRIMME_INT_SIZE=0 -DF77UNDERSCORE -DUSE_XHEEV -DUSE_ZGESV -DUSE_XHEGV -DPRIMME_INT_SIZE=0 -DPRIMME_WITHOUT_FLOAT -DPRIMME_BLAS_RCOMPLEX  /hpc/packages/minerva-centos7/proj/8.1.0/include:/hpc/packages/minerva-centos7/gdal/3.3.1/include -I../../inst/include -Iinclude -c eigs/auxiliary_eigs.cpp -o eigs/auxiliary_eigs.o
g++: error: /hpc/packages/minerva-centos7/proj/8.1.0/include:/hpc/packages/minerva-centos7/gdal/3.3.1/include: No such file or directory
make[1]: *** [eigs/auxiliary_eigs.o] Error 1
make[1]: Leaving directory `/tmp/RtmpMyo4pV/R.INSTALL3165311191421/PRIMME/src/primme'
make: *** [primme/libprimme.a] Error 2
ERROR: compilation failed for package ‘PRIMME’
* removing ‘/hpc/users/hoffmg01/.Rlib/R_420/PRIMME’
* restoring previous ‘/hpc/users/hoffmg01/.Rlib/R_420/PRIMME’

The downloaded source packages are in
	‘/tmp/Rtmp9xHZoS/downloaded_packages’
Warning message:
In install.packages("PRIMME") :
  installation of package ‘PRIMME’ had non-zero exit status

I can fix this manually in bash by running g++ and replacing

/hpc/packages/minerva-centos7/proj/8.1.0/include:/hpc/packages/minerva-centos7/gdal/3.3.1/include 

with

-I/hpc/packages/minerva-centos7/proj/8.1.0/include:/hpc/packages/minerva-centos7/gdal/3.3.1/include 

Just adding -I fixed it. But then I had to write a bash script to run g++ outside R.

Can you fix this directly in a makefile? I couldn't figure out how.

Thanks,
Gabriel

Factors affecting the performance of PRIMME

I would like to ask a question related to performance (run time). It is undoubtedly true that the number of non-zero elements, i.e. the sparsity, will influence the running time. Besides that, are there any other factors (e.g., the rank of the matrix)?

libprimme.so needs to be linked to blas and lapack

$ ldd -a ./work/primme-d089bec/lib/libprimme.so
./work/primme-d089bec/lib/libprimme.so:
	libc.so.7 => /lib/libc.so.7 (0x800823000)

$ nm -a ./work/primme-d089bec/lib/libprimme.so
...
                 U zaxpy_
                 U zcopy_
                 U zgemm_
...

There are unresolved LAPACK/BLAS symbols.
