fommil / netlib-java
:rocket: High Performance Linear Algebra (low level)
License: Other
Original author: [email protected] (February 18, 2012 19:55:09)
What steps will reproduce the problem?
Use netlib-java in an AppEngine environment, or any other Java runtime where the customary system properties are not set.
What is the expected output? What do you see instead?
Architecture should be identified as "unknown" and fall back to pure java routines. Instead, an NPE is thrown in JNIMethods.java because System.getProperty("os.arch") is null.
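A minimal sketch of the fix being described, assuming a hypothetical helper (class and method names invented for illustration, not the actual netlib-java code): normalise the "os.arch" property defensively so a null value falls through to "unknown" and the pure-Java path, instead of throwing an NPE.

```java
// Hypothetical sketch: read "os.arch" defensively so restricted runtimes
// (e.g. AppEngine) that return null do not cause an NPE.
public final class ArchDetection {

    // Accepts the raw property value so the null case is easy to exercise.
    static String normalise(String rawArch) {
        return (rawArch == null) ? "unknown" : rawArch.toLowerCase();
    }

    static String detectArch() {
        return normalise(System.getProperty("os.arch"));
    }
}
```

An "unknown" architecture then simply selects the pure-Java routines during library loading.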
What version of the product are you using? On what operating system?
0.9.3
Please provide any additional information below.
Will post a patch as soon as it's been tested :-)
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=12
Original author: [email protected] (April 22, 2011 22:12:03)
Like it says on the tin - let's revamp the auto detection behaviour.
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=8
#21 would be wonderful, but as a workaround it would be nice to have a child project that can be run easily to get some performance results for natives vs Java on simple routines like ddot.
Original author: [email protected] (May 14, 2009 19:17:49)
Hi,
I'm trying to use netlib-java with native CBLAS (built by ATLAS) and with
CLAPACK compiled with the same BLAS, all on a 64-bit linux system.
The netlib-java-->CBLAS interface all worked really easily and will speed
up our project a great deal so firstly thanks for your work on netlib-java
and thanks for sharing it.
I've managed to build CLAPACK-3.0 but all the functions take pointers to
64-bit integers whereas the netlib-java autogenerated JNI code passes
pointers to 32-bit ints. The compiler gives a warning about this and when
run, the LAPACK functions complain about silly numbers (picking up the
random memory in the high-end of the 64-bit vals).
The F2C integer is typedef'ed as a 'long int' which is 64-bit here. The
f2c.h that comes inside netlib-java has the same definition.
I tried putting some conversion code in one of the JNI functions, which made the call work perfectly.
Has anyone else managed to interface with CLAPACK on a 64-bit system? Is there a way to make the JavaGenerator class generate JNI code that does the conversion for every occurrence of a 'jint'?
Thanks for any help you can give,
Oliver
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=2
or contact the performance people to use their code for testing.
Original author: [email protected] (March 11, 2009 22:09:09)
A minor annoyance. For javah, the JLAPACK_JNI_CP should point to:
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=1
A more complete set of performance tests (top 2 most popular BLAS/LAPACK routines for double/float precision) would make for better presentation of results and value-add for native implementations.
One school of thought on configuration is that if the configuration is not honoured, then the program should fail fast. The other school of thought says to fall back to a sensible default.
Normally, I agree with the former... but for netlib-java I think we should fall back to the Java implementations if the native code fails to load. This will not catch native code that loads but fails to execute (e.g. loading ARM HF on an ARM SF machine).
Should be easy: store a list of loaded libraries and compare parameters against it; exit if any are found to match.
Without this, only one lib can be loaded per JVM.
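The two ideas above can be sketched together in a small hypothetical loader (names invented, not the real netlib-java API): try the native library, remember what was loaded in this JVM, and report failure so the caller can fall back to the pure-Java routines.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: attempt to load a native library, remember what was
// loaded so a repeat request is answered from the cache, and signal failure
// (instead of crashing) so the caller can fall back to pure Java.
public final class NativeLoader {

    private static final Set<String> loaded = new HashSet<>();

    /** @return true if the native library is now available in this JVM. */
    static synchronized boolean tryLoad(String libName) {
        if (loaded.contains(libName)) return true; // already loaded here
        try {
            System.loadLibrary(libName);
            loaded.add(libName);
            return true;
        } catch (UnsatisfiedLinkError | SecurityException e) {
            // e.g. AppEngine throws SecurityException from loadLibrary;
            // returning false lets the caller pick the pure-Java path.
            return false;
        }
    }
}
```

As noted above, this only catches native code that fails to load, not code that loads but fails to execute.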
Disaster... the Ubuntu 64 bit build does not work in Debian 64 bit.
Original author: jzaugg (December 07, 2010 11:22:15)
What steps will reproduce the problem?
"pool-7-thread-1" daemon prio=2 tid=0x28904000 nid=0x254c runnable [0x2995f000]
java.lang.Thread.State: RUNNABLE
at org.netlib.lapack.Dlarfg.dlarfg(lapack.f)
at org.netlib.lapack.Dgebd2.dlarfg_adapter(lapack.f)
at org.netlib.lapack.Dgebd2.dgebd2(lapack.f)
at org.netlib.lapack.Dgebrd.dgebrd(lapack.f)
at org.netlib.lapack.Dgesdd.dgesdd(lapack.f)
at scalala.tensor.dense.JLAPACK_LAPACKkernel.gesdd(Numerics.scala:86)
at scalala.library.LinearAlgebra$class.svd(LinearAlgebra.scala:96)
at scalala.Scalala$.svd(Scalala.scala:61)
"pool-5-thread-1" daemon prio=2 tid=0x28247400 nid=0x1048 runnable [0x298bf000]
java.lang.Thread.State: RUNNABLE
at org.netlib.lapack.Dlascl.dlascl(lapack.f)
at org.netlib.lapack.Dgesdd.dgesdd(lapack.f)
at scalala.tensor.dense.JLAPACK_LAPACKkernel.gesdd(Numerics.scala:86)
at scalala.library.LinearAlgebra$class.svd(LinearAlgebra.scala:96)
at scalala.Scalala$.svd(Scalala.scala:61)
Analysis
Seems similar to this discussion [1]. Perhaps the library needs an init() call to perform thread-unsafe initialization.
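One way to avoid an explicit init() call, sketched here purely as an illustration (the holder class and its contents are invented, not netlib-java code), is the class-holder idiom: the JVM runs a nested class's static initialiser exactly once, under the class-initialisation lock, even with many concurrent first callers.

```java
// Hypothetical sketch of thread-safe one-time initialisation using the
// class-holder idiom, as an alternative to an explicit init() call.
public final class LapackHolder {
    private LapackHolder() {}

    private static final class Holder {
        // The JVM guarantees this assignment runs exactly once, even when
        // multiple threads race to make the first call.
        static final Object INSTANCE = expensiveInit();
    }

    static Object instance() { return Holder.INSTANCE; }

    private static Object expensiveInit() {
        return new Object(); // placeholder for the real setup work
    }
}
```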
What version of the product are you using? On what operating system?
mtj-0.9.9
Windows XP
Java 1.6
Please provide any additional information below.
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=5
not something I plan to do without funding
I checked out netlib-java master and succeeded in building the parts maven builds, but attempting to build the native subdirectory (using ./configure and make) fails with:
Error: Could not find class file for 'org.netlib.blas.NativeBLAS'.
and I have no idea where that class is supposed to come from (the only reference to NativeBLAS I can find is in native/Makefile itself).
Original author: [email protected] (February 22, 2012 21:53:20)
I cannot get the JNI code to build correctly on Ubuntu 11.10. I do have some binary .so files around from a year or two ago that do work if I drop them in, but it would be nice to rebuild them if needed.
I checked out the netlib-java svn trunk, ran
ant clean generate compile package
and then tried to configure and make in jni/
(first I have to do chmod a+x configure, the execute bit is not set on that script in your svn). This failed first trying to find clapack.h, so I changed f2j_jni.h to
Then the build succeeds (though with a lot of "expected ‘integer *’ but argument is of type ‘jint *’" warnings). I get new .so files. But when I try to use them I get
symbol lookup error: /usr/local/java/Linux_i386/jdk1.6.0_17/jre/lib/i386/libjnilapack-linux-x86.so: undefined symbol: dlamch_
and ldd looks fishy:
ldd libjnilapack-linux-x86.so
linux-gate.so.1 => (0xb774d000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7516000)
/lib/ld-linux.so.2 (0xb774e000)
Whereas on the same system if I do ldd on the previously compiled so:
ldd libjniblas-linux-x86.so (the good one)
linux-gate.so.1 => (0xb7863000)
libblas.so.3gf => /usr/lib/libblas.so.3gf (0xb7564000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73e8000)
libgfortran.so.3 => /usr/lib/i386-linux-gnu/libgfortran.so.3 (0xb72e5000)
libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb72c7000)
libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0xb72ac000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb7282000)
/lib/ld-linux.so.2 (0xb7864000)
libquadmath.so.0 => /usr/lib/i386-linux-gnu/libquadmath.so.0 (0xb720e000)
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=13
GPU acceleration can be utilised to greater effect by batching traditional routines
http://docs.nvidia.com/cuda/cuda-samples/index.html#batchcublas
we could create an additional API to support this.
The old Google Code project page (http://code.google.com/p/netlib-java/, which now has a link to here) states that the project is licensed under the 3-clause BSD license (http://opensource.org/licenses/BSD-3-Clause). The new page here on Github doesn't. The project README doesn't say what license the code is available under, and there isn't a LICENSE file in the repository.
Netlib-java is a really useful project, and making it clear that it is available under the BSD license would help to continue to make adopting it a clear no-brainer for those of us with day jobs in the world of proprietary software development.
The new GPU libraries (cuBLAS, clBLAS) do not actually implement CBLAS: they implement BLAS with non-standard prefixes (and I'm unsure about use from Fortran... certainly not binary compatible with LAPACK). Although this allows users to specifically target the GPU, it is impractical to expect users - and the many tiers of middleware - to make source code changes.
In addition, GPU acceleration is actually slower for small arrays (unless batched, which is a non-trivial departure from the BLAS API).
A more practical solution would be to create a libblas (implementing BLAS and then wrapping with CBLAS) that delegates to the correct implementation at runtime. The deciding factors in choosing an implementation (ATLAS vs clBLAS) for each routine could be calculated empirically on a per-machine basis and saved into a config file that allows the delegating lib to decide based on its parameters (e.g. array size).
From a C perspective, I do not know how to load a library containing methods of the same name as those we are implementing. There might need to be some dynamic library loading jiggery pokery.
From a Java perspective, this library would look identical to libblas and therefore no code changes would be necessary. Note that we cannot workaround the issue of name collisions, because the native LAPACK and ARPACK need to be able to call correctly named BLAS.
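The delegation idea can be sketched in Java terms (all names here are invented for illustration; the real proposal is a native libblas): route each call to one of two implementations based on a per-machine crossover threshold, which could be loaded from the config file mentioned above.

```java
// Hypothetical sketch: a delegating ddot that routes small problems to a
// pure-Java implementation and large ones to an accelerated one, using an
// empirically measured crossover size.
public final class DelegatingBlas {

    interface Ddot {
        double ddot(int n, double[] x, int incx, double[] y, int incy);
    }

    private final Ddot java;
    private final Ddot accelerated;
    private final int threshold; // crossover size, measured per machine

    DelegatingBlas(Ddot java, Ddot accelerated, int threshold) {
        this.java = java;
        this.accelerated = accelerated;
        this.threshold = threshold;
    }

    double ddot(int n, double[] x, int incx, double[] y, int incy) {
        // Small arrays: JNI/GPU overhead dominates, so stay in Java.
        Ddot impl = (n < threshold) ? java : accelerated;
        return impl.ddot(n, x, incx, y, incy);
    }
}
```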
Original author: [email protected] (May 20, 2011 18:13:07)
What steps will reproduce the problem?
What is the expected output? What do you see instead?
It is expected to work.
What version of the product are you using? On what operating system?
0.9.3, OSX 10.6.7
Please provide any additional information below.
First, I changed configure to point to gfortran as opposed to g95 as my Fortran compiler.
This generates a Makefile.incl which I think has the incorrect
JLAPACK_JNI_CP. I changed this to point to:
JLAPACK_JNI_CP=../netlib-java-0.9.3.jar:../lib/f2j/arpack-combined-0.1.jar
However, even now after typing make, I am receiving the following error. (This is most likely my fault, but I did not see a discussion list to email.)
gcc -Wall -I/System/Library/Frameworks/JavaVM.framework/Home/include -I/System/Library/Frameworks/vecLib.framework/Headers -fPIC -fno-common -c org_netlib_blas_NativeBLAS.c
org_netlib_blas_NativeBLAS.c: In function ‘Java_org_netlib_blas_NativeBLAS_dgbmv’:
org_netlib_blas_NativeBLAS.c:102: error: incompatible type for argument 2 of ‘cblas_dgbmv’
org_netlib_blas_NativeBLAS.c: In function ‘Java_org_netlib_blas_NativeBLAS_dgemm’:
org_netlib_blas_NativeBLAS.c:120: error: incompatible type for argument 2 of ‘cblas_dgemm’
org_netlib_blas_NativeBLAS.c:120: error: incompatible type for argument 3 of ‘cblas_dgemm’
org_netlib_blas_NativeBLAS.c: In function ‘Java_org_netlib_blas_NativeBLAS_dgemv’:
Thanks in advance as I am not sure what else I am missing.
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=9
Original author: [email protected] (July 21, 2010 23:25:41)
What steps will reproduce the problem?
cd jni/ ; sh configure ; make -j4
Parallel make jobs build the dependencies of each target in parallel. For the .c files that require .h files, the .c.o target does not list the .h file as a dependency. The following change provides a reasonable temporary fix:
27c27
> %.o: %.c %.h
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=4
Apparently having a plugin in the reactor is not supported:
http://stackoverflow.com/a/18441860/1041691
(will involve refactoring out the shared dependency code)
for OS X this would be veclib, for Linux this would be /usr/lib/libblas.so etc (allowing ATLAS to be swapped in).
The biggest stumbling block here is working out if all the routines are implemented by the system library or not. We might have to pick the lowest common denominator for ease of packaging.
Original author: [email protected] (September 26, 2011 18:18:54)
What steps will reproduce the problem?
What is the expected output? What do you see instead?
Should fall back to JLAPACK, instead throws SecurityException:
Caused by: java.lang.SecurityException: Google Apphosting does not support System.loadLibrary
at com.google.appengine.runtime.Request.process-70c25d6b1db4cc9c(Request.java)
at java.lang.System.loadLibrary(System.java:152)
at org.netlib.lapack.NativeLAPACK.<init>(NativeLAPACK.java:67)
at org.netlib.lapack.NativeLAPACK.<clinit>(NativeLAPACK.java:58)
... 86 more
What version of the product are you using? On what operating system?
0.9.3
Please provide any additional information below.
Many thanks, super useful project!
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=11
with #27 we're going to need a list of implementations to try.
We should also probably list them in "sensible" order and have the best ones first. Most clients are going to do that anyway and we always have Java fallback.
Original author: [email protected] (April 03, 2011 20:33:13)
First, thanks for this project! I'm making use of it in scalala http://groups.google.com/group/scalala to great effect.
However, in order to support effective memory sharing in dense data structures, we need to incorporate array offsets. This is actually already part of the f2j code -- every array is followed by an _offset argument -- and could also be included in the generated JNI code. However, it's currently removed in your generated wrappers.
Concretely, JBLAS.java provides:
public abstract void dscal(int n, double da, double[] dx, int incx)
Which wraps a call to:
org.netlib.blas.Dscal.dscal(n, da, dx, 0, incx);
What you lose when you set the offset argument to 0 is the ability to effectively slice dense data structures without copying memory. The classic example of this is something like scaling a column or row in a matrix. The matrix is usually stored column-major in an array of doubles, say double[] data of size m x n. If the offset were provided, we could scale the i'th row by 2.0 with:
dscal(n, 2.0, data, i, m)
Or we could scale the j'th column (which, in column-major storage, starts at element m*j) with:
dscal(m, 2.0, data, m*j, 1)
Changing the API to include this offset should be pretty trivial on the F2J side, and fairly straightforward on the native side, too -- it's just adding offset*sizeof(element) to the pointer passed to the native library call.
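A pure-Java sketch of the proposed offset-taking signature (a hypothetical illustration, not the actual generated code) shows why this is cheap: the offset is just the starting index of the strided walk.

```java
// Hypothetical pure-Java dscal with an explicit offset argument, showing
// how offsets allow scaling a row or column of a column-major matrix
// in place, without copying.
public final class OffsetDscal {

    static void dscal(int n, double da, double[] dx, int offset, int incx) {
        // Visit n elements starting at 'offset', stepping by 'incx'.
        for (int i = 0, ix = offset; i < n; i++, ix += incx) {
            dx[ix] *= da;
        }
    }
}
```

For an m x n column-major matrix stored in data, dscal(n, da, data, i, m) scales row i and dscal(m, da, data, m*j, 1) scales column j.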
Of course, it would change the API, but I think in a good way. And if you really want backwards compatibility, you could generate two versions of the API - one with the _offsets and the other without.
What do you think?
dan
What version of the product are you using? On what operating system?
0.9.2
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=7
during new benchmarks:
Parameter 2 to routine cblas_dgemm was incorrect
Illegal TransA setting, -1
Copy cibuddy style packaging and make native LAPACK trivial.
Original author: [email protected] (January 09, 2013 10:50:35)
Hi,
I set up the project, modified headers to use mkl.h, but I am just getting hundreds of various errors. What else do I have to do?
Is this even possible? Or do I have to use GCC? Is MinGW + MSYS enough or do I need a CYGWIN?
The MSVC project is attached.
Regards
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=16
I'm seeing this when building on Win64:
com_github_fommil_netlib_NativeRefLAPACK.c:60:3: warning: passing argument 12 of 'LAPACKE_dbdsdc_work' from incompatible pointer type [enabled by default]
int returnValue = LAPACKE_dbdsdc_work(LAPACK_COL_MAJOR, jni_uplo[0], jni_compq[0], n, jni_d, jni_e, jni_u, ldu, jni_vt, ldvt, jni_q, jni_iq, jni_work, jni_iwork);
^
must be an incompatibility in the jint definition in JNI.
Original author: [email protected] (October 16, 2012 19:50:13)
https://issues.sonatype.org/browse/OSSRH-4523
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=14
to minimise the class clutter for devs in IDEs
mvn -Dcom.github.fommil.netlib.BLAS=com.github.fommil.netlib.NativeRefBLAS -Dcom.github.fommil.netlib.LAPACK=com.github.fommil.netlib.NativeRefLAPACK -Dtest=com.github.fommil.netlib.BLASTest#offsets test
gives
offsets(com.github.fommil.netlib.BLASTest) Time elapsed: 0.014 sec <<< ERROR!
java.lang.UnsatisfiedLinkError: com.github.fommil.netlib.NativeRefBLAS.dscal_offsets(ID[DII)V
at com.github.fommil.netlib.NativeRefBLAS.dscal_offsets(Native Method)
at com.github.fommil.netlib.NativeRefBLAS.dscal(NativeRefBLAS.java:182)
at com.github.fommil.netlib.BLASTest.offsets(BLASTest.java:50)
Passing the full path is a poor workaround for the "libgfortran.so not found" issue. (One of the reasons it is a very bad idea is that it hardcodes the soversion 3.) You're missing the unversioned libgfortran.so symlink. On Ubuntu, you need to install the lib32gfortran-4.7-dev package, which installs a /usr/lib/gcc/x86_64-linux-gnu/4.7/32/libgfortran.so → /usr/lib32/libgfortran.so.3 symlink. (That package also depends on lib32gfortran3, so you need to list only lib32gfortran-4.7-dev in your instructions.) Then -m32 -lgfortran should just work. (-L/usr/lib32 should not be needed.)
(Note: The package name lib32gfortran-4.7-dev is for Ubuntu 13.04 (Raring Ringtail) and newer. In Ubuntu 12.10 (Quantal Quetzal), the package is named gfortran-4.7-multilib instead. But otherwise, the instructions are the same.)
More detailed explanation: lib32gfortran3 is only the runtime package. On GNU/Linux, there are 2 packages for a library: a runtime package, which contains only the versioned library, and a development package, which contains the unversioned symlink. To run something compiled against the library, only the runtime package is needed, but to compile something against the library, you need the development package (and also the runtime package, but normally, the development package depends on the runtime package).
And another reason why hardcoding /usr/lib32/libgfortran.so.3 is a very bad idea is that it is distro-specific and even host-platform-specific: it works on your 64-bit Ubuntu, but it will not work on distros that put the 32-bit library at /usr/lib/libgfortran.so.3 (the 64-bit version on 64-bit Fedora goes to /usr/lib64/libgfortran.so.3; this setup, called "multilib", allows using 32-bit RPMs unmodified on 64-bit Fedora), nor on multiarch Debian/Ubuntu, where it lives at /usr/lib/i386-linux-gnu/libgfortran.so.3 (the 64-bit version will go to /usr/lib/x86_64-linux-gnu/libgfortran.so.3; this setup is called "multiarch" and, similarly to Fedora's "multilib", allows using 32-bit debs unmodified on 64-bit Debian/Ubuntu; many packages in Ubuntu have already been converted), nor on systems where libraries are not installed under /usr. So it is very unwise to hardcode absolute paths for libraries on GNU/Linux.
(from Keith Seymour)
The general build procedure is outlined here:
http://icl.cs.utk.edu/f2j/faq/index.html#322
But in your case, there are a few changes required.
Since you want to include the full blas/lapack with your jar file, you would first build jlapack:
cd jlapack-3.1.1
make lib
Then build jarpack, initially using the default subset of blas/lapack:
cd jarpack
make arpack_lib
Copy the full blas/lapack jars to the appropriate arpack directories:
cp ../jlapack-3.1.1/src/blas/blas.jar BLAS/
cp ../jlapack-3.1.1/src/lapack/lapack.jar LAPACK/
cp ../jlapack-3.1.1/src/util/f2jutil.jar .
Then make the jar file:
make combined_jar
The jar file should be in the "jar_temp" subdirectory.
I just ran a quick test and this procedure seemed to work, but let me know if you hit a snag.
I've had to ask about this on the macports mailing list
https://lists.macosforge.org/pipermail/macports-users/2013-August/033305.html
http://mac-os-forge.2317878.n4.nabble.com/gcc-and-universal-binaries-td226900.html
we have two BLAS and one LAPACK benchmark... we should have an ARPACK one as well. Perhaps the default diagonalisation of a large sparse matrix would be useful.
Looks like cross compiling for linux (arm/32/64) and windows (32/64) might be a reality at some point:
The "select" argument is an external function whose definition looks like this in Fortran:
LOGICAL FUNCTION DSLECT( ZR, ZI )
DOUBLE PRECISION ZI, ZR
In the Java translation, the code uses reflection to get the first method (under the assumption that it was f2j-generated).
Native support for this could be tricky.
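The reflective dispatch described above can be sketched as follows (a hypothetical illustration with invented names, not the actual f2j-generated code): locate a (double, double) -> boolean method on the supplied object and invoke it as the select predicate.

```java
import java.lang.reflect.Method;

// Hypothetical sketch of reflective "select" dispatch: find a method with
// the LOGICAL FUNCTION(ZR, ZI) shape on the user's object and call it.
public final class SelectAdapter {

    static boolean invokeSelect(Object selectFn, double zr, double zi) {
        for (Method m : selectFn.getClass().getDeclaredMethods()) {
            Class<?>[] p = m.getParameterTypes();
            if (m.getReturnType() == boolean.class
                    && p.length == 2
                    && p[0] == double.class && p[1] == double.class) {
                try {
                    m.setAccessible(true);
                    return (Boolean) m.invoke(selectFn, zr, zi);
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        throw new IllegalArgumentException(
                "no (double, double) -> boolean method found");
    }
}
```

Matching on the signature rather than blindly taking the first declared method is slightly more robust, since getDeclaredMethods makes no ordering guarantee.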
Original author: [email protected] (July 21, 2010 23:21:37)
What steps will reproduce the problem?
The configure file sets the JLAPACK_JNI_CP to ../netlib-java-0.9.0.jar, even though the file name is ../netlib-java.0.9.1.jar
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=3
Original author: [email protected] (October 16, 2012 19:54:27)
The netlib-java generation code was all copied out into Paranamer's JavadocParanamer; however, the original implementation remains here.
This RFE is to depend on Paranamer and use the factored out code, greatly simplifying this codebase.
Original issue: http://code.google.com/p/netlib-java/issues/detail?id=15