
librfn's Introduction

librfn: Rectified Factor Networks

Rectified Factor Networks (RFNs) are an unsupervised technique that learns a non-linear, high-dimensional representation of its input. The underlying algorithm was published in

Rectified Factor Networks, Djork-Arné Clevert, Andreas Mayr, Thomas Unterthiner, Sepp Hochreiter, NIPS 2015.

librfn is implemented in C++ and can be easily integrated into existing code bases. It also includes a high-level Python wrapper for ease of use. The library can run in either CPU or GPU mode; for larger models, the GPU mode offers substantial speedups and is the recommended mode.

librfn has been written by Thomas Unterthiner and Djork-Arné Clevert. Sparse matrix support was added by Balázs Bencze and Thomas Adler.

Installation

  1. (optional) Adjust the Makefile to your needs
  2. Type make to start the building process
  3. To use the python wrapper, just copy rfn.py and librfn.so into your working directory.
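Once rfn.py and librfn.so are in your working directory, training might look like the following sketch. Note that the function name `train_rfn` and its parameters are assumptions about the wrapper's API, not a documented interface; consult rfn.py for the actual signature.

```python
# Hypothetical usage of the Python wrapper. `train_rfn` and its keyword
# arguments are assumptions; check rfn.py for the real API.
import numpy as np

# Toy data: 100 samples with 50 features, single precision as is common
# for GPU-backed libraries.
X = np.random.randn(100, 50).astype(np.float32)

try:
    from rfn import train_rfn  # requires librfn.so in the working directory
    # n_hidden: size of the learned representation (hypothetical parameter).
    W, P, Wout = train_rfn(X, n_hidden=10, n_iter=100)
except ImportError:
    print("rfn.py / librfn.so not found; see the Installation section above")
```

If the shared library was built without GPU support (USEGPU = no), the same call would run on the CPU.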

Requirements

To run the GPU code, you need a CUDA-compatible GPU and CUDA 7.5 (or higher). While in theory CUDA 7.0 is also supported, it contains a bug that causes a memory leak when running librfn (and your program is likely to crash with an out-of-memory error).

If you do not have access to a GPU, you can disable GPU support by setting USEGPU = no in the Makefile.

Note that librfn makes heavy use of BLAS and LAPACK, so make sure to link against a high-quality implementation (e.g., OpenBLAS or MKL) by modifying the Makefile to get optimal speed.

Usage

Implementation Note

The RFN algorithm is based on the EM algorithm. Within the E-step, the published algorithm includes a projection procedure that can be implemented in several ways (see section 9 of the RFN paper's supplement). To make sure no optimization constraints are violated during this projection, the original publication tries the simplest method first, but falls back to increasingly complicated updates if the simpler methods fail (suppl. section 9.5.3). In contrast, librfn always uses the simplest/fastest projection method. This is a simplification/approximation of the original algorithm that nevertheless works very well in practice.
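As an illustration only, a "simplest" projection of this kind could look like the NumPy sketch below: rectify the posterior means and rescale each hidden unit to unit second moment. This is a plausible guess, not librfn's actual code; the names (`project_simple`, `H`) are invented, and the real constraint handling is described in suppl. section 9 of the paper.

```python
# Illustrative sketch of a simple E-step projection (NOT librfn's code):
# rectify posterior means, then rescale each hidden unit to unit second moment.
import numpy as np

def project_simple(H, eps=1e-8):
    """H has shape (n_hidden, n_samples). Returns a projected copy with
    H >= 0 and mean(H**2) approximately 1 per hidden unit (row)."""
    H = np.maximum(H, 0.0)  # rectification: enforce nonnegativity
    # per-row root-mean-square; eps guards against all-zero rows
    scale = np.sqrt(np.mean(H ** 2, axis=1, keepdims=True)) + eps
    return H / scale

np.random.seed(0)
H = np.random.randn(10, 1000)   # 10 hidden units, 1000 samples
Hp = project_simple(H)
assert (Hp >= 0).all()
```

More elaborate variants (as in suppl. 9.5.3) would only be needed when this cheap update violates the remaining constraints.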

License

librfn was developed by Thomas Unterthiner and is licensed under the General Public License (GPL) Version 2 or higher. See License.txt for details.

librfn's People

Contributors

gokceneraslan, himamis, untom


librfn's Issues

R package install does not work for clang

Hi - I get an error when I try to install the R package. It may be a clang vs gcc issue, but I'm not really a C++ programmer so I'm not sure. I'm using clang6, which is now recommended by CRAN for R builds on macOS (https://cran.r-project.org/bin/macosx/tools/). Thanks in advance!

My output is:

R CMD build libfrn-R
R CMD INSTALL RFN_0.1.tar.gz

  • installing to library ‘/Library/Frameworks/R.framework/Versions/3.5/Resources/library’
  • installing source package ‘RFN’ ...
    checking for /opt/intel/mkl/include/mkl_blas.h... no
    configure: creating ./config.status
    config.status: creating src/Makevars
    ** libs
    PKG_LIBS is
    BL_LIBS is
    /usr/local/clang6/bin/clang++ -Wall -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -DCOMPILE_FOR_R -I -I/usr/local/include -DNOGPU -I"/Library/Frameworks/R.framework/Versions/3.5/Resources/library/Rcpp/include" -I/usr/local/include -std=c++11 -fPIC -Wall -g -O2 -c nist_spblas.cc -o nist_spblas.o
    nist_spblas.cc:390:21: error: invalid operands to binary expression ('double'
    and 'complex')
    y[p->second] += alpha * conj(p->first);
    ~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~
    nist_spblas.cc:481:7: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::sp_conj_axpy' requested here
    sp_conj_axpy( alpha * *X, S[i], y, incy);
    ^
    nist_spblas.cc:515:5: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::nondiag_mult_vec_conj_transpose' requested
    here
    nondiag_mult_vec_conj_transpose(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:840:5: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::mult_vec_conj_transpose' requested here
    mult_vec_conj_transpose(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:1711:13: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::usmv' requested here
    return M->usmv(transa, alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:1928:10: note: in instantiation of function template
    specialization 'BLAS_xusmv' requested here
    return BLAS_xusmv(
    ^
    nist_spblas.cc:421:10: error: invalid operands to binary expression ('double'
    and 'complex')
    *Y += alpha * conj(*d) * *X;
    ~~ ^ ~~~~~~~~~~~~~~~~~~~~~
    nist_spblas.cc:518:7: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::mult_conj_diag' requested here
    mult_conj_diag(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:840:5: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::mult_vec_conj_transpose' requested here
    mult_vec_conj_transpose(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:1711:13: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::usmv' requested here
    return M->usmv(transa, alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:1928:10: note: in instantiation of function template
    specialization 'BLAS_xusmv' requested here
    return BLAS_xusmv(
    ^
    nist_spblas.cc:348:17: error: invalid operands to binary expression ('double'
    and 'complex')
    sum += conj(p->first) * x[p->second];
    ~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    nist_spblas.cc:453:25: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::sp_conj_dot_product' requested here
    y[i] += alpha * sp_conj_dot_product(S[i], x, incx);
    ^
    nist_spblas.cc:521:7: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::nondiag_mult_vec_conj' requested here
    nondiag_mult_vec_conj(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:840:5: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::mult_vec_conj_transpose' requested here
    mult_vec_conj_transpose(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:1711:13: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::usmv' requested here
    return M->usmv(transa, alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:1928:10: note: in instantiation of function template
    specialization 'BLAS_xusmv' requested here
    return BLAS_xusmv(
    ^
    nist_spblas.cc:627:15: error: invalid operands to binary expression ('double'
    and 'typename __libcpp_complex_overload_traits::_ComplexType'
    (aka 'complex'))
    x[jj] /= conj(diag[j]) ;
    ~~~~~ ^ ~~~~~~~~~~~~~
    nist_spblas.cc:885:16: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::transpose_triangular_conj_solve' requested
    here
    return transpose_triangular_conj_solve(alpha, x, incx);
    ^
    nist_spblas.cc:1737:13: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::ussv' requested here
    return M->ussv(transa, alpha, x, incx);
    ^
    nist_spblas.cc:1959:10: note: in instantiation of function template
    specialization 'BLAS_xussv' requested here
    return BLAS_xussv( transa,
    ^
    nist_spblas.cc:390:21: error: invalid operands to binary expression ('float' and
    'complex')
    y[p->second] += alpha * conj(p->first);
    ~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~
    nist_spblas.cc:481:7: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::sp_conj_axpy' requested here
    sp_conj_axpy( alpha * *X, S[i], y, incy);
    ^
    nist_spblas.cc:515:5: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::nondiag_mult_vec_conj_transpose' requested
    here
    nondiag_mult_vec_conj_transpose(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:840:5: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::mult_vec_conj_transpose' requested here
    mult_vec_conj_transpose(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:1711:13: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::usmv' requested here
    return M->usmv(transa, alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:2386:10: note: in instantiation of function template
    specialization 'BLAS_xusmv' requested here
    return BLAS_xusmv(
    ^
    nist_spblas.cc:421:10: error: invalid operands to binary expression ('float' and
    'complex')
    *Y += alpha * conj(*d) * *X;
    ~~ ^ ~~~~~~~~~~~~~~~~~~~~~
    nist_spblas.cc:518:7: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::mult_conj_diag' requested here
    mult_conj_diag(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:840:5: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::mult_vec_conj_transpose' requested here
    mult_vec_conj_transpose(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:1711:13: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::usmv' requested here
    return M->usmv(transa, alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:2386:10: note: in instantiation of function template
    specialization 'BLAS_xusmv' requested here
    return BLAS_xusmv(
    ^
    nist_spblas.cc:348:17: error: invalid operands to binary expression ('float' and
    'complex')
    sum += conj(p->first) * x[p->second];
    ~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    nist_spblas.cc:453:25: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::sp_conj_dot_product' requested here
    y[i] += alpha * sp_conj_dot_product(S[i], x, incx);
    ^
    nist_spblas.cc:521:7: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::nondiag_mult_vec_conj' requested here
    nondiag_mult_vec_conj(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:840:5: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::mult_vec_conj_transpose' requested here
    mult_vec_conj_transpose(alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:1711:13: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::usmv' requested here
    return M->usmv(transa, alpha, x, incx, y, incy);
    ^
    nist_spblas.cc:2386:10: note: in instantiation of function template
    specialization 'BLAS_xusmv' requested here
    return BLAS_xusmv(
    ^
    nist_spblas.cc:627:15: error: invalid operands to binary expression ('float' and
    'typename __libcpp_complex_overload_traits::_ComplexType'
    (aka 'complex'))
    x[jj] /= conj(diag[j]) ;
    ~~~~~ ^ ~~~~~~~~~~~~~
    nist_spblas.cc:885:16: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::transpose_triangular_conj_solve' requested
    here
    return transpose_triangular_conj_solve(alpha, x, incx);
    ^
    nist_spblas.cc:1737:13: note: in instantiation of member function
    'NIST_SPBLAS::TSp_mat::ussv' requested here
    return M->ussv(transa, alpha, x, incx);
    ^
    nist_spblas.cc:2417:10: note: in instantiation of function template
    specialization 'BLAS_xussv' requested here
    return BLAS_xussv( transa,
    ^
    nist_spblas.cc:106:9: warning: private field 'general' is not used
    [-Wunused-private-field]
    int general;
    ^
    1 warning and 8 errors generated.
    make: *** [nist_spblas.o] Error 1
    ERROR: compilation failed for package ‘RFN’

Cannot build package: no rule to make target 'gpu_operations.cu'

Hi,

I can't build the package. This is the output from the console:

sudo R CMD INSTALL --configure-args='--with-cuda-home=/usr/lib/nvidia-cuda-toolkit' librfn-R-master

  • installing to library ‘/usr/local/lib/R/site-library’
  • installing source package ‘RFN’ ...
    checking for /usr/lib/nvidia-cuda-toolkit/bin/nvcc... yes
    checking for /usr/lib/nvidia-cuda-toolkit/include/cublas.h... yes
    checking for /usr/lib/nvidia-cuda-toolkit/lib64/libcublas.so... yes
    checking for /opt/intel/mkl/include/mkl_blas.h... yes
    checking for /opt/intel/mkl/lib... yes
    checking for /opt/intel/mkl/lib/intel64/libmkl_rt.so... yes
    configure: creating ./config.status
    config.status: creating src/Makevars
    ** libs
    PKG_LIBS is -L/opt/intel/mkl/lib/intel64 -lmkl_rt -L/usr/lib/nvidia-cuda-toolkit/lib64 -lcublas -lcurand -lcuda -lcudart -lcusolver -lcusparse
    BL_LIBS is -L/opt/intel/mkl/lib/intel64 -lmkl_rt
    make: *** No rule to make target 'gpu_operations.cu', needed by 'gpu_operations.o'. Stop.
    ERROR: compilation failed for package ‘RFN’
  • removing ‘/usr/local/lib/R/site-library/RFN’

Is this a bug? Or do you have any suggestions on how to install it?

Cheers!
