
btgraham / sparseconvnet-archived

Stars: 404 · Watchers: 39 · Forks: 123 · Size: 6.06 MB

Spatially-sparse convolutional networks. Allows processing of sparse 2-, 3- and 4-dimensional data. Build CNNs on the square/cubic/hypercubic or triangular/tetrahedral/hyper-tetrahedral lattices.

Home Page: https://github.com/btgraham/SparseConvNet/wiki

C++ 38.50% Python 2.12% C 23.71% Cuda 34.83% Makefile 0.68% R 0.16%


sparseconvnet-archived's People

Contributors

btgraham, jiho


sparseconvnet-archived's Issues

How about using linear regression?

Hi btgraham,
I really appreciate your great work, but I am a little confused by the following sentence in your paper:
"Using linear regression on just the CNN generated probability distributions, without any of the meta-data, also seems to work well."

Is the linear regression fitted on the CNN-generated probability distributions as features?
For example, with 3 nets, 3 repetitions of the random transformation, and both the left and right eye, does that give 3 * 3 * 2 * 5 = 90 feature dimensions for the linear regression?
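
To make the arithmetic concrete, here is a minimal sketch (in C++, to match the project) of how such a 90-dimensional feature vector could be assembled before fitting the linear regression; the nested-array layout and the function name are purely illustrative assumptions, not the author's code.

#include <array>
#include <vector>

// Hypothetical layout: probs[net][transform][eye] is a 5-class probability
// distribution produced by one CNN pass (3 nets x 3 transforms x 2 eyes).
std::vector<float> buildFeatures(
    const std::array<std::array<std::array<std::array<float, 5>, 2>, 3>, 3>& probs) {
  std::vector<float> features;
  features.reserve(3 * 3 * 2 * 5);       // 90 dimensions in total
  for (const auto& net : probs)          // 3 networks
    for (const auto& transform : net)    // 3 random test-time transformations
      for (const auto& eye : transform)  // left / right eye
        for (float p : eye)              // 5-class probability distribution
          features.push_back(p);
  return features;                       // input vector for the linear regression
}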

Compilation error [BatchProducer.cu]

Hello, when I try to compile the project in Visual Studio 2013, I run into an error. It looks quite similar to #2, and I replaced that line with workers.emplace_back(std::thread(&BatchProducer::batchProducerThread, this, nThread)); but the error still exists (a lambda-based alternative is sketched after the error log below).

1>C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\functional(1148): error : no instance of overloaded function "std::_Pmd_wrap<_Pmd_t, _Rx, _Farg0>::operator() [with _Pmd_t=void (BatchProducer::*)(int), _Rx=void (int), _Farg0=BatchProducer]" matches the argument list
1>              argument types are: (BatchProducer *, int)
1>              object type is: std::_Pmd_wrap<void (BatchProducer::*)(int), void (int), BatchProducer>
1>            detected during:
1>              instantiation of "std::_Do_call_ret<_Forced, _Ret, std::decay<_Fun>::type, std::tuple<std::decay<_Types>::type...>, std::tuple<_Ftypes &...>, std::_Arg_idx<_Bindexes...>>::type std::_Bind<_Forced, _Ret, _Fun, _Types...>::_Do_call(std::tuple<_Ftypes &...>, std::_Arg_idx<_Bindexes...>) [with _Forced=false, _Ret=void, _Fun=std::_Pmd_wrap<void (BatchProducer::*)(int), void (int), BatchProducer>, _Types=<BatchProducer *, int>, _Ftypes=<>, _Bindexes=<0ULL, 1ULL>]" 
1>  (1137): here
1>              instantiation of "std::_Do_call_ret<_Forced, _Ret, std::decay<_Fun>::type, std::tuple<std::decay<_Types>::type...>, std::tuple<_Ftypes &...>, std::_Make_arg_idx<_Types...>::type>::type std::_Bind<_Forced, _Ret, _Fun, _Types...>::operator()(_Ftypes &&...) [with _Forced=false, _Ret=void, _Fun=std::_Pmd_wrap<void (BatchProducer::*)(int), void (int), BatchProducer>, _Types=<BatchProducer *, int>, _Ftypes=<>]" 
1>  C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\thr/xthread(195): here
1>              instantiation of "unsigned int std::_LaunchPad<_Target>::_Run(std::_LaunchPad<_Target> *) [with _Target=std::_Bind<false, void, std::_Pmd_wrap<void (BatchProducer::*)(int), void (int), BatchProducer>, BatchProducer *, int>]" 
1>  C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\thr/xthread(187): here
1>              instantiation of "unsigned int std::_LaunchPad<_Target>::_Go() [with _Target=std::_Bind<false, void, std::_Pmd_wrap<void (BatchProducer::*)(int), void (int), BatchProducer>, BatchProducer *, int>]" 
1>  C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\thr/xthread(183): here
1>              instantiation of "std::_LaunchPad<_Target>::_LaunchPad(_Other &&) [with _Target=std::_Bind<false, void, std::_Pmd_wrap<void (BatchProducer::*)(int), void (int), BatchProducer>, BatchProducer *, int>, _Other=std::_Bind<false, void, std::_Pmd_wrap<void (BatchProducer::*)(int), void (int), BatchProducer>, BatchProducer *, int>]" 
1>  C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\thr/xthread(205): here
1>              instantiation of "void std::_Launch(_Thrd_t *, _Target &&) [with _Target=std::_Bind<false, void, std::_Pmd_wrap<void (BatchProducer::*)(int), void (int), BatchProducer>, BatchProducer *, int>]" 
1>  C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\thread(49): here
1>              instantiation of "std::thread::thread(_Fn &&, _Args &&...) [with _Fn=void (BatchProducer::*)(int), _Args=<BatchProducer *, int &>]" 
1>  D:/Projects/diabetic/diabetic/BatchProducer.cu(123): here

Regards.
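
A minimal sketch of an alternative workaround for the issue above, assuming the BatchProducer constructor spawns its workers roughly as in the quoted line: capturing this and the loop index in a lambda avoids the pointer-to-member std::bind machinery that the MSVC 2013 <functional> headers reject here. The stripped-down class below is a hypothetical stand-in, not the project's actual BatchProducer.

#include <thread>
#include <vector>

class BatchProducer {
public:
  void batchProducerThread(int nThread) { /* produce batches for slot nThread */ }

  void startWorkers(int nThreads) {
    for (int nThread = 0; nThread < nThreads; ++nThread)
      // A lambda capture sidesteps std::bind / pointer-to-member entirely.
      workers.emplace_back([this, nThread] { batchProducerThread(nThread); });
  }

  void joinWorkers() {
    for (auto& w : workers) w.join();
  }

private:
  std::vector<std::thread> workers;
};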

Running on CPUs

Dear Prof. Ben Graham,

Nice work with this project!

I'm curious about running your spatially-sparse convolutional networks in the public cloud. However, most public cloud options don't have GPUs, so I was thinking of porting this to fall back to CPUs when GPUs aren't available.

I would welcome your opinion on this, and any suggestions on how best to approach it (see the detection sketch after this message).

Thanks,

Samuel

PS: Obviously, any decent solution I come up with will be offered back as a PR.
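
A minimal sketch of how the GPU/CPU decision could be made at run time, assuming the code is built against the CUDA runtime: cudaGetDeviceCount reports zero devices (or an error) when no usable GPU is present, which could gate a CPU fallback path. Only the detection step is shown; the CPU fallback itself is not implemented here.

#include <cuda_runtime.h>
#include <iostream>

// Returns true if at least one CUDA-capable device is visible at run time.
bool gpuAvailable() {
  int deviceCount = 0;
  cudaError_t err = cudaGetDeviceCount(&deviceCount);
  return err == cudaSuccess && deviceCount > 0;
}

int main() {
  if (gpuAvailable())
    std::cout << "GPU found: using the CUDA code path\n";
  else
    std::cout << "No usable GPU: falling back to a CPU implementation\n";
  return 0;
}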

CIFAR10 Segmentation fault

I am getting a segmentation fault in the second epoch:
*1 Tesla K40c 11519MB Compute capability: 3.5
Sparse CNN - dimension=2 nInputFeatures=3 nClasses=10
0:Convolution 2^2x3->12
1:Learn 12->32 dropout=0 PReLU
2:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
3:Convolution 2^2x32->128
4:Learn 128->64 dropout=0 PReLU
5:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
6:Convolution 2^2x64->256
7:Learn 256->96 dropout=0 PReLU
8:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
9:Convolution 2^2x96->384
10:Learn 384->128 dropout=0 PReLU
11:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
12:Convolution 2^2x128->512
13:Learn 512->160 dropout=0 PReLU
14:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
15:Convolution 2^2x160->640
16:Learn 640->192 dropout=0 PReLU
17:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
18:Convolution 2^2x192->768
19:Learn 768->224 dropout=0 PReLU
20:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
21:Convolution 2^2x224->896
22:Learn 896->256 dropout=0 PReLU
23:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
24:Convolution 2^2x256->1024
25:Learn 1024->288 dropout=0 PReLU
26:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
27:Convolution 2^2x288->1152
28:Learn 1152->320 dropout=0 PReLU
29:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
30:Convolution 2^2x320->1280
31:Learn 1280->352 dropout=0 PReLU
32:Pseudorandom overlapping Fractional Max Pooling 1.25989 2
33:Convolution 2^2x352->1408
34:Learn 1408->384 dropout=0 PReLU
35:Pseudorandom overlapping Fractional Max Pooling 1.5 2
36:Convolution 2^2x384->1536
37:Learn 1536->416 dropout=0 PReLU
38:Learn 416->448 dropout=0 PReLU
39:Learn 448->10 dropout=0 Softmax Classification
(2,3) (4,5) (6,8) (9,11) (12,15) (16,20) (21,26) (27,34) (35,44) (45,57) (58,73) (74,93) Spatially sparse CNN: input size 94x94
epoch: 1 CIFAR-10 train set Mistakes:79.3582% NLL:2.11228 MegaMultiplyAdds/sample:820 time:93s GigaMultiplyAdds/s:437 rate:533/s
epoch: 2 Segmentation fault (core dumped)

Data/kaggleDiabeticRetinopathy/val_set file missing

Hi Benjamin,
The validation set file val_set is missing from the repository: it is not in Data/kaggleDiabeticRetinopathy/ on the kaggle_Diabetic_Retinopathy_competition branch; only the train_minus_val_set and test_set files are there. I know I can reconstruct it from the trainLabels.csv and train_minus_val_set files, but I would like to know if you can push it to the repository.
By the way, would you have the scripts to create a random train / validation partition (i.e. the train_minus_val_set / val_set files)? A sketch of one possible approach follows below.
Thank you
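
A minimal sketch of one way such a partition script could look, assuming trainLabels.csv has a header line followed by one image,label row per line; the 10% validation fraction, the fixed seed, and the output file names are illustrative choices rather than the author's.

#include <algorithm>
#include <fstream>
#include <random>
#include <string>
#include <vector>

int main() {
  std::ifstream in("trainLabels.csv");
  std::string line;
  std::getline(in, line);                    // skip the header row
  std::vector<std::string> rows;
  while (std::getline(in, line)) rows.push_back(line);

  std::mt19937 rng(42);                      // fixed seed for a reproducible split
  std::shuffle(rows.begin(), rows.end(), rng);

  std::size_t nVal = rows.size() / 10;       // hold out ~10% for validation
  std::ofstream valOut("val_set");
  std::ofstream trainOut("train_minus_val_set");
  for (std::size_t i = 0; i < rows.size(); ++i) {
    std::ofstream& out = (i < nVal) ? valOut : trainOut;
    out << rows[i] << "\n";
  }
  return 0;
}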

Compiling on Mac OS X

Trying to compile the source code on my Mac (OS X 10.10) with the latest CUDA 7.0 and Xcode drivers, I cannot get past the compilation of BatchProducer.cu. The NVCC compiler complains when trying to fill the thread vector (workers) in the BatchProducer constructor. You have each thread bound to the batchProducerThread member function; unfortunately, I keep getting an error that the thread is being bound to a deleted function. If I change the thread target to a vector of threads defined explicitly outside the constructor, everything works fine.

Have you seen this before?

1D input vectors?

Hi,

I have a regression problem with sparse 1-dimensional binary input vectors and real-valued output in [0, 1]. Should I be able to use SparseConvNet for it almost out of the box?

Thanks,
Ameen.

Installation difficulty

Installing seems to succeed, but then I get the following error upon import sparseconvnet as scn:

~/projects/SparseConv_sndbx/SparseConvNet/PyTorch/sparseconvnet/SCN/__init__.py in <module>()
      1
      2 from torch.utils.ffi import _wrap_function
----> 3 from ._SCN import lib as _lib, ffi as _ffi
      4
      5 __all__ = []

ImportError: /home/bastiaan/projects/SparseConv_sndbx/SparseConvNet/PyTorch/sparseconvnet/SCN/_SCN.so: undefined symbol: _ZTISt12length_error

platform:
Ubuntu 16
Python 3.5
PyTorch 0.3.1

Any pointers anyone?

Memory leak?

I am trying to apply SparseConvNet to the ModelNet40 dataset (http://modelnet.cs.princeton.edu/ModelNet40.zip).

It seems that there is a memory leak: after about 60 epochs the memory consumption grows from ~5 GB to ~15 GB.

My code can be found here: https://github.com/yeahrmek/3DRetrieval (it is a fork of this project).
There, modelNet.cpp is a copy of shrec2015.cpp modified to be used with the ModelNet dataset.
I also modified the OffSurfaceModelPicture class constructor to correctly load ModelNet OFF files (in some OFF files in ModelNet the first line is concatenated with the second).
