
cuda-convnet's People

Contributors: turinglife

cuda-convnet's Issues

Visualising feature output from layers

When using startFeatureWriter to extract a feature map, I cannot seem to 
visualise the data properly. There appears to be some sort of channel 
interleaving, which differs from layer to layer. Is there anything I am 
missing that would let me visualise the output feature map of a layer (even a 
simple one such as resize or Gaussian blur)?
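For reference, here is roughly how I am trying to decode one image's features 
(a sketch: it assumes one feature vector per row with channel-major pixel 
order, as in the input data files; the shape arguments are hypothetical):

import numpy as np
import matplotlib.pyplot as plt

def show_feature_maps(row, channels, h, w):
    # Assumes channel-major layout: all of channel 0's pixels (row-major),
    # then all of channel 1's, and so on, matching the input data files.
    fmap = np.asarray(row).reshape(channels, h, w)
    grid = int(np.ceil(np.sqrt(channels)))  # roughly square tiling
    for c in range(channels):
        plt.subplot(grid, grid, c + 1)
        plt.imshow(fmap[c], cmap='gray', interpolation='nearest')
        plt.axis('off')
    plt.show()

If the channels are actually interleaved per pixel rather than stored plane 
by plane, a reshape(h, w, channels) plus a transpose would be needed instead.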

Original issue reported on code.google.com by [email protected] on 12 Nov 2013 at 5:21

Confirmation for strange-looking assert in filter_acts.cu

Hi Alex,

Could you confirm that this slightly strange-looking line from filter_acts.cu 
is correct?

assert(paddingStart <= 0 && paddingStart + (numModules-1)*moduleStride + filterSize >= imgSize);

In particular, why are you multiplying numModules (which is the square of 
numModulesX) by the stride, and then adding filterSize (which is not the 
square, but the side length)?
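To make the concern concrete (numbers hypothetical): with imgSize = 32, 
filterSize = 5, moduleStride = 1 and paddingStart = 0, we get numModulesX = 28 
and numModules = 784. The 1-D coverage condition I would expect is 
paddingStart + (numModulesX - 1)*moduleStride + filterSize >= imgSize, i.e. 
0 + 27 + 5 = 32 >= 32, whereas the assert as written checks 
0 + 783 + 5 = 788 >= 32, which holds trivially.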

If you're sure, I trust you, but if you could additionally lend some intuition 
for why, I'd appreciate it.

There is a comment in the code but I still don't get it:
// These routines don't handle the case when only part of the image is visited in the convolution

Thanks,
- James

Original issue reported on code.google.com by [email protected] on 10 Jan 2012 at 10:35

Destructors.

It seems that the Layer subclass with weights lacks a destructor. I wonder why 
that is. Won't it cause memory leaks (for example, the biases are never 
deleted)?
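Presumably a one-line virtual destructor would be enough to plug the leak, 
something along the lines of virtual ~WeightLayer() { delete _biases; } 
(a sketch: I am assuming the class in question is WeightLayer and that the 
heap-allocated bias member is named _biases).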


Original issue reported on code.google.com by [email protected] on 22 Jan 2013 at 8:02

size limitation in FilterActs?

What steps will reproduce the problem?
1. use a big image like 512 x 512
2. put lots of filters (like 64)
3. have lots of color channels (again 64?)

What is the expected output? What do you see instead?
I expect a big filtered image, but instead it crashes. 

The blocks are defined such that blocks.y > 2^16, so CUDA refuses to launch 
the kernel.
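For concreteness (assuming blocks.y scales with numModules * numFilters, which 
is what it appears to do): a 512x512 input with a hypothetical 5x5 filter at 
stride 1 and no padding gives numModulesX = 508, so numModules = 508^2 = 
258064. That alone is already well past the 65535 (2^16 - 1) limit CUDA 
imposes on gridDim.y, before the 64 filters are even factored in.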

I'm not sure I understand how to set the number of modules when doing a normal 
convolution, but it seems an outer loop is required. The trouble with an 
outer loop is that the data is arranged in such a way that it is impossible to 
apply just a fraction of the filters, or to process just part of each image. 
The data arrangement makes it natural to process just some of the image 
channels... but the color channels don't come into the blocking structure.

Basically... can I use this kernel to perform big convolutions?

Original issue reported on code.google.com by [email protected] on 7 Mar 2012 at 6:55

Local units

Add support for local non-convolutional layers.

Original issue reported on code.google.com by [email protected] on 30 Jun 2011 at 12:33

image 256x256

What steps will reproduce the problem?
1. Put 1000 images of size 256x256 into a batch
2. Modify convdata.py accordingly
3. Try to train the network

What is the expected output? What do you see instead?
Training works at sizes 32x32 and 64x64, but if the images are bigger than 
that I get a "device memory allocation error". I think it comes from 
nvmatrix.cu: the cuBLAS allocation fails.
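For what it's worth, the numbers may simply be too big (minibatch size and 
layer width assumed for illustration): with the default minibatch of 128, a 
single 64-filter convolutional layer at 256x256 needs 
128 * 64 * 256 * 256 * 4 bytes, roughly 2.1 GB, for its output activations 
alone, which already exceeds the 1.5 GB available to each of the GTX 590's 
two GPUs.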


What version of the product are you using? On what operating system?
I have a GTX 590 GPU.



Original issue reported on code.google.com by [email protected] on 16 Jun 2014 at 8:07

CUDA 5 compatibility

Since CUDA 5 came out, I have had no success compiling the cuda-convnet code. 
Are there any plans to make it compatible with CUDA 5?


Original issue reported on code.google.com by [email protected] on 16 Jan 2013 at 1:01

saving multiview predictions (--test-out) does not work

What steps will reproduce the problem?
1. Train a model
2. Run a multiview test on the model with --test-out=1
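For reference, the test command was along these lines (a sketch: the model 
path is hypothetical; --multiview-test and --test-only are the documented 
testing options):

python convnet.py -f ./saved-model --multiview-test=1 --test-only=1 --test-out=1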

What is the expected output? What do you see instead?
I expect the probs matrix with the multiview test results; instead I get an 
all-zero matrix.

What version of the product are you using? On what operating system?
latest version

Please provide any additional information below.
Is the --test-out feature not yet implemented? I ask because the code that 
writes the probs matrix appears to be commented out. Thanks

Original issue reported on code.google.com by [email protected] on 13 Aug 2014 at 5:45

cuda convnet compilation error GTS 450

What steps will reproduce the problem?
1. VS2010, Windows 7 64-bit
2. Compile the code
3. The log shows: ptxas fatal : Memory allocation failure

I have an NVIDIA GTS 450 GPU. Is this error due to the GTS series, or do I 
need a GTX GPU instead?

I have attached the pyconvnet.log file.



Original issue reported on code.google.com by [email protected] on 30 Jun 2014 at 5:49


Getting nan values while using response normalization layer

The network trains fine without any contrast/response normalization layer, but 
once I add one (of any type), the net produces NaN values after several 
iterations. I tried different values for size, scale and pow, and tried 
placing the layer both before and after the pooling layer.
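For reference, the layer was defined roughly as follows (a sketch modelled on 
the bundled CIFAR configs; the values are hypothetical). In the 
layer-definition file:

[rnorm1]
type=rnorm
inputs=conv1
channels=32
size=5

and in the layer-params file:

[rnorm1]
scale=0.00005
pow=0.75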


Original issue reported on code.google.com by [email protected] on 8 May 2013 at 7:44

compile error

What steps will reproduce the problem?
1. Compile using ./build.sh

What is the expected output? What do you see instead?
The build breaks with this error:

obj/x86_64/release/src/util.cu.o: In function `pyDictGetMatrix(_object*, char const*)':
tmpxft_0000033f_00000000-3_util.cudafe1.cpp:(.text+0x27a): undefined reference to `Matrix::Matrix(PyArrayObject const*)'
obj/x86_64/release/src/util.cu.o: In function `getMatrixV(_object*)':
tmpxft_0000033f_00000000-3_util.cudafe1.cpp:(.text+0x3fe): undefined reference to `Matrix::Matrix(PyArrayObject const*)'

What version of the product are you using? On what operating system?
CentOS 6.3
gcc 4.4.6-4
CUDA 5.0

Please provide any additional information below.

Original issue reported on code.google.com by [email protected] on 5 Apr 2013 at 3:30

a small bug in NVMatrix::rightMult()

The result is incorrect when the target is the same as the first operand: the 
existing target == this path requires this to be column-major. I modified the 
function so that this requirement is no longer needed:

void NVMatrix::rightMult(const NVMatrix &b, float scaleAB, NVMatrix &target) const {
    assert(isContiguous() && b.isContiguous() && target.isContiguous());
//    assert(&target != &b);
    assert(_numCols == b.getNumRows());
    if (&target != this) {
        target.resize(_numRows, b.getNumCols());
        //target.setTrans(true); // moved below so it runs unconditionally
    }
    assert(target.getNumRows() == _numRows);
    assert(target.getNumCols() == b.getNumCols());
    if (_numRows % 64 != 0 || _numCols % 64 != 0 || b.getNumCols() % 64 != 0) {
        WARN("Matrix dimensions not divisible by 64 -- cublasSgemm performance may suffer.");
    }
    // beta = 0, so target's previous contents are ignored; ldc = getNumRows()
    // is the row count of the column-major result.
    cublasSgemm(getTransChar(), b.getTransChar(), _numRows, b.getNumCols(), _numCols,
                scaleAB, _devData, getLeadingDim(), b.getDevData(), b.getLeadingDim(),
                0, target.getDevData(), getNumRows());
    target.setTrans(true); // added isTrans specification: the result is
                           // column-major even when target == this
    checkCublasError("cublasSgemm failed");
//    cudaThreadSynchronize();
}

Original issue reported on code.google.com by [email protected] on 12 Jul 2013 at 3:47
