
Comments (5)

griwodz commented on August 14, 2024

The reason is that CUDA textures have strict size limits. Until someone adds an alternative code path, the best we can do is give a better and earlier warning. I propose PR#89 for that.
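The relevant limits can be queried from the CUDA runtime before anything is allocated; here is a minimal sketch (not PopSift code, only the standard CUDA runtime API) of the quantities such an early warning would check:

#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch (not PopSift code): print the layered-surface limits and
// total memory of device 0, the values a size check would compare against.
int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties( &prop, 0 );
    std::printf( "maxSurface2DLayered: %d x %d x %d layers\n",
                 prop.maxSurface2DLayered[0],
                 prop.maxSurface2DLayered[1],
                 prop.maxSurface2DLayered[2] );
    std::printf( "total global memory: %zu bytes\n", prop.totalGlobalMem );
    return 0;
}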

tic80 commented on August 14, 2024

I was looking at the code and it seems that you upscale the image by a factor of 2 by default.
If I choose a factor of 1, I have enough memory to extract the features.
Is there a big impact on accuracy from not upscaling the input image?

The out-of-memory error comes from allocating a 3D array (cudaMalloc3DArray) of
(width × upscale) × (height × upscale) × (levels - 1)

Would it be possible to use (levels - 1) separate 2D arrays instead?
In that case, there should be enough memory on most common graphics cards.
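A rough back-of-the-envelope estimate of that allocation shows why the default upscale of 2 quadruples the footprint (a sketch only; the 4-byte element size and the example image dimensions are assumptions, PopSift's actual layout may differ):

#include <cstdio>

// Rough estimate of the intermediate array described above. The float
// element size and image dimensions are assumptions for illustration.
int main()
{
    const size_t width   = 4000;  // example input width
    const size_t height  = 3000;  // example input height
    const size_t upscale = 2;     // default upscaling factor
    const size_t levels  = 6;     // example number of levels per octave
    const size_t bytes = (width * upscale) * (height * upscale)
                       * (levels - 1) * sizeof(float);
    std::printf( "intermediate array: %.2f GiB\n",
                 bytes / (1024.0 * 1024.0 * 1024.0) );
    return 0;
}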

griwodz commented on August 14, 2024

The upscaling by 2 is the default of the original SIFT paper.
From my understanding of the algorithm, without upscaling you would lose the highest-quality features: you must take the first image, blur it 4 times, then compute the 3 Difference-of-Gaussian layers, and only then can you search for feature point candidates in the middle one of those 3. So that layer is already quite a bit blurrier than the original image. Upscaling fixes that problem.
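Roughly, the reasoning looks like this in code (an illustrative sketch, not PopSift internals; the image type and function are made up for the example):

#include <vector>

using Image = std::vector<float>;  // flat grayscale image, one float per pixel

// Each DoG layer is the per-pixel difference of two consecutive Gaussian
// levels, and a candidate extremum needs a DoG layer above and below it,
// so only the middle layers are searched. Without upscaling, even those
// layers come from images already blurred well past the input resolution.
std::vector<Image> dogLayers( const std::vector<Image>& gauss )
{
    std::vector<Image> dog;
    for( size_t i = 0; i + 1 < gauss.size(); ++i )
    {
        Image d( gauss[i].size() );
        for( size_t p = 0; p < d.size(); ++p )
            d[p] = gauss[i + 1][p] - gauss[i][p];   // d[i] = g[i+1] - g[i]
        dog.push_back( d );
    }
    return dog;  // extrema are searched only in dog[1] .. dog[dog.size() - 2]
}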

The RTX 2080 has quite a large amount of memory, but it limits the maximum Surface2DLayered size to 65536 (bytes) × 32768 × 2048. Are you sure that the memory allocation fails in your case and not the creation of the surface?

It is absolutely possible to use much less memory at the price of slightly slower code; it was just never a goal for me. Several of the currently supported alternative downscaling approaches would not be supported in that case, and feature points would have to be collected differently, but it could be done.

Unfortunately, I'm not able to do it in the foreseeable future.

fabiencastan commented on August 14, 2024

You can adjust the SIFT params in popsift::Config.
See setDownsampling:

void setDownsampling( float v );
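
For example, a minimal sketch of turning off the initial upscaling (the header path and the way the config is handed to PopSift are assumptions based on the library's usual usage; adjust to your setup):

#include <popsift/popsift.h>   // assumption: header name/path may differ per install

int main()
{
    popsift::Config config;
    config.setDownsampling( 0 );   // 0 = no initial upscaling; the default upscales the input by 2
    PopSift popSift( config );     // assumption: PopSift is constructed from the config as in the examples
    // ... enqueue images and collect features as usual ...
    return 0;
}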

tic80 commented on August 14, 2024

@fabiencastan config.setDownsampling(0); solves my issue, thanks!

@griwodz Thanks for the explanation, it does make sense.
The first allocation that failed is cudaMalloc3DArray in Octave::alloc_interm_array().
The maximum Surface2DLayered supported by my graphics card is 32768 × 32768 × 2048.
