
Comments (14)

mli0603 commented on May 11, 2024

Hello everyone,

Thanks for reporting this. I am currently looking into other code-breaking issues. I will circle back shortly and provide updates on this.


RishabhBajaj25 commented on May 11, 2024

I also trained for 100k steps and did not get satisfying results with the default parameters. The training does reach convergence, as far as I can tell from the Weights & Biases dashboard. However, I think it's a matter of tuning the marching cubes hyperparameters, namely RESOLUTION and BLOCK_RES. I had a look at the original paper and it says a RESOLUTION of 512 was used for the DTU benchmark, which is similar to the toy dataset. I am running a few tests with different parameters as we speak. We can discuss more once I have some results.
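
For intuition on what the extraction RESOLUTION controls, here is a back-of-the-envelope sketch (my own illustration, not code from the repo), assuming the reconstructed region is normalized to roughly a two-unit cube: the voxel edge length bounds the finest surface detail the extracted mesh can represent, regardless of how well the network was trained.

```python
# Illustrative arithmetic only: how the marching-cubes RESOLUTION translates
# into voxel size and total SDF evaluations, assuming a ~2-unit-wide
# normalized scene (an assumption, not a value read from the repo).

def mc_grid_stats(resolution: int, extent: float = 1.0):
    voxel = 2 * extent / resolution   # edge length of one marching-cubes cell
    n_samples = resolution ** 3       # SDF evaluations needed for the full grid
    return voxel, n_samples

for res in (512, 1024, 2048):
    voxel, n_samples = mc_grid_stats(res)
    print(f"RESOLUTION={res}: voxel edge ~{voxel:.4f} units, "
          f"{n_samples/1e9:.2f}G SDF samples")
```

So 512 is already a fairly dense grid for a DTU-scale object, which is consistent with the later observation in this thread that lowering RESOLUTION to 1024 or 512 makes little visible difference.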


abhishekmonogram commented on May 11, 2024

I think I found the issue with mine: the hashgrid dict_size. With dict_size 20 the outputs weren't great, but after switching to 21 I got really good outputs. Try dict_size 21 and see if it helps.
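
For context on why bumping dict_size from 20 to 21 can matter: each increment doubles the per-level hash-table capacity of the Instant-NGP-style encoding, so fewer features collide at the finer levels. A quick illustrative calculation follows; the level count and feature width below are assumptions for illustration, not values read from the repo's config.

```python
# Rough capacity/memory estimate for a multi-level hash-grid encoding.
# n_levels and feat_dim are assumed values for illustration.

def hashgrid_params(dict_size: int, n_levels: int = 16, feat_dim: int = 8) -> int:
    """Total number of learnable hash-table feature values across all levels."""
    entries_per_level = 2 ** dict_size   # capacity of the hash table at each level
    return entries_per_level * n_levels * feat_dim

for d in (20, 21, 22):
    n = hashgrid_params(d)
    print(f"dict_size={d}: {n/1e6:.0f}M feature values (~{2*n/1e6:.0f} MB at fp16)")
```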


RishabhBajaj25 commented on May 11, 2024

Where can I modify the hashgrid resolution?


abhishekmonogram commented on May 11, 2024

Read this issue and you will understand. I meant the hashgrid dict_size, not the resolution; I have corrected my previous reply.


RishabhBajaj25 commented on May 11, 2024

Since I was using the default config.yaml, I trained with a dict_size of 22. Would increasing it further help? Can you share the parameters you used for extracting the mesh?


abhishekmonogram commented on May 11, 2024

A dict_size of 22 should be more than enough; I got good results with 21. One option is to train for more iterations. You could also shoot a video of an object yourself and try training on that.

I used the default parameters for mesh extraction. In fact, RESOLUTION can be lowered further to 1024 or even 512 and I don't see much difference in the reconstructed meshes. BLOCK_RES was kept at 128.
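
To make the two knobs concrete, here is a minimal, illustrative sketch of block-wise marching cubes in the spirit of what the extraction script does (my own simplification, not the repo's code): the SDF is sampled on a RESOLUTION^3 grid that is processed in BLOCK_RES^3 chunks, so the whole volume never has to fit in memory at once.

```python
# Illustrative block-wise marching cubes over the cube [-extent, extent]^3.
# `sdf_fn` stands in for the trained SDF network; overlap handling between
# blocks is omitted for brevity (real code needs ~1 voxel of overlap to
# avoid seams).

import numpy as np
from skimage import measure  # pip install scikit-image

def extract_mesh_blockwise(sdf_fn, resolution=1024, block_res=128, extent=1.0):
    voxel = 2.0 * extent / resolution
    n_blocks = resolution // block_res
    verts_all, faces_all, offset = [], [], 0
    for idx in np.ndindex(n_blocks, n_blocks, n_blocks):
        # Sample the SDF on this block's sub-grid.
        origin = np.array(idx) * block_res * voxel - extent
        axes = [origin[d] + np.arange(block_res) * voxel for d in range(3)]
        pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
        sdf = sdf_fn(pts.reshape(-1, 3)).reshape(block_res, block_res, block_res)
        if sdf.min() >= 0 or sdf.max() <= 0:
            continue  # no zero crossing in this block, so no surface here
        v, f, _, _ = measure.marching_cubes(sdf, level=0.0, spacing=(voxel,) * 3)
        verts_all.append(v + origin)   # shift block-local vertices to world coords
        faces_all.append(f + offset)
        offset += len(v)
    return np.concatenate(verts_all), np.concatenate(faces_all)

# Example with a toy SDF (sphere of radius 0.5) instead of a trained network:
# verts, faces = extract_mesh_blockwise(
#     lambda p: np.linalg.norm(p, axis=-1) - 0.5, resolution=128, block_res=64)
```

Under this reading, BLOCK_RES only changes how much of the grid is evaluated at once (memory vs. number of chunks), while RESOLUTION sets the density of the final grid; neither changes the geometry the network has already learned.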


RishabhBajaj25 commented on May 11, 2024

I suppose the training is complete from what I can see on wandb:
[screenshot: wandb training curves]
However, the mesh I get using the same parameters as you (RESOLUTION 1024 and BLOCK_RES 128) is nowhere close to being clean and high-fidelity:
[screenshot: extracted mesh]
I wonder what I am doing differently. Any thoughts?


RishabhBajaj25 commented on May 11, 2024

Thanks for looking into this. Here are the training curves for another dataset that I collected on my phone, in case it helps:
[screenshot: training curves]

Will be keeping an eye out for your response.


abhishekmonogram commented on May 11, 2024

@mli0603 Could you also please share how to speed up the training process without losing too much detail? Which hyperparameters would help me do that?


chenhsuanlin commented on May 11, 2024

Hi @apavani2 @RishabhBajaj25, I have just pushed a small fix to the mesh extraction script (#41), which may be related to this issue. If you pull again and still see the same problem, please let me know.


RishabhBajaj25 commented on May 11, 2024

Hi @chenhsuanlin, thanks for your efforts. Unfortunately, after pulling I get similar results on both the toy example and my own data. I see that you made changes to the trainer as well (imaginaire/trainers/base.py); should I retrain and check? Also, can you confirm whether my loss curves make sense? I am a bit skeptical about the first "curvature" loss.


chenhsuanlin commented on May 11, 2024

@RishabhBajaj25 you shouldn't need to retrain; the changes to the base trainer only concern the checkpoint loading logic. The curvature loss curve seems okay; here is what I have for another scene. For now, you might want to consider training for the full 500k iterations and seeing if things improve.

[screenshot: curvature loss curve for another scene]


chenhsuanlin commented on May 11, 2024

Closing due to inactivity; please feel free to reopen if there are further issues.

