Comments (14)
Hello everyone,
Thanks for reporting this. I am currently looking into other code-breaking issues. I will circle back shortly and provide updates on this.
from neuralangelo.
I trained the same scene for 100k steps too and did not get satisfying results with the default parameters. The training does reach convergence, as I can see on the Weights & Biases page. However, I think it's a problem of tuning the marching-cubes hyperparameters, namely RESOLUTION and BLOCK_RES. I had a look at the original paper, and it says a RESOLUTION of 512 was used for the DTU benchmark, which is similar to the toy dataset. I am running a few tests with different parameters as we speak. We can discuss more once I have some results.
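For intuition on what RESOLUTION controls, here is a small sketch of the voxel size of the marching-cubes grid. The 2-unit extent is an assumption (scenes normalized to roughly [-1, 1] per axis, which is common for SDF reconstruction), not something stated in this thread:

```python
# Edge length of one marching-cubes cell over a normalized scene.
# Assumption (not from the thread): the reconstruction volume spans
# 2 units per axis, e.g. [-1, 1], as is common for normalized SDF scenes.
def voxel_size(resolution: int, extent: float = 2.0) -> float:
    """Edge length of a single grid cell at the given RESOLUTION."""
    return extent / resolution

for res in (512, 1024, 2048):
    print(f"RESOLUTION={res}: voxel size ~ {voxel_size(res):.5f}")
```

Doubling RESOLUTION halves the voxel size (and multiplies the grid cell count by 8), which is why 512 vs. 1024 mostly affects fine surface detail rather than overall shape.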
I think I found the issue with mine: it was the hashgrid dict_size. I was training with dict_size 20 and the outputs weren't great; when I switched to dict_size 21, I got really good outputs. Try it with dict_size 21 and see if you get good results.
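To see why a single increment of dict_size matters so much: in Instant-NGP-style hash grids, dict_size is conventionally the log2 of the number of hash-table entries per level, so each +1 doubles the table and roughly halves hash collisions. That interpretation (and the feature/level defaults below) is an assumption based on the Instant-NGP convention, not confirmed by this thread:

```python
# Assumption: dict_size = log2(hash-table entries per level),
# as in Instant-NGP-style multiresolution hash encodings.
def hash_table_entries(dict_size: int) -> int:
    """Number of hash-table entries per grid level."""
    return 2 ** dict_size

def approx_table_mib(dict_size: int, features_per_entry: int = 2,
                     bytes_per_feature: int = 2, levels: int = 16) -> float:
    """Rough total table size in MiB (hypothetical fp16, 2-feature, 16-level setup)."""
    total_bytes = hash_table_entries(dict_size) * features_per_entry \
        * bytes_per_feature * levels
    return total_bytes / 2 ** 20

print(hash_table_entries(20))   # 1048576 entries
print(hash_table_entries(21))   # 2097152 entries (double)
print(approx_table_mib(21))     # ~128 MiB under the assumed defaults
```

So the jump from 20 to 21 trades a modest amount of memory for noticeably fewer collisions in the finest levels, which is consistent with the quality difference reported above.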
Where can I modify the hashgrid resolution?
Read this issue and you will understand. I meant the hashgrid dict_size, not the resolution; I have corrected my previous reply.
Since I was using the default config.yaml, I trained with a dict_size of 22. Would increasing it further help? Can you share the parameters you used for extracting the mesh?
A dict_size of 22 should be more than enough; I used 21 and got good results. One option is to train for more iterations. You could also shoot a video of an object yourself and try training on that.
I used the default parameters for mesh extraction. In fact, RESOLUTION can be lowered further to 1024 or even 512 and I don't see much difference in the reconstructed meshes. BLOCK_RES was kept at 128.
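For context on how RESOLUTION and BLOCK_RES interact: BLOCK_RES controls block-wise extraction, where the full RESOLUTION³ grid is evaluated in BLOCK_RES³ chunks so the SDF queries fit in GPU memory. The arithmetic below is an illustrative sketch of that partitioning, not the extraction script's actual internals:

```python
import math

def num_blocks(resolution: int, block_res: int) -> int:
    """Number of BLOCK_RES^3 chunks needed to cover a RESOLUTION^3 grid."""
    per_axis = math.ceil(resolution / block_res)
    return per_axis ** 3

# With the settings discussed in this thread:
print(num_blocks(1024, 128))  # 8 blocks per axis -> 512 blocks
print(num_blocks(2048, 128))  # 16 blocks per axis -> 4096 blocks
```

Raising RESOLUTION with BLOCK_RES fixed keeps per-block memory constant and only increases the number of blocks (i.e., extraction time), which is why BLOCK_RES can stay at 128 across different grid resolutions.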
I suppose the training is complete, from what I can see on wandb:
However, the mesh I get using the same parameters as you (RESOLUTION 1024 and BLOCK_RES 128) is nowhere close to being clean and high-fidelity:
I wonder what I am doing differently? Any thoughts?
Thanks for looking into this. Here are the training curves for another dataset that I collected on my phone, in case it helps:
Will be keeping an eye out for your response.
@mli0603 Could you also share how to speed up the training process without losing too many details? Which hyperparameters would help me do that?
Hi @apavani2 @RishabhBajaj25, I have just pushed a small fix on the mesh extraction script (#41), which may be related to this issue. If you could pull again and still see the same issue, please let me know.
Hi @chenhsuanlin, thanks for your efforts. Unfortunately, I get similar results on both the toy example and my own data after pulling :/. I see that you made changes in the trainer as well (imaginaire/trainers/base.py). Should I train again and see? Also, can you confirm whether my loss curves make sense? I am a bit skeptical about the first "curvature" loss.
@RishabhBajaj25 you shouldn't need to retrain; the changes to the base trainer only concern the checkpoint loading logic. The curvature loss curve seems okay; here is what I have for another scene. For now, you might want to consider training for the full 500k iterations and see if things improve.
Closing due to inactivity, please feel free to reopen if there are further issues.