
neuralwarp's People

Contributors

fdarmon


neuralwarp's Issues

Question about the paper

Hi, your great work is impressive!
But I'm a little confused about the warp loss and the validity masks:
1. "our warping-based loss such that every valid patch in the reference image is given the same weight." Why is every valid patch given the same weight?
2. "second, when the reference and source views are on two different sides of the plane defined by xi and the normal ni; third, when a camera center is too close to the plane defined by xi and the normal ni." Why is the binary indicator V set to 0 in these cases?
Looking forward to your reply.
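For what it's worth, the second and third validity conditions quoted above reduce to simple geometric tests against the tangent plane at xi. The following is only an illustrative reconstruction of those two tests (the function and variable names are mine, not the paper's code):

```python
import numpy as np

def validity_mask(x, n, c_ref, c_src, eps=1e-2):
    """Sketch of validity conditions 2 and 3 from the paper.

    x:     3D surface point (xi)
    n:     surface normal at x (ni)
    c_ref: reference camera center
    c_src: source camera center
    eps:   minimum allowed distance from a camera to the tangent plane
    """
    d_ref = np.dot(c_ref - x, n)  # signed distance of reference camera to the plane
    d_src = np.dot(c_src - x, n)  # signed distance of source camera to the plane
    same_side = d_ref * d_src > 0                    # condition 2: same side of the plane
    far_enough = min(abs(d_ref), abs(d_src)) > eps   # condition 3: neither camera too close
    return same_side and far_enough
```

Intuitively, when the two cameras sit on opposite sides of the plane, or nearly inside it, the homography induced by that plane becomes degenerate, so the patch warp is unreliable and masked out.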

training custom dataset

Hello, first of all thank you for your work. May I ask why the loss drops very slowly when training on a custom dataset, and why the resulting mesh quality is also poor?

Quantitative results on DTU dataset

As described in your paper, "the results for each method are taken from their original paper".
But why are the results for NeuS [34] different from those in its original paper?

[34] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In Adv. Neural Inform. Process. Syst., 2021.

Error visualization in appendix

Dear @fdarmon

Thank you very much for your work. In the error visualization in your appendix, the GT points look very clean compared to the provided GT. I wonder whether you preprocessed the DTU GT points to obtain that visualization.

Looking forward to your reply!

custom dataset

Hello, if I need to train a custom dataset, what are the format requirements?

result visualization

Hello,

thanks for your excellent work! I am wondering how you created the result visualization teaser.gif in the README, with a moving camera and a sliding bar to switch between two meshes. Thank you in advance!

Best regards

How to get pair.txt file? dtu_supp

Thank you very much for your contribution. I want to try training on my own data, but I am confused about the pair.txt file in "dtu_supp". How do I generate it?
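For context: pair.txt files of this kind usually follow the MVSNet convention — first the number of views, then for each reference view its id followed by a line with the number of source views and (id, score) pairs. Assuming NeuralWarp follows that convention (unverified), a minimal sketch that ranks source views by camera-center proximity might look like this; real pipelines typically score by shared sparse points or viewing angle instead, so treat the scoring here as a placeholder:

```python
import numpy as np

def write_pairs(centers, num_src=10, path="pair.txt"):
    """Write a pair.txt in the MVSNet-style convention.

    centers: (N, 3) array of camera centers.
    For each reference view i, the nearest num_src other views are listed
    with a score (here simply inverse distance, an illustrative choice).
    """
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    lines = [str(n)]
    for i in range(n):
        d = np.linalg.norm(centers - centers[i], axis=1)
        order = [j for j in np.argsort(d) if j != i][:num_src]
        lines.append(str(i))
        lines.append(str(len(order)) + " " +
                     " ".join(f"{j} {1.0 / (d[j] + 1e-8):.4f}" for j in order))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```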

Missing license file

Hello!

Thanks for sharing the code, your work is really impressive! I noticed that you did not include any license file. Per copyright law, we have to assume the most restrictive license (i.e. all derivatives are forbidden, etc.).

Was that intentional, or do you actually not mind other research projects building on top of your code?

Camera Normalization on DTU dataset

Dear author,
Thank you for your open source code. I notice that VolSDF normalizes the cameras (within a sphere with R = 3) on the DTU dataset. However, NeuralWarp loads camera parameters from the original cameras.npz, which is not normalized. Since this project is built on VolSDF, I would like to know why the camera normalization step was removed. Moreover, I think the default value of --bbox_size in extract_mesh.py is not very reasonable: because the camerasas are not normalized, it has to be set carefully for each scene. If the value is too large, a lot of empty space is wasted; if it is too small, the extracted mesh is defective.
I want to hear the author's viewpoint on this.
Thanks!
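For reference, the VolSDF-style normalization mentioned above can be sketched as a shift-and-scale that fits all camera centers into a sphere of radius 3. This is only an illustrative reconstruction, not the actual VolSDF code:

```python
import numpy as np

def normalize_cameras(centers, radius=3.0):
    """Shift and scale camera centers so they fit inside a sphere of the
    given radius centered at the origin (the R = 3 convention mentioned
    above). Returns the normalized centers plus the shift and scale,
    which must also be applied to the scene geometry / world matrices."""
    centers = np.asarray(centers, dtype=float)
    shift = centers.mean(axis=0)
    scale = radius / np.max(np.linalg.norm(centers - shift, axis=1))
    return (centers - shift) * scale, shift, scale
```

With such a normalization in place, a single fixed --bbox_size would cover every scene, which is presumably why VolSDF can use one default.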

Question about the two-stage training

Dear author,
According to the paper, the training pipeline has two stages: first, train for 100k iterations in the same setting as VolSDF; then finetune for 50k iterations with the proposed method. Does this mean that I need to add the options "--is_continue --timestamp XXXXX" in stage 2? Moreover, the paper states that the learning rate in stage 2 is 1e-5, which differs from the learning rate (5.0e-4) in NeuralWarp.conf. Do I need to change the learning rate in the configuration to 1e-5?
Thanks!

Reproducibility concerns

Hello. Thank you for your amazing work and for sharing the code!
I tried training your model from scratch on some of the benchmark scenes and had trouble reproducing your results. The model seems quite sensitive to the random seed, and even after several attempts, the quality I obtained was lower than reported.

Experiment    Fountain - Full   Fountain - Center   Herzjesu - Full   Herzjesu - Center
pre-trained   7.77              1.91                8.88              2.03
seed 2022     8.03              2.65                7.66              2.55
seed 42       13.36             7.43                10.54             2.58

Could you provide some insights regarding this issue?

How to reduce the GPU memory usage

Hello, I encountered an OOM error after "generate_visu" on an RTX 3080. Could you suggest how to reduce the batch size or use multiple GPUs?
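Independently of this repo's exact code, the usual fix for evaluation-time OOM is to process rays in smaller chunks and concatenate the results, trading speed for peak memory. A minimal sketch, where `render_fn` is a hypothetical stand-in for the network's forward pass:

```python
import numpy as np

def render_in_chunks(render_fn, rays, chunk=1024):
    """Apply a memory-hungry per-ray function in fixed-size chunks and
    concatenate the outputs. Lowering `chunk` reduces peak memory at the
    cost of more forward passes."""
    outs = [render_fn(rays[i:i + chunk]) for i in range(0, len(rays), chunk)]
    return np.concatenate(outs, axis=0)
```

In a PyTorch codebase the same pattern applies, typically combined with `torch.no_grad()` during visualization so no activations are kept for backprop.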
