
neuralwarp's Introduction

NeuralWarp: Improving neural implicit surfaces geometry with patch warping (CVPR 2022)

Code release for the paper Improving neural implicit surfaces geometry with patch warping
by François Darmon, Bénédicte Bascle, Jean-Clément Devaux, Pascal Monasse and Mathieu Aubry

(Teaser figure: results)

Installation

See requirements.txt for the required Python packages.
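
A minimal setup sketch, assuming a standard pip environment (conda or any other environment manager works equally well):

pip install -r requirements.txt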

Data

Download data with
./download_dtu.sh and ./download_epfl.sh

Extract mesh from a pretrained model

Download the pretrained models with
./download_pretrained_models.sh

Run the extraction script with
python extract_mesh.py --conf CONF --scene SCENE [--OPTIONS]

  • CONF is the configuration file (e.g. confs/NeuralWarp_dtu.conf)
  • SCENE is the scan id for DTU data, and either fountain or herzjesu for EPFL.
  • See python extract_mesh.py --help for a detailed explanation of the options. The evaluations in the paper use the default options for DTU, and --bbox_size 4 --no_one_cc --filter_visible_triangles --min_nb_visible 1 for EPFL (see the example below).
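
As a concrete illustration (the scan id 65 and the EPFL configuration filename confs/NeuralWarp_epfl.conf are assumptions based on the naming pattern above, not taken from the repository):

python extract_mesh.py --conf confs/NeuralWarp_dtu.conf --scene 65
python extract_mesh.py --conf confs/NeuralWarp_epfl.conf --scene fountain --bbox_size 4 --no_one_cc --filter_visible_triangles --min_nb_visible 1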

The output mesh will be written to evals/CONF_SCENE/output_mesh.ply

You can also run the evaluation: first download the DTU evaluation data with ./download_dtu_eval.sh, then run the evaluation script
python eval.py --scene SCENE
The evaluation metrics will be written in evals/CONF_SCENE/result.txt.
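
For instance, a full evaluation run for a single DTU scan might look like the following sketch (the scan id 65 is only illustrative; the script name download_dtu_eval.sh assumes the same .sh naming as the other download scripts):

./download_dtu_eval.sh
python eval.py --scene 65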

Train a model from scratch

First, train a baseline model (i.e. VolSDF) with
python train.py --conf confs/baseline_DATASET --scene SCENE

Then finetune using our method with
python train.py --conf confs/NeuralWarp_DATASET --scene SCENE
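
As a concrete sketch, assuming DATASET is dtu and that the baseline configuration follows the same .conf naming as confs/NeuralWarp_dtu.conf (the scan id 65 is only illustrative):

python train.py --conf confs/baseline_dtu.conf --scene 65
python train.py --conf confs/NeuralWarp_dtu.conf --scene 65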

A visualization html file is generated for each training in exps/CONF_SCENE/TIMESTAMP/visu.html.

Acknowledgments

This repository is inspired by IDR.

This work was supported in part by ANR project EnHerit ANR-17-CE23-0008 and was performed using HPC resources from GENCI–IDRIS 2021-AD011011756R1. We thank Tom Monnier and Bruno Lecouat for valuable feedback, and Jingyang Zhang for sending MVSDF results.

Copyright

NeuralWarp: all rights reserved to Thales LAS and ENPC.

This code is freely available for academic use only and is provided “as is” without any warranty.

Modifications are allowed for academic research provided that the following conditions are met:
  * Redistributions of the source code, in any format, must retain the above copyright notice and this list of conditions.
  * Neither the name of Thales LAS and ENPC nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.


neuralwarp's Issues

custom dataset

Hello, if I need to train on a custom dataset, what are the format requirements?

How to get the pair.txt file in dtu_supp?

Thank you very much for your contribution. I want to try training on my own data, but I am confused about the pair.txt file in "dtu_supp". How do I generate this file?

Camera Normalization on DTU dataset

Dear author,
Thank you for your open source code. I noticed that VolSDF normalizes the cameras (within a sphere with R=3) on the DTU dataset. However, NeuralWarp loads camera parameters from the original cameras.npz, which is not normalized. Since this project is built upon VolSDF, I want to know why the camera normalization step was removed. Moreover, I think the default value of --bbox_size in extract_mesh.py is not very reasonable: it should be set carefully for each scene because the cameras are not normalized. If the value is too large, a lot of empty space is wasted; if it is too small, the extracted mesh is defective.
I would like to hear your viewpoint on this.
Thanks!

Missing license file

Hello!

Thanks for sharing the code, your work is really impressive! I have noticed that you did not include any license file. Per copyright laws, we have to assume the most restrictive license (i.e. all derivatives are forbidden, etc.)

Was that intentional, or do you actually not mind other research projects building on top of your code?

result visualization

Hello,

thanks for your excellent work! I am wondering how you created the result visualization teaser.gif in the README, with a moving camera and a sliding bar to switch between the two meshes. Thank you in advance!

Best regards

Question about the paper

Hi, your great work is impressive!
But I'm a little confused about the warp loss and the validity masks:
1. "our warping-based loss such that every valid patch in the reference image is given the same weight." Why is it the same weight?
2. "second, when the reference and source views are on two different sides of the plane defined by xi and the normal ni; third, when a camera center is too close to the plane defined by xi and the normal ni." Why is the binary indicator V set to 0 in these cases?
Looking forward to your reply.

Error visualization in appendix

Dear @fdarmon

Thank you very much for your work. In the error visualization in your appendix, the GT points look very clean compared to the provided GT. I wonder if you preprocessed the DTU GT points to obtain this visualization.

Looking forward to your reply!

Reproducibility concerns

Hello. Thank you for your amazing work and for sharing the code!
I tried training your model from scratch on some of the benchmark scenes and had problems reproducing your results. The model seems to be quite sensitive to the random seed, and even after several attempts, the quality I obtained was lower than reported.

Experiment    Fountain - Full   Fountain - Center   Herzjesu - Full   Herzjesu - Center
pre-trained   7.77              1.91                8.88              2.03
seed 2022     8.03              2.65                7.66              2.55
seed 42       13.36             7.43                10.54             2.58

Could you provide some insights regarding this issue?

Quantitative results on DTU dataset

As described in your paper, "the results for each method are taken from their original paper".
But why are the results for NeuS [34] different from its original paper?

[34] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In Adv. Neural Inform. Process. Syst., 2021.

Question about the two-stage training

Dear author,
According to the description of the paper, the training pipeline includes two stages. First, train for 100k iterations in the same setting as VolSDF. Then finetune for 50k iterations with the proposed method. Does this mean that I need to add the option "--is_continue --timestamp XXXXX" in stage 2? Moreover, according to the paper, the learning rate of stage 2 is 1e-5, which is different from the learning rate (5.0e-4) in NeuralWarp.conf. Do I need to change the learning rate in the configuration to 1e-5?
Thanks!

training custom dataset

Hello, first of all thank you for your work. May I ask why the loss drops very slowly when training on a custom dataset, and why the quality of the resulting mesh is also poor?

How to reduce the GPU memory usage

Hello, I encountered an OOM error after "generate_visu" on an RTX 3080. Can you provide some suggestions on how to reduce the batch size or use multiple GPUs?
