ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting

The official PyTorch codebase for the ICCV'23 paper "ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting"

| Project Page | Paper |

Updates

[2023/08/11] Released more training and evaluation code. The pre-trained checkpoints will be updated later.

Setup

Our code is mainly based on the awesome third-party torch-ngp implementation. The instructions to set up the running environment are as follows:

Clone Repo:

git clone --recursive git@github.com:nexuslrf/ENVIDR.git
cd ENVIDR

Install Packages:

conda create -n envidr python=3.8
conda activate envidr
# you might need to manually install torch>=1.10.0 based on your device
pip install -r requirements.txt

Torch Extensions:

By default, cpp_extension.load builds the extensions at runtime, which can sometimes be inconvenient.
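
For reference, runtime loading with torch.utils.cpp_extension.load works roughly as in the sketch below; the module name and source files are placeholders, not this repo's actual extension modules:

# Illustrative sketch of PyTorch's runtime (JIT) extension loading.
# "my_ext" and its sources are placeholders, not this repo's actual modules.
from torch.utils.cpp_extension import load

# Compiles the sources on first call and caches the build;
# later runs reuse the cache unless the sources change.
my_ext = load(
    name="my_ext",
    sources=["my_ext.cpp", "my_ext.cu"],
    verbose=True,
)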

We recommend pre-building all extensions by running the following script:

# install all extension modules locally
bash build_ext.sh

Dataset Preparation

General Scenes/Objects

We use the format of the original nerf-synthetic dataset for model training. The download links for the dataset are shown below:

By default, we put the datasets under the data/ folder.
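
For reference, each nerf-synthetic scene folder contains per-split JSON files (transforms_train.json, transforms_val.json, transforms_test.json) holding the camera intrinsics and poses; a minimal sketch of reading one, with a placeholder scene name, looks like:

# A minimal sketch of reading nerf-synthetic camera metadata.
# "lego" is a placeholder scene name; adjust to your data/ layout.
import json

with open("data/lego/transforms_train.json") as f:
    meta = json.load(f)

print(meta["camera_angle_x"])     # horizontal field of view (radians)
frame = meta["frames"][0]
print(frame["file_path"])         # image path, e.g. "./train/r_0"
print(frame["transform_matrix"])  # 4x4 camera-to-world pose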

Spheres for Pre-trained Rendering MLPs

To learn the pre-trained rendering MLPs, you additionally need to download a set of HDRI environment light images. In our experiments, we use the 11 HDRIs provided by the Filament renderer. We provide a script to download the HDRIs and convert them for the Filament renderer:

bash prepare_hdri.sh

In case you fail to convert the HDRIs into KTX files, we also provide our pre-computed KTX files.

After obtaining the converted KTX files for the environment maps, you can run generate_set.py to verify the rendering results and to get a set of sample images for evaluation during training.

python generate_set.py

The results will be written to data/env_sphere/env_dataset by default.

Training

Neural Renderer

To train a neural renderer from scratch:

python main_nerf.py --config ./configs/neural_renderer.ini
# Optionally, train rendering MLPs for indirect reflection.
python main_nerf.py --config ./configs/neural_renderer_renv.ini

By default, the results will be saved under exps/

We also provide the checkpoints of our pre-trained neural renderer.

General Scenes

python main_nerf.py --config ./configs/scenes/toaster.ini

You can get decent results after 500 epochs of training.

Note that the following flags in the .ini file enable interreflections (lines starting with ; are commented out):

use_renv = True # color encoding MLP $E_{ref}$ in Eqn. 13 of the paper
; indir_ref = True
indir_ref_start_iter = 140 # indir_ref is enabled after 140 epochs
learn_indir_blend = True
grad_rays_start_iter=100
grad_rays_scale=0.05

; dir_only = True # only render direct illumination
; indir_only = True # only render indirect illumination

These pre-trained rendering MLPs can also be used with NeuS-like geometry models (without hash encoding):

python main_nerf.py --config ./configs/scenes/materials_neus.ini

Applications

Extract Environment Map

python main_nerf.py --config configs/unwrap_scene.ini \
    --workspace exps/scenes/toaster --swap_env_path exps/scenes/toaster/checkpoints/ngp_ep0500.pth \
    --unwrap_color_intensity=1.0 --unwrap_roughness=0.4

Note that you need to manually tune unwrap_roughness to get a clear and detailed environment map.
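
One practical way to find a good value is a small sweep; the hypothetical helper below (the value grid is illustrative) simply re-runs the command above with different roughness settings:

# Hypothetical sweep over --unwrap_roughness; the value grid is illustrative.
import subprocess

for roughness in [0.2, 0.3, 0.4, 0.5]:
    subprocess.run([
        "python", "main_nerf.py", "--config", "configs/unwrap_scene.ini",
        "--workspace", "exps/scenes/toaster",
        "--swap_env_path", "exps/scenes/toaster/checkpoints/ngp_ep0500.pth",
        "--unwrap_color_intensity=1.0",
        f"--unwrap_roughness={roughness}",
    ], check=True)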

Relight the Scene

python main_nerf.py --config configs/scenes/toaster.ini --test \
    --swap_env_path exps/envs_all_11_unit_en/checkpoints/env_ckpts/env_net_3.pth \
    --sh_degree 4 --hidden_dim_env 160 \
    --val_folder_name relighting \
    --intensity_scale=0.8 --roughness_scale=0.8

Note that you can manually tune intensity_scale and roughness_scale to get the desired relighting results.
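
To relight the scene under every environment, you could loop over the environment checkpoints; the sketch below assumes they follow the env_net_<i>.pth naming shown above, one checkpoint per HDRI:

# Assumption: one checkpoint per HDRI, named env_net_0.pth ... env_net_10.pth
# as suggested by the path above; adjust the range/paths if your set differs.
import subprocess

for i in range(11):  # the 11 Filament HDRIs used in our experiments
    subprocess.run([
        "python", "main_nerf.py", "--config", "configs/scenes/toaster.ini", "--test",
        "--swap_env_path", f"exps/envs_all_11_unit_en/checkpoints/env_ckpts/env_net_{i}.pth",
        "--sh_degree", "4", "--hidden_dim_env", "160",
        "--val_folder_name", f"relighting_env{i}",
        "--intensity_scale=0.8", "--roughness_scale=0.8",
    ], check=True)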

Rotate the Environment

python main_nerf.py --config configs/scenes/toaster.ini --test \
    --test_ids 57 --val_folder_name rot_env \
    --env_rot_degree_range 0 360 5 # [degree_start, degree_end, num_views]
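
Assuming the three numbers are sampled linspace-style (our reading of the flag's [degree_start, degree_end, num_views] comment, not confirmed against the code), 0 360 5 yields five evenly spaced view angles:

# Illustration of the assumed [degree_start, degree_end, num_views] sampling.
import numpy as np
print(np.linspace(0, 360, 5))  # [  0.  90. 180. 270. 360.]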

Acknowledgement

We also used other awesome codebases, most notably torch-ngp, to implement this project.

Citation

@article{liang2023envidr,
  title={ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting},
  author={Liang, Ruofan and Chen, Huiting and Li, Chunlin and Chen, Fan and Panneer, Selvakumar and Vijaykumar, Nandita},
  journal={arXiv preprint arXiv:2303.13022},
  year={2023}
}
