neural-light-transport's Introduction

Neural Light Transport (NLT)

ACM Transactions on Graphics 2021

[arXiv] [Video] [Project]

(Teaser figure)

This is the authors' code release for:

Neural Light Transport for Relighting and View Synthesis
Xiuming Zhang, Sean Fanello, Yun-Ta Tsai, Tiancheng Sun, Tianfan Xue, Rohit Pandey, Sergio Orts-Escolano, Philip Davidson, Christoph Rhemann, Paul Debevec, Jonathan T. Barron, Ravi Ramamoorthi, William T. Freeman
TOG 2021

in which we show how to train neural networks to perform simultaneous relighting and view synthesis, exhibiting complex light transport effects (such as specularity, subsurface scattering, and global illumination):

(Animations: Dragon (specular) and Dragon (subsurface scattering))

This repository contains our rendered data, the code that rendered those data, and TensorFlow 2 (eager) code for training and testing NLT models.

If you use this repository or find it useful, please cite the paper (BibTeX).

This is not an officially supported Google product.

Before You Start...

Relighting Only?

The UV texture space formulation is most useful when views vary. If you are doing relighting from a fixed viewpoint, you can simply skip mapping between the camera and UV spaces. That is, you can just treat the camera-space ("normal-looking") images as UV-unwrapped ones. Intuitively, this is equivalent to using an identity mapping as UV (un)wrapping.
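
As a concrete (if trivial) sketch, this identity (un)wrapping amounts to two no-op functions; the names are illustrative, not part of the NLT codebase:

    import numpy as np

    def unwrap_to_uv(cam_img: np.ndarray) -> np.ndarray:
        # With a fixed viewpoint, the camera-space image already serves as the
        # "texture," so no resampling into UV space is needed.
        return cam_img

    def wrap_to_camera(uv_img: np.ndarray) -> np.ndarray:
        # The inverse mapping is likewise the identity.
        return uv_img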

Relighting or View Synthesis (Not Simultaneously)?

If you do not care about simultaneous relighting and view synthesis, you can simply use a "slice" of the released data. For instance, if you are doing just view synthesis, then you can fix lighting by training on just the multi-view data under that lighting condition.

If you are rendering your own scene (see the data generation folder), use a single JSON path with no wildcard to fix the view or light.
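
For illustration only (these paths are hypothetical; the real layout is defined by the data generation code), the difference is between globbing a wildcard over conditions and pinning one concrete path:

    from glob import glob

    # Hypothetical paths -- consult the data generation README for the real layout.
    many_views = glob('scenes/dragon/light000_cam*/params.json')  # views vary
    one_view = 'scenes/dragon/light000_cam005/params.json'        # view fixed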

Data

We provide both our rendered data and the scripts that produced it, so you can either use our data directly or render your own Blender scenes.

Download Metadata

See "Downloads -> Metadata" of the project page.

Download Our Rendered Data

See "Downloads -> Rendered Data" of the project page.

(Optional) Render Your Own Data

Blender 2.78c is used for scene modeling and rendering. The code was tested on Ubuntu 16.04.6 and 18.04.3 LTS, but should work with other reasonable OS versions.

See the data generation folder and its own README.
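
If you just want to launch a headless render, Blender's standard command-line flags can be driven from Python; the .blend and script paths below are placeholders, not files named by this repository:

    import subprocess

    # Run Blender 2.78c without a GUI and execute a rendering script inside it.
    # '--background' and '--python' are standard Blender CLI flags.
    subprocess.run([
        'blender', '--background', 'scene.blend',
        '--python', 'render.py',
    ], check=True)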

Model Training and Testing

We use TensorFlow 2 (eager execution) for neural network training and testing. The code was tested on Ubuntu 16.04.6 LTS, but should work with other reasonable TensorFlow or OS versions.
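
For readers unfamiliar with the style, here is a generic TensorFlow 2 eager training step; it is a minimal sketch with a stand-in model and loss, not the NLT model, which lives in the training and testing folder:

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(3)])
    optimizer = tf.keras.optimizers.Adam(1e-3)

    def train_step(x, y):
        # In eager execution, gradients are taped op by op at runtime,
        # rather than compiled into a static graph up front.
        with tf.GradientTape() as tape:
            pred = model(x, training=True)
            loss = tf.reduce_mean(tf.square(pred - y))  # stand-in L2 loss
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    loss = train_step(tf.random.normal((4, 8)), tf.random.normal((4, 3)))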

See the training and testing folder and its own README.

Pre-Trained Models

See "Downloads -> Pre-Trained Models" of the project page.

Issues or Questions?

If the issue is code-related, please open an issue here.

For questions, please also consider opening an issue, as it may benefit future readers. Otherwise, email Xiuming Zhang.

Changelog

  • 01/05/2021: See the 01/07/2021 commits; v3 paper and video (TOG camera-ready version).
  • 08/20/2020: Updated the paper and the video.
  • 08/07/2020: Initial release.

neural-light-transport's Issues

Issues running nlt_test.py

When running nlt_test.py, I get the following error:
(screenshot of the error)

Running postproc.py to generate the JSON gives the following error:
(screenshot of the error)

Am I missing some files to run postproc.py?

Below is the folder structure I'm using:
(screenshot of the folder structure)

Here are the files within ./checkpoints/test_30/:
(screenshot of the file listing)

And here is the content of checkpoints.ini:
(screenshot of checkpoints.ini)

Is it possible to run on Windows, and how do I run nlt_test.py?

  1. I want to run the nlt_test.py code, but I don't know how to specify the CKPT path. Can I just specify the path directly?
  2. I don't quite understand how CKPT-30 and CKPT-10 are used. Could you give me a demo?
  3. How can I synthesize images of a custom character in a new scene? I hope you can give some advice.

I hope to receive your prompt reply. Thank you.

Image-based relighting

Hi,
This is a great project; thanks for open-sourcing it. I have a question: how can I use this project to achieve human image-based relighting? Looking forward to your reply. Thanks.

Why use NLT at all?

This is a very well-designed experiment, so please do not take my words as overly critical. It is just that what you are proposing in your paper is not entirely true. You say:

"With this learned LT, one can relight the scene photorealistically with a directional light or an HDRI map, synthesize novel views with view-dependent effects, or do both simultaneously, all in a unified framework using a set of sparse, previously seen observations. "

This last statement suggests that you could take a small number of photographs from different perspectives (such as those used by the instant-ngp model) and then relight a static scene (i.e., a single photograph). Reading more deeply into your paper, this proposal is contradicted. In fact, not only does this project require OLAT data as input (specifically, hundreds of OLAT images from over 60 RGB cameras placed around the performer), which could hardly be called "sparse", but it appears you also require a proxy mesh (distributed in a Blender file). I do not see that any rendering is possible without a mesh, and a reliable, high-resolution mesh cannot be generated from a set of sparse, previously seen observations, as you state.

To make a long story short, all of this could already be done many years ago using photogrammetry and GPU-based rendering engines (such as Cycles, Redshift, and Octane). Perhaps you can explain why we would want to invest thousands of dollars in an OLAT-capture cage when the same quality of output could be achieved in a fraction of the time with a single DSLR camera (and a bit of patience)? This isn't theoretical; I have done it.

In other words, this project would only be useful if it were able to relight scenes from a small set of images. Clearly, from the questions being posted here, the average reader has misunderstood what the original paper was proposing. Perhaps if the language were simplified or at least brought closer to reality, we wouldn't waste our time looking at the code thinking that it could be used to "relight a scene from a set of sparse views", OLAT-derived or not. It cannot.

TL;DR: instead of downloading 19 GB of "rendered data" for a dragon, just take a 2 MB OBJ of the same dragon, import it into Blender, put some point lights or an HDRI around it, and render it in Cycles. No need to mess with "neural light transport", which would require 1) having access to a dragon figurine and 2) taking hundreds of photos of that dragon under actual lights (in a studio or OLAT cage), crossing your fingers, and hoping for the same results as the OBJ in Cycles.

It is likely I am missing the point of this paper, but it seems silly to go to so much trouble to do something that a simpler approach has handled for over a decade now.

`data_gen/gen_file_stats.py` does not exist

Hi,

I was trying to run NLT on my own dataset and ran into an error referring to gen_file_stats.py, but that file doesn't seem to exist. How can I generate the data status without it, or where does the file live?

Thanks!

Edit: I see that postproc.py actually does this, but it might be worth updating the error message.
