
Physically-based Editing of Indoor Scene Lighting from a Single Image

Zhengqin Li, Jia Shi, Sai Bi, Rui Zhu, Kalyan Sunkavalli, Miloš Hašan, Zexiang Xu, Ravi Ramamoorthi, Manmohan Chandraker

Related links:

Dependencies

We highly recommend using Anaconda to manage Python packages. Required dependencies include:

Train and test models on the OpenRooms dataset

  1. Download the OpenRooms dataset.
  2. Compile the OptiX-based shadow renderer with Python bindings.
  3. Modify the PyTorch3D code to support an RMSE Chamfer distance loss.
    • Go to chamfer.py.
    • Add a flag isRMSE = False to the function chamfer_distance.
    • Modify chamfer_distance by adding the lines below after cham_x and cham_y are defined:
    if isRMSE == True:
        cham_x = torch.sqrt(cham_x + 1e-6)
        cham_y = torch.sqrt(cham_y + 1e-6)
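The patch above can be illustrated with a minimal pure-Python sketch of the idea: cham_x and cham_y hold squared nearest-neighbor distances between two point sets, and the isRMSE flag takes their square root (stabilized by a small epsilon) before reduction. The function and flag names mirror the patch, but this is an illustration, not the PyTorch3D implementation.

```python
import math

def chamfer_distance(pts_x, pts_y, isRMSE=False, eps=1e-6):
    """Symmetric Chamfer distance between two lists of 3D points."""
    def nearest_sq(src, dst):
        # squared distance from each src point to its nearest dst point
        return [min(sum((a - b) ** 2 for a, b in zip(p, q)) for q in dst)
                for p in src]

    cham_x = nearest_sq(pts_x, pts_y)
    cham_y = nearest_sq(pts_y, pts_x)
    if isRMSE:
        # the patched lines: sqrt of per-point squared distances, plus eps
        cham_x = [math.sqrt(d + eps) for d in cham_x]
        cham_y = [math.sqrt(d + eps) for d in cham_y]
    return sum(cham_x) / len(cham_x) + sum(cham_y) / len(cham_y)

x = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
y = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(chamfer_distance(x, y))                 # → 1.0 (squared-distance form)
print(chamfer_distance(x, y, isRMSE=True))    # RMSE form, slightly larger here
```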
  4. Train models.
    • Train the material prediction network.
    python trainBRDF.py           # Train material prediction.
    • Train light source prediction networks.
    python trainVisLamp.py        # Train visible lamp prediction.
    python trainVisWindow.py      # Train visible window prediction.
    python trainInvLamp.py        # Train invisible lamp prediction.
    python trainInvWindow.py      # Train invisible window prediction.
    • Train the neural renderer.
    python trainShadowDepth.py --isGradLoss      # Train shadow prediction.
    python trainDirectIndirect.py                # Train indirect illumination prediction.
    python trainPerpixelLighting.py              # Train perpixel lighting prediction.
  5. Test models.
    • Test the material prediction network.
    python testBRDF.py            # Test BRDF prediction. Results in Table 5 in the supplementary material.
    • Test light source prediction networks.
    python testVisLamp.py         # Test visible lamp prediction. Results in Table 3 in the main paper.
    python testVisWindow.py       # Test visible window prediction. Results in Table 3 in the main paper. 
    python testInvLamp.py         # Test invisible lamp prediction. Results in Table 3 in the main paper.
    python testInvWindow.py       # Test invisible window prediction. Results in Table 3 in the main paper.
    • Test the neural renderer.
    python testShadowDepth.py --isGradLoss       # Test shadow prediction. Results in Table 2 in the main paper. 
    python testDirectIndirect.py                 # Test indirect illumination prediction. 
    python testPerpixelLighting.py               # Test perpixel lighting prediction. 
    python testFull.py                           # Test the whole neural renderer with predicted light sources. Results in Table 4 in the main paper. 

Scene editing applications on real images

  1. Prepare input data.
    • Create a root folder, e.g. Example1.
    • Create a folder Example1/input for input data. The folder should include:
      • image.png: Input RGB image of resolution 240 x 320.
      • envMask.png: A mask specifying the indoor/outdoor regions, where 0 indicates outdoor (window) regions.
      • lampMask_x.png: Masks for visible lamps, where x is the lamp ID starting from 0.
      • winMask_x.png: Masks for visible windows, where x is the window ID starting from 0.
    • Create testList.txt. Add the absolute path of Example1 as its first line.
    • An example from our teaser scene can be found in Example1.
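The input layout above can be sketched as a few shell commands. The folder and mask names come from the README; the lamp/window mask files themselves must of course be produced for your own scene.

```shell
# Hypothetical setup sketch for one scene; masks and image are placeholders
# you must supply yourself.
mkdir -p Example1/input
# Place image.png, envMask.png, lampMask_0.png, winMask_0.png (and further
# lampMask_x.png / winMask_x.png files as needed) into Example1/input.
echo "$(pwd)/Example1" > testList.txt
```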
  2. Depth prediction. We use DPT in our paper; higher-quality depth from an RGBD sensor should lead to better results.
    • Download DPT and save it in the folder DPT.
    • Run testRealDepth.py. The result will be saved as depth.npy in Example1/input.
    python testRealDepth.py --testList testList.txt
  3. Material and light source prediction.
    • Run testRealBRDFLight.py. Add the flag --isOptimize to improve quality.
    python testRealBRDFLight.py --testList testList.txt --isOptimize
  4. Edit light sources, geometry or materials.
  5. Rerender the image with the neural renderer.
    • Run testRealRender.py. Depending on the edit, you may need extra flags:
      • --objName when inserting virtual objects.
      • --isVisLampMesh when inserting virtual lamps.
      • --isPerpixelLighting to predict per-pixel environment maps, which are used to render the specular bunnies on the Garon et al. dataset in the paper.
    python testRealRender.py --testList testList.txt --isOptimize
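The three-stage real-image pipeline above can be sketched as a small wrapper that builds the commands in order. The script names and flags are taken from the README; the wrapper itself is hypothetical and only assembles the command lines rather than guaranteeing any particular script interface.

```python
# Hypothetical orchestration sketch: build the depth -> BRDF/light -> render
# command sequence from the README. Only constructs commands; run them with
# subprocess.run(cmd) in a real setup.
import shlex

def pipeline_commands(test_list="testList.txt", optimize=True):
    opt = ["--isOptimize"] if optimize else []
    return [
        ["python", "testRealDepth.py", "--testList", test_list],
        ["python", "testRealBRDFLight.py", "--testList", test_list] + opt,
        ["python", "testRealRender.py", "--testList", test_list] + opt,
    ]

for cmd in pipeline_commands():
    print(shlex.join(cmd))
```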

indoorlightediting's People

Contributors

  • lzqsd
