half-potato / nmf

Our method takes as input a collection of images (100 in our experiments) with known cameras, and outputs the volumetric density and normals, materials (BRDFs), and far-field illumination (environment map) of the scene.

Home Page: https://half-potato.gitlab.io/posts/nmf/

License: MIT License

Python 1.18% C++ 0.01% C 0.01% Cuda 0.05% Jupyter Notebook 98.76%
deep-learning inverse-rendering nerf 3d-reconstruction 3d-rendering computer-vision

nmf's Introduction

Neural Microfacet Fields for Inverse Rendering

More details can be found at the project page here.

Installation

A conda virtual environment is recommended.
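For example (the environment name nmf and Python 3.10 are illustrative choices, not requirements of the repository):

conda create -n nmf python=3.10
conda activate nmf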

pip install -r requirements.txt

Dataset dir should contain a folder named nerf_synthetic with various datasets in the blender configuration.

python train.py -m expname=v38_noupsample model=microfacet_tensorf2 dataset=ficus,drums,ship,teapot vis_every=5000 datadir={dataset dir}
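Here, {dataset dir} is expected to be laid out roughly like this (the scene folders shown are just the ones named in the command above):

{dataset dir}/
└── nerf_synthetic/
    ├── ficus/
    ├── drums/
    ├── ship/
    └── teapot/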

Experiment configuration is handled with Hydra, which controls the initialization parameters for all of the modules. Look in configs/model to see what options are available. Setting the BRDF activation, for example, would look like adding

model.arch.model.brdf.activation="sigmoid"

to the command line.
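Putting it together, a full training command with that override appended would look like this:

python train.py -m expname=v38_noupsample model=microfacet_tensorf2 dataset=ficus vis_every=5000 datadir={dataset dir} model.arch.model.brdf.activation="sigmoid"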

To relight a dataset, you first need to convert the environment map .exr file to a PyTorch checkpoint {envmap}.th like this:

python -m scripts.pano2cube backgrounds/christmas_photo_studio_04_4k.exr --output backgrounds/christmas.th

Then, after training a model and obtaining a checkpoint {ckpt}.th, you can run

python train.py -m expname=v38_noupsample model=microfacet_tensorf2 dataset=ficus vis_every=5000 datadir={dataset dir} ckpt={ckpt}.th render_only=True fixed_bg={envmap}.th
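For example, using the environment map converted above:

python train.py -m expname=v38_noupsample model=microfacet_tensorf2 dataset=ficus vis_every=5000 datadir={dataset dir} ckpt={ckpt}.th render_only=True fixed_bg=backgrounds/christmas.th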

Recreating Experiments

Note that something is currently wrong with the computation of metrics in the current code, so the scripts reval_lpips.ipynb and reeval_norm_err.ipynb currently have to be run. tabularize.ipynb can be used to create the tables, and other fun visualizations are available as well. You can also download our relighting experiments from here.

Other datasets

Other dataset configurations are available in configs/dataset. Real world datasets are available and do work.

Here is a link to the relighting dataset.

nmf's People

Contributors

apchenstu, gkouros, half-potato


Forkers

3a1b2c3, jwgu, gkouros

nmf's Issues

Regarding the relighting evaluation

May I ask where you got the ground-truth images for the relighting metrics evaluation on the Shiny Blender dataset? Could you please provide the relighting dataset, or explain how to generate the relighting evaluation dataset?

How to train with known envmap?

Hi,

Is it possible to train with a known envmap, and if so, how can you do that? I'm trying to see how the model performs on material estimation when you reduce the ambiguity by providing known illumination.

I tried to set the fixed_bg argument, but it's not working. It seems to expect a torch checkpoint, and when I feed it backgrounds/forest.th there are some conflicts with the resolution (which is easy to fix) and missing keys (not so easy to fix).

Also, is it possible to use the exr files directly?

Thanks.

llff dataset - gardenspheres

I am trying to run the Ref-NeRF gardenspheres real dataset (llff type),
https://dorverbin.github.io/refnerf/,
using the following config:
scenedir: gardenspheres
dataset_name: llff
downsample_train: 4
downsample_test: 4
ndc_ray: false
near_far: [1, 6]
#stack_norms: false
aabb_scale: 2

But it is not working:

Tue 21 Nov 2023 04:40:12 PM CET
Warp 0.10.1 initialized:
   CUDA Toolkit: 11.5, Driver: 12.0
   Devices:
     "cpu"    | CPU
     "cuda:0" | NVIDIA A40 (sm_86)
   Kernel cache: /home/pnaikade/.cache/warp/0.10.1
[2023-11-21 16:41:31,520][HYDRA] Launching 1 jobs locally
[2023-11-21 16:41:31,520][HYDRA] 	#0 : expname=gardenspheres_test model=microfacet_tensorf2 dataset=toycar vis_every=5000 datadir=/HPS/ColorNeRF/work/ref_nerf_dataset
ic| expname: 'toycar_gardenspheres_test'
ic| self.N_voxel_list: [4283103, 7622116, 12358440, 18736316, 27000000]
ic| self.use_predicted_normals: False
    self.align_pred_norms: True
    self.orient_world_normals: True
2023-11-21 16:41:44.815 | INFO     | __main__:reconstruction:322 - initial ortho_reg_weight
2023-11-21 16:41:44.816 | INFO     | __main__:reconstruction:325 - initial L1_reg_weight
2023-11-21 16:41:44.816 | INFO     | __main__:reconstruction:328 - initial TV_weight density: 0.0 appearance: 0.0
2023-11-21 16:41:45.113 | INFO     | __main__:reconstruction:338 - TensorNeRF(
  (rf): TensorVMSplit(
    (density_rf): TensoRF(
      (app_plane): ParameterList(
          (0): Parameter containing: [torch.float32 of size 1x16x128x128 (cuda:0)]
          (1): Parameter containing: [torch.float32 of size 1x16x128x128 (cuda:0)]
          (2): Parameter containing: [torch.float32 of size 1x16x128x128 (cuda:0)]
      )
      (app_line): ParameterList(
          (0): Parameter containing: [torch.float32 of size 1x16x128x1 (cuda:0)]
          (1): Parameter containing: [torch.float32 of size 1x16x128x1 (cuda:0)]
          (2): Parameter containing: [torch.float32 of size 1x16x128x1 (cuda:0)]
      )
    )
    (app_rf): TensoRF(
      (app_plane): ParameterList(
          (0): Parameter containing: [torch.float32 of size 1x24x128x128 (cuda:0)]
          (1): Parameter containing: [torch.float32 of size 1x24x128x128 (cuda:0)]
          (2): Parameter containing: [torch.float32 of size 1x24x128x128 (cuda:0)]
      )
      (app_line): ParameterList(
          (0): Parameter containing: [torch.float32 of size 1x24x128x1 (cuda:0)]
          (1): Parameter containing: [torch.float32 of size 1x24x128x1 (cuda:0)]
          (2): Parameter containing: [torch.float32 of size 1x24x128x1 (cuda:0)]
      )
    )
    (basis_mat): Linear(in_features=72, out_features=24, bias=False)
    (dbasis_mat): Linear(in_features=48, out_features=1, bias=False)
  )
  (sampler): AlphaGridSampler()
  (model): Microfacet(
    (diffuse_module): RandHydraMLPDiffuse(
      (diffuse_mlp): Sequential(
        (0): Linear(in_features=24, out_features=3, bias=True)
      )
      (tint_mlp): Sequential(
        (0): Linear(in_features=24, out_features=3, bias=True)
      )
      (f0_mlp): Sequential(
        (0): Linear(in_features=24, out_features=3, bias=True)
      )
      (roughness_mlp): Sequential(
        (0): Linear(in_features=24, out_features=2, bias=True)
      )
    )
    (brdf): MLPBRDF(
      (h_encoder): ListISH()
      (d_encoder): ListISH()
      (mlp): Sequential(
        (0): Linear(in_features=66, out_features=64, bias=True)
        (1): ReLU(inplace=True)
        (2): Linear(in_features=64, out_features=64, bias=True)
        (3): ReLU(inplace=True)
        (4): Linear(in_features=64, out_features=4, bias=True)
      )
    )
    (brdf_sampler): GGXSampler()
  )
  (bg_module): IntegralEquirect()
  (tonemap): SRGBTonemap()
)
ic| white_bg: False
ic| self.nSamples: 625, self.stepsize: tensor(0.0157, device='cuda:0')
ic| self.nSamples: 625, self.stepsize: tensor(0.0157, device='cuda:0')
ic| self.diffuse_bias: 2.326634076573745
    mean_brightness: tensor(0.5488, device='cuda:0')
    v: 0.9110595349675754
ic| bg_brightness: tensor(0.5488, device='cuda:0')
    target_val: 0.9110595349675754
    self.bias: 2.2158484777592573
grid size tensor([128, 128, 128])
aabb tensor([-3.0000, -3.3400, -2.0000,  3.0000,  3.3400,  2.0000], device='cuda:0')
sampling step size:  tensor(0.0157)
sampling number:  625

  0%|          | 0/30000 [00:00<?, ?it/s]ic| ori_decay: 1
ic| normal_decay: 1
ic| gt_bg_path: None
/HPS/ColorNeRF/work/opt/anaconda3/envs/nmf/lib/python3.10/site-packages/numpy/core/fromnumeric.py:3504: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/HPS/ColorNeRF/work/opt/anaconda3/envs/nmf/lib/python3.10/site-packages/numpy/core/_methods.py:129: RuntimeWarning: invalid value encountered in scalar divide
  ret = ret.dtype.type(ret / rcount)

1.0e+00:   6%|| 1907/30000 [02:27<35:07, 13.33it/s]
psnr = nan test_psnr = 0.00 loss = 0.00000 envmap = 0.00000 diffuse = 0.00000 brdf = 0.00000 nrays = [100, 1000] mipbias = 1.0e+00:   6%|| 1909/30000 [02:27<35:10, 13.31it/s]
[... progress bar lines repeat with psnr = nan up to iteration 2000 ...]
/HPS/ColorNeRF/work/opt/anaconda3/envs/nmf/lib/python3.10/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]

psnr = nan test_psnr = 0.00 loss = 0.00000 envmap = 0.00000 diffuse = 0.00000 brdf = 0.00000 nrays = [100, 1000] mipbias = 1.0e+00:   7%|| 2000/30000 [02:35<36:17, 12.86it/s]
Error executing job with overrides: ['expname=gardenspheres_test', 'model=microfacet_tensorf2', 'dataset=toycar', 'vis_every=5000', 'datadir=/HPS/ColorNeRF/work/ref_nerf_dataset']
Traceback (most recent call last):
  File "/HPS/ColorNeRF/work/nmf/train.py", line 915, in train
    reconstruction(cfg)
  File "/HPS/ColorNeRF/work/nmf/train.py", line 805, in reconstruction
    if tensorf.check_schedule(iteration, 1):
  File "/HPS/ColorNeRF/work/nmf/modules/tensor_nerf.py", line 180, in check_schedule
    require_reassignment |= self.sampler.check_schedule(iter, batch_mul, self.rf)
  File "/HPS/ColorNeRF/work/nmf/samplers/alphagrid.py", line 93, in check_schedule
    self.update(rf)
  File "/HPS/ColorNeRF/work/opt/anaconda3/envs/nmf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/HPS/ColorNeRF/work/nmf/samplers/alphagrid.py", line 105, in update
    new_aabb = self.updateAlphaMask(rf, rf.grid_size)
  File "/HPS/ColorNeRF/work/opt/anaconda3/envs/nmf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/HPS/ColorNeRF/work/nmf/samplers/alphagrid.py", line 267, in updateAlphaMask
    xyz_min = valid_xyz.amin(0)
IndexError: amin(): Expected reduction dim 0 to have non-zero size.

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Tue 21 Nov 2023 04:44:44 PM CET

MAE computation is different from Ref-NeRF

Hi, I have been running experiments on Shiny Blender recently, and I want to compare my results with yours. But I find that your MAE computation is different from Ref-NeRF's.

Ref-NeRF averages MAE weighted by alpha*acc:
https://github.com/google-research/multinerf/blob/5b4d4f64608ec8077222c52fdf814d40acc10bc1/internal/ref_utils.py#L45-L50
https://github.com/google-research/multinerf/blob/5b4d4f64608ec8077222c52fdf814d40acc10bc1/eval.py#L156-L163

But you average MAE over all pixels.

nmf/renderer.py

Lines 375 to 376 in 3eb6039

norm_err *= test_dataset.acc_maps[im_idx].squeeze(-1)
norm_errs.append(norm_err.mean())

Since the MAE of transparent pixels is 0, the final results will be much smaller than Ref-NeRF's.
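To make the difference concrete, here is a minimal sketch of the two averaging schemes (the helper names are made up for illustration; this is not code from either repository):

import torch

def mae_over_all_pixels(norm_err, acc):
    # What the snippet above effectively does: zero out the error on
    # transparent pixels, then average over every pixel in the image.
    return (norm_err * acc).mean()

def mae_acc_weighted(norm_err, acc):
    # Ref-NeRF-style weighting as described above: normalize by the total
    # accumulated opacity so empty pixels do not dilute the average.
    return (norm_err * acc).sum() / acc.sum().clamp(min=1e-8)

# Toy example: half of the pixels are background (acc = 0).
norm_err = torch.tensor([10.0, 20.0, 0.0, 0.0])  # angular error in degrees
acc = torch.tensor([1.0, 1.0, 0.0, 0.0])         # accumulated opacity per pixel
print(mae_over_all_pixels(norm_err, acc))   # 7.5  -> diluted by background
print(mae_acc_weighted(norm_err, acc))      # 15.0 -> foreground pixels only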

Question about batch size

Hi.

I see that in the main training loop you use a varying batch size, rather than a fixed one. Can you please explain the rationale behind this decision and how the update rule works?

Thanks.

Computation of Normals using finite-difference kernels

Hi, congrats on a really awesome paper! I am particularly curious about how you compute normals on a tri-plane/TensoRF representation. In the paper you write:

However, like [20], we found that numerically computing spatial gradients of the density field using finite differences rather than using analytic gradients leads to normal vectors that we can use directly, without using features predicted by a separate MLP. Additionally, these numerical gradients can be efficiently computed using 2D and 1D convolutions using TensoRF's low-rank density decomposition.

I was able to trace the implementation to the Triplanar and Convolver classes: https://github.com/half-potato/nmf/blob/main/fields/triplanar.py#L101 and https://github.com/half-potato/nmf/blob/main/modules/convolver.py#L52

However, it seems that the Triplanar class is not referenced in any of the experiments and is never imported by other modules in the codebase. I just wanted to confirm whether this code is indeed the reference model, because the codebase also contains code to compute gradients with autograd: https://github.com/half-potato/nmf/blob/main/fields/tensor_base.py#L119

Cheers,
Eldar.

Albedo Visualization

Hi,

Thanks for the code, the lighting estimation results are impressive!
Is it possible to visualize the albedo estimated by your model?

Thanks!

Image downsampling bug

The bug occurs when the parameter downsample is set to >1; it is located on the following line:
https://github.com/half-potato/nmf/blob/3ef13e798a27e8c06d6186b87b0354044faad050/dataLoader/blender.py#L162C1-L163C1

To solve it, the image should be read with Pillow (uncomment line 158 and comment out line 159) so that the resize operation can be used with the argument Image.LANCZOS. Otherwise, the argument should be removed, since it is undefined for the resize function of numpy arrays.
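A sketch of the Pillow-based fix described above (variable names are illustrative, not the exact code in blender.py):

from PIL import Image

# Read with Pillow instead of straight into a numpy array so that
# Image.resize and its LANCZOS filter are available.
img = Image.open(image_path)
if downsample > 1.0:
    new_wh = (int(img.width / downsample), int(img.height / downsample))
    img = img.resize(new_wh, Image.LANCZOS)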

[Question] About the derivation of normal vector from the density field

In the released paper Neural Microfacet Fields for Inverse Rendering (ICCV'23), it was mentioned in Section C of the appendix that

To calculate the normal vectors of the density field, we apply a finite difference kernel, convolved with a 3×3 Gaussian smoothing kernel with σ = 1, then linearly interpolate between samples to get the resulting gradient in the 3D volume.

I appreciate this idea of using a smoothed gradient as the shading normal, and I want to check the code that implements it.
However, I failed to find the code that implements this finite-difference and Gaussian-filtering functionality.

The forward method of the TensorNeRF class in modules/tensor_nerf.py seems to use the compute_normals method of the TensorBase class, but this method directly uses analytically derived gradients rather than finite differences, and it does no Gaussian filtering.

Then, I searched the project for 'normal' and found another function called compute_density_norm in fields/triplanar.py that seems to do finite differences (I'm not sure). But this class does not appear to be in active use.

I would be very grateful if the authors could refer me to the code implementing the normal derivation described in the paper. Thanks in advance!
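For reference, here is a minimal, self-contained sketch of the smoothed finite-difference normal computation described in the quoted passage. It operates on a dense density grid for clarity and is only an illustration, not the repository's implementation (which exploits TensoRF's low-rank decomposition):

import torch
import torch.nn.functional as F

def gaussian_kernel1d(sigma=1.0, radius=1):
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    k = torch.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def density_normals(density, sigma=1.0):
    # density: (D, H, W) tensor of volume densities.
    k = gaussian_kernel1d(sigma)                      # 3-tap Gaussian for sigma = 1
    d = density[None, None]                           # (1, 1, D, H, W)
    # Separable 3D Gaussian smoothing: one 1D convolution per axis.
    d = F.conv3d(d, k.view(1, 1, -1, 1, 1), padding=(1, 0, 0))
    d = F.conv3d(d, k.view(1, 1, 1, -1, 1), padding=(0, 1, 0))
    d = F.conv3d(d, k.view(1, 1, 1, 1, -1), padding=(0, 0, 1))
    # Central finite differences of the smoothed density along each axis.
    gx, gy, gz = torch.gradient(d[0, 0])
    grad = torch.stack([gx, gy, gz], dim=-1)          # (D, H, W, 3)
    # Density increases into the surface, so the outward normal is the
    # negative gradient, normalized to unit length.
    return -grad / grad.norm(dim=-1, keepdim=True).clamp(min=1e-8)

normals = density_normals(torch.rand(64, 64, 64))     # (64, 64, 64, 3) unit vectors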

Release of blend files

Hi, could you please share the blend files for the Shiny Blender dataset? Thank you so much!

How to run on custom datasets?

Hi, I am trying to run NMF on my own datasets. My dataset format is DTU or NeuS, for which I wrote a new dataloader. But I notice that the rendered results, except the image at the last index, are just blank with vis_every=500. I tried to tune near_far but it did not work.

Is there any advice for custom datasets? For example, the scale of the scene, or the size of the bounding box of the poses?

Lower performance after loading the model for evaluation

As the title suggests, there are some inconsistencies between the evaluation at the end of training and the evaluation after loading the saved model. See the image below as an example. The geometry and materials are the same, but the rendering of the reflections is different. Is there something that's not stored in the saved checkpoints, or not loaded properly, that comes to mind? Or something not being initialized correctly?

The metrics are as follows:

                                    PSNR    SSIM    LPIPS   MAE
Initial evaluation after training   29.9    0.949   0.035   7.843
Reevaluation with loaded model      29.72   0.949   0.034   7.843

Thank you in advance!

temp

Issues with reconstruction of shiny ball scene

Hi. I'm trying to reproduce the results of the paper, and it has worked out pretty well on most scenes of the Shiny Blender dataset, but it keeps failing on the shiny ball scene. Do you have any idea why? I used the default settings in configs/dataset/ball.yaml.

Below you can see the result with field.smoothing=0.5 and field.smoothing=1.0 (as suggested in #7):
(result images attached to the issue)

Question about roughness

r2 = matprop["r1"][bounce_mask]

Shouldn't the key be "r2" instead of "r1"? Is that a bug or is there a reason behind this?

Also, what do r1 and r2 correspond to? They are the output of the roughness MLP, but roughness is usually a scalar, so what do the two values correspond to?

Thank you in advance!

[Question] Reproduction of the reported scores in the paper

Hi, I recently found this excellent work and tried a demo with the car scene from the Shiny Blender dataset. However, the resulting evaluation scores are lower (by a large margin) than those reported in the paper (v2 on arXiv):
PSNR: 29.61 (this score from the saved mean.txt; in the paper: 30.28)
SSIM: 0.9448 (this score from the saved mean.txt; in the paper: 0.951)
MAE: 8.0223 (this score calculated on saved normal maps; in the paper: 2.598)

The command I used:
python train.py -m expname=EXPNAME model=microfacet_tensorf2 dataset=car vis_every=5000 datadir=DATADIR model.arch.bg_module.bg_path=backgrounds/forest.exr

I hope to hear your response soon, thanks.
