
zipnerf-pytorch's Introduction

ZipNeRF

An unofficial PyTorch implementation of "Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields" (https://arxiv.org/abs/2304.06706). This work is based on multinerf, so features from RefNeRF, RawNeRF, and MipNeRF 360 are also available.

News

  • (2024.12.8) Add support for Intel's DPC++ backend, credits to Zong Wei.
  • (2024.2.2) Add support for nerfstudio, credits to Ling Jing.
  • (2023.6.22) Add mesh extraction through TSDF; add gradient scaling for near-plane floaters.
  • (2023.5.26) Implement the latest version of ZipNeRF (https://arxiv.org/abs/2304.06706).
  • (2023.5.22) Add mesh extraction; add logging and checkpointing system.

Results

New results (5.27): Pretrained weights

360_v2:

360_v2_0527.mp4

360_v2_glo (fewer floaters, but worse metrics):

360_v2_glo.mp4

Mesh results (5.27):

mesh

Mip-NeRF 360 (PSNR):

            bicycle  garden  stump  room   counter  kitchen  bonsai
Paper       25.80    28.20   27.55  32.65  29.38    32.50    34.46
This repo   25.44    27.98   26.75  32.13  29.10    32.63    34.20

Blender (PSNR):

            chair  drums  ficus  hotdog  lego   materials  mic    ship
Paper       34.84  25.84  33.90  37.14   34.84  31.66      35.15  31.38
This repo   35.26  25.51  32.66  36.56   35.04  29.43      34.93  31.38

For the Mip-NeRF 360 dataset, the model is trained with a downsample factor of 4 for outdoor scenes and 2 for indoor scenes (same as in the paper). Training is about 1.5x slower than the paper reports (1.5 hours on 8x A6000).

The hash decay loss seems to have little effect(?), as many floaters can be found in the final results of both experiments (especially on Blender).

Install CUDA backend

# Clone the repo.
git clone https://github.com/SuLvXiangXin/zipnerf-pytorch.git
cd zipnerf-pytorch

# Make a conda environment.
conda create --name zipnerf python=3.9
conda activate zipnerf

# Install requirements.
pip install -r requirements.txt

# Install other cuda extensions
pip install ./extensions/cuda

# Install nvdiffrast (optional, for textured mesh)
git clone https://github.com/NVlabs/nvdiffrast
pip install ./nvdiffrast

# Install a specific cuda version of torch_scatter 
# see more detail at https://github.com/rusty1s/pytorch_scatter
CUDA=cu117
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.0+${CUDA}.html
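
# A quick sanity check before training (a minimal sketch; it only verifies that
# the pieces which most often mismatch import cleanly and agree on CUDA):
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import torch_scatter; print('torch_scatter OK')"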

Install DPCPP backend

  # Install drivers, oneAPI, and IPEX for Intel GPUs
  Follow the steps on the page below to install GPU drivers, the oneAPI BaseKit, and PyTorch + IPEX (intel-extension-for-pytorch):
  https://intel.github.io/intel-extension-for-pytorch/xpu/1.13.120+xpu/tutorials/installation.html
  For the PyTorch and IPEX versions, please install version 1.13.120 with

  python -m pip install torch==1.13.0a0+git6c9b55e intel_extension_for_pytorch==1.13.120+xpu -f https://developer.intel.com/ipex-whl-stable-xpu

  After the installation is done, make sure it was successful by running the example at
  https://github.com/intel/intel-extension-for-pytorch/tree/release/xpu/1.13.120#inference-on-gpu
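
  A quick check that the XPU device is actually visible (a minimal sketch; it assumes the standard IPEX import, which registers the 'xpu' device with torch):

  python -c "import torch, intel_extension_for_pytorch as ipex; print(torch.__version__, ipex.__version__, torch.xpu.is_available())"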

Preparing environment

  export DPCPP_HOME=path/to/llvm  # path to the llvm folder, default: ~
  bash scripts/set_dpcpp_env.sh intel # for intel's gpu
  bash scripts/set_dpcpp_env.sh nvidia # for nvidia's gpu

Reference of DPCPP support for CUDA

  https://github.com/intel/llvm/blob/sycl/sycl/doc/GetStartedGuide.md#build-dpc-toolchain-with-support-for-nvidia-cuda

Dataset

mipnerf360

refnerf

nerf_synthetic

nerf_llff_data

mkdir data
cd data

# e.g. mipnerf360 data
wget http://storage.googleapis.com/gresearch/refraw360/360_v2.zip
unzip 360_v2.zip

Train

# Configure your training (DDP? fp16? ...)
# see https://huggingface.co/docs/accelerate/index for details
accelerate config

# Where your data is 
DATA_DIR=data/360_v2/bicycle
EXP_NAME=360_v2/bicycle

# Experiment will be conducted under "exp/${EXP_NAME}" folder
# "--gin_configs=configs/360.gin" can be seen as a default config 
# and you can add specific config useing --gin_bindings="..." 
accelerate launch train.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
    --gin_bindings="Config.factor = 4"

# or you can also run without accelerate (without DDP)
CUDA_VISIBLE_DEVICES=0 python train.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
      --gin_bindings="Config.factor = 4" 

# alternatively you can use an example training script 
bash scripts/train_360.sh

# blender dataset
bash scripts/train_blender.sh

# metric, render image, etc can be viewed through tensorboard
tensorboard --logdir "exp/${EXP_NAME}"
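
# If you keep reusing the same bindings, you can collect them in a gin file of
# your own and pass it alongside the default one. A sketch (my_bicycle.gin is a
# hypothetical file; this assumes --gin_configs can be passed more than once,
# per gin's usual multi-flag convention):
#
#   # my_bicycle.gin -- overrides applied on top of configs/360.gin
#   Config.data_dir = 'data/360_v2/bicycle'
#   Config.exp_name = '360_v2/bicycle'
#   Config.factor = 4
#
accelerate launch train.py --gin_configs=configs/360.gin --gin_configs=my_bicycle.gin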

Train & Render with DPCPP backend

# add this binding to your train/render command line
    --gin_bindings="Config.dpcpp_backend = True" \

Render

Rendering results can be found in the directory exp/${EXP_NAME}/render

accelerate launch render.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
    --gin_bindings="Config.render_path = True" \
    --gin_bindings="Config.render_path_frames = 480" \
    --gin_bindings="Config.render_video_fps = 60" \
    --gin_bindings="Config.factor = 4"  

# alternatively you can use an example rendering script 
bash scripts/render_360.sh

Evaluate

Evaluating results can be found in the directory exp/${EXP_NAME}/test_preds

# using the same exp_name as in training
accelerate launch eval.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
    --gin_bindings="Config.factor = 4"


# alternatively you can use an example evaluating script 
bash scripts/eval_360.sh

Use NerfStudio

https://github.com/nerfstudio-project/nerfstudio
Nerfstudio provides a simple API for an end-to-end process of creating, training, and testing NeRFs. The library supports a more interpretable implementation of NeRFs by modularizing each component. You can use the viewer provided by nerfstudio to watch the rendered results during training.

Install

pip install nerfstudio  
# cd zipnerf-pytorch
pip install -e . 
ns-install-cli

Train & eval

ns-train zipnerf --data {DATA_DIR/SCENE}
ns-eval --load-config {outputs/SCENE/zipnerf/EXP_DIR/config.yml}

ns-train zipnerf -h  # show the full list of model configuration options.
ns-train zipnerf colmap -h  # show the dataparser configuration options

*Nerfstudio's ColmapDataParser rounds down the image size when downscaling, which differs from the 360_v2 dataset. You can use nerfstudio to reprocess the data, or modify the downscale logic in the library as discussed in nerfstudio-project/nerfstudio#1438.
*Nerfstudio's train/eval split strategy differs from this repo's, so final training and evaluation results may vary.

For more usage or information, please see https://github.com/nerfstudio-project/nerfstudio.

Configuration

for Zipnerf-pytorch

You can create a new .gin file and pass it in the 'gin_file' list of ZipNerfModelConfig in zipnerf_ns/zipnerf_config.py, or update the contents of the default .gin file.

for nerfstudio

ns-train zipnerf -h
ns-train zipnerf colmap -h

You can modify zipnerf_ns/zipnerf_config.py, or use the command-line options listed by the commands above.

Viewer

Given a pretrained model checkpoint, you can start the viewer by running

ns-viewer --load-config outputs/SCENE/zipnerf/EXP_TIME/config.yml  

Remote Server

If you are running on a remote machine, you will need to port forward the websocket port (defaults to 7007). SSH must be set up on the remote machine. Then run the following on this machine:

ssh -L <port>:localhost:<port> <username>@<remote-machine-ip>
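
# For example, forwarding the default port (user and host are placeholders):
ssh -L 7007:localhost:7007 user@remote-server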

Extract mesh

Mesh results can be found in the directory exp/${EXP_NAME}/mesh

# more configuration can be found in internal/configs.py
accelerate launch extract.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
    --gin_bindings="Config.factor = 4"
#    --gin_bindings="Config.mesh_radius = 1"  # (optional) smaller for more details e.g. 0.2 in bicycle scene
#    --gin_bindings="Config.isosurface_threshold = 20"  # (optional) empirical value
#    --gin_bindings="Config.mesh_voxels=134217728"  # (optional) number of voxels used to extract mesh, e.g. 134217728 equals to 512**3 . Smaller values may solve OutoFMemoryError
#    --gin_bindings="Config.vertex_color = True"  # (optional) saving mesh with vertex color instead of atlas which is much slower but with more details.
#    --gin_bindings="Config.vertex_projection = True"  # (optional) use projection for vertex color

# or extracting mesh using tsdf method
accelerate launch tsdf.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
    --gin_bindings="Config.factor = 4"

# alternatively you can use an example script 
bash scripts/extract_360.sh

OutOfMemory

You can decrease the total batch size by adding e.g. --gin_bindings="Config.batch_size = 8192", decrease the test chunk size by adding e.g. --gin_bindings="Config.render_chunk_size = 8192", or use more GPUs by reconfiguring accelerate config.
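
# Putting it together, a sketch of a lower-memory training launch
# (the bindings mirror the ones above):
accelerate launch train.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
    --gin_bindings="Config.factor = 4" \
    --gin_bindings="Config.batch_size = 8192" \
    --gin_bindings="Config.render_chunk_size = 8192"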

Preparing custom data

More details can be found at https://github.com/google-research/multinerf.

DATA_DIR=my_dataset_dir
bash scripts/local_colmap_and_resize.sh ${DATA_DIR}
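
After the script finishes, the dataset should look roughly like this (a sketch following multinerf's conventions; the exact downsample folders depend on the script's settings):

my_dataset_dir/
├── images/      # original images
├── images_2/    # 2x downsampled
├── images_4/
├── images_8/
└── sparse/0/    # COLMAP poses and points

Point Config.data_dir at my_dataset_dir and set Config.factor to match one of the downsample levels, as in the Train section.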

TODO

  • Add MultiScale training and testing

Citation

@misc{barron2023zipnerf,
      title={Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields}, 
      author={Jonathan T. Barron and Ben Mildenhall and Dor Verbin and Pratul P. Srinivasan and Peter Hedman},
      year={2023},
      eprint={2304.06706},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{multinerf2022,
      title={{MultiNeRF}: {A} {Code} {Release} for {Mip-NeRF} 360, {Ref-NeRF}, and {RawNeRF}},
      author={Ben Mildenhall and Dor Verbin and Pratul P. Srinivasan and Peter Hedman and Ricardo Martin-Brualla and Jonathan T. Barron},
      year={2022},
      url={https://github.com/google-research/multinerf},
}

@Misc{accelerate,
  title =        {Accelerate: Training and inference at scale made simple, efficient and adaptable.},
  author =       {Sylvain Gugger and Lysandre Debut and Thomas Wolf and Philipp Schmid and Zachary Mueller and Sourab Mangrulkar},
  howpublished = {\url{https://github.com/huggingface/accelerate}},
  year =         {2022}
}

@misc{torch-ngp,
    Author = {Jiaxiang Tang},
    Year = {2022},
    Note = {https://github.com/ashawkey/torch-ngp},
    Title = {Torch-ngp: a PyTorch implementation of instant-ngp}
}

Acknowledgements

This work is based on another repo of mine, https://github.com/SuLvXiangXin/multinerf-pytorch, which is essentially a PyTorch translation of multinerf.

  • Thanks to multinerf for the amazing multinerf (MipNeRF 360, RefNeRF, RawNeRF) implementation
  • Thanks to accelerate for distributed training
  • Thanks to torch-ngp for the super useful hash encoder
  • Thanks to Yurui Chen for discussing the details of the paper.

zipnerf-pytorch's People

Contributors

fumore, jing1ling, sulvxiangxin, systho, zongwave


zipnerf-pytorch's Issues

Nan or Inf found in input tensor

I'm trying to use my own data following this.

Here is the error. It looks like it happens when writing the loss or some other scalar to tensorboard which is NaN.
Now the question is: why is it even NaN? Has COLMAP not found links between some images or something?
How do I even handle such cases?
Using the colmap scripts from multinerf didn't fail, but generated about 8 GB of data. I have 1000 images in that case; is that too many, maybe?

[2023-04-28 07:59:22,211] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 0
[2023-04-28 07:59:22,212] torch._dynamo.output_graph: [INFO] Step 2: done compiler function debug_wrapper
[2023-04-28 07:59:24,575] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing contract_mean_jacobi
[2023-04-28 07:59:24,633] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo done tracing contract_mean_jacobi (RETURN_VALUE)
[2023-04-28 07:59:24,637] torch._dynamo.output_graph: [INFO] Step 2: calling compiler function debug_wrapper
[2023-04-28 07:59:24,770] torch._inductor.compile_fx: [INFO] Step 3: torchinductor compiling FORWARDS graph 1
[2023-04-28 07:59:26,212] torch._inductor.compile_fx: [INFO] Step 3: torchinductor done compiling FORWARDS graph 1
[2023-04-28 07:59:26,212] torch._dynamo.output_graph: [INFO] Step 2: done compiler function debug_wrapper
W0428 07:59:26.409256 140601599434880 x2num.py:14] NaN or Inf found in input tensor.
  0%|                                                                                                   | 0/25000 [00:21<?, ?it/s]
Traceback (most recent call last):
  File "/workspaces/zipnerf-pytorch/train.py", line 373, in <module>
    app.run(main)
  File "/opt/conda/lib/python3.10/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/opt/conda/lib/python3.10/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "/workspaces/zipnerf-pytorch/train.py", line 228, in main
    summary_writer.add_histogram('train_' + k, v, step)
  File "/opt/conda/lib/python3.10/site-packages/tensorboardX/writer.py", line 562, in add_histogram
    histogram(tag, values, bins, max_bins=max_bins), global_step, walltime)
  File "/opt/conda/lib/python3.10/site-packages/tensorboardX/summary.py", line 209, in histogram
    hist = make_histogram(values.astype(float), bins, max_bins)
  File "/opt/conda/lib/python3.10/site-packages/tensorboardX/summary.py", line 247, in make_histogram
    raise ValueError('The histogram is empty, please file a bug report.')
ValueError: The histogram is empty, please file a bug report.
Traceback (most recent call last):
  File "/opt/conda/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 923, in launch_command
    simple_launcher(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 579, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', 'train.py', '--gin_configs=configs/360.gin', "--gin_bindings=Config.data_dir = './data/office'", "--gin_bindings=Config.exp_name = '360_v2/office'", '--gin_bindings=Config.batch_size = 16384', '--gin_bindings=Config.factor = 4', '--gin_bindings=Config.render_chunk_size = 16384']' returned non-zero exit status 1.

about llff_raw.gin

hi! I have a question about configs.llff_raw.gin.
This config file may missing the definition about Model.raydist_fn, I don't know how to choose this parameters.
Hope for your reply!

Warnings and _pickle.UnpicklingError: pickle data was truncated

I'm trying to train on the bicycle dataset. I ran into issues similar to those described in these posts: #49 (comment) and #27 (comment)
I applied the suggested fixes but now I have a new error: "_pickle.UnpicklingError: pickle data was truncated"

(zipnerf) PS C:\zipnerf-pytorch> accelerate launch train.py --gin_configs=configs/360.gin --gin_bindings="Config.data_dir = 'data/bicycle'" --gin_bindings="Config.exp_name = 'exp/360_v2/bicycle'" --gin_bindings="Config.render_chunk_size = 8192" --gin_bindings="Config.batch_size = 8192"
2023-07-02 18:31:31: Config(dataset_loader='llff', batching='all_images', batch_size=8192, patch_size=1, factor=4, multiscale=False, multiscale_levels=4, forward_facing=False, render_path=False, llffhold=8, llff_use_all_images_for_training=False, llff_use_all_images_for_testing=False, use_tiffs=False, compute_disp_metrics=False, compute_normal_metrics=False, disable_multiscale_loss=False, randomized=True, near=0.2, far=1000000.0, exp_name='exp/360_v2/bicycle', data_dir='data/bicycle', vocab_tree_path=None, render_chunk_size=8192, num_showcase_images=5, deterministic_showcase=True, vis_num_rays=16, vis_decimate=0, max_steps=25000, early_exit_steps=None, checkpoint_every=5000, resume_from_checkpoint=True, checkpoints_total_limit=1, gradient_scaling=False, print_every=100, train_render_every=500, data_loss_type='charb', charb_padding=0.001, data_loss_mult=1.0, data_coarse_loss_mult=0.0, interlevel_loss_mult=0.0, anti_interlevel_loss_mult=0.01, orientation_loss_mult=0.0, orientation_coarse_loss_mult=0.0, orientation_loss_target='normals_pred', predicted_normal_loss_mult=0.0, predicted_normal_coarse_loss_mult=0.0, hash_decay_mults=0.1, lr_init=0.01, lr_final=0.001, lr_delay_steps=5000, lr_delay_mult=1e-08, adam_beta1=0.9, adam_beta2=0.99, adam_eps=1e-15, grad_max_norm=0.0, grad_max_val=0.0, distortion_loss_mult=0.005, opacity_loss_mult=0.0, eval_only_once=True, eval_save_output=True, eval_save_ray_data=False, eval_render_interval=1, eval_dataset_limit=2147483647, eval_quantize_metrics=True, eval_crop_borders=0, render_video_fps=60, render_video_crf=18, render_path_frames=120, z_variation=0.0, z_phase=0.0, render_dist_percentile=0.5, render_dist_curve_fn=<ufunc 'log'>, render_path_file=None, render_resolution=None, render_focal=None, render_camtype=None, render_spherical=False, rosure=False, rawnerf_mode=False, exposure_percentile=97.0, num_border_pixels_to_mask=0, apply_bayer_mask=False, autoexpose_renders=False, eval_raw_affine_cc=False, zero_glo=False, valid_weight_thresh=0.05, isosurface_threshold=20, mesh_voxels=134217728, visibility_resolution=512, mesh_radius=1.0, mesh_max_radius=10.0, std_value=0.0, compute_visibility=False, extract_visibility=True, decimate_target=-1, vertex_color=True, vertex_projection=True, tsdf_radius=2.0, tsdf_resolution=512, truncation_margin=5.0, tsdf_max_radius=10.0)
2023-07-02 18:31:31: Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: fp16

Warning: image_path not found for reconstruction
C:\zipnerf-pytorch\internal\datasets.py:567: RuntimeWarning: invalid value encountered in matmul
  pixtocam = pixtocam @ np.diag([factor, factor, 1.])
Warning: image_path not found for reconstruction
C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\torch\utils\data\dataloader.py:560: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 4 (`cpuset` is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\torch\utils\data\dataloader.py:560: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 4 (`cpuset` is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
2023-07-02 18:31:46: Checkpoint does not exist. Starting a new training run.
2023-07-02 18:32:02: Number of parameters being optimized: 77622581
2023-07-02 18:32:02: Begin training...
Training:   0%|                                                                                                                          | 0/25000 [00:00<?, ?it/s]Traceback (most recent call last):
Training:   0%|                                                                                                                          | 0/25000 [01:07<?, ?it/s]  File "<string>", line 1, in <module>

  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated
2023-07-02 18:33:10: Error!
Traceback (most recent call last):
  File "C:\zipnerf-pytorch\train.py", line 387, in <module>
    app.run(main)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\absl\app.py", line 308, in run
    _run_main(main, args)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\absl\app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "C:\zipnerf-pytorch\train.py", line 144, in main
    batch = next(dataiter)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\accelerate\data_loader.py", line 374, in __iter__
    dataloader_iter = super().__iter__()
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\torch\utils\data\dataloader.py", line 436, in __iter__
    self._iterator = self._get_iterator()
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\torch\utils\data\dataloader.py", line 388, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\torch\utils\data\dataloader.py", line 1042, in __init__
    w.start()
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
    ForkingPickler(file, protocol).dump(obj)
OSError: [Errno 22] Invalid argument
Traceback (most recent call last):
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\accelerate\commands\launch.py", line 941, in launch_command
    simple_launcher(args)
  File "C:\Users\Karen\anaconda3\envs\zipnerf\lib\site-packages\accelerate\commands\launch.py", line 603, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\\Users\\Karen\\anaconda3\\envs\\zipnerf\\python.exe', 'train.py', '--gin_configs=configs/360.gin', "--gin_bindings=Config.data_dir = 'data/bicycle'", "--gin_bindings=Config.exp_name = 'exp/360_v2/bicycle'", '--gin_bindings=Config.render_chunk_size = 8192', '--gin_bindings=Config.batch_size = 8192']' returned non-zero exit status 1.

can't install

tripping up on
pip install ./gridencoder

falls over on line 195 in version.py.
I changed it to this:
match = self._regex.search(str(version))
from this:
match = self._regex.search(version)

still does not work; it seems to return None.

spent hours searching the web to no avail.
help appreciated, as this looks to be a very promising implementation.

thx

Render.py: unexpected keyword argument 'fps'

Hello, thanks for your excellent work, I got this issue while rendering my video:

Making video exp//_exp_path_renders_step_25000_color.mp4...
Traceback (most recent call last):
  File "/home/***/zipnerf-pytorch/render.py", line 163, in <module>
    app.run(main)
  File "/home/***/.conda/envs/zipnerf/lib/python3.9/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/***/.conda/envs/zipnerf/lib/python3.9/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "/home/***/zipnerf-pytorch/render.py", line 157, in main
    create_videos(config, config.render_dir, out_dir, out_name, dataset.size)
  File "/home/***/zipnerf-pytorch/render.py", line 74, in create_videos
    writer.append_data(frame)
  File "/home/***/.conda/envs/zipnerf/lib/python3.9/site-packages/imageio/v2.py", line 215, in append_data
    return self.instance.write(im, **self.write_args)
  File "/home/***/.conda/envs/zipnerf/lib/python3.9/site-packages/imageio/plugins/tifffile_v3.py", line 244, in write
    self._fh.write(image, **kwargs)
TypeError: write() got an unexpected keyword argument 'fps'

AttributeError: 'GridEncoder' object has no attribute 'grid_sizes'

@SuLvXiangXin

Hello,
after training starts I get this error. I'm using the fox dataset from nvlabs/instant-ngp and got the same error with the 360_v2/kitchen dataset.

2023-07-03 XX:XX:XX: Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: fp16

2023-07-03 XX:XX:XX: Checkpoint does not exist. Starting a new training run.
2023-07-03 XX:XX:XX: Number of parameters being optimized: 77622581
2023-07-03 XX:XX:XX: Begin training...
Training:   0%|                                                                              | 0/25000 [00:13<?, ?it/s]
2023-07-03 XX:XX:XX: Error!
Traceback (most recent call last):
  File "C:\zipnerf-pytorch\train.py", line 387, in <module>
    app.run(main)
  File "C:\miniconda3\envs\zipnerf\lib\site-packages\absl\app.py", line 308, in run
    _run_main(main, args)
  File "C:\miniconda3\envs\zipnerf\lib\site-packages\absl\app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "C:\zipnerf-pytorch\train.py", line 166, in main
    renderings, ray_history = model(
  File "C:\miniconda3\envs\zipnerf\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\miniconda3\envs\zipnerf\lib\site-packages\accelerate\utils\operations.py", line 553, in forward
    return model_forward(*args, **kwargs)
  File "C:\miniconda3\envs\zipnerf\lib\site-packages\accelerate\utils\operations.py", line 541, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "C:\miniconda3\envs\zipnerf\lib\site-packages\torch\amp\autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "C:\zipnerf-pytorch\internal\models.py", line 208, in forward
    ray_results = mlp(
  File "C:\miniconda3\envs\zipnerf\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\zipnerf-pytorch\internal\models.py", line 491, in forward
    raw_density, x, means_contract = self.predict_density(means, stds, rand=rand, no_warp=no_warp)
  File "C:\zipnerf-pytorch\internal\models.py", line 439, in predict_density
    weights = torch.erf(1 / torch.sqrt(8 * stds[..., None] ** 2 * self.encoder.grid_sizes ** 2))
  File "C:\miniconda3\envs\zipnerf\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GridEncoder' object has no attribute 'grid_sizes'

local_colmap_and_resize.sh does not create transform.json

Syntax that I ran:

$ git clone https://github.com/SuLvXiangXin/zipnerf-pytorch

$ cd zipnerf-pytorch

$ pip install -r requirement.txt

#  I created a folder called LEGO containing an images folder with the raw lego bulldozer images.
$ bash scripts/local_colmap_and_resize.sh ./LEGO

# I don't have DDP and I ran without accelerate
$ CUDA_VISIBLE_DEVICES=0 python3 train.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = 'LEGO'" \
    --gin_bindings="Config.exp_name = 'LEGO'" \
    --gin_bindings="Config.factor = 0"

FileNotFoundError: [Errno 2] No such file or directory: 'LEGO/transforms.json'

Is there a version that does not use accelerate?

First of all, I am very grateful to the author for reproducing ZipNeRF, which helped me a lot.
I found that the command accelerate launch train.py is used for training.
This prevents the code from being debugged directly in PyCharm. Does the author have a version that does not use the accelerate tool?
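
(The Train section of the README already shows a launch that bypasses accelerate, which can be run under a debugger directly:)

CUDA_VISIBLE_DEVICES=0 python train.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'"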

mesh file is coarse

I made a ply file from the kitchen images.
The quality is not good, I guess; how can I fix it?

The running config is the same:

python extract.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
    --gin_bindings="Config.factor = 4"


No module named 'pycolmap'

Hello, thanks for developing this NeRF method.
I'm trying to recreate a single scene, but can't get past the pycolmap module error.
I see that you try to import it via sys:

sys.path.insert(0, 'internal/pycolmap')
sys.path.insert(0, 'internal/pycolmap/pycolmap')
import pycolmap

However, there are no pycolmap files in the "internal" folder.


Traceback (most recent call last):
  File "/workspaces/zipnerf-pytorch/render.py", line 8, in <module>
    from internal import datasets
  File "/workspaces/zipnerf-pytorch/internal/datasets.py", line 20, in <module>
    import pycolmap
ModuleNotFoundError: No module named 'pycolmap'
Traceback (most recent call last):
  File "/workspaces/zipnerf-pytorch/render.py", line 8, in <module>
    from internal import datasets
  File "/workspaces/zipnerf-pytorch/internal/datasets.py", line 20, in <module>
    import pycolmap
ModuleNotFoundError: No module named 'pycolmap'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 14917) of binary: /opt/conda/bin/python
Traceback (most recent call last):
  File "/opt/conda/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 914, in launch_command
    multi_gpu_launcher(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 603, in multi_gpu_launcher
    distrib_run.run(args)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
render.py FAILED

nvdiffrast ?

That's a great job.
Is there any rendering script for speeding things up via the nvdiffrast package?

space = ((grid_max - grid_min).prod() / config.mesh_voxels) ** (1 / 3) AttributeError: 'Config' object has no attribute 'mesh_voxels'

@SuLvXiangXin
space = ((grid_max - grid_min).prod() / config.mesh_voxels) ** (1 / 3)
AttributeError: 'Config' object has no attribute 'mesh_voxels'

accelerate launch extract_mod.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
    --gin_bindings="Config.factor = 4" \
    --gin_bindings="Config.mesh_radius = 1" \
    --gin_bindings="Config.isosurface_threshold = 20" \
    --gin_bindings="Config.mesh_resolution = 1024" \
    --gin_bindings="Config.vertex_color = True"

No Issue, Just want to say thank you

My experiment ran on Colab Pro (an A100 GPU with 40 GB VRAM). The images of the M60 tank are from the Tanks and Temples dataset, converted to COLMAP format by nerfstudio's ns-process, since, for reasons I don't know, it failed when I used bash scripts/local_colmap_and_resize.sh ${DATA_DIR}.

Syntax that I use:

!python3 train.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = 'M60_FULL'" \
    --gin_bindings="Config.exp_name = 'M60_FULL_FAC2BATCH32K'" \
    --gin_bindings="Config.factor = 2" \
    --gin_bindings="Config.batch_size = 32768" \
    --gin_bindings="Config.render_chunk_size = 32768"

I don't use accelerate since Colab doesn't have multiple/distributed GPUs, just one A100, and it utilized about 38 GB of VRAM at the peak. The training process took about 3.25 hours and the rendering process about 0.5 hours.

M60_FULL_FAC2BATCH32K_exp_path_renders_step_25000_color.mp4

Hopefully, in the next version, it will able to use nerfstudio viewer for making custom camera track & extract mesh to OBJ.

No such file or directory '/data/lego\\transforms_train.json'

Hello, when I run the following bash script:

#!/bin/bash

SCENE=lego
EXPERIMENT=\\blender\\"$SCENE"
DATA_ROOT=\\data
DATA_DIR="$DATA_ROOT"\\"$SCENE"

rm exp\\"$EXPERIMENT"\\*
accelerate launch train.py --gin_configs=configs/blender.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.exp_name = '${EXPERIMENT}'"

my log_train.txt output is the following error:

2023-06-30 10:48:45: Config(dataset_loader='blender', batching='all_images', batch_size=65536, patch_size=1, factor=0, multiscale=False, multiscale_levels=4, forward_facing=False, render_path=False, llffhold=8, llff_use_all_images_for_training=False, llff_use_all_images_for_testing=False, use_tiffs=False, compute_disp_metrics=False, compute_normal_metrics=False, disable_multiscale_loss=False, randomized=True, near=2, far=6, exp_name='blender/lego', data_dir='/data/lego', vocab_tree_path=None, render_chunk_size=65536, num_showcase_images=5, deterministic_showcase=True, vis_num_rays=16, vis_decimate=0, max_steps=25000, early_exit_steps=None, checkpoint_every=5000, resume_from_checkpoint=True, checkpoints_total_limit=1, gradient_scaling=False, print_every=100, train_render_every=500, data_loss_type='charb', charb_padding=0.001, data_loss_mult=1.0, data_coarse_loss_mult=0.0, interlevel_loss_mult=0.0, anti_interlevel_loss_mult=0.01, orientation_loss_mult=0.0, orientation_coarse_loss_mult=0.0, orientation_loss_target='normals_pred', predicted_normal_loss_mult=0.0, predicted_normal_coarse_loss_mult=0.0, hash_decay_mults=10, lr_init=0.01, lr_final=0.001, lr_delay_steps=5000, lr_delay_mult=1e-08, adam_beta1=0.9, adam_beta2=0.99, adam_eps=1e-15, grad_max_norm=0.0, grad_max_val=0.0, distortion_loss_mult=0.005, opacity_loss_mult=0.0, eval_only_once=True, eval_save_output=True, eval_save_ray_data=False, eval_render_interval=1, eval_dataset_limit=2147483647, eval_quantize_metrics=True, eval_crop_borders=0, render_video_fps=60, render_video_crf=18, render_path_frames=120, z_variation=0.0, z_phase=0.0, render_dist_percentile=0.5, render_dist_curve_fn=<ufunc 'log'>, render_path_file=None, render_resolution=None, render_focal=None, render_camtype=None, render_spherical=False, render_save_async=True, render_spline_keyframes=None, render_spline_n_interp=30, render_spline_degree=5, render_spline_smoothness=0.03, render_spline_interpolate_exposure=False, rawnerf_mode=False, exposure_percentile=97.0, num_border_pixels_to_mask=0, apply_bayer_mask=False, autoexpose_renders=False, eval_raw_affine_cc=False, zero_glo=False, valid_weight_thresh=0.05, isosurface_threshold=20, mesh_voxels=134217728, visibility_resolution=512, mesh_radius=1.0, mesh_max_radius=10.0, std_value=0.0, compute_visibility=False, extract_visibility=True, decimate_target=-1, vertex_color=True, vertex_projection=True, tsdf_radius=2.0, tsdf_resolution=512, truncation_margin=5.0, tsdf_max_radius=10.0)
2023-06-30 10:48:45: Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: no

2023-06-30 10:48:46: Error!
Traceback (most recent call last):
  File "C:\Users\stephen\zipnerf-pytorch\train.py", line 387, in <module>
    app.run(main)
  File "C:\Users\stephen\miniconda3\envs\zipnerf\lib\site-packages\absl\app.py", line 308, in run
    _run_main(main, args)
  File "C:\Users\stephen\miniconda3\envs\zipnerf\lib\site-packages\absl\app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "C:\Users\stephen\zipnerf-pytorch\train.py", line 73, in main
    dataset = datasets.load_dataset('train', config.data_dir, config)
  File "C:\Users\stephen\zipnerf-pytorch\internal\datasets.py", line 39, in load_dataset
    return dataset_dict[config.dataset_loader](split, train_dir, config)
  File "C:\Users\stephen\zipnerf-pytorch\internal\datasets.py", line 278, in __init__
    self._load_renderings(config)
  File "C:\Users\stephen\zipnerf-pytorch\internal\datasets.py", line 474, in _load_renderings
    with utils.open_file(pose_file, 'r') as fp:
  File "C:\Users\stephen\zipnerf-pytorch\internal\utils.py", line 71, in open_file
    return open(pth, mode=mode)
FileNotFoundError: [Errno 2] No such file or directory: '/data/lego\\transforms_train.json'
2023-06-30 10:53:34: Config(dataset_loader='blender', batching='all_images', batch_size=65536, patch_size=1, factor=0, multiscale=False, multiscale_levels=4, forward_facing=False, render_path=False, llffhold=8, llff_use_all_images_for_training=False, llff_use_all_images_for_testing=False, use_tiffs=False, compute_disp_metrics=False, compute_normal_metrics=False, disable_multiscale_loss=False, randomized=True, near=2, far=6, exp_name='blender\\lego', data_dir='\\data\\lego', vocab_tree_path=None, render_chunk_size=65536, num_showcase_images=5, deterministic_showcase=True, vis_num_rays=16, vis_decimate=0, max_steps=25000, early_exit_steps=None, checkpoint_every=5000, resume_from_checkpoint=True, checkpoints_total_limit=1, gradient_scaling=False, print_every=100, train_render_every=500, data_loss_type='charb', charb_padding=0.001, data_loss_mult=1.0, data_coarse_loss_mult=0.0, interlevel_loss_mult=0.0, anti_interlevel_loss_mult=0.01, orientation_loss_mult=0.0, orientation_coarse_loss_mult=0.0, orientation_loss_target='normals_pred', predicted_normal_loss_mult=0.0, predicted_normal_coarse_loss_mult=0.0, hash_decay_mults=10, lr_init=0.01, lr_final=0.001, lr_delay_steps=5000, lr_delay_mult=1e-08, adam_beta1=0.9, adam_beta2=0.99, adam_eps=1e-15, grad_max_norm=0.0, grad_max_val=0.0, distortion_loss_mult=0.005, opacity_loss_mult=0.0, eval_only_once=True, eval_save_output=True, eval_save_ray_data=False, eval_render_interval=1, eval_dataset_limit=2147483647, eval_quantize_metrics=True, eval_crop_borders=0, render_video_fps=60, render_video_crf=18, render_path_frames=120, z_variation=0.0, z_phase=0.0, render_dist_percentile=0.5, render_dist_curve_fn=<ufunc 'log'>, render_path_file=None, render_resolution=None, render_focal=None, render_camtype=None, render_spherical=False, render_save_async=True, render_spline_keyframes=None, render_spline_n_interp=30, render_spline_degree=5, render_spline_smoothness=0.03, render_spline_interpolate_exposure=False, rawnerf_mode=False, exposure_percentile=97.0, num_border_pixels_to_mask=0, apply_bayer_mask=False, autoexpose_renders=False, eval_raw_affine_cc=False, zero_glo=False, valid_weight_thresh=0.05, isosurface_threshold=20, mesh_voxels=134217728, visibility_resolution=512, mesh_radius=1.0, mesh_max_radius=10.0, std_value=0.0, compute_visibility=False, extract_visibility=True, decimate_target=-1, vertex_color=True, vertex_projection=True, tsdf_radius=2.0, tsdf_resolution=512, truncation_margin=5.0, tsdf_max_radius=10.0)
2023-06-30 10:53:34: Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: no

2023-06-30 10:53:35: Error!
Traceback (most recent call last):
  File "C:\Users\stephen\zipnerf-pytorch\train.py", line 387, in <module>
    app.run(main)
  File "C:\Users\stephen\miniconda3\envs\zipnerf\lib\site-packages\absl\app.py", line 308, in run
    _run_main(main, args)
  File "C:\Users\stephen\miniconda3\envs\zipnerf\lib\site-packages\absl\app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "C:\Users\stephen\zipnerf-pytorch\train.py", line 73, in main
    dataset = datasets.load_dataset('train', config.data_dir, config)
  File "C:\Users\stephen\zipnerf-pytorch\internal\datasets.py", line 39, in load_dataset
    return dataset_dict[config.dataset_loader](split, train_dir, config)
  File "C:\Users\stephen\zipnerf-pytorch\internal\datasets.py", line 278, in __init__
    self._load_renderings(config)
  File "C:\Users\stephen\zipnerf-pytorch\internal\datasets.py", line 474, in _load_renderings
    with utils.open_file(pose_file, 'r') as fp:
  File "C:\Users\stephen\zipnerf-pytorch\internal\utils.py", line 71, in open_file
    return open(pth, mode=mode)
FileNotFoundError: [Errno 2] No such file or directory: '\\data\\lego\\transforms_train.json'

I know this must be something to do with how Windows treats paths in terms of back or forward slashes, based on this line:
FileNotFoundError: [Errno 2] No such file or directory: '/data/lego\\transforms_train.json'
but I'm struggling to figure out the correct fix. Has anyone got any advice? I'm on Windows 10.

My 'data folder' has the following structure:

data/
--/test/
--/train/
--/val/
--transforms_test.json
--transforms_train.json
--transforms_val.json

using equirectangular images

Hi, thanks for the implementation.
Could this be used with 360° images as a source, like the OmniNeRF implementation?
Thanks in advance and kind regards.

GridEncoder

Hi! Why do you use a custom grid encoder implementation and not tinycudann's HashGrid? What are the differences?
Thanks for your answers.

Apply jacobian on wrong variable.

Hi, thanks for your great work. I am reading your code and have found a problem with your implementation. From Eq. (8) of Zip-NeRF and Eq. (8) of Mip-NeRF 360, the Jacobian is applied to the sampled Gaussian mean before contraction. However, in your first commit (the code below), you apply the Jacobian to the mean after contraction. Though you have updated your code in the latest commit, which is much faster, I have tested that it makes no difference.

    pre_shape = mean.shape[:-1]
    mean = mean.reshape(-1, 3)
    std = std.reshape(-1)
    mean = contract(mean)          # the mean is contracted first...
    jvp = vmap(jacrev(fn))(mean)   # ...so the Jacobian is evaluated at the contracted mean
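
(For contrast, a minimal sketch of the ordering Eq. (8) describes, reusing the names above and assuming fn is the contraction: the Jacobian is evaluated at the raw mean, and contraction happens afterwards.)

    mean = mean.reshape(-1, 3)
    jvp = vmap(jacrev(fn))(mean)   # Jacobian of the contraction at the *uncontracted* mean
    mean = contract(mean)          # contract afterwards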

struct.error: unpack requires a buffer of 4 bytes

I wanted to use the simple bicycle example under Windows 11

accelerate launch train.py --gin_configs=configs/360.gin --gin_bindings="Config.data_dir = 'data/bicycle'" --gin_bindings="Config.exp_name = 'bicycle'" --gin_bindings="Config.factor = 4"

and got this error

No such file or directory: '/bicycle/transforms.json'

!pwd
os.chdir('/content/zipnerf-pytorch/')
!pwd

!accelerate config default

# Where your data is
!DATA_DIR=data/bicycle
!EXP_NAME=/bicycle

# Experiment will be conducted under "exp/${EXP_NAME}" folder
# "--gin_configs=configs/360.gin" can be seen as a default config
# and you can add specific config using --gin_bindings="..."
!accelerate launch train.py \
    --gin_configs=configs/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
    --gin_bindings="Config.factor = 0"

I get this error:

Found no checkpoint files in exp/ with prefix checkpoint_

No such file or directory: '/bicycle/transforms.json'

Surface level must be within volume data range

@SuLvXiangXin
hello,
when I run bash scripts/extract_360.sh, there is an error:

extract.py", line 379, in main
    verts, faces, normals, values = measure.marching_cubes(

ValueError: Surface level must be within volume data range.

My dataset is bicycle

About density_bias and rgb_bias

Hi GuChun,
Thanks for your great work. I noticed there are some manually set biases in the MLP, for example:

density_bias: float = -1.  # Shift added to raw densities pre-activation.
rgb_bias: float = 0.  # The shift added to raw colors pre-activation.

I don't understand why there should be a bias, and why it is set to -1 for density and 0 for RGB. If I add some new features after the MLP, what bias should I set?
Thanks a lot.

Out of memory when training

Hi, I am trying to use the example script to train on the garden dataset.

#!/bin/bash


SCENE=garden
EXPERIMENT=360_v2/"$SCENE"
DATA_ROOT=/mnt/c/Users/B/zipnerf-pytorch/data/
DATA_DIR="$DATA_ROOT"/"$SCENE"

rm exp/"$EXPERIMENT"/*
accelerate launch train.py --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.exp_name = '${EXPERIMENT}'" \
  --gin_bindings="Config.factor = 4"

and I see an out of memory error after config as below:

In which compute environment are you running?
This machine
------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
No distributed training
Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:No
Do you wish to optimize your script with torch dynamo?[yes/NO]:No
Do you want to use DeepSpeed? [yes/NO]: No
What GPU(s) (by id) should be used for training on this machine as a comma-seperated list? [all]:0
------------------------------------------------------------------------------------------------------------------------
Do you wish to use FP16 or BF16 (mixed precision)?
no
accelerate configuration saved at /home/fox/.cache/huggingface/accelerate/default_config.yaml

When I add the lines as mentioned:

rm exp/"$EXPERIMENT"/*
accelerate launch train.py --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.exp_name = '${EXPERIMENT}'" \
  --gin_bindings="Config.factor = 4"
  --gin_bindings="Config.batch_size = 4096" 
  --gin_bindings="Config.render_chunk_size = 4096" 

I get:

scripts/train_360.sh: line 13: --gin_bindings=Config.batch_size = 4096: command not found
scripts/train_360.sh: line 14: --gin_bindings=Config.render_chunk_size = 4096: command not found
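
(The shell errors above are bash treating the last two lines as standalone commands: the --gin_bindings="Config.factor = 4" line is missing a trailing backslash, so the continuation stops there. A corrected sketch of the launch command:)

accelerate launch train.py --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.exp_name = '${EXPERIMENT}'" \
  --gin_bindings="Config.factor = 4" \
  --gin_bindings="Config.batch_size = 4096" \
  --gin_bindings="Config.render_chunk_size = 4096"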

This is on a local machine with a single RTX 3090. Should I be able to run on this card?

Thanks!

extract mesh error

hello @SuLvXiangXin,
I have encountered a problem when running the latest code.
The error is:

zipnerf_new/extract.py", line 373, in main
    visibility_mask = torch.load(visibility_path, map_location=device)
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

How can I solve this problem?

NerfStudio Model

This would be extremely useful as a NerfStudio model. Then you could make use of all the NerfStudio tooling for previewing output, preparing training data, etc.

You can see an example of how to create an external NerfStudio model in the TetraNerf Repo.

NerfStudio can be configured to use an external package via the NERFSTUDIO_METHOD_CONFIGS environment variable.

Friendly ask for pretrained weights

Hi,
Thanks for your great work! Would it be convenient for you to share the pretrained weights shown in the README? I want to extract the depth maps of those scenes for my work. I have trained and tested on the bicycle dataset, which is awesome! But I only have a single-GPU computer, and it would take me a lot of time to train all the scenes.
Thanks anyway if it's not convenient.

ValueError: The histogram is empty, please file a bug report.

When resuming training from a checkpoint, it raises this error:

2023-06-25 17:16:24: Error!
Traceback (most recent call last):
  File "/home/zengxr/project/zipnerf_v2/train.py", line 387, in <module>
    app.run(main)
  File "/home/zengxr/anaconda3/envs/multinerf/lib/python3.9/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/zengxr/anaconda3/envs/multinerf/lib/python3.9/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "/home/zengxr/project/zipnerf_v2/train.py", line 254, in main
    summary_writer.add_histogram('train_' + k, v, step)
  File "/home/zengxr/anaconda3/envs/multinerf/lib/python3.9/site-packages/tensorboardX/writer.py", line 562, in add_histogram
    histogram(tag, values, bins, max_bins=max_bins), global_step, walltime)
  File "/home/zengxr/anaconda3/envs/multinerf/lib/python3.9/site-packages/tensorboardX/summary.py", line 209, in histogram
    hist = make_histogram(values.astype(float), bins, max_bins)
  File "/home/zengxr/anaconda3/envs/multinerf/lib/python3.9/site-packages/tensorboardX/summary.py", line 247, in make_histogram
    raise ValueError('The histogram is empty, please file a bug report.')
ValueError: The histogram is empty, please file a bug report.

About implementation of Affine GLO

Hi there, @SuLvXiangXin!

First of all, I wanted to thank you for sharing your implementation. I appreciate the work you've done.

I did notice some floaters in the demo video and wanted to bring it to your attention. I believe that the issue may be related to GLO.

After reading through Appendix A of the paper, I noticed that the authors did not concatenate GLO at the bottleneck vector, as you did in this part. Instead, they used an MLP to map GLO to affine transform vectors.

I hope this information is helpful in improving the quality of your implementation. Thank you so much for your hard work and dedication.

Best regards,
Sang Min Kim.

Extracting Mesh raises an OutOfMemoryError

First of all, big thanks for your work, it's awesome. And even more thanks for actually taking the time to address issues!

As the title suggests, I get an OutOfMemoryError when extracting a mesh.
I have a "small" machine (a laptop with an RTX 2080 Super, 8 GiB of video memory, 32 GiB of RAM).

The crash happens after (or during) the computation of the density map.

Here is the log:

accelerate launch extract.py --gin_configs=configs/360.gin \                                                                                       [15:59:48]
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.exp_name = '${EXP_NAME}'" \
  --gin_bindings="Config.factor = 4" \
  --gin_bindings="Config.batch_size = 4096" \
  --gin_bindings="Config.render_chunk_size = 4096"

I0608 15:59:50.251319 139980354081152 logging.py:47] Config(dataset_loader='llff', batching='all_images', batch_size=4096, patch_size=1, factor=4, multiscale=False, multiscale_levels=4, forward_facing=False, render_path=False, llffhold=8, llff_use_all_images_for_training=False, llff_use_all_images_for_testing=False, use_tiffs=False, compute_disp_metrics=False, compute_normal_metrics=False, disable_multiscale_loss=False, randomized=True, near=0.2, far=1000000.0, exp_name='chill_room_113', data_dir='data/chill_room_113', vocab_tree_path=None, render_chunk_size=4096, num_showcase_images=5, deterministic_showcase=True, vis_num_rays=16, vis_decimate=0, max_steps=25000, early_exit_steps=None, checkpoint_every=5000, resume_from_checkpoint=True, checkpoints_total_limit=1, print_every=100, train_render_every=500, data_loss_type='charb', charb_padding=0.001, data_loss_mult=1.0, data_coarse_loss_mult=0.0, interlevel_loss_mult=0.0, anti_interlevel_loss_mult=0.01, orientation_loss_mult=0.0, orientation_coarse_loss_mult=0.0, orientation_loss_target='normals_pred', predicted_normal_loss_mult=0.0, predicted_normal_coarse_loss_mult=0.0, hash_decay_mults=0.1, lr_init=0.01, lr_final=0.001, lr_delay_steps=5000, lr_delay_mult=1e-08, adam_beta1=0.9, adam_beta2=0.99, adam_eps=1e-15, grad_max_norm=0.0, grad_max_val=0.0, distortion_loss_mult=0.005, opacity_loss_mult=0.0, eval_only_once=True, eval_save_output=True, eval_save_ray_data=False, eval_render_interval=1, eval_dataset_limit=2147483647, eval_quantize_metrics=True, eval_crop_borders=0, render_video_fps=60, render_video_crf=18, render_path_frames=120, z_variation=0.0, z_phase=0.0, render_dist_percentile=0.5, render_dist_curve_fn=<ufunc 'log'>, render_path_file=None, render_resolution=None, render_focal=None, render_camtype=None, render_spherical=False, render_save_async=True, render_spline_keyframes=None, render_spline_n_interp=30, render_spline_degree=5, render_spline_smoothness=0.03, render_spline_interpolate_exposure=False, rawnerf_mode=False, exposure_percentile=97.0, num_border_pixels_to_mask=0, apply_bayer_mask=False, autoexpose_renders=False, eval_raw_affine_cc=False, zero_glo=False, valid_weight_thresh=0.05, isosurface_threshold=20, mesh_voxels=134217728, visibility_resolution=512, mesh_radius=1.0, mesh_max_radius=10.0, std_value=0.0, compute_visibility=True, extract_visibility=True, decimate_target=-1, vertex_color=True, vertex_projection=True)
I0608 15:59:50.251490 139980354081152 logging.py:47] Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: no

I0608 15:59:51.605862 139980354081152 logging.py:47] Resuming from checkpoint exp/chill_room_113/checkpoints/025000
I0608 15:59:51.606026 139980354081152 logging.py:47] Loading states from exp/chill_room_113/checkpoints/025000
I0608 15:59:51.874377 139980354081152 logging.py:47] All model weights loaded successfully
I0608 15:59:51.874529 139980354081152 logging.py:47] All optimizer states loaded successfully
I0608 15:59:51.874603 139980354081152 logging.py:47] All scheduler states loaded successfully
I0608 15:59:51.875286 139980354081152 logging.py:47] All random states loaded successfully
I0608 15:59:51.875415 139980354081152 logging.py:47] Loading in 0 custom states
I0608 15:59:51.875766 139980354081152 logging.py:47] Generate visibility mask...
Generating visibility grid: 100%|████████████████████████████████████████████████████████████████████████| 98/98 [05:07<00:00,  3.13s/it, visibility_mask=0.0363]
I0608 16:04:59.862720 139980354081152 logging.py:47] Extract mesh from visibility mask...
I0608 16:05:06.356143 139980354081152 logging.py:47] Extract visibility mask done.
I0608 16:05:06.505425 139980354081152 logging.py:47] Process grid cell (1/1, 1/1, 1/1)...
E0608 16:05:22.405724 139980354081152 utils.py:40] Error!                                                                                                        
Traceback (most recent call last):
  File "extract.py", line 638, in <module>
    app.run(main)
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "extract.py", line 431, in main
    points_world = coord.inv_contract(2 * points)
  File "/home/pve/Projects/zipnerf-pytorch/internal/coord.py", line 23, in inv_contract
    z_mag_sq = torch.sum(z ** 2, dim=-1, keepdim=True).clamp_min(eps)
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/torch/_tensor.py", line 39, in wrapped
    return f(*args, **kwargs)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 7.78 GiB total capacity; 4.60 GiB already allocated; 420.19 MiB free; 5.71 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
  File "/home/pve/.pyenv/versions/3.7.2/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/accelerate/commands/launch.py", line 918, in launch_command
    simple_launcher(args)
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/accelerate/commands/launch.py", line 580, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/pve/.pyenv/versions/3.7.2/bin/python3.7', 'extract.py', '--gin_configs=configs/360.gin', "--gin_bindings=Config.data_dir = 'data/chill_room_113'", "--gin_bindings=Config.exp_name = 'chill_room_113'", '--gin_bindings=Config.factor = 4', '--gin_bindings=Config.batch_size = 4096', '--gin_bindings=Config.render_chunk_size = 4096']' returned non-zero exit status 1.
>>> elapsed time 5m34s

Tuning batch_size and render_chunk_size lower (down to 1024) has no effect; a possible patch is sketched below.
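
The allocation fails inside coord.inv_contract applied to an entire grid cell (extract.py line 431 in the traceback), which batch_size and render_chunk_size do not influence. A minimal sketch of a possible patch, assuming one is willing to edit extract.py; the helper name and chunk size below are mine, not the repo's:

import torch
from internal import coord

def inv_contract_chunked(points, chunk=2 ** 18):
    # Hypothetical helper: apply the inverse contraction piecewise so the
    # full grid cell is never materialized on the GPU at once.
    flat = points.reshape(-1, 3)
    out = [coord.inv_contract(2 * flat[i:i + chunk])
           for i in range(0, flat.shape[0], chunk)]
    return torch.cat(out, dim=0).reshape(points.shape)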

About the performance results

The PSNR results reported in the README are even worse than MipNeRF-360's. What are the possible reasons for this?

Would it produce better results if images at all resolutions were used to train the model? In the original ZipNeRF paper, images at all resolutions (1x/2x/4x/8x) are used to train the model.

multi360 rendering: IndexError: index 15 is out of bounds for axis 0 with size 15

I managed to train a model using the multi360 config, but when I try to render a video I get this error. Any idea how to fix it?

❯ bash scripts/render_360.sh
I0612 14:12:39.029760 139869331714432 logging.py:47] Config(dataset_loader='llff', batching='all_images', batch_size=6000, patch_size=1, factor=2, multiscale=True, multiscale_levels=4, forward_facing=False, render_path=True, llffhold=8, llff_use_all_images_for_training=False, llff_use_all_images_for_testing=False, use_tiffs=False, compute_disp_metrics=False, compute_normal_metrics=False, disable_multiscale_loss=False, randomized=True, near=0.2, far=1000000.0, exp_name='chill_room_113_multi360', data_dir='data/chill_room_113', vocab_tree_path=None, render_chunk_size=6000, num_showcase_images=5, deterministic_showcase=True, vis_num_rays=16, vis_decimate=0, max_steps=25000, early_exit_steps=None, checkpoint_every=5000, resume_from_checkpoint=True, checkpoints_total_limit=1, print_every=100, train_render_every=500, data_loss_type='charb', charb_padding=0.001, data_loss_mult=1.0, data_coarse_loss_mult=0.0, interlevel_loss_mult=0.0, anti_interlevel_loss_mult=0.01, orientation_loss_mult=0.0, orientation_coarse_loss_mult=0.0, orientation_loss_target='normals_pred', predicted_normal_loss_mult=0.0, predicted_normal_coarse_loss_mult=0.0, hash_decay_mults=0.1, lr_init=0.01, lr_final=0.001, lr_delay_steps=5000, lr_delay_mult=1e-08, adam_beta1=0.9, adam_beta2=0.99, adam_eps=1e-15, grad_max_norm=0.0, grad_max_val=0.0, distortion_loss_mult=0.005, opacity_loss_mult=0.0, eval_only_once=True, eval_save_output=True, eval_save_ray_data=False, eval_render_interval=1, eval_dataset_limit=2147483647, eval_quantize_metrics=True, eval_crop_borders=0, render_video_fps=30, render_video_crf=18, render_path_frames=120, z_variation=0.0, z_phase=0.0, render_dist_percentile=0.5, render_dist_curve_fn=<ufunc 'log'>, render_path_file=None, render_resolution=None, render_focal=None, render_camtype=None, render_spherical=False, render_save_async=True, render_spline_keyframes=None, render_spline_n_interp=30, render_spline_degree=5, render_spline_smoothness=0.03, render_spline_interpolate_exposure=False, rawnerf_mode=False, exposure_percentile=97.0, num_border_pixels_to_mask=0, apply_bayer_mask=False, autoexpose_renders=False, eval_raw_affine_cc=False, zero_glo=False, valid_weight_thresh=0.05, isosurface_threshold=20, mesh_voxels=134217728, visibility_resolution=512, mesh_radius=1.0, mesh_max_radius=10.0, std_value=0.0, compute_visibility=False, extract_visibility=True, decimate_target=-1, vertex_color=True, vertex_projection=True)
I0612 14:12:39.029906 139869331714432 logging.py:47] Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: no

E0612 14:12:39.579381 139869331714432 utils.py:40] Error!                       
Traceback (most recent call last):
  File "render.py", line 172, in <module>
    app.run(main)
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "render.py", line 107, in main
    dataset = datasets.load_dataset('test', config.data_dir, config)
  File "/home/pve/Projects/zipnerf-pytorch/internal/datasets.py", line 39, in load_dataset
    return dataset_dict[config.dataset_loader](split, train_dir, config)
  File "/home/pve/Projects/zipnerf-pytorch/internal/datasets.py", line 909, in __init__
    self.images.append(self.down2(images[i], (self.heights[-1], self.widths[-1])))
IndexError: index 15 is out of bounds for axis 0 with size 15
Traceback (most recent call last):
  File "/home/pve/.pyenv/versions/3.7.2/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/accelerate/commands/launch.py", line 918, in launch_command
    simple_launcher(args)
  File "/home/pve/.pyenv/versions/3.7.2/lib/python3.7/site-packages/accelerate/commands/launch.py", line 580, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/pve/.pyenv/versions/3.7.2/bin/python3.7', 'render.py', '--gin_configs=configs/multi360.gin', "--gin_bindings=Config.data_dir = 'data/chill_room_113'", "--gin_bindings=Config.exp_name = 'chill_room_113_multi360'", '--gin_bindings=Config.render_path = True', '--gin_bindings=Config.render_path_frames = 120', '--gin_bindings=Config.render_video_fps = 30', '--gin_bindings=Config.factor = 2', '--gin_bindings=Config.batch_size = 6000', '--gin_bindings=Config.render_chunk_size = 6000']' returned non-zero exit status 1.
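
One plausible reading of the traceback (a guess, not a confirmed fix): with multiscale=True the dataset builds multiscale_levels entries per source image, and with render_path=True the loop seems to run past the number of loaded test images. A hypothetical guard around the failing line in internal/datasets.py; the bounds check is mine, the rest is from the traceback:

# internal/datasets.py, around line 909 (hypothetical patch): only append
# multiscale copies whose source image actually exists; a pure path render
# may not need test images at all.
if i < len(images):
    self.images.append(self.down2(images[i], (self.heights[-1], self.widths[-1])))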

Weird render results

Trying to reproduce the results, but they are really weird.
GPU: RTX 3090 24GB
Commit: 1146e86
Note that I set the batch size to 2; is that okay?

Script executed:

#!/bin/bash

SCENE=bicycle
EXPERIMENT=360_v2/"$SCENE"
DATA_ROOT=/SSD_DISK/datasets/360_v2
DATA_DIR=./data/bicycle



CUDA_VISIBLE_DEVICES=0 accelerate launch render.py \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.exp_name = '${EXPERIMENT}'" \
  --gin_bindings="Config.batch_size = 2" \
  --gin_bindings="Config.render_path = True" \
  --gin_bindings="Config.render_path_frames = 120" \
  --gin_bindings="Config.render_video_fps = 60" \
  --gin_bindings="Config.factor = 4"

No additional changes were made in the scripts.
In the output folder exp/360_v2/bicycle/render/path_renders_step_0 I have images like this:
[image attachment]

I see that it created a folder named step_0; does this mean I have to render again with multiple steps?
distance_mean*.tiff and acc*.tiff images are black.
The generated video can't be played in VLC for some reason, and the generated thumbnails indicate there is nothing good there either.

[image attachment]

Edit: I removed the batch size argument and will try to render again; maybe that's the problem. I'm not sure it even does anything, since I don't see any change in GPU memory. Maybe there is some sort of accumulation.
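
One thing that may be worth checking (a guess on my part, not a confirmed diagnosis): the step_0 suffix in path_renders_step_0 looks like the training step restored from the checkpoint, so if render.py found no checkpoint under exp/<exp_name>/checkpoints it would render with untrained weights. A quick sanity check, with the experiment name taken from the script above:

import os

# Config.exp_name here must exactly match the name used at training time.
exp_name = "360_v2/bicycle"
ckpt_root = os.path.join("exp", exp_name, "checkpoints")
if os.path.isdir(ckpt_root):
    print("checkpoints found:", sorted(os.listdir(ckpt_root)))
else:
    print("no checkpoints at", ckpt_root, "- rendering would start at step 0")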

Multisampling implementation

Hello, @SuLvXiangXin! Thank you for your work. I have read your multisampling code and have some questions.

  1. What is cam_dirs? I see its definition in the code, but I don't understand why you use cam_dirs in multisampling.
  2. If we want to construct two vectors perpendicular to the view direction, we can use the direction vector instead of cam_dirs, e.g.
import torch
import torch.nn.functional as F

# two basis vectors parallel to the image plane (perpendicular to the ray)
rand_vec = torch.randn_like(directions)
ortho1 = F.normalize(torch.cross(directions, rand_vec, dim=-1), dim=-1)
ortho2 = F.normalize(torch.cross(directions, ortho1, dim=-1), dim=-1)
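
For reference, a quick numerical self-check of this construction (purely illustrative, using random unit-norm directions):

import torch
import torch.nn.functional as F

directions = F.normalize(torch.randn(4, 3), dim=-1)
rand_vec = torch.randn_like(directions)
ortho1 = F.normalize(torch.cross(directions, rand_vec, dim=-1), dim=-1)
ortho2 = F.normalize(torch.cross(directions, ortho1, dim=-1), dim=-1)

# all three dot products should be ~0: the basis is mutually orthogonal
print((directions * ortho1).sum(-1))
print((directions * ortho2).sum(-1))
print((ortho1 * ortho2).sum(-1))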

Please correct me if I'm wrong. Thanks!

FileNotFoundError: [Errno 2] No such file or directory: 'exp/360_v2/bicycle/checkpoints/025000/scaler.pt'

I trained and got:
[...]
2023-05-26 18:17:06: 24900/25000: loss=0.03918, psnr=23.821, lr=1.01e-03 | data=0.03578, anti=7.3e-05, dist=0.00016, hash=0.00317, 51921 r/s
2023-05-26 18:17:22: 25000/25000: loss=0.03925, psnr=23.803, lr=1.00e-03 | data=0.03585, anti=7.4e-05, dist=0.00016, hash=0.00316, 51515 r/s
2023-05-26 18:17:22: Saving current state to exp/360_v2/bicycle/checkpoints/025000
2023-05-26 18:17:23: Model weights saved in exp/360_v2/bicycle/checkpoints/025000/pytorch_model.bin
2023-05-26 18:17:24: Optimizer state saved in exp/360_v2/bicycle/checkpoints/025000/optimizer.bin
2023-05-26 18:17:24: Random states saved in exp/360_v2/bicycle/checkpoints/025000/random_states_0.pkl
2023-05-26 18:17:46: Eval 25000: 22.273s, 45653 rays/sec
2023-05-26 18:17:46: Metrics computed in 0.086s
2023-05-26 18:17:46: psnr = 25.3830
2023-05-26 18:17:46: ssim = 0.7320
2023-05-26 18:17:46: Visualized in 0.233s
Training: 25001it [1:27:45, 4.75it/s]
2023-05-26 18:17:49: Saving last checkpoint at step 25000 to exp/360_v2/bicycle/checkpoints
2023-05-26 18:17:49: Saving current state to exp/360_v2/bicycle/checkpoints/025000
2023-05-26 18:17:50: Model weights saved in exp/360_v2/bicycle/checkpoints/025000/pytorch_model.bin
2023-05-26 18:17:51: Optimizer state saved in exp/360_v2/bicycle/checkpoints/025000/optimizer.bin
2023-05-26 18:17:51: Random states saved in exp/360_v2/bicycle/checkpoints/025000/random_states_0.pkl
2023-05-26 18:17:51: Finish training.

Then I ran eval.py and got this:
[...]
Warning: image_path not found for reconstruction
2023-05-26 19:09:34: Resuming from checkpoint exp/360_v2/bicycle/checkpoints/025000
2023-05-26 19:09:34: Loading states from exp/360_v2/bicycle/checkpoints/025000
2023-05-26 19:09:34: All model weights loaded successfully
2023-05-26 19:09:34: All optimizer states loaded successfully
2023-05-26 19:09:34: All scheduler states loaded successfully
2023-05-26 19:09:34: Error!
Traceback (most recent call last):
  File "/home/omnispace/Projects/zipnerf-pytorch/eval.py", line 307, in <module>
    app.run(main)
  File "/home/omnispace/anaconda3/envs/zipnerf/lib/python3.9/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/omnispace/anaconda3/envs/zipnerf/lib/python3.9/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "/home/omnispace/Projects/zipnerf-pytorch/eval.py", line 115, in main
    step = checkpoints.restore_checkpoint(config.checkpoint_dir, accelerator, logger)
  File "/home/omnispace/Projects/zipnerf-pytorch/internal/checkpoints.py", line 24, in restore_checkpoint
    accelerator.load_state(path)
  File "/home/omnispace/anaconda3/envs/zipnerf/lib/python3.9/site-packages/accelerate/accelerator.py", line 2466, in load_state
    load_accelerator_state(
  File "/home/omnispace/anaconda3/envs/zipnerf/lib/python3.9/site-packages/accelerate/checkpointing.py", line 177, in load_accelerator_state
    scaler.load_state_dict(torch.load(input_scaler_file))
  File "/home/omnispace/anaconda3/envs/zipnerf/lib/python3.9/site-packages/torch/serialization.py", line 791, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/omnispace/anaconda3/envs/zipnerf/lib/python3.9/site-packages/torch/serialization.py", line 271, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/omnispace/anaconda3/envs/zipnerf/lib/python3.9/site-packages/torch/serialization.py", line 252, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'exp/360_v2/bicycle/checkpoints/025000/scaler.pt'
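
A possible workaround, assuming the cause is simply that training ran without mixed precision and therefore never saved a GradScaler state, while accelerate.load_state() in eval still expects one. Writing a fresh (empty) scaler state is my guess, not a confirmed fix:

import torch
from torch.cuda.amp import GradScaler

# Create the scaler.pt that accelerate.load_state() is looking for.
# A default GradScaler state should be harmless when AMP is disabled.
ckpt_dir = "exp/360_v2/bicycle/checkpoints/025000"
torch.save(GradScaler().state_dict(), ckpt_dir + "/scaler.pt")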

Rendered video cannot be played

Hi @SuLvXiangXin ,

Thank you for the great work.

When I run the render script after training, I get images in path_render_step. One of the color images is shown below:
[image attachment: color_012]

However, the rendered videos cannot be played. I use Windows Subsystem for Linux (WSL) for training and rendering.

Reproduce:
Train:
SCENE=bicycle
DATA_DIR=data/bicycle
EXP_NAME=bicycle

python train.py --gin_configs=configs/360.gin --gin_bindings="Config.data_dir = '${DATA_DIR}'" --gin_bindings="Config.exp_name = '${EXP_NAME}'" --gin_bindings="Config.factor = 4" --gin_bindings="Config.batch_size = 8192" --gin_bindings="Config.render_chunk_size = 8192"

Render:
bash scripts/render_360.sh

File render_360.sh:
SCENE=bicycle
EXPERIMENT="$SCENE"
DATA_ROOT=data
DATA_DIR="$DATA_ROOT"/"$SCENE"

accelerate launch render.py \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.exp_name = '${EXPERIMENT}'" \
  --gin_bindings="Config.render_path = True" \
  --gin_bindings="Config.render_path_frames = 120" \
  --gin_bindings="Config.render_video_fps = 60" \
  --gin_bindings="Config.factor = 4"

I'd appreciate any advice. Many thanks.
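
If the individual frames look fine and only playback fails, one hedged workaround is to re-encode the frames into an H.264 MP4 yourself; the snippet below assumes the imageio and imageio-ffmpeg packages are installed, and the frame folder is a placeholder you should adjust to your run:

import glob
import imageio.v2 as imageio

# Adjust the glob to your actual render output folder and frame names.
frames = sorted(glob.glob("exp/bicycle/render/path_renders_step_25000/color_*.png"))
writer = imageio.get_writer("render.mp4", fps=60, codec="libx264", quality=8)
for f in frames:
    writer.append_data(imageio.imread(f))
writer.close()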
