
llff's People

Contributors

akar43, alberthuyb, blm81, bmild, rodrygojose


llff's Issues

View poses rendering

Hi,
can you please explain the process of generating and storing the position values of the views used to make the video? I would like to modify it and make my own paths.

Thank you
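
(Not an official answer, but for anyone else experimenting: a render path is just a sequence of camera poses, so you can generate your own. Below is a minimal sketch, assuming the renderer consumes 3x4 camera-to-world pose matrices like those stored in poses_bounds.npy; circle_path is a hypothetical helper, not code from this repo.)

import numpy as np

def circle_path(radius=0.1, n_frames=120):
    # Each pose is a 3x4 [R | t] camera-to-world matrix; here the rotation is
    # held fixed and the camera center moves on a circle in the x-y plane.
    poses = []
    for t in np.linspace(0, 2 * np.pi, n_frames, endpoint=False):
        center = np.array([radius * np.cos(t), radius * np.sin(t), 0.0])
        poses.append(np.concatenate([np.eye(3), center[:, None]], axis=1))
    return np.stack(poses)  # (n_frames, 3, 4)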

Skipping COLMAP

Since COLMAP is not able to register some pictures, and I have data about the camera intrinsic and extrinsic matrices because I am testing on MPEG materials, I would like to skip it. Is COLMAP necessary in this case, and can you please explain how to insert these parameters into the code?
Thanks!
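
(A minimal sketch of how known parameters could be written out directly, assuming the poses_bounds.npy layout consumed by llff/poses/pose_utils.py: one row per image, a 3x5 matrix (a 3x4 camera-to-world pose plus a [height, width, focal] column) flattened, followed by near/far depth bounds. save_poses_bounds is a hypothetical helper, and the axis convention must match LLFF's.)

import numpy as np

def save_poses_bounds(c2w, hwf, bounds, out_path):
    # c2w: (N, 3, 4) camera-to-world extrinsics (LLFF axis convention assumed)
    # hwf: (3,) image height, width, focal length in pixels
    # bounds: (N, 2) near/far scene depths per image
    n = c2w.shape[0]
    hwf_col = np.tile(hwf.reshape(1, 3, 1), (n, 1, 1))   # (N, 3, 1)
    poses = np.concatenate([c2w, hwf_col], axis=2)        # (N, 3, 5)
    np.save(out_path, np.concatenate([poses.reshape(n, -1), bounds], axis=1))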

Training code

Could you please provide the training code for your project? I am planning to compare performance on a new dataset.

Thank you!

Question about disparity maps

Hello,

Thank you for the amazing work. I wanted to know which color map you are using: I see that the produced disparity maps are not grayscale, but I could not find where a "cmap=" is set in the code. Thanks in advance.
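
(For anyone wondering how such colorization is usually done: a small sketch with matplotlib, where viridis is only a placeholder guess, not necessarily the map this repo uses.)

import numpy as np
import matplotlib.cm as cm

def colorize_disparity(disp):
    d = (disp - disp.min()) / (disp.max() - disp.min() + 1e-8)  # normalize to [0, 1]
    return (cm.viridis(d)[..., :3] * 255).astype(np.uint8)      # apply map, drop alpha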

NVCC not found

Hi,
My system runs Ubuntu 18.04 with CUDA 10.2. I am trying to set it up in a conda environment, but running make in cuda_renderer throws "nvcc not found". I have tried many approaches from the net, but none seem to work. Please help.
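
(A common cause is that the CUDA toolkit's bin directory is not on PATH inside the conda environment. Assuming the toolkit is installed under /usr/local/cuda-10.2, something like the following often helps:)

export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
nvcc --version   # should now print the CUDA 10.2 compiler version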

Run-to-run variation with the same input images

Hi,

Thanks a lot for your hard work and amazing contribution. I have been able to reproduce good novel-view outputs using the software, but I noticed a few things:

Run-to-run variation at the camera parameter estimation step (significant differences in the estimated camera parameters between runs)
Run-to-run variation at the MPI generation step, even with the same camera parameters, i.e. the generated MPIs differ between runs
Run-to-run variation at different resolutions, i.e. 1080p, 480p, 360p, etc.
Run-to-run variation with different content too, including Fern

Is this expected? I can reproduce it on my end, on two different machines.

Looking forward to hearing from you soon :) Thanks a lot in advance!!

Thanks,
Sumit

Windows installation of LLFF for NeRF using conda

As NeRF can be used successfully on Windows with conda, with minor hacks for TF 2.1, and as NeRF needs LLFF and ImageMagick:

Can someone like @BingEdison provide conda steps?

Because on Windows 10 with WSL (which avoids the Cygwin installation) and VirtualBox, Docker can't work.

And... is there any plan to combine NeRF & LLFF?

By the way, both projects seem to be no longer updated.

Recover camera poses: ERROR running `python imgs2poses.py $your-images-folder`

Hello,
I am trying to run python imgs2poses.py $your-images-folder to get camera poses for training (Neural Radiance Fields) and get the error below.
Hardware: macOS, M1.
Can you explain what the problem might be and how to fix it?

Need to run COLMAP
[option_manager.cc:811] Check failed: ExistsDir(*image_path)
ERROR: Invalid options provided.
Traceback (most recent call last):
  File "imgs2poses.py", line 18, in <module>
    gen_poses(args.scenedir, args.match_type)
  File "/Users/ab/ProjectTest/LLFF/llff/poses/pose_utils.py", line 268, in gen_poses
    run_colmap(basedir, match_type)
  File "/Users/ab/ProjectTest/LLFF/llff/poses/colmap_wrapper.py", line 35, in run_colmap
    feat_output = ( subprocess.check_output(feature_extractor_args, universal_newlines=True) )
  File "/opt/anaconda3/envs/nerf_pl/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/opt/anaconda3/envs/nerf_pl/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['colmap', 'feature_extractor', '--database_path', '/Users/ab/ProjectTest/imgs_nerf/database.db', '--image_path', '/Users/ab/ProjectTest/imgs_nerf/images', '--ImageReader.single_camera', '1']' returned non-zero exit status 1.
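
(The failed check ExistsDir(*image_path) with --image_path .../imgs_nerf/images suggests COLMAP could not find an images/ subdirectory. Assuming imgs2poses.py expects the photos inside an images/ folder of the scene directory, the layout would be, with hypothetical file names:)

/Users/ab/ProjectTest/imgs_nerf/
    images/
        IMG_0001.jpg
        IMG_0002.jpg
        ...

python imgs2poses.py /Users/ab/ProjectTest/imgs_nerf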

Train error: python train.py requests GPUs [0]

Hello,
I have followed your example to train NeRF on my own data.

I am using a macOS machine with an M1 chip; does anyone have any idea what this might be related to? I can't figure out what's going on. Can you explain what the problem might be and how to fix it? Thank you in advance for any help.

I run the following and get this error:

python train.py \
   --dataset_name llff \
   --root_dir /Users/ab/ProjectTest/LLFF \
   --N_importance 64 --img_wh 504 378 \
   --num_epochs 30 --batch_size 1024 \
   --optimizer adam --lr 5e-4 \
   --lr_scheduler steplr --decay_step 10 20 --decay_gamma 0.5 \
   --exp_name exp
Traceback (most recent call last):
  File "train.py", line 178, in <module>
    profiler=hparams.num_gpus==1)
  File "/opt/anaconda3/envs/nerf_pl/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 438, in __init__
    self.data_parallel_device_ids = parse_gpu_ids(self.gpus)
  File "/opt/anaconda3/envs/nerf_pl/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 712, in parse_gpu_ids
    gpus = sanitize_gpu_ids(gpus)
  File "/opt/anaconda3/envs/nerf_pl/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 678, in sanitize_gpu_ids
    """)
pytorch_lightning.utilities.exceptions.MisconfigurationException: 
                You requested GPUs: [0]
                But your machine only has: [].
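
(The exception says PyTorch Lightning was asked for GPU 0, but an M1 Mac exposes no CUDA devices, so the available list is empty. Assuming the num_gpus option seen in the traceback is exposed as a command-line flag, running on CPU would look like:)

python train.py --dataset_name llff --root_dir /Users/ab/ProjectTest/LLFF \
   ... --num_gpus 0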

Depthmaps

Hi, can your model generate accurate depth-map estimates for each of the input images, or a single depth map of the whole scene?

Modify viewer to use only one MPI

Hi!
Could you provide any advice on how I should modify viewer.cpp to render using only a single MPI, not a blend of several (i.e., the baseline from the Stereo Magnification paper)?
As far as I understand, I need to work with a single node of the viewmesh instance, but currently I cannot figure out how to disable the blending with the neighboring nodes.
Also, simply running the viewer on a folder that contains only the mpi00/ subdirectory and an edited metadata.txt leads to a segmentation fault.

iOS application

Do you plan to release your application for photo capturing?

Create a 360° video

Hi. I am wondering whether your framework is able to create a 360° "surround" effect. For example, can I take pictures of a soda can from different angles as I walk around it and render a video from these pictures? I have tried this, and I found a big black shadow on the edge of the video at the beginning and the end, and the result still looks like a single plane. I want to know why that happens and whether there is any way to fix it. If not, I would like to know which features of your framework prevent this, that is to say, the reason why the pictures must be taken in a single plane.

Post Colmap error | sparse folder empty

I'm getting the following error:

Traceback (most recent call last):
File "imgs2poses.py", line 18, in
gen_poses(args.scenedir, args.match_type)
File "/content/LLFF/llff/poses/pose_utils.py", line 274, in gen_poses
poses, pts3d, perm = load_colmap_data(basedir)
File "/content/LLFF/llff/poses/pose_utils.py", line 14, in load_colmap_data
camdata = read_model.read_cameras_binary(camerasfile)
File "/content/LLFF/llff/poses/colmap_read_model.py", line 115, in read_cameras_binary
with open(path_to_model_file, "rb") as fid:
FileNotFoundError: [Errno 2] No such file or directory: '/content/drive/My Drive/nerf_pl/mug_for_nerf/sparse/0/cameras.bin'

My input: 59 images at 3024x3024.
Could you help? Thanks!

Bundle adjustment Not converged

Hi authors, many thanks for your impressive work and open-source code!

I have followed your instructions and got some amazing outputs (videos). However, for some other input-image test cases, I got errors.

When using imgs2poses.py, the bundle adjustment report in the file colmap_output.txt says "No convergence", and none of the other images could be registered either.

I have 24 input images (20 in the other test case), and there should be plenty of features within the images.

The colmap_output.txt looks like this:

==============================================================================
Feature extraction

Processed file [1/24]
Name: 01.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7604
Processed file [2/24]
Name: 02.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7460
Processed file [3/24]
Name: 03.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7671
Processed file [4/24]
Name: 04.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7636
Processed file [5/24]
Name: 05.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 8000
Processed file [6/24]
Name: 06.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 8334
Processed file [7/24]
Name: 07.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7943
Processed file [8/24]
Name: 08.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7749
Processed file [9/24]
Name: 09.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 8346
Processed file [10/24]
Name: 10.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 8143
Processed file [11/24]
Name: 11.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7826
Processed file [12/24]
Name: 12.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 5609
Processed file [13/24]
Name: 13.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 6984
Processed file [14/24]
Name: 14.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7445
Processed file [15/24]
Name: 15.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 6511
Processed file [16/24]
Name: 16.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7683
Processed file [17/24]
Name: 17.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 8220
Processed file [18/24]
Name: 18.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 8505
Processed file [19/24]
Name: 19.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 9363
Processed file [20/24]
Name: 20.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 6798
Processed file [21/24]
Name: 21.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7544
Processed file [22/24]
Name: 22.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 5691
Processed file [23/24]
Name: 23.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7068
Processed file [24/24]
Name: 24.jpg
Dimensions: 1440 x 1080
Camera: #1 - SIMPLE_RADIAL
Focal Length: 1728.00px
Features: 7409
Elapsed time: 0.041 [minutes]

==============================================================================
Exhaustive feature matching

Matching block [1/1, 1/1] in 6.299s
Elapsed time: 0.109 [minutes]

==============================================================================
Loading database

Loading cameras... 1 in 0.000s
Loading matches... 276 in 0.020s
Loading images... 24 in 0.027s (connected 24)
Building correspondence graph... in 0.108s (ignored 0)

Elapsed time: 0.003 [minutes]

==============================================================================
Initializing with image pair #10 and #5

==============================================================================
Global bundle adjustment

iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 3.135909e+03 0.00e+00 1.41e+05 0.00e+00 0.00e+00 1.00e+04 0 1.75e-03 6.96e-03
1 3.892343e+03 -7.56e+02 0.00e+00 4.13e+01 -3.03e-01 5.00e+03 1 3.41e-03 1.04e-02
2 2.438515e+03 6.97e+02 1.06e+05 3.85e+01 2.86e-01 4.64e+03 1 4.46e-03 1.49e-02
3 6.548220e+02 1.78e+03 4.14e+03 5.68e+00 9.99e-01 1.39e+04 1 4.03e-03 1.89e-02
4 6.482664e+02 6.56e+00 1.20e+03 5.05e+00 9.94e-01 4.17e+04 1 3.28e-03 2.22e-02
5 6.452768e+02 2.99e+00 1.27e+03 1.75e+01 8.18e-01 5.61e+04 1 3.35e-03 2.56e-02
6 6.423903e+02 2.89e+00 1.93e+03 2.31e+01 5.51e-01 5.62e+04 1 3.35e-03 2.90e-02
7 6.377616e+02 4.63e+00 1.60e+03 2.26e+01 7.00e-01 6.00e+04 1 3.31e-03 3.23e-02
8 6.337074e+02 4.05e+00 1.75e+03 2.35e+01 6.49e-01 6.17e+04 1 3.68e-03 3.60e-02
9 6.294490e+02 4.26e+00 1.83e+03 2.36e+01 6.75e-01 6.44e+04 1 6.69e-03 4.27e-02
10 6.254248e+02 4.02e+00 1.99e+03 2.40e+01 6.59e-01 6.66e+04 1 3.82e-03 4.66e-02
11 6.214221e+02 4.00e+00 2.10e+03 2.43e+01 6.64e-01 6.90e+04 1 5.85e-03 5.25e-02
12 6.175485e+02 3.87e+00 2.23e+03 2.46e+01 6.58e-01 7.12e+04 1 6.20e-03 5.88e-02
13 6.137530e+02 3.80e+00 2.33e+03 2.48e+01 6.57e-01 7.35e+04 1 3.55e-03 6.23e-02
14 6.100624e+02 3.69e+00 2.43e+03 2.51e+01 6.53e-01 7.57e+04 1 3.46e-03 6.58e-02
15 6.064668e+02 3.60e+00 2.52e+03 2.53e+01 6.52e-01 7.78e+04 1 4.29e-03 7.01e-02
16 6.029737e+02 3.49e+00 2.61e+03 2.54e+01 6.49e-01 7.99e+04 1 3.60e-03 7.38e-02
17 5.995826e+02 3.39e+00 2.68e+03 2.56e+01 6.47e-01 8.20e+04 1 3.33e-03 7.71e-02
18 5.962960e+02 3.29e+00 2.74e+03 2.57e+01 6.45e-01 8.41e+04 1 3.59e-03 8.07e-02
19 5.931150e+02 3.18e+00 2.79e+03 2.57e+01 6.43e-01 8.61e+04 1 3.38e-03 8.41e-02
20 5.900405e+02 3.07e+00 2.84e+03 2.58e+01 6.41e-01 8.81e+04 1 3.50e-03 8.76e-02
21 5.870732e+02 2.97e+00 2.88e+03 2.58e+01 6.40e-01 9.00e+04 1 3.42e-03 9.11e-02
22 5.842132e+02 2.86e+00 2.90e+03 2.58e+01 6.38e-01 9.20e+04 1 3.44e-03 9.45e-02
23 5.814602e+02 2.75e+00 2.93e+03 2.57e+01 6.37e-01 9.39e+04 1 3.67e-03 9.82e-02
24 5.788136e+02 2.65e+00 2.94e+03 2.56e+01 6.35e-01 9.58e+04 1 6.73e-03 1.05e-01
25 5.762724e+02 2.54e+00 2.94e+03 2.55e+01 6.34e-01 9.77e+04 1 3.61e-03 1.09e-01
26 5.738352e+02 2.44e+00 2.94e+03 2.54e+01 6.33e-01 9.96e+04 1 6.27e-03 1.15e-01
27 5.715003e+02 2.33e+00 2.94e+03 2.52e+01 6.32e-01 1.01e+05 1 3.35e-03 1.18e-01
28 5.692657e+02 2.23e+00 2.92e+03 2.50e+01 6.31e-01 1.03e+05 1 3.17e-03 1.21e-01
29 5.671292e+02 2.14e+00 2.91e+03 2.48e+01 6.30e-01 1.05e+05 1 3.16e-03 1.25e-01
30 5.650882e+02 2.04e+00 2.88e+03 2.45e+01 6.29e-01 1.07e+05 1 4.22e-03 1.29e-01
31 5.631401e+02 1.95e+00 2.86e+03 2.42e+01 6.29e-01 1.09e+05 1 6.12e-03 1.35e-01
32 5.612820e+02 1.86e+00 2.83e+03 2.39e+01 6.28e-01 1.11e+05 1 7.77e-03 1.43e-01
33 5.595109e+02 1.77e+00 2.79e+03 2.36e+01 6.27e-01 1.13e+05 1 3.59e-03 1.46e-01
34 5.578237e+02 1.69e+00 2.75e+03 2.33e+01 6.27e-01 1.14e+05 1 3.35e-03 1.50e-01
35 5.562172e+02 1.61e+00 2.71e+03 2.29e+01 6.26e-01 1.16e+05 1 3.34e-03 1.53e-01
36 5.546881e+02 1.53e+00 2.67e+03 2.25e+01 6.26e-01 1.18e+05 1 3.27e-03 1.56e-01
37 5.532332e+02 1.45e+00 2.62e+03 2.21e+01 6.25e-01 1.20e+05 1 3.31e-03 1.60e-01
38 5.518493e+02 1.38e+00 2.57e+03 2.17e+01 6.25e-01 1.22e+05 1 3.53e-03 1.63e-01
39 5.505330e+02 1.32e+00 2.52e+03 2.13e+01 6.25e-01 1.24e+05 1 3.65e-03 1.67e-01
40 5.492811e+02 1.25e+00 2.47e+03 2.08e+01 6.25e-01 1.26e+05 1 3.54e-03 1.71e-01
41 5.480904e+02 1.19e+00 2.42e+03 2.04e+01 6.25e-01 1.28e+05 1 3.46e-03 1.74e-01
42 5.469577e+02 1.13e+00 2.37e+03 1.99e+01 6.25e-01 1.30e+05 1 3.32e-03 1.77e-01
43 5.458800e+02 1.08e+00 2.31e+03 1.95e+01 6.25e-01 1.32e+05 1 3.22e-03 1.81e-01
44 5.448544e+02 1.03e+00 2.26e+03 1.90e+01 6.26e-01 1.34e+05 1 3.22e-03 1.84e-01
45 5.438778e+02 9.77e-01 2.21e+03 1.85e+01 6.26e-01 1.36e+05 1 3.36e-03 1.87e-01
46 5.429474e+02 9.30e-01 2.15e+03 1.80e+01 6.27e-01 1.39e+05 1 3.38e-03 1.91e-01
47 5.420605e+02 8.87e-01 2.10e+03 1.76e+01 6.27e-01 1.41e+05 1 3.25e-03 1.94e-01
48 5.412145e+02 8.46e-01 2.05e+03 1.71e+01 6.28e-01 1.43e+05 1 3.35e-03 1.97e-01
49 5.404067e+02 8.08e-01 2.00e+03 1.66e+01 6.30e-01 1.46e+05 1 3.28e-03 2.00e-01
50 5.396346e+02 7.72e-01 1.95e+03 1.62e+01 6.31e-01 1.48e+05 1 3.31e-03 2.04e-01
51 5.388960e+02 7.39e-01 1.90e+03 1.57e+01 6.32e-01 1.51e+05 1 3.36e-03 2.07e-01
52 5.381884e+02 7.08e-01 1.85e+03 1.52e+01 6.34e-01 1.54e+05 1 3.94e-03 2.11e-01
53 5.375097e+02 6.79e-01 1.80e+03 1.48e+01 6.36e-01 1.57e+05 1 3.37e-03 2.15e-01
54 5.368576e+02 6.52e-01 1.76e+03 1.43e+01 6.38e-01 1.61e+05 1 3.58e-03 2.18e-01
55 5.362300e+02 6.28e-01 1.72e+03 1.39e+01 6.41e-01 1.65e+05 1 3.37e-03 2.22e-01
56 5.356248e+02 6.05e-01 1.68e+03 1.35e+01 6.44e-01 1.69e+05 1 3.36e-03 2.25e-01
57 5.350400e+02 5.85e-01 1.64e+03 1.31e+01 6.47e-01 1.73e+05 1 3.40e-03 2.28e-01
58 5.344735e+02 5.66e-01 1.60e+03 1.27e+01 6.50e-01 1.78e+05 1 9.00e-03 2.37e-01
59 5.339232e+02 5.50e-01 1.56e+03 1.23e+01 6.54e-01 1.83e+05 1 4.21e-03 2.42e-01
60 5.333872e+02 5.36e-01 1.53e+03 1.19e+01 6.58e-01 1.89e+05 1 3.32e-03 2.45e-01
61 5.328631e+02 5.24e-01 1.50e+03 1.16e+01 6.63e-01 1.96e+05 1 3.29e-03 2.48e-01
62 5.323487e+02 5.14e-01 1.48e+03 1.13e+01 6.69e-01 2.04e+05 1 3.31e-03 2.52e-01
63 5.318415e+02 5.07e-01 1.45e+03 1.11e+01 6.75e-01 2.13e+05 1 3.36e-03 2.55e-01
64 5.313387e+02 5.03e-01 1.44e+03 1.09e+01 6.82e-01 2.24e+05 1 3.43e-03 2.58e-01
65 5.308372e+02 5.02e-01 1.42e+03 1.08e+01 6.89e-01 2.36e+05 1 3.42e-03 2.62e-01
66 5.303331e+02 5.04e-01 1.41e+03 1.07e+01 6.98e-01 2.52e+05 1 3.46e-03 2.65e-01
67 5.298219e+02 5.11e-01 1.40e+03 1.08e+01 7.08e-01 2.72e+05 1 3.34e-03 2.69e-01
68 5.292984e+02 5.24e-01 1.40e+03 1.11e+01 7.18e-01 2.96e+05 1 3.34e-03 2.72e-01
69 5.287569e+02 5.42e-01 1.39e+03 1.17e+01 7.27e-01 3.27e+05 1 3.51e-03 2.76e-01
70 5.281945e+02 5.62e-01 1.36e+03 1.25e+01 7.28e-01 3.61e+05 1 3.52e-03 2.79e-01
71 5.276208e+02 5.74e-01 1.26e+03 1.37e+01 7.07e-01 3.88e+05 1 3.42e-03 2.83e-01
72 5.270656e+02 5.55e-01 1.02e+03 1.48e+01 6.45e-01 3.98e+05 1 3.34e-03 2.86e-01
73 5.265535e+02 5.12e-01 6.44e+02 1.55e+01 5.55e-01 3.98e+05 1 3.34e-03 2.89e-01
74 5.261001e+02 4.53e-01 5.15e+02 1.61e+01 4.48e-01 3.98e+05 1 3.31e-03 2.93e-01
75 5.257127e+02 3.87e-01 6.49e+02 1.66e+01 3.38e-01 3.85e+05 1 3.49e-03 2.96e-01
76 5.252750e+02 4.38e-01 7.28e+02 1.67e+01 3.32e-01 3.71e+05 1 3.56e-03 3.00e-01
77 5.248302e+02 4.45e-01 7.83e+02 1.67e+01 3.14e-01 3.52e+05 1 3.46e-03 3.03e-01
78 5.243386e+02 4.92e-01 8.03e+02 1.65e+01 3.33e-01 3.40e+05 1 3.41e-03 3.07e-01
79 5.238930e+02 4.46e-01 8.30e+02 1.65e+01 3.04e-01 3.20e+05 1 3.34e-03 3.10e-01
80 5.233664e+02 5.27e-01 8.10e+02 1.60e+01 3.56e-01 3.13e+05 1 3.35e-03 3.13e-01
81 5.229668e+02 4.00e-01 8.73e+02 1.61e+01 2.86e-01 2.90e+05 1 3.40e-03 3.17e-01
82 5.223966e+02 5.70e-01 8.34e+02 1.54e+01 4.04e-01 2.88e+05 1 3.46e-03 3.20e-01
83 5.220552e+02 3.41e-01 8.91e+02 1.56e+01 2.72e-01 2.63e+05 1 3.39e-03 3.24e-01
84 5.214732e+02 5.82e-01 8.03e+02 1.46e+01 4.51e-01 2.63e+05 1 3.38e-03 3.27e-01
85 5.211545e+02 3.19e-01 8.47e+02 1.49e+01 2.93e-01 2.46e+05 1 3.42e-03 3.30e-01
86 5.206662e+02 4.88e-01 7.80e+02 1.42e+01 4.33e-01 2.45e+05 1 3.44e-03 3.34e-01
87 5.203502e+02 3.16e-01 8.10e+02 1.44e+01 3.17e-01 2.34e+05 1 3.44e-03 3.37e-01
88 5.199311e+02 4.19e-01 7.68e+02 1.40e+01 4.09e-01 2.32e+05 1 3.48e-03 3.41e-01
89 5.196121e+02 3.19e-01 7.85e+02 1.41e+01 3.36e-01 2.24e+05 1 3.37e-03 3.44e-01
90 5.192324e+02 3.80e-01 7.57e+02 1.38e+01 3.94e-01 2.22e+05 1 3.54e-03 3.48e-01
91 5.189100e+02 3.22e-01 7.64e+02 1.38e+01 3.51e-01 2.17e+05 1 3.48e-03 3.51e-01
92 5.185548e+02 3.55e-01 7.44e+02 1.37e+01 3.85e-01 2.14e+05 1 3.51e-03 3.55e-01
93 5.182307e+02 3.24e-01 7.43e+02 1.36e+01 3.63e-01 2.10e+05 1 3.47e-03 3.58e-01
94 5.178916e+02 3.39e-01 7.29e+02 1.35e+01 3.82e-01 2.07e+05 1 5.19e-03 3.63e-01
95 5.175684e+02 3.23e-01 7.24e+02 1.35e+01 3.72e-01 2.04e+05 1 3.98e-03 3.68e-01
96 5.172400e+02 3.28e-01 7.13e+02 1.34e+01 3.81e-01 2.01e+05 1 7.22e-03 3.75e-01
97 5.169195e+02 3.20e-01 7.12e+02 1.33e+01 3.79e-01 1.98e+05 1 7.04e-03 3.82e-01
98 5.165985e+02 3.21e-01 7.13e+02 1.32e+01 3.83e-01 1.96e+05 1 3.49e-03 3.85e-01
99 5.162817e+02 3.17e-01 7.14e+02 1.31e+01 3.84e-01 1.93e+05 1 3.45e-03 3.89e-01
100 5.159661e+02 3.16e-01 7.16e+02 1.31e+01 3.86e-01 1.91e+05 1 3.44e-03 3.92e-01

Bundle adjustment report

Residuals : 4080

Parameters : 3067
Iterations : 101
Time : 0.392774 [s]
Initial cost : 0.876701 [px]
Final cost : 0.355615 [px]
Termination : No convergence

=> Filtered observations: 123
=> Filtered images: 0

==============================================================================
Registering image #4 (3)

=> Image sees 702 / 4523 points
=> Could not register, trying another image.

==============================================================================
Registering image #9 (3)

=> Image sees 694 / 4947 points
=> Could not register, trying another image.

==============================================================================
Registering image #8 (3)

=> Image sees 654 / 4674 points
=> Could not register, trying another image.

==============================================================================
Registering image #3 (3)

=> Image sees 646 / 4530 points
=> Could not register, trying another image.

==============================================================================
Registering image #6 (3)

=> Image sees 673 / 4529 points
=> Could not register, trying another image.

==============================================================================
Registering image #11 (3)

=> Image sees 644 / 4900 points
=> Could not register, trying another image.

==============================================================================
Registering image #7 (3)

=> Image sees 616 / 4430 points
=> Could not register, trying another image.

==============================================================================
Registering image #16 (3)

=> Image sees 559 / 4688 points
=> Could not register, trying another image.

==============================================================================
Registering image #14 (3)

=> Image sees 561 / 4622 points
=> Could not register, trying another image.

==============================================================================
Registering image #2 (3)

=> Image sees 522 / 4383 points
=> Could not register, trying another image.

==============================================================================
Registering image #17 (3)

=> Image sees 493 / 4716 points
=> Could not register, trying another image.

==============================================================================
Registering image #18 (3)

=> Image sees 453 / 4233 points
=> Could not register, trying another image.

==============================================================================
Registering image #1 (3)

=> Image sees 420 / 3982 points
=> Could not register, trying another image.

==============================================================================
Registering image #12 (3)

=> Image sees 429 / 3419 points
=> Could not register, trying another image.

==============================================================================
Registering image #15 (3)

=> Image sees 430 / 3892 points
=> Could not register, trying another image.

==============================================================================
Registering image #21 (3)

=> Image sees 373 / 3995 points
=> Could not register, trying another image.

==============================================================================
Registering image #13 (3)

=> Image sees 366 / 4086 points
=> Could not register, trying another image.

==============================================================================
Registering image #20 (3)

=> Image sees 347 / 3149 points
=> Could not register, trying another image.

==============================================================================
Registering image #19 (3)

=> Image sees 316 / 3682 points
=> Could not register, trying another image.

==============================================================================
Registering image #22 (3)

=> Image sees 289 / 3132 points
=> Could not register, trying another image.

==============================================================================
Registering image #23 (3)

=> Image sees 213 / 3458 points
=> Could not register, trying another image.

==============================================================================
Registering image #24 (3)

=> Image sees 157 / 3211 points
=> Could not register, trying another image.

==============================================================================
Retriangulation

=> Merged observations: 0
=> Completed observations: 0
=> Retriangulated observations: 0

==============================================================================
Global bundle adjustment

iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 4.660303e+02 0.00e+00 6.94e+02 0.00e+00 0.00e+00 1.00e+04 0 1.29e-03 6.86e-03
1 4.542867e+02 1.17e+01 1.86e+01 2.67e+01 9.90e-01 3.00e+04 1 3.47e-03 1.04e-02
2 4.539819e+02 3.05e-01 4.54e+00 1.73e+01 9.57e-01 9.00e+04 1 3.31e-03 1.37e-02
3 4.537922e+02 1.90e-01 5.62e+01 4.26e+01 9.58e-01 2.70e+05 1 3.37e-03 1.71e-02
4 4.535130e+02 2.79e-01 3.76e+02 1.21e+02 4.99e-01 2.70e+05 1 3.38e-03 2.05e-02
5 4.529053e+02 6.08e-01 3.06e+02 1.19e+02 7.51e-01 3.09e+05 1 3.11e-03 2.36e-02
6 4.523693e+02 5.36e-01 3.21e+02 1.34e+02 6.79e-01 3.24e+05 1 3.11e-03 2.67e-02
7 4.517444e+02 6.25e-01 2.76e+02 1.40e+02 7.35e-01 3.61e+05 1 3.07e-03 2.98e-02
8 4.511257e+02 6.19e-01 2.48e+02 1.54e+02 7.11e-01 3.91e+05 1 3.08e-03 3.29e-02
9 4.504388e+02 6.87e-01 1.74e+02 1.65e+02 7.43e-01 4.41e+05 1 3.15e-03 3.61e-02
10 4.497332e+02 7.06e-01 1.10e+02 1.85e+02 7.26e-01 4.86e+05 1 3.07e-03 3.91e-02
11 4.489893e+02 7.44e-01 1.22e+02 2.01e+02 7.11e-01 5.25e+05 1 3.11e-03 4.23e-02
12 4.483014e+02 6.88e-01 3.84e+02 2.16e+02 6.16e-01 5.31e+05 1 3.05e-03 4.53e-02
13 4.476780e+02 6.23e-01 6.34e+02 2.17e+02 5.08e-01 5.31e+05 1 3.06e-03 4.84e-02
14 4.471637e+02 5.14e-01 8.62e+02 2.16e+02 3.73e-01 5.23e+05 1 3.13e-03 5.16e-02
15 4.466950e+02 4.69e-01 1.05e+03 2.12e+02 2.91e-01 4.87e+05 1 3.02e-03 5.46e-02
16 4.460323e+02 6.63e-01 1.09e+03 1.97e+02 3.63e-01 4.77e+05 1 3.07e-03 5.77e-02
17 4.455845e+02 4.48e-01 1.20e+03 1.93e+02 2.46e-01 4.22e+05 1 3.13e-03 6.08e-02
18 4.446908e+02 8.94e-01 1.07e+03 1.70e+02 4.59e-01 4.22e+05 1 3.15e-03 6.40e-02
19 4.443183e+02 3.72e-01 1.17e+03 1.70e+02 2.30e-01 3.64e+05 1 4.25e-03 6.83e-02
20 4.434147e+02 9.04e-01 9.67e+02 1.47e+02 5.21e-01 3.64e+05 1 3.41e-03 7.17e-02
21 4.430443e+02 3.70e-01 1.04e+03 1.47e+02 2.82e-01 3.36e+05 1 5.73e-03 7.75e-02
22 4.424412e+02 6.03e-01 9.57e+02 1.36e+02 4.34e-01 3.36e+05 1 5.30e-03 8.28e-02
23 4.420824e+02 3.59e-01 1.01e+03 1.36e+02 2.92e-01 3.13e+05 1 5.48e-03 8.83e-02
24 4.415397e+02 5.43e-01 9.43e+02 1.27e+02 4.24e-01 3.12e+05 1 3.33e-03 9.16e-02
25 4.411907e+02 3.49e-01 9.89e+02 1.26e+02 3.04e-01 2.95e+05 1 3.33e-03 9.50e-02
26 4.407024e+02 4.88e-01 9.31e+02 1.19e+02 4.13e-01 2.93e+05 1 3.36e-03 9.84e-02
27 4.403605e+02 3.42e-01 9.67e+02 1.19e+02 3.17e-01 2.79e+05 1 3.44e-03 1.02e-01
28 4.399180e+02 4.43e-01 9.22e+02 1.14e+02 4.02e-01 2.77e+05 1 3.45e-03 1.05e-01
29 4.395810e+02 3.37e-01 9.49e+02 1.13e+02 3.30e-01 2.67e+05 1 3.15e-03 1.08e-01
30 4.391754e+02 4.06e-01 9.17e+02 1.09e+02 3.92e-01 2.64e+05 1 3.24e-03 1.12e-01
31 4.388422e+02 3.33e-01 9.34e+02 1.08e+02 3.42e-01 2.56e+05 1 3.12e-03 1.15e-01
32 4.384659e+02 3.76e-01 9.13e+02 1.05e+02 3.85e-01 2.53e+05 1 7.35e-03 1.22e-01
33 4.381364e+02 3.29e-01 9.23e+02 1.04e+02 3.53e-01 2.47e+05 1 3.25e-03 1.26e-01
34 4.377831e+02 3.53e-01 9.09e+02 1.02e+02 3.80e-01 2.43e+05 1 3.47e-03 1.29e-01
35 4.374583e+02 3.25e-01 9.14e+02 1.01e+02 3.63e-01 2.38e+05 1 3.45e-03 1.32e-01
36 4.371228e+02 3.35e-01 9.05e+02 9.91e+01 3.79e-01 2.35e+05 1 7.07e-03 1.40e-01
37 4.368037e+02 3.19e-01 9.06e+02 9.81e+01 3.70e-01 2.31e+05 1 5.73e-03 1.45e-01
38 4.364820e+02 3.22e-01 9.01e+02 9.68e+01 3.79e-01 2.28e+05 1 3.20e-03 1.49e-01
39 4.361697e+02 3.12e-01 9.01e+02 9.58e+01 3.77e-01 2.24e+05 1 6.12e-03 1.55e-01
40 4.358587e+02 3.11e-01 8.98e+02 9.48e+01 3.82e-01 2.21e+05 1 5.48e-03 1.60e-01
41 4.355537e+02 3.05e-01 8.97e+02 9.40e+01 3.82e-01 2.19e+05 1 2.78e-03 1.63e-01
42 4.352514e+02 3.02e-01 8.95e+02 9.32e+01 3.85e-01 2.16e+05 1 2.82e-03 1.66e-01
43 4.349534e+02 2.98e-01 8.94e+02 9.26e+01 3.86e-01 2.13e+05 1 2.91e-03 1.69e-01
44 4.346585e+02 2.95e-01 8.93e+02 9.19e+01 3.89e-01 2.11e+05 1 4.07e-03 1.73e-01
45 4.343670e+02 2.91e-01 8.93e+02 9.14e+01 3.91e-01 2.09e+05 1 3.17e-03 1.76e-01
46 4.340786e+02 2.88e-01 8.92e+02 9.10e+01 3.93e-01 2.07e+05 1 2.92e-03 1.79e-01
47 4.337931e+02 2.86e-01 8.92e+02 9.06e+01 3.95e-01 2.05e+05 1 2.99e-03 1.82e-01
48 4.335103e+02 2.83e-01 8.92e+02 9.02e+01 3.97e-01 2.03e+05 1 2.97e-03 1.85e-01
49 4.332302e+02 2.80e-01 8.92e+02 9.00e+01 3.99e-01 2.01e+05 1 2.95e-03 1.88e-01
50 4.329527e+02 2.78e-01 8.92e+02 8.98e+01 4.01e-01 2.00e+05 1 3.44e-03 1.91e-01
51 4.326775e+02 2.75e-01 8.93e+02 8.96e+01 4.03e-01 1.98e+05 1 3.06e-03 1.94e-01
52 4.324046e+02 2.73e-01 8.93e+02 8.95e+01 4.05e-01 1.97e+05 1 3.12e-03 1.98e-01
53 4.321340e+02 2.71e-01 8.94e+02 8.95e+01 4.07e-01 1.96e+05 1 2.97e-03 2.01e-01
54 4.318654e+02 2.69e-01 8.94e+02 8.95e+01 4.09e-01 1.95e+05 1 3.02e-03 2.04e-01
55 4.315990e+02 2.66e-01 8.95e+02 8.95e+01 4.11e-01 1.94e+05 1 2.95e-03 2.07e-01
56 4.313345e+02 2.64e-01 8.96e+02 8.96e+01 4.13e-01 1.93e+05 1 2.95e-03 2.10e-01
57 4.310720e+02 2.63e-01 8.97e+02 8.98e+01 4.15e-01 1.92e+05 1 2.99e-03 2.13e-01
58 4.308113e+02 2.61e-01 8.97e+02 9.00e+01 4.17e-01 1.91e+05 1 3.05e-03 2.16e-01
59 4.305526e+02 2.59e-01 8.98e+02 9.02e+01 4.20e-01 1.90e+05 1 2.96e-03 2.19e-01
60 4.302956e+02 2.57e-01 8.99e+02 9.05e+01 4.22e-01 1.89e+05 1 3.01e-03 2.22e-01
61 4.300404e+02 2.55e-01 8.99e+02 9.08e+01 4.24e-01 1.89e+05 1 2.96e-03 2.25e-01
62 4.297870e+02 2.53e-01 9.00e+02 9.11e+01 4.26e-01 1.88e+05 1 3.00e-03 2.28e-01
63 4.295353e+02 2.52e-01 9.00e+02 9.15e+01 4.29e-01 1.87e+05 1 2.96e-03 2.31e-01
64 4.292854e+02 2.50e-01 9.00e+02 9.19e+01 4.31e-01 1.87e+05 1 2.95e-03 2.34e-01
65 4.290372e+02 2.48e-01 9.00e+02 9.24e+01 4.34e-01 1.87e+05 1 3.02e-03 2.37e-01
66 4.287907e+02 2.46e-01 9.00e+02 9.29e+01 4.36e-01 1.86e+05 1 2.99e-03 2.40e-01
67 4.285459e+02 2.45e-01 9.00e+02 9.34e+01 4.39e-01 1.86e+05 1 2.94e-03 2.43e-01
68 4.283029e+02 2.43e-01 8.99e+02 9.40e+01 4.42e-01 1.86e+05 1 3.09e-03 2.46e-01
69 4.280616e+02 2.41e-01 8.98e+02 9.46e+01 4.45e-01 1.85e+05 1 2.98e-03 2.49e-01
70 4.278220e+02 2.40e-01 8.97e+02 9.52e+01 4.48e-01 1.85e+05 1 3.08e-03 2.52e-01
71 4.275843e+02 2.38e-01 8.95e+02 9.59e+01 4.51e-01 1.85e+05 1 3.04e-03 2.55e-01
72 4.273483e+02 2.36e-01 8.93e+02 9.66e+01 4.54e-01 1.85e+05 1 2.98e-03 2.58e-01
73 4.271141e+02 2.34e-01 8.91e+02 9.73e+01 4.58e-01 1.85e+05 1 3.05e-03 2.61e-01
74 4.268818e+02 2.32e-01 8.88e+02 9.80e+01 4.62e-01 1.85e+05 1 3.48e-03 2.64e-01
75 4.266514e+02 2.30e-01 8.85e+02 9.88e+01 4.66e-01 1.85e+05 1 3.24e-03 2.68e-01
76 4.264229e+02 2.28e-01 8.81e+02 9.96e+01 4.70e-01 1.84e+05 1 3.06e-03 2.71e-01
77 4.261964e+02 2.27e-01 8.77e+02 1.00e+02 4.74e-01 1.84e+05 1 3.12e-03 2.74e-01
78 4.259719e+02 2.25e-01 8.72e+02 1.01e+02 4.79e-01 1.84e+05 1 3.03e-03 2.77e-01
79 4.257495e+02 2.22e-01 8.66e+02 1.02e+02 4.83e-01 1.84e+05 1 3.04e-03 2.80e-01
80 4.255292e+02 2.20e-01 8.60e+02 1.03e+02 4.88e-01 1.84e+05 1 3.05e-03 2.83e-01
81 4.253111e+02 2.18e-01 8.53e+02 1.04e+02 4.94e-01 1.84e+05 1 3.07e-03 2.86e-01
82 4.250953e+02 2.16e-01 8.45e+02 1.05e+02 4.99e-01 1.84e+05 1 3.12e-03 2.89e-01
83 4.248818e+02 2.13e-01 8.37e+02 1.05e+02 5.05e-01 1.84e+05 1 3.06e-03 2.92e-01
84 4.246707e+02 2.11e-01 8.28e+02 1.06e+02 5.11e-01 1.84e+05 1 3.28e-03 2.96e-01
85 4.244622e+02 2.09e-01 8.19e+02 1.07e+02 5.18e-01 1.84e+05 1 3.00e-03 2.99e-01
86 4.242562e+02 2.06e-01 8.09e+02 1.08e+02 5.25e-01 1.84e+05 1 3.17e-03 3.02e-01
87 4.240529e+02 2.03e-01 7.98e+02 1.09e+02 5.31e-01 1.85e+05 1 3.08e-03 3.05e-01
88 4.238524e+02 2.01e-01 7.87e+02 1.10e+02 5.38e-01 1.85e+05 1 3.13e-03 3.08e-01
89 4.236546e+02 1.98e-01 7.76e+02 1.11e+02 5.45e-01 1.85e+05 1 3.05e-03 3.11e-01
90 4.234597e+02 1.95e-01 7.65e+02 1.12e+02 5.52e-01 1.85e+05 1 3.24e-03 3.14e-01
91 4.232676e+02 1.92e-01 7.53e+02 1.13e+02 5.59e-01 1.85e+05 1 3.09e-03 3.17e-01
92 4.230783e+02 1.89e-01 7.42e+02 1.14e+02 5.66e-01 1.86e+05 1 3.04e-03 3.21e-01
93 4.228919e+02 1.86e-01 7.32e+02 1.15e+02 5.73e-01 1.86e+05 1 7.14e-03 3.28e-01
94 4.227082e+02 1.84e-01 7.21e+02 1.17e+02 5.79e-01 1.87e+05 1 4.19e-03 3.32e-01
95 4.225271e+02 1.81e-01 7.12e+02 1.18e+02 5.85e-01 1.88e+05 1 3.00e-03 3.35e-01
96 4.223486e+02 1.79e-01 7.03e+02 1.20e+02 5.91e-01 1.89e+05 1 2.78e-03 3.38e-01
97 4.221724e+02 1.76e-01 6.95e+02 1.21e+02 5.96e-01 1.90e+05 1 2.81e-03 3.41e-01
98 4.219985e+02 1.74e-01 6.88e+02 1.23e+02 6.01e-01 1.92e+05 1 2.96e-03 3.44e-01
99 4.218266e+02 1.72e-01 6.81e+02 1.25e+02 6.05e-01 1.94e+05 1 2.90e-03 3.46e-01
100 4.216566e+02 1.70e-01 6.75e+02 1.27e+02 6.09e-01 1.96e+05 1 4.07e-03 3.51e-01

Bundle adjustment report

Residuals : 3588

Parameters : 2698
Iterations : 101
Time : 0.350847 [s]
Initial cost : 0.360397 [px]
Final cost : 0.34281 [px]
Termination : No convergence

=> Merged observations: 0
=> Completed observations: 0
=> Filtered observations: 897
=> Changed observations: 0.500000

==============================================================================
Global bundle adjustment

=> Merged observations: 0
=> Completed observations: 0
=> Filtered observations: 0
=> Changed observations: -nan

==============================================================================
Global bundle adjustment

=> Merged observations: 0
=> Completed observations: 0
=> Filtered observations: 0
=> Changed observations: -nan

==============================================================================
Global bundle adjustment

=> Merged observations: 0
=> Completed observations: 0
=> Filtered observations: 0
=> Changed observations: -nan

==============================================================================
Global bundle adjustment

=> Merged observations: 0
=> Completed observations: 0
=> Filtered observations: 0
=> Changed observations: -nan
=> Filtered images: 0

Elapsed time: 0.022 [minutes]

Baseline implementations

Hi - would you consider adding the implementations of the various baselines to this repo, e.g. LFI, ULR, Soft3D?

The paper's appendix specifies that these were implemented by the authors, since in most cases there are no open-source implementations. It would be very helpful to many people (including myself) to have some implementation of these baselines.

Extensive GPU/memory usage with high-resolution output

Hi, many thanks for your impressive work!

Some questions came up that I might need your help with, when trying to produce high-resolution outputs.

  1. I notice that the given network has a base resolution of 640x480x3 (and 32 default depth planes). When the resolution is higher than this, the image is cut into patches. I tried a resolution of 1280x960 and it worked fine. However, if I increase the resolution up to e.g. 4032x3024, memory usage increases dramatically (out of memory in my case).

Have you ever tried this case and met the same problem? Is it possible to generate output at high resolution?

[Image: the network structure from the paper]

The image above shows the network structure given in your paper. For the tensor named nnup7, the output channel count is 64. Using the maximum base resolution of 640x480 and the default 32 planes, the full size of nnup7 is 1x32x480x640x64 (batch, planes, height, width, channels). When I test this network structure, I find that the GPU memory needed is very large (greater than 10 GB).

My question is: am I right about the size of nnup7? If so, how much GPU memory did you use?
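
(A quick sanity check on that size: this single float32 activation alone is already over 2 GiB, so a total above 10 GB is plausible.)

elems = 1 * 32 * 480 * 640 * 64   # batch x planes x height x width x channels
print(elems * 4 / 2**30, "GiB")   # ~2.34 GiB for this one tensor in float32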

Still images only?

Hello, this is very impressive! What came to my mind, in a similar fashion to those 6D video cameras: would it be possible to create a planar array of small cameras, like action cams, to make each video frame an MPI? I am not sure whether the sync would need to be at the sub-frame level, as if it were a stereo video; I have seen that GoPros used to have a genlock sync cable.

Colmap affecting disparity values?

Hello,

I noticed that the disparity values (from the disparity maps generated by LLFF) are very different from the "theoretical" values (the apparent pixel offset between a pair of stereo images). Is this due to the different scaling of the COLMAP camera parameters?

Thanks in advance!
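
(For reference: structure-from-motion reconstructions such as COLMAP's are only defined up to an unknown global scale, so disparities derived from its depths differ from metric stereo disparities by that factor. A hedged sketch of the relationship, with expected_disparity a hypothetical helper:)

def expected_disparity(focal_px, baseline, depth, scale=1.0):
    # d = f * B / (s * Z); scale s is the unknown global factor of the
    # COLMAP reconstruction relative to metric units
    return focal_px * baseline / (scale * depth)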

Test with known camera pose

Hi,
Thank you very much for your excellent work and your open-source code!
I have followed your tutorial and got really amazing synthesis results. However, when I test with some other light-field data, it seems that COLMAP does not work correctly and some errors occur. To avoid the problems caused by COLMAP, I want to skip the imgs2poses step and give camera poses directly to the following step; is there any way to feed camera poses to it? (I found that in your code you apply some processing, such as transposes, to the estimated poses, but without many comments explaining it; could you please give some explanation of the camera-pose processing after imgs2poses?)
As for the other test data in your paper, I am very interested in their output, but I did not find a download link. Are these data available to the public?
Thank you very much for your attention.
Yours sincerely,
Jinglei

glviewer

I got a black screen when viewing the results in glviewer. Everything else seemed fine, so I looked at the code; uncommenting line 455 in viewer.cpp:

view = glm::lookAt(glm::vec3(0,0,0), glm::vec3(0,0,-1), glm::vec3(0,1,0));

fixed it for me, in case anyone has the same problem.

It would be cool to have access to the synthetic-data generation and the training at some point, if you guys decide to release that.

Awesome work! Waiting for LLFFvid ;)

Issues when running imgs2mpis

Hey,
I am running imgs2mpis, but the console output is weird, and it never stops.

Weights restored
0 (of 20) <- [0, 8, 9, 1, 10, 0] depths 15.697662947784377 162.3284852276472
('abdriged outputs to', dict_keys(['mpi0', 'disps', 'psv']))
('0 of 1',)
('(3024, 4032) gridded into 12 x 15',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)
('.',)

What kind of photo information does `imgs2poses.py` use?

I tried to run python imgs2poses.py <scene_dir> as written in README.md. With photos taken with an Android (Pixel 3) camera it failed to generate poses, but with an iPad Pro (2020) camera (which may be the same brand as the test images Fyusion uses), the poses are generated successfully.
With the Android photos, the script fails to create the directory sparse/0 and also fails to generate the .bin files; the error message "sparse/0 does not exist" is returned.

What is the difference between them, and what kind of information does imgs2poses.py use to generate poses, e.g. the value of 1/f, or the ISO?
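
(For context: imgs2poses.py wraps COLMAP, and COLMAP takes its focal-length prior from the EXIF metadata when present, falling back to a default proportional to the image size otherwise; missing or unusual EXIF is a common difference between phone cameras. A small sketch to inspect what your photos carry, using Pillow:)

from PIL import Image
from PIL.ExifTags import TAGS

def show_camera_exif(path):
    # _getexif() is Pillow's legacy accessor that returns a flat tag dict
    exif = Image.open(path)._getexif() or {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) in ("Make", "Model", "FocalLength", "FocalLengthIn35mmFilm"):
            print(TAGS.get(tag_id), value)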

The generation of MPI

Dear authors,

Many thanks for your excellent work and open-source code!

However, I ran into some problems when I used my own dataset and tried to generate the MPI images.

  1. I do get 32 MPI images, but as far as I know an MPI contains 32 RGB layers and 32 alpha layers (I am not sure whether that is right). Are these 32 MPI images generated by multiplying the corresponding RGB by the alpha? (See the compositing sketch below.)
  2. I also notice that when inference runs, the indices of the closest neighbors are printed out, e.g. [0, 6, 10, 4, 13, 0]. I suppose the first 0 is the reference image and [0, 6, 10, 4, 13] are its closest neighbors. What does the last zero mean? Does it mean that 0 is the target image index?
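
(On question 1, this matches the usual MPI convention: each plane stores RGB plus alpha, and a view is formed by back-to-front "over" compositing rather than a plain per-layer RGB*alpha product. A minimal sketch of that compositing, assuming an RGBA array with plane 0 farthest from the camera:)

import numpy as np

def composite_mpi(mpi):
    # mpi: (planes, H, W, 4) with RGB in [..., :3] and alpha in [..., 3]
    out = np.zeros(mpi.shape[1:3] + (3,))
    for rgba in mpi:  # back to front
        rgb, alpha = rgba[..., :3], rgba[..., 3:]
        out = rgb * alpha + out * (1.0 - alpha)  # "over" operator
    return out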

Low quality video

Hi,

I have to scale down the number of planes of the neural network so it can run on my graphics card, and the resulting output video has really bad quality. Can you explain how to scale this software to run on a graphics card with lower compute power while preserving the video quality?
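
(Assuming the numplanes argument that appears in imgs2mpis.py's argument handling is exposed as a flag, reducing the plane count would look like the line below; fewer planes generally means coarser depth quantization, which is one likely source of the quality drop.)

python imgs2mpis.py scenedir scenedir/mpis --height 360 --numplanes 16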

Inference with disparities provided by users

Thanks for open-sourcing this great work!
I wonder whether your model supports inference with disparities provided by the user? In my scenario, I am interested in the results when I manually edit the predicted disparity.
Thanks!

Error running the demo

Hello, thank you very much for your open-source code!
When trying to run the demo, I have been encountering some problems.

When I want to create MPIs using the pretrained network:

(j3) m00486393@ubuntu:~/j500003470/Code/LLFF$ python imgs2mpis.py     data/testscene/     data/testscene/mpis_360     --height 360

factor/width/height args: [None, None, 360]
Loaded image data (360, 480, 3, 20) [ 360.         480.         391.3782448]
Creating session
2019-08-05 23:28:31.966128: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2019-08-05 23:28:31.966155: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2019-08-05 23:28:31.966163: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2019-08-05 23:28:31.966171: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2019-08-05 23:28:31.966178: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2019-08-05 23:28:40.093371: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: Tesla P100-PCIE-16GB
major: 6 minor: 0 memoryClockRate (GHz) 1.3285
pciBusID 0000:2d:00.0
Total memory: 15.89GiB
Free memory: 15.60GiB
2019-08-05 23:28:40.595427: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x333ce40 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-08-05 23:28:40.597315: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 1 with properties:
name: Tesla P100-PCIE-16GB
major: 6 minor: 0 memoryClockRate (GHz) 1.3285
pciBusID 0000:31:00.0
Total memory: 15.89GiB
Free memory: 15.60GiB
2019-08-05 23:28:41.108226: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x3341140 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-08-05 23:28:41.110100: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 2 with properties:
name: Tesla P100-PCIE-16GB
major: 6 minor: 0 memoryClockRate (GHz) 1.3285
pciBusID 0000:35:00.0
Total memory: 15.89GiB
Free memory: 15.60GiB
2019-08-05 23:28:41.655882: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x3345440 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-08-05 23:28:41.657770: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 3 with properties:
name: Tesla P100-PCIE-16GB
major: 6 minor: 0 memoryClockRate (GHz) 1.3285
pciBusID 0000:39:00.0
Total memory: 15.89GiB
Free memory: 15.60GiB
2019-08-05 23:28:42.211243: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x33497a0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-08-05 23:28:42.213097: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 4 with properties:
name: Tesla P100-PCIE-16GB
major: 6 minor: 0 memoryClockRate (GHz) 1.3285
pciBusID 0000:a9:00.0
Total memory: 15.89GiB
Free memory: 15.60GiB
2019-08-05 23:28:42.785812: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x334dce0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-08-05 23:28:42.787681: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 5 with properties:
name: Tesla P100-PCIE-16GB
major: 6 minor: 0 memoryClockRate (GHz) 1.3285
pciBusID 0000:ad:00.0
Total memory: 15.89GiB
Free memory: 15.60GiB
2019-08-05 23:28:43.374371: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x3352220 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-08-05 23:28:43.376293: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 6 with properties:
name: Tesla P100-PCIE-16GB
major: 6 minor: 0 memoryClockRate (GHz) 1.3285
pciBusID 0000:b1:00.0
Total memory: 15.89GiB
Free memory: 15.60GiB
2019-08-05 23:28:43.974571: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x3356760 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-08-05 23:28:43.976437: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 7 with properties:
name: Tesla P100-PCIE-16GB
major: 6 minor: 0 memoryClockRate (GHz) 1.3285
pciBusID 0000:b5:00.0
Total memory: 15.89GiB
Free memory: 15.60GiB
2019-08-05 23:28:43.982265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 0 and 4
2019-08-05 23:28:43.982292: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 0 and 5
2019-08-05 23:28:43.982308: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 0 and 6
2019-08-05 23:28:43.982324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 0 and 7
2019-08-05 23:28:43.986110: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 1 and 4
2019-08-05 23:28:43.986132: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 1 and 5
2019-08-05 23:28:43.986147: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 1 and 6
2019-08-05 23:28:43.986162: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 1 and 7
2019-08-05 23:28:43.988118: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 2 and 4
2019-08-05 23:28:43.988140: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 2 and 5
2019-08-05 23:28:43.988156: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 2 and 6
2019-08-05 23:28:43.988171: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 2 and 7
2019-08-05 23:28:43.988254: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 3 and 4
2019-08-05 23:28:43.988270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 3 and 5
2019-08-05 23:28:43.988285: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 3 and 6
2019-08-05 23:28:43.988299: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 3 and 7
2019-08-05 23:28:43.988315: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 4 and 0
2019-08-05 23:28:43.988330: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 4 and 1
2019-08-05 23:28:43.988344: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 4 and 2
2019-08-05 23:28:43.988359: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 4 and 3
2019-08-05 23:28:43.994068: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 5 and 0
2019-08-05 23:28:43.994091: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 5 and 1
2019-08-05 23:28:43.994106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 5 and 2
2019-08-05 23:28:43.994121: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 5 and 3
2019-08-05 23:28:43.997893: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 6 and 0
2019-08-05 23:28:43.997915: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 6 and 1
2019-08-05 23:28:43.997930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 6 and 2
2019-08-05 23:28:43.997945: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 6 and 3
2019-08-05 23:28:43.999903: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 7 and 0
2019-08-05 23:28:43.999925: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 7 and 1
2019-08-05 23:28:43.999941: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 7 and 2
2019-08-05 23:28:43.999956: I tensorflow/core/common_runtime/gpu/gpu_device.cc:847] Peer access not supported between device ordinals 7 and 3
2019-08-05 23:28:44.000405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 1 2 3 4 5 6 7
2019-08-05 23:28:44.000418: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y Y Y Y N N N N
2019-08-05 23:28:44.000426: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 1:   Y Y Y Y N N N N
2019-08-05 23:28:44.000434: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 2:   Y Y Y Y N N N N
2019-08-05 23:28:44.000441: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 3:   Y Y Y Y N N N N
2019-08-05 23:28:44.000448: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 4:   N N N N Y Y Y Y
2019-08-05 23:28:44.000456: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 5:   N N N N Y Y Y Y
2019-08-05 23:28:44.000463: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 6:   N N N N Y Y Y Y
2019-08-05 23:28:44.000470: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 7:   N N N N Y Y Y Y
2019-08-05 23:28:44.000485: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:2d:00.0)
2019-08-05 23:28:44.000495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: Tesla P100-PCIE-16GB, pci bus id: 0000:31:00.0)
2019-08-05 23:28:44.000504: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:2) -> (device: 2, name: Tesla P100-PCIE-16GB, pci bus id: 0000:35:00.0)
2019-08-05 23:28:44.000510: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:3) -> (device: 3, name: Tesla P100-PCIE-16GB, pci bus id: 0000:39:00.0)
2019-08-05 23:28:44.000518: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:4) -> (device: 4, name: Tesla P100-PCIE-16GB, pci bus id: 0000:a9:00.0)
2019-08-05 23:28:44.000525: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:5) -> (device: 5, name: Tesla P100-PCIE-16GB, pci bus id: 0000:ad:00.0)
2019-08-05 23:28:44.000532: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:6) -> (device: 6, name: Tesla P100-PCIE-16GB, pci bus id: 0000:b1:00.0)
2019-08-05 23:28:44.000539: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:7) -> (device: 7, name: Tesla P100-PCIE-16GB, pci bus id: 0000:b5:00.0)
Restoring from ./checkpoints/papermodel/checkpoint
Meta restored
Found inputs:
['imgs:0', 'depths:0', 'poses:0', 'num_depths:0', 'close_depth:0', 'inf_depth:0', 'window:0']
Found outputs:
['accum', 'alpha_acc', 'base_img', 'disp0', 'disps', 'imgs', 'inplaces', 'mpi0', 'mpis', 'psv', 'psv1', 'renderings', 'renderings_all', 'renderings_mean', 'renderings_single', 'scales', 'target_disp', 'target_img']
Setup renderer
Weights restored
0 (of 20) <- [0, 8, 9, 1, 10, 0] depths 15.6985977596 166.511276286
('abdriged outputs to', dict_keys(['mpi0', 'disps', 'psv']))
('0 of 1',)
('(360, 480) gridded into 3 x 4',)
('.',)
Traceback (most recent call last):
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call
    return fn(*args)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1306, in _run_fn
    status, run_metadata)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits must be 2-dimensional
         [[Node: get_mpi3d_multi/Softmax = Softmax[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](get_mpi3d_multi/concat_1)]]
         [[Node: disps_1/_731 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_12995_disps_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "imgs2mpis.py", line 83, in <module>
    args.numplanes, args.no_mpis, True, args.psvs)
  File "imgs2mpis.py", line 54, in gen_mpis
    mpis = run_inference(imgs, poses, mpi_bds, ibr_runner, num_planes, patched, disps=disps, psvs=psvs)
  File "/home/m00486393/j500003470/Code/LLFF/llff/inference/mpi_utils.py", line 156, in run_inference
    mpi.generate(generator, num_planes)
  File "/home/m00486393/j500003470/Code/LLFF/llff/inference/mpi_utils.py", line 55, in generate
    outputs = generator(inputs)
  File "/home/m00486393/j500003470/Code/LLFF/llff/inference/mpi_utils.py", line 136, in <lambda>
    generator = lambda inputs : ibr_runner.run_inference(inputs, test_keys=keys, patched=patched, valid=120, buffer=80)
  File "/home/m00486393/j500003470/Code/LLFF/llff/inference/mpi_tester.py", line 216, in run_inference
    out_ = sess.run(outputs, feed_dict=fdict)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1124, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
    options, run_metadata)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits must be 2-dimensional
         [[Node: get_mpi3d_multi/Softmax = Softmax[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](get_mpi3d_multi/concat_1)]]
         [[Node: disps_1/_731 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_12995_disps_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op 'get_mpi3d_multi/Softmax', defined at:
  File "imgs2mpis.py", line 83, in <module>
    args.numplanes, args.no_mpis, True, args.psvs)
  File "imgs2mpis.py", line 44, in gen_mpis
    ibr_runner.load_graph(logdir)
  File "/home/m00486393/j500003470/Code/LLFF/llff/inference/mpi_tester.py", line 112, in load_graph
    self.saver = tf.train.import_meta_graph(ckpt_path + '.meta')
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1698, in import_meta_graph
    **kwargs)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 656, in import_scoped_meta_graph
    producer_op_list=producer_op_list)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
    op_def=op_def)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/m00486393/anaconda2/envs/j3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): logits must be 2-dimensional
         [[Node: get_mpi3d_multi/Softmax = Softmax[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](get_mpi3d_multi/concat_1)]]
         [[Node: disps_1/_731 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_12995_disps_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

I don't know how to fix this problem. Could you please help me identify where things could possibly be going wrong?

CUDA error when running demo.sh/imgs2poses.py

Hi authors, thanks for open-sourcing the code of your amazing paper! I've been encountering a few issues when trying to run through your demo.

I have followed your README to install the provided nvidia-docker, but it seems that CUDA errors occur when running demo.sh inside the docker. More specifically, when running this line, the program complains about "PyramidCU::GenerateFeatureList: an illegal memory access was encountered". I've prepared my running logs both with and without cuda-memcheck for debugging purposes. Could you please help me identify what could possibly be going wrong?

I'm using RTX 2080 Ti + cuda-10.1 + NVIDIA driver 418.39 on Ubuntu 18.04. Not sure if being too updated could be the problem.

depth map

Hi, very impressive work! Can you please explain a little bit more about how you generate the disparity/depth maps?
BTW, I pulled out the raw data of mpis[i].disps, and it looks like it is neither the depth nor the disparity (the units do not match). What should I do to recover the meaningful values?

Thanks!
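For reference, a minimal sketch of how one might apply a matplotlib colormap to a raw disparity array; the specific colormap ('viridis') and the min-max normalization here are assumptions, not necessarily what the authors used:

import numpy as np
import matplotlib.cm as cm

def colorize_disparity(disp, cmap_name='viridis'):
    # Normalize the raw disparity to [0, 1] before applying a colormap.
    d = (disp - disp.min()) / (disp.max() - disp.min() + 1e-8)
    rgba = cm.get_cmap(cmap_name)(d)  # H x W x 4 floats in [0, 1]
    return (rgba[..., :3] * 255).astype(np.uint8)

# e.g. colorize_disparity(mpis[0].disps) gives an H x W x 3 uint8 image.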

Where are the psv input?

Hello, thank you very much for open-sourcing your code!
While trying to understand the code alongside your paper, I ran into a problem.
In my understanding, the input to the network should be the PSVs of 5 viewpoints, but when I look at the code, I don't see a PSV transformation applied when the images are fed into the network to infer the MPI; instead, PSVs appear in the network's outputs. Do the network inputs need a PSV transformation? Where is it embodied in the code?

Thank you very much for your attention.
Best,
Yilei Chen

Unable to use light field datasets other than the testscene

Hi, I have finished the installation and rendered the testscene successfully. However, when I tried to use pictures from other datasets, it failed. The dataset I use is the MIT Synthetic Light Field Archive.
I checked the log and found that the first error occurred here:

Need to run COLMAP
Features extracted
Features matched
Sparse map created
Finished running COLMAP, see data/carscene/output_5x5m/colmap_output.txt for logs
Post-colmap
('Cameras', 5)
('Images #', 2)
Traceback (most recent call last):
  File "imgs2poses.py", line 11, in <module>
    gen_poses(args.scenedir)
  File "/host/data2/l00362246/boyutian/LLFF/llff/poses/pose_utils.py", line 273, in gen_poses
    save_poses(basedir, poses, pts3d, perm)
  File "/host/data2/l00362246/boyutian/LLFF/llff/poses/pose_utils.py", line 63, in save_poses
    cams[ind-1] = 1
IndexError: list assignment index out of range

The scene I use contains 25 pictures, but only 2 (the initial pair) were registered successfully after running COLMAP. I think this is the main reason for the failure, and I was wondering why it happens. Also, I checked the colmap output; one difference is that the pictures I use do not contain GPS information. I attach the colmap_output here.
car_colmap_output.txt

how to generate the database.db

I have downloaded the testscene.zip and the trained model you provided and put them in the right place.
When I run the command "python imgs2poses.py /home/lvhao/LLFF-master/data/testscene/",
the following error occurs:
Need to run COLMAP
Traceback (most recent call last):
  File "imgs2poses.py", line 11, in <module>
    gen_poses(args.scenedir)
  File "/home/lvhao/LLFF-master/llff/poses/pose_utils.py", line 265, in gen_poses
    run_colmap(basedir)
  File "/home/lvhao/LLFF-master/llff/poses/colmap_wrapper.py", line 35, in run_colmap
    feat_output = ( subprocess.check_output(feature_extractor_args, universal_newlines=True) )
  File "/usr/lib/python2.7/subprocess.py", line 567, in check_output
    process = Popen(stdout=PIPE, *popenargs, **kwargs)
  File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

When I debugged it, I found that I don't have '/home/lvhao/LLFF-master/data/testscene/database.db'. How do I generate database.db, or could you provide it, since it is not in testscene.zip? Thank you for your time.
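Note: database.db is not shipped with testscene.zip; COLMAP generates it itself during feature extraction. Also, an OSError: [Errno 2] raised from subprocess (as in the traceback above) usually means the colmap executable itself was not found on the PATH, not that the database file is missing. A minimal sketch of the extraction call, assuming colmap is installed and reusing the scene path from this report:

import subprocess

scenedir = '/home/lvhao/LLFF-master/data/testscene'  # path from the report above

# COLMAP creates database.db itself; only the images directory must exist.
subprocess.check_output([
    'colmap', 'feature_extractor',
    '--database_path', scenedir + '/database.db',
    '--image_path', scenedir + '/images',
], universal_newlines=True)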

Training code?

Hello, I am just wondering if you could upload the training code, as well as the real-image training data mentioned in the paper?

I'm on Ubuntu 18.04 LTS / opengl black screen even after press/release.

I'm on Ubuntu 18.04 LTS, and when running the opengl viewer as described in the readme I'm still getting a black frame, even after uncommenting L:455. Any ideas?

Console Log:

-----------
(0) : error C5145: must write to gl_Position

 -- --------------------------------------------------- -- 
shader2
ERROR::SHADER::FILE_NOT_SUCCESFULLY_READ
ERROR::PROGRAM_LINKING_ERROR of type: PROGRAM
Vertex info
-----------
(0) : error C5145: must write to gl_Position

 -- --------------------------------------------------- -- 

Originally posted by @Schizo in #3 (comment)

Error running `python imgs2poses.py data/testscene`

I'm getting this error when trying to do a manual installation:

Need to run COLMAP
Features extracted
Features matched
ERROR: Failed to parse options: unrecognised option '--output_path'.
Traceback (most recent call last):
  File "imgs2poses.py", line 18, in <module>
    gen_poses(args.scenedir, args.match_type)
  File "/home/gera/Desktop/CVIT/Research/code/LLFF/llff/poses/pose_utils.py", line 268, in gen_poses
    run_colmap(basedir, match_type)
  File "/home/gera/Desktop/CVIT/Research/code/LLFF/llff/poses/colmap_wrapper.py", line 71, in run_colmap
    map_output = ( subprocess.check_output(mapper_args, universal_newlines=True) )
  File "/home/gera/anaconda3/envs/llff/lib/python3.7/subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "/home/gera/anaconda3/envs/llff/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['colmap', 'mapper', '--database_path', 'data/testscene/database.db', '--image_path', 'data/testscene/images', '--output_path', 'data/testscene/sparse', '--Mapper.num_threads', '16', '--Mapper.init_min_tri_angle', '4', '--Mapper.multiple_models', '0', '--Mapper.extract_colors', '0']' returned non-zero exit status 1.

Mapper.multiple_models in command 'colmap mapper'

What is the meaning of the arguments Mapper.multiple_models, Mapper.num_threads, Mapper.init_min_tri_angle, and Mapper.extract_colors? I can't find documentation for these arguments in the COLMAP tutorial.
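For context, here is the mapper invocation that appears in the tracebacks elsewhere on this page, annotated with my best understanding of each option (these descriptions are assumptions, not taken from the COLMAP docs):

mapper_args = [
    'colmap', 'mapper',
    '--database_path', 'data/testscene/database.db',
    '--image_path', 'data/testscene/images',
    '--output_path', 'data/testscene/sparse',
    '--Mapper.num_threads', '16',        # CPU threads used during reconstruction
    '--Mapper.init_min_tri_angle', '4',  # minimum triangulation angle (degrees) for the initial image pair
    '--Mapper.multiple_models', '0',     # 0 = force a single reconstruction instead of splitting into sub-models
    '--Mapper.extract_colors', '0',      # 0 = skip extracting RGB colors for the sparse points
]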

about blending weights' code inconsistent with equation

Dear author,
Sorry to bother you. As I studied your great work in detail, I ran into a question.
Here is your blending weights equation:
[image: blending weights equation]
but in your code:
[image: code snippet]
Is the part I circled in the red frame consistent with your equation? I read this line of code as color multiplied by weight, divided by alpha multiplied by weight, which would mean the numerator of the equation is missing the alpha term. If I have misunderstood, could you please explain it to me? Thank you so much!

Error using nvdia-docker

Hello. I am trying to reproduce the displayed results, but when I execute sudo nvidia-docker run --rm --volume /:/host --workdir /host$PWD tf_colmap bash demo.sh, docker returns an error. Below is the error I encountered:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=10.0 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=410,driver<411 --pid=7506 /var/lib/docker/overlay2/74b368071c67140593255d9461eb525598dbbca0ab382047da530356351746c6/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.

Four test scenes from Figure 9 added to the Google Drive supplement

I just added the four test scenes from Figure 9 (airplants, pond, fern, t-rex) to the google drive supplement, you can find them here now:
https://drive.google.com/open?id=1Xzn-bRYhNE5P9N7wnwLDXmo37x7m3RsO

Here's an explanation of the poses_bounds.npy file format. This file stores a numpy array of size Nx17 (where N is the number of input images). You can see how that is loaded in the three lines here. Each row of length 17 gets reshaped into a 3x5 pose matrix and 2 depth values that bound the closest and farthest scene content from that point of view.

The pose matrix is a 3x4 camera-to-world affine transform concatenated with a 3x1 column [image height, image width, focal length] along axis=1.

The rotation (first 3x3 block in the camera-to-world transform) is stored in a somewhat unusual order, which is why there are the transposes. From the point of view of the camera, the three axes are
[ down, right, backwards ]
which some people might consider to be [-y,x,z].

So the steps to reproduce this should be (if you have a set of 3x4 poses for your images, plus focal lengths and close/far depth bounds):

  1. Make sure your poses are in camera-to-world format, not world-to-camera.
  2. Make sure your rotation matrices have the columns in the same order I use (downward, right, backwards).
  3. Concatenate each pose with the [height, width, focal] vector to get a 3x5 matrix.
  4. Flatten each of those into 15 elements and concatenate the close/far depths.
  5. Concatenate each 17d vector to get a Nx17 matrix and use np.save to store it as poses_bounds.npy.

Hopefully that helps explain my pose processing after colmap. Let me know if you have any more questions.

Originally posted by @bmild in #10 (comment)
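As a concrete illustration of the five steps above, here is a minimal sketch, assuming you already have 3x4 camera-to-world matrices whose rotation columns are in the conventional [right, up, backwards] order, plus per-image (height, width, focal) tuples and close/far depth bounds (the function and variable names are hypothetical):

import numpy as np

def build_poses_bounds(c2w_mats, hwf_list, depth_bounds):
    # c2w_mats:     list of 3x4 camera-to-world matrices, rotation columns [right, up, backwards]
    # hwf_list:     list of (height, width, focal) tuples, one per image
    # depth_bounds: list of (close_depth, far_depth) tuples, one per image
    rows = []
    for c2w, (h, w, f), (near, far) in zip(c2w_mats, hwf_list, depth_bounds):
        R, t = c2w[:, :3], c2w[:, 3:4]
        # Step 2: reorder rotation columns to [down, right, backwards].
        R_llff = np.concatenate([-R[:, 1:2], R[:, 0:1], R[:, 2:3]], axis=1)
        # Step 3: append the [height, width, focal] column to get a 3x5 matrix.
        pose = np.concatenate([R_llff, t, np.array([[h], [w], [f]], dtype=np.float64)], axis=1)
        # Step 4: flatten to 15 values and append the depth bounds (17 values total).
        rows.append(np.concatenate([pose.ravel(), [near, far]]))
    # Step 5: stack into an Nx17 array.
    return np.stack(rows)

# np.save('poses_bounds.npy', build_poses_bounds(c2w_mats, hwf_list, depth_bounds))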

Awkward flickering while using opengl_viewer

Thanks for sharing your code!

I built the environment without any issues, and the results generated by cuda_renderer look perfect to me. However, when I switch to opengl_viewer, I find that some parts of the MPIs keep appearing and disappearing as I move my mouse, which causes annoying flickering. Do you have any idea what is happening?
Thanks!

Can I use this method for lytro style light field?

As stated, since colmap fails to register Lytro-style data, I am trying to manually generate poses and mpi_bds. However, I think I might be doing something wrong here.

I figured out that each pose is a 3x5 matrix from which the homography is generated.
If the horizontal and vertical baselines of a view are x and y, the image is of size h x w, and the focal length is f, is this the right matrix for the pose?

1 0 0 (-y) h
0 1 0 (-x) w
0 0 1 0    f
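For illustration, a small helper that assembles the 3x5 matrix exactly as laid out in the question; this only mirrors the proposed layout and does not confirm it is what the renderer expects:

import numpy as np

def lytro_pose(x, y, h, w, f):
    # Identity rotation, translation (-y, -x, 0), plus the [h, w, f] column,
    # matching the 3x5 layout proposed above.
    return np.array([
        [1., 0., 0., -y, h],
        [0., 1., 0., -x, w],
        [0., 0., 1.,  0., f],
    ])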

Unable to run demo.sh

I've installed all the prerequisites, and the program should run, but it doesn't... I'm getting an error at line 70 of colmap_wrapper.py, where the subprocess call fails...

(Here's my setup: Nvidia mx250, Ubuntu 19.10, tensorflow 1.13)
(Is this a known issue with Ubuntu 19.10, or has it not been tested yet?)

Here's my entire traceback:

output.txt
errors.txt:

PC: @     0x7f92c7012ae8 ceres::internal::ProgramEvaluator<>::Evaluate()
*** SIGSEGV (@0x0) received by PID 22202 (TID 0x7f92acda5700) from PID 0; stack trace: ***
    @     0x7f92c7452641 (unknown)
    @     0x7f92c67dd540 (unknown)
    @     0x7f92c7012ae8 ceres::internal::ProgramEvaluator<>::Evaluate()
    @     0x7f92c709265f ceres::internal::TrustRegionMinimizer::EvaluateGradientAndJacobian()
    @     0x7f92c7092f4a ceres::internal::TrustRegionMinimizer::IterationZero()
    @     0x7f92c70972d4 ceres::internal::TrustRegionMinimizer::Minimize()
    @     0x7f92c7088cbc ceres::Solver::Solve()
    @     0x7f92c70899b9 ceres::Solve()
    @     0x55bb3bb1a2eb colmap::BundleAdjuster::Solve()
    @     0x55bb3bb78037 colmap::IncrementalMapper::AdjustGlobalBundle()
    @     0x55bb3bac3f0c (unknown)
    @     0x55bb3bac521d colmap::IncrementalMapperController::Reconstruct()
    @     0x55bb3bac6a9b colmap::IncrementalMapperController::Run()
    @     0x55bb3bbd7dfc colmap::Thread::RunFunc()
    @     0x7f92c5ad9f74 (unknown)
    @     0x7f92c67d1669 start_thread
    @     0x7f92c578f323 clone
Traceback (most recent call last):
  File "imgs2poses.py", line 11, in <module>
    gen_poses(args.scenedir)
  File "/home/abhigyan/Code/CVProject/LLFF/llff/poses/pose_utils.py", line 265, in gen_poses
    run_colmap(basedir)
  File "/home/abhigyan/Code/CVProject/LLFF/llff/poses/colmap_wrapper.py", line 70, in run_colmap
    map_output = ( subprocess.check_output(mapper_args, universal_newlines=True) )
  File "/home/abhigyan/anaconda3/lib/python3.7/subprocess.py", line 395, in check_output
    **kwargs).stdout
  File "/home/abhigyan/anaconda3/lib/python3.7/subprocess.py", line 487, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['colmap', 'mapper', '--database_path', 'data/testscene/database.db', '--image_path', 'data/testscene/images', '--output_path', 'data/testscene/sparse', '--Mapper.num_threads', '16', '--Mapper.init_min_tri_angle', '4', '--Mapper.multiple_models', '0', '--Mapper.extract_colors', '0']' died with <Signals.SIGSEGV: 11>.
Traceback (most recent call last):
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/home/abhigyan/anaconda3/lib/python3.7/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/home/abhigyan/anaconda3/lib/python3.7/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "imgs2mpis.py", line 10, in <module>
    from llff.inference.mpi_utils import run_inference
  File "/home/abhigyan/Code/CVProject/LLFF/llff/inference/mpi_utils.py", line 6, in <module>
    from llff.inference.mpi_tester import DeepIBR
  File "/home/abhigyan/Code/CVProject/LLFF/llff/inference/mpi_tester.py", line 1, in <module>
    import tensorflow as tf
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/home/abhigyan/anaconda3/lib/python3.7/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/home/abhigyan/anaconda3/lib/python3.7/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory


Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions.  Include the entire stack trace
above this error message when asking for help.
mkdir: cannot create directory ‘data/testscene/outputs/’: File exists
Traceback (most recent call last):
  File "imgs2renderpath.py", line 34, in <module>
    poses, bds = load_data(args.scenedir, load_imgs=False)
  File "/home/abhigyan/Code/CVProject/LLFF/llff/poses/pose_utils.py", line 195, in load_data
    poses_arr = np.load(os.path.join(basedir, 'poses_bounds.npy'))
  File "/home/abhigyan/anaconda3/lib/python3.7/site-packages/numpy/lib/npyio.py", line 428, in load
    fid = open(os_fspath(file), "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'data/testscene/poses_bounds.npy'
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
demo.sh: line 24: 22221 Aborted                 (core dumped) cuda_renderer/cuda_renderer data/testscene/mpis_360 data/testscene/outputs/test_path.txt data/testscene/outputs/test_vid.mp4 360 .8 18

Please tell me what I need to do to get it running.
