
scnerf's Issues

Parameter setting different from paper

In scripts/main_table_2/fern/main2_fern_ours.sh, last line is:

--ft_path logs/main1_fern_nerf/200000.tar

which means main1_fern_nerf is used to initialize the model.
But that 200000-iteration checkpoint in the Table 1 NeRF setting is trained with --run_without_colmap both,
while in the paper the Table 2 results are initialized with COLMAP camera information. So the first 200000 iterations should be trained with --run_without_colmap none, not --run_without_colmap both.

According to the description above, these settings conflict.
I think maybe it should instead be

--ft_path logs/main2_fern_nerf/200000.tar

Is that right?

Question about downsampling factor

Hi,
Thanks for sharing your work!

I am confused about the downsampling factor for the LLFF dataset. In your config file, the downsampling factor is set to 8. However, in the original NeRF repo, it is set to 4 according to this. I am curious why you do not follow the original setting; this may make the comparison unfair.

How to use pretrained weights for inference on custom datasets?

All the scripts mentioned are training the NeRF on a dataset and evaluating them.
Is it possible to run the pretrained model on any set of images captured by a camera?
Or is the pipeline such that end-to-end training has to happen for every set of camera images?

Problems running colmap_utils script

Hi,

Thanks for the repo. I was trying to run SCNeRF with only images, but after looking at the code and related issues, it seems I need to run the colmap_utils script nonetheless. However, I hit several errors when running it:

File "/home/SCNeRF/colmap_utils/read_sparse_model.py", line 378, in main
    depth_ext = os.listdir(os.path.join(args.working_dir, "depth"))[0][-4:]

and

File "/home/SCNeRF/colmap_utils/post_colmap.py", line 33, in load_colmap_data
    with open(os.path.join(realdir, "train.txt"), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/data/TUM_desk_rgb/train.txt'

I checked the code; I think the error happens because COLMAP does not directly output depth, and I have no idea what train.txt is. Could you double-check that the provided script works for a pure RGB dataset all the way through?
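As a workaround while waiting for an answer, a small pre-flight check can make the missing inputs explicit before the script crashes. This is a sketch based only on the tracebacks above (a `depth/` directory expected by read_sparse_model.py and a `train.txt` split file expected by post_colmap.py, neither of which COLMAP itself produces); the function name is hypothetical:

```python
import os

def check_colmap_inputs(working_dir):
    """Report the extra files colmap_utils appears to expect beyond raw
    COLMAP output: a non-empty depth/ directory and a train.txt split file.
    Sketch inferred from the tracebacks, not from the repo's docs."""
    problems = []
    depth_dir = os.path.join(working_dir, "depth")
    if not os.path.isdir(depth_dir) or not os.listdir(depth_dir):
        problems.append("missing or empty depth directory: " + depth_dir)
    split_file = os.path.join(working_dir, "train.txt")
    if not os.path.isfile(split_file):
        problems.append("missing split file: " + split_file)
    return problems

# Usage: print warnings instead of crashing deep inside the script.
# for msg in check_colmap_inputs("/data/TUM_desk_rgb"):
#     print("WARNING:", msg)
```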

And if possible, it would be very helpful if you could provide a more detailed guide on how to run with only image inputs.

Thanks in advance!

Ablation study on the Tanks and Temples dataset

Hi,
thanks for your great work! I have some questions about applying this work to large-scale datasets.
1. In your paper you ran an ablation study on the LLFF dataset for IE, OD, and PRD; did you also run the ablation study on the Tanks and Temples dataset?
2. Is it feasible to apply this work to large-scale datasets without initial poses from COLMAP?
3. The BARF (Bundle-Adjusting Neural Radiance Fields) paper mentions that it is hard to optimize NeRF and poses jointly because of the positional encoding. In your experiments, did you find it necessary to change the positional encoding function as BARF suggests?
@chrischoy @minsucho @soskek @joonahn
Looking forward to your reply!

Suspicious data leakage?

Hi, I found that you use SuperGlue and SuperPoint as the feature extractor and matcher. As far as I know, these two algorithms are trained with supervision, so is there any suspicion of data leakage here? This could affect the fairness of your experiments: the COLMAP-based pose information is not data-driven, while your method implicitly draws on external data, unless your experiments are based entirely on SIFT and a brute-force matcher (BFMatcher).

Possible errors on your setup

Hi

First of all thank you very much for uploading your code

I think you have a typo in the requirement.txt file; shouldn't it be named "requirements.txt"?

On the other hand, to run demo.sh at least, the symlink should be created at ./data/, not data/nerf_llff_data; otherwise an error is raised.

Thanks,

Demo error

Hello,
I really appreciate your great work!

I tried to run demo.sh as specified (I only changed the number of iterations), but it gave me the following error.

Any advice could be appreciated.
(screenshot of the error attached)

Question about the equations in the paper

Q1
(equation screenshot attached)
Is it true that each element of n is divided by c, not f?

Also, what does the value p' mean?
Is it the undistorted pixel?

Q2
(equation screenshot attached)
In this equation, I'm not sure why $z_d$ is multiplied twice.

Q3
(equation screenshot attached)
In this equation, why divide it by $r_{A,d} \cdot r_{A,d}$ instead of $||r_{A,d}||$ ?

Q4
(equation screenshot attached)
In this equation, should the $\partial L$ in the last term be $\partial r$?

Does `--dataset_type custom` work for 360 scenes with photos only ?

Two questions, if you don't mind:

  1. Does the code in "custom" branch with --dataset_type custom work for 360 degree scenes like the "tanks_and_temples" images ? Or is it only for forward facing scenes like LLFF fern dataset?
  2. If it does work for 360 scenes, can you confirm that it doesn't need any COLMAP camera parameters, initialization, etc. ?

Scripts for paper's results

Thanks for your wonderful code.

I'd like to reproduce the results in supplementary Table 3 of your paper.

But the flower dataset with demo.sh behaves a little differently.
Could I get some advice on reproducing the setup?

Here is the wandb link from my environment.

Thanks

Implementing SCNeRF on custom dataset

Hi @jeongyw12382 ,

I have a set of images, and I know the FOV and the θ, Φ 3D angles for each image.
Would it be possible for me to train the NeRF model without COLMAP?
Unfortunately, COLMAP doesn't work well on my dataset; I get an error saying:
ERROR: the correct camera poses for current points cannot be accessed
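If COLMAP is unavailable, known angles can at least provide an initial guess. Below is a sketch of one common convention (similar to the spherical-pose helper in the original NeRF Blender loader) that turns azimuth θ, elevation Φ, and an assumed orbit radius into a 4x4 camera-to-world matrix, plus a pinhole focal length from the FOV. The radius and axis conventions are assumptions you would need to adapt to your capture rig:

```python
import numpy as np

def pose_spherical(theta_deg, phi_deg, radius):
    """Camera-to-world matrix for a camera on a sphere of the given radius,
    looking at the origin. theta_deg: azimuth, phi_deg: elevation.
    A sketch of the convention used by NeRF-style Blender loaders."""
    trans = np.eye(4)
    trans[2, 3] = radius  # push camera out along +z
    phi = np.deg2rad(phi_deg)
    rot_phi = np.array([          # rotate about the x-axis (elevation)
        [1, 0, 0, 0],
        [0, np.cos(phi), -np.sin(phi), 0],
        [0, np.sin(phi),  np.cos(phi), 0],
        [0, 0, 0, 1],
    ])
    th = np.deg2rad(theta_deg)
    rot_theta = np.array([        # rotate about the y-axis (azimuth)
        [np.cos(th), 0, -np.sin(th), 0],
        [0, 1, 0, 0],
        [np.sin(th), 0,  np.cos(th), 0],
        [0, 0, 0, 1],
    ])
    return rot_theta @ rot_phi @ trans

def focal_from_fov(width_px, fov_deg):
    # Pinhole focal length in pixels from the horizontal field of view.
    return 0.5 * width_px / np.tan(0.5 * np.deg2rad(fov_deg))
```

For a picture every 10 degrees, you would call `pose_spherical(10 * i, phi, radius)` for i = 0..35.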

Error trying the script on custom image set

I have two streams taken from vertically aligned cameras about which I have no information. The streams look like the following:

Camera 1:
(animated frames attached)

Camera 2:
(animated frames attached)

where the plant rotates, so it is the only moving object in the scene.

I wanted to try your method to obtain the intrinsics of these cameras (so that I can 3D-reconstruct the plant), so I turned the video streams into images (.png) and ran the following on camera 1's images:

bash colmap_utils/colmap.sh ./images/

However, I received output that I could not make sense of:

colmap_utils/colmap.sh: line 7: colmap: command not found
colmap_utils/colmap.sh: line 13: colmap: command not found
colmap_utils/colmap.sh: line 19: colmap: command not found
Traceback (most recent call last):
  File "/home/bla/anaconda3/envs/icn/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/bla/anaconda3/envs/icn/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 417, in <module>
    main()
  File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 369, in main
    cameras, images, points3D = read_model(path=model_path, ext=".bin")
  File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 305, in read_model
    cameras = read_cameras_binary(os.path.join(path, "cameras" + ext))
  File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 120, in read_cameras_binary
    with open(path_to_model_file, "rb") as fid:
FileNotFoundError: [Errno 2] No such file or directory: './images/sparse/0/cameras.bin'
Post-colmap
Traceback (most recent call last):
  File "/home/bla/anaconda3/envs/icn/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/bla/anaconda3/envs/icn/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/bla/Desktop/SCNeRF/colmap_utils/post_colmap.py", line 266, in <module>
    gen_poses(args.working_dir)
  File "/home/bla/Desktop/SCNeRF/colmap_utils/post_colmap.py", line 247, in gen_poses
    poses, pts3d, perm = load_colmap_data(basedir)
  File "/home/bla/Desktop/SCNeRF/colmap_utils/post_colmap.py", line 13, in load_colmap_data
    camdata = read_model.read_cameras_binary(camerasfile)
  File "/home/bla/Desktop/SCNeRF/colmap_utils/read_sparse_model.py", line 120, in read_cameras_binary
    with open(path_to_model_file, "rb") as fid:
FileNotFoundError: [Errno 2] No such file or directory: './images/sparse/0/cameras.bin'

What am I doing wrong?
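Reading the log, the `colmap: command not found` lines are the root cause: the COLMAP binary is not on PATH, so `sparse/0/cameras.bin` is never written, and both FileNotFoundError tracebacks are downstream of that. A minimal guard to run first (hypothetical, not part of the repo's scripts):

```shell
#!/usr/bin/env bash
# Fail early with a clear message if a required binary is missing,
# instead of letting later scripts crash on missing COLMAP output.
require_cmd() {
    command -v "$1" >/dev/null 2>&1 || {
        echo "ERROR: '$1' not found on PATH; install it first" >&2
        return 1
    }
}

if require_cmd colmap; then
    bash colmap_utils/colmap.sh ./images/
fi
```

COLMAP can typically be installed from a package manager (e.g. `sudo apt install colmap`) or built from source.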

IndexError: index 0 is out of bounds for axis 0 with size 0

Training Done
Starts Train Rendering
0%| | 0/17 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/user/linzejun01/linzejun_mutiply_view01/SCNeRF-master/NeRF/run_nerf.py", line 1047, in <module>
    train()
  File "/home/user/linzejun01/linzejun_mutiply_view01/SCNeRF-master/NeRF/run_nerf.py", line 973, in train
    rgbs, disps = render_path(
  File "/home/user/linzejun01/linzejun_mutiply_view01/SCNeRF-master/NeRF/render.py", line 157, in render_path
    rgb, disp, acc, _ = render(
  File "/home/user/linzejun01/linzejun_mutiply_view01/SCNeRF-master/NeRF/render.py", line 44, in render
    idx_in_camera_param=np.where(i_map==image_idx)[0][0]
IndexError: index 0 is out of bounds for axis 0 with size 0
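The IndexError fires when `image_idx` never occurs in `i_map`, i.e. the image being rendered has no entry in the learned camera-parameter map (for example, a render split whose indices were never registered during training). A defensive version of the failing lookup, as a sketch with a hypothetical helper name:

```python
import numpy as np

def camera_param_index(i_map, image_idx):
    """Safe version of `np.where(i_map == image_idx)[0][0]` from render.py.
    Returns None when image_idx has no learned camera parameters, so the
    caller can raise a readable error instead of crashing mid-render."""
    matches = np.where(np.asarray(i_map) == image_idx)[0]
    if matches.size == 0:
        return None
    return int(matches[0])
```

Checking how `i_map` is built for the train-render split would then pinpoint which indices are missing.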

proj_ray_dist_threshold

Hello. Thank you for great paper and code.

I have one small question.

The threshold value of the projected ray distance loss is set to 5; is there a reason you chose this value?

Also, is this threshold used even when the camera parameters are initialized with an identity matrix and a zero vector (the self-calibration experiments)?
When the camera parameters are coarse or bad, I would expect the proj_ray_dist loss to be much larger than 5. Wasn't it?
I wonder whether this threshold still works in that case.
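For concreteness, here is one way such a threshold could plausibly be applied: masking out correspondences whose ray distance exceeds it, so that gross outliers (e.g. under identity-initialized cameras) do not dominate the average. This is a guess at the mechanism, not the repository's actual implementation:

```python
import numpy as np

def masked_prd_loss(ray_dists, threshold=5.0):
    """Projected-ray-distance loss with an outlier threshold (a sketch).
    Assumption: correspondences with distance >= threshold are dropped
    from the mean rather than clipped; this is hypothetical."""
    ray_dists = np.asarray(ray_dists, dtype=float)
    valid = ray_dists < threshold
    if not valid.any():
        return 0.0  # no reliable correspondences this step
    return float(ray_dists[valid].mean())
```

Under this reading, a large threshold mainly controls which matches contribute early in training, which is why the identity-initialization case matters.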

Thank you!

ERROR Error while calling W&B API: project not found (<Response [404]>)

Loaded SuperPoint model
Loaded SuperGlue model ("outdoor" weights)
wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice: 2
wandb: You chose 'Use an existing W&B account'
wandb: You can find your API key in your browser here: https://wandb.ai/authorize
wandb: Paste an API key from your profile and hit enter:
wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc
wandb: ERROR Error while calling W&B API: project not found (<Response [404]>)
Thread SenderThread:
Traceback (most recent call last):
  File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/sdk/lib/retry.py", line 102, in call
    result = self._call_fn(*args, **kwargs)
  File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/sdk/internal/internal_api.py", line 138, in execute
    six.reraise(*sys.exc_info())
  File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/six.py", line 719, in reraise
    raise value
  File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/sdk/internal/internal_api.py", line 132, in execute
    return self.client.execute(*args, **kwargs)
  File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/vendor/gql-0.2.0/gql/client.py", line 52, in execute
    result = self._get_result(document, *args, **kwargs)
  File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/vendor/gql-0.2.0/gql/client.py", line 60, in _get_result
    return self.transport.execute(document, *args, **kwargs)
  File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/wandb/vendor/gql-0.2.0/gql/transport/requests.py", line 39, in execute
    request.raise_for_status()
  File "/root/anaconda3/envs/icn/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://api.wandb.ai/graphql

Bug with a custom image set

First, I generated COLMAP poses for a custom image set. I modified demo.sh for my own dataset, but it ran into the following problem.
(screenshot of the error attached)

Reproducing the results (Table 4)

Hello, thanks for your great work.

I am interested in training the FishEyeNeRF dataset with the NeRF++ model.
Specifically, I would like to reproduce the NeRF++[RD] presented in Table 4 of the paper using the code you provided.

Would this be possible?

Thanks

How to get more detail about the camera pose?

Hello author,

Thank you for sharing your code. I want to recover the camera pose corresponding to each image taken around a circle, for example one picture every 10 degrees. After training the network, I find that the results in logs are some images and some .tar files. Can I get information about the camera poses, such as a 4x4 matrix?
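The .tar files under logs/ are PyTorch checkpoints, so whether (and under what key) learned per-image poses are stored depends on the repository's state-dict layout. A small, hypothetical helper to inspect what a checkpoint actually contains, so pose-like entries can be spotted by shape:

```python
def summarize_checkpoint(ckpt):
    """List each checkpoint entry with its tensor shape (or type name).
    Look for camera/pose-related keys, e.g. arrays shaped (N, 4, 4)."""
    lines = []
    for key, value in ckpt.items():
        shape = getattr(value, "shape", None)
        lines.append((key, tuple(shape) if shape is not None else type(value).__name__))
    return lines

# Usage (the path is an example; adjust to your run's log directory):
#   import torch
#   ckpt = torch.load("logs/my_run/200000.tar", map_location="cpu")
#   for key, shape in summarize_checkpoint(ckpt):
#       print(key, shape)
```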

Looking forward to your reply.

about main table 1

Thank you for sharing your code.
I'm trying to reproduce the results in main Table 1.
I have now fully trained the NeRF results (not the 'ours' results), and all of the values are slightly worse than those in the table.
Below are the test-set results, the train-set results, and the results from the paper.

Test set results:

scene (NeRF)   psnr      ssim      lpips     prd
flower         13.628    0.2909    0.7835    nan
fortress       15.618    0.4311    0.6794    nan
leaves         12.734    0.1451    0.7938    nan
trex           12.419    0.3743    0.6729    nan

Train set results:

scene (NeRF)   psnr      ssim      lpips     prd
flower         13.062    0.2887    0.8028    nan
fortress       13.539    0.3868    0.7249    nan
leaves         12.38599  0.143     0.819662  nan
trex           12.58406  0.425573  0.692024  nan

Results in the paper:

scene (NeRF)   psnr      ssim      lpips     prd
flower         13.8      0.302     0.716     nan
fortress       16.3      0.524     0.445     nan
leaves         13.01     0.18      0.687     nan
trex           15.7      0.409     0.575     nan

Can you give me a clue?
Also, I wonder which split (train/val/test) was used for the table.

How to run experiment with only photos?

Hi,

I would like to run an experiment with your model on a list of pictures of an object, to get the estimated camera pose for each picture. How can I set up that experiment?

Thanks in advance,
