Comments (16)

hmdolatabadi commented on August 16, 2024

@jeongyw12382 Hi. Thanks for the interesting paper and for providing the code. I read through the custom-code run snippet that you provided above:

python run_nerf.py --config configs/llff_data/flower.txt --expname $(basename "${0%.*}") --chunk 8192 --N_rand 1024 --camera_model pinhole_rot_noise_10k_rayo_rayd --ray_loss_type proj_ray_dist --multiplicative_noise True --i_ray_dist_loss 10 --grid_size 10 --ray_dist_loss_weight 0.0001 --N_iters 800001 --ray_o_noise_scale 1e-3 --ray_d_noise_scale 1e-3 --add_ie 200000 --add_od 400000 --add_prd 600000 --lrate_decay 400 --dataset_type custom --run_without_colmap both

Is this going to run NeRF or SCNeRF? I ask because I noticed a small difference from the .sh files in the original repo, where you also add --ft_path to run SCNeRF.
Also, could you please tell us how, after training is done, we can render a video using the trained model?

Thanks in advance for your help.

hmdolatabadi commented on August 16, 2024

@jeongyw12382 Thanks for your prompt reply; I appreciate it a lot. Could you also answer my second question about how I can generate a video sequence of the scenes using the model after training? Thanks.

jeongyw12382 commented on August 16, 2024

The code in the "custom" branch will not be merged into the master branch, since it is only for reference. The script above is a sample for running the code in the "custom" branch.
If you have further questions about extending it to other datasets, feel free to mail me at "[email protected]".

franciscoWizz commented on August 16, 2024

OK, thank you so much. I'll give it a try and let you know.

vishnukool commented on August 16, 2024

Hi @jeongyw12382
Two more questions, if you don't mind:

  1. Does the code in the "custom" branch with --dataset_type custom work for 360-degree scenes like the "tanks_and_temples" images? Or is it only for forward-facing scenes like the LLFF fern dataset?
  2. If it does work for 360-degree scenes, can you confirm that it doesn't need any COLMAP camera parameters, initialization, etc.?

jeongyw12382 commented on August 16, 2024

@hmdolatabadi
The script you've mentioned will run SCNeRF, since the camera model is set to "pinhole_rot_noise_10k_rayo_rayd". --ft_path loads a pre-trained model, so if you run the four stages independently, you should add --ft_path to load the pre-trained model from the previous stage. However, if you run the script above, the code will automatically run the four stages sequentially. If you need more help, please let me know.
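For intuition, here is a rough sketch of how the schedule flags in the sample command could stage the training. The mapping of --add_ie / --add_od / --add_prd to specific calibration terms is inferred from the flag names and this thread, not taken from the SCNeRF source, so treat it as an illustration only:

    # Hypothetical sketch (not SCNeRF's actual control flow): how the sample
    # command's schedule flags could gate the four sequential training stages.
    def active_calibration_terms(step, add_ie=200000, add_od=400000, add_prd=600000):
        terms = ["nerf_photometric"]                 # stage 1: plain NeRF loss
        if step >= add_ie:
            terms.append("intrinsics_extrinsics")    # stage 2: camera parameters
        if step >= add_od:
            terms.append("ray_origin_direction")     # stage 3: per-ray noise terms
        if step >= add_prd:
            terms.append("projected_ray_distance")   # stage 4: PRD loss
        return terms

    print(active_calibration_terms(450000))
    # ['nerf_photometric', 'intrinsics_extrinsics', 'ray_origin_direction']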

jeongyw12382 commented on August 16, 2024

@vishnukool
I'll respond to these questions in the new issue you opened.

hmdolatabadi commented on August 16, 2024

@jeongyw12382 Thanks a lot. I will check out the links and functions and see how it goes. Thanks again.

jeongyw12382 commented on August 16, 2024

We have not adapted the data loader, since our code assumes camera information is available so that we can compare NeRF + COLMAP against our model. A simple trick is to remove the file-loading parts of the LLFF loader and set all the poses to be equal, as sketched below.
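To make the trick concrete, here is a minimal sketch of what "set all the poses to be equal" could look like. The helper name and the NeRF-style (N, 3, 5) pose layout (rotation, translation, and an [H, W, focal] column) are assumptions for illustration, not code from the repo:

    # Hypothetical helper, not the actual SCNeRF loader: give every image the
    # same identity extrinsic so the model self-calibrates from scratch.
    import numpy as np

    def make_identity_poses(num_images, height, width, focal_guess):
        # NeRF-style LLFF poses: [R | t | hwf] per image, shape (N, 3, 5).
        pose = np.concatenate(
            [np.eye(3), np.zeros((3, 1)),
             np.array([[height], [width], [focal_guess]])], axis=1)
        return np.repeat(pose[None], num_images, axis=0).astype(np.float32)

    poses = make_identity_poses(num_images=30, height=756, width=1008,
                                focal_guess=0.5 * 1008)  # rough focal init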

I've just added sample code for running with custom images in the new branch "custom". The code there may not be perfect, since it has not been fully verified. Furthermore, I'm not sure what kind of task you are planning to do, so evaluating the projected ray distance loss on the test set is currently unavailable in the "custom" branch; you should modify the code slightly to reflect your idea. In the current version, the "test" set is equal to the "train" set.

python run_nerf.py --config configs/llff_data/flower.txt --expname $(basename "${0%.*}") --chunk 8192 --N_rand 1024 --camera_model pinhole_rot_noise_10k_rayo_rayd --ray_loss_type proj_ray_dist --multiplicative_noise True --i_ray_dist_loss 10 --grid_size 10 --ray_dist_loss_weight 0.0001 --N_iters 800001 --ray_o_noise_scale 1e-3 --ray_d_noise_scale 1e-3 --add_ie 200000 --add_od 400000 --add_prd 600000 --lrate_decay 400 --dataset_type custom --run_without_colmap both

Don't forget to add "--run_without_colmap both" when running the code. Omitting it might result in an incorrect initialization. Feel free to ask further questions about code usage.

jeongyw12382 commented on August 16, 2024

Please let me know if the newest version works fine in your environment, @franciscoWizz. After your confirmation, I'll close the issue.

xufengfan96 commented on August 16, 2024

> OK, thank you so much. I'll give it a try and let you know.

Hello,

Did you try to get the camera pose of each image as a 4x4 matrix containing the rotation and translation? And did you succeed in getting it?

Looking forward to your reply.

jeongyw12382 commented on August 16, 2024

@xufengfan96 Sorry for the late reply. I have added a description in the new issue you just opened.

jeongyw12382 commented on August 16, 2024

Please reopen this issue whenever you need further help with it.

jeongyw12382 commented on August 16, 2024

@hmdolatabadi

The video rendering steps differ depending on the data.

  1. Tanks and Temples
    Render the full train and test data and connect the rendered images to get the video sequence.

  2. LLFF

    images, noisy_extrinsic, bds, render_poses, i_test, gt_camera_info

    When you render the poses in the variable "render_poses", you get a natural image sequence. Then concatenate the images to generate a video. (Spherified Camera Pose)

  3. Synthetic

    images, noisy_extrinsic, render_poses, hwf, i_split, gt_camera_info

    When you render the poses in the variable "render_poses", you get a natural image sequence. Then concatenate the images to generate a video. (360 Camera Pose)

I also recommend the links below for generating a camera path and rendering videos.
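For concreteness, the render-and-concatenate step for cases 2 and 3 might look like the sketch below. It assumes the NeRF-style render_path helper and render_kwargs_test dictionary from the original NeRF codebase; SCNeRF's actual signatures may differ:

    # Sketch only: render one RGB frame per pose, then write them as a video.
    # `render_path`, `hwf`, `K`, and `render_kwargs_test` follow the original
    # NeRF code and are assumptions here.
    import imageio
    import numpy as np
    import torch

    with torch.no_grad():
        rgbs, _ = render_path(render_poses, hwf, K, args.chunk, render_kwargs_test)

    to8b = lambda x: (255 * np.clip(x, 0, 1)).astype(np.uint8)
    imageio.mimwrite('render_video.mp4', to8b(rgbs), fps=30, quality=8)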

hmdolatabadi commented on August 16, 2024

@jeongyw12382 I tried training a model on images only, and everything went well. Now, to generate a video sequence, I tried the code above, but all of it needs the render_poses variable. I guess my original question comes back to this: when you only have images, how do you generate the render_poses variable so that you can render images? Thanks.

jeongyw12382 commented on August 16, 2024

@hmdolatabadi

It depends on the type of camera trajectory you want to render.

  1. If you want to generate a 360-scene video, then, based on the estimated poses, you can generate spherical poses with the code here.

    render_poses = torch.stack([pose_spherical(angle, -30.0, 4.0)  # as in NeRF's load_blender.py
                                for angle in np.linspace(-180, 180, 41)[:-1]], 0)

  2. If you want to reproduce the camera trajectory of the train/val/test set (assuming it is a video), then there are two choices; see the interpolation sketch after this list.

  • If there are sufficiently many frames, you can directly re-render the train/val/test poses to generate the video.
  • If there are too few frames, you should implement code that estimates an intermediate camera trajectory from the given frames (poses).

  3. If you want to generate a circular path (refer to the LLFF dataset videos), then you should use the code below.

    render_poses = render_path_spiral(c2w_path, up, rads, focal, zdelta, zrate=.5, rots=N_rots, N=N_views)

  4. Otherwise, you will probably need to implement code that generates a camera trajectory yourself.
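For the too-few-frames case in option 2, a minimal interpolation sketch follows. It assumes (N, 4, 4) camera-to-world matrices and uses SciPy's Slerp; the helper is hypothetical, not part of the SCNeRF repo:

    # Hypothetical helper: densify a sparse pose sequence by slerping the
    # rotations and linearly interpolating the translations between keyframes.
    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def interpolate_poses(c2w, n_between=10):
        """c2w: (N, 4, 4) camera-to-world matrices; returns a denser (M, 4, 4)."""
        keys = np.arange(len(c2w))
        slerp = Slerp(keys, Rotation.from_matrix(c2w[:, :3, :3]))
        out = []
        for t in np.linspace(0, len(c2w) - 1, (len(c2w) - 1) * n_between + 1):
            i = min(int(t), len(c2w) - 2)    # segment index
            a = t - i                        # fraction within the segment
            pose = np.eye(4)
            pose[:3, :3] = slerp([t]).as_matrix()[0]                       # rotation
            pose[:3, 3] = (1 - a) * c2w[i, :3, 3] + a * c2w[i + 1, :3, 3]  # translation
            out.append(pose)
        return np.stack(out)

    render_poses = interpolate_poses(estimated_c2w)  # e.g. poses from the trained model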
