[CVPR 2024 Highlight] Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering

Home Page: https://city-super.github.io/scaffold-gs

License: Other

Topics: gaussian-splatting, reconstruction, rendering, cvpr2024, 3d

scaffold-gs's Introduction

Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering

Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, Bo Dai

[Project Page] [arXiv]

News

[2024.04.05] Scaffold-GS is selected as a 🎈 highlight in CVPR 2024.

[2024.03.27] 🎈 We release Octree-GS, which supports an explicit LOD representation and renders large-scale scenes faster with high quality.

[2024.03.26] 🎈 We release GSDF, which improves rendering and reconstruction quality simultaneously.

[2024.02.27] Accepted to CVPR 2024.

[2024.01.22] We add the appearance embedding to Scaffold-GS to handle in-the-wild scenes.

[2024.01.22] 🎈👀 The viewer for Scaffold-GS is available now.

[2023.12.10] We release the code.

TODO List

  • Explore removing the MLP module
  • Improve the training configuration system

Overview

We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians and predicts their attributes on-the-fly from the viewing direction and distance within the view frustum.

Our method performs particularly well on scenes with challenging viewing conditions, e.g. transparency, specularity, reflection, texture-less regions, and fine-scale details.
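
To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of the anchor-based decoding (feat_dim, k_offsets, and mlp_opacity are illustrative names, not the repository's exact code):

import torch
import torch.nn as nn

# Each anchor carries a feature vector and k learnable offsets; small MLPs
# decode per-Gaussian attributes from the feature concatenated with the
# viewing direction and distance to the camera.
feat_dim, k_offsets = 32, 10
mlp_opacity = nn.Sequential(
    nn.Linear(feat_dim + 3 + 1, 32), nn.ReLU(),
    nn.Linear(32, k_offsets), nn.Tanh())

anchor_xyz = torch.rand(100, 3)           # anchor positions
anchor_feat = torch.rand(100, feat_dim)   # per-anchor features
cam_center = torch.zeros(3)

ob_view = anchor_xyz - cam_center         # per-anchor viewing direction
ob_dist = ob_view.norm(dim=1, keepdim=True)
ob_view = ob_view / ob_dist

# k opacities per anchor, predicted on-the-fly for the current view
opacity = mlp_opacity(torch.cat([anchor_feat, ob_view, ob_dist], dim=1))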

Installation

We tested on a server running Ubuntu 18.04 with CUDA 11.6 and GCC 9.4.0. Other similar configurations should also work, but we have not verified each one individually.

  1. Clone this repo:
git clone https://github.com/city-super/Scaffold-GS.git --recursive
cd Scaffold-GS
  2. Install dependencies:
SET DISTUTILS_USE_SDK=1 # Windows only
conda env create --file environment.yml
conda activate scaffold_gs

Data

First, create a data/ folder inside the project path by

mkdir data

The data structure will be organised as follows:

data/
├── dataset_name
│   ├── scene1/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └── 0/
│   ├── scene2/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │       └── 0/
...

Public Data

The BungeeNeRF dataset is available on Google Drive / Baidu Netdisk (access code: 4whv). The MipNeRF360 scenes are provided by the paper authors here; we test on the scenes bicycle, bonsai, counter, garden, kitchen, room, and stump. The SfM datasets for Tanks&Temples and Deep Blending are hosted by 3D-Gaussian-Splatting here. Download and uncompress them into the data/ folder.

Custom Data

For custom data, process the image sequences with COLMAP to obtain the SfM points and camera poses, then place the results into the data/ folder, e.g. as in the sketch below.
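
As a rough guide, the standard COLMAP pipeline driven from Python might look like this (run_colmap and scene_dir are placeholders; it assumes COLMAP is installed and scene_dir contains an images/ folder, so the reconstruction lands in sparse/0/ as in the layout above):

import os
import subprocess

# Standard COLMAP SfM pipeline: extract features, match, then reconstruct.
def run_colmap(scene_dir):
    db = f"{scene_dir}/database.db"
    os.makedirs(f"{scene_dir}/sparse", exist_ok=True)
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", db,
                    "--image_path", f"{scene_dir}/images"], check=True)
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", db], check=True)
    subprocess.run(["colmap", "mapper",
                    "--database_path", db,
                    "--image_path", f"{scene_dir}/images",
                    "--output_path", f"{scene_dir}/sparse"], check=True)

(mapper writes numbered model folders under sparse/, typically sparse/0.)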

Training

Training multiple scenes

To train multiple scenes in parallel, we provide batch training scripts:

  • Tanks&Temples: train_tnt.sh
  • MipNeRF360: train_mip360.sh
  • BungeeNeRF: train_bungee.sh
  • Deep Blending: train_db.sh
  • NeRF Synthetic: base -> train_nerfsynthetic.sh; with warmup -> train_nerfsynthetic_withwarmup.sh

Run them with:

bash train_xxx.sh

Notice 1: Make sure you have enough GPUs and memory to run these scenes at the same time.

Notice 2: Each process occupies many CPU cores, which may slow down training. Set torch.set_num_threads(32) accordingly in train.py to alleviate this, for example as shown below.
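
For example (placed near the top of train.py; 32 is just a starting value to tune to your core count):

import torch

# Cap the CPU threads used by this process so several parallel training
# runs do not contend for cores.
torch.set_num_threads(32)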

Training a single scene

For training a single scene, modify the path and configurations in single_train.sh accordingly and run it:

bash ./single_train.sh
  • scene: scene name, in the format dataset_name/scene_name/ or scene_name/;
  • exp_name: user-defined experiment name;
  • gpu: the GPU id to run the code on; '-1' denotes using the most idle GPU;
  • voxel_size: size for voxelizing the SfM points; a smaller value gives finer structure but higher overhead, and '0' means using the median of each point's 1-NN distance as the voxel size (see the sketch after this list);
  • update_init_factor: initial resolution for growing new anchors; a larger value starts placing new anchors at a coarser resolution.
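
For intuition, here is a naive sketch of that voxel_size=0 default (it assumes all points fit in memory; the repository may compute it more efficiently):

import torch

# Median 1-nearest-neighbor distance of the SfM points as the voxel size.
def default_voxel_size(points):              # points: (N, 3) float tensor
    d = torch.cdist(points, points)          # (N, N) pairwise distances
    d.fill_diagonal_(float("inf"))           # exclude self-distance
    nn_dist = d.min(dim=1).values            # 1-NN distance per point
    return nn_dist.median().item()

print(default_voxel_size(torch.rand(1000, 3)))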

For the public datasets, the 'voxel_size' and 'update_init_factor' configurations can be taken from the batch training scripts above.

This script automatically stores the log (together with a copy of the running code) in outputs/dataset_name/scene_name/exp_name/cur_time.

Evaluation

We've integrated the rendering and metrics calculation into the training code, so when training completes, the rendering results, FPS, and quality metrics are printed automatically, and the renderings are saved in the log directory. Note that the FPS is roughly estimated by

torch.cuda.synchronize(); t_start = time.time()
# rendering ...
torch.cuda.synchronize(); t_end = time.time()

which may differ somewhat from the original 3D-GS measurement, but this does not affect the analysis.
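
Spelled out, the estimate amounts to something like the following sketch (render_fn and views are placeholder names, not the repository's API):

import time
import torch

# Rough FPS: total wall-clock time over all test views, with CUDA
# synchronization so asynchronous GPU work is fully counted.
def estimate_fps(render_fn, views):
    torch.cuda.synchronize(); t_start = time.time()
    for view in views:
        render_fn(view)
    torch.cuda.synchronize(); t_end = time.time()
    return len(views) / (t_end - t_start)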

Meanwhile, we keep a manual rendering function whose usage is similar to its counterpart in 3D-GS; you can run it with

python render.py -m <path to trained model> # Generate renderings
python metrics.py -m <path to trained model> # Compute error metrics on renderings

Viewer

The viewer for Scaffold-GS is available now.

Contact

Citation

If you find our work helpful, please consider citing:

@inproceedings{scaffoldgs,
  author    = {Lu, Tao and Yu, Mulin and Xu, Linning and Xiangli, Yuanbo and Wang, Limin and Lin, Dahua and Dai, Bo},
  title     = {Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024},
}

LICENSE

Please follow the LICENSE of 3D-GS.

Acknowledgement

We thank all authors of 3D-GS for their excellent work.

scaffold-gs's People

Contributors

eveneveno, ingra14m, inspirelt, mulinyu, tongji-rkr


scaffold-gs's Issues

Why is the training result good on my own data, but the test result poor?

Training progress: 100%|__________________________________________________________| 30000/30000 [1:58:29<00:00, 4.22it/s, Loss=0.0295401]
2024-03-14 03:49:43,489 - INFO:
[ITER 30000] Evaluating test: L1 0.2103012849887212 PSNR 11.615882555643717
2024-03-14 03:50:12,150 - INFO:
[ITER 30000] Evaluating train: L1 0.01723341103643179 PSNR 32.359920501708984

Viewing training or rendering output

Hi,
thanks for the source code of this wonderful project. I managed to train a model, although I stopped after 7k iterations.
How do I view the output? The regular SIBR viewer crashes, and the one you include does not build:

CMake Error at src/CMakeLists.txt:79 (add_subdirectory):
add_subdirectory given source "core/view" which is not an existing
directory.

render.py gives: FileNotFoundError: [Errno 2] No such file or directory: 'outputs/truck/train/ours_7000/renders/00000.png'

Thx

error about building SIBR in the project

Thank you for your great work! The src/core under the SIBR you provided does not have the view folder, but there is an add_subdirectory(core/view) line in the CMakeLists. I would like to know whether this view folder is necessary.

Train error: missing 2 arguments: 'kernel_size' and 'subpixel_offset'

Thank you for Scaffold-GS, I like it very much!

python train.py  ^
-s E:\AI\A07\240310\win_cuda118\input\TaiDi_1280_35=5x7 ^
-m E:\AI\A07\240310\win_cuda118\input\TaiDi_1280_35=5x7\output ^
--voxel_size 0.001 --update_init_factor 16 ^
--appearance_dim 0 --ratio 1


Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
D:\conda\envs\cuda118\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
D:\conda\envs\cuda118\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: D:\conda\envs\cuda118\lib\site-packages\lpips\weights\v0.1\vgg.pth
found tf board
2024-03-25 02:53:13,816 - INFO: args: Namespace(sh_degree=3, feat_dim=32, n_offsets=10, voxel_size=0.001, 
update_depth=3, update_init_factor=16, update_hierachy_factor=4, use_feat_bank=False, 
source_path='E:\\AI\\A07\\240310\\win_cuda118\\input\\TaiDi_1280_35=5x7', 
model_path='E:\\AI\\A07\\240310\\win_cuda118\\input\\TaiDi_1280_35=5x7\\output', images='images', 
resolution=-1, white_background=False, data_device='cuda', eval=False, lod=0, appearance_dim=0, lowpoly=False, 
ds=1, ratio=1, undistorted=False, add_opacity_dist=False, add_cov_dist=False, add_color_dist=False, 
iterations=30000, position_lr_init=0.0, position_lr_final=0.0, position_lr_delay_mult=0.01, 
position_lr_max_steps=30000, offset_lr_init=0.01, offset_lr_final=0.0001, offset_lr_delay_mult=0.01, 
offset_lr_max_steps=30000, feature_lr=0.0075, opacity_lr=0.02, scaling_lr=0.007, rotation_lr=0.002, 
mlp_opacity_lr_init=0.002, mlp_opacity_lr_final=2e-05, mlp_opacity_lr_delay_mult=0.01, 
mlp_opacity_lr_max_steps=30000, mlp_cov_lr_init=0.004, mlp_cov_lr_final=0.004, mlp_cov_lr_delay_mult=0.01, 
mlp_cov_lr_max_steps=30000, mlp_color_lr_init=0.008, mlp_color_lr_final=5e-05, mlp_color_lr_delay_mult=0.01, 
mlp_color_lr_max_steps=30000, mlp_featurebank_lr_init=0.01, mlp_featurebank_lr_final=1e-05, 
mlp_featurebank_lr_delay_mult=0.01, mlp_featurebank_lr_max_steps=30000, appearance_lr_init=0.05, 
appearance_lr_final=0.0005, appearance_lr_delay_mult=0.01, appearance_lr_max_steps=30000, percent_dense=0.01, 
lambda_dssim=0.2, start_stat=500, update_from=1500, update_interval=100, update_until=15000, 
min_opacity=0.005, success_threshold=0.8, densify_grad_threshold=0.0002, convert_SHs_python=False, 
compute_cov3D_python=False, debug=False, ip='127.0.0.1', port=6009, debug_from=-1, detect_anomaly=False, 
warmup=False, use_wandb=False, test_iterations=[7000, 30000], save_iterations=[7000, 30000, 30000], quiet=False, 
checkpoint_iterations=[], start_checkpoint=None, gpu='0')

$CUDA_VISIBLE_DEVICES
2024-03-25 02:53:13,825 - INFO: using GPU 0
2024-03-25 02:53:13,827 - INFO: save code failed~
2024-03-25 02:53:13,827 - INFO: Optimizing E:\AI\A07\240310\win_cuda118\input\TaiDi_1280_35=5x7\output
Output folder: E:\AI\A07\240310\win_cuda118\input\TaiDi_1280_35=5x7\output [25/03 02:53:13]
Reading camera 35/35 [25/03 02:53:13]
start fetching data from ply file [25/03 02:53:13]
Loading Training Cameras [25/03 02:53:13]
Loading Test Cameras [25/03 02:53:14]
Initial voxel_size: 0.001 [25/03 02:53:14]
Number of points at initialisation :  5579 [25/03 02:53:14]
Training progress:   0%|                                                                                                             | 0/30000 [00:00<?, ?it/s]Traceback (most recent call last):
  
File "E:\AI\A07\240310\win_cuda118\train.py", line 536, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), dataset,  args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, wandb, logger)
  
File "E:\AI\A07\240310\win_cuda118\train.py", line 136, in training
    voxel_visible_mask = prefilter_voxel(viewpoint_cam, gaussians, pipe,background)
  
File "E:\AI\A07\240310\win_cuda118\gaussian_renderer\__init__.py", line 203, in prefilter_voxel
    raster_settings = GaussianRasterizationSettings(

TypeError: <lambda>() missing 2 required positional arguments: 'kernel_size' and 'subpixel_offset'

Training progress:   0%|                                                                                                             | 0/30000 [00:00<?, ?it/s]

positional encoding

Hi, thanks for the great work and congrats on the CVPR acceptance. I saw in the code that ob_dist and ob_view are directly concatenated with the anchor feature and fed to the MLPs. Would using positional encoding (as in NeRF) further improve the view-dependent appearance rendering, since it captures high-frequency variations? Thanks.
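
For reference, the NeRF-style positional encoding being suggested looks like this (illustrative sketch, not part of the Scaffold-GS code):

import torch

# gamma(x): concatenate sin/cos of the input at exponentially growing
# frequencies, so the MLP can represent high-frequency variation.
def positional_encoding(x, num_freqs=4):
    freqs = (2.0 ** torch.arange(num_freqs)) * torch.pi  # (L,)
    xb = x[..., None] * freqs                            # (..., D, L)
    return torch.cat([xb.sin(), xb.cos()], dim=-1).flatten(-2)  # (..., 2*D*L)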

Render depth map

Thank you for your outstanding work. Does the code implement an interface for rendering depth maps?

Question about anchor adjustment

Hi, thank you for your great work!

With your anchor growing and pruning design, do you think there could be a case where all the neural Gaussians from one anchor move into the voxel of another anchor? In that case the first anchor would never be pruned, because the opacity values of the Gaussians from that anchor are still large enough. Just like the picture from your paper, there are some anchors floating around the object. Or is this acceptable? Maybe I have misunderstood; please correct me.

Thanks!

error about SIBR_viewer

Hello, this is excellent work. However, when I compile the render tools in SIBR_viewer, I run into a problem where Torch cannot be found. Do you know how I can solve this? Thank you so much.


self.use_feat_bank's impact on the results

Hi! I have a few questions:

  1. How can we decide whether self.use_feat_bank should be False or True, and what impact does this have on the results? I found the results worsened after setting self.use_feat_bank to True.
  2. Why can't self.appearance_dim be equal to zero when self.use_feat_bank is set to true?

Looking forward to your reply, thanks!


How to view ply in supersplat

Hi,
This work is great, but the result cannot be displayed in existing tools like SuperSplat and Unity.
Do you have any idea how to deal with this problem?

Actual training speed

The training converges faster in terms of iterations, but how about the actual speed (wall-clock time)? Since you use MLPs to decode the features (and the features are multi-resolution), I assume the training speed per iteration is slower than 3DGS. So what would the graph look like if we compared with 3DGS, with time on the x-axis and PSNR on the y-axis?

Testing accuracy gap

Thanks for your great work!

Did you encounter the problem that the test accuracy is much lower than the training accuracy?
My reproduced test PSNR is only 19.75 (25.34 in the paper), while the training PSNR matches the paper.

I repeated the experiment many times, but the problem persists.

Looking forward to your reply.


Error when saving a checkpoint

self._local is never defined, but capture() tries to save it:

Traceback (most recent call last):
  File "/irip/liyihui_2022/job/Scaffold-GS/train.py", line 527, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), dataset,  args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, wandb, logger)
  File "/irip/liyihui_2022/job/Scaffold-GS/train.py", line 187, in training
    torch.save((gaussians.capture(), iteration), scene.model_path + "/chkpnt" + str(iteration) + ".pth")
  File "/irip/liyihui_2022/job/Scaffold-GS/scene/gaussian_model.py", line 158, in capture
    self._local,
AttributeError: 'GaussianModel' object has no attribute '_local'

I also found that self.denom has the same problem:

Traceback (most recent call last):
  File "/irip/liyihui_2022/job/Scaffold-GS/train.py", line 527, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), dataset,  args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, wandb, logger)
  File "/irip/liyihui_2022/job/Scaffold-GS/train.py", line 187, in training
    torch.save((gaussians.capture(), iteration), scene.model_path + "/chkpnt" + str(iteration) + ".pth")
  File "/irip/liyihui_2022/job/Scaffold-GS/scene/gaussian_model.py", line 158, in capture
    self.denom,
AttributeError: 'GaussianModel' object has no attribute 'denom'

Question about getting the size

Hello, it's great work! I want to know how to get the Mem. metric; when I run this work, I didn't get Mem. in the results.
Thanks
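
One way to measure peak GPU memory with PyTorch (this may not be how the paper's Mem. figure was produced; illustrative only):

import torch

# Reset the peak-memory counter, run the workload, then read the peak.
torch.cuda.reset_peak_memory_stats()
# ... run training or rendering here ...
peak_mb = torch.cuda.max_memory_allocated() / 1024 ** 2
print(f"peak GPU memory: {peak_mb:.1f} MB")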

nvrtc: error: invalid value for --gpu-architecture (-arch)

Thanks for the great work.

I tried to run the code in single_train.sh with COLMAP data.

However, I got the following error:

  }
};

extern "C"
__launch_bounds__(512, 4)
__global__ void reduction_prod_kernel(ReduceJitOp r){
  r.run();
}
nvrtc: error: invalid value for --gpu-architecture (-arch)

Training progress:   0%|                                                   | 0/30000 [00:00<?, ?it/s]

What could be the problem?
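
One quick, hedged diagnostic for this class of nvrtc error (it usually points at a mismatch between the installed CUDA toolkit and the GPU architecture; not a confirmed fix):

import torch

# Compare the CUDA version PyTorch was built with against the GPU's
# compute capability; nvrtc rejects architectures the toolkit predates.
print("torch CUDA:", torch.version.cuda)
print("compute capability:", torch.cuda.get_device_capability())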

Floaters in Free Camera Trajectories scenes

Hello!
I have tried to use Scaffold-GS to render scenes with free camera trajectories from the F2-NeRF dataset (https://totoro97.github.io/projects/f2-nerf/).

The rendering of train/test camera views works fine, but when I want to create new views from the side (changing the T vector), a lot of floating Gaussians appear.

Here is an example:

[Render of the SfM point cloud]

It is worth noting that the point cloud from SfM is quite sparse, but how can the appearance of floaters be avoided for such scenes?

Please!! When will you release your code?

Hi, sorry to bother you. I'm very inspired by your work; however, I encountered some problems when trying to reproduce the paper. When will you release your code?

When will you release your rendering tool?

I ran the network viewer once my training started, but it didn't automatically pick up the training. Do I need to pass any additional command-line arguments?
SIBR_viewers/install/bin/SIBR_gaussianViewer_app doesn't seem to pick up the training. I have the training on the same machine, but the SIBR app is in a different directory.
Also, for rendering, why can't we just use the SIBR viewer from the original Gaussian Splatting authors?

submodules not defined

There is no .gitmodules file, so I don't know which version of diff_gaussian_rasterization you are using. There seems to be a visible_filter function that you added on top of the original code:

radii_pure = rasterizer.visible_filter(means3D=means3D,
                                       scales=scales[:, :3],
                                       rotations=rotations,
                                       cov3D_precomp=cov3D_precomp)

This function cannot be found.

Reproduce NeRF Synthetic result

Thanks for sharing this nice code.
I am trying to reproduce the result on the NeRF Synthetic dataset.
I found that there is no predefined configuration for it, so I used the default settings from the single_train.sh file, like:

scene='nerf_synthetic/lego'
exp_name='lego'
voxel_size=0.01
update_init_factor=16
gpu=3

./train.sh -d ${scene} -l ${exp_name} --gpu ${gpu} --voxel_size ${voxel_size} --update_init_factor ${update_init_factor}

After 30k iterations I got 32.05 dB for the lego scene, which is 3.64 dB lower than in your paper. I think the default settings I used are not the optimal training options. Could you share your testing options?

Keys Comparison with Vanilla GS

I am trying to render with a web-based tool which expects the following keys from the original GS .ply file:

color = np.array([
    0.5 + SH_C0 * v["f_dc_0"],
    0.5 + SH_C0 * v["f_dc_1"],
    0.5 + SH_C0 * v["f_dc_2"],
    1 / (1 + np.exp(-v["opacity"])),
])

Can you please comment on what the alternative keys are in the .ply file saved by your method? I couldn't find the keys (property float f_dc_0, property float f_dc_1, property float f_dc_2) in your method's .ply file.
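
To see which keys a given .ply actually stores, one can list its elements and properties, e.g. with the plyfile package (illustrative; the path is a placeholder):

from plyfile import PlyData

# Print every element and its property names in the .ply file.
ply = PlyData.read("point_cloud.ply")
for el in ply.elements:
    print(el.name, [p.name for p in el.properties])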

question about appearance embedding

Hi, I see that the appearance embedding is constructed based on the number of views in the training cameras, and when shifting to eval mode, the uid of the test camera is used directly to query the learned embedding.

If I understand the appearance embedding correctly, it is set up so that view-dependent effects can be better encoded; but since the test cameras and train cameras have different views, their uids mean different things in this respect. Wouldn't querying the same learned embedding lead to the wrong effect? Thanks.
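
For context, the lookup being described amounts to roughly this sketch (hypothetical sizes; not the repository's exact code):

import torch
import torch.nn as nn

# One learned appearance vector per training camera, indexed by camera uid.
num_train_cams, embed_dim = 100, 32
appearance = nn.Embedding(num_train_cams, embed_dim)

uid = torch.tensor([7])   # at eval time, a test camera's uid indexes
feat = appearance(uid)    # an embedding learned for a training view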

Training result has no color

Hi there,

Thank you for the great work.

I was testing with a custom dataset, and the result looks very strange. It's all black. Any insight into this?


Thank you so much

PSNR about Mip-NeRF360

The original paper (3DGS) reports 27.21 on Mip-NeRF360; in your paper, it is 28.69. What processing did you do?

error about building SIBR in the project

Thanks for your great work. Some errors occur when building SIBR:

Scaffold-GS/SIBR_viewers/src/projects/gaussianviewer/renderer/GaussianView.cpp:550:53: error: 'visible_filter' is not a member of 'CudaRasterizer::Rasterizer'
  550 | CudaRasterizer::Rasterizer::visible_filter(
Scaffold-GS/SIBR_viewers/src/projects/gaussianviewer/renderer/GaussianView.cpp:556:68: warning: 'T* at::Tensor::data() const [with T = float]' is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
  556 | ak_pos_all.contiguous().data<float>(),

Looking forward to your reply, thanks! @inspirelt

ply issues

This project turned out really great, but the .ply files don't seem to be the same as the usual splat files. Existing web viewers and Unity projects can't display them properly. When viewing in the SuperSplat web viewer, all the scenes are just black.

error about the prebuilt viewer on Windows

Hi, thanks for your great work. I got the following error when using the prebuilt viewer on Windows (CUDA 11.4, RTX 3080):

LINE 720, FUNC sibr::GaussianView::onRenderIBR A CUDA error occurred during rendering:an illegal memory access was encountered. Please rerun in Debug to find the exact line!

Looking forward to your reply, thanks!

How to load the dataset from VR-NeRF?

Great work!
The paper shows some results on the dataset proposed in VR-NeRF; however, the training scripts are not in the repo.
Could you provide the loading code or converter code for evaluating that dataset?

Debug CUDA

Nice work!
I'd like to ask a fairly general question: when writing CUDA, is the method shown in the screenshot below the usual way to debug? Any pointers appreciated.

[screenshot of the debugging method]

Training on 4K resolution

Hi! I'm trying to train on scenes at higher resolutions but getting worse results. What changes/optimizations do you suggest to achieve better reconstruction on 2K or 4K images?

Thanks!

can't see the generated result

I tried the compiled SIBR_viewer you provide, as well as one I compiled myself, and both end with a white screen and an immediate crash. This is what it shows:
[SIBR] -- INFOS --: OpenGL Version: 4.6.0 NVIDIA 537.34[major: 4, minor: 6]
[SIBR] -- INFOS --: Dataset type:
Number of input Images to read: 257
Number of Cameras set up: 257
LOADSFM: Try to open C:\Users\lcl124252\Desktop\code\Scaffold-GS\data\mydataset\scene1/sparse/0/points3D.bin
Num 3D pts 65407
[SIBR] -- INFOS --: SfM Mesh 'C:\Users\lcl124252\Desktop\code\Scaffold-GS\data\mydataset\scene1/sparse/0/points3d.bin successfully loaded. (65407) vertices detected. Init GL ...
[SIBR] -- INFOS --: Init GL mesh complete
[SIBR] -- INFOS --: Loading 251516 Anchor points
[SIBR] -- INFOS --: opacity_mlp : 0
[SIBR] -- INFOS --: cov_mlp : 0
[SIBR] -- INFOS --: color_mlp : 0
[SIBR] -- INFOS --: embedding_appearance : 0

How did you configure your test set?

Thanks for a great paper.

I was wondering, how did you evaluate the numbers like PSNR, SSIM, etc. in your paper?

My question is: how many test views did you hold out of each dataset to evaluate the numbers?

I ask because it doesn't seem to be directly mentioned in the paper.

train without mlp

Hi, thanks for your great work. Is it possible to modify the anchor points without the MLPs, in order to turn Scaffold-GS into an explicit model for Unity/UE or other applications?
@tongji-rkr
Looking forward to your reply, thanks!

How are the values for new anchors given during anchor_growing?

I am having a hard time understanding how the parameters for new anchors are given.

From what I have understood, the offsets are initialized as 0's.

But I cannot understand how the new anchors' features (new_feat) are created.

  • Also, why are new_scaling, new_rotation, and new_opacities created when mlp_cov and mlp_opacity are being used?
