
gaustudio

GauStudio is a modular framework that supports and accelerates research and development in the rapidly advancing field of 3D Gaussian Splatting (3DGS) and its diverse applications.



Dataset

To comprehensively evaluate the robustness of 3DGS methods under diverse lighting conditions, materials, and geometric structures, we have curated the following datasets:

1. Collection of 5 Synthetic Datasets in COLMAP Format

We have integrated 5 synthetic datasets: nerf_synthetic, refnerf_synthetic, nero_synthetic, nsvf_synthetic, and BlendedMVS, totaling 143 scenes with complex geometry, materials, and lighting. To ensure compatibility, we used COLMAP to perform feature matching and triangulation based on the original poses, uniformly converting all data to the COLMAP format.

2. Real-world Scenes with High-quality Normal Annotations and LoFTR Initialization

  • COLMAP-format MuSHRoom: To address the difficulty of acquiring indoor scene data such as ScanNet++, we have processed and generated COLMAP-compatible data based on the publicly available MuSHRoom dataset. Please remember to use this data under the original license.

  • More Complete Tanks and Temples: To address the lack of ground truth poses in the Tanks and Temples test set, we have converted the pose information provided by MVSNet to generate COLMAP-format data. This supports algorithm evaluation on a wider range of indoor and outdoor scenes. The leaderboard submission script will be released in a subsequent version.

  • Normal Annotations and LoFTR Initialization: To tackle modeling challenges such as sparse viewpoints and specular-highlight regions, we have annotated high-quality, temporally consistent normal data based on our private model, providing new avenues for indoor and unbounded-scene 3DGS reconstruction. The annotation code will be released soon. Additionally, we provide LoFTR-based initial point clouds to support better initialization.

Installation

The following installation steps have been tested on Ubuntu 20.04. Windows is not officially tested; if you encounter issues during installation on Windows, please report them and we will try to resolve them.

Prerequisites

  • NVIDIA graphics card with at least 6GB VRAM
  • CUDA installed
  • Python >= 3.8

Optional Step: Create a Conda Environment

It is recommended to create a conda environment before proceeding with the installation. You can create a conda environment using the following commands:

# Create a new conda environment
conda create -n gaustudio python=3.8
# Activate the conda environment
conda activate gaustudio

Step 1: Install PyTorch

You will need to install PyTorch. The software has been tested with torch 1.12.1+cu113 and torch 2.0.1+cu118, but other recent versions should also work. You can install PyTorch as follows:

# Example command to install PyTorch version 1.12.1+cu113
conda install pytorch=1.12.1 torchvision=0.13.1 cudatoolkit=11.3 -c pytorch

# Example command to install PyTorch version 2.0.1+cu118
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

Step 2: Install Dependencies

Install the necessary dependencies by running the following command:

pip install -r requirements.txt

Step 3: Install Custom Rasterizer and GauStudio

cd submodules/gaustudio-diff-gaussian-rasterization
python setup.py install
cd ../../
python setup.py develop

Optional Step: Install PyTorch3D

If you require mesh rendering and further mesh refinement, you can install PyTorch3D by following its official installation instructions:

QuickStart

Mesh Extraction for 3DGS

gaustudio

Prepare the input data

We currently support the output directory generated by most Gaussian Splatting methods (e.g., 3DGS, mip-splatting, GaussianPro) with the following minimal structure:

- output_dir
    - cameras.json (necessary)
    - point_cloud 
        - iteration_xxxx
            - point_cloud.ply (necessary)

We are preparing demo data (coming soon) for quick-start testing.

Running the Mesh Extraction

To extract a mesh from the input data, run the following command:

gs-extract-mesh -m ./data/1750250955326095360_data/result -o ./output/1750250955326095360_data

Replace ./data/1750250955326095360_data/result with the path to your input output_dir. Replace ./output/1750250955326095360_data with the desired path for the output mesh.
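If you have several scenes, the same invocation can be scripted. Below is a hedged sketch that builds the command line shown above for each scene, assuming `gs-extract-mesh` is on your PATH; the wrapper itself is illustrative, not part of GauStudio:

```python
# Sketch: batch several gs-extract-mesh runs. The flags match the command
# shown above; the wrapper is illustrative, not part of GauStudio.
import subprocess

def extract_mesh_cmd(model_dir, out_dir):
    """Argument list for one gs-extract-mesh invocation."""
    return ["gs-extract-mesh", "-m", model_dir, "-o", out_dir]

def run_batch(pairs, dry_run=True):
    """pairs: iterable of (model_dir, out_dir). Set dry_run=False to execute."""
    cmds = [extract_mesh_cmd(m, o) for m, o in pairs]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # assumes gs-extract-mesh is on PATH
    return cmds
```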

Binding Texture to the Mesh

The output data is organized in the same format as mvs-texturing. Follow these steps to add texture to the mesh:

  • Compile the mvs-texturing repository on your system.
  • Add the build/bin directory to your PATH environment variable
  • Navigate to the output directory containing the mesh.
  • Run the following command:
texrecon ./images ./fused_mesh.ply ./textured_mesh --outlier_removal=gauss_clamping --data_term=area --no_intermediate_results
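Forgetting the PATH step above is a common stumbling block. A small wrapper can check that `texrecon` is actually reachable before launching it; this is a sketch with an assumed helper name, not part of GauStudio:

```python
# Sketch: verify texrecon is on PATH before running it. If shutil.which
# returns None, mvs-texturing's build/bin is probably not on PATH yet.
import shutil
import subprocess

def texrecon_cmd(images_dir, mesh_ply, out_prefix):
    """Argument list matching the texrecon command shown above."""
    return ["texrecon", images_dir, mesh_ply, out_prefix,
            "--outlier_removal=gauss_clamping", "--data_term=area",
            "--no_intermediate_results"]

def run_texrecon(images_dir, mesh_ply, out_prefix):
    if shutil.which("texrecon") is None:
        raise RuntimeError("texrecon not found on PATH; add mvs-texturing's build/bin")
    subprocess.run(texrecon_cmd(images_dir, mesh_ply, out_prefix), check=True)
```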

Plan of Release

GauStudio will support more 3DGS-based methods in the near future. If you are also interested in GauStudio and would like to improve it, you are welcome to submit a PR!

  • Release mesh extraction and rendering toolkit
  • Release common NeRF and NeuS dataset loaders and preprocessing code.
  • Release Semi-Dense, MVSplat-based, and DepthAnything-based Gaussians Initialization
  • Release of full pipelines for training
  • Release Gaussian Sky Modeling and Sky Mask Generation Scripts
  • Release VastGaussian Reimplementation
  • Release Mip-Splatting, Scaffold-GS, and Triplane-GS training
  • Release 'gs-viewer' for online visualization and 'gs-compress' for 3DGS postprocessing
  • Release SparseGS and FSGS training
  • Release SuGaR and GaussianPro training

BibTeX

If you found this library useful for your research, please consider citing:

@article{ye2024gaustudio,
  title={GauStudio: A Modular Framework for 3D Gaussian Splatting and Beyond},
  author={Ye, Chongjie and Nie, Yinyu and Chang, Jiahao and Chen, Yuantao and Zhi, Yihao and Han, Xiaoguang},
  journal={arXiv preprint arXiv:2403.19632},
  year={2024}
}

License

The code is released under the MIT License, except for the rasterizer. We also welcome commercial cooperation to advance 3DGS applications and address unresolved issues. If you are interested, please contact Chongjie at [email protected]

gaustudio's People

Contributors

eltociear, hugoycj, jiahao620, tao-11-chen


gaustudio's Issues

ModuleNotFoundError: No module named 'diff_surfel_rasterization'

When I try to run gs-extract-mesh, it fails with ModuleNotFoundError: No module named 'diff_surfel_rasterization'.
How can we install diff_surfel_rasterization? It does not seem to be present in submodules.

Can you provide any assistance or insight?

Mesh extraction takes too long.

Thanks for your amazing work.
I use the following command to extract the mesh:
gs-extract-mesh -m ./data/1750250955326095360_data/result -o ./output/1750250955326095360_data

but it takes too long (screenshot attached).

Is there any solution?

About DTU dataset

Thanks for your great work!
How can I train 3DGS with the DTU dataset? Will you support this in the future?

undefined symbol: _ZN2at4_ops5zeros4callEN3c108ArrayRefINS2_6SymIntEEENS2_8optionalINS2_10ScalarTypeEEENS6_INS2_6LayoutEEENS6_INS2_6DeviceEEENS6_IbEE

I use the following command to extract the mesh:

gs-extract-mesh -m ./data/1750250955326095360_data/result -o ./output/1750250955326095360_data

but the undefined-symbol error above occurs (screenshot attached).

When I later run pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118, my torch version changes to 2.2.1 (screenshot attached), and your requirements.txt does not pin exact versions.

Can you help me? Thank you so much.

Code for SparseGS has been released.

Thank you so much for compiling such a great repo!

As titled, the code for SparseGS has been released. We are excited to see your integration of SparseGS/FSGS. Best wishes!

Methods to Improve Indoor Scene Mesh

(screenshot attached)
Hello, thanks for this excellent work; it achieves very high quality on small objects.
However, when I generate a mesh for a scene like playroom, the result has many holes. Specifically, I run the original Gaussian Splatting for 30,000 iterations and then use GauS to generate the mesh; the result is shown in the screenshot above.
Thanks again for this excellent work!

mesh extraction process Killed

Hello,

after training the 3DGS model on my data, I tried extracting a mesh from it using your repository. However, the mesh extraction process gets killed without any error message. Do you have an idea why this happens? It always seems to get killed at around 80%.

Here is my output:

Loading trained model at iteration 30000
Loaded 99455 points from {path/to/my/data/tpose27}/point_cloud/iteration_30000/point_cloud.ply
Loading camera data from {path/to/my/data/tpose27}/cameras.json
  0%|                                                                                           | 0/11 [00:00<?, ?it/s]/home/imc/miniconda3/envs/gaustudio/lib/python3.8/site-packages/torch/functional.py:507: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3549.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
 82%|███████████████████████████████████████████████████████████████████▉               | 9/11 [05:52<01:12, 36.49s/it]Killed

SuGaR results in the GauStudio paper

Thanks for the impressive work!

For comparison purposes, I would like to ask how SuGaR's results (Fig. 4 in your paper) for the Blender dataset were obtained.

As the official SuGaR code does not provide such an adaptation (especially the dataloader and training settings), may I ask for the implementation details? It would be greatly appreciated if this code could be shared.

texrecon /images /renders/iteration_7000/fused_mesh.ply ./results/textured_mesh --outlier_removal=gauss_clamping --data_term=area --no_intermediate_results

texrecon (built on Mar 21 2024, 21:02:27)
Load and prepare mesh:
PLY Loader: comment https://github.com/mikedh/trimesh
Reading PLY: 71353 verts... 134136 faces... done.
Generating texture views:
No proper input scene descriptor given.
A input descriptor can be:
BUNDLE_FILE - a bundle file (currently onle .nvm files are supported)
SCENE_FOLDER - a folder containing images and .cam files
MVE_SCENE::EMBEDDING - a mve scene and embedding


ls images
100.png 114.png 128.png 141.png 155.png 169.png 182.png 196.png 209.png 222.png 236.png 24.png 263.png 277.png 290.png 303.png 317.png 330.png 344.png 358.png 371.png 385.png 399.png 411.png 51.png 65.png 79.png 92.png
101.png 115.png 129.png 142.png 156.png 16.png 183.png 197.png 20.png 223.png 237.png 250.png 264.png 278.png 291.png 304.png 318.png 331.png 345.png 359.png 372.png 386.png 39.png 412.png 52.png 66.png 7.png 93.png
102.png 116.png 12.png 143.png 157.png 170.png 184.png 198.png 210.png 224.png 238.png 251.png 265.png 279.png 292.png 305.png 319.png 332.png 346.png 35.png 373.png 387.png 3.png 413.png 53.png 67.png 80.png 94.png
103.png 117.png 130.png 144.png 158.png 171.png 185.png 199.png 211.png 225.png 239.png 252.png 266.png 27.png 293.png 306.png 31.png 333.png 347.png 360.png 374.png 388.png 400.png 414.png 54.png 68.png 81.png 95.png
104.png 118.png 131.png 145.png 159.png 172.png 186.png 19.png 212.png 226.png 23.png 253.png 267.png 280.png 294.png 307.png 32

An error occurred while generating the mesh

Thanks for your amazing work!

I use the following command to extract the mesh:
gs-extract-mesh -m ./data/1750250955326095360_data/result -o ./output/1750250955326095360_data

But the following error occurs:

Loading trained model at iteration 10000
./output/point_cloud/iteration_10000/point_cloud.ply
Traceback (most recent call last):
  File "/home/yxiong/anaconda3/envs/gaustudio/bin/gs-extract-mesh", line 33, in <module>
    sys.exit(load_entry_point('gaustudio', 'console_scripts', 'gs-extract-mesh')())
  File "/home/yxiong/gaustudio/gaustudio/gaustudio/scripts/extract_mesh.py", line 64, in main
    pcd.load(os.path.join(args.model,"point_cloud", "iteration_" + str(loaded_iter), "point_cloud.ply"))
  File "/home/yxiong/gaustudio/gaustudio/gaustudio/models/base.py", line 68, in load
    assert len(names) == self.config["attributes"][elem]
AssertionError

Is there any solution?

How to extract a mesh from Scaffold-GS results in GauStudio

Thanks for the excellent work! How can I run Scaffold-GS results in GauStudio? Following vanilla.yml, I changed the 'vanilla' string inside it to 'scaffold' and then ran the command below:

gs-extract-mesh -m SCGS/result -o SCGS/result/output --config scaffold

The code errors out (screenshot attached). Could you tell me how to resolve this?

About Extract Mesh

Hi,
can I use the RGB and depth predicted by the nerfstudio splatfacto model and use your method to extract the mesh? Is this universal?

Thanks~

bash: texrecon: command not found

When I run texrecon ./images ./fused_mesh.ply ./results/textured_mesh --outlier_removal=gauss_clamping --data_term=area --no_intermediate_results, I get bash: texrecon: command not found (screenshot attached),

even though I had already run:

git clone https://github.com/nmoehrle/mvs-texturing.git
cd mvs-texturing
mkdir build && cd build && cmake ..
make (or make -j for parallel compilation)

Windows support issue

GauStudio makes use of VDBFusion (for meshing), which is currently supported only under Linux.
This is the error when running under Windows:
>pip install -r requirements.txt

ERROR: Could not find a version that satisfies the requirement vdbfusion (from versions: none)
ERROR: No matching distribution found for vdbfusion

Not sure if it helps to report this issue here.

Questions on params-based input data

Hi, it's an amazing work for gaussian splatting research.

It seems the input data format for mesh extraction is based on vanilla Gaussian Splatting with a .ply file.

I wonder, if I want to use param-based output data from some 3DGS algorithms (e.g. SplaTAM), whose output is a set of 3DGS params:

    params = {
        'means3D'
        'rgb_colors'
        'unnorm_rotations'
        'logit_opacities'
        'log_scales'
    }

how should I modify the scripts?
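As a starting point, the standard activations implied by those parameter names (sigmoid for logit_opacities, exp for log_scales, L2-normalization for unnorm_rotations) can be applied before export. This is a hedged sketch, not GauStudio's actual conversion code; how the activated values then map onto a specific point_cloud.ply layout depends on the loader:

```python
# Sketch of the activation mapping implied by the parameter names above.
# Illustrative only; not GauStudio's loader code.
import math

def activate_gaussian(logit_opacity, log_scale, unnorm_rotation):
    """Return (opacity, scales, unit quaternion) for one Gaussian."""
    opacity = 1.0 / (1.0 + math.exp(-logit_opacity))   # sigmoid
    scales = [math.exp(s) for s in log_scale]          # per-axis exp
    norm = math.sqrt(sum(q * q for q in unnorm_rotation))
    rotation = [q / norm for q in unnorm_rotation]     # normalize quaternion
    return opacity, scales, rotation
```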

Will the full training code be released?

The vision of this project is very exciting, and the creation of a modular Gaussian pipeline may simplify the whole process, but it seems this will require more time from the maintainers.
The previously expected release time was early May, but now May is almost past, lol; I had almost forgotten about this project.

About segmentation and object bounding box

Thanks for the great work, it's impressive. By the way, will you add support for 3D Gaussian point-cloud editing algorithms (segmentation, pseudo-labels, in-painting, removal, etc.) to this repo?

oneTBB 2021.5 error when compiling mvs-texturing

Hello,

thanks for your excellent work! I ran into a problem when trying to compile mvs-texturing with the following commands:
git clone https://github.com/nmoehrle/mvs-texturing.git
cd mvs-texturing
mkdir build && cd build && cmake ..
make (or make -j for parallel compilation)

But I get this error:

-- Setting build type to 'RELWITHDEBINFO' as none was specified.
CMake Error at elibs/tbb/FindTBB.cmake:187 (file):
  file failed to open for reading (No such file or directory):

    /usr/include/tbb/tbb_stddef.h
Call Stack (most recent call first):
  CMakeLists.txt:16 (FIND_PACKAGE)

System: Linux 22.04
TBB version 2021.5, which no longer includes tbb_stddef.h.

Has anyone else met this issue? Thanks a lot in advance!

Single frame point cloud

Excuse me, what should I do if I only need to input a single-frame point cloud? Do I still need to provide the camera pose?

Gaussian surface reconstruction guidance

How should I complete the Gaussian surface reconstruction after extracting the mesh with the provided command "gs-extract-mesh -m ./data/1750250955326095360_data/result -o ./output/1750250955326095360_data"? Is there any guidance available?

video of the path

Can the online visualizer "gs-viewer" draw a circle around an object and then produce a video of a path rotating around that object?

How to use Custom RGB Image as background during rendering

Hi,
This looks like an awesome library. I see that you are building support for background models. I was wondering if it would be possible to use a custom RGB image (3 channels) as the background during training, instead of a single background color as in vanilla 3DGS.
I want to reconstruct a scene with a single object with precise boundaries; however, the matting I apply to my multiview images (using off-the-shelf matting methods) is not 100% perfect, so I get some artefacts in my scene during rendering. Rendering with a pre-captured background could solve it, I guess, but I think that's not possible with vanilla 3DGS. Do you plan to provide something like that?

Installing this project on Windows 11

It seems that during installation there is a dependency, vdbfusion, which can only be installed on Linux; pip cannot find the package on Windows 11. Is there any way to work around this?

extract_mesh

Hello, I encountered a StopIteration error when running gs-extract-mesh, but running extract_mesh works fine for extracting the mesh. Are these two the same?

Thoughts on mesh extraction

Because the .ply point cloud produced by 3DGS contains a lot of surrounding noise, the extracted mesh quality is poor. I processed the point cloud with CC (e.g., denoising) before mesh extraction, but then got an error saying the .ply file is missing the opacity field. Is there any way to solve the problem of initial point-cloud noise degrading mesh quality?

PointCloud provided is empty

Thanks for your amazing work!

 gs-extract-mesh -m ../../gaussian-splatting/output/75744664-a/ -o ./output

I ran the command above and encountered this problem. How can I solve it?

Loading trained model at iteration 30000
Loaded 12195953 points from ../../gaussian-splatting/output/75744664-a/point_cloud/iteration_30000/point_cloud.ply
Loading camera data from ../../gaussian-splatting/output/75744664-a/cameras.json
  0%|                                                                                                                                                                                                                                                                                                                                               | 0/300 [00:00<?, ?it/s]/root/miniconda3/envs/gaussian_splatting/lib/python3.7/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1659484801627/work/aten/src/ATen/native/TensorShape.cpp:2894.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
PointCloud provided is empty
  0%|█                                                                                                                                                                                                                                                                                                                                      | 1/300 [00:01<06:04,  1.22s/it]PointCloud provided is empty
  1%|██▏                                                                                                                                                                                                                                                                                                                                    | 2/300 [00:01<02:52,  1.73it/s]PointCloud provided is empty
  1%|███▎                                                                                                                                                                                                                                                                                                                                   | 3/300 [00:01<01:57,  2.52it/s]PointCloud provided is empty
  1%|████▎                                                                                                                                                                                                                                                                                                                                  | 4/300 [00:01<01:31,  3.25it/s]

the structure of camera json file

Could you show an example cameras.json file? I want to convert a Gaussian model to a mesh but have no idea how to build cameras.json.

vdbfusion does not support windows

The vdbfusion package on pip does not support the Windows platform.
Perhaps its dependencies need to be modified to accommodate Windows.
