Comments (6)
--- COLMAP ISN'T YOUR ENEMY --- ;) Work with your dataset instead.
Why? COLMAP registering only a few of your images is the clearest signal that your dataset is bad and that you need to improve your capture skills.
A few more points:
- The most important thing the COLMAP route gives you is precise cameras.
- You can seed training with a random point cloud, but without cameras you get mediocre output. (No code modification is needed.)
- 1000 pictures taking forever (not an hour, but days)? You have a VRAM issue. Resize the pictures, select them more carefully, and keep all training in VRAM (not in swap), and you are golden.
- COLMAP is fine with 15-18 degrees of coverage between views.
- Still mad at COLMAP? Try the 'Kapture' route. GL
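The resizing advice above can be sketched with Pillow. This is a minimal illustration, not part of the original repo; the folder layout, function name, and the fixed `.png` extension are assumptions you would adapt to your own dataset:

import os
from pathlib import Path
from PIL import Image

def downscale_dataset(src_dir, dst_dir, factor=2):
    """Downscale every .png in src_dir by an integer factor so training fits in VRAM."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(src_dir).glob("*.png")):
        with Image.open(img_path) as im:
            w, h = im.size
            # Lanczos resampling keeps the downscaled images sharp for SfM
            im.resize((w // factor, h // factor), Image.LANCZOS).save(dst / img_path.name)

Run it once before feeding the images to COLMAP and the trainer; a 2x or 4x downscale is often the difference between swapping for days and finishing in hours.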
from gaussian-splatting.
I think you misunderstand my problem. I am trying to create a Gaussian splatting scene from a Blender project. That means I already have all the camera position information for my images and can create a point cloud from the project mesh in the same relative coordinate system. In fact, I have generated NeRF synthetic datasets with a similar method before (using the BlenderNeRF extension), and it seems that for Gaussian splatting, adding a point cloud to that representation would not be that hard.
That being said, is there an easy way to package all this information that I already have (images, camera positions, point cloud) into a dataset readable by this gaussian splatting training implementation?
Hi, do you have any idea about this problem now? I constructed my own synthetic dataset from meshes and projected images in the same way, but the loss did not change during training, the rendered scene was all black, and no valid 3D Gaussians seemed to be generated. I think this kind of question is worth further exploration.
@Dmitry-Filippov-Rival I think you can construct a COLMAP-format dataset from the known poses by following the tutorial "Reconstruct sparse/dense model from known camera poses". You can also combine point clouds (from LiDAR or another source) with the COLMAP outputs. Then follow the COLMAP dataset structure and use 3DGS to train on it.
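That known-poses route boils down to writing a COLMAP text model by hand. A minimal sketch of the file-writing step follows; the function name and the single shared PINHOLE camera are my assumptions, and note that COLMAP stores world-to-camera rotations and translations (Blender exports camera-to-world, so invert the poses before calling this):

import os

def write_known_pose_model(out_dir, width, height, fx, fy, cx, cy, images):
    """Write a COLMAP text model with known poses and no 3D points.

    images: list of (name, qvec, tvec) with qvec = (qw, qx, qy, qz) and
    tvec = (tx, ty, tz), both in COLMAP's world-to-camera convention.
    """
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "cameras.txt"), "w") as f:
        # CAMERA_ID MODEL WIDTH HEIGHT fx fy cx cy
        f.write(f"1 PINHOLE {width} {height} {fx} {fy} {cx} {cy}\n")
    with open(os.path.join(out_dir, "images.txt"), "w") as f:
        for i, (name, q, t) in enumerate(images, start=1):
            # IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
            f.write(f"{i} {q[0]} {q[1]} {q[2]} {q[3]} {t[0]} {t[1]} {t[2]} 1 {name}\n")
            f.write("\n")  # empty 2D-observations line (filled in by triangulation)
    # points3D.txt starts empty; point_triangulator populates it
    open(os.path.join(out_dir, "points3D.txt"), "w").close()

Per the tutorial, you then run COLMAP's feature extraction and matching on the images and triangulate against these fixed poses with `colmap point_triangulator --input_path <out_dir>`, which gives you a sparse cloud in the same coordinate system as your cameras.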
I went with a slightly different method in the end. There are existing Blender plugins to generate NeRF training data, as I mentioned before. After further code exploration, I realized that when training from a NeRF dataset, the algorithm generates a random point cloud, saves it as points3d.ply, and uses that to seed the Gaussians. All I had to do was save a .ply representation of my project in the training dataset folder as points3d.ply and make minor adjustments to the file reader (the code on main currently expects color data in the .ply file, which you cannot easily bake in Blender).
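For anyone curious what that random seed cloud looks like, here is a dependency-free sketch of writing one as an ASCII .ply. This is my approximation of what the NeRF-synthetic reader does (the repo samples on the order of 100k random points in a cube around the origin); the function name and the cube half-size `extent` are assumptions to match to your scene:

import numpy as np

def random_points3d_ply(path, num_pts=100_000, extent=1.3):
    """Write an ASCII .ply of uniformly random, randomly colored points."""
    xyz = np.random.random((num_pts, 3)) * 2 * extent - extent
    rgb = np.random.randint(0, 256, (num_pts, 3))
    with open(path, "w") as f:
        # Minimal ASCII PLY header: positions as floats, colors as uchar
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {num_pts}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for p, c in zip(xyz, rgb):
            f.write(f"{p[0]:.6f} {p[1]:.6f} {p[2]:.6f} {c[0]} {c[1]} {c[2]}\n")

Seeding from your actual mesh, as described above, converges faster than this random cube because the initial Gaussians already sit on the surface.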
For anyone looking to recreate my method: I am using BlenderNeRF to generate training data, as well as to render a few scenes from likely view directions for testing data (usually around 20-25). I then use the normal Blender export to get a .ply file of the scene and put it in the dataset as points3d.ply. The resulting dataset folder looks like so:
dataset
--test
----001.png
----...
--train
----001.png
----...
--points3d.ply
--transform_test.json
--transform_train.json
I also needed to modify scene/dataset_readers.py starting at line 107
def fetchPly(path, points=100000):
    plydata = PlyData.read(path)
    # Subsample the mesh vertices so the initial Gaussian count stays manageable
    vertices = np.random.choice(plydata['vertex'].data, points)
    positions = np.vstack([vertices['x'], vertices['y'], vertices['z']]).T
    try:
        colors = np.vstack([vertices['red'], vertices['green'], vertices['blue']]).T / 255.0
    except (KeyError, ValueError):  # the .ply has no baked vertex colors
        colors = np.zeros_like(positions)
    try:
        normals = np.vstack([vertices['nx'], vertices['ny'], vertices['nz']]).T
    except (KeyError, ValueError):  # the .ply has no vertex normals
        normals = np.zeros_like(positions)
    return BasicPointCloud(points=positions, colors=colors, normals=normals)
I sampled 100,000 points from the mesh vertices because with too many initialization points my resulting Gaussian splat folders ended up too large.
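For completeness, the transform_*.json files in that layout follow the NeRF-synthetic format: the 3DGS reader looks for `camera_angle_x` and, per frame, `file_path` and `transform_matrix`. A minimal sketch of writing one by hand, with placeholder numbers (BlenderNeRF fills these in from the real scene):

import json
import math

# Placeholder values -- the FoV and pose below are illustrative only.
transforms = {
    "camera_angle_x": math.radians(39.6),  # horizontal field of view, radians
    "frames": [
        {
            "file_path": "./train/001",    # the loader appends the image extension
            "transform_matrix": [          # 4x4 camera-to-world matrix
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 2.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        }
    ],
}
with open("transform_train.json", "w") as f:
    json.dump(transforms, f, indent=2)

If training produces all-black renders (as reported above), checking that these matrices really are camera-to-world in the expected axis convention is a good first debugging step.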