yihua7 / nerf-texture
[SIGGRAPH 2023, TPAMI 2024] Code for NeRF-Texture: Texture Synthesis with Neural Radiance Fields
Home Page: https://yihua7.github.io/NeRF-Texture-web/
Hello, I ran the training from scratch following the README, but the results did not meet expectations. The surface of the object obtained after "Start training NeRF-Texture" looks rather "furry", and the same thing happens when "Apply synthesized textures to shapes" is executed.
For example, the first picture below is the trained result, and the second is the original ngp. Are there any settings that need to be modified, or could there be another cause?
Thanks in advance for your help.
I am accessing a Linux server from an Apple M1 MacBook Pro via ssh -Y and trying to reproduce the results.
I have installed all the packages but get stuck on the following errors when extracting the coarse mesh.
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
I tried removing the --gui flag, but got this error:
CUDA_VISIBLE_DEVICES=0 python main_nerf.py --path data/my_white_lamp_base/ --workspace ./logs/my_white_lamp_base -O --bound 1.0 --scale 0.8 --dt_gamma 0 --ff --mode colmap
Namespace(H=1080, O=True, W=1920, bg_radius=-1, bound=1.0, ckpt='latest', clip_text='', color_space='srgb', cuda_ray=True, density_thresh=10, dt_gamma=0.0, error_map=False, ff=True, fovy=50, fp16=True, gui=False, iters=40000, lr=0.01, max_ray_batch=4096, max_spp=64, max_steps=1024, min_near=0.2, mode='colmap', num_rays=4096, num_steps=512, path='data/my_white_lamp_base/', preload=False, radius=5, rand_pose=-1, scale=0.8, seed=0, tcnn=False, test=False, upsample_steps=0, workspace='./logs/my_white_lamp_base')
Loading trainval data:: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 31/31 [00:00<00:00, 37.61it/s]
[INFO] dataset camera poses: radius = 0.9458, bound = 1.0
NeRFNetwork(
  (encoder): GridEncoder: input_dim=3 num_levels=16 level_dim=2 resolution=16 -> 2048 per_level_scale=1.3819 params=(6098120, 2) gridtype=hash align_corners=True
  (sigma_net): FFMLP: input_dim=32 output_dim=16 hidden_dim=64 num_layers=2 activation=0
  (encoder_dir): SHEncoder: input_dim=3 degree=4
  (color_net): FFMLP: input_dim=32 output_dim=3 hidden_dim=64 num_layers=3 activation=0
)
[INFO] Trainer: ngp | 2023-06-12_23-28-07 | cuda | fp16 | ./logs/my_white_lamp_base
[INFO] #parameters: 12214703
[INFO] Loading latest checkpoint ...
[WARN] No checkpoint found, model randomly initialized.
Loading val data:: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 37.55it/s]
[INFO] dataset camera poses: radius = 1.0909, bound = 1.0
==> Start Training Epoch 1, lr=0.010000 ...
0% 0/31 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "main_nerf.py", line 145, in <module>
    trainer.train(train_loader, valid_loader, max_epoch)
  File "/data/ruihan/projects/NeRF-Texture/nerf/utils.py", line 948, in train
    self.train_one_epoch(train_loader)
  File "/data/ruihan/projects/NeRF-Texture/nerf/utils.py", line 1322, in train_one_epoch
    self.model.update_gridfield(target_stage=int(self.global_step // self.num_iterations_per_stage))
  File "/data/ruihan/anaconda3/envs/ns/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'NeRFNetwork' object has no attribute 'update_gridfield'
0% 0/31 [00:00<?, ?it/s]
Did you encounter a similar OpenGL error, and if not, is there a workaround to obtain the training results without using --gui?
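In the meantime, here is the workaround I am testing for the AttributeError above (my own patch to nerf/utils.py, not a confirmed fix), on the assumption that the FFMLP-based NeRFNetwork selected by --ff simply does not implement update_gridfield:

# Hypothetical guard inside Trainer.train_one_epoch in nerf/utils.py
# (around line 1322): skip the stage update when the active network
# does not provide the method.
if hasattr(self.model, 'update_gridfield'):
    self.model.update_gridfield(target_stage=int(self.global_step // self.num_iterations_per_stage))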
Hearty congratulations on presenting such excellent work!
When I test on my own datasets, I have three problems:
CUDA_VISIBLE_DEVICES=0 python main_nerf.py ./data/apple --workspace ./logs/apple -O --bound 1.0 --scale 0.8 --dt_gamma 0 --ff --mode colmap --gui
udf = np.abs(trimesh.proximity.ProximityQuery(surface_mesh).signed_distance(scanned_ply))
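For context, my reading of that line (the surrounding variable names and paths are my assumptions, not the repo's code): trimesh's signed_distance returns positive values inside the mesh and negative values outside, so np.abs turns it into an unsigned distance field (UDF).

# Assumed context: surface_mesh is a trimesh.Trimesh, scanned_ply is (N, 3) points.
import numpy as np
import trimesh

surface_mesh = trimesh.load('surface_mesh.obj')    # hypothetical path
scanned_ply = trimesh.load('scan.ply').vertices    # hypothetical path
# signed_distance: positive inside the mesh, negative outside; abs -> UDF
udf = np.abs(trimesh.proximity.ProximityQuery(surface_mesh).signed_distance(scanned_ply))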
My environment:
OS: Ubuntu 22.04
Python: 3.10.12
CUDA: 11.3
pytorch: 1.12.1
pytorch3d: 0.7.4
Looking forward to your early reply. Best wishes!
I got the following error when trying to reproduce the results:
Use xatlas UV mapping ...
Traceback (most recent call last):
  File "/media/ryanrzzhang/CE4E3B8A4E3B6A7B/yankesong/NeRF-Texture/main.py", line 159, in <module>
    model = NeRFNetwork(
  File "/media/ryanrzzhang/CE4E3B8A4E3B6A7B/yankesong/NeRF-Texture/nerf/network_curvedfield.py", line 130, in __init__
    self.meshfea_field = MeshFeatureField(hash=hash, mesh_path=surface_mesh_path, h_threshold=h_threshold, K=8, bound=bound, clustering=clustering, prob_model=prob_model, pred_normal=self.render_light_model, use_lip_mlp_for_normal=self.use_lip_mlp_for_normal, pattern_rate=pattern_rate, num_level=num_level, bound_output_normal=bound_output_normal)
  File "/media/ryanrzzhang/CE4E3B8A4E3B6A7B/yankesong/NeRF-Texture/tools/map.py", line 600, in __init__
    self.meshprojector = MeshProjector(device=self.device, mesh_path=self.mesh_path, store_f=True, store_uv=True)
  File "/media/ryanrzzhang/CE4E3B8A4E3B6A7B/yankesong/NeRF-Texture/tools/map.py", line 359, in __init__
    self.mesh.export('./test_data/uv_mapped.obj')
  File "/home/ryanrzzhang/anaconda3/envs/NeRF-Texture/lib/python3.10/site-packages/trimesh/base.py", line 2830, in export
    return export_mesh(
  File "/home/ryanrzzhang/anaconda3/envs/NeRF-Texture/lib/python3.10/site-packages/trimesh/exchange/export.py", line 60, in export_mesh
    file_obj = open(file_path, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: '/media/ryanrzzhang/CE4E3B8A4E3B6A7B/yankesong/NeRF-Texture/test_data/uv_mapped.obj'
Some help please?
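For what it's worth, the immediate cause seems to be that the test_data directory does not exist yet: open(file_path, 'wb') cannot create missing parent directories. A minimal sketch of a workaround (creating the directory up front is my suggestion, not a confirmed repo fix):

# Create the missing output directory before the export call in tools/map.py.
import os

os.makedirs('./test_data', exist_ok=True)   # parent directory for uv_mapped.obj
# afterwards self.mesh.export('./test_data/uv_mapped.obj') can open the file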
Love the paper - super cool approach. How would you use this to reconstruct a photorealistic portrait? (They gave an example of this in an older paper - Deferred Neural Rendering: Image Synthesis using Neural Textures.)
Sec 3.2.1 of the paper mentions, "we place square scan arrays of 128 × 128 resolution on each tangent plane of the coarse mesh to obtain the intersections of the scanning rays with the mesh."
Does the last "mesh" refer to the coarse mesh or the fine mesh with meso-structure?
As far as I understand, the implementation is this part. The variable "intersections" is used for the KNN search, so it sounds like these are points on the fine mesh. However, they are obtained by ray-tracing from the scan rays to the coarse mesh.
Did I misunderstand something?
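For concreteness, here is how I picture the scan-array step (a sketch under my assumptions; the paths, plane placement, and ray directions are illustrative, not the repo's exact code): rays are cast from a 128 × 128 grid on a tangent plane toward the coarse mesh, and the hit points feed the later KNN search.

import numpy as np
import trimesh

mesh = trimesh.load('coarse_mesh.obj')   # hypothetical path to the coarse mesh
# 128 x 128 scan array on a tangent plane; here the plane z = 1 with rays
# pointing along -z, purely for illustration
u = np.linspace(-0.5, 0.5, 128)
uu, vv = np.meshgrid(u, u, indexing='ij')
origins = np.stack([uu.ravel(), vv.ravel(), np.ones(uu.size)], axis=1)
directions = np.tile(np.array([0.0, 0.0, -1.0]), (origins.shape[0], 1))
# hit points on the mesh surface; these would be the "intersections" for KNN
intersections, ray_idx, tri_idx = mesh.ray.intersects_location(origins, directions)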
I want to add rotated patches during the patch matching process, and found that the prepareExamplePatches function in the patch_matching_and_quilting.py file has the following code:
def prepareExamplePatches(self):
    print('Preparing example patches ...')
    result = self.patches
    stbn = self.sample_tbn
    print(f"patches shape: {result.shape}")
    print(f"stbn shape: {stbn.shape}")
    self.total_patches_count = result.shape[0]
    if self.mirror_hor:
        hor_result = result[:, ::-1, :, :]
        result = np.concatenate((result, hor_result))
        hor_stbn = np.copy(stbn)
        hor_stbn[..., 0, :] *= -1
        stbn = np.concatenate([stbn, hor_stbn], axis=0)
    if self.mirror_vert:
        vert_result = result[:, :, ::-1, :]
        result = np.concatenate((result, vert_result))
        hor_vtbn = np.copy(stbn)
        hor_vtbn[..., 1, :] *= -1
        stbn = np.concatenate([stbn, hor_vtbn], axis=0)
    if self.rotate:
        rot_result1 = np.rot90(result, 2)
        rot_result2 = np.rot90(rot_result1, 2)
        rot_result3 = np.rot90(rot_result2, 2)
        result = np.concatenate((result, rot_result1, rot_result2, rot_result3))
    return result, stbn
I have the following questions:
1. What is stbn?
2. Since stbn is (n, 9), is only one line of stbn transformed in the mirror_hor and mirror_vert code branches?
3. In the rotate code branch, how should stbn be changed?
Thanks
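My current guess at a fix, under the assumption (not confirmed by the repo) that stbn reshapes to (n, 3, 3) with rows (tangent, bitangent, normal), matching the [..., 0, :] and [..., 1, :] indexing in the mirror branches: a 90-degree patch rotation maps the tangent to the bitangent and the bitangent to the negated tangent, so the frames can be rotated alongside the patches.

import numpy as np

def rotate_stbn_90(stbn, k=1):
    # Assumed layout: stbn has shape (n, 3, 3), rows = (tangent, bitangent, normal).
    out = np.copy(stbn)
    for _ in range(k):
        t = np.copy(out[..., 0, :])
        out[..., 0, :] = out[..., 1, :]   # new tangent   = old bitangent
        out[..., 1, :] = -t               # new bitangent = -(old tangent)
    return out

# In the rotate branch (the patches themselves would rotate in the image plane,
# e.g. np.rot90(result, k, axes=(1, 2))):
# stbn = np.concatenate([stbn] + [rotate_stbn_90(stbn, k) for k in (1, 2, 3)], axis=0)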
I am trying to reproduce the results: synthesizing the durian's 3D texture and copying it onto the banana.
For Step 3, I can select the bottom region using the "rectangle selection" tool in MeshLab, but how shall I cut it afterwards? I tried "delete selected faces and vertices", but the result is not a closed mesh that can be saved.
If I skip this step, I can get a synthesized texture, but it is not as uniform as demonstrated.
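For reference, the workaround I am considering (my own idea, not the documented workflow) is to close the open boundary left by the deletion, either with MeshLab's "Close Holes" filter or programmatically with trimesh:

import trimesh

mesh = trimesh.load('durian_cut.ply')   # hypothetical path to the cut mesh
trimesh.repair.fill_holes(mesh)         # triangulate the open boundary loops
print(mesh.is_watertight)               # verify the mesh is closed before saving
mesh.export('durian_closed.ply')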
Also, I used the --tcnn flag instead of the --ff flag, as I was unable to install the ffmlp package. Do you have any suggestions on that?