
get3d's People

Contributors

frankshen07 avatar marctlaw avatar stevejungao avatar

get3d's Issues

Pursue higher resolution for mesh

I am happy to see the pre-trained models released! Thank you!

I have now run inference a few times with the car-category checkpoint. As far as I can observe, the triangular meshes all have roughly the same resolution of about 5 cm, or 5% of the model's total length. Is there a way to produce more detailed meshes?

Also, I guess your mesh reconstruction is based on the SDF-driven DMTet algorithm. Do you expect bad behavior due to SDF inaccuracy when applying a finer resolution?

P.S. My inference took 20 minutes to produce 100 textured meshes, so computational complexity does not really impact me.
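
For reference, a back-of-the-envelope way to relate grid resolution to feature size (a sketch with illustrative variable names, not the repo's actual parameters): DMTet extracts triangles from a tetrahedral grid, so the smallest representable feature is roughly one grid cell.

dmtet_scale = 1.0   # side length of the tet-grid bounding box (illustrative)
grid_res = 20       # hypothetical grid resolution
cell = dmtet_scale / grid_res
print(f"smallest feature ~ {cell:.3f} units ({100 * cell / dmtet_scale:.1f}% of the box)")
# grid_res = 20 gives ~5% cells, consistent with the ~5% edges observed above;
# a finer grid should allow more detail at a higher memory cost.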

Final step of environment setup / WGET

First of all, congratulations to the team for such groundbreaking work! I'm not a programmer, but 20+ years in CGI have forced me to try to understand code. I'm on Win10, using Anaconda, and my environment is pretty much configured, except that on the very last step of the readme I get an error when running this wget command:

wget https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/metrics/inception-2015-12-05.pkl

The error states that the identity of the server could not be confirmed and suggests using a flag to bypass the check, but NGC rejects such a connection. I've gone through the NGC documentation, and in the NGC library we can generate wget or curl commands to pull files, but they are identical to what the readme suggests. I've also created an NGC account and gone through the setup, which generates a config file, but I haven't been able to use it when calling wget. I know this is not strictly related to the project, but other up-and-coming Win10 users might get stuck here too. Any pointers?
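
For reference, one workaround that may sidestep wget's certificate handling on Windows is to download the file with Python instead (a sketch using Python's own certificate store, not an officially supported path):

import urllib.request

url = ("https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/"
       "versions/1/files/metrics/inception-2015-12-05.pkl")
urllib.request.urlretrieve(url, "inception-2015-12-05.pkl")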

Thanks in advance,
L

Fatal error: EGL/egl.h: No such file or directory

I hit this error when trying to train. Here are the details.

[3/4] c++ -MMD -MF glutil.o.d -DTORCH_EXTENSION_NAME=nvdiffrast_plugin_gl -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/torch/include -isystem /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/torch/include/TH -isystem /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /SSD_DISK/users/anaconda3/envs/get3d1/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -DNVDR_TORCH -c /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/nvdiffrast/common/glutil.cpp -o glutil.o
FAILED: glutil.o
c++ -MMD -MF glutil.o.d -DTORCH_EXTENSION_NAME=nvdiffrast_plugin_gl -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/torch/include -isystem /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/torch/include/TH -isystem /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /SSD_DISK/users/anaconda3/envs/get3d1/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -DNVDR_TORCH -c /SSD_DISK/users/qianjiachen/anaconda3/envs/get3d1/lib/python3.8/site-packages/nvdiffrast/common/glutil.cpp -o glutil.o
In file included from /SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/nvdiffrast/common/glutil.cpp:14:
/SSD_DISK/users/anaconda3/envs/get3d1/lib/python3.8/site-packages/nvdiffrast/common/glutil.h:36:10: fatal error: EGL/egl.h: No such file or directory
36 | #include <EGL/egl.h>
| ^~~~~~~~~~~
compilation terminated.
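
In case it helps others hitting the same wall: this error means the EGL development headers are missing from the system. On Ubuntu they are provided by the Mesa EGL development package, so something like the following usually fixes the compile (package names may differ on other distros):

sudo apt-get install libegl1-mesa-dev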

Remove hardcoded values for datasets and add args for dataset input

Right now there are many references in the code to specific datasets, as well as a data_camera_mode option that takes a dataset name, which is handled with complex if statements at various points in the training code.

Raising this issue to propose removing these dataset-specific requirements and generalizing the modes ("object", "human", etc.). Maybe we pass in a configuration file so that future researchers can add their own custom parameters; a rough sketch follows.
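
Here is what such a config could look like (every key name below is made up for illustration, not an agreed-on schema):

import json

with open("dataset_config.json") as f:   # hypothetical file name
    cfg = json.load(f)

camera_mode = cfg.get("mode", "object")                 # "object", "human", ...
elevation_range = cfg.get("elevation_range", [0, 30])   # degrees
dmtet_scale = cfg.get("dmtet_scale", 1.0)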

(by the way, I'm working on a PR for this!)

Release of trained models

Hi,

I'm wondering if there are plans to release the trained models so we can play with them for inference only.

Thanks.

No LFD feature calculated using compute_lfd_feat_multiprocess.py

Hi there, I've been testing the evaluation metrics on an example dataset that contains 2 ground-truth OBJs and 20 generated OBJs.

It seems that Step 4 ("We first generate the Light Field feature for each object by running"):

python compute_lfd_feat_multiprocess.py --gen_path PATH_TO_THE_MODEL_PREDICTION --save_path PATH_FOR_LFD_OUTPUT_FOR_PRED

only creates null files like mesh_q4_v_1.8.art, mesh_q8_v1.8.art, etc.

Then, when I run the compute_lfd script:

python compute_lfd.py --split_path PATH_TO_TEST_SPLIT --dataset_path PATH_FOR_LFD_OUTPUT_FOR_GT --gen_path PATH_FOR_LFD_OUTPUT_FOR_PRED --save_name results/our/lfd.pkl

I get a Segmentation fault (core dumped) while loading the data, at:

result = load_data.run(os.path.join(f, 'mesh'))

Is the data supposed to be null, or should it be the computed LFD features? I've roughly checked the align_mesh() function in lfd_me.py, but I couldn't find a line of code that actually writes an LFD feature into the created null files (mesh_q4_v_1.8.art, mesh_q8_v1.8.art, etc.).
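
For reference, a minimal check (using the placeholder path from the command above) to confirm whether the .art files really are empty before compute_lfd.py tries to load them:

import glob
import os

for path in sorted(glob.glob("PATH_FOR_LFD_OUTPUT_FOR_PRED/**/*.art", recursive=True)):
    print(os.path.getsize(path), path)   # 0 bytes would explain the segfault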

Appreciate your help!

"Connection refused" error when training

Hi,
I want to train this network on my own dataset (which I have processed using the provided rendering scripts); however, I get a "ConnectionRefusedError: [Errno 111] Connection refused" and a "NotImplementedError" raised in the discriminator_architecture script when I run the training. What might be the issue here?

(screenshot of the error attached)

Support for ShapeNet V2

Questions:

  1. Is there any reason, other than normalizing the folder structure, that ShapeNetCore V2 wouldn't work, or a reason you chose ShapeNet V1 for the release?
  2. Would you accept a PR adding support for ShapeNet V2?

Not Found Camera Root

When training with the following command:

python train_3d.py --outdir=results/ --data=render_shapenet_data/dataset/img/02958343/ --camera_path=render_shapenet_data/dataset/camera/02958343/ --gpus=8 --batch=32 --gamma=40 --data_camera_mode shapenet_car  --dmtet_scale 1.0  --use_shapenet_split 1  --one_3d_generator 1  --fp32 0

I keep getting this output:

20%|##        | 57/279 [00:51<02:59,  1.24it/s]==> not found camera root
==> not found camera root
==> not found camera root

Do I need to fix something?
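
In case it helps with debugging: the warning suggests the loader cannot pair some image folders with camera folders. A minimal sanity check, assuming images and cameras are matched by model folder name (an assumption, not verified against the loader code):

import os

img_root = "render_shapenet_data/dataset/img/02958343"
cam_root = "render_shapenet_data/dataset/camera/02958343"

img_ids = set(os.listdir(img_root))
cam_ids = set(os.listdir(cam_root))
print("models missing camera data:", sorted(img_ids - cam_ids)[:10])
print("camera data without images:", sorted(cam_ids - img_ids)[:10])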

Run inference on a specific image

Right now, inference is run on the eval images.

It'd be awfully handy to be able to assess arbitrary images by passing one to the CLI!

shapenet_chair inference

I see that ShapeNetCore v1 has chair images. When I run inference on shapenet_chair, the log says ./tmp has 1234 images, but I cannot see them. Also, val.txt in the 3dgan_data_split folder has 573 IDs. When we run inference, what is the input image? The fakes_000000_00.png file has 25 generated images, and the mesh_pred folder has 10 OBJ files. Can you please explain the correlation between these? What was the input image to the inference, in other words, for which 2D image was the 3D model generated?

Multiple classes in same checkpoint

I am hoping to train a general model on 3D understanding beyond just a single class or range of classes.

I am doing this by resuming training from a checkpoint while swapping out datasets; however, I am unsure of the limitations here.

Will training on a wider range of data make the model worse at individual tasks? Or is there otherwise any reason not to do this?

Volume Sub-division

Where is subdivision used in the code? The function batch_subdivide_volume doesn't seem to be used.

Text-Guided 3D Synthesis

Hi @SteveJunGao, thank you for all your help!
When you release the pretrained model (Issue #16), will you also release the weights for the model fine-tuned with CLIP?
Excited to continue working on this!

2D images to 3D model

Hello everyone. I did not find in the paper a description of how to make 3D models from 2D pictures, only a brief verbal description of the style transfer. Logically, one could assume that instead of a sample from the latent distribution it is possible to feed a picture's encoding to the MLP input, but given how the architecture is structured, I could not carry out this experiment. Has anyone already tried something similar? Could you explain how to do this with this project?

Material Generation Supported?

Hi, I am interested in the material-generation ability described in the paper. However, I can't find the relevant code, either for rendering images using material properties or for generating real images from TurboSquid models (with consistent materials) using Blender.
Did I miss something in the repo, or is the material-generation code not available yet?

Using Blender Eevee and HDRI lighting for rendering ShapeNet

We've modified the input scripts and are starting on our journey of training the models:
https://github.com/webaverse/GET3D/blob/master/render_shapenet_data/render_shapenet.py

We got much faster and more game-like generation results by switching to the Eevee renderer and HDRI lighting. Eevee renders a 24-frame multiview in ~3-4 seconds per model, and it can crank through an entire average-sized shape category in about an hour or two.

Also, because it's a realtime-style rasterizing renderer, it's pixel-perfect and not subject to noise. To compensate for the lack of dimension, we've lit the scene with a studio HDRI and applied an ambient-occlusion post-processing effect. The engine switch is roughly the snippet below.
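
For reference, this is roughly the change in the Blender 2.9x Python API (the ambient-occlusion toggle is our addition, not something the original script sets):

import bpy

bpy.context.scene.render.engine = 'BLENDER_EEVEE'  # the original script uses Cycles
bpy.context.scene.eevee.use_gtao = True            # screen-space ambient occlusion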

So, questions for the team:

  1. Do you see any issues with this? Is there a requirement for the realistic renderer?
  2. Would you accept a pull request for this? We would need to modify our code, since we are also doing some transformation to normalize the ShapeNet V2 dataset, but I could make a PR to switch between Eevee and Cycles.

About the pretrained model

Maybe it's my fault that I didn't find the download address of the pretrained model. This training scale is too large for me, and I want to run inference directly to see how the network performs. Is there a trained model that can be used directly for inference?

Issue on training

Hi There,

I followed the training guide in the README, and I ran into this problem: TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
Do you have any ideas on how to fix it?

(screenshot of the traceback attached)

My training script is as follows:
(screenshot of the training command attached)

How to set gamma to generate objects of the bowl category?

Thanks for sharing such wonderful work for generating synthetic data.

I now understand from issue #15 that I should set gamma according to the complexity of the dataset.

But could you recommend a rough "gamma" value for the "bowl" category of the ShapeNet v1 dataset?
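
For what it's worth, the StyleGAN2-ADA docs give a starting heuristic for the R1 weight, which may carry over since GET3D's discriminator is StyleGAN2-based (our assumption, not official GET3D guidance):

resolution, batch = 1024, 32               # values from the default training command
gamma = 0.0002 * resolution ** 2 / batch   # ADA's suggested starting point
print(gamma)                               # ~6.6; increase for simpler, less varied data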

Sample images of the "bowl" category are shown below.

Thanks in advance.

(image: "reals" grid of sample bowl renders)

Bug in inference with lower resolution

Hi,
Thanks for your awesome work!
I am trying to use your code to train the model at an image resolution of 512x512. However, it seems this line of code
always assumes a resolution of 1024x1024, and thus I cannot run inference on my trained model. Maybe it should be fixed~
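
A sketch of a possible fix: read the resolution from the run's saved options instead of hardcoding 1024. This assumes GET3D inherits the StyleGAN convention of writing a training_options.json into the run directory (an assumption worth checking):

import json

with open("training_options.json") as f:
    opts = json.load(f)
img_res = opts["training_set_kwargs"]["resolution"]   # 512 for the run described above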

Deformation prediction

Hi,
Thanks for your great work! I have a question about the deformation prediction:
I noticed that in your previous works (i.e., DefTet and DMTet), you used a GCN to predict the deformation. However, in GET3D, you use a simple MLP as the network architecture. Could you please give some explanation for this modification?
Thank you.

About the results of the pretrained model

I'm sorry to bother you again, but I believe you can clear up my confusion.
I ran GET3D inference on Colab with the pretrained weights named shapenet_xx.pt and others, but the results are not as good as in your presentation, especially the 3D meshes of motorbikes, where fine details tend to stick together. Even so, it's good work. I followed the steps in your readme file. Is there anything I missed or ignored, or should I train the model myself?

Where is the generated model? Is it fbx or obj?

(QQ Browser screenshot attached)
Hi
Where is the generated model saved? Is it FBX or OBJ? If neither, how can I use the generated 3D mesh?

If I train the model myself, what do I need: a lot of PNG images, or a lot of MP4 videos?

Is there a step-by-step usage tutorial?

Any support for Mac M1?

Congrats on an amazing achievement! I'm on a Mac Studio M1 Max with 64 GB of shared RAM, so I can run it on 16 GB. Can this be run as-is, and has anyone actually tried running it on a Mac M1/M2? If not, is there any plan for a Google Colab version? I'm also wondering if there's a forum to get info from others trying this out.

How to set gamma and dmtet_scale for different data?

Thanks for sharing your awesome work!

Since you gave different settings for different classes, I am wondering if you have any insight on how to tune 'gamma' and 'dmtet_scale' in the training script. We are currently trying to apply your framework to other datasets, so your advice would be very helpful.
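
While waiting for an authoritative answer, our working assumption (not the authors' guidance) is that dmtet_scale must cover the normalized objects' bounding box, which is easy to measure:

import trimesh

mesh = trimesh.load("model_normalized.obj", force='mesh')  # any sample from your dataset
print("max extent:", mesh.bounding_box.extents.max())
# rule of thumb: pick dmtet_scale at or slightly above this value so the
# tetrahedral grid encloses the whole object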

Thank you.

Regarding dataset

This is very good work. Congratulations! Would you please provide the relevant experimental datasets, such as the House dataset (563 shapes) collected from TurboSquid? Thanks in advance.

How to feed in image as inference input?

My understanding is that a single image, in the form of the output of an Inception-v3 encoder, can serve as input for inference. I also noticed the pkl file of the pretrained Inception model you provide as part of the necessary files. I am a newbie and simply don't know how to utilize them. Is there some way to feed an image (or its latent code) into the inference script?
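
For context, here is the kind of loop I imagine (a generic GAN-inversion sketch, not code from this repo; G, render, and target are hypothetical stand-ins for the loaded generator, its rendering path, and the input photo tensor):

import torch

z = torch.randn([1, 512], device='cuda', requires_grad=True)  # 512 = assumed latent size
opt = torch.optim.Adam([z], lr=0.01)
for step in range(500):
    img = render(G, z)                                # hypothetical rendering call
    loss = torch.nn.functional.mse_loss(img, target)  # or a perceptual loss
    opt.zero_grad()
    loss.backward()
    opt.step()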
Thank you!

Issue on rendering the ShapeNet dataset

When running:
python render_all.py --save_folder dataset/ --dataset_folder ShapeNetCore.v1/ --blender_root <path-to-blender>

Getting the following error:

ALSA lib confmisc.c:767:(parse_card) cannot find card '0'
ALSA lib conf.c:4732:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
ALSA lib confmisc.c:392:(snd_func_concat) error evaluating strings
ALSA lib conf.c:4732:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1246:(snd_func_refer) error evaluating name
ALSA lib conf.c:4732:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5220:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2642:(snd_pcm_open_noupdate) Unknown PCM default

Inference for other ShapeNet datasets

Hi,
I trained on the shapenet_rocket dataset using this PR: #23
I am trying to run inference on the model I trained; however, I get this error:

==> resume from pretrained path checkpoints/shapenet_rocket/00007-stylegan2-04099429-gpus8-batch32-gamma80/network-snapshot-001843.pt
Traceback (most recent call last):
  File "train_3d.py", line 319, in <module>
    main()  # pylint: disable=no-value-for-parameter
  File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "train_3d.py", line 313, in main
    launch_training(c=c, desc=desc, outdir=opts.outdir, dry_run=opts.dry_run)
  File "train_3d.py", line 103, in launch_training
    subprocess_fn(rank=0, c=c, temp_dir=temp_dir)
  File "train_3d.py", line 46, in subprocess_fn
    inference_3d.inference(rank=rank, **c)
  File "/workspace/GET3D/training/inference_3d.py", line 81, in inference
    G.load_state_dict(model_state_dict['G'], strict=True)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for GeneratorDMTETMesh:
        Missing key(s) in state_dict: "synthesis.generator.tri_plane_synthesis.b4.const", "synthesis.generator.tri_plane_synthesis.b4.resample_filter", "synthesis.generator.tri_plane_synthesis.b4.conv1.weight", "synthesis.generator.tri_plane_synthesis.b4.conv1.noise_strength", "synthesis.generator.tri_plane_synthesis.b4.conv1.bias", "synthesis.generator.tri_plane_synthesis.b4.conv1.resample_filter", "synthesis.generator.tri_plane_synthesis.b4.conv1.noise_const", "synthesis.generator.tri_plane_synthesis.b4.conv1.affine.weight", "synthesis.generator.tri_plane_synthesis.b4.conv1.affine.bias", "synthesis.generator.tri_plane_synthesis.b4.totex.weight", "synthesis.generator.tri_plane_synthesis.b4.totex.bias", "synthesis.generator.tri_plane_synthesis.b4.totex.affine.weight", "synthesis.generator.tri_plane_synthesis.b4.totex.affine.bias", "synthesis.generator.tri_plane_synthesis.b4.togeo.weight", "synthesis.generator.tri_plane_synthesis.b4.togeo.bias", "synthesis.generator.tri_plane_synthesis.b4.togeo.affine.weight", "synthesis.generator.tri_plane_synthesis.b4.togeo.affine.bias", "synthesis.generator.tri_plane_synthesis.b8.resample_filter", "synthesis.generator.tri_plane_synthesis.b8.conv0.weight", "synthesis.generator.tri_plane_synthesis.b8.conv0.noise_strength", "synthesis.generator.tri_plane_synthesis.b8.conv0.bias", "synthesis.generator.tri_plane_synthesis.b8.conv0.resample_filter", "synthesis.generator.tri_plane_synthesis.b8.conv0.noise_const", "synthesis.generator.tri_plane_synthesis.b8.conv0.affine.weight", "synthesis.generator.tri_plane_synthesis.b8.conv0.affine.bias", "synthesis.generator.tri_plane_synthesis.b8.conv1.weight", "synthesis.generator.tri_plane_synthesis.b8.conv1.noise_strength", "synthesis.generator.tri_plane_synthesis.b8.conv1.bias", "synthesis.generator.tri_plane_synthesis.b8.conv1.resample_filter", "synthesis.generator.tri_plane_synthesis.b8.conv1.noise_const", "synthesis.generator.tri_plane_synthesis.b8.conv1.affine.weight", "synthesis.generator.tri_plane_synthesis.b8.conv1.affine.bias", "synthesis.generator.tri_plane_synthesis.b8.totex.weight", "synthesis.generator.tri_plane_synthesis.b8.totex.bias", "synthesis.generator.tri_plane_synthesis.b8.totex.affine.weight", "synthesis.generator.tri_plane_synthesis.b8.totex.affine.bias", "synthesis.generator.tri_plane_synthesis.b8.togeo.weight", "synthesis.generator.tri_plane_synthesis.b8.togeo.bias", "synthesis.generator.tri_plane_synthesis.b8.togeo.affine.weight", "synthesis.generator.tri_plane_synthesis.b8.togeo.affine.bias", "synthesis.generator.tri_plane_synthesis.b16.resample_filter", "synthesis.generator.tri_plane_synthesis.b16.conv0.weight", "synthesis.generator.tri_plane_synthesis.b16.conv0.noise_strength", "synthesis.generator.tri_plane_synthesis.b16.conv0.bias", "synthesis.generator.tri_plane_synthesis.b16.conv0.resample_filter", "synthesis.generator.tri_plane_synthesis.b16.conv0.noise_const", "synthesis.generator.tri_plane_synthesis.b16.conv0.affine.weight", "synthesis.generator.tri_plane_synthesis.b16.conv0.affine.bias", "synthesis.generator.tri_plane_synthesis.b16.conv1.weight", "synthesis.generator.tri_plane_synthesis.b16.conv1.noise_strength", "synthesis.generator.tri_plane_synthesis.b16.conv1.bias", "synthesis.generator.tri_plane_synthesis.b16.conv1.resample_filter", "synthesis.generator.tri_plane_synthesis.b16.conv1.noise_const", "synthesis.generator.tri_plane_synthesis.b16.conv1.affine.weight", "synthesis.generator.tri_plane_synthesis.b16.conv1.affine.bias", 
"synthesis.generator.tri_plane_synthesis.b16.totex.weight", "synthesis.generator.tri_plane_synthesis.b16.totex.bias", "synthesis.generator.tri_plane_synthesis.b16.totex.affine.weight", "synthesis.generator.tri_plane_synthesis.b16.totex.affine.bias", "synthesis.generator.tri_plane_synthesis.b16.togeo.weight", "synthesis.generator.tri_plane_synthesis.b16.togeo.bias", "synthesis.generator.tri_plane_synthesis.b16.togeo.affine.weight", "synthesis.generator.tri_plane_synthesis.b16.togeo.affine.bias", "synthesis.generator.tri_plane_synthesis.b32.resample_filter", "synthesis.generator.tri_plane_synthesis.b32.conv0.weight", "synthesis.generator.tri_plane_synthesis.b32.conv0.noise_strength", "synthesis.generator.tri_plane_synthesis.b32.conv0.bias", "synthesis.generator.tri_plane_synthesis.b32.conv0.resample_filter", "synthesis.generator.tri_plane_synthesis.b32.conv0.noise_const", "synthesis.generator.tri_plane_synthesis.b32.conv0.affine.weight", "synthesis.generator.tri_plane_synthesis.b32.conv0.affine.bias", "synthesis.generator.tri_plane_synthesis.b32.conv1.weight", "synthesis.generator.tri_plane_synthesis.b32.conv1.noise_strength", "synthesis.generator.tri_plane_synthesis.b32.conv1.bias", "synthesis.generator.tri_plane_synthesis.b32.conv1.resample_filter", "synthesis.generator.tri_plane_synthesis.b32.conv1.noise_const", "synthesis.generator.tri_plane_synthesis.b32.conv1.affine.weight", "synthesis.generator.tri_plane_synthesis.b32.conv1.affine.bias", "synthesis.generator.tri_plane_synthesis.b32.totex.weight", "synthesis.generator.tri_plane_synthesis.b32.totex.bias", "synthesis.generator.tri_plane_synthesis.b32.totex.affine.weight", "synthesis.generator.tri_plane_synthesis.b32.totex.affine.bias", "synthesis.generator.tri_plane_synthesis.b32.togeo.weight", "synthesis.generator.tri_plane_synthesis.b32.togeo.bias", "synthesis.generator.tri_plane_synthesis.b32.togeo.affine.weight", "synthesis.generator.tri_plane_synthesis.b32.togeo.affine.bias", "synthesis.generator.tri_plane_synthesis.b64.resample_filter", "synthesis.generator.tri_plane_synthesis.b64.conv0.weight", "synthesis.generator.tri_plane_synthesis.b64.conv0.noise_strength", "synthesis.generator.tri_plane_synthesis.b64.conv0.bias", "synthesis.generator.tri_plane_synthesis.b64.conv0.resample_filter", "synthesis.generator.tri_plane_synthesis.b64.conv0.noise_const", "synthesis.generator.tri_plane_synthesis.b64.conv0.affine.weight", "synthesis.generator.tri_plane_synthesis.b64.conv0.affine.bias", "synthesis.generator.tri_plane_synthesis.b64.conv1.weight", "synthesis.generator.tri_plane_synthesis.b64.conv1.noise_strength", "synthesis.generator.tri_plane_synthesis.b64.conv1.bias", "synthesis.generator.tri_plane_synthesis.b64.conv1.resample_filter", "synthesis.generator.tri_plane_synthesis.b64.conv1.noise_const", "synthesis.generator.tri_plane_synthesis.b64.conv1.affine.weight", "synthesis.generator.tri_plane_synthesis.b64.conv1.affine.bias", "synthesis.generator.tri_plane_synthesis.b64.totex.weight", "synthesis.generator.tri_plane_synthesis.b64.totex.bias", "synthesis.generator.tri_plane_synthesis.b64.totex.affine.weight", "synthesis.generator.tri_plane_synthesis.b64.totex.affine.bias", "synthesis.generator.tri_plane_synthesis.b64.togeo.weight", "synthesis.generator.tri_plane_synthesis.b64.togeo.bias", "synthesis.generator.tri_plane_synthesis.b64.togeo.affine.weight", "synthesis.generator.tri_plane_synthesis.b64.togeo.affine.bias", "synthesis.generator.tri_plane_synthesis.b128.resample_filter", 
"synthesis.generator.tri_plane_synthesis.b128.conv0.weight", "synthesis.generator.tri_plane_synthesis.b128.conv0.noise_strength", "synthesis.generator.tri_plane_synthesis.b128.conv0.bias", "synthesis.generator.tri_plane_synthesis.b128.conv0.resample_filter", "synthesis.generator.tri_plane_synthesis.b128.conv0.noise_const", "synthesis.generator.tri_plane_synthesis.b128.conv0.affine.weight", "synthesis.generator.tri_plane_synthesis.b128.conv0.affine.bias", "synthesis.generator.tri_plane_synthesis.b128.conv1.weight", "synthesis.generator.tri_plane_synthesis.b128.conv1.noise_strength", "synthesis.generator.tri_plane_synthesis.b128.conv1.bias", "synthesis.generator.tri_plane_synthesis.b128.conv1.resample_filter", "synthesis.generator.tri_plane_synthesis.b128.conv1.noise_const", "synthesis.generator.tri_plane_synthesis.b128.conv1.affine.weight", "synthesis.generator.tri_plane_synthesis.b128.conv1.affine.bias", "synthesis.generator.tri_plane_synthesis.b128.totex.weight", "synthesis.generator.tri_plane_synthesis.b128.totex.bias", "synthesis.generator.tri_plane_synthesis.b128.totex.affine.weight", "synthesis.generator.tri_plane_synthesis.b128.totex.affine.bias", "synthesis.generator.tri_plane_synthesis.b128.togeo.weight", "synthesis.generator.tri_plane_synthesis.b128.togeo.bias", "synthesis.generator.tri_plane_synthesis.b128.togeo.affine.weight", "synthesis.generator.tri_plane_synthesis.b128.togeo.affine.bias", "synthesis.generator.tri_plane_synthesis.b256.resample_filter", "synthesis.generator.tri_plane_synthesis.b256.conv0.weight", "synthesis.generator.tri_plane_synthesis.b256.conv0.noise_strength", "synthesis.generator.tri_plane_synthesis.b256.conv0.bias", "synthesis.generator.tri_plane_synthesis.b256.conv0.resample_filter", "synthesis.generator.tri_plane_synthesis.b256.conv0.noise_const", "synthesis.generator.tri_plane_synthesis.b256.conv0.affine.weight", "synthesis.generator.tri_plane_synthesis.b256.conv0.affine.bias", "synthesis.generator.tri_plane_synthesis.b256.conv1.weight", "synthesis.generator.tri_plane_synthesis.b256.conv1.noise_strength", "synthesis.generator.tri_plane_synthesis.b256.conv1.bias", "synthesis.generator.tri_plane_synthesis.b256.conv1.resample_filter", "synthesis.generator.tri_plane_synthesis.b256.conv1.noise_const", "synthesis.generator.tri_plane_synthesis.b256.conv1.affine.weight", "synthesis.generator.tri_plane_synthesis.b256.conv1.affine.bias", "synthesis.generator.tri_plane_synthesis.b256.totex.weight", "synthesis.generator.tri_plane_synthesis.b256.totex.bias", "synthesis.generator.tri_plane_synthesis.b256.totex.affine.weight", "synthesis.generator.tri_plane_synthesis.b256.totex.affine.bias", "synthesis.generator.tri_plane_synthesis.b256.togeo.weight", "synthesis.generator.tri_plane_synthesis.b256.togeo.bias", "synthesis.generator.tri_plane_synthesis.b256.togeo.affine.weight", "synthesis.generator.tri_plane_synthesis.b256.togeo.affine.bias", "synthesis.generator.mlp_synthesis_tex.layers.0.weight", "synthesis.generator.mlp_synthesis_tex.layers.0.bias", "synthesis.generator.mlp_synthesis_tex.layers.0.affine.weight", "synthesis.generator.mlp_synthesis_tex.layers.0.affine.bias", "synthesis.generator.mlp_synthesis_tex.layers.1.weight", "synthesis.generator.mlp_synthesis_tex.layers.1.bias", "synthesis.generator.mlp_synthesis_tex.layers.1.affine.weight", "synthesis.generator.mlp_synthesis_tex.layers.1.affine.bias", "synthesis.generator.mlp_synthesis_sdf.layers.0.weight", "synthesis.generator.mlp_synthesis_sdf.layers.0.bias", 
"synthesis.generator.mlp_synthesis_sdf.layers.0.affine.weight", "synthesis.generator.mlp_synthesis_sdf.layers.0.affine.bias", "synthesis.generator.mlp_synthesis_sdf.layers.1.weight", "synthesis.generator.mlp_synthesis_sdf.layers.1.bias", "synthesis.generator.mlp_synthesis_sdf.layers.1.affine.weight", "synthesis.generator.mlp_synthesis_sdf.layers.1.affine.bias", "synthesis.generator.mlp_synthesis_def.layers.0.weight", "synthesis.generator.mlp_synthesis_def.layers.0.bias", "synthesis.generator.mlp_synthesis_def.layers.0.affine.weight", "synthesis.generator.mlp_synthesis_def.layers.0.affine.bias", "synthesis.generator.mlp_synthesis_def.layers.1.weight", "synthesis.generator.mlp_synthesis_def.layers.1.bias", "synthesis.generator.mlp_synthesis_def.layers.1.affine.weight", "synthesis.generator.mlp_synthesis_def.layers.1.affine.bias". 
        Unexpected key(s) in state_dict: "synthesis.geometry_synthesis_sdf.b4.const", "synthesis.geometry_synthesis_sdf.b4.conv1.weight", "synthesis.geometry_synthesis_sdf.b4.conv1.noise_strength", "synthesis.geometry_synthesis_sdf.b4.conv1.bias", "synthesis.geometry_synthesis_sdf.b4.conv1.noise_const", "synthesis.geometry_synthesis_sdf.b4.conv1.affine.weight", "synthesis.geometry_synthesis_sdf.b4.conv1.affine.bias", "synthesis.geometry_synthesis_sdf.b8.conv0.weight", "synthesis.geometry_synthesis_sdf.b8.conv0.noise_strength", "synthesis.geometry_synthesis_sdf.b8.conv0.bias", "synthesis.geometry_synthesis_sdf.b8.conv0.noise_const", "synthesis.geometry_synthesis_sdf.b8.conv0.affine.weight", "synthesis.geometry_synthesis_sdf.b8.conv0.affine.bias", "synthesis.geometry_synthesis_sdf.b8.conv1.weight", "synthesis.geometry_synthesis_sdf.b8.conv1.noise_strength", "synthesis.geometry_synthesis_sdf.b8.conv1.bias", "synthesis.geometry_synthesis_sdf.b8.conv1.noise_const", "synthesis.geometry_synthesis_sdf.b8.conv1.affine.weight", "synthesis.geometry_synthesis_sdf.b8.conv1.affine.bias", "synthesis.geometry_synthesis_sdf.b8.skip.weight", "synthesis.geometry_synthesis_sdf.b8.skip.resample_filter", "synthesis.geometry_synthesis_sdf.b16.conv0.weight", "synthesis.geometry_synthesis_sdf.b16.conv0.noise_strength", "synthesis.geometry_synthesis_sdf.b16.conv0.bias", "synthesis.geometry_synthesis_sdf.b16.conv0.noise_const", "synthesis.geometry_synthesis_sdf.b16.conv0.affine.weight", "synthesis.geometry_synthesis_sdf.b16.conv0.affine.bias", "synthesis.geometry_synthesis_sdf.b16.conv1.weight", "synthesis.geometry_synthesis_sdf.b16.conv1.noise_strength", "synthesis.geometry_synthesis_sdf.b16.conv1.bias", "synthesis.geometry_synthesis_sdf.b16.conv1.noise_const", "synthesis.geometry_synthesis_sdf.b16.conv1.affine.weight", "synthesis.geometry_synthesis_sdf.b16.conv1.affine.bias", "synthesis.geometry_synthesis_sdf.b16.skip.weight", "synthesis.geometry_synthesis_sdf.b16.skip.resample_filter", "synthesis.geometry_synthesis_sdf.b32.conv0.weight", "synthesis.geometry_synthesis_sdf.b32.conv0.noise_strength", "synthesis.geometry_synthesis_sdf.b32.conv0.bias", "synthesis.geometry_synthesis_sdf.b32.conv0.noise_const", "synthesis.geometry_synthesis_sdf.b32.conv0.affine.weight", "synthesis.geometry_synthesis_sdf.b32.conv0.affine.bias", "synthesis.geometry_synthesis_sdf.b32.conv1.weight", "synthesis.geometry_synthesis_sdf.b32.conv1.noise_strength", "synthesis.geometry_synthesis_sdf.b32.conv1.bias", "synthesis.geometry_synthesis_sdf.b32.conv1.noise_const", "synthesis.geometry_synthesis_sdf.b32.conv1.affine.weight", "synthesis.geometry_synthesis_sdf.b32.conv1.affine.bias", "synthesis.geometry_synthesis_sdf.b32.skip.weight", "synthesis.geometry_synthesis_sdf.b32.skip.resample_filter", "synthesis.geometry_synthesis_sdf.layers.0.weight", "synthesis.geometry_synthesis_sdf.layers.0.bias", "synthesis.geometry_synthesis_sdf.layers.0.affine.weight", "synthesis.geometry_synthesis_sdf.layers.0.affine.bias", "synthesis.geometry_synthesis_sdf.layers.1.weight", "synthesis.geometry_synthesis_sdf.layers.1.bias", "synthesis.geometry_synthesis_sdf.layers.1.affine.weight", "synthesis.geometry_synthesis_sdf.layers.1.affine.bias", "synthesis.geometry_synthesis_sdf.layers.2.weight", "synthesis.geometry_synthesis_sdf.layers.2.bias", "synthesis.geometry_synthesis_sdf.layers.2.affine.weight", "synthesis.geometry_synthesis_sdf.layers.2.affine.bias", "synthesis.geometry_synthesis_sdf.layers.3.weight", "synthesis.geometry_synthesis_sdf.layers.3.bias", 
"synthesis.geometry_synthesis_sdf.layers.3.affine.weight", "synthesis.geometry_synthesis_sdf.layers.3.affine.bias", "synthesis.geometry_synthesis_def.b4.const", "synthesis.geometry_synthesis_def.b4.conv1.weight", "synthesis.geometry_synthesis_def.b4.conv1.noise_strength", "synthesis.geometry_synthesis_def.b4.conv1.bias", "synthesis.geometry_synthesis_def.b4.conv1.noise_const", "synthesis.geometry_synthesis_def.b4.conv1.affine.weight", "synthesis.geometry_synthesis_def.b4.conv1.affine.bias", "synthesis.geometry_synthesis_def.b8.conv0.weight", "synthesis.geometry_synthesis_def.b8.conv0.noise_strength", "synthesis.geometry_synthesis_def.b8.conv0.bias", "synthesis.geometry_synthesis_def.b8.conv0.noise_const", "synthesis.geometry_synthesis_def.b8.conv0.affine.weight", "synthesis.geometry_synthesis_def.b8.conv0.affine.bias", "synthesis.geometry_synthesis_def.b8.conv1.weight", "synthesis.geometry_synthesis_def.b8.conv1.noise_strength", "synthesis.geometry_synthesis_def.b8.conv1.bias", "synthesis.geometry_synthesis_def.b8.conv1.noise_const", "synthesis.geometry_synthesis_def.b8.conv1.affine.weight", "synthesis.geometry_synthesis_def.b8.conv1.affine.bias", "synthesis.geometry_synthesis_def.b8.skip.weight", "synthesis.geometry_synthesis_def.b8.skip.resample_filter", "synthesis.geometry_synthesis_def.b16.conv0.weight", "synthesis.geometry_synthesis_def.b16.conv0.noise_strength", "synthesis.geometry_synthesis_def.b16.conv0.bias", "synthesis.geometry_synthesis_def.b16.conv0.noise_const", "synthesis.geometry_synthesis_def.b16.conv0.affine.weight", "synthesis.geometry_synthesis_def.b16.conv0.affine.bias", "synthesis.geometry_synthesis_def.b16.conv1.weight", "synthesis.geometry_synthesis_def.b16.conv1.noise_strength", "synthesis.geometry_synthesis_def.b16.conv1.bias", "synthesis.geometry_synthesis_def.b16.conv1.noise_const", "synthesis.geometry_synthesis_def.b16.conv1.affine.weight", "synthesis.geometry_synthesis_def.b16.conv1.affine.bias", "synthesis.geometry_synthesis_def.b16.skip.weight", "synthesis.geometry_synthesis_def.b16.skip.resample_filter", "synthesis.geometry_synthesis_def.b32.conv0.weight", "synthesis.geometry_synthesis_def.b32.conv0.noise_strength", "synthesis.geometry_synthesis_def.b32.conv0.bias", "synthesis.geometry_synthesis_def.b32.conv0.noise_const", "synthesis.geometry_synthesis_def.b32.conv0.affine.weight", "synthesis.geometry_synthesis_def.b32.conv0.affine.bias", "synthesis.geometry_synthesis_def.b32.conv1.weight", "synthesis.geometry_synthesis_def.b32.conv1.noise_strength", "synthesis.geometry_synthesis_def.b32.conv1.bias", "synthesis.geometry_synthesis_def.b32.conv1.noise_const", "synthesis.geometry_synthesis_def.b32.conv1.affine.weight", "synthesis.geometry_synthesis_def.b32.conv1.affine.bias", "synthesis.geometry_synthesis_def.b32.skip.weight", "synthesis.geometry_synthesis_def.b32.skip.resample_filter", "synthesis.geometry_synthesis_def.layers.0.weight", "synthesis.geometry_synthesis_def.layers.0.bias", "synthesis.geometry_synthesis_def.layers.0.affine.weight", "synthesis.geometry_synthesis_def.layers.0.affine.bias", "synthesis.geometry_synthesis_def.layers.1.weight", "synthesis.geometry_synthesis_def.layers.1.bias", "synthesis.geometry_synthesis_def.layers.1.affine.weight", "synthesis.geometry_synthesis_def.layers.1.affine.bias", "synthesis.geometry_synthesis_def.layers.2.weight", "synthesis.geometry_synthesis_def.layers.2.bias", "synthesis.geometry_synthesis_def.layers.2.affine.weight", "synthesis.geometry_synthesis_def.layers.2.affine.bias", 
"synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.conv1.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.conv1.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.conv1.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.conv1.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.conv1.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.conv1.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.conv1.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.torgb.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.torgb.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.torgb.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b4.torgb.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv0.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv0.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv0.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv0.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv0.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv0.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv0.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv1.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv1.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv1.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv1.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv1.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv1.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.conv1.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.torgb.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.torgb.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.torgb.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b8.torgb.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv0.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv0.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv0.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv0.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv0.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv0.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv0.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv1.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv1.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv1.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv1.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv1.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv1.affine.weight", 
"synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.conv1.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.torgb.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.torgb.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.torgb.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b16.torgb.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv0.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv0.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv0.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv0.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv0.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv0.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv0.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv1.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv1.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv1.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv1.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv1.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv1.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.conv1.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.torgb.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.torgb.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.torgb.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b32.torgb.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv0.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv0.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv0.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv0.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv0.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv0.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv0.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv1.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv1.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv1.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv1.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv1.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv1.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.conv1.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.torgb.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.torgb.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.torgb.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b64.torgb.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv0.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv0.noise_strength", 
"synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv0.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv0.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv0.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv0.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv0.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv1.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv1.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv1.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv1.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv1.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv1.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.conv1.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.torgb.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.torgb.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.torgb.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b128.torgb.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv0.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv0.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv0.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv0.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv0.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv0.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv0.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv1.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv1.noise_strength", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv1.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv1.resample_filter", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv1.noise_const", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv1.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.conv1.affine.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.torgb.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.torgb.bias", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.torgb.affine.weight", "synthesis.geometry_synthesis_tex.tri_plane_synthesis.b256.torgb.affine.bias", "synthesis.geometry_synthesis_tex.mlp_synthesis.layers.0.weight", "synthesis.geometry_synthesis_tex.mlp_synthesis.layers.0.bias", "synthesis.geometry_synthesis_tex.mlp_synthesis.layers.0.affine.weight", "synthesis.geometry_synthesis_tex.mlp_synthesis.layers.0.affine.bias", "synthesis.geometry_synthesis_tex.mlp_synthesis.layers.1.weight", "synthesis.geometry_synthesis_tex.mlp_synthesis.layers.1.bias", "synthesis.geometry_synthesis_tex.mlp_synthesis.layers.1.affine.weight", "synthesis.geometry_synthesis_tex.mlp_synthesis.layers.1.affine.bias". 

Training on FFHQ?

Thanks for your great work. I see that you compare your results with EG3D in the paper and claim they are better on 3 datasets. My question is whether GET3D is also better than EG3D on human-face datasets like FFHQ or CelebA.

The camera intrinsics of the rendered data

Hi, thanks for your great work.
I am trying to find the camera intrinsics in the rendering scripts, but I could not find anything related to them.
Could you please give me a hint about how to determine the camera intrinsics in your rendering scripts?
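
Not an authoritative answer, but for a standard pinhole model the intrinsics follow from the render resolution and the field of view the Blender script sets (both values below are placeholders to read off from the script):

import numpy as np

W = H = 1024    # rendered image size (placeholder)
fov_deg = 50.0  # camera angle set in the rendering script (placeholder)
f = 0.5 * W / np.tan(0.5 * np.radians(fov_deg))
K = np.array([[f, 0.0, W / 2.0],
              [0.0, f, H / 2.0],
              [0.0, 0.0, 1.0]])
print(K)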

Question about setting up Blender

Hi, @SteveJunGao ,

According to the steps, the following two lines are just used to ensure that pip and numpy exist:

./python3.7m -m ensurepip
./python3.7m -m pip install numpy 

Do we need to specify the location of the bpy module, since it is needed when executing render_all.py?

When I execute

python render_all.py --save_folder /media/root/data2/ShapeNet/render_view --dataset_folder /media/root/data2/ShapeNet/ShapeNetCore.v1 --blender_root /root/Downloads/blender-2.90.0-linux64/2.90/python/bin

I got the following error:

sh: 1: /root/Downloads/blender-2.90.0-linux64/2.90/python/bin: Permission denied
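
One guess, based on the usage shown in the rendering issue above: render_all.py invokes Blender as an executable, so --blender_root probably needs to point at the blender binary itself rather than at its bundled Python directory ("Permission denied" is what the shell reports when asked to execute a directory). For example:

python render_all.py --save_folder /media/root/data2/ShapeNet/render_view --dataset_folder /media/root/data2/ShapeNet/ShapeNetCore.v1 --blender_root /root/Downloads/blender-2.90.0-linux64/blender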

Any hints on how to solve this issue?

Thanks~

fp32 = better output?

I'm currently training on V100s and run out of memory when setting fp32 to 1.

Would there be any quality benefit to upgrading to higher-memory GPUs and training this way with the unified generator?
