
RigNet's Introduction

This is the code repository implementing the paper "RigNet: Neural Rigging for Articulated Characters", published at SIGGRAPH 2020 [Project page].

[2023.02.17] About dataset: If you are from a research lab and interested in the dataset for non-commercial, research-only purposes, please send a request email to me at [email protected].

[2021.07.20] Another add-on for Blender, implemented by @L-Medici. Please check the Github link.

[2020.11.23] There is now a great add-on for Blender based on our work, implemented by @pKrime. Please check the Github link, and the video demo.

Dependency and Setup

The project was developed on Ubuntu 16.04 with CUDA 10.0 and cuDNN 7.6.3. It has also been successfully tested on Windows 10. On both platforms, we suggest using a conda virtual environment.

For Linux users

[2023.05.21] I have tested the code on Ubuntu 22.04 with CUDA 11.3. The following commands have been updated.

conda create --name rignet python=3.7
conda activate rignet
conda install pytorch==1.12.0 torchvision==0.13.0 cudatoolkit=11.3 -c pytorch

# load cuda_toolkit 11.3
export PATH=/usr/local/cuda_11.3/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda_11.3/lib64

# require g++ < 10 to install the following pytorch geometric version.
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-1.12.0+cu113.html # this takes a while
pip install torch-geometric==1.7.2

pip install numpy scipy matplotlib tensorboard open3d==0.9.0 opencv-python "rtree>=0.8,<0.9" trimesh[easy]  # Make sure to install open3d 0.9.0.
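After installing, a quick sanity check of the environment can save debugging later; a minimal sketch (expected versions follow from the commands above):

import torch
import torch_geometric

# Verify the versions installed by the commands above and that CUDA is visible.
print("torch:", torch.__version__)                      # expect 1.12.0
print("CUDA available:", torch.cuda.is_available())     # expect True with cudatoolkit 11.3
print("torch_geometric:", torch_geometric.__version__)  # expect 1.7.2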

For Windows users

The code has been tested on Windows 10 with CUDA 10.1. The most important difference from the Linux setup is that you need to download a Windows-compiled Rtree from here and install it with pip install Rtree-0.9.4-cp37-cp37m-win_amd64.whl (64-bit system) or pip install Rtree-0.9.4-cp37-cp37m-win32.whl (32-bit system). The other libraries can be installed the same way as in the Linux setup instructions.

Quick start

We provide a script for quick start. First download our trained models from here. Put the checkpoints folder into the project folder.

Check and run quick_start.py. We provide some examples in this script. Due to randomness, the results may differ slightly between runs. Generally, you will get results similar to the ones shown below:

results figure

If you want to try your own models, remember to simplify the meshes so that the remeshed ones have between 1K and 5K vertices. I use quadric edge collapse decimation in MeshLab for this. Please name the simplified meshes *_remesh.obj.
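A quick way to verify the vertex count before running the pipeline; a minimal sketch using the open3d version installed above (the path is a placeholder):

import open3d as o3d

# Check that a remeshed OBJ falls in the 1K-5K vertex range the networks expect.
mesh = o3d.io.read_triangle_mesh("quick_start/17872_remesh.obj")  # placeholder path
n_verts = len(mesh.vertices)
print("vertices:", n_verts)
if not (1000 <= n_verts <= 5000):
    print("Warning: simplify or subdivide the mesh to 1K-5K vertices first.")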

The predicted rigs are saved as *_rig.txt. You can combine the OBJ file and *_rig.txt into FBX format by running the provided maya_save_fbx.py in Maya using mayapy. (To use numpy in mayapy, download a Windows-compiled numpy from here and put it in the mayapy library folder. For example, mine is C:\Program Files\Autodesk\Maya2019\Python\Lib\site-packages.)

Data

Our dataset ModelsResource-RigNetv1 has 2,703 models. We split it into 80% for training (2,163 models), 10% for validation (270 models), and 10% for testing. All models in FBX format can be downloaded here.

To use this dataset in this project, some pre-processing is required. We provide the pre-processed data here; it consists of several sub-folders:

  • obj: all meshes in OBJ format.
  • rig_info: we store the rigging information in a txt file. Each txt file has four blocks. (1) Lines starting with "joint" define a joint and its 3D position; each joint line has four elements: joint_name, X, Y, and Z. (2) The line starting with "root" gives the name of the root joint. (3) Lines starting with "hier" define the skeleton hierarchy; each hierarchy line has two elements, a parent joint name and its child joint name, and one parent joint can have multiple children. (4) Lines starting with "skin" define the skinning weights; each skinning line follows the format vertex_id, bind_joint_name_1, bind_weight_1, bind_joint_name_2, bind_weight_2, ... The vertex_id follows the vertex order of the obj files in the obj folder above. (A parsing sketch follows this list.)
  • obj_remesh: the obj files of the remeshed models. Meshes with fewer than 1K vertices were subdivided, and those with more than 5K vertices were simplified; as a result, all training and test meshes contain between 1K and 5K vertices.
  • rig_info_remesh: rigging information files corresponding to the remeshed obj files. Joints, hierarchy, and root are the same; the skinning is recalculated based on the nearest neighbor from each remeshed vertex to the original vertices.
  • pretrain_attention: pre-calculated supervision used to pretrain the attention module, computed by the script geometric_proc/compute_pretrain_attn.py. Each file is an N-by-1 text file, where N is the number of vertices of the corresponding remeshed OBJ file; the i-th row stores the supervision for vertex i.
  • volumetric_geodesic: pre-calculated volumetric geodesic distances between each vertex-bone pair. The algorithm is an approximation, implemented in geometric_proc/compute_volumetric_geodesic.py. Each file is an N-by-B numpy array, where N is the number of vertices of the corresponding remeshed OBJ file and B is the number of bones; entry (i, j) stores the volumetric geodesic distance between vertex i and bone j.
  • vox: voxelized models used for inside/outside checks, obtained with binvox. The grid resolution is 88x88x88.
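To make the rig_info format concrete, here is a minimal parsing sketch (illustrative only; the repo ships its own loader):

def parse_rig_info(path):
    # Parse the four blocks described above: joints, root, hier, skin.
    joints, hier, skin, root = {}, [], [], None
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] in ("joint", "joints"):   # joint_name X Y Z
                joints[parts[1]] = tuple(map(float, parts[2:5]))
            elif parts[0] == "root":              # name of the root joint
                root = parts[1]
            elif parts[0] == "hier":              # parent_name child_name
                hier.append((parts[1], parts[2]))
            elif parts[0] == "skin":              # vertex_id (joint_name weight)*
                vid = int(parts[1])
                pairs = [(parts[i], float(parts[i + 1]))
                         for i in range(2, len(parts), 2)]
                skin.append((vid, pairs))
    return joints, root, hier, skin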

After downloading the pre-processed data, you need to create the data directly used for training/testing. Check and run our script:

python gen_dataset.py

Remember to change root_folder to the directory where you uncompressed the pre-processed data.
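Before running it, a small sanity check (a sketch; the root_folder path is a placeholder) that root_folder contains the sub-folders listed above:

import os

root_folder = "/path/to/ModelResource_RigNetv1_preproccessed"  # placeholder
expected = ["obj", "rig_info", "obj_remesh", "rig_info_remesh",
            "pretrain_attention", "volumetric_geodesic", "vox"]
missing = [d for d in expected if not os.path.isdir(os.path.join(root_folder, d))]
print("missing sub-folders:", missing or "none")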

Training

Note: we have made three improvements over the paper. (1) To train the joint prediction module, we now pretrain both the regression module and the attention module, and then fine-tune them together with differentiable clustering. (2) We optimized the hyper-parameters in the fine-tuning step. (3) The input feature for skinning now includes an extra dimension per bone (--Lf), indicating whether the bone is a virtual leaf bone. (To enable control from the end joints, we assume a virtual bone for them; please check the code for details, and see the illustrative sketch below.)
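To make improvement (3) concrete, a minimal illustrative sketch of appending a leaf-bone indicator to per-bone features; the array names, shapes, and indices are assumptions for illustration, not the repo's actual code:

import numpy as np

# Illustrative only: append a per-bone "virtual leaf bone" flag, as enabled by --Lf.
bone_feats = np.random.rand(20, 6).astype(np.float32)  # (num_bones, feat_dim), hypothetical
is_leaf = np.zeros((20, 1), dtype=np.float32)          # 1.0 marks a virtual leaf bone
is_leaf[[3, 7]] = 1.0                                  # hypothetical leaf-bone indices
bone_feats_lf = np.concatenate([bone_feats, is_leaf], axis=1)  # one extra dim per bone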

  1. Joint prediction:

    1.1 Pretrain regression module: python -u run_joint_pretrain.py --train_folder='DATASET_DIR/train/' --val_folder='DATASET_DIR/val/' --test_folder='DATASET_DIR/test/' --checkpoint='checkpoints/pretrain_jointnet' --logdir='logs/pretrain_jointnet' --train_batch=6 --test_batch=6 --lr 5e-4 --schedule 50 --arch='jointnet'

    1.2 Pretrain attention module: python -u run_joint_pretrain.py --train_folder='DATASET_DIR/train/' --val_folder='DATASET_DIR/val/' --test_folder='DATASET_DIR/test/' --checkpoint='checkpoints/pretrain_masknet' --logdir='logs/pretrain_masknet' --train_batch=6 --test_batch=6 --lr 1e-4 --schedule 50 --arch='masknet'

    1.3 Finetune two modules with a clustering module: python -u run_joint_finetune.py --train_folder='DATASET_DIR/train/' --val_folder='DATASET_DIR/val/' --test_folder='DATASET_DIR/test/' --checkpoint='checkpoints/gcn_meanshift' --logdir='logs/gcn_meanshift' --train_batch=1 --test_batch=1 --jointnet_lr=1e-6 --masknet_lr=1e-6 --bandwidth_lr=1e-6 --epoch=50

  2. Connectivity prediction

    2.1 BoneNet: python -u run_pair_cls.py --train_folder='DATASET_DIR/train/' --val_folder='DATASET_DIR/val/' --test_folder='DATASET_DIR/test/' --checkpoint='checkpoints/bonenet' --logdir='logs/bonenet' --train_batch=6 --test_batch=6 --lr=1e-3

    2.2 RootNet: python -u run_root_cls.py --train_folder='DATASET_DIR/train/' --val_folder='DATASET_DIR/val/' --test_folder='DATASET_DIR/test/' --checkpoint='checkpoints/rootnet' --logdir='logs/rootnet' --train_batch=6 --test_batch=6 --lr=1e-3

  3. Skinning prediction: python -u run_skinning.py --train_folder='DATASET_DIR/train/' --val_folder='DATASET_DIR/val/' --test_folder='DATASET_DIR/test/' --checkpoint='checkpoints/skinnet' --logdir='logs/skinnet' --train_batch=4 --test_batch=4 --lr=1e-4 --Dg --Lf

License

This project is released under the GPLv3 license (see LICENSE-GPLv3).

RigNet's People

Contributors

manuelkoester, vincentlcy, yzhou359, zhan-xu


RigNet's Issues

Cuda memory run out

Hello dear Zhan-xu,

Thank you for your great work.
I have an issue: during mixed training of the jointnet and masknet networks together, I get CUDA out of memory on my GeForce GTX 1080 (8 GB video RAM). My train and test batch sizes are 1. Was your video memory greater than 8 GB?
Please let me know what I can do in this case.

Best Regards,
Tigran

About comparison with Pinocchio

Hi, in your paper you compared against Pinocchio.
However, some of the meshes in your dataset won't work with the code they provided (https://github.com/pmolodo/Pinocchio).
(I see that some of the meshes are not the boundary of a connected volume, which is a restriction the Pinocchio authors mentioned.)
Can you share how you managed to make it work?

Loading pre-trained skinning models gives size mismatch error

I'm currently trying to load the skinning model via run_skinning.py using the checkpoints given in the README, but the parameter sizes appear to be mismatched. Is there any pre-processing or command I need to run in order to load from the checkpoint?

I've attached the error trace below.

=> loading checkpoint 'checkpoints/skinnet/model_best.pth.tar'
Traceback (most recent call last):
  File "run_skinning.py", line 248, in <module>
    main(parser.parse_args())
  File "run_skinning.py", line 109, in main
    model.load_state_dict(checkpoint['state_dict'])
  File "/home/ubuntu/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1044, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SKINNET:
        size mismatch for multi_layer_tranform1.0.0.weight: copying a param with shape torch.Size([128, 43]) from checkpoint, the shape in current model is torch.Size([128, 33]).
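A plausible cause, offered as an assumption rather than a confirmed answer: the README's training command for skinning enables the extra --Dg and --Lf input features, which widen the network's input, so a checkpoint trained that way must be loaded with the same flags, e.g. the Training-section command:

python -u run_skinning.py --train_folder='DATASET_DIR/train/' --val_folder='DATASET_DIR/val/' --test_folder='DATASET_DIR/test/' --checkpoint='checkpoints/skinnet' --logdir='logs/skinnet' --train_batch=4 --test_batch=4 --lr=1e-4 --Dg --Lf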

cmds.file(obj_name, o=True) RuntimeError

When I run maya_save_fbx.py using mayapy, I get this error:

pymel.internal.startup : WARNING : Maya startup file C:/Users/Administrator/Documents/maya\2019\prefs\userNamedCommands.mel does not exist
pymel.internal.startup : WARNING : Maya startup file C:/Users/Administrator/Documents/maya\2019\prefs\pluginPrefs.mel does not exist
smith
Traceback (most recent call last):
File "D:\OpenProject\RigNet-master\RigNet-master\maya_save_fbx.py", line 94, in
cmds.file(obj_name, o=True)
RuntimeError

env: Autodesk Maya 2019, Win10, the numpy you provided

I have run quick_start.py successfully.

[Open3D WARNING] Read OBJ failed: Cannot open file [quick_start/17872_remesh.obj]

Hi. I got the following error when I ran python quick_start.py

loading all networks...
     joint prediction network loaded.
     root prediction network loaded.
     connection prediction network loaded.
     skinning prediction network loaded.
creating data for model ID 17872
[Open3D WARNING] Read OBJ failed: Cannot open file [quick_start/17872_remesh.obj]

     gathering topological edges.
Traceback (most recent call last):
  File "quick_start.py", line 390, in <module>
    data, vox, surface_geodesic = create_single_data(mesh_filename)
  File "quick_start.py", line 56, in create_single_data
    tpl_e = get_tpl_edges(mesh_v, mesh_f).T
  File "/home/cloud/youtube/rignet/RigNet/gen_dataset.py", line 33, in get_tpl_edges
    edge_index = np.concatenate(edge_index, axis=0)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate

How can I fix this? Thank you~
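The traceback follows from the Open3D warning above: the OBJ could not be read, so the mesh is empty and get_tpl_edges receives no faces. A hedged defensive sketch one could run before create_single_data (path taken from the log above):

import os
import open3d as o3d

# Illustrative check: make sure the remeshed OBJ exists and actually loads,
# since a failed read yields an empty mesh and np.concatenate gets an empty list.
mesh_filename = "quick_start/17872_remesh.obj"
assert os.path.isfile(mesh_filename), "missing file: " + mesh_filename
mesh = o3d.io.read_triangle_mesh(mesh_filename)
assert len(mesh.vertices) > 0, "Open3D read failed or mesh is empty"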

The process killed in quick_start.py

Hi,

Thanks for your great work. I have a problem with quick_start.py: when I run it on Linux, it always gets killed like this:

loading all networks...
joint prediction network loaded.
root prediction network loaded.
connection prediction network loaded.
skinning prediction network loaded.
creating data for model ID 1347
gathering topological edges.
calculating surface geodesic matrix.
Killed

Has anyone encountered the same problem, and how can it be solved?
Thanks

The test result is not as good as expected on my OBJ file

I used Instant Meshes to remesh my OBJ file down to 5K vertices and tested it with quick_start.py; the result doesn't look as good as your example. I don't know whether the bandwidth and threshold parameters are set incorrectly. Do we need different parameters for different 3D models?
image

Issue running run_joint_pretrain.py

Dear authors,
Thanks for the great paper!
I have a question about training: could you please explain the file path of DATASET_DIR in detail? I have not used torch_geometric datasets before. Taking the training set as an example, should the pre-processed data provided in the README be organized as DATASET_DIR/train/raw/...txt, ...binvox?

Cannot install the dependencies for the virtual environment

Linux: I am just a user.
1. My CUDA version is 9.1, but I can successfully install and use torch 1.3.0+cu100. However, torch-scatter, torch-sparse, torch-cluster, etc. cannot be installed successfully:
ERROR: Failed building wheel for torch-scatte
ERROR: Command errored out with exit status 1: /home/xxx/anaconda3/envs/rignet/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-hzg0tdiq/torch-scatter/setup.py'"'"'; file='"'"'/tmp/pip-install-hzg0tdiq/torch-scatter/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-mwb3ncx3/install-record.txt --single-version-externally-managed --compile --install-headers /home/xxx/anaconda3/envs/rignet/include/python3.6m/torch-scatter Check the logs for full command output.
... ...
How can I solve this?
2. When I install the latest torch version 1.6.0, the latest torch-scatter etc. install successfully, but running quick_start.py gives this error:
RuntimeError: Detected that PyTorch and torch_scatter were compiled with different CUDA versions. PyTorch has CUDA version 10.2 and torch_scatter has CUDA version 0.0. Please reinstall the torch_scatter that matches your PyTorch install.

I hope I can find a solution :)

List index out of range in gen_dataset

Hi,

I am trying to run the script python gen_dataset.py and I'm getting this error:

python gen_dataset.py 
process ID 0
process ID 1
process ID 2
process ID 3
process ID 4
process ID 5
process ID 6
process ID 7
surface geodesic calculation: 6.02768874168396 seconds
surface geodesic calculation: 6.660515546798706 seconds
surface geodesic calculation: 6.463137149810791 seconds
surface geodesic calculation: 6.739922046661377 seconds
surface geodesic calculation: 7.609318971633911 seconds
surface geodesic calculation: 8.185250043869019 seconds
surface geodesic calculation: 9.591989040374756 seconds
surface geodesic calculation: 9.440694570541382 seconds
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/home/dhorka/miniconda3/envs/rignet/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/home/dhorka/miniconda3/envs/rignet/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
    return list(map(*args))
  File "gen_dataset.py", line 121, in genDataset
    skin = rig_info.joint_skin[vert_remesh_id]
IndexError: list index out of range
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "gen_dataset.py", line 166, in <module>
    p.map(genDataset, [0, 1, 2, 3, 4, 5, 6, 7])
  File "/home/dhorka/miniconda3/envs/rignet/lib/python3.8/multiprocessing/pool.py", line 364, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/dhorka/miniconda3/envs/rignet/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
IndexError: list index out of range

I have been following your instructions.

Thanks

Simplified Blender addon

Hello, my colleagues and I made a Blender addon as a project for one of our university exams. It's a really straightforward and fast setup that uses your RigNet and checkpoint models as selectable characters to rig. Hope you'll like it, and thank you for all your hard work; you do really cool stuff <3

https://github.com/L-Medici/Rignet_blender_addon

Train RigNet without skin prediction

Hello dear Zhan-Xu again,

I am trying another approach this time:

I want to train your network without skinning prediction.
For this I used your script to extract rig_info and obj from the original fbx files, but with the skinning parts commented out.
So I got the obj and rig_info files. I also couldn't find any automated command-line way to remesh my models, so I used the original models instead of the remeshed ones. Then I ran the pretrain_attn script, which went fine, but I cannot run compute_volumetric_geodesic.py: it stops on the first model id and RAM usage climbs to 32 GB while nothing happens.

Please let me know what you think about my approach, and whether you have any suggestions (for automated remeshing of the models, and for the non-working volumetric geodesic computation).

If I get it working, I'll report the results.

Thanks and Best Regards,
Tigran

Stuck in predicting connectivity when running quick_start.py

Hi,
when I run quick_start.py, it seems to get stuck at the connectivity-prediction step. I tried several models but failed to get past this step, as shown in the following picture. Could you please figure out what's wrong? Thank you very much.

Screenshot from 2021-04-23 16-29-41

Cannot install the dependencies for the virtual environment

I am using Windows 10 and running conda.

image

The first dependency did not work because of a missing '=': instead of open3d=0.9.0, it's supposed to be open3d==0.9.0.

The second dependency line cannot find those versions of torch and torchvision.

When I went through the link to try to find them myself, it's not clear which version I should get.

Please advise!


Edit:

If necessary, I can switch to Linux or a previous Windows environment to try to make this work.

If anyone has a working setup, I would love to know.

Maya alternative to producing rigs

Hello, thanks so much for building RigNet; it's very impressive! I was wondering if there is an alternative to Maya for combining the resulting .obj and .txt files into an .fbx. Is it possible to provide a script for Blender?

Memory crash from compute_volumetric_geodesic.py

Dear zhan-xu, I tried to run compute_volumetric_geodesic.py on my own dataset (much like the preprocessed folder you provide), but here is my issue: Unable to allocate 76.2 GiB for an array with shape (1136630306, 3, 3)

Traceback (most recent call last):
  File "compute_volumetric_geodesic.py", line 173, in <module>
    one_process(dataset_folder, start_id, end_id)
  File "compute_volumetric_geodesic.py", line 130, in one_process
    pts_bone_visibility = calc_pts2bone_visible_mat(mesh_ori, origins, ends)
  File "compute_volumetric_geodesic.py", line 64, in calc_pts2bone_visible_mat
    locations, index_ray, index_tri = RayMeshIntersector.intersects_location(origins, ray_dir + 1e-15)
  File "C:\Anaconda3\envs\rigger\lib\site-packages\trimesh\ray\ray_triangle.py", line 107, in intersects_location
    **kwargs)
  File "C:\Anaconda3\envs\rigger\lib\site-packages\trimesh\ray\ray_triangle.py", line 66, in intersects_id
    triangles_normal=self.mesh.face_normals)
  File "C:\Anaconda3\envs\rigger\lib\site-packages\trimesh\ray\ray_triangle.py", line 244, in ray_triangle_id
    triangle_candidates = triangles[ray_candidates]
numpy.core._exceptions.MemoryError: Unable to allocate **76.2 GiB** for an array with shape (1136630306, 3, 3) and data type float64

Problem when running run_joint_pretrain.py

Hello author, I have read your paper and I really think it's great work. I downloaded the pre-processed data and ran gen_dataset.py. When I run run_joint_pretrain.py, I get the error "RuntimeError: cannot perform reduction function min on tensor with no elements because the operation does not have an identity". Here is the full message:
Namespace(aggr='max', arch='jointnet', checkpoint='checkpoints/pretrain_jointnet', epochs=100, evaluate=False, gamma=0.2, input_normal=False, logdir='logs/pretrain_jointnet', lr=0.0005, resume='', schedule=[50], start_epoch=0, test_batch=6, test_folder='/home/lab505/rignet/RigNet-master/data/ModelResource_RigNetv1_preproccessed/test/', train_batch=6, train_folder='/home/lab505/rignet/RigNet-master/data/ModelResource_RigNetv1_preproccessed/train/', val_folder='/home/lab505/rignet/RigNet-master/data/ModelResource_RigNetv1_preproccessed/val/', weight_decay=0.0001)
Total params: 3.83M

Epoch: 1 | LR: 0.00050000
Traceback (most recent call last):
  File "run_joint_pretrain.py", line 212, in <module>
    main(parser.parse_args())
  File "run_joint_pretrain.py", line 94, in main
    train_loss = train(train_loader, model, optimizer, args)
  File "run_joint_pretrain.py", line 138, in train
    loss += chamfer_distance_with_average(y_pred_i.unsqueeze(0), joint_gt.unsqueeze(0))
  File "/home/lab505/rignet/RigNet-master/models/supplemental_layers/pytorch_chamfer_dist.py", line 22, in chamfer_distance_with_average
    dist1 = torch.min(dist_norm, dim=1)[0]
RuntimeError: cannot perform reduction function min on tensor with no elements because the operation does not have an identity

ModelResource_RigNetv1_preproccessed/train/1095_v.txt not found.

Hello, dear Zhan-Xu,

I'm trying to run Skinning prediction with this command python -u run_skinning.py --train_folder='ModelResource_RigNetv1_preproccessed/train/' --val_folder='ModelResource_RigNetv1_preproccessed/val/' --test_folder='ModelResource_RigNetv1_preproccessed/test/' --checkpoint='checkpoints/skinnet' --logdir='logs/skinnet' --train_batch=4 --test_batch=4 --lr=1e-4 --Dg --Lf

However, I got the error "OSError: ModelResource_RigNetv1_preproccessed/train/raw/ModelResource_RigNetv1_preproccessed/train/1095_v.txt not found."
I'm sure that the file 1095_v.txt exists at ModelResource_RigNetv1_preproccessed/train/.
Can you please provide some suggestions?

Thanks

Why remesh the model?

I noticed that the preprocessed data you released has a remeshed obj version, which is used to calculate geodesics, among other things. I want to know why remeshing is needed and how to do it.

Thanks

Is it possible to process A-pose mesh

Thanks for the provided code.
I have some environment problems and haven't successfully run it yet.
I see your input meshes are all in T-pose; is it possible to process meshes in other poses, like A-pose? What would the result be?

Too slow preprocessing the volumetric geodesic distances

Hello! Thanks for sharing your great work.
I want to apply this framework to my dataset, but processing the volumetric geodesic distances is too slow.
I used your code (geometric_proc/compute_volumetric_geodesic.py) as-is, and it seems to take 10 to 40 minutes per obj.
It also requires a lot of RAM, over 100 GB for one obj.
The numbers above are not averaged, but it seems too slow for processing my dataset.
My dataset includes some complex shapes with up to 12,000 vertices, but most shapes in the dataset have under 4K vertices, similar to your dataset.
Is that processing time normal, or am I missing something?

Preprocess data

Dear Zhan-xu, thank you for your huge work.
Could you provide some insight on how to generate rigging joints and skinning data the way you have in .txt format?

joints ankle_r -0.11857700 0.06815820 -0.27102600
joints tail_4 -0.00000002 0.31470800 -0.34982900
joints collar -0.00000000 0.28504700 0.02488300
joints face -0.00000000 0.31088600 0.05898700
joints arm_l1 0.11882800 0.20566000 -0.00539320

skin 10 mouth_r 1.0000
skin 11 mouth 0.4000 mouth_r 0.6000
skin 12 mouth 0.7000 mouth_r 0.3000
skin 13 mouth 0.7000 mouth_r 0.3000
skin 14 mouth 0.7000 mouth_r 0.3000
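For illustration only (this is not the repo's exporter), a sketch that writes joints/skin lines in the quoted format; the joint positions and weights below are placeholders:

# Illustrative writer for the rig txt format quoted above.
joints = {"ankle_r": (-0.118577, 0.0681582, -0.271026)}                  # placeholder data
skin = {10: [("mouth_r", 1.0)], 11: [("mouth", 0.4), ("mouth_r", 0.6)]}  # placeholder data

with open("example_rig.txt", "w") as f:
    for name, (x, y, z) in joints.items():
        f.write("joints {:s} {:.8f} {:.8f} {:.8f}\n".format(name, x, y, z))
    for vid, pairs in sorted(skin.items()):
        weights = " ".join("{:s} {:.4f}".format(j, w) for j, w in pairs)
        f.write("skin {:d} {:s}\n".format(vid, weights))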

RuntimeError: repeats must have the same size as input along dim

Hello~ Thanks for making this great repo.
We are currently trying to use RigNet to predict on our own obj model. We used Instant Meshes to reduce the vertex count, then fed the remeshed obj to quick_start.py to predict. However, we encountered this error:


loading all networks...
joint prediction network loaded.
root prediction network loaded.
connection prediction network loaded.
skinning prediction network loaded.
creating data for model ID man
gathering topological edges.
calculating surface geodesic matrix.
surface geodesic calculation: 7.219671726226807 seconds
gathering geodesic edges.
(5248, 5248)
in here

--- [binvox] mesh voxelizer, version 1.29, build #681 on 2020/01/29 09:22:12
--- written by Patrick Min, 2004-2020

Windows does not allow widths below 116, increasing dimension to 176 and performing -down as needed
Note: setting -dmin to 1
loading model file...
MeshFileIdentifier::*create_mesh_file(quick_start/man_normalized.obj)
ObjMeshFile::load(quick_start/man_normalized.obj)
material library: man_normalized.mtl
ignoring [usemtl man_normalized]
15757 lines read
Read 10502 faces, 5248 vertices.
Mesh memory use is 16032 KB (15 MB)
Mesh::normalize, bounding box: [-0.37032, 0, -0.084787, 1] - [0.37032, 1, 0.084787, 1]
longest length: 1
normalization transform:
(1) translate [0.37032, -0, 0.084787, 1], (2) scale 1, (3) translate [0, 0, 0]
got mesh bounding box [-0.37032, 0, -0.084787, 1] - [0.37032, 1, 0.084787, 1]

voxel model dimension: 176
GLwindow::init_lighting
Graphics card and driver information:
Vendor: NVIDIA Corporation
Renderer: GeForce RTX 2060 SUPER/PCIe/SSE2
Version: 4.6.0 NVIDIA 445.87
Voxels::init(176, 176, 176, 0)
Voxels::clear
Voxelizer constructor
SimpleMeshView constructor
starting voxelization...
Voxelizer::carve_voxelize
doing x direction
doing y direction
doing z direction
Voxelizer::fill_voxels
Voxels::clear
Voxelizer::parity_vote_voxelize
doing x direction
doing y direction
doing z direction
Voxels::process_votes
Voxelizer::voxelize took 17.671 seconds
Voxels::downsample
Voxels::print_filled_area
Voxels::compute_bounding_box
area filled: 65 x 15 x 88
integer bounding box: [0,0,0] - [64,14,87]
Voxels::print_count
counted 9790 set voxels out of 681472

writing voxel file...
VoxelFile::write_file(quick_start/man_normalized.binvox)
Wrote 9790 set voxels out of 681472, in 7908 bytes
done

predicting joints
predicting connectivity
Traceback (most recent call last):
  File "quick_start.py", line 440, in <module>
    mesh_filename=mesh_filename.replace("_remesh.obj", "_normalized.obj"))
  File "quick_start.py", line 187, in predict_skeleton
    connect_prob, _ = bone_pred_net(input_data, permute_joints=False)
  File "C:\Users\h\Miniconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Workspace\RigNet\models\PairCls_GCN.py", line 112, in forward
    joint_feature = torch.repeat_interleave(joint_feature, torch.bincount(data.pairs_batch), dim=0)
RuntimeError: repeats must have the same size as input along dim


Could you please take a look at what might be going wrong? Thank you

Use CUDA 11

I have a lot of trouble using the 3000 series gpu because CUDA 11 is required.

Upgrading to CUDA 11 on this repo failed for me.

Issue running quick_start.py, The specified procedure could not be found

I'm using 64-bit Windows. When I try to run quick_start.py, I get this error message pop-up and this traceback:
error quick start

(rignet_env) C:\Users\csviv\Desktop\github\RigNet>python quick_start.py
Traceback (most recent call last):
  File "quick_start.py", line 17, in <module>
    from torch_geometric.data import Data
  File "C:\Users\csviv\Anaconda3\envs\rignet_env\lib\site-packages\torch_geometric\__init__.py", line 2, in <module>
    import torch_geometric.nn
  File "C:\Users\csviv\Anaconda3\envs\rignet_env\lib\site-packages\torch_geometric\nn\__init__.py", line 2, in <module>
    from .data_parallel import DataParallel
  File "C:\Users\csviv\Anaconda3\envs\rignet_env\lib\site-packages\torch_geometric\nn\data_parallel.py", line 5, in <module>
    from torch_geometric.data import Batch
  File "C:\Users\csviv\Anaconda3\envs\rignet_env\lib\site-packages\torch_geometric\data\__init__.py", line 1, in <module>
    from .data import Data
  File "C:\Users\csviv\Anaconda3\envs\rignet_env\lib\site-packages\torch_geometric\data\data.py", line 7, in <module>
    from torch_sparse import coalesce, SparseTensor
  File "C:\Users\csviv\Anaconda3\envs\rignet_env\lib\site-packages\torch_sparse\__init__.py", line 13, in <module>
    library, [osp.dirname(__file__)]).origin)
  File "C:\Users\csviv\Anaconda3\envs\rignet_env\lib\site-packages\torch\_ops.py", line 105, in load_library
    ctypes.CDLL(path)
  File "C:\Users\csviv\Anaconda3\envs\rignet_env\lib\ctypes\__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 127] The specified procedure could not be found

Preprocessing join path problem on Windows

Thank you for your huge amount of work.

In gen_dataset.py, line 143 reads: with open(os.path.join(dataset_folder, '{:s}/{:d}_skin.txt').format(split_name, model_id), 'w') as fout:.
The .format is placed outside the join function. On Windows, Python treats the '{:' as a drive name, so the first dataset_folder component is dropped. Putting the .format directly after the string fixes this problem; see the sketch below.
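Concretely, the difference looks like this (a sketch with placeholder values; the buggy line mirrors the one quoted above):

import os

dataset_folder, split_name, model_id = "data", "train", 1095  # placeholders

# Buggy (as reported): .format runs after os.path.join, so on Windows
# ntpath.join sees '{:' as a drive spec and drops dataset_folder.
bad = os.path.join(dataset_folder, '{:s}/{:d}_skin.txt').format(split_name, model_id)

# Fixed: format the relative path first, then join.
good = os.path.join(dataset_folder, '{:s}/{:d}_skin.txt'.format(split_name, model_id))

print(bad)
print(good)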

Thank You

Mismatched number of arguments Quick_start problem

loading all networks...
joint prediction network loaded.
root prediction network loaded.
connection prediction network loaded.
skinning prediction network loaded.
creating data for model ID 17872
gathering topological edges.
calculating surface geodesic matrix.
surface geodesic calculation: 6.194427251815796 seconds
gathering geodesic edges.
predicting joints
predicting connectivity
Traceback (most recent call last):
  File "/home/wonder/Documents/RigNet/quick_start.py", line 434, in <module>
    mesh_filename=mesh_filename.replace("_remesh.obj", "_normalized.obj"))
  File "/home/wonder/Documents/RigNet/quick_start.py", line 183, in predict_skeleton
    root_id = getInitId(input_data, root_pred_net)
  File "/home/wonder/Documents/RigNet/mst_generate.py", line 105, in getInitId
    root_prob, _ = model(data, shuffle=False)
  File "/home/wonder/anaconda3/envs/rigrig/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/wonder/Documents/RigNet/models/ROOT_GCN.py", line 140, in forward
    joint_feature = self.joint_encoder(torch.abs(joints_norepeat[:, 0:1]), joints_norepeat, joints_batch)
  File "/home/wonder/anaconda3/envs/rigrig/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/wonder/Documents/RigNet/models/ROOT_GCN.py", line 98, in forward
    sa1_joint = self.sa1_joint(*sa0_joint)
  File "/home/wonder/anaconda3/envs/rigrig/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/wonder/Documents/RigNet/models/ROOT_GCN.py", line 25, in forward
    max_num_neighbors=64)
  File "/home/wonder/anaconda3/envs/rigrig/lib/python3.6/site-packages/torch_geometric/nn/pool/__init__.py", line 172, in radius
    num_workers)
TypeError: radius() takes from 3 to 6 positional arguments but 7 were given

Process finished with exit code 1

Hi, I am trying to run the quick_start demo, but I hit the mismatched-number-of-arguments problem shown above; can you please assist?

Issue running quick_start.py

I get the following error when trying to run quick_start.py

root@a2349b494f3e:/workspace/RigNet# python3 quick_start.py
Traceback (most recent call last):
  File "quick_start.py", line 31, in <module>
    from run_skinning import post_filter
  File "/workspace/RigNet/run_skinning.py", line 26, in <module>
    from datasets.skin_dataset import SkinDataset
ModuleNotFoundError: No module named 'datasets'

I've installed all the Python modules listed in the README. Is there another that needs to be installed? Or a missing directory in the repository?

quick_start/17872_normalized.obj: Permission denied

Thanks for this project!
I got an error when I ran python quick_start.py:

loading all networks...
joint prediction network loaded.
root prediction network loaded.
connection prediction network loaded.
skinning prediction network loaded.
creating data for model ID 17872
gathering topological edges.
calculating surface geodesic matrix.
surface geodesic calculation: 9.081551790237427 seconds
gathering geodesic edges.
sh: 1: quick_start/17872_normalized.obj: Permission denied
Traceback (most recent call last):
  File "quick_start.py", line 440, in <module>
    data, vox, surface_geodesic, translation_normalize, scale_normalize = create_single_data(mesh_filename)
  File "quick_start.py", line 103, in create_single_data
    with open(mesh_filaname.replace('_remesh.obj', '_normalized.binvox'), 'rb') as fvox:
FileNotFoundError: [Errno 2] No such file or directory: 'quick_start/17872_normalized.binvox'

Is there any solution?

RuntimeError: There were no tensor arguments to this function - torch.cat(joints_shuffle)

Dear Zhan-Xu, all the training code has been successfully executed but here is another issue when testing quick_start.py.
I assume it is because of run_root_cls.py, but model_best.pth.tar has no issues during execution, besides printing the test_loss for the last checkpoint specified.

Traceback (most recent call last): File "quick_start.py", line 449, in <module> mesh_filename=mesh_filename.replace("_remesh.obj", "_normalized.obj")) File "quick_start.py", line 190, in predict_skeleton root_id = getInitId(input_data, root_pred_net) File "C:\JavaTemp\RigNet\mst_generate.py", line 105, in getInitId root_prob, _ = model(data, shuffle=False) File "C:\Users\alfre\anaconda3\envs\anewenv\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\JavaTemp\RigNet\models\ROOT_GCN.py", line 130, in forward joints_shuffle = torch.cat(joints_shuffle, dim=0) RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPU, CUDA, QuantizedCPU, Autograd, Profiler, Tracer, Autocast]

Evaluation setting about parameters

Hello~! I have a question about your paper.
In quick_start.py, we can control the level of detail of the rig with different bandwidths.
I also empirically found that a slight difference in the threshold value can change the rigging results quite a lot.
So my question is: how should we set the bandwidth and threshold values when we want to evaluate quantitatively?
I think the bandwidth can be the value learnt from the training set, but I have doubts about setting the threshold to 0.
Is there no problem with that? And how did you set the threshold value?

Is the visualization based on Open3d or Maya?

Hi, thanks for sharing this great work! When trying to reproduce the results, I wonder whether the visualization of the skeleton & joints is done with Open3D or Maya?
For example, is this shown in Open3D?
image
Because I'm using a headless server, I'm not sure whether I need to transfer the results into Open3D functions or a Maya scene.

Where is the "randomness" from?

1. In the Quick Start section, you mentioned that due to randomness the results might differ slightly between runs. I'm wondering where the "randomness" comes from. Is it from dropout? (I think dropout should only be applied during training.)

2. Another question is about the vertex count of input models. I tested a model with about 10K vertices, which seems to work well. If there is a model with several hundred thousand vertices, can we use it directly (setting aside hardware limits on memory and time)? I know you first simplify to 1K-5K and then use other techniques (such as nearest neighbor) to get the high-resolution result.

Thanks.

wooooooo

image

Traceback (most recent call last): File "D:/python/pytorch/RigNet-master/gen_dataset.py", line 165, in <module> p.map(genDataset, [0, 1, 2, 3, 4, 5, 6, 7]) File "D:\Anaconda3\envs\pytorch\lib\multiprocessing\pool.py", line 268, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "D:\Anaconda3\envs\pytorch\lib\multiprocessing\pool.py", line 657, in get raise self._value NameError: name 'dataset_folder' is not defined

What does this mean?

The .exe file cannot be opened and cannot be installed.
Also, can the train and test folders be created directly under the root directory?

quick_start Running problem

'.' is not recognized as an internal or external command, operable program or batch file.
Traceback (most recent call last):
  File "D:/python/pytorch/RigNet-master/quick_start.py", line 425, in <module>
    data, vox, surface_geodesic, translation_normalize, scale_normalize = create_single_data(mesh_filename)
  File "D:/python/pytorch/RigNet-master/quick_start.py", line 95, in create_single_data
    with open(mesh_filaname.replace('_remesh.obj', '_normalized.binvox'), 'rb') as fvox:
FileNotFoundError: [Errno 2] No such file or directory: 'quick_start/8210_normalized.binvox'

Can anyone give me some guidance? After a long time, this is still the result.

How to pre-process the data and do you have plan to release the code?

Hello,

I noticed the data used in this project is pre-processed, and the processed version is released together with the original data (in fbx format). You also mentioned why we need the re-mesh process in #7.

Is there any plan to release the data pre-processing code? I'd like to try the method on mesh models obtained from other sources (also in fbx format). Should we convert the fbx to obj first and use binvox to remesh the model?

Thanks.
