
Deep-motion-editing

Python Pytorch Blender

This library provides fundamental and advanced functions for working with 3D character animation in deep learning with PyTorch. The code contains end-to-end modules, from reading and editing animation files to visualizing and rendering them (using Blender).

The main deep editing operations provided here, motion retargeting and motion style transfer, are based on two works published in SIGGRAPH 2020:

Skeleton-Aware Networks for Deep Motion Retargeting: Project | Paper | Video


Unpaired Motion Style Transfer from Video to Animation: Project | Paper | Video


This library is written and maintained by Kfir Aberman, Peizhuo Li and Yijia Weng. The library is still under development.

Prerequisites

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Quick Start

We provide pretrained models together with demo examples that use animation files in BVH format.

Motion Retargeting

Download and extract the test dataset from Google Drive or Baidu Disk (extraction code: ye1q). Then place the Mixamo directory within retargeting/datasets.

To generate the demo examples with the pretrained model, run

cd retargeting
sh demo.sh

The results will be saved in retargeting/examples.

To reproduce the quantitative results with the pretrained model, run

cd retargeting
python test.py

The retargeted demo results, which include both intra-structural and cross-structural retargeting, will be saved in retargeting/pretrained/results.

Motion Style Transfer

To generate the demo examples, simply run

sh style_transfer/demo.sh

The results will be saved in style_transfer/demo_results, where each folder contains the raw output raw.bvh and the output after footskate clean-up fixed.bvh.

Train from scratch

We provide instructions for retraining our models.

Motion Retargeting

Dataset

We use the Mixamo dataset to train our model. You can download our preprocessed data from Google Drive or Baidu Disk (extraction code: 4rgv). Then place the Mixamo directory within retargeting/datasets.

Otherwise, if you want to download the Mixamo dataset or use your own dataset, please follow the instructions below. Unless specifically mentioned, all scripts should be run in the retargeting directory.

  • To download Mixamo data on your own, you can refer to this good tutorial. You will need to download the animations as FBX files (skin is not required) and make a subdirectory for each character in retargeting/datasets/Mixamo (see the layout sketch after this list). In our original implementation we downloaded 60 fps FBX files and downsampled them to 30 fps. Since training is unpaired, it is recommended to divide all motions into two equal-size sets for each group, with equal-size sets for each character in each group. If you use your own data, you need to make sure that your dataset consists of BVH files sharing the same T-pose. You should also put your dataset in subdirectories of retargeting/datasets/Mixamo.

  • Enter the retargeting/datasets directory and run blender -b -P fbx2bvh.py to convert the FBX files to BVH files. If your dataset already consists of BVH files, please skip this step.

  • In our original implementation, we manually split three joints for the skeletons in group A. If you want to follow our routine, run python datasets/split_joint.py. This step is optional.

  • Run python datasets/preprocess.py to simplify the skeleton by removing some less interesting joints (e.g., fingers) and to convert the BVH files into npy files. If you use your own data, you'll need to define the simplified structure in retargeting/datasets/bvh_parser.py. This information is currently hard-coded; see the comments in the source file, which describe the four steps needed to make your own dataset work.

  • Training and testing characters are hard-coded in retargeting/datasets/__init__.py. You'll need to modify it if you want to use your own dataset.
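
For reference, a dataset layout consistent with the instructions above might look like the sketch below. The character and motion names are illustrative (Aj, BigVegas and Dancing Running Man.bvh appear in the demo); your own characters go in the same kind of per-character subdirectories:

    retargeting/datasets/Mixamo/
    ├── Aj/
    │   ├── Dancing Running Man.bvh
    │   └── ...
    ├── BigVegas/
    │   ├── Dancing Running Man.bvh
    │   └── ...
    └── ...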

Train

After preparing the dataset, simply run

cd retargeting
python train.py --save_dir=./training/

It will use default hyper-parameters to train the model and save the trained model in the retargeting/training directory. More options are available in retargeting/option_parser.py. You can use TensorBoard to monitor the training progress by running

tensorboard --logdir=./retargeting/training/logs/

Motion Style Transfer

Dataset

  • Download the dataset from Google Drive or Baidu Drive (extraction code: zzck). The dataset consists of two parts: one is taken from the motion style transfer dataset proposed by Xia et al. and the other is our BFA dataset. Both parts contain .bvh files retargeted to the standard skeleton of the CMU mocap dataset.

  • Extract the .zip files into style_transfer/data.

  • Pre-process data for training:

    cd style_transfer/data_proc
    sh gen_dataset.sh

    This will produce xia.npz, bfa.npz in style_transfer/data.
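
As a quick sanity check, you can inspect the produced archives with numpy. This is a hypothetical snippet, not part of the repository; the array keys inside the .npz files depend on the preprocessing:

    import numpy as np

    # List the arrays that gen_dataset.sh wrote into the archive.
    data = np.load('style_transfer/data/xia.npz', allow_pickle=True)
    print(data.files)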

Train

After downloading the dataset, simply run

python style_transfer/train.py

Style from videos

To run our models in test time with your own videos, you first need to use OpenPose to extract the 2D joint positions from the video, then use the resulting JSON files as described in the demo examples.
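
As a rough illustration of the expected input, OpenPose writes one JSON file per frame, each containing a "people" list whose entries hold "pose_keypoints_2d" as a flat [x, y, confidence, ...] list. The helper below is illustrative and not part of this repository; it merely stacks such files into a single array:

    import json
    from pathlib import Path

    import numpy as np

    def load_openpose_dir(json_dir):
        """Stack per-frame OpenPose JSON files into a (frames, joints, 2) array."""
        frames = []
        for path in sorted(Path(json_dir).glob('*_keypoints.json')):
            with open(path) as f:
                people = json.load(f)['people']
            if not people:              # no detection in this frame: reuse the last pose
                if frames:
                    frames.append(frames[-1])
                continue
            kp = np.asarray(people[0]['pose_keypoints_2d'], dtype=np.float32)
            frames.append(kp.reshape(-1, 3)[:, :2])   # drop the confidence column
        return np.stack(frames)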

Blender Visualization

We provide a simple wrapper of Blender's Python API (2.80) for rendering 3D animations.

Prerequisites

The Blender releases distributed from blender.org include a complete Python installation on all platforms, which means that any extensions you have installed in your system's Python won't appear in Blender.

To use external Python libraries, you can install new packages directly into Blender's Python distribution (see the example after these steps). Alternatively, you can replace the default Blender Python interpreter as follows:

  1. Remove the built-in Python directory: [blender_path]/2.80/python.

  2. Make a symbolic link to (or simply copy) a Python interpreter at [blender_path]/2.80/python, e.g. ln -s ~/anaconda3/envs/env_name [blender_path]/2.80/python

This interpreter should be Python 3.7.x and must include at least numpy and scipy.
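
For the first option (installing packages into the bundled distribution), a hypothetical session on a Linux build of Blender 2.80 might look like the following; the bundled interpreter's path and binary name vary by platform and version:

    cd [blender_path]/2.80/python/bin
    ./python3.7m -m ensurepip
    ./python3.7m -m pip install numpy scipy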

Usage

Arguments

Because of how Blender handles command-line arguments, the argument list for the script must be separated from the Python file with an extra '--', for example:

blender -P render.py -- --arg1 [ARG1] --arg2 [ARG2]

engine: "cycles" or "eevee". Please refer to Render section for more details.

render: 0 or 1. If set to 1, the data will be rendered outside blender's GUI. It is recommended to use render = 0 in case you need to manually adjust the camera.

The full parameters list can be displayed by: blender -P render.py -- -h
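
Inside a script run this way, the arguments after '--' can be recovered by slicing sys.argv before handing them to argparse. This is a minimal sketch of the pattern, assuming only the two options described above; the actual option list of render.py is longer:

    import sys
    import argparse

    # Blender consumes the arguments before '--'; everything after it is ours.
    argv = sys.argv[sys.argv.index('--') + 1:] if '--' in sys.argv else []

    parser = argparse.ArgumentParser()
    parser.add_argument('--engine', type=str, default='eevee')  # "cycles" or "eevee"
    parser.add_argument('--render', type=int, default=1)        # 1: render outside the GUI
    args = parser.parse_args(argv)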

Load bvh File (load_bvh.py)

To load example.bvh, run blender -P load_bvh.py. Please complete the preparation above first.

Note that currently it uses primitive_cone with 5 vertices for limbs.

Note that Blender and BVH files use different xyz coordinate systems: in a BVH file the "height" axis is the y-axis, while in Blender it is the z-axis. load_bvh.py swaps the axes in the BVH_file class initialization function.
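
A minimal sketch of that conversion, assuming joint offsets stored as a numpy array with (x, y, z) in the last dimension (the function name is illustrative, not from the source):

    import numpy as np

    def bvh_to_blender(offsets: np.ndarray) -> np.ndarray:
        """Permute (x, y, z) to (z, x, y) so the BVH y-up "height" axis
        ends up on Blender's z-up axis."""
        out = np.empty_like(offsets)
        out[..., 0] = offsets[..., 2]
        out[..., 1] = offsets[..., 0]
        out[..., 2] = offsets[..., 1]
        return out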

Currently all the End Sites in the BVH file are discarded; this is due to the outside code used in utils/.

After loading the BVH file, its height is normalized to 10.

Material, Texture, Light and Camera (scene.py)

This script adds a checkerboard floor, a camera, and a "sun" light to the scene, and applies a basic color material to the character.

The floor is placed at y=0 and may need manual correction, depending on the character parameters in the BVH file.

Rendering

We support the two render engines provided in Blender 2.80, Eevee and Cycles, where the trade-off is between speed and quality.

Eevee is a fast, real-time render engine that provides limited quality, while Cycles is a slower, unbiased, ray-tracing render engine that provides photo-level rendering results. Cycles also supports CUDA and OpenCL acceleration.
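
For example, using the arguments described above, a Cycles render outside the GUI could be launched with:

    blender -P render.py -- --engine cycles --render 1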

Skinning

Automatic Skinning

We provide a Blender script that applies "skinning" to the output skeletons. You first need to download the FBX file that corresponds to the targeted character (for example, "mousey"). Then, you can get a skinned animation by simply running

blender -P blender_rendering/skinning.py -- --bvh_file [bvh file path] --fbx_file [fbx file path]

Note that the script might not work well for all FBX and BVH files. If it fails, you can try to tweak the script or follow the manual skinning guideline below.

Manual Skinning

Here we provide a "quick and dirty" guideline for applying skin to the resulting BVH files with Blender:

  • Download the FBX file that corresponds to the retargeted character (for example, "mousey")
  • Import the FBX file into Blender (uncheck the "import animation" option)
  • Merge the meshes: select all the parts and merge them (Ctrl+J)
  • Import the retargeted BVH file
  • Click "context" (menu bar) -> "Rest Position" (under skeleton)
  • Manually align the mesh and the skeleton (rotation + translation)
  • Select the skeleton and the mesh (the skeleton object should be highlighted)
  • Click Object -> Parent -> With Automatic Weights (or Ctrl+P)

Now the skeleton and the skin are bound and the animation can be rendered.

Acknowledgments

The code in the utils directory is mostly taken from Holden et al. [2016].
In addition, part of the MoCap dataset is taken from Adobe Mixamo and from the work of Xia et al.

Citation

If you use this code for your research, please cite our papers:

@article{aberman2020skeleton,
  author = {Aberman, Kfir and Li, Peizhuo and Sorkine-Hornung, Olga and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Skeleton-Aware Networks for Deep Motion Retargeting},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {62},
  year = {2020},
  publisher = {ACM}
}

and

@article{aberman2020unpaired,
  author = {Aberman, Kfir and Weng, Yijia and Lischinski, Dani and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Unpaired Motion Style Transfer from Video to Animation},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {64},
  year = {2020},
  publisher = {ACM}
}


deep-motion-editing's Issues

Is it necessary to use "paired" data for training when eliminating the adversarial loss of GAN?

Hi Peizhuo,

Thanks for the guidance these days, I've reproduced part of your work (https://github.com/crissallan/Skeleton_Aware_Networks_for_Deep_Motion_Retargeting).

However, I haven't added the GAN part to the model (since training GANs needs lots of tricks). My training pipeline is now working in a "paired" way, which means the .bvh files of skeletons A and B are the same during every training iteration.

I tried to train the model in an "unpaired" way, but the loss is really hard to converge. According to the ablation study of your paper, it states that "omitting the adversarial loss improves the performance for intra-structural skeletons".

So I'd like to know: after removing the adv_loss, did you train the network in a "paired" (supervised) way or still in an "unpaired" way?

Many thanks!

how to obtain training dataset?

Hi, thanks for sharing such a great project! However, I'd like to know: is there any way to obtain or create some customized training data?
I'd like to reproduce the full pipeline of the project!
Many thanks!

Some confusion about the instance function "to_numpy" of class "BVH_file" in retargeting/datasets/bvh_parser.py

Hi, I was kind of confused about a code snippet in the instance function "to_numpy" of class "BVH_file" in retargeting/datasets/bvh_parser.py.

From line #180 to line #184:

    if edge:
        index = []
        for e in self.edges:
            index.append(e[0])
        rotations = rotations[:, index, :]
    rotations = rotations.reshape(rotations.shape[0], -1)

According to my understanding so far, after doing
rotations = self.anim.rotations[:, self.corps, :] in line #174,
the variable "rotations" will hold the rotation info of the simplified skeleton. Then why do we still need the operation from line #180 to line #184?

By the way, could you please explain the "MOTION" part of the .bvh file for me? I searched for explanations on the web, but I didn't find a satisfying one. I guess each line of the "MOTION" part represents the rotation information of the skeleton's joints in one frame, but how should I understand the order of these numbers?

Many thanks!

How to test with custom data?

Hi,
When I use other characters from the Mixamo dataset in the retargeting work, I need a std file, but I didn't find the code that computes the std. Could you please provide the code to test with custom data?
Thanks!

About the mean and var of each character?

Hi,

I have a question about the mean and var located in the ./retargeting/datasets/Mixamo/mean_var.

I'd like to know whether the mean and var of each character's motion data are calculated based only on the test data or on the whole dataset.

If it is the whole dataset, how much data did you use to calculate the mean and var?

Many thanks!

Error: The system cannot find the path specified: './pretrained/results/bvh\\Mousey_m'

Error when running: sh demo.sh

System:
Windows 10 build 1909
Python 3.7.7

1) I cloned the project and downloaded the Mixamo dataset and placed it in the datasets folder
2) Ran sh demo.sh

Error:

loading from epoch 20000......
load succeed!
The syntax of the command is incorrect.
Traceback (most recent call last):
File "eval_single_pair.py", line 99, in
main()
File "eval_single_pair.py", line 93, in main
model.test()
File "D:\Development\deep-motion-editing\retargeting\models\base_model.py", line 96, in test
self.compute_test_result()
File "D:\Development\deep-motion-editing\retargeting\models\architecture.py", line 296, in compute_test_result
self.writer[src][i].write_raw(gt[i, ...], 'quaternion', os.path.join(new_path, '{}_gt.bvh'.format(self.id_test)))
File "D:\Development\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 91, in write_raw
return self.write(rotations, positions, order, path, frametime, root_y=root_y)
File "D:\Development\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 80, in write
return write_bvh(self.parent, offset, rotations_full, positions, self.names, frametime, order, path)
File "D:\Development\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 10, in write_bvh
file = open(path, 'w')
FileNotFoundError: [Errno 2] No such file or directory: './pretrained/results/bvh\Aj\0_gt.bvh'
Traceback (most recent call last):
File "demo.py", line 46, in
example('Aj', 'BigVegas', 'Dancing Running Man.bvh', 'intra', './examples/intra_structure')
File "demo.py", line 42, in example
height)
File "D:\Development\deep-motion-editing\retargeting\models\IK.py", line 57, in fix_foot_contact
anim, name, ftime = BVH.load(input_file)
File "../utils\BVH.py", line 58, in load
f = open(filename, "r")
FileNotFoundError: [Errno 2] No such file or directory: './examples/intra_structure\result.bvh'

Error when running python test.py

loading from epoch 20000......
load succeed!
loading from ./pretrained/models\topology1
loading from epoch 20000......
load succeed!
0%| | 0/106 [00:00<?
The syntax of the command is incorrect.
0%| | 0/106 [00:02<?
Traceback (most recent call last):
File "eval.py", line 37, in
main()
File "eval.py", line 33, in main
model.test()
File "D:\Mocap Software\deep-motion-editing\retargeting\models\base_model.py", line 96, in test
self.compute_test_result()
File "D:\Mocap Software\deep-motion-editing\retargeting\models\architecture.py", line 296, in compute_test_re
self.writer[src][i].write_raw(gt[i, ...], 'quaternion', os.path.join(new_path, '{}_gt.bvh'.format(self.id_t
File "D:\Mocap Software\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 91, in write_raw
return self.write(rotations, positions, order, path, frametime, root_y=root_y)
File "D:\Mocap Software\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 80, in write
return write_bvh(self.parent, offset, rotations_full, positions, self.names, frametime, order, path)
File "D:\Mocap Software\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 10, in write_bvh
file = open(path, 'w')
FileNotFoundError: [Errno 2] No such file or directory: './pretrained/results/bvh\BigVegas\0_gt.bvh'
Collecting test error...
Traceback (most recent call last):
File "test.py", line 35, in
cross_error += full_batch(0)
File "D:\Mocap Software\deep-motion-editing\retargeting\get_error.py", line 15, in full_batch
res.append(batch(char, suffix))
File "D:\Mocap Software\deep-motion-editing\retargeting\get_error.py", line 31, in batch
files = [f for f in os.listdir(new_p) if
FileNotFoundError: [WinError 3] The system cannot find the path specified: './pretrained/results/bvh\Mousey_m'

I have been looking for a project like this. There are many excellent projects out there that output 2D points, such as OpenPose, but not many that do the conversion to 3D. The ones I tried didn't really work well. I am excited about this project since it is specifically what I am looking for.

Thanks,
Dan

What is "xxx_virtual" joints produced by the function "build_joint_topology" in skeleton.py?

Hi there, I got a question about the function "build_joint_topology" in skeleton.py.

I understand that this function maps the "edges" to "joints" so that we can write the joint info into a .bvh file.

But why do we need to add "xxx_virtual" joints to the skeleton?

And I was kind of confused about the role of the variable named "out_degree" defined in line #295.

train with my dataset

Hi, I want to train style transfer with my own database. I tried many times, but the program always gives me an error.
The error is "RuntimeError: CUDA error: device-side assert triggered" and "index out of bounds".

So I want to ask: how do I define the YML file?
After defining a database like xia, can I leave the rest of the code unchanged?
Thank you!

all windows length is -1

================================windows length is -1
Traceback (most recent call last):
File "./datasets/preprocess.py", line 72, in
write_statistics(character, './datasets/Mixamo/mean_var/')
File "./datasets/preprocess.py", line 34, in write_statistics
dataset = MotionData(new_args)
File "/home/wuxiaoliang/docker/motion_retarget/deep-motion-editing/retargeting/datasets/motion_dataset.py", line 33, in init
new_windows = self.get_windows(motions)
File "/home/wuxiaoliang/docker/motion_retarget/deep-motion-editing/retargeting/datasets/motion_dataset.py", line 106, in get_windows
return torch.cat(new_windows)
RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPUTensorId, CUDATensorId, QuantizedCPUTensorId, VariableTensorId]

How can I solve this problem? @kfiraberman

Can this be used with other 3D programs and different rigs?

I use reallusion iClone and Character Creator which are similar to DAZ3D.

What I am looking for is to have full body motion capture face and hands from a web camera.

One of the problems I have been running into using Character Creator 3 armature is that the facial movement is mostly based on BlendShapes (Morphs) and I don't know how to translate that from xyz coordinates. Actually I am just learning 3D and motion capture but the price for markerless mocap is high and I am just a hobbyist so I have been searching for an opensource solution.

Your software looks like it will be fantastic!

Thanks,
Dan

The reason for representing rotation by quaternions instead of Euler angles

Hi, sorry for bothering again,

May I ask why you chose quaternions to represent the rotation instead of Euler angles? At first I thought it was because, if we don't know the application order of the Euler angles, we might obtain the wrong rotation; but since we can know the Euler angle order from the .bvh file, why still use the quaternion representation?

Many thanks!

What's the role of the function "find_seq" defined in the class "SkeletonPool" in skeleton.py?

Hi=.=, it's me again,

Could you please describe the role of the function "find_seq" defined in the class "SkeletonPool" in skeleton.py, as well as the role of the attribute "self.seq_list"?

According to my understanding so far, after the skeleton pooling we need a new topology to help us calculate the neighbor list/matrix for the next skeleton convolution layer.

But I still haven't figured it out after reading the code.

Many thanks!

Demo cannot find pytorch while pip shows it is installed

Hello there
First of all, kudos for this interesting project. I have tons of animations, and from time to time I bang my head on how to port them gracefully...

I'm trying to run the demo but it fails.

user:retargeting source demo.sh 
Traceback (most recent call last):
  File "eval_single_pair.py", line 2, in <module>
    import torch
ImportError: No module named torch
Traceback (most recent call last):
  File "demo.py", line 49, in <module>
    example('Aj', 'BigVegas', 'Dancing Running Man.bvh', 'intra', './examples/intra_structure')
  File "demo.py", line 45, in example
    height)
  File "/Users/max/Developer/Library/Graphics/moremocap/deep-motion-editing/retargeting/models/IK.py", line 57, in fix_foot_contact
    anim, name, ftime = BVH.load(input_file)
  File "../utils/BVH.py", line 58, in load
    f = open(filename, "r")
FileNotFoundError: [Errno 2] No such file or directory: './examples/intra_structure/result.bvh'

It seems it cannot find pytorch, but my pip shows it is installed:

user:retargeting pip list
Package          Version
---------------- --------
appnope          0.1.0
backcall         0.1.0
certifi          2019.3.9
decorator        4.4.2
future           0.18.2
ipython          7.13.0
ipython-genutils 0.2.0
jedi             0.17.0
macpack          1.0.3
numpy            1.16.2
parso            0.7.0
pexpect          4.8.0
pickleshare      0.7.5
pip              20.1.1
prompt-toolkit   3.0.5
ptyprocess       0.6.0
Pygments         2.6.1
scipy            1.4.1
setuptools       39.0.1
six              1.14.0
torch            1.5.0
tqdm             4.46.1
traitlets        4.3.3
wcwidth          0.1.9
wheel            0.34.2

Anyway, if I run eval_single_pair.py directly it works:

user:retargeting python eval_single_pair.py
usage: eval_single_pair.py [-h] [--save_dir SAVE_DIR]
                           [--cuda_device CUDA_DEVICE]
                           [--num_layers NUM_LAYERS]
                           [--learning_rate LEARNING_RATE] [--alpha ALPHA]
                           [--batch_size BATCH_SIZE] [--upsampling UPSAMPLING]
                           [--downsampling DOWNSAMPLING]
                           [--batch_normalization BATCH_NORMALIZATION]
                           [--activation ACTIVATION] [--rotation ROTATION]
                           [--data_augment DATA_AUGMENT]
                           [--epoch_num EPOCH_NUM] [--window_size WINDOW_SIZE]
                           [--kernel_size KERNEL_SIZE]
                           [--base_channel_num BASE_CHANNEL_NUM]
                           [--normalization NORMALIZATION] [--verbose VERBOSE]
                           [--skeleton_dist SKELETON_DIST]
                           [--skeleton_pool SKELETON_POOL]
                           [--extra_conv EXTRA_CONV]
                           [--padding_mode PADDING_MODE] [--dataset DATASET]
                           [--fk_world FK_WORLD] [--patch_gan PATCH_GAN]
                           [--debug DEBUG] [--skeleton_info SKELETON_INFO]
                           [--ee_loss_fact EE_LOSS_FACT] [--pos_repr POS_REPR]
                           [--D_global_velo D_GLOBAL_VELO]
                           [--gan_mode GAN_MODE] [--pool_size POOL_SIZE]
                           [--is_train IS_TRAIN] [--model MODEL]
                           [--epoch_begin EPOCH_BEGIN]
                           [--lambda_rec LAMBDA_REC]
                           [--lambda_cycle LAMBDA_CYCLE]
                           [--lambda_ee LAMBDA_EE]
                           [--lambda_global_pose LAMBDA_GLOBAL_POSE]
                           [--lambda_position LAMBDA_POSITION]
                           [--ee_velo EE_VELO] [--ee_from_root EE_FROM_ROOT]
                           [--scheduler SCHEDULER]
                           [--rec_loss_mode REC_LOSS_MODE]
                           [--adaptive_ee ADAPTIVE_EE]
                           [--simple_operator SIMPLE_OPERATOR]
                           [--use_sep_ee USE_SEP_EE] [--eval_seq EVAL_SEQ]
                           --input_bvh INPUT_BVH --target_bvh TARGET_BVH
                           --test_type TEST_TYPE --output_filename
                           OUTPUT_FILENAME
eval_single_pair.py: error: the following arguments are required: --input_bvh, --target_bvh, --test_type, --output_filename

That means a Python script run directly finds pytorch OK. Maybe it could be something related to os.system calls? My setup:

Mac Os X 10.14.3 Mojave
default python.org Python 3.7.0, no envs

Thank you

Running python test.py gives an error; maybe there is some error in the code

python test.py
Batch [1/4]
Error: Numpy + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp-7c85b1e2.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set NPY_MKL_FORCE_INTEL to force it.
Collecting test error...
Traceback (most recent call last):
File "test.py", line 35, in
cross_error += full_batch(0)
File "/home/linux/workspace/skeletonAware/deep-motion-editing-master/retargeting/get_error.py", line 15, in full_batch
res.append(batch(char, suffix))
File "/home/linux/workspace/skeletonAware/deep-motion-editing-master/retargeting/get_error.py", line 54, in batch
err = (pos - pos_ref) * (pos - pos_ref)
ValueError: operands could not be broadcast together with shapes (108,28,3) (156,28,3)

Training of retargeting

What a great work!
Could you please give us the training method and the retargeting datasets?

How to convert the .fbx file into .bvh file format?

Hi, I was recently attracted by your fancy project and tried to train my custom model. However, I ran into a problem when preprocessing the data:

The animation data obtained from Mixamo is in .fbx format, but based on your source code, I think you use .bvh data as the raw input. Could you please give me some suggestions on how to convert the .fbx data into .bvh format?

Many Thanks!

About skinning

In the skinning step, I have some questions.

  1. After we import the FBX, what is meant by "Merge meshes - select all the parts and merge them (ctrl+J)"?
  2. Could you please give me more details about skinning?

Why is only the position of the root joint of each frame sent into the model as input?

Hi,
I was kind of curious why only the position of the root joint of each frame is sent into the model as input.
I mean, after parsing the .bvh file we could get the positions of every joint at every frame. But we only use the root joint's position as input; is it because the other joints' position info is redundant?

Many thanks!

Error raised in eval_single_pair.py after running train.py in motion retargeting.

Hi, I customized a TrainDataset and tried to use it to train the motion retargeting model from scratch. However, after running train.py (it didn't even run successfully), when I run demo.py or eval_single_pair.py they throw errors like below:
loading from ./pretrained/models/topology0
loading from epoch 20000......
Traceback (most recent call last):
File "/home/deep-motion-editing-new/home/deep-motion-editing-new/retargeting/eval_single_pair.py", line 98, in
main()
File "/home/deep-motion-editing-new/home/deep-motion-editing-new/retargeting/eval_single_pair.py", line 78, in main
model.load(epoch=20000)
File "/home/deep-motion-editing-new/home/deep-motion-editing-new/retargeting/models/architecture.py", line 274, in load
model.load(os.path.join(self.model_save_dir, 'topology{}'.format(i)), epoch)
File "/home/deep-motion-editing-new/home/deep-motion-editing-new/retargeting/models/integrated.py", line 83, in load
map_location=self.args.cuda_device))
File "/home/hair_gans/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 847, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for AE:
Missing key(s) in state_dict: "enc.layers.2.0.mask", "enc.layers.2.0.weight", "enc.layers.2.0.bias", "enc.layers.2.0.offset_enc.bias", "enc.layers.2.0.offset_enc.weight", "enc.layers.2.0.offset_enc.mask", "enc.layers.2.1.weight", "dec.layers.2.1.weight", "dec.layers.2.2.mask", "dec.layers.2.2.weight", "dec.layers.2.2.bias", "dec.layers.2.2.offset_enc.bias", "dec.layers.2.2.offset_enc.weight", "dec.layers.2.2.offset_enc.mask", "dec.unpools.2.weight", "dec.enc.layers.2.0.mask", "dec.enc.layers.2.0.weight", "dec.enc.layers.2.0.bias", "dec.enc.layers.2.0.offset_enc.bias", "dec.enc.layers.2.0.offset_enc.weight", "dec.enc.layers.2.0.offset_enc.mask", "dec.enc.layers.2.1.weight".
Unexpected key(s) in state_dict: "dec.layers.1.2.bias".
size mismatch for dec.layers.0.1.weight: copying a param with shape torch.Size([192, 112]) from checkpoint, the shape in current model is torch.Size([224, 224]).
size mismatch for dec.layers.0.2.mask: copying a param with shape torch.Size([96, 192, 15]) from checkpoint, the shape in current model is torch.Size([112, 224, 15]).
size mismatch for dec.layers.0.2.weight: copying a param with shape torch.Size([96, 192, 15]) from checkpoint, the shape in current model is torch.Size([112, 224, 15]).
size mismatch for dec.layers.0.2.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([112]).
size mismatch for dec.layers.0.2.offset_enc.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([112]).
size mismatch for dec.layers.0.2.offset_enc.weight: copying a param with shape torch.Size([96, 72]) from checkpoint, the shape in current model is torch.Size([112, 84]).
size mismatch for dec.layers.0.2.offset_enc.mask: copying a param with shape torch.Size([96, 72]) from checkpoint, the shape in current model is torch.Size([112, 84]).
size mismatch for dec.layers.1.1.weight: copying a param with shape torch.Size([184, 96]) from checkpoint, the shape in current model is torch.Size([192, 112]).
size mismatch for dec.layers.1.2.mask: copying a param with shape torch.Size([92, 184, 15]) from checkpoint, the shape in current model is torch.Size([96, 192, 15]).
size mismatch for dec.layers.1.2.weight: copying a param with shape torch.Size([92, 184, 15]) from checkpoint, the shape in current model is torch.Size([96, 192, 15]).
size mismatch for dec.layers.1.2.offset_enc.bias: copying a param with shape torch.Size([92]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for dec.layers.1.2.offset_enc.weight: copying a param with shape torch.Size([92, 69]) from checkpoint, the shape in current model is torch.Size([96, 72]).
size mismatch for dec.layers.1.2.offset_enc.mask: copying a param with shape torch.Size([92, 69]) from checkpoint, the shape in current model is torch.Size([96, 72]).
size mismatch for dec.unpools.0.weight: copying a param with shape torch.Size([192, 112]) from checkpoint, the shape in current model is torch.Size([224, 224]).
size mismatch for dec.unpools.1.weight: copying a param with shape torch.Size([184, 96]) from checkpoint, the shape in current model is torch.Size([192, 112]).

Process finished with exit code 1

Even when I use version control tools to roll back the code, it still raises this error!
Have you ever met this before?
Many thanks!

Trying to load a BVH in Blender doesn't seem to work

Hello, as per the title

import sys
import os
sys.path.append("../utils")
sys.path.append("./")
import BVH
import numpy as np
import bpy
import mathutils
import pdb

#scale factor for bone length
global_scale = 10

class BVH_file:
    def __init__(self, file_path):
        self.anim, self.names, self.frametime = BVH.load(file_path)

        #permute (x, y, z) to (z, x, y)
        tmp = self.anim.offsets.copy()
        self.anim.offsets[..., 0] = tmp[..., 2]
        self.anim.offsets[..., 1] = tmp[..., 0]
        self.anim.offsets[..., 2] = tmp[..., 1]

BVH.load(file_path) refers to import BVH, which is not available in current Blender (2.83).
What Python package is that?

combined_dataset

Is it possible to train the model by combining your .npy files (Mixamo characters) with a custom rig's .npy (a new custom character)?

Can Unpaired Motion Style Transfer be adjusted?

From looking at both videos I see that this is designed for intelligent AI retargeting and stylized motion. Can the strength of the stylized motion be adjusted? Is this project related to FACS AUs, as far as using emotions for mocap instead of direct xyz coordinates?

Thanks!

Where is the model?

Your work is great, and I'm curious about the model for the BVH data. I want to render the animation of the Jerry model from your presentation video.

Why create a "global_part_neighbor" in the function "find_neighbor" in the skeleton.py?

Hi,
I was recently reading the source code very carefully in order to understand the whole pipeline of your work.

I was kind of confused about why a variable named "global_part_neighbor" is initialized in the function "find_neighbor" at line #373 of models/skeleton.py.

Besides, why is "edge_num" appended to each global_part_neighbor list at line #375?

Many thanks!

something is wrong with the file

D:\deep-motion-editing-master\retargeting>python demo.py
The syntax of the command is incorrect.
loading from ./pretrained/models\topology0
loading from epoch 20000......
load succeed!
loading from ./pretrained/models\topology1
loading from epoch 20000......
load succeed!
C:\Users\anbanglee\miniconda3\envs\avatarify\lib\site-packages\torch\nn\modules\upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
The syntax of the command is incorrect.
Traceback (most recent call last):
File "eval_single_pair.py", line 99, in
main()
File "eval_single_pair.py", line 93, in main
model.test()
File "D:\deep-motion-editing-master\retargeting\models\base_model.py", line 96, in test
self.compute_test_result()
File "D:\deep-motion-editing-master\retargeting\models\architecture.py", line 296, in compute_test_result
self.writer[src][i].write_raw(gt[i, ...], 'quaternion', os.path.join(new_path, '{}_gt.bvh'.format(self.id_test)))
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 91, in write_raw
return self.write(rotations, positions, order, path, frametime, root_y=root_y)
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 80, in write
return write_bvh(self.parent, offset, rotations_full, positions, self.names, frametime, order, path)
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 10, in write_bvh
file = open(path, 'w')
FileNotFoundError: [Errno 2] No such file or directory: './pretrained/results/bvh\Aj\0_gt.bvh'
Traceback (most recent call last):
File "demo.py", line 46, in
example('Aj', 'BigVegas', 'Dancing Running Man.bvh', 'intra', './examples/intra_structure')
File "demo.py", line 42, in example
height)
File "D:\deep-motion-editing-master\retargeting\models\IK.py", line 57, in fix_foot_contact
anim, name, ftime = BVH.load(input_file)
File "../utils\BVH.py", line 58, in load
f = open(filename, "r")
FileNotFoundError: [Errno 2] No such file or directory: './examples/intra_structure\result.bvh'

(avatarify) D:\deep-motion-editing-master\retargeting>sh demo.sh
'sh' is not recognized as an internal or external command, operable program or batch file.

(avatarify) D:\deep-motion-editing-master\retargeting>python test.py
Batch [1/4]
The syntax of the command is incorrect.
loading from ./pretrained/models\topology0
loading from epoch 20000......
load succeed!
loading from ./pretrained/models\topology1
loading from epoch 20000......
load succeed!
0%| | 0/106 [00:00<?, ?it/s]C:\Users\anbanglee\miniconda3\envs\avatarify\lib\site-packages\torch\nn\modules\upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
The syntax of the command is incorrect.

Traceback (most recent call last):
File "eval.py", line 37, in
main()
File "eval.py", line 33, in main
model.test()
File "D:\deep-motion-editing-master\retargeting\models\base_model.py", line 96, in test
self.compute_test_result()
File "D:\deep-motion-editing-master\retargeting\models\architecture.py", line 296, in compute_test_result
self.writer[src][i].write_raw(gt[i, ...], 'quaternion', os.path.join(new_path, '{}_gt.bvh'.format(self.id_test)))
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 91, in write_raw
return self.write(rotations, positions, order, path, frametime, root_y=root_y)
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 80, in write
return write_bvh(self.parent, offset, rotations_full, positions, self.names, frametime, order, path)
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 10, in write_bvh
file = open(path, 'w')
FileNotFoundError: [Errno 2] No such file or directory: './pretrained/results/bvh\BigVegas\0_gt.bvh'
Collecting test error...
Traceback (most recent call last):
File "test.py", line 35, in
cross_error += full_batch(0)
File "D:\deep-motion-editing-master\retargeting\get_error.py", line 15, in full_batch
res.append(batch(char, suffix))
File "D:\deep-motion-editing-master\retargeting\get_error.py", line 31, in batch
files = [f for f in os.listdir(new_p) if
FileNotFoundError: [WinError 3] The system cannot find the path specified: './pretrained/results/bvh\Mousey_m'

About architecture.py -> def backward_d(self) in retargeting

I want to retrain the retargeting model using my own dataset (e.g., the CMU dataset).
When I use the CMU dataset, I find that split_joint.py can't split the dataset into '***_m', so len(characters) == 1 in retargeting/datasets/__init__.py.
Then, when the code runs in architecture.py -> def backward_d(self) -> fake = self.fake_pools[i].query(self.fake_pos[2 - i]), I get an error: IndexError: list index out of range.
So what is the meaning of "self.fake_pos[2 - i]"?

Use my own data for training, but there is no loss value

Excuse me, I use my own data for training, but there is no loss value. Is it because there is a problem with my default configuration?

====characters= [['mouse'], ['strong']]
load from file ./datasets/Mixamo/mouse.npy
Window count: 4, total frame (without downsampling): 3572
load from file ./datasets/Mixamo/strong.npy
Window count: 4, total frame (without downsampling): 3560

No scalar data was found.
Probable causes:

You haven’t written any scalar data to your event files.
TensorBoard can’t find your event files.

If you’re new to using TensorBoard, and want to find out how to add data and set up your event files, check out the README and perhaps the TensorBoard tutorial.

If you think TensorBoard is configured properly, please see the section of the README devoted to missing data problems and consider filing an issue on GitHub.

@kfiraberman

Do I need to train new models for new characters in retargeting?

Hi, all

I read your papers carefully. You mentioned that "since the domains have different dimensions, the two networks (A → B, B → A) cannot share weights, so they had to be trained separately."

Does this mean I need to train a new model if I want to use a character with a different number of bones from the training set?

Thanks!

Basic concepts about "SkeletonConv" and "SkeletonUnpool".

Hi, after reading the implementation code of skeleton.py, I have some questions I'd like to discuss with you.

1. Can I regard "SkeletonConv" as a binary mask created based on the neighbor list of each joint? If joint A is a neighbor of joint B, then when convolving joint B the binary mask on joint A is 1; otherwise the mask is set to 0.

2. Does "SkeletonUnpool" just duplicate the features of the pooled joint to increase the number of nodes in the skeleton graph?

Many thanks!

I use my own data for retargeting, but the program gives an exception

Traceback (most recent call last):
File "eval_single_pair.py", line 104, in
main()
File "eval_single_pair.py", line 92, in main
new_motion = (new_motion - dataset.mean[i][j]) / dataset.var[i][j]
RuntimeError: The size of tensor a (99) must match the size of tensor b (91) at non-singleton dimension 1
Traceback (most recent call last):
File "demo.py", line 46, in
example('Aj', 'BigVegas', '01.bvh', 'intra', './examples/intra_structure')
File "demo.py", line 42, in example
height)
File "/home/wuxiaoliang/docker/newAPP/deep-motion-editing/retargeting/models/IK.py", line 57, in fix_foot_contact
anim, name, ftime = BVH.load(input_file)
File "../utils/BVH.py", line 58, in load
f = open(filename, "r")
FileNotFoundError: [Errno 2] No such file or directory: './examples/intra_structure/result.bvh'

Hello, how can I solve this problem? @kfiraberman

What's the role of "std_bvh" files?

Hi, could you please describe the role of the std_bvh files? I noticed that they are loaded during testing as well as training. What are they for?

test in style_transfer

In style transfer, when I use other BVHs, why can't I run style_transfer/demo.sh successfully?
Is my BVH's skeleton missing something? Must the skeleton match the one you provide?

Question about Mixamo dataset

Hi, great work! I am curious about the dataset setting: are all of the Mixamo animations manually designed, or are they also created by some kind of motion retargeting? It raises my doubts when some characters in Mixamo, e.g., Sword Woman, do not behave naturally in certain motions. If the Mixamo animations are not designed by human artists, then why can we take them as ground truth (please feel free to correct me)? Thanks.
