
neural-blend-shapes's Introduction

Learning Skeletal Articulations with Neural Blend Shapes


This repository provides an end-to-end library for automatic character rigging, skinning, and blend shape generation, as well as a visualization tool. It is based on our work Learning Skeletal Articulations with Neural Blend Shapes, published at SIGGRAPH 2021.

Prerequisites

Our code has been tested on Ubuntu 18.04. Before starting, please configure your Anaconda environment by running

conda env create -f environment.yaml
conda activate neural-blend-shapes

Or you may install the following packages (and their dependencies) manually:

  • pytorch 1.8
  • tensorboard
  • tqdm
  • chumpy

Note that the provided environment only includes the CPU version of PyTorch, for compatibility reasons.

Quick Start

We provide a pretrained model dedicated to biped characters. Download and extract the pretrained model from Google Drive or Baidu Disk (9ras) and put the pre_trained folder under the project directory. Run

python demo.py --pose_file=./eval_constant/sequences/greeting.npy --obj_path=./eval_constant/meshes/maynard.obj

The greeting animation shown above will be saved in demo/obj as obj files. In addition, the generated skeleton will be saved as demo/skeleton.bvh and the skinning weight matrix as demo/weight.npy. If you need the bvh file animated, specify --animated_bvh=1.

If you are interested in the result produced with our rig by the traditional linear blend skinning (LBS) technique, specify --envelope_only=1 to evaluate our model with the envelope branch only.

We also provide several other meshes and animation sequences. Feel free to try different combinations!

FBX Output (New!)

Now you can choose to output the animation as a single fbx file instead of a sequence of obj files! Simply do the following:

python demo.py --animated_bvh=1 --obj_output=0
cd blender_scripts
blender -b -P nbs_fbx_output.py -- --input ../demo --output ../demo/output.fbx

Note that you need to install Blender (>=2.80) to generate the fbx file. You may explore more options for the generated fbx file in the source code.

This code is contributed by @huh8686.

Test on Customized Meshes

You can run our model on your own meshes by pointing the --obj_path argument to the input mesh. Please make sure your mesh is triangulated and has a consistent upright, front-facing orientation. Since our model requires the input meshes to be spatially aligned, please specify --normalize=1. Alternatively, you can scale and translate your mesh to align with the provided eval_constant/meshes/smpl_std.obj and skip --normalize=1.
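If you prefer manual alignment, a rough sketch using trimesh may help (an assumption on our part: this is not the repository's --normalize code, trimesh is not a listed dependency, and bounding-box matching is only a heuristic):

import trimesh

src = trimesh.load('my_character.obj', force='mesh')   # hypothetical input path
ref = trimesh.load('eval_constant/meshes/smpl_std.obj', force='mesh')

# Match heights (assuming Y is up), then align the bounding-box centers.
src.apply_scale(ref.extents[1] / src.extents[1])
src.apply_translation(ref.bounding_box.centroid - src.bounding_box.centroid)
src.export('my_character_aligned.obj')

Inspect the result against smpl_std.obj before running the demo; a mesh in a different rest pose will still need manual adjustment.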

Evaluation

To reproduce the quantitative results with the pretrained model, download the test dataset from Google Drive or Baidu Disk (8b0f), put the two extracted folders under ./dataset, and run

python evaluation.py

Train from Scratch

We provide instructions for retraining our model.

Note that you may need to install the CUDA version of PyTorch, since the provided environment only includes the CPU version.

To train the model from scratch, you need to download the training set from Google Drive or Baidu Disk (uqub) and put the extracted folders under ./dataset.

The training process contains two stages, each corresponding to one branch. To train the first stage, run

python train.py --envelope=1 --save_path=[path to save the model] --device=[cpu/cuda:0/cuda:1/...]

For the second stage, it is strongly recommended to run a preprocessing step that extracts the blend shape basis before starting the training, which is much more efficient:

python preprocess_bs.py --save_path=[same path as the first stage] --device=[computing device]
python train.py --residual=1 --save_path=[same path as the first stage] --device=[computing device] --lr=1e-4

Blender Visualization

We provide a simple wrapper of Blender's Python API (>=2.80) for rendering 3D mesh animations and visualizing skinning weights. The following code has been tested on Ubuntu 18.04 and macOS Big Sur with Blender 2.92.

Note that due to a limitation of Blender, you cannot run the Eevee render engine on a headless machine.

We also provide several arguments to control the behavior of the scripts; please refer to the code for more details. To pass arguments to a Python script in Blender, use the following pattern:

blender [blend file path (optional)] -P [python script path] [-b (run in the background, optional)] -- --arg1 [ARG1] --arg2 [ARG2]
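Blender leaves everything after -- untouched in sys.argv, so a script can parse its own arguments like this (a minimal sketch; --arg1 and --arg2 are placeholder names, not actual options of our scripts):

import argparse
import sys

# Blender consumes argv up to '--'; everything after it is for the script.
argv = sys.argv[sys.argv.index('--') + 1:] if '--' in sys.argv else []

parser = argparse.ArgumentParser()
parser.add_argument('--arg1')  # placeholder argument
parser.add_argument('--arg2')  # placeholder argument
args = parser.parse_args(argv)
print(args.arg1, args.arg2)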

Animation

We provide a simple light and camera setup in eval_constant/simple_scene.blend; you may need to adjust it before use. We use ffmpeg to convert images into a video, so please make sure it is installed before running. To render the obj files generated above, run

cd blender_scripts
blender ../eval_constant/simple_scene.blend -P render_mesh.py -b

The rendered per-frame images will be saved in demo/images, and the composited video will be saved as demo/video.mov.
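If you want to re-run the compositing step yourself, an ffmpeg invocation along these lines should be roughly equivalent (the 4-digit frame pattern matches the saved images; the frame rate here is an assumption, not read from the script):

ffmpeg -framerate 30 -i demo/images/%04d.png -c:v libx264 -pix_fmt yuv420p demo/video.mov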

Skinning Weight

Visualizing the skinning weights is a good sanity check of whether the model works as expected. We provide a script that uses Blender's built-in ShaderNodeVertexColor to visualize the skinning weights. Simply run

cd blender_scripts
blender -P vertex_color.py

If the model works as expected, you will see the skinning weights rendered as vertex colors on the mesh.

Meanwhile, you can import the generated skeleton (demo/skeleton.bvh) into Blender. For skeleton rendering, please refer to deep-motion-editing.

Acknowledgements

The code in blender_scripts/nbs_fbx_output.py is contributed by @huh8686.

The code in meshcnn is adapted from MeshCNN by @ranahanocka.

The code in models/skeleton.py is adapted from deep-motion-editing by @kfiraberman, @PeizhuoLi and @HalfSummer11.

The code in dataset/smpl.py is adapted from SMPL by @CalciferZh.

Part of the test models are taken from SMPL, MultiGarmentNetwork and Adobe Mixamo.

Citation

If you use this code for your research, please cite our paper:

@article{li2021learning,
  author = {Li, Peizhuo and Aberman, Kfir and Hanocka, Rana and Liu, Libin and Sorkine-Hornung, Olga and Chen, Baoquan},
  title = {Learning Skeletal Articulations with Neural Blend Shapes},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {40},
  number = {4},
  pages = {130},
  year = {2021},
  publisher = {ACM}
}

neural-blend-shapes's People

Contributors

@huh8686, @kfiraberman, @PeizhuoLi, @ranahanocka


neural-blend-shapes's Issues

When I run the command 'blender ......', I got this

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 96, in <module>
    batch_render(args.obj_path)
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 56, in batch_render
    render_obj(os.path.join(dir, file))
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 33, in render_obj
    add_material_for_objs([obj], 'character')
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 28, in add_material_for_objs
    raise Exception("This line shouldn't be reached")
Exception: This line shouldn't be reached

I don't know why this happens.
My command is 'blender -P render_mesh.py -b'.

forward kinematics implementation

Hello, I was trying to understand the forward kinematics implementation and was looking at this blog post for clarification: https://www.alecjacobson.com/weblog/?p=3763. The implementation doesn't seem to match up for non-root bones.

for i, p in enumerate(self.parents):
    if i != 0:
        rots[:, i] = torch.matmul(rots[:, p], rots[:, i])
        pos[:, i] = torch.matmul(rots[:, p], offsets[:, i].unsqueeze(-1)).squeeze(-1) + pos[:, p]
        rest_pos[:, i] = rest_pos[:, p] + offsets[:, i]

        res[:, i, :3, :3] = rots[:, i]
        res[:, i, :, 3] = torch.matmul(rots[:, i], -rest_pos[:, i].unsqueeze(-1)).squeeze(-1) + pos[:, i]
Suppose I have two bones located at b1 and b2, with m1 and m2 as the local transformation matrices, and b1 is b2's parent bone. According to the implementation in this codebase, the global translation for bone 2 is m1 * m2 * (-b1 - b2) + (m1 * b2 + b1).

However, according to the formula in the blog post, it should be m1 * m2 * (-b2) + m1 * (b1 + b2) + b1.

I'm wondering if you know why there is a difference between the two?
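As a starting point for checking, here is a minimal numpy sketch (not from the repository) that reproduces the accumulation above for a two-bone chain, assuming the root's pos and rest_pos both equal its offset; it prints the global translation so either closed form can be compared against it numerically:

import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    # Orthonormalize a random matrix; scale by its determinant to force det = +1.
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.linalg.det(q)

m1, m2 = random_rotation(), random_rotation()    # local rotations
b1, b2 = rng.normal(size=3), rng.normal(size=3)  # local offsets

# Accumulation as in the loop above, specialized to bone 2 with parent bone 1.
rot2 = m1 @ m2                    # global rotation
pos2 = m1 @ b2 + b1               # global position (root pos assumed to be b1)
rest2 = b1 + b2                   # accumulated rest position
trans2 = rot2 @ (-rest2) + pos2   # the res[:, i, :, 3] entry

print(trans2)  # equals m1 @ m2 @ (-(b1 + b2)) + m1 @ b2 + b1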

Error when try to show the animation

Great work!
When I try to visualize the animation via Blender following this, an error occurred.

Saved: '../demo/images/0000.png'
 Time: 00:00.28 (Saving: 00:00.25)

Traceback (most recent call last):
  File "/home/yusun/Documents/GitHub/neural-blend-shapes/blender_scripts/render_mesh.py", line 96, in <module>
    batch_render(args.obj_path)
  File "/home/yusun/Documents/GitHub/neural-blend-shapes/blender_scripts/render_mesh.py", line 56, in batch_render
    render_obj(os.path.join(dir, file))
  File "/home/yusun/Documents/GitHub/neural-blend-shapes/blender_scripts/render_mesh.py", line 42, in render_obj
    obj.select_set(True)
AttributeError: 'Object' object has no attribute 'select_set'

I am using Blender v2.91a. Any suggestions?

Bone coordinate system issue

Hello! After running

python demo.py --animated_bvh=0 --obj_output=0
cd blender_scripts
blender -b -P nbs_fbx_output.py -- --input ../demo --output ../demo/output.fbx

I got a T-pose fbx file. But when I import it into Unity, I find that almost none of the bones' coordinate frames line up with Unity's, whereas the initial bone frames of standard SMPL should all be parallel to Unity's coordinate frame. This prevents me from driving a skeleton rigged by neural-blend-shapes with SMPL pose parameters. Is there a way to align all bone coordinate frames with Unity without changing the T-pose?
This is standard SMPL:
[screenshot 2023-02-04 11-29-37]
This is the rig produced by neural-blend-shapes:
[screenshot 2023-02-04 11-35-27]
These are the launch arguments:
[screenshot 2023-02-04 11-32-32]

broken custom mesh after running demo

I met a similar issue to what others have mentioned before. I'm not sure if I missed a step in preparing the mesh. I tried to align it and fix the non-manifold issues, but the mesh is still broken after running the demo. I've attached my files, containing the one I use as a reference (maynard:Mesh) and the one I'm running the demo on (castle:Castleguard1), if you can take a look. Thanks!

Uploading confusion.fbx…

Weird artifacts while running on custom mesh

I am trying to run this on custom meshes and I am getting these weird artifacts. I am using this model from Sketchfab - https://sketchfab.com/3d-models/trackerman-5b04147173fd41f19ccd4fe42f549a15 - as well as a custom mesh. I have triangulated the mesh and I am calling the demo with normalization: python demo.py --pose_file=.\eval_constant\sequences\greeting.npy --obj_path=.\eval_constant\meshes\trackerman\trackerman.obj --animated_bvh=1 --obj_output=0 --normalize=1. Any idea why this could be happening? Any suggestions/filters to avoid it would be greatly appreciated.
[screenshots: artifacts_error1, artifacts_error2]

Issue with custom mesh

Thanks @PeizhuoLi for this amazing work.
I tried this repo and it works fine with the sample character provided here. But when I try a custom normalized humanoid character mesh in T-pose, it produces weird results.

I first tried,
python demo.py --obj_path samples/model1.obj --result_path samples/output/ --normalize=1 --animated_bvh=1 --obj_output=0

And then I exported the .fbx animation using the provided script. I also checked skeleton.bvh, and it does not contain the correct animation.

Then I tried without --normalize=1, as my mesh already looks normalized:
python demo.py --obj_path samples/model1.obj --result_path samples/output/ --animated_bvh=1 --obj_output=0

In this case skeleton.bvh contains the correct animation except for a slightly bent leg, but the exported .fbx file is not correct, as its hand is extended weirdly.

Here is the reference output:

Kazam_screencast_00011.mp4

And here is the custom mesh,
https://drive.google.com/file/d/1550yYhpFnil3XixcEtoKrSVUJYdGH41l/view?usp=share_link

Please suggest some way out.
Thanks.

Maya

Hello, I am a student at USTB. Thanks for this exciting project. I want to apply your project in Maya, but I can't understand the rig. I made an FBX file and ran this code to get blend shape values, but with the wrong rig the results are a bit bad. Have you made an fbx file yourself?

Workflow to generate custom pose (e.g. from Mixamo fbx)

Hi.
Could you tell us how the sequence pose files (located in ./eval_constant/sequences/*.npy) are generated?
It seems those files were converted from Mixamo files, but I couldn't find a clue on how to create them.
If you don't mind, could you let us know the process?

Instructions to run Blender Eevee under a headless machine

  1. sudo apt install -y -q xvfb unzip wget flatpak
  2. sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
  3. sudo flatpak install flathub org.blender.Blender -y
  4. sudo flatpak override org.blender.Blender --talk-name=org.freedesktop.Flatpak
  5. DRI_PRIME=0 xvfb-run --auto-servernum flatpak run org.blender.Blender --background --python `pwd`/run.py -- `pwd

Tested on Ubuntu 18.04

Something weird with the skeleton

Hi! When I reproduced this great work and trained from scratch, the skeleton produced some weird results, as shown in the picture. The left is the result I reproduced, and the right is the result produced by the pre_trained model. I think it may be because the end effectors were set wrongly, but I didn't change any code related to them. Could you please help me with this problem? Looking forward to your reply!
[screenshot 2021-10-28 223501]

bpy with python 3.8.5

Python 3.8.5 is the default setup in environment.yaml, but bpy does not support Python 3.8. How can this be solved?

penetration occur when use a fat person

Hi @PeizhuoLi, thanks for sharing this wonderful work. I have tried some objs with different shapes but encountered a penetration problem.
It seems this situation isn't discussed in the paper; can you give some advice? Thanks~

evaluation CD-B2B error

Hi Peizhuo,

I used the code to evaluate the RigNet test set and found the following error; I want to make sure about it:

res[i][j] = segment2segment(a[i][0], a[i][0] + a[i][1], b[i][0], b[i][0] + b[i][1])

should be

res[i][j] = segment2segment(a[i][0], a[i][0] + a[i][1], b[j][0], b[j][0] + b[j][1])

where b[i] should be b[j]. Also, with the original code the evaluation result does not match the paper.
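For context, the fix presumably sits inside a nested loop over both bone sets, along these lines (a sketch; the surrounding loop structure and the meaning of a, b, res, and segment2segment are assumptions based on the snippet above):

# a and b: per-bone (origin, direction) pairs for the two skeletons.
for i in range(len(a)):
    for j in range(len(b)):
        # Index b by j, not i, so every pair (i, j) is actually compared.
        res[i][j] = segment2segment(a[i][0], a[i][0] + a[i][1],
                                    b[j][0], b[j][0] + b[j][1])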

What is in a pose file?

How different is it from a skeletal-transform representation of the animation on the standard rig you chose?
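A quick way to inspect one of the provided files (a minimal sketch, assuming only numpy; an issue further below reports the shape as (num_frames, num_joints, 3)):

import numpy as np

# Load a provided pose sequence and check its layout.
pose = np.load('eval_constant/sequences/greeting.npy')
print(pose.shape, pose.dtype)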

Error: ZeroDivisionError: division by zero when importing custom mesh

(neural-blend-shapes) D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes>python demo.py --pose_file=.\eval_constant\sequences\greeting.npy --obj_path=.\eval_constant\meshes\test_remesh.obj --animated_bvh=1 --obj_output=0 --normalize=1
Traceback (most recent call last):
  File "demo.py", line 150, in <module>
    main()
  File "demo.py", line 132, in main
    env_model, res_model = load_model(device, model_args, topo_loader, args.model_path)
  File "demo.py", line 59, in load_model
    geo, att, gen = create_envelope_model(device, model_args, topo_loader, is_train=False, parents=parent_smpl)
  File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes\architecture\__init__.py", line 34, in create_envelope_model
    save_freq=args.save_freq)
  File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes\models\networks.py", line 54, in __init__
    topo_loader, requires_recorder, is_cont, save_freq)
  File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes\models\meshcnn_base.py", line 56, in __init__
    val.extend([1 / len(neighbors)] * len(neighbors))
ZeroDivisionError: division by zero

I get this error every time I try your model with custom meshes. I checked that they are triangulated and called the demo with --normalize=1, but this happens for all the meshes I have. Is there some MeshLab filter or script I could apply to my custom mesh before sending it to your model, or a simple fix for this issue? Thank you for your amazing work!
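The crash happens where meshcnn_base.py divides by len(neighbors), so a vertex with no neighbors (e.g. an unreferenced or isolated vertex) is one plausible cause. A pre-cleaning sketch with trimesh (an assumption, not a confirmed fix; trimesh is not a project dependency):

import trimesh

# process=True merges duplicate vertices and drops degenerate faces on load.
mesh = trimesh.load('test_remesh.obj', force='mesh', process=True)
mesh.remove_unreferenced_vertices()  # drop vertices used by no face
mesh.export('test_remesh_clean.obj')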

Provide instructions for GPU based pytorch

Similar to the current CPU version, please provide an environment.yaml for CUDA-based PyTorch.

It is non-trivial to switch between GPU and CPU: https://github.com/pytorch/pytorch/issues/30664.

Questions regarding blendshapes

Thank you for your great work. After reading the paper, I still have some questions. So I hope you or anyone else can answer me if possible.

In the paper, the Residual Deformation Branch learns to predict blendshapes for each individual character. I'm wondering how these blendshapes are defined.

I'm not familiar with body blendshapes, but as far as I know, blendshapes for facial expressions like jawOpen, eyeBlinkLeft, smileRight, etc., are semantically defined. In [1], personalized facial blendshapes are learned via a blendshape gradient loss function, which forces each of the generated blendshapes to have a specific semantic meaning.

Another way to use blendshapes for facial expressions is what was done in MetaHuman [2] (Unreal Engine), where expressions are produced by bones and blendshapes (called morph targets in Unreal Engine) are used to refine the face and add more detail. I think this is more similar to your work.

So I would like to know some details on your blendshapes: how they are defined, how they are learned, etc.

I really appreciate it if you could answer my questions.

Ref:
[1] Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting
[2] MetaHuman

License

Hi,

Amazing work. Could you please add a license to the repo?

Thanks!!

Results not matching

The results provided in the paper do not match mine. I'm getting the following results after training with the train script and evaluating with the given script:

[screenshot of evaluation results]

About data pre-processing

Hello, I tested some of my own models with --normalize=1 and found that the skeleton scale is off, so the resulting skinning weights are very poor. I have a few questions:

  1. For the triangulated model, does "consistent upright and front facing orientation" mean that the orientation when imported into an engine (e.g. axis_forward='-Z', axis_up='Y' in the fbx script) is consistent and the pose is a T-pose? Or does it imply something about the triangle coordinates themselves?
  2. To align with smpl_std.obj, is it enough to use the normalization function you provide, or are there more detailed requirements?

Other than human models

For training models other than humans, I think a custom SMPL is needed that can generate deformed shapes of the desired model (which is very difficult to find), and once I have that I will need to define the bone hierarchy.

Is that correct, or did I miss anything?

Export Animated bvh or FBX?

First, thanks for your great work.

Just wondering if exporting an animated format (such as bvh or fbx) is supported?
It looks like demo.py only exports a T-pose skeleton.
I've tried to combine all the outputs into fbx format but had trouble animating the character with a given pose (in Blender).
I used the pose data, shaped (num_frames, num_joints, 3), as Euler angle input, but it doesn't work.

Any suggestions?
