peizhuoli / ganimator
A motion generation model learned from a single example [SIGGRAPH 2022]
License: Other
Hi Peizhuo, in the "Training a Conditional Generator" part, you mentioned that we can change the conditional joints.
I wonder, if I change the root joints to upper-body joints, could this model be used to generate full-body motion? I hope you can share more details on how to change the get_layered_mask() function to use other body joints as the condition. Thanks!
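For reference, a minimal sketch of the idea behind get_layered_mask(): build a boolean mask that keeps only the channels of the conditioned joints. The joint indices and channel layout below are invented for illustration and are not the repo's actual values.

```python
import numpy as np

def layered_mask(joint_indices, n_joints, channels_per_joint=4):
    """Boolean mask over per-joint rotation channels.

    Illustrative sketch only: ganimator's real get_layered_mask() may use
    a different channel layout and joint ordering.
    """
    mask = np.zeros(n_joints * channels_per_joint, dtype=bool)
    for j in joint_indices:
        mask[j * channels_per_joint:(j + 1) * channels_per_joint] = True
    return mask

# Hypothetical upper-body joint indices for a 24-joint skeleton
upper_body = [9, 12, 13, 14, 16, 17, 18, 19]
mask = layered_mask(upper_body, n_joints=24)
print(mask.sum())  # -> 32 conditioned channels
```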
Just wanted to ask: which software did you use? It is definitely not Blender or Maya.
Hi Peizhuo, I ran into some errors, described below.
First, gen.load_state_dict(gen_state)
reported missing keys in the state_dict, and I added strict=False to work around it.
Second, after training the regular generator, I tried to train the conditional generator following your example and got an Exception in gan1d.py that says: condition is required for condition generator.
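For context, a minimal stand-alone illustration of what strict=False does, using a toy torch module rather than ganimator's generator: the load succeeds and the mismatch is reported instead of raised.

```python
import torch

# Toy module standing in for the generator: the checkpoint lacks "bias",
# so a strict load would raise, while strict=False just reports the gap.
model = torch.nn.Linear(4, 2)
state = {"weight": torch.zeros(2, 4)}  # "bias" deliberately missing
result = model.load_state_dict(state, strict=False)
print(result.missing_keys)  # -> ['bias']
```

Note that strict=False silences genuine checkpoint/model mismatches too, so it is worth checking result.missing_keys to confirm only the expected keys are absent.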
That's great work! I'm also interested in another work of yours: the style transfer in Deep-motion-editing.
I want to train ganimator on my own datasets, and I'm not sure how to generate the BVH files from my own videos. I'm following this issue (DeepMotionEditing/deep-motion-editing#34) to generate the JSON with keypoint info using OpenPose.
I'm looking forward to your reply! Thanks!
Hi, thanks for the great paper and library!
I'm also attempting to train the model on a custom .bvh animation, downloaded from Mixamo as .fbx files.
To convert .fbx to .bvh, I imported the .fbx into Blender and exported it as .bvh. However, the number of channels of the child joints is not fixed: some have 3 and some have 6.
dance.txt
Is there any way to export .bvh files with a fixed number of channels per joint?
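As a quick sanity check before training, you can count the CHANNELS declarations in the exported header; the helper below is not part of ganimator, and the BVH snippet is a made-up minimal hierarchy showing the mixed 3/6 case.

```python
import re

def channel_counts(bvh_text):
    """List the channel count of each CHANNELS declaration in a BVH header.

    A quick sanity check (not part of ganimator): mixed 3- and 6-channel
    joints in one file are what breaks a loader expecting a fixed layout.
    """
    return [int(n) for n in re.findall(r"CHANNELS\s+(\d+)", bvh_text)]

header = """HIERARCHY
ROOT Hips
{
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    CHANNELS 3 Zrotation Xrotation Yrotation
  }
}"""
print(channel_counts(header))  # -> [6, 3]
```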
Hi, amazing work!
I just wanted to ask whether you did any study on the effects of using skeletal operators versus only regular convolutions. Were the results very different?
Thank you!
Hello, I would like to know how to get style-transfer results with my own data, since I have no idea how to train the generators in style-transfer mode.
Hi, thanks for the great paper and library!
I'm attempting to train the model on a custom .bvh animation. However, I receive the error:
File "ganimator/bvh/bvh_io.py", line 178, in load
data_block = data_block.reshape(N, 6)
ValueError: cannot reshape array of size 528 into shape (89,6)
It seems that my data of size 528 cannot fit into the data_block of size 534. It's off by exactly one joint (6 values). I provide the sample.bvh file here for inspection. It was exported from an .fbx file in Blender.
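The arithmetic behind the error can be reproduced in isolation (numbers taken from the traceback above):

```python
import numpy as np

# 89 joints x 6 channels = 534 values expected per frame block,
# but the exported file only provides 528: one 6-channel joint short.
n_joints, channels = 89, 6
data_block = np.zeros(528)
try:
    data_block.reshape(n_joints, channels)
except ValueError:
    print("off by", n_joints * channels - data_block.size, "values")  # -> off by 6 values
```

The deficit being exactly one joint's worth of channels suggests one joint in the Blender export declared fewer channels than the loader expects.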
Hello, thank you for your great work!
I have a question. When I set skeleton_aware=0, I get:
0%| | 0/15000 [00:01<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 132, in <module>
main()
File "train.py", line 114, in main
joint_train(reals, gens[:curr_stage], group_gan_models, lengths,
File "/nfs7/y50021900/ganimator/models/architecture.py", line 71, in joint_train
list(map(optimize_lambda, gan_models))
File "/nfs7/y50021900/ganimator/models/architecture.py", line 68, in <lambda>
optimize_lambda = lambda x: x.optimize_parameters(gen=True, disc=False, rec=False)
File "/nfs7/y50021900/ganimator/models/gan1d.py", line 164, in optimize_parameters
self.backward_G()
File "/nfs7/y50021900/ganimator/models/gan1d.py", line 136, in backward_G
loss_total.backward(retain_graph=True)
File "/nfs7/y50021900/miniconda3/envs/ganimator/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/nfs7/y50021900/miniconda3/envs/ganimator/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [429, 256, 1, 5]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
How can I fix it?
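The error itself can be reproduced outside ganimator in a few lines; this is a generic torch example, and finding where exactly the in-place write happens in the skeleton-unaware code path would still require torch.autograd.set_detect_anomaly(True), as the hint suggests.

```python
import torch

w = torch.randn(3, requires_grad=True)
x = torch.sigmoid(w)   # sigmoid's backward needs its own output
x += 1                 # in-place edit bumps the tensor's version counter
try:
    x.sum().backward()
except RuntimeError:
    print("inplace error reproduced")

# The out-of-place form computes the same values without the error:
y = torch.sigmoid(w) + 1
y.sum().backward()
```

The usual fix is to replace the offending in-place op (`+=`, `.add_()`, etc.) with its out-of-place counterpart, or to `.clone()` the tensor before modifying it.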
Hi,
I want to train with several bvh files and use the motion-mixing feature.
From what I can see, bvh_name takes a single string, and multiple_sequences is deprecated.
How do you actually do it?
Thanks for your assistance.
Can you please provide the arguments you used to train the model in the quaternion representation (e.g., for the Salsa clip shown in the SIGGRAPH video)?
Hi there!
Thanks for your excellent work!
When I use the pretrained model to synthesize Salsa-Dancing-6 and import the synthesized bvh into Blender, I see something a little weird, and I would be grateful if you could tell me whether these results are correct or bugs.
Here is a gif. The left one is the original, the middle one is the synthesized result without contact fixing, and the right one is the synthesized result with contact fixing.
I'm also confused about the gt_ and rec_ bvh files in the results: what do they stand for?
Thanks again for your reply! Wish you all the best :-D
Hi there,
thank you very much for your wonderful paper and code!
I tried to use the baseball-milling model to directly generate action sequences and extracted 21 frames as a key-frame bvh. However, the test result is not very good, which may be related to the quality of the key-frame extraction.
I also fixed the root node position, which gives some improvement, but it still needs more work.
Do you have any advice on a simpler key-frame generation method? Should I extract the key-frame bvh directly from the original training data?
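As a baseline to compare against (a naive picker, not what the paper proposes), evenly spaced frames are the simplest key-frame extraction; the frame count below is illustrative.

```python
import numpy as np

# Sample 21 evenly spaced key frames from a motion of T frames,
# always including the first and last frame.
T = 200
keyframes = np.linspace(0, T - 1, num=21).round().astype(int)
print(len(keyframes), keyframes[0], keyframes[-1])  # -> 21 0 199
```

More content-aware alternatives pick frames at local extrema of joint velocity, but uniform sampling is a reasonable sanity check for whether extraction quality is the bottleneck.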
Thank you!
Hi Peizhuo! Thank you for your great work!
I want to try running your code on different skeleton structures. What is the source of the crab bvh? Does that source have other animated animals to try?
Thanks:)
Hi, I'm having trouble understanding the pooling operator. Posting for others to recreate.
My goal is to see whether it's possible to describe each frame of the animation in a tabular (spreadsheet) input format so I can process it with Ludwig: https://ludwig-ai.github.io/ludwig-docs/0.5/
Specifically, I want to identify, per bone, the pooled limb and the specific bone classification without using text processing.
This training input data solves for the bone classification using text processing. https://github.com/V-Sekai-fire/ML_avatar_wellness/blob/main/ml/train.tsv
I ran demo.sh on my computer and got result_fixed.bvh, but the BVH skeleton doesn't move like the one in your video.
Here is my result_fixed.bvh for salsa: https://drive.google.com/file/d/1g-geTpO3On4Ep1hMXBUab1HTSctVr44E/view?usp=sharing
This is excellent work. Could you provide demos for style transfer and key-frame editing?