guytevet / motion-diffusion-model
The official PyTorch implementation of the paper "Human Motion Diffusion Model"
License: MIT License
Thanks for your awesome work!
I encountered errors when trying to evaluate performance following the instructions.
The problem is caused by fixed absolute paths and missing checkpoints.
Could you consider releasing the 'model/finest.tar' checkpoint? I believe fixing the paths should be easy.
Thanks,
Qinsheng
I'm getting this error:
(mdm) E:\AI\motion-diffusion-model>bash prepare/download_smpl_files.sh
An error occurred mounting one of your file systems. Please run 'dmesg' for more details.
(mdm) E:\AI\motion-diffusion-model>dmesg
'dmesg' is not recognized as an internal or external command,
operable program or batch file.
(mdm) E:\AI\motion-diffusion-model>
I was trying to run the evaluation on both datasets. However, the variable motion_embeddings for the GT and generated motions (in eval_humanml.py) contains NaN, which throws an error during the calculation of FID.
I took a deeper look at where the NaN comes from: it actually comes from the dataloaders. The motions from the dataloaders sometimes contain NaN.
Do you have an idea how to fix that? @GuyTevet
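For anyone hitting the same thing, a quick way to locate the corrupted samples is to scan the loader output directly. This is just a debugging sketch; the assumption that the loader yields (motion, cond) pairs follows the repo's t2m-style loaders and may need adjusting:

```python
import torch

# Sketch: flag dataloader batches whose motion tensors contain NaN.
# Assumption: the loader yields (motion, cond) pairs; adapt to your loader.
for i, (motion, cond) in enumerate(data_loader):
    if torch.isnan(motion).any():
        print(f'batch {i}: {torch.isnan(motion).sum().item()} NaN values')
```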
This is an outstanding job!
But how can I export the generated motion as a .bvh file?
Looking forward to your reply!
While trying to run the sample code
python -m sample --model_path ./save/humanml_trans_enc_512/model000200000.pt --num_samples 10 --num_repetitions 3
I am getting this error:
AttributeError: can't set attribute
Hi, Guy,
Thanks for sharing the code and excellent work!
I'm running your code and found that the default loss weights in this repo for the joint position loss, joint velocity loss, and foot contact loss are all set to 0, as shown here. I didn't find the values of these weights in the paper either. Could you please tell me the loss weights you used during training? Thanks!
It is possible.
First of all, thank you for uploading the code and providing a guide.
When following the guide and installing the dependencies for action-to-motion, some problems came up.
Firstly, I could not find a list of all the required packages. This was not a big issue, since executing
python -m sample.generate --model_path ./save/humanact12/model000350000.pt --num_samples 10 --num_repetitions 3
to sample motion showed which libraries were missing.
Secondly, and more problematic, the execution of the command above did not work even though I strictly followed the instructions. I get the error below. In the beginning it says "[libopenh264 @ 0x5587c35f6040] Incorrect library version loaded". I am unsure whether this causes the problem, and I find no indication of what library version to install.
Furthermore, an error "AttributeError: can't set attribute" is thrown. This might be caused by the previous error, but again, I am unsure.
Do you know what (if anything) needs to be installed, or is this perhaps a bug in the implementation?
Thanks in advance, Anthony.
Loading dataset...
Creating model and diffusion...
TRANS_ENC init
EMBED ACTION
Loading checkpoints from [./save/humanact12/model000350000.pt]...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 305.73it/s]
created 2 samples
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:02<00:00, 341.51it/s]
created 4 samples
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:02<00:00, 351.69it/s]
created 6 samples
saving results file to [./save/humanact12/samples_humanact12_000350000_seed10/results.npy]
saving visualizations to [./save/humanact12/samples_humanact12_000350000_seed10]...
["drink" (00) | Rep #00 | -> sample00_rep00.mp4]
MovieWriter stderr:
[libopenh264 @ 0x5587c35f6040] Incorrect library version loaded
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Traceback (most recent call last):
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/site-packages/matplotlib/animation.py", line 234, in saving
yield self
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/site-packages/matplotlib/animation.py", line 1076, in save
anim._init_draw() # Clear the initial frame
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/site-packages/matplotlib/animation.py", line 1696, in _init_draw
self._draw_frame(frame_data)
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/site-packages/matplotlib/animation.py", line 1718, in _draw_frame
self._drawn_artists = self._func(framedata, *self._args)
File "/home/anthony/Documents/MA/motion-diffusion-model/data_loaders/humanml/utils/plot_script.py", line 95, in update
ax.lines = []
AttributeError: can't set attribute
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/anthony/Documents/MA/motion-diffusion-model/sample/generate.py", line 256, in
main()
File "/home/anthony/Documents/MA/motion-diffusion-model/sample/generate.py", line 189, in main
plot_3d_motion(animation_save_path, skeleton, motion, dataset=args.dataset, title=caption, fps=fps)
File "/home/anthony/Documents/MA/motion-diffusion-model/data_loaders/humanml/utils/plot_script.py", line 128, in plot_3d_motion
ani.save(save_path, fps=fps)
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/site-packages/matplotlib/animation.py", line 1093, in save
writer.grab_frame(**savefig_kwargs)
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/contextlib.py", line 130, in exit
self.gen.throw(type, value, traceback)
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/site-packages/matplotlib/animation.py", line 236, in saving
self.finish()
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/site-packages/matplotlib/animation.py", line 342, in finish
self._cleanup() # Inline _cleanup() once cleanup() is removed.
File "/home/anthony/anaconda3/envs/mdm/lib/python3.7/site-packages/matplotlib/animation.py", line 374, in _cleanup
self._proc.returncode, self._proc.args, out, err)
subprocess.CalledProcessError: Command '['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '300x300', '-pix_fmt', 'rgba', '-r', '20', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-y', './save/humanact12/samples_humanact12_000350000_seed10/sample00_rep00.mp4']' returned non-zero exit status 1.
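For reference, Axes.lines became a read-only property in matplotlib 3.5, which is exactly what ax.lines = [] trips over. A workaround many users apply (an assumption about plot_script.py's intent, not an official patch; pinning matplotlib to the version in environment.yml also works) is to remove the artists in place:

```python
# In data_loaders/humanml/utils/plot_script.py, inside update():
# ax.lines = []   # fails on matplotlib >= 3.5: 'lines' is now read-only
while ax.lines:          # remove line artists one by one instead
    ax.lines[0].remove()
while ax.collections:    # same treatment for the filled surfaces
    ax.collections[0].remove()
```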
Hi, thanks for the excellent work!
I found that max_text_len is set to 20 by default, while the text lengths in HumanML3D often exceed this. Is there a specific reason for this config?
Hi,
First of all thank you very much for uploading your repository.
Could you help me with this issue? I'm having an error when running this script:
python -m sample --model_path ./save/humanml_trans_enc_512/model000200000.pt --text_prompt "the person walked forward and is picking up his toolbox."
The output is:
Creating model and diffusion...
TRANS_ENC init
EMBED TEXT
Loading CLIP...
Loading checkpoints from [./save/humanml_trans_enc_512/model000200000.pt]...
Loading dataset...
Reading ././dataset/humanml_opt.txt
Loading dataset t2m ...
100% 4384/4384 [00:00<00:00, 14537.12it/s]
0% 0/1000 [00:01<?, ?it/s]
Traceback (most recent call last):
File "/usr/local/envs/mdm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/local/envs/mdm/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/content/motion-diffusion-model/sample.py", line 175, in
main()
File "/content/motion-diffusion-model/sample.py", line 116, in main
const_noise=False,
File "/content/motion-diffusion-model/diffusion/gaussian_diffusion.py", line 654, in p_sample_loop
const_noise=const_noise,
File "/content/motion-diffusion-model/diffusion/gaussian_diffusion.py", line 727, in p_sample_loop_progressive
const_noise=const_noise,
File "/content/motion-diffusion-model/diffusion/gaussian_diffusion.py", line 530, in p_sample
model_kwargs=model_kwargs,
File "/content/motion-diffusion-model/diffusion/respace.py", line 92, in p_mean_variance
return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
File "/content/motion-diffusion-model/diffusion/gaussian_diffusion.py", line 305, in p_mean_variance
model_output = model(x, self._scale_timesteps(t), **model_kwargs)
File "/content/motion-diffusion-model/diffusion/respace.py", line 129, in call
return self.model(x, new_ts, **kwargs)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/motion-diffusion-model/model/cfg_sampler.py", line 26, in forward
out = self.model(x, timesteps, y)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/motion-diffusion-model/model/mdm.py", line 158, in forward
enc_text = self.encode_text(y['text'])
File "/content/motion-diffusion-model/model/mdm.py", line 145, in encode_text
return self.clip_model.encode_text(texts).float()
File "/usr/local/envs/mdm/lib/python3.7/site-packages/clip/model.py", line 348, in encode_text
x = self.transformer(x)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/clip/model.py", line 203, in forward
return self.resblocks(x)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/clip/model.py", line 190, in forward
x = x + self.attention(self.ln_1(x))
File "/usr/local/envs/mdm/lib/python3.7/site-packages/clip/model.py", line 187, in attention
return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
File "/usr/local/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/activation.py", line 985, in forward
attn_mask=attn_mask)
File "/usr/local/envs/mdm/lib/python3.7/site-packages/torch/nn/functional.py", line 4294, in multi_head_attention_forward
attn_output_weights = torch.bmm(q, k.transpose(1, 2))
RuntimeError: "baddbmm__mkl" not implemented for 'Half'
During training, the code just reports each iteration's loss and doesn't explicitly say which checkpoint is best, so I wonder how I can choose the best one to evaluate.
Has anyone run the evaluation, and are the results good?
I'm on this step:
Loading CLIP...
22%|████████▌ | 74.5M/338M [09:58<25:45, 178kiB/s]
The loading speed is about 150kiB/s. I guess the model is being downloaded, but to where? And why is the speed so low?
I'm using the CPU version on Ubuntu 22.04, but I doubt that I have such narrow bottlenecks in my system...
The exported .npy has SMPL parameters, but it would be great if it could also be exported as a .pkl file with SMPL parameters.
Section 5.1 of the paper describes the in-betweening and body-part editing applications.
However, I cannot find sample code for these applications in this repository.
I guess I need to prepare fixed input frames and repeat the sample loop, but I don't know how to format them and feed them in (see the sketch below).
The motion-editing example shows editing a motion from the dataset.
Would you show me how to edit a motion that was generated from text?
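For what it's worth, in-betweening in diffusion models is usually implemented as inpainting: at every denoising step, the frames you want to keep are overwritten with a noised copy of the input. The sketch below is one reading of that idea with assumed names (a p_sample returning a dict with 'sample', a q_sample, and a frame mask), not the authors' actual editing code:

```python
import torch

def inbetween_step(diffusion, model, x_t, t, known_motion, mask):
    """One reverse-diffusion step with the mask==1 frames pinned to the input.
    mask: 1 where frames are fixed (e.g. prefix/suffix), 0 where MDM fills in."""
    out = diffusion.p_sample(model, x_t, t)        # regular denoising step
    x_prev = out['sample']
    known_t = diffusion.q_sample(known_motion, t)  # re-noise the known frames to step t
    return mask * known_t + (1 - mask) * x_prev
```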
Hi, thanks for sharing your amazing work!
Two questions:
In the current visualization, would it be possible to fix the skeleton so that it does not move in the X/Z plane? I tried setting the translation to false here, but that did not work.
Have you tried predicting SMPL rotations instead of joint offsets in xyz? Could you share some insight? :)
Hi! I have my own COCO keypoint joint structure of 18 joints and their positions along the X, Y, Z axes.
I am looking to convert these keypoints to the SMPL format so that I can eventually run my pose animations in Mixamo.
How can I use your work or the sub-folder joints2smpl to do this? https://github.com/GuyTevet/motion-diffusion-model/tree/main/visualize/joints2smpl
More specifically, what is the content and structure of the .npy file that is used as input to your line: python -m visualize.render_mesh --input_path /path/to/mp4/stick/figure/file? (see the sketch below)
OR
"python fit_seq.py --files test_motion2.npy" in joints2smpl: what is test_motion2.npy?
Any further help regarding how to use the output to render the animation would also be appreciated!
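In case it helps, one way to answer the format question empirically is to inspect the bundled example file. This is only a sketch; the file path and the (num_frames, num_joints, 3) xyz layout are assumptions to verify:

```python
import numpy as np

# Inspect joints2smpl's example input to learn the expected layout.
m = np.load('visualize/joints2smpl/test_motion2.npy')
print(m.shape, m.dtype)  # assumption: (num_frames, num_joints, 3) xyz positions
```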
ERROR When running text to motion:
saving results file to [./save/humanml_trans_enc_512/samples_humanml_trans_enc_512_000200000_seed10_the_person_walked_forward_and_is_picking_up_his_toolbox/results.npy]
saving visualizations to [./save/humanml_trans_enc_512/samples_humanml_trans_enc_512_000200000_seed10_the_person_walked_forward_and_is_picking_up_his_toolbox]...
["the person walked forward and is picking up his toolbox." (00) | Rep #00 | -> sample00_rep00.mp4]
Traceback (most recent call last):
File "/home/jpaskett/miniconda3/envs/mdm/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/jpaskett/miniconda3/envs/mdm/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/jpaskett/motion-diffusion-model/sample/generate.py", line 256, in
main()
File "/home/jpaskett/motion-diffusion-model/sample/generate.py", line 189, in main
plot_3d_motion(animation_save_path, skeleton, motion, dataset=args.dataset, title=caption, fps=fps)
File "/home/jpaskett/motion-diffusion-model/data_loaders/humanml/utils/plot_script.py", line 128, in plot_3d_motion
ani.save(save_path, fps=fps)
File "/home/jpaskett/miniconda3/envs/mdm/lib/python3.10/site-packages/matplotlib/animation.py", line 1068, in save
anim._init_draw() # Clear the initial frame
File "/home/jpaskett/miniconda3/envs/mdm/lib/python3.10/site-packages/matplotlib/animation.py", line 1706, in _init_draw
self._draw_frame(frame_data)
File "/home/jpaskett/miniconda3/envs/mdm/lib/python3.10/site-packages/matplotlib/animation.py", line 1728, in _draw_frame
self._drawn_artists = self._func(framedata, *self._args)
File "/home/jpaskett/motion-diffusion-model/data_loaders/humanml/utils/plot_script.py", line 95, in update
ax.lines = []
AttributeError: can't set attribute 'lines'
Is there any specific version of Anaconda needed after the updates? My current version fails to solve the environment.
I'm getting this error :D
E:\AI\motion-diffusion-model\motion-diffusion-model>conda env create -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- pysocks==1.7.1=py37h89c1867_5
- python==3.7.13=h12debd9_0
- cymem==2.0.6=py37hd23a5d3_3
- fftw==3.3.9=h27cfd23_1
- h5py==3.7.0=py37h737f45e_0
- libpng==1.6.37=hbc83047_0
- tqdm==4.64.1=py37h06a4308_0
- zlib==1.2.12=h5eee18b_3
- libgfortran5==11.2.0=h1234567_1
- gst-plugins-base==1.14.0=h8213a91_2
- expat==2.4.9=h6a678d5_0
- libstdcxx-ng==11.2.0=h1234567_1
- lz4-c==1.9.3=h295c915_1
- hdf5==1.10.6=h3ffc7dd_1
- spacy==3.3.1=py37h79cecc1_0
- jpeg==9b=h024ee3a_2
- cffi==1.15.1=py37h74dc2b5_0
- qt==5.9.7=h5867ecd_1
- libgomp==11.2.0=h1234567_1
- tornado==6.2=py37h5eee18b_0
- scipy==1.7.3=py37h6c91a56_2
- freetype==2.11.0=h70c0345_0
- dbus==1.13.18=hb2f20db_0
- intel-openmp==2021.4.0=h06a4308_3561
- libwebp==1.2.0=h89dd481_0
- mkl-service==2.4.0=py37h7f8727e_0
- ninja-base==1.10.2=hd09550d_5
- numpy-base==1.21.5=py37ha15fc14_3
- cryptography==35.0.0=py37hf1a17b8_2
- pip==22.2.2=py37h06a4308_0
- kiwisolver==1.4.2=py37h295c915_0
- pyqt==5.9.2=py37h05f1152_2
- readline==8.1.2=h7f8727e_1
- libgcc-ng==11.2.0=h1234567_1
- sip==4.19.8=py37hf484d3e_0
- ld_impl_linux-64==2.38=h1181459_1
- libtiff==4.1.0=h2733197_1
- ncurses==6.3=h5eee18b_3
- libuuid==1.0.3=h7f8727e_2
- pyparsing==3.0.9=py37h06a4308_0
- sqlite==3.39.3=h5082296_0
- pytorch==1.7.1=py3.7_cuda11.0.221_cudnn8.0.5_0
- tk==8.6.12=h1ccaba5_0
- lcms2==2.12=h3be6417_0
- libuv==1.40.0=h7b6447c_0
- pcre==8.45=h295c915_0
- pydantic==1.8.2=py37h5e8e339_2
- pillow==9.2.0=py37hace64e9_1
- cudatoolkit==11.0.221=h6bb024c_0
- libgfortran-ng==11.2.0=h00389a5_1
- zstd==1.4.9=haebb681_0
- gstreamer==1.14.0=h28cd5cc_2
- matplotlib-base==3.1.3=py37hef1b27d_0
- icu==58.2=he6710b0_3
- ninja==1.10.2=h06a4308_5
- setuptools==63.4.1=py37h06a4308_0
- mkl==2021.4.0=h06a4308_640
- libffi==3.3=he6710b0_2
- libxcb==1.15=h7f8727e_0
- mkl_random==1.2.2=py37h51133e4_0
- openssl==1.1.1q=h7f8727e_0
- xz==5.2.6=h5eee18b_0
- giflib==5.2.1=h7b6447c_0
- ca-certificates==2022.9.24=ha878542_0
- glib==2.69.1=h4ff587b_1
- numpy==1.21.5=py37h6c91a56_3
- mkl_fft==1.3.1=py37hd3c417c_0
- _openmp_mutex==5.1=1_gnu
- brotlipy==0.7.0=py37h540881e_1004
- fontconfig==2.13.1=h6c09931_0
- libxml2==2.9.14=h74e7548_0
- markupsafe==2.1.1=py37h540881e_1
- catalogue==2.0.8=py37h89c1867_0
E:\AI\motion-diffusion-model\motion-diffusion-model>
Hi, I found that the motion shape is (30, 22, 3, 120).
May I ask why a pose is represented as [30, 22, 3]?
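For context, a quick way to see what each dimension holds (a sketch; the pickled-dict layout and the axis order are my assumptions about results.npy, so double-check against generate.py):

```python
import numpy as np

# Assumption: results.npy stores a pickled dict with a 'motion' array laid
# out as (num_samples * num_repetitions, num_joints, xyz, num_frames).
data = np.load('path/to/results.npy', allow_pickle=True).item()
print(data['motion'].shape)  # e.g. (30, 22, 3, 120)
```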
When I run "bash prepare/download_glove.sh",
it cannot find glove.zip at the given Google Drive link.
My goal is to get an animation file (.rbx or something like that) from the meshes (.obj files) and params (.npy file) so that I can import the animation into Unity. As stated in the instructions, the SMPL add-on for Blender is the recommended way of doing this. Probably a beginner question, but what exactly am I supposed to take into Blender? The whole mesh sequence or one mesh?
My other question is: is it possible to automate this process somehow (see the sketch below)? I noticed that SMPL also has a Python example of how to load SMPL files, but I am not sure if that is of any help.
Thanks for the help!
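On the automation question, one possible route (a rough sketch for Blender 3.x's bpy API; the file path and the one-mesh-per-frame assumption are mine) is to script the .obj sequence import and keyframe each mesh's visibility:

```python
import glob
import bpy

# Import every .obj of the sequence and show mesh i only on frame i.
obj_files = sorted(glob.glob('/path/to/obj/sequence/*.obj'))  # assumption: one mesh per frame
for frame, path in enumerate(obj_files):
    bpy.ops.import_scene.obj(filepath=path)
    obj = bpy.context.selected_objects[0]
    for f, hidden in ((frame - 1, True), (frame, False), (frame + 1, True)):
        obj.hide_viewport = obj.hide_render = hidden
        obj.keyframe_insert('hide_viewport', frame=f)
        obj.keyframe_insert('hide_render', frame=f)
```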
I'm getting this error:
(mdm) E:\AI\motion-diffusion-model>python -m sample --model_path ./save/humanml_trans_enc_512/model000200000.pt --input_text ./assets/example_text_prompts.txt
Creating model and diffusion...
TRANS_ENC init
EMBED TEXT
Loading CLIP...
100%|███████████████████████████████████████| 338M/338M [00:18<00:00, 18.8MiB/s]
Traceback (most recent call last):
File "C:\Users\Abdullah\miniconda3\envs\mdm\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\Abdullah\miniconda3\envs\mdm\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "E:\AI\motion-diffusion-model\sample.py", line 178, in <module>
main()
File "E:\AI\motion-diffusion-model\sample.py", line 45, in main
model, diffusion = create_model_and_diffusion(args)
File "E:\AI\motion-diffusion-model\utils\model_util.py", line 13, in create_model_and_diffusion
model = MDM(**get_model_args(args))
File "E:\AI\motion-diffusion-model\model\mdm.py", line 102, in __init__
self.rot2xyz = Rotation2xyz(device='cpu', dataset=self.dataset)
File "E:\AI\motion-diffusion-model\model\rotation2xyz.py", line 15, in __init__
self.smpl_model = SMPL().eval().to(device)
File "E:\AI\motion-diffusion-model\model\smpl.py", line 72, in __init__
super(SMPL, self).__init__(**kwargs)
File "C:\Users\Abdullah\miniconda3\envs\mdm\lib\site-packages\smplx\body_models.py", line 406, in __init__
**kwargs,
File "C:\Users\Abdullah\miniconda3\envs\mdm\lib\site-packages\smplx\body_models.py", line 134, in __init__
smpl_path)
AssertionError: Path ./body_models/smpl\SMPL_NEUTRAL.pkl does not exist!
(mdm) E:\AI\motion-diffusion-model>
Regarding motion-diffusion-model/sample.py, line 125 at commit 3c0ce16:
Hi, I was looking at the sample example code you shared.
What is the output of recover_from_ric? (see the note below)
Are these 22 joints exactly the SMPL joints, i.e., the root and 21 body joints?
(In axis-angle or another notation?)
Thanks a lot for the help.
Cheers,
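For anyone else wondering, my understanding (hedged; verify against motion_process.py) is that recover_from_ric converts the 263-dimensional HumanML3D representation back to global xyz joint positions, not rotations:

```python
from data_loaders.humanml.scripts.motion_process import recover_from_ric

# Assumption: `sample` is a float tensor in the 263-dim HumanML3D
# representation. The 22 joints of the output are xyz positions.
joints = recover_from_ric(sample, 22)  # -> (..., 22, 3) joint positions
```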
Has anyone experienced an error while running the python -m spacy download en_core_web_sm line?
I am getting:
OSError: ~/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/lib/../../../../libcublas.so.11: undefined symbol: free_gemm_select, version libcublasLt.so.11
Some people have solved and discussed it in this post on the PyTorch forum, but if I follow their instructions, the pytorch package is downgraded to the CPU version, as stated here:
pytorch 1.7.1-py3.7_cuda11.0.221_cudnn8.0.5_0 --> 1.7.1-py3.7_cpu_0
The discussion is from 2021, so I am not sure how useful it is in this case.
Any ideas on how to resolve this?
Thanks :)
Thanks for the great work!
I am able to reproduce the text-to-motion results. I would now like to look at action-to-motion and motion editing.
Do I have to train these on my own to get the results?
The libraries can't be installed on Windows. Could there be Windows support for installation?
Each sentence generates 6 seconds of motion data. I want to generate a long animation by connecting them.
I concatenated the motion clips into one array, but it produces jumps between them.
Would you suggest any idea for combining motion data? (see the sketch below)
I think if I could use the last motion data of the previous sentence as the first input for the following sentence, it would connect the motions smoothly.
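A naive starting point for the stitching (a sketch under the assumption that clips are (frames, joints, 3) position arrays; it hides hard jumps but will not fix foot sliding) is a linear crossfade over an overlap window:

```python
import numpy as np

def crossfade(clip_a, clip_b, overlap=15):
    """Linearly blend the last `overlap` frames of clip_a into the first
    `overlap` frames of clip_b. Assumes (frames, joints, 3) arrays."""
    w = np.linspace(0.0, 1.0, overlap)[:, None, None]
    blended = (1 - w) * clip_a[-overlap:] + w * clip_b[:overlap]
    return np.concatenate([clip_a[:-overlap], blended, clip_b[overlap:]], axis=0)
```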
Hi everyone,
Has anyone been able to use the information in results.npy or sampleXX_repXX_smpl_params.npy directly with the Blender SMPL add-on? I prefer using results.npy, because then I don't have to use visualize.render_mesh.
Anyway, I have been trying to do this for the past week, but so far I have only got a creepy creature moving around (see attached video). Obviously there is a weird world translation going on. I tried to modify the code from #5 (comment).
I will share my code in case it helps professionals identify the issue.
Thank you
Mohammad
I met this error when I tried to evaluate.
Traceback (most recent call last):
File "/home/zhao_yang/anaconda3/envs/mdm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/zhao_yang/anaconda3/envs/mdm/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/zhao_yang/project/mdm/eval/eval_humanml.py", line 282, in
model, diffusion = create_model_and_diffusion(args)
TypeError: create_model_and_diffusion() missing 1 required positional argument: 'data'
In eval_humanml.py, create_model_and_diffusion is called without the 'data' argument, even though the function requires it. I wonder what's wrong.
Is there a way to add physics to the model's flesh, or to change/edit the character model?
Solved here.
Hi, I have checked that the version of matplotlib is the same as in the requirements file, and that ffmpeg is also installed. However, this error still happens:
["the person walked forward and is picking up his toolbox." (00) | Rep #00 | -> sample00_rep00.mp4]
MovieWriter stderr:
[libopenh264 @ 0x561facde50c0] Incorrect library version loaded
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Traceback (most recent call last):
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 234, in saving
yield self
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 1076, in save
anim._init_draw() # Clear the initial frame
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 1696, in _init_draw
self._draw_frame(frame_data)
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 1718, in _draw_frame
self._drawn_artists = self._func(framedata, *self._args)
File "/home/tiger/code/motion-diffusion-model/data_loaders/humanml/utils/plot_script.py", line 95, in update
ax.lines = []
AttributeError: can't set attribute
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tiger/miniconda3/envs/mdm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/tiger/miniconda3/envs/mdm/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/tiger/code/motion-diffusion-model/sample/generate.py", line 256, in <module>
main()
File "/home/tiger/code/motion-diffusion-model/sample/generate.py", line 189, in main
plot_3d_motion(animation_save_path, skeleton, motion, dataset=args.dataset, title=caption, fps=fps)
File "/home/tiger/code/motion-diffusion-model/data_loaders/humanml/utils/plot_script.py", line 128, in plot_3d_motion
ani.save(save_path, fps=fps)
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 1093, in save
writer.grab_frame(**savefig_kwargs)
File "/home/tiger/miniconda3/envs/mdm/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 236, in saving
self.finish()
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 342, in finish
self._cleanup() # Inline _cleanup() once cleanup() is removed.
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 374, in _cleanup
self._proc.returncode, self._proc.args, out, err)
subprocess.CalledProcessError: Command '['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '300x300', '-pix_fmt', 'rgba', '-r', '20', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-y', 'pretrained_models/HumanML3D/humanml_trans_enc_512/samples_humanml_trans_enc_512_000200000_seed10_the_person_walked_forward_and_is_picking_up_his_toolbox/sample00_rep00.mp4']' returned non-zero exit status 1.
# packages in environment at /home/tiger/miniconda3/envs/mdm:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main anaconda
_openmp_mutex 5.1 1_gnu anaconda
beautifulsoup4 4.11.1 pyha770c72_0 conda-forge
blas 1.0 mkl anaconda
blis 0.7.8 pypi_0 pypi
brotlipy 0.7.0 py37h540881e_1004 conda-forge
bzip2 1.0.8 h7b6447c_0
ca-certificates 2022.10.11 h06a4308_0
catalogue 2.0.8 py37h89c1867_0 conda-forge
certifi 2022.9.24 py37h06a4308_0
cffi 1.15.1 py37h74dc2b5_0 anaconda
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
chumpy 0.70 pypi_0 pypi
click 8.0.4 py37h06a4308_0 anaconda
clip 1.0 pypi_0 pypi
colorama 0.4.5 pyhd8ed1ab_0 conda-forge
confection 0.0.2 pypi_0 pypi
cryptography 35.0.0 py37hf1a17b8_2 conda-forge
cudatoolkit 11.3.1 h2bc3f7f_2
cycler 0.11.0 pyhd3eb1b0_0 anaconda
cymem 2.0.6 py37hd23a5d3_3 conda-forge
dataclasses 0.8 pyhc8e2a94_3 conda-forge
dbus 1.13.18 hb2f20db_0 anaconda
expat 2.4.9 h6a678d5_0
ffmpeg 1.4 pypi_0 pypi
fftw 3.3.9 h27cfd23_1 anaconda
filelock 3.8.0 pyhd8ed1ab_0 conda-forge
fontconfig 2.13.1 h6c09931_0 anaconda
freetype 2.11.0 h70c0345_0 anaconda
ftfy 6.1.1 pypi_0 pypi
gdown 4.5.1 pyhd8ed1ab_0 conda-forge
giflib 5.2.1 h7b6447c_0 anaconda
glib 2.69.1 h4ff587b_1 anaconda
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
gst-plugins-base 1.14.0 h8213a91_2 anaconda
gstreamer 1.14.0 h28cd5cc_2 anaconda
h5py 3.7.0 py37h737f45e_0 anaconda
hdf5 1.10.6 h3ffc7dd_1 anaconda
icu 58.2 he6710b0_3 anaconda
idna 3.4 pyhd8ed1ab_0 conda-forge
importlib-metadata 5.0.0 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561 anaconda
jinja2 3.1.2 pyhd8ed1ab_1 conda-forge
joblib 1.1.0 pyhd3eb1b0_0 anaconda
jpeg 9b h024ee3a_2
kiwisolver 1.4.2 py37h295c915_0 anaconda
lame 3.100 h7b6447c_0
langcodes 3.3.0 pyhd8ed1ab_0 conda-forge
lcms2 2.12 h3be6417_0 anaconda
ld_impl_linux-64 2.38 h1181459_1 anaconda
libffi 3.3 he6710b0_2 anaconda
libgcc-ng 11.2.0 h1234567_1 anaconda
libgfortran-ng 11.2.0 h00389a5_1 anaconda
libgfortran5 11.2.0 h1234567_1 anaconda
libgomp 11.2.0 h1234567_1 anaconda
libiconv 1.16 h7f8727e_2
libidn2 2.3.2 h7f8727e_0
libpng 1.6.37 hbc83047_0 anaconda
libstdcxx-ng 11.2.0 h1234567_1 anaconda
libtasn1 4.16.0 h27cfd23_0
libtiff 4.1.0 h2733197_1
libunistring 0.9.10 h27cfd23_0
libuuid 1.0.3 h7f8727e_2 anaconda
libuv 1.40.0 h7b6447c_0 anaconda
libwebp 1.2.0 h89dd481_0 anaconda
libxcb 1.15 h7f8727e_0 anaconda
libxml2 2.9.14 h74e7548_0 anaconda
lxml 4.9.1 pypi_0 pypi
lz4-c 1.9.3 h295c915_1 anaconda
markupsafe 2.1.1 py37h540881e_1 conda-forge
matplotlib 3.1.3 py37_0 anaconda
matplotlib-base 3.1.3 py37hef1b27d_0
mkl 2021.4.0 h06a4308_640 anaconda
mkl-service 2.4.0 py37h7f8727e_0 anaconda
mkl_fft 1.3.1 py37hd3c417c_0 anaconda
mkl_random 1.2.2 py37h51133e4_0 anaconda
murmurhash 1.0.8 pypi_0 pypi
ncurses 6.3 h5eee18b_3 anaconda
nettle 3.7.3 hbbd107a_1
ninja 1.10.2 h06a4308_5 anaconda
ninja-base 1.10.2 hd09550d_5 anaconda
numpy 1.21.5 py37h6c91a56_3 anaconda
numpy-base 1.21.5 py37ha15fc14_3 anaconda
openh264 2.1.1 h4ff587b_0
openssl 1.1.1s h7f8727e_0
packaging 21.3 pyhd8ed1ab_0 conda-forge
pathy 0.6.2 pyhd8ed1ab_0 conda-forge
pcre 8.45 h295c915_0 anaconda
pillow 9.2.0 py37hace64e9_1 anaconda
pip 22.2.2 py37h06a4308_0
preshed 3.0.7 pypi_0 pypi
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pycryptodomex 3.15.0 pypi_0 pypi
pydantic 1.8.2 py37h5e8e339_2 conda-forge
pyopenssl 22.0.0 pyhd8ed1ab_1 conda-forge
pyparsing 3.0.9 py37h06a4308_0
pyqt 5.9.2 py37h05f1152_2 anaconda
pysocks 1.7.1 py37h89c1867_5 conda-forge
python 3.7.13 h12debd9_0 anaconda
python-dateutil 2.8.2 pyhd3eb1b0_0 anaconda
python_abi 3.7 2_cp37m conda-forge
pytorch 1.12.1 py3.7_cuda11.3_cudnn8.3.2_0 pytorch
pytorch-mutex 1.0 cuda pytorch
qt 5.9.7 h5867ecd_1
readline 8.1.2 h7f8727e_1 anaconda
regex 2022.9.13 pypi_0 pypi
requests 2.28.1 pyhd8ed1ab_1 conda-forge
scikit-learn 1.0.2 py37h51133e4_1 anaconda
scipy 1.7.3 py37h6c91a56_2
setuptools 63.4.1 py37h06a4308_0
shellingham 1.5.0 pyhd8ed1ab_0 conda-forge
sip 4.19.8 py37hf484d3e_0 anaconda
six 1.16.0 pyhd3eb1b0_1 anaconda
smart_open 5.2.1 pyhd8ed1ab_0 conda-forge
smplx 0.1.28 pypi_0 pypi
soupsieve 2.3.2.post1 pyhd8ed1ab_0 conda-forge
spacy 3.3.1 py37h79cecc1_0 anaconda
spacy-legacy 3.0.10 pyhd8ed1ab_0 conda-forge
spacy-loggers 1.0.3 pyhd8ed1ab_0 conda-forge
sqlite 3.39.3 h5082296_0
srsly 2.4.4 pypi_0 pypi
thinc 8.0.17 pypi_0 pypi
threadpoolctl 2.2.0 pyh0d69192_0 anaconda
tk 8.6.12 h1ccaba5_0 anaconda
torchaudio 0.12.1 py37_cu113 pytorch
torchvision 0.13.1 py37_cu113 pytorch
tornado 6.2 py37h5eee18b_0
tqdm 4.64.1 py37h06a4308_0
trimesh 3.15.3 pyh1a96a4e_0 conda-forge
typer 0.4.2 pyhd8ed1ab_0 conda-forge
typing-extensions 4.1.1 pypi_0 pypi
urllib3 1.26.11 py37h06a4308_0 anaconda
wasabi 0.10.1 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0 anaconda
xz 5.2.6 h5eee18b_0
zipp 3.8.1 pyhd8ed1ab_0 conda-forge
zlib 1.2.12 h5eee18b_3
zstd 1.4.9 haebb681_0 anaconda
I also rebuilt the environment; however, another error shows:
Traceback (most recent call last):
File "/home/tiger/miniconda3/envs/mdm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/tiger/miniconda3/envs/mdm/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/tiger/code/motion-diffusion-model/sample/generate.py", line 256, in <module>
main()
File "/home/tiger/code/motion-diffusion-model/sample/generate.py", line 189, in main
plot_3d_motion(animation_save_path, skeleton, motion, dataset=args.dataset, title=caption, fps=fps)
File "/home/tiger/code/motion-diffusion-model/data_loaders/humanml/utils/plot_script.py", line 128, in plot_3d_motion
ani.save(save_path, fps=fps)
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 1076, in save
anim._init_draw() # Clear the initial frame
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 1696, in _init_draw
self._draw_frame(frame_data)
File "/home/tiger/.local/lib/python3.7/site-packages/matplotlib/animation.py", line 1718, in _draw_frame
self._drawn_artists = self._func(framedata, *self._args)
File "/home/tiger/code/motion-diffusion-model/data_loaders/humanml/utils/plot_script.py", line 95, in update
ax.lines = []
AttributeError: can't set attribute
The new environment is:
# packages in environment at /home/tiger/miniconda3/envs/mdm:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main anaconda
_openmp_mutex 5.1 1_gnu anaconda
beautifulsoup4 4.11.1 pyha770c72_0 conda-forge
blas 1.0 mkl anaconda
blis 0.7.8 pypi_0 pypi
brotlipy 0.7.0 py37h540881e_1004 conda-forge
bzip2 1.0.8 h7b6447c_0
ca-certificates 2022.10.11 h06a4308_0
catalogue 2.0.8 py37h89c1867_0 conda-forge
certifi 2022.9.24 py37h06a4308_0
cffi 1.15.1 py37h74dc2b5_0 anaconda
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
chumpy 0.70 pypi_0 pypi
click 8.0.4 py37h06a4308_0 anaconda
clip 1.0 pypi_0 pypi
colorama 0.4.5 pyhd8ed1ab_0 conda-forge
confection 0.0.2 pypi_0 pypi
cryptography 35.0.0 py37hf1a17b8_2 conda-forge
cudatoolkit 11.3.1 h2bc3f7f_2
cycler 0.11.0 pyhd3eb1b0_0 anaconda
cymem 2.0.6 py37hd23a5d3_3 conda-forge
cython-blis 0.7.7 py37hce1f21e_0
dataclasses 0.8 pyhc8e2a94_3 conda-forge
dbus 1.13.18 hb2f20db_0 anaconda
expat 2.4.9 h6a678d5_0
ffmpeg 1.4 pypi_0 pypi
fftw 3.3.9 h27cfd23_1 anaconda
filelock 3.8.0 pyhd8ed1ab_0 conda-forge
fontconfig 2.13.1 h6c09931_0 anaconda
freetype 2.11.0 h70c0345_0 anaconda
ftfy 6.1.1 pypi_0 pypi
gdown 4.5.1 pyhd8ed1ab_0 conda-forge
giflib 5.2.1 h7b6447c_0 anaconda
glib 2.69.1 h4ff587b_1 anaconda
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
gst-plugins-base 1.14.0 h8213a91_2 anaconda
gstreamer 1.14.0 h28cd5cc_2 anaconda
h5py 3.7.0 py37h737f45e_0 anaconda
hdf5 1.10.6 h3ffc7dd_1 anaconda
icu 58.2 he6710b0_3 anaconda
idna 3.4 pyhd8ed1ab_0 conda-forge
importlib-metadata 5.0.0 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561 anaconda
jinja2 3.1.2 pyhd8ed1ab_1 conda-forge
joblib 1.1.0 pyhd3eb1b0_0 anaconda
jpeg 9b h024ee3a_2
kiwisolver 1.4.2 py37h295c915_0 anaconda
lame 3.100 h7b6447c_0
langcodes 3.3.0 pyhd8ed1ab_0 conda-forge
lcms2 2.12 h3be6417_0 anaconda
ld_impl_linux-64 2.38 h1181459_1 anaconda
libffi 3.3 he6710b0_2 anaconda
libgcc-ng 11.2.0 h1234567_1 anaconda
libgfortran-ng 11.2.0 h00389a5_1 anaconda
libgfortran5 11.2.0 h1234567_1 anaconda
libgomp 11.2.0 h1234567_1 anaconda
libiconv 1.16 h7f8727e_2
libidn2 2.3.2 h7f8727e_0
libopus 1.3.1 h7b6447c_0
libpng 1.6.37 hbc83047_0 anaconda
libstdcxx-ng 11.2.0 h1234567_1 anaconda
libtasn1 4.16.0 h27cfd23_0
libtiff 4.1.0 h2733197_1
libunistring 0.9.10 h27cfd23_0
libuuid 1.0.3 h7f8727e_2 anaconda
libuv 1.40.0 h7b6447c_0 anaconda
libvpx 1.7.0 h439df22_0
libwebp 1.2.0 h89dd481_0 anaconda
libxcb 1.15 h7f8727e_0 anaconda
libxml2 2.9.14 h74e7548_0 anaconda
lxml 4.9.1 pypi_0 pypi
lz4-c 1.9.3 h295c915_1 anaconda
markupsafe 2.1.1 py37h540881e_1 conda-forge
matplotlib 3.1.3 py37_0 anaconda
matplotlib-base 3.1.3 py37hef1b27d_0
mkl 2021.4.0 h06a4308_640 anaconda
mkl-service 2.4.0 py37h7f8727e_0 anaconda
mkl_fft 1.3.1 py37hd3c417c_0 anaconda
mkl_random 1.2.2 py37h51133e4_0 anaconda
murmurhash 1.0.8 pypi_0 pypi
ncurses 6.3 h5eee18b_3 anaconda
nettle 3.7.3 hbbd107a_1
ninja 1.10.2 h06a4308_5 anaconda
ninja-base 1.10.2 hd09550d_5 anaconda
numpy 1.21.5 py37h6c91a56_3 anaconda
numpy-base 1.21.5 py37ha15fc14_3 anaconda
openh264 2.1.1 h4ff587b_0
openssl 1.1.1s h7f8727e_0
packaging 21.3 pyhd8ed1ab_0 conda-forge
pathy 0.6.2 pyhd8ed1ab_0 conda-forge
pcre 8.45 h295c915_0 anaconda
pillow 9.2.0 py37hace64e9_1 anaconda
pip 22.2.2 py37h06a4308_0
preshed 3.0.7 pypi_0 pypi
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pycryptodomex 3.15.0 pypi_0 pypi
pydantic 1.8.2 py37h5e8e339_2 conda-forge
pyopenssl 22.0.0 pyhd8ed1ab_1 conda-forge
pyparsing 3.0.9 py37h06a4308_0
pyqt 5.9.2 py37h05f1152_2 anaconda
pysocks 1.7.1 py37h89c1867_5 conda-forge
python 3.7.13 h12debd9_0 anaconda
python-dateutil 2.8.2 pyhd3eb1b0_0 anaconda
python_abi 3.7 2_cp37m conda-forge
pytorch 1.12.1 py3.7_cuda11.3_cudnn8.3.2_0 pytorch
pytorch-mutex 1.0 cuda pytorch
qt 5.9.7 h5867ecd_1
readline 8.1.2 h7f8727e_1 anaconda
regex 2022.9.13 pypi_0 pypi
requests 2.28.1 pyhd8ed1ab_1 conda-forge
scikit-learn 1.0.2 py37h51133e4_1 anaconda
scipy 1.7.3 py37h6c91a56_2
setuptools 63.4.1 py37h06a4308_0
shellingham 1.5.0 pyhd8ed1ab_0 conda-forge
sip 4.19.8 py37hf484d3e_0 anaconda
six 1.16.0 pyhd3eb1b0_1 anaconda
smart_open 5.2.1 pyhd8ed1ab_0 conda-forge
smplx 0.1.28 pypi_0 pypi
soupsieve 2.3.2.post1 pyhd8ed1ab_0 conda-forge
spacy 3.3.1 py37h79cecc1_0 anaconda
spacy-legacy 3.0.10 pyhd8ed1ab_0 conda-forge
spacy-loggers 1.0.3 pyhd8ed1ab_0 conda-forge
sqlite 3.39.3 h5082296_0
srsly 2.4.4 pypi_0 pypi
thinc 8.0.17 pypi_0 pypi
threadpoolctl 2.2.0 pyh0d69192_0 anaconda
tk 8.6.12 h1ccaba5_0 anaconda
torchaudio 0.12.1 py37_cu113 pytorch
torchvision 0.13.1 py37_cu113 pytorch
tornado 6.2 py37h5eee18b_0
tqdm 4.64.1 py37h06a4308_0
trimesh 3.15.3 pyh1a96a4e_0 conda-forge
typer 0.4.2 pyhd8ed1ab_0 conda-forge
typing-extensions 4.1.1 pypi_0 pypi
typing_extensions 3.10.0.2 pyh06a4308_0
urllib3 1.26.11 py37h06a4308_0 anaconda
wasabi 0.10.1 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0 anaconda
x264 1!157.20191217 h7b6447c_0
xz 5.2.6 h5eee18b_0
zipp 3.8.1 pyhd8ed1ab_0 conda-forge
zlib 1.2.12 h5eee18b_3
zstd 1.4.9 haebb681_0 anaconda
Hi
First of all, thanks for your great paper and code, really inspiring.
I tried your demo and could see examples of the motions as .mp4 files.
But the Colab code is actually different from the current repository and does not support SMPL conversion. The script below does not work in the Colab version.
python -m visualize.render_mesh --input_path /path/to/mp4/stick/figure/file
It would be great if you could update the Colab demo so that all the functionality of this repo, including SMPL export, works there.
Thanks a lot
Hi, is there a way to use the output animation to drive a Mixamo model?
The SMPL and Mixamo skeletons are not very alike.
I found that MDM shows the ability to generate based on conditions and seed motions, but how can I run an amazing example like this?
Reminder: the following is just a picture :)
Is there any simple way of combining the output .obj sequence files with the data structure to preserve the vertex order in Blender, so as to get a result similar to the one displayed on the main page of the repo?
Hi,
Thank you for sharing this amazing work!
I see that you are using an old version of the GMM prior for joints2smpl IK. There is a more advanced alternative introduced in SMPLify-X, i.e., VPoser. I have already adapted the code; please have a look.
The VPoser IK works on batches on the GPU, and the output is exactly like AMASS and can be played directly in Blender via the Blender SMPL-X add-on.
I believe this setup involves less hassle compared to creating FBX files directly from .pkl files, as people are already doing. Moreover, using the Blender add-on one can also create FBX files.
Keep up the great work.
Best,
Nima
I tried to train MDM on HumanML3D with the provided training script, but the loss shows NaN, and the predicted result is not correct. Is anything wrong?
By the way, an error occurs when running training with --eval_during_training or --train_platform_type {ClearmlPlatform, TensorboardPlatform}.
I followed the setup instructions on Ubuntu 22.04.1 for the humanml_encoder_512 model.
I'm just trying to run the text to motion part and I ran the given example:
python -m sample.generate --model_path ./save/humanml_trans_enc_512/model000200000.pt --text_prompt "the person walked forward and is picking up his toolbox."
I get RuntimeError: "baddbmm__mkl" not implemented for 'Half'
Here's the stack trace:
(mdm) shubhamkapoor@shubhamkapoor-VirtualBox:~/motion-diffusion-model$ python -m sample.generate --model_path ./save/humanml_trans_enc_512/model000200000.pt --text_prompt "the person walked forward and is picking up his toolbox"
/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /opt/conda/conda-bld/pytorch_1607370156314/work/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
Loading dataset...
Reading ././dataset/humanml_opt.txt
Loading dataset t2m ...
100%|███████████████████████████████████| 4384/4384 [00:00<00:00, 15494.64it/s]
Creating model and diffusion...
TRANS_ENC init
EMBED TEXT
Loading CLIP...
Loading checkpoints from [./save/humanml_trans_enc_512/model000200000.pt]...
0%| | 0/1000 [00:01<?, ?it/s]
Traceback (most recent call last):
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/shubhamkapoor/motion-diffusion-model/sample/generate.py", line 256, in
main()
File "/home/shubhamkapoor/motion-diffusion-model/sample/generate.py", line 124, in main
const_noise=False,
File "/home/shubhamkapoor/motion-diffusion-model/diffusion/gaussian_diffusion.py", line 654, in p_sample_loop
const_noise=const_noise,
File "/home/shubhamkapoor/motion-diffusion-model/diffusion/gaussian_diffusion.py", line 727, in p_sample_loop_progressive
const_noise=const_noise,
File "/home/shubhamkapoor/motion-diffusion-model/diffusion/gaussian_diffusion.py", line 530, in p_sample
model_kwargs=model_kwargs,
File "/home/shubhamkapoor/motion-diffusion-model/diffusion/respace.py", line 92, in p_mean_variance
return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
File "/home/shubhamkapoor/motion-diffusion-model/diffusion/gaussian_diffusion.py", line 305, in p_mean_variance
model_output = model(x, self._scale_timesteps(t), **model_kwargs)
File "/home/shubhamkapoor/motion-diffusion-model/diffusion/respace.py", line 129, in call
return self.model(x, new_ts, **kwargs)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shubhamkapoor/motion-diffusion-model/model/cfg_sampler.py", line 29, in forward
out = self.model(x, timesteps, y)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shubhamkapoor/motion-diffusion-model/model/mdm.py", line 151, in forward
enc_text = self.encode_text(y['text'])
File "/home/shubhamkapoor/motion-diffusion-model/model/mdm.py", line 139, in encode_text
return self.clip_model.encode_text(texts).float()
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/clip/model.py", line 348, in encode_text
x = self.transformer(x)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/clip/model.py", line 203, in forward
return self.resblocks(x)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/clip/model.py", line 190, in forward
x = x + self.attention(self.ln_1(x))
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/clip/model.py", line 187, in attention
return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/nn/modules/activation.py", line 985, in forward
attn_mask=attn_mask)
File "/home/shubhamkapoor/miniconda3/envs/mdm/lib/python3.7/site-packages/torch/nn/functional.py", line 4294, in multi_head_attention_forward
attn_output_weights = torch.bmm(q, k.transpose(1, 2))
RuntimeError: "baddbmm__mkl" not implemented for 'Half'
(mdm) shubhamkapoor@shubhamkapoor-VirtualBox:~/motion-diffusion-model$
Hello,
When I run this command:
python -m sample.generate --model_path ./save/unconstrained/model000450000.pt --num_samples 10 --num_repetitions 3
I am getting this message:
FileNotFoundError: [Errno 2] No such file or directory: 'dataset/HumanAct12Poses/humanact12poses.pkl'
Where can I get that file, humanact12poses.pkl?
Thank you!
In your paper, you present 4 loss terms. However, in the code, lambda_rcxyz, lambda_vel, and lambda_fc are all zero.
Could you please share the loss-weight settings you used?
Thanks for this great work. One small question: in the paper you mention that at each step t, MDM predicts the clean sample x0 and diffuses it back to x_{t-1}. This seems different from DDPM. Can you point me to the code showing how you did this?
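Not the authors' code, but the standard DDPM identity behind 'predict x0, then diffuse back to x_{t-1}' looks like this (a sketch with illustrative names; the coefficients are the posterior mean and variance of q(x_{t-1} | x_t, x0), Eq. 7 of the DDPM paper):

```python
import torch

def p_sample_from_x0(model, x_t, t, betas):
    """One reverse step when the network predicts x0 instead of the noise."""
    alphas = 1.0 - betas
    acp = torch.cumprod(alphas, dim=0)               # alpha-bar_t
    acp_prev = torch.cat([torch.ones(1), acp[:-1]])  # alpha-bar_{t-1}
    x0 = model(x_t, t)                               # MDM predicts the clean sample
    mean = (betas[t] * acp_prev[t].sqrt() / (1 - acp[t])) * x0 \
         + ((1 - acp_prev[t]) * alphas[t].sqrt() / (1 - acp[t])) * x_t
    var = betas[t] * (1 - acp_prev[t]) / (1 - acp[t])
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + var.sqrt() * noise
```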
Hi!
First of all, congrats on your project. I was looking for a way to contact you, and this was the only option I found.
I've seen that motion is created based on a prompt.
My question is simple: is it possible to train the model with a video dataset of people moving?
From what I've seen in your video, there is only one person. Could it work with multiple people?
If it isn't possible, do you know of any project that works in this direction?
Thanks for taking the time.
Hello!
First, thank you for your excellent paper and code.
At present, I hope to train my own martial-arts dataset on the MDM model. This dataset has been fully processed into HumanML3D format and uses exactly the same hyperparameters, and I haven't made any changes to the training code yet.
Although the loss converges, the motion generated late in training (after about 20,000 steps) tends to stay still, with only global translation. The motion from early in training seems relatively normal.
Do you have any experience to share in dealing with this situation? Thank you very much.
Thanks for publishing the action2motion and motion-editing functions.
I tested them and got an error with render_mesh.py.
I use a venv made from environment.yml.
I guess the error is caused by the file name, and I am going to debug it.
If you have any clue how to resolve it, let me know.
visualize.render_mesh error on the result file of action2motion:
python -m visualize.render_mesh --input_path ./save/humanact12/samples_humanact12_000350000_seed10/sample01.mp4
Traceback (most recent call last):
File "/home/jovyan/anaconda3/envs/mdm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/jovyan/anaconda3/envs/mdm/lib/python3.7/runpy.py", line 85, in run_code
exec(code, run_globals)
File "/home/jovyan/dev/motion-diffusion-model/visualize/render_mesh.py", line 16, in
sample_i, rep_i = [int(e) for e in parsed_name.split('_')]
ValueError: not enough values to unpack (expected 2, got 1)
When I use the following command:
python -m sample.generate --model_path ./save/humanml_trans_enc_512/model000200000.pt --text_prompt "the person walked forward and is picking up his toolbox."
The result is
Hi @GuyTevet, thanks for releasing this and putting time into supporting it! I really appreciate the time it takes to help people with silly mistakes!
I'm running within Anaconda on Windows 10, using the instructions from the README, and I'm getting this warning when trying to generate:
Warning: was not able to load [unconstrained], using default value [False] instead.
I believe I've downloaded everything, although I did need to intervene in bash prepare/download_recognition_models.sh, as wget wasn't available.
Any suggestions?
@GuyTevet I am pretty sure that this is a result of being a noob, but I have been stuck for hours on what seems to be a simple issue. Thank you in advance for the help! 🙏
Where is the module "sample" located?
Here is the code and an image of the folder structure:
!python -m sample.generate --model_path "./save/humanml_trans_enc_512/model000200000.pt" --num_samples 10 --num_repetitions 3
Here is the error:
/usr/local/bin/python: Error while finding module specification for 'sample.generate' (ModuleNotFoundError: No module named 'sample')