dmmr's Introduction

[3DV2021] Dynamic Multi-Person Mesh Recovery From Uncalibrated Multi-View Cameras (DMMR)

The code for 3DV 2021 paper "Dynamic Multi-Person Mesh Recovery From Uncalibrated Multi-View Cameras"
Buzhen Huang, Yuan Shu, Tianshu Zhang, Yangang Wang
[Paper] [Video]

figure

figure

Dependencies

Windows or Linux, Python 3.7

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt

Getting Started

Step 1:
Download the official SMPL model from the SMPLify website and put it in models/smpl (see models/smpl/readme.txt).

Step 2:
Download the test data and the trained motion prior from Google Drive or Baidu Netdisk (extraction code [jomn]) and put them in data.

Step 3:
Run

python main.py --config cfg_files/fit_smpl.yaml

You can visualize the motions and cameras during optimization with the command:

python main.py --config cfg_files/fit_smpl.yaml --visualize true

The code can also be used for motion capture with known cameras:

python main.py --config cfg_files/fit_smpl.yaml --opt_cam false

Results

The fitted results will be saved in output.
You can visualize the estimated extrinsic camera parameters by running:

python viz_cameras.py

figure
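For reference, camera positions can also be inspected without the repo's viz_cameras.py. The sketch below is a hypothetical standalone helper, assuming the common extrinsic convention x_cam = R · x_world + t (whether the saved extrinsics use exactly this convention is an assumption):

```python
import numpy as np

def camera_center(R, t):
    """World-space camera center for extrinsics x_cam = R @ x_world + t.

    Setting x_cam = 0 and solving gives C = -R^T t.
    """
    return -R.T @ t

# Dummy extrinsics: identity rotation, translation (1, 2, 3).
R = np.eye(3)
t = np.array([1.0, 2.0, 3.0])
print(camera_center(R, t))  # -> [-1. -2. -3.]
```

The returned centers can then be scatter-plotted (e.g. with matplotlib) to sanity-check the estimated camera ring.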

Citation

If you find this code useful for your research, please consider citing the paper.

@article{huang2023simultaneously,
  title={Simultaneously Recovering Multi-Person Meshes and Multi-View Cameras with Human Semantics},
  author={Huang, Buzhen and Ju, Jingyi and Shu, Yuan and Wang, Yangang},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2023},
  publisher={IEEE}
}
@inproceedings{huang2021dynamic,
  title={Dynamic Multi-Person Mesh Recovery From Uncalibrated Multi-View Cameras},
  author={Huang, Buzhen and Shu, Yuan and Zhang, Tianshu and Wang, Yangang},
  booktitle={3DV},
  year={2021}
}

Acknowledgments

Some of the code is based on the following works. We gratefully appreciate their impact on our work.
SMPLify-x
SPIN
EasyMocap
MvSMPLfitting

dmmr's People

Contributors

boycehbz


dmmr's Issues

Pose JSON version

In the 2D pose JSON file, my version is 'alphapose v0.3', but yours says '1.1'. My AlphaPose code is the latest on GitHub.

viz_cameras shows nothing

I ran "python viz_cameras.py", but nothing is shown. I found that the output folder does not contain output\cameras.

How to obtain correct keypoints.json data

Hello, when processing the Shelf dataset, the first image 00300.jpg shows only two people, and when I run detection and tracking with your forked AlphaPose-boycehbz, the resulting 00300_keypoints.json also contains only two "pose_keypoints_2d" entries.
Screenshot from 2022-09-02 13-54-21
If I set num_people to 2 in fit_smpl.yaml, only two person models are shown; if I set num_people: 4, the program fails to run.
Your 00300_keypoints.json contains four "pose_keypoints_2d" entries (two with values, two null).
Screenshot from 2022-09-02 14-01-03
How did you modify AlphaPose to produce this data?

"cannot convert float NaN to integer"

When fitting my own data, I hit the following error at the line loss_dict['reproj'] += int(joint_loss.data):
ValueError: cannot convert float NaN to integer
What aspect of my data could be causing the NaN?
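One way to locate the problem is to guard the cast and inspect the offending frame. The sketch below is hypothetical: the names loss_dict and joint_loss follow this issue's snippet, not necessarily the repo's actual code, and the usual root causes (zero-confidence 2D joints or a degenerate projection upstream) are assumptions to check in your data:

```python
import math

def accumulate_reproj_loss(loss_dict, joint_loss_value):
    """Add a reprojection loss term, skipping NaN values.

    int(float('nan')) raises ValueError, so check first; a NaN here
    typically points at invalid 2D keypoints or a degenerate projection.
    """
    if math.isnan(joint_loss_value):
        return loss_dict  # skip (or log) the bad frame instead of crashing
    loss_dict['reproj'] = loss_dict.get('reproj', 0) + int(joint_loss_value)
    return loss_dict
```

Once the crash is avoided, logging which frame or view produced the NaN usually reveals the bad input.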

RuntimeError: einsum(): subscript l has size 300 for operand 1 which does not broadcast with previously seen size 10

/home/pbc/miniconda3/envs/zxh/lib/python3.8/site-packages/torch/utils/_contextlib.py:125: UserWarning: Decorating classes is deprecated and will be disabled in future versions. You should only decorate functions or methods. To preserve the current behavior of class decoration, you can directly decorate the __init__ method and nothing else.
  warnings.warn("Decorating classes is deprecated and will be disabled in "
Processing: doubleB/Camera00/00001.jpg
load pretrain parameters from data/motionprior_hp.pkl
/home/pbc/miniconda3/envs/zxh/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/home/pbc/miniconda3/envs/zxh/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=ResNet50_Weights.IMAGENET1K_V1. You can also use weights=ResNet50_Weights.DEFAULT to get the most up-to-date weights.
  warnings.warn(msg)
betas shape: torch.Size([1, 10])
shape_disps shape: torch.Size([6890, 3, 300])
Traceback (most recent call last):
  File "main.py", line 56, in <module>
    main(**args)
  File "main.py", line 33, in main
    setting = load_camera(data, setting, **args)
  File "/home/pbc/project/inference/DMMR/DMMR-main/core/utils/module_utils.py", line 694, in load_camera
    extris = extris_est(spin, data, data_folder, intris)
  File "/home/pbc/project/inference/DMMR/DMMR-main/core/utils/module_utils.py", line 568, in extris_est
    output, vert = spin(norm_img)
  File "/home/pbc/miniconda3/envs/zxh/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/pbc/project/inference/DMMR/DMMR-main/core/SPIN/spin.py", line 33, in forward
    pred_output, verts = self.smpl(betas=pred_betas, body_pose=pred_rotmat[:,1:], global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
  File "/home/pbc/miniconda3/envs/zxh/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/pbc/project/inference/DMMR/DMMR-main/core/SPIN/smpl.py", line 19, in forward
    smpl_output = super(SMPL, self).forward(*args, **kwargs)
  File "/home/pbc/project/inference/DMMR/DMMR-main/core/smplx/body_models.py", line 373, in forward
    vertices, joints = lbs(betas, full_pose, self.v_template,
  File "/home/pbc/project/inference/DMMR/DMMR-main/core/smplx/lbs.py", line 179, in lbs
    v_shaped = v_template + blend_shapes(betas, shapedirs)
  File "/home/pbc/project/inference/DMMR/DMMR-main/core/smplx/lbs.py", line 268, in blend_shapes
    blend_shape = torch.einsum('bl,mkl->bmk', [betas, shape_disps])
  File "/home/pbc/miniconda3/envs/zxh/lib/python3.8/site-packages/torch/functional.py", line 373, in einsum
    return einsum(equation, *_operands)
  File "/home/pbc/miniconda3/envs/zxh/lib/python3.8/site-packages/torch/functional.py", line 378, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): subscript l has size 300 for operand 1 which does not broadcast with previously seen size 10


I am glad to see this exciting project, but I got this error when running it. My SMPL model file name is correct, but its dimensions are wrong. How should I solve it? Thank you!
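The mismatch in the traceback (betas has 10 entries, shape_disps has 300) suggests the downloaded SMPL .pkl ships all 300 shape blend shapes, while the network predicts only 10 betas. One possible workaround, sketched below with dummy tensors (whether the repo loads shapedirs in a place where this truncation fits is an assumption), is to keep only the first 10 components:

```python
import numpy as np

def truncate_shapedirs(shapedirs, num_betas=10):
    """Keep only the first num_betas shape blend shapes.

    shapedirs has shape (6890, 3, K) with K >= num_betas; after
    truncation, einsum('bl,mkl->bmk', betas, shapedirs) broadcasts
    against a (batch, num_betas) betas tensor.
    """
    assert shapedirs.shape[-1] >= num_betas
    return shapedirs[:, :, :num_betas]

# Dummy tensors shaped like the traceback:
shape_disps = np.zeros((6890, 3, 300))
betas = np.zeros((1, 10))
blend = np.einsum('bl,mkl->bmk', betas, truncate_shapedirs(shape_disps))
print(blend.shape)  # (1, 6890, 3)
```

Alternatively, using the 10-component SMPL model distributed with SMPLify (as the readme's Step 1 suggests) avoids the mismatch entirely.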

Could you share your write_json function?

I found that the main difference between the Shelf files I get from AlphaPose and the Shelf data you provide is:

  1. In your keypoints files, some people's pose_keypoints_2d is set to null. How do I know which person's pose_keypoints_2d should be null, i.e., how is each person's order defined? For example, in 00300_keypoints.json I see that among the four people, 0 and 2 have data while the joints of 1 and 3 are null. How do you decide which person is 0 or 1? Is it assigned by position order?
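The repo's actual write_json is not shown in this thread, but the described format (a fixed number of people per frame, with null keypoints for identities not seen in that view) can be sketched as follows. The "people"/"pose_keypoints_2d" keys follow the OpenPose-style convention, and the persistent per-track index is an assumption about how the tracker assigns identity order:

```python
import json

def write_keypoints_json(path, tracked, num_people):
    """Write one frame's keypoints with a fixed person count.

    tracked maps a persistent track id (0..num_people-1) to a flat
    [x0, y0, c0, x1, y1, c1, ...] list. Track ids missing in this
    frame are written as null, so every frame has exactly num_people
    entries in a stable identity order.
    """
    people = [{"pose_keypoints_2d": tracked.get(pid)}  # None -> JSON null
              for pid in range(num_people)]
    with open(path, "w") as f:
        json.dump({"people": people}, f)
```

With this scheme, person 0 stays person 0 across frames because the slot index is the track id, not the detection order within a frame.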

About the Track3D tracker

Hi, I see a Track3D tracker in the code, but its auto_track method is never called. How can I track multiple people in the scene?

RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]

Hello, when I run the code I get the error above. Could you help me understand how to fix this bug?
Thanks a lot!

here is the error:
Traceback (most recent call last):
  File "main.py", line 56, in <module>
    main(**args)
  File "main.py", line 33, in main
    setting = load_camera(data, setting, **args)
  File "C:\NVR\MeshRecovery\mvsmpl\DMMR-main\core\utils\module_utils.py", line 694, in load_camera
    extris = extris_est(spin, data, data_folder, intris)
  File "C:\NVR\MeshRecovery\mvsmpl\DMMR-main\core\utils\module_utils.py", line 570, in extris_est
    output, vert = spin(norm_img)
  File "C:\Users\black\anaconda3\envs\dmmr\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\NVR\MeshRecovery\mvsmpl\DMMR-main\core\SPIN\spin.py", line 33, in forward
    pred_output, verts = self.smpl(betas=pred_betas, body_pose=pred_rotmat[:,1:], global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
  File "C:\Users\black\anaconda3\envs\dmmr\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\NVR\MeshRecovery\mvsmpl\DMMR-main\core\SPIN\smpl.py", line 19, in forward
    smpl_output = super(SMPL, self).forward(*args, **kwargs)
  File "C:\NVR\MeshRecovery\mvsmpl\DMMR-main\core\smplx\body_models.py", line 376, in forward
    self.lbs_weights, pose2rot=pose2rot, dtype=self.dtype)
  File "C:\NVR\MeshRecovery\mvsmpl\DMMR-main\core\smplx\lbs.py", line 179, in lbs
    v_shaped = v_template + blend_shapes(betas, shapedirs)
  File "C:\NVR\MeshRecovery\mvsmpl\DMMR-main\core\smplx\lbs.py", line 268, in blend_shapes
    blend_shape = torch.einsum("bl,mkl->bmk", betas, shape_disps)
  File "C:\Users\black\anaconda3\envs\dmmr\lib\site-packages\torch\functional.py", line 327, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [1, 10]->[1, 1, 1, 10] [6890, 3, 300]->[1, 6890, 3, 300]

Experiments on MHHI datasets

Hello, thanks for sharing your code. I have two questions about the experiments on the MHHI dataset:

  1. The Sup. Mat. can't be found, and I want to know the details of this dataset.

     "The details of the datasets that are used for training and testing can be found in the Sup. Mat."

  2. How do you select the paired 3D vertices of the tracked markers?

     "The numbers are the mean distance with standard deviation between the tracked 38 markers and its paired 3D vertices in mm."

Bright markers on the subjects in the test data

Hello, in the images of the test data you provide, one person wears many bright dots that look like a wearable device. Does the code use any data captured by those devices?
