
PyMAF's Introduction

🚩 [Update] The compatible EFT label files are now available here; they help to train a much stronger HMR baseline. See issue #58.

PyMAF [ICCV'21 Oral] & PyMAF-X [TPAMI'23]

This repository contains the code for the following papers:

PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images
Hongwen Zhang, Yating Tian, Yuxiang Zhang, Mengcheng Li, Liang An, Zhenan Sun, Yebin Liu

TPAMI, 2023

[Project Page] [Paper] [Code: smplx branch]

PyMAF: 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop
Hongwen Zhang*, Yating Tian*, Xinchi Zhou, Wanli Ouyang, Yebin Liu, Limin Wang, Zhenan Sun

* Equal contribution

ICCV, 2021 (Oral Paper)

[Project Page] [Paper] [Code: smpl branch]

Instructions for PyMAF

Preview of demo results:


Frame-by-frame reconstruction. Video clipped from here.


Frame-by-frame reconstruction. Video from here.

More results: Click Here

Requirements

  • Python 3.8

conda create --no-default-packages -n pymafx python=3.8
conda activate pymafx

  • PyTorch 1.9.0 with torchvision and pytorch3d

conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=11.1 -c pytorch -c conda-forge
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"

  • other packages listed in requirements.txt

pip install -r requirements.txt
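
After installing, a quick sanity check helps catch GPU/CUDA mismatches early (an illustrative snippet, not part of the repository; a capability/build mismatch shows up later as "no kernel image is available for execution on the device"):

import torch

# Verify the install before running the demos.
print(torch.__version__)         # expect 1.9.0
print(torch.version.cuda)        # expect 11.1
print(torch.cuda.is_available())
if torch.cuda.is_available():
    # e.g. (8, 6) for RTX 30xx cards; the wheel must include this arch.
    print(torch.cuda.get_device_capability(0))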

necessary files

mesh_downsampling.npz & DensePose UV data

  • Run the following script to fetch mesh_downsampling.npz & DensePose UV data from other repositories.
bash fetch_data.sh

SMPL model files

Fetch preprocessed data from SPIN.

Fetch final_fits data from SPIN. [important note: using EFT fits for training is much better. Compatible npz files are available here]

Download the pre-trained model and put it into the ./data/pretrained_model directory.

After collecting the above necessary files, the directory structure of ./data is expected as follows.

./data
├── dataset_extras
│   └── .npz files
├── J_regressor_extra.npy
├── J_regressor_h36m.npy
├── mesh_downsampling.npz
├── pretrained_model
│   └── PyMAF_model_checkpoint.pt
├── smpl
│   ├── SMPL_FEMALE.pkl
│   ├── SMPL_MALE.pkl
│   └── SMPL_NEUTRAL.pkl
├── smpl_mean_params.npz
├── final_fits
│   └── .npy files
└── UV_data
    ├── UV_Processed.mat
    └── UV_symmetry_transforms.mat

Demo

You can first give it a try on Google Colab using the notebook we have prepared; no local environment setup is needed: Open In Colab

Run the demo code.

For image input:

python3 demo.py --checkpoint=data/pretrained_model/PyMAF_model_checkpoint.pt --img_file examples/COCO_val2014_000000019667.jpg

For video input:

# video with single person
python3 demo.py --checkpoint=data/pretrained_model/PyMAF_model_checkpoint.pt --vid_file examples/dancer.mp4
# video with multiple persons
python3 demo.py --checkpoint=data/pretrained_model/PyMAF_model_checkpoint.pt --vid_file examples/flashmob.mp4

Evaluation

COCO Keypoint Localization

  1. Download the preprocessed data coco_2014_val.npz. Put it into the ./data/dataset_extras directory.

  2. Run the COCO evaluation code.

python3 eval_coco.py --checkpoint=data/pretrained_model/PyMAF_model_checkpoint.pt

3DPW

Run the evaluation code. Use --dataset to specify the evaluation dataset.

# Example usage:
# 3DPW
python3 eval.py --checkpoint=data/pretrained_model/PyMAF_model_checkpoint.pt --dataset=3dpw --log_freq=20

Training

🚀 [Important update]: Using EFT fits is recommended, as it can significantly improve the baseline. Compatible data is available here. See issue #58 for more training details using the EFT labels.

The following are the training details for the conference version of PyMAF.

To perform training, first collect the preprocessed files of the training datasets.

The preprocessed labels have the same format as SPIN and can be retrieved from here. Please refer to SPIN for more details about data preprocessing.
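
For orientation, each .npz annotation file can be inspected with NumPy. The key names below follow SPIN's convention and are an assumption here; the exact keys vary by dataset:

import numpy as np

data = np.load('data/dataset_extras/coco_2014_val.npz')
print(data.files)            # e.g. ['imgname', 'center', 'scale', 'part', 'openpose']

imgnames = data['imgname']   # image paths relative to the dataset root
centers = data['center']     # person bounding-box centers, shape (N, 2)
scales = data['scale']       # bounding-box scale (bbox size / 200 in the SPIN convention)
print(len(imgnames), 'samples; first image:', imgnames[0])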

PyMAF is trained on Human3.6M at the first stage and then trained on the mixture of both 2D and 3D datasets at the second stage. Example usage:

# training on Human3.6M at the first stage
CUDA_VISIBLE_DEVICES=0 python3 train.py --regressor pymaf_net --single_dataset --misc TRAIN.BATCH_SIZE 64
# training on the mixture of datasets at the second stage
CUDA_VISIBLE_DEVICES=0 python3 train.py --regressor pymaf_net --pretrained_checkpoint path/to/checkpoint_file.pt --misc TRAIN.BATCH_SIZE 64

Running the above commands will use Human3.6M or the mixed datasets for training, respectively. We can monitor the training process with TensorBoard pointed at the ./logs directory, as shown below.
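
For example, assuming TensorBoard is installed in the environment:

tensorboard --logdir ./logs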

Citation

If this work is helpful in your research, please cite the following papers.

@inproceedings{pymaf2021,
  title={PyMAF: 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop},
  author={Zhang, Hongwen and Tian, Yating and Zhou, Xinchi and Ouyang, Wanli and Liu, Yebin and Wang, Limin and Sun, Zhenan},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2021}
}

@article{pymafx2023,
  title={PyMAF-X: Towards Well-aligned Full-body Model Regression from Monocular Images},
  author={Zhang, Hongwen and Tian, Yating and Zhang, Yuxiang and Li, Mengcheng and An, Liang and Sun, Zhenan and Liu, Yebin},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023}
}

Acknowledgments

The code is developed upon the following projects. Many thanks to their contributions.

PyMAF's People

Contributors: hongwenzhang, mohamad-hasan-sohan-ajini, tdelort, tinatiansjz, yw0208

PyMAF's Issues

run demo.py for test

INFO:OpenGL.acceleratesupport:No OpenGL_accelerate module loaded: No module named 'OpenGL_accelerate'
Run demo for a single input image.
INFO:models.hmr:loaded resnet50 imagenet pretrained model
/home/lxz/soft/conda/envs/pymaf/lib/python3.6/site-packages/torch/cuda/__init__.py:106: UserWarning:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Traceback (most recent call last):
File "demo.py", line 569, in
run_image_demo(args)
File "demo.py", line 85, in run_image_demo
model.load_state_dict(checkpoint['model'], strict=True)
File "/home/lxz/soft/conda/envs/pymaf/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for PyMAF:
While copying the parameter named "regressor.0.smpl.J_regressor", whose dimensions in the model are torch.Size([24, 6890]) and whose dimensions in the checkpoint are torch.Size([24, 6890]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).
While copying the parameter named "regressor.0.smpl.posedirs", whose dimensions in the model are torch.Size([207, 20670]) and whose dimensions in the checkpoint are torch.Size([207, 20670]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).
While copying the parameter named "regressor.1.smpl.J_regressor", whose dimensions in the model are torch.Size([24, 6890]) and whose dimensions in the checkpoint are torch.Size([24, 6890]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).
While copying the parameter named "regressor.1.smpl.posedirs", whose dimensions in the model are torch.Size([207, 20670]) and whose dimensions in the checkpoint are torch.Size([207, 20670]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).
While copying the parameter named "regressor.2.smpl.J_regressor", whose dimensions in the model are torch.Size([24, 6890]) and whose dimensions in the checkpoint are torch.Size([24, 6890]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).
While copying the parameter named "regressor.2.smpl.posedirs", whose dimensions in the model are torch.Size([207, 20670]) and whose dimensions in the checkpoint are torch.Size([207, 20670]), an exception occurred : ('CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.',).

human36m GT SMPL parameter

You said "The ground truth SMPL parameters in Human3.6M are generated by applying MoSH [34] to the sparse 3D MoCap marker data, as done in Kanazawa et al."
I have the angle files and 3D joint positions from Human3.6M, but I don't know how to obtain SMPL parameters from the dataset.
Do you have code for this, or is there other data available for Human3.6M?

In basedataset.py

Why do we need self.keypoints = np.concatenate([keypoints_openpose, keypoints_gt], axis=1), i.e., concatenating the two lists? What is the ordering rule for the OpenPose data?
Thanks
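
For reference, in the SPIN-style convention that PyMAF inherits, the first 25 rows are OpenPose body joints and the following 24 are ground-truth joints mapped to a common layout, giving a single 49-joint array; a minimal sketch (the 25+24 split follows SPIN, the values are dummies):

import numpy as np

N = 4                                      # a small batch of samples
keypoints_openpose = np.zeros((N, 25, 3))  # (x, y, conf); conf = 0 when undetected
keypoints_gt = np.ones((N, 24, 3))         # ground-truth joints from the dataset
keypoints = np.concatenate([keypoints_openpose, keypoints_gt], axis=1)
assert keypoints.shape == (N, 49, 3)       # losses index either block by position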

run demo.py error

File "demo.py", line 97, in run_image_demo
renderer = PyRenderer(resolution=(constants.IMG_RES, constants.IMG_RES))
File "/home/chenys/matting/PyMAF/utils/renderer.py", line 65, in init
point_size=1.0
File "/usr/local/lib/python3.7/site-packages/pyrender-0.1.45-py3.7.egg/pyrender/offscreen.py", line 31, in init
self._create()
File "/usr/local/lib/python3.7/site-packages/pyrender-0.1.45-py3.7.egg/pyrender/offscreen.py", line 149, in _create
self._platform.init_context()
File "/usr/local/lib/python3.7/site-packages/pyrender-0.1.45-py3.7.egg/pyrender/platforms/egl.py", line 177, in init_context
assert eglInitialize(self._egl_display, major, minor)
File "/usr/local/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 402, in call
return self( *args, **named )
File "src/errorchecker.pyx", line 58, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError
OpenGL.error.GLError: GLError(
err = 12289,
baseOperation = eglInitialize,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7f698b96e0d0>,
c_long(0),
c_long(0),
),
result = 0

h36m mosh

Hi,

I'm trying to run the code on the h36m dataset. I noticed that when I do this, I get the error:

FileNotFoundError: [Errno 2] No such file or directory: 'data/dataset_extras/h36m_mosh_train.npz'

Which procedure should I use to generate the moshed data from the h36m dataset?

how to get 2d keypoints in image?

First, thanks for your great work. I have a few questions:

  1. Do you use weak-perspective projection or perspective projection? (See the sketch below for the weak-perspective case.)
  2. How do you get the 2D keypoints in the image?

Looking forward to your reply!
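
For reference on question 1: HMR/SPIN-style regressors such as PyMAF predict a weak-perspective camera (s, tx, ty), so the projected 2D keypoints depend on depth only through the scale; a minimal sketch (the function name and shapes are illustrative):

import torch

def weak_perspective_projection(points, scale, trans):
    # points: (B, N, 3) 3D joints/vertices; scale: (B, 1); trans: (B, 2).
    # 2D = s * (X_xy + t): depth enters only through the learned scale s.
    return scale[:, None] * (points[:, :, :2] + trans[:, None, :])

points = torch.randn(2, 6890, 3)   # e.g. SMPL vertices
s = torch.ones(2, 1)
t = torch.zeros(2, 2)
uv = weak_perspective_projection(points, s, t)   # (2, 6890, 2)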

checkpoint error

Traceback (most recent call last):
File "/root/Downloads/PyMAF/demo.py", line 569, in
run_image_demo(args)
File "/root/Downloads/PyMAF/demo.py", line 85, in run_image_demo
model.load_state_dict(checkpoint['model'], strict=False)
File "/root/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PyMAF:
size mismatch for regressor.0.smpl.shapedirs: copying a param with shape torch.Size([6890, 3, 10]) from checkpoint, the shape in current model is torch.Size([6890, 3, 300]).
size mismatch for regressor.1.smpl.shapedirs: copying a param with shape torch.Size([6890, 3, 10]) from checkpoint, the shape in current model is torch.Size([6890, 3, 300]).
size mismatch for regressor.2.smpl.shapedirs: copying a param with shape torch.Size([6890, 3, 10]) from checkpoint, the shape in current model is torch.Size([6890, 3, 300]).

About generating static_fits

Hi, I want to train on my own dataset, but I don't have the 3D ground truth or the corresponding initial SMPL parameters. How can I generate the static_fits and final_fits? What are these .npy files used for?
Thanks ;)

About MPJPE

Hi, I want to know which space the MPJPE is computed in: pixel space or real (metric) space?
Why is it multiplied by 1000 in the code, as in SPIN? What are the units, and are the numbers in your paper the values after multiplying by 1000?
Thank you ;)
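
For reference, MPJPE is computed in metric 3D space: joints are in meters, and the mean per-joint Euclidean error is multiplied by 1000 so the reported numbers (as in SPIN and the paper) are in millimeters; a minimal sketch:

import numpy as np

def mpjpe(pred_joints, gt_joints):
    # Both arrays are (N, J, 3) 3D joints in meters; the result is in mm.
    return 1000.0 * np.linalg.norm(pred_joints - gt_joints, axis=-1).mean()

pred = np.zeros((8, 14, 3))
gt = np.full((8, 14, 3), 0.05)   # a 5 cm offset on every axis
print(mpjpe(pred, gt))           # ~86.6 mm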

train code

Hi, I just want to train the model on the COCO dataset; what command should I use?
Thanks

Mesh-aligned Features

Hi,

You use the projected points to extract mesh-aligned features. Have you tried simply concatenating the projected image with the feature maps? Would that be better, since it retains all the features and lets the network learn how to combine them?

Thanks!
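
For context, the mesh-aligned features are point-wise samples of the spatial feature maps at the projected mesh vertices (rather than a dense concatenation); a minimal sketch of that sampling step with grid_sample, where the shapes are illustrative (431 is the coarsest downsampled SMPL mesh in GraphCMR/PyMAF-style pipelines) and this is not the repository's exact code:

import torch
import torch.nn.functional as F

B, C, H, W = 2, 256, 56, 56
feature_map = torch.randn(B, C, H, W)
points_2d = torch.rand(B, 431, 2) * 2 - 1   # projected vertices, normalized to [-1, 1]

grid = points_2d.unsqueeze(1)               # (B, 1, N, 2): a 1 x N sampling grid
sampled = F.grid_sample(feature_map, grid, align_corners=False)
point_feats = sampled.squeeze(2)            # (B, C, N) mesh-aligned features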

[Question] n_iter in Regressor.forward

Hi!

I have a question about the n_iter parameter of the Regressor class. Did you try bumping that number above 1, and what was the effect on the results?
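
For context, n_iter controls how many HMR-style iterative error feedback steps the regressor unrolls: each step predicts a residual update to the current parameter estimate. A minimal self-contained sketch (illustrative, not the repository's exact module; 85 = 3 camera + 72 pose + 10 shape parameters):

import torch
import torch.nn as nn

class IterativeRegressor(nn.Module):
    def __init__(self, feat_dim=2048, param_dim=85):
        super().__init__()
        # Predicts a residual update from [image feature, current estimate].
        self.fc = nn.Sequential(
            nn.Linear(feat_dim + param_dim, 1024), nn.ReLU(),
            nn.Linear(1024, param_dim),
        )
        self.init_param = nn.Parameter(torch.zeros(1, param_dim))

    def forward(self, feat, n_iter=3):
        param = self.init_param.expand(feat.shape[0], -1)
        for _ in range(n_iter):   # unrolled refinement steps
            param = param + self.fc(torch.cat([feat, param], dim=1))
        return param

reg = IterativeRegressor()
theta = reg(torch.randn(4, 2048), n_iter=3)   # (4, 85)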

Training on other datasets

I want to try training, but do not currently have access to the H3.6M dataset. Are there other datasets I can try training on (despite perhaps lower accuracy)? If so, what is the command/flag structure for running on other datasets?

Questions about training in remote server

Traceback (most recent call last):
File "train.py", line 69, in
main(None, ngpus_per_node, options)
File "train.py", line 37, in main
trainer = Trainer(options)
File "/root/ours/pymaf/core/base_trainer.py", line 28, in init
self.init_fn()
File "/root/ours/pymaf/core/trainer.py", line 56, in init_fn
self.smpl_render = PyRenderer(resolution=(constants.IMG_RES, constants.IMG_RES))
File "/root/ours/pymaf/utils/renderer.py", line 446, in init
point_size=1.0
File "/opt/conda/envs/mono/lib/python3.6/site-packages/pyrender/offscreen.py", line 31, in init
self._create()
File "/opt/conda/envs/mono/lib/python3.6/site-packages/pyrender/offscreen.py", line 149, in _create
self._platform.init_context()
File "/opt/conda/envs/mono/lib/python3.6/site-packages/pyrender/platforms/pyglet_platform.py", line 52, in init_context
width=1, height=1)
File "/opt/conda/envs/mono/lib/python3.6/site-packages/pyglet/window/xlib/init.py", line 165, in init
super(XlibWindow, self).init(*args, **kwargs)
File "/opt/conda/envs/mono/lib/python3.6/site-packages/pyglet/window/init.py", line 570, in init
display = pyglet.canvas.get_display()
File "/opt/conda/envs/mono/lib/python3.6/site-packages/pyglet/canvas/init.py", line 94, in get_display
return Display()
File "/opt/conda/envs/mono/lib/python3.6/site-packages/pyglet/canvas/xlib.py", line 123, in init
raise NoSuchDisplayException('Cannot connect to "%s"' % name)
pyglet.canvas.xlib.NoSuchDisplayException: Cannot connect to "None"

Have you ever encountered this problem? I don't have it when I train locally, but it happens when I train on the server.
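
A common workaround on headless servers (assuming the server run has no X display, which the NoSuchDisplayException suggests) is to select pyrender's EGL or OSMesa backend before pyrender is imported, so pyglet never tries to open a window:

import os

# Must run before pyrender/OpenGL are imported; use 'osmesa'
# instead of 'egl' if no GPU/EGL driver is available.
os.environ['PYOPENGL_PLATFORM'] = 'egl'

import pyrender  # now uses the offscreen EGL platform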

question about file ‘trainer.py’

In line 122:
self.valid_loader = DataLoader(
dataset=self.valid_ds,
batch_size=cfg.TEST.BATCH_SIZE,
shuffle=False,
num_workers=cfg.TRAIN.NUM_WORKERS,
pin_memory=cfg.TRAIN.PIN_MEMORY,
sampler=val_sampler
)
the batch_size of the validation data is cfg.TEST.BATCH_SIZE,

so the code in line 537
pbar = tqdm(desc='Eval', total=len(self.valid_ds) // cfg.TRAIN.BATCH_SIZE)

should be

pbar = tqdm(desc='Eval', total=len(self.valid_ds) // cfg.TEST.BATCH_SIZE)

right?

SOLVER config

Thank you for sharing this great work!

Could you please tell us what the SOLVER config is, especially the learning rate setting?

Thanks!

question about per channel pixel-noise augmentation strategy

Could I ask what problem the per-channel pixel-noise augmentation mainly addresses? My understanding is that it can improve the stability of the rotation angles output by the network? Does this augmentation strategy bring a clear gain in the results? Could you share some insight? Thanks.

does anyone run the demo successfully in pytorch1.7.0 or higher version?

win11 rtx3080

When I try the demo, I get this error:

Traceback (most recent call last):
File "demo.py", line 39, in <module>
from utils.renderer import OpenDRenderer, PyRenderer
File "E:\Users\ShamLich\Documents\GitHub\PyMAF\utils\renderer.py", line 30, in <module>
class WeakPerspectiveCamera(pyrender.Camera):
NameError: name 'pyrender' is not defined

But I have already installed pyrender, and if I change the code in 'renderer.py' from

try:
    import math
    import pyrender
    from pyrender.constants import RenderFlags
except:
    pass

to

import pyrender
try:
    import math
    from pyrender.constants import RenderFlags
except:
    pass

I get the following error instead; I don't know what is going on.

Traceback (most recent call last):
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\platform\egl.py", line 70, in EGL
mode=ctypes.RTLD_GLOBAL
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\platform\ctypesloader.py", line 45, in loadLibrary
return dllType( name, mode )
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\ctypes\__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "demo.py", line 39, in <module>
from utils.renderer import OpenDRenderer, PyRenderer
File "E:\Users\ShamLich\Documents\GitHub\PyMAF\utils\renderer.py", line 12, in <module>
import pyrender
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\pyrender\__init__.py", line 3, in <module>
from .light import Light, PointLight, DirectionalLight, SpotLight
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\pyrender\light.py", line 10, in <module>
from OpenGL.GL import *
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\GL\__init__.py", line 3, in <module>
from OpenGL import error as _error
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\error.py", line 12, in <module>
from OpenGL import platform, configflags
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\platform\__init__.py", line 35, in <module>
_load()
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\platform\__init__.py", line 32, in _load
plugin.install(globals())
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\platform\baseplatform.py", line 92, in install
namespace[ name ] = getattr(self,name,None)
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\platform\baseplatform.py", line 14, in __get__
value = self.fget( obj )
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\platform\egl.py", line 93, in GetCurrentContext
return self.EGL.eglGetCurrentContext
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\platform\baseplatform.py", line 14, in __get__
value = self.fget( obj )
File "D:\Users\ShamLich\anaconda3\envs\pymaf\lib\site-packages\OpenGL\platform\egl.py", line 73, in EGL
raise ImportError("Unable to load EGL library", *err.args)
ImportError: ('Unable to load EGL library', 22, 'The specified module could not be found.', None, 126, None, 'EGL', None)
(pymaf) PS E:\Users\ShamLich\Documents\GitHub\PyMAF> python utils/r.py
(pymaf) PS E:\Users\ShamLich\Documents\GitHub\PyMAF> python utils/r.py
Traceback (most recent call last):
File "utils/r.py", line 45, in <module>
class PyRenderer:
File "utils/r.py", line 83, in PyRenderer
def __call__(self, verts, img=np.zeros((224, 224, 3)), cam=np.array([1, 0, 0]),
NameError: name 'np' is not defined
(pymaf) PS E:\Users\ShamLich\Documents\GitHub\PyMAF> python utils/r.py
Traceback (most recent call last):
File "utils/r.py", line 10, in <module>
from models.smpl import get_smpl_faces
ModuleNotFoundError: No module named 'models'
(pymaf) PS E:\Users\ShamLich\Documents\GitHub\PyMAF> python utils/r.py
(pymaf) PS E:\Users\ShamLich\Documents\GitHub\PyMAF> python demo.py --checkpoint=data/pretrained_model/PyMAF_model_checkpoint.pt --img_file examples/COCO_val2014_000000019667.jpg
[running demo.py raises the same "Unable to load EGL library" ImportError and traceback as above]

Question about training PyMAF

Hi, Hongwen! Thanks for your great work!

I have two questions about training PyMAF.
1.

PyMAF is trained on Human3.6M at the first stage and then trained on the mixture of both 2D and 3D datasets at the second stage.

Why does PyMAF need to be trained on H36M first? And will PyMAF achieve performance comparable to the two-stage training method if it is trained only on the mixed datasets?
2.
In PyMAF, num_epochs is set to 200. Compared with other HPS methods, the training procedure is quite long. Is it possible to reduce the number of epochs to cut the training time while keeping the performance?

Single image and demo?

Hello. Thank you for sharing your great work.
I have run your demo code, but it looks like it only supports video?

I just want to render a single-person 3D mesh reconstruction from different viewpoints, like the example images below.
[example images: 0002_c1s1_000451_03]

How can I do this? Could you give me some advice?

Thanks in advance!.

About focal_length=5000

Thanks for your brilliant work!
I notice that you estimate the camera parameters using a function called "estimate_translation_np" (borrowed from SPIN). I am puzzled why you can set the same focal length (5000) for different datasets and different images.
Thanks again, and I hope for your reply.
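
For intuition: with a fixed focal length f and a square crop of size 224, the weak-perspective scale s predicted by the network can be converted to a full camera translation, and all of the focal-length dependence lands in the depth tz. Any fixed f therefore gives the same image-plane fit, just at a rescaled depth. A small sketch of the SPIN-style conversion (the function name is illustrative):

import numpy as np

def weak_to_full_translation(s, tx, ty, focal_length=5000.0, img_res=224):
    # By similar triangles, an object filling the crop at scale s sits at
    # depth tz = 2 * f / (img_res * s); changing f only rescales tz.
    tz = 2.0 * focal_length / (img_res * s)
    return np.array([tx, ty, tz])

print(weak_to_full_translation(0.9, 0.0, 0.1))   # [ 0.    0.1  49.60...]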

.npy files

Where should I get the "J_regressor_extra.npy", "J_regressor_h36m.npy" and "static_fits/*.npy" files?
Could you send me the files to the following e-mail address?
[email protected]
Thank you!

Unsuitable checkpoints

When I run demo.py I get an error:

Traceback (most recent call last):
File "demo.py", line 460, in
main(args)
File "demo.py", line 143, in main
model.load_state_dict(checkpoint['model'], strict=True)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 777, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for PyMAF:
size mismatch for regressor.0.smpl.shapedirs: copying a param with shape torch.Size([6890, 3, 10]) from checkpoint, the shape in current model is torch.Size([6890, 3, 300]).
size mismatch for regressor.1.smpl.shapedirs: copying a param with shape torch.Size([6890, 3, 10]) from checkpoint, the shape in current model is torch.Size([6890, 3, 300]).
size mismatch for regressor.2.smpl.shapedirs: copying a param with shape torch.Size([6890, 3, 10]) from checkpoint, the shape in current model is torch.Size([6890, 3, 300]).

Can you provide the correct checkpoints?

demo.py opengl errors

Hi,

Thank you for the project, it's very exciting! I'm trying to run the demo, but I am getting a weird opengl error. I tried use_opendr, but the output was just two images: one was the input image, and the other was just a black square. I presume this means opendr is not happy either, even though opendr doesn't throw an error.

Looking through the other issues, I have tried the fix here; this did not change the output AFAICT.

$ python3 demo.py --checkpoint=data/pretrained_model/PyMAF_model_checkpoint.pt --img_file examples/COCO_val2014_000000019667.jpg
INFO:OpenGL.acceleratesupport:OpenGL_accelerate module loaded
INFO:OpenGL.arrays.arraydatatype:Using accelerated ArrayDatatype
Run demo for a single input image.
INFO:models.hmr:loaded resnet50 imagenet pretrained model
Traceback (most recent call last):
  File "demo.py", line 569, in <module>
    run_image_demo(args)
  File "demo.py", line 126, in run_image_demo
    mesh_filename=save_mesh_path
  File "/home/aespielberg/ResearchCode/pinscreen/zozo/PyMAF/utils/renderer.py", line 141, in __call__
    rgb, _ = self.renderer.render(self.scene, flags=render_flags)
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/pyrender/offscreen.py", line 102, in render
    retval = self._renderer.render(scene, flags, seg_node_map)
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/pyrender/renderer.py", line 125, in render
    self._update_context(scene, flags)
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/pyrender/renderer.py", line 737, in _update_context
    p._add_to_context()
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/pyrender/primitive.py", line 324, in _add_to_context
    self._vaid = glGenVertexArrays(1)
  File "src/latebind.pyx", line 39, in OpenGL_accelerate.latebind.LateBind.__call__
  File "src/wrapper.pyx", line 314, in OpenGL_accelerate.wrapper.Wrapper.__call__
  File "src/wrapper.pyx", line 311, in OpenGL_accelerate.wrapper.Wrapper.__call__
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 401, in __call__
    if self.load():
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 390, in load
    error_checker = self.error_checker,
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 148, in constructFunction
    if (not is_core) and not self.checkExtension( extension ):
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 270, in checkExtension
    result = extensions.ExtensionQuerier.hasExtension( name )
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/OpenGL/extensions.py", line 98, in hasExtension
    result = registered( specifier )
  File "/home/aespielberg/anaconda3/envs/pymaf/lib/python3.6/site-packages/OpenGL/extensions.py", line 105, in __call__
    if not specifier.startswith( self.prefix ):
TypeError: ('startswith first arg must be bytes or a tuple of bytes, not str', (1, array([0], dtype=uint32)))

import neural_renderer

ImportError: /root/anaconda3/envs/pymaf/lib/python3.6/site-packages/neural_renderer/cuda/load_textures.cpython-36m-x86_64-linux-gnu.so: undefined symbol: __cudaPopCallConfiguration
With CUDA 9.0 and torch 1.1.0, why is this package not compatible with this version when running the demo on an input photo?

fail to import Renderer. How to solve this problem?

fail to import Renderer.
INFO:utils.train_utils:log name: logs/pymaf_res50_mix/pymaf_res50_mix_as_lp3_mlp256-128-64-5_Dec28-10-52-18-ZWK
INFO:models.hmr:loaded resnet50 imagenet pretrained model
INFO:datasets.base_dataset:len of h36m: 312188
INFO:datasets.base_dataset:len of lsp-orig: 1000
INFO:datasets.base_dataset:len of mpii: 14810
INFO:datasets.base_dataset:len of lspet: 9428
INFO:datasets.base_dataset:len of coco: 28344
INFO:datasets.base_dataset:len of mpi-inf-3dhp: 96507
INFO:datasets.base_dataset:len of h36m-p2-mosh: 26859
No renderer for visualization.
Traceback (most recent call last):
File "train.py", line 69, in
main(None, ngpus_per_node, options)
File "train.py", line 37, in main
trainer = Trainer(options)
File "/user3/lwx1055260/ID2462_PyMAF_for_pytorch/core/base_trainer.py", line 28, in init
self.init_fn()
File "/user3/lwx1055260/ID2462_PyMAF_for_pytorch/core/trainer.py", line 143, in init_fn
self.iuv_maker = IUV_Renderer(output_size=cfg.MODEL.PyMAF.DP_HEATMAP_SIZE)
NameError: name 'IUV_Renderer' is not defined

demo pose and shape bug

Hi,
Thank you for sharing this great work on human pose and shape estimation. I ran the project on my own datasets and used the output "pose" and "shape" parameters, but found that they produce a weird 3D mesh. I found the bug in demo.py; lines 229-230 should be changed from
pred_pose.append(output['theta'][:, 3:75].reshape(batch_size * seqlen, -1))
pred_betas.append(output['theta'][:, 75:].reshape(batch_size * seqlen, -1))
to:
pred_pose.append(output['theta'][:, 13:85].reshape(batch_size * seqlen, -1))
pred_betas.append(output['theta'][:, 3:13].reshape(batch_size * seqlen, -1))

Thanks
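
If the fix above is right, PyMAF's 85-D theta is laid out as [camera (3), betas (10), pose (72)] rather than VIBE's [camera (3), pose (72), betas (10)]; a small slicing sketch under that assumed layout (verify against the regressor code in your checkout before relying on it):

import torch

theta = torch.randn(2, 85)    # stand-in for output['theta']
pred_cam = theta[:, :3]       # weak-perspective camera (s, tx, ty)
pred_betas = theta[:, 3:13]   # 10 SMPL shape coefficients
pred_pose = theta[:, 13:85]   # SMPL pose: 24 joints x 3 axis-angle values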

how to get final_fits/xxx.npz

Hello, I would like to know how you obtained the final_fits/xxx.npz files required during training.
I want to train the model on my own dataset.
Thanks

train error about h36m

Hello, I ran into the following issue and failed to resolve it; I need your help, thanks.
INFO:datasets.base_dataset:len of h36m-p2-mosh: 26859
0%| | 0/200 [00:23<?, ?it/s]
Traceback (most recent call last):_Jan05-08-53-12-BCm Epoch 0: 0%| | 1/4877 [00:12<16:54:58, 12.49s/it]
File "train.py", line 69, in
main(None, ngpus_per_node, options)
File "train.py", line 38, in main
trainer.fit()
File "/home/data1/zxt/PyMAF-master/core/trainer.py", line 502, in fit
self.train(epoch)
File "/home/data1/zxt/PyMAF-master/core/trainer.py", line 278, in train
out = self.train_step(batch)
File "/home/data1/zxt/PyMAF-master/core/trainer.py", line 324, in train_step
gt_out = self.smpl(betas=gt_betas, body_pose=gt_pose[:,3:], global_orient=gt_pose[:,:3])
File "/home/data2/anaconda1/envs/pymaf/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/data1/zxt/PyMAF-master/models/smpl.py", line 34, in forward
smpl_output = super().forward(*args, **kwargs)
File "/home/data2/anaconda1/envs/pymaf/lib/python3.6/site-packages/smplx/body_models.py", line 376, in forward
self.lbs_weights, pose2rot=pose2rot, dtype=self.dtype)
File "/home/data2/anaconda1/envs/pymaf/lib/python3.6/site-packages/smplx/lbs.py", line 179, in lbs
v_shaped = v_template + blend_shapes(betas, shapedirs)
File "/home/data2/anaconda1/envs/pymaf/lib/python3.6/site-packages/smplx/lbs.py", line 265, in blend_shapes
blend_shape = torch.einsum('bl,mkl->bmk', [betas, shape_disps])
File "/home/data2/anaconda1/envs/pymaf/lib/python3.6/site-packages/torch/functional.py", line 211, in einsum
return torch._C._VariableFunctions.einsum(equation, operands)
RuntimeError: size of dimension does not match previous size, operand 1, dim 2
pymaf_res50_as_lp3_mlp256-128-64-5_Jan05-08-53-12-BCm Epoch 0: 0%|

error while render/visualize

First error occurred while training:
Traceback (most recent call last):r15-19-58-15-iXP Epoch 0: 41%|████ | 1000/2438 [31:24<29:48, 1.24s/it]
File "train.py", line 69, in
main(None, ngpus_per_node, options)
File "train.py", line 38, in main
trainer.fit()
File "/home/gqj/PyMAF/core/trainer.py", line 502, in fit
self.train(epoch)
File "/home/gqj/PyMAF/core/trainer.py", line 294, in train
self.visualize(self.step_count, batch, 'train', **out)
File "/home/gqj/miniconda3/envs/gqj/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/gqj/PyMAF/core/trainer.py", line 676, in visualize
render_imgs.append(self.renderer(
TypeError: __call__() got an unexpected keyword argument 'image'
I fixed it by modifying trainer.py:

                    image=img_vis,  #704

Next problem:
Traceback (most recent call last):_Apr15-21-13-13-jDp Epoch 0: 41%|████ | 1000/2438 [13:46<17:42, 1.35it/s]
File "train.py", line 69, in
main(None, ngpus_per_node, options)
File "train.py", line 38, in main
trainer.fit()
File "/home/gqj/PyMAF/core/trainer.py", line 502, in fit
self.train(epoch)
File "/home/gqj/PyMAF/core/trainer.py", line 294, in train
self.visualize(self.step_count, batch, 'train', **out)
File "/home/gqj/miniconda3/envs/gqj/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/gqj/PyMAF/core/trainer.py", line 676, in visualize
render_imgs.append(self.renderer(
File "/home/gqj/PyMAF/utils/renderer.py", line 211, in call
if faces == None:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I fixed it by modifying renderer.py:

        try:                       # line 211
            faces.shape
        except:
            faces = self.faces

Another problem I cannot solve:
Traceback (most recent call last):
File "train.py", line 69, in <module>
main(None, ngpus_per_node, options)
File "train.py", line 38, in main
trainer.fit()
File "/home/gqj/PyMAF/core/trainer.py", line 502, in fit
self.train(epoch)
File "/home/gqj/PyMAF/core/trainer.py", line 294, in train
self.visualize(self.step_count, batch, 'train', **out)
File "/home/gqj/miniconda3/envs/gqj/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/gqj/PyMAF/core/trainer.py", line 676, in visualize
render_imgs.append(self.renderer(
File "/home/gqj/PyMAF/utils/renderer.py", line 276, in call
rendered_image = rn.r
File "/home/gqj/miniconda3/envs/gqj/lib/python3.8/site-packages/chumpy/ch.py", line 594, in r
self._call_on_changed()
File "/home/gqj/miniconda3/envs/gqj/lib/python3.8/site-packages/chumpy/ch.py", line 589, in _call_on_changed
self.on_changed(self._dirty_vars)
File "/home/gqj/miniconda3/envs/gqj/lib/python3.8/site-packages/opendr-0.73-py3.8.egg/opendr/renderer.py", line 1082, in on_changed
self.vbo_verts_face.set_array(np.array(self.verts_by_face).astype(np.float32))
AttributeError: 'ColoredRenderer' object has no attribute 'vbo_verts_face'
pymaf_res50_as_lp3_mlp256-128-64-5_Apr15-21-31-20-AfG Epoch 0: 41%|████ | 1000/2438 [11:21<16:20, 1.47it/s]

Could you please check out this bug?

Confirmation about necessary files

Does 'Collect SMPL model files from https://smpl.is.tue.mpg.de and UP. Rename model files' mean the following?

download SMPL_Python_v1.1.0.zip
basicmodel_f_lbs_10_207_0_v1.1.0.pkl > SMPL_FEMALE.pkl
basicmodel_m_lbs_10_207_0_v1.1.0.pkl > SMPL_MALE.pkl

download basicModel_neutral_lbs_10_207_0_v1.0.0.pkl > SMPL_NEUTRAL.pkl

License

Hi,

This is very interesting work. Can you please add a license to the library?

Thanks!

TypeError: __call__() got an unexpected keyword argument 'image'

Thanks for your brilliant work!
But when I train at the first stage, I get this error:
Traceback (most recent call last):
File "/media/data1/mypath/PyMAF/train.py", line 69, in <module>
main(None, ngpus_per_node, options)
File "/media/data1/mypath/PyMAF/train.py", line 38, in main
trainer.fit()
File "/media/data1/mypath/PyMAF/core/trainer.py", line 502, in fit
self.train(epoch)
File "/media/data1/mypath/PyMAF/core/trainer.py", line 294, in train
self.visualize(self.step_count, batch, 'train', **out)
File "/media/data1/mypath/anaconda3/envs/pymaf/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/media/data1/mypath/PyMAF/core/trainer.py", line 681, in visualize
addlight=True
TypeError: __call__() got an unexpected keyword argument 'image'
pymaf_res50_as_lp3_mlp256-128-64-5_May06-19-26-01-asJ Epoch 0: 0%| | 10/4877 [

The original cfg.TRAIN_VIS_ITER_FERQ = 1000; when I set cfg.TRAIN_VIS_ITER_FERQ = 10, I got the same error.
Maybe there is no graphical interface on the remote server?
And my env:
Ubuntu20
python 3.6.10
torch 1.8.0+cu11.1

run demo.py

Traceback (most recent call last):
File "demo.py", line 572, in
run_video_demo(args)
File "demo.py", line 395, in run_video_demo
renderer = PyRenderer(resolution=(orig_width, orig_height))
File "/root/guoshsh/PyMAF-master/utils/renderer.py", line 65, in init
point_size=1.0
File "/root/softwares/anaconda3/envs/pymaf/lib/python3.6/site-packages/pyrender/offscreen.py", line 31, in init
self._create()
File "/root/softwares/anaconda3/envs/pymaf/lib/python3.6/site-packages/pyrender/offscreen.py", line 149, in _create
self._platform.init_context()
File "/root/softwares/anaconda3/envs/pymaf/lib/python3.6/site-packages/pyrender/platforms/egl.py", line 188, in init_context
EGL_NO_CONTEXT, context_attributes
File "/root/softwares/anaconda3/envs/pymaf/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 402, in call
return self( *args, **named )
File "src/errorchecker.pyx", line 58, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError
OpenGL.error.GLError: GLError(
err = 12297,
baseOperation = eglCreateContext,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7f4f4ffb2048>,
<OpenGL._opaque.EGLConfig_pointer object at 0x7f4f4ffb2e18>,
<OpenGL._opaque.EGLContext_pointer object at 0x7f4f36ac1400>,
<OpenGL.arrays.lists.c_int_Array_7 object at 0x7f4f36aba6a8>,
),
result = <OpenGL._opaque.EGLContext_pointer object at 0x7f4f50097400>
)

Hi, when I run demo.py I get this problem, and I don't know how to fix it.

About H3.6M dataset

Your core/path_config.py contains:

[10] H36M_ROOT = join(expanduser('~'), 'Datasets/human/h36m/c2f_vol')
[32-34] 'h36m-p2-mosh': join(DATASET_NPZ_PATH, 'h36m_mosh_valid_p2.npz'),
[45] 'h36m': join(DATASET_NPZ_PATH, 'h36m_mosh_train.npz'),
[58-61] 'h36m': H36M_ROOT,
'h36m-p1': H36M_ROOT,
'h36m-p2': H36M_ROOT,
'h36m-p2-mosh': H36M_ROOT,

How can I make the preprocessed H3.6M dataset?
