videoto3dposeandbvh's Introduction

Video to 3DPose and Bvh motion file

This project integrates several existing projects, including VideoPose3D, video-to-pose3D, video2bvh, AlphaPose, Higher-HRNet-Human-Pose-Estimation, and OpenPose. Thanks to all of the projects mentioned above.

The project extracts 2D joint keypoints from a video using AlphaPose, HRNet, and similar detectors, then lifts the 2D points to 3D joint positions with VideoPose3D. Finally, the 3D joint positions are converted to a BVH motion file.

Environment

  • Windows 10
  • Anaconda
  • Python > 3.6

Dependencies

You can refer to the project dependencies of video-to-pose3D for setup.

The list below is taken from video-to-pose3D, with my modifications to fix some bugs.

  • Packages
    • Pytorch > 1.1.0 (I use PyTorch 1.1.0 with GPU support)
    • torchsample
    • ffmpeg (note: you must copy ffmpeg.exe into the directory where Python is installed)
    • tqdm
    • pillow
    • scipy
    • pandas
    • h5py
    • visdom
    • nibabel
    • opencv-python (install with pip)
    • matplotlib
  • 2D Joint detectors
    • Alphapose (Recommended)
      • Download duc_se.pth from (Google Drive | Baidu pan), place to ./joints_detectors/Alphapose/models/sppe
      • Download yolov3-spp.weights from (Google Drive | Baidu pan), place to ./joints_detectors/Alphapose/models/yolo
    • HR-Net (poor 3D joint performance in my testing environment)
      • Download pose_hrnet* from Google Drive, place to ./joints_detectors/hrnet/models/pytorch/pose_coco/
      • Download yolov3.weights from here, place to ./joints_detectors/hrnet/lib/detector/yolo
  • 3D Joint detectors
    • Download pretrained_h36m_detectron_coco.bin from here, place it into the ./checkpoint folder
  • 2D Pose trackers (Optional)
    • PoseFlow (Recommended): no extra dependencies
    • LightTrack (poor 2D tracking performance in my testing environment): see the original README, and perform the same getting-started steps in ./pose_trackers/lighttrack

How to Use it

Please place your video in .\outputs\inputvideo, and set its path in videopose.py, like this:

inference_video('outputs/inputvideo/kunkun_cut.mp4', 'alpha_pose')

After a few minutes, you can find the output video in the \outputs\outputvideo directory and the BVH file in the \outputs\outputvideo\alpha_pose_kunkun_cut\bvh directory.

videoto3dposeandbvh's People

Contributors

dependabot[bot], hw140701


videoto3dposeandbvh's Issues

Computing the direction cosine matrix

Before computing the direction cosine matrix, how do you compute each joint's x_dir, y_dir, and z_dir, and how do you define the rotation order?
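The question above can be illustrated with a minimal sketch (my assumption about the general approach, not necessarily this project's exact scheme): take the bone direction as the local y-axis, complete an orthonormal basis with cross products against a reference direction, and stack the axes as rows to form the direction cosine matrix. The rotation order then just names the Euler convention (e.g. ZXY, matching the BVH CHANNELS line) used to decompose that matrix.

```python
import numpy as np

# Hypothetical helper: build a joint's local frame from its bone
# direction (used as y_dir) and a reference vector, e.g. taken from a
# neighboring bone; x_dir and z_dir follow from cross products.
def joint_frame(bone_dir, ref):
    y = bone_dir / np.linalg.norm(bone_dir)
    x = np.cross(y, ref)
    x /= np.linalg.norm(x)
    z = np.cross(x, y)
    # Rows are x_dir, y_dir, z_dir: a direction cosine matrix.
    return np.vstack([x, y, z])

dcm = joint_frame(np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(np.allclose(dcm @ dcm.T, np.eye(3)))  # True: rows are orthonormal
```

The choice of ref is the convention that pins down the twist about the bone; different converters pick it differently.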

Visualization

How is the later visualization with coordinate axes produced? It does not appear when I run the code.

Converting a COCO skeleton to BVH

Hello, I have .npy data for a COCO skeleton (18 keypoints). In bvh_skeleton, the cmu_skeleton and human3.6_skeleton skeletons already have code to convert directly to BVH files, but COCO does not. How should I write an analogous converter?

Does it support converting BVH data from the Trinity Speech-Gesture dataset?

Hello,
Can data from the Trinity Speech-Gesture dataset be converted? If not, is there another way to convert it? Thanks.

Example:

HIERARCHY
ROOT Hips
{
	OFFSET -14.64140 90.27770 -84.91600
	CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
	JOINT Spine
	{
		OFFSET 0.00000 13.20850 -1.60436
		CHANNELS 3 Zrotation Xrotation Yrotation
		JOINT Spine1
		{
			OFFSET 0.00000 8.61716 0.00000
			CHANNELS 3 Zrotation Xrotation Yrotation
			JOINT Spine2
			{
				OFFSET 0.00000 8.61717 0.00000
				CHANNELS 3 Zrotation Xrotation Yrotation
				JOINT Spine3
				{
					OFFSET 0.00000 8.61717 0.00000
					CHANNELS 3 Zrotation Xrotation Yrotation
					JOINT Neck
					{
						OFFSET 0.00000 11.07920 1.10792
						CHANNELS 3 Zrotation Xrotation Yrotation
						JOINT Neck1
						{
							OFFSET 0.00000 7.08032 0.00000
							CHANNELS 3 Zrotation Xrotation Yrotation
							JOINT Head
							{
								OFFSET 0.00000 7.08031 0.00000
								CHANNELS 3 Zrotation Xrotation Yrotation
								End site
								{
									OFFSET 0.00000 0.00000 0.00000
								}
							}
						}
					}
					JOINT RightShoulder
					{
						OFFSET -0.01000 7.91373 5.19711
						CHANNELS 3 Zrotation Xrotation Yrotation
						JOINT RightArm
						{
							OFFSET -18.41580 0.00000 0.00000
							CHANNELS 3 Zrotation Xrotation Yrotation
							JOINT RightForeArm
							{
								OFFSET -29.11090 0.00000 0.00000
								CHANNELS 3 Zrotation Xrotation Yrotation
								JOINT RightHand
								{
									OFFSET -24.65040 0.00000 0.00000
									CHANNELS 3 Zrotation Xrotation Yrotation
									JOINT RightHandThumb1
									{
										OFFSET -5.35114 -0.85590 3.97906
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT RightHandThumb2
										{
											OFFSET -4.52789 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT RightHandThumb3
											{
												OFFSET -2.46976 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
									JOINT RightHandIndex1
									{
										OFFSET -14.26970 0.00000 2.88139
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT RightHandIndex2
										{
											OFFSET -5.48836 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT RightHandIndex3
											{
												OFFSET -3.01859 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
									JOINT RightHandMiddle1
									{
										OFFSET -14.26970 0.00000 -0.09147
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT RightHandMiddle2
										{
											OFFSET -6.17441 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT RightHandMiddle3
											{
												OFFSET -3.56743 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
									JOINT RightHandRing1
									{
										OFFSET -12.69180 0.00000 -3.06433
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT RightHandRing2
										{
											OFFSET -5.62556 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT RightHandRing3
											{
												OFFSET -3.56744 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
									JOINT RightHandPinky1
									{
										OFFSET -11.11390 0.00000 -6.03719
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT RightHandPinky2
										{
											OFFSET -4.52789 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT RightHandPinky3
											{
												OFFSET -2.46976 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
								}
							}
						}
					}
					JOINT LeftShoulder
					{
						OFFSET 0.01000 7.91373 5.19711
						CHANNELS 3 Zrotation Xrotation Yrotation
						JOINT LeftArm
						{
							OFFSET 18.41580 0.00000 0.00000
							CHANNELS 3 Zrotation Xrotation Yrotation
							JOINT LeftForeArm
							{
								OFFSET 29.11090 0.00000 0.00000
								CHANNELS 3 Zrotation Xrotation Yrotation
								JOINT LeftHand
								{
									OFFSET 24.65040 0.00000 0.00000
									CHANNELS 3 Zrotation Xrotation Yrotation
									JOINT LeftHandThumb1
									{
										OFFSET 5.35114 -0.85590 3.97906
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT LeftHandThumb2
										{
											OFFSET 4.52789 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT LeftHandThumb3
											{
												OFFSET 2.46977 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
									JOINT LeftHandIndex1
									{
										OFFSET 14.26970 0.00000 2.88139
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT LeftHandIndex2
										{
											OFFSET 5.48836 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT LeftHandIndex3
											{
												OFFSET 3.01859 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
									JOINT LeftHandMiddle1
									{
										OFFSET 14.26970 0.00000 -0.09147
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT LeftHandMiddle2
										{
											OFFSET 6.17440 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT LeftHandMiddle3
											{
												OFFSET 3.56744 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
									JOINT LeftHandRing1
									{
										OFFSET 12.69180 0.00000 -3.06433
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT LeftHandRing2
										{
											OFFSET 5.62556 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT LeftHandRing3
											{
												OFFSET 3.56744 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
									JOINT LeftHandPinky1
									{
										OFFSET 11.11390 0.00000 -6.03719
										CHANNELS 3 Zrotation Xrotation Yrotation
										JOINT LeftHandPinky2
										{
											OFFSET 4.52789 0.00000 0.00000
											CHANNELS 3 Zrotation Xrotation Yrotation
											JOINT LeftHandPinky3
											{
												OFFSET 2.46976 0.00000 0.00000
												CHANNELS 3 Zrotation Xrotation Yrotation
												End site
												{
													OFFSET 0.00000 0.00000 0.00000
												}
											}
										}
									}
								}
							}
						}
					}
					JOINT pCube4
					{
						OFFSET 0.00000 10.91090 1.10792
						CHANNELS 3 Zrotation Xrotation Yrotation
						End site
						{
							OFFSET 0.00000 0.00000 0.00000
						}
					}
				}
			}
		}
	}
	JOINT RightUpLeg
	{
		OFFSET -9.98441 0.00000 0.00000
		CHANNELS 3 Zrotation Xrotation Yrotation
		JOINT RightLeg
		{
			OFFSET 0.00000 -42.11790 0.00000
			CHANNELS 3 Zrotation Xrotation Yrotation
			JOINT RightFoot
			{
				OFFSET 0.00000 -45.00020 -0.00001
				CHANNELS 3 Zrotation Xrotation Yrotation
				JOINT RightForeFoot
				{
					OFFSET 0.00000 -3.74820 0.00000
					CHANNELS 3 Zrotation Xrotation Yrotation
					JOINT RightToeBase
					{
						OFFSET 0.00000 0.06325 14.43120
						CHANNELS 3 Zrotation Xrotation Yrotation
						End site
						{
							OFFSET 0.00000 0.00000 0.00000
						}
					}
				}
			}
		}
	}
	JOINT LeftUpLeg
	{
		OFFSET 9.98441 0.00000 0.00000
		CHANNELS 3 Zrotation Xrotation Yrotation
		JOINT LeftLeg
		{
			OFFSET 0.00000 -42.11800 0.00000
			CHANNELS 3 Zrotation Xrotation Yrotation
			JOINT LeftFoot
			{
				OFFSET 0.00000 -45.00070 0.00000
				CHANNELS 3 Zrotation Xrotation Yrotation
				JOINT LeftForeFoot
				{
					OFFSET 0.00000 -3.74824 0.00000
					CHANNELS 3 Zrotation Xrotation Yrotation
					JOINT LeftToeBase
					{
						OFFSET 0.00000 0.06325 14.43110
						CHANNELS 3 Zrotation Xrotation Yrotation
						End site
						{
							OFFSET 0.00000 0.00000 0.00000
						}
					}
				}
			}
		}
	}
}
MOTION
Frames: 1
Frame Time: 0.050000
0.0 0.0 0.0 -0.0 0.0 -0.0 -1.0438600060281022 2.3587152053059897 -1.2019897431581434 -1.001710861815905 3.9499432452818337 0.4159642251569221 -1.0308159394841072 -1.0254198939760328 -0.3421914256452675 0.020989867445547768 0.19973630310387566 -0.618310852232598 -0.3197276373360509 12.097122524927356 0.18173158781933554 0.016670012435241553 -0.5045475503531174 -0.9459365635586078 0.14449033402431094 -15.811948793899363 -0.05198356166208762 -4.803187267192833 1.425229544967309 -14.953113151318428 67.19286454829422 10.776513302970162 13.750833647655174 22.399628423650412 0.5910102517395841 78.32902930570262 16.32197871348333 0.28157159400535886 -5.0503092453904825 4.52871 -0.12703 -1.59312 -4.64595e-06 9.01534e-06 -56.2342 8.23465e-06 -9.55045e-07 -56.2342 -38.7697 2.616e-06 -3.03728e-06 1.51694e-05 2.54582e-06 3.19778e-06 -3.36779e-06 6.40269e-06 -1.74067e-06 -36.6938 -3.44578e-06 4.82161e-06 1.03628e-05 2.25643e-06 -1.00237e-06 -1.59028e-15 3.18055e-15 1.90833e-14 -19.7583 1.99729e-06 9.82298e-07 1.19088e-05 -1.70535e-06 -1.47446e-06 -7.95139e-16 3.57812e-15 3.18055e-15 24.0542 -1.29415e-06 -2.03572e-07 28.8651 4.69993e-06 -2.49464e-06 28.8651 4.67541e-06 2.50252e-06 5.506066226025543 1.3766285140780876 13.34645761940873 -67.05098683202189 19.972927943281537 -15.991218926177313 -26.63707388373633 -1.732274591107232 -76.56005279831253 -12.684982116077768 0.25795658013984124 0.5589712097729244 -0.474211 0.0677175 -8.12381 -7.57701e-06 -7.52602e-07 44.5675 -2.34874e-06 -9.76287e-07 44.5675 12.4601 -8.06117e-07 4.96022e-06 -1.98259e-05 -5.83324e-07 -1.40447e-07 -3.18055e-15 1.59028e-15 -9.54166e-15 11.1059 -5.13129e-07 -3.38467e-06 3.12111e-07 1.50825e-07 1.16352e-06 5.56875e-08 -1.53149e-07 -1.37717e-07 4.77241 1.48801e-06 -1.13665e-06 -2.79682e-07 7.26536e-07 7.29834e-07 -1.95777e-07 5.08575e-07 5.10883e-07 -5.60077 7.82297e-07 -2.03231e-07 -6.72092 -3.47366e-06 4.68464e-06 -6.72092 -1.30631e-08 -7.87263e-07 -1.19271e-15 1.09925e-15 -3.18055e-15 8.04465 3.49397 
-19.2195 -2.21424e-05 10.395 0.0142586 -5.62103 -15.6599 3.19225 3.97569e-16 7.95139e-16 -1.59028e-15 1.81331e-08 4.69756 -4.30096e-07 12.4594 -8.23857 6.88324 1.34357e-05 46.3794 0.00174233 -4.32522 -14.7485 -3.83531 3.18055e-15 -3.18055e-15 9.54166e-15 1.35867e-07 -10.745 -2.59575e-06


Some questions about the underlying principle

After reading the conversion code I still have some questions. Can pose3d really be converted to BVH? From the 3D coordinates alone you can only recover rotation angles between bones, but how is, say, the arm's twist (rotation about its own axis) computed? For example, going from a T-pose to palms facing up, how is the arm twist angle determined? And if the result is bound to a 3D model, won't it look strange?
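The twist ambiguity the question points at can be demonstrated directly: any rotation about the bone's own axis leaves the joints on that axis unchanged, so 3D keypoints alone cannot determine the twist (a small numeric check, not code from this project):

```python
import numpy as np

# Rodrigues' rotation formula: rotate p about a unit axis by angle.
def rotate_about_axis(p, axis, angle):
    axis = axis / np.linalg.norm(axis)
    return (p * np.cos(angle)
            + np.cross(axis, p) * np.sin(angle)
            + axis * np.dot(axis, p) * (1 - np.cos(angle)))

arm_axis = np.array([1.0, 0.0, 0.0])   # shoulder -> elbow direction
elbow = np.array([1.0, 0.0, 0.0])      # elbow lies on that axis

# Twisting the arm 90 degrees about its own axis would move the palm,
# but every joint ON the axis stays exactly where it was:
twisted = rotate_about_axis(elbow, arm_axis, np.pi / 2)
print(np.allclose(twisted, elbow))  # True
```

This is why converters typically fix the twist by convention (e.g. using a reference direction from a neighboring bone), which can indeed look odd when the result is retargeted onto a bound model.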

load_camera_params: only integer scalar arrays can be converted to a scalar index

The source of camera.py is as follows:

import h5py
import numpy as np
from pathlib import Path

def load_camera_params(file):
    cam_file = Path(file)
    cam_params = {}
    azimuth = {
        '54138969': 70, '55011271': -70, '58860488': 110, '60457274': -100
    }
    with h5py.File(cam_file) as f:
        subjects = [1, 5, 6, 7, 8, 9, 11]
        for s in subjects:
            cam_params[f'S{s}'] = {}
            for _, params in f[f'subject{s}'].items():
                name = params['Name']
                name = ''.join([chr(c) for c in name])
                val = {}
                val['R'] = np.array(params['R'])
                val['T'] = np.array(params['T'])
                val['c'] = np.array(params['c'])
                val['f'] = np.array(params['f'])
                val['k'] = np.array(params['k'])
                val['p'] = np.array(params['p'])
                val['azimuth'] = azimuth[name]
                cam_params[f'S{s}'][name] = val

    return cam_params

When running cam_params = load_camera_params('cameras.h5')[subject][cam_id], this line (and many lines after it) raises an error:

---> 17 name = ''.join([chr(c) for c in name])

TypeError: only integer scalar arrays can be converted to a scalar index
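A likely cause (my reading of the traceback, not a confirmed fix): the 'Name' entries in cameras.h5 are stored as a 2-D array of character codes, so iterating over them yields length-1 arrays, and chr() rejects those on recent NumPy versions. Flattening first (or calling int(c)) restores scalar indexing:

```python
import numpy as np

# Simulated 'Name' data: a column vector of character codes. This is
# an assumption about the dataset's shape based on the error above;
# no .h5 file is needed for the sketch.
name_raw = np.array([[53], [52], [49], [51]])

# chr(c) fails when c is a length-1 array; flatten + int fixes it:
name = ''.join(chr(int(c)) for c in name_raw.flatten())
print(name)  # '5413'
```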

Exported BVH skeleton data

After exporting to BVH and importing it into Blender, I found the whole skeleton leans forward. What could cause this?
(screenshot)
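One common cause of a leaning or tilted skeleton (an assumption, not a confirmed diagnosis for this report): the BVH data uses a Y-up coordinate system while Blender is Z-up, so a missing 90° rotation about X tilts the whole rig. The conversion is a fixed rotation applied to every position:

```python
import numpy as np

# Rotate points +90 degrees about the X axis: Y-up -> Z-up.
def y_up_to_z_up(points):
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, 0.0, -1.0],
                    [0.0, 1.0, 0.0]])
    return points @ rot.T

up = np.array([[0.0, 1.0, 0.0]])   # "up" in the BVH's Y-up frame
print(y_up_to_z_up(up))            # maps to +Z, Blender's up axis
```

In practice the same effect can usually be achieved with the forward/up axis options in Blender's BVH importer instead of editing the data.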

How to use the openpose and hrnet models

Hello, how can I use the openpose and hrnet models to predict 2D keypoints? Could you update the project?
openpose cannot be used, and hrnet reports this error:

Traceback (most recent call last):
  File "videopose.py", line 332, in
    inference_video('outputs/inputvideo/video.avi', 'hr_pose')
  File "videopose.py", line 195, in inference_video
    main(args)
  File "videopose.py", line 91, in main
    model_pos = model_pos.cuda()
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 307, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 203, in _apply
    module._apply(fn)
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 225, in _apply
    param_applied = fn(param)
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 307, in
    return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: device-side assert triggered

Can you help? Thank you.

Suspected infinite loop in /joints_detectors/Alphapose/gene_npz.py

While debugging I found the program unresponsive for a long time (I waited as much as an hour), so I set breakpoints to investigate.
It turned out the program keeps looping in the while loop around line 190 of /joints_detectors/Alphapose/gene_npz.py, which looks like an infinite loop. The relevant code:

while writer.running():
    pass
writer.stop()
final_result = writer.results()

The loop body is only pass, which does nothing, so how is the loop supposed to exit? And why is writer.stop() placed after the loop?
I tried modifying it, but after skipping the loop the output BVH contains only the first frame of the video.

I am not deeply familiar with multithreaded programming; if convenient, please leave your email address so we can discuss.
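To see why the loop can both terminate normally and hang, here is a self-contained sketch of the pattern (a hypothetical Writer class; the real one lives in AlphaPose's code): running() is true while queued frames remain, a background thread drains the queue, and stop() comes after the loop so the final frames are flushed before results are read. The loop only becomes infinite if the drain thread stops consuming.

```python
import queue
import threading
import time

class Writer:
    """Minimal stand-in for AlphaPose's writer thread."""
    def __init__(self):
        self.q = queue.Queue()
        self._results = []
        self._thread = threading.Thread(target=self._drain, daemon=True)
        self._thread.start()

    def _drain(self):
        # Background thread: consume queued items until a None sentinel.
        while True:
            item = self.q.get()
            if item is None:
                break
            self._results.append(item)

    def running(self):
        # True while unprocessed items remain queued.
        return not self.q.empty()

    def save(self, item):
        self.q.put(item)

    def stop(self):
        self.q.put(None)        # sentinel: tell the thread to finish
        self._thread.join()     # wait until everything is flushed

    def results(self):
        return self._results

w = Writer()
for i in range(3):
    w.save(i)
while w.running():     # exits once the drain thread empties the queue
    time.sleep(0.01)   # sleeping avoids pegging a core, unlike bare pass
w.stop()
print(w.results())
```

So the busy-wait is not logically infinite, but it spins forever if the consumer thread has died; replacing pass with a short time.sleep keeps the behavior identical while wasting far less CPU.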

Frame rate setting

Hello, the output frame rate is 30 fps. How can I change it to a different frame rate?
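The output frame rate is encoded in the BVH header's Frame Time field, which is simply the reciprocal of the fps (where exactly this project hard-codes 1/30 is something I have not verified):

```python
# The BVH "Frame Time" line is 1 / fps, e.g. 30 fps -> 0.033333.
def frame_time_for(fps):
    return 1.0 / fps

print(f"Frame Time: {frame_time_for(60):.6f}")
```

Note that changing Frame Time alone only retimes playback; matching a genuinely different source fps also requires resampling the motion frames.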

Error Alpha

Hello !
I did a clean install twice, but I still get this error.

(screenshot)

writing to .fbx format

Hello. Thank you for the work. Do you think it's possible to write to .fbx format using the output of the SMPL model? I have the following parameters:

pred_cam (n_frames, 3) # weak perspective camera parameters in cropped image space (s,tx,ty)
orig_cam (n_frames, 4) # weak perspective camera parameters in original image space (sx,sy,tx,ty)
verts (n_frames, 6890, 3) # SMPL mesh vertices
pose (n_frames, 72) # SMPL pose parameters
betas (n_frames, 10) # SMPL body shape parameters
joints3d (n_frames, 49, 3) # SMPL 3D joints
joints2d (n_frames, 21, 3) # 2D keypoint detections by STAF if pose tracking enabled otherwise None
bboxes (n_frames, 4) # bbox detections (cx,cy,w,h)
frame_ids (n_frames,) # frame ids in which subject with tracking id

A question about running the program

Hello, I would like to ask: if I want to run the program on the GPU instead of the CPU, what code changes are needed?

One little BUG

In joints_detectors\Alphapose\fn.py, at line 219, change stickwidth to int(stickwidth).

ValueError: setting an array element with a sequence.

outputs/inputvideo/video.avi --- elapsed time: 7.605630503036082 s
Traceback (most recent call last):
  File "videopose.py", line 332, in
    inference_video('outputs/inputvideo/video.avi', 'alpha_pose')
  File "videopose.py", line 195, in inference_video
    main(args)
  File "videopose.py", line 74, in main
    keypoints = detector_2d(video_name)
  File "/mnt/sdb1/pose_estimation/3D_pose_estimation/VideoTo3dPoseAndBvh/joints_detectors/Alphapose/gene_npz.py", line 63, in generate_kpts
    kpts = np.array(kpts).astype(np.float32)
ValueError: setting an array element with a sequence.

Can you help?
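This error usually means the list is ragged: if detection fails on some frames, kpts contains elements of different shapes (or None), so np.array cannot build a rectangular float array. A sketch of the failure and one possible workaround (my assumption, not the project's official fix) of reusing the previous frame's keypoints:

```python
import numpy as np

# Ragged keypoint list: one frame has no detection.
kpts = [np.zeros((17, 2)), np.zeros((17, 2)), None]

# Fill missed frames with the previous frame's keypoints so the list
# becomes rectangular before np.array(...).astype(np.float32).
filled = []
for k in kpts:
    if k is None:
        k = filled[-1] if filled else np.zeros((17, 2))
    filled.append(k)
arr = np.array(filled, dtype=np.float32)
print(arr.shape)  # (3, 17, 2)
```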

Coordinate format

Hello, are the output 3D point coordinates normalized? Why are there values greater than 1?

An error in the output

Hello, I tried running your code and got the error below; it seems no processed file was generated. What went wrong?
(screenshot)

How to install torchsample

I could not install torchsample using the method in the link you provided; the error is as follows:

pip install -e git+https://github.com/ncullen93/torchsample.git#egg=torchsample
Obtaining torchsample from git+https://github.com/ncullen93/torchsample.git#egg=torchsample
Cloning https://github.com/ncullen93/torchsample.git to d:\game\mmd\mmdmatic\src\torchsample
Running command git clone -q https://github.com/ncullen93/torchsample.git 'D:\game\MMD\mmdmatic\src\torchsample'
fatal: early EOF
fatal: the remote end hung up unexpectedly
fatal: index-pack failed
error: RPC failed; curl 56 OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 10054
ERROR: Command errored out with exit status 128: git clone -q https://github.com/ncullen93/torchsample.git 'D:\XXXXXX\src\torchsample' Check the logs for full command output.

I ran this command in an Anaconda3 virtual environment. If torchsample has to be installed manually, where exactly should it be placed? Thanks.
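The early-EOF / curl 56 messages above are network failures during the clone, not a problem with the package itself. One workaround (a sketch, assuming git and network access in the active conda environment) is to clone manually and install from the local checkout; pip then places the package in the environment's site-packages like any other install, so no manual placement is needed:

```shell
# Clone outside pip so an interrupted transfer can simply be retried,
# then install the checkout into the currently active environment.
git clone https://github.com/ncullen93/torchsample.git
cd torchsample
pip install .
```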

BVH question

Hi, when the video contains only the upper body, why is the lower-body part of the BVH file so messy?
(screenshot)

RuntimeError: CUDA error: out of memory

Do you know what the problem is?
$ python videopose.py
the video is 25.051 f/s
Loading YOLO model..
outputs/inputvideo/kunkun_cut_one_second.mp4 --- elapsed time: 1.0411371119844262 s
Traceback (most recent call last):
  File "videopose.py", line 332, in
    inference_video('outputs/inputvideo/kunkun_cut_one_second.mp4', 'alpha_pose')
  File "videopose.py", line 195, in inference_video
    main(args)
  File "videopose.py", line 74, in main
    keypoints = detector_2d(video_name)
  File "/mnt/sdb1/pose_estimation/3D_pose_estimation/VideoTo3dPoseAndBvh/joints_detectors/Alphapose/gene_npz.py", line 36, in generate_kpts
    final_result, video_name = handle_video(video_file)
  File "/mnt/sdb1/pose_estimation/3D_pose_estimation/VideoTo3dPoseAndBvh/joints_detectors/Alphapose/gene_npz.py", line 122, in handle_video
    det_loader = DetectionLoader(data_loader, batchSize=args.detbatch).start()
  File "/mnt/sdb1/pose_estimation/3D_pose_estimation/VideoTo3dPoseAndBvh/joints_detectors/Alphapose/dataloader.py", line 280, in init
    self.det_model.cuda()
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 307, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 203, in _apply
    module._apply(fn)
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 203, in _apply
    module._apply(fn)
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 203, in _apply
    module._apply(fn)
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 225, in _apply
    param_applied = fn(param)
  File "/home/ihomelab/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 307, in
    return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: out of memory
