Comments (16)
I am so sorry that I forgot to upload realtimehandposepipeline.py.
Hello, I have updated a realtime demo here.
The input is a depth map captured from a RealSense camera, which is the main device in our lab.
If you want to use a Kinect to obtain the depth image, just change the depth image input (see the sketch below). I think it will be easy for you.
Good luck.
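For example, a minimal sketch of grabbing one depth frame with pyrealsense2 (the library the demo script imports); for a Kinect you would swap this block for your Kinect SDK's depth read instead:

import numpy as np
import pyrealsense2 as rs

# open the depth stream (640x480, z16, 30 fps)
pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(cfg)
try:
    frames = pipeline.wait_for_frames()
    # raw uint16 depth; multiply by the device depth scale for metric units
    depth = np.asanyarray(frames.get_depth_frame().get_data())
finally:
    pipeline.stop()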
Hello! I'm sorry, but I have another question: when I run the DeepPrior++ algorithm, I found that the Theano GPU version requires CUDA 8 + cuDNN 5, but my CUDA version is 10. I can only run it with CUDA 8 + cuDNN 5 (CUDA 8 + cuDNN 6 doesn't work either). So I want to ask: what versions of Theano, CUDA, and cuDNN do you use?
My code is based on TensorFlow, so there is no Theano. Before you run the code, I think you should read the README.md first.
- Theano has stopped being updated, so it only supports CUDA 8 + cuDNN 5.
- This code is TensorFlow based; on my computer I use a GTX 1080 + CUDA 9.0 + cuDNN v7.1.3 + TensorFlow 1.9.
- If your CUDA is version 10.0, it only supports TensorFlow 1.13. As for cuDNN, you can find the corresponding version on this site.
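If you are unsure which build you actually have, here is a quick sanity check using the standard TF 1.x API:

import tensorflow as tf

print(tf.__version__)                # e.g. 1.13.x for a CUDA 10.0 build
print(tf.test.is_built_with_cuda())  # True if compiled against CUDA
print(tf.test.is_gpu_available())    # True only if the CUDA/cuDNN combo loads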
Yes, my TensorFlow is 1.13 and I have no problem with TensorFlow, but the line "from util.realtimehandposepipeline import RealtimeHandposePipeline" uses Theano, is that right? Because I ran into some problems with Theano when running the realtime demo.
I have uploaded realtimehandposepipeline.py. You can try again.
Thank you very much!
Hello! I tested with the Kinect2 online, but I can't get good results. What do you think the reasons might be? Is it the data, or should I test online with the RealSense SR300? I only have a Kinect2 device.
The demo opens two windows; one shows the detected hand crop, which is also the input to the model.
Do you get the correct cropped hand map by moving your hand in the scene?
What's more, if you use the demo, you need to change the camera info to your Kinect2's (fx, fy, ux, uy), as sketched below.
By the way, do you have a trained model with good test results?
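A minimal sketch of plugging in Kinect2 intrinsics, using the repo's DepthImporter as in the demo script; the numbers below are typical 512x424 Kinect v2 depth-camera values and are only placeholders, so calibrate your own device:

from data.importers import DepthImporter

# placeholder Kinect v2 depth intrinsics -- replace with your calibration;
# the demo negates fy (flag = -1) for non-ICVL datasets
flag = -1
di = DepthImporter(fx=365.456, fy=flag * 365.456, ux=254.878, uy=205.395)
config = {'fx': di.fx, 'fy': abs(di.fy), 'cube': (250, 250, 250), 'im_size': (96, 96)}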
Yes, I trained it and I have changed the camera info, but the cropped hand map is not always correct, and the failure rate is high.
The hand detection is based on a depth threshold, so you need to put your hand in front, closest to the camera.
I just tested the demo with the RealSense; it crops the hand and predicts the pose well.
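The idea, roughly (a hypothetical helper for illustration, not the repo's exact detector):

import numpy as np

def detect_hand_by_threshold(depth, min_d=100, max_d=600, band=150):
    # assume the hand is the closest valid object to the camera
    valid = np.logical_and(depth > min_d, depth < max_d)
    if not valid.any():
        return None
    nearest = depth[valid].min()
    # keep only pixels within a depth band behind the nearest point
    mask = np.logical_and(valid, depth < nearest + band)
    ys, xs = np.nonzero(mask)
    # hand center (u, v, d): pixel center of mass plus mean depth
    return xs.mean(), ys.mean(), float(depth[mask].mean())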
Sorry! Can I ask you a few more questions? I tested with the NYU dataset (the code is shown below; I put it in the 'realtime_demo' dir), but I find that the results are not really good. Is there anything I did wrong in the code?
import numpy as np
import cv2
import sys
sys.path.append('../')  # add the repo root directory to the import path
import tensorflow as tf
#import pyrealsense2 as rs
from data.importers import DepthImporter
import argparse
from netlib.basemodel import basenet2
from util.realtimehandposepipeline import RealtimeHandposePipeline
import tensorflow.contrib.slim as slim
import tensorflow.contrib.layers as layers

#rootpath = 'E:/nyu_hand_dataset_v2/dataset/train/'
#max_imgind = 72757
rootpath = 'E:/nyu_hand_dataset_v2/dataset/test/'
max_imgind = 8252
min_imgind = 1
# zero-padding prefixes so that the frame index is always 7 digits wide
strlist = ['000000', '00000', '0000', '000', '00']


class model_setup():
    def __init__(self, dataset, model_path):
        self._dataset = dataset
        self.model_path = model_path
        self.inputs = tf.placeholder(dtype=tf.float32, shape=(None, 96, 96, 1))
        self.hand_tensor = None
        self.model()
        self.saver = tf.train.Saver(max_to_keep=15)

    def __self_dict(self):
        # per-dataset output dimensions; first entry is the joint count
        if self._dataset == 'icvl':
            return (16, 6, 10)
        if self._dataset == 'nyu':
            return (14, 9, 5)
        if self._dataset in ['msra', 'bighand']:
            return (21, 6, 15)

    def __config(self):
        # set your own camera info here
        flag = -1
        if self._dataset == 'icvl':
            flag = 1
        # NYU (Kinect) intrinsics: 588.03, 587.07, 320., 240.
        #di = DepthImporter(fx=475.268, fy=flag*475.268, ux=313.821, uy=246.075)
        di = DepthImporter(fx=588.03, fy=flag * 587.07, ux=320., uy=240.)
        #di = DepthImporter(fx=369.713, fy=flag*369.713, ux=254.652, uy=205.019)
        config = None
        if self._dataset == 'msra':
            config = {'fx': di.fx, 'fy': abs(di.fy), 'cube': (175, 175, 175), 'im_size': (96, 96)}
        if self._dataset == 'nyu':
            config = {'fx': di.fx, 'fy': abs(di.fy), 'cube': (250, 250, 250), 'im_size': (96, 96)}
        if self._dataset == 'icvl':
            config = {'fx': di.fx, 'fy': abs(di.fy), 'cube': (240, 240, 240), 'im_size': (96, 96)}
        if self._dataset == 'bighand':
            config = {'fx': di.fx, 'fy': abs(di.fy), 'cube': (220, 220, 220), 'im_size': (96, 96)}
        return di, config

    def __crop_cube(self):
        return self.__config()[1]['cube'][0]

    def __joint_num(self):
        #print(self.__self_dict())
        return self.__self_dict()[0]

    def model(self):
        outdims = self.__self_dict()
        print(outdims)
        fn = layers.l2_regularizer(1e-5)
        fn0 = tf.no_regularizer
        with slim.arg_scope([slim.conv2d, slim.fully_connected],
                            weights_regularizer=fn,
                            biases_regularizer=fn0, normalizer_fn=slim.batch_norm):
            with slim.arg_scope([slim.batch_norm],
                                is_training=False,
                                updates_collections=None,
                                decay=0.9,
                                center=True,
                                scale=True,
                                epsilon=1e-5):
                pred_comb_ht, pred_comb_hand, pred_hand, pred_ht = basenet2(self.inputs, kp=1, is_training=False)
        self.hand_tensor = pred_hand

    def sess_run(self):
        _di, _config = self.__config()
        print(_di.fx)
        print(_di.fy)
        print(_di.ux)
        print(_di.uy)
        rtp = RealtimeHandposePipeline(1, config=_config, di=_di, verbose=False, comrefNet=None)
        joint_num = self.__joint_num()
        cube_size = self.__crop_cube()
        with tf.Session() as sess:
            init = tf.global_variables_initializer()
            sess.run(init)
            print(self.model_path)
            self.saver.restore(sess, self.model_path)
            for i in range(min_imgind, max_imgind):
                print('predicting online......')
                # build the zero-padded file name, e.g. depth_1_0000001.png
                str0 = str(i)
                len1 = len(str0)
                str1 = strlist[len1 - 1]
                strpath = rootpath + 'depth_1_' + str1 + str0 + '.png'
                depth_img = cv2.imread(strpath)
                ##depth_frame, color_frame = realsense_dev.get_image()
                # NYU packs the 16-bit depth into the PNG's color channels:
                # after cv2.imread (BGR), channel 0 holds the low byte and
                # channel 1 the high byte
                dpt1 = depth_img[:, :, 0]
                dpt2 = depth_img[:, :, 1]
                dpt1 = np.asarray(dpt1, dtype='float32')
                dpt2 = np.asarray(dpt2, dtype='float32')
                dpt2 *= 256
                depth_frame = dpt1 + dpt2
                depth_frame = np.asarray(depth_frame, dtype='uint16')
                #depth_frame = np.fliplr(depth_frame)
                if self._dataset == 'icvl':
                    depth_frame = np.fliplr(depth_frame)
                frame2 = depth_frame.copy()
                print('detecting online......')
                crop1, M, com3D = rtp.detect(frame2)
                crop = crop1.reshape(1, crop1.shape[0], crop1.shape[1], 1).astype('float32')
                pred_ = sess.run(self.hand_tensor, feed_dict={self.inputs: crop})
                norm_hand = np.reshape(pred_, (joint_num, 3))
                # de-normalize: the network predicts coordinates relative to
                # the crop cube, centered on the hand's center of mass com3D
                pose = norm_hand * cube_size / 2. + com3D
                img = rtp.show2(depth_frame, pose, self._dataset)
                img = rtp.addStatusBar(img)
                cv2.imshow('img', img)
                cv2.imshow('crop', np.asarray(crop1, dtype='uint8'))
                if cv2.waitKey(1) >= 0:
                    break
            cv2.destroyAllWindows()


# command line: python real_time_kinect2_demo.py --dataset nyu
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='realsense_realtime_demo')
    parser.add_argument('--dataset', type=str, default=None)
    args = parser.parse_args()
    dataset_input = args.dataset
    if dataset_input == 'msra':
        # pick whichever of the MSRA subject models 0-8 you like
        model = model_setup(dataset_input, '../model/crossInfoNet_{}.ckpt'.format(dataset_input))
    else:
        model = model_setup(dataset_input, '../model/crossInfoNet_{}.ckpt'.format(dataset_input))
    model.sess_run()
Sorry, here is the code as an attachment, which should display better:
real_time_nyu_dataset.txt
If you want to test on the NYU test dataset, please run handpose/network/NYU/test_nyu_cross.py, in which the centers for the test dataset are obtained in the same way as for the training dataset.
The realtime demo is just that, a real-time demo: the center of your hand is obtained only by a depth threshold, so you need to move your hand in the scene to obtain a center similar to the one used in training or testing.
The center is used for cropping the hand from the depth map, roughly as sketched below.
That's all.
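A minimal sketch of how a center is used to crop the hand, under the usual pinhole-camera assumptions (a hypothetical helper, not the repo's exact code):

import numpy as np
import cv2

def crop_from_center(depth, com_uvd, fx, fy, cube=250., out_size=96):
    # crop a (cube mm)^3 region around the hand center (u, v, d)
    u, v, d = com_uvd
    ru = int(cube / 2. * fx / d)  # cube half-extent projected to pixels
    rv = int(cube / 2. * fy / d)
    x0, x1 = max(int(u) - ru, 0), min(int(u) + ru, depth.shape[1])
    y0, y1 = max(int(v) - rv, 0), min(int(v) + rv, depth.shape[0])
    patch = depth[y0:y1, x0:x1].astype('float32')
    # clamp to the cube and normalize depth to [-1, 1] around the center
    patch = np.clip(patch, d - cube / 2., d + cube / 2.)
    patch = (patch - d) / (cube / 2.)
    return cv2.resize(patch, (out_size, out_size))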