dumyy / handpose
CrossInfoNet of CVPR 2019 for hand pose estimation
Hello, thanks a lot for your code. The GPU utilization is low, about 50~100%, and it gets even worse when I run several instances at the same time on one GPU server. On the other hand, the CPU utilization is very high, more than 300%. I wonder whether the data-loading process takes too much time. Did you have the same problem when you ran the code? Training took about 11 hours on the NYU dataset (i9-9900K, Titan Xp). Thank you!
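If data loading is the bottleneck, one common workaround (a minimal sketch, not code from this repo) is to prefetch batches on a background thread so the GPU is not left waiting on the CPU-side pipeline:

```python
import queue
import threading

def prefetch(batch_iter, buffer_size=4):
    """Consume batch_iter on a background thread, buffering up to
    buffer_size batches so the training loop rarely blocks on I/O.
    Names and sizes here are illustrative."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks end of iteration

    def producer():
        for batch in batch_iter:
            q.put(batch)
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        batch = q.get()
        if batch is sentinel:
            return
        yield batch

# Example: wrap any iterable of batches.
batches = list(prefetch(iter([[1, 2], [3, 4], [5, 6]])))
print(batches)  # [[1, 2], [3, 4], [5, 6]]
```

This only hides loading latency behind compute; if preprocessing itself is too slow, a multiprocessing pool over the augmentation step is the usual next step.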
Hi, thank you for your repo. I have two questions to ask you.
1. Is your hand segmentation based on a depth threshold?
2. Where is the depth threshold set?
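For reference, depth-threshold hand segmentation in depth-based pipelines typically keeps pixels within a fixed depth range around the hand's center of mass; where (and whether) this repo sets that threshold is the author's question to answer. A minimal sketch with illustrative values:

```python
import numpy as np

def segment_hand(depth, com_z, cube_z=250.0):
    """Keep pixels whose depth (in mm) lies within cube_z/2 of the
    hand's center-of-mass depth com_z; zero out everything else.
    Function name and default cube size are illustrative, not
    taken from this repo."""
    near = com_z - cube_z / 2.0
    far = com_z + cube_z / 2.0
    mask = (depth > near) & (depth < far)
    return np.where(mask, depth, 0.0)

depth = np.array([[400.0, 520.0], [610.0, 900.0]])
seg = segment_hand(depth, com_z=500.0)
print(seg)  # pixels outside [375, 625] mm are zeroed
```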
Hello, can you provide the training and testing file for ICVL dataset? Thank you very much!!!
Thanks so much for sharing your work! Could you please provide a link to download the pre-trained models for the datasets you used in your network (HANDS 17, ICVL, NYU and MSRA)? Thanks a lot!
Thank you very much for sharing your project!
When I trained on MSRA using depth thresholding or a pretrained CoM from V2V-PoseNet, the performance was not as good as I expected.
I want to know how you obtained the CoM for the MSRA dataset.
Thank you.
Thank you for your generous sharing! I want to run prediction on Kinect v2 depth images online. How can I do that?
Do we need detection centers at prediction time?
Hello.
Thank you for sharing your code and paper.
Can I ask how you measured the inference time of the other algorithms in your paper?
Thank you
Hi, could you please provide a link to download the pre-trained model?
Thanks
Hi dumyy, I'm sorry to bother you. When I tried to run the real-time demo, it reported this error: ValueError: too many values to unpack.
Then I made a small change to handdetector.py. It no longer reports the error, but the code stops without showing a demo window after running. Can you help me with this problem?
Hi, thanks a lot for your awesome work!
I ran into this problem when running MSRA/train_and_test.py. Could you please help me with it?
Thank you so much!
/media/cv/Project/bf/CrossInfoNet/network/MSRA/train_and_test.py
Loading cache data from ../../cache/MSRA//MSRA15Importer_P0_None_com_200_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P1_None_com_200_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P2_None_com_200_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P3_None_com_180_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P4_None_com_180_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P5_None_com_180_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P6_None_com_170_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P7_None_com_160_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P8_None_com_150_cache.pkl
Shuffling
Traceback (most recent call last):
  File "/media/cv/Project/bf/CrossInfoNet/network/MSRA/train_and_test.py", line 36, in <module>
    Seq_test_raw = Seq_all.pop(MID)
TypeError: 'NoneType' object cannot be interpreted as an integer
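The traceback says Seq_all.pop(MID) was called while MID was still None; list.pop requires an integer index, so the variable selecting the held-out MSRA subject was never set (in this script it typically comes from a command-line argument). A hypothetical guard illustrating the failure mode and the fix, with made-up names mirroring the traceback:

```python
def pick_test_subject(seq_all, mid):
    """Remove and return the held-out subject sequence.
    seq_all and mid are illustrative stand-ins for the script's
    Seq_all and MID; the check below is the suggested fix."""
    if mid is None:
        raise ValueError("MID is None: pass the test-subject index "
                         "(e.g. 0-8 for MSRA) when launching the script")
    return seq_all.pop(mid)

subjects = ["P0", "P1", "P2"]
print(pick_test_subject(subjects, 1))  # P1
print(subjects)  # ['P0', 'P2']
```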
Hello.
Thank you for sharing your excellent project.
I have a question while analyzing your code.
According to your code, it seems to compute the error on the test set after each epoch and store the model with the smallest error.
Is this the usual way to use this model for testing?
Thank you
hi,
thanks for sharing your project, I have one question here:
Can this project train on the ICVL dataset? How do I train it?
Hello, what is the specific principle behind the hand segmentation? The article does not elaborate on it. Or is there any article about it?
Hi
Thanks a lot for the code. Can you tell me what the crop_joint_idx variable in data/importer.py represents? It is used as an index, and gtorig[crop_joint_idx] is passed as the center of mass, but how is this crop_joint_idx index decided/chosen?
-Sidharth.
Hello, I would like to ask about training on the ICVL dataset.
How to set the following parameters?
(1)train_root (2)Seq_train (3)Seq_test (4)outdims (5)gt_fing_ht (6)gt_palm_ht (7)gt_fing (8)gt_palm
I hope you can answer it. Thank you very much.
Thanks a lot for sharing your code. While training on the NYU dataset, it seems it is not utilizing the GPU properly and training time is significantly higher. Could you please tell me how you configured the GPU? I tried to configure the GPU as following:
It is worth noting that I've checked that the GPUs are working, but utilization is pretty low, almost minimal.
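For reference, a typical TensorFlow 1.x session configuration (a sketch under the assumption that the repo uses TF 1.x sessions; these are not the author's actual settings) that verifies ops are placed on the GPU and lets memory grow on demand:

```python
import tensorflow as tf

# Illustrative TF 1.x session config, not this repo's actual code.
config = tf.ConfigProto(
    allow_soft_placement=True,   # fall back if an op has no GPU kernel
    log_device_placement=True,   # log each op's device, to verify GPU use
)
config.gpu_options.allow_growth = True  # grab GPU memory on demand

with tf.Session(config=config) as sess:
    ...  # build and run the graph here
```

If the device-placement log shows ops on the GPU, low utilization usually points back to a CPU-side data-loading bottleneck rather than the session config.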
Hi, Thanks for sharing great code.
I have a question about a real-time demo.
I used your training source, trained on NYU and MSRA datasets to get a model.
I ran a real-time demo using the model I got and it's much less accurate than the demo video on your project page.
Can you tell me what dataset you trained on the project demo video?
Hi, I have some questions about the error results reported in this paper. I am looking forward to your reply!
Can you tell us how you obtained the test error? The dataset used in the eval-error part is the same as the test part, so in my understanding you chose the best validation result during training as the test error.
Why is the best error of the ablation part in Table 1 8.48 mm on the ICVL dataset, which is not the same as the result in Table 1 (6.73 mm)?
Congratulations on your graduation!
How should I process the data if I want to train/test on other hand datasets? Should I use the base data-preprocessing and online data-augmentation code provided by DeepPrior++?
Hello! Thank you very much for your sharing! When I train the model (run train_and_test.py) I run into a problem:
  File "../..\data\importers.py", line 1000, in loadSequence
    f = open(comlabels, 'r')
FileNotFoundError: [Errno 2] No such file or directory: 'E:/nyu_hand_dataset_v2/dataset//train_NYU.txt'
How can I solve it?
Hi, thanks a lot again for your awesome work! Could you please share the data loader for the hands 2017 challenge dataset?
Hi, thanks again for sharing your code. I was going through your codebase and got confused by the following. Could you share your view on these points?
Hi @dumyy,
Thanks for your paper. Would you please tell us which subsets of MSRA you used for training/validation/testing?