
handpose's People

Contributors

dumyy


handpose's Issues

Low GPU utilization and high CPU utilization

Hello, thanks a lot for your code. The GPU utilization is low, fluctuating around 50-100%, and it gets even worse when I run several instances at the same time on one GPU server. On the other hand, the CPU utilization is very high, more than 300%. I suspect the data-loading process is taking too much time. Did you have the same problem when you ran the code? Training took about 11 hours on the NYU dataset (i9-9900K, Titan Xp). Thank you!
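
For context, a pattern that often helps when CPU-side loading starves the GPU is a parallel, prefetched input pipeline. This is only a generic TF1-style sketch, not the repo's loader; `load_sample`, the file list, and the 480x640 float32 frame layout are assumptions.

```python
import tensorflow as tf

def load_sample(path):
    # Placeholder loader: read one raw depth frame and reshape it.
    # The 480x640 float32 layout is assumed purely for illustration.
    raw = tf.read_file(path)
    depth = tf.decode_raw(raw, tf.float32)
    return tf.reshape(depth, [480, 640])

train_files = tf.constant(["frame_0001.bin", "frame_0002.bin"])  # placeholder paths

dataset = (tf.data.Dataset.from_tensor_slices(train_files)
           .map(load_sample, num_parallel_calls=4)  # preprocess on several CPU threads
           .batch(32)
           .prefetch(2))                            # overlap loading with GPU compute

next_batch = dataset.make_one_shot_iterator().get_next()
```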

About Gesture Segmentation

Hi, thank you for your repo. I have two questions:
1. Is your gesture segmentation based on a depth threshold?
2. Where is the depth threshold set?
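
For readers landing on this issue: a common depth-threshold segmentation keeps everything within a fixed slab behind the closest valid depth, on the assumption that the hand is the object nearest the camera. A minimal NumPy sketch, illustrative only and not the repo's implementation (the 150 mm slab size is an assumed value):

```python
import numpy as np

def segment_hand(depth, slab_mm=150.0):
    """Keep every pixel within slab_mm behind the closest valid depth value."""
    valid = depth > 0                        # 0 usually marks missing depth
    near = depth[valid].min()                # closest surface, assumed to be the hand
    return valid & (depth < near + slab_mm)  # boolean hand mask

# usage: hand_only = depth * segment_hand(depth)
```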

Pretrained models

Thanks so much for sharing your work! Could you please provide a link to download the pre-trained models for the datasets you used in your network (HANDS 17, ICVL, NYU and MSRA)? Thanks a lot!

MSRA crop

Thank you very much for sharing your project!

When I trained MSRA using depth thresholding or the pretrained com from V2V-PoseNet, performance was not as good as I expected.

I want to know how you obtain the com (center of mass) for the MSRA dataset.

Thank you.
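
For reference, the com in DeepPrior-style pipelines is usually just the mean (u, v, depth) over the segmented hand pixels. A small illustrative sketch, not the repo's code:

```python
import numpy as np

def center_of_mass(depth):
    """Mean (u, v, d) over foreground pixels of a segmented depth map (mm)."""
    vs, us = np.nonzero(depth)          # row/column indices of hand pixels
    if us.size == 0:
        return None                     # nothing segmented
    ds = depth[vs, us]
    return np.array([us.mean(), vs.mean(), ds.mean()])  # (u px, v px, d mm)
```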

Inference time while testing

Hello.

Thank you for sharing your code and paper.

Can I ask how you measured the inference time of the other algorithms in your paper?

Thank you
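
The paper itself has to be the authority here, but one common way to time inference in TF1 is to run warm-up passes and then average over many forward passes. A sketch under that assumption (`sess`, `output`, `inputs`, and `batch` are placeholders for a trained graph):

```python
import time

def measure_fps(sess, output, inputs, batch, warmup=10, runs=100):
    """Average per-frame inference time over many forward passes."""
    for _ in range(warmup):                              # exclude graph/cuDNN warm-up
        sess.run(output, feed_dict={inputs: batch})
    start = time.time()
    for _ in range(runs):
        sess.run(output, feed_dict={inputs: batch})
    per_frame = (time.time() - start) / (runs * len(batch))
    return 1.0 / per_frame                               # frames per second
```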

pre-trained model

Hi, could you please provide a link to download the pre-trained model?
Thanks!

ValueError: too many values to unpack

Hi dumyy, sorry to bother you. When I tried to run the real-time demo, it raised ValueError: too many values to unpack. After I made a small change to handdetector.py it no longer raised the error, but the code stops without opening a demo window. Can you help me with this problem?
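
A guess rather than a confirmed diagnosis: in detector code, "too many values to unpack" frequently comes from `cv2.findContours`, which returns three values in OpenCV 3.x but only two in OpenCV 2.x/4.x. If that is the call in handdetector.py, a version-agnostic wrapper looks like this:

```python
import cv2

def find_contours(mask):
    """cv2.findContours returns 3 values in OpenCV 3.x and 2 in OpenCV 2.x/4.x;
    the contour list is always the second-to-last element either way."""
    result = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return result[-2]
```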

Question about MSRA

Hi, thanks a lot for your awesome work!
I met the problem below when I ran MSRA/train_and_test.py; could you please help me with it?
Thank you so much!
/media/cv/Project/bf/CrossInfoNet/network/MSRA/train_and_test.py
Loading cache data from ../../cache/MSRA//MSRA15Importer_P0_None_com_200_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P1_None_com_200_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P2_None_com_200_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P3_None_com_180_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P4_None_com_180_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P5_None_com_180_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P6_None_com_170_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P7_None_com_160_cache.pkl
Shuffling
Loading cache data from ../../cache/MSRA//MSRA15Importer_P8_None_com_150_cache.pkl
Traceback (most recent call last):
  File "/media/cv/Project/bf/CrossInfoNet/network/MSRA/train_and_test.py", line 36, in <module>
    Seq_test_raw = Seq_all.pop(MID)
TypeError: 'NoneType' object cannot be interpreted as an integer
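
Reading only the traceback, `Seq_all.pop(MID)` fails because `MID` is `None`, and `list.pop` needs an integer index. A hypothetical illustration of the failure mode and a guard; only the two names come from the traceback, the surrounding code is assumed:

```python
# Placeholder list standing in for the loaded MSRA subject sequences P0..P8.
Seq_all = ["P0", "P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

MID = None                       # e.g. the held-out subject id was never set

if MID is None:
    MID = 0                      # choose a concrete subject index before popping

Seq_test_raw = Seq_all.pop(MID)  # pops "P0" instead of raising TypeError
```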

save model

Hello.
Thank you for sharing your excellent project.
I have a question while analyzing your code.
According to your code, it seems you calculate the error on the test set after each epoch and store the model with the smallest error.
Is this the usual way to select the model used for testing?
Thank you
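
For reference, the mechanics the question describes, saving a checkpoint only when the monitored error improves, look roughly like the TF1 sketch below. This is a generic illustration, not the repo's exact loop; `train_one_epoch` and `evaluate` are caller-supplied placeholders, and whether the monitored error should come from the test set or a separate validation split is exactly the point being asked.

```python
import tensorflow as tf

def train_with_best_checkpoint(sess, train_one_epoch, evaluate, num_epochs,
                               ckpt_path="./model_best.ckpt"):
    """Run training and keep only the checkpoint with the lowest monitored error."""
    saver = tf.train.Saver(max_to_keep=1)
    best_error = float("inf")
    for _ in range(num_epochs):
        train_one_epoch(sess)            # caller-supplied training loop body
        error = evaluate(sess)           # caller-supplied error on the held-out set
        if error < best_error:
            best_error = error
            saver.save(sess, ckpt_path)  # overwrite with the new best model
    return best_error
```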

Resnet-50 basemodel structure

Hi,
Thanks again for sharing your code. While going through your codebase, I can see that you created some ResNet blocks like this:
[screenshot: resnet-q1]
where base_depth is 512 for 'block4'.
But you implemented it in a slightly different way:
[screenshot: resnet-q2]
The last two blocks have 64*4 channels each, if I am correct. Is there a reason behind that?
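
To make the two variants concrete, here is how they would be written with the tf.contrib.slim resnet_v1 helpers; that the screenshots use these helpers is an assumption. The standard ResNet-50 block4 uses base_depth=512, i.e. 512*4 = 2048 output channels per bottleneck unit, whereas base_depth=64 gives 64*4 = 256.

```python
from tensorflow.contrib.slim.nets import resnet_v1

# Standard ResNet-50 definition: block4 bottlenecks expand to 512 * 4 = 2048 channels.
standard_block4 = resnet_v1.resnet_v1_block(
    'block4', base_depth=512, num_units=3, stride=1)

# The variant the question describes: 64 * 4 = 256 output channels per unit.
narrow_block4 = resnet_v1.resnet_v1_block(
    'block4', base_depth=64, num_units=3, stride=1)
```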

About train

Hi,
Thanks for sharing your project. I have one question:
Can this project be trained on the ICVL dataset? If so, how?

About Gesture Segmentation

Hello, what is the specific principle behind the gesture segmentation? The article does not elaborate on it. Is there an article that describes it?

Variable crop_joint_idx ?

Hi
Thanks a lot for the code. Can you tell me what the crop_joint_idx variable in data/importer.py represents? It is used as an index, and gtorig[crop_joint_idx] is passed as the center of mass, but how is this crop_joint_idx index decided/chosen?

-Sidharth.

About ICVL dataset for train

Hello, I would like to ask about training on the ICVL dataset.
How should the following parameters be set?
(1) train_root (2) Seq_train (3) Seq_test (4) outdims (5) gt_fing_ht (6) gt_palm_ht (7) gt_fing (8) gt_palm
I hope you can answer this. Thank you very much.

improper GPU utilization

Thanks a lot for sharing your code. While training on the NYU dataset, it seems the GPU is not being utilized properly and the training time is significantly higher than expected. Could you please tell me how you configured the GPU? I tried to configure it as follows:
It is worth noting that I've checked that the GPUs are working, but utilization is very low, almost minimal.

[screenshots: GPU utilization readouts (GPU_uti, GPU_uti1)]
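
Since the screenshots did not survive, here is a common TF1 GPU setup for comparison; it is only an assumption about what was tried, not the authors' configuration. If utilization stays low after this, the bottleneck is usually CPU-side data loading rather than the GPU options (see the first issue above).

```python
import os
import tensorflow as tf

os.environ["CUDA_VISIBLE_DEVICES"] = "0"    # train on a single, chosen GPU

config = tf.ConfigProto()
config.gpu_options.allow_growth = True      # allocate GPU memory on demand
# config.gpu_options.per_process_gpu_memory_fraction = 0.9  # optional hard cap

sess = tf.Session(config=config)
```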

Question about differences in accuracy of real-time demo

Hi, thanks for sharing this great code.
I have a question about a real-time demo.

I used your training source code to train models on the NYU and MSRA datasets.
I ran the real-time demo with the model I got, and it is much less accurate than the demo video on your project page.

Can you tell me which dataset the model in the project demo video was trained on?

test error between paper and the code

Hi, I have some questions about the error results reported in this paper. I am looking forward to your reply!

  1. Can you tell me how you obtained the test error? The dataset used for the evaluation error is the same as the test set, so it looks to me as if you take the best validation result from the training process as the test error.

  2. Why is the best error result of the ablation part in Table 1 8.48 mm on the ICVL dataset, which is not the same as the result in Table 1 (6.73 mm)?

About data Processing

Congratulations on your graduation!
How should I process the data if I want to train/test on other hand datasets? Should I use the base data preprocessing and online data augmentation code provided by DeepPrior++?

train error

Hello! Thank you very much for sharing your work! When I train the model (run train_and_test.py), I run into a problem:
File "../..\data\importers.py", line 1000, in loadSequence
f = open(comlabels, 'r')
FileNotFoundError: [Errno 2] No such file or directory: 'E:/nyu_hand_dataset_v2/dataset//train_NYU.txt'

How to solve it?

Hands 2017 dataloader

Hi, thanks a lot again for your awesome work! Could you please share the data loader for the HANDS 2017 challenge dataset?

Code_workflow

Hi, thanks again for sharing your code. I was going through your codebase and got confused by the following. Could you share your view on these?

  1. In the NYU train_and_test.py script, what is the purpose of declaring cubes (250, 250, 250)? Do those denote a correspondence to the depth map? (See the sketch after this list.)
  2. In the same script, where do you compute the initial feature extraction value, T? Was it Basenet2 in the Basemodel module? I'm planning to use that block for 48x48 input alongside 96x96 in order to concatenate the T feature maps.
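
On question 1: in depth-based hand pose pipelines, a cube of (250, 250, 250) usually denotes the metric size in millimetres of the 3D box cropped around the hand center, which is then projected into a pixel bounding box using the camera intrinsics. A sketch of that conversion, illustrative only and not the repo's code; the NYU-like focal lengths are assumed values:

```python
def cube_to_pixel_box(com_uvd, cube_mm=(250.0, 250.0, 250.0), fx=588.0, fy=587.0):
    """Project a metric crop cube centred on the hand com into a pixel box."""
    u, v, d = com_uvd                       # hand centre: pixel coords + depth in mm
    half_w = cube_mm[0] / 2.0 * fx / d      # mm -> pixels at the hand's depth
    half_h = cube_mm[1] / 2.0 * fy / d
    return (int(u - half_w), int(v - half_h),
            int(u + half_w), int(v + half_h))   # (left, top, right, bottom)
```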

MSRA subsets

Hi @dumyy,

Thanks for your paper. Could you please tell me which subsets of MSRA were used for training/validation/testing?
