
coco-wholebody's People

Contributors

fang-haoshu, jin-s13, luminxu


coco-wholebody's Issues

face keypoints versus body keypoints

Hello again, author,

I have a question about the face keypoint implementation.

You probably know the popular OpenPose and tf-pose implementations for pose detection. When I test those models (Caffe or TensorFlow), they take occluded parts into consideration (for example, if I cover my eyes with my hands, the model won't detect my eyes). However, when I use a facial-landmark model, it tries to detect all the keypoints and best-fit the face. Is this because of the type of dataset? (The COCO dataset accounts for occlusions, while the 68-facial-landmark datasets don't.)

Am I correct that when I train a model on your whole-body dataset, all facial keypoints will be detected even if some facial points are occluded? For side-face angles, for example, it would still detect the ear and jawline on the far side of the face.

Sorry for the beginner question.
Thank you

Question about the float visibility flag of face and hand keypoints

Hi,

1. After carefully analyzing the COCO-WholeBody dataset, I found only about 2,480 faces with face_valid marked true and 1,388 marked false. Is that reasonable? Together that is only four to five thousand, which seems very small compared with the 110k+ images. Is something wrong here? Why are only four to five thousand faces actually annotated?
2. lefthand and righthand seem to have a similar issue. Are there official statistics on how many annotations have valid set to true, including foot?
3. What puzzles me most: many hand and face keypoints have a float-typed v, i.e. the third value of the (x, y, v) triplet indicating whether the point is visible or occluded. Instead of the original COCO values 0/1/2, floats between 0 and 1 are used. What do 0, 1, and the intermediate floats mean? Are they related to probability or confidence? If I train on this data, which range of values should count as a valid annotation? Is there official documentation on this? Thanks.
4. One more thing: the third value of the triplet differs between hand and face. Hand values range from 0 to 1, while face has 0 and values between 1 and 2; if I counted correctly, face has no values strictly between 0 and 1. What does this mean? Looking forward to your reply.

BR

Evaluation result of ground truth is not 1.0

Hi! Thanks for your wonderful work!

However, when I tried your evaluation code, I found something strange.
I took the 'annotations' part of your 'coco_wholebody_val_v1.0.json' file, set every element's score to 1.0, and passed it to the evaluate_mAP function as res_file (with 'coco_wholebody_val_v1.0.json' still as gt_file). Confusingly, the resulting AP is not 1.0 and is in fact very low.

Is there anything important that I missed? Here is the result:
[screenshots of the evaluation output]
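The conversion step described above can be sketched as follows; the field names mirror the standard COCO annotation format, and `gt_to_results` is a hypothetical helper for illustration, not part of the repository's code.

```python
def gt_to_results(annotations):
    """Attach a perfect detection score to every ground-truth instance."""
    results = []
    for ann in annotations:
        res = dict(ann)      # shallow copy keeps image_id, category_id, keypoints, ...
        res["score"] = 1.0   # pretend the "detector" is perfectly confident
        results.append(res)
    return results

anns = [{"image_id": 1, "category_id": 1, "keypoints": [10.0, 20.0, 2]}]
print(gt_to_results(anns)[0]["score"])  # 1.0
```

Note that even with perfect scores, the evaluator's handling of invisible keypoints and crowd regions can affect the result, so a self-evaluation below 1.0 is worth double-checking against the evaluation settings.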

Foot keypoint sigmas

Hello!
Thank you for making this nice work available.
In myeval_foot.py, line 163, you use the following sigmas:
sigmas = np.array([0.68, 0.66, 0.66, 0.92, 0.94, 0.94]) / 10.0
So you are using different sigmas for the left and right foot. Could you let me know why this is the case?
All the other sigmas are symmetric, so that corresponding left and right body parts share the same sigma.
Best,
Duncan
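For context, the per-keypoint term that these sigmas feed into can be sketched as below, following the standard COCO OKS formulation (with k_i = 2·sigma_i and s² taken as the object's segment area); the function name is just for illustration.

```python
import math

def keypoint_oks(d, area, sigma):
    """Per-keypoint OKS term: exp(-d^2 / (2 * s^2 * k^2)), with k = 2*sigma.

    d     -- Euclidean distance between predicted and ground-truth keypoint
    area  -- object segment area (s^2 in the COCO formulation)
    sigma -- per-keypoint normalization constant, e.g. 0.068 for a foot point
    """
    k = 2.0 * sigma
    return math.exp(-(d ** 2) / (2.0 * area * k ** 2))

# A perfect prediction scores 1.0; a larger sigma forgives larger errors.
print(keypoint_oks(0.0, 10000.0, 0.068))  # 1.0
```

This also shows why asymmetric sigmas would matter: the same pixel error on the left and right foot would be scored differently.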

tensorflow implementation

Hello author,

Thank you for the whole-body dataset. Can this dataset be used to train a model with TensorFlow? Will it slow down real-time performance in terms of FPS?

thank you

Order of foot_kpts in the wholebody JSON

Hello,
A few questions:
1. In coco_wholebody_XXX_v1.0.json, are the coordinates in "foot_kpts" ordered as "left foot: big toe 18, small toe 19, heel 20; right foot: big toe 21, small toe 22, heel 23"? Or in some other order, such as "right foot: heel 20, big toe 18, small toe 19; left foot: heel 23, big toe 21, small toe 22"?
"foot_kpts": [
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0
]
2 "lefthand_kpts" "righthand_kpts" "face_kpts" 在json里面保存的坐标点顺序 都是按照如下图 标号从小到大排列的吗?
image
期待您的答复,多谢
BR
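Whatever the official answer turns out to be, the flat [x, y, v, ...] list can be regrouped into named triplets with a small helper; the name ordering below follows the first ordering proposed in the question and is only an assumption, not the confirmed answer.

```python
# Assumed ordering (left foot first, big toe / small toe / heel) -- unverified.
FOOT_NAMES = ["big_toe_L", "small_toe_L", "heel_L",
              "big_toe_R", "small_toe_R", "heel_R"]

def parse_kpts(flat, names):
    """Regroup a flat [x1, y1, v1, x2, y2, v2, ...] list into named (x, y, v) triplets."""
    triplets = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    return dict(zip(names, triplets))

flat = [0.0] * 18  # the all-zero example from the annotation above
parsed = parse_kpts(flat, FOOT_NAMES)
print(parsed["heel_R"])  # (0.0, 0.0, 0.0)
```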

About evaluation code

In myeval_righthand.py, I found this line (L165), which multiplies the sigmas by 2. However, there is no similar step in myeval_lefthand.py. Is this a mistake?

HRNet hand AP computation

[benchmark table screenshot]
Hello, I understand that this AP is the average of the left-hand and right-hand APs. What I would like to ask: are the left-hand and right-hand APs produced by a single model, or by two separate models (one per hand) that are tested separately and then averaged? Best wishes with your work!

Annotation issue

Thank you for sharing this wonderful project!

I just noticed that the visibility flags of the lefthand and righthand keypoints are float numbers, as the dict item below shows:

        "lefthand_kpts": [
            237.0,
            426.0,
            0.10405432432889938,
            245.0,
            428.0,
            0.20745894312858582,
            253.0,
            430.0,
            0.20745894312858582,
            261.0,
            433.0,
            0.5343613624572754,
            269.0,
            438.0,
            0.2143213450908661,
            265.0,
            429.0,
            0.12357126176357269, ...]

Why are they in a different format instead of the original flags [0, 1, 2]?
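If integer flags are needed downstream, one pragmatic (unofficial) workaround is to threshold the float values; the 0.3 cutoff below is an arbitrary assumption, not something documented by the dataset.

```python
def binarize_visibility(kpts, thresh=0.3):
    """Map float visibility values in a flat [x, y, v, ...] list to 0/1 flags.

    The threshold is an arbitrary assumption; tune it on your own data.
    """
    out = list(kpts)
    for i in range(2, len(out), 3):  # every third entry is a visibility value
        out[i] = 1 if out[i] >= thresh else 0
    return out

kpts = [237.0, 426.0, 0.104, 261.0, 433.0, 0.534]
print(binarize_visibility(kpts))  # [237.0, 426.0, 0, 261.0, 433.0, 1]
```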

WholeBody-Hand (WBH) Where?

Hey bro, is WBH a separate dataset cropped from COCO-WholeBody? If so, where can I get it? Wish you the best at work!

pip install xtcocotools not working

Hello guys,

Your work seems pretty amazing. I wanted to give it a look and tried to install it with
pip install xtcocotools
as described in the README; however, I only got this response:

pip install xtcocotools
ERROR: Could not find a version that satisfies the requirement xtcocotools (from versions: none)
ERROR: No matching distribution found for xtcocotools

Is there another way to install it by chance?
Thank you for your help

Laurent
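When PyPI reports "from versions: none", it usually means no published wheel matches the local Python version or platform. Installing from source sometimes works; the GitHub URL below is an assumption taken from the project's organization, so check the README for the canonical one.

```shell
# Make sure the build toolchain is current, then try a source install.
pip install --upgrade pip setuptools wheel
pip install numpy cython                 # typical build-time dependencies
# Repository URL assumed -- verify against the project's README.
pip install git+https://github.com/jin-s13/xtcocotools.git
```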

How do I run the code on new images?

Thanks for releasing the code for your great work. I want to generate COCO-style JSON files for my own images. Could you provide the steps to do so?

Calculating Sigmas

Thank you very much for your repo!

How do you calculate the sigmas for each keypoint?

Float keypoint visibility annotation

Hi, thank you very much for making this dataset. I've got a question about annotations.

Some of the annotations (for example, annotation id=193390, lefthand_kpts) have float numbers for the visibility flag instead of the integers described in the DATA_FORMAT file. How should these be interpreted? Thanks.


COCO-Wholebody dataset with detection networks

Hi,

I read in the supplementary material of your ECCV paper that you compared ZoomNet results with face and hand bounding-box detection from Faster-RCNN. I was wondering whether you ever tried Faster-RCNN on the whole dataset, or whether you tried other detection networks.

Thanks.

COCO dataset - head boxes

Hi, in the README file you outlined some popular datasets and the annotations each of them provides.
For the COCO 2014 dataset you checked that head boxes are available. However, I cannot find head boxes in its annotations; it only contains 80 categories (none of them is head) and 17 person keypoints.
Am I looking in the wrong place, or is this a mistake in the README file?

Statistics of COCO-WholeBody

First of all, thank you so much for open sourcing this dataset.

I am trying to get some statistics for the dataset, specifically the number of annotated body-part bounding boxes. According to Figure 3b in the paper, the numbers of left-hand and right-hand bounding boxes are around 120-130K. Do you count only hand instances whose valid field is True? If I count only hand bounding boxes with valid set to True, I obtain around 40K left-hand and 40K right-hand boxes.

Here is the code I use to obtain these statistics:

import json

num_foot_kpts = 6  # big toe, small toe, heel for each foot

splits = ['train', 'val']

for split in splits:
    with open('coco_wholebody_{}_v1.0.json'.format(split)) as fp:
        d = json.load(fp)
    annotations = d['annotations']
    stats = {}
    stats['body'] = len(annotations)
    stats['lefthand'], stats['righthand'], stats['face'], stats['foot'] = 0, 0, 0, 0
    for ann in annotations:
        lefthand_valid = ann['lefthand_valid']
        righthand_valid = ann['righthand_valid']
        face_valid = ann['face_valid']
        foot_valid = ann['foot_valid']
        foot_box = [0.0, 0.0, 0.0, 0.0]
        foot_kpts = ann['foot_kpts']
        if foot_valid:
            # Consider only reliable foot keypoints (v > 0) for generating a bounding box
            foot_kpts_x = [foot_kpts[3*i] for i in range(num_foot_kpts) if foot_kpts[3*i+2]]
            foot_kpts_y = [foot_kpts[3*i+1] for i in range(num_foot_kpts) if foot_kpts[3*i+2]]
            x1, x2 = min(foot_kpts_x), max(foot_kpts_x)
            y1, y2 = min(foot_kpts_y), max(foot_kpts_y)
            w, h = x2 - x1, y2 - y1
            foot_box = [x1, y1, w, h]
        stats['lefthand'] += lefthand_valid
        stats['righthand'] += righthand_valid
        stats['face'] += face_valid
        stats['foot'] += foot_valid
    print("=============================")
    print("Statistics for {} split:".format(split))
    print(stats)
    print("=============================")

Am I missing something here?

Thanks

how to get multi person heatmaps/confidence maps

Hello authors,

Sorry, I want to ask a question related to multi-person detection.
Is the COCO person dataset specialized for multi-person scenes?
I have trained a simple FCN model for face-keypoint regression. Can you please tell me how to get heatmaps for multiple faces in a bottom-up approach? My current model takes a 96x96x1 image as input and outputs 96x96x15 heatmaps for 15 keypoints. I trained it on a dataset of single-face images. Do I need a dataset with multiple faces? And do I also need bounding-box or mask information?

Please give me your advice.
Thank you

nice work, missing keypoints

This is nice work.

However, I found that some clear images (e.g., 000000385029.jpg, 000000524456.jpg) include hands, but the keypoint labels for those hands are missing. Will you update the label files in the future, or leave them as they are?

keypoints name list

Hi,
Is there a complete name list of all the different keypoints? I want to make something like a label list. Thanks.

BR
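No official name list is quoted in this thread, but the 133 keypoints break down as 17 body + 6 foot + 68 face + 21 per hand, so a placeholder list can be generated. The body and foot names below follow common COCO conventions; the face and hand names are generic index names invented here, not official labels.

```python
BODY = ["nose", "left_eye", "right_eye", "left_ear", "right_ear",
        "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
        "left_wrist", "right_wrist", "left_hip", "right_hip",
        "left_knee", "right_knee", "left_ankle", "right_ankle"]
FOOT = ["left_big_toe", "left_small_toe", "left_heel",
        "right_big_toe", "right_small_toe", "right_heel"]
# Generic placeholder names -- the dataset numbers these points, it does not name them.
FACE = [f"face_{i}" for i in range(68)]
LEFT_HAND = [f"left_hand_{i}" for i in range(21)]
RIGHT_HAND = [f"right_hand_{i}" for i in range(21)]

KEYPOINT_NAMES = BODY + FOOT + FACE + LEFT_HAND + RIGHT_HAND
print(len(KEYPOINT_NAMES))  # 133
```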

Why do foot keypoints tend to merge in ZoomNet and ZoomNAS?

From the qualitative results published in your ZoomNet paper (figures 12-13) and ZoomNAS paper (figure 11), most of them seem to have the big-toe and small-toe keypoints merged into one spot, usually landing on the middle toe. Occasionally they split, but the two points still almost overlap.

Is this unintended behavior?
If so, is it caused by errors in the dataset or by a problem in the model?

Matching of foot keypoints to body parts

@jin-s13 @Fang-Haoshu @Canwang-sjtu @luminxu Hi, first of all, thanks for open-sourcing the COCO-WholeBody annotations. I searched Google and found the matching of actual body parts to the COCO 'keypoints': [x1,y1,v1,...,xk,yk,vk] (k=17):
['Nose', 'Leye', 'Reye', 'Lear', 'Rear', 'Lsho', 'Rsho', 'Lelb', 'Relb', 'Lwri', 'Rwri', 'Lhip', 'Rhip', 'Lkne', 'Rkne', 'Lank', 'Rank']

However, I cannot find the matching list for foot keypoints. Could you tell me which body part corresponds to each entry in the COCO 'foot_kpts'? Looking forward to your reply. Thanks in advance.

val annotations question

I noticed that your validation annotations are not entirely clear.
Why do the lefthand keypoints have confidence-like values (e.g., 0.731)? Are these annotations actually model outputs?

Tagging wholebody

Hi, is there any tagging tool that supports the COCO whole-body format?
We want to generate a dataset in this format for a use case, but we can't find a good tagger.
Any additional advice on generating labels and then fixing them is also welcome.
Thanks!

Is the real annotation like this? (Including a large number of tiny bounding boxes)

Hello, thank you very much for your work.

While checking the training set, I found that the annotations contain many samples with tiny boxes whose keypoint information is missing. Is this reasonable?

As the picture shows,
[train_batch1 visualization]

Should I filter out these samples with tiny boxes? The AP of my trained model is very low.

Question about benchmark AP and AR

Good evening. First of all, many thanks and compliments for your work. I have a few questions about the AR and AP values given in the COCO-WholeBody benchmark table:

  • Is AP computed as in standard COCO evaluation, i.e. the mean of the per-class AP over the 10 threshold values from 0.5 to 0.95 in steps of 0.05?
  • Is AR computed with a maximum of 100 predicted boxes per image and no area restriction on those boxes?

Thanks in advance, and have a good day.
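For reference, the averaging that the first question refers to looks like this; for keypoints the thresholds are OKS values rather than box IoU, and the per-threshold AP values below are made up purely to illustrate the averaging step.

```python
# The 10 similarity thresholds 0.50, 0.55, ..., 0.95 over which AP is averaged.
thresholds = [round(0.50 + 0.05 * i, 2) for i in range(10)]
print(thresholds)  # [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]

# Hypothetical per-threshold AP values, just to show the averaging.
ap_per_thresh = [0.90, 0.85, 0.82, 0.78, 0.72, 0.65, 0.55, 0.45, 0.33, 0.20]
mean_ap = round(sum(ap_per_thresh) / len(ap_per_thresh), 3)
print(mean_ap)  # 0.625
```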

Whole Hand AP

Hey bro, I got the AP and AR of the left and right hands separately, so how do I get the AP and AR of the whole hand, like the evaluation results of the other methods?
[evaluation results screenshot]
