
eval-mpii-pose's Introduction

eval-mpii-pose

Scripts for evaluating results on the MPII human pose dataset. "Single person pose estimation" is the only type of evaluation currently supported.

Disclaimer: This is an unofficial repository, I am not from MPI and I was not involved in the creation of the dataset.

Input format

Predictions are expected to have the following format:

  • Must be a Matlab (.mat) or HDF5 (.h5) file
    • Must have one field, preds, which is the joint predictions tensor
    • Tensor size must be [2 x 16 x n] or [n x 16 x 2]
  • Must correspond to one of the following subsets: train, val, test
    • See annot/{train,valid,test}.h5 for which examples are in each of these subsets
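As a sanity check before running the evaluation, the expected tensor layout can be verified programmatically. A minimal sketch in Python (the `is_valid_preds` helper is illustrative, not part of this repository):

```python
import numpy as np

def is_valid_preds(preds):
    """Return True if `preds` matches one of the accepted layouts:
    [2 x 16 x n] or [n x 16 x 2] (16 MPII joints, n examples)."""
    if preds.ndim != 3:
        return False
    d0, d1, d2 = preds.shape
    return (d0 == 2 and d1 == 16) or (d1 == 16 and d2 == 2)
```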

Predictions produced by the following repositories meet these requirements:

Metrics

PCKh

The PCKh performance metric is the percentage of joints with predicted locations that are no further than half of the head segment length from the ground truth.

"PCKh total" excludes the pelvis and thorax joints from the calculation, presumably because they are very easy to predict given that the approximate person center is provided.
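For illustration, a simplified PCKh computation might look like the following Python sketch. It assumes an [n x 16 x 2] layout, per-example head segment lengths, and full joint visibility; the function name and signature are hypothetical, and unlike "PCKh total" it averages over all 16 joints:

```python
import numpy as np

def pckh(pred, gt, head_sizes, threshold=0.5):
    """Percentage of joints whose predicted location lies within
    `threshold` head segment lengths of the ground truth.

    pred, gt:   [n x 16 x 2] joint coordinates
    head_sizes: [n] head segment lengths (one per example)
    """
    dists = np.linalg.norm(pred - gt, axis=2)        # [n x 16] pixel errors
    normalized = dists / head_sizes[:, np.newaxis]   # errors in head lengths
    return 100.0 * (normalized <= threshold).mean()
```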

Scripts

evalMPII.m

Loads predictions from Matlab or HDF5 files and compares them with ground-truth labels to calculate accuracy metrics (e.g. PCKh). You will want to edit getExpParamsNew.m to add new sets of predictions, and evalMPII.m to specify which predictions to include and which subset (train/val) to use.

prepareTestResults.m

Loads flat test set predictions and prepares them for submission.

Reference predictions

The preds/reference directory contains multiple validation set prediction files generated by established pose estimation models. You can compare against these predictions using evalMPII.m.

NOTE: Since the reference predictions are for the validation set, they are not compatible with the prepareTestResults.m script.

File origins

To keep evaluation consistent with existing work, many files in this repository were copied verbatim from other sources.

eval-mpii-pose's People

Contributors

anibali

eval-mpii-pose's Issues

Question in prepareTestResults.m

Thanks for sharing your code.
In prepareTestResults.m, the correlation between person centers and thorax predictions is reported as:
0.9893
0.9083
What are these two values used for?

About the comparison between anewell's and bearpaw's predictions on the validation set

Hi,

Thanks for sharing your code. Did you run https://github.com/anewell/pose-hg-demo/ to get anewell's predictions on the validation set? If so, I think the comparison is not entirely fair, because the post-processing trick in https://github.com/anewell/pose-hg-train/blob/master/src/util/pose.lua

```
-- Very simple post-processing step to improve performance at tight PCK thresholds
for i = 1,p:size(1) do
    for j = 1,p:size(2) do
        local hm = tmpOutput[i][j]
        local pX,pY = p[i][j][1], p[i][j][2]
        scores[i][j] = hm[pY][pX]
        if pX > 1 and pX < opt.outputRes and pY > 1 and pY < opt.outputRes then
            local diff = torch.Tensor({hm[pY][pX+1]-hm[pY][pX-1], hm[pY+1][pX]-hm[pY-1][pX]})
            p[i][j]:add(diff:sign():mul(.25))
        end
    end
end
p:add(0.5)
```

is not used in the getPreds function of https://github.com/anewell/pose-hg-demo/blob/master/util.lua. Besides, the transform function in https://github.com/anewell/pose-hg-train/blob/master/src/util/img.lua is somewhat different from the transform function in https://github.com/anewell/pose-hg-demo/blob/master/img.lua.

Bearpaw's predictions are based on the code in https://github.com/anewell/pose-hg-train rather than https://github.com/anewell/pose-hg-demo, which should give them an advantage.

A question about test.json

I used the test.json (7000+ samples) provided by HRNet to generate pred.mat, then used your code to convert it to pred_mpii_keypoints.mat, but during this process an error was raised because the number of samples did not match (10000+ expected). In other words, the test.json provided by HRNet does not include all test samples. So I used the official test.h5 to prepare pred_mpii_keypoints.mat. However, I found that the center and scale values differ between test.json (7000+) and test.h5 (10000+). Does this have a significant influence on testing? Or can I directly use test.h5 to submit?

Converting .mat to .csv file

Hi @anibali and thanks for your work!

Can you share the script for converting mpii_human_pose_v1_u12_1.mat to the .h5 files? Do you know if there is code for converting mpii_human_pose_v1_u12_1.mat to a .csv file?
I'm asking because I want to train my model exclusively on one category of movement (act_id) within the MPII dataset. So I'm thinking of converting the dataset to a .csv file in order to delete the unwanted data manually, then bringing it back into this .csv or .h5 format. What do you think of this idea? Any hints for solving it?

Question regarding `prepareTestResults.m`

Hi @anibali ,

I looked at the code you referenced for plotting. Something confuses me: for validation and testing they use the same indices from MPII, and as mentioned in the stacked hourglass paper they report results on the validation set because the test annotations are not available. Yet you have provided an annot folder that separates the train, val, and test indices. I'm a little confused; could you please clarify?

Actual size of single person test set

Hi,
You are stating that this repo is only for single person evaluation. What confuses me is the fact that the test.h5 database contains 11,731 persons, while the official mpii_human_pose_v1_u12_1.mat file has a total of 11,823 persons annotated as test. Of these, only 7,247 are annotated as single persons. So, I am wondering which format the submission would actually need. I find the documentation quite sparse, but if I understood this (http://human-pose.mpi-inf.mpg.de/#evaluation) correctly, only the 7,247 persons should be predicted in the single-person case.
Could you explain why there are 11,731 persons annotated in test.h5?

The number of ground-truth entries does not match the detections

Hi,

Thanks for sharing your code. I want to evaluate single-person keypoints on our own dataset, but I ran into an issue: does the number of ground-truth entries have to match the number of detections? Thanks very much.
```
uv_error = pos_pred_src - pos_gt_src
ValueError: operands could not be broadcast together with shapes (14,2,1743) (1917,14,2)
```

PCKh

Hello, what does PCKh mean? For example, given a set of predicted (x, y) coordinates from a DNN and the ground-truth (x, y) coordinates, how is it calculated?

Help

"Assertion failed." is shown when I specify the predictions in evalMPII.m.

Scripts
Loads predictions from Matlab or HDF5 files and compares them with ground truth labels to calculate accuracy metrics (eg PCKh). You will want to edit getExpParamsNew.m to add new sets of predictions, and evalMPII.m to specify which predictions to include and which subset (train/val) to use.


I have edited getExpParamsNew.m to add a new prediction, such as:

```
case 6
    p.name = 'XXX';
    p.predFilename = 'preds/reference/XXX.mat';
```

And I edited evalMPII.m:

```
PRED_IDS = [1, 2, 5, 6];
```

But it shows the error:

```
Error using evalMPII (line 82)
Assertion failed.
```

Question about prediction .mat generation

When we do multi-person estimation and generate the prediction .mat file:

```
% Set predicted x_pred, y_pred coordinates and prediction score for each body joint
pred(imgidx).annorect(ridx).annopoints.point(pidx).x = x_pred;
pred(imgidx).annorect(ridx).annopoints.point(pidx).y = y_pred;
pred(imgidx).annorect(ridx).annopoints.point(pidx).score = score;
```

Should ridx be in rectidxs_multi_test (extracted by getMultiPersonGroups(groups,RELEASE,false))? Or is it just an iterator over the people in an image according to my prediction results?
