
Comments (4)

KindleHe commented on May 23, 2024

Maybe something went wrong in my last test!
I re-ran it and got the correct results, as shown below:
[image: re-run WIDER FACE evaluation results]
We can conclude:

  1. The 1-GPU training script provided in your README can reproduce the accuracy reported in the paper.
  2. https://github.com/wondervictor/WiderFace-Evaluation gives results about 2~4% lower than the official eval tools.


vitoralbiero commented on May 23, 2024

Hello,

The eval tools are the Matlab scripts provided with the WIDER FACE dataset.
They are the official scripts for computing mAP and plotting curves on WIDER FACE.
You can find a short description of how to use them in #8.

Although I haven't tested it, you can also try unofficial Python code from GitHub. One example can be found at https://github.com/wondervictor/WiderFace-Evaluation
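
For reference, here is a rough sketch of how that repo is typically run; the exact entry points and flags below are assumptions based on its README, so please double-check there:

git clone https://github.com/wondervictor/WiderFace-Evaluation
cd WiderFace-Evaluation
# build the Cython box-overlap extension the evaluator needs
python3 setup.py build_ext --inplace
# -p: folder with your per-event detection .txt files, -g: the ground truth folder
python3 evaluation.py -p <your_prediction_dir> -g <ground_truth_dir>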

Hope this helps.


KindleHe commented on May 23, 2024

@vitoralbiero
As reported in the paper, the validation AP is:

Easy   Val AP: 0.908
Medium Val AP: 0.899
Hard   Val AP: 0.847

However, the results from the scripts in https://github.com/wondervictor/WiderFace-Evaluation are much lower than those numbers.

Easy   Val AP: 0.8469243842505618
Medium Val AP: 0.827844108399036
Hard   Val AP: 0.7493400384483289

I used the model mentioned in your README, with the following command:

python3 evaluation/evaluate_wider.py \
--dataset_path datasets/WIDER_Face/WIDER_val/images/ \
--dataset_list datasets/WIDER_Face/wider_face_split/wider_face_val_bbx_gt.txt \
--pretrained_path models/img2pose_v1.pth \
--output_path results/WIDER_FACE/Val/

I also used the eval tools provided by WIDER FACE to evaluate your published model img2pose_v1.pth:

Easy   Val AP: 0.876
Medium Val AP: 0.855 
Hard   Val AP: 0.79

No matter which evaluation script is used, the img2pose_v1.pth model cannot reach the precision reported in the paper. Could you please advise? Thanks so much!


vitoralbiero commented on May 23, 2024

Hi @KindleHe,

I re-ran the evaluate_wider.py script using the img2pose_v1.pth model and got the same results as reported in our paper using eval_tools.
Did you change anything inside evaluation/evaluate_wider.py or are you using it as provided?
When you ran wider_eval.m, did it give you any errors?
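
As another quick sanity check, you could confirm that the detection files written by evaluate_wider.py look like the standard WIDER FACE submission format. I'm assuming the usual layout here: per-event subfolders, and in each .txt file the image name, the detection count, then one "x y w h score" line per box.

# peek at the first few lines of one generated detection file
find results/WIDER_FACE/Val/ -name "*.txt" | head -n 1 | xargs head -n 5

If that layout is off, both eval_tools and the Python evaluator can end up scoring the detections incorrectly.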

