
lrp's Introduction

This repository is deprecated. Please see LRP-Error, a generalised version of LRP for various visual detection tasks.

LRP (Localization Recall Precision) Performance Metric & Thresholder for Object Detection

This repository contains Python and MATLAB implementations of the LRP object detection performance metric. The repository supports both the PASCAL-VOC and MS COCO datasets. Please cite the following paper if you use LRP.

Kemal Oksuz, Baris Can Cam, Emre Akbas, Sinan Kalkan, "Localization Recall Precision (LRP): A New Performance Metric for Object Detection," In: European Conference on Computer Vision (2018).

In a nutshell, LRP is an alternative to average precision (AP), which is the area under the recall-precision curve and is currently the dominant performance measure used in object detection.

LRP Toy Example

In the figure above, three different object detection results are shown (for an image from the ILSVRC 2015 dataset) with very different RP (recall-precision) curves. Note that they all have the same AP; AP is not able to identify the difference between these curves. In (a), (b) and (c), red, blue and green denote ground-truth bounding boxes, true-positive detections and false-positive detections, respectively. The numerical values in the images denote confidence scores. (d), (e) and (f) show the RP curves, AP and oLRP results for the corresponding detections in (a), (b), (c). Red crosses denote Optimal LRP points.

What does LRP provide?

  1. The Performance Metric for the Object Detection Problem: Average precision (AP), the area under the recall-precision (RP) curve, is the standard performance measure for object detection. Despite its wide acceptance, it has a number of shortcomings, the most important of which are (i) the inability to distinguish very different RP curves, and (ii) the lack of a direct measure of bounding box localization accuracy. ''Localization Recall Precision (LRP) Error'' is a new metric specifically designed for object detection. LRP Error is composed of three components related to localization, false negative (FN) rate and false positive (FP) rate. Based on LRP, we introduce ''Optimal LRP'' (oLRP), the minimum achievable LRP error, representing the best achievable configuration of the detector in terms of recall-precision and the tightness of the boxes. In our experiments, we show that, for state-of-the-art (SOTA) object detectors, Optimal LRP provides richer and more discriminative information than AP. (A minimal sketch of the computation follows this list.)

  2. LRP As a Thresholder: In contrast to AP, which considers precisions over the entire recall domain, Optimal LRP determines the ''best'' confidence score threshold for a class, which balances the trade-off between localization and recall-precision. We demonstrate that the best confidence score thresholds vary significantly among classes and detectors. Moreover, we present LRP results of a simple online video object detector which uses a SOTA still-image object detector, and show that class-specific optimized thresholds improve accuracy compared with the common approach of using one general threshold for all classes.
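To make the definition concrete, below is a minimal Python sketch of the LRP error for a single class at a fixed confidence-score threshold, following the three-component formulation above. It is illustrative only, not the repository's API; the function and variable names are our own.

import numpy as np

def lrp_error(tp_ious, n_fp, n_fn, tau=0.5):
    """LRP error for one class at a fixed confidence-score threshold.

    tp_ious: IoU values of the true-positive detections (each >= tau)
    n_fp:    number of false-positive detections
    n_fn:    number of missed ground-truth boxes (false negatives)
    tau:     minimum IoU for a detection to count as a true positive
    """
    tp_ious = np.asarray(tp_ious, dtype=float)
    n_tp = len(tp_ious)
    total = n_tp + n_fp + n_fn
    if total == 0:
        return None  # class absent from both ground truth and detections
    # Localization term: each TP contributes its box-tightness error,
    # scaled so that a TP at exactly IoU == tau costs as much as an FP or FN.
    loc = np.sum((1.0 - tp_ious) / (1.0 - tau))
    return (loc + n_fp + n_fn) / total

# Optimal LRP (oLRP) is the minimum of lrp_error over all confidence-score
# thresholds, and the minimising threshold is the class-specific optimal
# threshold used by the thresholder described in item 2.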

Getting Started:

MS COCO dataset

The official MS COCO toolkit has been modified for LRP metric evaluation, so you will find a folder organization similar to that of the official toolkit. Currently, you can find the 2017 train/val annotations under the annotations folder of cocoLRPapi-master and a Faster R-CNN result file under the results folder of cocoLRPapi-master.

Pascal VOC dataset

The Python implementation has been released. Please see below.

What the implementation provides

In either case, besides the parameters of the evaluation, this implementation provides four different sets of outputs:

  1. LRP values and LRP components for each class and each confidence score threshold
  2. oLRP values and oLRP components for each class
  3. moLRP value and moLRP components for the detector
  4. Optimal Class Specific Thresholds for each class

Evaluation on MS COCO:

First clone/download the "cocoLRPapi-master" folder:

Using Python:

  1. Execute the command "make" from the terminal in the PythonAPI folder.
  2. For the demo, just run the evalDemoLRP.py script to test whether your computer satisfies the requirements.
  3. In order to test with your own ground truth and detection results, set the following 4 parameters in the evalDemoLRP.py script: the ground truth file path (line 8), the detection result file path (line 11), the tau parameter, i.e. the minimum IoU to validate a detection (line 14), and finally the DetailedLRPResultNeeded parameter (0 or 1). If DetailedLRPResultNeeded is 1, you will see all four sets of outputs in the terminal; if it is 0, you will see outputs 2-4 (oLRP, moLRP values and optimal class-specific thresholds). A sketch of the equivalent programmatic workflow follows this list.
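Since this toolkit is a modified copy of the official MS COCO toolkit, the programmatic workflow presumably mirrors the usual pycocotools pattern. The sketch below is an assumption based on that pattern; the COCOevalLRP class name comes from pycocotools/cocoevalLRP.py in this repository, but the exact constructor and method signatures are not verified here, so check evalDemoLRP.py for the authoritative calls.

from pycocotools.coco import COCO
from pycocotools.cocoevalLRP import COCOevalLRP  # class name taken from this repo

# Placeholder paths; point these at your own ground truth and detections.
cocoGt = COCO('annotations/instances_val2017.json')
cocoDt = cocoGt.loadRes('results/your_detections.json')

evalLRP = COCOevalLRP(cocoGt, cocoDt)  # constructor signature is an assumption
evalLRP.evaluate()
evalLRP.accumulate()
evalLRP.summarize()  # prints oLRP, moLRP and the optimal class-specific thresholds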

Using MATLAB:

  1. For the demo, just run the evalDemoLRP.m script to test whether your computer satisfies the requirements.
  2. In order to test with your own ground truth and detection results, set the following 3 parameters in the evalDemoLRP.m script: the ground truth file path (line 7), the detection result file path (line 10), and the tau parameter, i.e. the minimum IoU to validate a detection (line 21).

Note that MS COCO uses JSON files as the standard detection & annotation format. See http://cocodataset.org for further information.

Evaluation on PASCAL-VOC:

Evaluation steps for PASCAL-VOC.

Preparation:

Clone this repository to your local machine.

  git clone https://github.com/cancam/LRP

Dataset:

This repository follows the official structure of the PASCAL-VOC development kit.

  1. Download the training, validation (optional) and test data and the VOC devkit:

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar

  2. Extract all content:

tar xvf VOCtrainval_06-Nov-2007.tar
tar xvf VOCtest_06-Nov-2007.tar
tar xvf VOCdevkit_08-Jun-2007.tar

  3. The directory should now have the following basic structure:

$VOCdevkit/                           # development kit
$VOCdevkit/VOCcode/                   # VOC utility code
$VOCdevkit/VOC2007                    # image sets, annotations, etc.
# ... and several other directories ...

  4. Either put the entire PASCAL-VOC evaluation kit under the pascal-voc-lrp directory or, better, create a symbolic link to "VOCdevkit" under the pascal-voc-lrp directory with the following command:

ln -s $VOCdevkit VOCdevkit

Using Python:

Execution:

The pascal-voc-lrp evaluation kit can be executed in two ways: either provide a pickle file containing all the detections, or provide official PASCAL-VOC class-wise text files. The pickle file should have the same format as the example provided (see: ${pascal-voc-lrp}/results/det/detections_voc_test_base.pkl). Evaluation results are written to a text file containing class-wise and overall results, by default ${pascal-voc-lrp}/results/eval/lrp_results.txt. (A quick way to inspect the example pickle is sketched below.)
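If you are unsure whether your own pickle matches the expected layout, a quick way to compare it against the bundled example (standard-library pickle only; the path is the example file mentioned above):

import pickle

# Load the example detections file and inspect its top-level structure,
# then mirror that structure in your own pickle.
with open('results/det/detections_voc_test_base.pkl', 'rb') as f:
    example = pickle.load(f)

print(type(example))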

Example Execution:

The toolkit can be tested using the example pickle file that is located under "/results/det".

python pascal_voc --use_pickle --boxes_path ${lrp_eval}/results/det/detections_voc_test_base.pkl

Alternatively, the framework can evaluate detections given in the standard PASCAL-VOC text-file format.

python pascal_voc

Arguments:

--use_pickle: Evaluate model detections directly from a saved pickle file.

--boxes_path: Path to the previously mentioned pickle file.

--tau: IoU threshold to validate a detection.

--save_results: Path of the text file that will contain class-wise and overall results under the LRP and AP metrics.

--set: Which set to perform evaluation on (train, val, test).

--year: Which year to perform evaluation on (e.g. VOC2007, VOC2012).

--comp: Whether to use competition mode.

--devkit_path: Specify a different devkit path.
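Putting the flags together, an illustrative invocation (the --tau, --set and --year values below are examples, not verified defaults):

python pascal_voc --use_pickle \
  --boxes_path ${lrp_eval}/results/det/detections_voc_test_base.pkl \
  --tau 0.5 --set test --year VOC2007 \
  --save_results results/eval/lrp_results.txt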

Using MATLAB:

MATLAB steps...(Coming Soon)

Requirements:

Python 2.7 or MATLAB (our implementation is based on MATLAB R2017b).

Citation

If you find this code useful for your research, please consider citing our paper:

@Inproceedings{LRP-ECCV18,
  Title          = {Localization Recall Precision (LRP): A New Performance Metric for Object Detection},
  Author         = {Oksuz, Kemal and Cam, Baris Can and Akbas, Emre and Kalkan, Sinan},
  Booktitle      = {European Conference on Computer Vision (ECCV)},
  Year           = {2018}
}

lrp's People

Contributors

cancam, eakbas, sinankalkan


lrp's Issues

PASCAL's moLRP divides by _background_ class too

Hey there,

First off, interesting work and thanks for the implementation!

Second, I've noticed that in the PASCAL version (I did not look at the COCO version, as it is not relevant for me) you calculate moLRP by taking the mean over all classes (self.resultsLRP.olrp.mean()).
Should you not exclude the __background__ class, which is always 0, when calculating the mean?
E.g. by (self.resultsLRP.olrp[1::]).mean()?

Pascal VOC support

The GitHub page states that Pascal VOC support should be coming soon. However, there haven't been any commits lately. Are the plans to support Pascal VOC still on?

Bug: oLRP

I was testing the LRP code with the GT and predictions of just one image. The first five categories have no predictions or ground-truth labels, and therefore I get a -1 value, which makes sense. However, in "2. Mean Optimal LRP and Components" the average is simply taken over all the values, resulting in negative mean optimal values.

Is it okay to have negative values?
Ideally, classes that do not exist should not be considered in the mean calculation, so I guess it should be the sum of the three valid classes divided by three. (Correct me if I'm wrong; one possible implementation is sketched after the log below.)

Following are the results of my LRP calculation.

loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
DONE (t=0.00s).
Accumulating evaluation results...
/home/pgohil/LRP/cocoLRPapi-master/PythonAPI/pycocotools/cocoevalLRP.py:336: RuntimeWarning: invalid value encountered in double_scalars
  LocError[s,k]=np.sum(IoUoverlap[:thrind])/omega[s,k];
/home/pgohil/LRP/cocoLRPapi-master/PythonAPI/pycocotools/cocoevalLRP.py:337: RuntimeWarning: invalid value encountered in double_scalars
  FPError[s,k]=nhat[s,k]/(omega[s,k]+nhat[s,k]);
DONE (t=0.01s).
oLRP, moLRP and Class Specific Optimal Thresholds are as follows:

------------------------------------------------------

1. Optimal LRP and Components:
------------------------------------------------------

oLRP=[[-1.         -1.         -1.         -1.         -1.          1.     0.94463622  1.        ]]

oLRPLocalization=[[-1.         -1.         -1.         -1.         -1.          nan    0.29930628  nan]]

oLRPFalsePositive=[[-1.   -1.   -1.   -1.   -1.     nan  0.85   nan]]

oLRPFalseNegative=[[-1.         -1.         -1.         -1.         -1.          1.    0.36842105  1.        ]]

------------------------------------------------------

2. Mean Optimal LRP and Components:
------------------------------------------------------

moLRP=-0.2569, moLRP_LocComp=-0.7834, moLRP_FPComp=-0.6917, moLRP_FNComp=-0.3289

------------------------------------------------------

3. Optimal Class Specific Thresholds:
------------------------------------------------------

[[-1.   -1.   -1.   -1.   -1.    0.    0.08  0.  ]]
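For what it's worth, one way to implement the averaging suggested above is to mask out the -1 placeholder classes before taking the mean. This is a sketch against a plain NumPy array built from the reported oLRP values, not a patch to the repository's code.

import numpy as np

# Per-class oLRP values from the log above; -1 marks classes that have
# neither ground truth nor detections.
olrp = np.array([-1, -1, -1, -1, -1, 1.0, 0.94463622, 1.0])

valid = olrp != -1            # keep only classes that actually occur
molrp = olrp[valid].mean()    # (1 + 0.94463622 + 1) / 3 ~= 0.9815
print(molrp)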

Quick update in COCOevalLRP class to support python3?

Hi! I have been using your cocoevalLRP.py integrated into my cocoapi Python folder (it is the only file I took from your repository).

I had to change this line here:

np.set_printoptions(threshold=np.nan)

to np.set_printoptions(threshold=sys.maxsize), because I was constantly getting errors there. After searching online, it seems that's something that was allowed in NumPy on Python 2.7, but not anymore.

I'm using Python 3, and this was the only change I had to make, so I was wondering whether you would want to make this change for better integration for whoever uses your code. I'm not submitting a PR only because I wouldn't be able to test whether this simple change breaks functionality in a Python 2.7 environment.

Cheers!
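For reference, the complete form of the fix described above, using only NumPy and the Python standard library:

import sys
import numpy as np

# np.set_printoptions(threshold=np.nan) was tolerated by the NumPy versions
# commonly used with Python 2.7; recent NumPy releases reject non-integer
# thresholds, while sys.maxsize makes full arrays print on both versions.
np.set_printoptions(threshold=sys.maxsize)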
