Extended Berkeley Segmentation Benchmark

A more comprehensive benchmark can now be found at davidstutz/superpixel-benchmark.

This is an extended version of the Berkeley Segmentation Benchmark, introduced in [1] and available at http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html, used to assess superpixel algorithms.

[1] P. Arbeláez, M. Maire, C. Fowlkes, J. Malik.
    Contour detection and hierarchical image segmentation.
    Transactions on Pattern Analysis and Machine Intelligence, volume 33, number 5, pages 898–916, 2011.

The extended version was implemented in the course of the following work:

[2] D. Stutz, A. Hermans, B. Leibe.
    Superpixel Segmentation using Depth Information.
    Bachelor thesis, RWTH Aachen University, Aachen, Germany, 2014.
[7] D. Stutz.
    Superpixel Segmentation: An Evaluation.
    Pattern Recognition (J. Gall, P. Gehler, B. Leibe (Eds.)), Lecture Notes in Computer Science, vol. 9358, pages 555–562, 2015.

When using this benchmark, please cite [1] and [2]. Additional information can also be found at http://davidstutz.de.

Installation / Compiling

To compile the benchmark on 32-bit/64-bit Linux, follow the instructions found in source/README:

To compile the benchmarking software from source code, run:

source build.sh

This script should compile the correspondPixels mex file and copy it into the ../benchmarks/ directory.
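To quickly verify the build from MATLAB (assumption: the repository root is the current working directory, so the compiled file ends up in benchmarks/), the following sketch can be used:

% Add the benchmarks directory to the path and check that the compiled
% correspondPixels mex file is found; exist returns 3 for mex files.
addpath('benchmarks');
exist('correspondPixels', 'file')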

Measures and Usage

The original benchmark already includes the following measures:

  • Boundary Recall, Boundary Precision and F-measure;
  • Probabilistic Rand Index, Segmentation Covering and Variation of Information.

Details on these measures can be found in [1] or [2]. As most of these measures (except for Boundary Recall) are unsuited for assessing superpixel algorithms, the extended version of the Berkeley Segmentation Benchmark adds the following measures:

  • Undersegmentation Error (UE), implemented as discussed in [3];
  • Achievable Segmentation Accuracy (ASA) [4];
  • Compactness (CO) [5];
  • Sum-Of-Squared Error (SSE);
  • Explained Variation (EV), e.g. as discussed in [6] (see the sketch below this list).
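As an illustration of the last measure, the following sketch computes Explained Variation for a synthetic grayscale image and a regular superpixel grid. It uses the common formulation (variance explained by the per-superpixel mean intensities relative to the total intensity variance) and is a sketch only, not the benchmark's implementation:

% Synthetic grayscale image and a 5x5 grid of 20x20 "superpixels".
I = rand(100, 100);
labels = kron(reshape(1:25, 5, 5), ones(20));

mu = mean(I(:));                                    % global mean intensity
muPerSeg = accumarray(labels(:), I(:), [], @mean);  % mean intensity per superpixel
muMap = muPerSeg(labels);                           % per-pixel mean of its superpixel
EV = sum((muMap(:) - mu).^2) / sum((I(:) - mu).^2)  % Explained Variation in [0, 1]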

For details, see [3], [4], [5], [6] or [2]:

[3] P. Neubert, P. Protzel.
    Superpixel benchmark and comparison.
    Forum Bildverarbeitung, 2012.

[4] M. Y. Liu, O. Tuzel, S. Ramalingam, R. Chellappa.
    Entropy rate superpixel segmentation.
    Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 2097–2104, 2011.

[5] A. Schick, M. Fischer, R. Stiefelhagen.
    Measuring and evaluating the compactness of superpixels.
    Proceedings of the International Conference on Pattern Recognition, pages 930–934, 2012.

[6] D. Tang, H. Fu, X. Cao.
    Topology preserved regular superpixel.
    Proceedings of the International Conference on Multimedia and Expo, pages 765–768, Melbourne, Australia, 2012.

For details on how to use the benchmark, please consult test_benchmarks.m; the script demonstrates the usage of all of the above measures. Test data illustrating the required file format is provided in the /data folder. For example, the following call to the allBench function runs all measures on test data generated by some superpixel algorithms:

% Directories containing the images, the ground truth segmentations and the
% superpixel segmentations to be evaluated (see the /data folder for the format).
imgDir = 'data/BSDS500/images';
gtDir = 'data/BSDS500/groundTruth';
inDir = 'data/BSDS500/superpixel_segs';

% Output directory for the eval_*.txt result files.
outDir = 'tests/test_6';
mkdir(outDir);

% Number of thresholds used for the boundary measures.
nthresh = 5;

tic;
allBench(imgDir, gtDir, inDir, outDir, nthresh);
toc;

Note: The Berkeley Segmentation Dataset provides several ground truth segmentations per image (at least five per image). Therefore, all measures can be computed using two different approaches (a short sketch illustrating the difference follows the list):

  1. Per image, the best value of the measure over all available ground truth segmentations is used and then averaged over all images.
  2. The measure is averaged over all images and then the best value over all ground truth segmentations is determined.
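The following sketch is illustrative only; it takes the best value to be the maximum, whereas for error-type measures such as the Undersegmentation Error the minimum would be used. It shows the difference between the two approaches for a hypothetical matrix of per-image, per-ground-truth values:

% V(i, j): value of a measure for image i and ground truth segmentation j
% (hypothetical values for illustration).
V = rand(200, 5);

% Approach 1: best value per image, then averaged over all images.
approach1 = mean(max(V, [], 2));

% Approach 2: averaged over all images, then the best ground truth segmentation;
% gtIndex corresponds to the "index of ground truth segmentation selected for
% approach 2" reported in the output files.
[approach2, gtIndex] = max(mean(V, 1));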

Among others, the output folder will contain the following files (a short snippet for reading them from MATLAB follows the list):

  • eval_asa.txt: Overall results for Achievable Segmentation Accuracy, in this order: index of ground truth segmentation selected for approach 2; Achievable Segmentation Accuracy for approach 2; Achievable Segmentation Accuracy for approach 1.
  • eval_asa_img.txt: Achievable Segmentation Accuracy per image, in this order: index of image; index of ground truth segmentation with minimum Achievable Segmentation Accuracy; corresponding Achievable Segmentation Accuracy.

Note: For Achievable Segmentation Accuracy, the best value refers to the minimum value. This results in a lower bound on the Achievable Segmentation Accuracy. This behavior can be changed by adapting collect_eval_asa.m.

  • eval_bdry.txt: Overall Boundary Recall, Boundary Precision and F-measure, in this order: index of ground truth segmentation selected for approach 2; Boundary Recall for approach 2; Boundary Precision for approach 2; F-measure for approach 2; Boundary Recall for approach 1; Boundary Precision for approach 1; F-measure for approach 1.

Note: Neither Boundary Precision nor the F-measure is suited for evaluating superpixel algorithms, see [2].

  • eval_bdry_img.txt: Boundary Recall, Boundary Precision and F-measure per image, in this order: index of image; index of ground truth segmentation with best F-measure; corresponding Boundary Recall; corresponding Boundary Precision; corresponding F-measure.
  • eval_compactness.txt: Overall Compactness, in this order: best Compactness; average Compactness; worst Compactness.
  • eval_compactness_img.txt: Compactness per image, in this order: index of image; Compactness.
  • eval_superpixels.txt: In this order: Highest number of superpixels; average number of superpixels; lowest number of superpixels.
  • eval_superpixels_img.txt: In this order: index of image; number of superpixels.
  • eval_undersegmentation.txt: Overall Undersegmentation Error, in this order: index of ground truth segmentation selected for approach 2; Undersegmentation Error for approach 2; Undersegmentation Error for approach 1.
  • eval_undersegmentation_img.txt: Undersegmentation Error per image, in this order: index of image; index of ground truth segmentation with best Undersegmentation Error; corresponding Undersegmentation Error.
  • eval_sse.txt: Overall Sum-Of-Squared Error, in this order: average Sum-Of-Squared Error for x,y coordinates (may be used as a compactness measure); average Sum-Of-Squared Error for r,g,b color.
  • eval_sse_img.txt: Sum-Of-Squared Error per image, in this order: index of image; Sum-Of-Squared Error for x,y coordinates; Sum-Of-Squared Error for r,g,b color.
  • eval_ev.txt: Overall Explained Variation, only contains the average Explained Variation.
  • eval_ev_img.txt: Explained Variation per image, in this order: index of image; Explained Variation.
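The following sketch reads the overall boundary results, assuming that the eval_*.txt files are plain, whitespace-separated numeric tables and that outDir is the output directory used in the example above:

% Read eval_bdry.txt; the values follow the order documented above.
results = dlmread(fullfile(outDir, 'eval_bdry.txt'));
fprintf('Boundary Recall (approach 2): %f\n', results(2));
fprintf('Boundary Recall (approach 1): %f\n', results(5));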

License

For detailed license information on the original Berkeley Segmentation Benchmark, please consult the corresponding homepage at http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html or [1].

The implementation of all additional measures is distributed under the following license:

Copyright (c) 2014, David Stutz
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
