jderobot / detectionmetrics

Tool to evaluate deep-learning detection and segmentation models, and to create datasets

Home Page: https://jderobot.github.io/DetectionMetrics/

License: GNU General Public License v3.0

CMake 6.08% C++ 77.85% C 2.30% Python 12.46% Shell 0.29% Slice 0.77% Ruby 0.02% Dockerfile 0.22%
coco darknet deep-learning imagenet keras object-detection object-segmentation pascal-voc tensorflow

detectionmetrics's Issues

Very Slow Parsing of large JSON files using boost property tree

In order to add support for the COCO dataset (#5), it is necessary to parse large JSON files, on the order of 300 MB for the 2014 release and 450 MB for 2017.
This requires a fast JSON parser.
Boost property tree is very slow: it takes up all of the RAM (8 GB) and then hangs.
So, there are 2 possibilities which are much faster and can be used:

  1. rapidJSON: Parses in 14 seconds without taking much RAM.
  2. nlohmann::json: Parses in 38 seconds without taking much RAM.

Therefore, I have decided to use rapidJSON; it is a header-only library and can be added to our code in the Deps folder.

Link to Comparison Code

Also, I will be adding rapidJSON to the Deps folder, so that there is no need for extra installation documentation; this keeps it compatible across all environments and also improves version compatibility.

[1] https://github.com/Tencent/rapidjson
[2] https://github.com/nlohmann/json
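
For reference, a minimal parsing sketch with rapidJSON is shown below. The file name and the annotations/bbox keys follow the public COCO layout; the buffer size and the error handling are illustrative only.

// Minimal sketch: DOM-parse a COCO-style annotations file with rapidJSON.
#include <cstdio>
#include <iostream>
#include "rapidjson/document.h"
#include "rapidjson/filereadstream.h"

int main() {
    FILE* fp = std::fopen("instances_train2017.json", "rb");
    if (!fp) { std::cerr << "Cannot open annotations file\n"; return 1; }

    char buffer[65536];
    rapidjson::FileReadStream is(fp, buffer, sizeof(buffer));

    rapidjson::Document doc;
    doc.ParseStream(is);                 // parses the whole ~450 MB file into a DOM
    std::fclose(fp);
    if (doc.HasParseError()) { std::cerr << "Parse error\n"; return 1; }

    const rapidjson::Value& annotations = doc["annotations"];
    std::cout << "Parsed " << annotations.Size() << " annotations\n";

    // Each annotation carries an image id, a category id and an [x, y, w, h] box.
    long long boxes = 0;
    for (const auto& ann : annotations.GetArray())
        if (ann["bbox"].Size() == 4) ++boxes;
    std::cout << "Bounding boxes read: " << boxes << "\n";
    return 0;
}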

Speeding up TensorFlow Inference

I just realized that TensorFlow inference speed can be drastically improved by passing the session to the inference function as a parameter instead of the graph, so that the session is created once and reused across calls.
I will implement this and submit a pull request.

Support for Keras (python) inference should be added

The goal is for DetectionSuite, which is written in C++, to invoke inference of a Keras (Python) neural network for detection.

Some pre-processing of the images may be required before injecting them into the Keras network.

Some post-processing of the network results (bounding boxes, ...) may be required to deliver them in the right structure for computing statistics.
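
As an illustration of the kind of post-processing meant here, the sketch below turns raw detections given in normalized coordinates into pixel-space boxes above a confidence threshold. The struct names and the [ymin, xmin, ymax, xmax] layout are assumptions (they follow the TensorFlow Object Detection API convention), not DetectionSuite's actual interfaces.

// Illustrative post-processing only; types and field names are placeholders.
#include <vector>

struct RawDetection {                    // what the Python network might return
    float score;
    int classId;
    float ymin, xmin, ymax, xmax;        // normalized to [0, 1]
};

struct BoundingBox {                     // what the statistics code might expect
    int x, y, width, height;
    int classId;
    float confidence;
};

std::vector<BoundingBox> toPixelBoxes(const std::vector<RawDetection>& raw,
                                      int imgWidth, int imgHeight,
                                      float threshold = 0.5f) {
    std::vector<BoundingBox> out;
    for (const auto& d : raw) {
        if (d.score < threshold) continue;               // drop weak detections
        BoundingBox b;
        b.x = static_cast<int>(d.xmin * imgWidth);
        b.y = static_cast<int>(d.ymin * imgHeight);
        b.width  = static_cast<int>((d.xmax - d.xmin) * imgWidth);
        b.height = static_cast<int>((d.ymax - d.ymin) * imgHeight);
        b.classId = d.classId;
        b.confidence = d.score;
        out.push_back(b);
    }
    return out;
}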

Test dl-DetectionSuite

Hi,

I'm trying out DeepLearningSuite, but I don't know if I'm doing it the right way. I am able to run DatasetEvaluationApp, but I get a warning and I don't know whether it should appear:



datasetPath
/sampleFiles/datasets/home
evaluationsPath
/sampleFiles/evaluations
inferencesPath
/sampleFiles/evaluations
namesPath
/sampleFiles/cfg/SampleGenerator
netCfgPath
/sampleFiles/cfg/darknet
weightsPath
/sampleFiles/weights/yolo_2017_07

WARNING: Logging before InitGoogleLogging() is written to STDERR
W0129 17:31:21.528326 14332 ListViewConfig.cpp:89] path: /sampleFiles/weights/yolo_2017_07 does not exist
W0129 17:31:21.528415 14332 ListViewConfig.cpp:89] path: /sampleFiles/cfg/darknet does not exist
W0129 17:31:21.528518 14332 ListViewConfig.cpp:89] path: /sampleFiles/cfg/SampleGenerator does not exist

The result is:

[image: evaluation]

My appConfig.txt is:

--datasetPath
/mnt/large/pentalo/deep/datasets

--evaluationsPath
/mnt/large/pentalo/deep/evaluations

--weightsPath
/mnt/large/pentalo/deep/weights

--netCfgPath
/mnt/large/pentalo/deep/cfg/darknet

--namesPath
/mnt/large/pentalo/deep/cfg/SampleGenerator

--inferencesPath
/mnt/large/pentalo/deep/evaluations

Also, I tried to run SampleGenerationApp and the result is:

WARNING: Logging before InitGoogleLogging() is written to STDERR
W0129 17:25:17.989045 13853 SampleGenerationApp.cpp:99] Key: outputPath is not defined in the configuration file
W0129 17:25:17.989125 13853 SampleGenerationApp.cpp:99] Key: reader is not defined in the configuration file
W0129 17:25:17.989130 13853 SampleGenerationApp.cpp:99] Key: detector is not defined in the configuration file

The configuration file is:

--outputPath
/URJC

--dataPath
/sampleFiles/images
--detector
#datasetReader
deepLearning
#pentalo-bg

--inferencerImplementation
yolo

--inferencerNames
/sampleFiles/cfg/SampleGenerator/person1class.names

--inferencerConfig
/sampleFiles/cfg/darknet/yolo-voc-07-2017.cfg

--inferencerWeights
/sampleFiles/weights/yolo_2017_07/yolo-voc-07-2017.weights

--reader
#spinello
recorder-rgbd

--readerNames
none

Thank you very much,
Regards.

Viewing Spinello dataset

I was trying to view the Spinello dataset's depth images, and the output I am getting looks like this:

[image]

It doesn't match the video.
So is there some problem in the conversion from grayscale to the color mapping of depth?
Or is this the desired output?

This is a sample conversion from grayscale:
[image]
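
For comparison, here is a minimal sketch of how a depth frame can be color-mapped with OpenCV, assuming 16-bit single-channel depth images; skipping the normalization to 8 bits before applying the colormap is one common cause of unexpected-looking depth views.

// Normalize a 16-bit depth image to 8 bits, then apply a colormap.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat depth = cv::imread("depth_0001.png", cv::IMREAD_ANYDEPTH);
    if (depth.empty()) return 1;

    double minVal, maxVal;
    cv::minMaxLoc(depth, &minVal, &maxVal);              // actual depth range
    if (maxVal <= minVal) return 1;                      // constant/empty image

    cv::Mat depth8u;
    depth.convertTo(depth8u, CV_8U, 255.0 / (maxVal - minVal),
                    -minVal * 255.0 / (maxVal - minVal));

    cv::Mat colored;
    cv::applyColorMap(depth8u, colored, cv::COLORMAP_JET);

    cv::imshow("depth", colored);
    cv::waitKey(0);
    return 0;
}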

Calculation of Metrics in Evaluation

Hi @chanfr,
I was going through the code written in DetectionsEvaluator.cpp here.
What happens if multiple objects of the same class are present? For example, if 4 chairs are present in a single sample, then for a single detected chair the evaluator will iterate over all the ground-truth regions and match the first one of that class, which might not correspond to the same chair.

So doesn't it assume, by default, that detections are in the same order as the ground-truth values?
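
As a sketch of an order-independent alternative (this is not the actual DetectionsEvaluator.cpp code), each detection can be matched to the not-yet-claimed ground-truth region of the same class with the highest IoU above a threshold, so the result no longer depends on the ordering of either list:

// Greedy IoU matching; the Region struct and the 0.5 threshold are illustrative.
#include <opencv2/core.hpp>
#include <vector>

struct Region { cv::Rect rect; int classId; };

static double iou(const cv::Rect& a, const cv::Rect& b) {
    double inter = (a & b).area();
    double uni = a.area() + b.area() - inter;
    return uni > 0 ? inter / uni : 0.0;
}

// Returns, for each detection, the index of the matched ground truth or -1.
std::vector<int> matchDetections(const std::vector<Region>& detections,
                                 const std::vector<Region>& groundTruth,
                                 double iouThreshold = 0.5) {
    std::vector<int> match(detections.size(), -1);
    std::vector<bool> claimed(groundTruth.size(), false);

    for (size_t d = 0; d < detections.size(); ++d) {
        double best = iouThreshold;
        for (size_t g = 0; g < groundTruth.size(); ++g) {
            if (claimed[g] || groundTruth[g].classId != detections[d].classId)
                continue;
            double overlap = iou(detections[d].rect, groundTruth[g].rect);
            if (overlap > best) { best = overlap; match[d] = static_cast<int>(g); }
        }
        if (match[d] >= 0) claimed[match[d]] = true;
    }
    return match;
}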

Extend DetectionSuite to include Python based detection networks

This means networks such as those generated with TensorFlow and Keras. Their code should be run from the C++ DetectionSuite code.

A Python-based model should be trained with its corresponding neural-network middleware, but its inference step should be called from DetectionSuite, which provides an image as input and also reads the output bounding boxes.
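
A minimal sketch of what invoking Python inference from the C++ side could look like, using the CPython C API. The module name tf_detector and its load_model()/detect() functions are hypothetical placeholders for a Python-side wrapper; a real integration would also pass the image buffer itself (e.g. through a NumPy array) and decode the returned bounding boxes.

// Embedding sketch: load the Python wrapper once, reuse the model handle.
#include <Python.h>
#include <iostream>

int main() {
    Py_Initialize();

    PyObject* module = PyImport_ImportModule("tf_detector");   // hypothetical module
    if (!module) { PyErr_Print(); return 1; }

    // Load the network once and keep the returned handle (session/model) alive,
    // instead of rebuilding the graph for every image.
    PyObject* model = PyObject_CallMethod(module, "load_model", "s",
                                          "frozen_inference_graph.pb");
    if (!model) { PyErr_Print(); return 1; }

    // One call per image; detect() is assumed to return a Python list of boxes.
    PyObject* result = PyObject_CallMethod(module, "detect", "Os",
                                           model, "frame_0001.jpg");
    if (result) {
        std::cout << "detections: " << PyList_Size(result) << std::endl;
        Py_DECREF(result);
    } else {
        PyErr_Print();
    }

    Py_DECREF(model);
    Py_DECREF(module);
    Py_Finalize();
    return 0;
}

Keeping the returned model/session handle alive across calls is also the idea behind the TensorFlow inference speed-up noted above.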

Image Format Specification for Inferencers

Inferencers generally accept RGB images for inference, whereas OpenCV reads images in BGR format, so in some cases it may be necessary to swap the R and B channels.
An option therefore has to be added to the UI to take care of this factor.
By default it should be RGB.
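
A minimal sketch of the channel swap, gated by the proposed UI option; the flag name is illustrative.

// Swap channels only when the selected inferencer expects RGB input.
#include <opencv2/imgproc.hpp>

cv::Mat prepareInput(const cv::Mat& bgrFrame, bool networkExpectsRGB = true) {
    cv::Mat input;
    if (networkExpectsRGB)
        cv::cvtColor(bgrFrame, input, cv::COLOR_BGR2RGB);    // OpenCV loads BGR
    else
        input = bgrFrame.clone();
    return input;
}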

Add Support for Caffe Framework

Hi @chanfr,
Actually, I wanted to discuss something about Caffe support, so let me start with TensorFlow and Keras support first.
TensorFlow doesn't require anything apart from the frozen_inference_graph.pb, which is sufficient for generating inferences for any network architecture, like SSD, Faster R-CNN, etc.
Keras, on the other hand, requires a config file plus the weights, although in newer versions a single HDF5 file containing both is enough.
But it does require implementations of custom functions like AnchorBoxesGenerator and L2Normalization, which architectures like Faster R-CNN and SSD need, and these functions have to be passed to the ModelBuilder.

Now, coming to Caffe: in order to support networks like Faster R-CNN and SSD, people have modified the base repository and reinstalled it to support these custom architectures, because the base repository has no support for them and the custom functions cannot be passed to the ModelBuilder.

So, we are left with 2 possible solutions for implementing Caffe support:

  1. Fork the base repository's code and change it according to our needs, but this adds an extra dependency and users would have to build Caffe again just for Caffe support.
  2. OpenCV's dnn module has already implemented the same thing by writing all the required custom layers and functions in C++, and it also provides parsers for the config file and the .caffemodel weights file.
    And since OpenCV is already a dependency, we can use its dnn module to implement Caffe support (see the sketch below).

Also, OpenCV doesn't have Keras support, and its TensorFlow support is cumbersome, requiring a separate config file to be generated.
So our own TensorFlow and Keras support is better, but we can still use OpenCV's Caffe implementation.
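
A minimal sketch of Caffe inference through OpenCV's dnn module (available since OpenCV 3.3). The file names and the 300x300 input size with mean 127.5 are placeholders taken from the usual MobileNet-SSD Caffe examples, not values required by DetectionSuite.

// Load a Caffe model and run one forward pass with OpenCV's dnn module.
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt",
                                                 "model.caffemodel");
    cv::Mat image = cv::imread("input.jpg");
    if (image.empty()) return 1;

    cv::Mat blob = cv::dnn::blobFromImage(image, 1.0 / 127.5,
                                          cv::Size(300, 300),
                                          cv::Scalar(127.5, 127.5, 127.5),
                                          /*swapRB=*/false);
    net.setInput(blob);
    cv::Mat detections = net.forward();   // [1 x 1 x N x 7] for SSD-style outputs
    return 0;
}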

Robust Mapping between class names for Stable Conversion of datasets

To convert datasets, classes need to be mapped from one dataset to another, especially to handle issues such as two datasets using synonyms for the same class. This can be solved by the two methods listed below:

  1. Using a robust mapping technique that can map between synonyms and similar classes, like couch and sofa, or handle the case where one dataset contains a parent class and the other all its children, for instance one dataset contains furniture and another chair, sofa, table, etc.
    A suitable data structure has to be chosen to implement such a mapping (a minimal sketch follows below).
    This still has a drawback: if the original dataset contains a class that the target dataset doesn't, some classes have to be discarded, leading to losses.

  2. The writer also outputs a class-names file containing all the classes that have been encountered, but this makes the reading process dependent on the resulting names file and isn't a universal solution, i.e. it wouldn't be valid outside DetectionSuite.

So, a good solution would be to incorporate both of them, i.e. let the user decide which one to opt for.
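
A minimal sketch of the data structure for option 1, assuming a plain string-to-string table is enough: synonyms and child classes are mapped onto the target dataset's names, and anything unmapped is discarded (the loss mentioned above). The class names are illustrative.

// Source-class -> target-class mapping used while converting a dataset.
#include <map>
#include <string>

class ClassNameMapper {
public:
    ClassNameMapper() {
        mapping_ = {
            {"sofa",  "couch"},        // synonyms across datasets
            {"chair", "furniture"},    // child class -> parent class
            {"table", "furniture"},
        };
    }
    // Returns true and fills targetName when a mapping exists; false means
    // the class has to be discarded when writing the converted dataset.
    bool map(const std::string& sourceName, std::string& targetName) const {
        auto it = mapping_.find(sourceName);
        if (it == mapping_.end()) return false;
        targetName = it->second;
        return true;
    }
private:
    std::map<std::string, std::string> mapping_;
};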

Couldn't build DetectionSuite

I am getting the following error while linking DatasetEvaluationApp:

/opt/jderobot/lib/libJderobotInterfaces.so: undefined reference to `typeinfo for IceInternal::Cpp11FnCallbackNC'
/opt/jderobot/lib/libJderobotInterfaces.so: undefined reference to `IceInternal::Cpp11FnCallbackNC::Cpp11FnCallbackNC(std::function<void (IceUtil::Exception const&)> const&, std::function<void (bool)> const&)'
/opt/jderobot/lib/libJderobotInterfaces.so: undefined reference to `vtable for IceInternal::Cpp11FnCallbackNC'
/opt/jderobot/lib/libJderobotInterfaces.so: undefined reference to `IceInternal::Cpp11FnCallbackNC::verify(IceInternal::Handle<Ice::LocalObject> const&)'
/opt/jderobot/lib/libJderobotInterfaces.so: undefined reference to `IceInternal::Cpp11FnCallbackNC::exception(IceInternal::Handle<Ice::AsyncResult> const&, IceUtil::Exception const&) const'
/opt/jderobot/lib/libJderobotInterfaces.so: undefined reference to `IceInternal::Cpp11FnCallbackNC::hasSentCallback() const'
/opt/jderobot/lib/libJderobotInterfaces.so: undefined reference to `IceInternal::Cpp11FnCallbackNC::sent(IceInternal::Handle<Ice::AsyncResult> const&) const'
collect2: error: ld returned 1 exit status
DatasetEvaluationApp/CMakeFiles/DatasetEvaluationApp.dir/build.make:476: recipe for target 'DatasetEvaluationApp/DatasetEvaluationApp' failed
make[2]: *** [DatasetEvaluationApp/DatasetEvaluationApp] Error 1
CMakeFiles/Makefile2:142: recipe for target 'DatasetEvaluationApp/CMakeFiles/DatasetEvaluationApp.dir/all' failed
make[1]: *** [DatasetEvaluationApp/CMakeFiles/DatasetEvaluationApp.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

This possibly means that JdeRobotInterfaces isn't built with -std=c++11, although it is supposed to be.
Maybe Ice isn't installed with the C++11 standard.
I will try building Ice from source and then try again.

MacOS build support

Support for building on macOS, which uses Apple's LLVM compiler. This would require some changes in the CMake files, and build instructions for macOS using brew as the package manager.

Support for TensorFlow (python) inference should be added

The goal is for DetectionSuite, which is written in C++, to invoke inference of a TensorFlow (Python) neural network for detection.

Some pre-processing of the images may be required before injecting them into the TensorFlow network.

Some post-processing of the network results (bounding boxes, ...) may be required to deliver them in the right structure for computing statistics.

Request for tutorial to run detector

Is there any way to save the inference files that we get with the deployer? I want to save the bounding boxes and the frames of the video. I would really appreciate any help with this.
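
Independently of whether the deployer already offers such an option, one generic way to dump the frames and their bounding boxes during the inference loop is sketched below; the function and the CSV layout are purely illustrative.

// Save each processed frame and append its detections to a CSV file.
#include <fstream>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>

struct Detection { cv::Rect box; int classId; float score; };

void saveInference(int frameIdx, const cv::Mat& frame,
                   const std::vector<Detection>& dets,
                   const std::string& outDir) {
    cv::imwrite(outDir + "/frame_" + std::to_string(frameIdx) + ".png", frame);
    std::ofstream csv(outDir + "/detections.csv", std::ios::app);
    for (const auto& d : dets)
        csv << frameIdx << ',' << d.classId << ',' << d.score << ','
            << d.box.x << ',' << d.box.y << ','
            << d.box.width << ',' << d.box.height << '\n';
}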
