finnickniu / tensorflow_object_detection_tflite

This repo covers training MobileNet-SSD v2, converting it to TFLite, and running it with C++ on x86 and arm64.

License: MIT License

tensorflow-examples tensorflowlite object-detection tflite-cpp object-detect-tflite mobilenet-ssd-lite

tensorflow_object_detection_tflite's Introduction

System Environment:

System: Ubuntu 18.04

OpenCV: 3.2

TensorFlow: 1.13.1

Instructions:

Instructions for training MobileNet-SSD v2

Convert images to TFRecord

  1. Write label.pbtxt under models/research/object_detection/mobilenet_ssd_v2_train/dataset/. The format looks like this:

    item {
      id: 1
      name: 'person'
    }

    item {
      id: 2
      name: 'car'
    }


  2. Convert your images and annotations to TFRecord with the models dataset tools API.

    Commands:

    cd models/research/
    export PYTHONPATH=$PYTHONPATH:/path_to/models/research:/path_to/models/research/slim
    protoc object_detection/protos/*.proto --python_out=.
    python object_detection/dataset_tools/create_coco_tf_record.py --image_dir=/path_to/img/ --ann_dir=/path_to/ann/ --output_path=/path_to/train.record --label_map_path=/path_to/demo/label.pbtxt

Train the TensorFlow model (fine-tuning)

  1. Download the 'mobilenet ssd v2 quantized' model from the model zoo and place it under models/research/object_detection/mobilenet_ssd_v2_train/pretrained_models/ (the original one there is the quantized MobileNet-SSD v2 as well).
  2. Modify the data paths in pipeline.config so they point to the TFRecord you generated:

    train
      input_path: "/path_to/train.record"

    test
      input_path: "/path_to/train.record"
  3. Training:

    cd models/research/
    export PYTHONPATH=$PYTHONPATH:/path_to/models/research:/path_to/models/research/slim
    protoc object_detection/protos/*.proto --python_out=.
    python object_detection/legacy/train.py --train_dir=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/CP/ --pipeline_config_path=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/pipeline.config

  4. Use TensorBoard to visualize the training process:
    tensorboard --logdir=/path_to/mobilenet_ssd_v2_train/CP
  5. Export your model to a frozen graph, so you can check the results with demo.py:

    python object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path=/path_to/pipeline.config --trained_checkpoint_prefix=/path_to/mobilenet_ssd_v2_train/CP/model.ckpt-xxxxxx --output_directory=/path_to/mobilenet_ssd_v2_train/IG/
  6. Add label_map.json:

    {
        "1": "person",
        "2": "car"
    }
  7. Run demo.py:
    python demo.py PATH_TO_FROZEN_GRAPH cam_dir js_file
  8. Convert the frozen graph to TFLite. Use export_tflite_ssd_graph.py to generate tflite_graph.pb:

    python object_detection/export_tflite_ssd_graph.py --input_type=image_tensor --pipeline_config_path=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/IG/pipeline.config --trained_checkpoint_prefix=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/IG/model.ckpt --output_directory=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/tflite --add_postprocessing_op=true
  9. Convert tflite_graph.pb to model.tflite (a quick C++ sanity check of the converted model follows below):

    tflite_convert --output_file=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/tflite/model.tflite --graph_def_file=/path_to/models/research/object_detection/mobilenet_ssd_v2_train/tflite/tflite_graph.pb --input_arrays=normalized_input_image_tensor --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' --input_shape=1,300,300,3 --allow_custom_ops --output_format=TFLITE --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_dev_values=127
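Before wiring the converted model into the C++ demo, it can help to sanity-check it. The sketch below is not part of this repo; the file name check_tflite.cpp and the model path are assumptions. It loads model.tflite with the TFLite C++ API and prints the input and output tensor names and shapes; you should see normalized_input_image_tensor (1x300x300x3, uint8) as the input and the four TFLite_Detection_PostProcess outputs.

    // check_tflite.cpp -- minimal sanity check for the converted model.
    // Sketch only: not part of this repo; adjust the model path as needed.
    #include <cstdio>
    #include <memory>

    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    static void PrintTensor(tflite::Interpreter* interpreter, int index) {
      TfLiteTensor* t = interpreter->tensor(index);
      std::printf("  %s [", t->name);
      for (int d = 0; d < t->dims->size; ++d)
        std::printf("%s%d", d ? "x" : "", t->dims->data[d]);
      std::printf("] type=%d\n", static_cast<int>(t->type));
    }

    int main() {
      auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
      if (!model) { std::fprintf(stderr, "failed to load model.tflite\n"); return 1; }

      // BuiltinOpResolver also registers the custom TFLite_Detection_PostProcess op.
      tflite::ops::builtin::BuiltinOpResolver resolver;
      std::unique_ptr<tflite::Interpreter> interpreter;
      tflite::InterpreterBuilder(*model, resolver)(&interpreter);
      if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) {
        std::fprintf(stderr, "failed to build the interpreter\n");
        return 1;
      }

      std::printf("inputs:\n");
      for (int i : interpreter->inputs()) PrintTensor(interpreter.get(), i);
      std::printf("outputs:\n");
      for (int i : interpreter->outputs()) PrintTensor(interpreter.get(), i);
      return 0;
    }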

Apply the TFLite model with C++

  1. Modify line 89 of demo.cpp so it points at your model (a sketch of the overall inference flow appears at the end of this section):

    tflite::FlatBufferModel::BuildFromFile("../model.tflite");

  2. Modify labelmap.txt with your annotations if you fine-tuned your model.

  3. Run demo.cpp on x86 Ubuntu; make sure OpenCV and Bazel are installed.

    1. Build libtensorflowlite.so under the tensorflow directory:

                bazel build -c opt //tensorflow/lite:libtensorflowlite.so --fat_apk_cpu=arm64-v8a

    2. Move the .so to tensorflow_object_detection_tflite/lib.
    3. Change find_library(TFLITE_LIBRARY tensorflow-lite "lib") to find_library(TFLITE_LIBRARY tensorflowlite "lib") in CMakeLists.txt.
    4. Build with CMake and run:
            mkdir build
            cd build
            cmake ..
            make -j
            ./demo
    
  4. Run demo.cpp on arm64-v8a Ubuntu.

    1. Install OpenCV on your arm64 board.
    2. Build libtensorflow-lite.a by following the TensorFlow tutorial at https://www.tensorflow.org/lite/guide/build_arm64. Be careful about the ARM version, v7 or v8.
    3. Move the .a to tensorflow_object_detection_tflite/lib.
    4. Keep find_library(TFLITE_LIBRARY tensorflow-lite "lib") unchanged.
    5. Build with CMake and run:
            mkdir build
            cd build
            cmake ..
            make -j
            ./demo
    
  5. If there is a FlatBuffers error, build FlatBuffers on your desktop and put its header files and .a library into tensorflow_object_detection_tflite/include and tensorflow_object_detection_tflite/lib, respectively, replacing the existing ones. See google/flatbuffers#5569 (comment) for how to build it.

  6. Result image

Screenshot
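For reference, the core flow that demo.cpp follows looks roughly like the sketch below. This is a simplified illustration rather than the repo's exact code: the test image name, the 0.5 score threshold, and the thread count are assumptions, and it presumes the quantized 300x300 uint8 input (normalized_input_image_tensor) plus the standard four TFLite_Detection_PostProcess outputs produced by the conversion above.

    // Simplified sketch of the demo.cpp flow (illustrative, not the repo's exact code).
    #include <cstdio>
    #include <cstring>
    #include <memory>

    #include <opencv2/opencv.hpp>
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
      // 1. Load the converted model and build the interpreter.
      auto model = tflite::FlatBufferModel::BuildFromFile("../model.tflite");
      tflite::ops::builtin::BuiltinOpResolver resolver;
      std::unique_ptr<tflite::Interpreter> interpreter;
      tflite::InterpreterBuilder(*model, resolver)(&interpreter);
      interpreter->SetNumThreads(4);
      interpreter->AllocateTensors();

      // 2. Resize the frame to 300x300 RGB and copy it into the uint8 input tensor.
      cv::Mat frame = cv::imread("test.jpg");
      cv::Mat input;
      cv::cvtColor(frame, input, cv::COLOR_BGR2RGB);
      cv::resize(input, input, cv::Size(300, 300));
      std::memcpy(interpreter->typed_input_tensor<uint8_t>(0), input.data, 300 * 300 * 3);

      // 3. Run inference.
      interpreter->Invoke();

      // 4. Read the four post-processing outputs (all float): boxes [1,N,4] as
      //    (ymin, xmin, ymax, xmax) in [0,1], classes [1,N], scores [1,N], count [1].
      const float* boxes   = interpreter->typed_output_tensor<float>(0);
      const float* classes = interpreter->typed_output_tensor<float>(1);
      const float* scores  = interpreter->typed_output_tensor<float>(2);
      const int num = static_cast<int>(*interpreter->typed_output_tensor<float>(3));

      for (int i = 0; i < num; ++i) {
        if (scores[i] < 0.5f) continue;
        int y1 = static_cast<int>(boxes[4 * i + 0] * frame.rows);
        int x1 = static_cast<int>(boxes[4 * i + 1] * frame.cols);
        int y2 = static_cast<int>(boxes[4 * i + 2] * frame.rows);
        int x2 = static_cast<int>(boxes[4 * i + 3] * frame.cols);
        cv::rectangle(frame, cv::Point(x1, y1), cv::Point(x2, y2), cv::Scalar(0, 255, 0), 2);
        std::printf("class %d score %.2f\n", static_cast<int>(classes[i]), scores[i]);
      }
      cv::imwrite("result.jpg", frame);
      return 0;
    }

If you exported a float (non-quantized) model instead, the input tensor will be float32 rather than uint8 and the pixel values need to be normalized before the copy.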


tensorflow_object_detection_tflite's Issues

Does this work for SSD Mobilenet v2?

Hi,
I was working with TFLite. I tried the inference code for the tflite model in your repo, and it works well.
But when I run the code with the SSD MobileNet V2 tflite model, I get wrong classes and the boxes make no sense... Is this something you noticed?

Can you please help me?

I converted the model using the following commands:

    python object_detection/export_tflite_ssd_graph.py \
        --pipeline_config_path=$CONFIG_FILE \
        --trained_checkpoint_prefix=$CHKPT_DIR \
        --output_directory=$MODEL_DIR \
        --add_postprocessing_op=true

    tflite_convert --graph_def_file $MODEL_DIR/tflite_graph.pb --output_file $MODEL_DIR/detect.tflite \
        --input_arrays=normalized_input_image_tensor \
        --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
        --input_shapes=1,300,300,3 --inference_type=QUANTIZED_UINT8 \
        --mean_values=128 --std_dev_values=128 \
        --change_concat_input_ranges=false --allow_custom_ops \
        --default_ranges_min=0 --default_ranges_max=255

An issue on demo.cpp

First of all, thanks for sharing the code publicly, and for sharing C++ code that uses TensorFlow. I have been working with OpenCV, TensorFlow and many vision libraries, and I noticed that in your code the score is taken from num_detections instead of the score outputs.

In the line

    float score = expit(nums[count]); // How has this to be done?

nums comes from num_detections, but the value should come from the score outputs, e.g.:

    scores = interpreter->tensor(interpreter->outputs()[2]);
    auto scrs = scores->data.f;

At that position the interpreter gives you the score outputs, and the line above becomes:

    float score = expit(scrs[count]);

After these changes, the expit values make much more sense. I hope this helps.
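For context, the four outputs of the SSD post-processing op and the indexing described above can be wrapped in a small helper like the following. This is an illustrative sketch, not code from the repo, and the helper name DetectionScores is made up for the example:

    #include <vector>
    #include "tensorflow/lite/interpreter.h"

    // The TFLite_Detection_PostProcess op emits four float outputs, in this order:
    //   outputs()[0] boxes [1,N,4], outputs()[1] classes [1,N],
    //   outputs()[2] scores [1,N],  outputs()[3] number of valid detections [1].
    // This helper (hypothetical name) returns the scores for the valid detections.
    std::vector<float> DetectionScores(tflite::Interpreter* interpreter) {
      const TfLiteTensor* scores = interpreter->tensor(interpreter->outputs()[2]);
      const TfLiteTensor* count  = interpreter->tensor(interpreter->outputs()[3]);
      const int num = static_cast<int>(count->data.f[0]);
      return std::vector<float>(scores->data.f, scores->data.f + num);
    }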

core dumped

    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    Core was generated by `./demo'.
    Program terminated with signal SIGSEGV, Segmentation fault.
    #0 __memcpy_avx_unaligned () at ../sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S:238
    238 ../sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S: No such file or directory.
    [Current thread is 1 (Thread 0x7fadc938c9c0 (LWP 27232))]

undefined reference to `tflite::InterpreterBuilder::operator()

    CMakeFiles/demo.dir/demo.cpp.o: In function `test()':
    demo.cpp:(.text+0x862): undefined reference to `tflite::InterpreterBuilder::InterpreterBuilder(tflite::FlatBufferModel const&, tflite::OpResolver const&)'
    demo.cpp:(.text+0x87b): undefined reference to `tflite::InterpreterBuilder::operator()(std::unique_ptr<tflite::Interpreter, std::default_delete<tflite::Interpreter> >*)'
    demo.cpp:(.text+0x88a): undefined reference to `tflite::InterpreterBuilder::~InterpreterBuilder()'
    demo.cpp:(.text+0xaf9): undefined reference to `tflite::Interpreter::AllocateTensors()'
    demo.cpp:(.text+0xbd9): undefined reference to `tflite::Interpreter::SetAllowFp16PrecisionForFp32(bool)'
    demo.cpp:(.text+0xbf5): undefined reference to `tflite::Interpreter::SetNumThreads(int)'
    demo.cpp:(.text+0xc0c): undefined reference to `tflite::Interpreter::Invoke()'
    demo.cpp:(.text+0x1521): undefined reference to `tflite::InterpreterBuilder::~InterpreterBuilder()'
    CMakeFiles/demo.dir/demo.cpp.o: In function `std::default_delete<tflite::Interpreter>::operator()(tflite::Interpreter*) const':
    demo.cpp:(.text._ZNKSt14default_deleteIN6tflite11InterpreterEEclEPS1_[_ZNKSt14default_deleteIN6tflite11InterpreterEEclEPS1_]+0x1e): undefined reference to `tflite::Interpreter::~Interpreter()'
    collect2: error: ld returned 1 exit status
    CMakeFiles/demo.dir/build.make:111: recipe for target 'demo' failed
    make[2]: *** [demo] Error 1
    CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/demo.dir/all' failed
    make[1]: *** [CMakeFiles/demo.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

I'm not able to resolve this error. Can someone help me?

Score is different in C++ inference compared to python.

@finnickniu I retrained an SSD MobileNet model and converted it to tflite. I'm able to get accurate detections using the Python code; however, when inferencing with the C++ code, the score seems to be incorrect, so I'm getting a lot of false detections. The score also always seems to be 0.9995.
In the Python code the score is taken from the 3rd output tensor, but in the C++ code it is extracted by calling the expit function with nums[count] as the argument. Why is this so?

Also, I'm only trying to detect one class.

A quick reply will be really appreciated as I'm in a race against time. Thanks
