
sample-tensorflow-imageclassifier's Introduction

TensorFlow Lite IoT Image Classifier

This sample demonstrates how to run TensorFlow Lite inference on Android Things. Push a button to capture an image with the camera, and TensorFlow Lite will tell you what it is! Follow the Image Classifier Codelab step-by-step instructions on how to build a similar sample.

Note: The Android Things Console will be turned down for non-commercial use on January 5, 2022. For more details, see the FAQ page.

Introduction

When a button is pushed or when the touchscreen is touched, the current image is captured from the camera. The image is then converted and piped into a TensorFlow Lite classifier model that identifies what is in the image. Up to three results with the highest confidence returned by the classifier are shown on the screen, if there is an attached display. Also, the result is spoken out loud using Text-To-Speech to the default audio output.
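The "converted and piped" step means packing the camera frame's pixels into the byte layout the model expects. As a rough illustration only (not the sample's actual code), a float MobileNet input can be packed from ARGB pixels as below; the `toFloatBuffer` helper, the mean/std constants, and the [-1, 1] normalization are assumptions for a *float* model, whereas this sample actually ships a quantized model that takes raw bytes:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ImagePreprocessor {
    // Hypothetical helper: packs ARGB pixel values into a direct float buffer
    // normalized to [-1, 1], the input range float MobileNet models expect.
    static final int IMAGE_MEAN = 128;
    static final float IMAGE_STD = 128f;

    static ByteBuffer toFloatBuffer(int[] argbPixels) {
        // 3 channels per pixel, 4 bytes per float
        ByteBuffer buf = ByteBuffer.allocateDirect(argbPixels.length * 3 * 4)
                .order(ByteOrder.nativeOrder());
        for (int pixel : argbPixels) {
            buf.putFloat((((pixel >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD); // R
            buf.putFloat((((pixel >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);  // G
            buf.putFloat(((pixel & 0xFF) - IMAGE_MEAN) / IMAGE_STD);         // B
        }
        buf.rewind();
        return buf;
    }

    public static void main(String[] args) {
        // A single mid-gray pixel (0xFF808080) maps to (0.0, 0.0, 0.0).
        ByteBuffer b = toFloatBuffer(new int[] {0xFF808080});
        System.out.println(b.getFloat() + " " + b.getFloat() + " " + b.getFloat());
    }
}
```

A buffer like this is what ultimately gets handed to the TensorFlow Lite interpreter's `run()` call.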

This project is based on the TensorFlow Android Camera Demo TF_Classify app and was adapted to use TensorFlow Lite, a lightweight version of TensorFlow targeted at mobile devices. The TensorFlow classifier model is MobileNet_v1 pre-trained on the ImageNet ILSVRC2012 dataset.

This sample uses the TensorFlow Lite inference library and does not require any native build tools. You can add the TensorFlow Lite inference library to your project by adding a dependency in your build.gradle, for example:

dependencies {
    implementation 'org.tensorflow:tensorflow-lite:0.1.1'
}

Note: this sample requires a camera. Find an appropriate board in the documentation.

Screenshots

TensorFlow Lite image classifier sample demo

(Watch the demo on YouTube)

Pre-requisites

  • Android Things compatible board and an attached camera
  • Android Studio 2.2+
  • The following optional components:
    • one button and one resistor for triggering the camera
    • one LED and one resistor for the "ready" indicator
    • speaker or headphones for Text-To-Speech results
    • touchscreen or display for showing results

Schematics


Run on Android Things Starter Kit

If you have an Android Things Starter Kit, you can easily run this sample on your i.MX7D development board from the Android Things Toolkit app.

To run the sample on your i.MX7D development board:

  1. Set up your device using Toolkit.
  2. Navigate to the Apps tab.
  3. Select Run next to the Image Classifier sample.
  4. Press the "A" button on your Rainbow HAT or tap the display to take a photo.

Running Image Classifier Sample on Toolkit

Build and Install

In Android Studio, click the "Run" button. If you prefer to run from the command line, type

./gradlew installDebug
adb shell am start com.example.androidthings.imageclassifier/.ImageClassifierActivity

If you have everything set up correctly:

  1. Wait until the LED turns on
  2. Point the camera at something like a dog, a cat, or a piece of furniture
  3. Push the button to take a picture
  4. The LED should go off while running. On a Raspberry Pi 3, it takes about 500 milliseconds to capture the picture and run it through TensorFlow, plus some extra time to speak the results through Text-To-Speech
  5. Inference results will show in logcat and, if there is a display connected, both the image and the results will be shown
  6. If a speaker or headphones are connected, the results will be spoken via text to speech

Enable auto-launch behavior

This sample app is currently configured to launch only when deployed from your development machine. To enable the main activity to launch automatically on boot, add the following intent-filter to the app's manifest file:

<activity ...>

   <intent-filter>
       <action android:name="android.intent.action.MAIN"/>
       <category android:name="android.intent.category.HOME"/>
       <category android:name="android.intent.category.DEFAULT"/>
   </intent-filter>

</activity>

License

Copyright 2018 The Android Things Samples Authors.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

sample-tensorflow-imageclassifier's People

Contributors

atn832, daj, fleker, irataxy, jdkoren, mangini, proppy, shawngit


sample-tensorflow-imageclassifier's Issues

Audio on 4-pole jack (video and audio)

Alas, I don't have a 4-pole jack to hand. Offhand, does anyone know which pin is ground and which is audio?

Has anyone tried a little USB speaker? Or tweaked the device tree in config.txt to use the PWM pins?

thanks in advance

Raspberry Pi 3 Board.

java.lang.NoClassDefFoundError: Failed resolution of: Lcom/google/android/things/pio/PeripheralManager;

I tried to run the sample on a Raspberry Pi flashed with Android Things and got this exception. Any idea?

(Not sure if related, but I failed to make my device's speaker work with TextToSpeech and thus wanted to try this sample. So it's possible there's some missing setup steps. But I have no clue right now)

E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.example.androidthings.imageclassifier, PID: 2191
java.lang.NoClassDefFoundError: Failed resolution of: Lcom/google/android/things/pio/PeripheralManager;
at com.example.androidthings.imageclassifier.ImageClassifierActivity.initPIO(ImageClassifierActivity.java:108)
at com.example.androidthings.imageclassifier.ImageClassifierActivity.init(ImageClassifierActivity.java:95)
at com.example.androidthings.imageclassifier.ImageClassifierActivity.onCreate(ImageClassifierActivity.java:90)
at android.app.Activity.performCreate(Activity.java:7000)
at android.app.Activity.performCreate(Activity.java:6991)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1214)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2731)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2856)
at android.app.ActivityThread.-wrap11(Unknown Source:0)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1589)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:164)
at android.app.ActivityThread.main(ActivityThread.java:6494)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807)
Caused by: java.lang.ClassNotFoundException: Didn't find class "com.google.android.things.pio.PeripheralManager" on path: DexPathList[[zip file "/system/framework/com.google.android.things.jar", zip file "/data/app/com.example.androidthings.imageclassifier-VRljk3yGxv0DgffFbpgFRw==/base.apk"],nativeLibraryDirectories=[/data/app/com.example.androidthings.imageclassifier-VRljk3yGxv0DgffFbpgFRw==/lib/arm, /data/app/com.example.androidthings.imageclassifier-VRljk3yGxv0DgffFbpgFRw==/base.apk!/lib/armeabi-v7a, /system/lib, /vendor/lib]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:125)
at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
at com.example.androidthings.imageclassifier.ImageClassifierActivity.initPIO(ImageClassifierActivity.java:108) 
at com.example.androidthings.imageclassifier.ImageClassifierActivity.init(ImageClassifierActivity.java:95) 
at com.example.androidthings.imageclassifier.ImageClassifierActivity.onCreate(ImageClassifierActivity.java:90) 
at android.app.Activity.performCreate(Activity.java:7000) 
at android.app.Activity.performCreate(Activity.java:6991) 
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1214) 
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2731) 
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2856) 
at android.app.ActivityThread.-wrap11(Unknown Source:0) 
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1589) 
at android.os.Handler.dispatchMessage(Handler.java:106) 
at android.os.Looper.loop(Looper.java:164) 
at android.app.ActivityThread.main(ActivityThread.java:6494) 
at java.lang.reflect.Method.invoke(Native Method) 
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438) 
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807) 

Gradle error

Using Android Studio 2.3 on Windows 10 (64-bit).

Error:(4, 0) Declaring custom 'clean' task when using the standard Gradle lifecycle plugins is not allowed.
Open File

Gradle sync failed: Declaring custom 'clean' task when using the standard Gradle lifecycle plugins is not allowed.
Consult IDE log for more details (Help | Show Log)

Pushbutton on electrical circuit

Hi,

Code installs ok but having problems with building the physical electrical circuit; in particular the push button. The push button has two pins and I can't work out how to align it so that it will make a circuit. Would really appreciate assistance here.

Cheers

G

Sample app doesn't start

I have a devkit distributed at Google Dev Days in 2018 in Poland

I was able to build and deploy the app to the board (iot_imx7d_pico 0.8.0 Dev preview 8) but when I run
adb shell am start com.example.androidthings.imageclassifier/.ImageClassifierActivity it only blinks the screen and nothing happens. Console output:
Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=com.example.androidthings.imageclassifier/.ImageClassifierActivity }

Is there any way to check the log? Or is there a known solution?

Custom Trained Model Not Working

This application works fine with the mobilenet_quant_v1_224.tflite model. I've trained a custom model following the TensorFlow for Poets Google codelab and created the graph using this script:
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/flower_photos

To produce a Lite model for this Android Things sample, I followed the tensorflow-for-poets-2-tflite Google codelab and converted the graph using this script:
toco \
  --input_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,${IMAGE_SIZE},${IMAGE_SIZE},3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT \
  --input_data_type=FLOAT

After capturing on a Raspberry Pi 3 Model B, it gives me this error:
2018-06-28 12:13:09.115 7685-7735/com.example.androidthings.imageclassifier E/AndroidRuntime: FATAL EXCEPTION: BackgroundThread
Process: com.example.androidthings.imageclassifier, PID: 7685
java.lang.IllegalArgumentException: Failed to get input dimensions. 0-th input should have 602112 bytes, but found 150528 bytes.
at org.tensorflow.lite.NativeInterpreterWrapper.getInputDims(Native Method)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:98)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:142)
at org.tensorflow.lite.Interpreter.run(Interpreter.java:120)
at com.example.androidthings.tensorflow.classifier.TensorFlowImageClassifier.doRecognize(TensorFlowImageClassifier.java:99)
at com.example.androidthings.tensorflow.ImageClassifierActivity.onImageAvailable(ImageClassifierActivity.java:244)
at android.media.ImageReader$ListenerHandler.handleMessage(ImageReader.java:812)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:164)
at android.os.HandlerThread.run(HandlerThread.java:65)

Please help with this; I am a beginner with TensorFlow.
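For what it's worth, the two byte counts in that error decode cleanly: 150528 = 224 × 224 × 3 bytes is the input buffer the sample allocates for a quantized (1-byte-per-channel) model, while 602112 is the same tensor at 4 bytes per channel, i.e. what a FLOAT model (as produced by the toco flags above) expects. A quick sanity check of that arithmetic (illustrative only, not the sample's code):

```java
public class InputSizeCheck {
    public static void main(String[] args) {
        int size = 224, channels = 3;
        // Quantized model: 1 byte (uint8) per channel value.
        int quantBytes = size * size * channels;
        // Float model: 4 bytes (float32) per channel value.
        int floatBytes = size * size * channels * 4;
        System.out.println(quantBytes + " " + floatBytes);
    }
}
```

So the likely mismatch is a float model being fed the quantized-sized buffer; either convert with quantized inference or resize the input buffer (and normalization) for float input.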

java.lang.IllegalArgumentException: No intent supplied

When I run
adb shell am start
I get
java.lang.IllegalArgumentException: No intent supplied
    at android.content.Intent.parseCommandArgs(Intent.java:6682)
    at com.android.server.am.ActivityManagerShellCommand.makeIntent(ActivityManagerShellCommand.java:278)
    at com.android.server.am.ActivityManagerShellCommand.runStartActivity(ActivityManagerShellCommand.java:328)
    at com.android.server.am.ActivityManagerShellCommand.onCommand(ActivityManagerShellCommand.java:141)
    at android.os.ShellCommand.exec(ShellCommand.java:96)
    at com.android.server.am.ActivityManagerService.onShellCommand(ActivityManagerService.java:15014)
    at android.os.Binder.shellCommand(Binder.java:594)
    at android.os.Binder.onTransact(Binder.java:492)
    at android.app.IActivityManager$Stub.onTransact(IActivityManager.java:4243)
    at com.android.server.am.ActivityManagerService.onTransact(ActivityManagerService.java:2919)
    at android.os.Binder.execTransact(Binder.java:697)

Image Classifier OpenGL ES API Error

Hi,

I've been running into a couple of issues with the image processing. Could anyone shed some light on how to address this?

I'm running the sample code on master, adapted for the Rainbow HAT on top of a Pico i.MX7. My fork is located here, but the only changes to the source shouldn't affect the issue (remapping a button and an LED). Currently using SDK API Level 25 (Android 7.1.1).

Error text below.

Reproduced by

  1. Running app
  2. Pressing button to trigger Camera "Cheese"
06-05 16:10:36.885 1214-2122/com.example.androidthings.imageclassifier E/libEGL: called unimplemented OpenGL ES API
06-05 16:10:36.911 1214-2122/com.example.androidthings.imageclassifier E/SurfaceTextureRenderer: Could not compile shader 35633:
06-05 16:10:36.911 1214-2122/com.example.androidthings.imageclassifier E/SurfaceTextureRenderer:  
06-05 16:10:36.946 1214-2122/com.example.androidthings.imageclassifier E/CameraDeviceGLThread-0: Received exception on GL render thread: 
                                                                                                 java.lang.IllegalStateException: Could not compile shader 35633
                                                                                                     at android.hardware.camera2.legacy.SurfaceTextureRenderer.loadShader(SurfaceTextureRenderer.java:214)
                                                                                                     at android.hardware.camera2.legacy.SurfaceTextureRenderer.createProgram(SurfaceTextureRenderer.java:220)
                                                                                                     at android.hardware.camera2.legacy.SurfaceTextureRenderer.initializeGLState(SurfaceTextureRenderer.java:353)
                                                                                                     at android.hardware.camera2.legacy.SurfaceTextureRenderer.configureSurfaces(SurfaceTextureRenderer.java:673)
                                                                                                     at android.hardware.camera2.legacy.GLThreadManager$1.handleMessage(GLThreadManager.java:89)
                                                                                                     at android.os.Handler.dispatchMessage(Handler.java:98)
                                                                                                     at android.os.Looper.loop(Looper.java:154)
                                                                                                     at android.os.HandlerThread.run(HandlerThread.java:61)

Edit: This may be because the Pico i.MX7 doesn't support OpenGL

Why do I get a dark photo

Hello,
I use a Raspberry Pi 3B and a CSI camera to run this demo.
When I take pictures, the images come out very dark, as if underexposed.
How can I fix this issue?

Android Things object tracking

Can we implement object tracking based on the SSD object detection API on Android Things?
If not, what are the complications? I have not seen even a single object detection implementation based on TensorFlow Lite and Android Things.

Live camera preview?

Are there any samples that build on this project but show a live camera preview before taking the actual picture?

I've seen various samples out there, but I'm struggling to add this functionality given all the boilerplate code needed for the Android camera API.

Predicted labels print in incorrect order

There is a bug here causing predicted labels to print in the wrong order. Traversing a java.util.PriorityQueue with a for-each loop does not guarantee traversal in sorted order. Fixed by converting the PQ to an array, sorting the array with PQ.comparator(), and returning the results in sorted order.

More information on the undefined behavior of traversing PQ here.

I just put in a pull request to fix this issue in the similar Codelabs TF image classifier repo. The same fix to TensorFlowHelper.java will work here.
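To see the bug concretely, here is a minimal, self-contained illustration (not the sample's actual TensorFlowHelper code): iterating a PriorityQueue visits elements in its internal heap layout, not comparator order, whereas draining it with poll() — equivalent in effect to the array-sort fix described above — does yield results in comparator order:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class PriorityQueueOrder {
    public static void main(String[] args) {
        // Max-heap on confidence: poll() returns the highest value first,
        // but the for-each iterator walks the internal heap array instead.
        PriorityQueue<Double> pq = new PriorityQueue<>(Comparator.reverseOrder());
        pq.add(0.12); pq.add(0.81); pq.add(0.55); pq.add(0.07);

        // Buggy pattern: iteration order is unspecified (heap layout).
        // for (Double d : pq) { ... }  // NOT guaranteed to be descending

        // Fix: drain with poll(), which does honor the comparator.
        List<Double> sorted = new ArrayList<>();
        while (!pq.isEmpty()) {
            sorted.add(pq.poll());
        }
        System.out.println(sorted); // highest confidence first
    }
}
```

The PriorityQueue Javadoc states explicitly that its iterator "is not guaranteed to traverse the elements of the priority queue in any particular order."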
