
TensorFlow Lite Micro

An Open Source Machine Learning Framework for Everyone.

Introduction

This is a version of the TensorFlow Lite Micro library for the Raspberry Pi Pico microcontroller. It allows you to run machine learning models on-device to do things like recognize speech, detect people in images, recognize gestures from an accelerometer, and handle other sensor-analysis tasks. This version includes scripts to pull in changes from the upstream Google codebase, and it takes advantage of the RP2040's dual cores to speed up some operations.

Getting Started

First, you'll need to follow the Pico setup instructions to initialize the development environment on your machine. Once that's done, make sure the PICO_SDK_PATH environment variable points to the location of the Pico SDK, either in the shell you're building in, or in the CMake configure environment variable setting of the extension if you're using VS Code.

You should then be able to build the library, tests, and examples. The easiest way to build is using VS Code's CMake integration, by loading the project and choosing the build option at the bottom of the window.

Alternatively, you can build the entire project, including tests, by running the following commands from a terminal in this repo's directory:

mkdir build
cd build
cmake ..
make

What's Included

There are several example applications included. The simplest one to begin with is the hello_world project. This demonstrates the fundamentals of deploying an ML model on a device, driving the Pico's LED in a learned sine-wave pattern. Once you have built the project, a UF2 file you can copy to the Pico should be present at build/examples/hello_world/hello_world.uf2.

Another example is the person detector, but since the Pico doesn't come with image inputs you'll need to write some code to hook up your own sensor. You can find a fork of TFLM for the Arducam Pico4ML that does this at arducam.com/pico4ml-an-rp2040-based-platform-for-tiny-machine-learning/.
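To give a sense of the glue code involved, here's a minimal sketch of feeding a frame into the person detection model and reading the scores back. It assumes the standard 96x96 grayscale person_detection model and the score ordering used by the upstream example; read_camera_frame() is a hypothetical stand-in for whatever your sensor driver provides.

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_log.h"

// Hypothetical sensor hook: fills `dest` with one 96x96 grayscale frame.
extern void read_camera_frame(int8_t* dest, int len);

void RunPersonDetection(tflite::MicroInterpreter* interpreter) {
  TfLiteTensor* input = interpreter->input(0);
  // Copy one frame from the sensor into the model's input tensor.
  read_camera_frame(input->data.int8, 96 * 96);

  if (interpreter->Invoke() != kTfLiteOk) {
    MicroPrintf("Invoke failed");
    return;
  }

  // The standard model emits two int8 scores: no-person and person.
  TfLiteTensor* output = interpreter->output(0);
  MicroPrintf("person: %d, no person: %d",
              output->data.int8[1], output->data.int8[0]);
}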

Contributing

This repository (https://github.com/raspberrypi/pico-tflmicro) is read-only, because it has been automatically generated from the master TensorFlow repository at https://github.com/tensorflow/tensorflow. It's maintained by @petewarden on a best-effort basis, so bugs and PRs may not get addressed. You can generate an updated version of this project by running the command:

sync/sync_with_upstream.sh

This should create a Pico-compatible project from the latest version of the TensorFlow repository.

Learning More

The TensorFlow website has information on training, tutorials, and other resources.

The TinyML Book is a guide to using TensorFlow Lite Micro across a variety of different systems.

TensorFlow Lite Micro: Embedded Machine Learning on TinyML Systems has more details on the design and implementation of the framework.

Licensing

The TensorFlow source code is covered by the Apache 2.0 license described in src/tensorflow/LICENSE; components from other libraries have the appropriate licenses included in their third_party folders.

Contributors

@aallan, @liamfraser, @petewarden, @sandeepmistry


pico-tflmicro's Issues

Unable to get output from model

I've been trying to get an output from the sequential model that I'm running on a Raspberry Pi Pico W, but the output is always the same.

It's probably best to check out the code in the repository I'm currently working on: https://github.com/risb21/pico-shape-detection/tree/main

Here I've defined a method that runs a prediction, calling Invoke() on the micro interpreter and then returning a pointer to the output tensor's data.
https://github.com/risb21/pico-shape-detection/blob/c50be45e462dbe6fd0bb65b4ae5ed76494c5db7a/src/tflite_wrapper.cpp#L94-L108

void* TFLMicro::predict() {
    TfLiteStatus invoke_status = _interpreter->Invoke();

    if (invoke_status != kTfLiteOk) {
        MicroPrintf("Could not Invoke interpreter\n");
        return nullptr;
    }

    _output_tensor = _interpreter->output(0);

    // float y_quantized = _output_tensor->data.f;
    // float y = (y_quantized - _output_tensor->params.zero_point) *
    //           _output_tensor->params.scale;
    return _output_tensor->data.data;
}

But when I read from it, the data stays the same, even though the input accelerometer data is different every time.
https://github.com/risb21/pico-shape-detection/blob/c50be45e462dbe6fd0bb65b4ae5ed76494c5db7a/src/main.cpp#L209-L235

        if (flags & Flag::predict) {
            // Unset predict flag
            flags &= 0xFF ^ Flag::predict;

            float scale = model.input_scale();
            int32_t zp = model.input_zero_point();
            for (int line = 0; line < MAX_RECORD_LEN; line++) {
                input[line*3] = rec_data[line].x;
                input[line*3 + 1] = rec_data[line].y;
                input[line*3 + 2] = rec_data[line].z;
            }

            float *pred = reinterpret_cast<float *>(model.predict());

            if (pred == nullptr) {
                printf("Error in predicting shape\n");
                continue;
            }

            printf("+----------+----------+----------+\n"
                   "|  Circle  |  Square  | Triangle |\n"
                   "+----------+----------+----------+\n"
                   "| %8.3f | %8.3f | %8.3f |\n"
                   "+----------+----------+----------+\n",
                   pred[0], pred[1], pred[2]);

        }


The tflite model has 83 × 3 input nodes and 3 output nodes.
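One thing worth noting in the snippet above is that input_scale() and input_zero_point() are fetched but never used, so the float samples go into the tensor unquantized. If the model's input tensor is int8-quantized, the samples would normally be mapped through the scale and zero point first, along these lines (a sketch reusing the names from the snippet, and assuming input is an int8 view of the input tensor):

// Quantize each float sample into the int8 input tensor using the
// tensor's quantization parameters.
for (int line = 0; line < MAX_RECORD_LEN; line++) {
    input[line*3]     = static_cast<int8_t>(rec_data[line].x / scale + zp);
    input[line*3 + 1] = static_cast<int8_t>(rec_data[line].y / scale + zp);
    input[line*3 + 2] = static_cast<int8_t>(rec_data[line].z / scale + zp);
}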

Quantized model not working

Hello,

I am trying to run a quantized model on the Pico W. However, when I flash it with the quantized model, the Pico W never even comes up: the device isn't recognized and I can't get any output from it. I modified the hello_world example for this test.

Running the same model unquantized works fine, however: the device is recognized and I get output normally.

The quantized model works on the Arduino Nano 33 BLE and the Coral Micro, so I doubt the model itself is the problem.

/* Copyright 2022 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

#include "constants.h"
#include <stdio.h>
#include "pico/stdlib.h"
#include "hello_world_float_model_data.h"
#include "main_functions.h"
#include "output_handler.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_log.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
//#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/system_setup.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;

constexpr int kTensorArenaSize = 20000;
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

absolute_time_t endTime;
absolute_time_t startTime;
absolute_time_t invokeDuration;
const int8_t test_data[] = { 
9,9,6,9,8,6,8,2,9,1,3,6,0,5,1,4,1,2,6,3,8,7,2,0,0,5,8,0,6,8,8,4,8,7,3,4,0,7,9,5,2,0,5,2,6,1,5,0,3,1,1,8,7,3,5,1,9,6,6,9,6,8,6,4,0,4,8,8,3,3,2,5,1,8,2,8,0,3,8,1,1,1,3,4,1,2,5,1,3,5,4,0,3,4,6,1,1,1,5,2,9,0,5,5,4,5,7,3,7,5,9,8,1,1,2,3,7,6,6,9,7,9,8,1,6,4,6,5,2,5,4,4,6,8,4,6,5,8,0,1,0,2,5,4,7,3,6,6,0,9,6,6,7,6,8,6,2,1,8,8,2,0,5,9,2,6,9,3,4,1,8,1,1,7,2,3,2,1,8,2,4,5,2,6,0,0,4,4,7,3,8,5,4,8,7,2,2,3,0,0,3,3,9,5,0,5,0,3,7,1,1,3,3,4,8,3,2,1,8,9,1,0,5,2,3,0,5,3,2,4,7,8,5,3,4,1,5,7,6,2,0,6,9,7,7,4,3,6,5,3,5,0,5,8,6,5,3,2,9,3,4,6,0,2,3,1,6,4,4,9,7,8,1,0,3,9,5,5,3,7,6,8,0,3,5,8,2,0,8,0,6,5,1,1,7,7,2,4,7,1,6,2,3,5,4,9,5,2,8,3,4,9,8,7,2,8,8,5,3,6,7,3,9,0,2,6,9,5,9,2,6,4,8,8,1,8,3,2,9,8,8,5,4,4,6,6,4,4,8,7,1,6,8,7,3,9,7,0,3,8,2,0,1,4,4,2,7,1,3,6,4,2,7,8,8,7,0,7,0,2,0,2,1,8,3,6,3,8,7,7,1,1,6,5,7,3,4,6,5,4,9,3,2,3,2,6,1,4,5,7,7,2,9,9,7,4,9,4,6,6,6,9,1,1,0,1,6,5,4,7,5,4,5,5,9,4,2,8,5,5,9,0,6,9,9,4,2,5,6,8,6,6,5,6,7,2,9,1,3,4,0,6,9,4,1,0,6,5,4,2,5,4,3,5,1,1,5,4,9,6,1,4,0,3,1,5,5,3,9,4,0,2,4,6,6,7,3,5,5,8,7,5,5,7,3,6,6,5,5,5,7,3,0,7,4,5,9,6,4,0,6,1,2,1,3,4,3,9,0,8,1,1,3,7,3,3,1,1,5,9,2,3,9,8,6,6,3,8,5,9,8,2,2,3,2,1,7,7,2,0,2,3,4,5,6,6,1,5,0,7,6,7,6,8,9,7,8,0,3,4,5,2,6,0,3,9,2,8,0,6,9,5,8,2,8,8,7,0,8,5,4,8,0,6,9,4,4,0,6,5,6,0,0,0,1,7,6,5,6,9,9,6,3,4,3,4,8,0,9,0,2,7,2,0,1,0,5,9,9,6,7,1,5,0,7,4,7,3,5,1,1,4,7,7,1,1,6,7,3,6,2,1,7,3,7,3,3,2,4,7,9,9,9,0,3,9,2,8,7,1,7,0,7,6,0,2,3,3,9,1,0,8,3,2,7,7,4,9,1,5,2,4,5,6,5,8,1,7,9,7,9,7,2,5,5,7,7,2,0,9,7,7,4,1,3,6,2,4,2,2,0,9,6,7,5,9,8,6,2,6,0,8,};


// The name of this function is important for Arduino compatibility.
void setup() {
  stdio_init_all();
  tflite::InitializeTarget();

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_hello_world_float_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    MicroPrintf(
        "Model provided is schema version %d not equal "
        "to supported version %d.",
        model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();
  resolver.AddQuantize();
  resolver.AddDequantize();
  //static tflite::AllOpsResolver resolver;
  // if (resolve_status != kTfLiteOk) {
  //   MicroPrintf("Op resolution failed");
  //   return;
  // }

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    MicroPrintf("AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);
  output = interpreter->output(0);

  // Keep track of how many inferences we have performed.
  inference_count = 0;
}

// The name of this function is important for Arduino compatibility.
void loop() {

  for (int i = 0; i < 240; i++) {
    MicroPrintf("filling tensor");
    MicroPrintf("%d", i);
    input->data.int8[i] = test_data[i];
  }
  // Run inference, and report any error
  startTime = get_absolute_time();
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    MicroPrintf("Invoke failed");
    return;
  }
  endTime = get_absolute_time();
  invokeDuration = endTime - startTime;

  MicroPrintf("time elapsed %llu \n", invokeDuration);
  sleep_ms(1000);
  // Increment the inference_counter, and reset it if we have reached
  // the total number per cycle
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}
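One mechanical problem stands out in the listing above: the MicroMutableOpResolver is declared with a capacity of three, but five Add* calls follow. Each call beyond the declared capacity returns kTfLiteError and the op is never registered, so the quantized model can't find its Quantize/Dequantize kernels. Whether or not that explains the device failing to enumerate, the declaration needs to match, along these lines:

// The template parameter is the maximum number of ops that can be
// registered; it must cover every Add* call below.
static tflite::MicroMutableOpResolver<5> resolver;
if (resolver.AddFullyConnected() != kTfLiteOk ||
    resolver.AddSoftmax() != kTfLiteOk ||
    resolver.AddReshape() != kTfLiteOk ||
    resolver.AddQuantize() != kTfLiteOk ||
    resolver.AddDequantize() != kTfLiteOk) {
  MicroPrintf("Op registration failed");
  return;
}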

Will this port work for other microcontrollers?

Hi @petewarden

I believe you were also a big part of the TFLite Micro for Arduino work at https://github.com/tensorflow/tflite-micro-arduino-examples

Can this pico-tflmicro repo be easily ported to other microcontrollers, or is it tied specifically to the Pico? If it's Pico-only, could some of the concepts you used to port the main TFLite Micro repo work for other microcontrollers? I am mainly interested in STM32-based boards: the Portenta H7, Nicla Vision, RAK2270, etc.

Any suggestions?

Is this repo still "read-only"?

With the lack of support from upstream, is this effectively the place where the RP2040 port of TFLite Micro "lives" now?

If so, we should get some nightlies going and expand the README, the getting-started instructions, and the associated documentation.

Looks like @kilograham has a commit from back in 2020 that we should probably merge in, I guess? Graham, do you remember what that was about?

Hello World example not working?

I was having trouble getting the Hello World example to work. The code builds and runs on the Raspberry Pi Pico, but the LED is constantly on.

When debugging, I noticed that the x_quantized value is always 0, despite the x value moving up and down. With further debugging, I noticed that input->params.scale and input->params.zero_point are always 0.

Am I doing something wrong? Looking at the original TensorFlow Lite example, I see that the x value isn't quantised, and wondered if keeping it a float would work for the Pico. It seems to, so I'm putting the code here in case anyone else encounters this issue (happy to submit a PR if helpful).

#include "constants.h"
#include "hello_world_float_model_data.h"
#include "main_functions.h"
#include "output_handler.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_log.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/system_setup.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
const tflite::Model *model = nullptr;
tflite::MicroInterpreter *interpreter = nullptr;
TfLiteTensor *input = nullptr;
TfLiteTensor *output = nullptr;
int inference_count = 0;

constexpr int kTensorArenaSize = 2000;
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

// The name of this function is important for Arduino compatibility.
void setup() {
  tflite::InitializeTarget();

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_hello_world_float_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    MicroPrintf(
        "Model provided is schema version %d not equal "
        "to supported version %d.",
        model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroMutableOpResolver<1> resolver;
  TfLiteStatus resolve_status = resolver.AddFullyConnected();
  if (resolve_status != kTfLiteOk) {
    MicroPrintf("Op resolution failed");
    return;
  }

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    MicroPrintf("AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);
  output = interpreter->output(0);

  // Keep track of how many inferences we have performed.
  inference_count = 0;
}

// The name of this function is important for Arduino compatibility.
void loop() {
  // Calculate an x value to feed into the model. We compare the current
  // inference_count to the number of inferences per cycle to determine
  // our position within the range of possible x values the model was
  // trained on, and use this to calculate a value.
  float position = static_cast<float>(inference_count) /
                   static_cast<float>(kInferencesPerCycle);
  float x = position * kXrange;

  // int8_t x_quantized = x / input->params.scale + input->params.zero_point;
  // Place the quantized input in the model's input tensor
  // input->data.int8[0] = x_quantized;
  input->data.f[0] = x;

  // Run inference, and report any error
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    MicroPrintf("Invoke failed on x: %f\n", static_cast<double>(x));
    return;
  }

  // Obtain the quantized output from model's output tensor
  // int8_t y_quantized = output->data.int8[0];
  // Dequantize the output from integer to floating-point
  // float y = (y_quantized - output->params.zero_point) * output->params.scale;
  float y = output->data.f[0];

  // Output the results. A custom HandleOutput function can be implemented
  // for each supported hardware target.
  HandleOutput(x, y);

  // Increment the inference_counter, and reset it if we have reached
  // the total number per cycle
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}

PS: Thank you @petewarden for all your great work!

microspeech examples missing

Hi,

I noticed the micro speech examples are missing in the latest commit. Is there a reason for this?

Thanks,
Lukas

Simple model does not work

Hi, thanks for your great work.

I'm trying to write code to run inference with a simple XOR-gate model.

import numpy as np

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop
from tensorflow.lite.python import lite

X_train = np.array([[0, 0],
                    [255, 0],
                    [0, 255],
                    [255, 255]], dtype = 'int8')
Y_train = np.array([0,
                    255,
                    255,
                    0], dtype = 'int8')
model = Sequential()
output_count_layer0 = 2
model.add(
    Dense(
      output_count_layer0,
      input_shape=(2, ),
      activation='sigmoid'))  # Need to specify input shape for input layer
output_count_layer1 = 1
model.add(Dense(output_count_layer1, activation='linear'))
model.compile(
    loss='mean_squared_error', optimizer=RMSprop(), metrics=['accuracy'])
BATCH_SIZE = 4
history = model.fit(
    X_train, Y_train, batch_size=BATCH_SIZE, epochs=3600, verbose=1)
X_test = X_train
Y_test = Y_train
score = model.evaluate(X_test, Y_test, verbose=0)
model.save('xor_model.h5')

converter = lite.TFLiteConverter.from_keras_model_file('xor_model.h5')
#converter.optimizations = [lite.Optimize.DEFAULT]
#converter.target_spec.supported_types = [tf.float32]
tflite_model = converter.convert()
open('xor_model.tflite', 'wb').write(tflite_model)
And here's the inference code on the Pico side:

#include <stdio.h>
#include "pico/stdlib.h"

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

alignas(8) const unsigned char xor_model_tflite[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, 0x00, 0x00, 0x12, 0x00,
  0x1c, 0x00, 0x04, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x14, 0x00,
  0x00, 0x00, 0x18, 0x00, 0x12, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
  0x14, 0x00, 0x00, 0x00, 0x1c, 0x00, 0x00, 0x00, 0x9c, 0x00, 0x00, 0x00,
  0x1c, 0x00, 0x00, 0x00, 0x44, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
  0x64, 0x02, 0x00, 0x00, 0xb4, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
  0xa4, 0x00, 0x00, 0x00, 0x0a, 0x00, 0x00, 0x00, 0xb0, 0x04, 0x00, 0x00,
  0xac, 0x04, 0x00, 0x00, 0xd0, 0x03, 0x00, 0x00, 0x58, 0x03, 0x00, 0x00,
  0xf8, 0x02, 0x00, 0x00, 0x9c, 0x02, 0x00, 0x00, 0x98, 0x04, 0x00, 0x00,
  0x94, 0x04, 0x00, 0x00, 0x90, 0x04, 0x00, 0x00, 0x38, 0x00, 0x00, 0x00,
  0x01, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x08, 0x00, 0x0c, 0x00,
  0x04, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00,
  0x09, 0x00, 0x00, 0x00, 0x13, 0x00, 0x00, 0x00, 0x6d, 0x69, 0x6e, 0x5f,
  0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x76, 0x65, 0x72, 0x73,
  0x69, 0x6f, 0x6e, 0x00, 0x8a, 0xfc, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00,
  0x10, 0x00, 0x00, 0x00, 0x31, 0x2e, 0x31, 0x34, 0x2e, 0x30, 0x00, 0x00,
  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x00, 0x00, 0x00,
  0x4d, 0x4c, 0x49, 0x52, 0x20, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74,
  0x65, 0x64, 0x2e, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x18, 0x00, 0x04, 0x00,
  0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x14, 0x00, 0x0e, 0x00, 0x00, 0x00,
  0x14, 0x00, 0x00, 0x00, 0x34, 0x00, 0x00, 0x00, 0x38, 0x00, 0x00, 0x00,
  0x3c, 0x00, 0x00, 0x00, 0x48, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00,
  0xb0, 0x03, 0x00, 0x00, 0x40, 0x03, 0x00, 0x00, 0xb4, 0x02, 0x00, 0x00,
  0x60, 0x02, 0x00, 0x00, 0xfc, 0x01, 0x00, 0x00, 0x90, 0x01, 0x00, 0x00,
  0xe0, 0x00, 0x00, 0x00, 0x60, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
  0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00,
  0x03, 0x00, 0x00, 0x00, 0x28, 0x01, 0x00, 0x00, 0x94, 0x00, 0x00, 0x00,
  0x10, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x6d, 0x61, 0x69, 0x6e,
  0x00, 0x00, 0x00, 0x00, 0xfe, 0xfe, 0xff, 0xff, 0x00, 0x00, 0x00, 0x08,
  0x18, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
  0x6c, 0xfc, 0xff, 0xff, 0x01, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00,
  0x03, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
  0x02, 0x00, 0x00, 0x00, 0xe0, 0xfc, 0xff, 0xff, 0x14, 0x00, 0x00, 0x00,
  0x08, 0x00, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00, 0x30, 0x00, 0x00, 0x00,
  0x10, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
  0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff,
  0x01, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x49, 0x64, 0x65, 0x6e,
  0x74, 0x69, 0x74, 0x79, 0x00, 0x00, 0x00, 0x00, 0xac, 0xfd, 0xff, 0xff,
  0x00, 0x00, 0x0a, 0x00, 0x10, 0x00, 0x04, 0x00, 0x08, 0x00, 0x0c, 0x00,
  0x0a, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
  0x04, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00,
  0x01, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x5e, 0xff, 0xff, 0xff,
  0x00, 0x00, 0x00, 0x0e, 0x01, 0x00, 0x00, 0x00, 0x5c, 0xfd, 0xff, 0xff,
  0x14, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00,
  0x40, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
  0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
  0xff, 0xff, 0xff, 0xff, 0x02, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00,
  0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x2f, 0x64,
  0x65, 0x6e, 0x73, 0x65, 0x2f, 0x53, 0x69, 0x67, 0x6d, 0x6f, 0x69, 0x64,
  0x00, 0x00, 0x00, 0x00, 0x38, 0xfe, 0xff, 0xff, 0x00, 0x00, 0x0e, 0x00,
  0x14, 0x00, 0x00, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x10, 0x00,
  0x0e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x18, 0x00, 0x00, 0x00,
  0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x7c, 0xfd, 0xff, 0xff,
  0x01, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
  0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
  0x00, 0x00, 0x0a, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x00, 0x00, 0x08, 0x00,
  0x0a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09, 0x01, 0x00, 0x00, 0x00,
  0x08, 0xfe, 0xff, 0xff, 0x14, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00,
  0x24, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
  0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
  0x02, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0x02, 0x00, 0x00, 0x00,
  0x18, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69,
  0x61, 0x6c, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x2f, 0x42, 0x69, 0x61,
  0x73, 0x41, 0x64, 0x64, 0x00, 0x00, 0x00, 0x00, 0x00, 0xfe, 0xff, 0xff,
  0xde, 0xfe, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00,
  0x5d, 0x07, 0x2c, 0xc0, 0x69, 0xd9, 0x18, 0xc0, 0xd6, 0xfe, 0xff, 0xff,
  0x10, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00,
  0x30, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
  0x02, 0x00, 0x00, 0x00, 0x19, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
  0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65,
  0x5f, 0x31, 0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x00, 0x00, 0x00,
  0x58, 0xfe, 0xff, 0xff, 0x36, 0xff, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00,
  0x10, 0x00, 0x00, 0x00, 0x33, 0x3e, 0x7b, 0x40, 0x22, 0x81, 0x15, 0xc0,
  0x71, 0xec, 0x55, 0xc0, 0xd1, 0xa6, 0xaf, 0xbf, 0x36, 0xff, 0xff, 0xff,
  0x10, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00,
  0x2c, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
  0x02, 0x00, 0x00, 0x00, 0x17, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
  0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65,
  0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x00, 0xb4, 0xfe, 0xff, 0xff,
  0x92, 0xff, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
  0xba, 0xa5, 0xf1, 0x3f, 0x86, 0xff, 0xff, 0xff, 0x10, 0x00, 0x00, 0x00,
  0x03, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x48, 0x00, 0x00, 0x00,
  0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x32, 0x00, 0x00, 0x00,
  0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x2f, 0x64,
  0x65, 0x6e, 0x73, 0x65, 0x5f, 0x31, 0x2f, 0x42, 0x69, 0x61, 0x73, 0x41,
  0x64, 0x64, 0x2f, 0x52, 0x65, 0x61, 0x64, 0x56, 0x61, 0x72, 0x69, 0x61,
  0x62, 0x6c, 0x65, 0x4f, 0x70, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72,
  0x63, 0x65, 0x00, 0x00, 0x04, 0x00, 0x06, 0x00, 0x04, 0x00, 0x00, 0x00,
  0x00, 0x00, 0x06, 0x00, 0x08, 0x00, 0x04, 0x00, 0x06, 0x00, 0x00, 0x00,
  0x04, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x83, 0xca, 0x5e, 0xc0,
  0x88, 0xc9, 0x99, 0x3f, 0x00, 0x00, 0x0e, 0x00, 0x14, 0x00, 0x04, 0x00,
  0x00, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x0e, 0x00, 0x00, 0x00,
  0x10, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
  0x44, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
  0x30, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69,
  0x61, 0x6c, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x2f, 0x42, 0x69, 0x61,
  0x73, 0x41, 0x64, 0x64, 0x2f, 0x52, 0x65, 0x61, 0x64, 0x56, 0x61, 0x72,
  0x69, 0x61, 0x62, 0x6c, 0x65, 0x4f, 0x70, 0x2f, 0x72, 0x65, 0x73, 0x6f,
  0x75, 0x72, 0x63, 0x65, 0x00, 0x00, 0x00, 0x00, 0xa4, 0xff, 0xff, 0xff,
  0x14, 0x00, 0x18, 0x00, 0x04, 0x00, 0x00, 0x00, 0x08, 0x00, 0x0c, 0x00,
  0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x00, 0x14, 0x00, 0x00, 0x00,
  0x14, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00,
  0x30, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
  0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
  0xff, 0xff, 0xff, 0xff, 0x02, 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00,
  0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x00,
  0xfc, 0xff, 0xff, 0xff, 0x04, 0x00, 0x04, 0x00, 0x04, 0x00, 0x00, 0x00

};

namespace {
  tflite::ErrorReporter* error_reporter = nullptr;
  const tflite::Model* model = nullptr;
  tflite::MicroInterpreter* interpreter = nullptr;
  TfLiteTensor* input = nullptr;
  TfLiteTensor* output = nullptr;
  constexpr int kTensorArenaSize = 2000;
  uint8_t tensor_arena[kTensorArenaSize];
}

int main() {
  stdio_init_all();

  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  model = tflite::GetModel(xor_model_tflite);

  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return 1;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::AllOpsResolver resolver;

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus status = interpreter->AllocateTensors();
  if (status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return 1;
  }

  input = interpreter->input_tensor(0);
  output = interpreter->output_tensor(0);
  int8_t x_quantized = 1 / input->params.scale + input->params.zero_point;

  while (true) {
    printf("%d, %d, %d\n", input->type, input->dims[0].data[0], input->dims[0].size);
    printf("%d, %d, %d\n", output->type, output->dims[0].data[0], output->dims[0].size);
    int c1 = getchar();
    printf("%c\n", c1);
    int c2 = getchar();
    printf("%c\n", c2);
    //input->data.f16[0].data = c1 == '1' ? 1.0f : 0.0f;
    //input->data.f16[1].data = c2 == '1' ? 1.0f : 0.0f;
    int i;
    for (i = 1; i < 235; i++)
      input->data.int8[i] = 0;
    input->data.int8[0] = c1 == '1' ? 255 : 0;
    input->data.int8[1] = c2 == '1' ? 255 : 0;
    status = interpreter->Invoke();
    if (status != kTfLiteOk) {
      TF_LITE_REPORT_ERROR(error_reporter, "Invoke() failed");
      continue;
    }
    printf("%d,%d\n", output->data.int8[0], output->data.int8[1]);
    sleep_ms(1000);
  }
  return 0;
}

But the output is always zero. Is this a bug in pico-tflmicro?
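For what it's worth, the converter script above never enables quantization (the optimizations line is commented out), so the generated model's input and output tensors are float32. Writing the inputs through data.int8 and reading the result through data.int8 therefore reinterprets float memory, which would produce exactly this kind of stuck output. A sketch of the float I/O path, matching the commented-out lines in the listing (whether the model expects 0/1 or 0/255 inputs depends on how it was trained):

// The unquantized XOR model uses float32 tensors, so read and write
// them through the .f accessor instead of .int8.
input->data.f[0] = (c1 == '1') ? 1.0f : 0.0f;
input->data.f[1] = (c2 == '1') ? 1.0f : 0.0f;
if (interpreter->Invoke() != kTfLiteOk) {
  TF_LITE_REPORT_ERROR(error_reporter, "Invoke() failed");
} else {
  printf("%f\n", output->data.f[0]);
}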
