
Overview

Azure Sphere Demo

We demonstrate machine learning model deployment on the MT3620 Azure Sphere using Apache TVM. The samples progress from a simple a + b example to a Conv2D operation, and finally to a Keyword Spotting model developed by ARM.

Hardware Requirements

Software Requirements

Getting Started

  1. Clone this repository (use git clone --recursive to clone submodules)
  2. Install TVM
    • NOTE: Ensure you enable LLVM by setting set(USE_LLVM ON). (This repository has been tested against LLVM-10)
    • NOTE: Check out commit f5b02fdb1b5a7b6be79df97035ec1c3b80e3c665 before building.
  3. Set up a virtual environment
$ python3 -m venv _venv
$ . _venv/bin/activate
$ pip3 install -r requirements.txt -c constraints.txt
$ export PYTHONPATH=$(pwd)/python:$PYTHONPATH
$ export PYTHONPATH=$(pwd)/3rdparty/ML_KWS:$PYTHONPATH

Prepare the Hardware

  1. Connect the Azure Sphere board to the PC with a micro USB cable.
  2. From this directory, run make connect to connect to the device. (This requires sudo access.)
  3. Enable development mode by running the make enable_development command.
  4. Optional: Follow this to enable network capability:
    • Disconnect the device and attach the network shield.
    • Set up a static IP:
      Address: 192.168.0.10
      Netmask: 24
      Gateway: 192.168.0.1

Run Samples

The most basic sample is the a + b operation. In this example, we deploy a simple operation on Azure Sphere using the TVM C runtime. To deploy it, run:

$ make delete_a7
$ make cleanall
$ make test
$ make program

After programming, the Azure Sphere reads the TVM graph and parameters from flash and creates the runtime. It then reads the input data from flash, passes it to the TVM Relay model, and finally compares the output against the expected output generated on an x86 machine. If the results match, LED1 on the Azure Sphere turns green.
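The device-side pass/fail check can be sketched in Python (illustrative only; the real firmware implements this in C against the TVM C runtime, and the tolerance shown is an assumption):

```python
# Sketch of the on-device result check. The board compares the TVM
# output against a reference computed on the x86 host and, on a match,
# turns LED1 green. Names and tolerance here are illustrative.
def outputs_match(device_out, expected, tol=1e-5):
    """Element-wise float comparison with a small tolerance."""
    if len(device_out) != len(expected):
        return False
    return all(abs(d - e) <= tol for d, e in zip(device_out, expected))

a = [1.0, 2.5, -3.0]
b = [0.5, 0.5, 3.0]
expected = [x + y for x, y in zip(a, b)]        # x86 reference for a + b
led1_green = outputs_match([1.5, 3.0, 0.0], expected)
```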

The next sample is the Conv2D operation. To run this example, follow the previous instructions using conv2d instead of test. If you want to use the network capabilities, build with -DAS_NETWORKING=1, and make sure to follow the earlier instructions on connecting the Ethernet shield to the Azure Sphere and setting up the network.

Debugging

Azure Sphere provides debugging capabilities over the micro USB connection with no extra hardware required. To use the debugger, open Visual Studio Code in this directory and follow the instructions. To enable debugging in the samples:

  1. Build the sample with the -DAS_DEBUG=1 option, or set it in the config file.
  2. Use the Start Debugging option in VS Code and watch the output window.

Keyword Spotting (KWS) Model on Azure Sphere

We deploy KWS, a TensorFlow model developed by ARM, on the Azure Sphere Cortex-A7 core using TVM. Enabling this takes several steps, explained in the following subsections. To see the final deployment quickly, run the commands below. This deployment uses a Relay-quantized KWS DS-CNN model, which we build in TVM along with one of the WAV files in samples as input data. We then run the model on the Azure Sphere and compare the TVM output with the expected result from x86. If the results match, a green LED lights up on the board.

$ make delete_a7
$ make cleanall
$ make kws
$ make program
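The sample WAV inputs are 16 kHz, 16-bit mono PCM. A standard-library sketch of decoding such a file into normalized floats, the form an audio front end consumes (the synthetic file, its name, and the scaling are illustrative):

```python
import struct
import wave

# Write a tiny synthetic 16-bit mono WAV file, then decode it back to
# floats in [-1, 1), the way KWS preprocessing would read sample input.
samples = [0, 16384, -16384, 32767]
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(16000)    # 16 kHz; one second would be 16000 frames
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))

with wave.open("tone.wav", "rb") as w:
    n = w.getnframes()
    raw = w.readframes(n)
decoded = [s / 32768.0 for s in struct.unpack("<%dh" % n, raw)]
```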

The following subsections explain how we achieve this deployment in more detail.

Importing KWS, Quantization and Accuracy Test

KWS models are originally developed in TensorFlow. Here we focus on the DS-CNN pre-trained models provided by ARM. To import the model and perform Relay quantization, run the command below. It saves the Relay module as a pickle file, which we later use to build the runtime.

$ python3 -m model.kws.kws --export --quantize --global-scale 4.0 -o build

Here is the output:

INFO: Quantizing...
INFO: Global Scale: 4.0
INFO: build/module_gs_4.0.pickle saved!
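Conceptually, the global scale tells the quantizer to assume intermediate values lie in [-global_scale, global_scale] and to map that range onto 8-bit integers. A rough standard-library sketch of that mapping (TVM's relay.quantize pass is considerably more involved; this only illustrates the idea):

```python
def quantize_i8(values, global_scale=4.0, bits=8):
    """Map floats in [-global_scale, global_scale] to signed ints,
    saturating anything outside the range (illustrative sketch)."""
    step = global_scale / (2 ** (bits - 1))   # value of one int8 step
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return [max(lo, min(hi, round(v / step))) for v in values]

def dequantize_i8(qvalues, global_scale=4.0, bits=8):
    """Recover approximate floats from the quantized integers."""
    step = global_scale / (2 ** (bits - 1))
    return [q * step for q in qvalues]

q = quantize_i8([0.0, 1.0, -4.0, 10.0])   # 10.0 saturates to 127
```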

To test the accuracy of the quantized model, run the following. It loads the Relay module, runs 1000 audio samples from the KWS dataset, and reports the accuracy.

$ python3 -m model.kws.kws --test 1000 --module build/module_gs_4.0.pickle 

This task takes a few minutes the first time, because the dataset has to be downloaded. Here is the output:

INFO: testing 1000 samples
Accuracy for 1000 samples: 0.907
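The reported number is plain top-1 accuracy over the tested samples. A sketch using the twelve KWS output classes (the class order here is an assumption):

```python
# The 12 KWS output classes; the ordering is illustrative.
LABELS = ["silence", "unknown", "yes", "no", "up", "down",
          "left", "right", "on", "off", "stop", "go"]

def top1_accuracy(predicted, actual):
    """Fraction of samples whose predicted label matches the truth."""
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)

acc = top1_accuracy(["yes", "no", "up", "go"],
                    ["yes", "no", "down", "go"])   # 3 of 4 correct
```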

Now we can build the TVM runtime graph using this module. The following command uses the saved quantized model to build the runtime graph with no tuning:

$ python3 -m build_model --keyword --module build/module_gs_4.0.pickle -o build

Here is the output:

INFO: keyword_model.o saved!
INFO: keyword_graph.bin saved!
INFO: keyword_graph.json saved!
INFO: keyword_params.bin saved!
...
INFO: sample audio file used: python/model/kws/samples/silence.wav
INFO: keyword_data.bin saved!
INFO: keyword_output.bin saved!
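We assume the .bin artifacts hold raw little-endian float32 tensors; verify this against the build script before relying on it. A small helper for decoding such buffers offline (the packed bytes below stand in for the contents of keyword_output.bin):

```python
import struct

def read_f32(blob):
    """Decode a raw little-endian float32 buffer into a list of floats."""
    n = len(blob) // 4
    return list(struct.unpack("<%df" % n, blob))

# Round-trip on a synthetic buffer (stand-in for a .bin artifact).
blob = struct.pack("<3f", 0.5, -2.0, 5.5)
vals = read_f32(blob)
```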

Real-time Demo

We deployed an end-to-end demo of the Keyword Spotting model on Azure Sphere. Audio pre-processing and the microphone interface are implemented on the Cortex-M4 core as a partner application, with TVM running on the Cortex-A7.

  1. Connect a microphone with an analog interface to the Azure Sphere ADC interface (we used a MAX4466). Follow the instructions from the partner app.

    • NOTE: If you don't have a microphone, you can deploy DEMO1 from the partner app, which reads pre-recorded data from memory.
  2. Follow the steps in apps/kws_mic/README.md to deploy the partner app on the Cortex-M4. You can choose DEMO1 (a pre-loaded .wav file) or DEMO2 (recorded live from the microphone).

  3. Deploy the TVM runtime application on Cortex-A7:

    make cleanall
    make kws_demo
    make program
    
  4. Pressing button B acquires one second of speech from the microphone and shows the resulting label on the four LEDs. Here are the LED colors for each label:

    | Label   | LEDs     |
    |---------|----------|
    | Yes     | ⚫⚫💚💚 |
    | No      | ⚫⚫🔴🔴 |
    | Up      | ⚫⚫💚⚫ |
    | Down    | ⚫⚫🔵⚫ |
    | Left    | ⚪⚫⚫⚫ |
    | Right   | ⚫⚫⚫⚪ |
    | On      | ⚪⚫⚪⚪ |
    | Off     | 🔴⚫⚫⚫ |
    | Stop    | 🔴⚫🔴🔴 |
    | Go      | 💚⚫💚💚 |
    | Silence | ⚫⚫⚫🔵 |
    | Unknown | ⚫⚫⚫💚 |
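The label-to-LED mapping above can be expressed as a simple lookup (a sketch; the firmware's actual encoding of LED states may differ):

```python
# Label -> four-LED pattern, transcribed from the table above.
LED_PATTERNS = {
    "yes":     "⚫⚫💚💚", "no":      "⚫⚫🔴🔴",
    "up":      "⚫⚫💚⚫", "down":    "⚫⚫🔵⚫",
    "left":    "⚪⚫⚫⚫", "right":   "⚫⚫⚫⚪",
    "on":      "⚪⚫⚪⚪", "off":     "🔴⚫⚫⚫",
    "stop":    "🔴⚫🔴🔴", "go":      "💚⚫💚💚",
    "silence": "⚫⚫⚫🔵", "unknown": "⚫⚫⚫💚",
}

def leds_for(label):
    """Pattern for a recognized label; anything else shows unknown."""
    return LED_PATTERNS.get(label, LED_PATTERNS["unknown"])
```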


Contributors

mehrdadh

azsphere's Issues

model.kws.kws failed on JSONReader: Unknown field global_key, candidates are:

Hi octoml team

Running this command failed on JSONReader: Unknown field global_key, candidates are:

$ python3 -m model.kws.kws --test 1000 --module model/kws/saved/module_gs_4.0.pickle --debug

Error messages are:

Traceback (most recent call last):
  File "python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "Github/azsphere/python/model/kws/kws.py", line 335, in <module>
    test_accuracy(OPTS, target='llvm --system-lib')
  File "Github/azsphere/python/model/kws/kws.py", line 230, in test_accuracy
    mod = pickle.load(handle)
  File "Github/tvm-upstream/python/tvm/runtime/object.py", line 88, in __setstate__
    self.__init_handle_by_constructor__(_ffi_node_api.LoadJSON, handle)
  File "Github/tvm-upstream/python/tvm/_ffi/_ctypes/object.py", line 131, in __init_handle_by_constructor__
    handle = __init_by_constructor__(fconstructor, args)
  File "Github/tvm-upstream/python/tvm/_ffi/_ctypes/packed_func.py", line 260, in __init_handle_by_constructor__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) 9   libtvm.dylib                        0x000000010fec6206 TVMFuncCall + 70
  [bt] (7) 8   libtvm.dylib                        0x000000010f2e1968 void tvm::runtime::TypedPackedFunc<tvm::runtime::ObjectRef (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)>::AssignTypedLambda<tvm::runtime::ObjectRef (*)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)>(tvm::runtime::ObjectRef (*)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >))::'lambda'(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const + 472
  [bt] (6) 7   libtvm.dylib                        0x000000010f2d0fbe tvm::LoadJSON(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) + 222
  [bt] (5) 6   libtvm.dylib                        0x000000010f2d1f0e tvm::JSONGraph::Load(dmlc::JSONReader*) + 318
  [bt] (4) 5   libtvm.dylib                        0x000000010f2d9f2e dmlc::JSONObjectReadHelper::ReadAllFields(dmlc::JSONReader*) + 318
  [bt] (3) 4   libtvm.dylib                        0x000000010f2dad7e dmlc::json::ArrayHandler<std::__1::vector<tvm::JSONNode, std::__1::allocator<tvm::JSONNode> > >::Read(dmlc::JSONReader*, std::__1::vector<tvm::JSONNode, std::__1::allocator<tvm::JSONNode> >*) + 238
  [bt] (2) 3   libtvm.dylib                        0x000000010f2db346 tvm::JSONNode::Load(dmlc::JSONReader*) + 566
  [bt] (1) 2   libtvm.dylib                        0x000000010f2da1fc dmlc::JSONObjectReadHelper::ReadAllFields(dmlc::JSONReader*) + 1036
  [bt] (0) 1   libtvm.dylib                        0x000000010f02e5af dmlc::LogMessageFatal::~LogMessageFatal() + 111
  File "Github/tvm-upstream/3rdparty/dmlc-core/include/dmlc/json.h", line 947
JSONReader: Unknown field global_key, candidates are: 
"attrs"
"data"
"keys"
"repr_b64"
"repr_str"
"type_key"

