
Basic Gesture Recognition Using mmWave Sensor - TI AWR1642

This project collects data from the TI AWR1642 over its serial port and lets the user choose one of several neural network architectures - convolutional, ResNet, LSTM, or Transformer - which is then used to recognize and classify the following gestures:

  • None (random non-gestures)
  • Swipe Up
  • Swipe Down
  • Swipe Right
  • Swipe Left
  • Spin Clockwise
  • Spin Counterclockwise
  • Letter Z
  • Letter S
  • Letter X
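
Under the hood, gesture data arrives as raw bytes on the sensor's data serial port. The sketch below is purely illustrative: the port name and baud rate are assumptions (typical values for TI mmWave demos), not values taken from this repository, and the actual parsing lives in the mmwave_gesture/communication package.

import serial

# Illustrative only: port name and baud rate are assumptions and may
# differ on your system.
DATA_PORT = '/dev/ttyACM1'
BAUD_RATE = 921600

with serial.Serial(DATA_PORT, BAUD_RATE, timeout=0.5) as ser:
    chunk = ser.read(4096)  # raw bytes containing one or more frames
    print(f'Read {len(chunk)} bytes from {DATA_PORT}')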

Demo

Getting Started

Dependencies:

  • python 3.8+
  • unzip (optional)
  • curl (optional)

unzip and curl are used by the fetch script.

Installation

Install mmwave_gesture package locally:

git clone https://github.com/vilari-mickopf/mmwave-gesture-recognition.git
cd mmwave-gesture-recognition
pip install -e .

Data and models

You can run the ./fetch script to download and extract:

  • data (20k samples - 2k per class), ~120 MB

  • models (Conv1D, Conv2D, ResNet1D, ResNet2D, LSTM, and Transformer), ~320 MB

To download the data and models manually, follow the provided links, then extract the contents into the mmwave_gesture/data/ and mmwave_gesture/models/ directories, respectively.

The end result should look like this:

mmwave_gesture/
│ communication/
│ data/
│ │ ccw/
│ │ cw/
│ │ down/
│ │ │ sample_1.npz
│ │ │ sample_2.npz
│ │ │ ...
│ │ └ sample_2000.npz
│ │ left/
│ │ none/
│ │ right/
│ │ s/
│ │ up/
│ │ x/
│ │ z/
│ │ __init__.py
│ │ formats.py
│ │ generator.py
│ │ loader.py
│ │ logger.py
│ └ preprocessor.py
│ models/
│ │ Conv1DModel/
│ │ │ confusion_matrix.png
│ │ │ history
│ │ │ model.h5
│ │ │ model.png
│ │ └ preprocessor
│ │ Conv2DModel/
│ │ LstmModel/
│ │ ResNet1DModel/
│ │ ResNet2DModel/
│ └ TransModel/
│ utils/
│ __init__.py
│ model.py
...
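
Each sample is a plain NumPy .npz archive. To sanity-check the downloaded data, you can open a sample and list whatever arrays it contains (the array names are not assumed here; the snippet simply prints what it finds):

import numpy as np

# Inspect one downloaded gesture sample; the exact arrays stored inside
# depend on the logger, so just list their names and shapes.
sample = np.load('mmwave_gesture/data/down/sample_1.npz', allow_pickle=True)
for name in sample.files:
    arr = sample[name]
    print(name, getattr(arr, 'shape', type(arr)))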

Serial permissions

The group name can differ from distribution to distribution.

Arch:

gpasswd -a <username> uucp

Ubuntu:

gpasswd -a <username> dialout

The change will take effect on the next login.

The group name can be obtained by running:

stat /dev/ttyACM* | grep Gid

One time only (permissions will be reset after unplugging):

chmod 666 /dev/ttyACM*
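
If you want to check from Python which serial ports the board exposes and whether your user can open them, a quick sketch using pyserial's port listing can help (this is only a convenience check, not part of the project's own tooling):

import serial
from serial.tools import list_ports

# Try opening each candidate port to verify permissions.
for port in list_ports.comports():
    if 'ACM' in port.device or 'XDS' in (port.description or ''):
        try:
            with serial.Serial(port.device):
                print(f'{port.device}: accessible ({port.description})')
        except serial.SerialException as e:
            print(f'{port.device}: cannot open ({e})')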

Flashing

The firmware used for the AWR1642 is the stock mmWave SDK demo provided with version 02.00.00.04. The .bin file is located in the firmware directory.

  1. Close SOP0 and SOP2, and reset the power.
  2. Start the console and run the flash command:
python mmwave-console.py
>> flash xwr16xx_mmw_demo.bin
  3. Remove SOP0 and reset the power again.

Running

If the board was connected before starting the console, the script should automatically find the ports and connect to them (this applies only to boards with XDS). If the board is connected after starting the console, the autoconnect command should be run. If for some reason this does not work, a manual connection is available via the connect command, which can also be used for boards without XDS. Type help connect or help autoconnect for more info.

If the board is connected, the prompt will be green, otherwise, it will be red.

After connecting, run the plotter and prediction with the following commands:

python mmwave-console.py
>> plot
>> predict

Use Ctrl-C to stop these commands.
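
The predict command uses the currently selected pretrained model from mmwave_gesture/models/ (conv2d by default). If you only want to confirm that a downloaded model file loads, a minimal check looks like this (assuming the .h5 file uses stock Keras layers; the Transformer model may require custom objects):

from tensorflow.keras.models import load_model

# Load a pretrained classifier and print its architecture.
# compile=False skips restoring the optimizer state, which is enough here.
model = load_model('mmwave_gesture/models/Conv2DModel/model.h5', compile=False)
model.summary()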

Collecting data

The console can be used for easy data collection. Use the log command to save gesture samples in .npz format in the mmwave_gesture/data/ directory (or a custom directory specified by the set_data_dir command). If nothing is captured for more than half a second, the command will automatically stop. The redraw/remove commands will redraw/remove the last captured sample.

python mmwave-console.py
>> listen
>> plot
>> set_data_dir /path/to/custom/data/dir
>> log up
>> log up
>> redraw up
>> remove up
>> log down
>> log ccw
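
To check how many samples have been logged per class, you can count the .npz files in each class directory (using the layout shown earlier; adjust the path if you set a custom data dir):

from pathlib import Path

# Count logged .npz samples per gesture class.
data_dir = Path('mmwave_gesture/data')
for class_dir in sorted(p for p in data_dir.iterdir() if p.is_dir()):
    print(f'{class_dir.name}: {len(list(class_dir.glob("*.npz")))} samples')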

Training

python mmwave-console.py
>> set_data_dir /path/to/custom/data/dir
>> train

or

python mmwave_gesture/model.py

Note: The default data directory is mmwave_gesture/data.
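
For orientation only, a classifier of this kind can be as small as a Conv1D network over fixed-length gesture sequences. The sketch below is illustrative: the input shape and layer sizes are assumptions, not the architecture actually implemented in mmwave_gesture/model.py; only the number of classes (10, matching the gesture list above) comes from the project.

import tensorflow as tf

# Illustrative sketch: a tiny Conv1D classifier over padded gesture
# sequences (MAX_STEPS timesteps x NUM_FEATURES features per step).
# Shapes and layers are assumptions, not this repository's model.
MAX_STEPS, NUM_FEATURES, NUM_CLASSES = 50, 64, 10

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_STEPS, NUM_FEATURES)),
    tf.keras.layers.Conv1D(32, 3, activation='relu', padding='same'),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 3, activation='relu', padding='same'),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()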

Selecting model

By default, the conv2d model is used. Other models can be selected using the set_model command.

python mmwave-console.py
>> set_model conv1d
>> set_model lstm
>> set_model trans

Help

Use the help command to list all available commands and get documentation for them.

python mmwave-console.py
>> help
>> help flash
>> help listen

Acknowledgments

  • Thanks to NOVELIC for providing me with sensors

Authors

  • Filip Markovic

License: MIT

Issues

Questions about the data

Hello, I am a college student from China. Thank you very much; your code is very clear and has helped me a lot. However, I found that there is no data in the data folder. Could you please share all the .csv files in the data folder with me? Thank you very much. My email is [email protected] or [email protected].

I have several questions

Hi Filip Markovic,

Thanks for your great repository 'mmwave-gesture-recognition'. It really helps me a lot.
However, I still have some questions that I don't understand and need to seek your advice on:

  1. What is the definition of the variable 'frame' in the Python code? Is it the same as the FMCW radar frame?
  2. What is the physical meaning of the "frame" column in the data set? Why are there so many identical frame values, and why do their counts differ?
  3. What are the physical meanings of the other columns, such as x, y, range_idx, peak value, doppler_idx, and xyz_q_format? What are the units and ranges of these values? Did you re-scale them to between 0 and 1?

Thanks in advance for your help.
I hope to hear from you soon.

Testing Environment

Hi,
I have some problems when testing the project. I have checked the accuracy of the models, and all of them achieve 99%. However, during real testing the model fails to correctly recognize my gestures. Could you please make a short video showing your testing environment and how one should move their hands so that gestures are recognized properly?

Thanks.

Peak groups in the configuration file

Hello @vilari-mickopf.
Thank you very much for making 'mmwave-gesture-recognition' public; I learned a lot.

Using the profile in your project, gestures can be identified accurately. Now I want to generate different point cloud data by modifying the configuration file, but there is a peak grouping instruction in it that I do not understand, and I can't find it in the mmwave_sdk_user_guide.pdf file. So I would like to ask you some questions:
  1. What does each number after this instruction mean?
  2. How is this algorithm implemented? Mainly, I want to understand the implementation process of this algorithm.
  3. Is peak grouping performed on each frame of the point cloud individually, or on the accumulated point cloud of all frames?
  4. When I modify peakGrouping -1 1 0 0 1 511 to peakGrouping -1 11 11 511, there is only one target point for the hands.

The following error occurred when I removed the peak grouping instruction (see the attached error.txt).

Looking forward to your reply.

Off-line Testing

Hello @vilari-mickopf.
Thanks for your great repository 'mmwave-gesture-recognition'. However, I have a question that I would like to consult you about.

Now I can use the 1642 board to test gestures in real time and recognize them accurately. But I want to use the CSV files in the data for offline testing (that is, write a separate prediction script, pass in the CSV data at runtime, and directly print out the results), and I don't know how to achieve this with the existing code. Can you help me take a look?

Thanks in advance for your help.
I hope to hear from you soon.

Failed sending configuration

The previous problem has been well solved with your help. Thank you very much. Now there are two new problems. The configuration file cannot be sent to the board. I use an IWR1642 and mmwave_sdk_03_05_00_04, which is slightly different from what you use, so do I need to change the configuration file?

In addition, during the flash operation the console only displays 'Ping mmwave', while according to the code the version information should also be displayed, such as:

get version...
Done
Version:
Done

Thanks again!
hfutball


git LFS

Hi,
First of all, I would like to thank you for this amazing work.
I get the following error message when I run git lfs pull

batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
error: failed to fetch some objects from 'https://github.com/vilari-mickopf/mmwave-gesture-recognition.git/info/lfs'

Could you please share the source code with me?

Off-line data display

Hello @vilari-mickopf.
I'm sorry to bother you again. I have a new problem: how can I display the data in the mmwave\data\ directory dynamically, as in the plotter, but in offline mode?

I want to use this to verify the correctness of the data I collected on other radar boards.

Can other mmWave modules be supported?

Hi

Thank you for making this amazing project public. I want to follow your work, but I don't have an AWR1642. I have an AWR1843 and an IWR6843 on hand. How can I use these two modules to run your project? I appreciate your contribution!

Best regards,
Kirk
