esp32-spi-message-demo's Introduction

NOTE!

The OAK IoT series and this repository are community-supported only and are provided as-is. We most likely won't update it, and we don't provide support for it (Discord, forums, email, ...).

Demo

See below for this running on the BW1092:

SPI ESP32 Interface with DepthAI

Building

The first time you build, the repository submodules need to be initialized:

git submodule update --init --recursive

# Tip: You can ask Git to do that automatically:
git config submodule.recurse true

Later on, the submodules also need to be kept up to date. To build an example, use ESP-IDF's idf.py. The examples here were only tested with ESP-IDF version 4.1, and we encourage you to use the same version.

SPI Protocol

SPI messaging is currently arranged in two layers. The lowest layer is the SPI protocol, which defines a standard packet format for all SPI communication: a 256-byte packet arranged in the following manner:

typedef struct {
    uint8_t start;
    uint8_t data[SPI_PROTOCOL_PAYLOAD_SIZE];
    uint8_t crc[2];
    uint8_t end;
} SpiProtocolPacket;

start and end are constant bytes to mark the beginning and end of packets.

static const uint8_t START_BYTE_MAGIC = 0b10101010;
static const uint8_t END_BYTE_MAGIC = 0b00000000;
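To make the framing concrete, here is a minimal host-side sketch in Python that builds and checks one such packet. The payload size (252 bytes, so the whole packet is 256) follows from the struct above; the checksum algorithm and byte order are assumptions for illustration (CRC-16/CCITT, little-endian here), not necessarily what the firmware uses.

```python
import struct

START_BYTE_MAGIC = 0b10101010  # 0xAA
END_BYTE_MAGIC = 0b00000000    # 0x00
SPI_PROTOCOL_PAYLOAD_SIZE = 252  # 1 start + 252 data + 2 crc + 1 end = 256

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    # Hypothetical choice of checksum; the firmware's actual CRC may differ.
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def build_packet(payload: bytes) -> bytes:
    assert len(payload) <= SPI_PROTOCOL_PAYLOAD_SIZE
    data = payload.ljust(SPI_PROTOCOL_PAYLOAD_SIZE, b"\x00")  # zero-pad to full size
    crc = crc16_ccitt(data)
    return bytes([START_BYTE_MAGIC]) + data + struct.pack("<H", crc) + bytes([END_BYTE_MAGIC])

def parse_packet(packet: bytes) -> bytes:
    assert len(packet) == 256
    assert packet[0] == START_BYTE_MAGIC and packet[-1] == END_BYTE_MAGIC
    data = packet[1:253]
    (crc,) = struct.unpack("<H", packet[253:255])
    assert crc16_ccitt(data) == crc, "CRC mismatch"
    return data
```

The fixed start/end markers make it cheap for the receiver to resynchronize after a dropped byte, and the CRC catches corruption within the 252-byte payload.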

SPI Messaging

On top of this sits a layer called SPI messaging. This code defines the following:

  • A list of supported commands.
  • A way to encapsulate commands going to the MyriadX over SPI.
  • A way to receive and parse command responses.

We'll go into greater depth on how exactly to use this in the SPI Messaging Example.
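Because each SPI protocol packet carries at most one payload's worth of bytes, a message larger than the payload has to be split across several packets and stitched back together on the other side. A rough illustration of that chunking (the actual on-wire message header used by the SPI messaging code is not reproduced here):

```python
PAYLOAD_SIZE = 252  # from the SpiProtocolPacket layout: 256 - start - 2-byte CRC - end

def chunk_message(message: bytes, payload_size: int = PAYLOAD_SIZE):
    """Split a message into payload-sized chunks, one per SPI packet."""
    return [message[i:i + payload_size] for i in range(0, len(message), payload_size)]

def reassemble(chunks) -> bytes:
    """Concatenate received chunks back into the original message."""
    return b"".join(chunks)
```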

esp32-spi-message-demo's People

Contributors

erol444, jonngai, luxonis-brandon, szabolcsgergely, themarpe


esp32-spi-message-demo's Issues

Build error: GPIO_PIN_INTR_NEGEDGE undeclared

When I run idf.py build for jpeg_webserver_demo, I get the following error:

user1@machine:/esp/esp32-spi-message-demo/components/depthai-spi-api/common/esp32_spi_impl.c:73:20: error: 'GPIO_PIN_INTR_NEGEDGE' undeclared (first use in this function); did you mean 'GPIO_INTR_NEGEDGE'?
         .intr_type=GPIO_PIN_INTR_NEGEDGE,
                    ^~~~~~~~~~~~~~~~~~~~~
                    GPIO_INTR_NEGEDGE

Is this a problem in the code, or do I need to do some additional configuration? (I have run idf.py menuconfig and entered the Wi-Fi SSID and password.)

Camera clogging up sending messages to esp32.

Hi,
Thanks for the awesome hardware and software!

After some weeks of struggling and posting on discord, Erik very wisely told me to post the Issue here.

The idea is that I wait for a user to connect to the ESP32 and, when one does, send a start message to the OAK that enables a boolean so it calculates all the things I want the camera to do (the OAK receives and sends messages inside a Script node). When the user disconnects, it sends a message to the OAK to stop, so it does not do any calculations that eat CPU, RAM, and battery while no user is there.
For the moment I am doing this with a boolean, but I'd love to be able to actually 'stop' the pipeline or the calculations.
Erik suggested this:

cfg = dai.CameraControl()
cfg.setStopStreaming()
ctrlQ.send(cfg)

which I have seen makes the camera disconnect, but not 'stop'.

Now that I have put it in context I will explain the issue/s here.

First of all, I had some communication issues between the OAK-D-Lite and the ESP32 that I 'fixed' by switching to a 'blocking' call to getData() and by trying different versions of the software; I am now using 2.15.0.0.

data = node.io['spimetain'].get().getData() # this is blocking

Because of this, at the end of the ESP32's while loop I always send the same 'ack' (acknowledgment, TCP/IP vibes 😎) message to keep the camera alive.

What happens now is this: the OAK receives the 'start' action in the script's while-true loop, the boolean is set to True, and the camera starts sending out the calculations. But after 2 to 3 seconds it stops sending data out and blocks itself, so the ESP32 also shows timeout messages.

I have ruled out as many errors in my code as possible by checking the documentation and code samples, and I tested the Script node communication example with a while(1) added so it runs in a loop; in that case the OAK does not clog up.

Following the same approach, in my case it does clog up. I am sending messages of, for example, 32 bytes, with a maximum size of 1500 bytes, but I have seen that it also clogs up when sending only 32 bytes, so it is not a message-size thing.

Here is how I create the nodes:

print("Creating SPI in node...")
spiIn = pipeline.create(dai.node.SPIIn)
spiIn.setStreamName("spimetain")
spiIn.setBusId(0)
spiIn.out.link(script.inputs['spimetain'])

# set up SPI out node and link it to the script's outputs
print("Creating SPI out node...")
spiOut = pipeline.create(dai.node.SPIOut)
spiOut.setStreamName("spimetaout")
spiOut.setBusId(0)
spiOut.input.setBlocking(False)
spiOut.input.setQueueSize(2000)
script.outputs['host'].link(spiOut.input)
script.outputs['camControl'].link(cam.inputControl)

imu = pipeline.create(dai.node.IMU)
imu.enableIMUSensor([dai.IMUSensor.ARVR_STABILIZED_GAME_ROTATION_VECTOR], 500)
imu.setBatchReportThreshold(1)
imu.setMaxBatchReports(10)

spiOut1 = pipeline.create(dai.node.SPIOut)
spiOut1.setStreamName("spiimuout")
spiOut1.setBusId(0)
spiOut1.input.setBlocking(False)
spiOut1.input.setQueueSize(2)

# Link IMU -> SPI out
imu.out.link(spiOut1.input)

Here is the while true inside the script that reads incoming messages and sends out a response accordingly:

while True:
    data = node.io['spimetain'].get().getData() # this is blocking

    jsonStr = str(data, 'utf-8')

    msg_dict = json.loads(jsonStr)
    node.warn(f"Manager received: {msg_dict}")

    if msg_dict['action'] == 'stop':
        node.warn("Stop")

        result = dict([("action", 'stopped')])
        send_result(result)

        cfg_cam_control.setStopStreaming()
        node.io['camControl'].send(cfg_cam_control)
        calc = False

    elif msg_dict['action'] == 'start':
        node.warn("Start")

        result = dict([("action", 'started')])
        send_result(result)

        cfg_cam_control.setStartStreaming()
        node.io['camControl'].send(cfg_cam_control)
        calc = True

    elif msg_dict['action'] == 'syn':
        node.warn("MCU is alive")
        result = dict([("action", 'ack')])
        send_result(result)

    elif msg_dict['action'] == 'ack':
        node.warn("MCU is alive")

    if calc:
        pass  # calculates some data and sends it to the host, which sends it over SPI

Here is the reduced esp32 code that tries receiving a message from oak and sends an Acknowledgement or ack message.

uint8_t req_success = 0;

    mySpiApi.set_send_spi_impl(&esp32_send_spi);
    mySpiApi.set_recv_spi_impl(&esp32_recv_spi);

    delay(1000);

    while(1) {
        client_available = check_if_client_is_available();

        if (receivedStartMessage == false && client_available) 
        {
            ESP_LOGI(TAG, "Client available and no start message received. Means we have a new user connected or is resuming a previous session");
            start_camera();
        }
        
        /* STEP 2: If client is available we send message to camera to start */
        if (client_available) {
            
            ESP_LOGI(TAG, "Client available INSIDE IF");
            if (mySpiApi.req_message(&received_msg, SPI_IMU_OUT)) {
                if (received_msg.raw_meta.size > 0)
                {   
                    ESP_LOGI(TAG, "Received metadata from camera (Probably IMU data): %s\n", received_msg.raw_meta.data);
                    
                    // dai::RawIMUData det;

                    // mySpiApi.parse_metadata(&received_msg.raw_meta, det);

                    // for(const auto& det : det.packets){
                    //     printf("x: %f, y: %f, z: %f, w:%f \n", det.rotationVector.i, det.rotationVector.j, det.rotationVector.k, det.rotationVector.real);
                    // }
                }
            }
            
            if(mySpiApi.req_message(&received_msg , METASTREAM))
            {
                if (received_msg.raw_data.size > 0)
                {
                    ESP_LOGI(TAG, "Received message from camera: %s\n", received_msg.raw_data.data);
                    // parse json message
                    cJSON *root;
                    root = cJSON_ParseWithLength((char *)received_msg.raw_data.data, received_msg.raw_data.size);

                    cJSON *action;
                    action = cJSON_GetObjectItemCaseSensitive(root, "action");

                    if (action != NULL) {
                        // parse oak response from our start or stop action
                        parse_action(action);
                    }
                    else
                    {
                        if (receivedStartMessage)
                        {
                            // we  have received the calculations from the oak 
                            ESP_LOGI(TAG,"Received message on loop: %s\n", received_msg.raw_data.data);
                            exampleDecodeRawMobilenet(received_msg.raw_data.data, received_msg.raw_data.size); 
                        }
                    }

                    mySpiApi.free_message(&received_msg);
                    req_success = mySpiApi.spi_pop_messages();
                    ESP_LOGI(TAG, "FREEING MESSAGE req_success: %d\n", req_success);
                }
                else{
                    ESP_LOGI(TAG, "Received empty message from camera \n");
                }
            }
            else
            {
                String error = "Camera not available yet\n"; 

                if (receivedStartMessage)
                {
                    error = "Camera timeout or something\n";
                }
            
                on_timeout(error);
            }
        }

        send_message_to_oak("action", "ack");

        if (receivedStartMessage == false && client_available == false){
            // ESP_LOGI(TAG, "KEEPING LOOP BUSY");
            delay(5000);
        }else{
            // delay(10);
            sleep(0.1);
        }
    }

Thanks for the support
Ask me whatever you need and if I should modify this issue or give more context.

Raw IMU Data metadata parsing fails

Hi,
I am trying to parse raw IMU data from the IMU node on my OAK-D-Lite, and it is failing.

print("Creating IMU node...")
imu = pipeline.create(dai.node.IMU)
imu.enableIMUSensor([dai.IMUSensor.ARVR_STABILIZED_GAME_ROTATION_VECTOR], 400)
# above this threshold packets will be sent in batch of X, if the host is not blocked and USB bandwidth is available
imu.setBatchReportThreshold(1)
# maximum number of IMU packets in a batch, if it's reached device will block sending until host can receive it
# if lower or equal to batchReportThreshold then the sending is always blocking on device
# useful to reduce device's CPU load  and number of lost packets, if CPU load is high on device side due to multiple nodes
imu.setMaxBatchReports(10)


spiOut1 = pipeline.create(dai.node.SPIOut)
spiOut1.setStreamName("spiimuout")
spiOut1.setBusId(0)
spiOut1.input.setBlocking(False)
spiOut1.input.setQueueSize(2)

# Link plugins IMU -> XLINK
imu.out.link(spiOut1.input)
dai::RawIMUData imu_data;

if(mySpiApi.parse_metadata(&imu_msg.raw_meta, imu_data)){
    ESP_LOGI(TAG, "Metadata parsed successfully\n");
}
else{
    ESP_LOGI(TAG, "Metadata parsing failed\n");
}

I am receiving data, as I can see if I print it:

if (mySpiApi.req_message(&imu_msg, SPI_IMU_OUT)) {
    ESP_LOGI(TAG, "Received imu data from camera: %s\n", imu_msg.raw_meta.data);
}

I read this:
��packets���acceleroMeter��accuracy

Thanks
Cordially,

sync problem

If I run two-stage inference on the MyriadX, how can I make sure that frames, object detections, and recognition results are all synced on the ESP32 board side? I can't find a demo for this requirement.
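One common approach is to tag every frame, detection, and recognition result with the frame's sequence number on the device side, and buffer on the ESP32 until all streams have delivered a message with the same sequence number (DepthAI messages carry a sequence number for this purpose). The buffering itself, sketched in plain Python and not tied to any device API, would look roughly like:

```python
from collections import defaultdict

STREAMS = ("frame", "detections", "recognitions")  # hypothetical stream names

class SeqSync:
    """Buffer messages per sequence number; emit a set once all streams arrive."""
    def __init__(self, streams=STREAMS):
        self.streams = set(streams)
        self.buffers = defaultdict(dict)  # seq -> {stream: message}

    def add(self, stream: str, seq: int, message):
        self.buffers[seq][stream] = message
        if set(self.buffers[seq]) == self.streams:
            synced = self.buffers.pop(seq)
            # drop stale partial sets older than the one just completed
            for old in [s for s in self.buffers if s < seq]:
                del self.buffers[old]
            return synced
        return None
```

On an ESP32 the same idea would be implemented in C, keyed on the sequence number parsed from each stream's metadata.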

failed to allocate 0 bytes

Running against the latest main branch (ea52682), with the BW1092 ESP32, I keep seeing the following from idf.py monitor when running the people-tracker demo, using the latest main of the gen2-people-tracker experiment on the MyriadX:

req_data | spi_get_size response: 0, ret: 1
sending spi_get_size cmd.
receive spi_get_size response from remote device...
response: 0
failed to allocate 0 bytes
sending POP_MESSAGES cmd.
receive POP_MESSAGES response from remote device...

The demos were previously working on an older version of the repo.

[FeatureRequest] People tracking decoding and publishing to cloud

Start with the why:

A great demo on how to send metadata to a cloud platform, which would be useful to the community

Move to the what:

  • Decode movement into left/right/up/down as in python script (bonus points if decoding is done in script node and just results are forwarded to host via xlink / esp32 via SPI)
  • Send movements over https/mqtt to azure/aws/gcp

Move to the how:

Magic 🪄

Questions: Enabling SPI peripheral mode

Hi, I'm attempting to interface the BW1098EMB_R0 development kit with an MCU. I'm using the ESP32 SPI messaging demo as a guide, and am looking for clarification on a couple of questions:

  • The bootloader and pipeline were flashed following the sample in depthai-core. With the configurations SPI Mode 0 and 500kHz/1MHz/4MHz clock, sending a request to grab available streams or request data will result in no response from the module. Is there something that I may have overlooked setting up for SPI communication?
  • There were also steps to update the firmware for SPI peripheral mode using dfu-util — In trying to update the firmware, running ‘-build’ for depthai.py aborts after device reset and the package fails to build. Is it correct to assume the depthai_flash.fw be generated successfully here? Also, if I want to enable other GPIO, how do I update the firmware for that?

Thank you!

Cannot build due to '#include "esp_wifi.h" issues

I am trying to build the jpeg_webserver_demo example; however, when I go to build and flash it, compilation fails with the following error message:

/home/pi/Dev/esp32-spi-message-demo/components/depthai-spi-api/common/esp32_spi_impl.h:18:10: fatal error: esp_wifi.h: No such file or directory 18 | #include "esp_wifi.h" | ^~~~~~~~~~~~ compilation terminated.

I just did a fresh installation following the ESP-IDF installation instructions but I cannot build the example.

`gen2-spi`/`device-yolo-parsing` running error

I ran the device-yolo-parsing demo on an OAK-D-IOT-40.
With the default blob, tiny-yolo-v3.blob.sh4cmx4NCE1, the code runs normally and the ESP32 output looks normal in the serial debugging tool.

But using other models, such as the model here, it only runs normally for 1–2 seconds, after which the ESP32 output displays "Timeout: no response from remote device... failed to recv packet".

Are there any special requirements for the YOLO model when converting it to a blob?

depthai version

Name: depthai
Version: 2.12.0.0
Summary: DepthAI Python Library
Home-page: https://github.com/luxonis/depthai-python
Author: Luxonis
Author-email: [email protected]
License: MIT
Location: /home/mulong/mambaforge/envs/depthai/lib/python3.10/site-packages
Requires: 
Required-by: depthai-sdk

Missing `mobilenet-ssd.blob`

After following the steps from here, there doesn't seem to be a mobilenet-ssd/mobilenet-ssd.blob, so the last step from here fails.

Just a minor issue, but may be annoying for people just starting out.

SPI output stops while XLink and the rest of the pipeline keep running

I'm running into a really weird issue. I've got a pipeline that sends a lot of data to the host over serial and just spatial detection data to the ESP32 over SPI. After a few minutes (2-10) of running, one of two things will happen:

  1. The whole pipeline crashes with the message "[system] [critical] Fatal error. Please report to the developers. Log: 'PlgWarpHW' '741'". It looks like this issue was addressed in this pull request, which I am just waiting to be merged into main.
  2. The SPI output appears to die while all the XLink outputs still work fine. There are no error messages on the host, and the ESP32 outputs "Timeout: no response from remote device..." and "failed to recv packet" on repeat.

I've tried checking the temperatures using the SystemLogger node, and the temps were around 80 °C when the SPI output stopped working the first time (although this isn't consistent; I've seen it work at 90 °C), which is well within the 105 °C limit. Any thoughts on what might be causing number 2 and how I could address it?

I'm working on getting a minimal example that reproduces the issue. I've been able to recreate it by adapting the spatial-mobilenet example to use the same pipeline I'm using. Now I'm trying to figure out what I can remove while still making the problem occur.

Failed to resolve component 'depthai-spi-api'.

Hi, I'm trying to run the examples of the repository, but when I try to use the command idf.py menuconfig it gives me this error: "Failed to resolve component 'depthai-spi-api'."

How can I fix it?

SPIOut blocking the pipeline

Problem
Hi, as the title suggests, the problem regards the SPIOut node. If I add this node, the pipeline blocks; if I remove the SPIOut node, the pipeline works smoothly. Specifically, when SPIOut is present the drawn keypoints appear frozen, and no message is written to the serial channel.

Pipeline structure
The pipeline is formed by these nodes:

MonoCamera → ImageManip → NeuralNetwork → SPIOut

ImageManip → XLinkOut
NeuralNetwork → XLinkOut

The two XLinkOut nodes are used for testing purpose on the host.

Code of the pipeline

def create_pipeline(model_config, camera, sync=False):
    model_config["shaves"] = 6

    pipeline = dai.Pipeline()

    cam_left = pipeline.create(dai.node.MonoCamera)
    cam_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
    cam_left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
    cam_left.setFps(30)

    imageManip = pipeline.create(dai.node.ImageManip)
    imageManip.setResize(*model_config["input_size"])
    imageManip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)

    cam_left.out.link(imageManip.inputImage)

    # Create pose estimation network
    pose_nn = pipeline.createNeuralNetwork()
    model_blob = (model_config["blob"] + "_sh" + str(model_config["shaves"]) + ".blob")
    path = os.path.join(MODELS_FOLDER, model_blob)
    if not os.path.exists(path):
        raise ValueError("Blob file '{}' does not exist.".format(path))
    print("Blob file:", path)
    pose_nn.setBlobPath(path)


    pose_nn.input.setQueueSize(1)
    pose_nn.input.setBlocking(False)
    imageManip.out.link(pose_nn.input)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("pose")
    pose_nn.out.link(xout.input)

    spiout = pipeline.createSPIOut()
    spiout.setStreamName("cose123")
    spiout.setBusId(0)
    pose_nn.out.link(spiout.input)
    
    xoutManip = pipeline.create(dai.node.XLinkOut)
    xoutManip.setStreamName("preview")
    imageManip.out.link(xoutManip.input)

    return pipeline

Example of the problem: a screen recording was attached to the original issue.

Build error when running mjpeg-streaming-wifi

Hello, I'm trying to run the "mjpeg-streaming-wifi" demo with the OAK-D-IOT 75. I'm using the ESP-IDF extension in Visual Studio Code. When I click the build project button in the extension, I receive an error message in the terminal (screenshots were attached to the original issue), and the extension records the same problems.

Any tips on how I can resolve this? I'm unsure if this is a workspace setup problem or actually something in the files.
