
DepthAI Python Library

License: MIT

Python bindings for C++ depthai-core library

Documentation

Documentation is available at Luxonis DepthAI API.

Installation

Prebuilt wheels are available in the Luxonis repository. Make sure pip is upgraded:

python3 -m pip install -U pip
python3 -m pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local/ depthai

Building from source

Dependencies

  • cmake >= 3.4
  • C++14 compiler (clang, gcc, msvc, ...)
  • Python3

Alongside these, the dependencies of depthai-core are also required. See: depthai-core dependencies

Building

The first time you build, the repository submodules need to be initialized:

git submodule update --init --recursive

# Tip: You can ask Git to do that automatically:
git config submodule.recurse true

The submodules also need to be updated later on, whenever they change upstream.

Local build with pip

To build and install using pip:

python3 -m pip install .

Add the parameter -v to see the output of the build process.

Wheel with pip

To build a wheel, execute the following:

python3 -m pip wheel . -w wheelhouse

Shared library

To build a shared library from source, perform the following:

ℹ️ To speed up build times, use cmake --build build --parallel [num CPU cores] (CMake >= 3.12). For older versions use: Linux/macOS: cmake --build build -- -j[num CPU cores], MSVC: cmake --build build -- /MP[num CPU cores]

cmake -H. -Bbuild
cmake --build build

To specify a custom Python executable to build for, use cmake -H. -Bbuild -D PYTHON_EXECUTABLE=/full/path/to/python.

Common issues

  • Many builds fail due to missing dependencies. This also happens when submodules are missing or outdated (git submodule update --recursive).
  • If libraries and headers are not in standard places, or not on the search paths, CMake reports that it cannot find what it needs (e.g. libusb). CMake can be hinted at where to look, for example: CMAKE_LIBRARY_PATH=/opt/local/lib CMAKE_INCLUDE_PATH=/opt/local/include pip install .
  • Some distribution installers may not get the desired library. For example, an install on a RaspberryPi failed, missing libusb, as the default installation with APT led to v0.1.3 at the time, whereas the library here required v1.0.

Running tests

To run the tests, build the library with the following options:

git submodule update --init --recursive
cmake -H. -Bbuild -D DEPTHAI_PYTHON_ENABLE_TESTS=ON -D DEPTHAI_PYTHON_ENABLE_EXAMPLES=ON -D DEPTHAI_PYTHON_TEST_EXAMPLES=ON
cmake --build build

Then navigate to the build folder and run ctest:

cd build
ctest

To test a specific example/test with a custom timeout (in seconds), use the following:

TEST_TIMEOUT=0 ctest -R "01_rgb_preview" --verbose

If TEST_TIMEOUT=0, the test runs until it finishes or is stopped.

Tested platforms

  • Windows 10, Windows 11
  • Ubuntu 18.04, 20.04, 22.04
  • Raspbian 10
  • macOS 10.14.6, 10.15.4

Building documentation

  • Using Docker (with Docker Compose)

    cd docs
    sudo docker-compose build
    sudo docker-compose up
    

    ℹ️ You can leave out the sudo if you have added your user to the docker group (or are using rootless docker). Then open http://localhost:8000.

    This docker container will watch for changes in the docs/source directory and rebuild the docs automatically.

  • Linux

    First, please install the required dependencies.

    Then run the following commands to build the docs website:

    python3 -m pip install -U pip
    python3 -m pip install -r docs/requirements.txt
    cmake -H. -Bbuild -D DEPTHAI_BUILD_DOCS=ON -D DEPTHAI_PYTHON_BUILD_DOCS=ON
    cmake --build build --target sphinx
    python3 -m http.server --bind 0.0.0.0 8000 --directory build/docs/sphinx
    

    Then open http://localhost:8000.

    This builds the documentation from the current sources, so after making changes, run the following command in a new terminal window to rebuild the website:

    cmake --build build --target sphinx
    

    Then refresh your page - it should load the updated website that was just built.

Troubleshooting

Relocation link error

Build failure on Ubuntu 18.04 ("relocation ..." link error) with gcc 7.4.0 (default) - issue #3

  • the solution was to upgrade gcc to version 8:

    sudo apt install g++-8
    sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 70
    sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-8 70
    

Hunter

Hunter is a CMake-only dependency manager for C/C++ projects.

If you are stuck with an error message that mentions external libraries (in a subdirectory of .hunter), like the following:

/usr/bin/ld: /home/[user]/.hunter/_Base/062a19a/ccfed35/a84a713/Install/lib/liblzma.a(stream_flags_decoder.c.o): warning: relocation against `lzma_footer_magic' in read-only section `.text'

Try erasing the Hunter cache folder.

Linux/macOS:

rm -r ~/.hunter

Windows:

del C:/.hunter

or

del C:/[user]/.hunter

LTO - link time optimization

If the following message appears:

lto1: internal compiler error: in add_symbol_to_partition_1, at lto/lto-partition.c:152
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-10/README.Bugs> for instructions.
lto-wrapper: fatal error: /usr/bin/c++ returned 1 exit status
compilation terminated.
/usr/bin/ld: error: lto-wrapper failed
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/depthai.dir/build.make:227: depthai.cpython-38-x86_64-linux-gnu.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:98: CMakeFiles/depthai.dir/all] Error 2
make: *** [Makefile:130: all] Error 2

One fix is to update the linker (on Ubuntu 20.04, /usr/bin/ld --version reports 2.30):

# Add to the end of /etc/apt/sources.list:

echo "deb http://ro.archive.ubuntu.com/ubuntu groovy main" >> /etc/apt/sources.list

# Replace ro with your country's local mirror (check the contents of the file to find out which one)
# Not mandatory, but faster

sudo apt update
sudo apt install binutils

# Should upgrade to 2.35.1
# Check version:
/usr/bin/ld --version
# Output should be: GNU ld (GNU Binutils for Ubuntu) 2.35.1
# Revert /etc/apt/sources.list to previous state (comment out line) to prevent updating other packages.
sudo apt update

Another option is to use the clang compiler:

sudo apt install clang-10
mkdir build && cd build
CC=clang-10 CXX=clang++-10 cmake ..
cmake --build . --parallel


depthai-python's Issues

USB communication attempt with a device already disconnected

An error usbfs: usb_submit_urb returned -19 appears in the kernel ring buffer logs (dmesg -wH) after the depthai-python example program is stopped.
log

Spotted on Intel NUC with Ubuntu 20.04.
5.4.0-70-generic #78-Ubuntu SMP Fri Mar 19 13:29:52 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

gen2 no errors thrown if layer in NNData is not of the correct dimensions

We found that even when the data passed in an NNData object to an XLinkIn queue does not have the correct dimensions for the connected neural network, the pipeline still runs normally with no errors thrown.

Working off 17_video_mobilenet, the pipeline was able to run even when we changed this line:

nn_data.setLayer("data", to_planar(frame, (300, 300)))

to

nn_data.setLayer("data", frame)

where frame is a NumPy array of shape (720, 480, 3) instead of the expected (3, 300, 300). The pipeline ran without any errors; the NN simply kept outputting no detections.

This was on v0.0.2.1; maybe this has been fixed in later revisions?
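Until such validation exists in the library, a host-side shape check can catch the mismatch early. A minimal sketch (the helper name and expected shape are illustrative, not depthai API):

```python
import numpy as np

def check_layer_shape(arr: np.ndarray, expected: tuple) -> np.ndarray:
    """Fail fast on the host if an array does not match the NN input shape."""
    if tuple(arr.shape) != tuple(expected):
        raise ValueError(f"layer shape {arr.shape} does not match expected {expected}")
    return arr

# A raw (720, 480, 3) BGR frame, as in the report above
frame = np.zeros((720, 480, 3), dtype=np.uint8)
try:
    # Passing the frame without resizing/planarizing is caught here,
    # instead of silently producing "no detections" on the device
    check_layer_shape(frame, (3, 300, 300))
except ValueError as err:
    print(err)
```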

OAK-D not recognized as USB3.0 Device

Hi Luxonis Team,

I've been investigating the communication speeds of the DepthAI for real-time applications and I noticed strange behavior regarding the USB3.0 capabilities of the device.

Basically, I am unable to have the OAK-D device recognized as USB3.1. Instead, it gets recognized as USB2.0 with HighSpeed. I use lsusb & lsusb -t to verify the communication speed (480M = USB2.0, 5000M = USB3.0, 10000M = USB3.1) of the device.

I am using the OAK-D hardware. I noticed this problem on two different devices. I tried with and without a power supply, which did not change the problem. I also verified the consistency of the problem across different platforms: two computers under Ubuntu, a Raspberry Pi and a Windows computer. The problem occurred on all of them. I did verify that my USB ports were working (by plugging in other USB3.0 devices, which were recognized as such). I can provide logs if wanted.

Also, here is another interesting test I did. I have an Intel RealSense T265 (in theory it also supports USB3.0, and it also has a MyriadX module). Surprisingly, the same problem occurred: the MyriadX module was never recognized as 3.X. I suspect a driver problem with the Myriad module, which I guess would be hard to solve. The RealSense documentation (accessible here: https://dev.intelrealsense.com/docs/compiling-librealsense-for-linux-ubuntu-guide) mentions:

Important: Running RealSense Depth Cameras on Linux requires patching and inserting modified kernel drivers. Some OEM/Vendors choose to lock the kernel for modifications. Unlocking this capability may requires to modify BIOS settings.

Do you have an idea of where this issue comes from? Is there a fix that I can do on Linux to solve it? I would greatly appreciate a hint!
Thanks in advance! Arthur


intermittent error on develop branch

Occasionally, when running a modified version of encoding_max_limit.py (encoding_max_limit-AREL.py), I get the following error.

    outQ1 = dev.getOutputQueue('ve1Out', maxSize=30, blocking=True)

ValueError: Device already closed or disconnected

I used to get an XLINK error, but it looks like that stopped.
I am using the develop branch, and I ran install_requirements.py after changing to the develop branch.

The program records nine times for 5 seconds each, with a 10-second break in between. It fails 1 to 2 times in each set of 9 runs:

# change this to the base directory holding depthai-python
BASE=/CARE-U

# get time
START=$(date +'%Y-%m-%d-%H%M%S')
# make directory for time
mkdir -p ${BASE}/recordings/$START

DIR=${BASE}/recordings/$START


LOG=$DIR/recording.log
echo START - $START >> $LOG

# locations of tanks
tanks1=(tank1 tank2 tank3 tank4 tank5 tank6 tank7 tank8 tank9)
tanks2=(tank10 tank11 tank12 tank13 tank14 tank15 tank16 tank17 tank18)
loc=("${tanks1[@]}")  # use tanks1

# record for 5 seconds each
REC_TIME=5

# for each tank 1 to 9 or 10 to 18
for i in ${!loc[@]}; do
    # goto tank n
    # python3 ${BASE}/python-host/switchbot_py3_AREL.py -d $DEVICE -c ${loc[$i]}
    # wait ten seconds
    sleep 10
    echo ${loc[$i]} - $(date +'%Y-%m-%d-%H%M%S') >> $LOG

    # record for 5 seconds
    python3 ${BASE}/depthai-python/examples/encoding_max_limit-AREL.py $DIR ${loc[$i]} $REC_TIME

done

Pipeline - OpenVINO version required by 'NeuralNetwork' node (id: 2), isn't compatible with 'NeuralNetwork' node (id: 7)

Hello, I'm new to this world and I am trying to use an OAK-D. I have successfully run some examples I found in the documentation and I want to go further.

From https://github.com/openvinotoolkit/open_model_zoo I have obtained https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/face-recognition-resnet50-arcface, trained on MXNet, and I have tried to convert it to .blob as follows:

cd C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\bin
setupvars.bat

cd C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\deployment_tools\model_optimizer
python mo_mxnet.py --input_model model-0000.params --input_shape [1,3,112,112] --output_dir FP16_ga --data_type FP16 --scale 0.9993 --mean_values [127.5,127.5,127.5]

cd C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\deployment_tools\inference_engine\bin\intel64\Release
myriad_compile -m model-0000.xml -o gamodel.blob -ip FP16 -VPU_NUMBER_OF_SHAVES 4 -VPU_NUMBER_OF_CMX_SLICES 4

With the depthai examples I get a face and the facial points, but when I create the network for the converted face-recognition-resnet50-arcface, the following error appears:

C:\Users\Eduardo\AppData\Local\Programs\Python\Python38\python.exe E:/2021/agen_dai/ga_main.py
Creating Color Camera...
Creating Face Detection Neural Network...
Creating Landmarks Detection Neural Network...
Creating GA Detection Neural Network...
Traceback (most recent call last):
File "E:/2021/agen_dai/ga_main.py", line 152, in
main()
File "E:/2021/agen_dai/ga_main.py", line 87, in main
device = depthai.Device(pipeline)
RuntimeError: Pipeline - OpenVINO version required by 'NeuralNetwork' node (id: 2), isn't compatible with 'NeuralNetwork' node (id: 7)

Process finished with exit code 1

I leave the link to the code I am testing; thanks in advance for all the help.
https://github.com/borgiaE/gam

Issue in OpenVINO model conversion to blob

I was installing OpenVINO on an armv7l system, and the conversion-to-blob command yields an error. The rest of the steps went as expected. Here's what the output looks like (I'm using a virtual env):

(venv2) pi@raspberrypi:~/open_model_zoo_downloads/intel/face-detection-retail-0004 $ $MYRIAD_COMPILE -m ~/open_model_zoo_downloads/intel/face-detection-retail-0004/FP16/face-detection-retail-0004.xml -ip U8 -VPU_MYRIAD_PLATFORM VPU_MYRIAD_2480 -VPU_NUMBER_OF_SHAVES 4 -VPU_NUMBER_OF_CMX_SLICES 4
Inference Engine: 
	API version ............ 2.1
	Build .................. custom_releases/2020/1_d349c3ba4a2508be72f413fa4dee92cc0e4bc0e1
	Description ....... API
Check 'axis < static_cast<size_t>(input_rank)' failed at /teamcity/work/scoring_engine_build/releases_2020_1/ngraph/src/ngraph/op/gather.cpp:140:
While validating node 'Gather[Gather_394](patternLabel_390: float{10,20,30}, patternLabel_391: int64_t{5}, patternLabel_393: int64_t{1}) -> (??)':
The axis must => 0 and <= input_rank (axis: 4294967295).
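One hint in that log: the reported axis value 4294967295 is -1 reinterpreted as an unsigned 32-bit integer, which suggests a negative axis (e.g. axis=-1 in the model) was not normalized before the range check. A quick illustration (my reading of the log, not a confirmed diagnosis):

```python
# The bit pattern of -1 viewed as an unsigned 32-bit integer
axis = -1
unsigned_axis = axis & 0xFFFFFFFF  # reinterpret as uint32
print(unsigned_axis)               # 4294967295, the value in the error above
```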

Build failed when installing from PyPi on Ubuntu Server 20.04.2 LTS

I am trying to install the depthai library from PyPI on a Raspberry Pi 4 B (4 GB version) running Ubuntu Server 20.04.2 LTS. I followed the installation instructions from the documentation but am getting this error:

ERROR: Could not build wheels for depthai which use PEP 517 and cannot be installed directly

I am using:

  • Python 3.8.5
  • Pip 21.0.1
  • setuptools 54.1.1
  • wheel 0.36.2

inside of a virtualenv.

Installation worked fine on my laptop (Ubuntu 20.04.2 LTS, similar environment). Does this relate to the Raspberry Pi architecture?

Full error message:

ERROR: Command errored out with exit status 1:
command: /home/andre/audrive/venv/bin/python3 /home/andre/audrive/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpkf4hiiey
cwd: /tmp/pip-install-94mv2zsb/depthai_fdfb973f18934363b8bc1bea6a743d88
Complete output (155 lines):
running bdist_wheel
running build
running build_ext
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.25.1")
-- [hunter] Calculating Toolchain-SHA1
-- [hunter] Calculating Config-SHA1
-- [hunter] HUNTER_ROOT: /home/andre/.hunter
-- [hunter] [ Hunter-ID: 062a19a | Toolchain-ID: f61e57a | Config-ID: 6590c1c ]
-- [hunter] NLOHMANN_JSON_ROOT: /home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/Install (ver.: 3.9.1)
-- [hunter] XLINK_ROOT: /home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/Install (ver.: luxonis-2021.2-develop)
-- [hunter] Building XLink
loading initial cache file /home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/cache.cmake
loading initial cache file /home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/Build/XLink/args.cmake
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done
-- Generating done
-- Build files have been written to: /home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/Build/XLink/Build
Scanning dependencies of target XLink-Release
[ 12%] Creating directories for 'XLink-Release'
[ 25%] Performing download step (download, verify and extract) for 'XLink-Release'
-- verifying file...
file='/home/andre/.hunter/_Base/Download/XLink/luxonis-2021.2-develop/7210831/ee361ecba950335390ad539e509f8ab96313b6b4.tar.gz'
-- File already exists and hash match (skip download):
file='/home/andre/.hunter/_Base/Download/XLink/luxonis-2021.2-develop/7210831/ee361ecba950335390ad539e509f8ab96313b6b4.tar.gz'
SHA1='72108319bf2289d91157a3933663ed5fb2b6eb18'
-- extracting...
src='/home/andre/.hunter/_Base/Download/XLink/luxonis-2021.2-develop/7210831/ee361ecba950335390ad539e509f8ab96313b6b4.tar.gz'
dst='/home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/Build/XLink/Source'
-- extracting... [tar xfz]
-- extracting... [analysis]
-- extracting... [rename]
-- extracting... [clean up]
-- extracting... done
[ 37%] No patch step for 'XLink-Release'
[ 50%] No update step for 'XLink-Release'
[ 62%] Performing configure step for 'XLink-Release'
loading initial cache file /home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/cache.cmake
loading initial cache file /home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/Build/XLink/args.cmake
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
CMake Error at XLink.cmake:27 (message):
libusb is required
Call Stack (most recent call first):
CMakeLists.txt:8 (include)

-- Configuring incomplete, errors occurred!
See also "/home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/Build/XLink/Build/XLink-Release-prefix/src/XLink-Release-build/CMakeFiles/CMakeOutput.log".
See also "/home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/Build/XLink/Build/XLink-Release-prefix/src/XLink-Release-build/CMakeFiles/CMakeError.log".
make[2]: *** [CMakeFiles/XLink-Release.dir/build.make:111: XLink-Release-prefix/src/XLink-Release-stamp/XLink-Release-configure] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/XLink-Release.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

[hunter ** FATAL ERROR **] Build step failed (dir: /home/andre/.hunter/_Base/062a19a/f61e57a/6590c1c/Build/XLink
[hunter ** FATAL ERROR **] [Directory:/tmp/pip-install-94mv2zsb/depthai_fdfb973f18934363b8bc1bea6a743d88/depthai-core/cmake]

------------------------------ ERROR -----------------------------
https://hunter.readthedocs.io/en/latest/reference/errors/error.external.build.failed.html

CMake Error at /home/andre/.hunter/_Base/Download/Hunter/0.23.258/062a19a/Unpacked/cmake/modules/hunter_error_page.cmake:12 (message):
Call Stack (most recent call first):
/home/andre/.hunter/_Base/Download/Hunter/0.23.258/062a19a/Unpacked/cmake/modules/hunter_fatal_error.cmake:20 (hunter_error_page)
/home/andre/.hunter/_Base/Download/Hunter/0.23.258/062a19a/Unpacked/cmake/modules/hunter_download.cmake:623 (hunter_fatal_error)
/home/andre/.hunter/_Base/Download/Hunter/0.23.258/062a19a/Unpacked/cmake/modules/hunter_add_package.cmake:53 (hunter_download)
depthai-core/cmake/depthaiDependencies.cmake:6 (hunter_add_package)
depthai-core/CMakeLists.txt:82 (include)

-- Configuring incomplete, errors occurred!
See also "/tmp/pip-install-94mv2zsb/depthai_fdfb973f18934363b8bc1bea6a743d88/build/temp.linux-aarch64-3.8/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
File "/home/andre/audrive/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in
main()
File "/home/andre/audrive/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/home/andre/audrive/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 204, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
File "/tmp/pip-build-env-1ze6gv2k/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 216, in build_wheel
return self._build_with_temp_dir(['bdist_wheel'], '.whl',
File "/tmp/pip-build-env-1ze6gv2k/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 202, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-1ze6gv2k/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 253, in run_setup
super(_BuildMetaLegacyBackend,
File "/tmp/pip-build-env-1ze6gv2k/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, file, 'exec'), locals())
File "setup.py", line 172, in
setup(
File "/tmp/pip-build-env-1ze6gv2k/overlay/lib/python3.8/site-packages/setuptools/init.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-1ze6gv2k/overlay/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/lib/python3.8/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 83, in run
self.build_extension(ext)
File "setup.py", line 167, in build_extension
subprocess.check_call(['cmake', ext.sourcedir] + cmake_args, cwd=self.build_temp, env=env)
File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/tmp/pip-install-94mv2zsb/depthai_fdfb973f18934363b8bc1bea6a743d88', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/tmp/pip-install-94mv2zsb/depthai_fdfb973f18934363b8bc1bea6a743d88/build/lib.linux-aarch64-3.8/', '-DPYTHON_EXECUTABLE=/home/andre/audrive/venv/bin/python3', '-DDEPTHAI_PYTHON_COMMIT_HASH=b9f829caed435ac8a72c5b6a03db4fa3ccb8c226', '-DCMAKE_BUILD_TYPE=Release', '-DHUNTER_CONFIGURATION_TYPES=Release']' returned non-zero exit status 1.

ERROR: Failed building wheel for depthai
Failed to build depthai
ERROR: Could not build wheels for depthai which use PEP 517 and cannot be installed directly

How to install DEPTHAI with CONDA?

Hey!
I'm new. I'm trying to install DEPTHAI via CONDA but it won't work.
I've tried using the command:
pip install depthai
But it is not working in conda. It gives an error saying "No module named 'cv2'".
Any kind of help would be highly appreciated!
Cheers!
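The cv2 module is provided by the separate opencv-python package rather than by depthai itself, so it usually has to be installed into the conda environment as well (package name taken from the standard PyPI distribution). A small sketch to check for it from Python:

```python
import importlib.util

# Check whether cv2 is importable in the current environment
if importlib.util.find_spec("cv2") is None:
    print("cv2 is missing; try: python3 -m pip install opencv-python")
else:
    print("cv2 is available")
```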

MEDIAN_OFF does not work when displaying a depth map.

I just ran into this issue when trying to turn off median filtering (when also displaying a depth map).

[14442C10D1039DCD00] [26.001] [system] [critical] Fatal error. Please report to developers. Log: 'StereoSipp' '527'
Traceback (most recent call last):
File "/Users/billk/dev/BigVision/LearnOpenCV/OAK_Gen2/Project/depthai-python/OAK_Course_Examples/27_spatial_location_calculator_v4.py", line 150, in
inDepth = depthQueue.get() # blocking call, will wait until a new data has arrived
RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'depth' (X_LINK_ERROR)'

I get this error when I try to turn median filtering OFF as shown below, AND when I am displaying a depth map.

Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)

median = dai.StereoDepthProperties.MedianFilter.MEDIAN_OFF # For depth filtering
depth.setMedianFilter(median)

If I use any example that displays a disparity map then I can turn off median filtering as above just fine and I get the expected (un-filtered) map.

You can replicate this by running any example that displays a depth map (e.g., 26_1_spatial_mobilenet.py, 27_spatial_location_calculator.py).

If you run 03_depth_preview.py, that script will allow you to turn off median filtering, but the other examples above will not.

To summarize, I can use median filtering with various kernels (3x3, 5x5, 7x7), when displaying either a depth map OR a disparity map. And I can turn median filtering OFF when I use a demo that displays a disparity map, BUT when I try to explicitly turn OFF median filtering while also displaying a depth map I get the crash.

Improve error handling when Device object is deleted during pipeline runtime

While experimenting, I've noticed that if the Device object is deleted while the pipeline is running, the pipeline fails with the following error:

E: [global] [    223721] [python] addEvent:262	Condition failed: event->header.flags.bitField.ack != 1
E: [global] [    223721] [python] addEventWithPerf:276	 addEvent(event) method call failed with an error: 3
E: [global] [    223721] [python] XLinkReadData:156	Condition failed: (addEventWithPerf(&event, &opTime))

The DepthAI will stop sending any packets but won't throw any error that would stop the runtime.

I've prepared an example script that reproduces this situation. Notice that the device is initialized in the get_pipeline function, but only the pipeline is returned from it, so the device object gets deleted by the garbage collection mechanism.

import cv2
import depthai

def get_pipeline(config):
    device = depthai.Device('', False)

    p = device.create_pipeline(config=config)

    if p is None:
        raise RuntimeError("Error initializing pipeline")

    return p

pipeline = get_pipeline(config={
    "streams": ["previewout"],
    "ai": {
        "blob_file": "/path/to/mobilenet-ssd.blob",
        "blob_file_config": /path/to/mobilenet-ssd.json"
    }
})

while True:
    data_packets = pipeline.get_available_data_packets()
    for packet in data_packets:
        if packet.stream_name == 'previewout':
            data = packet.getData()
            data0 = data[0, :, :]
            data1 = data[1, :, :]
            data2 = data[2, :, :]
            frame = cv2.merge([data0, data1, data2])

            cv2.imshow('previewout', frame)

    if cv2.waitKey(1) == ord('q'):
        break

del pipeline

We should improve the error message to point out that the Device object was deleted from memory.
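The pitfall can be shown without any hardware. In the sketch below, DummyDevice stands in for depthai.Device (which tears down the XLink connection when it is deleted); returning the device alongside the pipeline keeps it alive:

```python
import gc

class DummyDevice:
    """Stand-in for depthai.Device; the real class closes the link on deletion."""
    alive = 0
    def __init__(self):
        DummyDevice.alive += 1
    def __del__(self):
        DummyDevice.alive -= 1
    def create_pipeline(self):
        return object()   # stand-in for the pipeline object

def get_pipeline():
    device = DummyDevice()
    return device.create_pipeline()   # device becomes unreachable on return

def get_pipeline_fixed():
    device = DummyDevice()
    return device, device.create_pipeline()  # caller keeps the device alive

p = get_pipeline()
gc.collect()
print(DummyDevice.alive)   # 0 -- the device was torn down while p is still in use

dev, p = get_pipeline_fixed()
gc.collect()
print(DummyDevice.alive)   # 1 -- the device stays alive alongside the pipeline
```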

Invalid Resolution reported when trying to use XLinkIn (from video recordings) as inputs to StereoDepth

I get RuntimeError "StereoDepth(2) - StereoDepth | Invalid resolution: Width 0, Height 0" on startPipeline(). I tried to call setResolution on the XLinkIn node, but no such method is available. Example 17 uses video to feed an NN, so it is not relevant. How can I make this work?

#!/usr/bin/env python3

from pathlib import Path
import sys
import cv2
import depthai as dai
import numpy as np
from time import monotonic

# Get argument first
monoLPath = str(Path("./mono1_1.mp4").resolve().absolute())
monoRPath = str(Path("./mono2_1.mp4").resolve().absolute())

if len(sys.argv) > 2:
    monoLPath = sys.argv[1]
    monoRPath = sys.argv[2]

# Start defining a pipeline
pipeline = dai.Pipeline()

# Create xLink input to which host will send frames from the video file
xinLFrame = pipeline.createXLinkIn()
xinLFrame.setStreamName("inLeftFrame")

xinRFrame = pipeline.createXLinkIn()
xinRFrame.setStreamName("inRightFrame")

# Create a node that will produce the depth map (using disparity output as it's easier to visualize depth this way)
depth = pipeline.createStereoDepth()
depth.setConfidenceThreshold(200)
# Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
median = dai.StereoDepthProperties.MedianFilter.KERNEL_7x7  # For depth filtering
depth.setMedianFilter(median)

'''
If one or more of the additional depth modes (lrcheck, extended, subpixel)
are enabled, then:
 - depth output is FP16. TODO enable U16.
 - median filtering is disabled on device. TODO enable.
 - with subpixel, either depth or disparity has valid data.
Otherwise, depth output is U16 (mm) and median is functional.
But like on Gen1, either depth or disparity has valid data. TODO enable both.
'''
# Better handling for occlusions:
depth.setLeftRightCheck(False)
# Closer-in minimum depth, disparity range is doubled:
depth.setExtendedDisparity(False)
# Better accuracy for longer distance, fractional disparity 32-levels:
depth.setSubpixel(False)

xinLFrame.out.link(depth.left)
xinRFrame.out.link(depth.right)

# Create output
xout = pipeline.createXLinkOut()
xout.setStreamName("disparity")
depth.disparity.link(xout.input)

# Pipeline is defined, now we can connect to the device
with dai.Device(pipeline) as device:
    # Start pipeline
    device.startPipeline()

    # Input queue will be used to send video frames to the device.
    qInL = device.getInputQueue(name="inLeftFrame")
    qInR = device.getInputQueue(name="inRightFrame")

    # Output queue will be used to get the disparity frames from the outputs defined above
    q = device.getOutputQueue(name="disparity", maxSize=4, blocking=False)

    def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
        return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()

    capL = cv2.VideoCapture(monoLPath)
    capR = cv2.VideoCapture(monoRPath)

    while capL.isOpened():
        read_L_correctly, frameL = capL.read()
        if not read_L_correctly:
            break

        if capR.isOpened():
            read_R_correctly, frameR = capR.read()
            if not read_R_correctly:
                break

            imgL = dai.ImgFrame()
            imgL.setData(to_planar(frameL, (1280, 720)))
            imgL.setTimestamp(monotonic())
            imgL.setWidth(1280)
            imgL.setHeight(720)
            qInL.send(imgL)

            imgR = dai.ImgFrame()
            imgR.setData(to_planar(frameR, (1280, 720)))
            imgR.setTimestamp(monotonic())
            imgR.setWidth(1280)
            imgR.setHeight(720)
            qInR.send(imgR)

            inDepth = q.get()  # blocking call, will wait until new data arrives
            frame = inDepth.getFrame()
            frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)

            # Available color maps: https://docs.opencv.org/3.4/d3/d50/group__imgproc__colormap.html
            frame = cv2.applyColorMap(frame, cv2.COLORMAP_JET)

            # frame is ready to be shown
            cv2.imshow("disparity", frame)

            if cv2.waitKey(1) == ord('q'):
                break

Pytest on Raspberry Pi 4 aborted using gen2_develop branch

At the moment, I am configuring a pipeline to run tests using OpenVINO models on a Raspberry Pi 4 (Raspbian 10) with an OAK-1. I face some issues running pytest with the latest commit (cc3b8ea; the same problem happened with b9918c4) on the gen2_develop branch: I see an "Aborted" error when I run tests for models.

Example:

python3.7 -m pytest -s
======================================================== test session starts ========================================================
platform linux -- Python 3.7.3, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /home/pi/dbface
collected 1 item                                                                                                                    

tests/test_dbface.py terminate called without an active exception
Fatal Python error: Aborted

Current thread 0xb6f4fad0 (most recent call first):
  File "/home/pi/dbface/dbface/model.py", line 144 in model_load
  File "/home/pi/dbface/tests/test_dbface.py", line 23 in test_process_sample_dbface
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/python.py", line 183 in pytest_pyfunc_call
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/python.py", line 1641 in runtest
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/runner.py", line 162 in pytest_runtest_call
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/runner.py", line 255 in <lambda>
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/runner.py", line 311 in from_call
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/runner.py", line 255 in call_runtest_hook
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/runner.py", line 215 in call_and_report
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/runner.py", line 126 in runtestprotocol
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/runner.py", line 109 in pytest_runtest_protocol
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/main.py", line 348 in pytest_runtestloop
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/main.py", line 323 in _main
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/main.py", line 269 in wrap_session
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/main.py", line 316 in pytest_cmdline_main
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
  File "/home/pi/.local/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/config/__init__.py", line 163 in main
  File "/home/pi/.local/lib/python3.7/site-packages/_pytest/config/__init__.py", line 185 in console_main
  File "/home/pi/.local/lib/python3.7/site-packages/pytest/__main__.py", line 5 in <module>
  File "/usr/lib/python3.7/runpy.py", line 85 in _run_code
  File "/usr/lib/python3.7/runpy.py", line 193 in _run_module_as_main
Aborted

However, if I use depthai-0.0.2.1+22ad34c8264fc3a9a919dbc5c01e3ed3eb41f5aa with the same code, tests work properly.

Example:

python3.7 -m pytest -s
======================================================== test session starts ========================================================
platform linux -- Python 3.7.3, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /home/pi/dbface
collected 1 item                                                                                                                    

tests/test_dbface.py .

========================================================= 1 passed in 6.43s =========================================================

On the other hand, if I run pytest using OpenVINO models on a laptop (Ubuntu) with an OAK-1, using any of these commits, everything runs fine without any problems.

Example:

======================================================== test session starts ========================================================
platform linux -- Python 3.8.0, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /home/dbface
collected 1 item                                                                                                                    

tests/test_dbface.py .

========================================================= 1 passed in 7.78s =========================================================

Release Gen2 on PyPI

To progress with Gen2 efforts, we need to prepare a stable release, test it broadly and then upload the package to PyPI

TODO:

  • Pick the first release version number
  • Create release branch
  • Test on Ubuntu 18.04
  • Test on Ubuntu 20.04
  • Test on Raspberry Pi OS
  • Test on Windows 10
  • Test on Mac OS
  • Release package to PyPI
  • Update install instructions on Python API docs

Variable naming in: examples/017_video_mobilenet.py

In examples/17_video_mobilenet.py, the XLinkIn input stream (which carries frames from the video file to the NN input) is named "inDet", and further down in the code, when we retrieve inference results from the output queue, the same name is used to store the incoming detections (i.e., inDet = qDet.tryGet()). This duplicate use of the name may confuse people who are just getting started with these examples. The use of "Det" in the XLinkIn stream name ("inDet") is also confusing, since the input is an image frame from the video stream. Suggest changing:

xinDet = pipeline.createXLinkIn()
xinDet.setStreamName("inDet")

To something more like this:

xinFrame = pipeline.createXLinkIn()
xinFrame.setStreamName("inFrame")

Sporadic failure during startup

I face a sporadic failure during startup if control and config queues are defined.

The execution fails at self.device.startPipeline(self.pipeline)
with the error RuntimeError: Couldn't read data from stream: '__rpc_main' (X_LINK_ERROR).

The thing is, it doesn't fail every time. My code starts ~19 out of 20 times without any issues; sporadically it fails.
I also added a 10-second wait timer before restarting the pipeline and forced usb2Mode. Neither helped.

I can't share my code, but I modified the rgb_preview.py example to reproduce it (attached).
To reproduce, the script is run repeatedly until the error occurs.

I used the controlIn and configIn queues based on a previous example. But as described, this sporadically fails.
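
Until the root cause is found, sporadic startup failures like this can be worked around with a retry wrapper. Below is a minimal, pure-Python sketch: `flaky_start` is a hypothetical stand-in for the failing call, which in practice would be something like `lambda: dai.Device(pipeline)`.

```python
import time

def start_with_retries(factory, attempts=5, delay=1.0):
    """Call factory() until it succeeds or attempts run out.

    factory is any zero-argument callable that may raise RuntimeError,
    e.g. `lambda: dai.Device(pipeline)` for the sporadic X_LINK_ERROR.
    """
    last_err = None
    for _ in range(attempts):
        try:
            return factory()
        except RuntimeError as err:  # e.g. "Couldn't read data from stream"
            last_err = err
            time.sleep(delay)
    raise last_err
```

This only papers over the symptom, but it can make runs reliable enough for production while the underlying issue is investigated.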

Bug description
If you run python3 rgb_preview_modified.py, the code breaks when starting the pipeline.
RuntimeError: Couldn't read data from stream: '__rpc_main' (X_LINK_ERROR)

How to reproduce
I've made a minimal example from the original rgb_preview.py. The only changes are a config and a control queue to send and receive camera information/commands, as well as a loop starting a new pipeline until the breakage occurs.

  1. Run the attached script.

Expected behavior
Nothing but a clean start (no breakage, even across multiple restarts). Otherwise the camera is not usable for production.

Discord discussion link
https://discord.com/channels/790680891252932659/799407361986658354/856538425591201802

System
I use the original cable, an OAK-1, native Ubuntu 18.04 and a USB 3.0 port, and I have also reproduced this on multiple systems.

Terminal Output

[email protected]:~/Projects/depthai-python$ DEPTHAI_LEVEL=trace python3 ./examples/rgb_preview.py 
[2021-06-21 16:00:34.419] [debug] Python bindings - version: 2.5.0.0 from 2021-06-08 06:26:30 +0300 build: 2021-06-08 03:57:29 +0000
[2021-06-21 16:00:34.419] [debug] Library information - version: 2.5.0, commit: d6883a576de5f57d3354d7e04791f148ad544a55 from 2021-06-08 06:20:57 +0300, build: 2021-06-08 03:54:13 +0000
[2021-06-21 16:00:34.419] [debug] Initialize - finished
[2021-06-21 16:00:34.420] [debug] Device - pipeline serialized, OpenVINO version: 2021.3
[2021-06-21 16:00:34.864] [debug] Resources - archive open: 2845628ns, archive read: 441848411ns
[2021-06-21 16:00:35.920] [trace] RPC: [1,1,9503979954606424982,null]
[2021-06-21 16:00:35.920] [trace] RPC: [1,1,10182484315255513117,[0]]
[2021-06-21 16:00:35.974] [trace] RPC: [1,1,5804304869041345055,[1.0]]
[2021-06-21 16:00:35.976] [trace] RPC: [1,1,17425566508637143278,null]
[2021-06-21 16:00:36.025] [trace] Log vector decoded, size: 3
[14442C10C18295D000] [2.667] [system] [info] Memory Usage - DDR: 0.13 / 414.66 MiB, CMX: 2.09 / 2.50 MiB, LeonOS Heap: 3.86 / 46.22 MiB, LeonRT Heap: 2.83 / 27.28 MiB
[14442C10C18295D000] [2.667] [system] [info] Temperatures - Average: 45.83 °C, CSS: 46.68 °C, MSS 45.32 °C, UPA: 45.32 °C, DSS: 46.00 °C
[14442C10C18295D000] [2.667] [system] [info] Cpu Usage - LeonOS 29.45%, LeonRT: 1.68%
[2021-06-21 16:00:36.174] [trace] RPC: [1,1,16527326580805871264,[{"connections":[{"node1Id":3,"node1Output":"out","node2Id":0,"node2Input":"inputConfig"},{"node1Id":2,"node1Output":"out","node2Id":0,"node2Input":"inputControl"},{"node1Id":0,"node1Output":"preview","node2Id":1,"node2Input":"in"}],"globalProperties":{"calibData":null,"cameraTuningBlobSize":null,"cameraTuningBlobUri":"","leonCssFrequencyHz":700000000.0,"leonMssFrequencyHz":700000000.0,"pipelineName":null,"pipelineVersion":null},"nodes":[[1,{"id":1,"ioInfo":{"in":{"blocking":true,"name":"in","queueSize":8,"type":3}},"name":"XLinkOut","properties":{"maxFpsLimit":-1.0,"metadataOnly":false,"streamName":"rgb"}}],[0,{"id":0,"ioInfo":{"inputConfig":{"blocking":false,"name":"inputConfig","queueSize":8,"type":3},"inputControl":{"blocking":true,"name":"inputControl","queueSize":8,"type":3},"isp":{"blocking":false,"name":"isp","queueSize":8,"type":0},"preview":{"blocking":false,"name":"preview","queueSize":8,"type":0},"raw":{"blocking":false,"name":"raw","queueSize":8,"type":0},"still":{"blocking":false,"name":"still","queueSize":8,"type":0},"video":{"blocking":false,"name":"video","queueSize":8,"type":0}},"name":"ColorCamera","properties":{"boardSocket":-1,"colorOrder":1,"fp16":false,"fps":30.0,"imageOrientation":-1,"initialControl":{"aeLockMode":false,"aeRegion":{"height":0,"priority":0,"width":0,"x":0,"y":0},"afRegion":{"height":0,"priority":0,"width":0,"x":0,"y":0},"antiBandingMode":0,"autoFocusMode":3,"awbLockMode":false,"awbMode":0,"brightness":0,"chromaDenoise":0,"cmdMask":0,"contrast":0,"effectMode":0,"expCompensation":0,"expManual":{"exposureTimeUs":0,"frameDurationUs":0,"sensitivityIso":0},"lensPosition":0,"lumaDenoise":0,"saturation":0,"sceneMode":0,"sharpness":0},"inputConfigSync":false,"interleaved":false,"ispScale":{"horizDenominator":0,"horizNumerator":0,"vertDenominator":0,"vertNumerator":0},"previewHeight":300,"previewKeepAspectRatio":true,"previewWidth":300,"resolution":0,"sensorCropX":-1.0,"se
nsorCropY":-1.0,"stillHeight":-1,"stillWidth":-1,"videoHeight":-1,"videoWidth":-1}}],[3,{"id":3,"ioInfo":{"out":{"blocking":false,"name":"out","queueSize":8,"type":0}},"name":"XLinkIn","properties":{"maxDataSize":5242880,"numFrames":8,"streamName":"config"}}],[2,{"id":2,"ioInfo":{"out":{"blocking":false,"name":"out","queueSize":8,"type":0}},"name":"XLinkIn","properties":{"maxDataSize":5242880,"numFrames":8,"streamName":"control"}}]]}]]
[2021-06-21 16:00:36.185] [trace] RPC: [1,1,10180360702496156555,null]
[2021-06-21 16:00:36.186] [trace] RPC: [1,1,14047900442330284907,null]
[2021-06-21 16:00:36.193] [trace] Log vector decoded, size: 1
[14442C10C18295D000] [2.882] [system] [info] ImageManip internal buffer size '80640'B, shave buffer size '19456'B
[2021-06-21 16:00:36.237] [trace] Log vector decoded, size: 1
[14442C10C18295D000] [2.882] [system] [info] SIPP (Signal Image Processing Pipeline) internal buffer size '156672'B
[2021-06-21 16:00:36.238] [trace] RPC: [1,1,8959630473823391071,null]
[2021-06-21 16:00:36.725] [trace] RPC: [1,1,9503979954606424982,null]
[2021-06-21 16:00:38.229] [debug] Device about to be closed...
[2021-06-21 16:00:38.241] [debug] XLinkResetRemote of linkId: (0)
[2021-06-21 16:00:38.241] [debug] DataOutputQueue (rgb) about to be destructed...
[2021-06-21 16:00:38.278] [debug] Timesync thread exception caught: Couldn't read data from stream: '__timesync' (X_LINK_ERROR)
[2021-06-21 16:00:38.329] [debug] DataOutputQueue (rgb) destructed
[2021-06-21 16:00:38.329] [debug] DataInputQueue (control) about to be destructed...
[2021-06-21 16:00:38.379] [debug] Log thread exception caught: Couldn't read data from stream: '__log' (X_LINK_ERROR)
[2021-06-21 16:00:38.429] [debug] DataInputQueue (control) destructed
[2021-06-21 16:00:38.429] [debug] DataInputQueue (config) about to be destructed...
[2021-06-21 16:00:38.479] [debug] DataInputQueue (config) destructed
[2021-06-21 16:00:38.529] [debug] Device closed, 300
Traceback (most recent call last):
  File "./examples/rgb_preview.py", line 30, in <module>
    with dai.Device(pipeline) as device:
RuntimeError: Couldn't read data from stream: '__rpc_main' (X_LINK_ERROR)
[email protected]:~/Projects/depthai-python$ 

"Segmentation fault" when close device on Raspberry Pi4

I know today is Saturday, but I'm struggling with a "Segmentation fault" error. It happens when I try to close the device with the device.close() command. Is there any way to fix it or close the device connection correctly?
Below is some information about my system:

  • Device: Raspberry Pi4 + OAK-D
  • OS: Raspbian GNU/Linux 10 (buster)
  • python 3.7.3
  • depthai 2.5.0.0

Thank you for your help.
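
One pattern worth trying is managing the device with a `with` block, as the examples do, so teardown runs in a deterministic order even when exceptions occur. The sketch below uses a dummy class standing in for `dai.Device` (which supports the same context-manager protocol); the class itself is hypothetical and only illustrates the pattern.

```python
class DummyDevice:
    """Hypothetical stand-in for dai.Device, illustrating the pattern."""
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.close()       # runs even if the body raised
        return False       # don't swallow exceptions
    def close(self):
        self.closed = True

def run(device_cls=DummyDevice):
    # In real code: with dai.Device(pipeline) as device: ...
    with device_cls() as device:
        pass  # grab queues, process frames, etc.
    return device.closed
```

With this pattern there is no explicit device.close() call to place; Python closes the device exactly once when the block exits.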

build tests fail

Trying to run:

mkdir build_tests && cd build_tests
cmake .. -D DEPTHAI_PYTHON_ENABLE_TESTS=ON -D DEPTHAI_PYTHON_ENABLE_EXAMPLES=ON -D DEPTHAI_PYTHON_TEST_EXAMPLES=ON
cmake --build . --parallel
ctest

the cmake configure step gives the following result:
CMake Error at CMakeLists.txt:13 (file):
file failed to open for reading (No such file or directory):

/home/chris/github/oak/depthai-python/depthai-core/cmake/Hunter/config.cmake

CMake Error at CMakeLists.txt:62 (add_subdirectory):
The source directory

/home/chris/github/oak/depthai-python/depthai-core

does not contain a CMakeLists.txt file.

and more from there.

This can be fixed by running:
git submodule update --init --recursive

10_mono_depth_mobilenetssd.py depth image is flipped and mono-image is scaled wrong

The depth result is flipped (y-axis) and thus the overlay does not always follow.

Adding a flip to the depth visualizer seems to fix it:

frame_depth = cv2.flip(frame_depth, 1)

Also, the mono image is not cropped but scaled (the aspect ratio is not preserved).

Lastly, I think we should normalize the depth map:

frame_depth = cv2.normalize(frame_depth, None, 0, 255, cv2.NORM_MINMAX)
frame_depth = cv2.applyColorMap(frame_depth, cv2.COLORMAP_JET)
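
Putting the flip and normalization suggestions together, here is a self-contained sketch using NumPy equivalents of the cv2 calls (`frame[:, ::-1]` matches `cv2.flip(frame, 1)`, and the scaling matches `cv2.normalize(..., 0, 255, cv2.NORM_MINMAX)`); `cv2.applyColorMap` would then be applied to the returned uint8 frame. The function name is just for illustration.

```python
import numpy as np

def prepare_depth_for_display(frame_depth: np.ndarray) -> np.ndarray:
    """Flip horizontally and min-max normalize to uint8 [0, 255]."""
    flipped = frame_depth[:, ::-1]                 # cv2.flip(frame_depth, 1)
    lo, hi = flipped.min(), flipped.max()
    if hi == lo:                                   # avoid division by zero
        return np.zeros_like(flipped, dtype=np.uint8)
    scaled = (flipped.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return scaled.astype(np.uint8)                 # ready for applyColorMap
```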

X_LINK_ERROR 221

I am trying to use two MobileNet neural networks. This is my code:

import depthai as dai
import cv2
from time import monotonic
import numpy as np
from time import sleep

body_path_model = "models/mobilenet-ssd_openvino_2021.2_8shave.blob"
face_path_model = "models/face-detection-openvino_2021.2_4shave.blob"
video_path = "videos/21-center-2.mp4"


def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
    return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()


def process_frame(frame: np.array, width: int, height: int) -> dai.ImgFrame:
    # Generate ImgFrame to use as input of the Pipeline
    img = dai.ImgFrame()
    img.setData(to_planar(frame, (width, height)))
    img.setTimestamp(monotonic())
    img.setWidth(width)
    img.setHeight(height)

    return img


if __name__ == "__main__":
    pipeline = dai.Pipeline()
    pipeline.setOpenVINOVersion(version=dai.OpenVINO.Version.VERSION_2021_1)

    in_frame = pipeline.createXLinkIn()
    in_frame.setStreamName("input")

    # Create Mobilenet and Face nodes
    nn_body = pipeline.createMobileNetDetectionNetwork()
    nn_body.setConfidenceThreshold(0.7)
    nn_body.setBlobPath(body_path_model)
    nn_body.setNumInferenceThreads(2)
    nn_body.input.setBlocking(False)
    nn_body.input.setQueueSize(1)

    nn_face = pipeline.createMobileNetDetectionNetwork()
    nn_face.setBlobPath(face_path_model)
    nn_face.setConfidenceThreshold(0.7)
    nn_face.setNumInferenceThreads(2)
    nn_face.input.setBlocking(False)
    nn_face.input.setQueueSize(1)

    # Links
    # Inputs
    in_frame.out.link(nn_body.input)
    in_frame.out.link(nn_face.input)

    # outputs
    body_out_frame = pipeline.createXLinkOut()
    body_out_frame.setStreamName("body_out")
    nn_body.out.link(body_out_frame.input)

    face_out_frame = pipeline.createXLinkOut()
    face_out_frame.setStreamName("face_out")
    nn_face.out.link(face_out_frame.input)

    # Initialize pipeline
    device = dai.Device(pipeline)

    # Queues
    in_q = device.getInputQueue(name="input", maxSize=1, blocking=False)

    body_out_q = device.getOutputQueue(name="body_out", maxSize=1, blocking=True)
    face_out_q = device.getOutputQueue(name="face_out", maxSize=1, blocking=True)

    cap = cv2.VideoCapture(video_path)
    while cap.isOpened():
        read_correctly, frame = cap.read()
        if not read_correctly:
            break

        img = dai.ImgFrame()
        img.setData(to_planar(frame, (300, 300)))
        img.setTimestamp(monotonic())
        img.setWidth(300)
        img.setHeight(300)
        in_q.send(img)

        print(body_out_q.tryGet(), face_out_q.tryGet())

        cv2.imshow("rgb", frame)

        if cv2.waitKey(1) == ord("q"):
            break

Sometimes, not always, it crashes and I don't understand why. This is the traceback:

[14442C10913EE4D200] [41.562] [DetectionNetwork(2)] [warning] Network compiled for 4 shaves, maximum available 16, compiling for 8 shaves likely will yield in better performance
[14442C10913EE4D200] [41.569] [DetectionNetwork(2)] [warning] The issued warnings are orientative, based on optimal settings for a single network, if multiple networks are running in parallel the optimal settings may vary
[14442C10913EE4D200] [3.806] [system] [critical] Fatal error. Please report to developers. Log: 'XLinkOut' '221'
Traceback (most recent call last):
  File "c:\Users\Javi\Desktop\UCA\OAK_module\oak\simplified_pipeline.py", line 85, in <module>
    print(body_out_q.tryGet(), face_out_q.tryGet())
RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'body_out' (X_LINK_ERROR)'

It is really strange because it doesn't always occur. I have tried many combinations of blocking/non-blocking and queue sizes, but it doesn't work.

Throw proper error message when udev rules are not set on host.

We should also improve the host app and throw a proper error message when the udev rules are not installed (the backend libusb API returns a "permission denied" error in this case).

And in this case potentially offer the user an option to auto-install the rules by running the above script (the user would need to input sudo password).

The udev rules installation could be simplified so it does not depend on OpenVINO.
This seems to work:

Run this command:
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules

Then reset or reconnect (USB) the DepthAI device.
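
A sketch of what such a host-side check could look like. This is not existing depthai API; the function name and message text are assumptions, and only the rule line and path come from the command above.

```python
from pathlib import Path

MOVIDIUS_RULE = 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"'
RULES_FILE = Path("/etc/udev/rules.d/80-movidius.rules")

def udev_rules_hint(rules_file: Path = RULES_FILE) -> str:
    """Return '' if the Movidius udev rule is installed, else a fix hint.

    A host app could call this after libusb reports 'permission denied'
    and print the hint instead of a bare error.
    """
    if rules_file.exists() and MOVIDIUS_RULE in rules_file.read_text():
        return ""
    return (
        "udev rules for the DepthAI device are missing. Run:\n"
        f"  echo '{MOVIDIUS_RULE}' | sudo tee {rules_file}\n"
        "then reset or reconnect the device."
    )
```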

some examples depend on blob data, no indication where to get that.

It looks like some of the examples depend on data that is not included in the repository.
python 08_rgb_mobilenet.py
Traceback (most recent call last):
  File "08_rgb_mobilenet.py", line 28, in <module>
    nn.setBlobPath(args.nnPath)
RuntimeError: NeuralNetwork node | Blob at path: /home/chris/github/oak/depthai-python/examples/models/mobilenet-ssd_openvino_2021.2_6shave.blob doesn't exist

ctest seems to work, but the python examples fail. find . -iname "blob" does not find any.
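
Until the docs point at a download location, the examples could at least fail with a clearer message. A hedged sketch (the helper name is made up, and the suggestion to use the blobconverter package is an assumption, not an existing download path):

```python
from pathlib import Path

def require_blob(path) -> Path:
    """Raise a descriptive error if the model blob does not exist."""
    blob = Path(path)
    if not blob.exists():
        raise FileNotFoundError(
            f"Model blob not found: {blob}\n"
            "Download the model first (for example with the blobconverter "
            "package) and place it at this path."
        )
    return blob
```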

RuntimeError: No available devices (MacOS)

I was able to do a complete fresh installation, but I'm not able to run any examples. Somehow the camera is not being detected. Could you please help?

I have Python 3.7.7 running on macOS in a virtual env.
depthai version : 2.0.0.1+f57a48d32243e62cdf2842d20b3e7834868d7066


DepthAI lib version in docs does not exist.

The DepthAI version here -

python3 -m pip install --extra-index-url https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local/ depthai==2.2.1.0.dev+81f88e7249e64b1c34bf0a62eda776901877cca5

fails to install. Log -

ERROR: No matching distribution found for depthai==2.2.1.0.dev+81f88e7249e64b1c34bf0a62eda776901877cca5

Source - https://docs.luxonis.com/projects/api/en/latest/

cc - @cafemoloko

Ref:
https://github.com/luxonis/depthai-python/edit/main/docs/source/index.rst

Requesting Example Python Scripts

I'm an intermediate programmer with minimal Python experience.
I'm kindly requesting example Python scripts.
I can't seem to understand how to do the following with my BW1098 OBC, host RPi3B+ and Python 3.6:

  • Manipulate the color camera through OpenCV Python code within the DepthAI library
  • Manipulate the shutter camera through OpenCV Python code within the DepthAI library
  • Use the object tracking function of the DepthAI library
  • Save the current frame as an image

I hope somebody would be kind enough to share sample scripts for these.
Thank you.

IMU suddenly stopped sending data

Dear Luxonis Team,

I am having trouble running the IMU on the OAK-D. I am working on a project that requires using the IMU jointly with several other features offered by the DepthAI module.

Until yesterday, everything was working fine and I could collect IMU data while performing many other tasks. At some point, the IMU collection stopped working and I wasn't able to fix it. The line imuData = imuQueue.tryGet() simply returned None.

I tried running the examples imu_gyroscope_accelerometer.py (or the rotation one) and the code simply gets trapped in the blocking call (before the problem happened, these always worked fine). Also, all of the examples that do not involve the IMU work just fine, hence the problem really comes from the IMU node. The device did not fall or anything. I tried re-installing my depthai libraries from the main branch (I had been using the develop branch), but it did not change anything. I also tried to run the example scripts on another computer and the problem remained.

When the problem happened, I was actually recording USB transmission speed and investigating the effect of transmitting IMU data at a fast rate (500 Hz) on the latency of image collection (the latency of the image streaming increases up to 400 ms, instead of ~100 ms when not using the IMU while using many other functionalities of the DepthAI module, such as encoding, feature tracking and streaming).

Do you have any idea why this could be happening? Is there a script that I can run to check the functionality of the device? I am a little scared that this is a hardware problem; is there a way to check this without opening the device?

Thanks in advance for your help!

I attach below the result of the log_system_information.py script.

{
    "architecture": "64bit ELF",
    "machine": "x86_64",
    "platform": "Linux-5.4.0-74-generic-x86_64-with-glibc2.29",
    "processor": "x86_64",
    "python_build": "default Jun  2 2021 10:49:15",
    "python_compiler": "GCC 9.4.0",
    "python_implementation": "CPython",
    "python_version": "3.8.10",
    "release": "5.4.0-74-generic",
    "system": "Linux",
    "version": "#83-Ubuntu SMP Sat May 8 02:35:39 UTC 2021",
    "win32_ver": "",
    "uname": "Linux arthur-Vostro-5490 5.4.0-74-generic #83-Ubuntu SMP Sat May 8 02:35:39 UTC 2021 x86_64 x86_64",
    "packages": [
        "appdirs==1.4.3",
        "apt-clone==0.2.1",
        "apturl==0.5.2",
        "argcomplete==1.12.1",
        "argon2-cffi==20.1.0",
        "arrow==0.14.4",
        "async-generator==1.10",
        "attrs==21.2.0",
        "autobahn==17.10.1",
        "Automat==0.8.0",
        "backcall==0.2.0",
        "backports.entry-points-selectable==1.1.0",
        "bcrypt==3.1.7",
        "beautifulsoup4==4.8.2",
        "bleach==3.3.1",
        "bleak==0.7.1",
        "blinker==1.4",
        "blobconverter==0.0.10",
        "boto3==1.18.11",
        "botocore==1.21.11",
        "breezy==3.0.2",
        "Brlapi==0.7.0",
        "catkin-pkg==0.4.23",
        "catkin-pkg-modules==0.4.23",
        "cbor==1.0.0",
        "cefpython3==66.0",
        "certifi==2019.11.28",
        "cffi==1.14.6",
        "chardet==3.0.4",
        "Click==7.0",
        "colorama==0.4.3",
        "command-not-found==0.3",
        "configobj==5.0.6",
        "constantly==15.1.0",
        "cryptography==2.8",
        "cupshelpers==1.0",
        "cycler==0.10.0",
        "Cython==0.29.14",
        "dbus-python==1.2.16",
        "debugpy==1.4.1",
        "decorator==5.0.9",
        "defer==1.0.6",
        "defusedxml==0.7.1",
        "Deprecated==1.2.7",
        "# Editable install with no version control (depthai==2.9.0.0.dev0+bd207ce615f3295f72b55217d09d035ddb60be95)\n-e /home/arthur/.local/lib/python3.8/site-packages",
        "distlib==0.3.2",
        "distro==1.4.0",
        "docutils==0.16",
        "dulwich==0.19.15",
        "empy==3.3.2",
        "entrypoints==0.3",
        "fastimport==0.9.8",
        "ffmpy3==0.2.4",
        "filelock==3.0.12",
        "Flask==1.1.1",
        "gpg===1.13.1-unknown",
        "greenlet==0.4.15",
        "grpcio==1.16.1",
        "httplib2==0.14.0",
        "hyperlink==19.0.0",
        "idna==2.8",
        "ifaddr==0.1.6",
        "IMDbPY==6.8",
        "incremental==16.10.1",
        "ipykernel==6.0.3",
        "ipython==7.26.0",
        "ipython-genutils==0.2.0",
        "ipywidgets==7.6.3",
        "itsdangerous==2.0.1",
        "jedi==0.18.0",
        "Jinja2==3.0.1",
        "jmespath==0.10.0",
        "jsonschema==3.2.0",
        "jupyter-client==6.1.12",
        "jupyter-core==4.7.1",
        "jupyterlab-pygments==0.1.2",
        "jupyterlab-widgets==1.0.0",
        "keyring==18.0.1",
        "kiwisolver==1.3.1",
        "launchpadlib==1.10.13",
        "lazr.restfulclient==0.14.2",
        "lazr.uri==1.0.3",
        "louis==3.12.0",
        "lz4==3.0.2+dfsg",
        "macaroonbakery==1.3.1",
        "Mako==1.1.0",
        "Markdown==3.1.1",
        "MarkupSafe==2.0.1",
        "matplotlib==3.4.2",
        "matplotlib-inline==0.1.2",
        "mistune==0.8.4",
        "mpi4py==3.0.3",
        "msgpack==0.6.2",
        "nbclient==0.5.3",
        "nbconvert==6.1.0",
        "nbformat==5.1.3",
        "nemo-emblems==5.0.0",
        "nest-asyncio==1.5.1",
        "netaddr==0.7.19",
        "netifaces==0.10.4",
        "nose==1.3.7",
        "notebook==6.4.0",
        "numpy==1.21.1",
        "oauthlib==3.1.0",
        "onboard==1.4.1",
        "open3d==0.10.0.0",
        "opencv-contrib-python==4.5.1.48",
        "opencv-python==4.5.1.48",
        "packaging==20.3",
        "PAM==0.4.2",
        "pandas==1.3.1",
        "pandocfilters==1.4.3",
        "paramiko==2.6.0",
        "parso==0.8.2",
        "pexpect==4.6.0",
        "pickleshare==0.7.5",
        "Pillow==7.0.0",
        "pip==21.2.4",
        "platformdirs==2.2.0",
        "prometheus-client==0.11.0",
        "prompt-toolkit==3.0.19",
        "protobuf==3.6.1",
        "proxy-tools==0.1.0",
        "psutil==5.5.1",
        "ptyprocess==0.7.0",
        "py-ubjson==0.14.0",
        "pyasn1==0.4.2",
        "pyasn1-modules==0.2.1",
        "PyBluez==0.23",
        "pycairo==1.16.2",
        "pycparser==2.20",
        "pycrypto==2.6.1",
        "pycryptodome==3.8.2",
        "pycryptodomex==3.6.1",
        "pycups==1.9.73",
        "pycurl==7.43.0.2",
        "pydot==1.4.1",
        "PyGithub==1.43.7",
        "Pygments==2.9.0",
        "PyGObject==3.36.0",
        "PyHamcrest==1.9.0",
        "PyICU==2.4.2",
        "pyinotify==0.9.6",
        "PyJWT==1.7.1",
        "pyls==0.1.6",
        "pymacaroons==0.13.0",
        "PyNaCl==1.3.0",
        "pynvim==0.4.3",
        "PyOpenGL==3.1.0",
        "pyOpenSSL==19.0.0",
        "pyparsing==2.4.6",
        "pyparted==3.11.2",
        "pypng==0.0.20",
        "PyQRCode==1.2.1",
        "PyQt5==5.14.1",
        "pyRFC3339==1.1",
        "pyrsistent==0.18.0",
        "pyserial==3.4",
        "python-apt==2.0.0+ubuntu0.20.4.5",
        "python-dateutil==2.8.2",
        "python-debian===0.1.36ubuntu1",
        "python-engineio==3.9.0",
        "python-gitlab==2.0.1",
        "python-gnupg==0.4.5",
        "python-igraph==0.8.0",
        "python-magic==0.4.16",
        "python-snappy==0.5.3",
        "python-socketio==4.3.0",
        "python-xapp==2.2.1",
        "python-xlib==0.23",
        "PyTrie==0.2",
        "pytube==10.8.5",
        "pytz==2019.3",
        "pyusb==1.2.1",
        "pywebview==3.5",
        "pyxdg==0.26",
        "PyYAML==5.3.1",
        "pyzmq==22.1.0",
        "rdserialtool==0.2.1",
        "rdumtool==0.1",
        "reportlab==3.5.34",
        "requests==2.24.0",
        "requests-file==1.4.3",
        "requests-unixsocket==0.2.0",
        "roman==2.0.0",
        "rosdep==0.21.0",
        "rosdep-modules==0.21.0",
        "rosdistro==0.8.3",
        "rosdistro-modules==0.8.3",
        "rosinstall==0.7.8",
        "rosinstall-generator==0.1.22",
        "rospkg==1.3.0",
        "rospkg-modules==1.3.0",
        "s3transfer==0.5.0",
        "scipy==1.7.1",
        "SecretStorage==2.3.1",
        "Send2Trash==1.7.1",
        "service-identity==18.1.0",
        "setproctitle==1.1.10",
        "setuptools==45.2.0",
        "simplejson==3.16.0",
        "sip==4.19.21",
        "six==1.14.0",
        "soupsieve==1.9.5",
        "systemd-python==234",
        "terminado==0.10.1",
        "testpath==0.5.0",
        "texttable==1.6.2",
        "tinycss2==1.0.2",
        "tldextract==2.2.1",
        "tornado==6.1",
        "traitlets==5.0.5",
        "Twisted==18.9.0",
        "txaio==2.10.0",
        "txdbus==1.1.2",
        "u-msgpack-python==2.1",
        "ubuntu-drivers-common==0.0.0",
        "ufw==0.36",
        "Unidecode==1.1.1",
        "urllib3==1.25.8",
        "vcstool==0.2.15",
        "vcstools==0.1.42",
        "virtualenv==20.7.2",
        "wadllib==1.3.3",
        "wcwidth==0.2.5",
        "webencodings==0.5.1",
        "Werkzeug==2.0.1",
        "wheel==0.34.2",
        "widgetsnbextension==3.5.1",
        "wrapt==1.11.2",
        "wsaccel==0.6.2",
        "wstool==0.1.18",
        "wxPython==4.0.7",
        "xkit==0.0.0",
        "youtube-dl==2021.4.26",
        "zope.interface==4.7.1"
    ],
    "usb": [
        {
            "port": 0,
            "vendor_id": "0x1d6b",
            "product_id": "0x0003",
            "speed": "SuperPlus"
        },
        {
            "port": 6,
            "vendor_id": "0x0bda",
            "product_id": "0x565a",
            "speed": "High"
        },
        {
            "port": 5,
            "vendor_id": "0x27c6",
            "product_id": "0x538d",
            "speed": "Full"
        },
        {
            "port": 3,
            "vendor_id": "0x03e7",
            "product_id": "0x2485",
            "speed": "High"
        },
        {
            "port": 2,
            "vendor_id": "0x046d",
            "product_id": "0xc077",
            "speed": "Low"
        },
        {
            "port": 10,
            "vendor_id": "0x8087",
            "product_id": "0x0aaa",
            "speed": "Full"
        },
        {
            "port": 0,
            "vendor_id": "0x1d6b",
            "product_id": "0x0002",
            "speed": "High"
        }
    ]
}

install_requirements.py downloads depthai==0.0.2.1

install_requirements.py is run on a fresh clone of the examples folder from the develop branch, so all 29 Python files are cloned. The script installs depthai==0.0.2.1 on armv7l:

(venv2) pi@raspberrypi:~/depthai-python/examples $ python3 install_requirements.py 
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Requirement already satisfied: pip in /home/pi/venv2/lib/python3.7/site-packages (21.0.1)
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Requirement already satisfied: opencv-python in /home/pi/venv2/lib/python3.7/site-packages (4.5.1.48)
Requirement already satisfied: pyyaml in /home/pi/venv2/lib/python3.7/site-packages (5.3.1)
Requirement already satisfied: requests in /home/pi/venv2/lib/python3.7/site-packages (2.24.0)
Requirement already satisfied: numpy>=1.14.5 in /home/pi/venv2/lib/python3.7/site-packages (from opencv-python) (1.19.5)
Requirement already satisfied: idna<3,>=2.5 in /home/pi/venv2/lib/python3.7/site-packages (from requests) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /home/pi/venv2/lib/python3.7/site-packages (from requests) (1.25.11)
Requirement already satisfied: certifi>=2017.4.17 in /home/pi/venv2/lib/python3.7/site-packages (from requests) (2020.12.5)
Requirement already satisfied: chardet<4,>=3.0.2 in /home/pi/venv2/lib/python3.7/site-packages (from requests) (3.0.4)
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple, https://artifacts.luxonis.com/artifactory/luxonis-python-snapshot-local
ERROR: Could not find a version that satisfies the requirement depthai==0.0.2.1.dev+0853f2b2f8a182b442dec12bf9cadb69548318a9
ERROR: No matching distribution found for depthai==0.0.2.1.dev+0853f2b2f8a182b442dec12bf9cadb69548318a9
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Processing /home/pi/depthai-python
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Building wheels for collected packages: depthai
  Building wheel for depthai (PEP 517) ... done
  Created wheel for depthai: filename=depthai-0.0.2.1+0853f2b2f8a182b442dec12bf9cadb69548318a9-cp37-cp37m-linux_armv7l.whl size=4351343 sha256=70518a4bf7c2213a0f9658f7782a408e166c45e996efcb37109c0ea6f303ba31
  Stored in directory: /home/pi/.cache/pip/wheels/b2/4b/8a/55247c3552360665eb3170184f4aefe058157d67c7a89c67a4
Successfully built depthai
Installing collected packages: depthai
  Attempting uninstall: depthai
    Found existing installation: depthai 2.1.0.0
    Uninstalling depthai-2.1.0.0:
      Successfully uninstalled depthai-2.1.0.0
Successfully installed depthai-0.0.2.1+0853f2b2f8a182b442dec12bf9cadb69548318a9

Am I doing something wrong here?

Build failure on Ubuntu 18.04 with gcc 7.4.0 (default)

The final link step is failing with this error:

depthai-api/host/py_module/build$ make
[ 20%] Built target nlohmann_json_schema_validator
[ 24%] Linking CXX shared module depthai.cpython-36m-x86_64-linux-gnu.so
/usr/bin/ld: /tmp/ccMFU1de.ltrans0.ltrans.o: relocation R_X86_64_PC32 against symbol `_ZNSt14__shared_countILN9__gnu_cxx12_Lock_policyE2EEC1Ev' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
CMakeFiles/depthai.dir/build.make:563: recipe for target 'depthai.cpython-36m-x86_64-linux-gnu.so' failed
make[2]: *** [depthai.cpython-36m-x86_64-linux-gnu.so] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/depthai.dir/all' failed
make[1]: *** [CMakeFiles/depthai.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
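The linker message suggests the object files were built without position-independent code. A possible workaround, assuming the standard CMake build from this repository (not confirmed as the fix for this exact setup), is to reconfigure with -fPIC enabled and rebuild:

```shell
# Reconfigure with position-independent code so the objects can be
# linked into a shared module, then rebuild
cmake -H. -Bbuild -D CMAKE_POSITION_INDEPENDENT_CODE=ON
cmake --build build
```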

[Error] Neural network blob compiled with uncompatible openvino version. Selected openvino version {OPENVINO_VERSION}. If you want to select an explicit openvino version use: setOpenVINOVersion while creating pipeline.

This issue happens because the compiled neural network blob is not compatible with the latest OpenVINO version supported by the DepthAI library.

Solution:
When creating the pipeline, set the OpenVINO version explicitly:
pipeline.setOpenVINOVersion(version = dai.OpenVINO.Version.VERSION_202{X}_{Y})

OAK-D can't get camera intrinsics and extrinsics

Introduction

Hello, I have been using the OAK-D for weeks, Python only, and it has worked well. Now I want to do stereo vision with the side cameras. But to project my 2D frame coordinates into the real world, I need the projection matrices.

So I would like to get the intrinsic and extrinsic matrices of my OAK-D side cameras, but if that is possible, I don't understand how it works.

Environment

I don't think this is an install problem, but you might ask me a few things, so...

Software versions:
Windows 10 21H1
Python 3.9.4
GCC 9.2.0
CMake 3.20.1
pip 21.1.1
DepthAI 2.4.0
OpenCV 4.5.1.48

For the hardware, I don't know my OAK-D serial number, but I received it 4 months ago, so, as the FAQ says, the calibration data should be stored on the device.

Tests

Referring to the documentation, I should get what I want from a CalibrationHandler object. I tried to instantiate it in several ways; none of them worked.

calibration = device.readCalibration()
calibration = pipeline.getCalibrationData()
calibration = dai.CalibrationHandler()

AttributeError: 'depthai.Device' object has no attribute 'readCalibration'
AttributeError: 'depthai.Pipeline' object has no attribute 'getCalibrationData'
AttributeError: module 'depthai' has no attribute 'CalibrationHandler'

I also checked this official repository and tried another way, but it also failed.

left_intrinsic = device.get_left_intrinsic()

AttributeError: 'depthai.Device' object has no attribute 'get_left_intrinsic'
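For reference, a minimal sketch of what I was hoping for, assuming a DepthAI release that ships CalibrationHandler (reportedly 2.5 and later) and a connected device:

```python
import depthai as dai

# Sketch only: requires a connected OAK-D and a DepthAI version
# that provides Device.readCalibration (an assumption here)
with dai.Device() as device:
    calib = device.readCalibration()
    # 3x3 intrinsic matrix for the left mono camera at 640x400
    m_left = calib.getCameraIntrinsics(dai.CameraBoardSocket.LEFT, 640, 400)
    # Extrinsics (rotation + translation) from the left to the right camera
    extrinsics = calib.getCameraExtrinsics(dai.CameraBoardSocket.LEFT,
                                           dai.CameraBoardSocket.RIGHT)
    print(m_left, extrinsics)
```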

Conclusion

I have looked at a lot of repositories and some issues, and I have read the DepthAI documentation and FAQ a few times, but I still don't understand how to find the side cameras' projection matrices.

I hope you can help me; sorry for the inconvenience.
Thomas PDM

Couldn't read data from stream: 'rgb' (X_LINK_ERROR)

I am not sure if my issue is also linked to issue #303, but I have the following situation. I would be happy about some hints on where I could look next.

Based on the official Docker examples, we created our own Docker image that works on 2 different laptops.

Working example: the dmesg output gives us the following:

[21809.654734] usb 1-2: USB disconnect, device number 24
[21810.206966] usb 2-2: new SuperSpeed Gen 1 USB device number 9 using xhci_hcd
[21810.227825] usb 2-2: New USB device found, idVendor=03e7, idProduct=f63b, bcdDevice= 1.00
[21810.227826] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[21810.227827] usb 2-2: Product: Luxonis Device
[21810.227828] usb 2-2: Manufacturer: Intel Corporation
[21810.227828] usb 2-2: SerialNumber: 14442C10814B0AD100

Non-working example: when running the Docker image on our target machine, which is a full-fledged computer, I get this:

[  305.032158] usb 1-3: USB disconnect, device number 8
[  305.631671] usb 1-3: new high-speed USB device number 9 using xhci_hcd
[  305.758397] usb 1-3: New USB device found, idVendor=03e7, idProduct=f63b, bcdDevice= 1.00
[  305.758400] usb 1-3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[  305.758401] usb 1-3: Product: Luxonis Device
[  305.758402] usb 1-3: Manufacturer: Intel Corporation
[  305.758404] usb 1-3: SerialNumber: 14442C10814B0AD100
[  308.026291] usb 1-3: USB disconnect, device number 9
[  308.277715] usb 1-3: new high-speed USB device number 10 using xhci_hcd
[  308.404503] usb 1-3: New USB device found, idVendor=03e7, idProduct=2485, bcdDevice= 0.01
[  308.404509] usb 1-3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[  308.404512] usb 1-3: Product: Movidius MyriadX
[  308.404515] usb 1-3: Manufacturer: Movidius Ltd.
[  308.404518] usb 1-3: SerialNumber: 03e72485
[  334.158879] usb 1-3: USB disconnect, device number 10

The udev rules are applied on the target computer. Interestingly, the Product: Luxonis Device disconnects, and only Product: Movidius MyriadX comes back up again.

The Python code crashes with

    inPreview = self.previewQueue.get()
RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'rgb' (X_LINK_ERROR)'

Do you think this could be related to a device issue or a software issue?

rgb_preview.py crashes my guest machine in VMware Workstation

I know this could be a problem with VMware Workstation, but I am still reporting my situation.

Environment 1:

  • Host OS: Windows 10 Pro 20H2 OS build 19042.1052
  • VMWare workstation 16.1.2
  • Guest USB compatibility: USB 3.1
  • Guest OS: Ubuntu 20.04 desktop LTS 64bit, Ubuntu 18.04 desktop LTS 64bit
  • Python 3.8.5, 3.6.9

Environment 2:

  • Host OS: Ubuntu 20.04 desktop LTS 64bit
  • VMWare workstation 16.1.2
  • Same guest as the one in the above Environment 1
  • Python 3.8.5, 3.6.9

Reproduce steps:
git clone https://github.com/luxonis/depthai-python
cd depthai-python
python3 examples/install_requirements.py
cd examples
python3 rgb_preview.py

In Environment 1, it previews without any problem, but when I press 'q', my guest crashes and powers off, so I could not collect any logs or error messages.
In Environment 2, however, rgb_preview.py runs fine; even when I press 'q', it stops normally.

I believe this is not a problem with rgb_preview.py, because I tried other Python examples with the same result.

[Feature Request] Add depth information

The Gen1 API displays depth information (x, y, and z coordinates) on standard object detection. Could this be migrated to the Gen2 API 08-RGB and MobileNet SSD example?

Thanks!

09_mono_mobilenet.py code doesn't match description/intention in docs

The doc at https://docs.luxonis.com/projects/api/en/latest/samples/09_mono_mobilenet/ says

example shows how to run MobileNetv2SSD on the left grayscale camera and how to display the neural network results on a preview of the right camera stream.

However, the code exclusively uses the right camera. The word "left" never appears in the code.

In my opinion, either the doc or the code is in error. A fix to either one would align them.

Reproduced in live test using gen2_develop branch abbc96a

Issue with frame width and frame height

When running with the option -s left metaout

for _, nnet_packet in enumerate(nnet_packets):
    print(f'{nnet_packet.getMetadata().getFrameWidth()}, {nnet_packet.getMetadata().getFrameHeight()}')

it returns:

  • 19, 302056116 for category 32768
  • 2597050688, 2597050816 for category 65536
  • 4294967279, 4294895615 for category -526338
  • 4286578687, 2147483647 for category -1

The code for FrameMetadata suggests that the height and width should be in pixels, but the actual numbers don't make sense to me.

Push git release tags

There are releases on PyPI, but no release tags in the depthai-python git repo, although the depthai repo has them. Would it be possible to push tags aligned with the PyPI releases too?

Unsupported config option for services: 'docs'

cd docs
docker-compose build
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services: 'docs'

docker-compose --version
docker-compose version 1.25.0, build unknown

Linux Mint 19.2

Device error/misconfiguration - X_LINK_ERROR (v2.4.0.0)

Hi DepthAI team, I'm getting the following error when trying to continuously read left, right, and RGB images from my OAK-D. I'm using Python 3.8.5, DepthAI 2.4.0.0, and Ubuntu 20.04. The device crashes after reading a few frames:

[14442C10F1C709D100] [40.832] [system] [critical] Fatal error. Please report to developers. Log: 'ImageManipHelper' '59'
Got left, right from OAK-D
Got left, right from OAK-D
Got left, right from OAK-D
Got left, right from OAK-D
Traceback (most recent call last):
  File "oakd.py", line 357, in <module>
    queue_left = device.getOutputQueue(
RuntimeError: Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'left_stream' (X_LINK_ERROR)'

As seen in the first message, there is also an error related to image manipulation (I'm resizing images to 640x480), but I don't know the reason for this message. The minimal example code is given below:

#!/usr/bin/env python3

import os
import sys
import time
import depthai as dai

if __name__ == '__main__':
    pipeline = dai.Pipeline()

    # LEFT CAMERA
    cam_left = pipeline.createMonoCamera()
    cam_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
    cam_left.setResolution(
        dai.MonoCameraProperties.SensorResolution.THE_400_P)

    # RIGHT CAMERA
    cam_right = pipeline.createMonoCamera()
    cam_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
    cam_right.setResolution(
        dai.MonoCameraProperties.SensorResolution.THE_400_P)

    # RGB CAMERA
    cam_rgb = pipeline.createColorCamera()
    cam_rgb.setBoardSocket(dai.CameraBoardSocket.RGB)
    cam_rgb.setResolution(
        dai.ColorCameraProperties.SensorResolution.THE_1080_P)
    cam_rgb.setColorOrder(
        dai.ColorCameraProperties.ColorOrder.RGB)

    # IMAGE MANIPULATION
    manip_left = pipeline.createImageManip()
    manip_right = pipeline.createImageManip()
    manip_rgb = pipeline.createImageManip()

    for manip in [manip_left, manip_right, manip_rgb]:
        manip.initialConfig.setResize(640, 480)

    # OUTPUT LINKS
    xout_manip_left = pipeline.createXLinkOut()
    xout_manip_right = pipeline.createXLinkOut()
    xout_manip_rgb = pipeline.createXLinkOut()

    xout_manip_left.setStreamName('left_stream')
    xout_manip_right.setStreamName('right_stream')
    xout_manip_rgb.setStreamName('rgb_stream')

    # Raw camera images --> Image manipulation module
    cam_left.out.link(manip_left.inputImage)
    cam_right.out.link(manip_right.inputImage)
    cam_rgb.video.link(manip_rgb.inputImage)
    # Image manipulation module --> public outputs
    manip_left.out.link(xout_manip_left.input)
    manip_right.out.link(xout_manip_right.input)
    manip_rgb.out.link(xout_manip_rgb.input)

    # Pipeline is defined, now we can connect to the device
    device = dai.Device(pipeline, usb2Mode=False)
    device.setLogLevel(dai.LogLevel.DEBUG)

    # blocking=True makes the queue block when full; blocking=False overwrites the oldest messages
    queue_left = device.getOutputQueue(
        name='left_stream', maxSize=10, blocking=False)
    queue_right = device.getOutputQueue(
        name='right_stream', maxSize=10, blocking=False)
    queue_rgb = device.getOutputQueue(
        name='rgb_stream', maxSize=10, blocking=False)

    # keep reading
    while True:
        time.sleep(0.5)
        got = []

        if True:
            left = queue_left.tryGet()
            if left is not None:
                left = left.getCvFrame()
                got.append('left')

        if True:
            right = queue_right.tryGet()
            if right is not None:
                right = right.getCvFrame()
                got.append('right')

        if True:
            rgb = queue_rgb.tryGet()
            if rgb is not None:
                rgb = rgb.getCvFrame()
                got.append('rgb')

        if len(got) > 0:
            print('Got {} from OAK-D'.format(', '.join(got)))
        else:
            print('No images were read from OAK-D!')

I've tried blocking queues instead of non-blocking, .get() instead of .tryGet(), and different queue sizes (3, 10, 20). In all cases the device eventually crashes. Does anyone have an idea what causes this crash or the ImageManip error?

I've installed DepthAI v2.4.0.0 with pip and installed the other dependencies with these commands:

sudo wget -qO- http://docs.luxonis.com/_static/install_dependencies.sh | bash
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules

Strong Jello effect on OAK-1 mounted on drone

I'm trying to use an OAK-1 on a drone. The vibrations on the drone lead to a very strong jello effect.
I understand that rolling-shutter sensors are susceptible to the jello effect when subjected to vibrations.

Is there anything in the camera settings that may be worsening it? Something that aggravates the rolling-shutter effect? I'm using the preview output at 640x360 resolution while maintaining the aspect ratio.

Also, I tried to compensate by reducing the exposure time to 1000 us. However, using this setting forces me to set the ISO as well. It would be great if I could control the exposure time only, while the ISO is adjusted automatically to keep the scene well lit (as far as possible).
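For context, this is roughly how I set the manual exposure, via a CameraControl message sent over an XLinkIn control queue (a sketch only; the stream name and the 1000 us / ISO 800 values are illustrative, and a connected device is required):

```python
import depthai as dai

# Sketch: link an XLinkIn control stream into the color camera
pipeline = dai.Pipeline()
cam = pipeline.createColorCamera()
control_in = pipeline.createXLinkIn()
control_in.setStreamName("control")
control_in.out.link(cam.inputControl)

with dai.Device(pipeline) as device:
    q = device.getInputQueue("control")
    ctrl = dai.CameraControl()
    # setManualExposure takes both values; I cannot set exposure alone
    ctrl.setManualExposure(1000, 800)  # exposure time [us], ISO
    q.send(ctrl)
```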

How to get output as FP32?

I see getLayerFp16(self, name) in the docs; it is used to get the output as float16. But how do I get the output as float32? I cannot see any function like getLayerFp32() :((
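For what it's worth, a host-side workaround I'd expect to apply (an assumption, since the API only exposes FP16): widen the FP16 values with NumPy after reading them.

```python
import numpy as np

def fp16_layer_to_fp32(fp16_values):
    """Widen FP16 layer values (e.g. as returned by NNData.getLayerFp16)
    to float32 on the host."""
    return np.asarray(fp16_values, dtype=np.float16).astype(np.float32)

# Dummy data standing in for nndata.getLayerFp16("output")
out = fp16_layer_to_fp32([0.5, 1.25, -2.0])
```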

tmp_multicam_color - Failed to control 3 cameras through the `Script` node

I am using three IMX477 cameras at the same time and controlling, through the Script node, which camera's frames are sent on to the NN node. Only two of the cameras return data via get().

from datetime import datetime, timedelta

import cv2
import depthai as dai

try:
    from loguru import logger
except ImportError:
    import logging as logger

    LOG_FORMAT = "%(asctime)s - %(levelname)s - %(message)s"  # type: str
    DATE_FORMAT = "%m/%d/%Y %H:%M:%S"  # type: str
    logger.basicConfig(level=logger.DEBUG, format=LOG_FORMAT, datefmt=DATE_FORMAT)

def create_pipeline():
    logger.info("Creating pipeline...")
    pipeline = dai.Pipeline()  # type: dai.Pipeline

    logger.info("Creating Color Camera ...")
    cam_rgb = pipeline.createColorCamera()  # type: dai.node.ColorCamera
    cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
    cam_rgb.setBoardSocket(dai.CameraBoardSocket.RGB)
    cam_rgb.setIspScale(1, 2)

    logger.info("Creating Left Camera ...")
    cam_left = pipeline.createColorCamera()  # type: dai.node.ColorCamera
    cam_left.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
    cam_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
    cam_left.setIspScale(1, 2)

    logger.info("Creating Right Camera ...")
    cam_right = pipeline.createColorCamera()  # type: dai.node.ColorCamera
    cam_right.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
    cam_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
    cam_right.setIspScale(1, 2)

    cam_select = pipeline.createXLinkIn()
    cam_select.setStreamName("cam_select")

    image_manip_script = pipeline.create(dai.node.Script)
    # image_manip_script.inputs["cam_select"].setBlocking(False)
    # image_manip_script.inputs["cam_select"].setQueueSize(1)
    cam_select.out.link(image_manip_script.inputs["cam_select"])
    cam_left.isp.link(image_manip_script.inputs["left_in"])
    cam_right.isp.link(image_manip_script.inputs["right_in"])
    cam_rgb.isp.link(image_manip_script.inputs["rgb_in"])
    image_manip_script.setScript(
        """
        from datetime import datetime, timedelta

        def wait_for_results(queue):
            # type: (dai.DataOutputQueue) -> (bool, object)
            start = datetime.now()
            while 1:
                res = queue.tryGet()
                if res is not None:
                    break
                if datetime.now() - start > timedelta(seconds=1):
                    return False, res
            return True, res
        while 1 :
            flag_ = node.io["cam_select"].get().getData().tolist()[0]
            node.warn(f"flag_ = {flag_}")
            if flag_ == 0:
                has_results,result = wait_for_results(node.io["left_in"])
            elif flag_ == 1:
                has_results,result = wait_for_results(node.io["right_in"])
            elif flag_ == 2:
                has_results,result = wait_for_results(node.io["rgb_in"])
            if not has_results:
                node.warn(f"flag_ {flag_} no data")
            node.io["to_manip"].send(result)

        """
    )

    cam_out = pipeline.createXLinkOut()   # type: dai.node.XLinkOut
    cam_out.setStreamName("cam_out")

    image_manip_script.outputs["to_manip"].link(cam_out.input)

    return pipeline


def wait_for_results(queue):
    # type: (dai.DataOutputQueue) -> bool
    start = datetime.now()
    while not queue.has():
        if datetime.now() - start > timedelta(seconds=1):
            return False
    return True


# @logger.catch
def main():
    with dai.Device(create_pipeline()) as device:
        # Set device log level - to see logs from the Script node
        device.setLogLevel(dai.LogLevel.WARN)
        device.setLogOutputLevel(dai.LogLevel.WARN)
        logger.info("Starting pipeline ...")
        cam_out = device.getOutputQueue("cam_out")
        cam_select = device.getInputQueue("cam_select")
        cam_select_buffer = dai.Buffer()

        cam_list = {
            0: "left",
            1: "right",
            2: "rgb",
        }
        cam_select_id = 0

        while True:
            cam_select_buffer.setData([cam_select_id])

            cam_select.send(cam_select_buffer)
            has_results = wait_for_results(cam_out)
            if not has_results:
                logger.warning(f"cam {cam_list.get(cam_select_id)} No data from nn!")
                cam_select_id = (cam_select_id + 1) % 3
                logger.warning(f"change to {cam_list.get(cam_select_id)}")
                continue
            frame = cam_out.tryGet()
            cv2.imshow("frame", frame.getCvFrame())
            key = cv2.waitKey(1)
            if key in range(48, 51):
                cam_select_id = key - 48
                logger.info(f"change to {cam_list.get(cam_select_id)}")
            elif key == 32:
                cam_select_id = (cam_select_id + 1) % 3
                logger.info(f"change to {cam_list.get(cam_select_id)}")
            elif key == ord("q"):
                break


if __name__ == "__main__":
    main()

The program stops without reporting an error.

In the PyCharm IDE: Process finished with exit code -1073740791 (0xC0000409)

Hi, I am trying to use an OAK-D. I have successfully run some examples, and I have recompiled those models for my version of OpenVINO, 2020.4 (face-detection-retail-0004 and landmarks-regression-retail-0009). This works!

On the other hand, I found information about face-recognition-resnet100-arcface in the OpenVINO open_model_zoo. I tried to convert it to the IR format according to the information provided and then convert that to .blob, but this remains in limbo. My commands are:

  1. cd C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\bin
  2. setupvars.bat
  3. cd C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\deployment_tools\model_optimizer
  4. python mo_mxnet.py --input_model arcface/model-0000.params --input_shape [1,3,112,112] --reverse_input_channels --data_type FP16 (per https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/face-recognition-resnet100-arcface/model.yml)
  5. cd C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\deployment_tools\inference_engine\bin\intel64\Release
  6. myriad_compile -m model-0000.xml -o face_rec.blob -ip U8 -VPU_NUMBER_OF_SHAVES 4 -VPU_NUMBER_OF_CMX_SLICES 4

The files and test code: https://github.com/borgiaE/gam

Could you guide me on what I am doing wrong? I appreciate any help.

Multi stream: RGB + Left + Right + Stereo not working

When the four streams are enabled together, "RGB (preview) + Left + Right + Stereo", the device crashes. This is a minimal example:

import time
import cv2
import depthai as dai
import numpy as np
import depthai


enable_rgb = True
pipeline = dai.Pipeline()

if enable_rgb:
    # RGB Camera
    rgb_camera = pipeline.createColorCamera()
    rgb_camera.setBoardSocket(dai.CameraBoardSocket.RGB)
    rgb_camera.setPreviewSize(300, 300)
    rgb_camera.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
    rgb_camera.setInterleaved(True)
    rgb_camera.setPreviewKeepAspectRatio(False)

# Left camera
left_camera = pipeline.createMonoCamera()
left_camera.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
left_camera.setBoardSocket(dai.CameraBoardSocket.LEFT)

# Right camera
right_camera = pipeline.createMonoCamera()
right_camera.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
right_camera.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# Stereo camera
stereo_camera = pipeline.createStereoDepth()
stereo_camera.setConfidenceThreshold(255)
stereo_camera.setMedianFilter(dai.StereoDepthProperties.MedianFilter.MEDIAN_OFF)
stereo_camera.setLeftRightCheck(True)
stereo_camera.setExtendedDisparity(False)
stereo_camera.setSubpixel(True)
stereo_camera.setOutputDepth(True)
stereo_camera.setOutputRectified(True)

# Link LEFT/RIGHT -> STEREO
left_camera.out.link(stereo_camera.left)
right_camera.out.link(stereo_camera.right)

if enable_rgb:
    # RGB Stream
    rgb_stream = pipeline.createXLinkOut()
    rgb_stream.setStreamName("rgb_stream")
    rgb_camera.preview.link(rgb_stream.input)

# Left Stream
left_stream = pipeline.createXLinkOut()
left_stream.setStreamName("left_stream")
left_camera.out.link(left_stream.input)

# Right Stream
right_stream = pipeline.createXLinkOut()
right_stream.setStreamName("right_stream")
right_camera.out.link(right_stream.input)

# Depth Stream
depth_stream = pipeline.createXLinkOut()
depth_stream.setStreamName("depth_stream")
stereo_camera.depth.link(depth_stream.input)

# Device
device = dai.Device(pipeline, usb2Mode=True)
device.startPipeline()
device.setLogLevel(depthai.LogLevel.DEBUG)
# device.setLogOutputLevel(depthai.LogLevel.DEBUG)
# device.getQueueEvents()


streams = ['left_stream', 'right_stream', 'depth_stream']
if enable_rgb:
    streams.append('rgb_stream')

# Loop
while True:
    print("New_frame")

    for name in streams:
        frame = device.getOutputQueue(name).tryGet()
        if frame:
            print("\t", name, frame.getFrame().shape)

    time.sleep(0.1)

If you set enable_rgb = False, the device works and the 3 streams are grabbed correctly. When you set enable_rgb = True, the only stream available is RGB, and only for a few seconds; then this error occurs:

Traceback (most recent call last):
  File "plain_example.py", line 83, in <module>
    frame = device.getOutputQueue(name).tryGet()
ValueError: Device already closed or disconnected

With version:

depthai                       2.1.0.0.dev0+a60bdfb9c189e17d2356728675fd91be9a5c8c7e

on Ubuntu 18

NN models

So I may have missed it in the documentation, but each of the examples with an NN (YOLOv3, YOLOv4, SSD, MobileNet) references blobs located in examples\models, but there is no such folder and no blob files. Where can I find the premade blob files for the OAK-D camera?

From 08_rgb_mobilenet, line 10:
nnPathDefault = str((Path(__file__).parent / Path('models/mobilenet-ssd_openvino_2021.2_6shave.blob')).resolve().absolute())
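A sketch of one way to fetch a prebuilt blob, assuming the blobconverter helper package (this is an assumption, not confirmed as the project's official distribution channel; it needs network access):

```python
import blobconverter

# Fetch a precompiled blob from the model zoo; the model name and
# shave count here mirror the filename the example expects
blob_path = blobconverter.from_zoo(name="mobilenet-ssd", shaves=6)
print(blob_path)
```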

Thanks,
