
intelligent-video-analytics-with-nvidia-jetson-and-microsoft-azure's Introduction

Intelligent-Video-Analytics-with-NVIDIA-Jetson-and-Microsoft-Azure

A repository demonstrating an end-to-end architecture for Intelligent Video Analytics using NVIDIA hardware with Microsoft Azure.

This project contains a collection of self-paced learning modules which guide the user in developing a custom Intelligent Video Analytics application that can handle a variety of video input sources, leverage a custom object detection model, and provide backing cloud services for analysis and reporting.

Each of these modules is accompanied by a livestream that walks through the steps in full detail. You can watch the entire project being built from the ground up in the following 5-part video playlist on YouTube.

5-part video playlist

Overview

5-minute teaser

The project makes use of the NVIDIA DeepStream SDK running on NVIDIA Jetson Embedded hardware to produce an Intelligent Video Analytics Pipeline.

The solution employs a number of modules that run on the NVIDIA hardware device which are instrumented using the Azure IoT Edge runtime. These modules include the Azure Blob Storage on IoT Edge Module for capturing and mirroring object detection training samples to the cloud via a paired Camera Tagging Module. These captured samples are then used to train a custom object detection model with the Custom Vision AI offering from Azure Cognitive Services. Models generated by this service are leveraged by the DeepStream SDK module using a Custom Yolo Parser.

As object detections are produced by the DeepStream SDK, they are filtered using an Azure Stream Analytics on Edge Job that transforms the output into summarized detections. These object detection results are then transmitted to an Azure IoT Hub where they can be forwarded to additional cloud services for processing and reporting.

The cloud services employed include Time Series Insights, a fully managed event-processing service for analyzing data over time. We also demonstrate how to forward object detection data to a Power BI dataset for live visualization of the results in Power BI reports and dashboards.

For more details on how this all works under the hood, check out this episode of the IoT Show where we cover these capabilities and associated services in depth:

IoT Show Episode

Prerequisites

Hardware:

Development Environment:

Cloud Services:

Learn more, get certified

If you are interested in learning more about building solutions with Azure IoT Services, check out the following free learning resources:

Once you have upskilled as an IoT developer, make it official with the AZ-220 Azure IoT Developer certification.

intelligent-video-analytics-with-nvidia-jetson-and-microsoft-azure's People

Contributors

toolboc, topiaruss


intelligent-video-analytics-with-nvidia-jetson-and-microsoft-azure's Issues

Compatible with yolov7-tiny or yolov4-tiny?

Hello guys,

First of all, I want to thank you for this great tutorial.
I have a question: can I use yolov7-tiny or yolov4-tiny instead of yolov3?
If yes, what changes do I need to make?

I am fairly new to this and would really appreciate your help. Thanks.
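For what it's worth, swapping models in a DeepStream custom-YOLO setup generally comes down to pointing the nvinfer config at the new cfg/weights and using a bounding-box parser that understands the new output layers (a YOLOv3 parser won't decode v4/v7 heads). A minimal sketch of the config side, with hypothetical filenames; the repo's actual config name and paths may differ:

```shell
# Sketch only: filenames and the config name below are hypothetical;
# adapt them to the actual nvinfer config used by this repo.
CFG=config_infer_primary_yolo.txt

# The relevant lines as they might look for YOLOv3:
cat > "$CFG" <<'EOF'
custom-network-config=yolov3.cfg
model-file=yolov3.weights
EOF

# Point them at a tiny variant instead (the cfg/weights must exist on disk):
sed -i \
  -e 's|^custom-network-config=.*|custom-network-config=yolov4-tiny.cfg|' \
  -e 's|^model-file=.*|model-file=yolov4-tiny.weights|' \
  "$CFG"

cat "$CFG"
```

The harder part is usually the parser: the custom bounding-box library here targets YOLOv3-style outputs, so a v4/v7-tiny model typically also needs a parser built for that architecture.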

Jetpack 5.0.1 + DeepStream 6.1

(gst-plugin-scanner:6): GStreamer-WARNING **: 01:59:48.077: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvjpeg.so': /usr/lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block

(gst-plugin-scanner:6): GStreamer-WARNING **: 01:59:48.099: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvcompositor.so': /usr/lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block

(gst-plugin-scanner:6): GStreamer-WARNING **: 01:59:48.111: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideosink.so': /usr/lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block

(gst-plugin-scanner:6): GStreamer-WARNING **: 01:59:48.203: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so': /usr/lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block

(gst-plugin-scanner:6): GStreamer-WARNING **: 01:59:48.236: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvivafilter.so': /usr/lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block

(gst-plugin-scanner:6): GStreamer-WARNING **: 01:59:48.248: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnveglstreamsrc.so': /usr/lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block
...
** WARN: <parse_sink:1645>: Unknown key 'overlay-id' for group [sink0]
** ERROR: <create_multi_source_bin:1457>: Failed to create element 'src_bin_muxer'
** ERROR: <create_multi_source_bin:1550>: create_multi_source_bin failed
** ERROR: <create_pipeline:1327>: create_pipeline failed
** ERROR: <main:1459>: Failed to create pipeline
Quitting
App run failed
(the above block of warnings and errors repeats)

When I run the following command to check the installation, I get the errors above:
$ docker logs -f NVIDIADeepStreamSDK

Could you give me a hint about what's wrong?

Does it work with Jetson JetPack 4.5.1?

Hi, thank you for your contribution.
Could you advise where to start, given that we are interested in DeepStream integration with the most recent JetPack 4.5.1 release of the Jetson OS?

Stop auto start

Hi! Great tutorial!

How can I stop the auto-start behavior? I want to do some other things with my Jetson Nano, and I can't stop DeepStreamTest5App.

Thanks!

Problem starting NVIDIADeepStreamSDK

Originally I had the same issue as in the video with ...

          "DISPLAY": {
            "value": ":0"
          }

I'm unsure why. Sometimes I reboot and it's 1, sometimes 0. Anyway, I followed the advice in the video and simply matched the DISPLAY value in deployment.template.json to the output of echo $DISPLAY. Unfortunately I didn't get the same result as you did; mine still fails.

NVIDIA developer forums make some suggestions, here: https://forums.developer.nvidia.com/t/error-failed-to-create-element-src-bin-muxer-while-trying-to-run-a-modified-test5-on-nvidia-deepstream-container/126485 but unfortunately I'm not sure how to apply them.
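As a generic host-side sanity check (a sketch, not specific to this repo), it can help to confirm which display the desktop session is actually on and make sure local clients are allowed to talk to the X server:

```shell
# Sketch: check the host's X display and open it to local clients.
# Intended to run on the Jetson from inside the desktop session.
MSG="Host DISPLAY is: ${DISPLAY:-unset}"   # e.g. :0 or :1 depending on the session
echo "$MSG"

# Allow local (non-network) clients, such as containers, to use the X server.
# Guarded so the line is harmless on a headless box.
if command -v xhost >/dev/null 2>&1 && [ -n "${DISPLAY:-}" ]; then
  xhost +local:
else
  echo "xhost not available or no X session; skipping"
fi
```

Whatever echo $DISPLAY reports is the value the module's DISPLAY environment variable needs to match.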

Output from docker logs -f NVIDIADeepStreamSDK below:

Any help would be greatly appreciated :)

Cheers.

nvbufsurftransform: Could not get EGL display connection
2020-12-28 15:02:19.482779: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
No protocol specified
No EGL Display
nvbufsurftransform: Could not get EGL display connection
No protocol specified
(the above lines repeat)
nvbufsurftransform: Could not get EGL display connection
No protocol specified
No EGL Display
nvbufsurftransform: Could not get EGL display connection
** ERROR: <create_multi_source_bin:1057>: Failed to create element 'src_bin_muxer'
** ERROR: <create_multi_source_bin:1132>: create_multi_source_bin failed
** ERROR: <create_pipeline:1296>: create_pipeline failed
** ERROR: main:1419: Failed to create pipeline
Quitting
App run failed

NVIDIADeepStreamSDK: fails to start

Paul

Excellent tutorial put together by you and Erik. Very well documented, with a good walkthrough.

Currently I am having an issue, and I am sure it's my setup. On Module 2.6: Customizing the Sample Deployment, the NVIDIADeepStreamSDK module fails to start. The following is the error message I receive when I run the "sudo iotedge logs NVIDIADeepStreamSDK" command:

App run failed
** ERROR: <create_multi_source_bin:1057>: Failed to create element 'src_bin_muxer'
** ERROR: <create_multi_source_bin:1132>: create_multi_source_bin failed
** ERROR: <create_pipeline:1296>: create_pipeline failed
** ERROR: main:1419: Failed to create pipeline
Quitting
App run failed


The issue remains the same whether I use "0" or "1" for the following in the deployment.template.json file. Also, the Jetson Nano shows "1" for echo $DISPLAY.

                    "env": {
                        "DISPLAY": {
                            "value": ":0"
                        }
                    }

Current setup
Jetson Nano: Jetpack 4.4.1 (L4T 32.4.4)
CSI: RPi Camera
USB: C920 HD Pro Webcam
Using DSConfig-CustomVisionAI.txt
uri=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov

None of the parameters in the above DSConfig... file have been changed. Let me know how I can resolve this issue. I am also awaiting a few FI9821P cameras from Foscam and will try with those as well.

Once again appreciate all your community efforts in this arena. Please keep this coming.

Thanks

Docker logs tend to grow very large and eat system disk space

Docker logs tend to grow very large and eat system disk space, to the point where the device can no longer boot.
I lost my Jetson Xavier twice that way...

I've written a simple cron job to truncate all Docker container logs every hour:

Edit the crontab file using your favorite editor, for example:
sudo nano /etc/crontab

Add this line to the file (note the -name predicate, so find matches the JSON log files rather than treating the pattern as a path):
0 * * * * root find /var/lib/docker/containers/ -name "*-json.log" -exec truncate -s 0 {} +

I guess you can make it better, I'm not a Linux expert :)
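For reference, Docker can also do this rotation itself: the json-file log driver accepts max-size and max-file options in /etc/docker/daemon.json, which caps every container's log without a cron job. A sketch (written to a temporary path here just to show the shape; on a real device it goes in /etc/docker/daemon.json, followed by a restart of the Docker daemon):

```shell
# Sketch: native Docker log rotation via the json-file driver.
# Each container keeps at most max-file files of max-size each.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
cat "$CONF"
```

After editing the real /etc/docker/daemon.json, restarting the Docker daemon applies the setting; existing containers must be recreated to pick up the new options.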

Resource Not Found Error for Offline Videos as Input

Thanks for the great work!!
Instead of an RTSP stream, I used a locally stored video as input and changed the config to:
type=2
uri=file:///home/test.mp4
Neither the YOLOv3 nor the CustomVision.AI model can pick up this path; both show a "Resource not found" error.
The same video runs on the ResNet10 model from the DeepStream samples.
Do I need to make any other changes to the config file to run videos as input?
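One likely cause (an assumption, since the config lines look right): the DeepStream workload runs inside a container, so file:///home/test.mp4 must resolve inside the container's filesystem. That usually means bind-mounting the host file via the module's container create options. A hypothetical sketch of the relevant fragment, with made-up host paths:

```shell
# Sketch: a createOptions fragment (in deployment.template.json) that would
# make a host video visible inside the container. Paths are hypothetical.
FRAGMENT=$(cat <<'EOF'
{
  "HostConfig": {
    "Binds": [
      "/home/user/videos:/home:ro"
    ]
  }
}
EOF
)
printf '%s\n' "$FRAGMENT"
# With that bind in place, uri=file:///home/test.mp4 inside the container
# maps to /home/user/videos/test.mp4 on the host.
```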

Module 4.1 error

Testing with SampleInput / DemoData produces the following error:

Comparison is not allowed for operands of type 'nvarchar(max)' and 'datetime' in expression 'DeepStreamInput . [@timestamp] ! = CAST ( '1970-01-01T00:00:00.000Z' AS datetime )'.

Not an issue but appreciation

Just wanted to say few words...

This tutorial has been very well put together and works great with no issues. The authors (Paul & Erik) have done a fantastic job in the following areas:

  • Very good explanation of the concepts in Tutorial 1
  • Well split in terms of sessions and the timing of each session
  • Each explanation is well articulated against industry challenges
  • The video together with the repository shows how much effort went into making this.

Hats off to you both, and thank you for the support you provide to the developer community with tutorials like this. It really goes a long way, especially for multi-vendor solution support.

Thanks
Jag

Not working with JetPack 4.6 and DeepStream 6.0

I have a problem with the NVIDIADeepStreamSDK Docker container.

I can't install JetPack 5.0.1 and DeepStream 6.1.

Does this work with JetPack 4.6 and DeepStream 6.0 on my Jetson Nano?

If yes, what do I need to edit for DeepStream to work?
