
csi-camera's Introduction

CSI-Camera

Simple example of using a MIPI-CSI(2) Camera (like the Raspberry Pi Version 2 camera) with the NVIDIA Jetson Developer Kits with CSI camera ports. This includes the recent Jetson Nano and Jetson Xavier NX. This is support code for the article on JetsonHacks: https://wp.me/p7ZgI9-19v

For the Nanos and Xavier NX, the camera should be installed in the MIPI-CSI Camera Connector on the carrier board. The pins on the camera ribbon should face the Jetson module, and the tape stripe should face outward.

Some Jetson developer kits have two CSI camera slots. You can use the sensor_id attribute with the GStreamer nvarguscamerasrc element to specify which camera. Valid values are 0 or 1 (the default is 0 if not specified), e.g.

nvarguscamerasrc sensor_id=0

To test the camera:

# Simple Test
#  Ctrl^C to exit
# sensor_id selects the camera: 0 or 1 on Jetson Nano B01
$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! nvoverlaysink

# More specific - width, height and framerate are from supported video modes
# Example also shows sensor_mode parameter to nvarguscamerasrc
# See table below for example video modes of example sensor
$ gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! \
   'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1' ! \
   nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=540' ! \
   nvvidconv ! nvegltransform ! nveglglessink -e

Note: Your camera may report different modes than those shown below. You can use the simple gst-launch example above to determine the camera modes reported by the sensor you are using.

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

Also, the display transform may be sensitive to width and height (in the above example, width=960, height=540). If you experience issues, check that your display width and height have the same aspect ratio as the selected camera frame size (in the above example, 960x540 is one quarter of 1920x1080).

Samples

simple_camera.py

simple_camera.py is a Python script which reads from the camera and displays the frame to a window on the screen using OpenCV:

$ python simple_camera.py
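
Under the hood, the script builds a GStreamer pipeline string and hands it to OpenCV with the CAP_GSTREAMER backend. The sketch below illustrates the idea; the helper name gstreamer_pipeline() and its parameters are illustrative and may not match the repository script exactly:

import cv2

# Illustrative helper: build a GStreamer pipeline string for nvarguscamerasrc.
# The capture width/height/framerate must match one of the sensor modes listed above.
def gstreamer_pipeline(sensor_id=0, capture_width=1920, capture_height=1080,
                       display_width=960, display_height=540,
                       framerate=30, flip_method=0):
    return (
        "nvarguscamerasrc sensor_id=%d ! "
        "video/x-raw(memory:NVMM), width=%d, height=%d, format=NV12, framerate=%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=%d, height=%d, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
        % (sensor_id, capture_width, capture_height, framerate,
           flip_method, display_width, display_height)
    )

cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
while cap.isOpened():
    ret, frame = cap.read()               # frame arrives as a BGR numpy array
    if not ret:
        break
    cv2.imshow("CSI Camera", frame)
    if cv2.waitKey(1) == 27:              # ESC exits
        break
cap.release()
cv2.destroyAllWindows()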

face_detect.py

face_detect.py is a Python script which reads from the camera and uses Haar Cascades to detect faces and eyes:

$ python face_detect.py

Haar Cascades is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. The function is then used to detect objects in other images.

See: https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
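
As a rough sketch of what the detection step looks like in OpenCV (the cascade path below is where Ubuntu's OpenCV packages usually put the data files and is an assumption; adjust it for your install):

import cv2

# Assumed cascade location; change this to wherever your OpenCV build keeps its XML files
cascade_path = "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)
if face_cascade.empty():
    raise IOError("Could not load cascade: " + cascade_path)

img = cv2.imread("test.jpg")                    # or a frame read from the camera
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # cascades operate on grayscale images
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)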

dual_camera.py

Note: You will need to install numpy for the Dual Camera Python example to work:

$ pip3 install numpy

This example is for the newer Jetson boards (Jetson Nano, Jetson Xavier NX) with two CSI-MIPI camera ports. This is a simple Python program which reads both CSI cameras and displays them in one window. The window is 1920x540. For performance, the script uses a separate thread for reading each camera image stream. To run the script:

$ python3 dual_camera.py
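
The per-camera thread simply keeps the most recent frame in an instance variable, which the display loop then reads and stitches side by side. A simplified sketch of that pattern (the class and method names are illustrative, not necessarily those used in dual_camera.py):

import threading
import cv2
import numpy as np

class CSICamera:
    def __init__(self, pipeline):
        self.cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
        self.frame = None
        self.running = True
        self.lock = threading.Lock()
        self.thread = threading.Thread(target=self._reader, daemon=True)
        self.thread.start()

    def _reader(self):
        # Keep grabbing frames so the display loop always sees the newest one
        while self.running and self.cap.isOpened():
            ret, frame = self.cap.read()
            if ret:
                with self.lock:
                    self.frame = frame

    def read(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def release(self):
        self.running = False
        self.thread.join()
        self.cap.release()

# Typical use with the pipeline helper sketched earlier:
#   left = CSICamera(gstreamer_pipeline(sensor_id=0))
#   right = CSICamera(gstreamer_pipeline(sensor_id=1))
#   combined = np.hstack((left.read(), right.read()))   # two 960x540 frames -> 1920x540 window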

simple_camera.cpp

The last example is a simple C++ program which reads from the camera and displays to a window on the screen using OpenCV:

$ g++ -std=c++11 -Wall -I/usr/lib/opencv -I/usr/include/opencv4 simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera

$ ./simple_camera

This program is a simple outline and does not do much error checking. For better C++ code, use https://github.com/dusty-nv/jetson-utils

Notes

Camera Image Formats

You can use v4l2-ctl to determine the camera capabilities. v4l2-ctl is in the v4l-utils package:

$ sudo apt-get install v4l-utils

For the Raspberry Pi V2 camera, a typical output is (assuming the camera is /dev/video0):
$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'RG10'
	Name        : 10-bit Bayer RGRG/GBGB
		Size: Discrete 3280x2464
			Interval: Discrete 0.048s (21.000 fps)
		Size: Discrete 3280x1848
			Interval: Discrete 0.036s (28.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.008s (120.000 fps)

GStreamer Parameter

For the GStreamer pipeline, the nvvidconv flip-method parameter can rotate/flip the image. This is useful when the mounting of the camera is of a different orientation than the default.

flip-method         : video flip methods
                        flags: readable, writable, controllable
                        Enum "GstNvVideoFlipMethod" Default: 0, "none"
                           (0): none             - Identity (no rotation)
                           (1): counterclockwise - Rotate counter-clockwise 90 degrees
                           (2): rotate-180       - Rotate 180 degrees
                           (3): clockwise        - Rotate clockwise 90 degrees
                           (4): horizontal-flip  - Flip horizontally
                           (5): upper-right-diagonal - Flip across upper right/lower left diagonal
                           (6): vertical-flip    - Flip vertically
                           (7): upper-left-diagonal - Flip across upper left/lower right diagonal
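
For example, a camera mounted upside down can be corrected with flip-method=2 (rotate-180), either on the gst-launch-1.0 command line or in the pipeline string passed to OpenCV. A minimal fragment, assuming the gstreamer_pipeline() helper sketched in the Samples section above:

# Rotate the image 180 degrees before it reaches OpenCV
cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=2), cv2.CAP_GSTREAMER)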

OpenCV and Python

Starting with L4T 32.2.1 / JetPack 4.2.2, GStreamer support is built into OpenCV. The OpenCV version is 3.3.1 for those versions. Please note that if you are using earlier versions of OpenCV (most likely installed from the Ubuntu repository), you will get 'Unable to open camera' errors.
If you can open the camera in GStreamer from the command line but have issues opening the camera in Python, check the OpenCV version:
>>> cv2.__version__
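
A fuller check is to confirm that GStreamer support was actually compiled into your OpenCV build; cv2.getBuildInformation() is a standard OpenCV call that lists the build options. For example:

import cv2

print(cv2.__version__)
# Look for a line such as "GStreamer: YES"; if it says "NO",
# cv2.VideoCapture(..., cv2.CAP_GSTREAMER) will not be able to open the pipeline.
print([line for line in cv2.getBuildInformation().split("\n") if "GStreamer" in line])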

Release Notes

v3.2 Release January, 2022

  • Add Exception handling to Python code
  • Faster GStreamer pipelines, better performance
  • Better naming of variables, simplification
  • Remove Instrumented examples
  • L4T 32.6.1 (JetPack 4.6)
  • OpenCV 4.4.1
  • Python3
  • Tested on Jetson Nano B01, Jetson Xavier NX
  • Tested with Raspberry Pi V2 cameras

v3.11 Release April, 2020

  • Release both cameras in dual camera example (bug-fix)

v3.1 Release March, 2020

  • L4T 32.3.1 (JetPack 4.3)
  • OpenCV 4.1.1
  • Tested on Jetson Nano B01
  • Tested with Raspberry Pi v2 cameras

v3.0 December 2019

  • L4T 32.3.1
  • OpenCV 4.1.1.
  • Tested with Raspberry Pi v2 camera

v2.0 Release September, 2019

  • L4T 32.2.1 (JetPack 4.2.2)
  • OpenCV 3.3.1
  • Tested on Jetson Nano

Initial Release (v1.0) March, 2019

  • L4T 32.1.0 (JetPack 4.2)
  • Tested on Jetson Nano

csi-camera's People

Contributors

jetsonhacks, jetsonhacksnano, tomasz-lewicki


csi-camera's Issues

[BUG] cpp example not working

Describe the issue
Cannot run the C++ example on the Jetson Nano; the Python example works fine.

What version of L4T/JetPack
L4T/JetPack version: 4.4

What version of OpenCV
OpenCV version: 4.1.1 with CUDA

Python Version
Python version if applicable: 2.7

To Reproduce
Steps to reproduce the behavior:
Ran python simple_camera.py, then compiled and tried to launch ./simple_camera

Expected behavior
I expect the example to run.

Additional context
Tried to use the Nano with the Mediapipe iris tracking GPU example and substituted the GStreamer parts in its C++ code, but it didn't work. So I tried the example from this repository and noticed it didn't work either. Maybe there is a problem in the GStreamer pipeline? Or is it my version of OpenCV?

Update:
I noticed that my build was not built with GStreamer support:
Video I/O: DC1394 1.x: NO DC1394 2.x: YES (ver 2.2.5) FFMPEG: YES avcodec: YES (ver 57.107.100) avformat: YES (ver 57.83.100) avutil: YES (ver 55.78.100) swscale: YES (ver 4.8.100) avresample: YES (ver 3.7.0) GStreamer: NO OpenNI: NO OpenNI PrimeSensor Modules: NO OpenNI2: NO PvAPI: NO GigEVisionSDK: NO Aravis SDK: NO UniCap: NO UniCap ucil: NO V4L/V4L2: NO/YES XIMEA: NO Xine: NO gPhoto2: YES
So probably building with GStreamer support would fix the problem.
But then, out of curiosity, why does the Python version work?

Unable to open camera when running simple_camera.py with Python version 3.7.5

Describe the issue
Unable to open camera when running simple_camera.py with Python version 3.7.5 using VSCode.

What version of L4T/JetPack
L4T/JetPack version: 4.6.1

What version of OpenCV
OpenCV version: 4.1.1.2 (although some seem to be 3.2.0 when I use the command: "dpkg -l | grep libopencv")

Python Version
Python version if applicable: 3.7.5

To Reproduce
Steps to reproduce the behavior:

  1. Power the Jetson Nano
  2. Attach RPi V2 camera
  3. Clone this repo
  4. Run the simple_camera.py file

Expected behavior
A GStreamer window pops up which shows the camera working, like it does for Python 2.7.17.

Additional context

  1. I have a ton of different installations and overall, it's kind of a mess. Locating the root issue might be rather problematic.
  2. I used a command that removes Python from the computer so that I could install the 3.7 version; what issues might this have caused? I needed to at least reinstall pip, and who knows what else it did.

Unable to access camera mode 1280x720 (60 FPS)

Among the available modes there is GST_ARGUS: 1280 x 720 FR = 59.999999.

However, when I try to access this mode using FR = 60/1, it defaults to
GST_ARGUS: 1280 x 720 FR = 120.000005.

Any help regarding this?

Camera module V1.3 on Jetson?

Hi there!

I'm trying to get a camera module V1.3 to work with a Jetson Nano. Unfortunately, it's not showing up as /dev/video*. I'm wondering if there are some packages I would need to install first, or if only the camera module V2 is supported on the Jetson Nano. I'm using the stock Nano ubuntu image, not the one created for the Jetbot.

Thank you for your great content. Any help/pointers much appreciated.

[BUG] Unable to open camera when building opencv from source

Describe the issue
Hi, I am unable to access the camera when I build OpenCV from source, even with GStreamer enabled. I get the following error:

nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
Unable to open camera

If I uninstall and install from apt it works fine, so it must be some other dependency I am missing. I need to build from source to enable the NVIDIA optical flow contrib packages.

What version of L4T/JetPack
L4T/JetPack version:
R32 (release), REVISION: 5.1, GCID: 26202423, BOARD: t210ref, EABI: aarch64, DATE: Fri Feb 19 16:45:52 UTC 2021
What version of OpenCV
OpenCV version:
4.5
Python Version
3.8

To Reproduce
Steps to reproduce the behavior:
For example, what command line did you run?
python3 simple_camera.py

Expected behavior
run the camera app and show output

Additional context
Add any other context about the problem here.

Compatible with new Raspberry Pi Camera?

Hello,

I have the Jetson Nano B01 Kit and the new Raspberry Pi Camera. I connected the camera in slot CAM0 with the pins towards the Jetson main board. However, the Jetson does not find the camera - there is no /dev/video.

Do I need to change or upgrade anything in order to make this new Raspberry Pi Camera work? Any camera driver missing?

Thanks and best,
Sebastian

No cameras available

gst-launch-1.0 nvarguscamerasrc sensor_id=1 ! nvoverlaysink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:557 No cameras available
Got EOS from element "pipeline0".
Execution ended after 0:00:00.062571012
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Taking and Saving a Picture RPI camera nvidia nano

Hi!

I apologize if this is a really simple fix, but I cannot really find a solution.
I am working on a school project and I want to use the NVIDIA Nano (with two camera slots) and two RPi v2 cameras to take and save pictures.
I was able to see the video feed from both cameras using your examples, but how can I take and store a picture from each camera (at the same time, or as close as possible to the same time)?
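
One minimal approach (an editorial sketch, not code from this repository) is to open both sensors and write one frame from each with cv2.imwrite; it assumes the gstreamer_pipeline() helper sketched in the Samples section above. Note the two reads happen back to back rather than truly simultaneously:

import cv2

cap0 = cv2.VideoCapture(gstreamer_pipeline(sensor_id=0), cv2.CAP_GSTREAMER)
cap1 = cv2.VideoCapture(gstreamer_pipeline(sensor_id=1), cv2.CAP_GSTREAMER)
ok0, img0 = cap0.read()
ok1, img1 = cap1.read()
if ok0:
    cv2.imwrite("cam0.jpg", img0)
if ok1:
    cv2.imwrite("cam1.jpg", img1)
cap0.release()
cap1.release()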

unable to open camera [BUG]

I'm facing this error while I'm trying to compile the code:

(python3:287093): GStreamer-WARNING **: 19:10:20.532: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so': /usr/lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block

(python3:287093): GStreamer-WARNING **: 19:10:20.534: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvidconv.so': /usr/lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block
[ WARN:0] global /home/jetson/opencv/modules/videoio/src/cap_gstreamer.cpp (734) open OpenCV | GStreamer warning: Error opening bin: no element "nvarguscamerasrc"
[ WARN:0] global /home/jetson/opencv/modules/videoio/src/cap_gstreamer.cpp (501) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
[ERROR:0] global /home/jetson/opencv/modules/videoio/src/cap.cpp (142) open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.5.1) /home/jetson/opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): nvarguscamerasrc !  video/x-raw(memory:NVMM), width=1640, height=1232, format=NV12, framerate=21/1 ! nvvidconv flip-method=0 ! video/x-raw, width=960, height=720, format=BGRx !videoconvert ! video/x-raw, format=BGR ! appsink in function 'icvExtractPattern'

What version of L4T/JetPack
4.6

What version of OpenCV
4.5.1

Python Version
3.8

CV2 headers missing

Describe the issue
Please describe the issue
When I try to compile simple_camera.cpp, I get an error about missing OpenCV includes:

simple_camera.cpp:10:10: fatal error: opencv2/opencv.hpp: No such file or directory
#include <opencv2/opencv.hpp>

What version of L4T/JetPack
L4T/JetPack version:
nv-jetson-nano-sd-card-image-r32.3.1

What version of OpenCV
OpenCV version:
Missing???

Python Version
Python version if applicable:
3

To Reproduce
Steps to reproduce the behavior:
For example, what command line did you run?
g++ -std=c++11 -Wall -I/usr/lib/opencv simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera

Expected behavior
A clear and concise description of what you expected to happen.
compile without error

Additional context
Add any other context about the problem here.

Changes to examples for USB Webcam (Logitech)

Hello. Thank you for the examples. What changes need to be made to use a USB webcam on the Nano? Your gst-launch-1.0 command results in these errors:
Setting pipeline to PAUSED ...

Using winsys: x11
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
Caught SIGSEGV
#0 0x0000007f81b64048 in __GI___poll (fds=0x55a0af67a0, nfds=547638157744, timeout=) at ../sysdeps/unix/sysv/linux/poll.c:41
#1 0x0000007f81c71d80 in () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
#2 0x00000055a0855af0 in ()
Spinning. Please run 'gdb gst-launch-1.0 6380' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.
(Argus) Error Timeout: (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 137)
(Argus) Error Timeout: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadExecute:243 Stream failed to connect.
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:177 (propagating)

[Ask for Help] simple_camera.cpp:10:10: fatal error: opencv2/opencv.hpp: No such file or directory

Describe the issue
jetson@nano:~/CSI-Camera$ g++ -std=c++11 -Wall -I/usr/lib/opencv simple_camera.cpp -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera
simple_camera.cpp:10:10: fatal error: opencv2/opencv.hpp: No such file or directory
#include <opencv2/opencv.hpp>
^~~~~~~~~~~~~~~~~~~~
compilation terminated.

What version of L4T/JetPack
L4T/JetPack version:
jetson@nano:~/CSI-Camera$ jetson_release

  • NVIDIA Jetson Nano (Developer Kit Version)
    • Jetpack 4.5.1 [L4T 32.5.1]
    • NV Power Mode: MAXN - Type: 0
    • jetson_stats.service: active
  • Libraries:
    • CUDA: 10.2.89
    • cuDNN: 8.0.0.180
    • TensorRT: 7.1.3.0
    • Visionworks: 1.6.0.501
    • OpenCV: 4.1.1 compiled CUDA: NO
    • VPI: ii libnvvpi1 1.0.15 arm64 NVIDIA Vision Programming Interface library
    • Vulkan: 1.2.70

What version of OpenCV
OpenCV version:
jetson@nano:~/CSI-Camera$ python
Python 2.7.17 (default, Sep 30 2020, 13:38:04)
[GCC 7.5.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.

import cv2
cv2.version
'4.1.1'

Python Version
Python version if applicable:
jetson@nano:/CSI-Camera$ python3 --version
Python 3.6.9
jetson@nano:
/CSI-Camera$ python --version
Python 2.7.17

To Reproduce
Steps to reproduce the behavior:
For example, what command line did you run?

Expected behavior
A clear and concise description of what you expected to happen.

Additional context
Add any other context about the problem here.

It doesn't work when I've configured it[BUG]

Describe the issue
After configuring GStreamer, I want to call the CSI camera through the darknet framework, but it gets stuck at the point shown in the image below, and then the Jetson Nano restarts.
image

What version of L4T/JetPack
L4T/JetPack version:4.4

What version of OpenCV
OpenCV version:opencv4.1.1

Python Version
Python version if applicable:python3.6

To Reproduce
Steps to reproduce the behavior:
./darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=30/1 ! nvvidconv flip-method=2 ! video/x-raw, width=1280, height=720, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"

I installed GStreamer and called the camera according to the official routine, but the picture cannot be displayed.

image

I can't start the camera with a simple command either
image

[FEATURE REQUEST] Saving video captured from simple_camera.py

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

ANSWER: I'm quite new to opencv and a way to save the video would be really helpful for me.

Describe the solution you'd like
A clear and concise description of what you want to happen.

ANSWER: An option to save the video, for example I type the command python3 simple_camera.py --save 'path' and when I exit using ctrl-C my video gets saved to that path

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

ANSWER: Exploring opencv currently to overcome this problem.

Additional context
Add any other context or screenshots about the feature request here.

ANSWER: Nope. That's all.

Any help would be appreciated! Thanks, and amazing work!
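
As an editorial sketch of one way to do this (saving is not a feature of the repository scripts), frames read from the CSI pipeline can be handed to a cv2.VideoWriter; the gstreamer_pipeline() helper and the output filename are assumptions:

import cv2

cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
# The frame size must match the display_width/display_height of the pipeline (960x540 here)
out = cv2.VideoWriter("output.mp4", fourcc, 30.0, (960, 540))
try:
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        out.write(frame)
        cv2.imshow("CSI Camera", frame)
        if cv2.waitKey(1) == 27:   # ESC stops recording
            break
finally:
    cap.release()
    out.release()
    cv2.destroyAllWindows()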

[BUG] cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'detectMultiScale'

Describe the issue

  File "face_detect.py", line 83, in <module>
    face_detect()
  File "face_detect.py", line 58, in face_detect
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
cv2.error: OpenCV(4.2.0) /tmp/build_opencv/opencv/modules/objdetect/src/cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'detectMultiScale'

I've seen an issue like this in #21; I checked that fix, but both files were present in my OpenCV build.

What version of L4T/JetPack
L4T/JetPack version: Linux 4.9.140-tegra aarch64

What version of OpenCV
OpenCV version: 4.2.0
I used mdegans's build script

Python Version
Python version if applicable: Python 2.7 (cv4 didn't seem to be installed for Python3, still defaulting to 3.3.1)

To Reproduce
Steps to reproduce the behavior:
python face_detect.py

Expected behavior
Don't crash!

Additional context
Should I be running your OpenCV4 build script instead? Is it too late now?
I'm running your latest release.

By the way, I would like to take the opportunity to express my gratitude for your video content. It's pulled me out of one too many pipeline problems. It's such terrific content and deserves way more attention.

with 2 CSI camera?

I want to ask how to use your C++ example for capturing images from 2 CSI cameras.
Maybe we have to configure the pipeline for it, but I really do not know.
Could you please help me?

Thanks

object detection delay

Why is there a delay of around 2 seconds when using the Jetson Nano for real-time object detection? I use the IMX219 CSI camera.

[BUG] Camera fail to work with option --input-flip=rotate-180

Describe the issue
For both Python and the CLI tool video-viewer, the camera fails to work whenever we use the argument "--input-flip=rotate-180". It works for other options, including "horizontal".

Error output:
nvbuf_utils: dmabuf_fd 1052 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
NvBufferGetParams failed for dst_dmabuf_fd
nvbuffer_transform Failed

What version of L4T/JetPack
L4T/JetPack version: 4.6 [L4T 32.6.1]

What version of OpenCV
OpenCV version: 4.1.1

Python Version
Python version if applicable: 3

To Reproduce
Steps to reproduce the behavior:
For example, what command line did you run?
(built with jetson-inference on the Jetson Nano)
video-viewer csi://0 display://0 --input-flip=rotate-180

Expected behavior
The display show a rotated camera view.

Additional context
N/A

How to capture CSI camera from /dev/video1?

Hi,
I can get camera pre-view window for /dev/video0 by the command as below:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

How to capture CSI camera from /dev/video1?
Thanks

How to open RPI Camera V2 with SSH

Hi,
I'm using the following command in order to test the V2.1 RPi Camera
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
It works great when I'm using it from the Jetson's terminal itself.

When I use PuTTY for SSH, I'm getting errors like:

Using winsys: x11
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0
   Camera mode  = 0
   Output Stream W = 3264 H = 2464
   seconds to Run    = 0
   Frame Rate = 21.000000
GST_ARGUS: PowerService: requested_clock_Hz=43238580
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0:
streaming stopped, reason error (-5)
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...

When using simple_camera.py:

nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 4 
   Output Stream W = 1280 H = 720 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: PowerService: requested_clock_Hz=12096000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.

(python3:8749): Gtk-WARNING **: 15:17:51.882: cannot open display: 
CONSUMER: Done Success
WARNING Argus: 5 client objects still exist during shutdown:
        547481396344 (0x7f980034d8)
        547487607168 (0x7f980016b0)
        547487607328 (0x7f98001750)
        547487612416 (0x7f980018b0)
        547487613712 (0x7f980033c0)

How to test hardware & drivers without X?

To save RAM, I run my Nano headless, connecting via SSH.
gst-launch-1.0, as I understand it, tries to display the video, so it's out.

cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER) hangs forever when I try it.

Do you know of a fast, reliable (not dependent on many other things) way to test, akin to Raspberry Pi's raspistill, which just captures a single image?
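
One headless sanity check (a sketch, assuming OpenCV was built with GStreamer support) is to open the same pipeline simple_camera.py uses, grab a single frame, and write it to disk instead of displaying it:

import cv2

cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)   # helper sketched earlier
ret, frame = cap.read()
cap.release()
if ret:
    cv2.imwrite("test_frame.jpg", frame)       # inspect this file over SSH/scp
    print("Captured frame with shape", frame.shape)
else:
    print("Unable to read a frame from the camera")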

error while using it through ssh

When my Jetson is connected to a display and running dual_camera.py it works fine, but when I call it through SSH it shows me this error:

GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 3
Output Stream W = 1280 H = 720
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
Camera index = 1
Camera mode = 3
Output Stream W = 1280 H = 720
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
qt.qpa.screen: QXcbConnection: Could not connect to display
Could not connect to any X display.
CONSUMER: Done Success
CONSUMER: Done Success
(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 266)
(Argus) Error EndOfFile: Receive worker failure, notifying 1 waiting threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 340)
(Argus) Error InvalidState: Argus client is exiting with 1 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 357)
(Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 368)
(Argus) Error EndOfFile: Client thread received an error from socket (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 145)
(Argus) Error EndOfFile: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
WARNING Argus: 10 client objects still exist during shutdown:
547279956344 (0x7f6c0038c8)
547286783504 (0x7f6c001820)
547286783664 (0x7f6c0018c0)
547286788640 (0x7f6c001a20)
547286789936 (0x7f6c0037b0)
547299137392 (0x7f40000bf0)
547299137552 (0x7f40000c90)
547299140240 (0x7f40000df0)
547299140464 (0x7f400022a0)

How can I make it work?

Pink color distortion

I am using the Raspberry Pi NoIR camera v2 with the Jetson Nano. Everything is fine except my video stream is capturing pink-tinted video. Any idea what's happening and how to solve it?

Unable to open camera

when I run this,

python simple_camera.py

I got:

nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
Unable to open camera

Why? Thanks

Run caused by NANO restart

As shown:

GST ARGUS: Setup Complete, Starting captures for 0 seconds
GST ARGUS: Starting repeat capture requests.

The Jetson Nano restarts and cannot open the CSI camera.

Camera doesn't work ("Erroneous pipeline" error)

Every tutorial out there says to run this command to show the camera's video feed, which doesn't work:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

It just spits out an error:

"Erroneous pipeline: could not link nvarguscamerasrc0 to nvvconv0, neither element can handle caps video/x-raw(memory:NVVM), width=(int)3280, height=(int)2464, framerate=(fraction)21/1, format=(string)NV12"

This is on a fresh install of the OS directly from the Jetbot image, and the camera is properly connected and showing up as /dev/video0.

JetPack 4.4 Compatibility?

Is the current state of CSI-Camera compatible with JetPack 4.4?

I have a setup with two Raspberry Pi v2 cameras, which functioned smoothly with JetPack 4.3. Since the update to JetPack 4.4, the CSI-Camera examples do not work anymore.

How to change the angle of view?

I am using a Sony IMX219 with a 160 degree field of view.
My images have the wrong field of view and I couldn't change it.
In particular, mode 2 has only a 70 degree field of view.
Do you have any suggestions on how to fix this problem?

Can't use it twice if it crashes

Hi, I'm trying to use the code with some other features. When my code crashes, though, I have to reboot my Nano before I can use the camera again. Here is what I'm using. Thanks in advance.

def gstreamer_pipeline(
    capture_width=1920,
    capture_height=1080,
    display_width=960,
    display_height=540,
    framerate=30,
    flip_method=0,
    ):
    return (
        "nvarguscamerasrc exposurecompensation=-2 ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )

cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)

while True:
    ret_val, img = cap.read()
    print(ret_val)
    if ret_val:
        send_img(img)
        break
    else:
        break
        
cap.release()
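
One common cause is that the nvargus daemon is left holding the camera when a process dies without releasing the capture. A hedged workaround is to release the capture in a finally block so it runs even when later code throws, and to restart the daemon (sudo systemctl restart nvargus-daemon) if the camera is already stuck:

cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
try:
    ret_val, img = cap.read()
    if ret_val:
        send_img(img)     # user-defined function from the snippet above
finally:
    cap.release()         # runs even if send_img() raises, so the camera is freed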

No cameras available

Hi,
I'm using the MIPI-CSI2 camera from e-Con Systems and got the following error:

$ python3 simple_camera.py
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:521 No cameras available

Even though the camera is available as /dev/video0 and can show video.

Could you please help me to point out the problem please?

Thanks in advance,
Pascal K.

[BUG] Throw segmentation fault when execute the simple_camera.py

Describe the issue
simple_camera.py throws a segmentation fault when the "ESC" key is pressed to exit the application.

What version of L4T/JetPack
L4T/JetPack version: Jetpack 4.5.1 [L4T 32.5.1]

What version of OpenCV
OpenCV version: OpenCV: 4.1.1 compiled CUDA: NO

Python Version
Python version if applicable: Python 3.6.9

To Reproduce
Directly execute python3 ./simple_camera.py
Wait a few seconds (more than 15 seconds)
Press "ESC" to exit the application

Expected behavior
Exit the application without errors.

Additional context
Add any other context about the problem here.
HW&SW:

  • NVIDIA Jetson Nano (Developer Kit Version)
    • Jetpack 4.5.1 [L4T 32.5.1]
    • jetson_stats.service: active
  • Libraries:
    • CUDA: 10.2.89
    • cuDNN: 8.0.0.180
    • TensorRT: 7.1.3.0
    • Visionworks: 1.6.0.501
    • OpenCV: 4.1.1 compiled CUDA: NO
    • VPI: ii libnvvpi1 1.0.15 arm64 NVIDIA Vision Programming Interface library
    • Vulkan: 1.2.70


face_detect.py : Jetson reboots while executing

Hi!

I'm trying to execute face_detect.py on my Jetson Nano 2GB (JetPack 4.4.1), camera IMX219-160.

It freezes without showing the camera window; in the console it says "CONSUMER: Producer has connected; continuing" and then the platform reboots.

Thank you!

What is fx, fy, cx, cy of IMX219 160FOV CSI Camera?

I am using two CSI IMX219 160 FOV cameras. I want to know what fx, fy, cx, cy are for this camera.
fx - x-axis focal length of camera in pixels
fy - y-axis focal length of camera in pixels
cx - x-axis optical center of camera in pixels
cy - y-axis optical center of camera in pixels

What formula are you using to calculate them?
How can I calculate them?
Please do tell me the exact values.

Please do answer me as soon as possible; I will be very thankful to you.
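
fx, fy, cx, cy are the entries of the camera intrinsic matrix; they differ from unit to unit and from mode to mode, so they are normally measured by calibrating your own camera rather than looked up. A rough sketch using OpenCV's checkerboard calibration (the image filenames and the 9x6 pattern are assumptions):

import cv2
import numpy as np

# Checkerboard with 9x6 inner corners; capture several views of it with the camera first
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in ["calib_00.jpg", "calib_01.jpg", "calib_02.jpg"]:   # your captured images
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels
cx, cy = K[0, 2], K[1, 2]   # optical center in pixels
print(fx, fy, cx, cy)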

[BUG] face_detect.py crash on detectMultiScale

Describe the issue
Starting face_detect.py, I get the following error:
Traceback (most recent call last):
File "/home/jetson/CSI-Camera/face_detect.py", line 83, in
face_detect()
File "/home/jetson/CSI-Camera/face_detect.py", line 58, in face_detect
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
cv2.error: OpenCV(3.4.6) /home/jetson/src/opencv-3.4.6/modules/objdetect/src/cascadedetect.cpp:1698: error: (-215:Assertion failed) !empty() in function 'detectMultiScale'

What version of L4T/JetPack
L4T/JetPack version:
nv-jetson-nano-sd-card-image-r32.3.1

What version of OpenCV
OpenCV version:
3.4.6

Python Version
Python version if applicable:
3.6

To Reproduce
Steps to reproduce the behavior:
For example, what command line did you run?
run face_detect.py

Expected behavior
A clear and concise description of what you expected to happen.
The program doesn't crash.

Additional context
Add any other context about the problem here.
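
The !empty() assertion from detectMultiScale generally means the cascade XML file was not found at the path given to CascadeClassifier, so the classifier is empty. A quick hedged check (the path below is where Ubuntu's OpenCV packages usually install the cascades; adjust it for your build):

import cv2

path = "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
cascade = cv2.CascadeClassifier(path)
print("loaded OK" if not cascade.empty() else "cascade not found at: " + path)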

CSI camera with YoloV3-tiny

Hello,

Quick question. What would the code be to run Yolov3 tiny with the Jetson Nano connected to the CSI camera?

Thanks in advance!

[BUG] can't start the camera

Describe the issue
I simply can't start the camera

What version of L4T/JetPack
L4T/JetPack version:
nvidia-l4t-core 32.5.0-20210115145440

What version of OpenCV
OpenCV version: 4.1.1

Python Version
Python version if applicable: 3.6.9

To Reproduce
Steps to reproduce the behavior:
run the sample .py code

Expected behavior
A clear and concise description of what you expected to happen.
run the code and open the camera

Additional context
Add any other context about the problem here.

this is the error

nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 5
Output Stream W = 1280 H = 720
seconds to Run = 0
Frame Rate = 120.000005
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module nvarguscamerasrc0 reported: CANCELLED

(python3:9699): GStreamer-CRITICAL **: 22:47:20.231: gst_mini_object_set_qdata: assertion 'object != NULL' failed
(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 266)
(Argus) Error EndOfFile: Receive worker failure, notifying 1 waiting threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 340)
(Argus) Error InvalidState: Argus client is exiting with 1 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 357)
(Argus) Error EndOfFile: Client thread received an error from socket (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 145)
(Argus) Error EndOfFile: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
(Argus) Error InvalidState: Receive thread is not running cannot send. (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 96)
(Argus) Error InvalidState: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
GST_ARGUS: Cleaning up
(Argus) Error InvalidState: Receive thread is not running cannot send. (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 96)
(Argus) Error InvalidState: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
Segmentation fault (core dumped)

[BUG] The csi camera comes out flipped over.

Describe the issue
Please describe the issue

The CSI camera image comes out flipped over. What should I do?

What version of L4T/JetPack
L4T/JetPack version: 4.3

What version of OpenCV
OpenCV version: 3.4.0

Python Version
Python version if applicable: 2.7.1

There is a problem that it cannot run in python3.

python simple_test.py works fine when I run it in a terminal on JetPack 4.4.

But python3 simple_test.py in the terminal doesn't work.

The error 'Unable to open camera' is displayed.

Please let me know what the problem is. Why does it only work with the Python 2 version?

I need to use Python 3.6.9 / OpenCV 4.5.1.

[FEATURE REQUEST]

It would be nice if the readme actually told you how to install this package instead of going straight to how to test it, which obviously won't work until it is installed.

