
ros_deep_learning's Introduction

DNN Inference Nodes for ROS/ROS2

This package contains DNN inference nodes and camera/video streaming nodes for ROS/ROS2 with support for NVIDIA Jetson Nano / TX1 / TX2 / Xavier / Orin devices and TensorRT.

The nodes use the image recognition, object detection, and semantic segmentation DNNs from the jetson-inference library and NVIDIA Hello AI World tutorial, which come with several built-in pretrained networks for classification, detection, and segmentation, along with the ability to load customized user-trained models.

The camera & video streaming nodes support the following input/output interfaces:

  • MIPI CSI cameras
  • V4L2 cameras
  • RTP / RTSP streams
  • WebRTC streams
  • Videos & Images
  • Image sequences
  • OpenGL windows

Various distributions of ROS are supported, either from source or through containers (including Melodic, Noetic, Foxy, Galactic, Humble, and Iron). The same branch supports both ROS1 and ROS2.

Installation

The easiest way to get up and running is by cloning jetson-inference (which ros_deep_learning is a submodule of) and running the pre-built container, which automatically mounts the required model directories and devices:

$ git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ docker/run.sh --ros=humble  # noetic, foxy, galactic, humble, iron

note: the ros_deep_learning nodes rely on data from the jetson-inference tree for storing models, so clone and mount jetson-inference/data if you're using your own container or source installation method.

The --ros argument to the docker/run.sh script selects the ROS distro to use. These in turn use the ros:$ROS_DISTRO-pytorch container images from jetson-containers, which include jetson-inference and this package.

For information about building the ros_deep_learning package for an uncontainerized ROS installation, expand the section below (the parts about installing ROS may require adapting for the particular version of ROS/ROS2 that you want to install).

Legacy Install Instructions

jetson-inference

These ROS nodes use the DNN objects from the jetson-inference project (aka Hello AI World). To build and install jetson-inference, see this page or run the commands below:

$ cd ~
$ sudo apt-get install git cmake
$ git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make -j$(nproc)
$ sudo make install
$ sudo ldconfig

Before proceeding, it's worthwhile to test that jetson-inference is working properly on your system by following this step of the Hello AI World tutorial.

ROS/ROS2

Install the ros-melodic-ros-base or ros-eloquent-ros-base package on your Jetson following these directions.

Depending on which version of ROS you're using, install some additional dependencies and create a workspace:

ROS Melodic

$ sudo apt-get install ros-melodic-image-transport ros-melodic-vision-msgs

For ROS Melodic, create a Catkin workspace (~/ros_workspace) using these steps:
http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment#Create_a_ROS_Workspace

ROS Eloquent

$ sudo apt-get install ros-eloquent-vision-msgs \
                       ros-eloquent-launch-xml \
                       ros-eloquent-launch-yaml \
                       python3-colcon-common-extensions

For ROS Eloquent, create a workspace (~/ros_workspace) to use:

$ mkdir -p ~/ros_workspace/src

ros_deep_learning

Next, navigate into your ROS workspace's src directory and clone ros_deep_learning:

$ cd ~/ros_workspace/src
$ git clone https://github.com/dusty-nv/ros_deep_learning

Then build it. If you are using ROS Melodic, use catkin_make; if you are using ROS2 Eloquent, use colcon build:

$ cd ~/ros_workspace/

# ROS Melodic
$ catkin_make
$ source devel/setup.bash 

# ROS2 Eloquent
$ colcon build
$ source install/local_setup.bash 

The nodes should now be built and ready to use. Remember to source the overlay as shown above so that ROS can find the nodes.

Testing

Before proceeding, if you're using ROS Melodic make sure that roscore is running first:

$ roscore

If you're using ROS2, running the core service is no longer required.

Video Viewer

First, it's recommended to test that you can stream a video feed using the video_source and video_output nodes. See Camera Streaming & Multimedia for valid input/output streams, and substitute your desired input and output argument below. For example, you can use video files for the input or output, or use V4L2 cameras instead of MIPI CSI cameras. You can also use RTP/RTSP streams over the network.

# ROS
$ roslaunch ros_deep_learning video_viewer.ros1.launch input:=csi://0 output:=display://0

# ROS2
$ ros2 launch ros_deep_learning video_viewer.ros2.launch input:=csi://0 output:=display://0
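Input and output streams are specified as URIs (for example csi://0 for the first MIPI CSI camera, or display://0 for an OpenGL window). As a rough illustration of how such URIs decompose, the sketch below uses standard URL parsing; the scheme set is assumed from the interface list above and is not an authoritative or exhaustive list:

```python
from urllib.parse import urlparse

# Illustrative only: scheme names are examples drawn from the supported
# interfaces listed above, not an exhaustive set.
KNOWN_SCHEMES = {"csi", "v4l2", "rtp", "rtsp", "webrtc", "file", "display"}

def describe_stream(uri):
    """Split a stream URI into (scheme, resource) and flag unknown schemes."""
    parsed = urlparse(uri)
    scheme = parsed.scheme or "file"      # bare paths act like file paths
    resource = parsed.netloc or parsed.path
    return scheme, resource, scheme in KNOWN_SCHEMES

print(describe_stream("csi://0"))  # ('csi', '0', True)
print(describe_stream("rtsp://cam:8554/live"))
```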

imagenet Node

You can launch a classification demo with the following commands, substituting your desired camera or video path for the input argument below (see here for valid input/output streams).

Note that the imagenet node also publishes classification metadata on the imagenet/classification topic in a vision_msgs/Detection2DArray message -- see the Topics & Parameters section below for more info.

# ROS
$ roslaunch ros_deep_learning imagenet.ros1.launch input:=csi://0 output:=display://0

# ROS2
$ ros2 launch ros_deep_learning imagenet.ros2.launch input:=csi://0 output:=display://0
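The class_labels_<HASH> parameter name announced on the vision_info topic lets a subscriber map published class IDs back to label strings. Below is a ROS-free sketch of that lookup; the parameter dict and the HASH value are made up for illustration, with `params` standing in for the node's real parameter interface:

```python
# Hypothetical sketch: `params` simulates the ROS parameter interface, and
# the parameter name (class_labels_<HASH>) is assumed to have been read
# from the vision_info topic at runtime.
def lookup_label(params, labels_param_name, class_id):
    """Return the label for class_id, falling back to a numeric name."""
    labels = params.get(labels_param_name, [])
    if 0 <= class_id < len(labels):
        return labels[class_id]
    return f"class_{class_id}"

# Simulated parameters for a model whose HASH happens to be 1234:
params = {"class_labels_1234": ["background", "person", "car"]}
print(lookup_label(params, "class_labels_1234", 2))  # car
```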

detectnet Node

To launch an object detection demo, substitute your desired camera or video path for the input argument below (see here for valid input/output streams). Note that the detectnet node also publishes the metadata in a vision_msgs/Detection2DArray message -- see the Topics & Parameters section below for more info.

# ROS
$ roslaunch ros_deep_learning detectnet.ros1.launch input:=csi://0 output:=display://0

# ROS2
$ ros2 launch ros_deep_learning detectnet.ros2.launch input:=csi://0 output:=display://0
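The bounding boxes in vision_msgs/Detection2DArray are expressed as a center point plus a width and height. A small ROS-independent helper (a sketch, not part of the package) converts them to corner coordinates, which are often more convenient for cropping or drawing:

```python
def bbox_to_corners(center_x, center_y, size_x, size_y):
    """Convert a vision_msgs BoundingBox2D (center + size) to
    (xmin, ymin, xmax, ymax) pixel corners."""
    half_w, half_h = size_x / 2.0, size_y / 2.0
    return (center_x - half_w, center_y - half_h,
            center_x + half_w, center_y + half_h)

print(bbox_to_corners(100.0, 80.0, 40.0, 20.0))  # (80.0, 70.0, 120.0, 90.0)
```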

segnet Node

To launch a semantic segmentation demo, substitute your desired camera or video path for the input argument below (see here for valid input/output streams). Note that the segnet node also publishes raw segmentation results to the segnet/class_mask topic -- see the Topics & Parameters section below for more info.

# ROS
$ roslaunch ros_deep_learning segnet.ros1.launch input:=csi://0 output:=display://0

# ROS2
$ ros2 launch ros_deep_learning segnet.ros2.launch input:=csi://0 output:=display://0

Topics & Parameters

Below are the message topics and parameters that each node implements.

imagenet Node

Topic Name I/O Message Type Description
image_in Input sensor_msgs/Image Raw input image
classification Output vision_msgs/Classification2D Classification results (class ID + confidence)
vision_info Output vision_msgs/VisionInfo Vision metadata (class labels parameter list name)
overlay Output sensor_msgs/Image Input image overlaid with the classification results
Parameter Name Type Default Description
model_name string "googlenet" Built-in model name (see here for valid values)
model_path string "" Path to custom caffe or ONNX model
prototxt_path string "" Path to custom caffe prototxt file
input_blob string "data" Name of DNN input layer
output_blob string "prob" Name of DNN output layer
class_labels_path string "" Path to custom class labels file
class_labels_HASH vector<string> class names List of class labels, where HASH is model-specific (actual name of parameter is found via the vision_info topic)

detectnet Node

Topic Name I/O Message Type Description
image_in Input sensor_msgs/Image Raw input image
detections Output vision_msgs/Detection2DArray Detection results (bounding boxes, class IDs, confidences)
vision_info Output vision_msgs/VisionInfo Vision metadata (class labels parameter list name)
overlay Output sensor_msgs/Image Input image overlaid with the detection results
Parameter Name Type Default Description
model_name string "ssd-mobilenet-v2" Built-in model name (see here for valid values)
model_path string "" Path to custom caffe or ONNX model
prototxt_path string "" Path to custom caffe prototxt file
input_blob string "data" Name of DNN input layer
output_cvg string "coverage" Name of DNN output layer (coverage/scores)
output_bbox string "bboxes" Name of DNN output layer (bounding boxes)
class_labels_path string "" Path to custom class labels file
class_labels_HASH vector<string> class names List of class labels, where HASH is model-specific (actual name of parameter is found via the vision_info topic)
overlay_flags string "box,labels,conf" Flags used to generate the overlay (some combination of none,box,labels,conf)
mean_pixel_value float 0.0 Mean pixel subtraction value to be applied to input (normally 0)
threshold float 0.5 Minimum confidence value for positive detections (0.0 - 1.0)
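The threshold parameter discards detections below the given confidence before they are published. If you need a stricter cutoff downstream, the same filtering is easy to reproduce; here is a minimal sketch using plain dicts in place of vision_msgs objects:

```python
def filter_detections(detections, threshold=0.5):
    """Keep only detections whose confidence meets the threshold
    (0.0 - 1.0), mirroring what the node's `threshold` parameter does."""
    return [d for d in detections if d["confidence"] >= threshold]

dets = [{"label": "person", "confidence": 0.9},
        {"label": "car", "confidence": 0.3}]
print(filter_detections(dets, 0.5))  # keeps only the 'person' detection
```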

segnet Node

Topic Name I/O Message Type Description
image_in Input sensor_msgs/Image Raw input image
vision_info Output vision_msgs/VisionInfo Vision metadata (class labels parameter list name)
overlay Output sensor_msgs/Image Input image overlaid with the classification results
color_mask Output sensor_msgs/Image Colorized segmentation class mask out
class_mask Output sensor_msgs/Image 8-bit single-channel image where each pixel is a classID
Parameter Name Type Default Description
model_name string "fcn-resnet18-cityscapes-1024x512" Built-in model name (see here for valid values)
model_path string "" Path to custom caffe or ONNX model
prototxt_path string "" Path to custom caffe prototxt file
input_blob string "data" Name of DNN input layer
output_blob string "score_fr_21classes" Name of DNN output layer
class_colors_path string "" Path to custom class colors file
class_labels_path string "" Path to custom class labels file
class_labels_HASH vector<string> class names List of class labels, where HASH is model-specific (actual name of parameter is found via the vision_info topic)
mask_filter string "linear" Filtering to apply to color_mask topic (linear or point)
overlay_filter string "linear" Filtering to apply to overlay topic (linear or point)
overlay_alpha float 180.0 Alpha blending value used by overlay topic (0.0 - 255.0)
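Since the class_mask topic carries an 8-bit single-channel image with one class ID per pixel, a subscriber can tally class frequencies directly from the raw image data. A ROS-free sketch (the toy mask and label list below are invented for illustration):

```python
from collections import Counter

def class_histogram(mask_bytes, labels):
    """Tally how many pixels of each class appear in a segnet class_mask
    image (8-bit, single channel, one class ID per pixel)."""
    counts = Counter(mask_bytes)
    return {labels[cid] if cid < len(labels) else f"class_{cid}": n
            for cid, n in counts.items()}

mask = bytes([0, 0, 1, 2, 1, 1])  # toy 2x3 mask
print(class_histogram(mask, ["void", "road", "car"]))
```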

video_source Node

Topic Name I/O Message Type Description
raw Output sensor_msgs/Image Raw output image (BGR8)
Parameter Type Default Description
resource string "csi://0" Input stream URI (see here for valid protocols)
codec string "" Manually specify codec for compressed streams (see here for valid values)
width int 0 Manually specify desired width of stream (0 = stream default)
height int 0 Manually specify desired height of stream (0 = stream default)
framerate int 0 Manually specify desired framerate of stream (0 = stream default)
loop int 0 For video files: 0 = don't loop, >0 = # of loops, -1 = loop forever
flip string "" Set the flip method for MIPI CSI cameras (see here for valid values)

video_output Node

Topic Name I/O Message Type Description
image_in Input sensor_msgs/Image Raw input image
Parameter Type Default Description
resource string "display://0" Output stream URI (see here for valid protocols)
codec string "h264" Codec used for compressed streams (see here for valid values)
bitrate int 4000000 Target VBR bitrate of encoded streams (in bits per second)

ros_deep_learning's People

Contributors

dusty-nv, jodusan, rbeethe, rmeertens, zepplu

ros_deep_learning's Issues

multiple cameras

Hi, I wanted to know how to launch the detectnet node for multiple camera images at the same time. I have 4 camera image topics, but how can I run the object detection node on all the images at the same time? I am a beginner to this, can you please help me?

Thanks in advance

vision_msgs/Detection2DArray.h can not be found

When I build it on TX2, I got the following errors:

/home/nvidia/racecar-ws/src/ros_deep_learning/src/node_segnet.cpp:27:36: fatal error: vision_msgs/VisionInfo.h: No such file or directory
compilation terminated.
/home/nvidia/racecar-ws/src/ros_deep_learning/src/node_detectnet.cpp:26:42: fatal error: vision_msgs/Detection2DArray.h: No such file or directory

How can I fix it?

image_transport

//image_transport::ImageTransport it(nh); // BUG - stack smashing on TX2?

Does it mean that there are problems with image_transport on Jetson? Are they resolved by now? I see image_transport is used in nodelet_imagenet.cpp

Regards,

ImageTransport in detectnet node

We're using an Nvidia Xavier as a client and a PC acting as a ROS master. The PC has two Realsense d435i cameras connected, and the Xavier is running the detectnet node and subscribing to the image streams coming from the PC.

The Realsense cameras are publishing the image streams via the ImageTransport package, but I can't really see that this is done in ros_compat.h. The way it subscribes to the image streams at the moment is by using ros::NodeHandle.

I will try to modify the code so that I will be using the ImageTransport package instead.

Can't read from topic from vision_msgs.msg Import Detection2DArray

Hey guys,
So I've made a package that has a python script which subscribes to the detectnet topic detections, but when I try to read the msg i get the error:

Traceback (most recent call last):
  File "/home/lars/catkin_ws/src/internship_lars/scripts/listener.py", line 5, in <module>
    from vision_msgs.msg import Detection2DArray

The code that matters:

from vision_msgs.msg import Detection2DArray

rospy.Subscriber('/detectnet/detections', Detection2DArray, callback2) 

Hope anyone can see what I'm obviously doing wrong; something with the XML or CMakeLists?

CMakeLists.txt

cmake_minimum_required(VERSION 3.0.2)
project(internship_lars)

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  vision_msgs
)

generate_messages(
  DEPENDENCIES
  std_msgs
  vision_msgs
)


catkin_package(
   CATKIN_DEPENDS vision_msgs
)


include_directories(
  ${catkin_INCLUDE_DIRS}
)


catkin_install_python(PROGRAMS scripts/talker.py scripts/listener.py scripts/dataSubscribe.py
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)


Package.xml:

<?xml version="1.0"?>
<package format="2">
  <name>internship_lars</name>
  <version>0.0.0</version>
  <description>The internship_lars package</description>


  <license>TODO</license>

  <build_depend>message_generation</build_depend>

  <buildtool_depend>catkin</buildtool_depend>
  <build_depend>roscpp</build_depend>
  <build_depend>rospy</build_depend>
  <build_depend>std_msgs</build_depend>
  <build_depend>vision_msgs</build_depend>
  <build_export_depend>roscpp</build_export_depend>
  <build_export_depend>rospy</build_export_depend>
  <build_export_depend>std_msgs</build_export_depend>
  <exec_depend>roscpp</exec_depend>
  <exec_depend>rospy</exec_depend>
  <exec_depend>std_msgs</exec_depend>
  <exec_depend>vision_msgs</exec_depend>

  <export>


  </export>
</package>

load custom onnx model, failed to convert bgr8 to rgb8

Getting the following issue when running a custom yolov4 model with detectnet node.

Running on Jetson-Tx2

[TRT] engine.cpp (986) - Cuda Error in executeInternal: 719 (unspecified launch failure)
[TRT] FAILED_EXECUTION: std::exception
[TRT] failed to execute TensorRT context on device GPU
[ERROR] [1600442843.225676073]: failed to run object detection on 640x360 image
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/Documents/jetson-inference/utils/cuda/cudaRGB.cu:60
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/Documents/jetson-inference/utils/cuda/cudaColorspace.cpp:225
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/catkin_ws/src/ros_deep_learning/src/image_converter.cpp:141
[ERROR] [1600442843.226765144]: failed to convert 640x360 image (from bgr8 to rgb8) with CUDA
[ INFO] [1600442843.226885850]: failed to convert 640x360 bgr8 image
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/Documents/jetson-inference/utils/cuda/cudaRGB.cu:60
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/Documents/jetson-inference/utils/cuda/cudaColorspace.cpp:225
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/catkin_ws/src/ros_deep_learning/src/image_converter.cpp:141
[ERROR] [1600442843.227861639]: failed to convert 640x360 image (from bgr8 to rgb8) with CUDA

Any idea how to solve this issue? Thanks!

how to use ros_deep_learning package in ros2 (dashing)

It says that it can be used in ROS Kinetic or Melodic, but is it not supported in the ROS2 Dashing version?

I also wonder whether there is a way to use the jetson-inference project in ROS2 Dashing.

If anyone who has tried this in ros2 Dashing version or knows how to do it, Please help !

ROS Melodic for TX2

The L4T system in the newest JetPack 4.2.2 is derived from Ubuntu 18.04, so we should install ROS Melodic on any Jetson device, am I right?

Regards,

VideoViewer ROS2 Command returning error (running ROS2 Dashing)

When I run the following command for the VideoViewer demo, I get a "malformed launch argument" error:

~/ros2_workspace/src$ ros2 launch ros_deep_learning imagenet.ros2.launch input:=csi://0 output:=display://0
malformed launch argument 'imagenet.ros2.launch', expected format '<name>:=<value>'

I've validated that my cameras are indeed working, and have even tried similar commands with the imagenet and detectnet nodes; they both return the same error. Looking for help on what that may be. I am new to ROS and ROS2, but I do understand the basics, and after checking the contents of the launch file I believe these are the right parameter values for the command. However, because I am getting an error, I know I am missing something. Thank you.

In detections message: source_img is empty

Hi,

I currently try to evaluate the performance of my detection node with a recorded rosbag I added ground truth to for every image in the sequence.

Unfortunately the source_img information is not provided in the detections message:

 ...
 bbox:
      center:
        x: 132.41796875
        y: 419.493591309
        theta: 0.0
      size_x: 133.533309937
      size_y: 229.306762695
    source_img:
      header:
        seq: 0
        stamp:
          secs: 0
          nsecs: 0
        frame_id: ''
      height: 0
      width: 0
      encoding: ''
      is_bigendian: 0
      step: 0
      data: []

Is there a possibility to determine what exact image a certain detection is derived from?

failed at building ros_deep_learning

Hi everyone.

I'm stuck at building code of ros_deep_learning, just after "catkin_make" I got:

nvidia@tegra-ubuntu:~/catkin_ws$ catkin_make
Base path: /home/nvidia/catkin_ws
Source space: /home/nvidia/catkin_ws/src
Build space: /home/nvidia/catkin_ws/build
Devel space: /home/nvidia/catkin_ws/devel
Install space: /home/nvidia/catkin_ws/install
####
#### Running command: "cmake /home/nvidia/catkin_ws/src -DCATKIN_DEVEL_PREFIX=/home/nvidia/catkin_ws/devel -DCMAKE_INSTALL_PREFIX=/home/nvidia/catkin_ws/install -G Unix Makefiles" in "/home/nvidia/catkin_ws/build"
####
-- Using CATKIN_DEVEL_PREFIX: /home/nvidia/catkin_ws/devel
-- Using CMAKE_PREFIX_PATH: /home/nvidia/catkin_ws/devel;/opt/ros/kinetic
-- This workspace overlays: /home/nvidia/catkin_ws/devel;/opt/ros/kinetic
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/nvidia/catkin_ws/build/test_results
-- Found gmock sources under '/usr/src/gmock': gmock will be built
-- Found gtest sources under '/usr/src/gmock': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.7.14
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~  traversing 1 packages in topological order:
-- ~~  - ros_deep_learning
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'ros_deep_learning'
-- ==> add_subdirectory(ros_deep_learning)
-- Found CUDA: /usr/local/cuda-9.0 (found version "9.0") 
-- Looking for Q_WS_X11
-- Looking for Q_WS_X11 - found
-- Looking for Q_WS_WIN
-- Looking for Q_WS_WIN - not found
-- Looking for Q_WS_QWS
-- Looking for Q_WS_QWS - not found
-- Looking for Q_WS_MAC
-- Looking for Q_WS_MAC - not found
-- Found Qt4: /usr/bin/qmake (found version "4.8.7") 
-- Configuring done
-- Generating done
-- Build files have been written to: /home/nvidia/catkin_ws/build
####
#### Running command: "make -j4 -l4" in "/home/nvidia/catkin_ws/build"
####
Scanning dependencies of target imagenet
Scanning dependencies of target ros_deep_learning_nodelets
Scanning dependencies of target detectnet
[ 11%] Building CXX object ros_deep_learning/CMakeFiles/detectnet.dir/src/node_detectnet.cpp.o
[ 22%] Building CXX object ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/nodelet_imagenet.cpp.o
[ 33%] Building CXX object ros_deep_learning/CMakeFiles/imagenet.dir/src/node_imagenet.cpp.o
[ 44%] Building CXX object ros_deep_learning/CMakeFiles/detectnet.dir/src/image_converter.cpp.o
[ 55%] Building CXX object ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/image_converter.cpp.o
[ 66%] Building CXX object ros_deep_learning/CMakeFiles/imagenet.dir/src/image_converter.cpp.o
[ 77%] Linking CXX executable /home/nvidia/catkin_ws/devel/lib/ros_deep_learning/detectnet
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmGetDevices2'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmCloseOnce'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmMap'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmUnmap'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmFreeDevice'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmGetDeviceNameFromFd2'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmOpenOnce'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmGetDevice2'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmFreeDevices'
collect2: error: ld returned 1 exit status
ros_deep_learning/CMakeFiles/detectnet.dir/build.make:156: recipe for target '/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/detectnet' failed
make[2]: *** [/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/detectnet] Error 1
CMakeFiles/Makefile2:483: recipe for target 'ros_deep_learning/CMakeFiles/detectnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/detectnet.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 88%] Linking CXX shared library /home/nvidia/catkin_ws/devel/lib/libros_deep_learning_nodelets.so
[ 88%] Built target ros_deep_learning_nodelets
[100%] Linking CXX executable /home/nvidia/catkin_ws/devel/lib/ros_deep_learning/imagenet
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmGetDevices2'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmCloseOnce'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmMap'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmUnmap'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmFreeDevice'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmGetDeviceNameFromFd2'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmOpenOnce'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmGetDevice2'
/usr/lib/gcc/aarch64-linux-gnu/5/../../../aarch64-linux-gnu/libGL.so: undefined reference to `drmFreeDevices'
collect2: error: ld returned 1 exit status
ros_deep_learning/CMakeFiles/imagenet.dir/build.make:156: recipe for target '/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/imagenet' failed
make[2]: *** [/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/imagenet] Error 1
CMakeFiles/Makefile2:808: recipe for target 'ros_deep_learning/CMakeFiles/imagenet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/imagenet.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed

Running a Retrained Model with ros node

Hi all,

Running the latest Jetpack on an Xavier NX. I had used https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-collect-detection.md to train a custom dataset. That seems OK.

If I run:
NET=~/jetson-inference/python/pytorch-ssd/test

detectnet --model=$NET/ssd-mobilenet.onnx --labels=$NET/labels.txt
--input-blob=input_0 --output-cvg=scores --output-bbox=boxes
csi://0

This works properly.

I'm now trying to use this with ros deep learning. If I try a command like this:

ros2 launch ros_deep_learning detectnet.ros2.launch model_path:=/home/magneto/jetson-inference/python/pytorch-ssd/test/ssd-mobilenet.onnx class_labels_path:=home/magneto/jetson-inference/python/pytorch-ssd/test/labels.txt input:=csi://0 output:=display://0

I get errors as seen here:

[detectnet-2] [TRT] INVALID_ARGUMENT: Cannot find binding of given name:
[detectnet-2] [ERROR] [detectnet]: failed to load detectNet model
[detectnet-2] [TRT] failed to find requested input layer in network
[detectnet-2] [TRT] device GPU, failed to create resources for CUDA engine
[detectnet-2] [TRT] failed to create TensorRT engine for /home/magneto/jetson-inference/python/pytorch-ssd/test/ssd-mobilenet.onnx, device GPU
[detectnet-2] [TRT] detectNet -- failed to initialize.
[INFO] [detectnet-2]: process has finished cleanly [pid 30902]
[detectnet-2]

I am clearly missing something. Any help is appreciated.

Package cannot find jetson-inference

I cloned and built the jetson-inference package on the Jetson TX1 and it's working great (/home/ubuntu/jetson-inference).

But when I try to compile ros_deep_learning, it gives me an error that it cannot find the jetson-inference package. Do I have to set some paths in the CMakeLists or elsewhere?

Thanks for this awesome DeepLearning demo!

failed to convert 1280x720 bgra8 image

I am using a ZED Mini on a Jetson TX2; while trying to use the image topic /zed/rgb/image_rect_color I am getting the error:

failed to convert 1280x720 bgra8 image

I have tried to subscribe the ros_deep_learning node to other zed camera image topics and same error occurs.

[cuda] cudaAllocMapped 131072 bytes, CPU 0x102540000 GPU 0x102540000
[cuda] cudaAllocMapped 32768 bytes, CPU 0x102350000 GPU 0x102350000
[ INFO] [1567098659.557982573]: model hash => 965427319687731864
[ INFO] [1567098659.558040077]: hash string => /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel/usr/local/bin/networks/ped-100/class_labels.txt
[ INFO] [1567098659.560447145]: node namespace => /detectnet
[ INFO] [1567098659.560522857]: class labels => /detectnet/class_labels_965427319687731864
[ INFO] [1567098659.571631642]: detectnet node initialized, waiting for messages
[ INFO] [1567098659.736026296]: converting 1280x720 bgra8 image
[ERROR] [1567098659.736376120]: 1280x720 image is in bgra8 format, expected bgr8
[ INFO] [1567098659.736437304]: failed to convert 1280x720 bgra8 image
[ INFO] [1567098659.768120524]: converting 1280x720 bgra8 image

catkin error: 'Dims3' in namespace 'nvinfer1' does not name a type

Thank you for providing the amazing tutorial for getting up to speed with the Jetson Nano and PyTorch. I'm running jetson-inference in a Docker container. I was successfully able to complete the pytorch-ssd tutorial. Following is my Dockerfile, which is based on the dustynv/jetson-inference:r32.4.4 image:

FROM dustynv/jetson-inference:r32.4.4

ENV DEBIAN_FRONTEND=noninteractive
ENV SHELL /bin/bash

RUN apt update && apt install -y \
	lsb-release \
	gnupg2
RUN apt-get clean all

RUN sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

RUN apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

RUN apt update

RUN apt install -y ros-melodic-ros-base

RUN apt install -y ros-melodic-image-transport \
	ros-melodic-vision-msgs

RUN pip install defusedxml \
	rospkg \
	netifaces
WORKDIR /catkin_ws
CMD /bin/bash -c "bash"

When I try to run catkin_make on my workspace, I get mainly the following errors:

error: 'Dims3' in namespace 'nvinfer1' does not name a type
 typedef nvinfer1::Dims3 Dims3;

/usr/local/include/jetson-inference/tensorNet.h:320:17: error: 'nvinfer1::IPluginFactory' has not been declared

/usr/local/include/jetson-inference/tensorNet.h:331:29: error: 'nvinfer1::ICudaEngine' has not been declared

/usr/local/include/jetson-inference/tensorNet.h:568:35: error: 'nvinfer1::IBuilder' has not been declared

/usr/local/include/jetson-inference/tensorNet.h:581:20: error: 'Severity' has not been declared
/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp:231:23: error: 'SSD_MOBILENET_V2' is not a member of 'detectNet'
    model = detectNet::SSD_MOBILENET_V2;

I saw a similar issue, but it was not clear what changes I should make.
Any help would greatly be appreciated. Thank you.

Failed at catkin_make detectnet

Hi,
I've gotten most of the way through building ros deep learning.
Using latest Nano image (r32.2), Ubuntu 18.04, CUDA 10.0, ROS Melodic.
Getting the following...

nano@nano:~/catkin_ws$ catkin_make
Base path: /home/nano/catkin_ws
Source space: /home/nano/catkin_ws/src
Build space: /home/nano/catkin_ws/build
Devel space: /home/nano/catkin_ws/devel
Install space: /home/nano/catkin_ws/install

Running command: "make cmake_check_build_system" in "/home/nano/catkin_ws/build"

Running command: "make -j4 -l4" in "/home/nano/catkin_ws/build"

[ 50%] Built target segnet
[ 50%] Built target imagenet
[ 58%] Building CXX object ros_deep_learning/CMakeFiles/detectnet.dir/src/node_detectnet.cpp.o
[ 83%] Built target ros_deep_learning_nodelets
/home/nano/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp: In function ‘void img_callback(const ImageConstPtr&)’:
/home/nano/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp:74:119: error: no matching function for call to ‘detectNet::Detect(float*, uint32_t, uint32_t, float*&, int*, float*&)’
const bool result = net->Detect(cvt->ImageGPU(), cvt->GetWidth(), cvt->GetHeight(), bbCPU, &numBoundingBoxes, confCPU);
^
In file included from /home/nano/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp:29:0:
/usr/local/include/jetson-inference/detectNet.h:271:6: note: candidate: int detectNet::Detect(float*, uint32_t, uint32_t, detectNet::Detection**, uint32_t)
int Detect( float* input, uint32_t width, uint32_t height, Detection** detections, uint32_t overlay=OVERLAY_BOX );
^~~~~~
/usr/local/include/jetson-inference/detectNet.h:271:6: note: candidate expects 5 arguments, 6 provided
/usr/local/include/jetson-inference/detectNet.h:283:6: note: candidate: int detectNet::Detect(float*, uint32_t, uint32_t, detectNet::Detection*, uint32_t)
int Detect( float* input, uint32_t width, uint32_t height, Detection* detections, uint32_t overlay=OVERLAY_BOX );
^~~~~~
/usr/local/include/jetson-inference/detectNet.h:283:6: note: candidate expects 5 arguments, 6 provided
/home/nano/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp:77:7: error: in argument to unary !
if( !result )
^~~~~~
/home/nano/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp: In function ‘int main(int, char**)’:
/home/nano/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp:205:20: error: ‘class detectNet’ has no member named ‘GetMaxBoundingBoxes’
maxBoxes = net->GetMaxBoundingBoxes();
^~~~~~~~~~~~~~~~~~~
ros_deep_learning/CMakeFiles/detectnet.dir/build.make:62: recipe for target 'ros_deep_learning/CMakeFiles/detectnet.dir/src/node_detectnet.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/detectnet.dir/src/node_detectnet.cpp.o] Error 1
CMakeFiles/Makefile2:1436: recipe for target 'ros_deep_learning/CMakeFiles/detectnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/detectnet.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed

Any thoughts Dusty? I've searched for answers and tried installing several times. Keep getting stuck at this point.

Thanks,
Ben
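This class of error typically means an older ros_deep_learning checkout is being built against a newer jetson-inference, whose detectNet API replaced the bbCPU/confCPU arrays with a Detection array; pulling the latest ros_deep_learning is the usual fix. For reference only, a sketch of what the newer call looks like against the post-2019 jetson-inference headers (not a drop-in patch for this file):

```cpp
// sketch -- requires jetson-inference's detectNet.h; names follow its newer API
detectNet::Detection* detections = NULL;

const int numDetections = net->Detect(cvt->ImageGPU(), cvt->GetWidth(), cvt->GetHeight(),
                                      &detections, detectNet::OVERLAY_BOX);

if( numDetections < 0 )    // Detect() now returns a detection count, not a bool
    ROS_ERROR("detectNet::Detect() failed");

for( int n = 0; n < numDetections; n++ )
    ROS_INFO("object %i  class #%u  confidence=%f",
             n, detections[n].ClassID, detections[n].Confidence);
```

GetMaxBoundingBoxes() was likewise removed in that API change; the detection count returned by Detect() replaces it.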

Failed to load nodelet /gst_camera of type [ros_jetson_video/gst_camera]

Hi there @dusty-nv, in the launch file you include a node which I assume takes care of the gst_camera. Do you plan to release such a node?
Otherwise there is this error:

[ERROR] [1506071741.153780987]: Failed to load nodelet [/gst_camera] of type [ros_jetson_video/gst_camera] even after refreshing the cache: According to the loaded plugin descriptions the class ros_jetson_video/gst_camera with base class type nodelet::Nodelet does not exist. Declared types are  ros_deep_learning/ros_imagenet
[ERROR] [1506071741.153964994]: The error before refreshing the cache was: According to the loaded plugin descriptions the class ros_jetson_video/gst_camera with base class type nodelet::Nodelet does not exist. Declared types are  ros_deep_learning/ros_imagenet
[FATAL] [1506071741.154335455]: Failed to load nodelet '/gst_camera` of type `ros_jetson_video/gst_camera` to manager `standalone_nodelet'
[gst_camera-4] process has died [pid 32494, exit code 255, cmd /opt/ros/kinetic/lib/nodelet/nodelet load ros_jetson_video/gst_camera standalone_nodelet ~image_raw:=/image_raw __name:=gst_camera __log:=/home/ubuntu/.ros/log/92f488dc-9f76-11e7-b5c5-00044b633dff/gst_camera-4.log].
log file: /home/ubuntu/.ros/log/92f488dc-9f76-11e7-b5c5-00044b633dff/gst_camera-4*.log

Could it be possible to accept an image topic as input?

I couldn't find how to do it. It would be great if you could make the node accept an image topic as input; in some setups this would save needing an additional camera dedicated to jetson-inference. For example, with a RealSense it would be possible to use the IR stream for inference in the dark, or to use the RGB stream for inference instead of for RGB point clouds.
It would also be useful to accept a depth stream as input, so you could republish it and apply AI models over it.

Thanks in any case for this amazing repo.

jetson-utils/imageFormat.h: No such file or directory

Encountered this error when building using catkin_make. Using ROS Melodic.

#### Running command: "make cmake_check_build_system" in "/home/pred/catkin_ws/build"

Running command: "make -j4 -l4" in "/home/pred/catkin_ws/build"

[ 8%] Building CXX object ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/nodelet_imagenet.cpp.o
[ 17%] Building CXX object ros_deep_learning/CMakeFiles/video_output.dir/src/node_video_output.cpp.o
[ 17%] Building CXX object ros_deep_learning/CMakeFiles/segnet.dir/src/node_segnet.cpp.o
[ 17%] Building CXX object ros_deep_learning/CMakeFiles/imagenet.dir/src/node_imagenet.cpp.o
In file included from /home/pred/catkin_ws/src/ros_deep_learning/src/node_segnet.cpp:24:0:
/home/pred/catkin_ws/src/ros_deep_learning/src/image_converter.h:27:10: fatal error: jetson-utils/imageFormat.h: No such file or directory
#include <jetson-utils/imageFormat.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
ros_deep_learning/CMakeFiles/segnet.dir/build.make:62: recipe for target 'ros_deep_learning/CMakeFiles/segnet.dir/src/node_segnet.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/segnet.dir/src/node_segnet.cpp.o] Error 1
CMakeFiles/Makefile2:535: recipe for target 'ros_deep_learning/CMakeFiles/segnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/segnet.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
In file included from /home/pred/catkin_ws/src/ros_deep_learning/src/nodelet_imagenet.cpp:8:0:
/home/pred/catkin_ws/src/ros_deep_learning/src/image_converter.h:27:10: fatal error: jetson-utils/imageFormat.h: No such file or directory
#include <jetson-utils/imageFormat.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/build.make:62: recipe for target 'ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/nodelet_imagenet.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/nodelet_imagenet.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 21%] Building CXX object ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/image_converter.cpp.o
In file included from /home/pred/catkin_ws/src/ros_deep_learning/src/node_imagenet.cpp:24:0:
/home/pred/catkin_ws/src/ros_deep_learning/src/image_converter.h:27:10: fatal error: jetson-utils/imageFormat.h: No such file or directory
#include <jetson-utils/imageFormat.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
ros_deep_learning/CMakeFiles/imagenet.dir/build.make:62: recipe for target 'ros_deep_learning/CMakeFiles/imagenet.dir/src/node_imagenet.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/imagenet.dir/src/node_imagenet.cpp.o] Error 1
CMakeFiles/Makefile2:1542: recipe for target 'ros_deep_learning/CMakeFiles/imagenet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/imagenet.dir/all] Error 2
[ 26%] Building CXX object ros_deep_learning/CMakeFiles/video_output.dir/src/image_converter.cpp.o
In file included from /home/pred/catkin_ws/src/ros_deep_learning/src/node_video_output.cpp:24:0:
/home/pred/catkin_ws/src/ros_deep_learning/src/image_converter.h:27:10: fatal error: jetson-utils/imageFormat.h: No such file or directory
#include <jetson-utils/imageFormat.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
ros_deep_learning/CMakeFiles/video_output.dir/build.make:62: recipe for target 'ros_deep_learning/CMakeFiles/video_output.dir/src/node_video_output.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/video_output.dir/src/node_video_output.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 30%] Building CXX object ros_deep_learning/CMakeFiles/video_output.dir/src/ros_compat.cpp.o
In file included from /home/pred/catkin_ws/src/ros_deep_learning/src/image_converter.cpp:23:0:
/home/pred/catkin_ws/src/ros_deep_learning/src/image_converter.h:27:10: fatal error: jetson-utils/imageFormat.h: No such file or directory
#include <jetson-utils/imageFormat.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/build.make:86: recipe for target 'ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/image_converter.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/src/image_converter.cpp.o] Error 1
CMakeFiles/Makefile2:1148: recipe for target 'ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/ros_deep_learning_nodelets.dir/all] Error 2
In file included from /home/pred/catkin_ws/src/ros_deep_learning/src/image_converter.cpp:23:0:
/home/pred/catkin_ws/src/ros_deep_learning/src/image_converter.h:27:10: fatal error: jetson-utils/imageFormat.h: No such file or directory
#include <jetson-utils/imageFormat.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
ros_deep_learning/CMakeFiles/video_output.dir/build.make:86: recipe for target 'ros_deep_learning/CMakeFiles/video_output.dir/src/image_converter.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/video_output.dir/src/image_converter.cpp.o] Error 1
CMakeFiles/Makefile2:1409: recipe for target 'ros_deep_learning/CMakeFiles/video_output.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/video_output.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed

Jetson-nano

Would this work with the Jetson Nano? JetPack and jetson-inference support the Nano. If I install them and then try to build the ros_deep_learning package on the Jetson Nano, would it work?

Failed to load nodelet

nvidia@tegra-ubuntu:~/catkin_ws$ roslaunch ros_deep_learning imagenet.launch
WARNING: Catkin package name "jetson-inference" does not follow the naming conventions. It should start with a lower case letter and only contain lower case letters, digits, and underscores.
... logging to /home/nvidia/.ros/log/29f3cc70-8352-11e7-80d1-00044b8ca462/roslaunch-tegra-ubuntu-1195.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

WARNING: Catkin package name "jetson-inference" does not follow the naming conventions. It should start with a lower case letter and only contain lower case letters, digits, and underscores.
started roslaunch server http://192.168.144.13:39466/

SUMMARY

PARAMETERS

  • /imagenet_node/class_labels_path: /home/nvidia/catk...
  • /imagenet_node/model_path: /home/nvidia/catk...
  • /imagenet_node/prototxt_path: /home/nvidia/catk...
  • /rosdistro: kinetic
  • /rosversion: 1.12.7

NODES
/
imagenet_node (nodelet/nodelet)
standalone_nodelet (nodelet/nodelet)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
WARNING: Catkin package name "jetson-inference" does not follow the naming conventions. It should start with a lower case letter and only contain lower case letters, digits, and underscores.
process[standalone_nodelet-1]: started with pid [1237]
process[imagenet_node-2]: started with pid [1238]
[ INFO] [1503041413.613694631]: Loading nodelet /imagenet_node of type ros_deep_learning/ros_imagenet to manager standalone_nodelet with the following remappings:
[ INFO] [1503041413.613828102]: /imagenet_node/imin -> /camera/rgb/image_rect_color
[ INFO] [1503041413.622413040]: waitForService: Service [/standalone_nodelet/load_nodelet] has not been advertised, waiting...
[ INFO] [1503041413.628775729]: Initializing nodelet with 6 worker threads.
[ INFO] [1503041413.644417973]: waitForService: Service [/standalone_nodelet/load_nodelet] is now available.
/opt/ros/kinetic/lib/nodelet/nodelet: symbol lookup error: /home/nvidia/catkin_ws/devel/lib//libros_deep_learning_nodelets.so: undefined symbol: _ZN8imageNet6CreateEPKcS1_S1_S1_S1_S1_j

> [FATAL] [1503041413.749550813]: Failed to load nodelet '/imagenet_node` of type `ros_deep_learning/ros_imagenet` to manager `standalone_nodelet'
> [standalone_nodelet-1] process has died [pid 1237, exit code 127, cmd /opt/ros/kinetic/lib/nodelet/nodelet manager __name:=standalone_nodelet __log:=/home/nvidia/.ros/log/29f3cc70-8352-11e7-80d1-00044b8ca462/standalone_nodelet-1.log].
> log file: /home/nvidia/.ros/log/29f3cc70-8352-11e7-80d1-00044b8ca462/standalone_nodelet-1*.log
> [imagenet_node-2] process has died [pid 1238, exit code 255, cmd /opt/ros/kinetic/lib/nodelet/nodelet load ros_deep_learning/ros_imagenet standalone_nodelet ~imin:=/camera/rgb/image_rect_color __name:=imagenet_node __log:=/home/nvidia/.ros/log/29f3cc70-8352-11e7-80d1-00044b8ca462/imagenet_node-2.log].
> log file: /home/nvidia/.ros/log/29f3cc70-8352-11e7-80d1-00044b8ca462/imagenet_node-2*.log

all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done

how to run custom detectnet model

I have a detectnet model trained to detect the object of my interest. I am able to use the default models like pednet, but I am not sure how to give the path of my deploy.prototxt file and model name.

Any suggestion is highly appreciated.
Thanks,
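For reference, the node takes the model as ROS parameters (the same model_path/prototxt_path/class_labels_path parameters that appear in the launch files elsewhere in this thread). A sketch of the invocation; the exact parameter names may differ between branches, so verify against node_detectnet.cpp in your checkout:

```shell
$ rosrun ros_deep_learning detectnet /detectnet/image_in:=/camera/image_raw \
    _model_path:=/path/to/your/snapshot.caffemodel \
    _prototxt_path:=/path/to/your/deploy.prototxt \
    _class_labels_path:=/path/to/your/class_labels.txt \
    _input_blob:=data _output_cvg:=coverage _output_bbox:=bboxes
```

The same parameters can be set in a copy of detectnet.launch instead of on the command line.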

image is in rgb8 format, expected bgr8

I am getting image_raw from a gscam-driven camera, with /camera/image_raw subscribed.
The detectnet node errors with the following:

[cuda] cudaAllocMapped 131072 bytes, CPU 0x101c30000 GPU 0x101c30000
[cuda] cudaAllocMapped 32768 bytes, CPU 0x101b40000 GPU 0x101b40000
[ INFO] [1570526393.448824707]: model hash => 965427319687731864
[ INFO] [1570526393.448906165]: hash string => /usr/local/bin/networks/ped-100/snapshot_iter_70800.caffemodel/usr/local/bin/networks/ped-100/class_labels.txt
[ INFO] [1570526393.452781894]: node namespace => /detectnet
[ INFO] [1570526393.452866842]: class labels => /detectnet/class_labels_965427319687731864
[ INFO] [1570526393.467713092]: detectnet node initialized, waiting for messages
[ INFO] [1570526393.722328769]: converting 1920x1080 rgb8 image
[ERROR] [1570526393.722481165]: 1920x1080 image is in rgb8 format, expected bgr8
[ INFO] [1570526393.722541269]: failed to convert 1920x1080 rgb8 image
[ INFO] [1570526393.922863144]: converting 1920x1080 rgb8 image
[ERROR] [1570526393.923110279]: 1920x1080 image is in rgb8 format, expected bgr8
[ INFO] [1570526393.923377988]: failed to convert 1920x1080 rgb8 image

How can I change the image encoding?
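The older nodes only accepted bgr8 input, so one workaround is a small relay node that republishes the topic with the channels swapped (newer branches of ros_deep_learning handle multiple encodings themselves). The relay's core is just reordering each pixel's bytes; a minimal sketch of that conversion, with the rospy/cv_bridge plumbing omitted:

```python
def rgb8_to_bgr8(data: bytes) -> bytes:
    """Swap the R and B bytes of a packed rgb8 image buffer (bgr8 -> rgb8 works too).

    `data` is the flat `data` field of a sensor_msgs/Image with step == 3*width;
    its length must be a multiple of 3.
    """
    out = bytearray(data)
    out[0::3] = data[2::3]   # B bytes move into the first channel slot
    out[2::3] = data[0::3]   # R bytes move into the third channel slot
    return bytes(out)

# one red pixel followed by one green pixel, rgb8-packed
rgb = bytes([255, 0, 0,  0, 255, 0])
print(list(rgb8_to_bgr8(rgb)))   # [0, 0, 255, 0, 255, 0]
```

In a real relay you would apply this to msg.data, set msg.encoding = 'bgr8', and republish; cv_bridge can perform the same conversion if OpenCV is available.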

Failed to load nodelet

Hi
I saw the other thread on this, and despite apparently having the latest GitHub version, I am still running into the same error. I am trying to run 'roslaunch ros_deep_learning imagenet.launch' from the command line. I am on a TX2 and the onboard camera is working. I am running JetPack 3.3.

[ERROR] [1546810726.555692500]: Failed to load nodelet [/gst_camera] of type [ros_jetson_video/gst_camera] even after refreshing the cache: According to the loaded plugin descriptions the class ros_jetson_video/gst_camera with base class type nodelet::Nodelet does not exist. Declared types are depth_image_proc/convert_metric depth_image_proc/crop_foremost depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyz_radial depth_image_proc/point_cloud_xyzi depth_image_proc/point_cloud_xyzi_radial depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_proc/crop_decimate image_proc/crop_nonZero image_proc/crop_non_zero image_proc/debayer image_proc/rectify image_proc/resize image_publisher/image_publisher image_rotate/image_rotate image_view/disparity image_view/image kinect2_bridge/kinect2_bridge_nodelet kobuki_safety_controller/SafetyControllerNodelet nodelet_tutorial_math/Plus pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/CropBox pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/RadiusOutlierRemoval pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SHOTEstimation pcl/SHOTEstimationOMP pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/VFHEstimation pcl/VoxelGrid ros_deep_learning/ros_imagenet rtabmap_ros/data_odom_sync rtabmap_ros/data_throttle rtabmap_ros/disparity_to_depth rtabmap_ros/icp_odometry rtabmap_ros/obstacles_detection rtabmap_ros/obstacles_detection_old rtabmap_ros/point_cloud_aggregator rtabmap_ros/point_cloud_xyz rtabmap_ros/point_cloud_xyzrgb 
rtabmap_ros/pointcloud_to_depthimage rtabmap_ros/rgbd_odometry rtabmap_ros/rgbd_sync rtabmap_ros/rgbdicp_odometry rtabmap_ros/rtabmap rtabmap_ros/stereo_odometry rtabmap_ros/stereo_sync rtabmap_ros/stereo_throttle rtabmap_ros/undistort_depth stereo_image_proc/disparity stereo_image_proc/point_cloud2 yocs_velocity_smoother/VelocitySmootherNodelet
[FATAL] [1546810726.556271854]: Failed to load nodelet '/gst_camera of type ros_jetson_video/gst_camera to manager standalone_nodelet'

[gst_camera-3] process has died [pid 3008, exit code 255, cmd /opt/ros/kinetic/lib/nodelet/nodelet load ros_jetson_video/gst_camera standalone_nodelet ~image_raw:=/image_raw __name:=gst_camera __log:=/home/nvidia/.ros/log/b7ad226c-11f1-11e9-ad09-00044b8d1fa7/gst_camera-3.log].

And from log file: /home/nvidia/.ros/log/b7ad226c-11f1-11e9-ad09-00044b8d1fa7/gst_camera-3*.log:

[ INFO] [1546810723.368886481]: Loading nodelet /gst_camera of type ros_jetson_video/gst_camera to manager standalone_nodelet with the following remappings:
[ INFO] [1546810723.369030799]: /gst_camera/image_raw -> /image_raw
[ INFO] [1546810723.383950585]: waitForService: Service [/standalone_nodelet/load_nodelet] has not been advertised, waiting...
[ INFO] [1546810723.415831639]: waitForService: Service [/standalone_nodelet/load_nodelet] is now available.

I needed to change the target folders in the config yaml to match my installation of jetson-inference which resolved some other errors, but I am unable to get past these. Any suggestions please? I can submit other logs if useful, they are rather sizeable and I did not want to flood the post.

Thank you!

EDIT: Also from master.log:

File "/opt/ros/kinetic/lib/python2.7/dist-packages/rosmaster/threadpool.py", line 218, in run
    result = cmd(*args)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rosmaster/master_api.py", line 210, in publisher_update_task
    ret = xmlrpcapi(api).publisherUpdate('/master', topic, pub_uris)
File "/usr/lib/python2.7/xmlrpclib.py", line 1243, in __call__
    return self.__send(self.__name, args)
File "/usr/lib/python2.7/xmlrpclib.py", line 1602, in __request
    verbose=self.__verbose
File "/usr/lib/python2.7/xmlrpclib.py", line 1283, in request
    return self.single_request(host, handler, request_body, verbose)
File "/usr/lib/python2.7/xmlrpclib.py", line 1316, in single_request
    return self.parse_response(response)
File "/usr/lib/python2.7/xmlrpclib.py", line 1493, in parse_response
    return u.close()
File "/usr/lib/python2.7/xmlrpclib.py", line 800, in close
    raise Fault(**self._stack[0])
Fault: <Fault -1: 'publisherUpdate: unknown method name'>

Can not build this node on Xaiver Jetpack 4.2

I am trying to use this node on Xaiver Jetpack 4.2, ROS Melodic. I first built and installed jetson-inference. Ran the example fine. Then downloaded this node, but got the following error when I built the workspace. Anyone knows why?

[ 90%] Built target lane_detector_generate_messages
/home/nvidia/racecar_ws/src/ros_deep_learning/src/node_detectnet.cpp: In function ‘void img_callback(const ImageConstPtr&)’:
/home/nvidia/racecar_ws/src/ros_deep_learning/src/node_detectnet.cpp:74:119: error: no matching function for call to ‘detectNet::Detect(float*, uint32_t, uint32_t, float*&, int*, float*&)’
const bool result = net->Detect(cvt->ImageGPU(), cvt->GetWidth(), cvt->GetHeight(), bbCPU, &numBoundingBoxes, confCPU);
In file included from /home/nvidia/racecar_ws/src/ros_deep_learning/src/node_detectnet.cpp:29:0:
/usr/local/include/jetson-inference/detectNet.h:309:6: note: candidate: int detectNet::Detect(float*, uint32_t, uint32_t, detectNet::Detection**, uint32_t)
int Detect( float* input, uint32_t width, uint32_t height, Detection** detections, uint32_t overlay=OVERLAY_BOX );
/usr/local/include/jetson-inference/detectNet.h:309:6: note: candidate expects 5 arguments, 6 provided
/usr/local/include/jetson-inference/detectNet.h:321:6: note: candidate: int detectNet::Detect(float*, uint32_t, uint32_t, detectNet::Detection*, uint32_t)
int Detect( float* input, uint32_t width, uint32_t height, Detection* detections, uint32_t overlay=OVERLAY_BOX );
/usr/local/include/jetson-inference/detectNet.h:321:6: note: candidate expects 5 arguments, 6 provided
/home/nvidia/racecar_ws/src/ros_deep_learning/src/node_detectnet.cpp:77:7: error: in argument to unary !
if( !result )
/home/nvidia/racecar_ws/src/ros_deep_learning/src/node_detectnet.cpp: In function ‘int main(int, char**)’:
/home/nvidia/racecar_ws/src/ros_deep_learning/src/node_detectnet.cpp:205:20: error: ‘class detectNet’ has no member named ‘GetMaxBoundingBoxes’
maxBoxes = net->GetMaxBoundingBoxes();
/home/nvidia/racecar_ws/src/ros_deep_learning/src/node_segnet.cpp: In function ‘int main(int, char**)’:
/home/nvidia/racecar_ws/src/ros_deep_learning/src/node_segnet.cpp:254:28: error: ‘class segNet’ has no member named ‘GetClassLabel’; did you mean ‘GetClassPath’?
const char* label = net->GetClassLabel(n);
ros_deep_learning/CMakeFiles/detectnet.dir/build.make:62: recipe for target 'ros_deep_learning/CMakeFiles/detectnet.dir/src/node_detectnet.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/detectnet.dir/src/node_detectnet.cpp.o] Error 1
CMakeFiles/Makefile2:1850: recipe for target 'ros_deep_learning/CMakeFiles/detectnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/detectnet.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
ros_deep_learning/CMakeFiles/segnet.dir/build.make:62: recipe for target 'ros_deep_learning/CMakeFiles/segnet.dir/src/node_segnet.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/segnet.dir/src/node_segnet.cpp.o] Error 1
CMakeFiles/Makefile2:1919: recipe for target 'ros_deep_learning/CMakeFiles/segnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/segnet.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed

Detectnet node: custom topic overlay error

Hello dusty,

I am trying to run the detectnet node with my custom image topic and have encountered a problem.

Using,
auto img_sub = ROS_CREATE_SUBSCRIBER(sensor_msgs::Image, "custom rostopic name", 5, img_callback);
I subscribe to my custom rostopic (sensor_msgs::Image), which has a 720x540 image size.

The detection seems to be working, but at the stage of publishing the overlaid image it crashes with the following error, which seems to be a memory issue.

[ INFO] [1605629139.267617011]: detectnet node initialized, waiting for messages
[ INFO] [1605629143.360332135]: allocated CUDA memory for 720x540 image conversion
[ INFO] [1605629143.450974899]: detected 2 objects in 720x540 image
[ INFO] [1605629143.451798939]: object 0 class #1 (person)  confidence=0.732592
[ INFO] [1605629143.452022085]: object 0 bounding box (107.788712, 229.121368)  (141.429123, 300.089172)  w=33.640411  h=70.967804
[ INFO] [1605629143.452106825]: object 1 class #1 (person)  confidence=0.795658
[ INFO] [1605629143.452231887]: object 1 bounding box (160.328171, 231.065933)  (185.133057, 295.645020)  w=24.804886  h=64.579086
[ INFO] [1605629143.458095690]: allocated CUDA memory for 720x540 image conversion
free(): invalid pointer
[ INFO] [1605629143.619500453]: allocated CUDA memory for 720x540 image conversion

I tried to identify where the free() function is called but without success.

What could be the possible problem here?

How do I flip the image to mode 4 (horizontal)?

I did this with jetson-inference by adding --input-flip=horizontal, since I was getting a mirrored image, but I couldn't find how to do it in the ROS node. I guess I need to compile in that flip mode; I know how to do it the old way in the gstCamera.cpp file (with "if mode 4" and so on), but now it is "if FLIP_MODE == 180 ..." and I don't know which value corresponds to mode 4 / horizontal. Maybe FLIP_MODE == 90 or 270? I don't know coding, sorry.

thanks
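For what it's worth, newer branches expose the flip setting as a video_source launch argument that is passed straight through to jetson-utils, whose accepted flip strings include rotate-180, horizontal, and vertical. Assuming your branch has the input_flip argument (check video_source.ros1.launch in your checkout; older ones may not):

```shell
$ roslaunch ros_deep_learning video_source.ros1.launch input:=csi://0 input_flip:=horizontal
```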

How to modify the frame rate of the v4l2 camera

Thanks for your work. I use the ROS node for video reading (video_source.launch). I saw the place to modify the CSI camera at line 360 of gstCamera.cpp, but I don't know how to modify the frame rate of the V4L2 camera (I use a USB camera). The default is the highest frame rate, but in fact I only need to read about 10 frames per second. Please tell me how to modify the frame rate of the V4L2 camera. Thanks for any helpful suggestions.
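The V4L2 path also builds a GStreamer pipeline whose caps pin the format, so limiting the frame rate means adding a framerate field to the caps string in gstCamera.cpp's pipeline-building code. A sketch of the caps to aim for, assuming a 640x480 USB camera (the exact pipeline text and variable names vary by jetson-utils version):

```
v4l2src device=/dev/video0 ! video/x-raw, width=640, height=480, framerate=10/1 ! appsink name=mysink
```

The camera must actually advertise the requested rate; `v4l2-ctl --list-formats-ext` shows which width/height/framerate combinations the device supports.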

Error: ROS node can't find model files!

Hello, I followed the instructions for ros_deep_learning.

I installed jetson-inference at another location with the following command:
cmake -DCMAKE_INSTALL_PREFIX:PATH=~/jetson-inference ../

And then appended the required paths:
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/nano/jetson-inference/include
export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:/home/nano/jetson-inference/share/jetson-utils/cmake:/home/nano/jetson-inference/share/jetson-inference/cmake
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/nano/jetson-inference/lib

Testing was successful:
./detectnet-console images/street.jpg street.jpg

Problem: the ROS node can't find the models.

nano@nano:~$ rosrun ros_deep_learning detectnet /imagenet/image_in:=/image_publisher/image_raw _model_name:=ssd-mobilenet-v2

detectNet -- loading detection network model from:
-- model networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
-- input_blob 'Input'
-- output_blob 'NMS'
-- output_count 'NMS_1'
-- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
-- threshold 0.500000
-- batch_size 1

[TRT] TensorRT version 6.0.1
[TRT] loading NVIDIA plugins...
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - GridAnchorRect_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] Could not register plugin creator: FlattenConcat_TRT in namespace:
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU

error: model file 'networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff' was not found.
if loading a built-in model, maybe it wasn't downloaded before.

    Run the Model Downloader tool again and select it for download:

       $ cd <jetson-inference>/tools
       $ ./download-models.sh

detectNet -- failed to initialize.
[ERROR] [1583696440.892525823]: failed to load detectNet model
nano@nano:~$
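Since jetson-inference was installed to a non-default prefix here, the relative networks/... path likely does not resolve from where the node runs. Two usual remedies are re-running the model downloader so the file exists where the search paths look, or pointing the node at the model file explicitly. A sketch, with paths illustrative for this ~/jetson-inference prefix (whether your branch accepts an absolute path here depends on the node's parameter handling, so check node_detectnet.cpp):

```shell
$ cd ~/jetson-inference/tools
$ ./download-models.sh        # re-select SSD-Mobilenet-v2 for download

# or pass the model file's absolute path instead of the built-in name:
$ rosrun ros_deep_learning detectnet /detectnet/image_in:=/image_publisher/image_raw \
    _model_path:=/home/nano/jetson-inference/data/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
```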

segnet - map classes in source image with class_mask

I'm trying to get the class of each pixel in the source image.

I'm using the segnet node and it works well; on the output side I get the colored image and the grayscale class mask.

Next I upscaled the class mask to the source image size and colored each pixel with its class color, but the output doesn't make sense to me (the patterns of the color image and the mask image don't match). Is there some additional step I have to do?

Thank you

(attached image: segnet_output)
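A common cause of a mismatched recolored mask is the upscaling step: a class-ID mask must be resized with nearest-neighbor sampling (e.g. cv2.resize(mask, size, interpolation=cv2.INTER_NEAREST)), because bilinear or area interpolation averages neighboring IDs into values that belong to no class. A dependency-free sketch of what integer-factor nearest-neighbor upscaling preserves:

```python
def upscale_class_mask(mask, scale):
    """Integer-factor nearest-neighbor upscale of a 2-D class-ID mask.

    Every output pixel copies an input pixel's ID verbatim, so no new
    (invalid) class values can appear -- unlike bilinear interpolation,
    which would blend e.g. classes 0 and 3 into a nonexistent class 1.
    """
    out = []
    for row in mask:
        expanded = [v for v in row for _ in range(scale)]  # repeat columns
        for _ in range(scale):                             # repeat rows
            out.append(list(expanded))
    return out

mask = [[0, 1],
        [2, 3]]
print(upscale_class_mask(mask, 2))
# [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
```

The recolored result should then line up with the node's colorized overlay, up to the network's native grid resolution.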

Any way to view detectnet outputs?

Hello,
This repo is amazing, thank you so much. Is there any way to view the resulting output detections from the detectnet ros node with the bounding boxes and label, I would really appreciate any help.
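The detectnet node publishes an overlay image topic with the boxes and labels already drawn in, and any ROS image viewer can display it. Assuming the default topic names from the launch files (verify with rostopic list):

```shell
$ rostopic list | grep detectnet
$ rosrun rqt_image_view rqt_image_view /detectnet/overlay
```

Newer branches can also route the overlay topic into the package's own video_output node to display or stream it.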

catkin_make failed after linking dependencies

Nano B01, OpenCV updated from source, ROS Melodic full desktop install that runs roscore successfully.

  • NVIDIA Jetson Nano (Developer Kit Version)
    • Jetpack 4.3 [L4T 32.3.1]
    • NV Power Mode: MAXN - Type: 0
    • jetson_clocks service: inactive
  • Libraries:
    • CUDA: 10.0.326
    • cuDNN: 7.6.3.28
    • TensorRT: 6.0.1.10
    • Visionworks: 1.6.0.500n
    • OpenCV: 4.1.1 compiled CUDA: YES
    • VPI: 0.1.0
    • Vulkan: 1.1.70

Following the ros_deep_learning documents, I checked dependencies with rosdep; all required rosdeps installed successfully.
Compilation succeeded, but the final link step failed:
//usr/local/lib/libopencv_features2d.so.4.1: undefined reference to `cv::ocl::isOpenCLActivated()'
//usr/local/lib/libopencv_features2d.so.4.1: undefined reference to `cv::ocl::isOpenCLActivated()'
collect2: error: ld returned 1 exit status
ros_deep_learning/CMakeFiles/imagenet.dir/build.make:152: recipe for target '/home/turtlebotnv/catkin_ws/devel/lib/ros_deep_learning/imagenet' failed
make[2]: *** [/home/turtlebotnv/catkin_ws/devel/lib/ros_deep_learning/imagenet] Error 1
CMakeFiles/Makefile2:1884: recipe for target 'ros_deep_learning/CMakeFiles/imagenet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/imagenet.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
collect2: error: ld returned 1 exit status
ros_deep_learning/CMakeFiles/segnet.dir/build.make:152: recipe for target '/home/turtlebotnv/catkin_ws/devel/lib/ros_deep_learning/segnet' failed
make[2]: *** [/home/turtlebotnv/catkin_ws/devel/lib/ros_deep_learning/segnet] Error 1
CMakeFiles/Makefile2:1719: recipe for target 'ros_deep_learning/CMakeFiles/segnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/segnet.dir/all] Error 2
[100%] Built target ros_deep_learning_nodelets
//usr/local/lib/libopencv_features2d.so.4.1: undefined reference to `cv::ocl::isOpenCLActivated()'
collect2: error: ld returned 1 exit status
ros_deep_learning/CMakeFiles/detectnet.dir/build.make:152: recipe for target '/home/turtlebotnv/catkin_ws/devel/lib/ros_deep_learning/detectnet' failed
make[2]: *** [/home/turtlebotnv/catkin_ws/devel/lib/ros_deep_learning/detectnet] Error 1
CMakeFiles/Makefile2:1650: recipe for target 'ros_deep_learning/CMakeFiles/detectnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/detectnet.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed

How to use this ROS node?

I guess that before I compile this node in ROS with catkin_make, I need to somehow install and compile the jetson-inference project on the TX2. Is that correct?
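Yes, jetson-inference must be built and installed first. A sketch of the usual order, following the jetson-inference build instructions (paths are the defaults; adjust to your setup):

```shell
# 1) build and install jetson-inference (provides the libraries and headers)
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference && mkdir build && cd build
$ cmake ../ && make -j$(nproc)
$ sudo make install && sudo ldconfig

# 2) then build this package in a catkin workspace
$ cd ~/catkin_ws/src
$ git clone https://github.com/dusty-nv/ros_deep_learning
$ cd ~/catkin_ws && catkin_make
```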

segNet -- failed to load error

roslaunch ros_deep_learning segnet.ros1.launch input_width:=640 input_height:=480 input:=/dev/video0


[ERROR] [1597129109.498712554]: failed to capture next frame
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)33/1, colorimetry=(string)bt601, interlace-mode=(string)progressive
[gstreamer] gstCamera -- recieved first frame, codec=raw format=yuyv width=640 height=480 size=614400
RingBuffer -- allocated 4 buffers (614400 bytes each, 2457600 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (921600 bytes each, 3686400 bytes total)
[ INFO] [1597129109.738371590]: allocated CUDA memory for 640x480 image conversion
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .1.1.7103.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU

error: model file 'networks/FCN-ResNet18-Cityscapes-1024x512/fcn_resnet18.onnx' was not found.
if loading a built-in model, maybe it wasn't downloaded before.

    Run the Model Downloader tool again and select it for download:

       $ cd <jetson-inference>/tools
       $ ./download-models.sh

[TRT] segNet -- failed to load.
[ERROR] [1597129113.435617192]: failed to load segNet model


I already downloaded the default segmentation models, so what is the problem? I suspect TensorRT is involved, since the log reports that the engine cache file was not found.

Has anyone else run into this?

failed at catkin_make

[ 18%] Built target ros_deep_learning_nodelets
[ 25%] Linking CXX executable /home/nvidia/catkin_ws/devel/lib/ros_deep_learning/segnet
[ 31%] Linking CXX executable /home/nvidia/catkin_ws/devel/lib/ros_deep_learning/detectnet
[ 37%] Linking CXX executable /home/nvidia/catkin_ws/devel/lib/ros_deep_learning/imagenet
[ 50%] Built target planar
[ 62%] Built target planar_map
/usr/bin/ld: CMakeFiles/segnet.dir/src/image_converter.cpp.o: undefined reference to symbol '_Z17cudaRGBA32ToBGRA8P6float4P6uchar4mm'
//usr/local/lib/libjetson-utils.so: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
ros_deep_learning/CMakeFiles/segnet.dir/build.make:154: recipe for target '/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/segnet' failed
make[2]: *** [/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/segnet] Error 1
CMakeFiles/Makefile2:485: recipe for target 'ros_deep_learning/CMakeFiles/segnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/segnet.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/usr/bin/ld: CMakeFiles/detectnet.dir/src/image_converter.cpp.o: undefined reference to symbol '_Z17cudaRGBA32ToBGRA8P6float4P6uchar4mm'
//usr/local/lib/libjetson-utils.so: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
ros_deep_learning/CMakeFiles/detectnet.dir/build.make:154: recipe for target '/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/detectnet' failed
make[2]: *** [/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/detectnet] Error 1
CMakeFiles/Makefile2:522: recipe for target 'ros_deep_learning/CMakeFiles/detectnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/detectnet.dir/all] Error 2
/usr/bin/ld: CMakeFiles/imagenet.dir/src/image_converter.cpp.o: undefined reference to symbol '_Z17cudaRGBA32ToBGRA8P6float4P6uchar4mm'
//usr/local/lib/libjetson-utils.so: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
ros_deep_learning/CMakeFiles/imagenet.dir/build.make:154: recipe for target '/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/imagenet' failed
make[2]: *** [/home/nvidia/catkin_ws/devel/lib/ros_deep_learning/imagenet] Error 1
CMakeFiles/Makefile2:847: recipe for target 'ros_deep_learning/CMakeFiles/imagenet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/imagenet.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j6 -l6" failed

I am getting this error when running catkin_make.
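The "DSO missing from command line" messages usually mean the node executables are not being linked against libjetson-utils.so, where the cudaRGBA32ToBGRA8 symbol lives. A hedged CMakeLists.txt fragment showing an explicit link line (target names taken from the build output; the install prefix /usr/local is an assumption):

```cmake
# assumption: jetson-inference/jetson-utils were installed to /usr/local
link_directories(/usr/local/lib)

# link jetson-utils explicitly so cudaRGBA32ToBGRA8 resolves at link time
target_link_libraries(imagenet  ${catkin_LIBRARIES} jetson-inference jetson-utils)
target_link_libraries(detectnet ${catkin_LIBRARIES} jetson-inference jetson-utils)
target_link_libraries(segnet    ${catkin_LIBRARIES} jetson-inference jetson-utils)
```

This is a sketch of the usual fix for that linker error, not necessarily the exact change the package needs on every JetPack release.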

/detectnet/detections/results/id never goes back to '0' (background), instead stops publishing

I am using this node to steer an R/C car (basically a JetRacer) around a circuit, and it looks promising.

The only issue I have at this point is that when the car loses track of the circuit and none of the trained markers is visible anymore, my code expects /detectnet/detections/results/id to go back to 0 (which is background by default) and then triggers some sort of recovery maneuver.

Instead, the node stops publishing, which "keeps" the last identified object on the topic.

Is there a way to signal "I don't see anything I was trained to see"? I could use a timeout to catch the absence of messages on the topic, but maybe there is something more elegant... ;-)

catkin_make failed after jetson-inference update

Could not build ros package after installing latest version of jetson-inference

Output of catkin_make:

/home/nvidia/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp: In function ‘void img_callback(const ImageConstPtr&)’:
/home/nvidia/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp:74:119: error: no matching function for call to ‘detectNet::Detect(float*, uint32_t, uint32_t, float*&, int*, float*&)’
  const bool result = net->Detect(cvt->ImageGPU(), cvt->GetWidth(), cvt->GetHeight(), bbCPU, &numBoundingBoxes, confCPU);
                                                                                                                       ^
In file included from /home/nvidia/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp:29:0:
/usr/local/include/jetson-inference/detectNet.h:271:6: note: candidate: int detectNet::Detect(float*, uint32_t, uint32_t, detectNet::Detection**, uint32_t)
  int Detect( float* input, uint32_t width, uint32_t height, Detection** detections, uint32_t overlay=OVERLAY_BOX );
      ^
/usr/local/include/jetson-inference/detectNet.h:271:6: note:   candidate expects 5 arguments, 6 provided
/usr/local/include/jetson-inference/detectNet.h:283:6: note: candidate: int detectNet::Detect(float*, uint32_t, uint32_t, detectNet::Detection*, uint32_t)
  int Detect( float* input, uint32_t width, uint32_t height, Detection* detections, uint32_t overlay=OVERLAY_BOX );
      ^
/usr/local/include/jetson-inference/detectNet.h:283:6: note:   candidate expects 5 arguments, 6 provided
/home/nvidia/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp:77:7: error: in argument to unary !
  if( !result )
       ^
/home/nvidia/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp: In function ‘int main(int, char**)’:
/home/nvidia/catkin_ws/src/ros_deep_learning/src/node_detectnet.cpp:205:20: error: ‘class detectNet’ has no member named ‘GetMaxBoundingBoxes’
  maxBoxes   = net->GetMaxBoundingBoxes();  
                    ^
ros_deep_learning/CMakeFiles/detectnet.dir/build.make:62: recipe for target 'ros_deep_learning/CMakeFiles/detectnet.dir/src/node_detectnet.cpp.o' failed
make[2]: *** [ros_deep_learning/CMakeFiles/detectnet.dir/src/node_detectnet.cpp.o] Error 1
CMakeFiles/Makefile2:7449: recipe for target 'ros_deep_learning/CMakeFiles/detectnet.dir/all' failed
make[1]: *** [ros_deep_learning/CMakeFiles/detectnet.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j6 -l6" failed

It seems the signature of detectNet::Detect() changed after this commit of jetson-inference, but this package is still using the old signature.
Is there a tag/commit of jetson-inference with which this package works?

Using realsense node output for detecting object using detectnet

Hi @dusty-nv
I am using a RealSense D415 camera along with a Jetson TX2.
I have launched the camera node using

roslaunch realsense2_camera rs_camera.launch

After that I ran the detectnet node using

rosrun ros_deep_learning detectnet /detectnet/image_in:=camera/color/image_raw _model_name:=pednet

After this I get the following error:

failed to convert 640x480 rgb8 image
converting 640x480 rgb8 image
640x480 image is in rgb8 format, expected bgr8

Subscribing node to USB_Cam output

I am trying to use this node by subscribing to another topic, usb_cam/image_raw. I see some remapping going on in the imagenet nodelet, but I am not sure what "imin" is or how it goes from "imin" to "/usb_cam/image_raw" (I feel like it should be the other way around). Here is the section of the launch file I am using for the usb_cam node and the ros_deep_learning node:

<!-- Nodelet manager -->
<node pkg="nodelet" type="nodelet" name="standalone_nodelet"  args="manager" output="screen"/>

<!-- ros_imagenet nodelet -->
<node name="imagenet_node"
      pkg="nodelet" type="nodelet"
      args="load ros_deep_learning/ros_imagenet standalone_nodelet"
      output="screen">
      <remap from="~imin" to="/usb_cam/image_raw"/>
</node>

Unfortunately when I launch this, I get the following error.

[ INFO] [1582405073.768284396]: failed to convert 640x480 rgb8 image
[ INFO] [1582405073.799523285]: converting 640x480 rgb8 image
[ERROR] [1582405073.799635580]: 640x480 image is in rgb8 format, expected bgr8

If I remove the "remap..." part of the launch script the error is no longer there, but when I do a $ rostopic echo /imagenet_node/class_str, nothing happens; it's just a blank screen.

Has anyone used this ros_deep_learning node to subscribe to the published usb_cam/image_raw topic? If so, any idea what I am doing wrong?
