
ros_driver's Introduction

Official ROS driver for Ensenso stereo cameras

Documentation

To get started with the package, take a look at the ROS wiki.

If you want to use the package with ROS2, see our ROS2 documentation on how to build for ROS2 and for further information.

Remarks

  • To use this package, you need to have the Ensenso SDK installed.
  • Ensenso C-Series cameras require at least version 2.1.1.
  • Ensenso S- and XR-Series cameras require at least version 1.7.0.
  • Version 1.7.0 and newer requires at least Ensenso SDK 3.0.
  • Version 1.6.3 and older requires at least Ensenso SDK 2.0. Older versions are not supported.
  • All inputs and outputs of this package are in meters and seconds, as the ROS conventions require. This is different from the NxLib, which uses millimeters and milliseconds (see the short sketch below).
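A minimal illustration of this convention (hypothetical helper, not part of the package): values read directly from the NxLib have to be scaled before being used with ROS interfaces.

# Hypothetical helper: NxLib values (millimeters, milliseconds) vs. ROS values (meters, seconds).
def nxlib_to_ros_units(length_mm, duration_ms):
    return length_mm * 1e-3, duration_ms * 1e-3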

Report a Bug

For general problems with your camera or the Ensenso SDK, please visit the Ensenso community forum or contact the IDS support.

Bugs of the ROS driver can be reported directly in the issue tracker.

Acknowledgements

ROSIN EU

Supported by ROSIN - ROS-Industrial Quality-Assured Robot Software Components. More information: http://rosin-project.eu/

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 732287.

ros_driver's People

Contributors

benthie, erblinium, jornb, mmoerdijk, saierd, shedding-reptile, simonschmeisser, thomascent, yguenduez


ros_driver's Issues

Support for X36 series camera

Hi,

Thank you for this ROS driver package. Does this repository support the X36-BL camera? I am evaluating an X36-BL camera and it is important for me to use it together with ROS. I saw pull request #72 and the xr-series branch, but it was not clear whether X36 is supported, since the pull request is not merged.

Also, the X36 has an RGB camera mounted on top and has been calibrated to output colored point clouds in Windows 10. Is this functionality also supported in this package, i.e., will this package load the already calibrated RGB-X36 setup and output colored point clouds?

Thank you.

Camera publishes data in wrong frame if TF Lookup failed

If the target_frame is set to a frame that is not in the tree, the cloud is published anyway and only a warning is printed ("Could not get the transformation to the target frame from TF. Using an identity transformation for this frame.").

If the camera is misconfigured, there should be no output at all. With the current implementation, every subscriber assumes that the data was transformed into the frame (as it is set in the header), so subsequent robot movements would rely on wrong data. Silently ignoring the failed TF lookup and publishing wrong data is the worst case for a camera, especially if the data is used for collision avoidance.

If data is published, the frame should be set to the camera_frame, so that a listener has a chance to notice there was a problem.

The same applies to writePoseToNxLib: if something is wrong with the pose, the identity is used and there is not even a return value to signal the failure.

Camera does not support parameter Exposure

I am currently working through the tutorial for the ROS Ensenso driver. When I try to run my client with the code provided at http://wiki.ros.org/ensenso_driver/Tutorials/Parameters, I get an error saying that I am trying to use an attribute that is not available.
:~/ensenso_tutorials/ensenso_ws$ rosrun ensenso_camera ensenso_client.py
Traceback (most recent call last):
File "/home/thomas/ensenso_tutorials/ensenso_ws/src/ros_driver/ensenso_camera/src/ensenso_client.py", line 36, in
result = ensenso_client()
File "/home/thomas/ensenso_tutorials/ensenso_ws/src/ros_driver/ensenso_camera/src/ensenso_client.py", line 25, in ensenso_client
exposure = get_parameter.get_result().results[0].float_value
AttributeError: 'NoneType' object has no attribute 'results'

I could "solve" this by deleting the last part (results[0].float_value) of the following statement: exposure = get_parameter.get_result().results[0].float_value

When I run the code it returns None, which I reckon could be because I altered the last part of the statement above, but it also warns that the camera does not support reading the exposure parameter.
[ WARN] [1551961309.027018767]: Reading the parameter Exposure, but the camera does not support it!
I added the bare minimum to the code provided in the tutorial.
ensenso_client.txt

High Calibration Errors

I'm trying to perform hand-eye calibration with a fixed Ensenso N35 camera and a Yaskawa robot. Having ensured that the calibration poses are clearly visible to the camera, I proceeded using the hand-eye calibration action.

However, I am getting errors in the order of 0.5 to 0.8 meters away from the actual position. The residual parameter is reported between 0.8 and 1.4. Correct me if I am wrong, but I understood the residual parameter to represent the error of the calibration, with a value closer to 0 meaning a better calibration.

Another question: what is the point of origin on the camera body to which the calibration is calculated? I initially assumed it was the midpoint of the projector in the middle of the camera, however the x dimension only makes sense if it is related to the lens nearer the Ethernet port.

@saierd @yguenduez How do I improve the calibration quality? Is there something else I should consider in terms of the calibration routine apart from the provided actions?

CameraInfo header timestamp is zero

Hi Ensenso,

I've been trying to register the Ensenso's depth images to the frame of an RGB camera using a depth_image_proc/register nodelet. I was having some trouble because that nodelet uses a message filter to synchronise the color, depth and camera info messages it subscribes to, and it turns out that the camera_info.header.stamp published by the Ensenso driver isn't filled in. Here's one I rostopic echoed just now; not to labor the point, but you can see that stamp: secs and stamp: nsecs are both 0.

$ rostopic echo /depth/camera_info
header: 
  seq: 1
  stamp: 
    secs: 0
    nsecs: 0
  frame_id: "ensenso_camera_frame"
height: 1024
width: 1280
distortion_model: "plumb_bob"
D: [0.0, 0.0, 0.0, 0.0, 0.0]
K: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [1078.28662109375, 0.0, 566.381103515625, 0.0, 0.0, 1078.28662109375, 518.2259521484375, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 1
binning_y: 1
roi: 
  x_offset: 0
  y_offset: 0
  height: 1023
  width: 1279
  do_rectify: True
---

Any chance you could set the camera info timestamp to match the stamp attached to the depth image that's published alongside it? That seems to satisfy depth_image_proc/register.
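In the meantime, a rough client-side workaround (a sketch with a hypothetical node name and output topic, using the topics echoed above) is to cache the camera info and restamp it with the depth image timestamp before feeding it to depth_image_proc:

import rospy
from sensor_msgs.msg import Image, CameraInfo

rospy.init_node("restamp_camera_info")

latest_info = [None]
info_pub = rospy.Publisher("/depth/camera_info_restamped", CameraInfo, queue_size=1)

def info_callback(msg):
    # Remember the most recent camera info (its own stamp is always zero).
    latest_info[0] = msg

def image_callback(msg):
    # Republish the camera info with the depth image timestamp copied in.
    if latest_info[0] is not None:
        info = latest_info[0]
        info.header.stamp = msg.header.stamp
        info_pub.publish(info)

rospy.Subscriber("/depth/camera_info", CameraInfo, info_callback)
rospy.Subscriber("/depth/image", Image, image_callback)
rospy.spin()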

How to set param with c++ ?

I saw the code on the ROS wiki.
I understand the code for setting parameters on the camera with Python, but I could not figure out how to set parameters with C++. I want to set MINIMUM_DISPARITY.

# Add a new parameter set "location1". It will be initialized with the default
# settings from when the camera node started. We then change the exposure settings.
set_parameter.send_goal(SetParameterGoal(parameter_set="location1", parameters=[
    Parameter(key=Parameter.AUTO_EXPOSURE, bool_value=False),
    Parameter(key=Parameter.EXPOSURE, float_value=10)
]))

Could you show a tutorial with C++? Thank you!

Cannot turn off point cloud

I am using an Ensenso N35 for deep learning and object detection and localization. I have found that the images produced without the pattern are better suited for object detection than the images that have the pattern projected on them. I am able to turn off the projector and grab an image. After detection I aim to use the point cloud to find the coordinates of the objects.
I altered the request_data script to turn the projector off every other picture. I thought I would be able to differentiate between the images with and without the pattern by also turning off the point cloud and working with the timing of the incoming messages. However, the point cloud keeps being published even when it is supposedly turned off. The request_data script contains the following statement:
goal.request_point_cloud = rospy.get_param("~point_cloud", True)
I tried the following:

while not rospy.is_shutdown():
    if switch == 0:
        set_parameter.send_goal(SetParameterGoal(parameters=[Parameter(key=Parameter.PROJECTOR, bool_value=False)]))
        goal.request_point_cloud = rospy.get_param("~point_cloud", False)

        switch = 1
    elif switch == 1:
        set_parameter.send_goal(SetParameterGoal(parameters=[Parameter(key=Parameter.PROJECTOR, bool_value=True)]))
        goal.request_point_cloud = rospy.get_param("~point_cloud", True)
        switch = 0

This is supposed to make the camera alternate between publishing a point cloud and not publishing one. However, it does not seem to work. What would be the proper way of turning the point cloud off and on?
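As a note, a minimal sketch of an alternative flow (assuming the camera node offers the request_data action under its default name, as used by the request_data script; the projector toggling via SetParameter is left out for brevity): rather than toggling the ROS parameter, set request_point_cloud directly on each goal that is sent, so the cloud is only computed and published for the captures that ask for it.

import rospy
import actionlib
from ensenso_camera_msgs.msg import RequestDataAction, RequestDataGoal

rospy.init_node("toggle_point_cloud")

client = actionlib.SimpleActionClient("request_data", RequestDataAction)
client.wait_for_server()

want_cloud = False
while not rospy.is_shutdown():
    goal = RequestDataGoal()
    goal.request_point_cloud = want_cloud  # only request the cloud every other capture
    client.send_goal(goal)
    client.wait_for_result()
    want_cloud = not want_cloud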

Add rectified depth image as output

In addition to the pointcloud format, would it be possible to return a rectified depth image as well? Looking at the NxLib API, this would be similar to the single channel pointmap from the RenderPointMap function:
https://www.ensenso.com/manual/cmdrenderpointmap.htm

The reason is that many detections can be done on the depth image instead of a point cloud. Currently we recreate the depth image from the point cloud result ourselves (roughly as sketched below), but it seems cleaner if the driver exposed this format directly.

Is this feature of interest to others as well?
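For context, the client-side reconstruction mentioned above might look roughly like this (a sketch, assuming the published point cloud is organized, i.e. its height and width match the sensor resolution):

import numpy as np
from sensor_msgs import point_cloud2

def cloud_to_depth(cloud_msg):
    # Extract the z channel of an organized PointCloud2; NaNs mark invalid pixels.
    points = point_cloud2.read_points(cloud_msg, field_names=("z",), skip_nans=False)
    depth = np.array([z for (z,) in points], dtype=np.float32)
    return depth.reshape(cloud_msg.height, cloud_msg.width)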

Request for tags

Hi,

At the moment there are no tags or releases for this repo, which makes it difficult to depend on specific versions (it requires referencing specific commits, which is not ideal).

Would it be an option to start tagging relevant releases/updates and (if possible) to tag previous ones?

A question about segmentation fault (core dumped)

I ran into a segmentation fault when running rosrun ensenso_camera ensenso_camera_node.
The output looks like this:
~/ensenso_ros1_ws$ rosrun ensenso_camera ensenso_camera_node
[ INFO] [1644395246.946467651]: Initializing nodelet with 12 worker threads.
Segmentation fault (core dumped)

Environment:

  • Ubuntu 20.04
  • Ensenso S10
  • SDK 3.2.489
  • ROS Noetic

Thanks!

8UC1 provided, need mono8

Hello, I am trying to use this driver for the Ensenso camera. The package that I intend to use it with accepts mono8 encoding, however the camera provides 8UC1. Is this something that can be altered? I have been looking for conversion methods, but that does not seem to be possible as 8UC1 is not a color format. I am subscribing to the /rectified/left/image topic. Would I need to use a different one? Or do you perhaps know of a method to convert the data?

Thanks in advance!
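Since mono8 and 8UC1 share the same memory layout (one byte per pixel), one possible workaround is to republish the image with the encoding field rewritten. A rough sketch (hypothetical node name and output topic):

import rospy
from sensor_msgs.msg import Image

rospy.init_node("retag_encoding")

pub = rospy.Publisher("/rectified/left/image_mono8", Image, queue_size=1)

def callback(msg):
    msg.encoding = "mono8"  # same bytes, different encoding label
    pub.publish(msg)

rospy.Subscriber("/rectified/left/image", Image, callback)
rospy.spin()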

Low update rates of Point Cloud on N35 camera

I am using two N35 cameras, which by specification can reach a speed of 10 fps of 3D data, but I am experiencing update times greater than 1.6 seconds for a single camera and 2.8 seconds for both cameras running at the same time.

My request_data goals request only the point_cloud and normals.

Now my question is:

  • The "Max. fps (3D): 10 (2x Binning: 30) and 64 disparity levels" statement from the N35 description talks about 10 fps of disparity maps (if I am not wrong), but there is no information on the update rate of the point cloud computed on the receiving computer. Are the times that I am experiencing normal?

I tried requesting the data from both cameras in a single thread and in separate threads, but this does not seem to have much influence on the update times.

PS: I am currently concatenating both cameras' point clouds using PCL after receiving the data from each camera separately, but I am aware there is supposed to be a way of combining the point clouds on the camera side ("Output of a single 3D point cloud with data from all cameras used in multi-camera mode"). Is this possible with the ros_driver? I have already used the target_frame parameter of the ensenso_camera nodes to obtain both cameras' point clouds in the same reference frame.

PPS: I have already increased the network card input buffer to the maximum 4096, but there is no noticeable change.

PointCloud not complete

Hello,
I am currently starting my efforts to use the Ensenso camera in ROS. I am running Ubuntu 16.04, I have both the SDK and the ueyeeth driver installed, and I am connecting to an N35 stereo camera.
I can manage to retrieve data from the camera and visualize it in RViz. The ensenso_optical_frame, in which everything should be represented, is not automatically added. I was able to see the raw camera images but not the point cloud, therefore I added that frame myself. However, the point cloud that results from this is poor, to say the least.
Could this be related to the frame that I created for the camera? The tutorial is not very comprehensive on this matter.

Kind regards

Add support for S10

This package only supports stereo cameras, but the new S10 is a structured light camera. The following error occurs when attempting to launch an ensenso_camera/nodelet:
The camera to be opened is of the wrong type StructuredLight. It should be Stereo.

############### complete console output ####################

$ roslaunch ensenso_camera nodelet.launch
... logging to /home/tech/.ros/log/e09693b4-59f8-11ec-a68d-94c691a5898a/roslaunch-tech-NUC8i3BEH-14510.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://tech-NUC8i3BEH:43197/

SUMMARY
========

PARAMETERS
 * /Ensenso_218148/serial: 218148
 * /Ensenso_218148/target_frame: Workspace
 * /rosdistro: melodic
 * /rosversion: 1.14.12

NODES
  /
    Ensenso_218148 (nodelet/nodelet)
    manager_ (nodelet/nodelet)

ROS_MASTER_URI=http://localhost:11311

process[manager_-1]: started with pid [14525]
process[Ensenso_218148-2]: started with pid [14526]
[ INFO] [1639169273.834979332]: Loading nodelet /Ensenso_218148 of type ensenso_camera/nodelet to manager /manager_ with the following remappings:
[ INFO] [1639169273.836988566]: waitForService: Service [/manager_/load_nodelet] has not been advertised, waiting...
[ INFO] [1639169273.845668463]: Initializing nodelet with 4 worker threads.
[ INFO] [1639169273.857829152]: waitForService: Service [/manager_/load_nodelet] is now available.
[ERROR] [1639169277.063881560]: The camera to be opened is of the wrong type StructuredLight. It should be Stereo.
[FATAL] [1639169277.174065622]: Failed to load nodelet '/Ensenso_218148` of type `ensenso_camera/nodelet` to manager `/manager_'
[manager_-1] process has died [pid 14525, exit code 1, cmd /opt/ros/melodic/lib/nodelet/nodelet manager __name:=manager_ __log:=/home/tech/.ros/log/e09693b4-59f8-11ec-a68d-94c691a5898a/manager_-1.log].
log file: /home/tech/.ros/log/e09693b4-59f8-11ec-a68d-94c691a5898a/manager_-1*.log
[Ensenso_218148-2] process has died [pid 14526, exit code 255, cmd /opt/ros/melodic/lib/nodelet/nodelet load ensenso_camera/nodelet /manager_ __name:=Ensenso_218148 __log:=/home/tech/.ros/log/e09693b4-59f8-11ec-a68d-94c691a5898a/Ensenso_218148-2.log].
log file: /home/tech/.ros/log/e09693b4-59f8-11ec-a68d-94c691a5898a/Ensenso_218148-2*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done

Depth image units are floating point but expressed in millimeters

Hello again,

It seems that the depth images published by this package are encoded as 32FC1 but are expressed in millimeters. It states pretty explicitly in the remarks section on the wiki that:

All inputs and outputs of this package are in meters and seconds, as the convention for ROS requires

So I'm guessing this isn't the intended behavior

Just in case you need to reproduce this on your side, I'm on the latest master of ros_driver and I'm running the camera using the instructions straight out of the Viewing Your First Point Cloud in RViz tutorial on the wiki:

$ rosrun ensenso_camera ensenso_camera_node
$ rosrun ensenso_camera request_data

Here's a bag containing a couple of depth messages I collected; running them through this script prints 543.116, which sounds about right for the distance in millimeters from the camera to the table it's looking at.

import rosbag
from cv_bridge import CvBridge

bridge = CvBridge()

# Keep the last depth image on the /depth/image topic and convert it to an OpenCV array.
for topic, msg, t in rosbag.Bag('depth_mm.bag').read_messages():
    if topic == '/depth/image':
        depth_msg = msg
        depth = bridge.imgmsg_to_cv2(msg)

# Sample a single pixel; the printed value is in whatever units the driver publishes.
print(depth[800, 400])

Is there a parameter somewhere that controls this? If not, any chance you could publish the depth images in meters?
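If the driver indeed publishes millimeters, a simple client-side workaround in the meantime (a sketch with a hypothetical node name and output topic) is to rescale the values in a small republishing node:

import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node("depth_mm_to_m")

bridge = CvBridge()
pub = rospy.Publisher("/depth/image_m", Image, queue_size=1)

def callback(msg):
    depth_mm = bridge.imgmsg_to_cv2(msg, "32FC1")
    out = bridge.cv2_to_imgmsg(depth_mm / 1000.0, "32FC1")  # millimeters -> meters
    out.header = msg.header
    pub.publish(out)

rospy.Subscriber("/depth/image", Image, callback)
rospy.spin()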

arg 'serial_stereo' is not defined.

I'm using an Ensenso N30-802-16-IR stereo 3D camera. For the launch file, I don't know the serial of the Ensenso stereo camera or the serial of the mono camera supported by the NxLib. How do I find these two numbers?

Consider adding a xacro macro of supported cameras

@gavanderhoorn in #2 (comment):

would you consider providing a ensenso_description (or ensenso_support, depending on how many products you intend to support) in combination with the driver?

That would allow you to standardise these frame/link names easily in a urdf, and remove the burden of modelling the camera from your users.

ensenso_camera_test tests are failing

[1519720055.786455757 ERROR /ensenso_camera_node]: Error while opening the camera 'ros_test'!
[1519720055.786535334 ERROR /ensenso_camera_node]: FileStereoCamera: Could not initialize background calibration thread
[1519720055.786575494 ERROR /ensenso_camera_node]: Failed to open the camera. Shutting down.

ENSENSO_INSTALL Required

https://github.com/ensenso/ros_driver/blob/42014bd0fa58ef4840a1e319c533a23958b0f9d1/ensenso_camera/CMakeLists.txt#L56-57

It seems that the environment variable ENSENSO_INSTALL is required to find the FindEnsenso.cmake file installed by the ensenso-sdk.
The ENSENSO_INSTALL variable is set in /etc/environment, which is not convenient for systems not relying on /etc/environment. Would it be possible to put the FindEnsenso.cmake file in a place where ENSENSO_INSTALL wouldn't be necessary?

NxLibException 17 (ExecutionFailed)

We are running tests with an Ensenso N35 camera.

Randomly, the following error occurs:

[1612358420.785077658 ERROR /camera_left/ensenso_camera]: NxLibException 17 (ExecutionFailed) for item /Execute/182467
[1612358420.785171708 ERROR /camera_left/ensenso_camera]: CaptureTimeout: Waiting for images timed out.

We are running the latest SDK 3.0311 with uEye 4.93 and the latest version of this repository.

What could be causing this issue?

Output depth image

I want to output the depth image, so I receive the image from the topic /depth/image.
Then I use CvBridge() to transform the message from the topic into .png images.
It works with this code:

depth = bridge.imgmsg_to_cv2(depth_msg, "32FC1")
cv2.imwrite(cam1_path + image_name, depth)

But the result is not good: the output gray image contains only two values, 255 and 0. Can you tell me how to solve this problem?
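A likely explanation, sketched here as a continuation of the snippet above (reusing its variables): writing a 32FC1 image directly with imwrite saturates the values into the 0-255 range, which is why only 0 and 255 remain. Converting to 16-bit, or normalizing to 8-bit for visualization, avoids this; the scale factor depends on the units actually published (see the depth image units issue above).

import cv2
import numpy as np

depth = bridge.imgmsg_to_cv2(depth_msg, "32FC1")
depth = np.nan_to_num(depth)  # replace NaNs (invalid pixels) with 0

# Option 1: keep the metric values in a 16-bit PNG
# (multiply by 1000 first if the values are in meters).
cv2.imwrite(cam1_path + "depth16_" + image_name, depth.astype(np.uint16))

# Option 2: normalize to 0..255 just for visualization.
vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite(cam1_path + "vis_" + image_name, vis)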

Issues with multi-camera setups

In our setup we have two Ensenso N35 cameras pointing at the same workspace from different angles, but when using a multi-camera setup, some questions (which I believe could be bugs) arise.

  1. How to select the action server to use. When two or more cameras are running, how does the request_data client choose which action server to connect to, and how do we change this?

As a matter of fact, if more than one camera is open (more than one ensenso_camera_node is running), triggering the request_data action makes both cameras start publishing the requested data, but there is a problem, see number 2.

  2. Since the output topics of the camera nodes do not use the node's name as part of the topic names, when opening more than one camera all ensenso_camera_nodes publish to exactly the same topics, which is in most cases not desired.

One way to get around this could be to add a node namespace, which I tried, but it breaks the functionality of the actionlib package, and I am also not very familiar with the correct usage of node namespaces.

This can be solved by remapping the topics like so:
<remap from="/point_cloud" to="/lower_point_cloud"/>
But the ensenso_camera_node has so many output topics that this would not be efficient; you will end up remapping more than 20 topics.

Here is an example of the launch file I am using. Although remapping the point cloud topic works as desired, I am not sure whether the multi-publisher problem described in 2 will create issues in further application development.

<launch>

  <arg name="upper_camera_serial" default="171864"/>
  <arg name="lower_camera_serial" default="171865"/>

  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_upper_camera_node" output="screen">
    <param name="serial" value="$(arg upper_camera_serial)"/>
    <param name="camera_frame" value="ensenso_upper_optical_frame"/>
    <param name="target_frame" value="ensenso_upper_optical_frame"/>
    <remap from="/point_cloud" to="/point_cloud_upper"/>    
  </node>

  <node pkg="ensenso_camera" type="ensenso_camera_node" name="ensenso_lower_camera_node" output="screen">
    <param name="serial" value="$(arg lower_camera_serial)"/>
    <param name="camera_frame" value="ensenso_lower_optical_frame"/>
    <param name="target_frame" value="ensenso_lower_optical_frame"/>
    <remap from="/point_cloud" to="/point_cloud_lower"/>
  </node>

</launch>

Ros driver cannot locate the pattern, while nxView can

Dear reader, I hope you can help me solve this issue. I am trying to locate a Halcon pattern via ROS and project it in RViz, so that I can use it later on for calibration. I have verified that the pattern is visible by locating it in NxView, where the axes are projected onto the pattern. When trying this in ROS, the locatePattern action gives no feedback and no results.

My launch file looks like:

<launch>
  <arg name="debug" default="false" />

  <arg unless="$(arg debug)" name="launch_prefix" value="" />

  <node name="ensenso_camera_node" pkg="ensenso_camera" type="ensenso_camera_node" respawn="false" launch-prefix="$(arg launch_prefix)" output="screen" />

  <node name="request_data" pkg="ensenso_camera" type="request_data" respawn="false" launch-prefix="$(arg launch_prefix)" output="screen" />

  <node name="static_transform_publisher" pkg="tf" type="static_transform_publisher" args="334.834 207.774 973.184 -0.704692 -0.379771 -0.599319 1.69578 map ensenso_optical_frame 100" launch-prefix="$(arg launch_prefix)" />

<node name="rviz" pkg="rviz" type="rviz" output="screen"  />
</launch>

Via the terminal I start the action with:

rostopic pub /locate_pattern/goal ensenso_camera_msgs/LocatePatternActionGoal "header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: ''
goal_id:
  stamp:
    secs: 0
    nsecs: 0
  id: ''
goal:
  parameter_set: ''
  number_of_shots: 5
  target_frame: ''
  tf_frame: ''" 

What should I do differently?
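As a side note, a small actionlib client (a sketch, assuming the camera node's default action namespace locate_pattern) makes it easier to see whether the action returns a result or an error than publishing to the goal topic by hand:

import rospy
import actionlib
from ensenso_camera_msgs.msg import LocatePatternAction, LocatePatternGoal

rospy.init_node("locate_pattern_client")

client = actionlib.SimpleActionClient("locate_pattern", LocatePatternAction)
client.wait_for_server()

goal = LocatePatternGoal()
goal.number_of_shots = 5
client.send_goal(goal)
client.wait_for_result()

print(client.get_result())  # inspect the error field and any found patterns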

Calibration fails despite finding enough patterns

I am trying to perform HandEyeCalibration with an Ensenso N35 camera.

I am able to find the patterns using the "Capture Pattern" command.
After finding 10 patterns I call the action with the "Start Calibration" command and receive the error "You need to collect at least 5 patterns before starting a hand eye calibration".
I also tried manually loading the robot poses corresponding to the list of camera exposures, which also did not work.

Any help would be appreciated!

EDIT: In trying to generate random poses at which to capture patterns, I did not notice that I was saving fewer than five poses, as the rest of the poses could not be reached by the robot. Each loop iteration changed my debug message but did not actually find a new pattern.

Linked camera calibration in eeprom

We noticed that the ENSENSO_INSTALL folder (e.g. /opt/ensenso) contains files that seem to link to certain memory addresses in the EEPROM of the camera. It seems that without these files, the link between the RGB and depth camera is not loaded correctly when launching the ROS driver.

Am I correct in thinking that on startup the SDK loads the link calibration from the camera EEPROM using the address in this file?

Does this mean that when launching the driver from another PC (with a cleanly installed SDK) with the same JSON file for the same camera, this link calibration is not loaded, since the address into the EEPROM of the camera is different?

If so, this means that part of the camera configuration is 'hidden' in the file (as in: it is not enough to have the data in the EEPROM and the JSON to completely reconstruct the setup). What would be the right workflow to fix this? Basically run the calibration routine on the new system again? Or is there some way to save/load this file somewhere and use it on a different installation?

about the rgbd

Hi,
I've found that you have been developing the RGBD data request feature since 2017, but it was closed in May (see "Feature/request depth rgb", #1).
Is this stopped? I think this feature is important.

By the way, I've tried the code in this pull request, but found these dependencies in ensenso_camera/package.xml:
       <depend>rgbd</depend>
       <depend>sr_foundation</depend>
I could not find these packages in ROS. Can anybody tell me where to find 'rgbd' and 'sr_foundation'?

 Thanks!!

New release

With #80 merged, could you create a new release of this repo? :)

Starting cameranode takes longer than wait_for_server

When starting the camera node, it consistently takes about 22 seconds for the camera to be opened.

[ INFO] [1518085380.386794234]: Initializing nodelet with 4 worker threads.
[ INFO] [1518085402.302794783]: Opened camera with serial number 'xxxxxx'.

The wait_for_server in e.g. imagestream and request_data is set to 10 seconds. Therefore I get a timeout error when starting the camera node and request_data from one launch file (the same holds for texture_pointcloud.launch).

How to compute the Pointcloud from disparity image?

I want to store the 3D data for later processing and would like to store only the disparity image (together with some camera matrix), to keep the stored data minimal instead of keeping the full cloud. Which information do I have to store, and how can I later compute the cloud?
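One possible approach, sketched under the assumption that the disparity image is stored together with the 4x4 stereo reprojection matrix Q from the calibration (file names below are placeholders): OpenCV can then reconstruct the 3D points later.

import cv2
import numpy as np

disparity = np.load("disparity.npy").astype(np.float32)  # stored disparity image
Q = np.load("Q.npy")                                      # stored 4x4 reprojection matrix

points_3d = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 array of X, Y, Z
valid = disparity > 0                             # keep only pixels with a valid disparity
cloud = points_3d[valid]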

RGB frame registered depth image

Hi,

As mentioned in #74, I am evaluating an X36 camera. Our application requires the depth image registered to the RGB frame. According to #55, the ros_driver outputs the colored point cloud, but it does not output the depth image in the RGB frame. I have several questions:
a) Is this still the case?
b) I tried the proposed solution in #55 (use depth_image_proc/register). However, I found that the registered depth image is not correct. See below.

RGB image (unrelated, but it seems that ros_driver publishes the RGB channels in inverted order)

Depth image (from the /stereo node)

Depth registered (obtained using depth_image_proc/register as suggested in #55).

I have used depth_image_proc/register before, so I am pretty sure I am configuring it correctly. After checking the TF transform between the stereo and RGB cameras, it seems to me that the translation is too large, but it matches the "Link" frame information stored inside the RGB camera EEPROM... maybe something is weird with the units?

	"Link": {
		"Rotation": {
			"Angle": 0.0139332363601961714,
			"Axis": [
				0.62916486024742091,
				0.662676914584964138,
				0.406215319142441633
			]
		},
		"Target": "181961",
		"Translation": [
			-188.890016230833993,
			70.3128215477127725,
			-16.9718587224561368
		]
	},


Your comments will be appreciated.

Consider not using 'map' as the default frame

map is really more typically used with mobile robots to provide a reference for a global frame that is associated with a map. See also REP-105: Coordinate Frames for Mobile Platforms - map.

It is the default frame that RViz starts with, but this is typically worked around by either providing a custom RViz config file that already uses a different frame, or by documenting the steps needed to change the Fixed Frame. Another option is to use the -f or --fixed-frame command line option, which should allow you to specify a frame when starting RViz.

To be more in line with other ROS camera drivers, perhaps ${prefix}_rgb_optical_frame and ${prefix}_(ir|depth)_optical_frame (as needed) might be more appropriate for the sensor locations, and camera_link for the base frame of the entire camera (this of course depends on the frames that are actually needed for the different parts of the physical device).

Ueye

Currently we are trying to use an Ensenso N35. Up until recently it worked just fine.
Today the camera started throwing errors. We are not using custom code, simply the ensenso_camera_node and request_data. When trying to start the camera nodes, we sometimes get the following error, which we have not run into before:

[ERROR] [1555504175.323337616]: NxLibException 17 (ExecutionFailed) for item /Execute
[ERROR] [1555504175.323476283]: UEyeApiException: UEye API error 17(The driver could not be loaded) at </common/uEye/uEyeCamera.cpp:472>.

At times the camera does manage to start up and data is received; however, after some time ROS loses the connection and a similar error message is shown. Could this be related to the uEye driver or the SDK?

Point cloud is not with respect to the center of camera

My setup includes an Ensenso N35 mounted on a robot's end-effector. When I visualize the point cloud with respect to the world_frame (which also happens to be the robot's base_link), it appears slightly shifted (around 5 cm). It seems like the ensenso_optical_frame is actually located at one of the lenses (near the Ethernet connector, which is also 5 cm from the center) and not at the center of the camera.

I would also really appreciate an ensenso_description, as mentioned in #3, to have TFs of the depth frame and both camera frames relative to the mounting point. As a reference: the Kinect.

Failed to load nodelet /mono/Ensenso_

Hi Everyone,
I have an Ensenso S-series camera, I already installed NxView and I can see my camera's view. I am trying to connect it with ROS. I downloaded the repository and built it. When I run

roslaunch ensenso_camera register_depth_image.launch

or

roslaunch ensenso_camera nodelet.launch

it gives this error:

(screenshot of the error attached)

Thank you!

MapSMtoCores followed by core dump

I am trying to use an Ensenso N35 with ROS Kinetic on Ubuntu 16.04. When running the node as specified by the tutorial on my computer, which has an NVIDIA GPU, I get the following response:

Initializing nodelet with 16 worker threads.
MapSMtoCores for SM 7.5 is undefined. Default to use 128 Cores/SM
Segmentation fault (core dumped)

"MapSMtoCores for SM 7.5 is undefined. Default to use 128 Cores/SM" is also printed when I open NxView. When I run the command on an Ubuntu computer without an NVIDIA GPU, it runs fine. I am guessing there is an issue with the connection between the node and the GPU. I have been looking for the call that uses the GPU in the code but can't find it.
Would you be able to provide a solution?

Could not create cudnn handle

Hello, I am trying to use this driver in combination with a deep learning object detection model. The model and the camera communicate just fine; I could test this using a virtual camera that I set up in NxView. However, when I switched over to the actual camera, I get the error that the cuDNN handle could not be created. The config file that I am using was saved with CUDA disabled.
Is it possible that CUDA is activated by default when running the camera with ROS? If so, how would I change the default to not use CUDA?

Thanks in advance!

Unable to receive depth image from linked Ensenso camera

Hi, I am trying to receive a depth image from a calibrated pair of an Ensenso N35 and a uEye RGB camera, but get the following error:
Depth images are not yet supported for linked and multi camera usage. Only request depth images when using one stereo camera only.
I can only receive the colourized point cloud. Is there any way to get the depth and color images separately?

Thanks in advance!

How to get Aligned to color Depth Image

Hello, I'm wondering what the best methodology is to get a depth image aligned to color. I noticed that this tutorial shows how to get a colored point cloud: http://wiki.ros.org/ensenso_driver/Tutorials/TexturedPointCloud. This is great, but lots of algorithms rely on the RGB and depth images being aligned in order to do processing on color, find the corresponding depth pixels, and compute real-world coordinates from them.

Currently my RGB camera (uEye 525) outputs a color resolution of 1200 by 1600, and my N35 outputs 1024 by 1280. After talking with Philip from IDS, we found this: https://github.com/ensenso/ros_driver/blob/master/ensenso_camera_msgs/action/TelecentricProjection.action
which allows the user to specify a frame and have the data transformed to that perspective. We chose the color frame, and it output a 1024 by 768 image, which still isn't the same resolution as the color image. You can also specify the size, but that seems to just crop the image and not actually align them. Also, it doesn't output a camera_info topic like mono/color/camera_info or stereo/depth/camera_info.

Is there a way to do what I'm asking? The RealSense/ZED/Zivid cameras I've used all provide this topic, although they have the big advantage of being integrated in one package. However, we appreciate the flexibility that IDS and Ensenso provide with their cameras, so we are looking forward to using them.

Thanks and let me know what you think.

Port to ROS2?

Hi, thank you for providing and maintaining this ROS driver for Ensenso.

May I ask if there are any plans to port this to ROS2? 😄

If there is none, is there a way for people like me to start contributing to a port?

Rising/Falling Edge Trigger failure

Hello, I have a uEye 525 color camera and an N35 depth camera. We are seeing an issue where the cameras aren't keeping the settings that were set through NxView.

Specifically, we are seeing that when we set the trigger for the uEye camera (software/continuous/rising edge), it also somehow gets applied to the Ensenso camera. Through NxView we can have the Ensenso camera in software trigger and the uEye camera in rising-edge trigger (the ideal scenario).

When we use the ROS node with the uEye camera set to rising edge, it crashes the node. We did some experiments and found that when we set the uEye to continuous trigger, it somehow also changes the Ensenso to continuous trigger (even if its JSON parameter file says differently).

We notice that if you set the uEye to rising/falling edge in NxView without the N35 also connected, it also crashes. If we can't somehow sync this with the rising/falling edge, we get the blue light in our RGB image, which isn't ideal. Here is our error:

[ERROR] [1614298497.177103587]: NxLibException 17 (ExecutionFailed) for item /Execute/4103638864
[ERROR] [1614298497.177372969]: CaptureTimeout: Image capture timed out.
[ERROR] [1614298497.579554535]: NxLibException 17 (ExecutionFailed) for item /Execute/204774
[ERROR] [1614298497.579626236]: CaptureTimeout: Image capture timed out.
[WARN] [1614298497.602803]: Stereo request action was not successful.
[WARN] [1614298497.603948]: code: 17
message: "ExecutionFailed"
[WARN] [1614298497.605043]: Mono request action was not successful.
[WARN] [1614298497.605817]: code: 17
message: "ExecutionFailed"

Running camera node without specifying serial causes crash

When I run the camera node (or nodelet) without specifying the serial number, rosrun ensenso_camera ensenso_camera_node, I get a crash (segmentation fault).

Looking at the code, the error starts here:

// No serial specified. Try to use the first camera in the tree.
NxLibItem cameras = NxLibItem()[itmCameras][itmBySerialNo];
if (cameras.count() > 0)
{
   serial = cameras[0].name();
}

In my item tree I have no itmBySerialNo, and so serial is actually assigned to byEepromId. The crash then happens here:

std::string type = NxLibItem()[itmCameras][itmBySerialNo][serial][itmType].asString();

Perhaps the BySerialNo item was removed in the upgrade to SDK 2.3?

Workaround

Specify the serial parameter explicitly, don't rely on auto-selection.

System

  • Ubuntu 18.04.03
  • Ensenso SDK 2.3.1174
