Comments (11)
We looked into this issue and it looks like the interaction between the realsense2_camera package and the tf2 package is causing it. As a workaround, do not pass these parameters in the isaac_ros_visual_slam_realsense.launch.py launch file: input_left_camera_frame and input_right_camera_frame. In this mode, the camera info will be used to infer the transformation between the left and right imagers.
Additionally, there have been a few changes in the parameters of the realsense node. Use 'depth_module.emitter_enabled': 0 to disable the emitter and 'depth_module.profile': 'widthxheightxfps' to set the resolution and fps.
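Put together, the updated parameter style described above can be sketched as a plain dictionary (the 640x360 @ 90 fps profile value is an illustrative assumption; substitute your own resolution and fps):

```python
# Hedged sketch of the DP2.0-era realsense-ros parameter style.
realsense_parameters = {
    # The old per-key style ('infra_width', 'infra_height', 'infra_fps')
    # was removed; resolution and fps are now one profile string.
    'depth_module.profile': '640x360x90',
    # Emitter control takes an integer, not a boolean.
    'depth_module.emitter_enabled': 0,
    'enable_color': False,
    'enable_depth': False,
}
```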
from isaac_ros_visual_slam.
same problem
Could you please tell us a little more about your setup? You mentioned running with ROS2 Foxy, but DP1.0 onwards only supports Humble. You should be getting two camera_info topics from the RealSense camera, one for each imager, which should be remapped to the appropriate topics in Isaac ROS VSLAM. As for performance, there should be no difference between the DP2.0 and DP1.1 releases. Could you tell us more about your evaluation method(s)?
Hi @swapnesh-wani-nvidia , thanks for the reply.
Regarding ROS: I installed ROS2 Foxy outside the container, while the container from this repo runs ROS2 Humble; everything below runs inside the container.
I tested this function with the default launch file "isaac_ros_visual_slam_realsense.launch.py"; here is the launch file with the camera node configuration:
import launch
from launch_ros.actions import ComposableNodeContainer, Node
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    """Launch file which brings up visual slam node configured for RealSense."""
    realsense_camera_node = Node(
        name='camera',
        namespace='camera',
        package='realsense2_camera',
        executable='realsense2_camera_node',
        parameters=[{
            'infra_height': 360,
            'infra_width': 640,
            'enable_color': False,
            'enable_depth': False,
            # On first launch the camera projected IR dots, so I disabled the
            # depth module emitter, which successfully turned it off. With the
            # emitter on, the "tracker is lost" warnings are fewer but the
            # performance is just as bad; with it off, the warnings keep
            # popping up and the performance is equally bad.
            'stereo_module.emitter_enabled': False,
            'depth_module.emitter_enabled': 0,
            'infra_fps': 90.0
        }]
    )

    visual_slam_node = ComposableNode(
        name='visual_slam_node',
        package='isaac_ros_visual_slam',
        plugin='isaac_ros::visual_slam::VisualSlamNode',
        parameters=[{
            'enable_rectified_pose': True,
            'denoise_input_images': True,
            'rectified_images': False,
            'enable_debug_mode': False,
            'debug_dump_path': '/tmp/elbrus',
            'enable_slam_visualization': True,
            'enable_landmarks_view': True,
            'enable_observations_view': True,
            'map_frame': 'map',
            'odom_frame': 'odom',
            'base_frame': 'camera_link',
            'input_left_camera_frame': 'camera_infra1_frame',
            'input_right_camera_frame': 'camera_infra2_frame'
        }],
        remappings=[('stereo_camera/left/image', '/camera/infra1/image_rect_raw'),
                    ('stereo_camera/left/camera_info', '/camera/infra1/camera_info'),
                    ('stereo_camera/right/image', '/camera/infra2/image_rect_raw'),
                    ('stereo_camera/right/camera_info', '/camera/infra2/camera_info')]
    )
    # /camera/realsense_splitter_node/output/infra_1

    visual_slam_launch_container = ComposableNodeContainer(
        name='visual_slam_launch_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[visual_slam_node],
        output='screen'
    )

    return launch.LaunchDescription([
        visual_slam_launch_container,
        realsense_camera_node
    ])
And this is the header of one /camera/infra1/camera_info message. I don't know if it is right or wrong, but I assume this info is subscribed to by the VSLAM node.
header:
  stamp:
    sec: 1666677052
    nanosec: 868972032
  frame_id: camera_infra1_optical_frame
height: 480
width: 848
distortion_model: plumb_bob
d:
- 0.0
- 0.0
- 0.0
- 0.0
- 0.0
k:
- 423.5382995605469
- 0.0
- 422.2384948730469
- 0.0
- 423.5382995605469
- 238.14329528808594
- 0.0
- 0.0
- 1.0
r:
- 1.0
- 0.0
- 0.0
- 0.0
- 1.0
- 0.0
- 0.0
- 0.0
- 1.0
p:
- 423.5382995605469
- 0.0
- 422.2384948730469
- 0.0
- 0.0
- 423.5382995605469
- 238.14329528808594
- 0.0
- 0.0
- 0.0
- 1.0
- 0.0
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: false
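For context on how the left-right transform can be inferred from camera info alone: in a rectified stereo pair, the right camera's projection matrix carries the baseline in P[0,3] = -fx * baseline, while the left camera (as in the message above) has P[0,3] = 0. A minimal sketch using the fx from the message above; the 50 mm baseline is an illustrative assumption, not a value from this thread:

```python
# Hedged sketch: recovering a stereo baseline from CameraInfo projection
# matrices, assuming rectified cameras.
fx = 423.5382995605469       # P[0,0] from the left camera_info above
right_p03 = -fx * 0.05       # hypothetical right-camera P[0,3] for a 50 mm baseline
baseline_m = -right_p03 / fx # baseline in meters, recovered from P alone
```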
Finally, I noticed that the IR camera's exposure may be poor and the resolution is low (360p). Do you use VSLAM with a RealSense in this same way? For the evaluation, I judged that VSLAM is not working normally by human sense (there is a lot of drift and the pose does not follow my actual movement), so I think something is wrong; otherwise such errors would not come up.
Also, I'm also working on this tutorial application:
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md
It has a splitter node that toggles the emitter on and off on alternating frames: the emitter-off frames go to the IR output, while the emitter-on frames preserve the depth camera's performance. The IR output is therefore effectively 30 fps, but the camera info stays at 60 fps. Would this affect performance? This may be a separate problem, though, since I need to get the above working before moving on to it.
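In that tutorial setup, VSLAM's image remappings point at the splitter outputs instead of the raw infra topics. A hedged sketch of what those remappings could look like: the infra_1 topic matches the commented-out hint in the launch file above, while the infra_2 name is assumed by symmetry and should be confirmed with ros2 topic list:

```python
# Hedged sketch: VSLAM remappings aimed at the realsense_splitter outputs,
# so VSLAM only receives emitter-off infra frames. Topic names are
# assumptions to verify on your install.
splitter_remappings = [
    ('stereo_camera/left/image', '/camera/realsense_splitter_node/output/infra_1'),
    ('stereo_camera/left/camera_info', '/camera/infra1/camera_info'),
    ('stereo_camera/right/image', '/camera/realsense_splitter_node/output/infra_2'),
    ('stereo_camera/right/camera_info', '/camera/infra2/camera_info'),
]
```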
I might calibrate the camera and try again to see whether that is the problem.
Again, thanks for your reply; any advice is welcome.
Same problem here.
I reinstalled the OS on a Jetson AGX Orin using SDK Manager and installed DP2 isaac_ros_visual_slam and DP2 isaac_ros_nvblox.
I tested this function with the default launch file "nvblox_vslam_realsense.launch.py" using a D435.
I have the same issue, although I don't know what to expect since I haven't used other versions of Isaac ROS.
Edit
I just realized that the RealSense parameters from "isaac_ros_visual_slam_realsense.launch.py" aren't parsed. The infra cameras are still running at 848x480 @ 30 fps (at least on my install). Maybe the issue stems from a change in the ros2-beta branch of realsense-ros?
A quick look at the changes suggests that some of the parameter config options were removed from the "rs_launch.py" file in realsense-ros as well.
In other words, maybe the break comes from realsense-ros and not from the isaac_ros DP1.1 to DP2.0 upgrade?
A snappy response from the realsense-ros team confirmed my suspicion: IntelRealSense/realsense-ros#2520 (comment)
I have changed the RealSense example launch file to configure the RealSense parameters and submitted PR #57.
Sadly, the change didn't have a noticeable effect on performance, though.
As a workaround, do not pass these parameters in the isaac_ros_visual_slam_realsense.launch.py launch file: input_left_camera_frame and input_right_camera_frame.
@swapnesh-wani-nvidia
This fixed all my problems, thanks a bunch
We looked into this issue and it looks like the interaction between the realsense2_camera package and the tf2 package is causing it. As a workaround, do not pass these parameters in the isaac_ros_visual_slam_realsense.launch.py launch file: input_left_camera_frame and input_right_camera_frame. In this mode, the camera info will be used to infer the transformation between the left and right imagers. Additionally, there have been a few changes in the parameters of the realsense node. Use 'depth_module.emitter_enabled': 0 to disable the emitter and 'depth_module.profile': 'widthxheightxfps' to set the resolution and fps.
Thanks, and I will test it as soon as I switch back to DP2.0.
Thanks @swapnesh-wani-nvidia ! Your suggestions solved the problem. However, with fast motion the estimation can diverge. Would using the D435i IMU improve the estimation and make it more robust?
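On the IMU question: the D435i's gyro and accel streams can be united into a single IMU topic on the RealSense side and then fed to VSLAM. A hedged sketch of the knobs involved; every name below is an assumption to verify against your installed realsense-ros and Isaac ROS release (in particular, the VSLAM IMU switch has appeared as both 'enable_imu' and 'enable_imu_fusion' across releases):

```python
# Hedged sketch, not a verified config: parameter/topic names are
# assumptions to check against the docs of your installed release.
realsense_imu_parameters = {
    'enable_gyro': True,    # publish the gyro stream
    'enable_accel': True,   # publish the accel stream
    'unite_imu_method': 2,  # 2 = linear interpolation into one IMU topic
}
visual_slam_imu_parameters = {
    'enable_imu': True,     # named 'enable_imu_fusion' in later releases
}
# VSLAM's IMU input would then be remapped to the camera's united IMU topic.
imu_remapping = ('visual_slam/imu', '/camera/imu')
```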
This hotfix updated the realsense launch file in this repo as per the discussions above.