
Urban Terrain Dataset


🛠️ Installation | 🎥 Video | 📖 Paper (RA-L)

The dataset corresponding to the paper 'Learning Self-supervised Traversability with Navigation Experiences of Mobile Robots: A Risk-aware Self-training Approach', accepted for publication in RA-L in February 2024.

demo

Our tasks of interest are: terrain mapping, ground obstacle detection, and the estimation of 'robot-specific' traversability.

Why we made a custom dataset

While well-known datasets like KITTI provide extensive data for robotic perception, they often fall short in addressing the specific needs of learning robot-specific traversability. That is, KITTI and similar datasets such as Cityscapes and nuScenes are mainly designed for general applications and may not capture the unique environmental and operational challenges faced by specific robots. On the other hand, the robot's own navigation experiences provide rich contextual information that is crucial for understanding and navigating complex urban terrains.

Given a robotic platform, we collected urban terrain data from its onboard measurements and labeled the data using only a simple manual driving experience of the robot. Here are some good reasons for using onboard measurements and the robot's own navigation experience when learning robot-specific traversability:

  • Data Scalability: Leveraging the robot's own sensors and navigation experiences allows for the collection of large-scale datasets without extensive manual annotation effort. This approach enables continuous and automated data gathering as the robot operates, facilitating the creation of extensive datasets that capture diverse environmental conditions and scenarios. All we have to do is manually drive the robot in the target environment, which is typically done anyway when constructing the map of the environment with the aid of a SLAM system.
  • Robot-Environment Adaptability: Data collected directly from the robot's sensors ensures that the training data is highly relevant to the specific robot and its operating environment. This allows the model to adapt to the unique characteristics of the robot, such as its locomotion capabilities and its sensor configuration, leading to more accurate learning and prediction of traversability.

About the dataset

  • Data Format: Our datasets are provided as files in the rosbag format. For more information about rosbag, see rosbag/Tutorials and rosbag/API Documentation.

  • What's in our dataset?: In each file of our datasets, the following ROS messages are provided:

    • LiDAR measurements (velodyne_msgs/VelodyneScan)
    • IMU measurements (sensor_msgs/Imu)
    • Odometry pose (/tf)
    • Extrinsic parameters of the sensors (/tf_static)

[Note]: To reduce the size of the datasets, only the packet messages of the LiDAR sensor were recorded. This means that the LiDAR packets have to be unpacked before the recorded point cloud measurements can be played back. For this purpose, the vlp16packet_to_pointcloud.launch file handles the conversion of LiDAR packets to point clouds.
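If you want to check what a recorded bag contains before playing it, the following is a minimal sketch (not part of the dataset tools) using the rosbag Python API; the bag file name is just an example:

import rosbag

# Open one of the downloaded recordings (file name is an example placeholder).
bag = rosbag.Bag('parking_lot.bag')

# Print each recorded topic with its message type and message count.
for topic, info in bag.get_type_and_topic_info().topics.items():
    print(topic, info.msg_type, info.message_count)

bag.close()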

  • Robotic Platform: A two-wheeled differential-drive robot, ISR-M3, was used to collect the datasets. The robot was equipped with a single 3D LiDAR and an IMU. During the experiments, the 3D pose of the robot was estimated with a LiDAR-inertial odometry (LIO) system.

  • Environments: We mainly provide two datasets with distinct ground surface characteristics. The training (blue) / testing (red) trajectories of the robot are shown in the aerial images below.

    • Urban campus: This main target environment spans approximately 510 m x 460 m, with a maximum elevation change of 17 m. The maximum inclination of the terrain is 14 degrees. The environment mostly consists of asphalt terrain; some damaged roads, cobblestone pavements, and roads with small debris make it challenging.
    • Rural farm road: We additionally validated our approach in unstructured environments. The farm road areas are typically unpaved dirt or gravel and include various low-height ground obstacles.

demo

demo demo

Get the Data

Download

Use the following links to download the datasets.

1. Urban Campus Dataset: [Google Drive]

2. Farm Road Dataset: [Google Drive]

Play

We provide the terrain_dataset_player ROS package for playing the datasets with basic rviz visualization settings. Please follow the instructions below to play the recorded rosbag files.

Dependencies

In order to run the terrain_dataset_player package, please install the dependencies below:

  • Ubuntu (tested on 20.04)
  • ROS (tested on Noetic)
  • velodyne_pointcloud (Velodyne ROS driver for unpacking LiDAR packet msgs)

Installing the velodyne_pointcloud binaries via apt should work:

sudo apt install ros-noetic-velodyne-pointcloud

Build

We recommend using catkin_tools to build the ROS packages (it is not mandatory). Installing the catkin_tools package via apt should work:

sudo apt install python3-catkin-tools

Use the following commands to download and build the terrain_dataset_player package:

cd ~/your-ros-workspace/src
git clone https://github.com/Ikhyeon-Cho/urban-terrain-dataset.git
cd ..
catkin build terrain_dataset_player   ## If not using catkin_tools, use catkin_make
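
After the build finishes, source the workspace setup file so that roslaunch can find the newly built package (standard catkin workflow; the path below assumes the default devel space of your workspace):

source ~/your-ros-workspace/devel/setup.bash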

Run the player

  1. Place the downloaded rosbag files (see the Download section above) into the data folder.
  2. Run the command below to play the rosbag files in the data folder:
## Example: parking_lot.bag
roslaunch terrain_dataset_player parking_lot.launch playback_speed:=4.0
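
While the player is running, you can check from another terminal that the LiDAR packets are being unpacked into point clouds. The topic name below assumes the standard velodyne_pointcloud output topic and is not documented above, so treat it as an assumption:

rostopic hz /velodyne_points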

Citation

Thank you for citing our paper if it helps your research projects:

Ikhyeon Cho, and Woojin Chung. 'Learning Self-Supervised Traversability With Navigation Experiences of Mobile Robots: A Risk-Aware Self-Training Approach', IEEE Robotics and Automation Letters, 2024.

@article{cho2024traversability,
  title={Learning Self-Supervised Traversability With Navigation Experiences of Mobile Robots: A Risk-Aware Self-Training Approach}, 
  author={Cho, Ikhyeon and Chung, Woojin},
  journal={IEEE Robotics and Automation Letters}, 
  year={2024},
  volume={9},
  number={5},
  pages={4122-4129},
  doi={10.1109/LRA.2024.3376148}
}


urban-terrain-dataset's Issues

extrinsic matrix between lidar and imu

This dataset is good, but I currently have some doubts about the extrinsic matrix between the sensors.
May I ask whether the extrinsic matrix between the lidar and the imu in campus_road_camera.bag refers to the base_to_imu transform from the parking lot, or whether the imu is considered to be in the base coordinate system?
Here is my simple code for converting from the lidar to the imu coordinate system; I'm not sure if it's correct.

import numpy as np
from scipy.spatial.transform import Rotation as R

# Transformation from base to lidar
rpy_lidar = [-0.05235, 0.005, -0.003]
rotation_lidar = R.from_euler('xyz', rpy_lidar, degrees=False)
R_lidar = rotation_lidar.as_matrix()
t_lidar = np.array([-0.09, 0, 1.06])

# Transformation from base to imu
t_imu = np.array([-0.05, 0.00, 0.35])
rpy_imu = [0.00, 0.00, 0.00]
rotation_imu = R.from_euler('xyz', rpy_imu, degrees=False)
R_imu = rotation_imu.as_matrix()

# Create 4x4 transformation matrices
T_base_to_lidar = np.eye(4)
T_base_to_lidar[:3, :3] = R_lidar
T_base_to_lidar[:3, 3] = t_lidar
T_base_to_imu = np.eye(4)
T_base_to_imu[:3, :3] = R_imu
T_base_to_imu[:3, 3] = t_imu

# Compute the inverse transformation from base to imu
T_base_to_imu_inv = np.linalg.inv(T_base_to_imu)

# Compute the transformation from lidar to imu
T_lidar_to_imu = np.dot(T_base_to_imu_inv, T_base_to_lidar)

# Extract rotation matrix and translation vector
R_lidar_to_imu = T_lidar_to_imu[:3, :3]
t_lidar_to_imu = T_lidar_to_imu[:3, 3]
print("Rotation matrix from lidar to imu (R):")
print(R_lidar_to_imu)
print("Translation vector from lidar to imu (T):")
print(t_lidar_to_imu)

Rotation matrix from lidar to imu (R):
[[ 9.71213646e-01  1.59993173e-02  2.37672201e-01]
 [-1.55407445e-02  9.9987203e-01  -3.80307975e-03]
 [-2.37702626e-01 -2.16840434e-19  9.71337975e-01]]
Translation vector from lidar to imu (T):
[0.09 0 0.222]
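
For reference, one way to check the recorded extrinsics directly is to dump the /tf_static messages from a bag. The following is a minimal sketch (not from the issue thread; the bag file name is only an example):

import rosbag

# Print every static transform recorded on /tf_static: frame ids, translation, rotation.
bag = rosbag.Bag('campus_road_camera.bag')
for _, msg, _ in bag.read_messages(topics=['/tf_static']):
    for tf in msg.transforms:
        t, q = tf.transform.translation, tf.transform.rotation
        print(tf.header.frame_id, '->', tf.child_frame_id,
              'translation:', (t.x, t.y, t.z),
              'rotation (xyzw):', (q.x, q.y, q.z, q.w))
bag.close()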
