
Explore-Bench

Explore-Bench is developed to evaluate traditional frontier-based and deep-reinforcement-learning-based autonomous exploration approaches in a unified and comprehensive way.

The related paper, "Explore-Bench: Data Sets, Metrics and Evaluations for Frontier-based and Deep-reinforcement-learning-based Autonomous Exploration", has been accepted to the 2022 International Conference on Robotics and Automation (ICRA 2022).

Features:

  • Data Sets: various basic exploration scenarios (i.e., loop, narrow corridor, corner, and multiple rooms) and their combinations are designed.
  • Metrics: two types of metrics (efficiency metrics and collaboration metrics) are proposed; an illustrative sketch of both follows this list.
  • Platform: a 3-level platform with a unified data flow and a 12× speed-up is built; it includes a grid-based simulator for fast evaluation and efficient training, a realistic Gazebo simulator, and a remotely accessible robot testbed for high-accuracy tests in physical environments.
  • Evaluations: one DRL-based and three frontier-based exploration approaches are evaluated and some insights about the selection and design of exploration methods are provided.
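
As a concrete illustration of the two metric families, here is a minimal Python sketch; it is an illustration only, not the benchmark's actual implementation (which lives in exploration_benchmark/scripts). The efficiency metric shown is the time needed to reach a coverage threshold, and the collaboration metric is the overlap ratio: (sum of the areas explored by the individual robots minus the area of the merged map) divided by the area of the merged map.

def time_to_coverage(timestamps, coverage, threshold=0.9):
    # Efficiency metric (illustrative): first timestamp at which the
    # explored fraction of the free space reaches `threshold`.
    for t, c in zip(timestamps, coverage):
        if c >= threshold:
            return t
    return float('inf')  # the threshold was never reached

def overlap_ratio(robot_maps, merged_map):
    # Collaboration metric (illustrative):
    # (sum of per-robot explored areas - merged explored area) / merged explored area.
    # `robot_maps` is a list of boolean NumPy arrays (explored cells per robot)
    # and `merged_map` is the boolean union of all of them.
    per_robot = sum(int(m.sum()) for m in robot_maps)
    merged = int(merged_map.sum())
    return (per_robot - merged) / float(merged)

# Example: two robots exploring 60 and 40 disjoint cells of a 100-cell map
# give an overlap ratio of (60 + 40 - 100) / 100 = 0.0.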

Dependency

The project has been tested on Ubuntu 18.04 (ROS Melodic). To run the exploration approaches on a desktop PC or real robots, please install these packages first:

Cartographer for map building

Cartographer is a 2D/3D map-building method that provides submap and trajectory information while building the map. We slightly modified the original Cartographer to make it applicable to multi-robot SLAM and exploration. Please refer to Cartographer-for-SMMR to install the modified Cartographer into carto_catkin_ws, and then

source /PATH/TO/CARTO_CATKIN_WS/devel_isolated/setup.bash

MAPPO for reinforcement learning training and evaluation

MAPPO is a multi-agent variant of PPO (Proximal Policy Optimization), a state-of-the-art on-policy reinforcement learning algorithm. We slightly modified the original marlbenchmark/on-policy implementation and provide the training and test code in this repo.

# create conda environment
conda create -n marl python==3.8.10
conda activate marl
pip install torch torchvision
# install onpolicy package
cd onpolicy
pip install -e .
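
As an optional sanity check (our suggestion, not a required step), both packages should now import cleanly with the marl environment activated:

# run inside the marl environment
import torch
import onpolicy  # available after `pip install -e .` in the onpolicy directory

print(torch.__version__, torch.cuda.is_available())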

We provide requirement.txt, but it may contain redundant packages. We recommend installing any other required packages as you run the code and find which ones are still missing.

Turtlebot3 Description and Simulation

sudo apt install ros-melodic-turtlebot3*
sudo apt install ros-melodic-bfl
pip install future
sudo apt install ros-melodic-teb-local-planner
echo 'export TURTLEBOT3_MODEL=burger' >> ~/.bashrc

After installing these dependencies, put these packages in your ROS workspace (i.e., catkin_ws/src) and run catkin_make.
Importantly, the LiDAR range used in our paper is 7 m, so you should check the range of base_scan in turtlebot3_burger.gazebo.xacro.
Specifically,

roscd turtlebot3_description
gedit turtlebot3_burger.gazebo.xacro # modify the range of base_scan to 7m
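
If you want to verify the change programmatically, the sketch below is one way to do it. It assumes a sourced ROS environment (so rospack works) and the stock turtlebot3_description layout, where the scan range sits in a <range><max>…</max></range> element of the LiDAR's <ray> sensor inside urdf/turtlebot3_burger.gazebo.xacro.

import re
import subprocess

# Locate the package via rospack (assumes a sourced ROS environment).
pkg = subprocess.check_output(['rospack', 'find', 'turtlebot3_description']).decode().strip()
xacro_path = pkg + '/urdf/turtlebot3_burger.gazebo.xacro'

with open(xacro_path) as f:
    text = f.read()

# Collect every <max> value that appears inside a <range> block.
max_ranges = re.findall(r'<range>.*?<max>([\d.]+)</max>.*?</range>', text, re.S)
print('configured max scan range(s):', max_ranges)
if not any(abs(float(r) - 7.0) < 1e-6 for r in max_ranges):
    print('warning: base_scan max range is not 7 m as used in the Explore-Bench paper')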

Other Dependencies

Except for the DRL-related scripts (which run under Python 3.8), the benchmark is implemented in C++ and Python 2.7.

There are some packages to be installed:

pip install autolab_core==0.0.14

Run Frontier-based Exploration in Gazebo (Level-1)

Template:

{env}      = 'loop' or 'corridor' or 'corner' or 'room' or 'loop_with_corridor' or 'room_with_corner'
# for multiple robots, there are two env cases: 'env_close' and 'env_far' (i.e. 'loop_close' and 'loop_far')
{number_robots} = 'single' or 'two'
{method}        = 'rrt' or 'mmpf' or 'cost'
{suffix}        = 'robot' or 'robots' ('robot' when number_robots is 'single', 'robots' otherwise)
# build simulation environment
roslaunch sim_env {env}_env_{number_robots}_{suffix}.launch
# start cartographer map building and move_base
roslaunch sim_env {number_robots}_{suffix}.launch
# start frontier-based exploration
roslaunch exploration_benchmark {number_robots}_{method}_node.launch
# start data logging and evaluating the exploration performance
cd exploration_benchmark/scripts
python exploration_metric_for_{number_robots}_{suffix}.py '../blueprints/{env}.pgm' '../blueprints/{env}.yaml'

For example,

  • Room Environment -- Single Robot -- Field-based Exploration (MMPF)
roslaunch sim_env room_env_single_robot.launch
roslaunch sim_env single_robot.launch
roslaunch exploration_benchmark single_mmpf_node.launch

Then, start a new terminal and use our proposed metrics to evaluate the exploration performance:

cd exploration_benchmark/scripts
python exploration_metric_for_single_robot.py '../blueprints/room.pgm' '../blueprints/room.yaml'

Finally, select the "Publish Point" button in RViz and then click anywhere on the map to start the exploration.

  • Corridor Environment (Far) -- Two Robots -- Cost-based Exploration
roslaunch sim_env corridor_far_env_two_robots.launch
roslaunch sim_env two_robots.launch
roslaunch exploration_benchmark two_cost_node.launch

Then, start a new terminal and use our proposed metrics to evaluate the exploration performance:

cd exploration_benchmark/scripts
python exploration_metric_for_two_robots.py '../blueprints/corridor.pgm' '../blueprints/corridor.yaml'

Run DRL-based Exploration in Gazebo (Level-1)

To run the DRL-based exploration, you need Python 3 and the conda environment (marl) created above.

Single Robot

First, start the simulation environment, mapping module, RViz visualization, and performance evaluation:

roslaunch sim_env room_env_single_robot.launch
roslaunch sim_env single_robot.launch
roslaunch exploration_benchmark single_rl_node.launch
cd exploration_benchmark/scripts
python exploration_metric_for_single_robot.py '../blueprints/room.pgm' '../blueprints/room.yaml'

Then, start a new terminal and run the DRL-based exploration:

# enter conda env
source ~/anaconda3/bin/activate marl
cd exploration_benchmark/scripts/DRL
# run the DRL model
bash run_single_robot.sh

Multiple Robots

First, start the simulation environment, mapping module, RViz visualization, and performance evaluation:

roslaunch sim_env room_far_env_two_robots.launch
roslaunch sim_env two_robots.launch
roslaunch exploration_benchmark two_rl_node.launch
cd exploration_benchmark/scripts
python exploration_metric_for_two_robots.py '../blueprints/room.pgm' '../blueprints/room.yaml'

Then, start a new terminal and run the DRL-based exploration:

# enter conda env
source ~/anaconda3/bin/activate marl
cd exploration_benchmark/scripts/DRL
# run the DRL model
bash run_two_robots.sh

Note: you need to select the "Publish Point" button in RViz and then click anywhere on the map to start the performance evaluation.

Train DRL Model in Grid-based Simulator (Level-0)

Explore-Bench supports the DRL training in a fast grid-based simulator (Level-0).

Follow these steps to train your own DRL model:

# enter conda env
source ~/anaconda3/bin/activate marl
# ensure that you have installed the MAPPO dependency
cd onpolicy/onpolicy/scripts
# run the DRL training 
bash train_grid.sh

The training parameters can be modified according to the user's needs, e.g., the number of robots (num_agents), the hidden size, the batch size, and so on.

Refer to train_grid.sh for details.

Evaluate Exploration Approaches in Grid-based Simulator (Level-0)

Besides training DRL models, the grid-based simulator can be used for fast evaluation of both frontier-based and DRL-based methods.

The field-based (mmpf) and cost-based (cost) exploration approaches are used as examples:

# enter conda env
source ~/anaconda3/bin/activate marl
cd grid_simulator
# evaluate cost in corner env (2 robots)
python GridEnv.py cost 2 ../onpolicy/onpolicy/envs/GridEnv/datasets/corner.pgm
# evaluate cost in corner env (1 robot)
python GridEnv.py cost 1 ../onpolicy/onpolicy/envs/GridEnv/datasets/corner.pgm
# evaluate mmpf in corner env (2 robots)
python GridEnv.py mmpf 2 ../onpolicy/onpolicy/envs/GridEnv/datasets/corner.pgm
# evaluate mmpf in corner env (1 robot)
python GridEnv.py mmpf 1 ../onpolicy/onpolicy/envs/GridEnv/datasets/corner.pgm
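
If you want to sweep several methods and robot counts in one go, the command-line interface above can be wrapped in a small driver script. This is only a sketch: it assumes the argument order (method, number of robots, map file) shown above and that it is run from the grid_simulator directory.

import subprocess

methods = ['cost', 'mmpf']
robot_counts = [1, 2]
map_pgm = '../onpolicy/onpolicy/envs/GridEnv/datasets/corner.pgm'

for method in methods:
    for n in robot_counts:
        print('=== evaluating {} with {} robot(s) ==='.format(method, n))
        # GridEnv.py is invoked as: python GridEnv.py <method> <number_of_robots> <map.pgm>
        subprocess.check_call(['python', 'GridEnv.py', method, str(n), map_pgm])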

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{xu2022explore,
  title={Explore-bench: Data sets, metrics and evaluations for frontier-based and deep-reinforcement-learning-based autonomous exploration},
  author={Xu, Yuanfan and Yu, Jincheng and Tang, Jiahao and Qiu, Jiantao and Wang, Jian and Shen, Yuan and Wang, Yu and Yang, Huazhong},
  booktitle={2022 International Conference on Robotics and Automation (ICRA)},
  pages={6225--6231},
  year={2022},
  organization={IEEE}
}


Issues

Wall Obstacles Grey After Map Merge

Hi @FarawaySail,

I would like to ask a question regarding your map_merge package.

Could you kindly explain why, in certain instances, some portions of the merged map are grey (see the gaps in the attached image)? I have read that cells in an occupancy grid map are usually represented by values from 0 to 100, so grey is probably somewhere in between. Any insights on this? (Note the different color of 'grey' cells vs. 'unknown' cells.)


Warm Regards,
Derek

Exploration Issue

Hi @FarawaySail,

Thank you for this exploration benchmark. It's a really cool and important initiative, which will help us with benchmarking our exploration code.

Unfortunately, the baselines you have provided do not seem to work. With all of the example launch files you have given, the robots seem to get stuck and are unable to progress with exploration.

Do you have any insights as to why?

Thanks again,
Derek Tan

Confusing Collaboration Metric

Hi @FarawaySail,

Thanks for your repository. I am confused by how you defined the overlap ratio. Why is the overlap area [summation(all robots' area) - total map area ] / total map area?

Let's say robot 1 explores 60 pixels, robot 2 explores 40 pixels, and the total map has 100 pixels to be explored. By this formula, the overlap ratio is [(60 + 40) - 100] / 100 = 0. However, there may be significant overlap between robot 1's 60 px and robot 2's 40 px. Do you mind explaining this in more detail?


Question on reinforcement-learning-based exploration

I have looked at the code in "GridEnv.py" and found that the reward computation and action execution for the agent are derived from the prior knowledge "self.gt_map"; this is not necessarily wrong if it is only used to compute the reward.
In my opinion, it should use the simulation data (such as the occupancy grid obtained during mapping).
Also, I do not understand the reward setting: the time penalty is a constant, so do you mean that fewer steps is better?

There's something wrong in your README.

Your code in the readme is:
echo 'export TURTLEBOT3_MODEL=burger' > ~/.bashrc
This would rewrite the whole file. It should be:
echo 'export TURTLEBOT3_MODEL=burger' >> ~/.bashrc

Single robot cannot complete exploration on most maps

Hi, thanks for this cool exploration benchmark!

When I try to run single-robot exploration on each map, I find that for most maps (like corner, room, and corridor) the robot explores only about a quarter of the map, or even gets stuck at the very beginning and then stays still. I have only run the "loop" map successfully, which seems to be the easiest map compared to the others. However, the time cost for 90% exploration is much higher than the paper reports: I got 393.3 s with the "cost" method, while the time in the paper is 124 s.

Is this because of a different hardware system? I noticed that the timing is based on the absolute time cost rather than the number of exploration steps of the method.

I would appreciate it if you could help me solve these two problems (i.e., failed exploration on most maps and the large gap in time cost).

New Map Merge Script

Hi @FarawaySail,

Hope you are well. I am trying out your repository, and it works great. I noticed that you added a map_merge_new.cpp within the map_merge folder, along with additional changes to the original map_merge repository.

Would you mind explaining more about the reasons, as well as key considerations, as to why you made these changes?

Much Thanks,
Derek

Issue about Exploration on a large map

Hi, @FarawaySail
Your "Explore Bench" tool is fantastic! It's been incredibly helpful in our research. Thank you for creating such a wonderful resource.

Unfortunately, when testing exploration on a large map, such as the "tunnel" map in the CMU environment, I have encountered issues. If I run multi-robot exploration, the planner script (MMPF, RRT, etc.) crashes at the beginning.

Here is the video link: https://drive.google.com/file/d/1SOM2cDIWOeORzpQJ1U9p4Q5evb3Ozrxl/view?usp=drive_link
error msg: https://drive.google.com/file/d/1CL56A7B5Q_g1buQdUBNWa4UtXBc2pjl5/view?usp=drive_link

When running single-robot exploration, the planner script crashes once the explored map becomes too large.

Here is the video link: https://drive.google.com/file/d/1PN73ti6xHTDqwFe5T1olAV_QZZe04yWh/view?usp=drive_link
error msg: https://drive.google.com/file/d/1CvzsvO_zpnM93zzvge4-2FxsK7e4iD9s/view?usp=drive_link

Although the output says the error log is in "robot1_mmpf-1.log", this file does not exist.

Do you have any insights on this?

Map merge failed

Thank you very much for the outstanding work of your team.

I encountered a problem while running:

roslaunch sim_env corridor_far_env_two_robots.launch
roslaunch sim_env two_robots.launch
roslaunch exploration_benchmark two_cost_node.launch
During the exploration process, map fusion did not occur because the relative poses between the robots were unknown:

"Could not get initial position for robot."

I think maybe the "known_init_poses" parameter in the "map_merge" node should be set to false.

However, even after setting it, I still encountered the situation shown in the attached screenshot: there seems to be an error in map fusion. After that, I increased the "estimation_confidence" to 30.0, but the same problem still occurred.

Are there any additional modifications needed for map merge?

How do you measure the coverage of the explored space?

Dear authors,

Thanks for the nice software.

Your ICRA 2022 paper describes two coverage metrics, 1) T_total and 2) T_topo, which are associated with the coverage of the exploration space. Could you let me know how you measured the coverage? I am working with your software, but it seems to publish only the exploration time and the trajectory length; I cannot locate which topic outputs the coverage of the environment.

Best,

TF tree is missing robot1/odom and robot1/map

My TF tree looks like this (screenshot attached).
When I then start Cartographer with roslaunch sim_env two_robots.launch, it reports all kinds of TF transform errors and warnings (screenshot attached).
Is there something I haven't set up correctly?

Something wrong in the function "self._resize_convert" in grid_runner.py

Hello, what great work you've done!
But I found something wrong in onpolicy/onpolicy/runner/shared/grid_runner.py (see the attached screenshot).
In lines 524 and 526, the variables "obs" and "shared_obs" are actually the same: both are produced by the function "self._resize_convert" with the same input parameters. However, they are used as the inputs of the actor and the critic, respectively (maybe the input of the critic should be the global observation?).
Additionally, I find that "obs" and "shared_obs" are not updated in the function "self._resize_convert", in which only raw_ob is updated.
