
CERLAB UAV Autonomy Framework


Welcome to the CERLAB UAV Autonomy Framework, a versatile and modular framework for autonomous unmanned aerial vehicles (UAVs). This framework comprises distinct components (simulator, perception, mapping, planning, and control) to achieve autonomous navigation, unknown exploration, and target inspection.

Author: Zhefan Xu, Computational Engineering & Robotics Lab (CERLAB) at Carnegie Mellon University (CMU).

Contact Email: [email protected]

Video Tutorial: YouTube, BiliBili (for mainland China)

If you find this work helpful, kindly show your support by giving us a free ⭐️. Your recognition is truly valued.


News

  • 2024-01-10: The GitHub code, video demos, video tutorials, and relevant papers for our autonomous UAV framework are released. The authors will actively maintain and update this repo!

Table of Contents

  1. The Autonomy Modules Introduction
  2. Installation Guide
  3. Run Autonomy DEMO
  4. PX4 Simulation & Real Flight
  5. Citation and Reference
  6. Acknowledgement
  7. Write at the End

I. The Autonomy Modules Introduction

The functionality of each autonomy module included in this framework, listed in alphabetical order:

  • autonomous_flight: The autonomous flight package integrating all other modules for various tasks. details
  • global_planner: The global waypoint planner library for autonomous robots. details
  • map_manager: The 3D mapping library for autonomous robots. details
  • onboard_detector: The dynamic obstacle detection and tracking algorithm for autonomous robots. details
  • remote_control: The Rviz configuration and launch files for easy visualization. details
  • time_optimizer: The optimal trajectory time allocation library for autonomous robots. details
  • tracking_controller: The trajectory tracking controller for autonomous robots. details
  • trajectory_planner: The trajectory planning library for autonomous robots. details
  • uav_simulator: The lightweight Gazebo/ROS-based simulator for unmanned aerial vehicles. details

II. Installation Guide

This repo has been tested on ROS Melodic with Ubuntu 18.04 and ROS Noetic with Ubuntu 20.04, and it depends on the ROS packages octomap, mavros, and vision_msgs. Install the package with the following commands:

# step 1: install dependencies
sudo apt install ros-${ROS_DISTRO}-octomap* ros-${ROS_DISTRO}-mavros* ros-${ROS_DISTRO}-vision-msgs

# step 2: clone this repo to your workspace
cd ~/catkin_ws/src
git clone --recursive https://github.com/Zhefan-Xu/CERLAB-UAV-Autonomy.git

# optional: switch to simulation branch for autonomous_flight
# the default branch is for real flight and PX4 simulation
cd path/to/autonomous_flight
git checkout simulation

# step 3: follow the standard catkin_make procedure
cd ~/catkin_ws
catkin_make

Set up the environment variables by adding the following line to your ~/.bashrc:

source path/to/uav_simulator/gazeboSetup.bash
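
For reference, a complete ~/.bashrc addition might look like the sketch below. This assumes ROS Noetic and the default ~/catkin_ws layout from step 2; adjust the paths for your machine.

# ROS and catkin workspace environment (paths assume the default setup above)
source /opt/ros/noetic/setup.bash
source ~/catkin_ws/devel/setup.bash
# Gazebo model/plugin paths required by the simulator
source ~/catkin_ws/src/CERLAB-UAV-Autonomy/uav_simulator/gazeboSetup.bash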

III. Run Autonomy DEMO

This section shows the three most typical demos: navigation, exploration, and inspection. Note that the default environment and flight parameters might differ from the demos shown below. Please check uav_simulator for changing the simulation environments and autonomous_flight for adjusting flight parameters; a quick way to open those files is shown right after this paragraph.
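
As a convenience, ROS's rosed command opens a package file directly in your $EDITOR, which is a quick way to inspect those settings (the launch files below are the ones used later in this README; the parameters themselves may live in yaml files included by them, so browse each package's launch/cfg directories as needed):

# open the simulator and flight launch files for inspection
rosed uav_simulator start.launch
rosed autonomous_flight navigation.launch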

Before getting started, please make sure you are on the simulation branch of the submodule autonomous_flight for the following demos (please check the link for detailed explanations):

cd path/to/autonomous_flight
git branch

# if the output says you are not on the simulation branch, please run the following (otherwise please ignore):
git checkout simulation
cd ~/catkin_ws
catkin_make clean # if you switch the branch for autonomous_flight
catkin_make
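
To double-check branches across all submodules at once, the following one-liner may help (it assumes git 2.22+ for --show-current and the clone location from the installation step):

# print the current branch of every submodule
cd ~/catkin_ws/src/CERLAB-UAV-Autonomy
git submodule foreach 'git branch --show-current'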

a. Autonomous Navigation: Navigating to a given goal position and avoiding collisions.

# start simulator
roslaunch uav_simulator start.launch # the corridor env is recommended for your first trial

# open the Rviz visualization
roslaunch remote_control dynamic_navigation_rviz.launch # if your test env has dynamic obstacles

# run the navigation program
roslaunch autonomous_flight dynamic_navigation.launch # if your test env has dynamic obstacles

# --------------------------------------------------------------------------------------
# (alternatively, if your test env is purely static, you can run the following instead)
# open the Rviz visualization
roslaunch remote_control navigation_rviz.launch # if your test env only has static obstacles

# run the navigation program
roslaunch autonomous_flight navigation.launch # if your test env only has static obstacles

Once the robot is hovering at the predefined height (check the terminal output messages), you can use the 2D Nav Goal tool to click a goal point in Rviz; example results are shown below:

navigation_demo.mp4
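
Alternatively, a goal can be sent from the terminal. This is a sketch assuming the planner subscribes to the Rviz default 2D Nav Goal topic /move_base_simple/goal (geometry_msgs/PoseStamped) and a frame named "map"; if the topic has been remapped, check it with rostopic info first.

# publish a single goal pose (topic and frame names are assumptions)
rostopic pub --once /move_base_simple/goal geometry_msgs/PoseStamped \
  '{header: {frame_id: "map"}, pose: {position: {x: 5.0, y: 0.0, z: 1.0}, orientation: {w: 1.0}}}'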

b. Autonomous Exploration: Exploring an unknown environment and creating a map.

# start simulator
roslaunch uav_simulator start.launch # the floorplan2 env is recommended for your first trial

# open the Rviz visualization
roslaunch remote_control exploration_rviz.launch 

# run the exploration program
roslaunch autonomous_flight dynamic_exploration.launch

The example exploration process is shown in the video demo below:

exploration_demo.mp4

c. Autonomous Inspection: Navigating to the target and inspecting it with a zig-zag path.

# start simulator
roslaunch uav_simulator start.launch # the tunnel_dynamic_1 env is recommended for your first trial

# open the Rviz visualization
roslaunch remote_control inspection_rviz.launch 

# run the inspection program
roslaunch autonomous_flight dynamic_inspection.launch

The example inspection process is shown in the video demo below:

tunnel_inspection_demo.mp4

IV. PX4 Simulation & Real Flight

This section describes how to run this framework in the PX4-based simulation and how to conduct real-flight experiments. Please first follow the PX4 simulation installation guide provided in uav_simulator.

Before getting started, please make sure you are on the px4 branch of the submodule autonomous_flight for the following demos (please check the link for detailed explanations):

cd path/to/autonomous_flight
git branch

# if the output says you are not on the px4 branch, please run the following (otherwise please ignore):
git checkout px4
cd ~/catkin_ws
catkin_make clean # if you switch the branch for autonomous_flight
catkin_make

a. PX4 Simulation Experiments

The purpose of having another PX4 simulation (besides the simulator shown in the previous section) is to reproduce ALL the behaviors we might encounter in real flight. To run the same demos as in the previous section, the only change needed is to start the simulator with the following command instead.

# start PX4 simulator
roslaunch uav_simulator px4_start.launch
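
Optionally, confirm that PX4 SITL and MAVROS are connected before launching the autonomy stack; /mavros/state is the standard MAVROS state topic:

# "connected: True" in the output indicates the FCU link is up
rostopic echo -n 1 /mavros/state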

b. Real Flight Experiments

Once you have tested the flight in the PX4 simulation, the real flight experiments will exhibit exactly the same behavior that you saw in the simulation. The inputs required by this framework in real flight experiments are:

  • The robot pose/odometry: The framework requires a SLAM/VIO system that can estimate the robot states.
  • The depth image: The framework expects the depth image to detect objects and construct the map.
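
Before the actual flight, it may help to verify that both inputs are being published at a reasonable rate. The topic names below are placeholders, not the framework's configured topics; substitute the names set in autonomous_flight for your SLAM/VIO system and camera driver:

# check input rates (topic names are placeholders; adjust to your setup)
rostopic hz /vio/odometry            # robot pose/odometry
rostopic hz /camera/depth/image_raw  # depth image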

Check all the parameters in autonomous_flight accordingly before the actual flight!

c. Examples of Real Flight Experiments

a. The example of real flight experiment for autonomous navigation:

real_navigation_demo.mp4

b. The example of real flight experiment for autonomous exploration:

real_exploration_demo.mp4

c. The example of real flight experiment for autonomous inspection:

real_inspection_demo.mp4

V. Citation and Reference

If you find this work useful, please consider citing our papers:

  • Zhefan Xu*, Christopher Suzuki*, Xiaoyang Zhan, and Kenji Shimada, "Heuristic-based Incremental Probabilistic Roadmap for Efficient UAV Exploration in Dynamic Environments", IEEE International Conference on Robotics and Automation (ICRA), 2024. [paper] [video]
  • Zhefan Xu and Kenji Shimada, "Quadcopter Trajectory Time Minimization and Robust Collision Avoidance via Optimal Time Allocation", IEEE International Conference on Robotics and Automation (ICRA), 2024. [paper] [video]
  • Zhefan Xu*, Xiaoyang Zhan*, Yumeng Xiu, Christopher Suzuki, and Kenji Shimada, "Onboard dynamic-object detection and tracking for autonomous robot navigation with RGB-D camera", IEEE Robotics and Automation Letters (RA-L), 2024. [paper] [video]
  • Zhefan Xu, Baihan Chen, Xiaoyang Zhan, Yumeng Xiu, Christopher Suzuki, and Kenji Shimada, "A Vision-Based Autonomous UAV Inspection Framework for Unknown Tunnel Construction Sites With Dynamic Obstacles", IEEE Robotics and Automation Letters (RA-L), 2023. [paper] [video]
  • Zhefan Xu*, Xiaoyang Zhan*, Baihan Chen, Yumeng Xiu, Chenhao Yang, and Kenji Shimada, "A real-time dynamic obstacle tracking and mapping system for UAV navigation and collision avoidance with an RGB-D camera", IEEE International Conference on Robotics and Automation (ICRA), 2023. [paper] [video]
  • Zhefan Xu, Yumeng Xiu, Xiaoyang Zhan, Baihan Chen, and Kenji Shimada, "Vision-aided UAV Navigation and Dynamic Obstacle Avoidance using Gradient-based B-spline Trajectory Optimization", IEEE International Conference on Robotics and Automation (ICRA), 2023. [paper] [video]
  • Zhefan Xu, Di Deng, and Kenji Shimada, "Autonomous UAV Exploration of Dynamic Environments via Incremental Sampling and Probabilistic Roadmap", IEEE Robotics and Automation Letters (RA-L), 2021. [paper] [video]

VI. Acknowledgement

The author would like to express his sincere gratitude to Professor Kenji Shimada for his great support and to all CERLAB UAV team members who contributed to the development of this research.

VII. Write at the End

This repository concludes my first two years of Ph.D. work at CMU, and I would like to share it to contribute to the autonomous UAV research community. When I started, I truly felt the frustration of an autonomous robot researcher facing the lack of a comprehensive development framework. I hope my code can provide researchers with a comprehensive and easy-to-understand platform that lets them focus on algorithm/theory design instead of the software pipeline. In the meantime, this research and the developed framework are not perfect and can be further improved; thus, I am actively looking for collaborators in research and development to improve the UAV/robot autonomy level. Please don't hesitate to reach out for potential collaboration!


cerlab-uav-autonomy's Issues

Question about integrating the Mid360 LiDAR

Hello, this is a great project and I have been studying it recently. If I want to feed a LiDAR such as the Mid360 into the navigation stack, do any configuration parameters need to change besides setting sensor_input_mode to 1 in mapping_param.yaml? Thank you.

PX4 simulation error

hi, thanks for your work very much! It's really fancy.

Now I want to run the project with PX4 in simulation. I followed the instructions in readme.md, but I ran into two problems.

The first problem is that when I run roslaunch remote_control dynamic_navigation_rviz.launch, the RobotModel item in Rviz shows an error. I tried to fix it by adding the following lines to px4_start.launch, but I am not sure whether this is correct:
<param name="robot_description" command="cat '$(find uav_simulator)/urdf/quadcopter.urdf'" />
<node pkg="tf" type="static_transform_publisher" name="base_link_to_map" args="0.0 0.0 0 0.0 0.0 0.0 /base_link /map 40" />

The second problem is that nothing happens after I run roslaunch autonomous_flight dynamic_navigation.launch and operate in Rviz. In other words, the navigation simulation cannot run successfully.

So could you help me? Thanks.

Question about building VINS-Fusion with GPU support using OpenCV 4.6.0

I'm trying to build VINS-Fusion with GPU support using OpenCV 4.6.0, but I'm encountering some difficulties. Specifically, I'm unsure about the correct configuration and build steps required to enable GPU acceleration with OpenCV 4.6.0.

I've already followed the installation instructions for VINS-Fusion and have successfully built it without GPU support. However, I'm not sure how to incorporate OpenCV 4.6.0 with CUDA support into the build process to enable GPU acceleration.

Could someone provide guidance or pointers on how to correctly configure and build VINS-Fusion with GPU support using OpenCV 4.6.0?

Environment:

  • Operating System: Ubuntu 20.04
  • OpenCV Version: 4.6.0
  • CUDA Toolkit Version: 11.4
  • Model: Jetson Xavier NX

About "Vision-aided UAV Navigation and Dynamic Obstacle Avoidance using Gradient-based B-spline Trajectory Optimization"

  1. Regarding the calculation of the static obstacle cost: the UAV cannot see behind obstacles during flight, so how is the first control point of the obstacle obtained?
  2. The executed trajectory appears to be planned as a straight line, while Figure 3 in the paper shows a curve. Is the curve drawn to show more clearly how the trajectory escapes from the obstacle?
  3. Regarding Figure 4 and the calculation of the dynamic obstacle cost: I do not fully understand whether a circle must be drawn for each future position, which then forms a conical collision region.

Map update is too slow during real flight

Hi, @Zhefan-Xu, thanks for your outstanding work.

I encountered another problem during real flight: the voxel update for dynamic obstacles seems too slow, causing the UAV to pause easily in front of dynamic obstacles, and the trajectory does not seem to be re-planned. Could you give me some advice?
Snipaste_2024-03-29_18-56-45

A question about the paper

Hello! Thanks for this great work!
I have read the paper named "Onboard dynamic-object detection and tracking for autonomous robot navigation with RGB-D camera".

I am confused by section "D. Data Association and Tracking".
The original article says: "Instead of directly using the previous obstacle's feature, we apply the linear propagation to get the predicted obstacle's position and replace the previous obstacle's position with the predicted position in the feature vector."

We want to match the obstacles in the previous frame with the obstacles in the current frame. Why not directly use the features of the previous frame? Could you please explain it?

Best
yzy

Cannot make the quadrotor take off

Hello!
After I run "roslaunch uav_simulator start.launch", I cannot make the quadrotor take off with keyboard control.
2024-05-05 10-28-51 screenshot

How to modify maximum speed and acceleration

Hi,

For real flight, I set desired_velocity to 5 and desired_acceleration to 1.5, but the actual velocity is probably less than 1 m/s.

Modifying this parameter does seem to take effect in simulation. Can you give me some advice?

Thank you.

Obstacle detection problem in the PX4 simulation environment

Why is it that when I simulate with the PX4 model and use dynamic navigation, colored obstacles appear in Rviz in the immediate vicinity of the TF frame, yet there are no obstacles in Gazebo? This has me quite confused.

Error when running the Autonomy DEMO: Autonomous Navigation bug

Hello, after installing the full environment I ran your first demo (a. Autonomous Navigation: Navigating to a given goal position and avoiding collisions). As soon as I run the third launch file, roslaunch autonomous_flight dynamic_navigation.launch, it immediately reports an error (see image). The static navigation demo in Autonomous Navigation runs without errors.

Real world experiment

Hi, thanks for your interesting work.

I am now following your work, conducting real-world experiments based on PX4 and using vins_fusion for localization. I have run into some problems and would like to ask you about them.

To be on the safe side, I had to enter offboard mode by other means during the actual flight, because with the source code the vehicle would keep trying to enter offboard mode and I could not intervene manually if the UAV went out of control. So I commented out the following part of the source code.

In flightBase.cpp:

// Periodically request OFFBOARD mode; once in OFFBOARD, arm the vehicle
// (each request is retried at most every 5 seconds)
if (this->mavrosState_.mode != "OFFBOARD" && (ros::Time::now() - lastRequest > ros::Duration(5.0))){
    if (this->setModeClient_.call(offboardMode) && offboardMode.response.mode_sent){
        cout << "[AutoFlight]: Offboard mode enabled." << endl;
    }
    lastRequest = ros::Time::now();
} else {
    if (!this->mavrosState_.armed && (ros::Time::now() - lastRequest > ros::Duration(5.0))){
        if (this->armClient_.call(armCmd) && armCmd.response.success){
            cout << "[AutoFlight]: Vehicle armed." << endl;
        }
        lastRequest = ros::Time::now();
    }
}

And in dynamicNavigation.cpp:
this->takeoff();

However, after the UAV took off, I switched to offboard mode and set the drone's waypoint through Rviz, but the program did not seem to start planning: the terminal never printed [AutoFlight]: Replan for new goal position. The whole program seems to be stuck in an endless loop in the following code:

// Interpolate the yaw toward the target until the pose target is reached
while (ros::ok() and not this->isReach(ps)){
    currTime = ros::Time::now();
    double t = (currTime - startTime).toSec();

    if (t >= endTime){
        psT = ps;
    }
    else{
        double currYawTgt = yawCurr + (double) direction * t/endTime * yawDiffAbs;
        geometry_msgs::Quaternion quatT = AutoFlight::quaternion_from_rpy(0, 0, currYawTgt);
        psT.pose.orientation = quatT;
    }
    // this->updateTarget(psT);
    target.position.x = psT.pose.position.x;
    target.position.y = psT.pose.position.y;
    target.position.z = psT.pose.position.z;
    target.yaw = AutoFlight::rpy_from_quaternion(psT.pose.orientation);
    this->updateTargetWithState(target);
    // cout << "here" << endl;
    ros::spinOnce();
    r.sleep();
}

I have completed the PX4-based simulation experiment, and compared to the simulation I only changed the configuration-file parameters for my real depth camera. Where do you think I might have gone wrong?

Another minor issue is that when I start Rviz, the created dynamic map sometimes cannot be visualized, even though the /dynamic_map/inflated_voxel_map topic keeps receiving messages.

Thank you for your precious time!
