
cerlab-uav-autonomy's Issues

Problems detecting obstacles in the PX4 simulation environment

Why is it that when I run the simulation with the PX4 model and use dynamic navigation, colored obstacles appear in RViz very close around the TF frame, but there are no obstacles in Gazebo? This has me completely confused.

Question about integrating a MID360 LiDAR

Hello, this is a very good project and I have been studying it recently. I would like to ask: if I connect a LiDAR such as the MID360 to the navigation stack, which configuration parameters need to be changed, apart from setting sensor_input_mode to 1 in mapping_param.yaml? Thank you.

How to modify maximum speed and acceleration

Hi,

For real flight, I set desired_velocity to 5 and desired_acceleration to 1.5, but the actual velocity is probably less than 1 m/s.

In simulation, modifying these parameters does seem to have the expected effect. Can you give me some advice?

Thank you.

Real world experiment

Hi, thanks for your interesting work.

I am now following your work and conducting real-world experiments based on PX4, using vins_fusion for localization. I have some problems and would like to ask you about them.

To be on the safe side, I had to switch to offboard mode in another way during real flights, because with the original source code the node keeps trying to enter offboard mode and I could not intervene manually if the UAV went out of control. So I commented out the following part of the source code.

In flightBase.cpp:

	if (this->mavrosState_.mode != "OFFBOARD" && (ros::Time::now() - lastRequest > ros::Duration(5.0))){
	    if (this->setModeClient_.call(offboardMode) && offboardMode.response.mode_sent){
	        cout << "[AutoFlight]: Offboard mode enabled." << endl;
	    }
	    lastRequest = ros::Time::now();
	} else {
	    if (!this->mavrosState_.armed && (ros::Time::now() - lastRequest > ros::Duration(5.0))){
	        if (this->armClient_.call(armCmd) && armCmd.response.success){
	            cout << "[AutoFlight]: Vehicle armed." << endl;
	        }
	        lastRequest = ros::Time::now();
	    }
	}
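
For context, what I would ideally prefer (sketched below purely as an illustration; the parameter name autoOffboard and the free-function form are my own assumptions, not the repository's actual interface) is to gate this block behind a ROS parameter instead of commenting it out, so that a launch file decides whether the node may switch modes and the RC pilot keeps authority otherwise:

	#include <ros/ros.h>
	#include <mavros_msgs/SetMode.h>
	#include <mavros_msgs/CommandBool.h>
	#include <mavros_msgs/State.h>

	// Only request OFFBOARD/arming when the (hypothetical) parameter
	// "autoOffboard" is set to true in the launch file.
	void requestOffboardAndArm(ros::NodeHandle& nh,
	                           ros::ServiceClient& setModeClient,
	                           ros::ServiceClient& armClient,
	                           const mavros_msgs::State& mavrosState,
	                           ros::Time& lastRequest) {
	    bool autoOffboard = false;
	    nh.param<bool>("autoOffboard", autoOffboard, false); // disabled by default
	    if (!autoOffboard) return; // leave mode switching and arming to the RC pilot

	    mavros_msgs::SetMode offboardMode;
	    offboardMode.request.custom_mode = "OFFBOARD";
	    mavros_msgs::CommandBool armCmd;
	    armCmd.request.value = true;

	    if (mavrosState.mode != "OFFBOARD" && (ros::Time::now() - lastRequest > ros::Duration(5.0))) {
	        if (setModeClient.call(offboardMode) && offboardMode.response.mode_sent) {
	            ROS_INFO("[AutoFlight]: Offboard mode enabled.");
	        }
	        lastRequest = ros::Time::now();
	    } else if (!mavrosState.armed && (ros::Time::now() - lastRequest > ros::Duration(5.0))) {
	        if (armClient.call(armCmd) && armCmd.response.success) {
	            ROS_INFO("[AutoFlight]: Vehicle armed.");
	        }
	        lastRequest = ros::Time::now();
	    }
	}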

And in dynamicNavigation.cpp:

	this->takeoff();

However, after the UAV took off, I switched to offboard mode and set a waypoint for the drone through RViz, but the program did not seem to start planning, as the terminal never printed [AutoFlight]: Replan for new goal position. The whole program appears to be stuck in an endless loop in the following code:

		while (ros::ok() and not this->isReach(ps)){
			currTime = ros::Time::now();
			double t = (currTime - startTime).toSec();

			if (t >= endTime){ 
				psT = ps;
			}
			else{
				double currYawTgt = yawCurr + (double) direction * t/endTime * yawDiffAbs;
				geometry_msgs::Quaternion quatT = AutoFlight::quaternion_from_rpy(0, 0, currYawTgt);
				psT.pose.orientation = quatT;
				
			}
			// this->updateTarget(psT);
			target.position.x = psT.pose.position.x;
			target.position.y = psT.pose.position.y;
			target.position.z = psT.pose.position.z;
			target.yaw = AutoFlight::rpy_from_quaternion(psT.pose.orientation);
			this->updateTargetWithState(target);
			// cout << "here" << endl;
			ros::spinOnce();
			r.sleep();
		}

I have completed the simulation experiments based on PX4; compared with the simulation setup, I changed the parameters in the configuration file to match my real depth camera. Where do you think I might have gone wrong?

Another minor issue: when I start RViz, it sometimes seems impossible to visualize the generated dynamic map, even though the /dynamic_map/inflated_voxel_map topic keeps receiving messages.

Thank you for your precious time!

About "Vision-aided UAV Navigation and Dynamic Obstacle Avoidance using Gradient-based B-spline Trajectory Optimization"

  1. Regarding the calculation of the static obstacle cost: the UAV cannot see behind obstacles during flight, so how is the first control point of an obstacle obtained? (A generic sketch of such a cost is given after this list for reference.)
  2. The executed trajectory is planned as a straight line, while Figure 3 in the paper shows a curve. Is the curve only drawn to show more clearly how the trajectory escapes from the obstacle?
  3. Regarding Figure 4 and the calculation of the dynamic obstacle cost, I do not fully understand it: is it necessary to draw a circle for each future position and then form a conical collision region?
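
On question 1, my current understanding is that gradient-based B-spline planners typically use a distance-based static collision cost of the generic form below; this is only an illustrative sketch under my own assumptions, not necessarily the exact formulation in the paper. Each control point Q_i is penalized when its distance d(Q_i) to the nearest mapped obstacle drops below a safety clearance s_f:

	J_{\text{static}} = \sum_{i} c\!\left(d(\mathbf{Q}_i)\right),
	\qquad
	c(d) =
	\begin{cases}
	(s_f - d)^3, & d \le s_f \\
	0, & d > s_f
	\end{cases}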

px4 simulation error

Hi, thank you very much for your work! It's really impressive.

Now I want to run the project with PX4 in simulation. I followed the instructions in readme.md, but I ran into two problems.

The first one is that when I run the command roslaunch remote_control dynamic_navigation_rviz.launch, the RobotModel item in RViz shows an error. I tried to fix it by adding the following lines to px4_start.launch, but I am not sure whether I have done this correctly:
<param name="robot_description" command="cat '$(find uav_simulator)/urdf/quadcopter.urdf'" />
<node pkg="tf" type="static_transform_publisher" name="base_link_to_map" args="0.0 0.0 0 0.0 0.0 0.0 /base_link /map 40" />

The second one is that nothing happens after I run the command roslaunch autonomous_flight dynamic_navigation.launch and operate in RViz. In other words, the navigation simulation does not run successfully.

So could you help me? Thanks.

A question about the paper

Hello! Thank you for this great work!
I have read the paper "Onboard dynamic-object detection and tracking for autonomous robot navigation with RGB-D camera".

I am confused about the section "D. Data Association and Tracking".
The original article says: "Instead of directly using the previous obstacle’s feature, we apply the linear propagation to get the predicted obstacle’s position and replace the previous obstacle’s position with the predicted position in the feature vector."

We want to match the obstacles in the previous frame with the obstacles in the current frame. Why not directly use the features of the previous frame? Could you please explain it?
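
For reference, here is my rough understanding of the linear propagation step as a minimal sketch (constant-velocity assumption; the struct and function names are hypothetical and not taken from the repository):

	#include <cmath>

	// Hypothetical obstacle feature used for frame-to-frame association.
	struct Obstacle {
	    double x, y, z;       // center position
	    double vx, vy, vz;    // estimated velocity
	    double width, height; // size features
	};

	// Propagate the previous obstacle to the current frame time before matching,
	// so a moving obstacle is compared at its predicted position rather than
	// at the position where it was observed one frame ago.
	Obstacle propagate(const Obstacle& prev, double dt) {
	    Obstacle pred = prev;
	    pred.x += prev.vx * dt;
	    pred.y += prev.vy * dt;
	    pred.z += prev.vz * dt;
	    return pred;
	}

	// Simple association score: Euclidean distance between the predicted
	// previous obstacle and a detection in the current frame.
	double matchDistance(const Obstacle& predPrev, const Obstacle& curr) {
	    double dx = predPrev.x - curr.x;
	    double dy = predPrev.y - curr.y;
	    double dz = predPrev.z - curr.z;
	    return std::sqrt(dx * dx + dy * dy + dz * dz);
	}

My guess is that the prediction keeps a fast-moving obstacle close to its own track when computing this distance, but I would like to confirm whether that is the intended reason.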

Best
yzy

Question about building VINS-Fusion with GPU support using OpenCV 4.6.0

I'm trying to build VINS-Fusion with GPU support using OpenCV 4.6.0, but I'm encountering some difficulties. Specifically, I'm unsure about the correct configuration and build steps required to enable GPU acceleration with OpenCV 4.6.0.

I've already followed the installation instructions for VINS-Fusion and have successfully built it without GPU support. However, I'm not sure how to incorporate OpenCV 4.6.0 with CUDA support into the build process to enable GPU acceleration.

Could someone provide guidance or pointers on how to correctly configure and build VINS-Fusion with GPU support using OpenCV 4.6.0?

Environment:

  • Operating System: Ubuntu 20.04
  • OpenCV Version: 4.6.0
  • CUDA Toolkit Version: 11.4
  • Model: Jetson Xavier NX
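
For what it's worth, this is a minimal check (assuming the program is linked against opencv_core) that I use to confirm whether the OpenCV 4.6.0 build actually has CUDA support; it does not answer the VINS-Fusion build-configuration question itself:

	#include <iostream>
	#include <opencv2/core.hpp>
	#include <opencv2/core/cuda.hpp>

	int main() {
	    // Returns 0 if this OpenCV build has no CUDA support or no GPU is visible.
	    std::cout << "CUDA-enabled devices: "
	              << cv::cuda::getCudaEnabledDeviceCount() << std::endl;

	    // The build summary lists the CUDA configuration (look for "NVIDIA CUDA").
	    std::cout << cv::getBuildInformation() << std::endl;
	    return 0;
	}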

Map update is too slow during real flight

Hi, @Zhefan-Xu, thanks for your outstanding work.

I encountered another problem during real flight: the voxel update for dynamic obstacles seems too slow, which causes the UAV to pause easily in front of dynamic obstacles, and the trajectory does not seem to be re-planned. Could you give me some advice?
[Screenshot: Snipaste_2024-03-29_18-56-45]

Error when running the Autonomy DEMO (Autonomous Navigation bug)

Hello, after installing the complete environment I ran your first demo, "a. Autonomous Navigation: Navigating to a given goal position and avoiding collisions." When I run the third launch file, roslaunch autonomous_flight dynamic_navigation.launch, it immediately reports an error (see the attached screenshot). The static navigation demo under Autonomous Navigation does not produce this error.

Cannot make the quadrotor take off

Hello!
After I run "roslaunch uav_simulator start.launch", I can not make this quadrotor takeoff by keyboard control.
[Screenshot: 2024-05-05 10-28-51]
