
fast-livo's People

Contributors

flex-transformer, xuankuzcr


fast-livo's Issues

A question about camera parameter settings

First of all, I can run FAST-LIVO normally with the hku1.bag dataset.
But I switched to my own hardware: a Jetson AGX Xavier host, a Livox Mid-70, a Wheeltec IMU, and a ZED2 camera.
This is the problem I ran into:
[screenshot 2023-02-15 17-21-00]
To rule out the LiDAR and IMU, I set img_enable to 0 and ran the program again to check whether FAST-LIO works; although RViz showed no point cloud, it looked like FAST-LIO was running.
[screenshot 2023-02-15 17-21-56]
So I suspect the camera calibration settings are wrong.
Based on the camera's factory calibration, I modified the section below:
[screenshot 2023-02-15 17-24-55]

I am new to this area. Which parameters do I need to change so that FAST-LIVO runs on my hardware the way it does on hku1.bag? Thanks a lot!

Effect of IMU Hz on performance

After reading the paper and downloading the rosbag, one can see that the IMU and LiDAR are synchronized, even though the BMI088 (the IMU inside the Livox AVIA) outputs values at 200 Hz. Questions:

  • How do you think FAST-LIVO would perform with more frequent data from the IMU than from the LiDAR? Or is it optimal for the algorithm if they run at the same rate?
  • Additional question: in your case, do you know what Livox does with the IMU data when the LiDAR + IMU outputs are at low frequency? Is the published value the one closest in time to the LiDAR data, or the average of the measurements covering the desired period (e.g., LiDAR at 10 Hz = 200/10 = average of the last 20 measurements)?

Thanks in advance!

IKFoM_toolkit errors after enabling #define USE_IKFOM

Thanks for the excellent work; it runs well in most cases.

On some datasets I found that FAST-LIVO always diverges during initialization, while FAST-LIO2 runs normally. So I tried uncommenting #define USE_IKFOM in common_lib.h, but this produced many errors from the IKFoM_toolkit library, among others.
For example:
/home/zuo/catkin_ws/src/R3LIVE/FAST-LIVO/include/IKFoM_toolkit/esekfom/../mtk/build_manifold.hpp:102:47: error: request for member ‘oplus’ in ‘((state_ikfom*)this)->state_ikfom::grav’, which is of non-class type ‘int’ #define MTK_OPLUS( type, id) id.oplus(MTK::subvector_(__vec, &self::id), __scale); ^
/home/zuo/catkin_ws/src/R3LIVE/FAST-LIVO/include/IKFoM_toolkit/esekfom/../mtk/build_manifold.hpp:102:47: note: in definition of macro ‘MTK_OPLUS’ #define MTK_OPLUS( type, id) id.oplus(MTK::subvector_(__vec, &self::id), __scale);

/home/zuo/catkin_ws/src/R3LIVE/FAST-LIVO/include/IKFoM_toolkit/esekfom/../mtk/build_manifold.hpp:102:85: error: no matching function for call to ‘subvector_(const MTK::vectview<const double, 0>&, int state_ikfom::*)’ #define MTK_OPLUS( type, id) id.oplus(MTK::subvector_(__vec, &self::id), __scale);

/FAST-LIVO/src/IMU_Processing.cpp:256:31: required from here /usr/include/eigen3/Eigen/src/Core/util/StaticAssert.h:32:40: error: static assertion failed: INVALID_MATRIX_PRODUCT #define EIGEN_STATIC_ASSERT(X,MSG) static_assert(X,#MSG);

How should I deal with these errors, or how do I use #define USE_IKFOM correctly? Are any other settings required?
Also, how can I handle the cases where FAST-LIVO diverges during initialization? Should the device stay stationary first and only then start moving?
Many thanks!


Comparing with R3Live

When comparing the resolution of FAST-LIVO with R3LIVE in your videos, it is clear to me that FAST-LIVO does a better job. Has this difference been quantified?
R3LIVE
[screenshot 2022-05-01 at 22:22:36]
FAST-LIVO
[screenshot 2022-05-01 at 22:25:13]

When running eee_03.bag of NTU VIRAL, [laserMapping-2] process has died

When running eee_03.bag of the NTU VIRAL dataset, the following error is reported:
[laserMapping-2] process has died [pid 3278, exit code -11, cmd /root/catkin_ws/devel/lib/fast_livo/fastlivo_mapping __name:=laserMapping __log:=/root/.ros/log/552d2f00-bccf-11ed-937d-6c24081e023e/laserMapping-2.log].
log file: /root/.ros/log/552d2f00-bccf-11ed-937d-6c24081e023e/laserMapping-2*.log
How can I solve it?

There is some drift in the height direction.

Hello, this is great work on LIVO.
I tested my device with your fantastic work recently and ran into one issue:
When I walk a short loop of about 200 m, ending at the start position, the odometry is fine, even perfect (first picture).
But when I walk a long loop of about 1000 m, again ending at the start position, there is obvious drift in the height direction (second picture).
Can you give me some suggestions to reduce this drift?
Thanks! ^_^

I think the IMU (my device is an HMS-MM-VRU-UM 01) may be important. I tried estimating the IMU noise parameters with the https://github.com/gaowenliang/imu_utils tools and then fixed the cov_gyr and cov_acc parameters. I hope this will help.

[first picture]

[second picture]

Hardware Time Sync Issues

Hi,

I tried your bag files and computed the difference between the camera timestamps and the lidar timestamps. It is exactly 0.000000000. How is this possible? Did you modify the bag files and replace the camera timestamps with the lidar timestamps? Even at nanosecond resolution there is no difference.

[Help] "double free or corruption (out)" error and abort

Dear HKU team,

Both when testing the datasets and when collecting my own data, the following error occurs:

[ LIO ]: Raw feature num: 1417 downsamp num 161 Map num: 152.
[ LIO ]: Using multi-processor, used core number: 4.
double free or corruption (out)
[laserMapping-1] process has died ...

The error occurs right after the node receives lidar and IMU data, as shown here:
https://user-images.githubusercontent.com/45952143/207554474-f0282eb6-b0f2-4e81-9771-207f330cc036.mp4

The hardware synchronization between lidar and IMU should be fine, since FAST-LIO runs. The log files from the crash are attached for your review:
rosout.log
rosout-1-stdout.log
rviz-3-stdout.log
imu-1.log
master.log
roslaunch-cpc-MoreFine-S500-319789.log
roslaunch-cpc-MoreFine-S500-320153.log
roslaunch-cpc-MoreFine-S500-320331.log

A question about ikfom

Thanks for the great code, and for kindly sharing it.
I have a short question. I read the FAST-LIVO code as well as r2live, and I noticed that ikfom is no longer used; it has been replaced by a few lines of plain Eigen computations. Were there any reasons for this (and pros/cons)?

"Add 0 3D points" when testing with R3LIVE data.

First of all, thanks for sharing your great work!
I want to test FAST-LIVO with some long bags, such as the R3LIVE datasets.
After adjusting the parameters in the yaml and launch files, "Add 0 3D points" occurred:

[ INFO ]: get img at time: 1630286393.589510.
[ INFO ]: get point cloud at time: 1630286393.584056.
[ LIO ]: Raw feature num: 5325 downsamp num 4684 Map num: 11234.
[ LIO ]: Using multi-processor, used core number: 4.
[ LIO ]: time: fov_check: 0.000000 fov_check and readd: 0.000911 match: 0.002030 solve: 0.000292 ICP: 0.013996 map incre: 0.008080 total: 0.012388 icp: 0.002380 construct H: 0.000186.
[ VIO ]: Raw feature num: 5325.
[ VIO ]: Add 0 3D points.
[ VIO ]: time: addFromSparseMap: 0.000002 addSparseMap: 0.000274 ComputeJ: 0.000000 addObservation: 0.000001 total time: 0.000277 ave_total: 0.000277.

It seems to track the raw features successfully, but no points are added. Here are my yaml and launch files:

[screenshots of the yaml and launch files]

Waiting for your reply.

Is the improvement on the NTU-VIRAL datasets still significant after LiDAR and IMU time-offset correction?

Thank you for your excellent work!

The Ouster point cloud and IMU messages in the NTU-VIRAL datasets have synchronization issues. Please refer to this.

After regularizing the Ouster point cloud and IMU topics in the NTU datasets, I retested FAST-LIO2 on them; the average absolute pose error is about 0.03 m. When I test a LIVO system based on FAST-LIO2 that I implemented myself, I find that the VIO subsystem does not improve performance significantly: the average absolute pose error of the whole LIVO system on the NTU-VIRAL datasets is about 0.029 m.

It would be very helpful if you could provide me with the experiment results on corrected NTU-VIRAL datasets.

Vikit opencv issue under ubuntu 20.04

Hi,
I have the following issue while compiling vikit under ubuntu 20.04

I've got the following error:

/home/alban/multinnov/fast_livo_ws/src/rpg_vikit/vikit_common/src/homography.cpp:48:54: error: ‘RANSAC’ was not declared in this scope; did you mean ‘cv::RANSAC’?
   48 | cv::Mat cvH = cv::findHomography(src_pts, dst_pts, RANSAC, 2./error_multiplier2);
/usr/include/opencv4/opencv2/calib3d.hpp:230:8: note: ‘cv::RANSAC’ declared here
/home/alban/multinnov/fast_livo_ws/src/rpg_vikit/vikit_common/src/pinhole_camera.cpp:112:59: error: ‘CV_INTER_LINEAR’ was not declared in this scope
  112 | cv::remap(raw, rectified, undist_map1_, undist_map2_, CV_INTER_LINEAR);
/home/alban/multinnov/fast_livo_ws/src/rpg_vikit/vikit_common/src/img_align.cpp:237:34: error: ‘CV_WINDOW_AUTOSIZE’ was not declared in this scope
  237 | cv::namedWindow("residuals", CV_WINDOW_AUTOSIZE);
/home/alban/multinnov/fast_livo_ws/src/rpg_vikit/vikit_common/src/img_align.cpp:437:34: error: ‘CV_WINDOW_AUTOSIZE’ was not declared in this scope
  437 | cv::namedWindow("residuals", CV_WINDOW_AUTOSIZE);
make[2]: *** [rpg_vikit/vikit_common/CMakeFiles/vikit_common.dir/build.make:154: rpg_vikit/vikit_common/CMakeFiles/vikit_common.dir/src/pinhole_camera.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: *** [rpg_vikit/vikit_common/CMakeFiles/vikit_common.dir/build.make:180: rpg_vikit/vikit_common/CMakeFiles/vikit_common.dir/src/img_align.cpp.o] Error 1
make[2]: *** [rpg_vikit/vikit_common/CMakeFiles/vikit_common.dir/build.make:167: rpg_vikit/vikit_common/CMakeFiles/vikit_common.dir/src/homography.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:3148: rpg_vikit/vikit_common/CMakeFiles/vikit_common.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Invoking "make -j12 -l12" failed
This error appears because newer OpenCV versions no longer use the CV_ prefix; the constants now live in the cv:: namespace (sometimes under a new name).

So, for example, for the RANSAC error, I replaced it with cv::RANSAC.

You can find where to make the changes from the error output above.

This fixed the compile errors for me.
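Concretely, the three fixes in vikit_common would look something like this (a sketch based only on the errors above; cv::INTER_LINEAR and cv::WINDOW_AUTOSIZE are the OpenCV 4 names of the old CV_ constants):

```cpp
// homography.cpp:48 -- RANSAC now lives in the cv:: namespace
cv::Mat cvH = cv::findHomography(src_pts, dst_pts, cv::RANSAC, 2. / error_multiplier2);

// pinhole_camera.cpp:112 -- CV_INTER_LINEAR -> cv::INTER_LINEAR
cv::remap(raw, rectified, undist_map1_, undist_map2_, cv::INTER_LINEAR);

// img_align.cpp:237 and :437 -- CV_WINDOW_AUTOSIZE -> cv::WINDOW_AUTOSIZE
cv::namedWindow("residuals", cv::WINDOW_AUTOSIZE);
```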

IMU question : Average?

Good day. Does anybody know what the author is doing with the IMU data? The original livox_ros_driver outputs either 0 or 200 Hz, yet in the supplied bags the IMU and LiDAR are synced to the same frequency (and the same timestamps). You can see it here, for example.

Thanks!

some config problems

Hi, thanks for your work. I'd like to know: 1. Is the transform between camera and lidar needed? Where do I add the R and T? 2. What is the meaning of Rcl and Pcl under 'camera'?

question about ImuProcess::UndistortPcl()

Dear Authors:
Thank you for contributing elegant programs to open-source SLAM; they have helped me a lot in my learning. While studying the code, I came up with a question about ImuProcess::UndistortPcl():

V3D T_ei(pos_imu + vel_imu * dt + 0.5 * acc_imu * dt * dt + R_i * Lid_offset_to_IMU - pos_liD_e);
V3D P_i(it_pcl->x, it_pcl->y, it_pcl->z);
V3D P_compensate = state_inout.rot_end.transpose() * (R_i * P_i + T_ei);
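For reference (my own transcription, not from the repo), the two quoted lines written as math, with $R_e$ = state_inout.rot_end, $\Delta t$ = dt, and ${}^{b}t_{L}$ = Lid_offset_to_IMU:

```latex
T_{ei} = p_i + v_i\,\Delta t + \tfrac{1}{2}\,a_i\,\Delta t^{2}
         + R_i\,{}^{b}t_{L} - p_{L_e},
\qquad
P_{\mathrm{compensate}} = R_e^{\top}\bigl(R_i\,P_i + T_{ei}\bigr)
```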

From my point of view, the FAST-LIVO code corresponds to the following formula:
[formula image]
Does it use the extrinsic rotation between LiDAR and IMU? Please help check whether the following formula is correct:
[formula image]
R is the direction cosine matrix, p is the translation, L is the lidar frame, G is the global frame, and b is the IMU frame.
Wish you all the best!

IMU_Processing.cpp formula error

Hello!
I think
cov_acc = cov_acc * (N - 1.0) / N + (cur_acc - mean_acc).cwiseProduct(cur_acc - mean_acc) * (N - 1.0) / (N * N);
cov_gyr = cov_gyr * (N - 1.0) / N + (cur_gyr - mean_gyr).cwiseProduct(cur_gyr - mean_gyr) * (N - 1.0) / (N * N);
should be
cov_acc = cov_acc * (N - 1.0) / N + (cur_acc - mean_acc).cwiseProduct(cur_acc - mean_acc) / (N-1.0);
cov_gyr = cov_gyr * (N - 1.0) / N + (cur_gyr - mean_gyr).cwiseProduct(cur_gyr - mean_gyr) / (N-1.0);

I don't know if there is something wrong with the formula I derived; I have had this question since FAST-LIO 1.0 and 2.0.
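For what it's worth, here is a quick derivation (my own, assuming the residual cur_acc - mean_acc is formed with the mean from before the current sample is folded in). With the running mean $m_N = m_{N-1} + (x_N - m_{N-1})/N$ and the biased sample variance $\sigma_N^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - m_N)^2$, the Welford-style recursion is

```latex
\sigma_N^2 \;=\; \frac{N-1}{N}\,\sigma_{N-1}^2
           \;+\; \frac{N-1}{N^2}\,\bigl(x_N - m_{N-1}\bigr)^2
```

which matches the original code term by term, i.e. the biased (divide-by-N) estimator. Note that the result also depends on whether the mean is updated before or after the covariance line in the actual code.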

Use-after-free bug in img_cbk()

Hi, I found a bug with AddressSanitizer when testing our dataset:

cv::Mat getImageFromMsg(const sensor_msgs::ImageConstPtr& img_msg) {
  cv::Mat img;
  img = cv_bridge::toCvShare(img_msg, "bgr8")->image;
  return img;
}

As the documentation of toCvShare() notes, img_msg->data and img.data may share the same memory, which leads to a use-after-free bug. More specifically, img_msg->data can be released after getImageFromMsg() returns, but the returned img may still be used after that.
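A minimal fix (my own sketch, not from the repo) is to take a deep copy so the returned cv::Mat owns its data, e.g. via cv_bridge::toCvCopy():

```cpp
// Sketch of a fix: copy the image data instead of sharing the message buffer.
cv::Mat getImageFromMsg(const sensor_msgs::ImageConstPtr& img_msg) {
  // toCvCopy() allocates a new buffer, so the returned Mat stays valid
  // even after img_msg is released. Alternatively:
  //   return cv_bridge::toCvShare(img_msg, "bgr8")->image.clone();
  return cv_bridge::toCvCopy(img_msg, "bgr8")->image;
}
```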

Why not ROS2?

There is great work in your repos, but it is all ROS 1. Many people have been on ROS 2 for years now. Do you not like it? ROS 2 has features that can make things much faster. Maybe others have asked already: would you participate in an effort to port some of your projects to ROS 2?

[laserMapping-1] process has died [pid 11250, exit code -11

If anybody faces this kind of error (see the screenshot), pay attention to the line

printf("[ LIO ]: Using multi-processor, used core number: %s.\n", MP_PROC_NUM);

in laserMapping.cpp

Resolve the bug by changing the line to the following. MP_PROC_NUM is an int, so printf needs the %d conversion; with %s, printf treats the int as a char pointer and dereferences it, which crashes the process:

printf("[ LIO ]: Using multi-processor, used core number: %d.\n", MP_PROC_NUM);

Screenshot from 2023-02-01 12-31-57

Livox driver and RVIZ visualization

Hi, which Livox driver do I need to use, the modified HKU one (as for R3LIVE) or the stock one? And which launch file should I use, lidar.launch or lidar_msg.launch?

I'm using real hardware, the same setup that works with FAST-LIO and R3LIVE without issue. My problem with FAST-LIVO is that in RViz I cannot see the transforms and frames being published; there are no lidar frames at all, and no TFs such as world, map, etc. are published.
I think the software is running; I've attached pictures of the console, with no error messages and clearly something happening.

Screenshot from 2022-11-14 21-05-40
Screenshot from 2022-11-14 21-05-28
Screenshot from 2022-11-14 21-16-57

question about Backward code

I read both FAST-LIO and FAST-LIVO, and I found some differences in the backward (undistortion) procedure.
I am confused about the code below, which computes the translation from the frame-end time to the i-th point:
V3D T_ei(pos_imu + vel_imu * dt + 0.5 * acc_imu * dt * dt + R_i * Lid_offset_to_IMU - pos_liD_e);
I wonder what the meaning of R_i * Lid_offset_to_IMU is.
Thanks for your reply, best wishes!

Questions about testing on my own dataset

Hello! I was very excited to see the FAST-LIVO code released. Thank you very much for open-sourcing it for us to study!
However, when testing with my own handheld device (a Mid-70 plus a RealSense D455 camera with its built-in IMU; intrinsics and extrinsics were calibrated beforehand), I ran into the same problem as with R3LIVE: everything is fine while stationary, but as soon as the device starts moving, the odometry drifts and mapping fails. (FAST-LIO2, tested earlier, works very well.)

Indoor test:

[indoor test screenshots]

Indoor config:

feature_extract_enable : 0
point_filter_num : 1
max_iteration : 10
dense_map_enable : 1
filter_size_surf : 0.05 # suggested indoor: 0.05~0.15; outdoor: 0.3~0.5
filter_size_map : 0.15 # suggested indoor: 0.15~0.3; outdoor: 0.4~0.5
cube_side_length : 20
debug : 0
grid_size : 40
patch_size : 8
img_enable : 1
lidar_enable : 1
outlier_threshold : 300 # 78 100 156; suggested 50~250 for darker scenes, 500~1000 for brighter ones. The smaller the value, the faster the VIO submodule, but the weaker its resistance to degeneration.
ncc_en: false # ??
ncc_thre: 0 # ??
img_point_cov : 100 # 1000 The covariance of photometric errors per pixel.
laser_point_cov : 0.001 # 0.001 The covariance of the point-to-plane residual per point.
cam_fx: 426.551167166986
cam_fy: 426.6142447926181
cam_cx: 429.1314081919893
cam_cy: 247.64043896840604

common:
    lid_topic:  "/livox/lidar"
    imu_topic:  "/camera/imu"

preprocess:
    lidar_type: 1 # Livox Avia LiDAR
    scan_line: 1
    blind: 0.05 # blind x m disable

mapping:
    acc_cov_scale: 100
    gyr_cov_scale: 10000
    fov_degree:    70
    # extrinsic_T: [ 0.04165, 0.02326, -0.0284 ]
    # extrinsic_R: [ 1, 0, 0,
    #                0, 1, 0,
    #                0, 0, 1]
    extrinsic_T: [ -0.017020, 0.085815, -0.024827 ]
    extrinsic_R: [ 0.046790, -0.998811, 0.013708,
                   -0.023212, -0.014807, -0.999621,
                   0.998635, 0.046454, -0.023877]

camera:
    # img_topic: /usb_cam/image_raw
    # img_topic:  /camera/image_color
    img_topic: "/camera/color/image_raw"
    #xiyuan
    # lidar to camera
    # Rcl: [0.0110805,-0.999823,0.0151929,
    #      -0.0410198,-0.0156355,-0.999036,
    #       0.999097,0.0104466,-0.0411858]
    # Pcl: [0.0183, 0.0762623, -0.0305996]
    # camera to lidar
    Rcl: [0.0110805, -0.0410198, 0.999097,
         -0.999823, -0.0156355, 0.0104466,
          0.0151929, -0.999036, -0.0411858]
    Pcl: [0.0334975, 0.0198088, 0.0746505]

Outdoor test:

[outdoor test screenshot]

Outdoor config:

Only the two downsampling parameters were changed:

filter_size_surf : 0.3
filter_size_map : 0.5

Do you know what might be causing this? Or do you have any suggestions?

[laserMapping-1] process has died [pid 31651, exit code -11

Dear HKU team,
After compiling, when I run roslaunch fast_livo mapping_avia.launch, laserMapping aborts, and RViz shows nothing when the dataset is played back. How can I solve this? My configuration is as follows:
Ubuntu->18.04
PCL->1.9.1
Eigen->3.4.0
OpenCV->4.2.0
[screenshot 2023-01-28 23-16-39]

About 360 spinning LiDAR .. @xuankuzcr

Hello? Could you please check this issue again?
@xuankuzcr

I also tried to test on the HILTI 2021 dataset.

However, the trajectory estimated by FAST-LIVO is very unstable and eventually diverges.

I set the parameters provided here as shown in the code below, but FAST-LIVO diverges during dataset playback.

feature_extract_enable : 0
point_filter_num : 4
max_iteration : 3
dense_map_enable : 1
filter_size_surf : 0.3 # 0.3
filter_size_map : 0.3 # 0.4
cube_side_length : 1000
debug : 1
grid_size : 40
patch_size : 8
img_enable : 1
lidar_enable : 1
outlier_threshold : 50
ncc_en: true
ncc_thre: 0.5
img_point_cov : 1000
laser_point_cov : 0.001
cam_fx: 696.7174426776
cam_fy: 696.4862496732
cam_cx: 708.4206218964
cam_cy: 535.6712007522
common:
    lid_topic: "/os_cloud_node/points"
    imu_topic: "/os_cloud_node/imu"

preprocess:
    lidar_type: 3 # Ouster
    scan_line: 64
    blind: 1 # blind x m disable

mapping:
    acc_cov_scale: 100 #10
    gyr_cov_scale: 10000 #10
    fov_degree: 180
    extrinsic_T: [ 0.0, 0.0, 0.0]
    extrinsic_R: [ 1.0, 0.0, 0.0,
                   0.0, 1.0, 0.0,
                   0.0, 0.0, 1.0]

camera:
    img_topic: /alphasense/cam1/image_raw
    Rcl: [0.0, 0.0, 1.0,
         -1.0, 0.0, 0.0,
          0.0, -1.0, 0.0]
    Pcl: [0.054, 0.137, -0.040]

Also, there is no part of the code where lens distortion is handled. Why is that?

And when the 3D point cloud of a spinning LiDAR is re-projected onto the image, how do you deal with the points behind the camera, which are not visible in the image?

Originally posted by @vislero in #37 (comment)

Must extrinsic_R in avia_resize.yaml be the identity?

Thank you for your great contribution!

I am testing the hilti2021 dataset on FAST-LIVO; the rig contains a Livox Mid-70, an embedded IMU, and a 1440x1080 10 Hz global-shutter camera.

I find that when I use the real extrinsic parameters (on the left), where extrinsic_R is not the identity, FAST-LIVO initializes badly and the image is not aligned with the Livox scan.
[screenshot]

So I made some changes to force lidar and IMU into one frame:

  1. Added some code in livox_scan_cbk() to transform the Livox points from the lidar frame to the IMU frame;
  2. Set the parameters as shown on the right above.

Then the image aligns well with the Livox scan.

I would like to know: must extrinsic_R in avia_resize.yaml be the identity?

Hardware time sync

Are there any suggestions for aligning camera and lidar timestamps? I tried two hardware sync signals: 1 Hz (PPS sync) for the lidar and 10 Hz for the camera. I found that there was still a time difference.

Questions about lense

Thank you very much for sharing your great code.

I have a short question.

I'd like to build hardware like yours and run the code.

Could you tell me the model number of the camera lens used for the hardware in figure 4 of your paper?

Thank you again for sharing code and look forward to hearing from you.

Support for the Velodyne-16 LiDAR

Outstanding work. Does this code support the Velodyne-16 LiDAR together with an external IMU and a RealSense camera? If so, how should the parameters be set? And what is the base coordinate frame of the rotation matrix in the configuration file?

[Help] Segmentation fault when defining the `it` array

Dear HKU team,

All the devices are hardware-synchronized, but the program crashes soon after starting.
The error may occur when selecting the points from the point cloud that match the image and storing their coordinates in `it`.
I have been scratching my head over this. Below are download links for the rosbag and config files, plus the GDB output for your review:

Baidu Netdisk download (rosbag, config files)

Nutstore download (same content as above)

GDB session:

Thread 1 "fastlivo_mappin" received signal SIGSEGV, Segmentation fault.
0x00007ffff03df268 in lidar_selection::LidarSelector::addFromSparseMap (
    this=this@entry=0x55555c676f50, img=..., pg=...)
    at /catkin_ws/src/FAST-LIVO/src/lidar_selection.cpp:376
376	    float it[height*width] = {0.0};

(gdb) bt

#0  0x00007ffff03df268 in lidar_selection::LidarSelector::addFromSparseMap(cv::Mat, boost::shared_ptr<pcl::PointCloud<pcl::PointXYZINormal> >) (this=this@entry=0x55555c676f50, img=..., pg=...)
    at /catkin_ws/src/FAST-LIVO/src/lidar_selection.cpp:376
#1  0x00007ffff03e4893 in lidar_selection::LidarSelector::detect(cv::Mat, boost::shared_ptr<pcl::PointCloud<pcl::PointXYZINormal> >) (this=0x55555c676f50, img=..., pg=...) at /catkin_ws/src/FAST-LIVO/src/lidar_selection.cpp:1065
#2  0x000055555557fd56 in main(int, char**) (argc=<optimized out>, argv=<optimized out>) at /catkin_ws/src/FAST-LIVO/src/laserMapping.cpp:1344

(gdb) print img

$1 = {flags = 1124024320, dims = 2, rows = 2048, cols = 2048, 
  data = 0x5555607f71c0 "\t\r\f\f\v\f\004\006\b\f\b\006\006\n\n\f\v\v\r\r\v\016\n\v\f\t\003\v\f\b\006\b\a\v\n\t\r\v\002\006\f\f\016\f\t\v\020\f\n\b\a\n\f\b\a\n\016\r\t\r\016\v\v\026\033\030\023\023\022\022\026\025\025\025\017\017\026\024\027\016\003\006\a\003\001\v\017\006\005\r\f\n\v\b\a\a\r\f\n\t\004\002\003\001\003\t\n\006\b\a\b\r\r\016\017\017\021\017\v\t\017\016\n\n\v\016\020\021\021\024\024\017\r\024\025\020\023\020\016\020\024\022\016\t\a\f\017\n\n\v\f\n\n\v\006\t\v\016\t\002\b\a\005\a\b\r\r\v\r\n\b\n\v\t\n\006\a\v\016\f\005\n\r\t\002\b\b\t\b\v\004\t\020\t\f\020\016\t\n\r"..., 
  datastart = 0x5555607f71c0 "\t\r\f\f\v\f\004\006\b\f\b\006\006\n\n\f\v\v\r\r\v\016\n\v\f\t\003\v\f\b\006\b\a\v\n\t\r\v\002\006\f\f\016\f\t\v\020\f\n\b\a\n\f\b\a\n\016\r\t\r\016\v\v\026\033\030\023\023\022\022\026\025\025\025\017\017\026\024\027\016\003\006\a\003\001\v\017\006\005\r\f\n\v\b\a\a\r\f\n\t\004\002\003\001\003\t\n\006\b\a\b\r\r\016\017\017\021\017\v\t\017\016\n\n\v\016\020\021\021\024\024\017\r\024\025\020\023\020\016\020\024\022\016\t\a\f\017\n\n\v\f\n\n\v\006\t\v\016\t\002\b\a\005\a\b\r\r\v\r\n\b\n\v\t\n\006\a\v\016\f\005\n\r\t\002\b\b\t\b\v\004\t\020\t\f\020\016\t\n\r"..., 
  dataend = 0x555560bf71c0 "\023#0\024$1\025%\241e;", 
  datalimit = 0x555560bf71c0 "\023#0\024$1\025%\241e;", allocator = 0x0, 
  u = 0x55555c679f00, size = {p = 0x7fffffff4988}, step = {p = 0x7fffffff49d0, 

    buf = {2048, 1}}}

(gdb) print pg
$2 = {px = 0x55555c2b9840, pn = {pi_ = 0x55555c2b98e0}}


Below are the errors from the camera and livo nodes while recording the rosbag; once rosbag record is stopped, the errors disappear.

Camera node:
Compressed Depth Image Transport - Compression requires single-channel 32bit-floating point or 16bit raw depth images (input format is: rgb8).

Livo node:

[ERROR] [1677054307.767761744]: Compressed Depth Image Transport - Compression requires single-channel 32bit-floating point or 16bit raw depth images (input format is: bgr8).
[ERROR] [1677054307.767983097]: OpenCV(3.4.19) /opt/opencv_build/opencv/modules/imgcodecs/src/loadsave.cpp:1000: error: (-2:Unspecified error) in function 'bool cv::imencode(const cv::String&, cv::InputArray, std::vector<unsigned char>&, const std::vector<int>&)'
> Encoding 'params' must be key-value pairs:
>     '(params.size() & 1) == 0'
> where
>     'params.size()' is 3

Code release!

Hello,

I am quite excited to try out your work based on FAST-LIO2 and visual odometry. Is there an approximate code release date?

Memory management issues.

Hi, I have the same issue as with R3LIVE when I tested it: the map is stored in RAM, and it fills 16 GB of DDR4 in about 3 minutes at 868x640 camera resolution. The lidar is publishing at 30 Hz.
Is there any way to store that data on NVMe or SSD to free the RAM? This problem makes the software quite unreliable, to be honest.
E.g., being able to save the map with a command, service, or key press and then start a new one with a different name, or a configurable timer, would work in many cases.
Maybe I'm missing something and it's my fault; not sure.

Tried my own recorded dataset with good results; questions for the author about the camera

Hello! Thank you for this excellent open-source work. I recently ran it on a dataset I recorded myself and the results are quite good; the video link is below. I would appreciate your comments.

[Shenzhen University FAST-LIVO mapping test] https://www.bilibili.com/video/BV1rd4y1v7Fd/?share_source=copy_web&vd_source=c8d922f79df8ad8e31d0d55dd06658d9

I also have a few questions: can the camera overexposure problem be fixed by enabling auto-exposure in the MVS driver? And can you recommend a good CMOS camera? The one I am using is only 1.3 MP, and the image quality is not great.

Memory usage keeps growing during operation, causing the process to crash

Dear HKU team,
When running the program, memory usage keeps growing (this happens both with hku.bag and with my own recorded bags). With slightly larger bags, memory fills up and the process crashes (process has died). Is there a way to modify the code so that memory is released dynamically during operation, while still saving the complete map after the run? Any guidance would be appreciated.

question about LidarSelector::addObservation()

Dear Authors:
Thank you for contributing elegant programs to open-source SLAM; they have helped me a lot in my learning. While studying the code, I came up with a question about LidarSelector::addObservation():

double delta_theta = (delta_pose.rotation_matrix().trace() > 3.0 - 1e-6) ? 0.0 : std::acos(0.5 * (delta_pose.rotation_matrix().trace() - 1));
if(delta_p > 0.5 || delta_theta > 10) add_flag = true;

From my perspective, this aims to calculate the change in rotation and translation between frames, but should the threshold for the angular change be 10 degrees instead of 10 radians?
Wish you all the best!
