vectr-ucla / direct_lidar_inertial_odometry

[IEEE ICRA'23] A new lightweight LiDAR-inertial odometry algorithm with a novel coarse-to-fine approach in constructing continuous-time trajectories for precise motion correction.

License: MIT License

3d-mapping imu lidar lidar-inertial-odometry lidar-slam localization mapping odometry robotics ros

direct_lidar_inertial_odometry's Introduction

Direct LiDAR-Inertial Odometry: Lightweight LIO with Continuous-Time Motion Correction

DLIO is a new lightweight LiDAR-inertial odometry algorithm with a novel coarse-to-fine approach in constructing continuous-time trajectories for precise motion correction. It features several algorithmic improvements over its predecessor, DLO, and was presented at the IEEE International Conference on Robotics and Automation (ICRA) in London, UK in 2023.



Instructions

Sensor Setup & Compatibility

DLIO has been extensively tested with a variety of sensor configurations and currently supports Ouster, Velodyne, Hesai, and Livox LiDARs. The point cloud input should be of type sensor_msgs::PointCloud2, and the 6-axis IMU input of type sensor_msgs::Imu.
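
To sanity-check that your topics carry the expected message types, you can inspect them with rostopic (topic names here are placeholders):

rostopic info /robot/lidar   # should report type sensor_msgs/PointCloud2
rostopic info /robot/imu     # should report type sensor_msgs/Imu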

For Livox sensors specifically, you can use the master branch directly if it is of type sensor_msgs::PointCloud2 (xfer_format: 0), or the feature/livox-support branch and the latest livox_ros_driver2 package if it is of type livox_ros_driver2::CustomMsg (xfer_format: 1) (see here for more information).

For best performance, the extrinsic calibration between the LiDAR/IMU sensors and the robot's center-of-gravity should be entered into cfg/dlio.yaml. If the exact values are unavailable, rough LiDAR-to-IMU extrinsics can also be used (note, however, that performance will be degraded).
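
As a rough sketch, the extrinsics block of cfg/dlio.yaml looks something like the following. The key names baselink2imu and baselink2lidar also come up in the issues below, but check the shipped file for the exact schema, which may differ across versions; the identity values here are placeholders:

dlio:
  extrinsics:
    baselink2imu:
      t: [0.0, 0.0, 0.0]   # translation (m)
      R: [1., 0., 0.,
          0., 1., 0.,
          0., 0., 1.]      # row-major rotation matrix
    baselink2lidar:
      t: [0.0, 0.0, 0.0]
      R: [1., 0., 0.,
          0., 1., 0.,
          0., 0., 1.]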

IMU intrinsics are also necessary for best performance, and there are several open-source calibration tools to obtain these values; they should likewise go into cfg/dlio.yaml. In practice, however, if you are just testing this work, using the default ideal values and performing the initial calibration procedure should be fine.

Also note that the LiDAR and IMU sensors need to be properly time-synchronized; otherwise, DLIO will not work. We recommend using a LiDAR with an integrated IMU (such as an Ouster) for simplicity of extrinsics and synchronization.

Dependencies

The following configuration has been verified to be compatible, although others may work too:

  • Ubuntu 20.04
  • ROS Noetic (roscpp, std_msgs, sensor_msgs, geometry_msgs, nav_msgs, pcl_ros)
  • C++ 14
  • CMake >= 3.12.4
  • OpenMP >= 4.5
  • Point Cloud Library >= 1.10.0
  • Eigen >= 3.3.7
sudo apt install libomp-dev libpcl-dev libeigen3-dev

DLIO supports ROS1 by default, and ROS2 using the feature/ros2 branch.

Compiling

Compile using the catkin_tools package via:

mkdir ws && cd ws && mkdir src && catkin init && cd src
git clone https://github.com/vectr-ucla/direct_lidar_inertial_odometry.git
catkin build

Execution

After compiling, source the workspace and execute via:

roslaunch direct_lidar_inertial_odometry dlio.launch \
  rviz:={true, false} \
  pointcloud_topic:=/robot/lidar \
  imu_topic:=/robot/imu

for Ouster, Velodyne, Hesai, or Livox (xfer_format: 0) sensors, or

roslaunch direct_lidar_inertial_odometry dlio.launch \
  rviz:={true, false} \
  livox_topic:=/livox/lidar \
  imu_topic:=/robot/imu

for Livox sensors (xfer_format: 1).

Be sure to change the topic names to your corresponding topics. Alternatively, edit the launch file directly if desired. If successful, you should see the following output in your terminal:

[Screenshot: DLIO terminal output]

Services

To save DLIO's generated map into .pcd format, call the following service:

rosservice call /robot/dlio_map/save_pcd LEAF_SIZE SAVE_PATH
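
For example, to save the map with a 0.25 m voxel leaf size to your home directory (both argument values are illustrative):

rosservice call /robot/dlio_map/save_pcd 0.25 ~/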

Test Data

For your convenience, we provide test data here (1.2GB, 1m 13s, Ouster OS1-32) of an aggressive motion to test our motion correction scheme, and here (16.5GB, 4m 21s, Ouster OSDome) of a longer trajectory outside with lots of trees. Try these two datasets with both deskewing on and off!



Citation

If you found this work useful, please cite our manuscript:

@article{chen2022dlio,
  title={Direct LiDAR-Inertial Odometry: Lightweight LIO with Continuous-Time Motion Correction},
  author={Chen, Kenny and Nemiroff, Ryan and Lopez, Brett T},
  journal={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2023},
  pages={3983-3989},
  doi={10.1109/ICRA48891.2023.10160508}
}

Acknowledgements

We thank the authors of the FastGICP and NanoFLANN open-source packages:

  • Kenji Koide, Masashi Yokozuka, Shuji Oishi, and Atsuhiko Banno, “Voxelized GICP for Fast and Accurate 3D Point Cloud Registration,” in IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, pp. 11054–11059.
  • Jose Luis Blanco and Pranjal Kumar Rai, “NanoFLANN: a C++ Header-Only Fork of FLANN, A Library for Nearest Neighbor (NN) with KD-Trees,” https://github.com/jlblancoc/nanoflann, 2014.

We would also like to thank Helene Levy and David Thorne for their help with data collection.

Many thanks to @shrijitsingh99 for porting DLIO to ROS2!

License

This work is licensed under the terms of the MIT license.



direct_lidar_inertial_odometry's People

Contributors

juliangaal, kennyjchen

direct_lidar_inertial_odometry's Issues

keyframe-based loop closure in paper

Great job! Thanks for sharing!

  1. When I read the paper, I found the part about loop closure, but I could not find it in the code. Will it be released in the future?
  2. If a GPS position were added as a prior factor, would the GTSAM-based approach of LIO-SAM work better?

Missing save_traj service

Hey @kennyjchen,

Thank you for your work on DLIO, which I have recently tested and found to be highly effective. However, I've noticed that DLIO lacks the "save_traj" service available in DLO. This functionality is essential for my current project. Could you please let me know why it's missing in DLIO and how I can add it to the platform? Your help is greatly appreciated.

Differences in GICP between DLIO and DLO

Hi,
Thank you for your work!
I found that a guess transformation matrix is passed into the scan-to-map GICP align function in DLO, while DLIO transforms each new scan point cloud into the global frame before GICP. Does this have any impact on the accuracy or efficiency of the GICP solve? Also, are the GICP implementations used by these two algorithms exactly the same? :D

Struggling with Point Cloud Distortion Correction while Adapting HESAI (Pandar) LiDAR

Hello, Kenny!

Thank you for your remarkable work. I am currently adapting a HESAI (Pandar) LiDAR and am having difficulty making progress on point cloud distortion correction.

[Screenshots: RViz view and terminal log]

I have added some log statements to pinpoint the problem, and it seems that the LiDAR timestamp is earlier than the latest timestamp in the IMU buffer. My IMU is a 6-axis unit operating at 100 Hz, and I understand the characteristics of the HESAI point cloud timestamps. Regarding the parameters, I have only modified the extrinsics and the related topics.

Could you please provide some insights or suggestions on how to address this issue? Any help would be greatly appreciated.

Trivial typos

Hello author,
Thanks for sharing your excellent work! I have some trivial questions.
Q1: For the first IMU message processed after this->imu_calibrated = true;, it seems this->prev_imu_stamp is already equal to imu->header.stamp.toSec(), not the default zero value?

double dt = imu->header.stamp.toSec() - this->prev_imu_stamp;

Q2: Shouldn't the ROS odometry linear velocity be expressed in some body frame (this->state.v.lin.b), not in the world frame?

this->odom_ros.twist.twist.linear.x = this->state.v.lin.w[0];

What is the shortest path I can take to understand the geometric observer?

The geometric observer in your paper is derived from contraction analysis, which is beyond my comprehension. As a mediocre SLAMer, I know a bit about calculus, linear algebra, probability, optimization, and Lie theory. However, when I asked ChatGPT how to learn contraction analysis, it started talking about ODEs, functional analysis, dynamical systems, and control theory. What is the minimal knowledge I need from these areas to understand your paper "A Contracting Hierarchical Observer for Pose-Inertial Fusion"? What tutorials would you recommend?

runtime issues when compiling with clang

Hey there,
when I compile DLIO (and DLO) with clang, the average computation time more than triples compared to gcc (~100 ms instead of ~30 ms for an Ouster OS0-128).
I tried this on an AMD Ryzen 7 5800H with Ubuntu 20.04, using clang 12.0 and gcc 9.4.
With clang, the CPU runs at full utilization across all 16 threads with a load around 99%, while dropping around 30% of the scans.
With gcc, the CPU load is around 8%, with slightly more than one core utilized.
Any idea where this problem may come from?

Best regards, Jan

Instant divergence despite no motion

Hi,
thanks for open-sourcing this great work :)
I'm trying to run your approach on the 2021 Hilti Challenge dataset using the Livox LiDAR. I tried different sequences, both IMUs (/alphasense/imu and /alphasense/imu_adis), and different extrinsics (z up, z down), but it always seems to diverge instantly, even though the sensor is static at the beginning of each scene. I'm using the default DLIO parameters, except for the adapted extrinsics. I also tried disabling deskewing, but it didn't have any effect.

I would appreciate any suggestions on what has to be changed to make it work.

Intrinsics and extrinsics

Hey! First, thanks for the great paper and code!
I used this tool for intrinsics and extrinsics: https://github.com/hku-mars/LiDAR_IMU_Init
I need help understanding how to plug these values into DLIO:

Initialization result:
Rotation LiDAR to IMU (degree) = -0.084620 -0.649203 -1.151916
Translation LiDAR to IMU (meter) = -0.056023 -0.021736 0.026451
Time Lag IMU to LiDAR (second) = -0.010057
Bias of Gyroscope (rad/s) = -0.007985 -0.012948 0.020327
Bias of Accelerometer (meters/s^2) = 0.010086 0.009814 0.010098
Gravity in World Frame(meters/s^2) = -0.027753 0.030412 -9.809914

Homogeneous Transformation Matrix from LiDAR to IMU:
0.999734 0.020119 -0.011298 -0.056023
-0.020101 0.999797 0.001704 -0.021736
0.011330 -0.001477 0.999935 0.026451
0.000000 0.000000 0.000000 1.000000

Refinement result:
Rotation LiDAR to IMU (degree) = -0.014112 -0.341810 -1.171677
Translation LiDAR to IMU (meter) = -0.042382 -0.019593 0.020312
Time Lag IMU to LiDAR (second) = -0.010057
Bias of Gyroscope (rad/s) = -0.008959 0.001204 0.013092
Bias of Accelerometer (meters/s^2) = 0.013954 0.009286 -0.020293
Gravity in World Frame(meters/s^2) = -0.029866 0.030184 -9.791937

Homogeneous Transformation Matrix from LiDAR to IMU:
0.999773 0.020448 -0.005959 -0.042382
-0.020446 0.999791 0.000368 -0.019593
0.005965 -0.000246 0.999982 0.020312
0.000000 0.000000 0.000000 1.000000

By the way, I am using a Livox Mid-360.

Ground Truth

Hello!
Where can we get the ground truth for the datasets you provide?

CPU load

Hello,

First of all, thanks for a great paper and code; it is very easy to use and produced great results!
I just wondered where I can set it to use a constant number of CPU cores. When downsampling the point cloud I naturally got a lower computation time, but the number of CPU cores used and the CPU load also decreased. To further decrease computation time, I would like to increase the number of CPU cores used. I tried changing the num_threads variable, but that didn't change anything in the debug output. Any idea where I could change this, or whether this is even an option?

Issue with submap generation

Great work! While studying your code, I noticed that the submap used for GICP with each new scan seems to be generated from the estimated state of the previous frame, rather than from the predicted state of the current frame. Specifically, it appears that the getNextPose function is executed before the buildKeyframesAndSubmap function. This seems to deviate from what is described in the paper. May I kindly ask whether my understanding is correct? If so, what are the advantages of doing it this way? Thank you very much in advance for any response you can provide.

Intended logic in updateKeyframes()

I have some questions about this section of updateKeyframes():

bool newKeyframe = false;

// spaciousness keyframing
if (abs(dd) > this->keyframe_thresh_dist_ || abs(theta_deg) > this->keyframe_thresh_rot_) {
  newKeyframe = true;
}

// rotational exploration keyframing
if (abs(dd) <= this->keyframe_thresh_dist_ && abs(theta_deg) > this->keyframe_thresh_rot_ && num_nearby <= 1) {
  newKeyframe = true;
}

// check for nearby keyframes
if (abs(dd) <= this->keyframe_thresh_dist_) {
  newKeyframe = false;
} else if (abs(dd) <= 0.5) {
  newKeyframe = false;
}

To my understanding, there are some possible issues here:

rotational exploration keyframing

// rotational exploration keyframing 
if (abs(dd) <= this->keyframe_thresh_dist_ && abs(theta_deg) > this->keyframe_thresh_rot_ && num_nearby <= 1) { 
  newKeyframe = true; 
} 

is always overwritten by

if (abs(dd) <= this->keyframe_thresh_dist_) {
  newKeyframe = false;
}

num_nearby has no effect

If this statement

 if (... || abs(theta_deg) > this->keyframe_thresh_rot_) { 
   newKeyframe = true; 
 } 

is true (from the second condition),

// rotational exploration keyframing 
if (abs(dd) <= this->keyframe_thresh_dist_ && abs(theta_deg) > this->keyframe_thresh_rot_ && num_nearby <= 1) { 
  newKeyframe = true; 
} 

this statement doesn't change newKeyframe, even if num_nearby is larger than 1. At this point, abs(dd) <= this->keyframe_thresh_dist_ is always true, given that abs(dd) > this->keyframe_thresh_dist_ was false in the preceding condition.

Intended logic

What is the intended logic here? I couldn't find much about this in your papers, except the adaptive thresholding distance to account for tight spaces.

Without the rotational exploration keyframing, the logic is clear to me (roughly):

if abs(dd) <= this->keyframe_thresh_dist or abs(dd) <= 0.5 -> return false
else if abs(dd) > this->keyframe_thresh_dist -> return true

Now, I don't quite understand why keyframes are updated when a large rotation is registered. Consider a scenario where the robot stays in the same spot but turns 180°. Why do I have to update keyframes? The map is rotation-invariant, so to speak, so I have gained no new information about the environment just by turning. I guess this is what was meant by the following statement in the DLO paper?

Keyframe nodes are commonly dropped using fixed thresholds (e.g., every 1 m or 10° of translational or rotational change)

Thanks for your help, happy to discuss!

some questions about your new paper

Hi, I recently read your group's papers and have some questions.
The first concerns the paper "Direct LiDAR-Inertial Odometry and Mapping: Perceptive and Connective SLAM". How did you add the connectivity factor to your factor graph? Did you simply compute a relative pose from the front-end odometry whenever two non-adjacent keyframes have sufficient overlap, and use your 3D Jaccard index to derive the noise model?
The second concerns the paper "Joint On-Manifold Gravity and Accelerometer Intrinsics Estimation for Inertially Aligned Mapping". Is the gravity factor you compute used only in the mapping stage, and not added to the factor graph mentioned above? I am also curious why you project both the gravity (and covariance) estimated in the inertial frame and the fixed gravity vector in the inertial frame into the body frame and use these two as the error function. Why not compute the error directly in the inertial frame?
Thank you for open-sourcing your code to the community. Looking forward to your reply.

Livox+px4

Hello, and thank you for open-sourcing this. Can DLIO be used with a Livox LiDAR and PX4?

Some questions about timestamp in odom.cc

Hi, thank you for publishing such a good LIO framework!
I have a question about the timestamps in the function integrateImu(). I think start_time should be the first point's timestamp, but you use prev_scan_stamp (the median timestamp of the last scan) in deskewPointcloud(). Could you please explain?

Tests related to high-speed movements

Hello, first of all, thank you very much for this excellent work! I am currently testing high-speed scenes and see large drift: I am using a MID360, the trajectory is about 700 m, and the speed is about 12 m/s. It previously ran well under slow motion. Are there parameters I should adjust? I hope you can give me some advice.

Best wishes!

Misaligned point clouds and motion issues with TurtleBot3 Burger

Hi! I was trying to test the DLIO (ROS2 branch) algorithm with a TurtleBot3 Burger in a simulated scenario for educational purposes. I am using Ubuntu 20.04 with ROS2 Foxy. However, I ran into some trouble and am currently trying to solve it.

  1. If I understood correctly, the extrinsic parameters (i.e. baselink2imu and baselink2lidar) should be equivalent to the base_link->imu_link and base_link->base_scan transforms that are defined in the .xacro file. Am I right?

  2. What I wish to obtain is a map reference frame as in the tf tree below (to do so, I used a static transform publisher, though other packages such as Cartographer provide a better transform, so I tried both). That is, I wish my fixed frame to be map instead of odom. My idea is to change the reference frame point clouds are published in from odom to map.
    [Screenshot: tf tree]
    However, when I launch DLIO, I not only get a large misalignment between the built map and the current scan, but the reference frames also sometimes "jump". Here are some screenshots from RViz and Gazebo.
    [Screenshots: Gazebo and RViz views]
    (The purple point cloud is the current scan, while the red one represents the map.)

  3. A last problem concerns the position computed with respect to the origin. I see that, even if the robot does not move, the position oscillates. Furthermore, the trajectory is badly computed. Is this related to IMU calibration, or to the reference-frame issue above?
    EDIT: The terminal shows that the position changes significantly while the traveled distance stays at 0. However, if I move the robot around and then stop it, the position keeps changing and the traveled distance strangely keeps increasing.

I may test with a real TurtleBot3 Burger tomorrow and check whether I encounter the same problems.

Point Cloud Visualizations

Just a quick question: which app/library/package was used to create the point cloud visualizations? They look very nice.

No localization with Livox Lidar

Thanks a lot for open-sourcing such great work. However, when I run the code, the odometry output pose stays at the origin and never changes. I have checked the LiDAR and IMU data topics, and there is nothing wrong with them. The LiDAR frequency is 10 Hz, and the IMU frequency is 200 Hz. What are the possible reasons for this? Looking forward to your reply~

[Screenshots: terminal output and RViz]

Extrinsics and Ouster Driver

Hi, and thanks for open-sourcing your work!

I assume your extrinsics are tailored to the OS1-64 and taken from the datasheet/driver, correct? Does base_link, your robot frame, refer to the sensor frame of the Ouster driver (p. 14)?

memory corruption

I got a memory corruption while running it on ROS2 Humble for around 30 minutes. The output is below:

[dlio_odom_node-4] double free or corruption (!prev)

I have never faced this issue before, so I am not sure whether it is reproducible.
I will post here once I face the same issue again.

Addressing Significant Cumulative Errors with UrbanLoco Dataset

Hi! I have tested your algorithm using Dataset 5 (HK-Data20190117) from the UrbanLoco dataset. However, I noticed significant cumulative error by the time the vehicle returns to the starting point. Below are the parameters I used. Could you please advise how to adjust them for better performance? Thank you.

[Screenshots: trajectory with cumulative drift]

dlio:

  frames:
    odom: odom
    baselink: base_link
    lidar: lidar
    imu: imu

  map:
    waitUntilMove: true
    dense:
      filtered: false
    sparse:
      frequency: 1.0
      leafSize: 0.25

  odom:

    gravity: 9.80665

    imu:
      approximateGravity: false
      calibration:
        gyro: true
        accel: true
        time: 3
      bufferSize: 5000

    preprocessing:
      cropBoxFilter:
        size: 1.0
      voxelFilter:
        res: 0.25

    keyframe:
      threshD: 1.0
      threshR: 45.0

    submap:
      useJaccard: true
      keyframe:
        knn: 10
        kcv: 10
        kcc: 10
    gicp:
      minNumPoints: 64
      kCorrespondences: 16
      maxCorrespondenceDistance: 0.5
      maxIterations: 32
      transformationEpsilon: 0.01
      rotationEpsilon: 0.01
      initLambdaFactor: 1e-9

    geo:
      Kp: 4.5
      Kv: 11.25
      Kq: 4.0
      Kab: 2.25
      Kgb: 1.0
      abias_max: 5.0
      gbias_max: 0.5

Possible Undefined Behavior / NaNs

I think this code produces NaNs or undefined behavior if the time difference is 0.

double dt = imu->header.stamp.toSec() - this->prev_imu_stamp;
this->imu_rates.push_back( 1./dt );
if (dt == 0) { return; }

From the reference:

If the second operand is zero, the behavior is undefined, except that if floating-point division is taking place and the type supports IEEE floating-point arithmetic (see std::numeric_limits<T>::is_iec559), then:
- if one operand is NaN, the result is NaN
- dividing a non-zero number by ±0.0 gives the correctly-signed infinity and FE_DIVBYZERO is raised
- dividing 0.0 by 0.0 gives NaN and FE_INVALID is raised

The straightforward solution would be to check for 0 before adjusting imu_rates. I am not sure if that would break some other parts of the code that depend on imu_rates.

I can create a PR and test with the provided data if you agree this could be an issue.
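
A minimal sketch of the reordering I have in mind (same variables as the snippet above):

double dt = imu->header.stamp.toSec() - this->prev_imu_stamp;
if (dt == 0) { return; }              // guard before any division by dt
this->imu_rates.push_back( 1./dt );   // only well-defined rates enter the buffer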

positive constants in geometric observer

Hello author,
I am delighted to see that you have open-sourced this work. I tested some data right away, and it worked very well. I am interested in the positive constants in the geometric observer. How are they chosen? Do they depend on the usage scenario?
Thanks again for your outstanding contribution!

Odom node crashes when unstructured pointcloud with width=0 is given, due to division by zero error

When dlio::OdomNode::publishCloud(published_cloud, T_cloud) is called with published_cloud having a width of 0, the odometry node crashes due to a propagated division-by-zero error. This happens because publishCloud calls the following method with the copy_all_fields argument set to true (the default):

pcl::transformPointCloud (const pcl::PointCloud<PointT> &cloud_in,
                          pcl::PointCloud<PointT> &cloud_out,
                          const Eigen::Matrix<Scalar, 4, 4> &transform,
                          bool copy_all_fields)
{
  ...
  if (copy_all_fields)
    cloud_out.assign (cloud_in.begin (), cloud_in.end (), cloud_in.width);
  else
    cloud_out.resize (cloud_in.width, cloud_in.height);
  ...
}

This method furthermore calls the following assign method:

template <class InputIterator>
inline void
assign(InputIterator first, InputIterator last, index_t new_width)
{
  points.assign(std::move(first), std::move(last));
  width = new_width;
  height = size() / width;
  if (width * height != size()) {
    PCL_WARN("Mismatch in assignment. Requested width (%zu) doesn't divide "
             "provided size (%zu) cleanly. Setting height to 1\n",
             static_cast<std::size_t>(width),
             static_cast<std::size_t>(size()));
    width = size();
    height = 1;
  }
}

The division by width results in an uncaught division-by-zero error. This is easily fixed by calling pcl::transformPointCloud with copy_all_fields set to false.
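
A minimal sketch of that fix (published_cloud and T_cloud follow the naming above; the point type PointType is a placeholder for whatever the surrounding code uses):

pcl::PointCloud<PointType>::Ptr transformed (new pcl::PointCloud<PointType>);
// copy_all_fields = false resizes via (width, height) instead of dividing by width
pcl::transformPointCloud (*published_cloud, *transformed, T_cloud, false);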

ros2 colcon build error

[Screenshot: colcon build error]

I have tried my best to find a solution, but nothing has worked.
The problem seems to occur only on Foxy, but I need Foxy, not Humble.
Did you encounter this problem?

Something about function getNextPose() in odom

Hello, and best wishes!

this->T = this->T_corr * this->T_prior

Why does the matrix T_corr need to be multiplied by the current prior T_prior? T_corr is the relative transformation to the submap, so isn't it already the current global pose?

updateState not at the same timestamp in propagateState

propagateState is called every time an IMU message is received, while updateState corrects the state using the geometric observer after scan-to-map registration. In the geometric observer, errors between the propagated and measured states are computed to perform state correction.

The timestamp of the propagated state is the latest IMU timestamp (1678822355.936), while the measured state is at the LiDAR timestamp (1678822355.870) without motion correction, or at the median time of the scan (1678822355.870 + 0.025) with it.

I think the propagated state this->state and the measured state pin/qin should describe the state at the same timestamp.


Something wrong with aggressive motion data

Hi, dear author!
Thank you for your work! I have tested the program with your example data, and it runs very well.
I am interested in the situation where a spinning LiDAR undergoes very aggressive motion, so I tested your program with other data featuring aggressive motion in a staircase: https://drive.google.com/drive/folders/1f-VQOORs1TA5pT-OO_7-rG0kW5F5UoGG
I tested just 2022-08-30-20-33-52_0.bag. The odometry drifts severely at the very beginning.
I believed there was something wrong with the extrinsic parameters, so I tried several times. It turns out that the parameters may be approximately as the screenshot below shows.
[Screenshot: approximate extrinsic parameters]
But it still drifts after some seconds when walking into a very narrow space.
Should I set accurate extrinsic parameters, or is something wrong with this aggressive motion in a narrow space?
By the way, how do you define the extrinsics baselink2imu and baselink2lidar? Do you mean baselink2lidar as P_L = R*P_b2l + t?
[Screenshot: extrinsics definition]
