
r-vio's People

Contributors

huaizheng

r-vio's Issues

Question about the preintegration estimate

Dear Professor,
I have found the composition equation in the paper "High-Accuracy Preintegration for Visual-Inertial Navigation", but I don't know how to calculate the derivatives, and I cannot find the supplementary material. Could you please give me some tips?
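For what it's worth, the standard composition of two consecutive preintegrated intervals $[t_i,t_j]$ and $[t_j,t_k]$ in the preintegration literature is the following (a sketch from the general theory, not necessarily the exact notation of that paper's supplementary material); the derivatives come from differentiating each line with respect to the components of the two interval terms:

```latex
\begin{aligned}
\Delta R_{ik} &= \Delta R_{ij}\,\Delta R_{jk},\\
\Delta v_{ik} &= \Delta v_{ij} + \Delta R_{ij}\,\Delta v_{jk},\\
\Delta p_{ik} &= \Delta p_{ij} + \Delta v_{ij}\,\Delta t_{jk} + \Delta R_{ij}\,\Delta p_{jk}.
\end{aligned}
```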

A question about the propagation equation

Hi, I'm new to the robocentric formulation, and I'm confused about the composition step applied after the IMU propagation. Could you please give me some advice about the code in question? Is there a paper about this equation?

A problem with the 3-sigma bounds

Hi @huaizheng
Thanks for the awesome work; it performs well on the EuRoC dataset. However, when I ran a 3-sigma-bound experiment with the R-VIO results, I found some problems on the EuRoC V102 dataset. Here are my results.


Figure 3 shows that the consistency of the system is problematic: the covariance of the pose (PKK(3,3), PKK(4,4), PKK(5,5)) is too small.

Thank you in advance.

Trajectory drift with a RealSense D435i camera

Hello! I am trying to run your code with my RealSense D435i camera. I have already changed all the parameters in the yaml file according to my camera settings. However, when I run RViz, the trajectory quickly starts to drift significantly. Currently all I am doing is tuning different parameters; I didn't change the code at all. Could you please give me some insight? Thank you very much!

Question about the update.

Sorry to disturb you. I have a question about the update process: during the update, the error state of the global variables should be 0, right? But why is there still an error correction applied to them at line 549 of Updater.cc?
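For background: in an error-state EKF the correction is computed jointly for the whole state, so a component whose prior error-mean is zero still receives a nonzero correction whenever it is correlated (through the covariance P) with the observed quantities. In generic form (not specific to R-VIO's frame conventions):

```latex
\delta x = K\left(z - h(\hat{x})\right), \qquad
K = P H^{\top}\left(H P H^{\top} + R\right)^{-1}, \qquad
\hat{x} \leftarrow \hat{x} \boxplus \delta x
```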

Run error on Ubuntu 18.04

double free or corruption (out)
Code location: R-VIO/src/rvio/Ransac.cc

fsSettings["Camera.T_BC0"] >> T;

Thank you

Some questions about equation derivation and experimental details

Please excuse me for bothering you again. Since I last asked here, I found out why I failed to run the V* and M* EuRoC datasets. Unfortunately I still ran into some problems, so I want to ask about some details of your experimental procedure. I also have a question about an equation derivation in your IJRR paper 'Robocentric visual-inertial odometry'.
(1) My derivation question concerns equations (42) and (43), as shown in the picture below (especially the middle part, columns 10-12).
I do not know how equation (43) was obtained. If possible, could you please show the necessary derivation steps? Thank you very much.
(2) I also want to ask how you played back the bags in your experiments, especially during algorithm testing. Last time, I found out why the plain 'rosbag play' command failed for me: the IMU and image topic messages went out of sync, which caused severe drift. So after running 'rosbag play' I kept pressing the 's' key (stepping through the bag), and I obtained the same trajectory results as you did. But this creates a new problem: playing the bag this way is far too slow, so I cannot run any timing tests. I tried storing the IMU and image data in two buffer queues, similar to VINS-Mono, but the drift is just as bad as with plain 'rosbag play' without stepping. That is why I want to ask which bag-playback method you used when you finished your paper. Thanks very much.

Question about the IMU Propagation using Robocentric formulation

Dear Researcher,
I have studied the code of "Robocentric Visual-Inertial Odometry" in detail recently. I really appreciate the idea of using a robocentric formulation to avoid the inconsistency problem and initialization failures. Out of curiosity, I tested the code in a Gazebo simulation environment, using the husky simulator to simulate the IMU sensor. To understand the propagation in the robocentric formulation, I collected the ground-truth IMU measurements without bias disturbance, as below:
With this setting, I collected the simulation data below:
Link: https://pan.baidu.com/s/11zdWTpG0oJ1bfPKes9F8wQ (extraction code: snjw, shared via Baidu Netdisk)
(If you have trouble with Baidu Netdisk, you can also download the data from https://1drv.ms/u/s!Av8TDx3WXbi_gm4juVcof1GeClpE?e=OX4WsI)
Using this data, I tested the LINS and FAST-LIO algorithms with their updates disabled, and the results are fine.
The Fast-LIO Result:

The LINS Result:

However, when I use the R-VIO code without the visual update, the result is weird:

I disabled the visual update directly in the code.

I don't know why the IMU pose drifts quickly under R-VIO's IMU propagation. Could you please give me some advice? I have also considered the gravity effect, but even when I fix the gravity orientation the result is still not good.

I really appreciate your help, thank you very much.
Yours,
Qi Wu

Evaluating R-VIO on the EuRoC dataset

Hi, very impressive work! Did you compare R-VIO with ROVIO on the EuRoC datasets? Also, how do you evaluate R-VIO against the EuRoC ground truth? Thank you.

Do Gaussian thresholding and box blurring help?

I noticed that you mentioned applying Gaussian thresholding and box blurring to the image before the KLT tracking, but I did not find the implementation in your code. Do these two tricks help with blurred, dark, or varying-illumination images?

Planar motion demonstration

In the paper, you state that R-VIO works fine under planar motion. This is very cool if it really works without wheel odometry!
Could you provide a dataset on which R-VIO works under planar motion? I missed these experiments in the paper. :(

I tried my own dataset, but I am failing, possibly due to bad thresholds.

Results with respect to moving world

Hey @huaizheng ,

I'll be honest: I haven't dug through your paper or code thoroughly yet; however, I am very familiar with VINS. I was wondering if you have any results or intuition for how well your method would perform in a moving world (i.e., all the visual features tracked belong to some moving vehicle). Does this break the underlying assumptions of your approach, or is it still OK since everything is focused on the relative pose? I know VINS, ROVIO, SVO, and others all assume that the features are static in the inertial frame, so they quickly break when tracking moving features.

In the meantime I will read through your paper to see if I can get an idea myself. Awesome work though.

No visualization is seen in RVIZ

I followed your tutorial and typed:
roslaunch rvio euroc.launch
rviz rvio_rviz.rviz
rosbag play --pause V1_01_easy.bag /cam0/image_raw:=/camera/image_raw /imu0:=/imu
But no trajectory is shown in RViz.
How can I solve this?

Large bias fluctuations in R-VIO?

Hello!
I have a question: why do you put the gravity in the state vector?
As in most papers, the local gravity can simply be calculated from its global counterpart ([0, 0, 9.8]) via a rotation matrix (from the world frame to the IMU frame).
In my experiments, I find the estimated biases in R-VIO fluctuate more than their counterparts in ROVIO, whose state vector does NOT include the local gravity.
I wonder whether the local gravity in R-VIO's state vector is a major factor in the large bias fluctuations?
Would you please share your opinion on this issue?
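For reference, the relation described above is the following (sign and frame conventions vary between papers):

```latex
{}^{I}\mathbf{g} \;=\; {}^{I}_{G}R \; {}^{G}\mathbf{g},
\qquad {}^{G}\mathbf{g} \approx \begin{bmatrix} 0 & 0 & 9.8 \end{bmatrix}^{\top} \mathrm{m/s^2}
```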

R-VIO initialization

I was wondering whether R-VIO needs the IMU to be stationary for a minimum time for initialization.

What I'm seeing after testing a couple of MAV EuRoC bags is that the bags that start with the drone stationary seem to initialize well. However, the MH_01_easy.bag dataset, for example, starts with the drone being moved up and down first (presumably to initialize a VO), and that seems to cause R-VIO to diverge significantly at the start. Could this be because the sliding window hasn't been filled yet and the up-and-down motion causes the initialization to fail? If so, this might be worth mentioning in the README.

Tested:

Good Initialization --- V1_01_easy.bag -- [stationary start]
Good Initialization --- V1_02_medium.bag -- [stationary start]
Good Initialization --- V1_03_difficult.bag -- [stationary start]
Bad Initialization --- MH_01_easy.bag -- [dynamic start]
Bad Initialization --- MH_02_easy.bag -- [dynamic start]
Good Initialization --- MH_03_medium.bag -- [stationary start for 1s then up and down]

(Screenshot: R-VIO running on MH_01_easy.bag)

A question about the initialization

Sorry, there is something about the initialization I still don't quite understand. My question is: both the paper and the code take the IMU acceleration measured before any motion as the local gravity. Is fast initialization still possible when the system starts directly in motion? For example, if I tie the camera to a rope and immediately start swinging it in circles, would the current initialization algorithm still work?

Question about the urban driving dataset

Hello, I'm trying to run the code on a public dataset, the Brno Urban Dataset, which contains a 10 Hz, 1920x1200-pixel RGB camera and a 400 Hz IMU, but unfortunately the algorithm does not seem to perform very well. The trajectory sometimes shows signs of divergence at the beginning and is not stable when the vehicle turns. I have tried adjusting parameters such as the camera image noise, the number of features per image, and the tracking length, but the results are still not satisfactory. I saw the outdoor driving test in the R-VIO paper, and I believe the work is very solid, so I would like to ask how to configure the parameters. Thank you very much!

Camera-IMU temporal extrinsic not being set in code

Hi Zheng Huai and co,

I am still working through the setup I described in https://github.com/rpng/R-VIO/issues/13, but in the process I noticed that changes I made to the Camera.nTimeOffset parameter in the yaml were not loaded into the code (i.e., mnCamTimeOffset = 0.0 always). Looking a little further, there seems to be a typo:

In line 65 of rvio_euroc.yaml:
Camera:nTimeOffset: 0
And in line 71 of System.cc:
mnCamTimeOffset = fsSettings["Camera.nTimeOffset"];

Note the colon (:) vs. the period (.). I changed the colon to a period and the parameter now loads.

Best,
~Jeff

Running R-VIO on the KITTI dataset

Hi,

First of all, thank you for sharing your code with the community!
I am trying to apply the code to the KITTI raw dataset, specifically a residential recording (2011_10_03_drive_0027). I start the estimation from a point where the car is at a standstill and then makes a right turn, because I was hoping the gravity and bias estimation would benefit from this.
I have already found that the accuracy is highly dependent on the IMU parameters, but unfortunately I couldn't find the actual values for KITTI. I tried to estimate them myself via an Allan standard deviation plot from a few seconds of standstill data on a different track, but as I understand it this is only sufficient for the white-noise values, not the bias random walk, so I guessed an appropriate value for the latter.
I also found that I need much higher white-noise values than calculated, about two orders of magnitude, for the turns to be recognized at all (I used sigma_g: 0.0046, sigma_a: 0.06).
As you can see in the attached screenshot, the algorithm makes massive position corrections at each 90-degree turn. I assume this is because the uncertainty is reduced when the car is turning, but I wonder why the uncertainty and deviation are so high in the first place. From what I have seen in your paper, you had much less deviation and fewer corrections when you ran the code on your urban driving data.
I was hoping you might have an idea of why the performance is not as good as in your paper and how I might improve it.
I will happily provide further details of the configuration or data I have used, if necessary.
Thank you very much!

Question about tricks for using R-VIO in real environments

Dear Researcher:
I'm a PhD student at Shanghai Jiao Tong University, and I recently read the paper "Robocentric Visual-Inertial Odometry". I really appreciate the idea of using a robocentric formulation to avoid the inconsistency problem and initialization failures. Out of curiosity, I tested the algorithm in different environments: a Gazebo simulation and a real indoor setting. I noticed that in the Gazebo environment the algorithm performs better than others; however, in the real environment R-VIO often fails. I would be grateful to know whether there are any tricks for using this algorithm in real environments. Here are my considerations:

  • The extrinsic parameter between the IMU and the camera: R-VIO uses the JPL formulation, so the extrinsic should represent the transform from the camera to the IMU.
  • The IMU noise parameters (for ba and bg) should be set a little larger to absorb some additive noise, as in #23.
  • The initialization module's motion-detection threshold should be set carefully; it has a deep impact on the estimated bias and gravity.
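On the first point, flipping the extrinsic direction is just the rigid-transform inverse: if a calibration tool reports the transform in one direction as $\{R, \mathbf{p}\}$, the opposite direction is

```latex
T^{-1} \;=\; \begin{bmatrix} R & \mathbf{p} \\ \mathbf{0}^{\top} & 1 \end{bmatrix}^{-1}
\;=\; \begin{bmatrix} R^{\top} & -R^{\top}\mathbf{p} \\ \mathbf{0}^{\top} & 1 \end{bmatrix}.
```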

These are my tips for using R-VIO in real environments. From your practice, is there any other useful information I have missed? Could you please give me some advice on tuning the code for a UAV environment? If I re-implement this algorithm, what else should I be careful about?

Thanks
Qi Wu

Does the gravity need to be normalized?

Thanks for sharing the code. I tested it on my own datasets and found the VIO system unstable, but if I remove the gravity normalization, it runs more stably. Should the gravity be normalized?

Error messages and tuning when porting R-VIO to a non-ROS setup

Hello Zheng Huai and co!

Thanks for this awesome bit of code; I especially appreciate that it is very clean and easy to follow.

So, I'm trying to deploy R-VIO online on a custom, non-ROS setup. I have the camera calibration (intrinsics) and the extrinsics (I think), but not the IMU noise properties. I get IMU readings at around 200 Hz and images at around 15-20 Hz (good enough, I hope?). I have plugged the code into my framework, so it is now taking in data and outputting poses, but I am encountering some issues.

I turned on the print/debugging messages and noticed that I was getting a lot of:
Invalid inverse-depth feature estimate (1)!
Invalid inverse-depth feature estimate (1)!
...
Along with the occasional:
Failed in Mahalanobis distance test!
Followed by:
Too few measurements for update!
Followed by my position drifting off into the great beyond...

Digging a little deeper, I started looking into what the inverse-depth bearing parameters were doing. I saw that phi and psi are usually good and within the +-pi/2 bounds, but rho always goes negative after the LM solve. The solver iterations seem to converge fairly quickly (3-4 steps), with phi and psi not changing much but rho jumping to some negative value. For the features that do end up with a positive rho, they always fail the Mahalanobis test, with the distance being on the order of 1e3~1e5.

Do you have any thoughts on what could be causing this? My initial reaction was to review my parameters, but looking at the values, the only ones that differ significantly from the EuRoC ones are the extrinsics (and the IMU noise properties, which I left the same as for EuRoC). From what I can tell the extrinsics are used in the update step, so would you say the symptoms I'm seeing come from bad extrinsics? I'm fairly confident in my rotation matrix, but the translation may be a little off; how sensitive is the system to that (or to bad IMU noise properties, for that matter)?

I noticed that in https://github.com/rpng/R-VIO/issues/8 you mentioned having correct parameters is of paramount importance and I totally agree, so I guess I'm inquiring to try and find the root cause of these messages so I can have an idea of which parameters I need to tune (or if the issue is something else completely!).

If you have additional insight or thoughts I'm all ears!

Thank you for your time,
~Jeff

About the algorithm's severe and unbelievable drift in several static situations

Hello, I am sorry to bother you; I want to seek some help. We have met several confusing problems:
(1) Following the instructions in 'Readme.md', we ran the algorithm on the EuRoC datasets, such as MH_01_easy.bag, and ran into a tough situation: the trajectory always drifts far away at an incredible rate at the beginning, especially when the MAV stays static for a long time at the start. This directly causes the whole run to fail. But in your paper we did not find the same problem on these datasets. Could you please give us some advice? Is this a problem in the code, or something else?
(2) Similarly, when the MAV stays static for a short time during the run, such as some periods in MH_01_easy.bag, the trajectory (already ruined by the static beginning) still keeps extending at a fast rate; meanwhile, I noticed that qkG keeps increasing. The paper claims the algorithm can handle such situations, so why does the estimation become so bad?
That is all. We really hope someone can answer our questions. Thanks a lot!

Slow publish rate of VIO (px4_realsense_bridge_node)

Hi,
I'm using VIO (https://github.com/Auterion/VIO.git) to make the connection between VINS-Fusion (for localization) and PX4 (MAVROS).
The /camera/odom/sample topic in VIO, which contains the VINS data, is published at 15 Hz (rostopic hz /camera/odom/sample). But px4_realsense_bridge_node, which subscribes to it, publishes /camera/odom/sample_throttled at only 2 Hz, which is extremely slow and causes slow refreshing of the point cloud and other data in Fast-Planner. @beomsu7, you also used VIO to connect the T265 (odometry) to PX4, so please help with this issue.

(ROS graph screenshot)

How to install the Eigen library?

Hello,
Can anyone help me install the Eigen library on Ubuntu with ROS?
Are there any useful links?

Thanks

Question about the Ω(ω) matrix

Hi, regarding the Ω(ω) matrix in the paper "Robocentric Visual-Inertial Odometry":
I'm confused about the index in the lower-right corner of equation (18). It should be zero, not one, right?
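For reference, in the JPL quaternion convention (e.g., the Trawny-Roumeliotis quaternion notes) the kinematics use

```latex
\dot{\bar{q}} = \tfrac{1}{2}\,\Omega(\boldsymbol{\omega})\,\bar{q},
\qquad
\Omega(\boldsymbol{\omega}) =
\begin{bmatrix}
-\lfloor\boldsymbol{\omega}\times\rfloor & \boldsymbol{\omega} \\
-\boldsymbol{\omega}^{\top} & 0
\end{bmatrix},
```

where the lower-right entry of Ω(ω) is indeed the scalar 0.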
