rpng / R-VIO
Robocentric Visual-Inertial Odometry
Home Page: https://journals.sagepub.com/doi/10.1177/0278364919853361
License: GNU General Public License v3.0
Hi @huaizheng
Thanks for the awesome work; it performs well on the EuRoC dataset. However, when I ran a 3-sigma-bounds experiment on the R-VIO results, I found some problems on the EuRoC V102 dataset. Here are my results.
Figure 3 shows that the consistency of the system is problematic: the covariance of the pose (PKK(3,3), PKK(4,4), PKK(5,5)) is too small.
Thank you in advance.
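For context, the 3-sigma consistency check described above can be sketched as follows (a minimal sketch in Python; the variance inputs would be diagonal covariance entries such as the PKK(3,3) ones mentioned in the post, and all numbers here are illustrative):

```python
import math

def three_sigma_bounds(errors, variances):
    """Fraction of estimation errors inside the +/-3-sigma envelope
    implied by the filter's reported covariance diagonal."""
    inside = sum(1 for e, v in zip(errors, variances)
                 if abs(e) <= 3.0 * math.sqrt(v))
    return inside / len(errors)

# Illustrative numbers: with variances around 1e-4 (sigma = 0.01), errors
# of a few centimeters already escape the envelope, which is exactly the
# symptom of an overconfident (inconsistent) filter.
ratio = three_sigma_bounds([0.01, -0.02, 0.05], [1e-4, 1e-4, 1e-4])
```

For a consistent filter this fraction should stay near 99.7% over the whole trajectory; a covariance that is "too small" shows up as errors repeatedly leaving the envelope.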
How can a GPS pose update be added to R-VIO?
Hello! I am trying to run your code with my RealSense D435i camera. I have already changed all the parameters in the yaml file according to my camera settings. However, when I start rviz, the trajectory always drifts heavily. So far all I am doing is tuning different parameters; I didn't change the code at all. Could you please give me some insight? Thank you very much!
Sorry to disturb you. I have a question about the update process. In the update, the error state of the global variables should be zero, right? But why is there still an error correction for them in line 549 of Updater.cc?
double free or corruption (out)
Code location: R-VIO/src/rvio/Ransac.cc
fsSettings["Camera.T_BC0"] >> T;
Thank you
I have read the paper https://arxiv.org/abs/1903.08636, which feeds map points back into the MSCKF via a Schmidt-EKF. If I want to implement a Schmidt-EKF on top of R-VIO, how should I propagate the covariance matrix PAS?
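For what it's worth, the core of a Schmidt-EKF update (compute the standard Kalman gain, then zero the rows for the nuisance block so those estimates are never corrected while their cross-covariance still is) can be sketched as below. The partition into active and nuisance states is illustrative only, not R-VIO's actual state layout or the paper's PAS:

```python
import numpy as np

def schmidt_update(x, P, H, r, R, n_active):
    """Schmidt (consider) EKF update: states past index n_active keep
    their estimates, but the full covariance, including the cross terms,
    is updated via the Joseph form with the nuisance gain zeroed."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # standard Kalman gain
    K[n_active:, :] = 0.0                # Schmidt: never correct nuisance states
    x = x + K @ r
    I_KH = np.eye(len(x)) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T  # Joseph form stays valid for any gain
    return x, P

# Toy 3-state example: first two states active, last one a nuisance state.
x0 = np.zeros(3)
P0 = np.eye(3)
H = np.array([[1.0, 0.0, 1.0]])
r = np.array([0.5])
R = np.array([[0.1]])
x1, P1 = schmidt_update(x0, P0, H, r, R, n_active=2)
# x1[2] stays 0 (nuisance estimate untouched), but its cross-covariance
# with the active states, e.g. P1[0, 2], becomes nonzero.
```

The Joseph form is the key design choice here: it gives a consistent covariance update for any (including suboptimal, zeroed) gain.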
Please excuse me for bothering you again. Since I last asked here, I have found out why I failed to run the V* and M* EuRoC datasets. Unfortunately, I still ran into some problems, so I would like to ask about the details of how you ran them. I also have a question about a derivation in your IJRR paper 'Robocentric visual-inertial odometry'.
(1) My derivation question concerns equations (42) and (43), as shown in the picture below (especially columns 10-12 of the middle part).
I do not know how equation (43) was obtained. If possible, could you please show the necessary derivation steps? Thank you very much.
(2) I would also like to ask how you played the bags when you ran the complete experiments, especially the algorithmic tests. Last time I found out why the "rosbag play" command failed for me: the IMU and image topic messages were out of sync, which caused severe drift. So I kept pressing the 's' key after running "rosbag play", and then I obtained the same trajectory results as you did. But this creates a new problem: playing the bag this way is far too slow, so I cannot run the timing tests. I have also tried buffering the IMU and image data in two queues, similar to VINS-Mono, but that leads to the same severe drift as running "rosbag play" directly without holding 's'. That is why I want to ask how you played the bags for the experiments in your paper. Thanks very much.
Could you share the parameter configuration of the urban driving dataset?
Dear Researcher:
I have recently studied the code of "Robocentric Visual-Inertial Odometry" in detail. I really appreciate the idea of using a robocentric formulation to avoid the inconsistency problem and initialization failures. Out of curiosity, I have examined the code in a Gazebo simulation environment, using the husky-simulator to simulate the IMU sensor. To understand the propagation in the robocentric formulation, I collected ground-truth IMU measurements without any bias disturbance, like below:
With this setting, I collected the simulation data below:
Link: https://pan.baidu.com/s/11zdWTpG0oJ1bfPKes9F8wQ (extraction code: snjw)
(If you have problems with Baidu disk, you can also download the data from https://1drv.ms/u/s!Av8TDx3WXbi_gm4juVcof1GeClpE?e=OX4WsI )
Using this data, I have tested the LINS and Fast-LIO algorithms with the update step disabled, and the results are fine.
The Fast-LIO Result:
However, when I run the R-VIO code without the visual update, the result is weird:
I disabled the update as follows:
I don't know why the IMU pose drifts so quickly under R-VIO's IMU propagation. Could you please give me some advice? I have also considered the gravity effect, but even when I fix the gravity orientation, the result is still not good.
I really appreciate your help; thank you very much.
Yours
Qi Wu
Hi, very impressive work! Did you compare R-VIO with ROVIO on the EuRoC datasets? Also, how do you evaluate R-VIO against the EuRoC ground truth? Thank you.
I noticed that you mentioned applying Gaussian thresholding and box blurring to the image before the KLT tracking, but I did not find the implementation in your code. Do these two tricks help with blurred or dark images and varying lighting?
In the paper, you state that R-VIO works fine under planar motion. This is very cool if it really works without wheel odometry!
Could you provide a dataset where R-VIO works under planar motion? I missed these experiments in the paper. :(
I tried my own dataset, but I am failing, possibly due to bad thresholds.
Hi,
Thanks for your great work! Is the urban dataset available for download, just for testing?
Hey @huaizheng ,
I'll be honest, I haven't dug through your paper or code thoroughly yet; however, I am very familiar with VINS. I was wondering whether you have any results or intuition for how well your method would perform in a moving world (i.e. all the visual features tracked belong to some moving vehicle). Does this break the underlying assumptions of your approach, or is it still fine since everything is focused on the relative pose? I know VINS, ROVIO, SVO and others all assume that the features are static in the inertial frame, so they quickly break when tracking moving features.
In the meantime I will read through your paper to see if I can get an idea myself. Awesome work, though.
I followed your tutorial and typed:
roslaunch rvio euroc.launch
rviz rvio_rviz.rviz
rosbag play --pause V1_01_easy.bag /cam0/image_raw:=/camera/image_raw /imu0:=/imu
But no trace is shown in RViz.
How can I solve this?
Hello!
I have a question: why do you put gravity in the state vector?
As in most papers, the local gravity can simply be computed from its global counterpart ([0, 0, 9.8]) via a rotation matrix (from the world frame to the IMU frame).
In my experiments, I find that the estimated biases in R-VIO fluctuate more than their counterparts in ROVIO, whose state vector does NOT include the local gravity.
I wonder whether the local gravity in R-VIO's state vector is a major cause of the large bias fluctuations.
Could you please share your opinion on this issue?
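The alternative described above (deriving the local gravity from the global vector by rotation instead of estimating it) can be sketched as follows. The z-up world frame and the 9.8 m/s^2 magnitude are assumed conventions, not necessarily the ones R-VIO or ROVIO use:

```python
import math

def local_gravity(R_wi):
    """Map the global gravity vector into the IMU frame: g_i = R_wi^T @ g_w,
    where R_wi rotates IMU-frame vectors into the world frame (z up)."""
    g_w = (0.0, 0.0, -9.8)  # assumed convention: world z axis points up
    # Multiplying by the transpose = dotting g_w with each column of R_wi.
    return tuple(sum(R_wi[r][c] * g_w[r] for r in range(3)) for c in range(3))

# Example: IMU rolled 90 degrees about the world x axis; gravity then
# appears along the IMU's -y axis.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R_wi = ((1.0, 0.0, 0.0),
        (0.0, c, -s),
        (0.0, s, c))
g_i = local_gravity(R_wi)
```

The trade-off the post is asking about is that this version ties gravity rigidly to the orientation estimate, whereas keeping local gravity in the state lets the filter absorb orientation error into it.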
I was wondering whether R-VIO needs the IMU to be stationary for a minimum amount of time for initialization.
What I'm seeing after testing a couple of MAV EuRoC bags is that the bags starting with the drone stationary initialize well. However, the MH_01_easy.bag dataset, for example, starts with the drone being moved up and down (presumably to initialize a VO), and that seems to cause R-VIO to diverge significantly at the start. Could this be because the sliding window hasn't been filled yet and the up-and-down motion causes the initialization to fail? If so, this might be worth mentioning in the README.
Tested:
Good Initialization --- V1_01_easy.bag -- [stationary start]
Good Initialization --- V1_02_medium.bag -- [stationary start]
Good Initialization --- V1_03_difficult.bag -- [stationary start]
Bad Initialization --- MH_01_easy.bag -- [dynamic start]
Bad Initialization --- MH_02_easy.bag -- [dynamic start]
Good Initialization --- MH_03_medium.bag -- [stationary start for 1s then up and down]
^ screenshot of R-VIO running on MH_01_easy.bag
If the dataset ends with a fairly long stationary segment, the trajectory estimated by R-VIO diverges. I think a possible reason is that the visual part re-estimates the landmarks' 3D coordinates every time, instead of keeping the previous landmarks as ORB-SLAM does. Is that right?
Hello, I'm trying to run the code on a public dataset, the Brno Urban dataset, which contains a 10 Hz, 1920x1200-pixel RGB camera and a 400 Hz IMU, but unfortunately the algorithm does not seem to perform very well. The trajectories sometimes show signs of divergence at the beginning and are not stable when the vehicle turns. I have tried to adjust parameters such as the camera image noise, the number of features per image, and the tracking length, but the results are still not satisfactory. I saw the outdoor test in the R-VIO paper, and I believe the work is very solid, so I would like to ask how to configure the parameters. Thank you very much!
Hi Zheng Huai and co,
I am still working through the setup I described in https://github.com/rpng/R-VIO/issues/13, but while working on it I noticed that changes I made to the Camera.nTimeOffset parameter in the yaml were not loaded into the code (i.e. mnCamTimeOffset = 0.0 always). Looking a little further, there seems to be a typo:
In line 65 of rvio_euroc.yaml:
Camera:nTimeOffset: 0
And in line 71 of System.cc:
mnCamTimeOffset = fsSettings["Camera.nTimeOffset"];
Note the colon (:) vs. the period (.). I changed the colon to a period and the parameter now loads.
Best,
~Jeff
Hi,
first of all thank you for sharing your code with the community!
I am trying to apply the code to the KITTI raw dataset, specifically a residential recording (2011_10_03_drive_0027).
I am starting the estimation from a point where the car is at a standstill and makes a right turn afterwards, because I was hoping the gravity and bias estimation would benefit from this.
I already found out that the accuracy is highly dependent on the IMU parameters, but unfortunately I couldn't find the actual values for KITTI. I tried to estimate them myself via an Allan standard deviation plot from a few seconds of standstill data from a different track, but to my understanding this is only sufficient for the white-noise values, not the bias random walk, so I guessed an appropriate value for the latter.
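For what it's worth, the white-noise part of the Allan-deviation procedure mentioned above can be sketched like this (a sketch only: the non-overlapping estimator and the synthetic noise values are assumptions, and as the post notes, the bias random walk needs far longer recordings to show up at large tau):

```python
import math
import random

def allan_deviation(samples, m):
    """Non-overlapping Allan deviation at cluster size m (tau = m * dt)."""
    n = len(samples) // m
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(n)]
    avar = sum((means[i + 1] - means[i]) ** 2
               for i in range(n - 1)) / (2.0 * (n - 1))
    return math.sqrt(avar)

# For pure white noise the Allan deviation falls off as 1/sqrt(tau), so a
# 100x larger cluster should give roughly a 10x smaller deviation; a bias
# random walk would instead flatten and rise again at large tau.
random.seed(0)
gyro = [random.gauss(0.0, 0.01) for _ in range(200000)]  # synthetic 400 Hz gyro
adev_small = allan_deviation(gyro, m=4)
adev_large = allan_deviation(gyro, m=400)
```

Reading the deviation off the -1/2-slope region (conventionally at tau = 1 s) gives the white-noise density, up to the discrete/continuous scaling convention of the target yaml.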
I also found that, for the turns to be recognized at all, I need much higher white-noise values than calculated, by about two orders of magnitude (I used sigma_g: 0.0046, sigma_a: 0.06).
As you can see in the attached screenshot, the algorithm makes massive position corrections at each 90-degree turn. I assume this is because the uncertainty is reduced when the car is turning, but I am wondering why the uncertainty and deviation are so high in the first place. From what I have seen in your paper, you had much less deviation and far fewer corrections when you ran the code on your urban driving data.
I was hoping you might have an idea why the performance is not as good as in your paper and how I could improve it.
I will happily provide further details of the configuration or data I have used, if necessary.
Thank you very much!
Dear Researcher:
I'm a PhD student at Shanghai Jiao Tong University. I recently read the paper "Robocentric Visual-Inertial Odometry". I really appreciate the idea of using a robocentric formulation to avoid the inconsistency problem and initialization failures. Out of curiosity, I have tested the algorithm in different environments: a Gazebo simulation and a real indoor environment. I noticed that in the Gazebo environment the algorithm performs better than others; however, in the real environment R-VIO often fails. I would like to know whether there are tricks for using this algorithm in the real world. Here are my considerations:
Above are my tips for using R-VIO in a real environment. From your practice, is there other useful information I have missed? Could you please give me some advice on tuning the code for a UAV setting? If I re-implement this algorithm, what else should I be careful about?
Thanks
Qi Wu
Thanks for sharing the code. I tested it on my own datasets and found that the VIO system is unstable, but if I remove the normalization of gravity, it runs more stably. Should the gravity be normalized?
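For context, the normalization being asked about (rescaling the estimated gravity to a known magnitude so that only its direction remains a free quantity) can be sketched as follows; treating 9.8 m/s^2 as the known local magnitude is an assumption:

```python
import math

def normalize_gravity(g, magnitude=9.8):
    """Rescale an estimated gravity vector to the known magnitude,
    keeping its direction and discarding the estimated length."""
    norm = math.sqrt(sum(v * v for v in g))
    if norm < 1e-9:
        raise ValueError("degenerate gravity estimate")
    return tuple(v * magnitude / norm for v in g)

g_hat = (0.1, -0.2, -9.6)       # drifted estimate, norm != 9.8
g_fixed = normalize_gravity(g_hat)
```

The design question in the post is whether enforcing this norm constraint helps (it removes one unobservable degree of freedom) or hurts (it injects an abrupt correction the filter covariance does not account for).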
Hello Zheng Huai and co!
Thanks for this awesome bit of code; I especially appreciate that it is very clean and easy to follow.
So, I'm trying to deploy R-VIO online on a custom, non-ROS setup. I have the camera calibration (intrinsics) and extrinsics (I think), but not the IMU noise properties. I get IMU readings at around 200Hz, and images at around 15-20Hz (good enough, I hope?). I have gotten the code to be plugged into my framework so it is now taking in data and outputting poses, but I am encountering some issues.
I turned on the print/debugging messages and noticed that I was getting a lot of:
Invalid inverse-depth feature estimate (1)!
Invalid inverse-depth feature estimate (1)!
...
Along with the occasional:
Failed in Mahalanobis distance test!
Followed by:
Too few measurements for update!
Followed by my position drifting off into the great beyond...
Digging a little deeper, I started looking into what the inverse-depth bearing parameters were doing. I saw that phi and psi are usually good and within the +/-pi/2 bounds, but rho always goes negative after the LM solve. I looked into the solver iterations and they seem to converge fairly quickly (3-4 steps), with phi and psi not changing much but rho just jumping to some negative value. The features that do end up with a positive rho always fail the Mahalanobis test, with distances on the order of 1e3 to 1e5.
Do you have any thoughts on what could be causing this? My initial reaction was to review my parameters, but looking at the values, the only ones that differ greatly from the EuRoC ones are the extrinsics (and the IMU noise properties, which I left the same as for EuRoC). From what I can tell, the extrinsics are used in the update step, so would you say the symptoms I'm seeing come from bad extrinsics? I'm fairly confident in my rotation matrix, but the translation may be a little off, so how sensitive is the system to that (or to bad IMU noise properties, for that matter)?
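For reference, the Mahalanobis distance test named in the log messages above typically looks like the following sketch (the 95% chi-square thresholds are an illustrative gating choice, not necessarily what R-VIO uses):

```python
# 95% chi-square quantiles for small degrees of freedom (standard tables)
CHI2_95 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488}

def mahalanobis_gate(residual, S_inv, dof):
    """Accept a measurement if r^T S^{-1} r is below the chi-square
    threshold for its degrees of freedom; reject it as an outlier otherwise."""
    d2 = sum(residual[i] * S_inv[i][j] * residual[j]
             for i in range(dof) for j in range(dof))
    return d2 < CHI2_95[dof]

# 2-D residual with unit innovation covariance (S_inv = identity):
S_inv = ((1.0, 0.0), (0.0, 1.0))
inlier = mahalanobis_gate((0.5, -1.0), S_inv, dof=2)    # d2 = 1.25, accepted
outlier = mahalanobis_gate((30.0, 40.0), S_inv, dof=2)  # d2 = 2500, rejected
```

Distances of 1e3 to 1e5 as reported above mean the residuals are hundreds of sigmas out, which is why every feature is being gated away and the update starves.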
I noticed that in https://github.com/rpng/R-VIO/issues/8 you mentioned having correct parameters is of paramount importance and I totally agree, so I guess I'm inquiring to try and find the root cause of these messages so I can have an idea of which parameters I need to tune (or if the issue is something else completely!).
If you have additional insight or thoughts I'm all ears!
Thank you for your time,
~Jeff
Hello, I am sorry to bother you, but I would like to ask for some help. We have run into several confusing issues:
(1) Following the instructions in the README, we ran the algorithm on EuRoC datasets such as MH_01_easy.bag and hit a tough situation: the trajectory always drifts away at an incredible rate at the beginning, especially when the MAV stays static for a long time at the start. This causes the whole run to fail. But your paper does not report this problem on the same datasets. Could you please give us some advice? Is this a problem in the code, or something else?
(2) Similarly to (1), when the MAV stays static for a short time during the run, e.g. in some periods of MH_01_easy.bag, the trajectory (already broken by the static beginning) still extends at high speed; meanwhile, I noticed, for example, that qkG keeps increasing. The paper claims the algorithm can handle such situations, so why does the estimation become so bad?
That is all. We really hope someone can answer our questions. Thanks a lot!
Hi,
I'm using VIO (https://github.com/Auterion/VIO.git) to connect VINS-Fusion (for localization) and PX4 (MAVROS).
The /camera/odom/sample topic in VIO, which carries the VINS data, is published at 15 Hz (rostopic hz /camera/odom/sample). But px4_realsense_bridge_node subscribes to it and republishes /camera/odom/sample_throttled at only 2 Hz, which is extremely slow and leads to slow refreshing of the point cloud and other things in Fast-Planner. @beomsu7, you also used VIO to connect the T265 (odometry) to PX4, so please help with this issue.
How can the estimated trajectory be saved to a .txt file?
Hello
Can anyone help me install the Eigen library on Ubuntu with ROS?
Are there any useful links?
Thanks