
hybvio's People

Contributors

asolin, oseiskar, pekkaran



hybvio's Issues

Raspberry Pi 4

Dear sir,
Would it be possible to run at 30 fps on a Raspberry Pi 4, using ROS, for quadrotor state estimation?
Thank you

Problem about IMU noise units

Hi, I am trying to run your beautiful algorithm on my own device.
I wonder whether parameters such as noiseProcessGyro and noiseProcessBGA are continuous-time or discrete-time quantities.
I have calibrated my IMU with imu_utils and similar tools, which give continuous-time noise parameters. Should I convert them into discrete ones?
Thanks a lot!
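For reference, the standard continuous-to-discrete conversion uses the IMU sample rate; whether HybVIO expects continuous or discrete values would need to be confirmed from its parameter documentation. A minimal sketch of the usual convention (not a statement about HybVIO's internals):

#include <cmath>

// Convert continuous-time noise densities (as reported by imu_utils)
// to discrete-time standard deviations, given the IMU rate in Hz.
// White noise:      sigma_d = sigma_c * sqrt(rate)  (= sigma_c / sqrt(dt))
// Bias random walk: sigma_d = sigma_c / sqrt(rate)  (= sigma_c * sqrt(dt))
double whiteNoiseDiscrete(double sigmaContinuous, double rateHz) {
    return sigmaContinuous * std::sqrt(rateHz);
}
double biasRandomWalkDiscrete(double sigmaContinuous, double rateHz) {
    return sigmaContinuous / std::sqrt(rateHz);
}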

question about a jacobian

Hi! Thanks for your beautiful code. I am confused about a Jacobian and hope to get your help.

I have no idea about the Jacobians from https://github.com/SpectacularAI/HybVIO/blob/main/src/odometry/triangulation.cpp#L270 to https://github.com/SpectacularAI/HybVIO/blob/main/src/odometry/triangulation.cpp#L322. Are these the Jacobians of the landmark reprojection errors with respect to the IMU poses in the EKF state, to be used in the later filter update? (That way, we would not need to project H into the null space of the landmark.) But how are these Jacobians derived?

I know it is quite tedious and troublesome to explain, so even a little explanation would be appreciated. Thanks!
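As general background (not necessarily the exact decomposition used in triangulation.cpp), a reprojection residual Jacobian with respect to a pose usually factors by the chain rule:

r_i = \pi\big(R_i^\top (p - t_i)\big) - z_i,
\qquad
\frac{\partial r_i}{\partial \theta_i}
  = \frac{\partial \pi}{\partial p_c}\bigg|_{p_c = R_i^\top (p - t_i)}
    \cdot \frac{\partial p_c}{\partial \theta_i},

where \theta_i = (R_i, t_i) is the i-th camera pose, p is the landmark, and \pi is the camera projection.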

Question about the paper


Dear professor: In the experiment section, you mention reading the app for more details, but I don't know where to find the app. Could you please give me some advice? Yours

question about the processing queues

Dear sir,
Thanks for the excellent work! This code uses two queues: the first performs feature detection and tracking in one thread, and the second does the EKF work, such as propagation and visual updates, in another thread. Is that right? Could you give some advice on how to set up two queues that both process images, in parallel with each other, ahead of the EKF queue? Thanks a lot!
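Not an authoritative answer, but the kind of pipeline described above can be sketched with thread-safe queues; the class and function names below are made up for illustration and do not come from HybVIO:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Minimal thread-safe FIFO queue (illustrative only).
template <typename T>
class SafeQueue {
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lock(m); q.push(std::move(v)); }
        cv.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !q.empty(); });
        T v = std::move(q.front());
        q.pop();
        return v;
    }
private:
    std::queue<T> q;
    std::mutex m;
    std::condition_variable cv;
};

// Two image workers in parallel, feeding one EKF consumer
// (Frame, processImage and ekfUpdate are hypothetical):
//   SafeQueue<Frame> cam0, cam1, ekfIn;
//   std::thread w0([&]{ for (;;) ekfIn.push(processImage(cam0.pop())); });
//   std::thread w1([&]{ for (;;) ekfIn.push(processImage(cam1.pop())); });
//   std::thread ekf([&]{ for (;;) ekfUpdate(ekfIn.pop()); });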

cmake problem

Hello,
Thanks for sharing such good work.
I am building on an Ubuntu 18.04 system.
When I ran cmake on the project after build.sh finished, I encountered the problem shown in the attached screenshot (Selection_144).

Error when running `make`

When running make -j6 here:

mkdir target
cd target
cmake -DBUILD_VISUALIZATIONS=ON -DUSE_SLAM=ON ..
# or if not using clang by default:
# CC=clang CXX=clang++ cmake ..
make -j6

I'm having this error:

/home/user/workspace/HybVIO/src/slam/orb_extractor.cpp: In member function ‘virtual void slam::{anonymous}::OrbExtractorImplementation::detectAndExtract(tracker::Image&, const tracker::Camera&, const std::vector<tracker::Feature>&, slam::KeyPointVector&, std::vector<int>&)’:
/home/user/workspace/HybVIO/src/slam/orb_extractor.cpp:110:18: warning: missing initializer for member ‘slam::KeyPoint::bearing’ [-Wmissing-field-initializers]
                 });
                  ^
/home/user/workspace/HybVIO/src/slam/orb_extractor.cpp:110:18: warning: missing initializer for member ‘slam::KeyPoint::descriptor’ [-Wmissing-field-initializers]
/home/user/workspace/HybVIO/src/slam/orb_extractor.cpp:161:18: error: no matching function for call to ‘std::vector<slam::KeyPoint, Eigen::aligned_allocator<slam::KeyPoint> >::push_back(<brace-enclosed initializer list>)’
                 });
                  ^
In file included from /usr/include/c++/7/vector:64:0,
                 from /home/user/workspace/HybVIO/src/slam/static_settings.hpp:4,
                 from /home/user/workspace/HybVIO/src/slam/orb_extractor.hpp:4,
                 from /home/user/workspace/HybVIO/src/slam/orb_extractor.cpp:40:
/usr/include/c++/7/bits/stl_vector.h:939:7: note: candidate: void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = slam::KeyPoint; _Alloc = Eigen::aligned_allocator<slam::KeyPoint>; std::vector<_Tp, _Alloc>::value_type = slam::KeyPoint]
       push_back(const value_type& __x)
       ^~~~~~~~~
/usr/include/c++/7/bits/stl_vector.h:939:7: note:   no known conversion for argument 1 from ‘<brace-enclosed initializer list>’ to ‘const value_type& {aka const slam::KeyPoint&}’
/usr/include/c++/7/bits/stl_vector.h:953:7: note: candidate: void std::vector<_Tp, _Alloc>::push_back(std::vector<_Tp, _Alloc>::value_type&&) [with _Tp = slam::KeyPoint; _Alloc = Eigen::aligned_allocator<slam::KeyPoint>; std::vector<_Tp, _Alloc>::value_type = slam::KeyPoint]
       push_back(value_type&& __x)
       ^~~~~~~~~
/usr/include/c++/7/bits/stl_vector.h:953:7: note:   no known conversion for argument 1 from ‘<brace-enclosed initializer list>’ to ‘std::vector<slam::KeyPoint, Eigen::aligned_allocator<slam::KeyPoint> >::value_type&& {aka slam::KeyPoint&&}’
src/slam/CMakeFiles/slam.dir/build.make:283: recipe for target 'src/slam/CMakeFiles/slam.dir/orb_extractor.cpp.o' failed
make[2]: *** [src/slam/CMakeFiles/slam.dir/orb_extractor.cpp.o] Error 1
CMakeFiles/Makefile2:340: recipe for target 'src/slam/CMakeFiles/slam.dir/all' failed
make[1]: *** [src/slam/CMakeFiles/slam.dir/all] Error 2
Makefile:94: recipe for target 'all' failed
make: *** [all] Error 2
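The error comes from a push_back with a brace-enclosed initializer list that leaves some members of the slam::KeyPoint aggregate uninitialized, which this GCC 7 rejects. Building with clang (the commented CC=clang CXX=clang++ line above) is what the build instructions suggest; alternatively, the call site could be rewritten along these lines (member names are hypothetical):

// Instead of: keyPoints.push_back({ /* partial initializer list */ });
slam::KeyPoint kp{};       // value-initializes every member, including
                           // bearing and descriptor
// kp.<field> = ...;       // assign the fields that were in the list
keyPoints.push_back(kp);   // now matches push_back(const KeyPoint&)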

Test without additional sensor data

Nice work!

I have one question about the input data. As stated in your paper:

Without additional inputs, these methods can only estimate the location relative to the starting point but provide no global position information.

This means your code still works when there is only one video stream, right? How can I modify the code, or which flags should I set, to feed in only one video for testing?

Thank you so much.
Diep Tran
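For what it is worth, other issues in this list indicate that -useStereo=false is the default, so a single-video run would presumably look like the following (the data path is a placeholder):

./main -i=path/to/data -p -useStereo=false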

Make error

Hi, I'm also stuck here, the same as closed issue #11 (on Ubuntu 18.04). I tried rm -rf target and using clang-10.0, but it still doesn't work. May I ask whether there is any other way to solve this?

Question about the parameters

Dear sir,
Thanks for the excellent work!
I have a question about the parameters "-maxVisualUpdates" and "-maxSuccessfulVisualUpdates"; their default values are both 20. When I set them both to 30, the mono VIO still runs in real time on EuRoC, but the result seems to be the same as with 20. Is that right? I would expect that the more visual updates are done, the better the result should be.
Thanks again!

KERNEL.HASWELL file missing?

Platform

Macbook Air M2

OS

Ubuntu 20.04 Docker

Problem

I was trying to build dependencies by running

CC=clang-12 CXX=clang++-12 WITH_OPENGL=OFF BUILD_VISUALIZATIONS=OFF ./scripts/build.sh

and I got the following error:

-- Reading vars from /HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm64/KERNEL.HASWELL...
CMake Error at cmake/utils.cmake:20 (file):
  file STRINGS file
  "/HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm64/KERNEL.HASWELL"
  cannot be read.
Call Stack (most recent call first):
  kernel/CMakeLists.txt:16 (ParseMakefileVars)
  kernel/CMakeLists.txt:863 (build_core)

I have looked in the 3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm64/ directory, and there is no KERNEL.HASWELL file there.
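A hedged guess: the mixed path, an x86 core name (HASWELL) under the arm64 kernel directory, suggests that OpenBLAS's CPU autodetection misfired in the Docker-on-M2 setup. Upstream OpenBLAS accepts an explicit target override, e.g.:

make TARGET=ARMV8
# or with the CMake build:
cmake -DTARGET=ARMV8 ..

Whether and how this override can be passed through mobile-cv-suite's build.sh would need to be checked in the script itself.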

computational load

Hi @oseiskar, thanks for sharing your code and for the great work!

When I test the code, I find that it occupies all remaining CPU resources of the machine (Intel i7-7700, 8 threads).
Your algorithm looks lightweight and should not take up so many system resources; where is the problem in my test?

Two more questions:

  1. How can I reduce system resource usage?
  2. How can I control the video processing speed, e.g. to 15 fps?

Many thanks!
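One thing worth trying, on the assumption that the bundled OpenBLAS/OpenMP thread pools are what consume the extra cores: both honor standard environment variables that cap their thread counts, e.g.

OPENBLAS_NUM_THREADS=2 OMP_NUM_THREADS=2 ./main -i=path/to/data -p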

Most Faithful Real-time Settings

I'm trying to run HybVIO on euroc/tum-vi as if it were a realtime system where camera frames are processed as they arrive. If there are ever multiple frames that haven't been processed yet I'd like to discard the old frame and use the new one.
For example:
HybVIO gets frame 100 and starts processing
frame 101 arrives
frame 102 arrives
HybVIO completes processing of frame 100
HybVIO discards frame 101 and starts processing frame 102

Do the parameters I've chosen accurately model this situation? I'm particularly unsure about the values of "sampleSyncLag" and "sampleSyncFrameBufferSize":
-sampleSyncFrameCount=1
-sampleSyncLag=9
-sampleSyncSmartFrameRateLimiter=true
-sampleSyncFrameBufferSize=1

Any help would be greatly appreciated. Thank you!

Problem with triangulation (more importantly, about how to understand PIVO)?

Dear Professor:
Recently I have read the paper "HybVIO: Pushing the Limits of Real-time Visual-inertial Odometry" and its corresponding code. Thank you for your wonderful work that contributes to the robotics community. However, some things have been bothering me a lot. I have noticed that the visual landmark estimation part (https://github.com/SpectacularAI/HybVIO/blob/main/src/odometry/triangulation.cpp#L203) is different from the triangulation part in the original MSCKF. That is the beauty of PIVO, your previous paper, right? However, I don't understand why, in the landmark triangulation part, you can estimate the Jacobian of the landmark coordinates with respect to the camera poses in the pose trail. To my knowledge, in this part we would only estimate the landmark position and should calculate the Jacobian with respect to the landmark coordinates, like this:
[two equation screenshots, not transcribed]
More precisely, I don't understand how to derive the analytical formula of the Jacobian of d(E^T E) in the triangulation part.
I have read your paper "PIVO: Probabilistic Inertial-Visual Odometry for Occlusion-Robust Navigation" again and again. However, the paper is concise, and I am not clever enough to understand why the Jacobian is calculated like this. Could you please give me some documentation or clues about it? I'm really looking forward to your help; thank you very much.
Yours,
Qi Wu
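One possible reading, offered as a sketch rather than the authors' derivation: if the triangulated point is defined as the minimizer of the reprojection cost, its sensitivity to the camera poses follows from differentiating the first-order optimality condition (the implicit function theorem):

p^*(\theta) = \arg\min_p E(p;\theta), \quad
\nabla_p E\big(p^*(\theta);\theta\big) = 0
\;\Rightarrow\;
\frac{\partial p^*}{\partial \theta}
  = -\big(\nabla^2_{pp} E\big)^{-1}\, \nabla^2_{p\theta} E,

where \theta stacks the camera poses. Jacobians of expressions like \nabla E (the d(E^T E) terms) with respect to the poses would then enter through \nabla^2_{p\theta} E.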

Visual inertial ekf slam

Dear sir,
Can HybVIO perform classical tightly coupled visual-inertial EKF SLAM, discarding the MSCKF-type update and the improvements described in the paper?
Thank you

Linker error

After make is done, I get this error:

[100%] Linking CXX executable main
../3rdparty/mobile-cv-suite/build/host/lib/libaccelerated-arrays.a(glfw.cpp.o): In function `accelerated::opengl::(anonymous namespace)::GLFWProcessor::GLFWProcessor(bool, int, int, char const*, GLFWwindow**, accelerated::opengl::GLFWProcessorMode)::{lambda()#1}::operator()() const':
/home/user/workspace/HybVIO/3rdparty/mobile-cv-suite/accelerated-arrays/src/opengl/glfw.cpp:51: undefined reference to `glfwInit'
/home/user/workspace/HybVIO/3rdparty/mobile-cv-suite/accelerated-arrays/src/opengl/glfw.cpp:52: undefined reference to `glfwWindowHint'
/home/user/workspace/HybVIO/3rdparty/mobile-cv-suite/accelerated-arrays/src/opengl/glfw.cpp:62: undefined reference to `glfwCreateWindow'
/home/user/workspace/HybVIO/3rdparty/mobile-cv-suite/accelerated-arrays/src/opengl/glfw.cpp:63: undefined reference to `glfwTerminate'
../3rdparty/mobile-cv-suite/build/host/lib/libaccelerated-arrays.a(glfw.cpp.o): In function `accelerated::opengl::(anonymous namespace)::GLFWProcessor::~GLFWProcessor()::{lambda()#1}::operator()() const':
/home/user/workspace/HybVIO/3rdparty/mobile-cv-suite/accelerated-arrays/src/opengl/glfw.cpp:76: undefined reference to `glfwDestroyWindow'
/home/user/workspace/HybVIO/3rdparty/mobile-cv-suite/accelerated-arrays/src/opengl/glfw.cpp:77: undefined reference to `glfwTerminate'
../3rdparty/mobile-cv-suite/build/host/lib/libaccelerated-arrays.a(glfw.cpp.o): In function `accelerated::opengl::(anonymous namespace)::GLFWProcessor::enqueue(std::function<void ()> const&)::{lambda()#1}::operator()() const':
/home/user/workspace/HybVIO/3rdparty/mobile-cv-suite/accelerated-arrays/src/opengl/glfw.cpp:89: undefined reference to `glfwMakeContextCurrent'
/home/user/workspace/HybVIO/3rdparty/mobile-cv-suite/accelerated-arrays/src/opengl/glfw.cpp:91: undefined reference to `glfwPollEvents'
clang: error: linker command failed with exit code 1 (use -v to see invocation)
CMakeFiles/main.dir/build.make:199: recipe for target 'main' failed
make[2]: *** [main] Error 1
CMakeFiles/Makefile2:70: recipe for target 'CMakeFiles/main.dir/all' failed
make[1]: *** [CMakeFiles/main.dir/all] Error 2
Makefile:94: recipe for target 'all' failed
make: *** [all] Error 2
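The undefined glfw* references mean the GLFW library is not found at link time. Two hedged suggestions, built from commands that appear elsewhere in this issue list: install the GLFW development packages, or build without the OpenGL visualizations.

sudo apt install libglfw3 libglfw3-dev libglew-dev
# or skip the visualizations entirely:
WITH_OPENGL=OFF BUILD_VISUALIZATIONS=OFF ./scripts/build.sh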

Can't compile on any of the systems

On Arch with dependencies: an issue with definitions in OpenCV for FFmpeg.
On Debian (the same setup as in the description): a compile error around the JSON library area.
On Ubuntu 22.04: yet another error (I can't remember it right now).

What am I doing wrong? I can't compile the dependencies on any of these systems.
In case logs are needed, I can provide them, but they all come from the third-party dependencies (mobile-cv-suite).

Question about distortion coefficients

Dear sir,
Thanks for the excellent work! I have a small question about the distortion coefficients. For the EuRoC dataset, parameter.txt uses the third parameter as k3, but according to the radial-tangential distortion model its original meaning is p1, is that right? This confuses me. Waiting for your reply; thanks again!
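For context, in the common OpenCV-style radial-tangential convention the coefficient order is (k1, k2, p1, p2, k3): the third coefficient is indeed p1, and k3 only appears in fifth position. Whether HybVIO's parameter.txt follows this convention is exactly the question above; a sketch of the model itself:

// OpenCV-style radial-tangential distortion of normalized image
// coordinates (x, y), showing the conventional coefficient order.
void distortRadTan(double x, double y,
                   double k1, double k2, double p1, double p2, double k3,
                   double& xd, double& yd) {
    const double r2 = x * x + y * y;
    const double radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;
}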

ROS interface

Dear sir,
If I want to implement a ROS interface so that I can use HybVIO with an OAK-D Lite, how can I achieve this, and how should I start?
Thank you

question about a jacobian

Hi, thanks for your beautiful work!
I got a question about the jacobian at: https://github.com/SpectacularAI/HybVIO/blob/main/src/odometry/triangulation.cpp#L963

In my understanding, it concerns the following: P_{point_in_camera} = R^{cam}_{world} * P_{point_in_world}, with R = R(q).
Then we want the Jacobian d(P_{point_in_camera}) / d(q); why is this Jacobian related to the camera-IMU baseline parameters?

I am also trying to make equirectangular images + IMU work with your code; do you think this is a practicable idea?
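A guess at the first question, sketched from the usual state convention (not checked against the code): if the EKF state carries the IMU pose while the measurement lives in the camera frame, the camera position is the IMU position plus the rotated baseline, so differentiating with respect to the orientation drags in a lever-arm term:

P_{cam} = R_{ci}\, R_{wi}(q)^\top \big(P_{world} - c(q)\big),
\qquad c(q) = p_{imu} + R_{wi}(q)\, t_{ic},

and \partial P_{cam} / \partial q then contains, besides the \partial R_{wi}^\top / \partial q term, a term proportional to (\partial R_{wi} / \partial q)\, t_{ic}; this would be how the camera-IMU baseline ((R_{ci}, t_{ic}) here, conventions may differ) enters the Jacobian.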

The VIO trajectory drifts easily

Hello,
I used the D1000-120 model of the MYNT EYE camera and recorded ten minutes of data with the camera fixed to the front of a car. Then I used the conversion procedure mentioned in your README file. Finally, I ran ./main -i=my-data-path -p in the terminal. But the trajectory drifts easily. Have you tried the MYNT EYE camera?

Is HybVIO assumed/tuned to work with parallel stereo cameras?

I've been using HybVIO on the recently released Monado VR dataset: https://huggingface.co/datasets/collabora/monado-slam-datasets and have been observing a lot of failures, even after increasing the parameters like BA problem size and pose history size in order to account for the higher fps (54).

In many cases, the number of tracks is extremely low (many new key points are detected in the left frame but very few are tracked to the right frame), but tracking proceeds much better when I enable the "useRectification" flag. When I contacted the dataset's author, they said that when they tuned Basalt to run on it, they had to modify the algorithm because it was tuned for parallel stereo cameras and the Valve Index (the VR headset the data is from) cameras are canted.

Just wanted to ask and see if this might be what's causing the problem here as well.

Thank you in advance!

Parameters?

Hi, and thank you for making this code available. I am building it on a Windows desktop with Visual Studio 2019, and after a day or so of tweaking I have it compiled.

However, when I run
main.exe -i=output p -useStereo
The application does not start. ("output" is the directory containing the CSV and video files data.csv, data.avi, data2.avi.)

I think this is because I am not passing in the parameters correctly.

I am confused as to how the parameter json is created.
The system looks for:
std::ifstream cmdParametersFile("../data/cmd.json");

But how do I create this file and pass in the camera intrinsics, etc.?

(I am using ZED 2 data recorded with the ZED capture application from your README.)

Thank you!

Recreating Online Arxiv Paper Results for TUM-VI

Love the paper, thank you so much for putting it and the code out there!

When I was trying to recreate the paper results, I noticed that my EuRoC results matched but my TUM-VI results did not. Looking at the paper, I found:
Table B2 online-stereo-Normal SLAM has identical RMSE to postprocess (Table 4)

I suspect that this is just a typo, although I could be wrong here.

Cheers and all the best!

visualUpdateForEveryNFrame Quality Worse Than Skipping Entire Frame

Sorry in advance if you guys are no longer answering these types of questions:
Do you have any idea why skipping the visual update (visualUpdateForEveryNFrame=2) would have worse performance than skipping the frame entirely (data attached)?

We also noticed that on average, visualUpdateForEveryNFrame makes SLAM perform worse than VIO (i.e. "-visualUpdateForEveryNFrame=2" has worse quality than "-useSlam=false -visualUpdateForEveryNFrame=2").

Is this some sort of bug, or was visualUpdateForEveryNFrame (skipping visual updates) just never intended to be used in a real setting to improve performance at the expense of quality?

To do frame dropping, I modified sample_sync.cpp right under the sampleSyncSmartFrameLimiter if statement, simply popping every N frames.

hybvio frame skip data.xlsx

VIO modules or SLAM modules: which matters?

Hi @oseiskar, thanks for your great work!

Your algorithm outperforms the current state of the art. Have you tested other VIO + SLAM module combinations, like MSCKF, VINS-Mono, etc.?
I would like to know which one is more important: the VIO module or the SLAM module.

Question about the input data.

Sorry for bothering you, but when I run the code on my Mac, it always shows:

main.cpp:705   Discarding a bad frame.
main.cpp:705   Discarding a bad frame.
main.cpp:705   Discarding a bad frame.
(the same line repeats many more times)

It seems the video input is not being read? I just installed OpenCV from 3rdparty via build.sh, and everything reported a successful build. I don't know how to solve this problem; could you please give me some advice?

Visualizations crash with `-useStereo=false`

main crashes with "Assertion `false && "GPU visualizer not supported"' failed." when a visualization such as -c (on by default) or -p is enabled together with -useStereo=false (the default).

It means visualizations do not work in mono mode; that is, you must specify -useStereo=true or -c=false. Consequently, mono-only datasets cannot be visualized.

OpenBLAS compile error on Raspberry

Hi, I am trying to compile and run HybVIO on a Raspberry Pi 4 with 4 GB of RAM running Debian Bullseye with no desktop environment (Raspberry Pi OS Lite 32-bit).

These are the steps I followed:

  1. sudo apt update && sudo apt upgrade && sudo apt autoremove
  2. sudo apt install clang libc++-dev libgtk2.0-dev libgstreamer1.0-dev libvtk6-dev libavresample-dev libglfw3-dev libglfw3 libglew-dev libxkbcommon-dev cmake git ffmpeg (I was unable to install the glfw package; is it supported on the Pi?)
  3. git clone https://github.com/SpectacularAI/HybVIO.git --recursive
  4. cd HybVIO/3rdparty/mobile-cv-suite
  5. ./scripts/build.sh

And this is the received error:

CMake Warning (dev) at CMakeLists.txt:135 (if):
  Policy CMP0054 is not set: Only interpret if() arguments as variables or
  keywords when unquoted.  Run "cmake --help-policy CMP0054" for policy
  details.  Use the cmake_policy command to set the policy and suppress this
  warning.

  Quoted variables like "HASWELL" will no longer be dereferenced when the
  policy is set to NEW.  Since the policy is not set the OLD behavior will be
  used.
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Reading vars from /home/pi/HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm/KERNEL...
-- Reading vars from /home/pi/HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm/KERNEL.HASWELL...
CMake Error at cmake/utils.cmake:20 (file):
  file STRINGS file
  "/home/pi/HybVIO/3rdparty/mobile-cv-suite/OpenBLAS/kernel/arm/KERNEL.HASWELL"
  cannot be read.
Call Stack (most recent call first):
  kernel/CMakeLists.txt:16 (ParseMakefileVars)
  kernel/CMakeLists.txt:863 (build_core)

Is this related to a specific version of CMake that must be used to compile this project?
Furthermore, is there a specific guide on how to compile HybVIO on the Raspberry Pi, and on parameter optimizations to make it run in real time?

Thanks
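Same hedged guess as in the KERNEL.HASWELL issue above: HASWELL is an x86 core name, so OpenBLAS's CPU autodetection appears to have picked the wrong architecture. Upstream OpenBLAS allows forcing the target, which on a 32-bit Pi OS would be something like:

make TARGET=ARMV7
# or with the CMake build:
cmake -DTARGET=ARMV7 ..

Whether mobile-cv-suite's build scripts expose this override would need checking.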

Low overlap stereo cameras

Hello, I have a pair of stereo cameras from a VR headset that point at different angles. I've been able to get HybVIO to track stereo features only with -useRectification, but even then it doesn't seem possible for HybVIO to track features on the non-overlapping areas of the images.

  1. Is there a way to avoid having to prerectify the images in this case?
  2. Is it possible to track non-overlapping features in HybVIO?

Below is a stereo frame example obtained by running this command:

./main -i=../data/benchmark/ody-easy -p -c -s -windowResolution=640 -useSlam -useStereo -displayStereoMatching -useRectification

Over this HybVIO-formatted dataset. The original EuRoC-formatted dataset can be found here.

[screenshot: example stereo frame]

Some questions about rolling shutter parameters

Hello, thank you very much for open-sourcing such great work. I have been looking at the details of the code recently, but some problems have been bothering me. How do you deal with the rolling shutter problem? I don't see the relevant parameter settings in your code, such as "rolling_shutter_skew" or "rolling_shutter_readout_time". Thank you again, and I look forward to your reply.
