Comments (11)
Hi, the pose graph optimization is only needed for loop closures. If you can do without loop closures, you could disable them with --no_loop_detection.
To debug the "Cholesky failure" error, could you please check the CMake settings of your g2o build for BUILD_WITH_MARCH_NATIVE? It should be set to ON. (And make sure that badslam doesn't accidentally use a potentially different install of g2o that may have set this to OFF.)
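One way to verify the setting is to parse the CMake cache of the g2o build tree. This is a minimal sketch; the cache path in the usage comment is a hypothetical example, so point it at your actual build directory:

```python
# Sketch: check whether a g2o build tree was configured with
# BUILD_WITH_MARCH_NATIVE=ON by inspecting its CMakeCache.txt.
def march_native_enabled(cache_text):
    for line in cache_text.splitlines():
        # Cache entries have the form NAME:TYPE=VALUE
        if line.startswith("BUILD_WITH_MARCH_NATIVE"):
            return line.split("=", 1)[1].strip() == "ON"
    return False  # option not present in the cache

# Usage (hypothetical path):
# with open("g2o/build/CMakeCache.txt") as f:
#     print(march_native_enabled(f.read()))
```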
from badslam.
Hi,
your first problem is exactly the same as mine, but my environment is: Ubuntu 16.04 with CUDA 9.1 and a GTX 1060. Does badslam work properly?
I think the pose graph optimization is critical to the whole optimization. If pose graph optimization fails, badslam fails too.
I have just tried the mannequin_1 and desk_1 datasets, which belong to the training set and test set respectively, and badslam works on mannequin_1 but not on desk_1, which is weird.
Hi, the pose graph optimization is only needed for loop closures. If you can do without loop closures, you could disable them with --no_loop_detection.
To debug the "Cholesky failure" error, could you please check the CMake settings of your g2o build for BUILD_WITH_MARCH_NATIVE? It should be set to ON. (And make sure that badslam doesn't accidentally use a potentially different install of g2o that may have set this to OFF.)
Thanks, that solves issue 1,
but the second issue still stands: SLAM from D435 live streams does not work.
Output: 20:16:50.570 direct_ba_alternating.c:249 WARN| Pose estimation not converged (not_converged_count: 1, call_counter: 1297)
Even a D435 held still produces a wrong trajectory, with or without --no_loop_detection.
Wrong intrinsic parameters?
There is a warning at the very beginning: 20:16:43.119 input_realsense.cc:121 WARN| Ignoring the color stream's distortion: Brown Conrady (coefficients: 0, 0, 0, 0, 0)
I just tried with my D435 for about a minute and it worked fine. Do you use the recommended settings that are set if you answer yes to the dialog box that pops up when clicking the "RealSense live input" button? Also note that since the D435 color camera is rolling shutter and not synchronized to the infrared cameras, this will disable photometric residuals. So the camera needs to see enough 3D structure for this to work. From your screenshot it seems like the scene may be quite flat (in case the partial reconstruction is correct), which makes it likely to fail. You could try leaving photometric residuals enabled (in the "bundle adjustment" tab) and moving the camera extremely carefully.
The "Pose estimation not converged" warnings may sometimes be caused by oscillating pose estimation (which is not a big problem) or if the scene doesn't sufficiently constrain the pose estimation, or too little depth data is available in a frame. In any case, they sometimes also happen due to this during normal operation and don't necessarily mean that something broke.
The color intrinsics are, on the one hand, irrelevant if not using photometric residuals, and on the other hand, as the message says, the distortion coefficients are all zero, so it is fine to ignore them (the program should probably only show the message if some coefficient is non-zero). This is, by the way, another reason for not using the color camera: the given factory calibration seems to only use the four pinhole parameters, which is likely not very accurate.
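The suggested fix, warning only when a coefficient is actually non-zero, could be sketched like this (in Python for brevity; the real check in input_realsense.cc would be C++ against the coefficients librealsense reports):

```python
def should_warn_about_ignored_distortion(coefficients, eps=1e-12):
    """Only warn if ignoring the distortion model actually discards
    information, i.e. if at least one coefficient is non-zero."""
    return any(abs(c) > eps for c in coefficients)

# All-zero coefficients (the case in the log above): no warning needed.
print(should_warn_about_ignored_distortion([0, 0, 0, 0, 0]))  # -> False
```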
Well, that solved my problem, but now I have hit a new one, just like cdb0y511's second issue: when I run the desk_2 dataset, I get a horrible trajectory and a lot of c:249 WARN| Pose estimation not converged warnings.
If you mean ETH3D's desk_2 dataset, then as reported on https://www.eth3d.net/slam_benchmark , all the methods we tested on this dataset seem to have failed, so that would be expected.
It seems the result is still not very good; I hope I can improve it a little.
This is the best I have.
I just tried with my D435 for about a minute and it worked fine. Do you use the recommended settings that are set if you answer yes to the dialog box that pops up when clicking the "RealSense live input" button? Also note that since the D435 color camera is rolling shutter and not synchronized to the infrared cameras, this will disable photometric residuals. So the camera needs to see enough 3D structure for this to work. From your screenshot it seems like the scene may be quite flat (in case the partial reconstruction is correct), which makes it likely to fail. You could try leaving photometric residuals enabled (in the "bundle adjustment" tab) and moving the camera extremely carefully.
Yes, I use the RealSense live input with the recommended settings for the D435, and the problem persists with or without photometric residuals. It seems really easy to break the tracking, even when I move the sensor carefully. I have not dug deep into the code yet.
Following your comment I guess badslam only uses the stereo infrared cameras, instead of the direct depth output of the RealSense, and without the laser projection. For an active sensor, a white wall with laser projection is an ideal situation: the depth will be quite accurate. However, badslam does use the direct depth stream in input_realsense.cc. I am a little confused: why does the tracking mess up?
Following your comment I guess badslam only uses the stereo infrared cameras
No, I didn't want to imply that it uses the infrared images directly. It uses the depth images as computed by the camera software, with the projection active.
It is not really possible to guess why tracking fails in your case without more information on what happens exactly and how the camera view looks like at the exact moment when it fails. The screenshot shows that tracking broke, but not where it broke and under which conditions.
From general experience, it happens really easily to point the camera at parts of a scene that don't constrain all the dimensions of the camera pose. For example, if only a planar wall is visible that has little texture (or when operating with photometric residuals disabled), then the camera pose is not constrained along directions that make it move in parallel to the wall. This will result in immediate tracking failure. There is not much that could be done about this without investing some effort; some possible options would be:
- Use a camera with a higher field-of-view to reduce the chance of this happening, or use a camera rig with cameras pointing in different directions (which would require implementing support for that)
- Use a camera with an IMU to at least prevent this situation from resulting in immediate failure (which would require implementing support for that; however, it will still result in failure if the camera observes that view for a longer time) (#11)
- Try to make better use of the color images in the SLAM system (but that only works as long as the wall isn't completely homogeneous, and might never work really well for the D435 unless its color camera deficiencies are accounted for)
- Try to detect under-constrained situations and give up tracking in these cases, then re-localize once the view is better. (#15)
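The planar-wall degeneracy described above can be made concrete with a small numerical sketch. For point-to-plane residuals, each surface point p with normal n contributes a Jacobian row [n, p × n] with respect to the 6-DoF camera pose; if all points lie on one plane, these rows span only 3 of the 6 dimensions, leaving translation parallel to the wall and rotation about its normal unconstrained (a toy illustration, not badslam's actual cost function):

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def jacobian_row(p, n):
    # Point-to-plane residual: row is [n, p x n] w.r.t. [translation, rotation].
    return list(n) + cross(p, n)

def matrix_rank(rows, tol=1e-9):
    # Rank via Gaussian elimination (no external dependencies).
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows))
                    if abs(rows[i][col]) > tol), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        pivot = rows[rank]
        for i in range(rank + 1, len(rows)):
            f = rows[i][col] / pivot[col]
            rows[i] = [a - f * b for a, b in zip(rows[i], pivot)]
        rank += 1
    return rank

# All points on one wall (the plane z = 0): only rank 3 of 6.
wall = [jacobian_row([x, y, 0.0], [0.0, 0.0, 1.0])
        for x in range(3) for y in range(3)]
print(matrix_rank(wall))  # -> 3: in-plane translation and rotation about n are unobservable

# Add points from two more, mutually orthogonal planes (a room corner):
corner = (wall
          + [jacobian_row([0.0, y, z], [1.0, 0.0, 0.0]) for y in range(3) for z in range(3)]
          + [jacobian_row([x, 0.0, z], [0.0, 1.0, 0.0]) for x in range(3) for z in range(3)])
print(matrix_rank(corner))  # -> 6: the pose is fully constrained
```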
Specific to the D435 camera, it also seems that its depth image quality is relatively bad (this might be responsible for the artefacts on the right side of the reconstruction in your screenshot). For example, it tends to interpolate between foreground and background objects instead of showing sharp transitions. That may confuse the SLAM system. There also seems to be strong noise in the depth images. I haven't tried myself, but I think that the camera offers some settings that can be tuned to potentially improve its depth images. Another thing to try would be to reduce the maximum depth parameter in badslam to ignore the far-away depth estimates that are likely affected the most by the issues.
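That last suggestion, ignoring far-away measurements, amounts to truncating the depth map before it enters the pipeline. A sketch of the idea (the actual parameter name and units in badslam may differ):

```python
def truncate_depth(depth_map, max_depth_m=2.5):
    """Replace depth readings beyond max_depth_m (meters) with 0.
    A depth of 0 is conventionally treated as 'no measurement'
    in RGB-D pipelines, so these pixels are simply skipped later."""
    return [[d if 0.0 < d <= max_depth_m else 0.0 for d in row]
            for row in depth_map]

print(truncate_depth([[0.8, 3.1], [2.5, 0.0]]))  # -> [[0.8, 0.0], [2.5, 0.0]]
```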
If you would be fine with non-realtime operation, a way to improve the depth quality would be to accurately calibrate the D435's infrared cameras yourself and also compute the depth images yourself with that calibration, using a better stereo algorithm than what the camera uses internally.
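As a toy illustration of that idea, naive SSD block matching along one rectified scanline is sketched below; a real pipeline would use a robust algorithm such as semi-global matching on a carefully calibrated image pair:

```python
def disparity_1d(left, right, patch=1, max_disp=4):
    """Naive sum-of-squared-differences block matching along one
    rectified scanline (toy illustration only)."""
    n = len(left)
    disp = [0] * n
    for x in range(n):
        best_ssd, best_d = None, 0
        for d in range(min(max_disp, x) + 1):
            ssd = 0
            for k in range(-patch, patch + 1):
                xl, xr = x + k, x - d + k
                if 0 <= xl < n and 0 <= xr < n:
                    ssd += (left[xl] - right[xr]) ** 2
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_d = ssd, d
        disp[x] = best_d
    return disp

# A bright feature shifted by 2 pixels between the two views:
left = [0, 0, 0, 0, 10, 0, 0, 0]
right = [0, 0, 10, 0, 0, 0, 0, 0]
print(disparity_1d(left, right)[4])  # -> 2 (depth = baseline * focal / disparity)
```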
It is not really possible to guess why tracking fails in your case without more information on what happens exactly and how the camera view looks like at the exact moment when it fails. The screenshot shows that tracking broke, but not where it broke and under which conditions.
Really appreciate your reply.
As you can see, I use the D435 in an office with white walls. It does stop tracking in front of the white wall or the ceiling.
The next thing is how to make it more robust. I will follow your suggestions. If there is any progress, I will let you know.
P.S. This time these are maybe the normal results; I started from a chessboard.