Comments (11)

puzzlepaint commented on June 28, 2024

Hi, the pose graph optimization is only needed for loop closures. If you can do without loop closures, you could disable them with --no_loop_detection.

To debug the "Cholesky failure" error, could you please check the CMake settings of your g2o build for BUILD_WITH_MARCH_NATIVE? It should be set to ON. (And make sure that badslam doesn't accidentally use a potential different install of g2o that may have set this to OFF.)
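One quick way to verify this is to inspect the build's CMakeCache.txt, where CMake records every configured option. A minimal sketch (the cache file is written here only so the example is self-contained; point `build_dir` at your real g2o build directory instead):

```python
# Sketch: check BUILD_WITH_MARCH_NATIVE in a g2o build's CMake cache.
# The stand-in cache below is created only to make the example runnable.
import pathlib, tempfile

build_dir = pathlib.Path(tempfile.mkdtemp())
(build_dir / "CMakeCache.txt").write_text("BUILD_WITH_MARCH_NATIVE:BOOL=ON\n")

def march_native_enabled(build_dir: pathlib.Path) -> bool:
    """True if CMakeCache.txt records BUILD_WITH_MARCH_NATIVE as ON."""
    for line in (build_dir / "CMakeCache.txt").read_text().splitlines():
        if line.startswith("BUILD_WITH_MARCH_NATIVE:"):
            return line.split("=", 1)[1] == "ON"
    return False

print(march_native_enabled(build_dir))  # True
```

Equivalently, run grep BUILD_WITH_MARCH_NATIVE CMakeCache.txt in the build directory; if it shows OFF, reconfigure g2o with -DBUILD_WITH_MARCH_NATIVE=ON and rebuild before rebuilding badslam.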

from badslam.

HuangTY96 commented on June 28, 2024

Hi,
your first problem is exactly the same as mine,
but my environment is: Ubuntu 16.04 with CUDA 9.1 and a GTX 1060.

cdb0y511 commented on June 28, 2024

Hi,

> your first problem is exactly the same as mine,
> but my environment is: Ubuntu 16.04 with CUDA 9.1 and a GTX 1060

Does badslam work properly?
I think the pose graph optimization is critical to the whole optimization. If pose graph optimization fails, badslam fails too.

HuangTY96 commented on June 28, 2024

> Does badslam work properly?
> I think the pose graph optimization is critical to the whole optimization. If pose graph optimization fails, badslam fails too.

I have just tried the mannequin_1 and desk_1 datasets, which belong to the training set and the test set respectively. badslam works on mannequin_1 but not on desk_1, which is weird.

cdb0y511 commented on June 28, 2024

> To debug the "Cholesky failure" error, could you please check the CMake settings of your g2o build for BUILD_WITH_MARCH_NATIVE? It should be set to ON.

Thanks, that solves issue 1.
But the second issue still stands: SLAM from D435 live streams does not work.
Output: 20:16:50.570 direct_ba_alternating.c:249 WARN| Pose estimation not converged (not_converged_count: 1, call_counter: 1297)

[Screenshot from 2019-07-23 20-24-05]

Even a held-still D435 ends up with a wrong trajectory, with or without --no_loop_detection.

Wrong intrinsic parameters? There is a warning at the very beginning:
20:16:43.119 input_realsense.cc:121 WARN| Ignoring the color stream's distortion: Brown Conrady (coefficients: 0, 0, 0, 0, 0)

puzzlepaint commented on June 28, 2024

I just tried with my D435 for about a minute and it worked fine. Do you use the recommended settings that are set if you answer yes to the dialog box that pops up when clicking the "RealSense live input" button? Also note that since the D435 color camera is rolling shutter and not synchronized to the infrared cameras, this will disable photometric residuals. So the camera needs to see enough 3D structure for this to work. From your screenshot it seems like the scene may be quite flat (in case the partial reconstruction is correct), which makes it likely to fail. You could try leaving photometric residuals enabled (in the "bundle adjustment" tab) and moving the camera extremely carefully.

The "Pose estimation not converged" warnings may sometimes be caused by oscillating pose estimation (which is not a big problem), by a scene that doesn't sufficiently constrain the pose estimation, or by too little depth data being available in a frame. They also happen occasionally during normal operation and don't necessarily mean that something broke.

The color intrinsics are irrelevant if photometric residuals are not used, and in any case, as the message says, the distortion coefficients are all zero, so it is fine to ignore them (the program should probably only show this message if some coefficient is non-zero). This is, by the way, another reason not to use the color camera: the factory calibration seems to use only the four pinhole parameters, which is likely not very accurate.
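To see why all-zero coefficients make the warning harmless, here is a minimal sketch (plain Python, not badslam or librealsense code) of the 5-coefficient Brown-Conrady model; the coefficient ordering (k1, k2, p1, p2, k3) is assumed to follow librealsense's convention. With every coefficient zero, the model collapses to the identity, i.e. a pure pinhole projection:

```python
def brown_conrady(x, y, coeffs):
    """Apply Brown-Conrady distortion to normalized image coordinates (x, y).

    coeffs = (k1, k2, p1, p2, k3): radial terms k1..k3, tangential terms p1, p2.
    """
    k1, k2, p1, p2, k3 = coeffs
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With all five coefficients zero (as in the warning), distortion is the
# identity, so ignoring it loses nothing:
print(brown_conrady(0.3, -0.2, (0, 0, 0, 0, 0)))  # (0.3, -0.2)
```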


HuangTY96 commented on June 28, 2024

Well, that solved my question, but I ran into a new problem just like cdb0y511's second one: when I run the desk_2 dataset, I get a horrible trajectory and a lot of "c:249 WARN| Pose estimation not converged" warnings.

[Screenshot: Selection_001]

puzzlepaint commented on June 28, 2024

If you mean ETH3D's desk_2 dataset, then as reported on https://www.eth3d.net/slam_benchmark , all the methods we tested on this dataset seem to have failed, so that would be expected.


cdb0y511 commented on June 28, 2024

It seems the result is still not very good. I hope I can improve it a little.
This is the best I have:

[Screenshot from 2019-07-24 18-48-21]

> Do you use the recommended settings that are set if you answer yes to the dialog box that pops up when clicking the "RealSense live input" button? [...] You could try leaving photometric residuals enabled (in the "bundle adjustment" tab) and moving the camera extremely carefully.

Yes, I use the RealSense live input with the recommended settings for the D435, and the problem persists with or without photometric residuals. It seems really easy to mess up tracking, even if I move the sensor carefully. I have not dug into the code deeply yet.
Following your comment, I guessed that badslam only uses the stereo infrared cameras instead of the RealSense's direct depth output, without the laser projection; for an active sensor, a white wall with laser projection is an ideal situation, and the depth will be quite accurate. However, badslam does use the depth stream directly in input_realsense.cc. I am a little confused: why does the tracking get messed up?

puzzlepaint commented on June 28, 2024

> Following your comment, I guessed that badslam only uses the stereo infrared cameras

No, I didn't mean to imply that it uses the infrared images directly. It uses the depth images as computed by the camera software, with the projection active.

It is not really possible to guess why tracking fails in your case without more information on what exactly happens and what the camera view looks like at the exact moment of failure. The screenshot shows that tracking broke, but not where it broke and under which conditions.

From general experience, it is really easy to point the camera at parts of a scene that don't constrain all the dimensions of the camera pose. For example, if only a planar wall with little texture is visible (or when operating with photometric residuals disabled), then the camera pose is not constrained along directions that move it parallel to the wall. This results in immediate tracking failure. There is not much that could be done about this without investing some effort; some possible options would be:

  • Use a camera with a wider field of view to reduce the chance of this happening, or use a camera rig with cameras pointing in different directions (which would require implementing support for that)
  • Use a camera with an IMU to at least prevent this situation from resulting in immediate failure (which would require implementing support for that; however, it will still fail if the camera observes that view for a longer time) (#11)
  • Try to make better use of the color images in the SLAM system (but that only works as long as the wall isn't completely homogeneous, and might never work really well for the D435 unless its color camera's deficiencies are accounted for)
  • Try to detect under-constrained situations and give up tracking in these cases, then re-localize once the view is better (#15)
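The under-constrained wall case can be made concrete with a small numerical sketch (plain numpy, not badslam code): for geometric point-to-plane residuals against a single flat wall, the 6-DoF pose Jacobian is rank-deficient, so sliding parallel to the wall and rotating about its normal are unobservable.

```python
import numpy as np

# Points sampled on the plane z = 0, whose normal is n = (0, 0, 1).
rng = np.random.default_rng(0)
pts = np.column_stack(
    [rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50), np.zeros(50)]
)
n = np.array([0.0, 0.0, 1.0])

# Point-to-plane residual r_i = n . (p_i + t + w x p_i - p_i), linearized at
# the identity pose: each Jacobian row w.r.t. (t, w) is [n, p_i x n].
J = np.hstack([np.tile(n, (len(pts), 1)), np.cross(pts, n)])

# Only 3 of the 6 pose dimensions are constrained: translation along n and
# rotation about the two in-plane axes. The other 3 directions are free.
print(np.linalg.matrix_rank(J))  # 3
```

Adding texture (photometric residuals) or 3D structure off the plane adds Jacobian rows that fill in the missing directions, which is why flat, textureless walls fail immediately.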

Specific to the D435 camera, it also seems that its depth image quality is relatively bad (this might be responsible for the artefacts on the right side of the reconstruction in your screenshot). For example, it tends to interpolate between foreground and background objects instead of showing sharp transitions. That may confuse the SLAM system. There also seems to be strong noise in the depth images. I haven't tried myself, but I think that the camera offers some settings that can be tuned to potentially improve its depth images. Another thing to try would be to reduce the maximum depth parameter in badslam to ignore the far-away depth estimates that are likely affected the most by the issues.

If you would be fine with non-realtime operation, a way to improve the depth quality would be to accurately calibrate the D435's infrared cameras yourself and also compute the depth images yourself with that calibration, using a better stereo algorithm than what the camera uses internally.
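As a toy illustration of that last point (pure Python, made-up camera parameters, not a real stereo pipeline): once the infrared pair is calibrated and rectified, a matcher only needs per-pixel disparity, and depth then follows as depth = focal_length_px * baseline / disparity.

```python
def ssd_disparity(left, right, max_d):
    """Per-pixel disparity for two 1-D 'scanlines' by brute-force SSD matching."""
    disp = []
    for x in range(len(left)):
        best, best_d = float("inf"), 0
        for d in range(min(max_d, x) + 1):
            cost = (left[x] - right[x - d]) ** 2
            if cost < best:
                best, best_d = cost, d
        disp.append(best_d)
    return disp

f_px, baseline_m = 380.0, 0.05       # hypothetical focal length and baseline
left = [0, 0, 0, 9, 0, 0, 0, 0]      # a single bright feature at x = 3
right = [0, 9, 0, 0, 0, 0, 0, 0]     # the same feature shifted by 2 pixels
d = ssd_disparity(left, right, max_d=4)[3]
print(d, f_px * baseline_m / d)      # disparity 2 -> depth 9.5 m
```

A real offline pipeline would of course use windowed costs and sub-pixel refinement over 2-D images, but the disparity-to-depth relation is the same.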


cdb0y511 commented on June 28, 2024

> It is not really possible to guess why tracking fails in your case without more information on what exactly happens and what the camera view looks like at the exact moment of failure. The screenshot shows that tracking broke, but not where it broke and under which conditions.

Really appreciate your reply.
As you saw, I use the D435 in an office with a white wall. It does stop tracking in front of the white wall or the ceiling.
The next thing is how to make it more robust. I will follow your suggestions, and if there is any progress, I will let you know.
P.S. This time the result may be normal; I started from a chessboard.

[Screenshot from 2019-07-24 20-46-14]
