dcsam's Introduction

dcsam


This library, built using GTSAM, provides factor type definitions and a new solver to perform approximate inference on discrete-continuous (hybrid) factor graph models typically encountered in robotics applications.

NOTE: As of 1/30/2023 the latest version of DC-SAM on main depends on GTSAM release 4.2a8. If you are using GTSAM 4.1.1, check out our pre-4.2 release tag. This is the version of DC-SAM you would have used if you cloned the repository prior to 1/30/2023. Many thanks to Parker Lusk for bringing us into the future.

References

A technical report describing this library and our solver can be found here. If you found this code useful, please cite it as:

@article{doherty2022discrete,
  author={Doherty, Kevin J. and Lu, Ziqi and Singh, Kurran and Leonard, John J.},
  journal={IEEE Robotics and Automation Letters},
  title={Discrete-{C}ontinuous {S}moothing and {M}apping},
  year={2022},
  volume={7},
  number={4},
  pages={12395-12402},
  doi={10.1109/LRA.2022.3216938}
}

Prerequisites

To retrieve the appropriate version of GTSAM:

~ $ git clone https://github.com/borglab/gtsam
~ $ cd gtsam
~/gtsam $ git checkout 4.2a8

Follow instructions in the GTSAM repository to build and install with your desired configuration.
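A typical out-of-source build looks roughly like the following (a sketch only; add whatever CMake options your configuration needs, per the GTSAM instructions):

~/gtsam $ mkdir build
~/gtsam $ cd build
~/gtsam/build $ cmake ..
~/gtsam/build $ make -j
~/gtsam/build $ sudo make install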

Optional

  • gtest for building tests.

Building

Building the project

To build using CMake:

~/dcsam $ mkdir build
~/dcsam $ cd build
~/dcsam/build $ cmake ..
~/dcsam/build $ make -j

Run tests

To run unit tests, first build with testing enabled:

~/dcsam $ mkdir build
~/dcsam $ cd build
~/dcsam/build $ cmake .. -DDCSAM_ENABLE_TESTS=ON
~/dcsam/build $ make -j

Now you can run the tests as follows:

~/dcsam/build $ make test
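Assuming the tests are registered with CTest (which make test implies), you can also invoke them directly with more verbose output on failures:

~/dcsam/build $ ctest --output-on-failure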

Examples

For example usage, check out the DC-SAM examples repo or take a look through testDCSAM.cpp.
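For orientation, here is a minimal usage sketch. The DC-SAM class and method names below (DCSAM, HybridFactorGraph, DCValues, push_nonlinear, update, calculateEstimate) and the header paths reflect my reading of the headers; treat the exact signatures as assumptions and defer to testDCSAM.cpp and the examples repo for the authoritative API.

// Sketch only: DC-SAM names and signatures here are assumptions; see testDCSAM.cpp.
#include <gtsam/geometry/Pose2.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/linear/NoiseModel.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/PriorFactor.h>

#include "dcsam/DCSAM.h"              // assumed header path
#include "dcsam/HybridFactorGraph.h"  // assumed header path

int main() {
  dcsam::DCSAM solver;

  // A HybridFactorGraph mixes continuous (nonlinear), discrete, and DC factors.
  dcsam::HybridFactorGraph graph;
  graph.push_nonlinear(gtsam::PriorFactor<gtsam::Pose2>(
      gtsam::Symbol('x', 0), gtsam::Pose2(),
      gtsam::noiseModel::Isotropic::Sigma(3, 0.1)));

  gtsam::Values initialGuess;
  initialGuess.insert(gtsam::Symbol('x', 0), gtsam::Pose2());

  // The solver alternates discrete and continuous solves internally.
  solver.update(graph, initialGuess);

  // DCValues is assumed to bundle the continuous (gtsam::Values) and discrete estimates.
  dcsam::DCValues estimate = solver.calculateEstimate();
  estimate.continuous.print("continuous estimate:\n");
  return 0;
}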

Developing

We're using pre-commit for automatic linting. To install pre-commit run:

pip3 install pre-commit

You can verify that the installation succeeded by running pre-commit --version; you should see something like pre-commit 2.7.1.

To get started using pre-commit with this codebase, from the project repo run:

pre-commit install

Now, each time you git add new files and try to git commit, your code will automatically be run through a variety of linters. You won't be able to commit anything until the linters are happy with your code.
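To run the hooks against the entire codebase (not just staged files), you can also use:

pre-commit run --all-files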

dcsam's People

Contributors

keevindoherty, kurransingh, plusk01


dcsam's Issues

Weird KITTI behaviors after adding uniform priors on inactive discrete keys

After adding uniform priors on inactive discrete keys in this commit, the KITTI results on sequence 05 are consistently worse than the old ones.

evo APE before the commit:

APE w.r.t. translation part (m)
(with SE(3) Umeyama alignment) (aligned poses: 30)

       max	7.931835
      mean	2.841607
    median	2.044358
       min	0.249557
      rmse	3.547685
       sse	1736.877678
       std	2.123991

After the commit:

APE w.r.t. translation part (m)
(with SE(3) Umeyama alignment) (aligned poses: 30)

       max	14.252595
      mean	5.990570
    median	5.300368
       min	0.301529
      rmse	7.317135
       sse	7388.584812
       std	4.201612

Printing out the decision tree factor with min error gives something like this:

Factor  x63 l17
DecisionTreeFactor:
Potentials:
  Cardinalities: {c17:1, }
  Leaf 1
Factor  x61 l18
DecisionTreeFactor:
Potentials:
  Cardinalities: {c16:1, c17:1, c18:1, c19:1, }
  Leaf 1

What's the proper version of GTSAM that we need?

GTSAM @ caa14bc does not seem to have the Potentials.h file that we want to include:

/home/tonio/repos/idbt/include/idbt/max_product.h:18:10: fatal error: gtsam/discrete/Potentials.h: No such file or directory
 #include <gtsam/discrete/Potentials.h>
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

Normalization Constant Sign Bug

It appears that there is a minor bug in the computation of a DCFactor normalization constant.

// Compute the (negative) log of the normalizing constant
return -(factor.dim() * log(2.0 * M_PI) / 2.0) -
       (log(infoMat.determinant()) / 2.0);

Specifically, the issue is the sign of the d/2 log(2 * pi) term. It would be worth double-checking my math, but I derive that this term should be positive...

While this does not affect the maximal component, it would cause issues if one assumes that errors are properly normalized.
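For reference, a quick derivation, writing Lambda for infoMat and d for factor.dim(), and taking the normalizing constant to be the coefficient multiplying the exponential:

p(x) = sqrt(det(Lambda) / (2 * pi)^d) * exp(-1/2 * x' * Lambda * x)

so

-log(normalizer) = (d/2) * log(2 * pi) - (1/2) * log(det(Lambda)),

i.e., the (d/2) log(2 * pi) term enters with a positive sign, consistent with the report above.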

Clean up the docs

The Doxygen setup right now is not great: the output is not well formatted, and it is not linked with the GTSAM docs.

If we can install GTSAM on the Docker image that builds the documentation, Doxygen should be able to figure out inherited types from GTSAM.

We might also consider tuning the Doxygen configuration to make the generated documentation generally a little nicer.

Add interface for continuous-only solves

Per #22 this is particularly relevant when a user does not need to do a batch solve of discrete variables. Until we have proper incremental (hybrid) solves, users should be able to specify that only fresh continuous assignments are needed (e.g. in the event that only odometry measurements are added to a factor graph). The default behavior now on main performs a heuristic check for odometry-like factors and does not perform a discrete solve when only these factors are added, but this could lead to odd behavior for certain graph inputs.

Cache DC factor indices for fast updates in solver

The current (naive) update implementation requires on the order of O(M) work per measurement update, where M is the total number of factors. Simply caching the indices of the DCFactors within the discrete and continuous graphs (or within iSAM2, in the continuous case) would be a huge performance boost, reducing this to roughly O(K) with K << M, where K is the number of DCFactors alone.
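As a rough illustration of the intended bookkeeping (hypothetical types and member names, not the actual DCSAM internals):

#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical cache: remember the slot each DCFactor occupies in the discrete
// and continuous graphs when it is first added, so an update only revisits
// those K slots instead of scanning all M factors.
class DCFactorIndexCache {
 public:
  void record(std::size_t discreteSlot, std::size_t continuousSlot) {
    slots_.emplace_back(discreteSlot, continuousSlot);
  }

  // Visit only the cached DC factor slots (O(K), K << M).
  template <typename Fn>
  void forEach(Fn&& fn) const {
    for (const auto& s : slots_) fn(s.first, s.second);
  }

 private:
  std::vector<std::pair<std::size_t, std::size_t>> slots_;
};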

make test failed

Hello, thanks for your work!
I built the project with tests enabled, like this:
~/$ mkdir build
~/$ cd build
~/build$ cmake .. -DENABLE_TESTS=ON
~/build$ make -j

then ran:

~/build$ make test

and got an error (screenshot omitted).

My environment:
ubuntu 18.04
gtsam version: v4.2a7
gtest built from apt-get libgtest-dev

Segfault when running kitti seq 05

When we ran sparse one-class (car) KITTI SLAM with max-mixtures for data association using the following parameters, we encountered a segfault around the first big loop closure (in the middle of the sequence).

  <!-- Arguments -->
  <arg name="kitti_path" default="/media/ziqi/LENOVO_USB_HDD/data/kitti/05/"/>
  <!-- Data association algorithm {0: ML; 1: MM; 2: SM; 3: EM} -->
  <arg name="DA_type" default="1"/>
  <arg name="noise_gain" default="0"/>
  <arg name="misclassification_rate" default="0"/>
...
  <!-- VISO2 parameters -->
  <param name="ref_frame_change_method" value="0"/> <!-- choose from 0,1,2, try 2 first if 0 fails-->
  <param name="viso2_small_motion_threshold" value="5.0"/>
  <param name="viso2_inlier_threshold" value="90"/> <!-- must be integer-->

  <!-- Block Matching parameters (for point cloud computation) -->
  <param name="minDisparities" value="0"/>
  <param name="numDisparities" value="8"/>  <!-- will be multiplied by 16-->
  <param name="blockSize" value="5"/>
...
  <!-- Keyframe time threshold [nsec] -->
  <param name="kf_threshold" value="2e9"/>

  <!-- Noise models -->
  <rosparam param="odom_noise_model"> [0.02, 0.02, 0.02, 0.02, 0.02, 0.02] </rosparam>
  <rosparam param="det_noise_model"> [0.2, 0.2, 1.0] </rosparam>
  <!-- Null-Hypo weight -->
  <param name="nh_weight" value="0.1" />
  <!-- Object measurement gating params -->
  <param name="search_radius" value="20.0"/>
  <param name="maha_dist_thresh" value="4.0"/>
...

The gdb back-trace log is pasted here:

0x00007ffff5b0d07a in gtsam::EliminateDiscrete(gtsam::DiscreteFactorGraph const&, gtsam::Ordering const&) ()
   from /usr/local/lib/libgtsam.so.4
(gdb) backtrace 
#0  0x00007ffff5b0d07a in gtsam::EliminateDiscrete(gtsam::DiscreteFactorGraph const&, gtsam::Ordering const&) ()
    at /usr/local/lib/libgtsam.so.4
#1  0x0000555555789d3a in gtsam::EliminationTraits<gtsam::DiscreteFactorGraph>::DefaultEliminate(gtsam::DiscreteFactorGraph const&, gtsam::Ordering const&) ()
#2  0x000055555578ed69 in boost::detail::function::function_invoker2<std::pair<boost::shared_ptr<gtsam::DiscreteConditional>, boost::shared_ptr<gtsam::DiscreteFactor> > (*)(gtsam::DiscreteFactorGraph const&, gtsam::Ordering const&), std::pair<boost::shared_ptr<gtsam::DiscreteConditional>, boost::shared_ptr<gtsam::DiscreteFactor> >, gtsam::DiscreteFactorGraph const&, gtsam::Ordering const&>::invoke(boost::detail::function::function_buffer&, gtsam::DiscreteFactorGraph const&, gtsam::Ordering const&) ()
#3  0x00007ffff5b15e46 in gtsam::EliminationData<gtsam::EliminatableClusterTree<gtsam::DiscreteBayesTree, gtsam::DiscreteFactorGraph> >::EliminationPostOrderVisitor::operator()(boost::shared_ptr<gtsam::ClusterTree<gtsam::DiscreteFactorGraph>::Cluster> const&, gtsam::EliminationData<gtsam::EliminatableClusterTree<gtsam::DiscreteBayesTree, gtsam::DiscreteFactorGraph> >&) () 

After adding some print statements, we found that this is related to the getMarginals function in DCSAM.

We do not see this error when using maximum-likelihood-based data association in max-mixtures.

We temporarily worked around this in max-mixtures by querying marginals only for the continuous graph (which is what we were doing originally).

Potential numerical issues in DCFactor exp-normalization

The DCFactor::evalProbs(...) function implements "exp-normalization," where we attempt to normalize a set of (negative) log probabilities as exp(log p_i) / (sum_i exp(log p_i)). For exceptionally small values of log p_i, e.g. -10^6, this expression is susceptible to underflow. This can occur, for example, if the "continuous part" of a DCFactor for a particular discrete assignment has large error, even independent of the "discrete part." Rather than compute this expression naively as written, we should "shift" the exponents prior to normalizing to avoid numerical issues.
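As a sketch, the usual fix is to subtract the maximum log probability before exponentiating (hypothetical free function, not the actual DCFactor::evalProbs implementation):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Normalize log-probabilities into probabilities without underflow by shifting
// every exponent by the maximum log-probability; the result is unchanged
// because the shift cancels in the ratio.
std::vector<double> expNormalize(const std::vector<double>& logProbs) {
  const double maxLogProb =
      *std::max_element(logProbs.begin(), logProbs.end());
  std::vector<double> probs(logProbs.size());
  double sum = 0.0;
  for (std::size_t i = 0; i < logProbs.size(); ++i) {
    probs[i] = std::exp(logProbs[i] - maxLogProb);  // largest term becomes 1
    sum += probs[i];
  }
  for (double& p : probs) p /= sum;
  return probs;
}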

cc: @kurransingh

CI Server is down

Jenkins CI is currently down due to a power cycle at CSAIL over the past few weeks.

We need to either reboot mrg-beast at CSAIL or, as the more sustainable long-term solution, migrate CI fully to GitHub.
