
bayes-filters-lib's People

Contributors

claudiofantacci, giulioromualdi, traversaro, xenvre


bayes-filters-lib's Issues

Rethink and implement Logger class

In the current implementation, Logger is technically the base class.
Instead, Logger should be a class that takes another class and logs its results.
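A possible direction, sketched below (the getLoggableData() method is a hypothetical placeholder for whatever interface the logged class exposes): Logger wraps another object by reference and appends its data to a file, instead of acting as a base class.

#include <fstream>
#include <string>

// Hypothetical sketch: Logger is no longer a base class; it takes another object
// and logs its results. getLoggableData() is an assumption made for the example.
template<typename Loggable>
class Logger
{
public:
    Logger(Loggable& loggable, const std::string& file_name) :
        loggable_(loggable),
        log_file_(file_name)
    { }

    // Query the wrapped object and append its current data to the log file.
    void log()
    {
        log_file_ << loggable_.getLoggableData() << std::endl;
    }

private:
    Loggable& loggable_;

    std::ofstream log_file_;
};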

Rethink and reimplement `getOutputState`

The current implementation is too tailored to individual problems and does not have a proper definition across the StateModel class hierarchy. We need to rethink the method and find a proper way to expose it as a public method (if it is truly needed).

Output GaussianMixture in sigma_point::unscented_transform does not reflect the internal composition of the measurement vector

In sigma_point::unscented_transform, the output GaussianMixture is initialized considering the whole row size of the propagated sigma points.

GaussianMixture output(input.components, prop_sigma_points.rows());

Since, however, we know the internal composition of the measurement sizes

bfl::sigma_point::OutputSize output_size;
std::tie(valid_fun_data, fun_data, output_size) = function(input_sigma_points);

we should probably have

GaussianMixture output(input.components, output_size.first, output_size.second);

The current implementation does not actually produce wrong results because of this, since what matters is that the means are evaluated taking into account the linear and circular sizes:

output.mean(i).topRows(output_size.first).noalias() = prop_sigma_points_i.topRows(output_size.first) * weight.mean;
output.mean(i).bottomRows(output_size.second) = directional_mean(prop_sigma_points_i.bottomRows(output_size.second), weight.mean);

However, as a user, I would probably like to know the correct sizes directly from the output GaussianMixture returned by the method, using mixture.dim_linear and mixture.dim_circular.

Cholesky update

Originally posted by @traversaro in #61 (comment)

I am relatively sure it does not apply to your case, but it may be interesting for you to know that in some contexts (where you have a lot of low-rank updates) the computation of the inverse matrix via repeated uses of the Sherman–Morrison–Woodbury identity is numerically noisy/unstable (https://epubs.siam.org/doi/abs/10.1137/0907034?journalCode=sijcd4). For symmetric positive definite matrices it is sometimes preferred to use the Cholesky update, see for example https://www.semanticscholar.org/paper/Low-Rank-Updates-for-the-Cholesky-Decomposition-Seeger/8e22b71338d20c884bbb904155f12227781eb750.


Originally posted by @xEnVrE in #61 (comment)

Thank you @traversaro for your point. I wasn't aware of this. If I am not mistaken, the Cholesky update is also used in the square-root form of the Unscented Kalman Filter exactly for this reason.

In the end, the paper from which we are taking this usage of the SMW identity rewrites the covariance matrix using a sort of Cholesky factorization, but the inner structure of the factors is exactly known. I don't know whether, by using Cholesky updates, it is possible to obtain the same speedup as we are getting with SMW. I'll read the references you pointed out.


Originally posted by @traversaro in #61 (comment)

If you are interested in this, we can also check out Section 3.3 of http://pasa.lira.dist.unige.it/pasapdf/1228_Gijsberts+Metta2012.pdf (which is actually the reason I am aware of this stuff).


Originally posted by @traversaro in #61 (comment)

We actually still have the code from the Cholesky update in icub-main, see:
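For general reference (this is not the icub-main code mentioned above, which is not reproduced here), Eigen already exposes a rank-1 Cholesky update through LLT::rankUpdate; a minimal sketch:

#include <Eigen/Dense>

int main()
{
    const int n = 4;

    // Build a symmetric positive definite matrix A and factorize it as A = L * L^T.
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(n, n);
    Eigen::MatrixXd A = B * B.transpose() + 10.0 * Eigen::MatrixXd::Identity(n, n);
    Eigen::LLT<Eigen::MatrixXd> llt(A);

    // Rank-1 update: refresh the factorization of A + sigma * v * v^T in O(n^2)
    // instead of refactorizing from scratch in O(n^3).
    Eigen::VectorXd v = Eigen::VectorXd::Random(n);
    double sigma = 1.0;
    llt.rankUpdate(v, sigma);

    // Cholesky factor of the updated matrix.
    Eigen::MatrixXd L = llt.matrixL();

    return 0;
}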

Incoherent `skip`-related variable values in `*Prediction` classes

While refactoring here and there, I noticed that the skip-related variables in the *Prediction classes do not have coherent values when they are set in some specific ways. For example, in GaussianPrediction::skip, if one sets skip_prediction_, then skip_state_ and skip_exogenous_ are not set to the same status. skip_state_ and skip_exogenous_ are meant to be used inside predictionStep, so they might not need to be set, since skip_prediction_ works at a higher level, i.e. in the prediction method.

However, I think this could lead to errors if one first sets skip_state_, then sets and unsets skip_prediction_. As a user, I would expect the prediction to be completely resumed, while instead skip_state_ would still be true. In the end, I propose to unify the behaviour and disambiguate the relationship between the variables as follows:

    if (what_step == "prediction")
    {
        // Skipping the whole prediction step also skips both of its sub-steps.
        skip_prediction_ = status;

        skip_state_ = status;
        skip_exogenous_ = status;
    }
    else if (what_step == "state")
    {
        skip_state_ = status;

        // The whole prediction is skipped only when both sub-steps are skipped.
        skip_prediction_ = skip_state_ & skip_exogenous_;
    }
    else if (what_step == "exogenous")
    {
        skip_exogenous_ = status;

        skip_prediction_ = skip_state_ & skip_exogenous_;
    }

What do you think @xEnVrE?

Fixed-size vectorizable Eigen matrices in WhiteNoiseAcceleration

Hi!

I'm using MS VS 2015.
I compiled the example from test_UKF and started running it.
On the following line of code from the example file:
std::unique_ptr<AdditiveStateModel> wna = utils::make_unique<WhiteNoiseAcceleration>(T, tilde_q);
I got an assert error (from Eigen DenseStorage):

EIGEN_DEVICE_FUNC
plain_array() 
{ 
  EIGEN_MAKE_UNALIGNED_ARRAY_ASSERT(15);
  check_static_allocation_size<T,Size>();
}

Could you advise how to fix the example code quickly?
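For what it is worth, this assertion is the one Eigen documents for fixed-size vectorizable types stored by value in heap-allocated objects. The usual workaround, sketched below with a hypothetical class (not necessarily the fix that should be applied to WhiteNoiseAcceleration), is to guarantee aligned allocation for the enclosing class:

#include <Eigen/Dense>

// Hypothetical example of the standard Eigen pattern: a class with fixed-size
// vectorizable members overloads operator new via EIGEN_MAKE_ALIGNED_OPERATOR_NEW
// so that heap allocations are properly aligned.
class ExampleModel
{
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW

private:
    Eigen::Matrix4d state_transition_;
};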

Missing const-overloading in some classes

We should check all the interfaces we have and enforce const-overloading for const-correctness.
For example, see all the get* methods of StateModel.h.

I don't think we are making huge mistakes, but we should improve our current interfaces.
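As a sketch of what const-overloading would look like (the class and member names here are illustrative, not the actual StateModel interface):

#include <Eigen/Dense>

// Illustrative getter with both a non-const and a const overload, so that the
// method can also be called on const references without losing const-correctness.
class SomeModel
{
public:
    Eigen::MatrixXd& getNoiseCovariance()
    {
        return noise_covariance_;
    }

    const Eigen::MatrixXd& getNoiseCovariance() const
    {
        return noise_covariance_;
    }

private:
    Eigen::MatrixXd noise_covariance_;
};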

Implement evaluation of a Gaussian density in case of special factorization of the covariance matrix

This issue is for the implementation of a utility function to evaluate a multivariate Gaussian density in those cases in which the covariance matrix can be written in the form S = UV + R with R an invertible block diagonal matrix.

This is useful when the covariance matrix of the measurement model is such that

  • the evaluation of its inverse and
  • the evaluation of its determinant,

that are required to evaluate the Gaussian density, are computationally demanding because rows(S) is a large number. However, if cols(U) << rows(U), then it is possible to use the Sherman–Morrison–Woodbury formula and the fact that R is block diagonal to reduce the problem to the inversion (and the evaluation of the determinant) of a matrix of size cols(U) x cols(U).
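For reference, the two identities involved (with I the identity matrix of size cols(U) x cols(U)) are the Sherman–Morrison–Woodbury formula and the matrix determinant lemma:

inv(S) = inv(R) - inv(R) U inv(I + V inv(R) U) V inv(R)
det(S) = det(I + V inv(R) U) det(R)

so that only the small matrix I + V inv(R) U needs to be inverted (and its determinant evaluated), while the inverse and determinant of the block diagonal R can be computed cheaply block by block.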

Once this is implemented, we will be able to implement bfl::SUKFCorrection::getLikelihood.

How should we implement the state?

There are three possibilities (a rough sketch of the last two is given after the list):

  • do not implement the state at all and let the user define it all the time;
  • implement the state in a "static" way, like OpenCV;
  • implement the state using templates, like PCL.
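Rough sketch of the last two options (the type names are hypothetical and only meant to frame the discussion):

#include <Eigen/Dense>

// Option 2, "static" OpenCV-like style: a single concrete class whose dimension
// is chosen at run time.
class State
{
public:
    explicit State(int dim) :
        data_(Eigen::VectorXd::Zero(dim))
    { }

    Eigen::VectorXd data_;
};

// Option 3, PCL-like style: the scalar type and the dimension are template
// parameters fixed at compile time.
template<typename Scalar, int Dim>
class StateT
{
public:
    Eigen::Matrix<Scalar, Dim, 1> data_;
};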

Implement evaluation of a Gaussian density in `utils`

There are several places where we are evaluating a Gaussian density, namely

  • GaussianLikelihood::likelihood
  • WhiteNoiseAcceleration::getTransitionProbability
  • GPFCorrection::evaluateProposal

and we will have others, such as *KFCorrection::getLikelihood, as per #42.

Can we have a utility function in utils.cpp for this? Of course, one day we may end up having a class for distributions.
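A minimal sketch of such a utility, assuming Eigen and a positive definite covariance (the function name and its placement in utils.cpp are hypothetical):

#include <cmath>

#include <Eigen/Dense>

// Hypothetical free function evaluating a multivariate Gaussian density N(x; mean, covariance).
double multivariate_gaussian(const Eigen::Ref<const Eigen::VectorXd>& x,
                             const Eigen::Ref<const Eigen::VectorXd>& mean,
                             const Eigen::Ref<const Eigen::MatrixXd>& covariance)
{
    const double pi = 3.14159265358979323846;

    const Eigen::VectorXd diff = x - mean;

    // A single LDLT factorization provides both the solve and the (log-)determinant.
    Eigen::LDLT<Eigen::MatrixXd> ldlt(covariance);

    const double quadratic = diff.dot(ldlt.solve(diff));
    const double log_det = ldlt.vectorD().array().log().sum();
    const double log_normalization = -0.5 * (x.size() * std::log(2.0 * pi) + log_det);

    return std::exp(log_normalization - 0.5 * quadratic);
}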

Update documentation

Update the documentation and integrate comments and code of the issues marked with the documentation label.

Update tests

We should:

  1. devise a way to disable logs (write to files, in particular) during tests by default, with the possibility of activating them with a parameter or CMake option;
  2. reduce the number of particles.

Avoid (or not) default implementation of method `MeasurementModel::getNoiseCovarianceMatrix`

At the moment, there is a default implementation of the virtual method MeasurementModel::getNoiseCovarianceMatrix() const:

std::pair<bool, MatrixXd> MeasurementModel::getNoiseCovarianceMatrix() const
{
    return std::make_pair(false, MatrixXd::Zero(1, 1));
}

If a user implements that method in an inheriting class without using the keywords const and override, the internal machinery of the library will silently call the default method, possibly causing erroneous behaviour. Should we change this to something different, e.g. throwing an exception in the default implementation?
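One way the alternative could look (a sketch of the proposal, not an agreed-upon change): replace the silent placeholder with a throwing default implementation, so that a missing override fails loudly.

#include <stdexcept>

// Sketch: same signature as the current default implementation, but throwing
// instead of returning a dummy pair.
std::pair<bool, MatrixXd> MeasurementModel::getNoiseCovarianceMatrix() const
{
    throw std::runtime_error("MeasurementModel::getNoiseCovarianceMatrix: method not implemented.");
}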

@claudiofantacci

Issue with decorator pattern

We need to rethink the future of the decorator pattern, as it cannot work in the following scenario: a decorated method calls a method of the decorated class, which in turn calls another class method. The decorated class will never be able to call back into a decorated method, and this may cause unexpected behaviour when decorator classes are used improperly.
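A minimal illustration of the problem with hypothetical class names (Component and Decorator stand in for a decorated class and its decorator):

#include <iostream>
#include <memory>

class Component
{
public:
    virtual ~Component() = default;

    // outer() calls this->inner() on the same object.
    virtual void outer()
    {
        inner();
    }

    virtual void inner()
    {
        std::cout << "Component::inner" << std::endl;
    }
};

class Decorator : public Component
{
public:
    Decorator(std::unique_ptr<Component> decorated) :
        decorated_(std::move(decorated))
    { }

    // The decorator forwards to the decorated object...
    void outer() override
    {
        decorated_->outer();
    }

    void inner() override
    {
        std::cout << "Decorator::inner" << std::endl;
    }

private:
    std::unique_ptr<Component> decorated_;
};

int main()
{
    Decorator decorator(std::make_unique<Component>());

    // ...but the decorated Component calls its own inner(), never Decorator::inner(),
    // so this prints "Component::inner".
    decorator.outer();

    return 0;
}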
