ethz-asl / curves
A library of curves for estimation.
License: BSD 3-Clause "New" or "Revised" License
Now that we are working with the unstable gtsam development branch, an issue has come up: contributors to the curves library are using different revisions of gtsam, which has in turn caused some confusion about why things are breaking and/or faulting on different machines.
E.g. while the 6D-Expression branch builds with an older revision of gtsam, it does not build with one checked out as of today (rev 61666f22d618010bc1a63298ef856741db62f1d3). Some include locations have changed, some previously public functions have become private, etc.
Can we standardize on a specific gtsam revision that we will work with for some set period of time? Or are the day-to-day expression updates in gtsam critical to our use? Maybe someone more involved on the gtsam side can make a recommendation (is there a recent commit that will be fairly stable for our use until speed becomes critical?).
There should be a tool that allows extracting an interpolated curve from two other curves, depending on an interpolation parameter in the range [0, 1].
A simplified example would be
curve = linearlyInterpolate(curve1, curve2, t);
t = 0.0 -> curve = curve1
t = 0.5 -> curve = 0.5 * curve1 + 0.5 * curve2
t = 1.0 -> curve = curve2
Should we add this feature to this library, and if so, how?
This could cause errors. Rather, make a well-named factory function that does this.
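A minimal sketch of what such a factory function could look like, with curves reduced to plain callables for illustration (the names `Curve1D` and `linearlyInterpolate` are hypothetical, not part of the library):

```cpp
#include <cassert>
#include <functional>

// Hypothetical stand-in: a 1D "curve" reduced to a callable Time -> double.
using Time = double;
using Curve1D = std::function<double(Time)>;

// Well-named factory function as suggested above: blends two curves with a
// fixed interpolation parameter t in [0, 1].
Curve1D linearlyInterpolate(Curve1D c1, Curve1D c2, double t) {
  return [c1, c2, t](Time time) {
    // Pointwise blend: (1 - t) * curve1 + t * curve2.
    return (1.0 - t) * c1(time) + t * c2(time);
  };
}
```

At t = 0 the result reproduces curve1, at t = 1 it reproduces curve2, matching the table above.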
I suppose this comes down to personal preference, but I am a fan of Google style when it comes to references/pointers (inputs are const refs, outputs are pointers)
http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Reference_Arguments
Opinions?
In the case where the time falls exactly on a coefficient, only one coefficient should be returned, not a pair of coefficients. To our understanding, two coefficients should only be returned when the time falls between two coefficients.
We see three possible cases :
1 - Time out of bounds => error
2 - Time between two coefficients => two coefficients should be returned
3 - Time on a coefficient => what should the behaviour of this function be?
Would the user have to call a getCoefficientAt() (singular) or hasCoefficientAtTime() function to check whether the time falls on a coefficient before calling getCoefficientsAt()?
Please advise.
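One possible shape for the three cases above, sketched over a simple time-to-coefficient map (the function signature here is a hypothetical simplification, not the library's actual interface):

```cpp
#include <cassert>
#include <iterator>
#include <map>
#include <stdexcept>
#include <vector>

using Time = long long;
using Coefficient = double;

// Sketch of the three cases discussed above:
//   1. time out of bounds            -> error (here: throw)
//   2. time between two coefficients -> return both neighbours
//   3. time exactly on a coefficient -> return just that one
std::vector<Coefficient> getCoefficientsAt(
    const std::map<Time, Coefficient>& coeffs, Time time) {
  if (coeffs.empty() || time < coeffs.begin()->first ||
      time > coeffs.rbegin()->first) {
    throw std::out_of_range("time out of bounds");  // case 1
  }
  auto upper = coeffs.lower_bound(time);  // first entry with key >= time
  if (upper->first == time) {
    return {upper->second};               // case 3: exactly on a coefficient
  }
  auto lower = std::prev(upper);
  return {lower->second, upper->second};  // case 2: between two coefficients
}
```

With this shape the caller can branch on the size of the result instead of probing with a separate hasCoefficientAtTime() call first.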
Would it make sense to bring this closer together with the CubicHermiteSE3Curve? I see that the CubicHermiteE3Curve presently does not inherit from another Curve type, which goes a bit against the common-interface idea.
I am trying to populate the LinearInterpolationVectorSpaceEvaluator.cpp to create an evaluator object for our linearly interpolated 1d curve. Currently the class is defined as :
class LinearInterpolationVectorSpaceEvaluator : public Evaluator< VectorSpaceConfig >
However, in LinearInterpolationVectorSpaceEvaluator.hpp the getter function is:
virtual EvaluatorTypePtr getEvaluator(Time time) const;
This function returns a boost::shared_ptr< Evaluator< VectorSpaceConfig > >.
Could someone comment on how to make the link between the LinearInterpolationVectorSpaceEvaluator class and the boost::shared_ptr? Am I right in trying to create a LinearInterpolationVectorSpaceEvaluator object, or am I getting something wrong?
Thanks!
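The link works because a shared_ptr to a derived class converts implicitly to a shared_ptr to its base. A minimal sketch with std::shared_ptr (boost::shared_ptr behaves the same way); the stand-in types here are stripped-down hypothetical versions of the real ones:

```cpp
#include <cassert>
#include <memory>

struct VectorSpaceConfig {};

// Stripped-down stand-in for the Evaluator base template.
template <typename Config>
class Evaluator {
 public:
  virtual ~Evaluator() = default;
  virtual double evaluate() const = 0;
};

// Concrete evaluator, as in the class definition quoted above.
class LinearInterpolationVectorSpaceEvaluator
    : public Evaluator<VectorSpaceConfig> {
 public:
  double evaluate() const override { return 42.0; }
};

using EvaluatorTypePtr = std::shared_ptr<Evaluator<VectorSpaceConfig>>;

EvaluatorTypePtr getEvaluator() {
  // Construct the concrete type; the derived-to-base pointer conversion
  // happens implicitly when returning through EvaluatorTypePtr.
  return std::make_shared<LinearInterpolationVectorSpaceEvaluator>();
}
```

So yes: constructing a LinearInterpolationVectorSpaceEvaluator and returning it through the base-class pointer type is the intended pattern.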
getBackTime() --> getMinTime()
extendFront() / extendBack() --> extend(), inferring the direction from the timestamps.
retractFront() / retractBack() --> ???
The current implementation allows evaluation both at the underlying coefficient values and with coefficients passed as input. Why not leave Curves to be evaluated when the values are known to the object, and use the Evaluator when coefficient values must be an input?
On bitbucket.org/curves, there is CubicHermiteE3Curve which I need in my optimization lib. Was it renamed?
I would have constructors like:
/// Initialize one curve from another
Curve(const Curve& other);
/// Initialize one curve from another by sampling at the specified times
Curve(const Curve& other, const std::vector<Time>& times);
We can make this abstract by defining a pure virtual object to wrap the Values and std::vector<Matrix> that show up in the GTSAM NonLinearFactor.
Something like:
class EvaluatorCoefficients {
 public:
  virtual ~EvaluatorCoefficients() = default;
  // Get the coefficient associated with this key
  virtual const Coefficient& coefficient(const Key& key) const = 0;
  // Set the Jacobian of this coefficient
  virtual void setJacobian(const Key& key, const Eigen::MatrixXd& Jk) = 0;
};
Then the implementation could handle everything without an extra copy.
Some people like exceptions. I'm starting to lean toward return codes. This only works if you have a logging system. I'm currently using glog so that should work. If we go this route, which functions should be able to fail?
Thanks for this very useful lib, though it's hard for me to understand.
Recently I have had some questions while using it to interpolate my SE3 pose trajectory:
a) Is cubic Hermite fitting better than cubic spline fitting? How would one implement a CubicSplineSE3Curve?
b) When using CubicHermiteSE3Curve fitting, why do the new Euler angles become random when yaw is about 90 or -90 degrees?
Waiting for great answers!
Currently, in CoefficientImplementation the function names and documentation use some very technical and specific terminology. It may be worth simplifying this.
One example: the "retract" function in this class is the function that updates the state values with a perturbation. I believe it has to do with manifold terminology, but maybe a more intuitive naming convention would be appropriate?
Much of the documentation also uses terms like Chart and Ambient Space, which I did not understand until I read about atlases, manifolds, etc. We may want to consider our target audience if we plan to make this open source. I doubt this is common roboticist knowledge.
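To make the terminology concrete, here is the idea on the simplest possible "manifold", a 1D vector space, where retract really is just "apply a small update to the state" (the `ScalarState` type is purely illustrative):

```cpp
#include <cassert>

struct ScalarState {
  double value;

  // retract: move the state by a perturbation from the tangent space.
  // On a plain vector space this is just addition; on SO(3)/SE(3) it
  // would involve the exponential map instead.
  ScalarState retract(double delta) const { return ScalarState{value + delta}; }

  // localCoordinates: the inverse of retract, i.e. the perturbation
  // that takes *this onto other.
  double localCoordinates(const ScalarState& other) const {
    return other.value - value;
  }
};
```

So "retract" could plausibly be documented as "apply a perturbation to the state", with the manifold terminology kept as a parenthetical for readers who know it.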
Background for discussion: https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/
We may want a way of specifying the maximum derivative order available.
E.g. cubic splines are C^2 continuous, and with GPs only a few analytical derivatives may be available. I have to see if there is a simple automatic method.
There are files that are not compiled when building curves, but only when building curves_ros. If we are unlucky, there may be even more files inside curves that only get compiled when used by another package. This is problematic because, among other things, it could prevent us from catching bugs early.
As can be seen in http://129.132.38.183:8080/job/curves/label=ubuntu/85/parsed_console/?auto_refresh=false. I would assume that the version of gtsam used here is wrong (b66dda2).
Which commit do you guys @gawela , @rdube use with curves currently? (the commit above is the one used by multiagent mapping)
minkindr_gtsam compiles with that revision. But this probably doesn't tell us a lot, right?
We'll require further functions to do all the fancy things with curves, such as extracting pieces of curves and composing several curves into one.
Proposed functions would be (shifted from comments in curves.hpp):
/// extract a piece of the curve as a new curve
/// \param[in] min_time the begin time of the new curve
/// \param[in] max_time the end time of the new curve
/// \param[out] result the sub curve.
virtual void subcurve(Time min_time, Time max_time, Curve* result) const;
/// join two curves
/// compose two curves
And further propositions.
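A minimal sketch of what subcurve() could do for a curve stored as a plain time-to-value map (the `PiecewiseLinearCurve` type here is a hypothetical simplification; the real implementation would also have to handle boundary coefficients that straddle the cut times):

```cpp
#include <cassert>
#include <map>

using Time = long long;

struct PiecewiseLinearCurve {
  std::map<Time, double> coefficients;

  // Extract the piece of the curve on [min_time, max_time] into *result,
  // following the output-pointer convention discussed earlier.
  void subcurve(Time min_time, Time max_time,
                PiecewiseLinearCurve* result) const {
    result->coefficients.clear();
    auto first = coefficients.lower_bound(min_time);  // first stamp >= min_time
    auto last = coefficients.upper_bound(max_time);   // first stamp >  max_time
    result->coefficients.insert(first, last);
  }
};
```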
I had many discussions with Mike about this, and it looks like a lot of the ugliness on both sides of the curves/continuous scan-matching code base is due to our goal of keeping curves and the gtsam library logically separate. Therefore I propose a fix.
We should use #ifdef USE_GTSAM to make the library gtsam compatible. This would require only very minimal changes to the library:
- Make the Coefficient class derive from gtsam::DerivedValue<Coefficient>. This already works without any other code modification.
- Use gtsam::Expression objects. This is bleeding-edge functionality in gtsam, but it would eclipse our use of Evaluator types, it would be fast because every curve could produce information that includes the fixed-size Jacobians, and it would be simple to work with.
To do this, we have to do the following: for a VectorSpaceCurve<3> (a 3D vector space curve) you would have a function like virtual Expression< Point3 > evaluate(Time time) = 0;. This is the block automatic differentiation of an evaluator. Nice bits: we don't have to implement all of these evaluators for different things, and the curves can build the expressions with strong type-size information (so they are fast).
I'll work with Mike to come up with the right design for this over the next few days. This will simplify everything that happens downstream.
For efficiency's sake, can we write some template structure to bypass virtual functions in this class - esp. for retract and localCoordinates, which will be frequently called by GTSAM?
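One standard way to do this is the CRTP (curiously recurring template pattern): the base class is templated on the derived class, so the call resolves at compile time with no vtable lookup. A minimal sketch with hypothetical names:

```cpp
#include <cassert>

// CRTP base: retract() dispatches statically to the derived class.
template <typename Derived>
struct CoefficientBase {
  double retract(double delta) const {
    // static_cast instead of a virtual call; resolved at compile time,
    // so the compiler can inline the derived implementation.
    return static_cast<const Derived*>(this)->retractImpl(delta);
  }
};

struct VectorCoefficient : CoefficientBase<VectorCoefficient> {
  double value = 0.0;
  double retractImpl(double delta) const { return value + delta; }
};
```

The trade-off is that code taking "any coefficient" must itself become templated, since there is no longer a common runtime base class.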
Here is the error message I'm getting:
[curves] ==> '/home/mbosse/catkin_aslam/build/curves/build_env.sh /usr/bin/make -j7 -l7' in '/home/mbosse/catkin_aslam/build/curves'
[ 10%] Generating API documentation with Doxygen
/bin/sh: 1: cd: can't cd to /home/mbosse/catkin_aslam/devel/share/curves/doc
make[2]: *** [CMakeFiles/doc] Error 2
https://bitbucket.org/gtborg/gtsam/issue/10/follow-up-on-eigen-patches
Our current guess at a solution is:
There are lots of virtual and overridden functions that are not implemented, but will not even trigger a runtime warning if still used. In the few places where a warning/error is triggered, it is not done in a consistent manner (see this issue: #78). All these functions should trigger a warning/error, and it needs to be decided which way to go in terms of the logging framework.
Optimize for coefficients that fall directly on keys (e.g. alpha == 0) --> no interpolation is required in this case and expressions do not need to be evaluated.
Soon, we will have to drop a file somewhere and assess the quality of the curve (at least for the Euclidean space). Here are possibilities that came up at the last meeting:
Some extra questions/thoughts:
To correctly fix a curve at a given time. However, as in the HermiteCurve, this introduces ExpressionFactors to curves. Do we want that?
I'd like to converge on a design that tightly integrates with GTSAM as soon as possible so that we can come up with first implementations of everything using the new "Traits" design and GTSAM expressions. To understand this discussion it is useful to review how the GTSAM trait classes work, and how block automatic differentiation (BAD) works in GTSAM. The short summary of these two topics is:
There are at least 3 different ways errors are logged/handled: std::cerr, std::runtime_error, and CHECK (a google macro defined in logging.h). We should adhere to one standard, the ideal one probably being MELO. This will introduce an additional dependency, though.
What was the design decision behind separating evaluation and the Jacobian function:
/// Get the curve Jacobians.
/// This is the main interface for GTSAM
virtual void getJacobians(unsigned derivativeOrder, ...
I ask because, in the case of GPs, my Jacobians essentially double as the interpolation coefficients (similar to alpha and 1-alpha in the case of linear interpolation). I know Paul mentioned previously that it is not good to store the Jacobians because they take too much memory, so I am computing them functionally, but at the moment I have to do it twice (once for evaluation and once for getJacobians).
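An alternative interface that avoids the double computation would make the Jacobians an optional output of evaluation itself, so the weights are computed once. A sketch for the linear case (all names hypothetical):

```cpp
#include <cassert>

struct Jacobians {
  double wrt_first = 0.0;
  double wrt_second = 0.0;
};

// Evaluate the linear interpolation between coefficients c0 and c1; if the
// caller passes a non-null Jacobians pointer, fill it in as a by-product.
// For linear interpolation the weights ARE the Jacobians w.r.t. the
// coefficients, so nothing is computed twice.
double evaluateLinear(double c0, double c1, double alpha,
                      Jacobians* jacobians /* may be nullptr */) {
  if (jacobians != nullptr) {
    jacobians->wrt_first = 1.0 - alpha;
    jacobians->wrt_second = alpha;
  }
  return (1.0 - alpha) * c0 + alpha * c1;
}
```

Callers that only need the value pass nullptr and pay nothing extra; GTSAM-facing code passes a Jacobians struct and gets both in one call.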
The base coefficient class already declares the values as a dynamic Eigen vector, so when implementing derived classes we can't really take advantage of efficient statically sized vectors.
Is there a better way to design the interfaces, perhaps to use more templates, so that we can take advantage of efficient Block Auto Differentiation routines, and not have so much superfluous allocation, deallocation and copying of data?
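One templated direction, sketched with std::array standing in for a fixed-size Eigen vector (the `FixedCoefficient` name is hypothetical): make the dimension a compile-time parameter, so storage is stack-allocated and loops over it can be unrolled.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Statically sized coefficient: dimension N is known at compile time,
// so there is no heap allocation for the values.
template <std::size_t N>
struct FixedCoefficient {
  std::array<double, N> values{};

  FixedCoefficient<N> retract(const std::array<double, N>& delta) const {
    FixedCoefficient<N> out;
    for (std::size_t i = 0; i < N; ++i) out.values[i] = values[i] + delta[i];
    return out;
  }
};
```

The cost, as with any such design, is that code handling "a coefficient of any dimension" must itself be templated or type-erased.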
I think this makes it more clear that time is integer nanoseconds.
Anyone disagree?
Even for development, it may be prudent to better document the expected behaviour of our functions.
E.g. should these functions clear the map before adding entries, or simply insert on top of the existing entries (the current case)?
HermiteCoefficientManager:
/// \brief Get the coefficients that are active within a range \f$[t_s,t_e) \f$.
void getCoefficientsInRange(Time startTime,
Time endTime,
Coefficient::Map& outCoefficients) const;
/// \brief Get all of the curve's coefficients.
void getCoefficients(Coefficient::Map& outCoefficients) const;
LinearInterpolationVectorSpaceCurve:
/// \brief Get the coefficients that are active at a certain time.
virtual void getCoefficientsAt(Time time,
Coefficient::Map& outCoefficients) const;
/// \brief Get the coefficients that are active within a range \f$[t_s,t_e) \f$.
virtual void getCoefficientsInRange(Time startTime,
Time endTime,
Coefficient::Map& outCoefficients) const;
/// \brief Get all of the curve's coefficients.
virtual void getCoefficients(Coefficient::Map& outCoefficients) const;
Is there a case where appending would be useful, and if so, do we ever expect a scenario where we attempt to insert a duplicate key? e.g. this would happen if we called getCoefficientsInRange twice, with two overlapping ranges, and the same outCoefficients map. If we expect duplicates, should we sanity check that the coefficient values are the same?
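The duplicate-key scenario matters because std::map::insert silently keeps the existing entry when the key is already present. A small demonstration, assuming the output map is a std::map as the Coefficient::Map typedef suggests:

```cpp
#include <cassert>
#include <map>

// Calling a getCoefficientsInRange-style function twice with overlapping
// ranges and the same output map would hit this behaviour: the second
// insert of a key is silently ignored, and any differing value is lost.
void demoDuplicateInsert() {
  std::map<int, double> out;
  out.insert({1, 10.0});
  auto result = out.insert({1, 99.0});  // duplicate key
  assert(!result.second);               // insertion was refused...
  assert(out.at(1) == 10.0);            // ...and the original value kept
}
```

So if duplicates with differing values are ever possible, a sanity check (or documented clear-before-fill semantics) seems worthwhile.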
Plenty of virtual functions get overridden, but the override specifier is not used a single time, as suggested by best practices.
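For context, what the specifier buys us: with override, a signature mismatch becomes a compile error instead of silently declaring a new, unrelated virtual function. A minimal example with made-up names:

```cpp
#include <cassert>

struct Base {
  virtual ~Base() = default;
  virtual int maxTime() const { return 0; }
};

struct Derived : Base {
  // 'override' makes the compiler verify this really overrides
  // Base::maxTime. Dropping the const or misspelling the name would
  // then fail to compile instead of silently hiding the base version.
  int maxTime() const override { return 42; }
};
```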