patr-schm / tinyad
Automatic Differentiation in Geometry Processing Made Simple
License: MIT License
Hello there,
I've been using this library to create a reference implementation of a project I've been working on for a long time.
I'm generally very satisfied, so thank you for creating this! I've been waiting quite some time for someone to implement a convenient autodiff framework that supports sparse Hessians. After years of deriving these by hand during my PhD, having autodiff generate Hessians for me has been a welcome relief that has accelerated my research progress.
However, one thing has been bothering me, and I wanted to ask about it. Perhaps I'm simply not seeing the answer, but if not, I'd like to request a patch so that I don't need to maintain a fork to support my reference implementation.
In particular, TinyAD seems to have built-in support for swapping out the linear solver used in convenience methods like the line search, but I can't figure out how to use this feature effectively.
I created a tinyad wrapper that I've been using in my project here:
https://github.com/the13fools/gpgpt/blob/main/src/ADWrapper/ADFuncRunner.h#L82.
Ideally, I'd like to initialize a solver like so in this code:
TinyAD::LinearSolver<double, Eigen::CholmodSupernodalLLT<Eigen::SparseMatrix<double>>> solver;
But when I do something like this, TinyAD gives me template errors. I think this should be a fairly straightforward thing to patch in the library code?
Thanks for creating this and for the support!
Best,
Josh
I am trying to use the scalar_function interface to assemble my global problem, but I am running into trouble. My mesh is defined in terms of an Eigen::Matrix<float, Eigen::Dynamic, 3> for points and a std::vector<std::array<int, 3>> for faces. However, when I do
auto func = TinyAD::scalar_function<3>(faces, [](auto &element) { return 0; });
I get a compilation error
[build] /workspace/src/Velo/Physics/StVK.cpp:39:17: error: no matching function for call to 'scalar_function'
[build] 39 | auto func = TinyAD::scalar_function<3>(faces, [](auto &element) { return 0; });
[build] | ^~~~~~~~~~~~~~~~~~~~~~~~~~
[build] /workspace/build/dev-release/_deps/tinyad-src/include/TinyAD/Detail/ScalarFunctionImpl.hh:302:6: note: candidate function template not viable: no known conversion from '(lambda at /workspace/src/Velo/Physics/StVK.cpp:39:51)' to 'const EvalSettings' for 2nd argument
[build] 302 | auto scalar_function(
[build] | ^
[build] 303 | const VariableRangeT& _variable_range,
[build] 304 | const EvalSettings& _settings)
[build] | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[build] 1 error generated.
I am not sure how I should use the assembly interface.
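For what it's worth, the error message says the second argument of scalar_function must be an EvalSettings, not a lambda, and the other example in this thread first creates the function from the variable range alone and registers the per-element lambda afterwards via add_elements. An untested sketch along those lines (assuming the same TinyAD API as that example):

```cpp
// Untested sketch: scalar_function takes only the variable range (plus
// optional EvalSettings); the per-element lambda goes to add_elements.
auto func = TinyAD::scalar_function<3>(TinyAD::range(points.rows()));
func.add_elements<3>(TinyAD::range(faces.size()),
    [&](auto& element) -> TINYAD_SCALAR_TYPE(element) {
        return 0.0; // return the active scalar type, not int
    });
```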
I'm not sure what the preferred way to contribute to this library is, but I'd like to suggest updating the implementation of autodiffing pow for integer exponents to the following:
friend Scalar pow(
        const Scalar& a,
        const int& e)
{
    TINYAD_CHECK_FINITE_IF_ENABLED_AD(a);
    if constexpr (TINYAD_ENABLE_OPERATOR_LOGGING) TINYAD_DEBUG_VAR(__FUNCTION__);
    if (e == 0)
    {
        // a^0 == 1, with vanishing first and second derivatives
        return chain((PassiveT)1.0,
                     (PassiveT)0.0,
                     (PassiveT)0.0,
                     a);
    }
    else if (e == 1)
    {
        // a^1 == a: identity value, derivative 1, second derivative 0
        return chain(a.val,
                     (PassiveT)1.0,
                     (PassiveT)0.0,
                     a);
    }
    else
    {
        // One std::pow call: build a^e, e*a^(e-1), e*(e-1)*a^(e-2) from a^(e-2)
        const PassiveT f2 = std::pow(a.val, e - 2);
        const PassiveT f1 = f2 * a.val;
        const PassiveT f  = f1 * a.val;
        return chain(
            f,
            e * f1,
            e * (e - 1) * f2,
            a);
    }
}
For my use case (deformable simulation), I find that I'm often writing very similar ScalarFunctions. For example, when calculating two energies for a range of vertices, only a single call to an elementEnergy function template differs. Below is a minimal example:
#include <Eigen/Core>
#include <TinyAD/ScalarFunction.hh>
#include <iostream>
template <typename T>
T elementEnergy0(Eigen::Vector3<T> x) {
    return x.sum();
}

template <typename T>
T elementEnergy1(Eigen::Vector3<T> x) {
    return x.prod();
}

TinyAD::ScalarFunction<3, double, Eigen::Index> createScalarFunction(int vertexCount) {
    auto scalarFunction = TinyAD::scalar_function<3>(TinyAD::range(vertexCount));
    scalarFunction.add_elements<1>(TinyAD::range(vertexCount), [&](auto& element) -> TINYAD_SCALAR_TYPE(element) {
        using ScalarT = TINYAD_SCALAR_TYPE(element);
        int index = element.handle;
        Eigen::Vector3<ScalarT> x = element.variables(index);
        ScalarT energy = elementEnergy0(x); // I want this function call to be variable.
        return energy;
    });
    return scalarFunction;
}

int main() {
    Eigen::RowVector3d v0(0.0, 0.0, 0.0);
    Eigen::RowVector3d v1(1.0, 1.0, 1.0);
    Eigen::Matrix<double, 2, 3> vertexPositions;
    vertexPositions << v0, v1;
    int vertexCount = vertexPositions.rows();
    Eigen::VectorXd x = vertexPositions.reshaped<Eigen::RowMajor>(vertexPositions.rows() * 3, 1);

    auto scalarFunction0 = createScalarFunction(vertexCount /*, elementEnergy0 */);
    auto e0 = scalarFunction0.eval(x);
    std::cout << e0 << std::endl;

    // I want this function to use elementEnergy1:
    auto scalarFunction1 = createScalarFunction(vertexCount /*, elementEnergy1 */);
}
The problem is, I can't find a way to pass function templates into createScalarFunction without committing to concrete types. However, TinyAD "needs" these function templates to remain templated, so that they can be called with both a passive type (e.g. double) and an active type (e.g. TinyAD::Scalar).
So to conclude, my question is: how can I create a function that returns a ScalarFunction and internally uses a variable function template?
PS: Thanks for the great library, I'm really loving it. The sparse Hessians are exactly what I've been searching for!
The paper said "The restriction of having to choose k at compile time can be lifted by using TinyAD::Scalar<Eigen::Dynamic, double> at a run time cost."
I tried to modify angle.cc as follows, but got an assertion error. How can I choose k at run time? Thank you.
void test(Eigen::VectorXd variable, Eigen::VectorXd y) {
    // Choose autodiff scalar type with the number of variables set at run time
    using ADouble = TinyAD::Scalar<Eigen::Dynamic, double>;
    // Init a vector of active variables
    auto x = ADouble::make_active(variable);
    // Compute angle between the two vectors
    ADouble angle = acos(x.dot(y) / (x.norm() * y.norm()));
    // Retrieve gradient and Hessian w.r.t. x
    Eigen::Vector3d g = angle.grad;
    Eigen::Matrix3d H = angle.Hess;
    TINYAD_INFO("angle: " << std::endl << angle.val);
    TINYAD_INFO("g: " << std::endl << g);
    TINYAD_INFO("H: " << std::endl << H);
}

int main()
{
    Eigen::VectorXd variable(3);
    variable << 0.0, -1.0, 1.0;
    Eigen::VectorXd constant(3);
    constant << 2.0, 3.0, 5.0;
    test(variable, constant);
    return 0;
}
Hi, I'm trying to define an objective that involves the shape operator, which evaluates data on the one-ring around each vertex. However, all the examples only show objectives with a constant number of variables per element. I can't call func.add_elements<valence> in a loop, since the value inside <> must be a compile-time constant. Is there any solution for this kind of objective? Thank you so much in advance!