rjhogan / adept-2
Combined array and automatic differentiation library in C++
Home Page: http://www.met.reading.ac.uk/clouds/adept/
License: Apache License 2.0
Memory usage problem on Windows with 64-bit MinGW.
This is fine:
// memory OK
// ~5 MB
aVector yy = x*2;
for (int q = 0; q < 30000000; q++)
{
  stack.new_recording();
  yy = x*2;
}
But with this version the memory usage goes through the roof. Even after the Stack goes out of scope, the memory is not freed:
// crazy memory usage and super slow
// 475 MB, and it is never freed
// stack is out of scope
for (int q = 0; q < 30000000; q++)
{
  stack.new_recording();
  aVector yy = x*2;
}
On Linux I had no such issue. I've tried various settings. I thought it might be something in the Packet.h file to do with the freeing of memory, but I don't understand what's going on in that file. Trying various compiler flags made no difference. Not building the library, and instead using the method for non-Unix systems, made no difference either. I use msys2, so my Windows system behaves more like Linux than Windows.
Below is what the Stack reported before and after the previous code. On Linux the result from the Stack was about the same, and nothing looks odd...
When stack is created
Automatic Differentiation Stack (address 0x22fa90):
Currently attached - thread safe
Recording status:
Recording is ON
0 statements (1048576 allocated) and 0 operations (1048576 allocated)
0 gradients currently registered and a total of 1 needed (current index 0)
Gradient list has no gaps
Computation status:
0 gradients assigned (0 allocated)
Jacobian size: 0x0
Independent indices:
Dependent indices:
Parallel Jacobian calculation can use up to 2 threads
Each thread treats 4 (in)dependent variables
After the 475 MB memory growth has happened
Automatic Differentiation Stack (address 0x22fa90):
Currently attached - thread safe
Recording status:
Recording is ON
1 statements (1048576 allocated) and 1 operations (1048576 allocated)
42 gradients currently registered and a total of 70 needed (current index 69)
Gradient list has 1 gaps (12-38 )
Computation status:
0 gradients assigned (0 allocated)
Jacobian size: 0x0
Independent indices:
Dependent indices:
Parallel Jacobian calculation can use up to 2 threads
Each thread treats 4 (in)dependent variables
Below is the output of all the settings from settings.h for one of my tests...
version 2.0.5
compiler_version g++ [7.2.0]
compiler_flags
-g1 -O3 -D__unix__ -march=native
configuration
Adept version 2.0.5:
Compiled with g++ [7.2.0]
Compiler flags "-g1 -O3 -D__unix__ -march=native"
BLAS support from blas library
Jacobians processed in blocks of size 4
have_matrix_multiplication 1
Using make check, one test always fails on Windows for me, and that's:
test_thread_safe_arrays... ./run_tests.sh: line 18: 4944 Segmentation fault ./$TEST >> $LOG 2> $STDERR
In the test_results.txt file there is nothing under the test_thread_safe_arrays section...
########################################################
### test_thread_safe_arrays
########################################################
All other tests passed.
Cheers,
Jonti
Tests also fail to build with GCC 12:
In file included from autodiff_benchmark.cpp:17:
differentiator.h: In member function 'virtual bool AdolcDifferentiator::adjoint(TestAlgorithm, const std::vector<double>&, std::vector<double>&, const std::vector<double>&, std::vector<double>&)':
differentiator.h:383:17: error: 'aReal' was not declared in this scope; did you mean 'adept::aReal'?
383 | std::vector<aReal> q_init(NX);
| ^~~~~
| adept::aReal
In file included from ../include/adept.h:18,
from differentiator.h:23:
../include/adept/scalar_shortcuts.h:22:24: note: 'adept::aReal' declared here
22 | typedef Active<Real> aReal;
| ^~~~~
differentiator.h:383:22: error: template argument 1 is invalid
383 | std::vector<aReal> q_init(NX);
| ^
differentiator.h:383:22: error: template argument 2 is invalid
differentiator.h:384:22: error: template argument 2 is invalid
384 | std::vector<aReal> q(NX);
| ^
differentiator.h:389:13: error: invalid types 'int[int]' for array subscript
389 | q_init[i] <<= x[i];
| ^
differentiator.h:392:9: error: no matching function for call to 'AdolcDifferentiator::func(TestAlgorithm&, int&, int&)'
392 | func(test_algorithm, q_init, q);
| ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
differentiator.h:96:8: note: candidate: 'template<class ActiveRealType> void Differentiator::func(TestAlgorithm, const std::vector<T>&, std::vector<T>&)'
96 | void func(TestAlgorithm test_algorithm,
| ^~~~
differentiator.h:96:8: note: template argument deduction/substitution failed:
differentiator.h:392:9: note: mismatched types 'const std::vector<T>' and 'int'
392 | func(test_algorithm, q_init, q);
| ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
differentiator.h:395:8: error: invalid types 'int[int]' for array subscript
395 | q[i] >>= y[i];
| ^
differentiator.h: In member function 'virtual bool AdolcDifferentiator::jacobian(TestAlgorithm, const std::vector<double>&, std::vector<double>&, std::vector<double>&, int)':
differentiator.h:420:17: error: 'aReal' was not declared in this scope; did you mean 'adept::aReal'?
420 | std::vector<aReal> q_init(NX);
| ^~~~~
| adept::aReal
../include/adept/scalar_shortcuts.h:22:24: note: 'adept::aReal' declared here
22 | typedef Active<Real> aReal;
| ^~~~~
differentiator.h:420:22: error: template argument 1 is invalid
420 | std::vector<aReal> q_init(NX);
| ^
differentiator.h:420:22: error: template argument 2 is invalid
differentiator.h:421:22: error: template argument 2 is invalid
421 | std::vector<aReal> q(NX);
| ^
differentiator.h:426:13: error: invalid types 'int[int]' for array subscript
426 | q_init[i] <<= x[i];
| ^
differentiator.h:429:9: error: no matching function for call to 'AdolcDifferentiator::func(TestAlgorithm&, int&, int&)'
429 | func(test_algorithm, q_init, q);
| ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
differentiator.h:96:8: note: candidate: 'template<class ActiveRealType> void Differentiator::func(TestAlgorithm, const std::vector<T>&, std::vector<T>&)'
96 | void func(TestAlgorithm test_algorithm,
| ^~~~
differentiator.h:96:8: note: template argument deduction/substitution failed:
differentiator.h:429:9: note: mismatched types 'const std::vector<T>' and 'int'
429 | func(test_algorithm, q_init, q);
| ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
differentiator.h:432:8: error: invalid types 'int[int]' for array subscript
432 | q[i] >>= y[i];
| ^
make[2]: *** [autodiff_benchmark-autodiff_benchmark.o] Error 1
make[1]: *** [check-am] Error 2
make: *** [check-recursive] Error 1
Hi,
I currently use CppAD and was considering porting my codebase to use your library. I make use of gradient, Jacobian, and Hessian computations in CppAD. The optimisation framework I'm using also makes use of the sparsity patterns that CppAD can compute for both Jacobian and Hessian matrices. Before I begin the porting work, is there anything I'm currently using from CppAD, as per the above, that's not currently provided in your library? Thanks
Hi,
I'm having an issue installing Adept-2 on macOS.
Since there is no autoreconf on macOS, I tried to install from the release package adept-2.1.1.
I followed the installation guide, and the configure option used was
./configure --prefix=/Users/xxx/build "CXXFLAGS=-g -O3"
But the make command gave me this error
libtool: link: ranlib .libs/libadept.a
libtool: link: ( cd ".libs" && rm -f "libadept.la" && ln -s "../libadept.la" "libadept.la" )
Making all in include
make[2]: Nothing to be done for `all'.
Making all in benchmark
make[2]: Nothing to be done for `all'.
Making all in test
********************************************************
*** To compile test programs in test/ and benchmark/ ***
*** type "make check" ***
********************************************************
make[2]: *** No rule to make target `README.md', needed by `all-am'. Stop.
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2
May I get some support on this issue?
Thanks,
Ge
I am seeing weird behavior with asin and acos.
Consider the main function in file "summa.cpp"
#include <iostream>
#include <vector>
#include <cmath>
#include "adept.h"
using adept::adouble;

void simple(std::vector<adouble>& y, std::vector<adouble>& x, adept::Stack& stack)
{
  y[0] = acos(x[2]);
  y[1] = asin(x[2]);
}

int main(int argc, char* argv[])
{
  std::vector<double> jac(16);
  adept::Stack stack;
  std::vector<adouble> x(4);
  x[0] = 0;
  x[1] = 0;
  x[2] = 0.5;
  x[3] = 0;
  stack.new_recording();
  std::vector<adouble> y(4);
  simple(y, x, stack);
  stack.independent(&x[0], 4);
  stack.dependent(&y[0], 4);
  stack.jacobian(&jac[0]);
  for (unsigned int i = 0; i < 16; i++) {
    std::cout << jac[i] << ", ";
  }
  std::cout << std::endl;
  std::cout << sin(0.5) << std::endl;
  std::cout << asin(0.5) << std::endl;
}
I would thus expect the following to be printed (computing the Jacobian analytically), since the Jacobian is stored in column-major order:
0, 0, 0, 0, 0, 0, 0, 0, -0.479426, 0.877582, 0, 0, 0, 0, 0,
0.479426
0.479426
However, I get this
0, 0, 0, 0, 0, 0, 0, 0, -1.1547, 1.1547, 0, 0, 0, 0, 0, 0,
0.479426
0.523599
I use the following Makefile to compile:
CXX = g++
CXXFLAGS = -Wall -g
# Run-time search path for the shared library of Adept
LDFLAGS = -Wl,-rpath -Wl,/usr/local/lib
OBJECTS = summa.o
PROGRAM = summa
# Include-file location
INCLUDES = -I/usr/local/include
# Library location and name, plus the math library
LIBS = -L/usr/local/lib -lm -ladept
$(PROGRAM): $(OBJECTS)
$(CXX) $(CXXFLAGS) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $(PROGRAM)
# Rule to build a normal object file (used to compile all objects in OBJECTS)
%.o: %.cpp
$(CXX) $(CXXFLAGS) $(LDFLAGS) $(INCLUDES) -c $<
I am running
OS: Arch Linux x86_64
Kernel: 6.2.13-arch1-1
g++ (GCC) 13.1.1
I am surely doing something horribly wrong but I am unable to figure out what...
Hi. I think there is a typo in UnaryOperation.h, Line 190. In the definition of Fabs, it should say std::fabs, not std::abs.
The following test program fails to compile (Visual Studio 15.5.2 on Windows 7), and is fixed by the patch above. I don't have other systems available to test on I'm afraid.
Best wishes,
#include "adept.h"
using adept::adouble;

int main()
{
  adouble x = 1.0;
  adouble y = fabs(x);
  return 0;
}
with
Error 'adept::fabs': no matching overloaded function found d:\dev\exoticagit\aad\adept-2\adept-2.0.3\include\adept\unaryoperation.h 190
Error 'adept::internal::UnaryOperation<Type,adept::internal::Fabs,R> adept::fabs(const adept::Expression<Type,A> &)': could not deduce template argument for 'const adept::Expression<Type,A> &' from 'const adept::Real' d:\dev\exoticagit\aad\adept-2\adept-2.0.3\include\adept\unaryoperation.h 190
This patch fixes it for me:
--- a/3rdParty/Adept-2/adept-2.0.4/adept-2.0.4/include/adept/UnaryOperation.h
+++ b/3rdParty/Adept-2/adept-2.0.4/adept-2.0.4/include/adept/UnaryOperation.h
@@ -239,7 +239,7 @@ namespace adept {
 ADEPT_DEF_UNARY_FUNC(Sinh, sinh, std::sinh, "sinh", cosh(val), false)
 ADEPT_DEF_UNARY_FUNC(Cosh, cosh, std::cosh, "cosh", sinh(val), false)
 ADEPT_DEF_UNARY_FUNC(Abs, abs, std::abs, "abs", ((val>0.0)-(val<0.0)), false)
 // Functions y(x) whose derivative depends on the result of the
 // function, i.e. dy(x)/dx = f(y)
--
It would be helpful to have Adept releases tagged. This allows users to reference the URL for the release rather than a github hash.
Adept-2/doc/adept_documentation.tex
line 809
adept.new_recording();
should be changed to
stack.new_recording();
Hello
I am trying to perform automatic differentiation of a code using Adept. However, I am not able to provide a user-defined gradient to Adept. I have a function that I want to differentiate using Adept, but I want to provide my own gradient for a variable, say x_a. I use x_a.set_gradient(xgrad), where xgrad is the user-defined gradient value. It seems like I have to call stack.new_recording() to make Adept use my gradient, but doesn't stack.new_recording() clear all the previous gradients calculated so far using Adept? Can you please let me know if I am doing this wrong?
Thanks and Regards
Sharan
Hello,
I found that the following code:
adept::Stack main_stack;
adept::FixedArray<adept::Real, true, 2> v({2, 3});
main_stack.new_recording();
adept::aReal r = adept::sum(v);
r.set_gradient(1.0);
main_stack.reverse();
std::cerr << v.get_gradient();
throws at v.get_gradient():
terminate called after throwing an instance of 'adept::gradient_out_of_range'
what(): Gradient index out of range: probably aReal objects have been created after a set_gradient(s) call
If I substitute the first line with:
adept::FixedArray<adept::Real, true, 2, 2> v({{2, 3}, {2, 3}});
then it all works well.
Am I doing something wrong?
This happens to me on Linux both with GCC 5.4.0 and Clang 6.0.
Hello,
I have another problem. The following program prints {{0, 0}}, which is not correct:
https://gist.github.com/acriaer/e1fc3a637148a47a49ecefda9c5310a2
Please take a look at division by 1.0 at line 15 and call to Function at line 17.
Neither of those should affect the gradient. However, if I remove either one of them (or both), the program prints the correct {{0.707107, 0.707107}}.
This is the furthest I could simplify the case from my application. If I make it simpler, the problem disappears.
Happens every time on GCC 5.4.0, GCC 7.3.0 and Clang 6.0. Adept is compiled with -O3.
Finding the maximum value of an array with maxval(), e.g.,
adept::Vector v{-2, -1};
std::cout << "v : " << v << "\n";
std::cout << "maxval(v) : " << adept::maxval(v) << "\n";
returns
v : {-2, -1}
maxval(v) : 2.22507e-308
instead of -1. In reduce.h at line 217,
T first_value() { return std::numeric_limits<T>::min(); }
returns a value > 0 for double, whereas lowest() returns -1.79769e+308.
Also consider the output for inf
v : {inf, inf}
minval(v) : 1.79769e+308
Steps to reproduce:
Result:
...
Making all in test
/bin/sh: line 21: cd: test: No such file or directory
make[1]: *** [Makefile:503: all-recursive] Error 1
make[1]: Leaving directory '/home/bradbell/repo/cmpad.git/external/adept.git/build'
make: *** [Makefile:383: all] Error 2
Configure Warning:
configure: WARNING: cannot determine how to obtain linking information from f77
System:
build>uname -a
Linux brad-mobile 6.4.12-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Aug 23 17:46:49 UTC 2023 x86_64 GNU/Linux
I keep getting a segmentation fault when I try to initialize a struct of adoubles. See the struct below:
struct VehParam {
  // Constructor for the 4-DOF parameters
  VehParam()
    : _l(0.5), _c1(1e-4), _c0(0.02), _R(0.08451952624), _I(1e-3), _gamma(0.334), _tau0(0.09), _omega0(161.185), _step(1e-3) {}
  VehParam(adouble l, adouble c1, adouble c0, adouble R, adouble I, adouble gamma, adouble tau0, adouble omega0, double step)
    : _l(l), _c1(c1), _c0(c0), _R(R), _I(I), _gamma(gamma), _tau0(tau0), _omega0(omega0), _step(step) {}
  adouble _l;      // length of car
  adouble _c1;     // Motor resistance that multiplies linearly
  adouble _c0;     // Motor resistance that is constant
  adouble _R;      // Radius of wheel
  adouble _I;      // Moment of inertia of wheel
  adouble _gamma;  // Gear ratio
  adouble _tau0;   // Motor torque at 0 RPM
  adouble _omega0; // RPM at which motor torque goes to 0
  double _step;    // Time step used in the integration
};
I initialize using
VehParam veh1_param;
This does not happen when I use only doubles.
Hi,
I would like to test adept for auto differentiation in an optimization process (using IPOPT).
I have a template class that uses double or adouble.
In the class, I have some std::vector of Eigen::Matrix defined as :
std::vector < Eigen::Matrix < T, 6, 1 >, Eigen::aligned_allocator < Eigen::Matrix < T, 6, 1 > > > a;
I get a segmentation fault when I try to resize the vector.
If I resize this kind of vector in a simple program (outside of any class), I do not have any trouble.
Do you have any clue about it?
Thank you
S. Lengagne
There are missing symbols, some examples:
The issue is that these are only defined in the arm_neon.h header for 64-bit targets (__aarch64__). Is this something that is planned to be fixed? Or should vectorization be disabled for 32-bit ARM targets?
Hi.
I was trying to compile the Adept-2 library. I have CppAD installed, so the process also tries to generate the benchmark tests for CppAD. However, when I ran make check, the benchmark tests failed to compile.
In file included from differentiator.h:39:0,
from autodiff_benchmark.cpp:17:
advection_schemes.h: In instantiation of ‘void lax_wendroff_vector(int, Real, const aReal*, aReal*) [with aReal = CppAD::AD<double>; Real = double]’:
differentiator.h:107:26: required from ‘void Differentiator::func(TestAlgorithm, const std::vector<T>&, std::vector<T>&) [with ActiveRealType = CppAD::AD<double>]’
differentiator.h:520:35: required from here
advection_schemes.h:80:33: error: cannot convert ‘const CppAD::AD<double>’ to ‘double’ in assignment
for (int i=0; i<NX; i++) Q(i) = q_init[i]; // Initialize q
~~~~~^~~~~~~~~
advection_schemes.h: In instantiation of ‘void toon_vector(int, Real, const aReal*, aReal*) [with aReal = CppAD::AD<double>; Real = double]’:
differentiator.h:110:18: required from ‘void Differentiator::func(TestAlgorithm, const std::vector<T>&, std::vector<T>&) [with ActiveRealType = CppAD::AD<double>]’
differentiator.h:520:35: required from here
advection_schemes.h:102:33: error: cannot convert ‘const CppAD::AD<double>’ to ‘double’ in assignment
for (int i=0; i<NX; i++) Q(i) = q_init[i]; // Initialize q
~~~~~^~~~~~~~~
Makefile:456: recipe for target 'autodiff_benchmark-autodiff_benchmark.o' failed
make[2]: *** [autodiff_benchmark-autodiff_benchmark.o] Error 1
make[2]: Leaving directory '/home/sanithovski/ids/Adept-2/benchmark'
Makefile:572: recipe for target 'check-am' failed
make[1]: *** [check-am] Error 2
I am using Ubuntu 18.04.2 with GCC 6.5.0, but I tried GCC 8.3.0 and got the same error. I suspect the related functions changed in newer versions of CppAD.
Thanks.
Hello,
I am having problems when computing different Jacobians (using different sets of independent variables) from the same recording.
Consider the following code to be differentiated:
adept::Stack stack;
adept::aVector3 u = {1.0, 2.0, 3.0};
adept::aReal h = 5;
stack.new_recording();
adept::aVector3 v = adept::Vector3{2,3,4} * u + 9 * h;
Then, if we compute the derivative of v with respect to h, the following code works just fine:
int n = 1; // independent
int m = 3; // dependent
adept::Real jac[3] = {0.0};
//stack.independent(u);
stack.independent(h);
stack.dependent(v);
stack.jacobian(jac);
print_jacobian(m, n, jac); // jac = {9.0, 9.0, 9.0}
However, if we first compute the full Jacobian, then clear the independent variables and add only h, the result is not right:
int n = 3+1; // independent
int m = 3; // dependent
adept::Real jac[12] = {0.0};
stack.independent(u);
stack.independent(h);
stack.dependent(v);
stack.jacobian(jac);
print_jacobian(m, n, jac); // jac = { 2.0, 0.0, 0.0, 9.0,
// 0.0, 3.0, 0.0, 9.0,
// 0.0, 0.0, 4.0, 9.0 } RIGHT
stack.clear_independents();
stack.clear_dependents();
n = 1;
m = 3;
adept::Real jac1[3] = {0.0};
//stack.independent(u);
stack.independent(h);
stack.dependent(v);
stack.jacobian(jac1);
print_jacobian(m, n, jac1); // jac1 = {9.0, 9.0, 25.0} WRONG!
Am I doing something wrong?
Is this a problem with the clear_independents/clear_dependents functionality?
This happens to me on Ubuntu 18.04.2 with GCC 7.4.0
Thanks!
-- For reference
void print_jacobian(int m, int n, adept::Real* jac)
{
for (int i = 0; i < m; ++i)
{
for (int j = 0; j < n; ++j)
{
printf("\t%.3f", jac[m*j+i]);
}
printf("\n");
}
}
Dear All,
I am trying to build the Jacobian matrix for an ODE problem of the form dy/dt = f(t,y). I have programmed the C++ code for the ODE functions, and I want to build the Jacobian matrix df/dy for the ODE solver. I know I can hand-code df/dy myself, but that is not practical. So I want to know whether Adept can help me build this Jacobian matrix. The size of the matrix is around 3000x3000. Thank you,
Is there a way to link Adept with Intel MKL BLAS and LAPACK? I'm currently using ./configure from msys64 on a Windows PC.
I got the following lines at the end of ./configure:
configure: ********************* Libraries used by Adept **********************
configure: BLAS (Basic Linear Algebra Subprograms) will not be used: MATRIX MULTIPLICATION IS UNAVAILABLE
configure: LAPACK (Linear Algebra Package) will not be used: LINEAR ALGEBRA ROUTINES ARE UNAVAILABLE
Thanks in advance,
Laurent
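One possible approach (a sketch, not tested here): MKL ships a single dynamic library, mkl_rt, that exports both the BLAS and LAPACK symbols, so assuming Adept's configure script uses the standard autoconf library checks, you can point it at MKL explicitly. MKLROOT is the install prefix set by Intel's environment scripts, and the library subdirectory depends on your platform:

```shell
# Hedged sketch: paths and library names depend on your MKL installation.
./configure CPPFLAGS="-I${MKLROOT}/include" \
            LDFLAGS="-L${MKLROOT}/lib/intel64" \
            LIBS="-lmkl_rt"
```

If configure still reports BLAS/LAPACK as unavailable, config.log records which link test failed and why.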
Hello,
I was trying to use Adept-2 in a concurrency environment, where I defined a function as follows
void func(){
Stack stack;
...
}
The function func may be called concurrently. The program gave me a segfault, and the error appears to be related to conflicts between multiple active stacks. Currently, I have a workaround using a mutex, and the error disappears.
std::mutex mu;
void func(){
const std::lock_guard<std::mutex> lock(mu);
Stack stack;
}
But this can degrade performance, especially when func is very expensive. What is the optimal way to handle this issue in Adept-2? Thanks!
BTW, thanks for developing Adept-2. The AD and array functionalities are really useful, and the syntax is pretty intuitive and elegant. 👍
test_checkpoint.cpp: In function ‘int main(int, char**)’:
test_checkpoint.cpp:45: error: ‘class Timer’ has no member named ‘print_on_exit’
test_checkpoint.cpp:69: error: ‘class Timer’ has no member named ‘new_activity’
test_checkpoint.cpp:70: error: ‘class Timer’ has no member named ‘new_activity’
test_checkpoint.cpp:76: error: invalid conversion from ‘int’ to ‘const char*’
test_checkpoint.cpp:76: error: initializing argument 1 of ‘void Timer::start(const char*)’
test_checkpoint.cpp:137: error: invalid conversion from ‘int’ to ‘const char*’
test_checkpoint.cpp:137: error: initializing argument 1 of ‘void Timer::start(const char*)’
test_checkpoint.cpp:236: error: ‘class Timer’ has no member named ‘stop’
make[1]: *** [test_checkpoint.o] Error 1
make: *** [check-recursive] Error 1
0001-Initial-move-assign-fixes.txt
The following test case (a call to Array(.) move assignment) fails to compile on Visual Studio 15.5.3 and Clang, with error message:
'1>aad\adept-2\adept-2.0.4\include\adept\array.h(402): error C2660: 'adept::internal::GradientIndex::swap': function does not take 2 arguments'
'1>aad\adept-2\adept-2.0.4\include\adept\array.h(385): note: while compiling class template member function 'adept::Array<1,adept::Real,false> &adept::Array<1,adept::Real,false>::operator =(adept::Array<1,adept::Real,false> &&)''
I've attached a simple fix, but it may not be the best way to resolve this issue.
Regards,
John.
#include <adept.h>
#include <adept/array_shortcuts.h>
#include <iostream>

using adept::adouble;
using adept::Vector;

Vector generateVector(void)
{
  Vector x = { 3.0 };
  return x;
}

void testMoveAssign(void)
{
#ifdef ADEPT_MOVE_SEMANTICS
  std::cout << "\n Move on" << std::endl;
#else
  std::cout << "\n Move off" << std::endl;
#endif
  Vector v;
  v = generateVector(); // This fails to compile.
}
This leads to the following errors in Packet.h on Windows:
adept/Packet.h(400,14): error C3861: 'mm_hprod_pd': identifier not found
adept/Packet.h(404,14): error C3861: 'mm_hmin_pd': identifier not found
adept/Packet.h(408,14): error C3861: 'mm_hmax_pd': identifier not found
Packet.h has test code to define _mm_undefined_ps if it's missing on old GCC. Old Clangs, though, also lack this intrinsic, causing build failures: see the example Travis failure. I was able to work around it in the conda-forge package we're adding with a dumb patch, which I'm only applying to our OSX builds. You shouldn't put this in Adept, because it's incorrect (if _mm_undefined_ps is defined as a function rather than a macro, as it actually is, it'll override that definition), but there should probably be a version check for Clang too.