
george's Introduction

George

Fast and flexible Gaussian Process regression in Python.

MIT license. Build, test-coverage, and documentation status badges are available on the GitHub repository (dfm/george).

Read the documentation at: george.readthedocs.io.

george's People

Contributors

arkottke, dependabot[bot], dfm, jbernhard, jborrow, kamuish, mirca, mykytyn, ruthangus, shoyer, simonrw, syrte


george's Issues

Python installation of development version

I've been trying to get to the bottom of this issue for a bit and haven't had any luck. Installing this version with "python setup.py install" gives me the following errors, mostly involving a reference to an "assemble_Matrix" method.

Everything goes well until...

In file included from george/hodlr.cpp:476:
include/solver.h:59:9: warning: delete called on non-final 'george::HODLRSolverMatrix' that has virtual functions but non-virtual
destructor [-Wdelete-non-virtual-dtor]
delete matrix_;
^
include/solver.h:92:18: error: no matching member function for call to 'assemble_Matrix'
solver_->assemble_Matrix(diag, tol_, 's', seed);
~~~~~~~~~^~~~~~~~~~~~~~~
./hodlr/header/HODLR_Tree.hpp:188:7: note: candidate function not viable: requires 3 arguments, but 4 were provided
void assemble_Matrix(VectorXd& diagonal, double lowRankTolerance, char s) {
^
./hodlr/header/HODLR_Tree.hpp:73:7: note: candidate function not viable: requires single argument 'node', but 4 arguments were provided
void assemble_Matrix(HODLR_Node*& node) {
^
2 warnings and 1 error generated.
error: command 'gcc' failed with exit status 1

Interest in Leave One Out Cross Validation score and jackknife matrix?

I've been working with leave-one-out cross validation (LOOCV) in George. For Gaussian processes it is possible to find the leave-one-out estimator analytically, without redoing the covariance matrix inversion. It has been fairly easy to write up, so I was considering adding it to george directly. However, I do not know whether it would be considered redundant with the existing lnlikelihood functions. Should I implement it?

Additionally, I've also been working on a similar calculation that can produce a jackknife covariance matrix from the leave-one-out estimates. This is slightly more complicated, but I'm interested to see whether it provides better error estimates than the covariance matrix produced by the Gaussian process. Is there interest in this feature, too?
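
For reference, the analytic LOOCV identities (Rasmussen & Williams, ch. 5) can be sketched in a few lines of numpy. `loo_cv` is an illustrative name, not george API; the covariance matrix K is assumed to already include the observational noise on its diagonal:

```python
import numpy as np

def loo_cv(K, y):
    """Analytic leave-one-out predictions for GP regression.

    Uses the identities mu_i = y_i - [K^-1 y]_i / [K^-1]_ii and
    var_i = 1 / [K^-1]_ii, so no refitting or repeated inversion
    of submatrices is needed.
    """
    Kinv = np.linalg.inv(K)
    alpha = Kinv @ y            # K^-1 y
    diag = np.diag(Kinv)        # [K^-1]_ii
    return y - alpha / diag, 1.0 / diag
```

The results agree with the brute-force approach of deleting one point at a time and predicting it from the rest, at a fraction of the cost.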

Installation error - Not able to import george library

I am trying to follow the step by step instructions from http://dan.iel.fm/george/current/user/quickstart/#installation
and am encountering errors when importing the george library. The command line output is:

import george
Traceback (most recent call last):
File "", line 1, in
File "george/__init__.py", line 13, in
from . import kernels
File "george/kernels.py", line 18, in
from ._kernels import CythonKernel
ImportError: No module named _kernels

Any advice would be much appreciated!

Using real world datasets

Hi there,
I am currently trying to understand how to do GP inference on a real-world dataset. My dataset has 3 columns, [Latitude, Longitude, Measurement], and stores 59 rows for 59 locations labelled by GPS coordinates. To perform GP regression and infer the hyperparameters, should I feed in the raw data directly, or should I preprocess it somehow?

Please advise.
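
A common first step (a sketch, not the only valid choice) is to standardize each input column and subtract the target mean, since the GP models a zero-mean process and per-dimension length scales are easier to infer when the inputs are on comparable scales. The random array below is a stand-in for the described 59-row table:

```python
import numpy as np

# Stand-in for the 59-row [latitude, longitude, measurement] table.
rng = np.random.default_rng(42)
data = rng.normal(size=(59, 3))
X, y = data[:, :2], data[:, 2]

# Put both coordinates on comparable scales and center the targets.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
y_centered = y - y.mean()
```

One caveat: for widely spaced GPS coordinates, a Euclidean metric on raw degrees distorts physical distances, so projecting to a local planar frame (or using great-circle distances) before standardizing may be preferable.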

Using the dim keyword with multi-dimensional kernels.

When applying george kernels to multi-dimensional inputs, it is possible to apply a kernel only to specified dimensions using the dim keyword. It is not clear from the documentation that dim should start at 0.

Even if I give it ndim = 3, where dim should be set to 0, 1, or 2, the code does not throw an error if I set dim to 1, 2, and 3 (in my respective kernels).
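
A zero-based range check would catch this silently-accepted value. `check_dim` below is a hypothetical helper sketching the validation the library could perform, not an existing george function:

```python
def check_dim(dim, ndim):
    """dim selects an input column, so valid values are 0 .. ndim - 1."""
    if not 0 <= dim < ndim:
        raise ValueError(f"dim must be in [0, {ndim - 1}], got {dim}")
    return dim
```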

Expose logdet and solve methods in the HODLR solver?

First of all, it would be nice to expose these directly for advanced users. I did notice that they exist in the HODLR Python binding, but George will be faster, since its covariance matrix computation is not written in Python.

Secondly, given that these operations will likely be the bottleneck for any log-likelihood calculation, exposing these methods to Python (even if only internally) would allow the lnlikelihood and grad_lnlikelihood methods to be consolidated for both solvers (resolving #11).
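
The connection is direct: the marginal log-likelihood needs exactly one solve and one log-determinant, which is why exposing those two methods would let both solvers share the same Python-level code. A dense-Cholesky sketch (the HODLR solver would supply the same two quantities by a different factorization):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def lnlikelihood(K, y):
    """ln N(y | 0, K) = -0.5 * (y^T K^-1 y + ln|K| + n ln 2pi)."""
    factor = cho_factor(K)
    alpha = cho_solve(factor, y)                      # the "solve"
    logdet = 2.0 * np.sum(np.log(np.diag(factor[0])))  # the "logdet"
    return -0.5 * (y @ alpha + logdet + len(y) * np.log(2 * np.pi))
```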

Error Encountered with ExpSine2Kernel

When I use the ExpSine2Kernel, I get the following linear algebra error:


Traceback (most recent call last):
File "kernelissue.py", line 9, in
gp.compute(t)
File "/Library/Python/2.7/site-packages/george-0.2.0-py2.7-macosx-10.9-intel.egg/george/gp.py", line 178, in compute
self.solver.compute(self._x, self._yerr, **kwargs)
File "/Library/Python/2.7/site-packages/george-0.2.0-py2.7-macosx-10.9-intel.egg/george/basic.py", line 23, in compute
self._factor = (cholesky(K, overwrite_a=True, lower=False), False)
File "/Library/Python/2.7/site-packages/scipy/linalg/decomp_cholesky.py", line 81, in cholesky
check_finite=check_finite)
File "/Library/Python/2.7/site-packages/scipy/linalg/decomp_cholesky.py", line 30, in _cholesky
raise LinAlgError("%d-th leading minor not positive definite" % info)

numpy.linalg.linalg.LinAlgError: 28-th leading minor not positive definite

Here's a sample program which gives this error:


from george import kernels
import george
import numpy as np

t = np.linspace(39.0, 65.0, 71)

gp = george.GP(kernels.ExpSine2Kernel(1, 1))

gp.compute(t)

I tried using other hyperparameters for the kernel and got the same error.

_kernels.so : no suitable image found

I'm installing with pip in a virtualenv, on Python 2.7.5 with numpy 1.9.0 (though I don't know why it sees the deprecated numpy 1.7 interface). OS and compiler: OSX 10.9.2, Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn), x86_64-apple-darwin13.1.0.
When I import the module, the following appears:

import george
Traceback (most recent call last):
File "", line 1, in
File "/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/george/__init__.py", line 13, in
from . import kernels
File "/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/george/kernels.py", line 18, in
from ._kernels import CythonKernel
ImportError: dlopen(/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/george/_kernels.so, 2): no suitable image found. Did find:
/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/george/_kernels.so: mach-o, but wrong architecture

Full installation logs follow:

$ pip install george
Downloading/unpacking george
Downloading george-0.2.1.tar.gz (127kB): 127kB downloaded
Running setup.py (path:/Users/ocramz/.virtualenvs/venv_21oct2014/build/george/setup.py) egg_info for package george

Installing collected packages: george
Running setup.py install for george

Found Eigen version 3.2.2 in: /usr/local/include/eigen3
Found HODLR headers in: ./hodlr/header
building 'george._kernels' extension
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -Iinclude -I/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include -I/usr/local/include/eigen3 -I./hodlr/header -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c george/_kernels.cpp -o build/temp.macosx-10.9-intel-2.7/george/_kernels.o -Wno-unused-function -Wno-uninitialized
clang: warning: argument unused during compilation: '-mno-fused-madd'
In file included from george/_kernels.cpp:352:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1804:
/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: "Using deprecated NumPy API, disable it by "          "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings]
#warning "Using deprecated NumPy API, disable it by " \
 ^
In file included from george/_kernels.cpp:354:
include/metrics.h:25:49: warning: implicit conversion loses integer precision: 'size_type' (aka 'unsigned long') to 'unsigned int' [-Wshorten-64-to-32]
    virtual unsigned int size () const { return vector_.size(); };
                                         ~~~~~~ ^~~~~~~~~~~~~~
george/_kernels.cpp:1769:16: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_n = (__pyx_v_x->dimensions[0]);
            ~  ^~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:1770:19: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_ndim = (__pyx_v_x->dimensions[1]);
               ~  ^~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:1803:20: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_delta = (__pyx_v_x->strides[0]);
                ~  ^~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2109:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_n1 = (__pyx_v_x1->dimensions[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2110:19: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_ndim = (__pyx_v_x1->dimensions[1]);
               ~  ^~~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2111:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_n2 = (__pyx_v_x2->dimensions[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2150:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_d1 = (__pyx_v_x1->strides[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2151:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_d2 = (__pyx_v_x2->strides[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2378:16: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_n = (__pyx_v_x->dimensions[0]);
            ~  ^~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2379:19: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_ndim = (__pyx_v_x->dimensions[1]);
               ~  ^~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2480:20: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_delta = (__pyx_v_x->strides[0]);
                ~  ^~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2489:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_dx = (__pyx_v_g->strides[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2490:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_dy = (__pyx_v_g->strides[1]);
             ~  ^~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2748:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_n1 = (__pyx_v_x1->dimensions[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2749:19: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_ndim = (__pyx_v_x1->dimensions[1]);
               ~  ^~~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2750:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_n2 = (__pyx_v_x2->dimensions[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2857:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_d1 = (__pyx_v_x1->strides[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2858:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_d2 = (__pyx_v_x2->strides[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2867:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_dx = (__pyx_v_g->strides[0]);
             ~  ^~~~~~~~~~~~~~~~~~~~~
george/_kernels.cpp:2868:17: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_dy = (__pyx_v_g->strides[1]);
             ~  ^~~~~~~~~~~~~~~~~~~~~
In file included from george/_kernels.cpp:355:
include/kernels.h:251:18: warning: implicit conversion loses integer precision: 'const long' to 'unsigned int' [-Wshorten-64-to-32]
        : Kernel(ndim), metric_(metric) {};
          ~~~~~~ ^~~~
include/kernels.h:294:46: note: in instantiation of member function 'george::kernels::RadialKernel<george::metrics::OneDMetric>::RadialKernel' requested here
    ExpKernel (const long ndim, M* metric) : RadialKernel<M>(ndim, metric) {};
                                             ^
george/_kernels.cpp:3721:28: note: in instantiation of member function 'george::kernels::ExpKernel<george::metrics::OneDMetric>::ExpKernel' requested here
      __pyx_v_kernel = new george::kernels::ExpKernel<george::metrics::OneDMetric>(__pyx_v_ndim, new george::metrics::OneDMetric(__pyx_v_ndim, __pyx_t_7));
                           ^
In file included from george/_kernels.cpp:355:
include/kernels.h:251:18: warning: implicit conversion loses integer precision: 'const long' to 'unsigned int' [-Wshorten-64-to-32]
        : Kernel(ndim), metric_(metric) {};
          ~~~~~~ ^~~~
include/kernels.h:294:46: note: in instantiation of member function 'george::kernels::RadialKernel<george::metrics::IsotropicMetric>::RadialKernel' requested here
    ExpKernel (const long ndim, M* metric) : RadialKernel<M>(ndim, metric) {};
                                             ^
george/_kernels.cpp:3745:28: note: in instantiation of member function 'george::kernels::ExpKernel<george::metrics::IsotropicMetric>::ExpKernel' requested here
      __pyx_v_kernel = new george::kernels::ExpKernel<george::metrics::IsotropicMetric>(__pyx_v_ndim, new george::metrics::IsotropicMetric(__pyx_v_ndim));
                           ^
In file included from george/_kernels.cpp:355:
include/kernels.h:251:18: warning: implicit conversion loses integer precision: 'const long' to 'unsigned int' [-Wshorten-64-to-32]
        : Kernel(ndim), metric_(metric) {};
          ~~~~~~ ^~~~
include/kernels.h:294:46: note: in instantiation of member function 'george::kernels::RadialKernel<george::metrics::AxisAlignedMetric>::RadialKernel' requested here
    ExpKernel (const long ndim, M* metric) : RadialKernel<M>(ndim, metric) {};
                                             ^
george/_kernels.cpp:3769:28: note: in instantiation of member function 'george::kernels::ExpKernel<george::metrics::AxisAlignedMetric>::ExpKernel' requested here
      __pyx_v_kernel = new george::kernels::ExpKernel<george::metrics::AxisAlignedMetric>(__pyx_v_ndim, new george::metrics::AxisAlignedMetric(__pyx_v_ndim));
                           ^
25 warnings generated.
In file included from george/_kernels.cpp:352:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1804:
/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: "Using deprecated NumPy API, disable it by "          "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings]
#warning "Using deprecated NumPy API, disable it by " \
 ^
1 warning generated.
c++ -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. build/temp.macosx-10.9-intel-2.7/george/_kernels.o -lm -o build/lib.macosx-10.9-intel-2.7/george/_kernels.so
ld: warning: ld: warning: ignoring file /opt/local/lib/gcc48/libgcc_ext.10.5.dylib, missing required architecture i386 in file /opt/local/lib/gcc48/libgcc_ext.10.5.dylib (1 slices)ignoring file /opt/local/lib/gcc48/libstdc++.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/gcc48/libstdc++.dylib

ld: warning: ignoring file /opt/local/lib/gcc48/gcc/x86_64-apple-darwin13/4.8.3/libgcc.a, file was built for archive which is not the architecture being linked (i386): /opt/local/lib/gcc48/gcc/x86_64-apple-darwin13/4.8.3/libgcc.a
Found Eigen version 3.2.2 in: /usr/local/include/eigen3
Found HODLR headers in: ./hodlr/header
building 'george.hodlr' extension
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -Iinclude -I/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include -I/usr/local/include/eigen3 -I./hodlr/header -I/usr/local/include/eigen3 -I./hodlr/header -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c george/hodlr.cpp -o build/temp.macosx-10.9-intel-2.7/george/hodlr.o -Wno-unused-function -Wno-uninitialized
clang: warning: argument unused during compilation: '-mno-fused-madd'
In file included from george/hodlr.cpp:352:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1804:
/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: "Using deprecated NumPy API, disable it by "          "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings]
#warning "Using deprecated NumPy API, disable it by " \
 ^
In file included from george/hodlr.cpp:354:
include/metrics.h:25:49: warning: implicit conversion loses integer precision: 'size_type' (aka 'unsigned long') to 'unsigned int' [-Wshorten-64-to-32]
    virtual unsigned int size () const { return vector_.size(); };
                                         ~~~~~~ ^~~~~~~~~~~~~~
In file included from george/hodlr.cpp:356:
In file included from include/solver.h:7:
./hodlr/header/HODLR_Matrix.hpp:74:23: warning: implicit conversion loses integer precision: 'Index' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
                unsigned n      =   v.size();
                         ~          ^~~~~~~~
george/hodlr.cpp:2008:16: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_n = (__pyx_v_x->dimensions[0]);
            ~  ^~~~~~~~~~~~~~~~~~~~~~~~
george/hodlr.cpp:2017:19: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_ndim = (__pyx_v_x->dimensions[1]);
               ~  ^~~~~~~~~~~~~~~~~~~~~~~~
george/hodlr.cpp:2506:16: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_n = (__pyx_v_y->dimensions[0]);
            ~  ^~~~~~~~~~~~~~~~~~~~~~~~
george/hodlr.cpp:2507:19: warning: implicit conversion loses integer precision: 'npy_intp' (aka 'long') to 'unsigned int' [-Wshorten-64-to-32]
  __pyx_v_nrhs = (__pyx_v_y->dimensions[1]);
               ~  ^~~~~~~~~~~~~~~~~~~~~~~~
In file included from george/hodlr.cpp:356:
In file included from include/solver.h:6:
In file included from ./hodlr/header/HODLR_Tree.hpp:24:
./hodlr/header/HODLR_Node.hpp:126:13: warning: implicit conversion loses integer precision: 'Index' (aka 'long') to 'int' [-Wshorten-64-to-32]
                        int m0  =       V[0].rows();
                            ~~          ^~~~~~~~~~~
./hodlr/header/HODLR_Tree.hpp:122:10: note: in instantiation of member function 'HODLR_Node<george::HODLRSolverMatrix>::compute_K' requested here
                        node->compute_K(s);
                              ^
./hodlr/header/HODLR_Tree.hpp:214:3: note: in instantiation of member function 'HODLR_Tree<george::HODLRSolverMatrix>::compute_Factor' requested here
                compute_Factor(root);
                ^
include/solver.h:95:18: note: in instantiation of member function 'HODLR_Tree<george::HODLRSolverMatrix>::compute_Factor' requested here
        solver_->compute_Factor();
                 ^
In file included from george/hodlr.cpp:356:
In file included from include/solver.h:6:
In file included from ./hodlr/header/HODLR_Tree.hpp:24:
./hodlr/header/HODLR_Node.hpp:127:13: warning: implicit conversion loses integer precision: 'Index' (aka 'long') to 'int' [-Wshorten-64-to-32]
                        int m1  =       V[1].rows();
                            ~~          ^~~~~~~~~~~
In file included from george/hodlr.cpp:356:
In file included from include/solver.h:5:
In file included from /usr/local/include/eigen3/Eigen/Dense:2:
In file included from /usr/local/include/eigen3/Eigen/LU:22:
/usr/local/include/eigen3/Eigen/src/LU/FullPivLU.h:494:19: warning: implicit conversion loses integer precision: 'const Index' (aka 'const long') to 'Index' (aka 'int') [-Wshorten-64-to-32]
  m_p.setIdentity(rows);
  ~~~             ^~~~
./hodlr/header/HODLR_Node.hpp:145:12: note: in instantiation of member function 'Eigen::FullPivLU<Eigen::Matrix<double, -1, -1, 0, -1, -1> >::compute' requested here
                Kinverse.compute(K);
                         ^
./hodlr/header/HODLR_Tree.hpp:123:10: note: in instantiation of member function 'HODLR_Node<george::HODLRSolverMatrix>::compute_Inverse' requested here
                        node->compute_Inverse();
                              ^
./hodlr/header/HODLR_Tree.hpp:214:3: note: in instantiation of member function 'HODLR_Tree<george::HODLRSolverMatrix>::compute_Factor' requested here
                compute_Factor(root);
                ^
include/solver.h:95:18: note: in instantiation of member function 'HODLR_Tree<george::HODLRSolverMatrix>::compute_Factor' requested here
        solver_->compute_Factor();
                 ^
In file included from george/hodlr.cpp:356:
In file included from include/solver.h:5:
In file included from /usr/local/include/eigen3/Eigen/Dense:2:
In file included from /usr/local/include/eigen3/Eigen/LU:22:
/usr/local/include/eigen3/Eigen/src/LU/FullPivLU.h:496:38: warning: implicit conversion loses integer precision: 'Index' (aka 'long') to 'Index' (aka 'int') [-Wshorten-64-to-32]
    m_p.applyTranspositionOnTheRight(k, m_rowsTranspositions.coeff(k));
    ~~~                              ^
/usr/local/include/eigen3/Eigen/src/LU/FullPivLU.h:496:41: warning: implicit conversion loses integer precision: 'const Scalar' (aka 'const long') to 'Index' (aka 'int') [-Wshorten-64-to-32]
    m_p.applyTranspositionOnTheRight(k, m_rowsTranspositions.coeff(k));
    ~~~                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/eigen3/Eigen/src/LU/FullPivLU.h:498:19: warning: implicit conversion loses integer precision: 'const Index' (aka 'const long') to 'Index' (aka 'int') [-Wshorten-64-to-32]
  m_q.setIdentity(cols);
  ~~~             ^~~~
/usr/local/include/eigen3/Eigen/src/LU/FullPivLU.h:500:38: warning: implicit conversion loses integer precision: 'Index' (aka 'long') to 'Index' (aka 'int') [-Wshorten-64-to-32]
    m_q.applyTranspositionOnTheRight(k, m_colsTranspositions.coeff(k));
    ~~~                              ^
/usr/local/include/eigen3/Eigen/src/LU/FullPivLU.h:500:41: warning: implicit conversion loses integer precision: 'const Scalar' (aka 'const long') to 'Index' (aka 'int') [-Wshorten-64-to-32]
    m_q.applyTranspositionOnTheRight(k, m_colsTranspositions.coeff(k));
    ~~~                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from george/hodlr.cpp:356:
In file included from include/solver.h:6:
In file included from ./hodlr/header/HODLR_Tree.hpp:24:
./hodlr/header/HODLR_Node.hpp:152:11: warning: implicit conversion loses integer precision: 'Index' (aka 'long') to 'int' [-Wshorten-64-to-32]
                int n   =       matrix.cols();
                    ~           ^~~~~~~~~~~~~
./hodlr/header/HODLR_Tree.hpp:128:11: note: in instantiation of member function 'HODLR_Node<george::HODLRSolverMatrix>::apply_Inverse' requested here
                                node->apply_Inverse(mynode->Uinverse[number], mStart);
                                      ^
./hodlr/header/HODLR_Tree.hpp:214:3: note: in instantiation of member function 'HODLR_Tree<george::HODLRSolverMatrix>::compute_Factor' requested here
                compute_Factor(root);
                ^
include/solver.h:95:18: note: in instantiation of member function 'HODLR_Tree<george::HODLRSolverMatrix>::compute_Factor' requested here
        solver_->compute_Factor();
                 ^
In file included from george/hodlr.cpp:355:
include/kernels.h:251:18: warning: implicit conversion loses integer precision: 'const long' to 'unsigned int' [-Wshorten-64-to-32]
        : Kernel(ndim), metric_(metric) {};
          ~~~~~~ ^~~~
include/kernels.h:294:46: note: in instantiation of member function 'george::kernels::RadialKernel<george::metrics::OneDMetric>::RadialKernel' requested here
    ExpKernel (const long ndim, M* metric) : RadialKernel<M>(ndim, metric) {};
                                             ^
george/hodlr.cpp:3497:28: note: in instantiation of member function 'george::kernels::ExpKernel<george::metrics::OneDMetric>::ExpKernel' requested here
      __pyx_v_kernel = new george::kernels::ExpKernel<george::metrics::OneDMetric>(__pyx_v_ndim, new george::metrics::OneDMetric(__pyx_v_ndim, __pyx_t_7));
                           ^
In file included from george/hodlr.cpp:355:
include/kernels.h:251:18: warning: implicit conversion loses integer precision: 'const long' to 'unsigned int' [-Wshorten-64-to-32]
        : Kernel(ndim), metric_(metric) {};
          ~~~~~~ ^~~~
include/kernels.h:294:46: note: in instantiation of member function 'george::kernels::RadialKernel<george::metrics::IsotropicMetric>::RadialKernel' requested here
    ExpKernel (const long ndim, M* metric) : RadialKernel<M>(ndim, metric) {};
                                             ^
george/hodlr.cpp:3521:28: note: in instantiation of member function 'george::kernels::ExpKernel<george::metrics::IsotropicMetric>::ExpKernel' requested here
      __pyx_v_kernel = new george::kernels::ExpKernel<george::metrics::IsotropicMetric>(__pyx_v_ndim, new george::metrics::IsotropicMetric(__pyx_v_ndim));
                           ^
In file included from george/hodlr.cpp:355:
include/kernels.h:251:18: warning: implicit conversion loses integer precision: 'const long' to 'unsigned int' [-Wshorten-64-to-32]
        : Kernel(ndim), metric_(metric) {};
          ~~~~~~ ^~~~
include/kernels.h:294:46: note: in instantiation of member function 'george::kernels::RadialKernel<george::metrics::AxisAlignedMetric>::RadialKernel' requested here
    ExpKernel (const long ndim, M* metric) : RadialKernel<M>(ndim, metric) {};
                                             ^
george/hodlr.cpp:3545:28: note: in instantiation of member function 'george::kernels::ExpKernel<george::metrics::AxisAlignedMetric>::ExpKernel' requested here
      __pyx_v_kernel = new george::kernels::ExpKernel<george::metrics::AxisAlignedMetric>(__pyx_v_ndim, new george::metrics::AxisAlignedMetric(__pyx_v_ndim));
                           ^
19 warnings generated.
In file included from george/hodlr.cpp:352:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17:
In file included from /Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1804:
/Users/ocramz/.virtualenvs/venv_21oct2014/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: "Using deprecated NumPy API, disable it by "          "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings]
#warning "Using deprecated NumPy API, disable it by " \
 ^
1 warning generated.
c++ -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. build/temp.macosx-10.9-intel-2.7/george/hodlr.o -lm -o build/lib.macosx-10.9-intel-2.7/george/hodlr.so
ld: warning: ignoring file /opt/local/lib/gcc48/libstdc++.dylib, file was built for x86_64 which is not the architecture being linked (i386): /opt/local/lib/gcc48/libstdc++.dylib
ld: warning: ignoring file /opt/local/lib/gcc48/libgcc_ext.10.5.dylib, missing required architecture i386 in file /opt/local/lib/gcc48/libgcc_ext.10.5.dylib (1 slices)
ld: warning: ignoring file /opt/local/lib/gcc48/gcc/x86_64-apple-darwin13/4.8.3/libgcc.a, file was built for archive which is not the architecture being linked (i386): /opt/local/lib/gcc48/gcc/x86_64-apple-darwin13/4.8.3/libgcc.a

Would this be the correct way to have multiple independent variables with independent kernels?

Say I have a time series in which two variables, f and x, vary with time t. I want to use Gaussian processes to isolate noise varying with time, but also noise correlated with variations in x. So essentially I want my GP to have independent variables x and t, each with an associated ExpSquaredKernel.

I don't know how to format latex-like maths on github but this is what I want my covariance matrix to look like:

k(X_i, X_j) = k(x_i, t_i, x_j, t_j) = k_1(x_i, x_j) + k_2(t_i, t_j)

Where k_1(x_i, x_j) = A_x * exp(-(x_j - x_i)**2/(2*var_x))
and k_2(t_i, t_j) = A_t * exp(-(t_j - t_i)**2/(2*var_t))

So that my hyperparameters are (A_x, var_x, A_t, var_t).

The problem is how to do this in george. Here's what I'm currently trying (since I need ndim=2, I'm trying to lock out the "other" variable from each ExpSquaredKernel):

    k_spatial = A_x * kernels.ExpSquaredKernel(metric=[var_x, np.inf], ndim=2)
    k_temporal = A_t * kernels.ExpSquaredKernel(metric=[np.inf, var_t], ndim=2)
    kernel = k_spatial + k_temporal

Then when I set up my regression, I'll need to make sure that the np.infs don't change. Is this the recommended way to do this? Or is there some other functionality to do it in george?
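For what it's worth, the covariance in the equations above is easy to build directly with numpy as a cross-check (this is just the math written out, independent of george's kernel API):

```python
import numpy as np

def axis_sum_kernel(X, A_x, var_x, A_t, var_t):
    """k(X_i, X_j) = A_x*exp(-(x_i-x_j)**2/(2*var_x)) + A_t*exp(-(t_i-t_j)**2/(2*var_t)).

    X has shape (n, 2) with columns (x, t)."""
    dx = X[:, 0, None] - X[None, :, 0]
    dt = X[:, 1, None] - X[None, :, 1]
    return (A_x * np.exp(-0.5 * dx**2 / var_x)
            + A_t * np.exp(-0.5 * dt**2 / var_t))

# With an infinite metric entry the corresponding factor collapses to exp(0) = 1,
# which is why metric=[var_x, np.inf] effectively ignores the second axis.
```

Comparing this matrix against the george kernel's output on a few points is a quick way to confirm the np.inf trick behaves as intended.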

sample_conditional/predict failing for 1d cases

The following code

import numpy as np
import george

x = np.linspace(0, 10, 40)
y = np.sin(x) + 0.1*np.random.normal(size=40)
e = 0.1*np.ones_like(y)
kernel = george.kernels.ExpSquaredKernel(3)
gp     = george.GaussianProcess(kernel)
gp.compute(x, e)
t = np.linspace(x.min(), x.max(), 300)
samples = gp.sample_conditional(y - y.mean(), t, size=100)

raises a RuntimeError: Failed to compute model at line 133 of predict

Clarification on gradient calculation process

Hi, first off, thanks a lot for putting this library online! I'm trying to learn GPs, and this library's code has helped me a lot in understanding how to work with GPs.

I came across this piece of code in master:
https://github.com/dfm/george/blob/master/george/kernels.py#L137

In particular, I'm considering why there is a multiplication on this line:

return g * self.vector_gradient[None, None, :]

I found what looks like the corresponding line in 1.0-dev, which is here:
https://github.com/dfm/george/blob/1.0-dev/templates/kernels.py#L195

return g[:, :, self.unfrozen]

The calculation of g itself doesn't seem to have changed much between the two versions, so I would like to know if the version on master is correct behaviour, because when I run this code on master:

kern = ConstantKernel(3.0)
grad = kern.gradient([ [1], [5], [7] ]) # arbitrarily chosen
print grad

I receive the matrix:

[ [[ 3.] [ 3.] [ 3.]]
  [[ 3.] [ 3.] [ 3.]]
  [[ 3.] [ 3.] [ 3.]] ]

Shouldn't this matrix's entries all be 1.0, regardless of the actual value of the constant? When reading the docs, I noticed that the gradient is supposed to be taken for kern.vector, which is the natural logarithm in this case - is this related to the multiplication operation, or the gradient matrix produced above?

Hope this was clear enough, feel free to clarify if you're unclear about anything!
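For what it's worth, the multiplication looks like the chain rule for log-parameters: if the stored parameter is theta = ln(c), then dk/dtheta = c * dk/dc, which for a constant kernel k = c gives c (3.0 here), not 1.0. A quick finite-difference sketch of that reading (my interpretation, not an authoritative statement about the codebase):

```python
import math

def grad_wrt_log(f, theta, eps=1e-6):
    """Central finite difference of f(exp(theta)) with respect to theta = ln(c)."""
    return (f(math.exp(theta + eps)) - f(math.exp(theta - eps))) / (2 * eps)

# a "constant kernel" entry is k = c for every pair of inputs, so
# dk/d(ln c) = c * dk/dc = c: the gradient entries come out as 3.0, not 1.0
g = grad_wrt_log(lambda c: c, math.log(3.0))
```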

Entropy calculation in / from George

Hello all. Say that I have a GP object: how do I go about computing its entropy H? Do I have to manually integrate over dy, something like -np.exp(gp.lnlikelihood(y)) * gp.lnlikelihood(y)? Or am I barking up the wrong tree?
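In case it helps anyone landing here: for a Gaussian evaluated at N points with covariance K, the differential entropy has a closed form, H = 0.5 * ln det(2*pi*e*K), so no integral over y is needed. A numpy sketch (gaussian_entropy is a hypothetical helper, not a george method; K could come from the GP's covariance matrix plus any noise on the diagonal):

```python
import numpy as np

def gaussian_entropy(K):
    """Differential entropy (in nats) of N(0, K): H = 0.5 * ln det(2*pi*e*K)."""
    n = K.shape[0]
    sign, logdet = np.linalg.slogdet(K)  # stable log-determinant
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)
```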

How to apply break points to some of the kernels but not others?

Hello,

Say I have an ndim=3 problem, where kernel k1 is applied on axis=[0,1] and k2 on axis 3. How would you put bounds on the independent parameters in axes [0,1] so that the kernel only applies when the value of axis 3 is bound_1 < x_3 < bound_2? I thought the bounds argument (I'm using the dev version) would be used for it, but maybe I misread the docs; it seems more like the bounds apply to the hyperparameters (at least I keep getting some sort of dimension mismatch errors when I try to use it). Is there any way to implement this?

Update docs

The docs are in a sad state. Here are some things that need to be done:

  • update documentation for new solver framework
  • document solver interface (#13)
  • allow and document custom kernels

... and probably some other things!

Stable version install

I have tried installing the stable version using "pip install george"; it completes the install and says "Successfully installed george-0.2.1", but I cannot find where the package was installed to. I am on a Mac and was in my home directory when I ran the install line.

Covariance bug? (not sure if bug)

Hi Dan,

Perhaps this is a feature not a bug - but I have been playing around with george for K2 and one thing we want to do is to clip outliers by fitting a GP to the raw obs and looking for where points lie > n sigma from the predictive mean.

I notice that george has some funny behaviour when you try and evaluate the posterior covariance at points exactly in the training input array - I have attached a little script where george freaks out and gets zero covariance where GPy doesn't. If you perturb the prediction inputs even slightly from the training inputs it has a nice smooth curve with sensible covariance as expected.

I'm not sure if this is a bug or a feature depending on the tiny default yerr argument?

Cheers,

Ben

import numpy as np 
import matplotlib.pyplot as plt 
import george
from george import kernels
import GPy

amp = 0.1
signal = 2.
nsamp = 100

x = np.linspace(0,1,100)

y = signal*x**2.-signal*x**3. + np.random.randn(100)*amp

k1 = kernels.WhiteKernel(0.01)
k2 = 1.0*kernels.ExpSquaredKernel(1.0)
kernel = k1+k2

gp = george.GP(kernel)

gp.compute(x,yerr=0.1)

xpred = x+ 0.00001*np.random.randn(x.shape[0])
xpred[0], xpred[-1] = 0., 1.  # keep the endpoints inside the training range
ypred,ycov = gp.predict(y,xpred)
yerr = np.sqrt(np.diag(ycov))
samples = gp.sample_conditional(y,xpred,size=nsamp)

kernel2 = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
m = GPy.models.GPRegression(np.array([x]).T,np.array([y]).T,kernel2)

mmean, mvar = m.predict(np.array([xpred]).T)
mmean,mvar  = mmean.flatten(),mvar.flatten()


plt.figure(0)
plt.clf()
plt.plot(x,y,'.k')
plt.plot(xpred,ypred,'-r')
plt.plot(xpred,mmean,'-b')
plt.fill_between(xpred,ypred+yerr,ypred-yerr,color='r',alpha=0.2)
plt.fill_between(xpred,mmean+mvar,mmean-mvar,color='b',alpha=0.2)

for j in range(nsamp):
    plt.plot(xpred,samples[j,:],'-r',alpha=0.01)
plt.xlabel('x')
plt.ylabel('y')
plt.legend(['Data','Model'],loc='best')
plt.draw()
plt.show()

Gradients with HODLR solver?

It would be convenient (e.g., for maximum likelihood estimation) to be able to calculate gradients of the log likelihood with the HODLR solver, as you have already implemented with the NumPy/SciPy solver. Is this on your roadmap? My C isn't great, but I might take a crack at this...

non-stationary kernel proposal / question

This is the second time I've found myself wanting a non-stationary kernel of the form
K(xi, xj) = A(xbar) K'(xi, xj)
where

  • A is an amplitude that depends on location (via xbar=(xi+xj)/2) and maybe some hyperparameters
  • K'(xi, xj) is some other kernel that may be stationary or may depend on location in some way. (In my cases, exp_squared is reasonable.)

It would be great if I could compose a kernel like this or slightly more complicated versions that have a form like A(xbar) B(xbar) K'(xi,xj) + C(xbar) D(xbar) K''(xi,xj) where I have addition and a few products for the amplitudes. As I type, I realize that maybe this is already possible. (But email is not your forte 😄)
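A quick numpy sketch of the product above (a hypothetical helper, not a george feature). One caveat worth flagging: a midpoint-dependent amplitude A(xbar) is not guaranteed to keep the matrix positive semi-definite for arbitrary A; the PSD-safe variants (Higdon / Paciorek style non-stationary kernels) use a factorized sigma(x_i) * sigma(x_j) instead.

```python
import numpy as np

def nonstationary_cov(x, amp, base_kernel):
    """K[i, j] = A(xbar) * K'(x_i, x_j) with xbar = (x_i + x_j) / 2.

    `amp` maps a midpoint to an amplitude; `base_kernel` maps (x_i, x_j) pairs
    to the stationary part."""
    xbar = 0.5 * (x[:, None] + x[None, :])
    return amp(xbar) * base_kernel(x[:, None], x[None, :])

x = np.linspace(0.0, 2.0, 5)
K = nonstationary_cov(
    x,
    amp=lambda m: 1.0 + m**2,                            # location-dependent amplitude
    base_kernel=lambda a, b: np.exp(-0.5 * (a - b)**2),  # stationary exp-squared part
)
```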

Changepoints for george kernels

We're using george for an interesting test case. We're modelling eclipsing binaries to measure component masses, but the primary star is accreting, which produces a source of flickering (red noise).

The accreting nature is key for us, as there is a bright spot where the accretion stream from the secondary star hits the accretion disc. Timing the ingress of the bright spot eclipse allows us to measure the mass ratio.

George provides the GP to model the red noise. Here's the issue: when the primary star is eclipsed, the flickering is also hidden. But because the kernel is stationary, the GP 'wants' to have large amplitude variations during eclipse, and thus the GP tends to fit the bright spot ingress, which occurs during primary eclipse.

What we need to do is implement change points - discrete times where the hyperparameters of the GP change. There's a nice description of how to define kernel functions with change points in Section 4 of Garnett, Osborne & Roberts (http://www.robots.ox.ac.uk/~parg/pubs/changepoint.pdf). So far we have implemented this in a PythonKernel, but it is slow. We'd like to implement changepoints for all native, stationary, george kernels.

I've opened this issue to discuss ways of going about this. I'd appreciate some tips as to where to start in the george codebase, but also the API needs some thought. The obvious thing to do would be to supply a list of hyper parameters, and an equal length list of change points, but this clashes somewhat with the way the API deals with ndims > 1.
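For concreteness, here is a minimal numpy sketch of the "drastic" changepoint covariance from Section 4 of that paper (a standalone helper, not tied to george's kernel machinery): points on opposite sides of the changepoint are uncorrelated, and each side gets its own hyperparameters.

```python
import numpy as np

def changepoint_cov(t1, t2, t_c, amp1, ell1, amp2, ell2):
    """Covariance with a single drastic changepoint at t_c: pairs straddling t_c
    are uncorrelated; each side gets its own squared-exponential amplitude and
    length scale, k = amp * exp(-0.5 * dt**2 / ell**2)."""
    t1 = np.asarray(t1, float)[:, None]
    t2 = np.asarray(t2, float)[None, :]
    same_side = (t1 < t_c) == (t2 < t_c)
    before = (t1 < t_c) & (t2 < t_c)
    amp = np.where(before, amp1, amp2)
    ell = np.where(before, ell1, ell2)
    return np.where(same_side, amp * np.exp(-0.5 * (t1 - t2)**2 / ell**2), 0.0)
```

Generalizing to a list of changepoints and parameter sets is mostly bookkeeping on the `same_side` mask; the harder part, as noted above, is making this play well with the ndim > 1 API.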

ExpSine2Kernel true implementation

I'm using George 0.2.1. I need the periodic kernel:

k(dt) = exp{ - Gamma * sin^2( pi*dt / period ) }

In the documentation page I find the ExpSine2Kernel kernel, whose name indicates that it is the kernel for my case. Nonetheless, the formula reported in the documentation page has a "sin" rather than an "exp" as outer function:

k(dt) = sin{ - Gamma * sin^2( pi*dt / period ) }

Is it exactly how the kernel is implemented, or is it a typo of the doc?

I also checked the doc page of George 1.0, where the bona-fide typo has been corrected from "sin" to "exp". Can you please clarify this?

Thanks a lot
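For reference, the intended formula is easy to sanity-check numerically (a plain-Python sketch of the formula as written above, not george's implementation):

```python
import math

def exp_sine2(dt, gamma, period):
    """k(dt) = exp(-gamma * sin^2(pi * dt / period)); periodic with period `period`."""
    return math.exp(-gamma * math.sin(math.pi * dt / period) ** 2)
```

With the outer "sin" from the 0.2.1 docs the kernel would not even equal 1 at dt = 0, so the formula above (matching the corrected 1.0 docs) is presumably what is implemented.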

simple model with multiple predictors

Hi,
I am interested in trying your package; it looks exciting!

Let's say I want to do y ~ X, with y a univariate output and X a matrix. For example:

# dimensions
n1, n2 = 10,50
# predictors
X = np.sort(np.random.rand(n1*n2)).reshape(n1,n2)
# linear coefficients
B = np.random.normal(0,1,size=n2)
# output: a linear function of coefficients and predictors + some noise
y = np.dot(X,B) + np.random.normal(0,.5,size=n1)

How do I go about fitting the GP? In particular:

  • I do not have error bars for the y's, so I am not sure how to call the gp.compute
  • How do I specify the dimension of the kernel?
  • Do you have a linear kernel, i.e., XX', I want to try that as well.

This looks like a neat addition to emcee, which is how I want to try it out.

Thanks a lot in advance
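On the last point, the linear kernel is just the Gram matrix X @ X.T, which you can form directly to experiment (a numpy sketch, not a george API):

```python
import numpy as np

def linear_kernel(X1, X2):
    """Linear (dot-product) kernel: K[i, j] = <X1[i], X2[j]>, i.e. K = X1 @ X2.T."""
    return np.asarray(X1) @ np.asarray(X2).T

rng = np.random.default_rng(42)
X = rng.normal(size=(10, 50))
K = linear_kernel(X, X)  # 10 x 10 Gram matrix, symmetric and PSD by construction
# with no per-point error bars, the usual trick is to feed the solver a small
# constant "jitter" (e.g. yerr = 1e-6) so the factorization stays stable
```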

Inference algorithm

Hi,
I'd just like to ask you what algorithm do you use for inference in your Gaussian processes. I see you're using some sort of truncated squared exponential kernel - what kernel is that exactly? Can you point me to any resources that you've been using to implement this?
Thank you!

Error in tutorial - hyperparameter optimization

Hi,
I was working through the tutorial on hyperparameter optimisation and there was an issue getting the co2 data with statsmodels (v0.8.0). To read the data I had to use the following:

data = sm.datasets.co2.load()
t = np.array(data.data.date).astype(int)
y = np.array(data.data.co2).astype(float)

which is slightly different to what is in the tutorial. I assume statsmodels has changed. Just thought I'd point this out.

Cheers,
James

Metric for radial kernels

When using the radial kernels it is not clear how to input the metric. For example, if the amplitude and length scale of my model are (a, t), should I set the exponential squared kernel as

k = a^2 * kernels.ExpSquaredKernel(t^2),

or

k = a * kernels.ExpSquaredKernel(t) ?

The same happens with the rest of the radial kernels. The documentation doesn't really clarify it. Thank you.
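My reading (worth verifying against the source) is that the metric is the squared length scale, i.e. ExpSquaredKernel(m) computes exp(-0.5 r^2 / m), which would make the first form, a^2 * ExpSquaredKernel(t^2), the one matching an (amplitude a, length scale t) model. A sketch of the two conventions under that assumption:

```python
import math

def expsq_metric(r, m):
    """Radial kernel with metric m, read here as the squared length scale."""
    return math.exp(-0.5 * r**2 / m)

def expsq_amp_length(r, a, t):
    """The usual (amplitude, length scale) form: a^2 * exp(-r^2 / (2 t^2))."""
    return a**2 * math.exp(-0.5 * r**2 / t**2)

# under that reading: a**2 * expsq_metric(r, t**2) == expsq_amp_length(r, a, t)
```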

Bug in gp.compute?

Bear with me, I'm new to this. But if I run, for example,

x = np.linspace(0,1,10)
errs = np.linspace(1,10,10)
kernel = george.kernels.ExpKernel(2)
gp = george.GaussianProcess(kernel)
gp.compute(x,errs)
np.diag(gp._gp.get_matrix(x))

I get

array([ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.])

when I expect

array([  1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.])
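For anyone else hitting this: my understanding (worth verifying) is that get_matrix returns only the kernel part of the covariance, while the errors passed to compute are added separately along the diagonal as sigma_i**2. Schematically:

```python
import numpy as np

x = np.linspace(0, 1, 10)
errs = np.linspace(1, 10, 10)

# schematic ExpKernel-style covariance, k(r) = exp(-|r| / ell); its diagonal is k(0) = 1
ell = 1.0
K = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# the uncertainties passed to compute() enter the solver's matrix on the diagonal
C = K + np.diag(errs**2)
# np.diag(K) is all ones (what get_matrix shows); np.diag(C) is 1 + errs**2
```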

kernel.value not using updated hyperparameters with HODLR solver

I'm hitting what I think may be a bug in which the kernel.value method is somehow not using updated kernel parameters when using the HODLR solver.

Here's what I'm doing. I train a model using the HODLR solver. I check that the kernel hyperparameters have been updated from their initial values. I then call kernel.value and find that the result is using the kernel with the initial values.

I've made a sample script that reproduces the bug. When you run the script, it prints out the log-likelihood of the trained model, the kernel with trained hyperparameters, and the main diagonal of the covariance matrix. If you run the script using the basic solver

python george-bug.py --solver basic

the values of the main diagonal are all 1.257, which is, correctly, the sum of the trained ConstantKernel variance and the WhiteKernel variance. However, if you run the script using the HODLR solver

python george-bug.py --solver HODLR

the values of the main diagonal are all 2.0, which is, incorrectly, the sum of the initial ConstantKernel variance and the WhiteKernel variance.

For the record, I'm using Python 2.7 and George 0.2.1 on an old version (Lucid) of Ubuntu.

Check for x_.cols() in predict of solver.h

Hi Dan,

I was just playing around with George for the first time, and some of your demo code results in errors. Specifically, the line
samples = gp.sample_conditional(y, t, size=100)
on the main George page fails with a fairly general runtime warning.

I tracked it down to line 154 in solver.h, where ndim is tested against x_.cols(). Is that really necessary? The code runs fine with only the check for x_.rows().

Thanks!

Rutger

pip cannot find george

I'm on Ubuntu, have installed libeigen-dev on the system, and am using miniconda with Python v3.5.

scopatz@agenor ~ $ pip install george
Collecting george
  Could not find a version that satisfies the requirement george (from versions: )
No matching distribution found for george

Also, it would be nice if there was a conda package for this. Conda is much better suited to this use case than pip is. My 2 drachmas, though :)

stable version installed with pip didn't work for me

I just tried installing the stable version using pip and couldn't get it to work -- it didn't recognize that there was a module called gp and I didn't see gp in the george folder. However, I followed the instructions to install the development version and that worked. It could be that I'm somehow misusing pip (or that the stable version isn't actually supposed to work yet) but just thought I'd mention it in case others are having this issue.

Bekki

Speeding up repeated predictions

As it's currently implemented, a solver must recreate the alpha matrix every time predict() is called. Since alpha only depends on the observations y, this is very inefficient when predicting many different test points t conditioned on the same y.

Why would one want to do this? (Why not just put all the test points into one array and call predict() once?) Well for one, if the test points are sampled from MCMC, there is no other choice.

I hacked in a quick fix to cache alpha, and it gave a ~10x speedup.

There's a way to do this quite elegantly, but it would require breaking compatibility with the current API. Something like:

  • compute(x) -> compute(x, y=None)
    • If y is given, then calculate and save alpha.
  • predict(y, t) -> predict(t, y=None)
  • sample_conditional(y, t) -> sample_conditional(t, y=None)
  • lnlikelihood(y) -> lnlikelihood(y=None)
    • If y is given, things work exactly as they do now.
    • If not, use the saved alpha. Exception if alpha was not pre-calculated.

So one could have, e.g.:

gp.compute(x, y)
prior_draw = gp.sample(t)
posterior_draw = gp.sample_conditional(t)
mu, cov = gp.predict(t)
prob = gp.lnlikelihood()

I like this. I think it makes the API cleaner and more consistent.

I can code this up and submit a PR, but before I go ahead...do you want it? Is there a way to do it without breaking backwards compatibility?
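For reference, here is an independent numpy sketch of the caching idea (CachedGP and sqexp are hypothetical names, not george's API): the Cholesky factor depends only on x, and alpha = K^{-1} y only on (x, y), so each predict call reduces to a single matrix-vector product.

```python
import numpy as np

def sqexp(x1, x2, ell=1.0):
    """Squared-exponential covariance between two 1-D coordinate arrays."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

class CachedGP:
    """Sketch of the proposed compute(x, y): factor K once, solve for alpha once,
    and reuse both for every subsequent predict_mean(t)."""

    def compute(self, x, y, yerr=1e-5):
        self.x = np.asarray(x, float)
        K = sqexp(self.x, self.x) + np.diag(np.full(len(self.x), yerr**2))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))

    def predict_mean(self, t):
        # no solve against y here: just one matrix-vector product per call
        return sqexp(np.asarray(t, float), self.x) @ self.alpha
```

Predicting at the training inputs should reproduce y (up to the jitter), which is a handy correctness check for the cached path.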

Unexpected ln value for the constant in front of an ExpSquaredKernel

Hello, I defined an ExpSquaredKernel like so:

	A_xy_i = 10**(-4.5)
	var_x_i = 18**(-1)
	var_y_i = 18**(-1)

	k_spatial = A_xy_i * kernels.ExpSquaredKernel(metric=[var_x_i, var_y_i], ndim=3, axes=[0,1])

But when I print the values I get this:

In [1]: k_spatial.get_vector()
Out[1]: array([-11.46024521,  -2.89037176,  -2.89037176])

Which doesn't agree with the following:

In [2]: A_xy_i

Out[2]: 3.1622776601683795e-05

In [4]: np.log(A_xy_i)

Out[4]: -10.361632918473205

This happens every time I define a constant in front of the ExpSquaredKernel. I didn't change the vector value either this is immediately after the definition. Is this supposed to happen? What is the actual conversion from the amplitude to the constant vector?

Using 1.0-dev branch.
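One arithmetic observation, offered as a clue rather than a documented convention: the metric entries are plain natural logs of the inputs, but the reported amplitude entry matches log(A_xy_i / 3), i.e. the scalar appears to be divided by ndim before the log is taken. Worth checking against the 1.0-dev source:

```python
import math

A_xy_i = 10 ** (-4.5)
var_x_i = 18 ** (-1)

# the metric entries are just the natural log of the inputs...
assert math.isclose(math.log(var_x_i), -2.89037176, abs_tol=1e-7)

# ...but the amplitude entry matches log(A / ndim) with ndim = 3
assert math.isclose(math.log(A_xy_i / 3), -11.46024521, abs_tol=1e-7)
```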

MemoryError when computing grad_lnlikelihood with HODLR

When trying to calculate the grad_lnlikelihood method of a large Gaussian process on the 1.0.0dev branch I get thrown a memory error.

Below's a minimum (not) working example which replicates the problem on my system; on a system with a much larger amount of memory I've been able to increase past ~4000 samples, but isn't this problem meant to be avoided by using the HODLR solver?

import numpy as np
import george
from george import kernels

training_x = np.random.rand(4000, 9)
training_y = np.random.rand(4000, 1)

k0 =  np.std(training_y)**2
k3 = kernels.Matern52Kernel(np.ones(9), ndim=9)

kernel = k0+k3
gp = george.GP(kernel, tol=1e-6, solver=george.HODLRSolver)

gp.compute(training_x)

gp.grad_lnlikelihood(training_y.T[0])

The traceback I get is

---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
<ipython-input-423-f8304806f185> in <module>()
     14 gp.compute(training_x)
     15 
---> 16 gp.grad_lnlikelihood(training_y.T[0])

/home/daniel/.virtualenvs/heron/local/lib/python2.7/site-packages/george-1.0.0.dev0-py2.7-linux-x86_64.egg/george/gp.pyc in grad_lnlikelihood(self, y, quiet)
    453         if self.fit_kernel and len(self.kernel):
    454             l = len(self.kernel)
--> 455             Kg = self.kernel.get_gradient(self._x)
    456             grad[n:n+l] = 0.5 * np.einsum("ijk,ij", Kg, A)
    457 

/home/daniel/.virtualenvs/heron/local/lib/python2.7/site-packages/george-1.0.0.dev0-py2.7-linux-x86_64.egg/george/kernels.pyc in get_gradient(self, x1, x2)
    204             x2 = np.ascontiguousarray(x2, dtype=np.float64)
    205             g = self.kernel.gradient_general(which, x1, x2)
--> 206         return g[:, :, self.unfrozen]
    207 
    208     def test_gradient(self, x1, x2=None, eps=1.32e-6, **kwargs):

MemoryError: 

unrecognized flag when solving for white noise component of covariance matrix

When I want to explicitly solve for the white noise component of the covariance matrix, together with an exponential-squared or Matern kernel, I set the flag fit_white_noise=True in the call to the GP object. Here's the relevant code snippet:

def lnlike(p, tPass, y, yerr):
    a, tau = np.exp(p[0:2])
    kernelChoice = kernels.ExpSquaredKernel(tau)
    gp = george.GP(a * kernelChoice, fit_white_noise=True, solver=george.HODLRSolver) 
    gp.compute(np.squeeze(tPass), yerr)
    return gp.lnlikelihood(y - model_fit(tPass)) 

which returns the error
TypeError: __cinit__() got an unexpected keyword argument 'fit_white_noise'

Has this option been deprecated? Do I access the sigma^2*I matrix some other way? (The other kernel parameters converge if I just don't set that flag, but I can't tell how to access the underlying white noise component.)

Required library HODLR not found. Check the documentation for solutions.

ubgpu@ubgpu:/github/george$ sudo python setup.py install
running install
running bdist_egg
running egg_info
writing george.egg-info/PKG-INFO
writing top-level names to george.egg-info/top_level.txt
writing dependency_links to george.egg-info/dependency_links.txt
writing pbr to george.egg-info/pbr.json
reading manifest file 'george.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'hodlr/header/*.hpp'
writing manifest file 'george.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
Found Eigen version 3.2.0 in: /usr/include/eigen3
Traceback (most recent call last):
File "setup.py", line 177, in
"Programming Language :: Python",
File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/dist-packages/setuptools/command/install.py", line 73, in run
self.do_egg_install()
File "/usr/lib/python2.7/dist-packages/setuptools/command/install.py", line 88, in do_egg_install
self.run_command('bdist_egg')
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/dist-packages/setuptools/command/bdist_egg.py", line 185, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/usr/lib/python2.7/dist-packages/setuptools/command/bdist_egg.py", line 171, in call_command
self.run_command(cmdname)
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/dist-packages/setuptools/command/install_lib.py", line 21, in run
self.build()
File "/usr/lib/python2.7/distutils/command/install_lib.py", line 111, in build
self.run_command('build_ext')
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/dist-packages/setuptools/command/build_ext.py", line 49, in run
_build_ext.run(self)
File "/usr/lib/python2.7/distutils/command/build_ext.py", line 337, in run
self.build_extensions()
File "/usr/lib/python2.7/dist-packages/Pyrex/Distutils/build_ext.py", line 82, in build_extensions
self.build_extension(ext)
File "setup.py", line 99, in build_extension
raise RuntimeError("Required library HODLR not found. "
RuntimeError: Required library HODLR not found. Check the documentation for solutions.
ubgpu@ubgpu:/github/george$

Installation failure with conflicting error messages.

Following the installation instructions I tried first using the pip method (on two computers). First one worked (Ubuntu 16.10), second one failed (Ubuntu 14.04, but similar python environment) with the following error code (copy pasting the relevant part):

gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Iinclude -I/home/marko/anaconda3/lib/python3.5/site-packages/numpy/core/include -I/usr/include/eigen3 -I./hodlr/header -I/home/marko/anaconda3/include/python3.5m -c george/_kernels.cpp -o build/temp.linux-x86_64-3.5/george/_kernels.o -Wno-unused-function -Wno-uninitialized
  gcc: error trying to exec 'cc1plus': execvp: No such file or directory
  error: command 'gcc' failed with exit status 1
  
  ----------------------------------------
  Failed building wheel for george
  Running setup.py clean for george
Failed to build george
Installing collected packages: george
  Running setup.py install for george ... error
    Complete output from command /home/marko/anaconda3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-qf6ggi46/george/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-is44ebst-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.5
    creating build/lib.linux-x86_64-3.5/george
    copying george/utils.py -> build/lib.linux-x86_64-3.5/george
    copying george/kernels.py -> build/lib.linux-x86_64-3.5/george
    copying george/gp.py -> build/lib.linux-x86_64-3.5/george
    copying george/generate_kernel_defs.py -> build/lib.linux-x86_64-3.5/george
    copying george/__init__.py -> build/lib.linux-x86_64-3.5/george
    copying george/basic.py -> build/lib.linux-x86_64-3.5/george
    creating build/lib.linux-x86_64-3.5/george/testing
    copying george/testing/test_solvers.py -> build/lib.linux-x86_64-3.5/george/testing
    copying george/testing/test_tutorial.py -> build/lib.linux-x86_64-3.5/george/testing
    copying george/testing/test_gp.py -> build/lib.linux-x86_64-3.5/george/testing
    copying george/testing/__init__.py -> build/lib.linux-x86_64-3.5/george/testing
    copying george/testing/test_pickle.py -> build/lib.linux-x86_64-3.5/george/testing
    copying george/testing/test_kernels.py -> build/lib.linux-x86_64-3.5/george/testing
    running egg_info
    writing dependency_links to george.egg-info/dependency_links.txt
    writing top-level names to george.egg-info/top_level.txt
    writing george.egg-info/PKG-INFO
    warning: manifest_maker: standard file '-c' not found
    
    reading manifest file 'george.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    writing manifest file 'george.egg-info/SOURCES.txt'
    copying george/_kernels.cpp -> build/lib.linux-x86_64-3.5/george
    copying george/hodlr.cpp -> build/lib.linux-x86_64-3.5/george
    running build_ext
    Found Eigen version 3.2.0 in: /usr/include/eigen3
    Found HODLR headers in: ./hodlr/header
    building 'george._kernels' extension
    creating build/temp.linux-x86_64-3.5
    creating build/temp.linux-x86_64-3.5/george
    gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Iinclude -I/home/marko/anaconda3/lib/python3.5/site-packages/numpy/core/include -I/usr/include/eigen3 -I./hodlr/header -I/home/marko/anaconda3/include/python3.5m -c george/_kernels.cpp -o build/temp.linux-x86_64-3.5/george/_kernels.o -Wno-unused-function -Wno-uninitialized
    gcc: error trying to exec 'cc1plus': execvp: No such file or directory
    error: command 'gcc' failed with exit status 1
    
    ----------------------------------------
Command "/home/marko/anaconda3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-qf6ggi46/george/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-is44ebst-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-qf6ggi46/george/

I think it's relevant that it seems to find HODLR on my machine: Found HODLR headers in: ./hodlr/header

So I tried installing the development version by cloning off github, and then I get the following error:

running build_ext
Found Eigen version 3.2.0 in: /usr/include/eigen3
Traceback (most recent call last):
  File "setup.py", line 177, in <module>
    "Programming Language :: Python",
  File "/home/marko/anaconda3/lib/python3.5/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/marko/anaconda3/lib/python3.5/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/home/marko/anaconda3/lib/python3.5/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/home/marko/anaconda3/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg/setuptools/command/install.py", line 67, in run
  File "/home/marko/anaconda3/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg/setuptools/command/install.py", line 109, in do_egg_install
  File "/home/marko/anaconda3/lib/python3.5/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/marko/anaconda3/lib/python3.5/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/home/marko/anaconda3/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg/setuptools/command/bdist_egg.py", line 161, in run
  File "/home/marko/anaconda3/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg/setuptools/command/bdist_egg.py", line 147, in call_command
  File "/home/marko/anaconda3/lib/python3.5/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/marko/anaconda3/lib/python3.5/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/home/marko/anaconda3/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg/setuptools/command/install_lib.py", line 11, in run
  File "/home/marko/anaconda3/lib/python3.5/distutils/command/install_lib.py", line 107, in build
    self.run_command('build_ext')
  File "/home/marko/anaconda3/lib/python3.5/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/marko/anaconda3/lib/python3.5/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/home/marko/anaconda3/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg/setuptools/command/build_ext.py", line 77, in run
  File "/home/marko/anaconda3/lib/python3.5/site-packages/Cython/Distutils/build_ext.py", line 164, in run
    _build_ext.build_ext.run(self)
  File "/home/marko/anaconda3/lib/python3.5/distutils/command/build_ext.py", line 338, in run
    self.build_extensions()
  File "/home/marko/anaconda3/lib/python3.5/site-packages/Cython/Distutils/build_ext.py", line 172, in build_extensions
    self.build_extension(ext)
  File "setup.py", line 99, in build_extension
    raise RuntimeError("Required library HODLR not found. "
RuntimeError: Required library HODLR not found. Check the documentation for solutions.

Which seems contradictory as it found HODLR the first time. Also I didn't understand the first error message.

Add a function from outside code

How would I add a function from outside code?
I added

#include "Dawson.h"

to kernels.h in the templates directory,
and then added Dawson.h and Dawson.cpp to the george/include/ directory, and reran the setup.py script.
It seems to run to completion, so that's good.
However, when I try to import george in python, I get the following error:

import george
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build/bdist.linux-x86_64/egg/george/__init__.py", line 18, in <module>
File "build/bdist.linux-x86_64/egg/george/kernels.py", line 31, in <module>
File "build/bdist.linux-x86_64/egg/george/cython_kernel.py", line 7, in <module>
File "build/bdist.linux-x86_64/egg/george/cython_kernel.py", line 6, in __bootstrap__
ImportError: /home/mjlewis/.python-eggs/george-1.0.0.dev0-py2.7-linux-x86_64.egg-tmp/george/cython_kernel.so: undefined symbol: _Z6Dawsond

I assume I need to modify kernels.py or maybe __init__.py.
Any suggestions?

minor doc tweak

Super minor, but since you are editing docs... this method:
sample_conditional(y, t, size=1)
....
Returns samples (N, ntest), a list of predictions at coordinates given by t.

Should be "samples (size, ntest)"

Same kernel returns different results with same data

Officially opening an issue, per earlier conversation:

The following code:

import numpy as np
import george
from george.kernels import ExpSquaredKernel

a = 0.1
s = 0.5

x = np.linspace(0, 1000, 10000)
y = np.random.random(10000)*0.1 + 0.5

yerr = 0.1*np.ones(10000)
m = 0.5*np.ones(10000)

for i in xrange(10):
    kernel = a * ExpSquaredKernel(s)
    gp = george.GaussianProcess(kernel, tol=1e-12, nleaf=100)
    gp.compute(x, yerr)
    print gp.lnlikelihood(y-m)

returns

10124.3121797
10125.0028425
10125.1186467
10125.332759
10125.332759
10125.332759
10125.2206463
10124.5410274
10124.5410274
10123.6107036

It's interesting that the same result is returned multiple times, making me think that it's some small random number propagating through the matrix operations (although that's just a guess...). Changing tol or nleaf doesn't seem to affect the results.

George is 1D Only?

Hello! This is more of a question than anything, but it seems that george only handles 1-dimensional inputs and responses. Is this true? It seems like the sklearn.gaussian_process module does handle N-dimensional inputs and responses. I am new to the GP world and trying to get a handle on the different tools and their limitations. Thanks!
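For the record, george's kernels do accept multi-dimensional inputs via an `ndim` argument (e.g. `ExpSquaredKernel(1.0, ndim=3)`), with `gp.compute` then taking an array of shape `(nsamples, ndim)`; see the documentation for details. GP regression itself is dimension-agnostic, which the following plain-numpy sketch (not george code) illustrates:

```python
import numpy as np

def rbf(A, B, scale=1.0):
    # Squared-exponential kernel; A and B have shape (n, d) for any d.
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / scale)

rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(50, 3))      # 3-dimensional inputs
y = np.sin(X.sum(axis=1))                    # scalar responses
X_test = rng.uniform(0.0, 1.0, size=(5, 3))

K = rbf(X, X) + 1e-6 * np.eye(len(X))        # jitter keeps the solve stable
mu = rbf(X_test, X) @ np.linalg.solve(K, y)  # GP posterior mean at X_test
```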

Circulant embedding for fast sampling?

One thing we're interested in doing is sampling the conditional distribution of a GP at regular intervals in multiple dimensions, e.g., at every point in a 1000x1000 grid. This is infeasible with the naive methods currently in George, because it would require constructing a covariance matrix with 10^12 elements.

However, it is indeed possible to sample realizations at regularly spaced intervals in N dimensions very quickly using a trick involving the FFT. The technique is known as circulant embedding (see, e.g., https://www.stat.washington.edu/research/reports/2005/tr477.pdf) and is implemented in several R packages, notably RandomFields.

Unfortunately, we haven't found any elegant Python implementations. Though I'm guessing that this is something that you don't personally need, it seems like it might be a natural fit for a feature to add to George, to complement its fast training. Thoughts?
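For reference, here is a minimal 1D version of the trick in pure numpy (the 2D case is the same idea with `fft2`). This is a sketch of unconditional sampling, not george code, and it assumes a stationary covariance whose circulant embedding is positive semi-definite:

```python
import numpy as np

def circulant_sample(cov_fn, n, dx=1.0, rng=None):
    """Sample a stationary zero-mean GP on a regular 1D grid in O(n log n)."""
    rng = np.random.default_rng() if rng is None else rng
    m = 2 * n                                         # embed in a 2n circulant
    lags = np.minimum(np.arange(m), m - np.arange(m)) * dx
    lam = np.fft.fft(cov_fn(lags)).real               # circulant eigenvalues
    if lam.min() < -1e-8 * lam.max():
        raise ValueError("embedding is not PSD; enlarge the embedding")
    lam = np.clip(lam, 0.0, None)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = np.fft.fft(np.sqrt(lam) * z) / np.sqrt(m)
    return y.real[:n]        # y.imag[:n] is a second, independent draw

# Example: squared-exponential covariance with unit variance.
sample = circulant_sample(lambda r: np.exp(-0.5 * r ** 2), 1000, dx=0.05)
```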

pip ValueError

Installed via sudo pip install george. No problem when I manually installed from source, but got this for the pip version:

In [1]: import george
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-…> in <module>()
----> 1 import george

/usr/local/lib/python2.7/dist-packages/george/__init__.py in <module>()
     11 __all__ = ["kernels", "GP", "BasicSolver", "HODLRSolver"]
     12
---> 13 from . import kernels
     14 from .gp import GP
     15 from .basic import BasicSolver

/usr/local/lib/python2.7/dist-packages/george/kernels.py in <module>()
     16 from functools import partial
     17
---> 18 from ._kernels import CythonKernel
     19 from .utils import numerical_gradient
     20

ImportError: No module named _kernels

Calculations of the gradient of the likelihood

Hello!

I've been developing my own Gaussian process code in Python, using george as a guide and as a comparison for my results.

Right now I've run into a problem calculating the gradient of the marginal likelihood, and I don't really know its source or why it happens.

To compare my results with george's results I've created a kernel equal to george's ExpSine2Kernel that I've named ExpSineGeorge and can be found here:
https://github.com/jdavidrcamacho/Tests_GP/blob/master/Programs%20being%20tested/Kernel.py#L242

On it I've created the kernel and the derivatives that are going to be used to calculate the gradient.

The tests to compare both versions are here:
https://github.com/jdavidrcamacho/Tests_GP/blob/master/Programs%20being%20tested/tests.py#L17

Using the kernels ExpSineGeorge(2.0/1.1**2, 7.1) and ExpSine2Kernel(2.0/1.1**2, 7.1), both versions give the same marginal likelihood (-54.31687 in the linked example). But for the gradient I get (-1.62352, -120.41732) in my version and (-2.68350, -854.96297) in george.

As far as I can see, my calculation of the gradient is correct (I based it on Rasmussen & Williams, just like george), and I even used some of george's code for it, which can be found here:
https://github.com/jdavidrcamacho/Tests_GP/blob/master/Programs%20being%20tested/Likelihood.py#L37
To be precise, the final step of my gradient calculation is identical to george's (see line 59 of the previous link). My covariance matrix also appears to be correct; if it weren't, I don't think I would get the same marginal likelihood as george, since the marginal likelihood calculation uses it.

The important detail I've discovered, and that I can't explain, is this:
The kernels ExpSine2Kernel and ExpSineGeorge each have two derivatives, one with respect to gamma and another with respect to P (I don't include an amplitude, to keep things simple). Since both kernels' expressions are equal, so are their derivatives.
I've checked george's code at https://github.com/dfm/george/blob/master/include/kernels.h#L414 and, as far as I can see, both kernels' derivatives are equal.
There is only one way I've managed to get the same results as george: if I take my derivative with respect to gamma and multiply it by gamma, and take my derivative with respect to P and multiply it by P, I get exactly the same gradient as george.

You can go to https://github.com/jdavidrcamacho/Tests_GP/blob/master/Programs%20being%20tested/Kernel.py#L255 and check it for yourself. Running tests.py with the code as it is gives different results; if you remove the hash signs in the return statement (so my derivatives are multiplied by gamma and P) and run tests.py again, you get the same result as george.

Any help understanding why this happens, or whether I'm doing something wrong, would be much appreciated.
Cheers
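Following up for anyone who hits the same discrepancy: the factor discovered above (multiplying each derivative by its own parameter) is exactly what the chain rule gives when gradients are taken with respect to the logarithm of the parameters, dL/d(ln θ) = θ · dL/dθ, which matches george's internal log-parameterization of kernel hyperparameters. A quick numerical check of that identity, using an arbitrary stand-in function rather than the actual likelihood:

```python
import numpy as np

def f(theta):
    # Stand-in for the marginal likelihood as a function of one hyperparameter.
    return np.sin(theta) * theta ** 2

theta, eps = 1.3, 1e-6
d_dtheta = (f(theta + eps) - f(theta - eps)) / (2 * eps)
d_dlog = (f(theta * np.exp(eps)) - f(theta * np.exp(-eps))) / (2 * eps)
# Chain rule: df/d(ln theta) = theta * df/dtheta.
print(d_dlog, theta * d_dtheta)
```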

Installing george with local copy of eigen3

I'm trying to get george running on the UW astronomy department machines (Scientific Linux) with Python 3.5. I've downloaded and unpacked the source for eigen, and I'm running the install with:

pip install george \
    --global-option=build_ext \
    --global-option=-I/path/to/Downloads/eigen-eigen-c58038c56923/Eigen

and getting the error:

 RuntimeError: Required library Eigen 3 not found. Check the documentation for solutions.

The INSTALL docs within Eigen suggest that it doesn't need to be built to be used, but should I be building something first? Am I pointing to the right directory within the un-archived package?

Thanks!
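In case it helps others landing here: Eigen is header-only, so nothing needs to be built first, but the `-I` flag should point at the directory that contains the `Eigen/` folder rather than at `Eigen/` itself, because the headers are included as `#include <Eigen/Dense>`. Something like the following (path illustrative):

```shell
pip install george \
    --global-option=build_ext \
    --global-option=-I/path/to/Downloads/eigen-eigen-c58038c56923
```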

Tag releases on GitHub?

You already have a version 0.1.1 on PyPI... this would make it easier to see what's changed in the code since then.

ready for prime time?

We are going to get slammed when the paper hits arXiv.

Also, do the docs clearly refer to the IEEE paper and request citation? I know these points are important to our IEEE co-authors, and rightly so.
