
pints's Introduction


What is Pints?

PINTS (Probabilistic Inference on Noisy Time-Series) is a framework for optimisation and Bayesian inference on ODE models of noisy time-series, such as arise in electrochemistry and cardiac electrophysiology.

PINTS is described in this publication in JORS, and can be cited using the information given in our CITATION file. More information about PINTS papers can be found in the papers directory.

Using PINTS

PINTS can work with any model that implements the pints.ForwardModel interface. This has just two methods:

n_parameters() --> Returns the dimension of the parameter space.

simulate(parameters, times) --> Returns a vector of model evaluations at
                                the given times, using the given parameters

Experimental data sets in PINTS are defined simply as lists (or arrays) of times and corresponding experimental values. If you have this kind of data, and if your model (or model wrapper) implements the two methods above, then you are ready to start using PINTS to infer parameter values using optimisation or sampling.
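For illustration, here is a minimal sketch of a model implementing these two methods. The model (a hypothetical exponential decay) is made up, and for brevity it is shown duck-typed with only the standard library; in real use it would subclass pints.ForwardModel.

```python
import math

# A hypothetical model: exponential decay y(t) = a * exp(-b * t).
# In real use this would subclass pints.ForwardModel; here only the
# two required methods are shown.
class DecayModel:
    def n_parameters(self):
        # Two parameters: amplitude a and decay rate b
        return 2

    def simulate(self, parameters, times):
        a, b = parameters
        return [a * math.exp(-b * t) for t in times]

model = DecayModel()
values = model.simulate([2.0, 0.5], [0.0, 1.0, 2.0])
```

With a times list and matching data list, this object is exactly the shape of input PINTS expects.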

A brief example of using PINTS in an optimisation is shown below: (left) a noisy experimental time series and a computational forward model; (right) example code for an optimisation problem. The full code can be viewed here, but a friendlier, more elaborate introduction can be found on the examples page.

Beyond time-series models, PINTS can be used on any error function or log-likelihood that takes real-valued, continuous parameters.
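As a sketch of what such an error function looks like (all data and model choices below are hypothetical): a callable mapping a real-valued parameter vector to a non-negative score, where smaller is better.

```python
# A toy error function over continuous parameters, in the shape PINTS
# can minimise: smaller is better. The data and model are made up.
def sum_of_squares_error(parameters):
    times = [0.0, 1.0, 2.0, 3.0]
    data = [1.0, 0.9, 0.7, 0.6]          # hypothetical observations
    a, b = parameters
    model = [a - b * t for t in times]   # toy linear model
    return sum((m - d) ** 2 for m, d in zip(model, data))

err_at_truth = sum_of_squares_error([1.0, 0.15])
```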

A graphical overview of the methods included in PINTS can be viewed here.

Examples and documentation

PINTS comes with a number of detailed examples, hosted here on GitHub. In addition, there is full API documentation, hosted on readthedocs.io.

Installing PINTS

The latest release of PINTS can be installed without downloading (cloning) the git repository, by opening a console and typing

$ pip install --upgrade pip
$ pip install pints

Note that you'll need Python 3.6 or newer.

If you prefer to have the latest cutting-edge version, you can instead install from the repository, by typing

$ git clone https://github.com/pints-team/pints.git
$ cd pints
$ pip install -e .[dev,docs]

To uninstall again, type:

$ pip uninstall pints

What's new in this version of PINTS?

To see what's changed in the latest release, see the CHANGELOG.

Contributing to PINTS

There are lots of ways to contribute to PINTS development, and anyone is free to join in! For example, you can report problems or make feature requests on the issues page.

Similarly, if you want to contribute documentation or code you can tell us your idea on this page, and then provide a pull request for review. Because PINTS is a big project, we've written extensive contribution guidelines to help standardise the code — but don't worry, this will become clear during review.

License

PINTS is fully open source. For more information about its license, see LICENSE.

Get in touch

Questions, suggestions, or bug reports? Open an issue and let us know.

Alternatively, feel free to email us at pints at maillist.ox.ac.uk.

pints's People

Contributors

aabills, alisterde, arnaudyoh, ben18785, braun-steven, chonlei, danielfridman98, davaug, fcooper8472, i-bouros, iamleeg, jarthur36, k-shep, lorcandelaney, martinjrobins, michaelclerx, mirams, phumtutum, rccreswell, sanmitraghosh, simonmarchant


pints's Issues

Replace priors by log-priors

Sanmitra says it's better :-)

  1. Are all the algorithms happy with this?
  2. Should this only happen under the hood? I imagine users would prefer to specify a prior rather than a log prior... We could even think about giving the Prior class a log() method?
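One practical argument for working in log space is numerical: a product of many small prior/likelihood densities underflows to zero, while the sum of their logs stays finite. A toy stdlib-only illustration (the density values are made up):

```python
import math

# Products of many small densities underflow; sums of logs do not.
densities = [1e-10] * 1000

product = 1.0
for d in densities:
    product *= d                 # underflows to exactly 0.0

log_sum = sum(math.log(d) for d in densities)   # stays finite
```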

Look into model selection using 'reversible jump mcmc'

Chris Gill wrote:

I had an idea last summer about how one might go about doing model selection and parameter fitting in one go using mcmc but didn’t have time to get the details working enough to share it with anyone. It turns out someone has already developed the idea in quite a general framework, and it is useful - it’s called reversible jump mcmc. Essentially you can jump between different parameter spaces provided you have a suitable map between them. The wikipedia page has a fairly short intro to it. I wondered if that might be an interesting direction to try out with the electrochemistry stuff, e.g. determining mechanism of action and the parameters in one (admittedly computationally expensive) go? Just a thought, and I’m sure it will depend on how the different reaction models are specified, but I’ve been meaning to email you about it for some time now.

Work out interface to get 1st order sensitivities into Pints

Gary wrote: "Another 'whilst I remember' type thing! It would be good to get boring-old-Fisher Information / Hessian at the MLE and the covariance matrix that that implies, so we could compare max likelihood with Bayesian for some of these problems. Some of our peaks are so unimodal I suspect it may be an excellent approximation for a lot of our problems, and a zillion times faster."
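A sketch of the idea, under the standard assumption that the covariance at the MLE is approximated by the inverse of the observed Fisher information (the negative Hessian of the log-likelihood at the MLE). The toy case below uses Gaussian data with known sigma, where the analytic answer is sigma^2 / n, and a central-difference second derivative:

```python
import math

# Toy case: n Gaussian observations with known sigma; the MLE of the
# mean is the sample mean, and the covariance from the observed Fisher
# information is sigma**2 / n. Data values are hypothetical.
data = [1.2, 0.8, 1.1, 0.9, 1.0]
sigma = 0.5

def log_likelihood(mu):
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

mle = sum(data) / len(data)

# Central-difference second derivative of the log-likelihood at the MLE
h = 1e-4
hessian = (log_likelihood(mle + h) - 2 * log_likelihood(mle)
           + log_likelihood(mle - h)) / h ** 2

variance = -1.0 / hessian    # approximates sigma**2 / n = 0.05
```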

Add brute-force samplers (for uniform priors)

For example, explore each parameter individually (param x, score y) or plot any two parameters against each other (param 1 x, param 2 z, score y)

Use evaluator interface to parallelise

  • Uniform
  • Latin hypercube
  • Sobol
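A minimal sketch of the uniform (grid) case for a single parameter, with a made-up score function; Latin hypercube or Sobol draws would replace the grid, and the evaluator interface would parallelise the score calls:

```python
# Brute-force scan: evaluate a score on a regular grid over one
# parameter's uniform prior range. The score function is a toy.
def score(x):
    return (x - 0.3) ** 2        # minimum at x = 0.3

lower, upper, n = 0.0, 1.0, 101
grid = [lower + i * (upper - lower) / (n - 1) for i in range(n)]
scores = [score(x) for x in grid]
best = grid[scores.index(min(scores))]
```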

CellML Model class

It would be nice to have a class that implements the Model concept, taking a CellML file (or string?) that defines the model.

@MichaelClerx: you have some CellML conversion routines, don't you? Would these be useful here?

STAN interface

Need a function/class that takes a model and a data set, passes these into STAN to be solved (using HMC), and returns distributions on the parameters

Implement Kylie's models

  • Aslanidi 2009
  • Clancy 2001
  • Courtemanche 1998
  • Di Veroli 2013 a
  • Di Veroli 2013 b
  • Fink 2008
  • Fox 2002
  • Grandi 2010
  • Hund 2004
  • Inada 2009
  • Kurata 2002
  • Lindblad 1996
  • Lin 1996
  • Lu 2001
  • Matsuoka 2003
  • Mazhari 2001
  • Noble 1998
  • Nygren 1998
  • Oehmen 2002
  • O'Hara 2011
  • Priebe 1998
  • Ramirez 2000
  • Seemann 2003
  • Severi 2012
  • Shannon 2004
  • Ten Tusscher 2004
  • Wang 1997
  • Winslow 1999
  • Zeng 1995
  • Zhang 2000

Skipping Kiehn 1999 because it doesn't have equations for the rate constants but a look-up table instead

Investigate Gradient Profiling

Following Gary's suggestion on slack

"Gradient Profiling (Giles Hooker) CollocInfer in R - a mixture of Nonlinear least squares and Gradient matching I think… need to read up"

adaptive covariance MCMC functionality

Need a function/class that takes a model, a set of parameters, prior information, and a data set, and returns distributions on the parameters using adaptive covariance MCMC
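A minimal 1-D sketch of the adaptive-covariance idea (in the spirit of Haario-style adaptive Metropolis: the proposal scale is adapted from the chain's running variance). The target here is a stand-in standard normal log-density, not a real posterior:

```python
import math
import random

# 1-D adaptive Metropolis sketch: proposal scale adapts to the
# chain's running variance. Target: standard normal log-density.
def log_target(x):
    return -0.5 * x * x

random.seed(1)
x = 0.0
samples = []
mean, m2 = 0.0, 1.0           # running mean; m2 seeded to avoid a zero step
for i in range(1, 20001):
    step = math.sqrt(2.4 ** 2 * (m2 / i) + 1e-6)
    y = x + random.gauss(0.0, step)
    if random.random() < math.exp(min(0.0, log_target(y) - log_target(x))):
        x = y
    delta = x - mean          # Welford update of running mean/variance
    mean += delta / i
    m2 += delta * (x - mean)
    samples.append(x)

post_mean = sum(samples) / len(samples)
post_var = sum((s - post_mean) ** 2 for s in samples) / len(samples)
```

The chain's sample mean and variance should approach 0 and 1, the moments of the target.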

Set up travis-ci.org

@martinjrobins Jonathan Cooper suggested we set up this repo to have automated testing with Travis (travis-ci.org).
I had a look but it tells me I don't have the authority. Would you like to give this a go?

Non-linear optimizer functionality

Need a function/class that takes a model, some data, a set of parameters and some bounds and gives the best-fit parameters for that data
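A sketch of such an interface, with everything hypothetical: the "optimiser" below is a naive bounded random search standing in for a real method such as CMA-ES, and the model and data are toys.

```python
import random

# Hypothetical fit() interface: model, data, times, and bounds in;
# best-fit parameters out. The inner search is a naive random search.
def fit(model, data, times, bounds, iterations=20000, seed=0):
    rng = random.Random(seed)

    def error(p):
        sim = model(p, times)
        return sum((s - d) ** 2 for s, d in zip(sim, data))

    best_p, best_e = None, float('inf')
    for _ in range(iterations):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        e = error(p)
        if e < best_e:
            best_p, best_e = p, e
    return best_p

linear = lambda p, ts: [p[0] + p[1] * t for t in ts]
times = [0.0, 1.0, 2.0, 3.0]
data = [1.0, 1.5, 2.0, 2.5]          # generated with a=1, b=0.5
estimate = fit(linear, data, times, [(0.0, 2.0), (0.0, 1.0)])
```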

Add FFT-based score function

Martin wrote:

[C]an you create an efficient score function that depends on the distance between experiment and model in the frequency domain, rather than the time domain? I guess the score class can just take an FFT of the values when it's created and re-use this?
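A sketch of that cached-transform idea. A naive O(n^2) DFT keeps the example dependency-free (numpy.fft.fft would be used in practice), and the class name and data are made up:

```python
import cmath

# Naive O(n^2) DFT; numpy.fft.fft would replace this in practice.
def dft(values):
    n = len(values)
    return [sum(v * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, v in enumerate(values))
            for k in range(n)]

class FFTScore:
    def __init__(self, data):
        self._data_f = dft(data)     # transform cached once, as suggested

    def __call__(self, simulated):
        # Squared distance between spectra, not time-domain values
        return sum(abs(s - d) ** 2
                   for s, d in zip(dft(simulated), self._data_f))

data = [0.0, 1.0, 0.0, -1.0]
score = FFTScore(data)
```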

parallel CMA-ES

I think @MichaelClerx's changes to get CMA-ES working in the new infrastructure removed the parallel aspects of CMA-ES. These are still in there as a comment, so it should just be a matter of integrating them with the new code.

Add tools for repeated optimisations

The current cmaes method has some unused ipop code:

Once someone figures out a good way to get random samples in the parameter space we can either add an ipop setting to the CMAES class or rename the class IPOP_CMAES and create a wrapper called CMAES that disables it

Methods like IPOP_CMAES use multiple restarts from random positions in the search space to improve the chances of finding the optimum and reduce the chances of getting stuck.

We could add some code that does this automatically, maybe using the Boundaries class to generate new starting points or perhaps a Prior class.
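A sketch of that restart loop, with a toy local optimiser and a made-up multi-modal score; in PINTS the inner optimiser would be e.g. CMA-ES, and starting points would come from a Boundaries or Prior object:

```python
import random

# Toy local optimiser: accept-if-better hill climbing with Gaussian steps.
def local_optimise(f, x0, rng, iters=200, step=0.1):
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + rng.gauss(0.0, step)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

# Restart wrapper: draw starting points uniformly from the boundaries
# and keep the best result over all restarts.
def restarts(f, lower, upper, n_restarts=10, seed=2):
    rng = random.Random(seed)
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(lower, upper)
        x, fx = local_optimise(f, x0, rng)
        if best is None or fx < best[1]:
            best = (x, fx)
    return best

# Multi-modal toy score: global minimum at x = 2, local minimum near x = -2
f = lambda x: (x * x - 4) ** 2 + 0.5 * (x - 2) ** 2
x_best, f_best = restarts(f, -4.0, 4.0)
```

A single run started near -4 would get stuck in the local minimum; the restarts make finding the global minimum near x = 2 likely.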
