
fade's Introduction

FADE

A toolbox of ensemble-based data-assimilation algorithms for use with Firedrake.

Linux terminal installation

As a prerequisite, Firedrake must be installed alongside its dependencies; instructions can be found at http://firedrakeproject.org/download.html. The Firedrake virtualenv must then be activated; instructions for this can also be found at the link above.

Finally, some demos require Firedrake-MLMC (a side-package of http://firedrakeproject.org) to be installed. Instructions on how to do this can be found at https://github.com/firedrakeproject/firedrake-mlmc.

Then, to install firedrake_da, run the following commands in the terminal:

  1. git clone https://github.com/firedrake_da
  2. pip install -e ./firedrake_da

Documentation

To build the documentation, type

make html

while in the docs directory, or alternatively follow the Documentation Status badge to the hosted documentation.

Contact

For any enquiries, please contact: [email protected]

Author Details

Mr. Alastair Gregory, Research Postgraduate, Imperial College London

References and Acknowledgements

The C code for the Earth Mover's Distance computations in this package, used to create kernels for Firedrake, is from the following reference: Y. Rubner, C. Tomasi, and L. J. Guibas. The Earth Mover's Distance as a metric for image retrieval. International Journal of Computer Vision (IJCV), 2000.

I would like to thank Dr. Colin Cotter and the Firedrake team at Imperial College London for their help in designing and developing this open-source package.


fade's Issues

Making CoarseningLocalisation more efficient

  • Add an option to CoarseningLocalisation that accepts an ensemble of functions, carries out CoarseningLocalisation on each member, but creates only one function to inject to at the start of the loop, so that N functions are not wasted (see the sketch after this list).

  • Update the scripts that use CoarseningLocalisation, e.g. emd_kernel, to take list/tuple inputs and make them more efficient.
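
A minimal sketch, assuming a MeshHierarchy and DG0 fields, of the pattern proposed above: allocate a single coarse Function once and reuse it for every ensemble member, instead of creating N coarse Functions inside the loop. The names here are illustrative and not the CoarseningLocalisation API.

    from firedrake import *

    hierarchy = MeshHierarchy(UnitSquareMesh(4, 4), 2)
    V_fine = FunctionSpace(hierarchy[-1], "DG", 0)
    V_coarse = FunctionSpace(hierarchy[0], "DG", 0)

    ensemble = [Function(V_fine).assign(i) for i in range(5)]

    coarse_scratch = Function(V_coarse)  # allocated once, outside the loop
    for member in ensemble:
        inject(member, coarse_scratch)   # coarsen the member onto the reused field
        # ... localisation would act on coarse_scratch here before prolonging back ...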

Verification

The target is to build a sub-repository of verification tools for Firedrake ensemble forecasts:

  • Multidimensional Rank Histogram - landed in #23
  • CRPS (a sketch of the standard ensemble estimator follows this list)
  • Reliability Diagram
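
As a reference for the CRPS item above, here is a hedged numpy sketch of the standard ensemble CRPS estimator for a single scalar verification point, CRPS = mean_i |x_i - y| - 0.5 * mean_{i,j} |x_i - x_j|; the function name and arguments are illustrative, not part of the package.

    import numpy as np

    def crps(ensemble_values, observation):
        """Ensemble CRPS estimate at one verification point."""
        x = np.asarray(ensemble_values, dtype=float)
        spread_to_obs = np.mean(np.abs(x - observation))
        internal_spread = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
        return spread_to_obs - internal_spread

    print(crps([0.9, 1.1, 1.3], observation=1.0))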

Localisation scheme for Kalman filter

Design a localisation scheme for the covariance vector Functions in the Kalman updates implemented by kernels.

  • Use a similar localisation on the covariance vector to the one used in emd_kernel.py, possibly using the degenerate localisation scheme (i.e. not the coarsening one) in localisation.py.

  • Include sparse-matrix compatibility so that banded covariance matrices are handled efficiently (a tapering sketch follows this list).

  • Create runtime demos with different r_loc values (remember the default is no localisation, i.e. None) to measure the speed-up of the Kalman update.
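
An illustrative sketch (plain numpy/scipy, not the package's kernel code) of covariance localisation by tapering, as suggested above: covariance entries between points further apart than r_loc are zeroed, leaving a banded matrix that a sparse format stores efficiently.

    import numpy as np
    import scipy.sparse as sp

    def localise_covariance(C, coords, r_loc):
        """Taper a covariance matrix to zero beyond a localisation radius."""
        d = np.abs(coords[:, None] - coords[None, :])        # pairwise 1D distances
        taper = np.where(d <= r_loc, 1.0 - d / r_loc, 0.0)   # simple linear taper
        return sp.csr_matrix(C * taper)                      # banded -> sparse

    coords = np.linspace(0.0, 1.0, 6)
    C = np.exp(-np.abs(coords[:, None] - coords[None, :]) / 0.5)  # toy covariance
    C_loc = localise_covariance(C, coords, r_loc=0.4)
    print(C_loc.nnz, "non-zeros out of", C.size)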

Correct localisation in MLETPF

We need to correct the localisation arguments r_loc_c and r_loc_f in coupling.py.

  • Make only one argument, r_loc, since it is very important that the same localisation is used; let both the coarse and fine levels simply share that r_loc, and thus use the same r_loc for both estimator versions at each level - this is the only thing needed for consistency (a hypothetical sketch follows).
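
A hypothetical sketch of the proposed change (names are illustrative, not the coupling.py API): a single r_loc is passed down to both levels, so the coarse and fine estimators can never be localised differently.

    def coupled_transform(ensemble_c, ensemble_f, weights_c, weights_f,
                          transform, r_loc):
        # the same localisation radius is used for both level estimators
        X_c = transform(ensemble_c, weights_c, r_loc)
        X_f = transform(ensemble_f, weights_f, r_loc)
        return X_c, X_f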

Observation noise sum to evaluated function

  • Change the way the likelihood is calculated by making a function holding the constant measurement error variance at all data points and then using it in the likelihood.

  • Make a DG0 function with all cell centres set to R, then project this to the ensemble space and use it in the weight calculation. This is more of a weight-update procedure than a change to the observations (see the sketch after this list).

  • Change the demos to match this way of constructing observations.

  • Make it possible for the user to compute observations by simply summing the evaluated function at a coordinate and the measurement error, rather than working with basis coefficients.
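
A minimal sketch, assuming DG0 fields on a unit square mesh, of the proposal above: build a constant-variance DG0 Function, project it into the ensemble space and use it inside a Gaussian likelihood weight. Names and the weight formula are illustrative, not the fade API.

    from firedrake import *
    import numpy as np

    mesh = UnitSquareMesh(8, 8)
    V_obs = FunctionSpace(mesh, "DG", 0)   # observation / cell-centre space
    V_ens = FunctionSpace(mesh, "DG", 0)   # ensemble space (identical here for simplicity)

    R = 0.05                               # constant measurement error variance
    R_f = Function(V_obs).assign(R)        # every cell centre set to R
    R_ens = project(R_f, V_ens)            # projected onto the ensemble space

    def gaussian_weight(member, observation):
        """Unnormalised Gaussian likelihood weight for one ensemble member."""
        diff = member.dat.data_ro - observation.dat.data_ro
        return np.exp(-0.5 * np.sum(diff ** 2 / R_ens.dat.data_ro))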

Place ensembles from the kernel-transform dictionary into a vector function

  • Just like in the cost tensor computation, place all ensemble functions that go into the dictionary for the ensemble transform kernel into a single vector function (and do the same for the output functions), and change the string loops to fit these indices. This will create speed-ups in the kernels and in preallocating the output functions (see the sketch below).

This might mean not appending the ensembles to a dictionary at all - it turns out the Kalman update does not use the dictionary when all ensembles are stored as vector functions.
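
An illustrative sketch, assuming DG0 fields, of packing N ensemble members into one vector-valued Function as proposed above, instead of keeping a dictionary of N separate Functions.

    from firedrake import *

    mesh = UnitSquareMesh(8, 8)
    n = 4                                               # ensemble size
    V = FunctionSpace(mesh, "DG", 0)
    V_vec = VectorFunctionSpace(mesh, "DG", 0, dim=n)   # one component per member

    ensemble = [Function(V).assign(i) for i in range(n)]
    packed = Function(V_vec)
    for i, member in enumerate(ensemble):
        packed.dat.data[:, i] = member.dat.data_ro      # copy member i into component i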

Don't throw error when mesh isn't from hierarchy

  • Remove the error for the case where the user gives a mesh that is not from a hierarchy. Instead, set r_loc = 0 and do not carry on into CoarseningLocalisation. Possibly give a warning that this happened (see the sketch below).

The warning is not currently given.
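
A minimal sketch of the proposed behaviour; the predicate mesh_in_hierarchy is a hypothetical placeholder for whatever check fade uses to decide whether the mesh belongs to a MeshHierarchy.

    import warnings

    def effective_r_loc(mesh, r_loc, mesh_in_hierarchy):
        """Fall back to r_loc = 0 with a warning instead of raising an error."""
        if not mesh_in_hierarchy(mesh):
            warnings.warn("Mesh is not part of a MeshHierarchy; "
                          "setting r_loc = 0 and skipping CoarseningLocalisation.")
            return 0
        return r_loc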

Constant_Compatibility

To Do:

  • We need to make the ensemble_transform, weight_update, Observations and kalman_update functionalities compatible with Constants, so as to maximise what can be fed into the firedrake-mlmc package, especially EnsembleHierarchy. Basically, allow the data-assimilation techniques to be carried out on a Constant representing a scalar (see the sketch below).
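
An illustrative sketch (not the fade API) of one way to treat a Constant and a Function uniformly: expose the values of either as a flat numpy array so the same weight/transform arithmetic can act on both.

    import numpy as np
    from firedrake import Constant

    def state_as_array(state):
        """Return the values of a Constant, or the DoFs of a Function, as an array."""
        if isinstance(state, Constant):
            return np.atleast_1d(state.values())   # scalar Constant -> length-1 array
        return state.dat.data_ro                   # Function -> its DoF vector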

Changing path to EMD C files

Currently, any function that uses emd_kernel, such as ensemble_transform_update, needs to be run from inside fade/.

  • Change the path to the EMD kernel functions in emd_kernel.py so that it adapts to any user's path (see the sketch below).
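
A minimal sketch of the proposed fix: build the path to the EMD C source relative to the module file rather than the current working directory, so emd_kernel can be used from any location. The file name is illustrative.

    import os

    _EMD_DIR = os.path.dirname(os.path.abspath(__file__))

    def emd_source_path(filename="emd.c"):
        """Absolute path to an EMD C source file, independent of the cwd."""
        return os.path.join(_EMD_DIR, filename)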

Restructure Kernel Code

  • Move the update_Dictionary function into a separate script, possibly firedrake_da/utils.py, so that it can be imported from both the new kalman_kernel.py script (to be made) and the emd_kernel.py script.

Kalman Filter Redevelopment

We want to restructure the Kalman code so that the ensemble assimilation is carried out using kernel-based approaches, with complementary localisation and covariance tensors.

TODO:

  • Create C kernels that can carry out the Kalman transform (a numpy sketch of the update they would implement follows this list).

  • Create and use covariance tensor vectors, as in the cost tensor generation in ensemble_transform.py. Alter the covariance.py script accordingly.
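
As a reference for the kernel item above, a hedged numpy sketch of the textbook perturbed-observation ensemble Kalman update that such a kernel would apply at each degree of freedom (state observed directly, scalar observation error variance R); this is not the package's kernel code.

    import numpy as np

    def enkf_update(x, y, R, rng=None):
        """Perturbed-observation EnKF update for one directly observed component."""
        rng = rng or np.random.default_rng(0)
        x = np.asarray(x, dtype=float)
        P = np.var(x, ddof=1)                               # ensemble variance
        K = P / (P + R)                                     # Kalman gain
        y_perturbed = y + rng.normal(0.0, np.sqrt(R), x.shape)
        return x + K * (y_perturbed - x)                    # analysis ensemble

    print(enkf_update([0.8, 1.0, 1.2, 1.4], y=1.1, R=0.05))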
