aiida-testing's Introduction


aiida-testing

A pytest plugin to simplify testing of AiiDA plugins. This package implements two ways of running an AiiDA calculation in tests:

  • mock_code: Implements a caching layer at the level of the executable called by an AiiDA calculation. This exercises the input generation and output parsing, which is useful when testing calculation and parser plugins.
  • archive_cache: Implements automatic creation and loading of AiiDA archives, to enable AiiDA-level caching in tests. This circumvents the input generation / output parsing, making it suitable for testing higher-level workflows.

For more information, see the documentation.

aiida-testing's People

Contributors

broeder-j, chrisjsewell, danielmarchand, greschd, janssenhenning, ltalirz


aiida-testing's Issues

mock-code: allow passing custom hash function via callback

Unfortunately, there are simulation codes whose input files are not portable (e.g. because they require the absolute path to some data directory... no idea why).

For such cases, the current hash function will not work on CI: even when the data directory is provided, its absolute path will differ.

One might argue that it is really the simulation code that should be rewritten here, but unfortunately that is not always an option.
For such cases, it would be useful to be able to pass a custom hash function to the mock_code_factory fixture.
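
A minimal sketch of what this could look like. The hash_callback argument is the proposal, not part of the current API; the masking logic and the DATA_DIR constant are purely illustrative, and the other factory arguments follow the existing mock_code_factory signature:

import hashlib
import pathlib

import pytest

DATA_DIR = pathlib.Path(__file__).parent / 'data'  # illustrative

def strip_absolute_paths_hash(input_dir):
    """Hypothetical callback: hash the input files, masking absolute paths first."""
    md5 = hashlib.md5()
    for path in sorted(pathlib.Path(input_dir).rglob('*')):
        if path.is_file():
            # Replace occurrences of the (machine-specific) run directory path
            # so that the hash is identical locally and on CI.
            content = path.read_bytes().replace(str(input_dir).encode(), b'<INPUT_DIR>')
            md5.update(content)
    return md5.hexdigest()

@pytest.fixture
def mock_mycode(mock_code_factory):
    return mock_code_factory(
        label='mycode',
        data_dir_abspath=DATA_DIR,
        entry_point='mycode.calc',
        hash_callback=strip_absolute_paths_hash,  # proposed new argument
    )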

improve tests for mock-code

  • test that with --testing-config-action=generate, the config template is written after the tests
  • test that with --mock-regenerate-test-data, the test data is actually rewritten (so far we only test that it still runs)

Add version number to config

This will allow automatic migrations of the configuration and output directories.

The version number should track changes in the config and/or output format, not the package version.
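
For example, the config could gain a top-level key (the version key and its migration semantics are a proposal; the mock_code section is meant to follow the current format):

version: 1
mock_code:
  diff: /usr/bin/diff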

mock-code: Allow code paths to be relative to config file location

In aiida-fleur we would like to specify the executables used to generate mock-code test data relative to the location of the .aiida-testing-config.yml. Currently this is only possible with some hacks (see
https://github.com/JuDFTteam/aiida-fleur/blob/develop/.github/workflows/ci.yml#L136
for an example), where the paths are rewritten in CI to be absolute:

sed -i "s/\./${GITHUB_WORKSPACE//\//\\/}\/tests/g" .aiida-testing-config.yml
./run_all_cov.sh --local-exe-hdf5

We usually don't add compiled fleur executables to a global bin folder, since there are often multiple executables with different configurations on the same machine.
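
Conceptually, the feature would amount to something like this when the config is read (a sketch, not the actual implementation):

import pathlib

def resolve_code_path(path_str, config_file):
    """Interpret relative executable paths relative to the config file's directory."""
    path = pathlib.Path(path_str)
    if not path.is_absolute():
        path = pathlib.Path(config_file).parent / path
    return path.resolve()

With this, the config could contain e.g. fleur: ./tests/bin/fleur and work regardless of where pytest is invoked from.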

add tests + exception for MPI runs

Will merge #47 since it is needed by aiida-lsmo, but one should add:

  • a test of a run with withmpi=True (a minimal sketch follows after this list)
  • if possible, an exception/warning if more than one MPI process is detected (or implement a proper solution for it)
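
A minimal sketch of the first item, assuming a mock_diff_code fixture created via mock_code_factory (the fixture name and inputs are illustrative):

from aiida.engine import run_get_node

def test_run_withmpi(mock_diff_code):  # hypothetical fixture
    builder = mock_diff_code.get_builder()
    builder.metadata.options.withmpi = True
    builder.metadata.options.resources = {'num_machines': 1, 'num_mpiprocs_per_machine': 1}
    # ... set the remaining calculation inputs here ...
    _, node = run_get_node(builder)
    assert node.is_finished_ok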

consider name change

I am not familiar with the genesis of the aiida-testing name, but it is obviously highly generic and does not convey what functionality the plugin adds on top of what already exists in aiida-core.

Some thoughts: The plugin focuses on

  • tests of processes
  • tests that integrate AiiDA and the simulation software
  • tests that go from setting up calculation inputs through to the final result of the calculation/workflow

Name candidates could therefore be:

  1. aiida-process-tests
  2. aiida-integration-tests
  3. aiida-e2e-tests (e2e=end-to-end)

I would probably vote for option 3.

Add CI

CI should run:

  • tests
  • pre-commit
  • doc build (depends on #2)

suggest method for versioning of mocked codes

The usual approach to versioning codes in AiiDA is, at the moment, to encode the version both in the label and in the description of the code.

It seems to me the documentation of mock_code should include an example of a versioned code (besides diff). The easiest thing would be to simply follow this pattern. It might lead some people to point a code "cp2k-5.1" to an actual cp2k-6.1 binary on their system (because that's what they happen to have installed), but I still think that is better than having no versioning. Versioning is necessary for tests against multiple versions of a code, so we should start with good habits right away.

Small detail: Currently, the mock-code factory only allows setting the label, not the description, of a code. Also, both label and description can be set directly via the constructor rather than like this:

code.label = code_label

One could either add explicit label/description arguments or just forward the remaining **kwargs to the Code constructor, as sketched below.
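
A sketch of the **kwargs forwarding variant, assuming the aiida-core 1.x Code API; the factory internals shown here are illustrative, not the actual aiida-testing code:

from aiida.orm import Code

def _create_mock_code(entry_point, computer, executable_path, label, **kwargs):
    """Create the mock Code node, forwarding e.g. description=... to the constructor."""
    # Per the suggestion above, label/description are accepted by the constructor.
    return Code(
        input_plugin_name=entry_point,
        remote_computer_exec=(computer, executable_path),
        label=label,
        **kwargs,
    )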

P.S. Currently, the kwarg for the entry point is called entry_point in the factory and input_plugin_name in the code.
One could decide to harmonize this.

Config file detection

Currently, the .aiida-testing-config.yml config file is searched for in the CWD and all its parent directories. The recommended place to put it for an AiiDA plugin would be at the project root, so that it is always found regardless of where in the project the tests are executed.

Question to plugin developers: Does this work with your use case / plugin structure?

Add flag to force regenerating files

We should add a command-line flag to wipe the output directory before running the tests.

In the case of "uniquely labelled" content, maybe there should be an option to only regenerate it if the test was actually run.

Also useful would be a way to delete only outputs that were never touched during the test run.

allow '**' in ignore files

Feature request, nice to have.

It would be very nice if one could use shell-style glob patterns in the ignore_files list.
For example:

ignore_files=['cdn**'] # to ignore files with names cdn01, cdn02, cdn23, ...

(useful if a code produces an arbitrary number of files following a certain naming pattern)

Otherwise, thanks for your work! Currently everything is working nicely on Linux. On macOS I had an issue, which I have not yet managed to track down.

Fix / modernize install

As reported by @DanielMarchand in #36, the install can fail if fastentrypoints cannot be found.

This can be fixed by un-vendoring fastentrypoints, and introducing the [build-system] info in a pyproject.toml. While we're at it, we can also consider switching from setup.json with a "complex" setup.py, to setup.cfg with a minimal setup.py.
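
For reference, the [build-system] section in pyproject.toml would look roughly like this:

[build-system]
requires = ["setuptools>=40.8.0", "wheel"]
build-backend = "setuptools.build_meta"

With the build backend declared, pip sets up an isolated build environment itself, which removes the need for the vendored fastentrypoints.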

Example setup.py: https://github.com/Z2PackDev/TBmodels/blob/dev/setup.py
Here we actually fail gracefully if fastentrypoints isn't present. I think this is needed for running python setup.py {sdist,bdist_wheel} directly: because pip isn't involved in that case, the build isolation and environment setup don't work as expected.

Example pyproject.toml: https://github.com/Z2PackDev/TBmodels/blob/dev/pyproject.toml

Both these files can be used pretty much unaltered AFAICT, only setup.cfg would need adapting.

See #36 for additional discussion.

Store (optionally?) input files

The current implementation only stores the outputs of a calculation, and not the inputs. We should add support (maybe flag-enabled) to also keep the inputs.

mock-code: improve error message for missing test data

Here is an example of a traceback resulting from missing test data on CI.

The error displayed is that a node expected from the calculation was not created, but this can have many causes, which makes it unnecessarily hard to debug.

Of course, the calculation class itself could be more clever about parsing the outputs, but ideally we (= aiida-testing) would be able to communicate to pytest that the mock code encountered an input it did not yet know.

I'm not quite sure about the best way to accomplish this. One could perhaps monkey-patch the calculation class of the input plugin when setting up the mock Code instance, such that it performs an extra check after the original _prepare_for_submission runs; but perhaps there is a less invasive way?

Any ideas @greschd ?
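
A rough sketch of the monkey-patching idea (purely illustrative; _check_mock_code_has_test_data is a hypothetical helper, and how best to signal the error to pytest is still open):

def _patch_prepare_for_submission(calc_class):
    """Wrap _prepare_for_submission with an extra check on the generated inputs."""
    original = calc_class._prepare_for_submission

    def wrapper(self, folder):
        calc_info = original(self, folder)
        # Hypothetical: fail loudly if the mock executable has no test data
        # matching the inputs that were just written to `folder`.
        _check_mock_code_has_test_data(folder)
        return calc_info

    calc_class._prepare_for_submission = wrapper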

mock-code: allow ignoring files based on path

The current code for the ignore_files parameter allows ignoring files only based on their filename, not based on their path within the run directory:

import fnmatch
import os

for dirname, _, filenames in os.walk('.'):
    if dirname.startswith('./.aiida'):
        continue
    os.makedirs(os.path.join(res_dir, dirname), exist_ok=True)
    for filename in filenames:
        if any(fnmatch.fnmatch(filename, expr) for expr in ignore_files):
            ...  # snippet truncated: matching files are skipped, the rest are copied to res_dir

However, I have a use case where a code creates a bunch of folders and I simply want to ignore the content of some of these - e.g. I'd like to ignore 'VTK/*'

An ignore_paths approach seems generally more powerful to me, so I propose to add this.
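
A sketch of how the copying loop above could support this (ignore_paths is the proposed parameter):

import fnmatch
import os

for dirname, _, filenames in os.walk('.'):
    for filename in filenames:
        # Match against the path relative to the run directory, not just the filename.
        relpath = os.path.normpath(os.path.join(dirname, filename))
        if any(fnmatch.fnmatch(relpath, expr) for expr in ignore_paths):
            continue  # e.g. ignore_paths=['VTK/*'] skips everything under VTK/
        # ... copy the file as before ...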

Add sanity check to see that 'aiida-mock-code' executable runs

Add a "sanity check" session-scoped fixture (with autouse) that checks that the aiida-mock-codes executable works. For example, it could fail when pkg_resources detects an incompatibility in the requirements.

This probably also needs some change in the aiida-mock-codes executable itself, giving it a --sanity-check option or similar.
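
A minimal sketch of such a fixture, assuming the executable gains the --sanity-check option proposed above (which does not exist yet):

import subprocess

import pytest

@pytest.fixture(scope='session', autouse=True)
def mock_code_sanity_check():
    """Fail the whole test session early if the aiida-mock-code executable is broken."""
    try:
        subprocess.run(
            ['aiida-mock-code', '--sanity-check'],  # hypothetical option
            check=True,
            capture_output=True,
        )
    except (OSError, subprocess.CalledProcessError) as exc:
        pytest.fail(f'aiida-mock-code sanity check failed: {exc}')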

Add CI tests that install without -e

The current CI tests all install the package with "pip install -e .". This doesn't fully test the packaging, because it roughly corresponds to adding the directory to sys.path: if there is a mistake in the manifest file or in the listed packages, it will not be caught.

We should add a job that installs with pip install ., without -e.

Prompted by a regression introduced in #38, which is fixed by #40.

Test with schedulers and MPI

The code is currently tested with (and designed for) a direct, non-MPI calculation only. We should test if it works also with schedulers and MPI.

Enable running without a config file

Currently, the .aiida-testing-config.yml config file is committed to the repository. In general (and for plugins in particular), this somewhat defeats the purpose of the config file: it contains the parts that differ between systems and cannot be hardcoded in the tests.

A partial solution is to simply recommend not committing .aiida-testing-config.yml. However, for the purposes of CI it would be good to make all fixtures work without a config file (provided all caches are present); having to create a special "CI config file" instead would be quite cumbersome.

Add support for testing fail conditions of code (inside complex workflows)

It is a bit painful right now to test code that fails in unusual ways. The current caching assumes a one-to-one mapping between inputs and outputs, but we would like to test multiple fail conditions, e.g. due to node failure during computation, that all share the same input parameters. The only way this can be done now is to manually copy cached files and modify neutral parameters, e.g. the title in pw.x. @greschd @ltalirz
