
pytest-flakefinder's Introduction

pytest-flakefinder


Runs tests multiple times to expose flakiness.


This Pytest plugin was generated with Cookiecutter along with @hackebrot's Cookiecutter-pytest-plugin template.

Features

  • When enabled it will 'multiply' your tests so that they get run multiple times without having to restart pytest. This helps you find flakiness in your tests.
  • You can limit your flake runs to a particular timeout value.

Installation

pip install pytest-flakefinder

For best flake-finding combine with pytest-xdist:

pip install pytest-xdist

Usage

Flake Finding

Enable plugin for tests:

pytest --flake-finder

This will run every test the default number of times (50). Every test is run independently, and you can even use xdist to send tests to multiple processes.

To configure the number of runs:

pytest --flake-finder --flake-runs=runs
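For example, to run every test 10 times (10 is just an illustrative value):

pytest --flake-finder --flake-runs=10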

To find flakes in one test or a couple of tests, you can use pytest's built-in test selection.

Finding flakes in one test:

pytest -k test_maybe_flaky --flake-finder
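Here test_maybe_flaky stands in for any intermittently failing test. A minimal sketch of such a test (the random failure rate is purely illustrative):

import random

def test_maybe_flaky():
    # Illustrative only: fails roughly 5% of the time, so 50 repeated runs
    # are very likely to surface at least one failure.
    assert random.random() > 0.05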

When used with xdist the flake finder can expose many timing related flakes.
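Assuming pytest-xdist is installed, the two plugins can be combined directly; the worker count below is just an example:

pytest --flake-finder -k test_maybe_flaky -n 4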

Timing Out

When using flake-finder as part of a CI run, it can be useful to limit how long it will run.

Running with timeout:

pytest --flake-finder --flake-max-minutes=minutes

Tests started after the timeout will be skipped.
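For example, to skip any test runs that would start more than 30 minutes into the session (30 is just an illustrative value):

pytest --flake-finder --flake-max-minutes=30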

Contributing

Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.

Contributors

Issues

If you encounter any problems, please file an issue along with a detailed description.

pytest-flakefinder's People

Contributors

adaamz, and-semakin, euresti, muslimbeibytuly, nipunn1313, wwuck, zoltandominguez


pytest-flakefinder's Issues

PyPI

Hi,

could you please put your useful plugin on PyPI? It’s not hard and helps a lot.

Thanks!

Incompatible with pytest-xdist?

When running with pytest -n auto with pytest-xdist 3.1, tests are only being run once. I can try downgrading pytest-xdist, maybe, but figured I'd start an issue first and see whether others might be seeing similar behavior.

flakefinder fails to collect tests generated by `@pytest.mark.parametrize`

Long-time user of flakefinder here - thank you for providing this awesome workaround for the silly pytest limitation. I have run into this extension failing with a specific type of test. Please allow me to walk you through the issue:

Let's write a test that uses @pytest.mark.parametrize

$ cat << EOT >> test_1.py
import pytest
@pytest.mark.parametrize(
    "a, b",
    [(1, 2), (3, 4)],
)
def test_flake(a, b): pass
EOT
$ pytest --disable-warnings --collect-only -q test_1.py
test_1.py::test_flake[3-4]
test_1.py::test_flake[1-2]

2 tests collected in 1.31s

Let's run one of them:

$ pytest test_1.py::test_flake[1-2]
========================================================== test session starts ===========================================================
platform linux -- Python 3.8.15, pytest-7.2.0, pluggy-1.0.0
Using --randomly-seed=1277854280
rootdir: /mnt/nvme0/code/huggingface/m4-bs-rampup
plugins: randomly-3.12.0, forked-1.4.0, hydra-core-1.3.0, anyio-3.6.2, subtests-0.9.0, hypothesis-6.47.4, timeout-2.1.0, dash-2.7.1, xdist-3.1.0, instafail-0.4.2, flakefinder-1.0.0
collected 1 item

test_1.py .                                                                                                                        [100%]

=========================================================== 1 passed in 1.31s ============================================================

Let's use flakefinder:

$ pytest --flake-finder --flake-runs=5 test_1.py::test_flake[1-2]
========================================================== test session starts ===========================================================
platform linux -- Python 3.8.15, pytest-7.2.0, pluggy-1.0.0
Using --randomly-seed=3718677389
rootdir: /mnt/nvme0/code/huggingface/m4-bs-rampup
plugins: randomly-3.12.0, forked-1.4.0, hydra-core-1.3.0, anyio-3.6.2, subtests-0.9.0, hypothesis-6.47.4, timeout-2.1.0, dash-2.7.1, xdist-3.1.0, instafail-0.4.2, flakefinder-1.0.0
collected 0 items

========================================================= no tests ran in 1.29s ==========================================================
ERROR: not found: /mnt/nvme0/code/huggingface/m4-bs-rampup/test_1.py::test_flake[1-2]
(no name '/mnt/nvme0/code/huggingface/m4-bs-rampup/test_1.py::test_flake[1-2]' in any of [<Module test_1.py>])

This breaks because, I think, flakefinder internally uses parametrize:

metafunc.parametrize(
    argnames=fixture_name,
    argvalues=list(range(self.flake_runs)),
)

which clashes with the test's own parametrize.

Note that tests that use a unittest-style parameterized decorator have no problem, as it uses a different id format, like test_flake_1_2.

I tried to fix this, and it appears that if I immediately return from the overridden pytest_generate_tests, this starts working just fine, including the duplication:

$ pytest --flake-finder --flake-runs=5 test_1.py::test_flake[1-2]
========================================================== test session starts ===========================================================
platform linux -- Python 3.8.15, pytest-7.2.0, pluggy-1.0.0
Using --randomly-seed=1049050234
rootdir: /mnt/nvme0/code/huggingface/m4-bs-rampup
plugins: randomly-3.12.0, forked-1.4.0, hydra-core-1.3.0, anyio-3.6.2, subtests-0.9.0, hypothesis-6.47.4, timeout-2.1.0, dash-2.7.1, xdist-3.1.0, instafail-0.4.2, flakefinder-1.0.0
collected 1 item

test_1.py .....

================================================================= PASSES =================================================================
======================================================== short test summary info =========================================================
PASSED test_1.py::test_flake[1-2]
PASSED test_1.py::test_flake[1-2]
PASSED test_1.py::test_flake[1-2]
PASSED test_1.py::test_flake[1-2]
PASSED test_1.py::test_flake[1-2]
=========================================================== 5 passed in 1.28s ============================================================

But I'm not sure how, inside pytest_generate_tests, to tell whether a test has a @pytest.mark.parametrize decorator. I think the fix should be something like:

    @pytest.hookimpl(tryfirst=True)
    def pytest_generate_tests(self, metafunc):
        """For all true pytest tests use metafunc to add all the duplicates."""
        if metafunc.has_parametrize_decorator???:  # no idea how to test for this
            return

        fixture_name = "__flakefinder_{}".format(metafunc.function.__name__)
        metafunc.fixturenames.append(fixture_name)
        metafunc.parametrize(
            argnames=fixture_name,
            argvalues=list(range(self.flake_runs)),
        )
        metafunc.function._pytest_duplicated = True

Except for the weird collection report with negative values:

collected 2 items / 5 deselected / -3 selected
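One possible way to perform that check, as a sketch (it relies on pytest's Metafunc.definition and Node.get_closest_marker, and is not something flakefinder currently does), would be to replace the placeholder condition above with:

if metafunc.definition.get_closest_marker("parametrize") is not None:
    # The test function already carries its own @pytest.mark.parametrize;
    # leave it alone so the requested id still matches.
    return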

Also, if I don't select the specific sub-test, the original code works too:

$ pytest --flake-finder --flake-runs=5 test_1.py
========================================================== test session starts ===========================================================
platform linux -- Python 3.8.15, pytest-7.2.0, pluggy-1.0.0
Using --randomly-seed=3507953969
rootdir: /mnt/nvme0/code/huggingface/m4-bs-rampup
plugins: randomly-3.12.0, forked-1.4.0, hydra-core-1.3.0, anyio-3.6.2, subtests-0.9.0, hypothesis-6.47.4, timeout-2.1.0, dash-2.7.1, xdist-3.1.0, instafail-0.4.2, flakefinder-1.0.0
collected 2 items

test_1.py ..........

================================================================= PASSES =================================================================
======================================================== short test summary info =========================================================
PASSED test_1.py::test_flake[1-2]
PASSED test_1.py::test_flake[1-2]
PASSED test_1.py::test_flake[1-2]
PASSED test_1.py::test_flake[1-2]
PASSED test_1.py::test_flake[1-2]
PASSED test_1.py::test_flake[3-4]
PASSED test_1.py::test_flake[3-4]
PASSED test_1.py::test_flake[3-4]
PASSED test_1.py::test_flake[3-4]
PASSED test_1.py::test_flake[3-4]
=========================================================== 10 passed in 1.29s ===========================================================

But I need to repeat a specific sub-test.

And I can't use pytest --flake-finder --flake-runs=5 test_1.py -k 1-2 either, because I have other tests with the same params, so it would run those other tests as well.

The clash happens between the specific id requested on the command line:

test_1.py::test_flake[1-2]

and the id produced after the second parametrize:

test_1.py::test_flake[1-1-2]

and that's why they don't match.

Thank you.

Tests report wrong time

Running tests under tox reports the wrong elapsed time:

=========================================================== test session starts ===========================================================
platform darwin -- Python 2.7.10, pytest-2.8.0, py-1.4.30, pluggy-0.3.1
rootdir: /Users/david/src/pytest-flakefinder, inifile: 
plugins: flakefinder-0.1.0
collected 15 items 

tests/test_flakefinder.py ...............

======================================================= 15 passed in 964.02 seconds =======================================================

This probably happens because we monkeypatch time.time.
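A minimal sketch of the mechanism (the patching style and numbers are illustrative, not flakefinder's actual code): the reported duration is derived from time.time(), so a patched clock that jumps ahead, or is never restored, inflates the number:

import time

real_time = time.time
session_start = real_time()

# Illustrative patch: pretend each call to time.time() happens 900 seconds
# later than it really does (e.g. to exercise timeout logic in tests).
time.time = lambda: real_time() + 900.0

# ... the test session would run here ...

# A reporter-style "elapsed" calculation now includes the artificial offset,
# which is how a quick run can be reported as taking hundreds of seconds.
print("reported: %.2f seconds" % (time.time() - session_start))

time.time = real_time  # restore the real clock afterwards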
