
pytest-repeat's Introduction

pytest-repeat

pytest-repeat is a plugin for pytest that makes it easy to repeat a single test, or multiple tests, a specific number of times.


Requirements

You will need the following prerequisites in order to use pytest-repeat:

  • Python 3.7+ or PyPy3
  • pytest 4 or newer

Installation

To install pytest-repeat:

$ pip install pytest-repeat

Repeating a test

Use the --count command line option to specify how many times you want your test, or tests, to be run:

$ pytest --count=10 test_file.py

Each test collected by pytest will be run count times.

If you want to mark a test in your code to be repeated a number of times, you can use the @pytest.mark.repeat(count) decorator:

import pytest


@pytest.mark.repeat(3)
def test_repeat_decorator():
    pass

If you want to override the default test execution order, you can use the --repeat-scope command line option with one of the following values: session, module, class, or function (the default). It behaves like the scope of a pytest fixture.

function (the default) repeats each test count (or repeat) times before executing the next test. session repeats the whole test session: all collected tests are executed once, then all of them are executed again, and so on. class and module behave like session, but the repeated set of tests is the tests from the class or module, rather than all collected tests.
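The difference between the scopes comes down to how the repeated test items are ordered. Below is a minimal sketch of that reordering in plain Python; it only illustrates the documented behavior (the `reorder` helper and its `(group, name)` item representation are made up here, not pytest-repeat's actual implementation):

```python
from itertools import chain

def reorder(items, count, scope="function"):
    """Return test items in the order the given repeat scope would run them.

    items is a list of (group, name) tuples, where group stands in for the
    test's module or class. Illustration of documented behavior only.
    """
    if scope == "function":
        # Repeat each test count times before moving on to the next one.
        return list(chain.from_iterable([it] * count for it in items))
    if scope == "session":
        # Repeat the whole collected session count times.
        return items * count
    # class / module: repeat each group's tests as a block.
    ordered = []
    groups = []
    for group, _ in items:
        if group not in groups:
            groups.append(group)
    for group in groups:
        block = [it for it in items if it[0] == group]
        ordered.extend(block * count)
    return ordered

tests = [("m1", "test_a"), ("m1", "test_b"), ("m2", "test_c")]
print(reorder(tests, 2, "function"))  # a, a, b, b, c, c
print(reorder(tests, 2, "session"))   # a, b, c, a, b, c
print(reorder(tests, 2, "module"))    # a, b, a, b, c, c
```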

Repeating a test until failure

If you are trying to diagnose an intermittent failure, it can be useful to run the same test over and over again until it fails. You can use pytest's -x option in conjunction with pytest-repeat to force the test runner to stop at the first failure. For example:

$ pytest --count=1000 -x test_file.py

This will attempt to run test_file.py 1000 times, but will stop as soon as a failure occurs.

UnitTest Style Tests

Unfortunately, pytest-repeat is not able to work with unittest.TestCase test classes. These tests will always run once, regardless of --count, and show a warning.


pytest-repeat's People

Contributors

bluetech, blueyed, bobsilverberg, cclauss, davehunt, fuhrysteve, gdyuldin, hugovk, jamseams, jayvdb, nicoddemus, okken, peque, ronnypfannschmidt, the-compiler, tomviner, tucked


pytest-repeat's Issues

how to run test with orders in pytest-repeat

Here is my scenario:

@pytest.mark.order1
def test_module_A():
    print("1.1 test_module_A...")

@pytest.mark.order2
def test_module_B():
    print("1.2 test_module_B...")

@pytest.mark.order3
def test_module_C():
    print("1.3 test_module_C...")

When I run the tests with the following command: pytest -vv --count=2
the results are:

test_module_A
test_module_A
test_module_B
test_module_B
test_module_C
test_module_C

but the result I expect is:

test_module_A
test_module_B
test_module_C
test_module_A
test_module_B
test_module_C

any suggestions?

notes/question on release process

I just wanted to check to see if I could do a release, so I just pushed v0.9.2

Process I used:

  • Update Changelog with description of new tag, v0.9.2. Push to a forked branch. Merge to primary repo.
  • Create release v0.9.2 and with it create the new tag, v0.9.2 through GitHub web interface
  • On local machine, git fetch upstream
  • pip install twine build
  • rm -fr dist/*
  • python -m build
  • twine upload dist/*

@nicoddemus (or anyone else who cares), does that seem about right?

I wanted to look into having a GitHub action detect a new tag and publish, but that will require access to PyPI "manage" feature of pytest-repeat, which I don't have.
I'm ok with manual publishing. I just want to document it.
Please let me know if I missed a step.

When the project switches to hatch or flit, the process will change. But I do like the way the version comes from a git tag, so I don't want to change to another workflow unless/until that's supported. I think hatch allows it. Not sure about flit.

Does not work with parametrized nodeids

pytest tests/test_database.py::TestDatabaseFixtures::test_access\[db\] works, but adding --count 10 fails:

collecting 0 items
ERROR: not found: …/Vcs/pytest-django/tests/test_database.py::TestDatabaseFixtures::test_access[db]
(no name '…/Vcs/pytest-django/tests/test_database.py::TestDatabaseFixtures::test_access[db]' in any of [<Instance '()'>])

pytest version 3.6.3.dev5+gb7b9c54d

Repeat decorator

Is it currently possible to mark a test with a decorator to be run multiple times?

For example:

@pytest.repeat(10)
def test_repeated_10():
    assert True

So when executing pytest only the tests that are marked with the decorator are run multiple times.

Multiplying tests should happen after -k

When used with -k to select only some specific tests, pytest still collects all tests, multiplied by the number passed to --count.

Therefore pytest --count=120 -x -k test_foo will collect the number of total tests times 120, which results in a lot of processing and memory usage.

I can imagine that pytest-repeat could be improved in this regard to only multiply the collected tests after -k was applied.

The workaround here is to specify the file containing the test, but that still multiplies the tests contained in this file.

platform linux -- Python 3.6.1, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
cachedir: .cache
Django settings: velodrome.settings (from environment variable)
plugins: testmon-0.9.4, repeat-0.4.1, profiling-1.2.6, pdb-0.2.0, mock-1.6.0, django-3.1.2, catchlog-1.2.2, asyncio-0.5.0, pylama-7.3.3

Feature request: class level repeat

I have a class whose tests depend on prior results:
for class C, tests S1, S2, S3 must run in order.

Putting the pytest.mark.repeat on the class doesn't result in repeating the class, but in repeating each test N times.
With '@pytest.mark.repeat(3)', the result is
C.S1, C.S1, C.S1, C.S2, C.S2, C.S2, C.S3, C.S3, C.S3.
What I need is:
C.S1, C.S2, C.S3, C.S1, C.S2, C.S3,...

What problem does pytest-repeat solve?

I've read / heard about this plugin a couple of times already, but I don't understand what it is good for. Why would you want to repeat a unit test several times? Which type of problem does it solve?

I would appreciate a small example 🙏

RemovedInPytest4Warning warning after testing completes

OS version: Ubuntu 18.04.1 LTS
Python version: 3.6.6
pytest version: 3.8.1
pytest-repeat version: 0.7.0

Warning:

/usr/local/lib/python3.6/dist-packages/pytest_repeat.py:56: RemovedInPytest4Warning: MarkInfo objects are deprecated as they contain merged marks which are hard to deal with correctly.
Please use node.get_closest_marker(name) or node.iter_markers(name).
Docs: https://docs.pytest.org/en/latest/mark.html#updating-code
  count = int(metafunc.function.repeat.args[0])

--count only executing once

Any idea why my test only executes once with --count=1000?

 pytest --count=1000  test/preprocessing/test_transforms.py 
====================================================================================== test session starts =======================================================================================
platform linux -- Python 3.11.5, pytest-7.4.2, pluggy-1.3.0
rootdir: /home/dani/source/repos/sc-api
configfile: pyproject.toml
plugins: repeat-0.9.3
collected 1 item                                                                                                                                                                               

test/preprocessing/test_transforms.py::TestTransforms::test_AudioEncoder PASSED                                                                                                            [  100%]
======================================================================================= 11 passed in 0.52s =======================================================================================

I'd like to help with this project

Hi @pytest-dev/pytest-repeat-admin,

I was looking through this project recently and noticed it seems to need a bit of attention.
I've filed a few issues, and also noticed that many of the current issues are old and could probably be closed.

I'd like to contribute to the project with, at the very least

  • tagging/labelling issues
  • closing some issues
  • responding to issues

But also, I'd like to work on:

  • resolving/implementing some of the issues/suggestions I've submitted.

Please let me know if that'd be possible.

Repeat only on failure

I'd like to repeat a test until it succeeds (waiting for some condition before continuing with other tests). An option like --repeat-on=failure would be great, with a default of success, which seems to be the current behaviour. Does this make sense?

Option to retry N times until test passes

Perhaps I first misfiled this as a feature request on pytest itself: pytest-dev/pytest#11508

What's the problem this feature will solve?

Repeating a test after success increases the probability of the whole test run failing. This is what pytest-repeat does now.

There are situations where the cost of a false positive failure is high and the cost of fixing a particular failing test (to be race free) is even higher (due to lack of human resources).

A good solution, it seems, is to repeat a test after failure, decreasing the probability of a false positive failure.

Describe the solution you'd like

ctest has an option like this:

  --repeat until-pass:<n>      = Allow each test to run up to <n> times in order to pass

It could be good for pytest to support something similar to help with 'probabilistic' tests.

For example, a workflow could run pytest with something like --repeat-until-pass 3, greatly lowering the probability of false positive test failures.
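Outside of plugin support, the requested behavior can be approximated with a plain retry decorator. A minimal sketch follows; the `retry_until_pass` name and its `attempts` parameter are invented for illustration, and unlike a real plugin (e.g. pytest-rerunfailures) this does not integrate with pytest's reporting:

```python
import functools

def retry_until_pass(attempts=3):
    """Re-run a flaky callable up to `attempts` times, raising only if
    every attempt fails. Illustrative only; real retry plugins hook into
    pytest's own run/report machinery instead."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(attempts):
                try:
                    return func(*args, **kwargs)
                except AssertionError as exc:
                    last_error = exc
            raise last_error
        return wrapper
    return decorator

calls = []

@retry_until_pass(attempts=3)
def flaky():
    calls.append(1)
    # Fails on the first call, passes on the second.
    assert len(calls) >= 2

flaky()  # overall pass: one failed attempt, then one successful attempt
```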

Organize pytest arguments into a group

Pytest arguments are currently added to the general custom arguments group via parser.addoption.

They could be organized into a pytest-repeat group instead, so it's clear which options belong to this plugin:

grouped_options = parser.getgroup("pytest-repeat")
grouped_options.addoption(...)

Warnings not listed in https://docs.pytest.org/en/latest/warnings.html

I am getting the following warnings, which are not listed at the link below.

tests/test_functions.py::TestFun::test_add1
/Users/xyz/.local/share/virtualenvs/mathlib_abc-WoXJe7Bq/lib/python3.7/site-packages/pytest_repeat.py:45: UserWarning: Repeating unittest class tests not supported
"Repeating unittest class tests not supported")

tests/test_functions.py::TestFun::test_add2
/Users/xyz/.local/share/virtualenvs/mathlib_abc-WoXJe7Bq/lib/python3.7/site-packages/pytest_repeat.py:45: UserWarning: Repeating unittest class tests not supported
"Repeating unittest class tests not supported")

tests/test_functions.py::TestFun::test_mul
/Users/xyz/.local/share/virtualenvs/mathlib_abc-WoXJe7Bq/lib/python3.7/site-packages/pytest_repeat.py:45: UserWarning: Repeating unittest class tests not supported
"Repeating unittest class tests not supported")

-- Docs: https://docs.pytest.org/en/latest/warnings.html

Running pytest-repeat together with pytest-randomly does not give new random seed

Would it be possible to run pytest-repeat together with pytest-randomly and get a new random seed for each session?

Running with --repeat-scope=session unfortunately did not trigger a new randomly seed.

$ pytest tests/hdl/test_signal_decoding_tb.py::test_fpga_random_transactions -lsx --repeat-scope=session --count=2
================================ test session starts =================================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
Using --randomly-seed=2806041905
rootdir: /home/user
plugins: randomly-3.12.0, repeat-0.9.3, xdist-3.1.0, cocotb-test-0.2.2
collected 2 items

tests/hdl/test_signal_decoding_tb.py::test_fpga_random_transactions[1-2] PASSED
tests/hdl/test_signal_decoding_tb.py::test_fpga_random_transactions[2-2] PASSED

================================= 2 passed in 0.02s ==================================

Thanks for a great plugin!

Using pytest.mark.repeat with pytest parametrize

I have been trying to repeat only one of the parametrized tests through pytest.param, but I have not been able to get anywhere. Is this a known limitation of the pytest-repeat plugin?

@pytest.mark.parametrize("name", [
    pytest.param("John", marks=pytest.mark.special),
    pytest.param("Ted", marks=pytest.mark.special),
    pytest.param("Ben", marks=pytest.mark.repeat(10)),
])
def test_print_name(name):
    print(name)

Not possible to disable repetitions for a certain test case

It is not possible to disable repetitions for a certain test case: a TypeError exception is raised if the repeat mark is set to 1. Limiting it to 2 works, but 1 causes a failure. Here is an example:

@pytest.mark.repeat(1)
def test_heavy():
    pass

def test_light():
    pass

py.test --count=10 fails with:

TypeError: issubclass() arg 1 must be a class

0.9.1: pytest is failing

I'm trying to package your module as an rpm package, so I'm using the typical build, install, and test cycle for building a package from a non-root account:

  • "setup.py build"
  • "setup.py install --root </install/prefix>"
  • "pytest" with PYTHONPATH pointing to sitearch and sitelib inside </install/prefix>

May I ask for help, because a few units are failing:

+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-repeat-0.9.1-5.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-repeat-0.9.1-5.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -p no:randomly
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.11, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1
plugins: repeat-0.9.1, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, aspectlib-1.5.2, toolbox-0.5, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, xprocess-0.18.1, black-0.3.12, anyio-3.3.0, asyncio-0.15.1, trio-0.7.0, httpbin-1.0.0, subtests-0.5.0, isort-2.0.0, hypothesis-6.14.6, mock-3.6.1, profiling-1.7.0, Faker-8.12.1
collected 16 items

test_repeat.py .FFFFFFF.FFFFFF.                                                                                                                                      [100%]

================================================================================= FAILURES =================================================================================
________________________________________________________________________ TestRepeat.test_can_repeat ________________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a258b4a90>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_can_repeat0')>

    def test_can_repeat(self, testdir):
        testdir.makepyfile("""
            def test_repeat():
                pass
        """)
        result = testdir.runpytest('--count', '2')
>       result.stdout.fnmatch_lines(['*2 passed*'])
E       Failed: remains unmatched: '*2 passed*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:31: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_can_repeat0

___________________________________________________________ TestRepeat.test_mark_repeat_decorator_is_registered ____________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1cf68d30>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_mark_repeat_decorator_is_registered0')>

    def test_mark_repeat_decorator_is_registered(self, testdir):
        result = testdir.runpytest('--markers')
>       result.stdout.fnmatch_lines([
            '@pytest.mark.repeat(n): run the given test function `n` times.'])
E       Failed: nomatch: '@pytest.mark.repeat(n): run the given test function `n` times.'
E           and: '@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/stable/warnings.html#pytest-mark-filterwarnings '
E           and: ''
E           and: '@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.'
E           and: ''
E           and: "@pytest.mark.skipif(condition, ..., *, reason=...): skip the given test function if any of the conditions evaluate to True. Example: skipif(sys.platform == 'win32') skips the test if we are on the win32 platform. See https://docs.pytest.org/en/stable/reference.html#pytest-mark-skipif"
E           and: ''
E           and: "@pytest.mark.xfail(condition, ..., *, reason=..., run=True, raises=None, strict=xfail_strict): mark the test function as an expected failure if any of the conditions evaluate to True. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/stable/reference.html#pytest-mark-xfail"
E           and: ''
E           and: "@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/stable/parametrize.html for more info and examples."
E           and: ''
E           and: '@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/stable/fixture.html#usefixtures '
E           and: ''
E           and: '@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.'
E           and: ''
E           and: '@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.'
E           and: ''
E       remains unmatched: '@pytest.mark.repeat(n): run the given test function `n` times.'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:36: Failed
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/stable/warnings.html#pytest-mark-filterwarnings

@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.

@pytest.mark.skipif(condition, ..., *, reason=...): skip the given test function if any of the conditions evaluate to True. Example: skipif(sys.platform == 'win32') skips the test if we are on the win32 platform. See https://docs.pytest.org/en/stable/reference.html#pytest-mark-skipif

@pytest.mark.xfail(condition, ..., *, reason=..., run=True, raises=None, strict=xfail_strict): mark the test function as an expected failure if any of the conditions evaluate to True. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/stable/reference.html#pytest-mark-xfail

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/stable/parametrize.html for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/stable/fixture.html#usefixtures

@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.

@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.

__________________________________________________________________ TestRepeat.test_mark_repeat_decorator ___________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1cf54550>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_mark_repeat_decorator0')>

    def test_mark_repeat_decorator(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.mark.repeat(3)
            def test_mark_repeat_decorator():
                pass
        """)
        result = testdir.runpytest()
>       result.stdout.fnmatch_lines(['*3 passed*'])
E       Failed: nomatch: '*3 passed*'
E           and: '============================= test session starts =============================='
E           and: 'platform linux -- Python 3.8.11, pytest-6.2.4, py-1.10.0, pluggy-0.13.1'
E           and: 'rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_mark_repeat_decorator0'
E           and: 'collected 1 item'
E           and: ''
E           and: 'test_mark_repeat_decorator.py '
E           and: 'INTERNALERROR> Traceback (most recent call last):'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 269, in wrap_session'
E           and: 'INTERNALERROR>     session.exitstatus = doit(config, session) or 0'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 323, in _main'
E           and: 'INTERNALERROR>     config.hook.pytest_runtestloop(session=session)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__'
E           and: 'INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec'
E           and: 'INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 337, in traced_hookexec'
E           and: 'INTERNALERROR>     return outcome.get_result()'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result'
E           and: 'INTERNALERROR>     raise ex[1].with_traceback(ex[2])'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 52, in from_call'
E           and: 'INTERNALERROR>     result = func()'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 335, in <lambda>'
E           and: 'INTERNALERROR>     outcome = _Result.from_call(lambda: oldcall(hook, hook_impls, kwargs))'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>'
E           and: 'INTERNALERROR>     self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall('
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall'
E           and: 'INTERNALERROR>     return outcome.get_result()'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result'
E           and: 'INTERNALERROR>     raise ex[1].with_traceback(ex[2])'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall'
E           and: 'INTERNALERROR>     res = hook_impl.function(*args)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 348, in pytest_runtestloop'
E           and: 'INTERNALERROR>     item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__'
E           and: 'INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec'
E           and: 'INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 337, in traced_hookexec'
E           and: 'INTERNALERROR>     return outcome.get_result()'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result'
E           and: 'INTERNALERROR>     raise ex[1].with_traceback(ex[2])'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 52, in from_call'
E           and: 'INTERNALERROR>     result = func()'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 335, in <lambda>'
E           and: 'INTERNALERROR>     outcome = _Result.from_call(lambda: oldcall(hook, hook_impls, kwargs))'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>'
E           and: 'INTERNALERROR>     self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall('
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall'
E           and: 'INTERNALERROR>     return outcome.get_result()'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result'
E           and: 'INTERNALERROR>     raise ex[1].with_traceback(ex[2])'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall'
E           and: 'INTERNALERROR>     res = hook_impl.function(*args)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/runner.py", line 109, in pytest_runtest_protocol'
E           and: 'INTERNALERROR>     runtestprotocol(item, nextitem=nextitem)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/runner.py", line 120, in runtestprotocol'
E           and: 'INTERNALERROR>     rep = call_and_report(item, "setup", log)'
E           and: 'INTERNALERROR>   File "/usr/lib/python3.8/site-packages/flaky/flaky_pytest_plugin.py", line 134, in call_and_report'
E           and: 'INTERNALERROR>     self._call_infos[item][when] = call'
E           and: 'INTERNALERROR> KeyError: <Function test_mark_repeat_decorator>'
E           and: ''
E           and: '============================ no tests ran in 0.01s ============================='
E       remains unmatched: '*3 passed*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:48: Failed
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.11, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_mark_repeat_decorator0
collected 1 item

test_mark_repeat_decorator.py
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 269, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 323, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 337, in traced_hookexec
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 52, in from_call
INTERNALERROR>     result = func()
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 335, in <lambda>
INTERNALERROR>     outcome = _Result.from_call(lambda: oldcall(hook, hook_impls, kwargs))
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR>     self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/main.py", line 348, in pytest_runtestloop
INTERNALERROR>     item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 337, in traced_hookexec
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 52, in from_call
INTERNALERROR>     result = func()
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 335, in <lambda>
INTERNALERROR>     outcome = _Result.from_call(lambda: oldcall(hook, hook_impls, kwargs))
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR>     self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/runner.py", line 109, in pytest_runtest_protocol
INTERNALERROR>     runtestprotocol(item, nextitem=nextitem)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/_pytest/runner.py", line 120, in runtestprotocol
INTERNALERROR>     rep = call_and_report(item, "setup", log)
INTERNALERROR>   File "/usr/lib/python3.8/site-packages/flaky/flaky_pytest_plugin.py", line 134, in call_and_report
INTERNALERROR>     self._call_infos[item][when] = call
INTERNALERROR> KeyError: <Function test_mark_repeat_decorator>

============================ no tests ran in 0.01s =============================
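The `KeyError` above is raised in flaky's `call_and_report` wrapper, which assigns into `self._call_infos[item][when]` and so assumes every item was pre-registered in `_call_infos`; the repeat-decorated item apparently never got an entry. As an illustration only (the `call_infos`/`record_call` names here are made up, not flaky's actual internals), the defensive pattern that avoids this class of failure looks like:

```python
# Illustrative two-level dict indexed by (item, phase), mirroring the shape
# of the failing assignment `self._call_infos[item][when] = call`.
call_infos = {}

def record_call(item, when, call):
    # setdefault guards against items that were never pre-registered,
    # which is the condition that raised KeyError in the traceback above.
    call_infos.setdefault(item, {})[when] = call

record_call("test_mark_repeat_decorator", "setup", "call-info")
```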
____________________________________________________________ TestRepeat.test_mark_repeat_decorator_repeat_once _____________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1ce9c220>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_mark_repeat_decorator_repeat_once0')>

    def test_mark_repeat_decorator_repeat_once(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.mark.repeat(1)
            def test_mark_repeat_decorator_repeat_once():
                pass
        """)
        result = testdir.runpytest('--count', '10')
>       result.stdout.fnmatch_lines(['*1 passed*'])
E       Failed: remains unmatched: '*1 passed*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:59: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_mark_repeat_decorator_repeat_once0
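`unrecognized arguments: --count` indicates that the pytest invoked inside `testdir.runpytest` never ran pytest-repeat's `pytest_addoption` hook, i.e. the plugin was not importable/loaded in the environment of these sub-runs, so the option was never registered. The mechanism can be sketched with plain `argparse` (illustrative; pytest's real option parser is more involved):

```python
import argparse

# Before the plugin's pytest_addoption hook runs, "--count" is unknown:
parser = argparse.ArgumentParser()
known, unknown = parser.parse_known_args(["--count", "2"])
assert unknown == ["--count", "2"]

# Registering the option (what pytest-repeat's hook does) makes it parse:
parser.add_argument("--count")
args = parser.parse_args(["--count", "2"])
```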

_______________________________________________________________________ TestRepeat.test_parametrize ________________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1ce9cc70>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_parametrize0')>

    def test_parametrize(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.mark.parametrize('x', ['a', 'b', 'c'])
            def test_repeat(x):
                pass
        """)
        result = testdir.runpytest('-v', '--count', '2')
>       result.stdout.fnmatch_lines([
            '*test_parametrize.py::test_repeat[[]a-1-2[]] PASSED*',
            '*test_parametrize.py::test_repeat[[]a-2-2[]] PASSED*',
            '*test_parametrize.py::test_repeat[[]b-1-2[]] PASSED*',
            '*test_parametrize.py::test_repeat[[]b-2-2[]] PASSED*',
            '*test_parametrize.py::test_repeat[[]c-1-2[]] PASSED*',
            '*test_parametrize.py::test_repeat[[]c-2-2[]] PASSED*',
            '*6 passed*',
        ])
E       Failed: remains unmatched: '*test_parametrize.py::test_repeat[[]a-1-2[]] PASSED*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:70: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_parametrize0
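For reference, the odd-looking `[[]` / `[]]` sequences in the expected lines are not typos: `fnmatch_lines` matches with `fnmatch`-style patterns, where `[` opens a character class, so literal brackets in parametrized test IDs must be escaped as single-character classes. A quick self-contained check:

```python
from fnmatch import fnmatch

# "[[]" is a character class matching a literal "[",
# and "[]]" is a character class matching a literal "]".
line = "test_parametrize.py::test_repeat[a-1-2] PASSED"
pattern = "*test_parametrize.py::test_repeat[[]a-1-2[]] PASSED*"
matched = fnmatch(line, pattern)
```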

___________________________________________________________________ TestRepeat.test_parametrized_fixture ___________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1cd81bb0>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_parametrized_fixture0')>

    def test_parametrized_fixture(self, testdir):
        testdir.makepyfile("""
            import pytest
            @pytest.fixture(params=['a', 'b', 'c'])
            def parametrized_fixture(request):
                return request.param

            def test_repeat(parametrized_fixture):
                pass
        """)
        result = testdir.runpytest('--count', '2')
>       result.stdout.fnmatch_lines(['*6 passed*'])
E       Failed: remains unmatched: '*6 passed*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:92: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_parametrized_fixture0

_______________________________________________________________________ TestRepeat.test_step_number ________________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1cd76280>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_step_number0')>

    def test_step_number(self, testdir):
        testdir.makepyfile("""
            import pytest
            expected_steps = iter(range(5))
            def test_repeat(__pytest_repeat_step_number):
                assert next(expected_steps) == __pytest_repeat_step_number
                if __pytest_repeat_step_number == 4:
                    assert not list(expected_steps)
        """)
        result = testdir.runpytest('-v', '--count', '5')
>       result.stdout.fnmatch_lines([
            '*test_step_number.py::test_repeat[[]1-5[]] PASSED*',
            '*test_step_number.py::test_repeat[[]2-5[]] PASSED*',
            '*test_step_number.py::test_repeat[[]3-5[]] PASSED*',
            '*test_step_number.py::test_repeat[[]4-5[]] PASSED*',
            '*test_step_number.py::test_repeat[[]5-5[]] PASSED*',
            '*5 passed*',
        ])
E       Failed: remains unmatched: '*test_step_number.py::test_repeat[[]1-5[]] PASSED*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:105: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_step_number0

______________________________________________________________________ TestRepeat.test_unittest_test _______________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1ccf7190>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_unittest_test0')>

    def test_unittest_test(self, testdir):
        testdir.makepyfile("""
            from unittest import TestCase

            class ClassStyleTest(TestCase):
                def test_this(self):
                    assert 1
        """)
        result = testdir.runpytest('-v', '--count', '2')
>       result.stdout.fnmatch_lines([
            '*test_unittest_test.py::ClassStyleTest::test_this PASSED*',
            '*1 passed*',
        ])
E       Failed: remains unmatched: '*test_unittest_test.py::ClassStyleTest::test_this PASSED*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:132: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_unittest_test0

__________________________________________________________________ TestRepeat.test_scope[session-lines0] ___________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1cc88be0>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_scope0')>, scope = 'session'
lines = ['*test_1.py::test_repeat1[[]1-2[]] PASSED*', '*test_1.py::test_repeat2[[]1-2[]] PASSED*', '*test_2.py::test_repeat3[[...*test_3.py::TestRepeat1::test_repeat5[[]1-2[]] PASSED*', '*test_3.py::TestRepeat1::test_repeat6[[]1-2[]] PASSED*', ...]

    @pytest.mark.parametrize(['scope', 'lines'], [
        ('session', [
            '*test_1.py::test_repeat1[[]1-2[]] PASSED*',
            '*test_1.py::test_repeat2[[]1-2[]] PASSED*',
            '*test_2.py::test_repeat3[[]1-2[]] PASSED*',
            '*test_2.py::test_repeat4[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat5[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat6[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat7[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat8[[]1-2[]] PASSED*',
            '*test_1.py::test_repeat1[[]2-2[]] PASSED*',
            '*test_1.py::test_repeat2[[]2-2[]] PASSED*',
            '*test_2.py::test_repeat3[[]2-2[]] PASSED*',
            '*test_2.py::test_repeat4[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat5[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat6[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat7[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat8[[]2-2[]] PASSED*',
            '*16 passed*',
        ]),
        ('module', [
            '*test_1.py::test_repeat1[[]1-2[]] PASSED*',
            '*test_1.py::test_repeat2[[]1-2[]] PASSED*',
            '*test_1.py::test_repeat1[[]2-2[]] PASSED*',
            '*test_1.py::test_repeat2[[]2-2[]] PASSED*',
            '*test_2.py::test_repeat3[[]1-2[]] PASSED*',
            '*test_2.py::test_repeat4[[]1-2[]] PASSED*',
            '*test_2.py::test_repeat3[[]2-2[]] PASSED*',
            '*test_2.py::test_repeat4[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat5[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat6[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat7[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat8[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat5[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat6[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat7[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat8[[]2-2[]] PASSED*',
            '*16 passed*',
        ]),
        ('class', [
            '*test_1.py::test_repeat1[[]1-2[]] PASSED*',
            '*test_1.py::test_repeat2[[]1-2[]] PASSED*',
            '*test_1.py::test_repeat1[[]2-2[]] PASSED*',
            '*test_1.py::test_repeat2[[]2-2[]] PASSED*',
            '*test_2.py::test_repeat3[[]1-2[]] PASSED*',
            '*test_2.py::test_repeat4[[]1-2[]] PASSED*',
            '*test_2.py::test_repeat3[[]2-2[]] PASSED*',
            '*test_2.py::test_repeat4[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat5[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat6[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat5[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat6[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat7[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat8[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat7[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat8[[]2-2[]] PASSED*',
            '*16 passed*',
        ]),
        ('function', [
            '*test_1.py::test_repeat1[[]1-2[]] PASSED*',
            '*test_1.py::test_repeat1[[]2-2[]] PASSED*',
            '*test_1.py::test_repeat2[[]1-2[]] PASSED*',
            '*test_1.py::test_repeat2[[]2-2[]] PASSED*',
            '*test_2.py::test_repeat3[[]1-2[]] PASSED*',
            '*test_2.py::test_repeat3[[]2-2[]] PASSED*',
            '*test_2.py::test_repeat4[[]1-2[]] PASSED*',
            '*test_2.py::test_repeat4[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat5[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat5[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat6[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat1::test_repeat6[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat7[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat7[[]2-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat8[[]1-2[]] PASSED*',
            '*test_3.py::TestRepeat2::test_repeat8[[]2-2[]] PASSED*',
            '*16 passed*',
        ]),
    ])
    def test_scope(self, testdir, scope, lines):
        testdir.makepyfile(test_1="""
            def test_repeat1():
                pass

            def test_repeat2():
                pass
        """)
        testdir.makepyfile(test_2="""
            def test_repeat3():
                pass

            def test_repeat4():
                pass
        """)
        testdir.makepyfile(test_3="""
            class TestRepeat1(object):
                def test_repeat5(self):
                    pass
                def test_repeat6(self):
                    pass
            class TestRepeat2(object):
                def test_repeat7(self):
                    pass
                def test_repeat8(self):
                    pass
        """)
        result = testdir.runpytest('-v', '--count', '2', '--repeat-scope',
                                   scope)
>       result.stdout.fnmatch_lines(lines)
E       Failed: remains unmatched: '*test_1.py::test_repeat1[[]1-2[]] PASSED*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:244: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count --repeat-scope session
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_scope0

___________________________________________________________________ TestRepeat.test_scope[module-lines1] ___________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1ccf0c70>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_scope1')>, scope = 'module'
lines = ['*test_1.py::test_repeat1[[]1-2[]] PASSED*', '*test_1.py::test_repeat2[[]1-2[]] PASSED*', '*test_1.py::test_repeat1[[...peat2[[]2-2[]] PASSED*', '*test_2.py::test_repeat3[[]1-2[]] PASSED*', '*test_2.py::test_repeat4[[]1-2[]] PASSED*', ...]

>       result.stdout.fnmatch_lines(lines)
E       Failed: remains unmatched: '*test_1.py::test_repeat1[[]1-2[]] PASSED*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:244: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count --repeat-scope module
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_scope1

___________________________________________________________________ TestRepeat.test_scope[class-lines2] ____________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1cbf77f0>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_scope2')>, scope = 'class'
lines = ['*test_1.py::test_repeat1[[]1-2[]] PASSED*', '*test_1.py::test_repeat2[[]1-2[]] PASSED*', '*test_1.py::test_repeat1[[...peat2[[]2-2[]] PASSED*', '*test_2.py::test_repeat3[[]1-2[]] PASSED*', '*test_2.py::test_repeat4[[]1-2[]] PASSED*', ...]

>       result.stdout.fnmatch_lines(lines)
E       Failed: remains unmatched: '*test_1.py::test_repeat1[[]1-2[]] PASSED*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:244: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count --repeat-scope class
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_scope2

__________________________________________________________________ TestRepeat.test_scope[function-lines3] __________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1cc38cd0>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_scope3')>, scope = 'function'
lines = ['*test_1.py::test_repeat1[[]1-2[]] PASSED*', '*test_1.py::test_repeat1[[]2-2[]] PASSED*', '*test_1.py::test_repeat2[[...peat2[[]2-2[]] PASSED*', '*test_2.py::test_repeat3[[]1-2[]] PASSED*', '*test_2.py::test_repeat3[[]2-2[]] PASSED*', ...]

>       result.stdout.fnmatch_lines(lines)
E       Failed: remains unmatched: '*test_1.py::test_repeat1[[]1-2[]] PASSED*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:244: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count --repeat-scope function
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_scope3

______________________________________________________________________ TestRepeat.test_omitted_scope _______________________________________________________________________

self = <test_repeat.TestRepeat object at 0x7f4a1cb8b400>, testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-125/test_omitted_scope0')>

    def test_omitted_scope(self, testdir):
        testdir.makepyfile("""
            def test_repeat1():
                pass

            def test_repeat2():
                pass
        """)
        result = testdir.runpytest('-v', '--count', '2')
>       result.stdout.fnmatch_lines([
            '*test_omitted_scope.py::test_repeat1[[]1-2[]] PASSED*',
            '*test_omitted_scope.py::test_repeat1[[]2-2[]] PASSED*',
            '*test_omitted_scope.py::test_repeat2[[]1-2[]] PASSED*',
            '*test_omitted_scope.py::test_repeat2[[]2-2[]] PASSED*',
            '*4 passed*',
        ])
E       Failed: remains unmatched: '*test_omitted_scope.py::test_repeat1[[]1-2[]] PASSED*'

/home/tkloczko/rpmbuild/BUILD/pytest-repeat-0.9.1/test_repeat.py:256: Failed
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --count
  inifile: None
  rootdir: /tmp/pytest-of-tkloczko/pytest-125/test_omitted_scope0

========================================================================= short test summary info ==========================================================================
FAILED test_repeat.py::TestRepeat::test_can_repeat - Failed: remains unmatched: '*2 passed*'
FAILED test_repeat.py::TestRepeat::test_mark_repeat_decorator_is_registered - Failed: nomatch: '@pytest.mark.repeat(n): run the given test function `n` times.'
FAILED test_repeat.py::TestRepeat::test_mark_repeat_decorator - Failed: nomatch: '*3 passed*'
FAILED test_repeat.py::TestRepeat::test_mark_repeat_decorator_repeat_once - Failed: remains unmatched: '*1 passed*'
FAILED test_repeat.py::TestRepeat::test_parametrize - Failed: remains unmatched: '*test_parametrize.py::test_repeat[[]a-1-2[]] PASSED*'
FAILED test_repeat.py::TestRepeat::test_parametrized_fixture - Failed: remains unmatched: '*6 passed*'
FAILED test_repeat.py::TestRepeat::test_step_number - Failed: remains unmatched: '*test_step_number.py::test_repeat[[]1-5[]] PASSED*'
FAILED test_repeat.py::TestRepeat::test_unittest_test - Failed: remains unmatched: '*test_unittest_test.py::ClassStyleTest::test_this PASSED*'
FAILED test_repeat.py::TestRepeat::test_scope[session-lines0] - Failed: remains unmatched: '*test_1.py::test_repeat1[[]1-2[]] PASSED*'
FAILED test_repeat.py::TestRepeat::test_scope[module-lines1] - Failed: remains unmatched: '*test_1.py::test_repeat1[[]1-2[]] PASSED*'
FAILED test_repeat.py::TestRepeat::test_scope[class-lines2] - Failed: remains unmatched: '*test_1.py::test_repeat1[[]1-2[]] PASSED*'
FAILED test_repeat.py::TestRepeat::test_scope[function-lines3] - Failed: remains unmatched: '*test_1.py::test_repeat1[[]1-2[]] PASSED*'
FAILED test_repeat.py::TestRepeat::test_omitted_scope - Failed: remains unmatched: '*test_omitted_scope.py::test_repeat1[[]1-2[]] PASSED*'
======================================================================= 13 failed, 3 passed in 1.51s =======================================================================
pytest-xprocess reminder::Be sure to terminate the started process by running 'pytest --xkill' if you have not explicitly done so in your fixture with 'xprocess.getinfo(<process_name>).terminate()'.
/usr/lib/python3.8/site-packages/_pytest/pathlib.py:80: PytestWarning: (rm_rf) error removing /tmp/pytest-of-tkloczko/garbage-1997f056-b7e7-4f45-8b03-fcd91c83b1b6/test_safe_get_no_perms0
<class 'OSError'>: [Errno 39] Directory not empty: 'test_safe_get_no_perms0'
  warnings.warn(
/usr/lib/python3.8/site-packages/_pytest/pathlib.py:80: PytestWarning: (rm_rf) error removing /tmp/pytest-of-tkloczko/garbage-1997f056-b7e7-4f45-8b03-fcd91c83b1b6/test_safe_delete_no_perms0
<class 'OSError'>: [Errno 39] Directory not empty: 'test_safe_delete_no_perms0'
  warnings.warn(
/usr/lib/python3.8/site-packages/_pytest/pathlib.py:80: PytestWarning: (rm_rf) error removing /tmp/pytest-of-tkloczko/garbage-1997f056-b7e7-4f45-8b03-fcd91c83b1b6/test_safe_set_no_perms0
<class 'OSError'>: [Errno 39] Directory not empty: 'test_safe_set_no_perms0'
  warnings.warn(
/usr/lib/python3.8/site-packages/_pytest/pathlib.py:80: PytestWarning: (rm_rf) error removing /tmp/pytest-of-tkloczko/garbage-1997f056-b7e7-4f45-8b03-fcd91c83b1b6/test_rmtree_errorhandler_rerai0
<class 'OSError'>: [Errno 39] Directory not empty: 'test_rmtree_errorhandler_rerai0'
  warnings.warn(
/usr/lib/python3.8/site-packages/_pytest/pathlib.py:80: PytestWarning: (rm_rf) error removing /tmp/pytest-of-tkloczko/garbage-1997f056-b7e7-4f45-8b03-fcd91c83b1b6/test_rmtree_errorhandler_reado0
<class 'OSError'>: [Errno 39] Directory not empty: 'test_rmtree_errorhandler_reado0'
  warnings.warn(
/usr/lib/python3.8/site-packages/_pytest/pathlib.py:80: PytestWarning: (rm_rf) error removing /tmp/pytest-of-tkloczko/garbage-1997f056-b7e7-4f45-8b03-fcd91c83b1b6
<class 'OSError'>: [Errno 39] Directory not empty: '/tmp/pytest-of-tkloczko/garbage-1997f056-b7e7-4f45-8b03-fcd91c83b1b6'
  warnings.warn(

Add a changelog

While this seems to change rarely, it'd still be nice if it had a changelog to make it easier to see what changed between releases 😉

Test with parametrized items: how to repeat, but run the fixture only once per batch of tests?

I have a simple test with parametrize and fixture:

@pytest.mark.repeat(3)
@pytest.mark.parametrize("case", ["a", "b", "c"])
def test_1(case, my_fixture):
    print("test_1 case: {}".format(case))

@pytest.fixture(scope="?")
def my_fixture():
    yield  # code below runs at teardown, at the end of the fixture's scope
    print("my_fixture")

Which pytest-repeat scope and pytest scope should I use if I want the following output?

test_1 case: a
test_1 case: b
test_1 case: c
my_fixture
test_1 case: a
test_1 case: b
test_1 case: c
my_fixture
test_1 case: a
test_1 case: b
test_1 case: c
my_fixture

So far I managed to have the following:

1- With pytest-repeat's scope="session" and pytest's scope="session"

test_1 case: a
test_1 case: b
test_1 case: c
test_1 case: a
test_1 case: b
test_1 case: c
test_1 case: a
test_1 case: b
test_1 case: c
my_fixture

2- With pytest-repeat's scope="session" and pytest's scope="function"

test_1 case: a
my_fixture
test_1 case: b
my_fixture
test_1 case: c
my_fixture
test_1 case: a
my_fixture
test_1 case: b
my_fixture
test_1 case: c
my_fixture
test_1 case: a
my_fixture
test_1 case: b
my_fixture
test_1 case: c
my_fixture

Thanks for your help.

Add timeout flag, to timeout test run after some number of minutes

Proposal to add --repeat-timeout=minutes, with the type being a float.

This borrowed functionality from pytest-flakefinder that has a --flake-max-minutes.

--repeat-timeout vs --repeat-max-minutes
I'm thinking --repeat-timeout seems more obvious, and a bit shorter.

int vs float
flakefinder uses an integer. I propose using a float. It doesn't make that much difference in practice.
But testing the feature would be way more convenient if we didn't have to wait a minute for a timeout.

skip vs exit
flakefinder uses skips to skip remaining tests.
That works, but is annoying with tons of skips.
I propose using pytest.exit() to avoid all of the pointless skips.
Also, then you can just guess a big number. "repeat like 20,000 times, but stop after an hour" kind of thing.

implementation
The implementation is fairly simple.
At the start of a test run, store an expiration time of time.time() + (timeout_minutes * 60).
Then at test time, compare the expiration time to the current time.
If timeout, stop testing.
This is pretty simple to implement.

effect on existing usage
My only concern would be whether the functionality slows down test runs that don't use it.
When the option is not used, the added code is a single variable check, which should not measurably affect test time.

alternatives
This functionality could also be added as a separate plugin entirely, as there are certainly reasons to want to ensure a test suite doesn't run too long.
There is pytest-timeout, but the timeout there applies to individual tests, not the suite.
If added as a new plugin, perhaps it could be named something like pytest-suite-timeout?

keeping it with pytest-repeat
I could see this being a separate plugin. But also, I think it's a pretty common use case especially with pytest-repeat.
The idea being some way to implement "repeat it a bunch of times, but not more than like 30 minutes".
So, I'm thinking it would be good to include with pytest-repeat.

Enable Test Case Level Repeat Config

Hi,

Thanks for writing such a useful and simple to use plugin. I have been using repeat in our environment for a while and added support for test case level repeat configuration using markers. I was wondering if you are interested in this feature and us contributing back to your repo.

Thanks,
Rui

Squash repeated tests into one test result

Is there a way to "squash" repeated tests into one test result?

import random

import pytest

@pytest.mark.repeat(5)
def test_example():
    assert random.choice([True, False])

This gives me 5 results.

Is there something like @pytest.mark.repeat(5, squash=True) which effectively does:

def test_example():
    for _ in range(5):
        assert random.choice([True, False])

Yet, without modifying test_example by inserting a for loop?
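Absent a built-in squash option, one hedged workaround sketch is a plain decorator that loops the test body inside a single test item, so pytest reports one result. squash_repeat is a hypothetical helper written for this question, not a pytest-repeat API:

```python
import functools


def squash_repeat(n):
    """Hypothetical helper: run the wrapped test body n times inside one
    test item, so only a single pass/fail result is reported."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for _ in range(n):
                fn(*args, **kwargs)
        return wrapper
    return deco


@squash_repeat(5)
def test_example():
    assert True
```

This keeps the test function's own body free of the for loop, though unlike a real squash=True option it cannot report which iteration failed.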

Not support unittest

My test case is written in a class that inherits unittest.TestCase, and I can run it with pytest. Now I want to use pytest-repeat (0.8.0) to satisfy my need for repeated execution, but it does not take effect. Can you support this scenario? Please see below:

import logging
import random
import time
from unittest import TestCase

import pytest

HANDLE = logging.StreamHandler()
HANDLE.setFormatter(logging.Formatter(
    '%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s'))
LOG = logging.getLogger(__name__)
LOG.setLevel(logging.DEBUG)
LOG.addHandler(HANDLE)


class Test(TestCase):
    @pytest.mark.repeat(3)
    def test_repeat_1(self):
        LOG.info("begin: " + time.strftime("%Y-%m-%d_%H:%M:%S"))
        time.sleep(random.randint(1, 4))
        LOG.info("end: " + time.strftime("%Y-%m-%d_%H:%M:%S"))

    @pytest.mark.repeat(3)
    def test_repeat_2(self):
        LOG.info("begin: " + time.strftime("%Y-%m-%d_%H:%M:%S"))
        time.sleep(random.randint(1, 4))
        LOG.info("end: " + time.strftime("%Y-%m-%d_%H:%M:%S"))

    @pytest.mark.repeat(3)
    def test_repeat_3(self):
        LOG.info("begin: " + time.strftime("%Y-%m-%d_%H:%M:%S"))
        time.sleep(random.randint(1, 4))
        LOG.info("end: " + time.strftime("%Y-%m-%d_%H:%M:%S"))
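Since pytest-repeat does not repeat unittest.TestCase tests (see the warning it emits), one workaround sketch is rewriting a case from the class above as a plain pytest function, which @pytest.mark.repeat does support. This is a hand-rolled rewrite for illustration, not a plugin feature:

```python
import logging
import time

import pytest

LOG = logging.getLogger(__name__)


# Same body as Test.test_repeat_1 above, but as a module-level function,
# so pytest-repeat's marker takes effect.
@pytest.mark.repeat(3)
def test_repeat_1():
    LOG.info("begin: " + time.strftime("%Y-%m-%d_%H:%M:%S"))
    LOG.info("end: " + time.strftime("%Y-%m-%d_%H:%M:%S"))
```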

enhancement: Repeat limited by time, "run for hh:mm:ss" or "until hh:mm:ss"

I have a rarely occurring "long delay" issue when using a resource over the network. It usually happens in the 8-15 hour range, but sometimes never. It usually recovers after a few minutes (on a transaction that takes less than a second).
I wish I could loop my tests all night but be certain they stop in the morning to free the resources. If I do a CTRL-C, I miss the final report.

  • Using the repeat count I have to guesstimate how many loops I will need for H hours and hope the error occurs at least once in that time frame.
    Using 'run-until-failure' is not great either, since the problem may not occur at all, and I want the test loop to continue so I can gather statistics on recovery time.

Not a big priority, but a nice to have.
Thanks for the great work.

repeating a test spams pytest cache

When running many repetitions of a test with --count, .pytest_cache/v/cache/nodeids may grow to millions of entries. E.g.:

"test/utils/test_types.py::test_if_none[1-100]",
  "test/utils/test_types.py::test_if_none[10-100]",
  "test/utils/test_types.py::test_if_none[100-100]",
  "test/utils/test_types.py::test_if_none[11-100]",
  "test/utils/test_types.py::test_if_none[12-100]",
...

This may cause a lag of several seconds, even when executing a single method with -k and specifying a single test file.

The culprit is pytest_sessionfinish in cacheprovider.py, when calling config.cache.set("cache/nodeids", sorted(self.cached_nodeids)). When cached_nodeids contains millions of entries, this always serializes the whole contents of .pytest_cache/v/cache/nodeids to JSON.
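One possible mitigation for affected runs (a sidestep rather than a fix, using a standard pytest option, not anything pytest-repeat provides) is disabling the cache plugin so nodeids are never written at session end:

```shell
# -p no:cacheprovider is a standard pytest option that disables the cache
# plugin, so cache/nodeids is not written for huge repeated runs.
pytest --count=100000 -p no:cacheprovider test/utils/test_types.py
```

The trade-off is losing cache-dependent features such as --lf / --ff for that run.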

pytest-repeat crashes on doctest

on python 3.6.8, pytest 4.5.0, pytest-repeat 0.8.0

py.test --doctest-modules --count=2

    @pytest.fixture(autouse=True)
    def __pytest_repeat_step_number(request):
        if request.config.option.count > 1:
            try:
                return request.param
            except AttributeError:
>               if issubclass(request.cls, TestCase):
E               TypeError: issubclass() arg 1 must be a class

When the test is a doctest (request.node is a DoctestItem), request.cls is None.

I assume this can be fixed with an extra is not None check in the condition on line 42, changing

if issubclass(request.cls, TestCase):

to

if request.cls is not None and issubclass(request.cls, TestCase):

Happy to open a PR

Skip remaining repeated tests on failure

The rerunfailures plugin can be used to detect flaky tests and ignore them by trying to rerun previous failed tests. If a repeated run succeeds that is then considered "good enough".

I am interested in the "opposite" feature and thought it might fit into this plugin. To ensure that a test is not flaky, I want to run it multiple times; this plugin already supports that use case. In the case where a test fails, I don't want the remaining invocations to happen (in order to save time). Using the pytest option -x is not feasible here, since I am still interested in the results of all the other tests and don't want to abort testing altogether.

Would an additional option like --stop-repeating-same-test-if-it-failed-once (just a name to describe the semantics) fit into this plugin? If yes, with a little pointer I might be able to provide a pull request, if that is helpful.

AttributeError: SubRequest instance has no attribute 'param'

Having trouble getting this plugin to work with a parametrized fixture like this:

import pytest

@pytest.fixture(params=['a'])
def paramed_fixture(request):
    return request.param

def test_use_paramed_fixture(paramed_fixture):
    assert paramed_fixture

I get the error below; note how it's running 3 tests!

$ py.test test_repeat_with_paramed_fixture.py --count=2
==================== test session starts ====================
platform linux2 -- Python 2.7.6, pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /home/tom/.virtualenvs/f1aefb524a50802b, inifile: 
plugins: repeat-0.2
collected 3 items 

test_repeat_with_paramed_fixture.py .EE

==================== ERRORS ====================
____________________ ERROR at setup of test_use_paramed_fixture[1] ____________________

request = <SubRequest 'paramed_fixture' for <Function 'test_use_paramed_fixture[1]'>>

    @pytest.fixture(params=['a'])
    def paramed_fixture(request):
>       return request.param
E       AttributeError: SubRequest instance has no attribute 'param'

test_repeat_with_paramed_fixture.py:5: AttributeError
____________________ ERROR at setup of test_use_paramed_fixture[2] ____________________

request = <SubRequest 'paramed_fixture' for <Function 'test_use_paramed_fixture[2]'>>

    @pytest.fixture(params=['a'])
    def paramed_fixture(request):
>       return request.param
E       AttributeError: SubRequest instance has no attribute 'param'

test_repeat_with_paramed_fixture.py:5: AttributeError
==================== 1 passed, 2 error in 0.01 seconds ====================

When I remove the access to request.param:

@pytest.fixture(params=['a'])
def paramed_fixture(request):
    # don't access request.param this time
    return

def test_use_paramed_fixture(paramed_fixture):
    pass

And turn on verbose output, you can see the 3 tests more clearly:

$ py.test test_repeat_with_paramed_fixture.py --count=2 -v
==================== test session starts ====================
platform linux2 -- Python 2.7.6, pytest-2.8.7, py-1.4.31, pluggy-0.3.1 -- /home/tom/.virtualenvs/f1aefb524a50802b/bin/python
cachedir: .cache
rootdir: /home/tom/.virtualenvs/f1aefb524a50802b, inifile: 
plugins: repeat-0.2
collected 3 items 

test_repeat_with_paramed_fixture.py::test_use_paramed_fixture[a] PASSED
test_repeat_with_paramed_fixture.py::test_use_paramed_fixture[1] PASSED
test_repeat_with_paramed_fixture.py::test_use_paramed_fixture[2] PASSED

==================== 3 passed in 0.00 seconds ====================

test_use_paramed_fixture[a] contains the a from the params list. Then 1 and 2 go after that, which would be the repeats.

Obviously what should be happening is [a][1] then [a][2], just 2 tests.

Request Feature: support for unittest repeat

$ pytest --version
This is pytest version 3.4.1, imported from /home/mortenb/.pyenv/versions/3.6.4/envs/unity/lib/python3.6/site-packages/pytest.py
setuptools registered plugins:
  pytest-repeat-0.4.1 at /home/mortenb/.pyenv/versions/3.6.4/envs/unity/lib/python3.6/site-packages/pytest_repeat.py

If I use --count=N,
it runs the tests once and then issues this warning for the unittest tests it discovered:

UserWarning: Repeating unittest class tests not supported
"Repeating unittest class tests not supported")

There are currently two ways to repeat a unittest N times.
You can pass the test name N times on the command line: ./mytest.py T.test1 T.test1 ... (N times)
(not very practical when you want 1000 :-) )

Or you can write your own test runner:

import unittest

Ntimes = 10
testlist = [tfunc for tfunc in sorted(dir(T))
            if callable(getattr(T, tfunc)) and str(tfunc).startswith('test')]
suite = unittest.TestSuite()
for t in testlist:
    if t == '<testname to run>':
        for i in range(Ntimes):
            suite.addTest(T(t))
runner = unittest.TextTestRunner(verbosity=2, failfast=True)
runner.run(suite)

Doesn't work with Tornado's gen_test

Get the following traceback when trying to run a test using Tornado's gen_test:

pyfuncitem = <Function test_SendTransferRequest[1-10]>

    @pytest.mark.tryfirst
    def pytest_pyfunc_call(pyfuncitem):
        gen_test_mark = pyfuncitem.keywords.get('gen_test')
        if gen_test_mark:
            io_loop = pyfuncitem.funcargs.get('io_loop')
>           run_sync = gen_test_mark.kwargs.get('run_sync', True)
E           AttributeError: 'bool' object has no attribute 'kwargs'
