Comments (3)
See JoeSc@ca6212e from pytest-dependency.
I'm not sure if I like the idea. I see at least two issues:
- The use of magic names. Note that "all" is a perfectly valid name for a test.
- It changes the nature of the dependency relation in an inconsistent manner. Until now, this relation is static: the dependencies are given as a fixed list of tests, so it is determined a priori whether any other test belongs to the dependencies of a given test, independent of the current invocation of pytest. This is by design, and I have use cases that depend on it. What you are proposing here is something dynamic: the dependencies of a test would be all the tests that have been run so far in the current invocation of pytest. That is not determined a priori.
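The distinction can be made concrete with a small sketch (plain Python for illustration only; the function names are invented and this is not pytest-dependency's actual implementation): a static resolver checks a fixed list of names against recorded outcomes, while the dynamic variant treats every test executed so far as a dependency.

```python
# Illustrative sketch of static vs. dynamic dependency resolution.
# Invented names; not pytest-dependency's real code.

def static_ok(depends, results):
    """Static: a test may run iff every *named* dependency has passed."""
    return all(results.get(name) == "passed" for name in depends)

def dynamic_ok(results):
    """Dynamic: a test may run iff *everything* run so far has passed."""
    return all(outcome == "passed" for outcome in results.values())

# One earlier test failed, another passed:
results = {"a": "failed", "b": "passed"}

print(static_ok(["b"], results))  # True: "b" passed; "a" is irrelevant
print(dynamic_ok(results))        # False: "a" failed, so everything after is skipped
```

Note that the static answer depends only on the declared list, while the dynamic answer changes with whatever happens to have run in the current invocation.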
Some examples to illustrate these issues:
Example for the first issue
The following is perfectly valid with the current pytest-dependency:
```python
import pytest

@pytest.mark.dependency(name="whole")
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency(name="all")
def test_b():
    pass

@pytest.mark.dependency(name="universe", depends=["all"])
def test_c():
    pass
```
`test_c` does not depend on `test_a`: it will be run and it will pass. Yes, I do recognize the difference between `depends=["all"]` and `depends="all"`. With your proposal in place, we might have the following:
```python
import pytest

@pytest.mark.dependency(name="whole")
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency(name="all")
def test_b():
    pass

@pytest.mark.dependency(name="universe", depends="all")
def test_c():
    pass
```
Now, `test_c` would be skipped. I would find this confusing.
Example for the second issue
Assume your proposal to be in place. Consider:
```python
import pytest

@pytest.mark.dependency(name="a")
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency(name="b")
def test_b():
    pass

@pytest.mark.dependency(name="c", depends="all")
def test_c():
    pass
```
From reading the code, one would assume it to be equivalent to:
```python
import pytest

@pytest.mark.dependency(name="a")
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency(name="b")
def test_b():
    pass

@pytest.mark.dependency(name="c", depends=["a", "b"])
def test_c():
    pass
```
But it is not. Assume each example is saved as `test.py` and run with `python -m pytest test.py::test_b test.py::test_c`: `test_c` would not be skipped in the first variant, but would be skipped in the second one. I'd call this inconsistent.
Issue 1 I could see being solved by adding a unique flag to the mark to avoid confusion: `@pytest.mark.dependency(all_previous=True)`.
Issue 2 I see no way around, since "all" is by definition dynamic.
Do you see any way to bring something like this into pytest-dependency? Or is the static vs. dynamic definition of dependencies too great a barrier?
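The decision logic behind such a flag could look like the following sketch (the `all_previous` flag and the `should_skip` helper are invented names for illustration; nothing like this exists in pytest-dependency today). With a dedicated flag, "all" stays an ordinary test name and the static `depends` list keeps its current meaning:

```python
# Illustrative decision logic for a hypothetical all_previous=True marker.
# Invented names; not pytest-dependency API.

def should_skip(depends, all_previous, results):
    """Return True if the test must be skipped.

    depends:      static list of dependency names (may be empty)
    all_previous: hypothetical flag: "depend on every test run so far"
    results:      mapping of test name -> outcome in this invocation
    """
    # Dynamic part: any earlier failure skips the test.
    if all_previous and any(o != "passed" for o in results.values()):
        return True
    # Static part: every named dependency must have run and passed.
    return any(results.get(name) != "passed" for name in depends)

# "all" remains a perfectly ordinary test name:
results = {"all": "passed", "whole": "failed"}
print(should_skip(["all"], False, results))  # False: the *named* test "all" passed
print(should_skip([], True, results))        # True: "whole" failed earlier
```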
I have other uses for pytest-dependency, so it would be nice to keep it all clean with dependency marks. The only alternative I can see is using the `request` fixture:

```python
import pytest

def test_d(request):
    # request.session.testsfailed counts the tests that have
    # failed so far in this invocation.
    if request.session.testsfailed:
        pytest.skip("an earlier test failed")
```
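If several tests need this guard, the check can be factored into a small helper (a sketch: `skip_if_any_failed` and `test_e` are invented names; only `request.session.testsfailed` and `pytest.skip` are real pytest API):

```python
import pytest

def skip_if_any_failed(request):
    """Skip the calling test if anything failed earlier in this session.

    request.session.testsfailed is pytest's running count of failed
    tests, so this effectively makes the test depend on "all tests
    run so far" in the current invocation.
    """
    if request.session.testsfailed:
        pytest.skip("an earlier test in this session failed")

def test_e(request):
    skip_if_any_failed(request)  # guard first, then the real test body
    assert True
```

Like `depends="all"`, this is dynamic: the outcome depends on which tests were selected and run before, not on a declared list.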