
Comments (8)

bendtherules commented on August 11, 2024

Yes, this will be really useful. Also, their docs show how to implement this kind of hook: https://pytest.org/latest/example/simple.html#post-process-test-reports-failures
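Roughly, the hook from that docs page is a conftest.py hookwrapper around pytest_runtest_makereport - a sketch along the lines of the linked example:

# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Run the other hooks first to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    # Only act on actual failing test calls, not setup/teardown
    if rep.when == "call" and rep.failed:
        ...  # e.g. record details or trigger a notification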

But I think it is better to add test reports for both onpass and onfail, and let the callback decide how to use them.


joeyespo commented on August 11, 2024

I'm starting to think this would be a problem best solved externally to ptw. --onfail makes for a good "dumb" trigger, but introducing a contract between a failed ptw run and the callback executable would add quite a bit of complexity.

Taking a step back, would a new --runner hook be more helpful here? That is, instead of triggering a script on a failed run and somehow handing it result information, what if you had a script that runs py.test itself? This would let you further customize the run and get complete information about the results directly, provided the script uses pytest.main.

To help with reusability, ptw could also append its own sys.argv to the --runner call. The callback script could then look something like this:

# runpytest.py

import sys
import pytest

class CaptureDataPlugin(object):
    def pytest_sessionfinish(self, session, exitstatus):
        # Capture whatever result data the callback needs from the session
        self.report = ...

capture_data = CaptureDataPlugin()
# ptw appends its own arguments; everything after '--' goes to pytest
args = sys.argv[sys.argv.index('--') + 1:] if '--' in sys.argv else []
exit_code = pytest.main(args, plugins=[capture_data])

if exit_code != 0:
    print('Failed test run:', capture_data.report)

Or perhaps with a helper provided by ptw:

# runpytest.py

import pytest_watch

def show_data(session, exitstatus):
    if exitstatus == 0:
        return
    print('Failed test run:', ...)

# run_pytest would be a new ptw helper that runs pytest and forwards hook callbacks
pytest_watch.run_pytest(pytest_sessionfinish=show_data)

Then you'd run ptw --runner "python runpytest.py" to use it. This would let you get exactly the information you're looking for without ptw knowing any of the details. This flexibility would be especially helpful if you're looking for extra information from plugins, and it might also help clean up the growing list of CLI arguments.
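Usage might then look something like this, assuming ptw forwards everything after -- to the runner script as described above (-x and --maxfail are ordinary pytest options):

ptw --runner "python runpytest.py" -- -x --maxfail=1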

What do you two think?


bendtherules commented on August 11, 2024

I think --onpass and --onfail were important because pytest-watch did the heavy lifting and let users set very short shell commands, which, although not very customisable, were very easy and fun to use.

The point is to do something very easy on fail, like playing a custom tone or showing a "no. of pass/fail" message, which is best kept out of pytest. In an ideal situation, the debug info provided to this callback should not be full-blown, but it's hard to decide how much is optimal - which I think is what you're concerned about. Let's just pick a reasonable point and go forward with it.
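For example, something as small as this stays entirely in the shell (notify-send is just an arbitrary notification command, for illustration):

ptw --onpass "notify-send 'Tests passed'" --onfail "notify-send 'Tests failed'"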

On the other hand, a --runner is very customizable for powerful use cases, but it is a step backwards for the simple use cases, where users would again need to implement their own buggy logic for the whole thing. It is hard to do right, so I believe pytest-watch should take care of the hard part.

So, a --runner will be a welcome addition, but please don't get rid of the onpass/onfail stuff; I am sure people will add fun things on top of it.


blueyed commented on August 11, 2024

👍 for an additional --runner.


joeyespo commented on August 11, 2024

@bendtherules Oh yes! Don't get me wrong, the --onpass and --onfail won't be going anywhere.

I mean they should be used strictly for the easy stuff, and the new --runner should be used for more advanced tasks like inspecting a failed test run.

@blueyed Awesome! I'll get on this soon so we can close this issue.


bendtherules commented on August 11, 2024

@joeyespo Oh, I get it now. But at the very least, I think we should provide the number of failed and total test cases to onfail. It would allow for a slightly more custom response.


joeyespo commented on August 11, 2024

@bendtherules I responded to you in your PR.

Thinking more about --runner helpers, and considering that it's far easier to add features than remove them, I'm hesitant to add any helpers right away after releasing --runner.

Moreover, the above code example could have easily imported, say, some_pytest_helper instead of pytest_watch and ptw wouldn't know the difference. You can use any package to handle the extension and interpretation of pytest.main(), so they don't necessarily have to be bundled here.
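As a concrete illustration, a runner script could count passes and failures with its own small plugin - a rough sketch with purely illustrative names:

# runstats.py
import sys
import pytest

class StatsPlugin(object):
    passed = 0
    failed = 0

    def pytest_terminal_summary(self, terminalreporter, exitstatus):
        # terminalreporter.stats maps outcome names to lists of test reports
        self.passed = len(terminalreporter.stats.get('passed', []))
        self.failed = len(terminalreporter.stats.get('failed', []))

stats = StatsPlugin()
args = sys.argv[sys.argv.index('--') + 1:] if '--' in sys.argv else []
exit_code = pytest.main(args, plugins=[stats])

if exit_code != 0:
    print('{} of {} tests failed'.format(stats.failed, stats.passed + stats.failed))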

I'll add --runner and hold off on adding any other features related to interpreting pytest results until there's more demand.


joeyespo commented on August 11, 2024

Added --runner in v3.7.0.

Closing this in favor of that approach. Thanks for the suggestion and initiating this discussion, @blueyed! And thanks @bendtherules for getting this going.

If you need anything more, feel free to re-open.

