Comments (8)

geoffrey464 commented on July 30, 2024

Hey Vasily!

Unfortunately, I played with both commenting out the offending line of code and changing the JD_DISABLE_BUFFERING environment variable, and neither fixed the capturing of stdout. I think it's a larger issue with how PyCharm handles capturing.

I also just realized that I never showed the actual output of the test results, so I'll paste an example below (with names changed for brevity):

tests/test_scripts.py::test_script1[subprocess] 
tests/test_scripts.py::test_script2[subprocess] 
tests/test_scripts.py::test_script3[subprocess] 
# Running console script: script1 <path-to-script-argument1> <path-to-script-argument2> -o<optional-script-argument1> --silent
# Script return code: 0
# Script stdout:

# Script stderr:

# Running console script: script2 <path-to-script-argument1> <path-to-script-argument2> -o<optional-script-argument1> --silent
# Script return code: 0
# Script stdout:

# Script stderr:
<insert warning for standard error here>

But in either case, thanks for all your help with solving this! I'm looking forward to trying out both solutions.

Best, Geoffrey

kvas-it commented on July 30, 2024

Merged and released. Thanks for the feedback, Geoffrey!

kvas-it commented on July 30, 2024

Hi Geoffrey!

Thanks for this ticket. It made me realize that I messed up: my intention was to print the result only if the test fails. By default this is more or less how it works, but when output capturing is disabled with something like -s (which you probably do), it starts printing the output of every run, which can be quite annoying and was never my intention.

However, there's no easy way to print only on test failure (at least I don't see one), because at the time the RunResult is created we don't yet know whether the test will fail. Your proposed parameter fixes it for one test, but people usually disable capturing for the whole test suite, and it would be annoying to go and add print_output=False everywhere. We also don't want to turn output printing off by default, because it's useful for debugging test failures. Perhaps something like a command line option would work better.
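
Just to make it concrete, the per-test parameter you proposed would be used roughly like this (print_output is your suggested name; none of this is implemented yet):

    # Hypothetical usage of the proposed per-test switch:
    def test_script1(script_runner):
        result = script_runner.run('script1', '--silent', print_output=False)
        assert result.success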

What do you think? I'm happy to implement your original request and/or a command line option.

Cheers,
Vasily

geoffrey464 commented on July 30, 2024

Vasily, thanks for replying. I didn't even realize I was suppressing the output capturing, since I'm running the test suite through PyCharm, which must automatically add that flag when it runs the pytest module. Sorry for making you dig around to figure out what I was talking about, since it doesn't show up on a normal invocation.

I agree with you that there's no great way to print only on test failure. Could there be an if statement in the init block along the lines of "print the output only if returncode != 0"? I'm not too familiar with the test failure criteria, so my thinking may be too naive here.
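
Something like this is what I had in mind -- a rough sketch of the idea, not the actual plugin code:

    # Rough idea only, not the real RunResult implementation:
    class RunResult:
        def __init__(self, returncode, stdout, stderr):
            self.returncode = returncode
            self.stdout = stdout
            self.stderr = stderr
            if returncode != 0:  # only print when the script signals an error
                self.print()

        def print(self):
            print('# Script return code:', self.returncode)
            print('# Script stdout:', self.stdout, sep='\n')
            print('# Script stderr:', self.stderr, sep='\n')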

I think that a global flag to disable capturing would be better. What would your proposal be for the command line option, and how would it interact with the larger pytest suite?

Thanks, Geoffrey

kvas-it commented on July 30, 2024

Sorry for making you dig around to figure out what I was talking about, since it doesn't show up on a normal invocation.

No worries, I knew more or less what was happening as soon as I read your ticket. And if the tests behave this way in PyCharm, that's not great either, so it would be good to fix it.

Could there be an if statement in the init block along the lines of "print the output only if returncode != 0"?

Yes, this is possible. But the thing is: a nonzero returncode is not necessarily a failure. Some tests check that the script under test correctly handles errors, so signaling an error would be the right behavior (and returncode == 0 would cause the test to fail). There might also be several calls to script_runner.run() inside one test, and so on.
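
For example, a test like this one passes exactly when the return code is nonzero, so printing on returncode != 0 would spam a perfectly green run:

    # Hypothetical test where a nonzero return code is the expected outcome:
    def test_rejects_unknown_flag(script_runner):
        result = script_runner.run('script1', '--no-such-flag')
        assert result.returncode != 0  # the script failing IS the passing case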

I think that a global flag to disable capturing would be better. What would your proposal be for the command line option, and how would it interact with the larger pytest suite?

When you run pytest from the command line, capturing is enabled by default, so you don't see any prints that happen inside the tests unless the test fails (so you'd see the RunResult prints only for failed tests, which is usually what you want). It seems that PyCharm might be disabling the capturing so that the prints go to the console and you see them. It would be good to check whether it's possible to configure this in PyCharm -- let me know if you find anything, otherwise I can try to install PyCharm and reproduce what you see.

Regarding the command line option, it could be something like --script-output=yes|no. When you pass --script-output=no to pytest, RunResult would skip the printing. This can be useful if you can't control capturing when running from PyCharm, and also for people who want to disable capturing but not see the RunResult prints. However, if you can't control pytest's command line options, this would not be useful for you.
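
On the plugin side, registering such an option would look roughly like this (the option name is not final):

    # Sketch of the pytest hook that would register the option:
    def pytest_addoption(parser):
        parser.addoption(
            '--script-output',
            choices=['yes', 'no'],
            default='yes',
            help='Print the output of console script runs.',
        )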

So I'm thinking that we have three possible paths to investigate:

  1. If it's possible to control capturing when running tests under PyCharm, you can turn it off and it seems like this will solve your problem. I can then add a note to the README about this so that if others have the same issue, they'd know what to do.
  2. The command line option seems useful in some scenarios, but it will only help you if you can control pytest command line options from PyCharm.
  3. Finally, I can just implement it the way you originally proposed. Seems pretty easy and if it solves your problem, I'm happy to add this.

Let me know if any of this sounds good.

Cheers,
Vasily

P.S. Sorry for too much text 🤷

geoffrey464 commented on July 30, 2024

Gotcha, thanks for all of the info!

After messing around in the settings, I don't see any user-facing way to toggle capturing in PyCharm. However, after poking a bit deeper into the actual call itself, it appears that pytest is invoked indirectly through a runner script that automatically adds the silent -s flag (see attached image).
[Image: pytest_runner_auto_include_silent]

If it's possible to control capturing when running tests under PyCharm, you can turn it off and it seems like this will solve your problem. I can then add a note to the README about this so that if others have the same issue, they'd know what to do.

Unfortunately, this doesn't appear to be possible unless users are willing to go into the application files themselves and modify them.

The command line option seems useful in some scenarios, but it will only help you if you can control pytest command line options from PyCharm.

I believe the command line option will most likely be the most extensible, as well as the most usable for the majority of people. It is very simple to add extra arguments to the pytest invocation within PyCharm, so that should not be a problem.

Finally, I can just implement it the way you originally proposed. Seems pretty easy and if it solves your problem, I'm happy to add this.

If it's not too difficult, I believe there are edge cases where disabling the output printing on an individual test function basis would be appropriate.

If you do end up implementing both the command line option and the per-test argument, I think they should complement each other, i.e. there shouldn't be two separate mechanisms that do the same thing. It would be really elegant if the command line option set the default for each individual test, or something like that.
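
E.g. something along these lines, where the option supplies the default and the per-call argument overrides it (pure pseudocode on my part, all names made up):

    # Sketch: the command line option sets the suite-wide default,
    # and the per-call keyword argument overrides it.
    class ScriptRunner:
        def __init__(self, config):
            self.print_by_default = config.getoption('--script-output') == 'yes'

        def run(self, command, *args, print_output=None):
            if print_output is None:
                print_output = self.print_by_default
            ...  # run the script; print the RunResult only if print_output is True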

Best, Geoffrey

Lol, it's all good. I appreciate all of the background info.

kvas-it commented on July 30, 2024

Hi Geoffrey!

The -s option disables the capturing, so if there's a way to change the value of JD_DISABLE_BUFFERING, that might solve the issue.

In any case, it seems like the command line option and the additional argument to script_runner.run() would be useful to have -- I will implement them.

Cheers,
Vasily

kvas-it commented on July 30, 2024

Hi Geoffrey!

Can you check out the version from this branch and let me know if it solves your problem?

Cheers,
Vasily
