joeyespo / pytest-watch
Local continuous test runner with pytest and watchdog.
License: MIT License
Some of them, like `--runner`, could use a better description and some examples.
I'd really like to add some tests for the ini_config code first.
For that, we need an easy way of creating directory structures with some content filled into some of the files, somewhat like what can be done in cloud-init config files. Cloud-init seems too heavy and Unix-specific; there is also Cloudbase-init for Windows.
We could write separate helpers for this, but it is better to shop for good modules instead. These are the ones I could find easily: testdata and pyboiler. This blog post (http://btmiller.com/2015/03/17/represent-file-structure-as-yaml-with-python.html) can also be a starting point.
None of them can set the file contents, though.
We need to choose one that will also be used in other tests.
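A helper like this may not need a third-party module at all; a minimal sketch (the helper name and dict convention are illustrative, not an existing API) that builds a tree from a nested dict:

```python
import os

def make_tree(root, spec):
    """Create a directory tree from a nested dict.

    A dict value becomes a subdirectory; a string value becomes a file
    with that content. (Illustrative helper, not part of ptw.)
    """
    for name, value in spec.items():
        path = os.path.join(root, name)
        if isinstance(value, dict):
            os.makedirs(path, exist_ok=True)
            make_tree(path, value)
        else:
            with open(path, "w") as f:
                f.write(value)
```

Something this small could back the ini_config tests without pulling in cloud-init-style machinery.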
My Emacs creates backup and interlock files. I moved the backups to another directory. Interlock files are harder to manage, but I don't want to disable them. Thus ptw runs twice from time to time, because, for example, `.#zzz.py` appears in the watched directory. What can I do?
It would be useful to have the names of the failed tests available to the `--onfail` callback, e.g. in an environment variable.
This could be captured from `subprocess.call` and then parsed, although that probably means forcing specific options on py.test to get a useful short test summary, i.e. `-rfEsxXw`.
I could imagine using a py.test hook for this though: when pytest-watch is used as a plugin, it could capture this data itself.
The data would be useful for easily opening the failing test in your editor without having to copy'n'paste it (although this can already be quite easy with a plugin like https://github.com/kopischke/vim-fetch).
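As a sketch of the parsing step (assuming the runner captures pytest's output and the suite ran with `-rf` so the short summary lists failures), the failing test ids could be pulled out like this:

```python
import re

def failed_tests(output):
    """Extract failing test ids from pytest's short summary lines,
    which look like: FAILED test_mod.py::test_name - AssertionError
    """
    return re.findall(r"^FAILED\s+(\S+)", output, re.MULTILINE)
```

The ids could then be handed to the `--onfail` callback, e.g. via a hypothetical `PTW_FAILED` environment variable.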
Ever since switching to py.test, the test files are being cached, possibly because pytest uses `import` to load and run the tests. Here's the related pytest issue.
If it can be fixed there, we can continue on. If not, we'll have to go back to the `shell=True` approach, or see what else we can do to be sure arguments are quoted properly in a cross-platform way.
Repost from my code comment:
The `<directories>` option, which can be given on the command line, can't be set from pytest.ini, because command-line arguments that don't start with `--` don't get overloaded from the ini file.
There should be a way to do that.
My project has a directory `tests` containing the unit tests I want to run with pytest. It also has a directory `tests_functional`, which I don't want to run continuously.
`py.test tests/` will run only my unit tests, while `ptw tests/` will also run everything in `tests_functional`, even with:
ptw --ignore ./tests_functional ./tests
I don't see any way to run only what is in the `tests` folder. Moreover, I would expect a pytest extension to gather tests the same way pytest (and `pytest-xdist`) does.
If it's important, I am using pytest-watch with https://github.com/pytest-dev/pytest-incremental.
When I update any source file, pytest-watch sometimes reruns the tests twice, sometimes even three times. Is this a problem or a feature?
PTW reruns pytest 2-3 times in a row on every file save for me, in vim. Vim makes three changes on one save: it moves `file` to `file~`, then creates `file`, then saves the latest text to `file`.
On its own, that would be fine (thank goodness for ptw interrupt and restart). But I'm using an onfail hook to display a failure message. `KeyboardInterrupt` seems to trigger the onfail hook, so I get a failure message every time I save a file. :(
My instinct is to change this `else` to `elif exit_code != EXIT_INTERRUPTED:`
pytest-watch/pytest_watch/watcher.py
Lines 254 to 259 in ff2740e
I'm not familiar with this code base though, and I'm pretty new to ptw. Am I using it wrong somehow? Is there a better way to hook a message on real test failures?
If my code suggestion is a good approach, I'm happy to do a quick patch PR.
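For illustration only, the proposed change amounts to something like this (the function shape and callback names are assumptions based on the issue text, not the actual watcher.py code; pytest does use exit code 2 for an interrupted session):

```python
EXIT_INTERRUPTED = 2  # pytest's exit code for an interrupted session

def report_result(exit_code, onpass=None, onfail=None):
    """Invoke callbacks, skipping onfail when the run was merely interrupted."""
    if exit_code == 0:
        if onpass:
            onpass()
    elif exit_code != EXIT_INTERRUPTED:  # proposed: previously a plain `else`
        if onfail:
            onfail()
```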
I use `-v` to increase the verbosity of pytest's output so I get a full diff.
Running `pytest -v` works fine; I get the full diff. But when I use `-v` with `ptw`, I don't get the full diff.
There is an issue with pytest-watch when running with Django. I think it is related to issue #21.
While developing a Django project, I sometimes get `test database already exists` when using pytest-watch. I think it's because pytest-watch in some circumstances launches a second py.test copy. So we need some sort of locking mechanism, because otherwise it's really hard to use.
`merge_config` might make `ptw` exit silently with code 2 when there are missing pytest plugins.
Removing `with silence()` shows this:
usage: ptw [options] [file_or_dir] [file_or_dir] [...]
ptw: error: unrecognized arguments: --reuse-db --dc=Tester
inifile: …/project/setup.cfg
rootdir: …/project/project
Source:
pytest-watch/pytest_watch/config.py
Lines 28 to 29 in 3037f55
It would be good to display the errors from py.test in this case.
This issue was triggered when I tried to use `pytest-watch` installed globally using https://github.com/mitsuhiko/pipsi/.
ptw exits immediately if there is an import error like:
import not_found
def test_100():
assert True
But if ptw starts with no error, then the same import error is correctly displayed (and ptw continues to run as expected):
Running: py.test
============================================================================================================= test session starts =============================================================================================================
platform linux -- Python 3.6.3, pytest-3.0.7, py-1.5.2, pluggy-0.4.0
rootdir: /tmp/toto, inifile:
plugins: hypothesis-3.38.5
collected 1 items
test_dummy.py .
========================================================================================================== 1 passed in 0.00 seconds ===========================================================================================================
Change detected: test_dummy.py
Running: py.test
============================================================================================================= test session starts =============================================================================================================
platform linux -- Python 3.6.3, pytest-3.0.7, py-1.5.2, pluggy-0.4.0
rootdir: /tmp/toto, inifile:
plugins: hypothesis-3.38.5
collected 0 items / 1 errors
=================================================================================================================== ERRORS ====================================================================================================================
_______________________________________________________________________________________________________ ERROR collecting test_dummy.py ________________________________________________________________________________________________________
ImportError while importing test module '/tmp/toto/test_dummy.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_dummy.py:1: in <module>
import not_found
E ModuleNotFoundError: No module named 'not_found'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=========================================================================================================== 1 error in 0.12 seconds ===========================================================================================================
With the following unit test file:
# file test_syntax.py
print("hello"
ptw will fail silently.
D:\temp\t>type test_syntax.py
print("hello"
D:\temp\t>ptw
D:\temp\t>
If I fix the syntax error, I can start ptw. If I introduce a syntax error later (after starting ptw, while it is still running), I get the expected SyntaxError exception:
Running: py.test
============================= test session starts =============================
platform win32 -- Python 3.5.3, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
rootdir: D:\temp\t, inifile:
collected 0 items / 1 errors
=================================== ERRORS ====================================
_______________________ ERROR collecting test_syntax.py _______________________
d:\tools\python35\lib\site-packages\_pytest\python.py:408: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=importmode)
d:\tools\python35\lib\site-packages\py\_path\local.py:662: in pyimport
__import__(modname)
E File "D:\temp\t\test_syntax.py", line 2
E
E ^
E SyntaxError: unexpected EOF while parsing
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.20 seconds ===========================
I'd like to ignore multiple directories in my pytest.ini, but it doesn't seem to work. When I touch a file in one of the ignored directories, ptw still starts a run.
I did try to specify multiple ignore keys (like how the CLI works), but ini files cannot have duplicated keys.
[pytest-watch]
ignore = __pycache__,junk,misc,resources,venv,wz_profiler
ext = .py,.html,.txt
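For reference, splitting such a comma-separated ini value into a list is straightforward; a sketch of the parsing ptw would need (hypothetical helper, not current ptw code):

```python
def parse_ignore(value):
    """Split a comma-separated ini value into a clean list of names."""
    return [part.strip() for part in value.split(",") if part.strip()]
```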
From the start, pytest-watch has lived outside of pytest. It reactively runs py.test based on filesystem events, but also lets you swap that out for another testing tool with `--runner`.
It has also had to duplicate some behavior that comes out of the box with pytest, such as inifile discovery, directory inputs, and `--ignore`, to name a few.
Moving the bulk of ptw to an official pytest plugin might alleviate a lot of this. The project could still include the `ptw` and `pytest-watch` console scripts (I personally enjoy running `ptw`); however, they could forward all options to py.test instead of reimplementing them (e.g. `--verbose`, `--quiet`, `--pdb`, `--ignore` details, inifile discovery, etc.). These scripts could even continue exposing the CLI arguments that don't make sense to move to a plugin, like `--runner`.
A pytest-watch plugin could cleanly focus on its own options (`--onpass`, `--spool`, `--poll`), leaving the rest to pytest.
Potential downsides:
`--runner` won't know how to interpret the CLI arguments, so it'll effectively have to do a lot of what pytest-watch does now, unless it runs py.test right away. (Probably not a huge problem.)
Thoughts?
If I add a test file while PTW is already running, the test is ignored by PTW unless I restart it.
It seems that you listen for changes on a list of files gathered at PTW start and do not look for new ones. I haven't checked the code yet, but I think that's so. Maybe we should discover new tests as they appear.
When aborting ptw in the collection phase it will not quit, but retry it:
% ptw ...
^CError: Could not run --collect-only to find the pytest config file. Trying again without silencing stdout..
I think it should exit on SIGINT here.
I tried installing the package using Windows 10, Python 3.6, and Pipenv 9.0.3, but it failed with the following error:
Error: An error occurred while installing pytest-watch!
Exception:
Traceback (most recent call last):
File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\basecommand.py", line 215, in main
status = self.run(options, args)
File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\commands\install.py", line 342, in run
prefix=options.prefix_path,
File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\req\req_set.py", line 784, in install
**kwargs
File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\req\req_install.py", line 851, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\req\req_install.py", line 1064, in move_wheel_files
isolated=self.isolated,
File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\wheel.py", line 247, in move_wheel_files
prefix=prefix,
File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\locations.py", line 153, in distutils_scheme
i.finalize_options()
File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\setuptools\command\install.py", line 38, in finalize_options
orig.install.finalize_options(self)
File "c:\users\jklmustoju\appdata\local\programs\python\python36\Lib\distutils\command\install.py", line 346, in finalize_options
'userbase', 'usersite')
File "c:\users\jklmustoju\appdata\local\programs\python\python36\Lib\distutils\command\install.py", line 487, in convert_paths
setattr(self, attr, convert_path(getattr(self, attr)))
File "c:\users\jklmustoju\appdata\local\programs\python\python36\Lib\distutils\util.py", line 125, in convert_path
raise ValueError("path '%s' cannot be absolute" % pathname)
ValueError: path '/Lib/site-packages' cannot be absolute
I have a project where files look kind of like:
app/
module/
module.py
tests/
module/
module_test.py
some_test.py
In my ideal usage, I would say `ptw --include module*`, which would watch `module_test.py` and `module.py`, because that's the current test I'm working on, and I don't want all the other tests to keep rerunning as I work. I'm also finding that writing many `--ignore` options is cumbersome, though I'm probably doing that incorrectly.
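A hypothetical `--include` could be a thin wrapper over glob matching; a sketch using only the standard library (the flag itself does not exist in ptw today):

```python
import fnmatch
import os

def should_watch(path, include_patterns):
    """True if the file's basename matches any of the include patterns."""
    name = os.path.basename(path)
    return any(fnmatch.fnmatch(name, pattern) for pattern in include_patterns)
```

With `include_patterns=["module*"]`, `module_test.py` would match while `some_test.py` would not.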
The Cheeseshop doesn't support Markdown, so the README looks ugly (https://pypi.python.org/pypi/pytest-watch/).
On the other hand, Warehouse (https://pypi.org/project/pytest-watch/) was promised to support both Markdown and reST, but the current implementation only supports reST as well.
So I think the README needs converting to reST.
Just an idea.
I found myself using the same set of options over and over, e.g. like so:
$ ptw --nobeep --poll --ignore=.tox -- \
--import . tests -sv \
--cov=some.package --cov-report=term \
--cov-report=html --cov-config=tox.ini
Maybe we could support loading the defaults from `~/.ptwrc`, which could look like this:
[pytest-watch]
nobeep = true
poll = true
ignore = .tox
addopts = --import . --cov-report=term --cov-report=html --cov-config=tox.ini -sv
and then the command above would be shortened to just
$ ptw -- --cov=some.package # default options, with coverage
$ ptw # default options, no coverage
Maybe it could also look for a local `./.ptwrc` file first (in the current folder).
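A sketch of how that lookup could work with the standard library (file names and section name as proposed above; none of this is implemented in ptw):

```python
import os
from configparser import ConfigParser

def load_ptwrc(paths=None):
    """Read [pytest-watch] defaults from the first .ptwrc found.

    Checks ./.ptwrc before ~/.ptwrc, as proposed above.
    """
    if paths is None:
        paths = [os.path.join(os.getcwd(), ".ptwrc"),
                 os.path.expanduser("~/.ptwrc")]
    parser = ConfigParser()
    for path in paths:
        if parser.read(path):  # read() returns the files it parsed
            break
    if parser.has_section("pytest-watch"):
        return dict(parser.items("pytest-watch"))
    return {}
```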
If possible, I'd like to add a new flag to pytest-watch, `--nowatchdirs`, matching the naming of pytest's `--norecursedirs`.
I've been looking at http://hypothesis.readthedocs.org/en/latest/, a QuickCheck-style tool that automatically generates random inputs for test functions to ultimately make software more robust. Part of its processing involves creating modules on the fly at the root of the main pytest process, so pytest-watch picks up the new modules as they are created and never stops.
If I could include:
--nowatchdirs ".hypothesis"
this would allow for such operation. Do you have any thoughts?
See this comment and the surrounding discussion for details.
Python 3.5.1, Windows 7
The Windows executables appear to be broken.
Also, the readme is misleading:
Note: It can also be run using its full name py.test.watch.
`py.test.watch` does not in fact exist:
>pip install pytest-watch
>ptw
failed to create process
>py.test.watch
'py.test.watch' is not recognized as an internal or external command, operable program or batch file.
>venv/Scripts/ptw.exe
failed to create process
>venv/Scripts/pytest-watch.exe
failed to create process
>python venv/Scripts/pytest-watch-script.py
this works!
Quite often I invoke pytest with the `--ipdb` option so I can see what is wrong with my code on the first failure.
So I invoked pytest-watch with `ptw -- --ipdb`. The problem is that if, after dropping into ipdb, I modify a test or a file, ptw starts a new session and the one with the ipdb console goes to the background. In the end I wind up with dozens of ipdb instances running.
Is it possible to block ptw if the current session has dropped into ipdb?
Even though this project runs pytest automagically, I still need to cancel and re-run pretty often, because I like to run it with different flags.
E.g.: I like to increase/decrease verbosity or toggle pdb.
I think it would be nice to do this while pytest-watch is running, using keyboard shortcuts. For example:
Pressing P could toggle pdb.
Pressing V (verbose) or Q (quiet) could switch between `-q`, `-v`, and `-vv`.
Perhaps with `--`. For example:
$ pytest-watch -- -x
Let's continue the discussion on how to improve the current single-level `--ignore` (alternatively, `--norecursedirs`). #6 was the original issue.
The core problem is that we need a way to exclude directory trees from the watch list. If we don't, `ptw` can slow to a crawl walking the entire tree, or worse, use up the system's file resources.
Not all operating systems handle recursive watching natively, so solutions like inotify (Linux) and kqueues (Mac) add a new watch to each subdirectory instead of just one at the root. (Note that some systems do, like the Windows API and Mac's FSEvents.)
So we need to balance simplicity and efficiency. As mentioned above, not all platforms need a custom recursive solution, and using one in those cases hurts instead of helps.
Another idea is to shop around. If watchdog or another third-party library exposes an efficient cross-platform solution (recursive watching/ignoring on Linux, and `PatternMatchingEventHandler` or a regex solution like #7 for Windows and Mac), we could pass it along instead of re-implementing it.
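Whatever backend is chosen, the filtering itself can stay simple; a sketch of a tree-level ignore check (illustrative helper, not ptw's current logic):

```python
import os

def in_ignored_tree(path, ignore_dirs):
    """True if any path component matches an ignored directory name,
    i.e. the file lives somewhere under an ignored tree."""
    parts = os.path.normpath(path).split(os.sep)
    return any(d in parts for d in ignore_dirs)
```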
The traceback after `--collect-only` fails is not the same as when calling `py.test --collect-only` directly.
With `ptw project-dir`:
Error: Could not run --collect-only to find the pytest config file. Trying again without silencing stdout...
Traceback (most recent call last):
File "…/pyenv/project/bin/ptw", line 9, in <module>
load_entry_point('pytest-watch', 'console_scripts', 'ptw')()
File "…/pytest-watch/pytest_watch/command.py", line 83, in main
if not merge_config(args, pytest_args, verbose=args['--verbose']):
File "…/pytest-watch/pytest_watch/config.py", line 86, in merge_config
config_path = _collect_config(pytest_args, silent)
File "…/pytest-watch/pytest_watch/config.py", line 78, in _collect_config
return _run_pytest_collect(pytest_args)
File "…/pytest-watch/pytest_watch/config.py", line 52, in _run_pytest_collect
exit_code = pytest.main(argv, plugins=[collect_config_plugin])
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 39, in main
config = _prepareconfig(args, plugins)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 118, in _prepareconfig
pluginmanager=pluginmanager, args=args)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 249, in _wrapped_call
wrap_controller.send(call_outcome)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/helpconfig.py", line 28, in pytest_cmdline_parse
config = outcome.get_result()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
raise ex[1].with_traceback(ex[2])
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
self.result = func()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
res = hook_impl.function(*args)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 861, in pytest_cmdline_parse
self.parse(args)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 966, in parse
self._preparse(args, addopts=addopts)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 937, in _preparse
args=args, parser=self._parser)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 253, in _wrapped_call
return call_outcome.get_result()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
raise ex[1].with_traceback(ex[2])
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
self.result = func()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
res = hook_impl.function(*args)
File "…/pyenv/project/lib/python3.5/site-packages/pytest_django/plugin.py", line 238, in pytest_load_initial_conftests
_setup_django()
File "…/pyenv/project/lib/python3.5/site-packages/pytest_django/plugin.py", line 134, in _setup_django
django.setup()
File "…/django/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "…/django/django/apps/registry.py", line 78, in populate
raise RuntimeError("populate() isn't reentrant")
RuntimeError: populate() isn't reentrant
With `py.test --collect-only project-dir`:
Traceback (most recent call last):
File "…/pyenv/project/bin/py.test", line 11, in <module>
sys.exit(main())
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 39, in main
config = _prepareconfig(args, plugins)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 118, in _prepareconfig
pluginmanager=pluginmanager, args=args)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 249, in _wrapped_call
wrap_controller.send(call_outcome)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/helpconfig.py", line 28, in pytest_cmdline_parse
config = outcome.get_result()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
raise ex[1].with_traceback(ex[2])
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
self.result = func()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
res = hook_impl.function(*args)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 861, in pytest_cmdline_parse
self.parse(args)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 966, in parse
self._preparse(args, addopts=addopts)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 937, in _preparse
args=args, parser=self._parser)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 253, in _wrapped_call
return call_outcome.get_result()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
raise ex[1].with_traceback(ex[2])
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
self.result = func()
File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
res = hook_impl.function(*args)
File "…/pyenv/project/lib/python3.5/site-packages/pytest_django/plugin.py", line 238, in pytest_load_initial_conftests
_setup_django()
File "…/pyenv/project/lib/python3.5/site-packages/pytest_django/plugin.py", line 134, in _setup_django
django.setup()
File "…/django/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "…/django/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "…/django/django/apps/config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "…/pyenv/project/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 658, in exec_module
File "<frozen importlib._bootstrap_external>", line 764, in get_code
File "<frozen importlib._bootstrap_external>", line 724, in source_to_code
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "…/app/models.py", line 2082
import bpdb
^
SyntaxError: invalid syntax
Things like `ptw {some ptw options}` work. So does `ptw {some ptw options} -- -x`, and the latter becomes the same as `pytest -x`.
But I want to test just a specific file. Without ptw it would be `pytest tests/test_specificfile.py`, and that ignores `tests/otherfile.py` etc.
But this doesn't work: `ptw {some ptw options} -- tests/test_specificfile.py` :(
Flycheck for Python generates a temp file for each change made in a file, even if the file has not been saved. So my_file.py will generate flycheck_my_file.py. As far as I can tell, there is no way to change the path or extension of these files.
This creates a problem with pytest-watch: whenever I change anything, it runs the tests even if I haven't saved the file. I would really like it to run only on file save. What would be nice is if `--ignore` could also ignore files by pattern, similar to how .gitignore works. Then I could run `ptw --ignore flycheck*` and not trigger a test when the flycheck files are generated.
Regardless, I am loving this tool. Thanks for developing it.
I just tried this on my project and it looks good initially. However, once I update any file (i.e. re-save), pytest-watch just performs run after run after run... until I Ctrl-C.
It should definitely not do that.
Either there is a loop in the tool itself, or it detects changes to files that are generated during the build process and thus always attempts to rebuild.
The latter is unlikely, however, since it does not start looping until the very first change is detected. Furthermore, I checked which files had changed in the directory since the "original" run of pytest-watch, and it was only the file I actually changed plus its corresponding `__pycache__/*.pyc` file. And that file is very likely not regenerated if the source file didn't change either.
It would be nice if the changed file was displayed in the console before the command is rerun.
Environment:
Windows7, Python 3.5.0 (64-bit), pytest 2.8.3, py 1.4.31, pytest-watch 3.8.0, watchdog 0.8.3
We do a lot of our development in docker, and the events for changed files are generated in the host OS instead of inside docker, so ptw doesn't see that files have changed. Do you have any interest in adding a polling option, that will just scan the folder for modified file times and trigger tests? I may be able to write the code if you are.
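The core of a polling fallback is just an mtime snapshot diff; a minimal sketch (note that watchdog itself also ships `watchdog.observers.polling.PollingObserver`, which may be the easier integration point):

```python
import os

def snapshot(root, exts=(".py",)):
    """Walk the tree and record the mtime of every watched file."""
    times = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                times[path] = os.path.getmtime(path)
    return times

def changed(before, after):
    """Paths added, removed, or modified between two snapshots."""
    return sorted(p for p in set(before) | set(after)
                  if before.get(p) != after.get(p))
```

A loop would take a snapshot every second or so and trigger a test run whenever `changed()` is non-empty.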
The proposal is to send `SIGTERM` to the pytest process when pytest-watch observes a file change, and then wait for the process to finish before starting the new cycle.
This will solve #23 as well as any other quirky behavior that comes from having two pytest instances running the same test suite.
This could also help with #21. The underlying question is what to do when a test initiates an interactive session like `pdb`. If `pdb` ignores our termination signal, we're done. If not, we may also need a `--full` CLI argument to tell pytest-watch to wait out the full test session before starting the new one. Similar to how pytest automatically adds `-s` when you use `--pdb`, pytest-watch could automatically add `--full` when `--pdb` is present in pytest's argument list or in the config file.
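The terminate-then-wait cycle is only a few lines with the standard library; a sketch (the function name is illustrative, not ptw's actual code):

```python
import subprocess

def restart_cycle(proc, new_args):
    """Terminate the running test process, wait for it, then start the next run."""
    if proc is not None and proc.poll() is None:
        proc.terminate()  # SIGTERM on POSIX, TerminateProcess on Windows
        proc.wait()       # ensures only one pytest instance runs at a time
    return subprocess.Popen(new_args)
```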
I'd love some guidance on how to get pytest's `--last-failed` to play nicely with ptw. Right now, if the last failed test succeeds, ptw sits patiently until the next file change.
What I'd like is for it to run normally with last-failed, but when all of the last-failed tests succeed, rerun all tests. Any suggestions?
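One possible workaround, untested: point `--runner` at a small wrapper that runs the last-failed selection first and, only when it passes, runs the whole suite. The control flow would be roughly:

```python
import subprocess

def run_focused_then_full(last_failed_cmd, full_cmd):
    """Run the narrow selection; if it passes, run the full suite.

    e.g. last_failed_cmd=["pytest", "--last-failed"], full_cmd=["pytest"]
    (the commands shown are illustrative).
    """
    code = subprocess.call(last_failed_cmd)
    if code == 0:
        return subprocess.call(full_cmd)
    return code
```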
From the pytest docs:
usage: py.test [options] [file_or_dir] [file_or_dir] [...]
positional arguments:
  file_or_dir
general:
  -c file    load configuration from `file` instead of trying to locate one of the implicit configuration files.
Both the `file_or_dir` and `-c` options, when supplied, can effectively change the directory tests get loaded from.
`ptw` should also try to locate the `pytest.ini` file from there, which it doesn't, since the args list is not passed while locating the ini path. Maybe we should do that?
`--collect-only`
One problem is that some plugins (like pytest-cov) are a bit odd (or maybe they have some reason) and don't respect the `--collect-only` flag. So if we add all args, it might not be as much of a dry run as we would like, and it may have side effects (e.g. pytest-cov generating coverage for the collected tests).
On the other hand, it might not be too much of a side effect, as we are going to run the tests anyway in the next step. But we should still mention this in the docs, lest someone want to disable it.
This is the most desirable result. But parsing and trying to guess the important parameters makes `ptw` less robust and prone to breaking with `pytest` changes, not to mention the code repetition from `pytest`.
The following fails for me (silently, with exit code 2), similar to #43.

```
ptw project ~/.pyenv/versions/3.5.1/lib/python3.5/unittest/ -- --custom-option
```

Running it directly through py.test:

```
% py.test project ~/.pyenv/versions/3.5.1/lib/python3.5/unittest --custom-option http://localhost:9000/
usage: py.test [options] [file_or_dir] [file_or_dir] [...]
py.test: error: unrecognized arguments: --custom-option http://localhost:9000/
inifile: None
rootdir: /home/user
```
It looks like py.test does not handle dirs outside of the cwd properly, and it should not use plugins from there anyway. Therefore the watch dirs should not be passed to it.
I have `pytest-watch` installed globally, and I used to use it to run `py.test` from a virtual environment. Since updating to 4.0.0, this no longer works.
As a workaround I now install `pytest-watch` inside the virtualenv as well, but I'd prefer not having to do so for each virtualenv.
When `ptw` is running, killing it (`pkill ptw`) will not terminate the running `py.test` process.
I think pytest-watch should forward the signal to py.test (or always kill it with `INT` or `TERM`).
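A rough sketch of the forwarding idea (not ptw's actual code; POSIX signals assumed):

```python
import signal
import subprocess

def run_forwarding_signals(cmd):
    """Start the test process and forward SIGINT/SIGTERM to it,
    so killing the watcher also stops the running py.test."""
    proc = subprocess.Popen(cmd)

    def forward(signum, frame):
        proc.send_signal(signum)  # relay the signal to the child

    signal.signal(signal.SIGINT, forward)
    signal.signal(signal.SIGTERM, forward)
    return proc.wait()
```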
ptw currently has `--onexit`, but no `--onstart`; there should be no reason for this asymmetrical design. There is already `--beforerun`, but it runs before every run of pytest, so it isn't a good candidate for initializing something just once.
`--onstart` could be useful for starting things like an external notifier that should run for the whole duration of ptw.
Adding it should be trivial.
With `ptw dir -- path/to/test_file.py`, ptw will include `dir` in the py.test run:

```
Running: py.test dir path/to/test_file.py
```

This, however, causes py.test to run the whole directory, and not only the specified test file. This was not the case with 3.10.0, which does:

```
Running: py.test path/to/test_file.py
```

Btw: I found the best method is to specify `testpaths = path/to/tests` with the pytest options in `setup.cfg`, which gets used when running just `py.test`.
If this was done intentionally, I think it should only happen when there are no py.test args provided, but that feels a bit too magic.
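For reference, the `testpaths` workaround mentioned above looks like this in `setup.cfg` (the section name assumes a pytest version that reads `[tool:pytest]`; older versions used `[pytest]`):

```ini
# setup.cfg
[tool:pytest]
testpaths = path/to/tests
```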
Instead of only calling `py.test`, it would be nice to be able to configure a hook to run before it.
The workaround appears to be creating a `py.test` wrapper script that does this, and prepending it via `$PATH`. But it would be more comfortable to configure the path to `py.test` itself, which could then be this wrapper script.
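The wrapper-script workaround could be sketched like this (the pre-hook and the delegation are hypothetical; in practice the script would sit earlier on `$PATH` than the real `py.test`):

```python
import subprocess

def wrapped_pytest(argv, prehook=None, pytest_cmd='py.test'):
    # Run an optional setup hook first, then hand every argument
    # through to the real py.test and propagate its exit code.
    if prehook is not None:
        prehook()
    return subprocess.call([pytest_cmd] + list(argv))
```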
Optional parameters like `--ignore` and others don't seem to be documented right now. At least, I was not able to find them. So it would be better to put them in the README.
Hi! Checking the docs I see that you can run part of the tests by specifying the test files or the test directories. However, it would be great if something like this could happen:
Imagine a large project in which the whole unit test suite takes several seconds to run. In this situation, it can be annoying to run the whole suite on each change. Instead, it would be great to run only the tests related to the code that is being modified. To make this clearer, here is an example project structure:
```
my-project
|- my-project
|  |- source1.py
|  |- source2.py
|- tests
   |- ut
      |- test_source1.py
      |- test_source2.py
```
In this case, when developing, when I modify `source1.py` or `test_source1.py` I would like to run only the tests located in `test_source1.py`, not the ones in `test_source2.py`. I think that naming the test files after the source files, prefixed with `test_`, is a common practice and could help here.
Do you think that this or a similar approach is possible? I'm open to create a PR myself if you think it is appropriate. This feature would make the development and the TDD cycle much faster 🙂
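Under the naming convention described above, mapping a changed file to its related test file could be as simple as this sketch (the helper name and default test directory are illustrative):

```python
import os

def related_test(changed_path, tests_dir=os.path.join('tests', 'ut')):
    # source1.py -> tests/ut/test_source1.py; a changed test file that
    # already starts with 'test_' maps to itself inside tests_dir.
    name = os.path.basename(changed_path)
    if not name.startswith('test_'):
        name = 'test_' + name
    return os.path.join(tests_dir, name)
```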
There's a growing need to be sure the CLI arguments all play nicely together, that it's cross-platform, and runs on Python 2.7 and 3.4+.
Funny enough, I'm not exactly sure how to approach it here.
Ideas?
It would be great if:
`ptw [options] [<directory> ...]`
rather than:
`ptw [options] [<directory>]`
I managed to patch this for my personal use.
But I hope this could be done in a simpler way with a plugin.
How about changing your main logic from invoking `py.test ...` into a pytest plugin, adding `entry_points={ 'pytest11': ... }` to your `setup.py`? (like other `pytest-something` plugins do)
Anyway, this is my wish:
`py.test --watch --onfail='...' --onpass='...' tests`
for watching and testing `tests/`, or
`py.test --watch-dir=mypackage --onfail='...' --onpass='...' tests examples`
for watching `mypackage/` but testing `tests/` and `examples/`.
Or some better interface could be possible...
Thanks for reading this and releasing v2!
Hi,
Would you like to move the plugin under the pytest-dev org? The idea of such a move is just to increase the visibility of the plugin and eventually share maintenance, but as the original author you still retain full control over the implementation and oversight of the plugin. You can read more in Submitting plugins to pytest-dev.
Cheers!