charmed-kubernetes / pytest-operator
License: Apache License 2.0
When using the include param to render_charm() with a path several directory levels deep, in order to selectively apply templating to specific files, the parent directories have to be explicitly included as well or they will not be traversed in the walk. Any directory which is a prefix of a path in include should be implicitly included.
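For illustration, a minimal sketch of the implicit-prefix expansion being requested, assuming include holds POSIX-style relative paths (the helper name is hypothetical):

from pathlib import PurePosixPath

def expand_include(include):
    """Add every parent directory of each included path so the walk descends."""
    expanded = set()
    for entry in include:
        path = PurePosixPath(entry)
        expanded.add(str(path))
        expanded.update(str(parent) for parent in path.parents if str(parent) != ".")
    return expanded

# expand_include(["src/templates/config.j2"])
# -> {"src/templates/config.j2", "src/templates", "src"}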
I have recently heard that sometimes, when running integration tests locally, it can be handy to run the suite in a separate controller.
At the moment OpsTest handles the creation of a temporary testing model; could we do the same with a temporary testing controller?
When I run itests with tox -e integration-machine -- --keep-models, no models are kept.
Our tests frequently need to run commands on units to do things like interact with kubectl
or issue workload queries (e.g., prometheus queries). There's a bit of boilerplate that is associated with that, such as: selecting a unit to run on, checking the response status and exit code, ensuring a reasonable timeout, etc. It would be nice to have a helper around that, and examples can already be found in kube-state-metrics-operator and kubernetes-master.
Definition of done:
- a juju_run helper is added to OpsTest
- f"{app_name}/leader" can be used to select the leader unit of that app
- TimeoutError should be raised if the job didn't complete

Stretch goal: a kubectl wrapper that defaults to "kubernetes-master/leader" for the unit and prepends "kubectl --kubeconfig=/root/.kube/config" to the command.
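For illustration, a hedged sketch of what such a helper could look like, built on the existing OpsTest.juju CLI wrapper. The helper name and the leader shorthand are this issue's proposal, not an existing API, and the exec subcommand assumes Juju 3.x (older Juju spells it run):

import asyncio

async def juju_run(ops_test, unit: str, cmd: str, timeout: float = 60.0) -> str:
    """Run cmd on a unit; raise on nonzero exit, asyncio.TimeoutError on timeout."""
    # Recent juju resolves "app/leader" to the leader unit itself.
    rc, stdout, stderr = await asyncio.wait_for(
        ops_test.juju("exec", "--unit", unit, "--", cmd),
        timeout=timeout,
    )
    if rc != 0:
        raise AssertionError(f"{cmd!r} on {unit} failed ({rc}): {stderr}")
    return stdout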
Not sure if this is more of an issue or a feature request, but here's my thought. I can do:

juju deploy subordinate_charm

and the charm will be automagically deployed with scale 0/0. While if I do:

ops_test.deploy(subordinate_charm)

it will fail, citing "subordinate application must be deployed without units". The user is then required to do ops_test.deploy(subordinate_charm, num_units=0), and it will work.
Two problems with this:
- the behavior of ops_test.deploy should match that of juju deploy
- fixing it looks simple; something like if charm.is_subordinate would do.
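A minimal sketch of the check suggested above, assuming the charm's metadata.yaml is readable next to the charm source (the helper name is hypothetical):

from pathlib import Path

import yaml

def default_num_units(charm_dir: Path) -> int:
    """Subordinate charms must be deployed with zero units."""
    meta = yaml.safe_load((charm_dir / "metadata.yaml").read_text())
    return 0 if meta.get("subordinate", False) else 1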
Hey there, thanks for putting the plugin together. I have been trying to get this up and running, but as of right now, building on LXD seems to fail due to the automatic name generation approach, which uses the module __name__ and translates _ to -.
My operator is a machine operator (i.e. not K8s) and is structured such that the relevant tests are all based in <project_root>/tests/test_integration.py (and in this case are filtered with an integration marker).
Here is what I am seeing:
$ tox -r -e integration
...<tox setup snipped>...
integration run-test: commands[0] | pytest -v --tb native --show-capture=no --log-cli-level=INFO -s -m integration /home/techalchemy/git/artifactory-operator/tests
=================================================================== test session starts ===================================================================
platform linux -- Python 3.8.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/techalchemy/git/artifactory-operator/.tox/integration/bin/python
cachedir: .tox/integration/.pytest_cache
rootdir: /home/techalchemy/git/artifactory-operator, configfile: pyproject.toml
plugins: operator-0.8.1, asyncio-0.15.1
collected 65 items / 63 deselected / 2 selected
tests/test_integration.py::test_build_and_deploy /snap/bin/juju
/home/techalchemy/git/artifactory-operator/.tox/integration/bin/charmcraft
--------------------------------------------------------------------- live log setup ----------------------------------------------------------------------
INFO pytest_operator.plugin:plugin.py:155 Using tmp_path: /home/techalchemy/git/artifactory-operator/.tox/integration/tmp/pytest/tests.test-integration-k3aa0
INFO pytest_operator.plugin:plugin.py:217 Adding model overlord:tests.test-integration-k3aa
ERROR
tests/test_integration.py::test_bundle ERROR
========================================================================= ERRORS ==========================================================================
_________________________________________________________ ERROR at setup of test_build_and_deploy _________________________________________________________
Traceback (most recent call last):
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py", line 142, in wrapper
return loop.run_until_complete(setup())
File "/home/techalchemy/.pyenv/versions/3.8.7/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py", line 123, in setup
res = await gen_obj.__anext__()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 144, in ops_test
await ops_test._setup_model()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 218, in _setup_model
self.model = await self._controller.add_model(
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/controller.py", line 354, in add_model
model_info = await model_facade.CreateModel(
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/facade.py", line 480, in wrapper
reply = await f(*args, **kwargs)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/_client5.py", line 5515, in CreateModel
reply = await self.rpc(msg)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/facade.py", line 623, in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py", line 495, in rpc
raise errors.JujuAPIError(result)
juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.test-integration-k3aa" is not a valid name: model names may only contain lowercase letters, digits and hyphens
______________________________________________________________ ERROR at setup of test_bundle ______________________________________________________________
Traceback (most recent call last):
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py", line 142, in wrapper
return loop.run_until_complete(setup())
File "/home/techalchemy/.pyenv/versions/3.8.7/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py", line 123, in setup
res = await gen_obj.__anext__()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 144, in ops_test
await ops_test._setup_model()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 218, in _setup_model
self.model = await self._controller.add_model(
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/controller.py", line 354, in add_model
model_info = await model_facade.CreateModel(
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/facade.py", line 480, in wrapper
reply = await f(*args, **kwargs)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/_client5.py", line 5515, in CreateModel
reply = await self.rpc(msg)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/facade.py", line 623, in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py", line 495, in rpc
raise errors.JujuAPIError(result)
juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.test-integration-k3aa" is not a valid name: model names may only contain lowercase letters, digits and hyphens
==================================================================== warnings summary =====================================================================
.tox/integration/lib/python3.8/site-packages/_pytest/config/__init__.py:1233
/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/_pytest/config/__init__.py:1233: PytestConfigWarning: Unknown config option: flake8-ignore
self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")
.tox/integration/lib/python3.8/site-packages/_pytest/config/__init__.py:1233
/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/_pytest/config/__init__.py:1233: PytestConfigWarning: Unknown config option: plugins
self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================= short test summary info =================================================================
ERROR tests/test_integration.py::test_build_and_deploy - juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.t...
ERROR tests/test_integration.py::test_bundle - juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.test-integr...
====================================================== 63 deselected, 2 warnings, 2 errors in 0.23s =======================================================
ERROR: InvocationError for command /home/techalchemy/git/artifactory-operator/.tox/integration/bin/pytest -v --tb native --show-capture=no --log-cli-level=INFO -s -m integration tests (exited with code 1)
_________________________________________________________________________ summary _________________________________________________________________________
ERROR: integration: commands failed
The relevant code appears to be at pytest-operator/pytest_operator/plugin.py, lines 172 to 177 in 6efd734.
As a workaround, I have modified my tests/test_integration.py to include the following line:
_, _, __name__ = __name__.rpartition(".")
And this seems to have provided a temporary solution.
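The same idea could live in the plugin itself; a hedged sketch of name sanitization that satisfies Juju's rule (lowercase letters, digits, and hyphens), with a hypothetical helper name:

import re

def sanitize_model_name(module_name: str) -> str:
    """Drop the package prefix and replace characters Juju rejects with hyphens."""
    short = module_name.rpartition(".")[2]
    return re.sub(r"[^a-z0-9-]", "-", short.replace("_", "-").lower())

# sanitize_model_name("tests.test_integration") -> "test-integration"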
Was doing some testing with pytest-operator and tox earlier today and noticed that it pulls in version 0.7.0 of charmcraft rather than the latest 1.0.0.
This is my tox env:
[testenv:integration]
deps =
juju
pytest
pytest-operator
ipdb
commands = pytest -v --tb native --show-capture=no --log-cli-level=INFO -s {posargs:tests/integration}
And contents of venv:
ls .tox/integration/lib/python3.9/site-packages | grep charmcra
drwxrwxr-x - jon jon 28 May 15:39 -- charmcraft
drwxrwxr-x - jon jon 28 May 15:39 -- charmcraft-0.7.0.dist-info
Thanks! :)
Thanks for creating and maintaining the pytest operator; it makes integration tests much easier! This is a fairly broad feature request (or potentially a bug report). Recently I have been testing a charm on kubernetes that depends on another charm that can only run on lxd. The first thing I tried was to have a controller running on lxd and use add-k8s to also have the lxd controller manage the kubernetes charm. Then I used the --cloud and --controller parameters to request that pytest-operator use the lxd controller and kubernetes cloud. This resulted in the following error (I also tried this on GitHub Actions with the same result; it looks like some weird unauthorized error):
tox -e integration -- --jenkins-agent-image localhost:32000/jenkins-agent-k8s:85f19a452481f845507e9457796022a751aa039b --cloud micro-lxd --controller lxd --jenkins-controller-name lxd --jenkins-model-name jenkins --jenkins-unit-number 1
integration installed: asttokens==2.0.8,attrs==22.1.0,backcall==0.2.0,bcrypt==3.2.2,cachetools==5.2.0,certifi==2022.6.15,cffi==1.15.1,charset-normalizer==2.1.0,cryptography==37.0.4,decorator==5.1.1,executing==0.10.0,google-auth==2.10.0,idna==3.3,iniconfig==1.1.1,ipdb==0.13.9,ipython==8.4.0,jedi==0.18.1,Jinja2==3.1.2,juju==3.0.1,jujubundlelib==0.5.7,kubernetes==24.2.0,macaroonbakery==1.3.1,MarkupSafe==2.1.1,matplotlib-inline==0.1.6,multi-key-dict==2.0.3,mypy-extensions==0.4.3,oauthlib==3.2.0,ops==1.5.2,packaging==21.3,paramiko==2.11.0,parso==0.8.3,pbr==5.10.0,pexpect==4.8.0,pickleshare==0.7.5,pluggy==1.0.0,prompt-toolkit==3.0.30,protobuf==3.20.1,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pycparser==2.21,Pygments==2.13.0,pymacaroons==0.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,pyRFC3339==1.1,pytest==7.1.2,pytest-asyncio==0.19.0,pytest-operator==0.22.0,python-dateutil==2.8.2,python-jenkins==1.7.0,pytz==2022.2.1,PyYAML==6.0,requests==2.28.1,requests-oauthlib==1.3.1,rsa==4.9,six==1.16.0,stack-data==0.4.0,theblues==0.5.2,toml==0.10.2,tomli==2.0.1,toposort==1.7,traitlets==5.3.0,typing-inspect==0.8.0,typing_extensions==4.3.0,urllib3==1.26.11,wcwidth==0.2.5,websocket-client==1.3.3,websockets==10.3
integration run-test-pre: PYTHONHASHSEED='2230270510'
integration run-test: commands[0] | pytest -v --tb native --ignore=/home/jdkandersson/src/jenkins-agent-operator/tests/unit --log-cli-level=INFO -s --jenkins-agent-image localhost:32000/jenkins-agent-k8s:85f19a452481f845507e9457796022a751aa039b --cloud micro-lxd --controller lxd --jenkins-controller-name lxd --jenkins-model-name jenkins --jenkins-unit-number 1
===================================================================================================== test session starts =====================================================================================================
platform linux -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0 -- /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/bin/python
cachedir: .tox/integration/.pytest_cache
rootdir: /home/jdkandersson/src/jenkins-agent-operator, configfile: pyproject.toml
plugins: operator-0.22.0, asyncio-0.19.0
asyncio: mode=strict
collected 2 items
tests/integration/test_charm.py::test_active
------------------------------------------------------------------------------------------------------- live log setup --------------------------------------------------------------------------------------------------------
INFO pytest_operator.plugin:plugin.py:625 Adding model lxd:test-charm-cmhu on cloud micro-lxd
ERROR
tests/integration/test_charm.py::test_build_succeeds ERROR
=========================================================================================================== ERRORS ============================================================================================================
________________________________________________________________________________________________ ERROR at setup of test_active ________________________________________________________________________________________________
Traceback (most recent call last):
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 293, in _asyncgen_fixture_wrapper
result = event_loop.run_until_complete(setup())
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 275, in setup
res = await gen_obj.__anext__()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 231, in ops_test
await ops_test._setup_model()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 690, in _setup_model
model_state = await self._add_model(
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 627, in _add_model
model = await controller.add_model(model_name, cloud_name, **kwargs)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/controller.py", line 342, in add_model
model_info = await model_facade.CreateModel(
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/facade.py", line 481, in wrapper
reply = await f(*args, **kwargs)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/old_clients/_client9.py", line 3158, in CreateModel
reply = await self.rpc(msg)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/facade.py", line 654, in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/connection.py", line 634, in rpc
raise errors.JujuAPIError(result)
juju.errors.JujuAPIError: failed to open kubernetes client: unable to determine legacy status for namespace test-charm-cmhu: Unauthorized
----------------------------------------------------------------------------------------------------- Captured log setup ------------------------------------------------------------------------------------------------------
INFO pytest_operator.plugin:plugin.py:625 Adding model lxd:test-charm-cmhu on cloud micro-lxd
____________________________________________________________________________________________ ERROR at setup of test_build_succeeds ____________________________________________________________________________________________
Traceback (most recent call last):
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 293, in _asyncgen_fixture_wrapper
result = event_loop.run_until_complete(setup())
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 275, in setup
res = await gen_obj.__anext__()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 231, in ops_test
await ops_test._setup_model()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 690, in _setup_model
model_state = await self._add_model(
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 627, in _add_model
model = await controller.add_model(model_name, cloud_name, **kwargs)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/controller.py", line 342, in add_model
model_info = await model_facade.CreateModel(
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/facade.py", line 481, in wrapper
reply = await f(*args, **kwargs)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/old_clients/_client9.py", line 3158, in CreateModel
reply = await self.rpc(msg)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/facade.py", line 654, in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/connection.py", line 634, in rpc
raise errors.JujuAPIError(result)
juju.errors.JujuAPIError: failed to open kubernetes client: unable to determine legacy status for namespace test-charm-cmhu: Unauthorized
=================================================================================================== short test summary info ===================================================================================================
ERROR tests/integration/test_charm.py::test_active - juju.errors.JujuAPIError: failed to open kubernetes client: unable to determine legacy status for namespace test-charm-cmhu: Unauthorized
ERROR tests/integration/test_charm.py::test_build_succeeds - juju.errors.JujuAPIError: failed to open kubernetes client: unable to determine legacy status for namespace test-charm-cmhu: Unauthorized
====================================================================================================== 2 errors in 1.01s ======================================================================================================
Task was destroyed but it is pending!
task: <Task pending name='Task-21' coro=<WebSocketCommonProtocol.recv() done, defined at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:486> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[create_task_with_handler.<locals>._task_result_exp_handler(task_name='tmp', logger=<Logger juju....ion (WARNING)>)() at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/jasyncio.py:64]>
Task was destroyed but it is pending!
task: <Task pending name='Task-22' coro=<Event.wait() done, defined at /usr/lib/python3.10/asyncio/locks.py:201> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-4' coro=<WebSocketCommonProtocol.transfer_data() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:945> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[Task.task_wakeup()]>
Task was destroyed but it is pending!
task: <Task pending name='Task-5' coro=<WebSocketCommonProtocol.keepalive_ping() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:1234> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-6' coro=<WebSocketCommonProtocol.close_connection() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:1289> wait_for=<Task pending name='Task-4' coro=<WebSocketCommonProtocol.transfer_data() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:945> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-7' coro=<Connection._receiver() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/connection.py:531> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-12' coro=<Connection._pinger() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/connection.py:579> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Exception ignored in: <coroutine object WebSocketCommonProtocol.close_connection at 0x7fe9f6ef7450>
Traceback (most recent call last):
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py", line 1329, in close_connection
await self.close_transport()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py", line 1347, in close_transport
if await self.wait_for_connection_lost():
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py", line 1372, in wait_for_connection_lost
await asyncio.wait_for(
File "/usr/lib/python3.10/asyncio/tasks.py", line 405, in wait_for
loop = events.get_running_loop()
RuntimeError: no running event loop
ERROR: InvocationError for command /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/bin/pytest -v --tb native --ignore=/home/jdkandersson/src/jenkins-agent-operator/tests/unit --log-cli-level=INFO -s --jenkins-agent-image localhost:32000/jenkins-agent-k8s:85f19a452481f845507e9457796022a751aa039b --cloud micro-lxd --controller lxd --jenkins-controller-name lxd --jenkins-model-name jenkins --jenkins-unit-number 1 (exited with code 1)
___________________________________________________________________________________________________________ summary ___________________________________________________________________________________________________________
ERROR: integration: commands failed
I was able to deploy charms to the kubernetes controlled from lxd fine using normal juju commands.
What I got working was to have the integration tests run on a separate microk8s controller and use cross-model and cross-controller relations. However, this required the use of ops_test.juju, because I could not find a way to create the relations natively (e.g. using ops_test.model.relate) if the relation is on another model (and in this case the model is on another controller).
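For reference, a hedged sketch of that CLI fallback, assuming an offer URL on the other controller; the names are illustrative, and juju relate is the pre-3.x spelling of juju integrate:

async def relate_cross_controller(ops_test, local_endpoint: str, offer_url: str):
    """Consume an offer from another controller and relate to it via the juju CLI."""
    rc, _, stderr = await ops_test.juju("consume", offer_url)
    assert rc == 0, stderr
    saas_name = offer_url.rsplit(".", 1)[-1]  # the offer name becomes the local SAAS name
    rc, _, stderr = await ops_test.juju("relate", local_endpoint, saas_name)
    assert rc == 0, stderr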
The ops_test fixture has module scope, and it is very convenient to have the model destroyed when I move from one test_*.py to another. However, I use the same *.charm in all my test_*.py files, so a fixture such as
@pytest.fixture(scope="module")
async def charm_under_test(ops_test: OpsTest) -> Path:
    """Charm used for integration testing."""
    return await ops_test.build_charm(".")
would still cause my charm to be built again and again as the tests progress through the test_*.py files.
It would be handy to be able to have a charm "persist" without tricks such as
global _path_to_built_charm
if _path_to_built_charm is None:
_path_to_built_charm = await ops_test.build_charm(".")
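A less hacky variant of the same idea, sketched under the assumption that a session-scoped container can outlive each module's OpsTest instance (the fixture names are illustrative):

from pathlib import Path

import pytest
from pytest_operator.plugin import OpsTest


@pytest.fixture(scope="session")
def charm_cache() -> dict:
    # Outlives every module-scoped ops_test instance in the session.
    return {}


@pytest.fixture(scope="module")
async def charm_under_test(ops_test: OpsTest, charm_cache: dict) -> Path:
    """Build the charm once per session and reuse the artifact afterwards."""
    if "path" not in charm_cache:
        charm_cache["path"] = await ops_test.build_charm(".")
    return charm_cache["path"]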
Which brings me to a question: I wonder if the design approach is:
- ops_test in module scope is desirable
- await ops_test.model.reset() at the end of every test function

Typically, we expect that pip install pytest-operator will also install pytest-asyncio, and pytest-asyncio adds a command line argument to pytest for --asyncio-mode (e.g. pytest --asyncio-mode=auto should be a valid command). However, after commit cfef8ea9927e4cc07ff69d042239e2612d1cfc84, this command only sometimes works. For example, running the following in an empty dir:
# pip install direct from commit "Adding option --no-deploy" (feature commit 2)
# BREAKS
for x in `seq 20 29`; do
python -m venv venv$x; source venv$x/bin/activate; python -m pip install -U pip -q
pip install pytest git+https://github.com/charmed-kubernetes/pytest-operator.git@cfef8ea9927e4cc07ff69d042239e2612d1cfc84 -q
./venv$x/bin/pytest --asyncio-mode=auto # <-- this should execute successfully
done
will result in 10 new venvs, some of which can successfully execute the pytest --asyncio-mode=auto command and some of which give the following error:
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --asyncio-mode=auto
inifile: None
rootdir: /tmp/pytest-asyncio-debug
This intermittent error is linked to the venv, not to execution of the pytest command. For example, if you create a venv and then execute pytest --asyncio-mode=auto multiple times, all executions will have the same result (either pass or fail). But if you then create multiple venvs and repeat this test, some venvs will pass and some will fail.
An example of installs that all work can be generated by using the previous commit:
# pip install direct from commit "Replace build ..." (feature commit 1)
# works
for x in `seq 11 20`; do
python -m venv venv$x; source venv$x/bin/activate; python -m pip install -U pip -q
pip install pytest git+https://github.com/charmed-kubernetes/pytest-operator.git@182f46d264c8e3ff94e74a0f863f94a4e269e254 -q
./venv$x/bin/pytest --asyncio-mode=auto
done
Specifying the asyncio_mode via a pytest.ini or pyproject.toml appears to work consistently. For example, in pyproject.toml this always works:
[tool.pytest.ini_options]
asyncio_mode = "auto"
Charmcraft is moving towards putting all data into charmcraft.yaml; it would be nice if pytest-operator supported that too. From a quick perusal, it looks like simply reading the data from charmcraft.yaml when metadata.yaml doesn't exist would work, as the plugin doesn't use any of the keys charmcraft rearranges.
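A hedged sketch of that fallback, assuming the plugin only needs top-level keys that appear unchanged in both files (the helper name is hypothetical):

from pathlib import Path

import yaml


def read_metadata(charm_dir: Path) -> dict:
    """Prefer metadata.yaml; fall back to charmcraft.yaml when it is absent."""
    path = charm_dir / "metadata.yaml"
    if not path.exists():
        path = charm_dir / "charmcraft.yaml"
    return yaml.safe_load(path.read_text())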
Hello Team,
In my integration tests, I'm not able to deploy a bundle using ops_test.model.deploy, as shown in the snippet below:
with ops_test.model_context(COS_MODEL_NAME):
await ops_test.model.deploy( # type: ignore[union-attr]
entity_url="https://charmhub.io/cos-lite",
trust=True,
)
The error I'm getting is:
File "/actions-runner/_work/sdcore-tests/sdcore-tests/tests/integration/fixtures.py", line 90, in deploy_cos_lite
await ops_test.model.deploy( # type: ignore[union-attr]
File "/actions-runner/_work/sdcore-tests/sdcore-tests/.tox/integration/lib/python3.10/site-packages/juju/model.py", line 1748, in deploy
await handler.fetch_plan(url, charm_origin, overlays=overlays)
File "/actions-runner/_work/sdcore-tests/sdcore-tests/.tox/integration/lib/python3.10/site-packages/juju/bundle.py", line 302, in fetch_plan
raise JujuError(self.plan.errors)
juju.errors.JujuError: ['', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '']
My env is:
A workaround is of course running the deployment using ops_test.run, but I'd rather use a proper way to deploy bundles.
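For completeness, a hedged sketch of that workaround, assuming the bare bundle name that juju deploy accepts on the CLI (COS_MODEL_NAME is from the snippet above):

rc, stdout, stderr = await ops_test.run(
    "juju", "deploy", "cos-lite", "--trust", "-m", COS_MODEL_NAME
)
assert rc == 0, stderr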
BR,
Bartek
With the recent update of pytest to 7.3.0 (see changelog), pytest-operator v0.22 is failing in some tests due to an error on setup:
Traceback (most recent call last):
File "/home/runner/work/iam-bundle/iam-bundle/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 149, in tmp_path_factory
return pytest.TempPathFactory(
TypeError: TempPathFactory.__init__() missing 2 required positional arguments: 'retention_count' and 'retention_policy'
Some examples of failing tests:
cos-lite bundle
iam-bundle
A temporary solution is to pin pytest to 7.2.2.
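A hedged compatibility sketch for the plugin's override, forwarding the new arguments only when the installed pytest requires them; the retention values here assume pytest's own defaults (retention_count=3, retention_policy="all"), and the function name is hypothetical:

import inspect

import pytest


def make_tmp_path_factory(request, given_basetemp):
    kwargs = dict(
        given_basetemp=given_basetemp,
        trace=request.config.trace.get("tmpdir"),
        _ispytest=True,
    )
    params = inspect.signature(pytest.TempPathFactory.__init__).parameters
    if "retention_count" in params:  # added in pytest 7.3
        kwargs.update(retention_count=3, retention_policy="all")
    return pytest.TempPathFactory(**kwargs)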
It seems that previous changes dropped support for the --model-config option, which set model-config via a passed YAML file.
Hello!
I'm running some tests on an OpenStack cloud and I've noticed that the ops_test fixture does not tear down the storage created by Juju on the model. AFAIK the destroy_storage parameter on the OpsTest.forget_model method could be the solution, but it is not currently exposed to the user. I couldn't come up with a nice API for exposing this feature in this library (maybe another fixture?). I'd be willing to implement this functionality, but I'm not sure how to expose it to users.
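One possible shape for exposing it, sketched as a plugin option; the flag name is hypothetical, and forget_model is assumed to accept destroy_storage as described above:

def pytest_addoption(parser):
    parser.addoption(
        "--destroy-storage",
        action="store_true",
        help="Also destroy storage Juju created in test models on teardown.",
    )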
def check_deps():
missing = []
for dep in ("juju", "charm", "charmcraft"):
res = subprocess.run(["which", dep])
if res.returncode != 0:
missing.append(dep)
if missing:
> raise RuntimeError(
"Missing dependenc{}: {}".format(
"y" if len(missing) == 1 else "ies",
", ".join(missing),
)
)
E RuntimeError: Missing dependency: charm
I ran into this while trying to set up integration tests. Given that charmcraft basically replaces charm-tools, it would be nice if I didn't need the latter installed. If we really need both, we could install them with pip in the virtualenv automatically.
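A hedged sketch of relaxing the check so that only genuinely required tools fail the run; the required/optional split is an assumption, not the plugin's current behavior:

import shutil
import warnings


def check_deps(required=("juju", "charmcraft"), optional=("charm",)):
    """Fail only on required tools; warn when optional ones are absent."""
    missing = [dep for dep in required if shutil.which(dep) is None]
    if missing:
        raise RuntimeError("Missing dependencies: " + ", ".join(missing))
    for dep in optional:
        if shutil.which(dep) is None:
            warnings.warn(f"optional dependency {dep!r} not found")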
With 1.1.0, charmcraft switched to classic confinement and doing builds inside a LXD container. This broke the ability for charms to reference local libraries in their requirements.txt during build, since those libraries are not visible within the LXD container. Referencing local libraries is required for the ops_test.build_lib() feature for testing charm libraries.
I'm not sure if this feature is there already (and I haven't found it), but it would be nice to have more native support for application config management (set, get).
At the moment the best way I could find to config-set is:
await ops_test.juju("config", APP_NAME, "key=value")
However, this makes unsetting keys impossible:
await ops_test.juju("config", APP_NAME, 'key=""')
--> 'key' is now '""' (quoted quotation marks), not None.
I solved this by doing
os.system(f'juju config {APP_NAME} key=""')
But this is an indication that a unit config wrapper would be very welcome :)
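A hedged sketch of such a wrapper over the CLI, assuming ops_test.juju returns a (returncode, stdout, stderr) tuple; passing key= with nothing after the equals sign avoids the quoted-quotes problem above:

async def set_config(ops_test, app_name: str, **values: str) -> None:
    """Set (or blank, via an empty string value) application config keys."""
    args = [f"{key}={value}" for key, value in values.items()]
    rc, _, stderr = await ops_test.juju("config", app_name, *args)
    assert rc == 0, f"juju config failed: {stderr}"

# set one key, blank another:
# await set_config(ops_test, APP_NAME, key="value", other="")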
I'm having issues running a simple test while using the ops_test fixture:
async def test_dummy(ops_test: OpsTest) -> None:
assert True
It tries to connect to 127.0.0.1:16443 while my local k8s config points to a completely different IP:
shipperizer in ~/shipperizer/iam-bundle on main λ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: *********************************************************************************
server: https://192.168.1.249:16443
name: microk8s-cluster
contexts:
- context:
cluster: microk8s-cluster
user: admin
name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
user:
token: ******************************************************
Below is the stack trace and execution:
shipperizer in ~/shipperizer/iam-bundle on main λ TESTING_KUBECONFIG=/home/shipperizer/.kube/config tox -e integration -- --keep-models -x -k test_dummy
integration create: /home/shipperizer/shipperizer/iam-bundle/.tox/integration
integration installdeps: -r/home/shipperizer/shipperizer/iam-bundle/integration-requirements.txt
integration installed: anyio==3.7.1,asttokens==2.2.1,backcall==0.2.0,bcrypt==4.0.1,cachetools==5.3.1,certifi==2023.7.22,cffi==1.15.1,charset-normalizer==3.2.0,cryptography==41.0.3,decorator==5.1.1,executing==1.2.0,google-auth==2.22.0,greenlet==2.0.2,h11==0.14.0,httpcore==0.17.3,httpx==0.24.1,idna==3.4,iniconfig==2.0.0,ipdb==0.13.13,ipython==8.14.0,jedi==0.19.0,Jinja2==3.1.2,juju==3.2.0.1,jujubundlelib==0.5.7,kubernetes==27.2.0,lightkube==0.14.0,lightkube-models==1.27.1.4,macaroonbakery==1.3.1,MarkupSafe==2.1.3,matplotlib-inline==0.1.6,mypy-extensions==1.0.0,oauthlib==3.2.2,ops==2.5.1,packaging==23.1,paramiko==2.12.0,parso==0.8.3,pexpect==4.8.0,pickleshare==0.7.5,playwright==1.37.0,pluggy==1.2.0,prompt-toolkit==3.0.39,protobuf==3.20.3,ptyprocess==0.7.0,pure-eval==0.2.2,pyasn1==0.5.0,pyasn1-modules==0.3.0,pycparser==2.21,pyee==9.0.4,Pygments==2.16.1,pymacaroons==0.13.0,PyNaCl==1.5.0,pyRFC3339==1.1,pytest==7.4.0,pytest-asyncio==0.21.1,pytest-base-url==2.0.0,pytest-operator==0.28.0,pytest-playwright==0.4.2,python-dateutil==2.8.2,python-slugify==8.0.1,pytz==2023.3,PyYAML==6.0.1,requests==2.31.0,requests-oauthlib==1.3.1,rsa==4.9,six==1.16.0,sniffio==1.3.0,stack-data==0.6.2,text-unidecode==1.3,theblues==0.5.2,toposort==1.10,traitlets==5.9.0,typing-inspect==0.9.0,typing_extensions==4.7.1,urllib3==1.26.16,wcwidth==0.2.6,websocket-client==1.6.1,websockets==11.0.3
integration run-test-pre: PYTHONHASHSEED='3322065513'
integration run-test: commands[0] | playwright install
integration run-test: commands[1] | pytest -v --tb native /home/shipperizer/shipperizer/iam-bundle/tests/integration --log-cli-level=INFO -s --keep-models -x -k test_dummy
=========================================================== test session starts ============================================================
platform linux -- Python 3.11.4, pytest-7.4.0, pluggy-1.2.0 -- /home/shipperizer/shipperizer/iam-bundle/.tox/integration/bin/python
cachedir: .tox/integration/.pytest_cache
rootdir: /home/shipperizer/shipperizer/iam-bundle
configfile: pyproject.toml
plugins: playwright-0.4.2, anyio-3.7.1, operator-0.28.0, base-url-2.0.0, asyncio-0.21.1
asyncio: mode=Mode.AUTO
collected 11 items / 10 deselected / 1 selected
tests/integration/test_bundle.py::test_dummy
-------------------------------------------------------------- live log setup --------------------------------------------------------------
WARNING urllib3.connectionpool:connectionpool.py:823 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))': /api/v1/namespaces/controller-microk8s/services/controller-service
WARNING urllib3.connectionpool:connectionpool.py:823 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))': /api/v1/namespaces/controller-microk8s/services/controller-service
WARNING urllib3.connectionpool:connectionpool.py:823 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))': /api/v1/namespaces/controller-microk8s/services/controller-service
ERROR
================================================================== ERRORS ==================================================================
_______________________________________________________ ERROR at setup of test_dummy _______________________________________________________
Traceback (most recent call last):
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/response.py", line 444, in _error_catcher
yield
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/response.py", line 567, in read
data = self._fp_read(amt) if not fp_closed else b""
^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/response.py", line 533, in _fp_read
return self._fp.read(amt) if amt is not None else self._fp.read()
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/http/client.py", line 479, in read
s = self.fp.read()
^^^^^^^^^^^^^^
File "/usr/lib/python3.11/socket.py", line 706, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
ConnectionResetError: [Errno 104] Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/connectionpool.py", line 734, in urlopen
response = self.ResponseCls.from_httplib(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/response.py", line 653, in from_httplib
resp = ResponseCls(
^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/response.py", line 266, in __init__
self._body = self.read(decode_content=decode_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/response.py", line 566, in read
with self._error_catcher():
File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/response.py", line 461, in _error_catcher
raise ProtocolError("Connection broken: %r" % e, e)
urllib3.exceptions.ProtocolError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/runner.py", line 341, in from_call
result: Optional[TResult] = func()
^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/runner.py", line 262, in <lambda>
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_hooks.py", line 433, in __call__
return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_manager.py", line 112, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_callers.py", line 155, in _multicall
return outcome.get_result()
^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_result.py", line 108, in get_result
raise exc.with_traceback(exc.__traceback__)
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_callers.py", line 80, in _multicall
res = hook_impl.function(*args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/runner.py", line 157, in pytest_runtest_setup
item.session._setupstate.setup(item)
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/runner.py", line 497, in setup
raise exc
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/runner.py", line 494, in setup
col.setup()
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/python.py", line 1791, in setup
self._request._fillfixtures()
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/fixtures.py", line 566, in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/fixtures.py", line 585, in getfixturevalue
fixturedef = self._get_active_fixturedef(argname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/fixtures.py", line 607, in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/fixtures.py", line 693, in _compute_fixture_value
fixturedef.execute(request=subrequest)
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/fixtures.py", line 1069, in execute
result = ihook.pytest_fixture_setup(fixturedef=self, request=request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_hooks.py", line 433, in __call__
return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_manager.py", line 112, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_callers.py", line 155, in _multicall
return outcome.get_result()
^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_result.py", line 108, in get_result
raise exc.with_traceback(exc.__traceback__)
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pluggy/_callers.py", line 80, in _multicall
res = hook_impl.function(*args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/fixtures.py", line 1123, in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/_pytest/fixtures.py", line 902, in call_fixture_func
fixture_result = fixturefunc(**kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pytest_asyncio/plugin.py", line 304, in _asyncgen_fixture_wrapper
result = event_loop.run_until_complete(setup())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pytest_asyncio/plugin.py", line 286, in setup
res = await gen_obj.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pytest_operator/plugin.py", line 231, in ops_test
await ops_test._setup_model()
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/pytest_operator/plugin.py", line 708, in _setup_model
await self._controller.connect(self.controller_name)
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/juju/controller.py", line 104, in connect
await self._connector.connect_controller(controller_name, **kwargs)
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/juju/client/connector.py", line 111, in connect_controller
await self.connect(
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/juju/client/connector.py", line 75, in connect
self._connection = await Connection.connect(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/juju/client/connection.py", line 338, in connect
self.proxy.connect()
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/juju/client/proxy/kubernetes/proxy.py", line 42, in connect
service = corev1.read_namespaced_service(self.service, self.namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/kubernetes/client/api/core_v1_api.py", line 25141, in read_namespaced_service
return self.read_namespaced_service_with_http_info(name, namespace, **kwargs) # noqa: E501
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/kubernetes/client/api/core_v1_api.py", line 25228, in read_namespaced_service_with_http_info
return self.api_client.call_api(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 373, in request
return self.rest_client.GET(url,
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/kubernetes/client/rest.py", line 241, in GET
return self.request("GET", url,
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/kubernetes/client/rest.py", line 214, in request
r = self.pool_manager.request(method, url,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/request.py", line 74, in request
return self.request_encode_url(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/request.py", line 96, in request_encode_url
return self.urlopen(method, url, **extra_kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/poolmanager.py", line 376, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/connectionpool.py", line 798, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/home/shipperizer/shipperizer/iam-bundle/.tox/integration/lib/python3.11/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=16443): Max retries exceeded with url: /api/v1/namespaces/controller-microk8s/services/controller-service (Caused by ProtocolError("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer')))
------------------------------------------------------------ Captured log setup ------------------------------------------------------------
WARNING urllib3.connectionpool:connectionpool.py:823 Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))': /api/v1/namespaces/controller-microk8s/services/controller-service
WARNING urllib3.connectionpool:connectionpool.py:823 Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))': /api/v1/namespaces/controller-microk8s/services/controller-service
WARNING urllib3.connectionpool:connectionpool.py:823 Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))': /api/v1/namespaces/controller-microk8s/services/controller-service
========================================================= short test summary info ==========================================================
ERROR tests/integration/test_bundle.py::test_dummy - urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=16443): Max retries exceeded with url: /api/v1/namespaces/c...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
===================================================== 10 deselected, 1 error in 0.02s ======================================================
ERROR: InvocationError for command /home/shipperizer/shipperizer/iam-bundle/.tox/integration/bin/pytest -v --tb native tests/integration --log-cli-level=INFO -s --keep-models -x -k test_dummy (exited with code 1)
_________________________________________________________________ summary __________________________________________________________________
ERROR: integration: commands failed
My juju environment seems to be working just fine:
shipperizer in ~/shipperizer/iam-bundle on main λ juju controllers
Use --refresh option with this command to see the latest information.
Controller Model User Access Cloud/Region Models Nodes HA Version
microk8s* bundle admin superuser microk8s/localhost 3 1 - 3.1.5
shipperizer in ~/shipperizer/iam-bundle on main λ juju clouds
Only clouds with registered credentials are shown.
There are more clouds, use --all to see them.
Clouds available on the controller:
Cloud Regions Default Type
microk8s 1 localhost k8s
Clouds available on the client:
Cloud Regions Default Type Credentials Source Description
localhost 1 localhost lxd 1 built-in LXD Container Hypervisor
microk8s 1 localhost k8s 1 built-in A Kubernetes Cluster
I found it to be a useful pattern to temporarily speed up the update-status hook firing rate.
This looks like a job for a context manager.
Proposal:
async with ops_test.fast_forward('10s'):
    # do stuff which might take some time to reflect on the workload status
    await ops_test.model.wait_for_idle(...)
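A hedged sketch of how such a context manager could be implemented on top of libjuju, assuming the update-status-hook-interval model-config key controls the firing rate:

from contextlib import asynccontextmanager


@asynccontextmanager
async def fast_forward(ops_test, interval: str = "10s"):
    """Temporarily lower the update-status interval, restoring it afterwards."""
    config = await ops_test.model.get_config()
    previous = config["update-status-hook-interval"].value  # libjuju ConfigValue
    await ops_test.model.set_config({"update-status-hook-interval": interval})
    try:
        yield
    finally:
        await ops_test.model.set_config({"update-status-hook-interval": str(previous)})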
I had a selenium-based test that was taking a long time to run and I got bored. I hit ctrl+c and the test exited, but I got this traceback:
Traceback (most recent call last):
File "/notebook-operators/.tox/integration/bin/pytest", line 8, in <module>
sys.exit(console_main())
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/config/__init__.py", line 185, in console_main
code = main()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/config/__init__.py", line 162, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
return wrap_session(config, _main)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/main.py", line 304, in wrap_session
config.hook.pytest_sessionfinish(
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish
outcome.get_result()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 103, in pytest_sessionfinish
session._setupstate.teardown_all()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 412, in teardown_all
self._pop_and_teardown()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 387, in _pop_and_teardown
self._teardown_with_finalization(colitem)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 405, in _teardown_with_finalization
self._callfinalizers(colitem)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 402, in _callfinalizers
raise exc
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 395, in _callfinalizers
fin()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/fixtures.py", line 1034, in finish
raise exc
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/fixtures.py", line 1027, in finish
func()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/fixtures.py", line 941, in _teardown_yield_fixture
next(it)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pytest_operator/plugin.py", line 133, in abort_on_fail
failed = request.node.failed
AttributeError: 'Function' object has no attribute 'failed'
Offhand, it looks like abort_on_fail doesn't specify a scope, function is the default scope, and Function in pytest doesn't have a failed attribute. I'm not familiar with the design of abort_on_fail, but it seems like maybe that fixture should be session- or module-scoped?
There is an error in the README.md file:
assert ops_test.applications["my-charm"].units[0].workload_status == "active"
should be
assert ops_test.model.applications["my-charm"].units[0].workload_status == "active"
It seems that pytest-operator version 0.17.0 breaks integration tests on teardown.
Failing job without pytest-operator being constrained: https://github.com/claudiubelu/waltz-integration-juju/runs/6065841857?check_suite_focus=true
Job passing after constraining pytest-operator<0.17.0: https://github.com/claudiubelu/waltz-integration-juju/runs/6073143526?check_suite_focus=true
When calling something like ops_test.build_charm(".", bases_index=0), the bases_index will be ignored, since a value of 0 evaluates to False here.
Please unlock usage without tox, and provide another tmp path when no TOX_ENV_DIR is provided. Here is the block that causes the issue:
pytest-operator/pytest_operator/plugin.py
Line 146 in 2c29fa8
@pytest.fixture(scope="session")
def tmp_path_factory(request):
    # Override temp path factory to create temp dirs under Tox env so that
    # confined snaps (e.g., charmcraft) can access them.
    return pytest.TempPathFactory(
        given_basetemp=Path(os.environ["TOX_ENV_DIR"]) / "tmp" / "pytest",
        trace=request.config.trace.get("tmpdir"),
        _ispytest=True,
    )
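A minimal sketch of what the requested fallback could look like, mirroring the fixture above (the conditional fallback is a suggestion, not current plugin behaviour; newer pytest versions would also need the retention arguments passed to TempPathFactory):
import os
from pathlib import Path

import pytest


@pytest.fixture(scope="session")
def tmp_path_factory(request):
    # Only redirect temp dirs under the Tox env when TOX_ENV_DIR is set;
    # otherwise defer to pytest's default basetemp (assumption).
    tox_env_dir = os.environ.get("TOX_ENV_DIR")
    basetemp = Path(tox_env_dir) / "tmp" / "pytest" if tox_env_dir else None
    return pytest.TempPathFactory(
        given_basetemp=basetemp,
        trace=request.config.trace.get("tmpdir"),
        _ispytest=True,
    )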
In a charm I'm developing, my itests are failing because of an uncaught websockets.exceptions.ConnectionClosed error.
I tried figuring out what was going on but had little luck. It looks like something goes wrong when trying to destroy the test model in ops_test._cleanup_models().
Attaching a full debug log.
$ tox -vve integration > itest_logs.txt
While running tests on a MAAS juju cloud where the model is named admin/mymodel according to the <user>/<model> convention, the ops_test.build_charm method doesn't work. This happens when a test uses ops_test.build_charm("."). Traceback logs are provided below.
@pytest.mark.abort_on_fail
@pytest.mark.skip_if_deployed
async def test_build_and_deploy(ops_test: OpsTest, series, sync_helper, provided_collectors, resources):
"""Build the charm-under-test and deploy it together with related charms.
Assert on the unit status before any relations/configurations take place.
"""
# Build and deploy charm from local source folder
> charm = await ops_test.build_charm(".")
tests/functional/test_charm.py:70:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.tox/func31/lib/python3.11/site-packages/pytest_operator/plugin.py:943: in build_charm
charms_dst_dir = self.tmp_path / "charms"
.tox/func31/lib/python3.11/site-packages/pytest_operator/plugin.py:518: in tmp_path
tmp_path = self._tmp_path_factory.mktemp(current_state.model_name)
.tox/func31/lib/python3.11/site-packages/_pytest/tmpdir.py:131: in mktemp
basename = self._ensure_relative_to_basetemp(basename)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = TempPathFactory(_given_basetemp=None, _trace=<pluggy._tracing.TagTracerSub object at 0x75b39e836d10>, _basetemp=PosixPath('/tmp/pytest-of-dashmage/pytest-0'), _retention_count=3, _retention_policy='all')
basename = 'admin/mymodel'
def _ensure_relative_to_basetemp(self, basename: str) -> str:
basename = os.path.normpath(basename)
if (self.getbasetemp() / basename).resolve().parent != self.getbasetemp():
> raise ValueError(f"{basename} is not a normalized and relative path")
E ValueError: admin/mymodel is not a normalized and relative path
From the logs, it looks like pytest-operator isn't able to create a new tmp directory due to the presence of the "/" in the model name.
Made changes here. Not sure if this might break things elsewhere since the name of the tmp directory is being changed.
def tmp_path(self) -> Path:
(...)
if current_state and current_state.tmp_path is None:
model_name = current_state.model_name
if "/" in current_state.model_name:
model_name = "-".join(model_name.split("/"))
tmp_path = self._tmp_path_factory.mktemp(model_name)
(...)
The docs reference mentioned at the bottom of pytest-operator page doesn't exist anymore.
I have an integration test that deletes a kubernetes pod using either the kubectl
command or lightkube
(the issue happens with both) as follows
# Deleting the primary pod using lightkube's AsyncClient
from lightkube import AsyncClient
from lightkube.resources.core_v1 import Pod

k8s_client = AsyncClient(namespace=ops_test.model_name)
await k8s_client.delete(Pod, name=replica_name)
The test suite passes successfully.
During pytest-operator's teardown, the following exception is thrown inconsistently:
INFO pytest_operator.plugin:plugin.py:477 Juju error logs:
controller-0: 14:09:52 ERROR juju.worker.caasapplicationprovisioner.runner exited "mongodb-k8s": Operation cannot be fulfilled on pods "mongodb-k8s-0": the object has been modified; please apply your changes to the latest version and try again
As a result, the model created by pytest-operator is not deleted and stays stuck in the destroying status until it is force-deleted manually.
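For models that end up wedged like this, a forced destroy is the usual manual escape hatch; a hedged sketch of scripting it (flag set as in Juju 2.9-era CLIs):
async def force_destroy_model(ops_test, model_name: str):
    # Forcibly remove a model stuck in "destroying"; --force/--no-wait skip
    # the cleanup steps that hang on errors like the one above.
    await ops_test.run(
        "juju", "destroy-model", model_name,
        "--force", "--no-wait", "--destroy-storage", "-y",
    )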
Environment:
pytest-operator 0.14.0
ubuntu-20.04 (also happening on ubuntu-22.04)
juju 2.9.32
1.24.0 (also happening on 1.23.6)

Greetings! I am currently writing some integration tests for charms that I am working on that use test bundles to check the stability of the charm (i.e. can it sync correctly with its companion charms). Unfortunately, I am not able to deploy the test bundles due to issues with juju-bundle:
Traceback (most recent call last):
File "/mnt/d/projects/work/charms/slurmdbd-operator/tests/integration/test_charm.py", line 31, in test_deploy_against_channel_latest_edge
await ops_test.deploy_bundle(_.name)
File "/mnt/d/projects/work/charms/slurmdbd-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 1134, in deploy_bundle
await self.run(*cmd, check=True)
File "/mnt/d/projects/work/charms/slurmdbd-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 588, in run
raise AssertionError(
AssertionError: Command ['juju-bundle', 'deploy', '--bundle', '/tmp/tmpbo7dj_dw', '--build', '--'] failed (1): Error: YAML Error: series: unknown variant `focal`, expected one of `kubernetes`, `oneiric`, `precise`, `quantal`, `raring`, `saucy`, `trusty`, `utopic`, `vivid`, `wily`, `xenial`, `yakkety`, `zesty`, `artful`, `bionic`, `cosmic`, `disco`, `eoan`, `win2012hvr2`, `win2012hv`, `win2012r2`, `win2012`, `win7`, `win8`, `win81`, `win10`, `win2016`, `win2016hv`, `win2016nano`, `centos7` at line 9 column 9
Caused by:
series: unknown variant `focal`, expected one of `kubernetes`, `oneiric`, `precise`, `quantal`, `raring`, `saucy`, `trusty`, `utopic`, `vivid`, `wily`, `xenial`, `yakkety`, `zesty`, `artful`, `bionic`, `cosmic`, `disco`, `eoan`, `win2012hvr2`, `win2012hv`, `win2012r2`, `win2012`, `win7`, `win8`, `win81`, `win10`, `win2016`, `win2016hv`, `win2016nano`, `centos7` at line 9 column 9
---------------------------------------------------------------------------------------------------------- Captured log setup ----------------------------------------------------------------------------------------------------------
INFO pytest_operator.plugin:plugin.py:646 Connecting to existing model localhost-lxd:controller on unspecified cloud
---------------------------------------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------------------------------------
INFO test_charm:test_charm.py:25 Deploying slurmdbd against latest/edge slurm charms test bundle...
INFO test_charm:test_charm.py:27
INFO pytest_operator.plugin:plugin.py:497 Using tmp_path: /mnt/d/projects/work/charms/slurmdbd-operator/.tox/integration/tmp/pytest/controller0
INFO pytest_operator.plugin:plugin.py:941 Building charm slurmdbd
INFO pytest_operator.plugin:plugin.py:946 Built charm slurmdbd in 31.50s
INFO pytest_operator.plugin:plugin.py:1130 Deploying (and possibly building) bundle using juju-bundle command:'juju-bundle deploy --bundle /tmp/tmpbo7dj_dw --build --'
======================================================================================================= short test summary info ========================================================================================================
FAILED tests/integration/test_charm.py::test_deploy_against_channel_latest_edge - AssertionError: Command ['juju-bundle', 'deploy', '--bundle', '/tmp/tmpbo7dj_dw', '--build', '--'] failed (1): Error: YAML Error: series: unknown variant `focal`, expected one of `kubernetes`, `oneiric`, `precise`, `quantal`, `...
Looking at the error output, I can tell that the issue here is that the juju-bundle executable does not recognize focal (Ubuntu 20.04 LTS) as an available series option for charms. The lack of recognition seems to be due to juju-bundle's age and lack of maintenance.
I propose two ways to potentially fix this bug:
1. Replace juju-bundle with plain ole juju. <- Preferred option (a sketch follows below)
2. Fix or update juju-bundle.
Let me know what you all think!
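A minimal sketch of option 1, assuming the bundle has already been rendered to a path (deploy_bundle_with_juju and rendered_bundle are invented names):
async def deploy_bundle_with_juju(ops_test, rendered_bundle):
    # Deploy with plain juju instead of juju-bundle, so newer series such
    # as focal are understood by the deploying tool.
    rc, stdout, stderr = await ops_test.run(
        "juju", "deploy", "--model", ops_test.model_full_name, str(rendered_bundle)
    )
    assert rc == 0, f"juju deploy failed: {stderr or stdout}"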
I'm working on a charm which has the following structure:
├── LICENSE
├── ...
├── src
│   ├── charm.py
│   └── resource.py
├── tests
│   ├── __init__.py
│   ├── functional
│   │   ├── __init__.py
│   │   ├── bundle.yaml
│   │   ├── conftest.py
│   │   ├── test_ceph_csi.py
│   │   └── utils
│   │       ├── __init__.py
│   │       └── utils.py
│   └── unit
│       └── ...
└── tox.ini
I'm not sure if it's important, but I run the functional tests via tox. All it really does is execute pytest {toxinidir}/tests/functional.
The problem is that, due to the presence of __init__.py files in my structure, when pytest-operator tries to deploy my bundle it constructs the model name from the dot-separated names of the directories. The final model name looks like this: tests.functional.test-ceph-csi-8gzx, and I get the following error/traceback:
/usr/lib/python3.6/asyncio/base_events.py:484: in run_until_complete
return future.result()
.tox/func/lib/python3.6/site-packages/pytest_asyncio/plugin.py:123: in setup
res = await gen_obj.__anext__()
.tox/func/lib/python3.6/site-packages/pytest_operator/plugin.py:144: in ops_test
await ops_test._setup_model()
.tox/func/lib/python3.6/site-packages/pytest_operator/plugin.py:223: in _setup_model
self.model_name, cloud_name=self.cloud_name
.tox/func/lib/python3.6/site-packages/juju/controller.py:360: in add_model
region=region
.tox/func/lib/python3.6/site-packages/juju/client/facade.py:480: in wrapper
reply = await f(*args, **kwargs)
.tox/func/lib/python3.6/site-packages/juju/client/_client5.py:5515: in CreateModel
reply = await self.rpc(msg)
.tox/func/lib/python3.6/site-packages/juju/client/facade.py:623: in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <juju.client.connection.Connection object at 0x7f68dd1b26a0>
msg = {'params': {'cloud-tag': 'cloud-openstack', 'config': {'authorized-keys': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDyfPI...: 'tests.functional.test-ceph-csi-8gzx', ...}, 'request': 'CreateModel', 'request-id': 10, 'type': 'ModelManager', ...}
encoder = <class 'juju.client.facade.TypeEncoder'>
async def rpc(self, msg, encoder=None):
'''Make an RPC to the API. The message is encoded as JSON
using the given encoder if any.
:param msg: Parameters for the call (will be encoded as JSON).
:param encoder: Encoder to be used when encoding the message.
:return: The result of the call.
:raises JujuAPIError: When there's an error returned.
:raises JujuError:
'''
self.__request_id__ += 1
msg['request-id'] = self.__request_id__
if 'params' not in msg:
msg['params'] = {}
if "version" not in msg:
msg['version'] = self.facades[msg['type']]
outgoing = json.dumps(msg, indent=2, cls=encoder)
log.debug('connection {} -> {}'.format(id(self), outgoing))
for attempt in range(3):
if self.monitor.status == Monitor.DISCONNECTED:
# closed cleanly; shouldn't try to reconnect
raise websockets.exceptions.ConnectionClosed(
0, 'websocket closed')
try:
await self.ws.send(outgoing)
break
except websockets.ConnectionClosed:
if attempt == 2:
raise
log.warning('RPC: Connection closed, reconnecting')
# the reconnect has to be done in a separate task because,
# if it is triggered by the pinger, then this RPC call will
# be cancelled when the pinger is cancelled by the reconnect,
# and we don't want the reconnect to be aborted halfway through
await asyncio.wait([self.reconnect()], loop=self.loop)
if self.monitor.status != Monitor.CONNECTED:
# reconnect failed; abort and shutdown
log.error('RPC: Automatic reconnect failed')
raise
result = await self._recv(msg['request-id'])
log.debug('connection {} <- {}'.format(id(self), result))
if not result:
return result
if 'error' in result:
# API Error Response
> raise errors.JujuAPIError(result)
E juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.functional.test-ceph-csi-8gzx" is not a valid name: model names may only contain lowercase letters, digits and hyphens
.tox/func/lib/python3.6/site-packages/juju/client/connection.py:495: JujuAPIError
Removing the __init__.py files gets rid of this problem, but then pytest complains about relative imports. The main point, though, is that the algorithm which creates the new model name joins the directory names with a character that Juju forbids in model names; a sketch of a safer join follows below.
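A small sketch of the kind of sanitising the plugin could do instead (sanitize_model_name is a suggestion, not existing plugin code):
import re


def sanitize_model_name(raw: str) -> str:
    # Juju model names may only contain lowercase letters, digits and hyphens,
    # so map anything else (dots from package paths, underscores) to a hyphen.
    return re.sub(r"[^a-z0-9-]+", "-", raw.lower()).strip("-")
With that, tests.functional.test-ceph-csi-8gzx would become tests-functional-test-ceph-csi-8gzx, which Juju accepts.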
EDIT: Including the test_build_and_deploy step because I forgot it originally.
import logging

import pytest

logger = logging.getLogger(__name__)


@pytest.mark.abort_on_fail
async def test_build_and_deploy(ops_test):
    """Build ceph-csi charm and deploy testing model."""
    logger.info("Building ceph-csi charm.")
    ceph_csi_charm = await ops_test.build_charm(".")
    logger.debug("Deploying ceph-csi functional test bundle.")
    await ops_test.model.deploy(
        ops_test.render_bundle("tests/functional/bundle.yaml", master_charm=ceph_csi_charm)
    )
    await ops_test.model.wait_for_idle(
        wait_for_active=True, timeout=60 * 60, check_freq=5, raise_on_error=False
    )
When using the --model switch on k8s, all goes fine. However, in an LXD environment I'm getting the following error (both locally and in pipelines):
Traceback (most recent call last):
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/runner.py", line 341, in from_call
result: Optional[TResult] = func()
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/runner.py", line 262, in <lambda>
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_hooks.py", line [43](https://github.com/canonical/opensearch-operator/actions/runs/5745348380/job/15573293066#step:6:44)3, in __call__
return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_manager.py", line 112, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_callers.py", line 155, in _multicall
return outcome.get_result()
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_result.py", line 108, in get_result
raise exc.with_traceback(exc.__traceback__)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_callers.py", line 80, in _multicall
res = hook_impl.function(*args)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/runner.py", line 157, in pytest_runtest_setup
item.session._setupstate.setup(item)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/runner.py", line [49](https://github.com/canonical/opensearch-operator/actions/runs/5745348380/job/15573293066#step:6:50)7, in setup
raise exc
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/runner.py", line 494, in setup
col.setup()
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/python.py", line 1791, in setup
self._request._fillfixtures()
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line [56](https://github.com/canonical/opensearch-operator/actions/runs/5745348380/job/15573293066#step:6:57)6, in _fillfixtures
item.funcargs[argname] = self.getfixturevalue(argname)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line [58](https://github.com/canonical/opensearch-operator/actions/runs/5745348380/job/15573293066#step:6:59)5, in getfixturevalue
fixturedef = self._get_active_fixturedef(argname)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line 607, in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line 693, in _compute_fixture_value
fixturedef.execute(request=subrequest)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line 1045, in execute
fixturedef = request._get_active_fixturedef(argname)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line 607, in _get_active_fixturedef
self._compute_fixture_value(fixturedef)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line 693, in _compute_fixture_value
fixturedef.execute(request=subrequest)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line 1069, in execute
result = ihook.pytest_fixture_setup(fixturedef=self, request=request)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_hooks.py", line 433, in __call__
return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_manager.py", line 112, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_callers.py", line 155, in _multicall
return outcome.get_result()
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_result.py", line 108, in get_result
raise exc.with_traceback(exc.__traceback__)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pluggy/_callers.py", line 80, in _multicall
res = hook_impl.function(*args)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line 1123, in pytest_fixture_setup
result = call_fixture_func(fixturefunc, request, kwargs)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/_pytest/fixtures.py", line 902, in call_fixture_func
fixture_result = fixturefunc(**kwargs)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 304, in _asyncgen_fixture_wrapper
result = event_loop.run_until_complete(setup())
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 286, in setup
res = await gen_obj.__anext__()
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 231, in ops_test
await ops_test._setup_model()
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line [71](https://github.com/canonical/opensearch-operator/actions/runs/5745348380/job/15573293066#step:6:72)9, in _setup_model
model_state = await self._connect_to_model(
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 678, in _connect_to_model
await model.connect(state.full_name)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/juju/model.py", line 6[72](https://github.com/canonical/opensearch-operator/actions/runs/5745348380/job/15573293066#step:6:73), in connect
_, model_uuid = await self._connector.connect_model(model_name, **kwargs)
File "/home/runner/work/opensearch-operator/opensearch-operator/.tox/charm-integration/lib/python3.10/site-packages/juju/client/connector.py", line 161, in connect_model
raise JujuConnectionError('Model not found: {}'.format(_model_name))
juju.errors.JujuConnectionError: Model not found: admin/testing
------------------------------ Captured log setup ------------------------------
INFO pytest_operator.plugin:plugin.py:675 Connecting to existing model github-pr-86a89-lxd:testing on unspecified cloud
Environment:
Are there any hints or known workarounds available, perhaps?
Thank you.
When using ops_test.model.deploy to deploy a bundle, it seems to ignore pinned charm revisions.
The following template, named bundle.yaml.j2:
{%- set testing = testing is defined and testing.casefold() in ["1", "yes", "true"] %}
bundle: kubernetes
name: identity-platform
website: https://github.com/canonical/iam-bundle
issues: https://github.com/canonical/iam-bundle/issues
applications:
hydra:
charm: hydra
channel: {{ channel|default('edge', true) }}
scale: 1
series: jammy
trust: true
kratos:
charm: kratos
channel: {{ channel|default('edge', true) }}
scale: 1
series: jammy
trust: true
kratos-external-idp-integrator:
charm: kratos-external-idp-integrator
channel: {{ channel|default('edge', true) }}
scale: 1
series: jammy
identity-platform-login-ui-operator:
charm: identity-platform-login-ui-operator
channel: {{ channel|default('edge', true) }}
scale: 1
series: jammy
trust: true
postgresql-k8s:
charm: postgresql-k8s
channel: 14/stable
series: jammy
scale: 1
trust: true
tls-certificates-operator:
charm: tls-certificates-operator
channel: stable
scale: 1
{%- if testing %}
options:
ca-common-name: demo.ca.local
generate-self-signed-certificates: true
{%- endif %}
traefik-admin:
charm: traefik-k8s
revision: 136
channel: edge
series: focal
scale: 1
trust: true
traefik-public:
charm: traefik-k8s
revision: 136
channel: edge
series: focal
scale: 1
trust: true
relations:
- [hydra:pg-database, postgresql-k8s:database]
- [kratos:pg-database, postgresql-k8s:database]
- [kratos:endpoint-info, hydra:endpoint-info]
- [kratos-external-idp-integrator:kratos-external-idp, kratos:kratos-external-idp]
- [hydra:admin-ingress, traefik-admin:ingress]
- [hydra:public-ingress, traefik-public:ingress]
- [kratos:admin-ingress, traefik-admin:ingress]
- [kratos:public-ingress, traefik-public:ingress]
- [identity-platform-login-ui-operator:ingress, traefik-public:ingress]
- [identity-platform-login-ui-operator:endpoint-info, hydra:endpoint-info]
- [identity-platform-login-ui-operator:ui-endpoint-info, hydra:ui-endpoint-info]
- [identity-platform-login-ui-operator:ui-endpoint-info, kratos:ui-endpoint-info]
- [identity-platform-login-ui-operator:kratos-endpoint-info, kratos:kratos-endpoint-info]
- [traefik-admin:certificates, tls-certificates-operator:certificates]
- [traefik-public:certificates, tls-certificates-operator:certificates]
is rendered correctly with ops_test.render_bundle (it includes revision 136 for traefik-k8s), but when deployed in an integration test using ops_test.model.deploy, the most current revision 141 is deployed instead.
Example CI
Workaround: deploy the bundle with ops_test.run("juju", "deploy", rendered_bundle, "--trust")
When we run ops_test.build_charm() on a charm with multiple bases, the returned charm path is arbitrarily picked and there is no way to select the series of the built charm. I believe this arbitrary selection of the charm file is happening here.
It seems that if a test is marked with abort_on_fail but also xfail, and it fails as expected, it does not also abort, which makes subsequent tests report a real FAILURE rather than an XFAIL.
I was running multiple functional tests using pytest-operator with --keep-models for debugging, and after a failure I was not able to tell which model was actually used. I think it would be nice if pytest-operator could log the name of the model that will be (or was) used, depending on when the info comes.
Something like this:
$ tox -e func -- -s -vv --keep-models --series focal
func: commands[0]> pytest /home/rgildein/code/canonical/charm-apt-mirror/tests/functional -s -vv --keep-models --series focal
=============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.11.6, pytest-7.4.0, pluggy-1.2.0 -- /home/rgildein/code/canonical/charm-apt-mirror/.tox/func/bin/python
cachedir: .tox/func/.pytest_cache
rootdir: /home/rgildein/code/canonical/charm-apt-mirror
plugins: operator-0.28.0, asyncio-0.21.0
asyncio: mode=Mode.STRICT
model: test-charm-xhs7
collected 17 items
...
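Until something like that lands in the plugin, a conftest-level fixture can give the same hint; a minimal sketch (the autouse fixture and its name are invented, relying on the ops_test fixture being module-scoped):
import logging

import pytest

logger = logging.getLogger(__name__)


@pytest.fixture(scope="module", autouse=True)
def log_model_name(ops_test):
    # Log the model pytest-operator is using, so it can still be found
    # after a failing run with --keep-models.
    logger.info("model: %s", ops_test.model_name)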
Currently my method for testing leadership change is:
while True:
    logger.info("deploy charm")
    await ops_test.model.deploy(
        charm_under_test, application_name=app_name, resources=resources, num_units=10
    )
    await block_until_leader_elected(ops_test, app_name)
    if await get_leader_unit_num(ops_test, app_name) > 0:
        break
    # we're unlucky: unit/0 is the leader, which means no scale down could trigger a
    # leadership change event - repeat
    logger.info("Elected leader is unit/0 - resetting and repeating")
    await ops_test.model.applications[app_name].remove()
    await ops_test.model.block_until(lambda: len(ops_test.model.applications) == 0)
    await ops_test.model.reset()
Is there a better way?
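For reference, a hedged sketch of the get_leader_unit_num helper the loop assumes, using python-libjuju's Unit.is_leader_from_status (the helper itself is invented):
async def get_leader_unit_num(ops_test, app_name: str) -> int:
    # Ask each unit whether it is the leader and return the leader's index.
    for unit in ops_test.model.applications[app_name].units:
        if await unit.is_leader_from_status():
            return int(unit.name.split("/")[-1])
    raise RuntimeError(f"no leader elected yet for {app_name}")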
Not entirely sure yet what the source of the issue is, but it looks like models created by the test fixture are not properly cleaned up when the test is done (or aborted?)
I noticed that I have a bunch of spurious models in my microk8s cloud:
Will keep this issue updated if more details emerge; or tell me how I can make it more helpful, because I imagine this isn't of much use as-is.
Update: also juju models seem to hang around. Notice that one of them is stuck in a destroying state since... one week?
test-charm-l8cp microk8s/localhost kubernetes available - admin 2022-02-18
test-charm-mq6i microk8s/localhost kubernetes available - admin 2022-02-18
test-charm-p9q1 microk8s/localhost kubernetes available - admin 2022-02-18
test-kubectl-delete-0rta microk8s/localhost kubernetes available - admin 2022-02-18
test-kubectl-delete-9xne microk8s/localhost kubernetes available - admin 5 minutes ago
test-loki-tester-u1n5 microk8s/localhost kubernetes destroying - admin 2022-02-18
As a user, I would like to have some utility to test the contents of relation databags in itests.
Something like:
async def test_relate(ops_test: OpsTest):
    await ops_test.juju('relate', 'spring-music:ingress', 'traefik-k8s:ingress')
    async with ops_test.fast_forward():
        await ops_test.model.wait_for_idle(['traefik-k8s', 'spring-music'])
    data = await ops_test.get_relation_data(requirer_endpoint='spring-music/0:ingress',
                                            provider_endpoint='traefik-k8s/0:ingress')
    model = ops_test.current_alias
    assert data.requirer.application_data == {
        'host': f'spring-music.{model}.svc.cluster.local',
        'model': model,
        'name': 'spring-music/0',
        'port': '8080',
    }
    assert data.provider.unit_data == {'foo': 'bar'}
I am trying to write tests for my charm using the operator framework, but the charmcraft pack step fails: it neither uses a remote LXD cluster nor falls back to Multipass, which charmcraft already does when I build the charm locally outside the tests.
My charmcraft.yaml:
type: charm
bases:
- build-on:
- name: "ubuntu"
channel: "20.04"
run-on:
- name: "ubuntu"
channel: "20.04"
architectures: [amd64, arm64]
- build-on:
- name: "ubuntu"
channel: "22.04"
run-on:
- name: "ubuntu"
channel: "22.04"
architectures: [amd64, arm64]
parts:
charm:
source: .
charm-requirements: [requirements.txt]
build-packages: [git]
The error I get is:
File "/Users/jatin/canonical/nats-operator/tests/integration/test_charm.py", line 8, in test_smoke
charm = await ops_test.build_charm(".")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jatin/canonical/nats-operator/.tox/integration/lib/python3.11/site-packages/pytest_operator/plugin.py", line 969, in build_charm
assert "lxd" in all_groups, (
AssertionError: Group 'lxd' required but not available; ensure that lxd is available or use --destructive-mode
assert 'lxd' in {'_accessoryupdater', '_amavisd', '_analyticsd', '_analyticsusers', '_appinstalld', '_appleevents', ...}
Using --destructive-mode also fails, as it does not find lxd on the host:
File "/Users/jatin/canonical/nats-operator/tests/integration/test_charm.py", line 8, in test_smoke
charm = await ops_test.build_charm(".")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jatin/canonical/nats-operator/.tox/integration/lib/python3.11/site-packages/pytest_operator/plugin.py", line 1004, in build_charm
raise RuntimeError(
RuntimeError: Failed to build charm .:
Packing the charm.
Skipping 'bases[0].build-on[0]': name 'ubuntu' does not match host 'darwin'.
No suitable 'build-on' environment found in 'bases[0]' configuration.
Skipping 'bases[1].build-on[0]': name 'ubuntu' does not match host 'darwin'.
No suitable 'build-on' environment found in 'bases[1]' configuration.
No suitable 'build-on' environment found in any 'bases' configuration.
For every integration test run across our 20 charms I'm getting these warnings. I'm aware that more people are experiencing the same. Any fix?
================== 1 passed, 3 warnings in 341.19s (0:05:41) ===================
/home/ubuntu/actions-runner/_work/charmed-magma/charmed-magma/orchestrator-bundle/orc8r-accessd-operator/.tox/integration/lib/python3.8/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
Task was destroyed but it is pending!
task: <Task pending name='Task-11' coro=<Connection._pinger() running at /home/ubuntu/actions-runner/_work/charmed-magma/charmed-magma/orchestrator-bundle/orc8r-accessd-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py:572> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f0a204f4310>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-6' coro=<Connection._receiver() running at /home/ubuntu/actions-runner/_work/charmed-magma/charmed-magma/orchestrator-bundle/orc8r-accessd-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py:524> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f0a21da8460>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-4' coro=<WebSocketCommonProtocol.keepalive_ping() running at /home/ubuntu/actions-runner/_work/charmed-magma/charmed-magma/orchestrator-bundle/orc8r-accessd-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:977> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f0a2055fdc0>()]>>
Fatal error on SSL transport
protocol: <asyncio.sslproto.SSLProtocol object at 0x7f0a21de78b0>
transport: <_SelectorSocketTransport closing fd=11>
Traceback (most recent call last):
File "/usr/lib/python3.8/asyncio/selector_events.py", line 910, in write
n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/asyncio/sslproto.py", line 6[85](https://github.com/canonical/charmed-magma/runs/6271545946?check_suite_focus=true#step:4:85), in _process_write_backlog
self._transport.write(chunk)
File "/usr/lib/python3.8/asyncio/selector_events.py", line [91](https://github.com/canonical/charmed-magma/runs/6271545946?check_suite_focus=true#step:4:91)6, in write
self._fatal_error(exc, 'Fatal write error on socket transport')
File "/usr/lib/python3.8/asyncio/selector_events.py", line 711, in _fatal_error
self._force_close(exc)
File "/usr/lib/python3.8/asyncio/selector_events.py", line 723, in _force_close
self._loop.call_soon(self._call_connection_lost, exc)
File "/usr/lib/python3.8/asyncio/base_events.py", line 719, in call_soon
self._check_closed()
File "/usr/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
Asks:
Typing that matches pythonlib-juju's signatures, to help writing int tests. Ideally I would like to rely on pytest-operator and not another library that is wrapped by it.
Documentation of ops_test.model/unit/app with some supported usage examples?
The docs of pytest-operator are REALLY hard to read, and it's very easy to miss things; could these be improved in a more readable / less manual format?
When developing integration tests, the current typing implementation doesn't support peeking (in my case, vim.lsp.buf.signature_help() using pyright as the LSP). This is because of 'challenging' typing from pythonlib-juju. Although tightening up that typing is out of scope for pytest-operator, the common workflow for building int tests looks like:
1. Start from ops_test.model or ops_test.model.applications
2. Dig into the pythonlib-juju source code
3. Land in juju.applications, juju.unit, juju.model, etc. (which often do not have types)
4. Resort to vars(questionable_object) or dir(questionable_object), all of which takes a lot of time and trial-and-error that doesn't belong in test-writing
Overall, this makes building integration tests really slow, and more likely to blindly copy code from other examples of charm repos that have done the work already. Not bad on its own, but probably undesirable.
While running tests on a MAAS juju cloud where the model is named admin/mymodel according to the <user>/<model> convention, the ops_test fixture does not work. Models are named admin/mymodel and admin/controller.
Have a simple test that calls the ops_test fixture:
def test_something(ops_test):
    assert True
And run pytest path/to/test.py --model admin/mymodel to execute the tests.
I was trying to reuse my existing model by providing --model admin/mymodel, but this still tried to create a new model (and promptly failed, since I wasn't the admin user on the cloud). I traced the execution flow from plugin.py -> ops_test -> _setup_model -> track_model -> _model_exists, where the last method is used to set the value of the use_existing variable, which was getting the wrong value (should be True, getting False).
This seemed to be due to the fact that self._controller.list_models() returns the models without prefixing the user name (without "admin" in this instance).
async def _model_exists(self, model_name: str) -> bool:
    """
    returns True when the model_name exists in the model.
    """
    all_models = await self._controller.list_models()
    # all_models had the value ['controller', 'mymodel'] but model_name is "admin/mymodel"
    return model_name in all_models
As seen above, the _model_exists method returns False because self._controller.list_models() returns the model names without the user prefix.
Made changes here:
return model_name.split("/")[-1] in all_models
Currently, docs/reference.md has to be maintained manually and thus has a significant chance of falling out of sync with the actual set of helpers that are available.
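One way to remove the manual step would be to generate the reference from docstrings in CI; a rough sketch (generate_reference is hypothetical, nothing like it exists in the repo today):
import inspect

from pytest_operator.plugin import OpsTest


def generate_reference() -> str:
    # Emit one markdown section per public OpsTest member, straight from
    # the docstrings, so docs/reference.md cannot drift from the code.
    sections = ["# OpsTest reference"]
    for name, member in inspect.getmembers(OpsTest):
        if name.startswith("_") or not callable(member):
            continue
        doc = inspect.getdoc(member) or "(undocumented)"
        sections.append(f"## {name}\n\n{doc}")
    return "\n\n".join(sections)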
It would be handy if ops_test would be able to print out the kubectl logs of all workloads in the test's model when a test fails.
Currently this is partly captured by https://github.com/canonical/charm-logdump-action
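In the meantime a test suite can approximate this itself; a hedged sketch that assumes the Juju model name doubles as the Kubernetes namespace (as it does for k8s models) and that workload pods carry Juju's managed-by label:
import logging

logger = logging.getLogger(__name__)


async def dump_workload_logs(ops_test):
    # Dump logs from every workload container in the test model's namespace.
    rc, stdout, stderr = await ops_test.run(
        "kubectl", "logs",
        "--namespace", ops_test.model_name,
        "--selector", "app.kubernetes.io/managed-by=juju",  # label is an assumption
        "--all-containers=true",
        "--tail=-1",
    )
    logger.info("Workload logs:\n%s", stdout if rc == 0 else stderr)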
I get a bunch of these warnings when running the tests.
.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py:191
/home/pietro/canonical/charm-prometheus-node-exporter/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py:191: DeprecationWarning: The 'asyncio_mode' default value will change to 'strict' in future, please explicitly use 'asyncio_mode=strict' or 'asyncio_mode=auto' in pytest configuration file.
config.issue_config_time_warning(LEGACY_MODE, stacklevel=2)
Maybe some development branch is already preparing for these changes or upgrading the dependency version but, just to make sure...
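For what it's worth, the warning goes away once the mode is pinned explicitly in the pytest configuration; e.g. (choosing auto here is an assumption, pick whichever mode the suite actually relies on):
[pytest]
asyncio_mode = auto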
I'm not sure if this is a problem on the charmcraft side, but many times I've found myself in a situation where an integration test doesn't work because of a broken container that charmcraft uses to build the charm. The error in such a situation is often not obvious, and setting the log level to debug does not provide much information, as charmcraft is not run with the --verbose flag.
The solution could be to add a charmcraft clean run to the _clean_model function. Manually running charmcraft clean is not always an obvious option, as in many projects pytest-operator is used to build test charms whose source code lives either in a sub-directory of the integration tests or in a git repository that is cloned on the fly.
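A rough sketch of what that could look like inside the plugin's cleanup (the charm_dirs bookkeeping is hypothetical; the real plugin would need to record which charm directories it packed, and run's cwd keyword is assumed to be available):
async def _clean_charmcraft_projects(self, charm_dirs):
    # Run `charmcraft clean` in each charm source dir that was built, so a
    # broken build container cannot poison the next test run.
    for charm_dir in charm_dirs:
        await self.run("charmcraft", "clean", cwd=str(charm_dir))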
To help us track the time it takes for charmcraft pack operations and other juju commands, we should add elapsed-time reports to our logs. Nothing complicated; something simple like build_charm completed in X seconds, etc. This way we can provide better feedback to the teams responsible for those tools.
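A minimal sketch of such a report, as a context manager that a test (or the plugin) could wrap around the calls it wants timed (report_elapsed is an invented name):
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger(__name__)


@contextmanager
def report_elapsed(operation: str):
    # Log "<operation> completed in X seconds" around any block of work.
    start = time.monotonic()
    try:
        yield
    finally:
        logger.info("%s completed in %.2f seconds", operation, time.monotonic() - start)
Wrapping a call such as build_charm then yields "build_charm completed in 31.50 seconds" style lines for free.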
When used with the most recent version of pytest (8.2.0), pytest-asyncio 0.21.1 is used, which breaks fixtures with the error: AttributeError: 'FixtureDef' object has no attribute 'unittest'.
To reproduce, place the following tox.ini in the project root:
# Copyright 2024 Canonical Ltd.
# See LICENSE file for licensing details.
[tox]
skipsdist=True
skip_missing_interpreters = True
envlist = unit
[vars]
src_path = {toxinidir}/src/
tst_path = {toxinidir}/tests/
;lib_path = {toxinidir}/lib/charms/operator_name_with_underscores
all_path = {[vars]src_path} {[vars]tst_path}
[testenv]
basepython = python3.10
setenv =
PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path}
PYTHONBREAKPOINT=ipdb.set_trace
PY_COLORS=1
passenv =
PYTHONPATH
[testenv:integration]
description = Run integration tests
pass_env =
PYTEST_ADDOPTS
deps =
pytest
pytest-asyncio
pytest-operator
commands =
pytest -v --tb native --log-cli-level=INFO
Then add the following tests/integration/test_asyncio.py:
import logging

import pytest
import pytest_asyncio

logger = logging.getLogger(__name__)


@pytest_asyncio.fixture(name="buggy")
async def async_fixture():
    logger.info("SETUP FIXTURE")
    return "buggy"


@pytest.mark.asyncio
async def test_buggy(buggy):
    logger.info("TESTING")
    print(f"hello {buggy}")
The test will fail.