charmed-kubernetes / pytest-operator
License: Apache License 2.0
There is an error in the README.md file:
assert ops_test.applications["my-charm"].units[0].workload_status == "active"
should be
assert ops_test.model.applications["my-charm"].units[0].workload_status == "active"
It would be handy if ops_test could print out the kubectl logs of all workloads in the test's model when a test fails.
Currently this is partly captured by https://github.com/canonical/charm-logdump-action
The docs reference mentioned at the bottom of the pytest-operator page doesn't exist anymore.
I get a bunch of these warnings when running the tests.
.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py:191
/home/pietro/canonical/charm-prometheus-node-exporter/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py:191: DeprecationWarning: The 'asyncio_mode' default value will change to 'strict' in future, please explicitly use 'asyncio_mode=strict' or 'asyncio_mode=auto' in pytest configuration file.
config.issue_config_time_warning(LEGACY_MODE, stacklevel=2)
Maybe some develop branch is already preparing for these changes/upgrading a dependency version, but, to make sure...
When calling something like ops_test.build_charm(".", bases_index=0), the bases_index argument will be ignored, because 0 evaluates to False here.
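A minimal sketch of the suspected bug and its fix; the function names here are hypothetical, not pytest-operator code:

```python
def pick_bases_flag(bases_index=None):
    """Buggy pattern: a plain truthiness check treats bases_index=0 as 'not given'."""
    if bases_index:  # 0 is falsy, so index 0 is silently dropped
        return ["--bases-index", str(bases_index)]
    return []

def pick_bases_flag_fixed(bases_index=None):
    """Fixed pattern: compare against None explicitly so index 0 is honored."""
    if bases_index is not None:
        return ["--bases-index", str(bases_index)]
    return []
```

With the fixed version, bases_index=0 produces the flag instead of being dropped.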
I'm not sure if this feature is there already (and I haven't found it), but it would be nice to support more natively application config management (set, get).
At the moment the best way I could find to config-set is:
await ops_test.juju("config", APP_NAME, "key=value")
However, this makes unsetting keys impossible:
await ops_test.juju("config", APP_NAME, 'key=""')
--> 'key' is now '""' (quoted quotation marks), not None.
I solved this by doing
os.system(f'juju config {APP_NAME} key=""')
But this is an indication that a unit config wrapper would be very welcome :)
def check_deps():
    missing = []
    for dep in ("juju", "charm", "charmcraft"):
        res = subprocess.run(["which", dep])
        if res.returncode != 0:
            missing.append(dep)
    if missing:
>       raise RuntimeError(
            "Missing dependenc{}: {}".format(
                "y" if len(missing) == 1 else "ies",
                ", ".join(missing),
            )
        )
E       RuntimeError: Missing dependency: charm
I ran into this while trying to set up integration tests. Given that charmcraft basically replaces charm-tools, it would be nice if I didn't need it installed. If we really need both, we could install them with pip in the virtualenv automatically.
As a user, I would like to have some utility to test the contents of relation databags in itests.
Something like:
async def test_relate(ops_test: OpsTest):
    await ops_test.juju('relate', 'spring-music:ingress', 'traefik-k8s:ingress')
    async with ops_test.fast_forward():
        await ops_test.model.wait_for_idle(['traefik-k8s', 'spring-music'])
    data = await ops_test.get_relation_data(requirer_endpoint='spring-music/0:ingress',
                                            provider_endpoint='traefik-k8s/0:ingress')
    model = ops_test.current_alias
    assert data.requirer.application_data == {
        'host': f'spring-music.{model}.svc.cluster.local',
        'model': model,
        'name': 'spring-music/0',
        'port': '8080',
    }
    assert data.provider.unit_data == {'foo': 'bar'}
The ops_test fixture has module scope, and it is very convenient to have the model destroyed when I move from one test_*.py to another.
However, I use the same *.charm in all my test_*.py files, so a fixture such as

@pytest.fixture(scope="module")
async def charm_under_test(ops_test: OpsTest) -> Path:
    """Charm used for integration testing."""
    return await ops_test.build_charm(".")

would still cause my charm to be built again and again as the tests progress through the test_*.py files.
It would be handy to be able to have a charm "persist" without tricks such as
global _path_to_built_charm
if _path_to_built_charm is None:
    _path_to_built_charm = await ops_test.build_charm(".")
Which brings me to a question: I wonder if the design approach is:
- ops_test in module scope is desirable
- await ops_test.model.reset() at the end of every test function

When we run ops_test.build_charm() on a charm with multiple bases, the returned charm path is arbitrarily picked and there is no ability to select the series of the built charm.
I believe that this arbitrary selection of charm file is happening here
Currently my method for testing a leadership change is the following; is there a better way?
while True:
    logger.info("deploy charm")
    await ops_test.model.deploy(
        charm_under_test, application_name=app_name, resources=resources, num_units=10
    )
    await block_until_leader_elected(ops_test, app_name)
    if await get_leader_unit_num(ops_test, app_name) > 0:
        break
    # we're unlucky: unit/0 is the leader, which means no scale down could trigger a
    # leadership change event - repeat
    logger.info("Elected leader is unit/0 - resetting and repeating")
    await ops_test.model.applications[app_name].remove()
    await ops_test.model.block_until(lambda: len(ops_test.model.applications) == 0)
    await ops_test.model.reset()
Our tests frequently need to run commands on units to do things like interact with kubectl
or issue workload queries (e.g., prometheus queries). There's a bit of boilerplate that is associated with that, such as: selecting a unit to run on, checking the response status and exit code, ensuring a reasonable timeout, etc. It would be nice to have a helper around that, and examples can already be found in kube-state-metrics-operator and kubernetes-master.
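A rough sketch of what such a helper might look like; the name, the ops_test.run signature, and the juju CLI invocation used here are assumptions, not existing pytest-operator API:

```python
import asyncio

async def juju_run(ops_test, unit, *cmd, timeout=60):
    """Run a command on a unit, asserting success; raise TimeoutError
    (via asyncio.wait_for) if the job does not complete in time."""
    retcode, stdout, stderr = await asyncio.wait_for(
        ops_test.run("juju", "run", "--unit", unit, "--", *cmd),
        timeout=timeout,
    )
    assert retcode == 0, f"{cmd} failed on {unit}: {stderr}"
    return stdout
```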
Definition of done:
- a juju_run helper is added to OpsTest
- support f"{app_name}/leader" to select the leader unit of that app
- TimeoutError should be raised if the job didn't complete

Stretch goal:
- a kubectl wrapper that defaults to "kubernetes-master/leader" for the unit and prepends "kubectl --kubeconfig=/root/.kube/config" to the command.

Not entirely sure yet what the source of the issue is, but it looks like models created by the test fixture are not properly cleaned up when the test is done (or aborted?)
I noticed that I have a bunch of spurious models in my microk8s cloud:
Will keep this issue updated if more details emerge; or tell me how I can help make it more helpful because I imagine this isn't of much use.
Update: also juju models seem to hang around. Notice that one of them is stuck in a destroying state since... one week?
test-charm-l8cp microk8s/localhost kubernetes available - admin 2022-02-18
test-charm-mq6i microk8s/localhost kubernetes available - admin 2022-02-18
test-charm-p9q1 microk8s/localhost kubernetes available - admin 2022-02-18
test-kubectl-delete-0rta microk8s/localhost kubernetes available - admin 2022-02-18
test-kubectl-delete-9xne microk8s/localhost kubernetes available - admin 5 minutes ago
test-loki-tester-u1n5 microk8s/localhost kubernetes destroying - admin 2022-02-18
Hey there, thanks for putting the plugin together. I have been trying to get this up and running, but as of right now, building on LXD seems to fail due to the automatic name generation approach, which uses the module __name__ and translates _ to -.
My operator is a machine operator (i.e. not K8s) and is structured such that the relevant tests are all based in <project_root>/tests/test_integration.py (and in this case are filtered with an integration marker).
Here is what I am seeing:
$ tox -r -e integration
...<tox setup snipped>...
integration run-test: commands[0] | pytest -v --tb native --show-capture=no --log-cli-level=INFO -s -m integration /home/techalchemy/git/artifactory-operator/tests
=================================================================== test session starts ===================================================================
platform linux -- Python 3.8.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/techalchemy/git/artifactory-operator/.tox/integration/bin/python
cachedir: .tox/integration/.pytest_cache
rootdir: /home/techalchemy/git/artifactory-operator, configfile: pyproject.toml
plugins: operator-0.8.1, asyncio-0.15.1
collected 65 items / 63 deselected / 2 selected
tests/test_integration.py::test_build_and_deploy /snap/bin/juju
/home/techalchemy/git/artifactory-operator/.tox/integration/bin/charmcraft
--------------------------------------------------------------------- live log setup ----------------------------------------------------------------------
INFO pytest_operator.plugin:plugin.py:155 Using tmp_path: /home/techalchemy/git/artifactory-operator/.tox/integration/tmp/pytest/tests.test-integration-k3aa0
INFO pytest_operator.plugin:plugin.py:217 Adding model overlord:tests.test-integration-k3aa
ERROR
tests/test_integration.py::test_bundle ERROR
========================================================================= ERRORS ==========================================================================
_________________________________________________________ ERROR at setup of test_build_and_deploy _________________________________________________________
Traceback (most recent call last):
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py", line 142, in wrapper
return loop.run_until_complete(setup())
File "/home/techalchemy/.pyenv/versions/3.8.7/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py", line 123, in setup
res = await gen_obj.__anext__()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 144, in ops_test
await ops_test._setup_model()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 218, in _setup_model
self.model = await self._controller.add_model(
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/controller.py", line 354, in add_model
model_info = await model_facade.CreateModel(
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/facade.py", line 480, in wrapper
reply = await f(*args, **kwargs)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/_client5.py", line 5515, in CreateModel
reply = await self.rpc(msg)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/facade.py", line 623, in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py", line 495, in rpc
raise errors.JujuAPIError(result)
juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.test-integration-k3aa" is not a valid name: model names may only contain lowercase letters, digits and hyphens
______________________________________________________________ ERROR at setup of test_bundle ______________________________________________________________
Traceback (most recent call last):
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py", line 142, in wrapper
return loop.run_until_complete(setup())
File "/home/techalchemy/.pyenv/versions/3.8.7/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_asyncio/plugin.py", line 123, in setup
res = await gen_obj.__anext__()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 144, in ops_test
await ops_test._setup_model()
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 218, in _setup_model
self.model = await self._controller.add_model(
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/controller.py", line 354, in add_model
model_info = await model_facade.CreateModel(
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/facade.py", line 480, in wrapper
reply = await f(*args, **kwargs)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/_client5.py", line 5515, in CreateModel
reply = await self.rpc(msg)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/facade.py", line 623, in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
File "/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py", line 495, in rpc
raise errors.JujuAPIError(result)
juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.test-integration-k3aa" is not a valid name: model names may only contain lowercase letters, digits and hyphens
==================================================================== warnings summary =====================================================================
.tox/integration/lib/python3.8/site-packages/_pytest/config/__init__.py:1233
/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/_pytest/config/__init__.py:1233: PytestConfigWarning: Unknown config option: flake8-ignore
self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")
.tox/integration/lib/python3.8/site-packages/_pytest/config/__init__.py:1233
/home/techalchemy/git/artifactory-operator/.tox/integration/lib/python3.8/site-packages/_pytest/config/__init__.py:1233: PytestConfigWarning: Unknown config option: plugins
self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================= short test summary info =================================================================
ERROR tests/test_integration.py::test_build_and_deploy - juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.t...
ERROR tests/test_integration.py::test_bundle - juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.test-integr...
====================================================== 63 deselected, 2 warnings, 2 errors in 0.23s =======================================================
ERROR: InvocationError for command /home/techalchemy/git/artifactory-operator/.tox/integration/bin/pytest -v --tb native --show-capture=no --log-cli-level=INFO -s -m integration tests (exited with code 1)
_________________________________________________________________________ summary _________________________________________________________________________
ERROR: integration: commands failed
The relevant code appears to be at
pytest-operator/pytest_operator/plugin.py
Lines 172 to 177 in 6efd734
As a workaround, I have modified my tests/test_integration.py
to include the following line:
_, _, __name__ = __name__.rpartition(".")
And this seems to have provided a temporary solution.
Please unlock usage without tox, and provide an alternative tmp path when no TOX_ENV_DIR is provided.
here is a block that causes issue:
pytest-operator/pytest_operator/plugin.py
Line 146 in 2c29fa8
@pytest.fixture(scope="session")
def tmp_path_factory(request):
    # Override temp path factory to create temp dirs under Tox env so that
    # confined snaps (e.g., charmcraft) can access them.
    return pytest.TempPathFactory(
        given_basetemp=Path(os.environ["TOX_ENV_DIR"]) / "tmp" / "pytest",
        trace=request.config.trace.get("tmpdir"),
        _ispytest=True,
    )
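A sketch of how the fixture could fall back when TOX_ENV_DIR is unset, so the plugin also works outside tox. Only the env-var handling is shown; the helper name is hypothetical and the TempPathFactory construction would stay as in the snippet above:

```python
import os
import tempfile
from pathlib import Path

def base_tmp_dir() -> Path:
    """Pick a base temp dir: the tox env dir when running under tox,
    otherwise a system temp dir instead of raising KeyError."""
    env_dir = os.environ.get("TOX_ENV_DIR")
    if env_dir:
        return Path(env_dir) / "tmp" / "pytest"
    return Path(tempfile.gettempdir()) / "pytest-operator"
```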
With the recent update of pytest
to 7.3.0 (see changelog), pytest-operator
in v0.22 is failing in some tests due to an error on setup:
Traceback (most recent call last):
File "/home/runner/work/iam-bundle/iam-bundle/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 149, in tmp_path_factory
return pytest.TempPathFactory(
TypeError: TempPathFactory.__init__() missing 2 required positional arguments: 'retention_count' and 'retention_policy'
Some examples of failing tests:
cos-lite bundle
iam-bundle
A temporary solution is to pin pytest
to 7.2.2.
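Beyond pinning, one possible compatibility shim is to inspect the constructor signature and supply the newly required arguments only when present; the retention values chosen here are assumptions:

```python
import inspect

def extra_tmp_factory_kwargs(factory_cls):
    """Return the extra kwargs that pytest 7.3+ made required on
    TempPathFactory (retention_count / retention_policy), empty on
    older versions. Hypothetical helper."""
    params = inspect.signature(factory_cls.__init__).parameters
    extra = {}
    if "retention_count" in params:
        extra["retention_count"] = 3
    if "retention_policy" in params:
        extra["retention_policy"] = "all"
    return extra
```

The fixture could then do pytest.TempPathFactory(..., **extra_tmp_factory_kwargs(pytest.TempPathFactory)) and work on both sides of the change.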
I'm working on a charm which has following structure:
├── LICENSE
├── ...
├── src
│   ├── charm.py
│   └── resource.py
├── tests
│   ├── __init__.py
│   ├── functional
│   │   ├── __init__.py
│   │   ├── bundle.yaml
│   │   ├── conftest.py
│   │   ├── test_ceph_csi.py
│   │   └── utils
│   │       ├── __init__.py
│   │       └── utils.py
│   └── unit
│       └── ...
└── tox.ini
I'm not sure if it's important, but I run the functional tests via tox. All it really does is execute pytest {toxinidir}/tests/functional.
The problem is that, due to the presence of __init__.py files in my structure, when pytest-operator tries to deploy my bundle it constructs the model name from the dot-separated names of the directories. The final model name looks like this:
tests.functional.test-ceph-csi-8gzx
and I get the following error/traceback:
/usr/lib/python3.6/asyncio/base_events.py:484: in run_until_complete
return future.result()
.tox/func/lib/python3.6/site-packages/pytest_asyncio/plugin.py:123: in setup
res = await gen_obj.__anext__()
.tox/func/lib/python3.6/site-packages/pytest_operator/plugin.py:144: in ops_test
await ops_test._setup_model()
.tox/func/lib/python3.6/site-packages/pytest_operator/plugin.py:223: in _setup_model
self.model_name, cloud_name=self.cloud_name
.tox/func/lib/python3.6/site-packages/juju/controller.py:360: in add_model
region=region
.tox/func/lib/python3.6/site-packages/juju/client/facade.py:480: in wrapper
reply = await f(*args, **kwargs)
.tox/func/lib/python3.6/site-packages/juju/client/_client5.py:5515: in CreateModel
reply = await self.rpc(msg)
.tox/func/lib/python3.6/site-packages/juju/client/facade.py:623: in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <juju.client.connection.Connection object at 0x7f68dd1b26a0>
msg = {'params': {'cloud-tag': 'cloud-openstack', 'config': {'authorized-keys': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDyfPI...: 'tests.functional.test-ceph-csi-8gzx', ...}, 'request': 'CreateModel', 'request-id': 10, 'type': 'ModelManager', ...}
encoder = <class 'juju.client.facade.TypeEncoder'>
async def rpc(self, msg, encoder=None):
    '''Make an RPC to the API. The message is encoded as JSON
    using the given encoder if any.
    :param msg: Parameters for the call (will be encoded as JSON).
    :param encoder: Encoder to be used when encoding the message.
    :return: The result of the call.
    :raises JujuAPIError: When there's an error returned.
    :raises JujuError:
    '''
    self.__request_id__ += 1
    msg['request-id'] = self.__request_id__
    if 'params' not in msg:
        msg['params'] = {}
    if "version" not in msg:
        msg['version'] = self.facades[msg['type']]
    outgoing = json.dumps(msg, indent=2, cls=encoder)
    log.debug('connection {} -> {}'.format(id(self), outgoing))
    for attempt in range(3):
        if self.monitor.status == Monitor.DISCONNECTED:
            # closed cleanly; shouldn't try to reconnect
            raise websockets.exceptions.ConnectionClosed(
                0, 'websocket closed')
        try:
            await self.ws.send(outgoing)
            break
        except websockets.ConnectionClosed:
            if attempt == 2:
                raise
            log.warning('RPC: Connection closed, reconnecting')
            # the reconnect has to be done in a separate task because,
            # if it is triggered by the pinger, then this RPC call will
            # be cancelled when the pinger is cancelled by the reconnect,
            # and we don't want the reconnect to be aborted halfway through
            await asyncio.wait([self.reconnect()], loop=self.loop)
            if self.monitor.status != Monitor.CONNECTED:
                # reconnect failed; abort and shutdown
                log.error('RPC: Automatic reconnect failed')
                raise
    result = await self._recv(msg['request-id'])
    log.debug('connection {} <- {}'.format(id(self), result))
    if not result:
        return result
    if 'error' in result:
        # API Error Response
>       raise errors.JujuAPIError(result)
E       juju.errors.JujuAPIError: failed to create config: creating config from values failed: "tests.functional.test-ceph-csi-8gzx" is not a valid name: model names may only contain lowercase letters, digits and hyphens

.tox/func/lib/python3.6/site-packages/juju/client/connection.py:495: JujuAPIError
Removing the __init__.py files gets rid of this problem, but then pytest complains about relative imports. Also, I think the main point is that the algorithm which creates the new model name uses forbidden characters to concatenate the directory names.
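A sketch of how the plugin could sanitize the derived name instead of concatenating directories with dots; the helper name is hypothetical:

```python
import re

def to_model_name(module_name: str, suffix: str) -> str:
    """Derive a Juju-legal model name (lowercase letters, digits, and
    hyphens only) from a dotted test module path. Hypothetical helper."""
    base = module_name.rsplit(".", 1)[-1]      # drop the package prefix
    base = base.replace("_", "-").lower()
    base = re.sub(r"[^a-z0-9-]", "-", base)    # replace any illegal character
    return f"{base}-{suffix}"
```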
EDIT: Including the test_build_and_deploy step because I forgot originally.
@pytest.mark.abort_on_fail
async def test_build_and_deploy(ops_test):
    """Build ceph-csi charm and deploy testing model."""
    logger.info("Building ceph-csi charm.")
    ceph_csi_charm = await ops_test.build_charm(".")
    logger.debug("Deploying ceph-csi functional test bundle.")
    await ops_test.model.deploy(
        ops_test.render_bundle("tests/functional/bundle.yaml", master_charm=ceph_csi_charm)
    )
    await ops_test.model.wait_for_idle(
        wait_for_active=True, timeout=60 * 60, check_freq=5, raise_on_error=False
    )
I found it was a useful pattern to temporarily speed up the status-update hook firing rate.
This looks like a job for a context manager.
Proposal:
async with ops_test.fast_forward('10s'):
    # do stuff which might take some time to reflect on the workload status
    await ops_test.model.wait_for_idle(...)
Was doing some testing with pytest-operator and tox earlier today and noticed that it pulls in version 0.7.0 of charmcraft rather than the latest 1.0.0.
This is my tox env:
[testenv:integration]
deps =
juju
pytest
pytest-operator
ipdb
commands = pytest -v --tb native --show-capture=no --log-cli-level=INFO -s {posargs:tests/integration}
And contents of venv:
ls .tox/integration/lib/python3.9/site-packages | grep charmcra
drwxrwxr-x - jon jon 28 May 15:39 -- charmcraft
drwxrwxr-x - jon jon 28 May 15:39 -- charmcraft-0.7.0.dist-info
Thanks! :)
I have recently heard that sometimes, when running integration tests locally, it can be handy to run the suite in a separate controller.
At the moment OpsTest handles the creation of a temporary testing model, could we do the same with a temporary testing controller?
When I run itests with
tox -e integration-machine -- --keep-models
no models are kept.
To help us track the time it takes for charmcraft pack operations and other juju commands, we should add elapsed-time reports to our logs. Nothing complicated, just something simple like "build_charm completed in X seconds", etc. This way we can provide better feedback to the team responsible for those tools.
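A minimal sketch of such an elapsed-time reporter; the class name and log format are hypothetical, not existing pytest-operator API:

```python
import logging
import time

logger = logging.getLogger(__name__)

class Timer:
    """Context manager that logs how long the wrapped operation took."""

    def __init__(self, label):
        self.label = label

    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, *exc):
        self.elapsed = time.monotonic() - self.start
        logger.info("%s completed in %.1f seconds", self.label, self.elapsed)
        return False  # never swallow exceptions
```

Usage inside a test could then be: with Timer("build_charm"): charm = await ops_test.build_charm(".").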
With 1.1.0, charmcraft switched to classic
confinement and doing builds inside a LXD container. This broke the ability for charms to reference local libraries during build in their requirements.txt
since those libraries are not visible within the LXD container. This is required for the ops_test.build_lib()
feature for testing charm libraries.
Currently, docs/reference.md
has to be manually maintained and thus has a significant possibility of falling out of sync with the actual set of helpers which are available.
In a charm I'm developing, my itests are failing because of an uncaught websockets.exceptions.ConnectionClosed error.
I tried figuring out what was going on but had little luck. It looks like something goes wrong when trying to destroy the test model in ops_test._cleanup_models().
Attaching a full debug log.
$ tox -vve integration > itest_logs.txt
I had a selenium-based test that was taking a long time to run and I got bored. I hit ctrl+c
and the test exited, but got this traceback:
Traceback (most recent call last):
File "/notebook-operators/.tox/integration/bin/pytest", line 8, in <module>
sys.exit(console_main())
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/config/__init__.py", line 185, in console_main
code = main()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/config/__init__.py", line 162, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
return wrap_session(config, _main)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/main.py", line 304, in wrap_session
config.hook.pytest_sessionfinish(
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish
outcome.get_result()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 103, in pytest_sessionfinish
session._setupstate.teardown_all()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 412, in teardown_all
self._pop_and_teardown()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 387, in _pop_and_teardown
self._teardown_with_finalization(colitem)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 405, in _teardown_with_finalization
self._callfinalizers(colitem)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 402, in _callfinalizers
raise exc
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/runner.py", line 395, in _callfinalizers
fin()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/fixtures.py", line 1034, in finish
raise exc
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/fixtures.py", line 1027, in finish
func()
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/_pytest/fixtures.py", line 941, in _teardown_yield_fixture
next(it)
File "/notebook-operators/.tox/integration/lib/python3.9/site-packages/pytest_operator/plugin.py", line 133, in abort_on_fail
failed = request.node.failed
AttributeError: 'Function' object has no attribute 'failed'
Offhand, it looks like abort_on_fail doesn't specify a scope, function is the default scope, and Function objects in pytest don't have a failed attribute.
I'm not familiar with the design of abort_on_fail, but it seems like maybe it should be session- or module-scoped?
Typically, we expect that pip install pytest-operator
will also install pytest-asyncio
, and pytest-asyncio
adds a command line argument to pytest for --asyncio-mode
(eg: pytest --asyncio-mode=auto
should be a valid command). However, after commit cfef8ea9927e4cc07ff69d042239e2612d1cfc84
, this command only sometimes works. For example, running the following in an empty dir:
# pip install direct from commit "Adding option --no-deploy" (feature commit 2)
# BREAKS
for x in `seq 20 29`; do
python -m venv venv$x; source venv$x/bin/activate; python -m pip install -U pip -q
pip install pytest git+https://github.com/charmed-kubernetes/pytest-operator.git@cfef8ea9927e4cc07ff69d042239e2612d1cfc84 -q
./venv$x/bin/pytest --asyncio-mode=auto # <-- this should execute successfully
done
Will result in 10 new venvs, some of which can successfully execute the pytest --asyncio-mode=auto
command and some of which give the following error:
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --asyncio-mode=auto
inifile: None
rootdir: /tmp/pytest-asyncio-debug
This intermittent error is linked to the venv, not to execution of the pytest command. For example, if you create a venv and then execute pytest --asyncio-mode=auto
multiple times, all executions will have the same result (either pass or fail). But if you then create multiple venvs and repeat this test, some venvs will pass and some will fail.
An example of installs that all work can be generated by using the previous commit:
# pip install direct from commit "Replace build ..." (feature commit 1)
# works
for x in `seq 11 20`; do
python -m venv venv$x; source venv$x/bin/activate; python -m pip install -U pip -q
pip install pytest git+https://github.com/charmed-kubernetes/pytest-operator.git@182f46d264c8e3ff94e74a0f863f94a4e269e254 -q
./venv$x/bin/pytest --asyncio-mode=auto
done
Specifying the asyncio_mode
via a pytest.ini or pyproject.toml appears to work consistently. For example, in pyproject.toml this always works:
[tool.pytest.ini_options]
asyncio_mode = "auto"
I'm not sure if this is a problem on the charmcraft side, but many times I've found myself in a situation where the integration test doesn't work because of a broken container that charmcraft uses to build the charm. The error in such a situation is often not obvious, and setting the log level to debug does not provide much information, as charmcraft is not run with the --verbose flag.
The solution could be to add a charmcraft clean run to the _clean_model function.
The need to manually run charmcraft clean is not always noticeable, as in many projects pytest-operator is used to build test charms whose source code is either in a sub-directory of the integration tests or cloned from a git repository.
Not sure if this is more of an issue or a feature request, but here's my thought.
With juju deploy subordinate_charm, the charm will be automagically deployed with scale 0/0.
Whereas if I do:
ops_test.deploy(subordinate_charm)
it will fail, citing "subordinate application must be deployed without units".
The user is then required to do ops_test.deploy(subordinate_charm, num_units=0), and it will work.
Two problems with this:
- it would be nice for the behavior of ops_test.deploy to match that of juju deploy
- the fix looks simple: something like if charm.is_subordinate would do.

Greetings! I am currently writing some integration tests for charms that I am working on that use test bundles to check the stability of the charm (i.e. can it sync correctly with its companion charms). Unfortunately, I am not able to deploy the test bundles due to issues with juju-bundle:
Traceback (most recent call last):
File "/mnt/d/projects/work/charms/slurmdbd-operator/tests/integration/test_charm.py", line 31, in test_deploy_against_channel_latest_edge
await ops_test.deploy_bundle(_.name)
File "/mnt/d/projects/work/charms/slurmdbd-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 1134, in deploy_bundle
await self.run(*cmd, check=True)
File "/mnt/d/projects/work/charms/slurmdbd-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 588, in run
raise AssertionError(
AssertionError: Command ['juju-bundle', 'deploy', '--bundle', '/tmp/tmpbo7dj_dw', '--build', '--'] failed (1): Error: YAML Error: series: unknown variant `focal`, expected one of `kubernetes`, `oneiric`, `precise`, `quantal`, `raring`, `saucy`, `trusty`, `utopic`, `vivid`, `wily`, `xenial`, `yakkety`, `zesty`, `artful`, `bionic`, `cosmic`, `disco`, `eoan`, `win2012hvr2`, `win2012hv`, `win2012r2`, `win2012`, `win7`, `win8`, `win81`, `win10`, `win2016`, `win2016hv`, `win2016nano`, `centos7` at line 9 column 9
Caused by:
series: unknown variant `focal`, expected one of `kubernetes`, `oneiric`, `precise`, `quantal`, `raring`, `saucy`, `trusty`, `utopic`, `vivid`, `wily`, `xenial`, `yakkety`, `zesty`, `artful`, `bionic`, `cosmic`, `disco`, `eoan`, `win2012hvr2`, `win2012hv`, `win2012r2`, `win2012`, `win7`, `win8`, `win81`, `win10`, `win2016`, `win2016hv`, `win2016nano`, `centos7` at line 9 column 9
---------------------------------------------------------------------------------------------------------- Captured log setup ----------------------------------------------------------------------------------------------------------
INFO pytest_operator.plugin:plugin.py:646 Connecting to existing model localhost-lxd:controller on unspecified cloud
---------------------------------------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------------------------------------
INFO test_charm:test_charm.py:25 Deploying slurmdbd against latest/edge slurm charms test bundle...
INFO test_charm:test_charm.py:27
INFO pytest_operator.plugin:plugin.py:497 Using tmp_path: /mnt/d/projects/work/charms/slurmdbd-operator/.tox/integration/tmp/pytest/controller0
INFO pytest_operator.plugin:plugin.py:941 Building charm slurmdbd
INFO pytest_operator.plugin:plugin.py:946 Built charm slurmdbd in 31.50s
INFO pytest_operator.plugin:plugin.py:1130 Deploying (and possibly building) bundle using juju-bundle command:'juju-bundle deploy --bundle /tmp/tmpbo7dj_dw --build --'
======================================================================================================= short test summary info ========================================================================================================
FAILED tests/integration/test_charm.py::test_deploy_against_channel_latest_edge - AssertionError: Command ['juju-bundle', 'deploy', '--bundle', '/tmp/tmpbo7dj_dw', '--build', '--'] failed (1): Error: YAML Error: series: unknown variant `focal`, expected one of `kubernetes`, `oneiric`, `precise`, `quantal`, `...
Looking at the error output, I can tell that the issue here is that the juju-bundle
executable does not recognize focal
(Ubuntu 20.04 LTS) as an available series option for charms. The lack of recognition seems to be because of juju-bundle's age and lack of maintenance.
I propose two ways to potentially fix this bug:
1. Replace juju-bundle
with plain ole juju
. <- Preferred option
2. Update/fix juju-bundle
.
Let me know what you all think!
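For option 1, note that plain juju can deploy a local bundle file directly (juju deploy ./bundle.yaml), so deploy_bundle could drop juju-bundle entirely. A minimal sketch of the command construction; juju_deploy_bundle_cmd is a hypothetical helper, not the plugin's actual code:

```python
from pathlib import Path
from typing import List, Optional


def juju_deploy_bundle_cmd(bundle_path: str, model: Optional[str] = None) -> List[str]:
    """Build the argv for deploying a local bundle with plain `juju deploy`,
    in place of the `juju-bundle deploy --bundle ...` invocation."""
    cmd = ["juju", "deploy"]
    if model:
        cmd += ["--model", model]
    cmd.append(str(Path(bundle_path)))
    return cmd
```

This could then be run through the plugin's own runner, e.g. `await ops_test.run(*juju_deploy_bundle_cmd(rendered_bundle), check=True)`.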
When using the include
param to render_charm()
with paths several directory levels deep, to selectively apply templating to specific files, the parent directories have to be explicitly included as well or they will not be traversed during the walk. Any directory which is a prefix of a path in include
should be implicitly included.
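The implicit-parent behavior being requested can be sketched by expanding the include set with every ancestor directory of each included path; with_implicit_parents is a hypothetical illustration, not the plugin's code:

```python
from pathlib import PurePosixPath


def with_implicit_parents(include):
    """Return `include` plus every ancestor directory of each path, so a
    directory walk can descend to deeply nested included files."""
    expanded = set(include)
    for item in include:
        for parent in PurePosixPath(item).parents:
            if str(parent) not in (".", "/"):
                expanded.add(str(parent))
    return expanded
```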
Thanks for creating and maintaining the pytest operator, it makes integration tests much easier! This is a fairly broad feature request (or potentially a bug report). Recently I have been testing a charm on kubernetes that depends on another charm that can only run on lxd. The first thing I tried was to have a controller running on lxd and use add-k8s to also get the lxd controller to manage the kubernetes charm. Then I used the --cloud
and --controller
parameters to request pytest operator to use the lxd controller and kubernetes cloud. This resulted in the following error (I also tried this on GitHub actions with the same result, it looks like some weird unauthorized error):
tox -e integration -- --jenkins-agent-image localhost:32000/jenkins-agent-k8s:85f19a452481f845507e9457796022a751aa039b --cloud micro-lxd --controller lxd --jenkins-controller-name lxd --jenkins-model-name jenkins --jenkins-unit-number 1
integration installed: asttokens==2.0.8,attrs==22.1.0,backcall==0.2.0,bcrypt==3.2.2,cachetools==5.2.0,certifi==2022.6.15,cffi==1.15.1,charset-normalizer==2.1.0,cryptography==37.0.4,decorator==5.1.1,executing==0.10.0,google-auth==2.10.0,idna==3.3,iniconfig==1.1.1,ipdb==0.13.9,ipython==8.4.0,jedi==0.18.1,Jinja2==3.1.2,juju==3.0.1,jujubundlelib==0.5.7,kubernetes==24.2.0,macaroonbakery==1.3.1,MarkupSafe==2.1.1,matplotlib-inline==0.1.6,multi-key-dict==2.0.3,mypy-extensions==0.4.3,oauthlib==3.2.0,ops==1.5.2,packaging==21.3,paramiko==2.11.0,parso==0.8.3,pbr==5.10.0,pexpect==4.8.0,pickleshare==0.7.5,pluggy==1.0.0,prompt-toolkit==3.0.30,protobuf==3.20.1,ptyprocess==0.7.0,pure-eval==0.2.2,py==1.11.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pycparser==2.21,Pygments==2.13.0,pymacaroons==0.13.0,PyNaCl==1.5.0,pyparsing==3.0.9,pyRFC3339==1.1,pytest==7.1.2,pytest-asyncio==0.19.0,pytest-operator==0.22.0,python-dateutil==2.8.2,python-jenkins==1.7.0,pytz==2022.2.1,PyYAML==6.0,requests==2.28.1,requests-oauthlib==1.3.1,rsa==4.9,six==1.16.0,stack-data==0.4.0,theblues==0.5.2,toml==0.10.2,tomli==2.0.1,toposort==1.7,traitlets==5.3.0,typing-inspect==0.8.0,typing_extensions==4.3.0,urllib3==1.26.11,wcwidth==0.2.5,websocket-client==1.3.3,websockets==10.3
integration run-test-pre: PYTHONHASHSEED='2230270510'
integration run-test: commands[0] | pytest -v --tb native --ignore=/home/jdkandersson/src/jenkins-agent-operator/tests/unit --log-cli-level=INFO -s --jenkins-agent-image localhost:32000/jenkins-agent-k8s:85f19a452481f845507e9457796022a751aa039b --cloud micro-lxd --controller lxd --jenkins-controller-name lxd --jenkins-model-name jenkins --jenkins-unit-number 1
===================================================================================================== test session starts =====================================================================================================
platform linux -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0 -- /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/bin/python
cachedir: .tox/integration/.pytest_cache
rootdir: /home/jdkandersson/src/jenkins-agent-operator, configfile: pyproject.toml
plugins: operator-0.22.0, asyncio-0.19.0
asyncio: mode=strict
collected 2 items
tests/integration/test_charm.py::test_active
------------------------------------------------------------------------------------------------------- live log setup --------------------------------------------------------------------------------------------------------
INFO pytest_operator.plugin:plugin.py:625 Adding model lxd:test-charm-cmhu on cloud micro-lxd
ERROR
tests/integration/test_charm.py::test_build_succeeds ERROR
=========================================================================================================== ERRORS ============================================================================================================
________________________________________________________________________________________________ ERROR at setup of test_active ________________________________________________________________________________________________
Traceback (most recent call last):
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 293, in _asyncgen_fixture_wrapper
result = event_loop.run_until_complete(setup())
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 275, in setup
res = await gen_obj.__anext__()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 231, in ops_test
await ops_test._setup_model()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 690, in _setup_model
model_state = await self._add_model(
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 627, in _add_model
model = await controller.add_model(model_name, cloud_name, **kwargs)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/controller.py", line 342, in add_model
model_info = await model_facade.CreateModel(
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/facade.py", line 481, in wrapper
reply = await f(*args, **kwargs)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/old_clients/_client9.py", line 3158, in CreateModel
reply = await self.rpc(msg)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/facade.py", line 654, in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/connection.py", line 634, in rpc
raise errors.JujuAPIError(result)
juju.errors.JujuAPIError: failed to open kubernetes client: unable to determine legacy status for namespace test-charm-cmhu: Unauthorized
----------------------------------------------------------------------------------------------------- Captured log setup ------------------------------------------------------------------------------------------------------
INFO pytest_operator.plugin:plugin.py:625 Adding model lxd:test-charm-cmhu on cloud micro-lxd
____________________________________________________________________________________________ ERROR at setup of test_build_succeeds ____________________________________________________________________________________________
Traceback (most recent call last):
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 293, in _asyncgen_fixture_wrapper
result = event_loop.run_until_complete(setup())
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_asyncio/plugin.py", line 275, in setup
res = await gen_obj.__anext__()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 231, in ops_test
await ops_test._setup_model()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 690, in _setup_model
model_state = await self._add_model(
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/pytest_operator/plugin.py", line 627, in _add_model
model = await controller.add_model(model_name, cloud_name, **kwargs)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/controller.py", line 342, in add_model
model_info = await model_facade.CreateModel(
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/facade.py", line 481, in wrapper
reply = await f(*args, **kwargs)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/old_clients/_client9.py", line 3158, in CreateModel
reply = await self.rpc(msg)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/facade.py", line 654, in rpc
result = await self.connection.rpc(msg, encoder=TypeEncoder)
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/connection.py", line 634, in rpc
raise errors.JujuAPIError(result)
juju.errors.JujuAPIError: failed to open kubernetes client: unable to determine legacy status for namespace test-charm-cmhu: Unauthorized
=================================================================================================== short test summary info ===================================================================================================
ERROR tests/integration/test_charm.py::test_active - juju.errors.JujuAPIError: failed to open kubernetes client: unable to determine legacy status for namespace test-charm-cmhu: Unauthorized
ERROR tests/integration/test_charm.py::test_build_succeeds - juju.errors.JujuAPIError: failed to open kubernetes client: unable to determine legacy status for namespace test-charm-cmhu: Unauthorized
====================================================================================================== 2 errors in 1.01s ======================================================================================================
Task was destroyed but it is pending!
task: <Task pending name='Task-21' coro=<WebSocketCommonProtocol.recv() done, defined at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:486> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[create_task_with_handler.<locals>._task_result_exp_handler(task_name='tmp', logger=<Logger juju....ion (WARNING)>)() at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/jasyncio.py:64]>
Task was destroyed but it is pending!
task: <Task pending name='Task-22' coro=<Event.wait() done, defined at /usr/lib/python3.10/asyncio/locks.py:201> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-4' coro=<WebSocketCommonProtocol.transfer_data() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:945> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[Task.task_wakeup()]>
Task was destroyed but it is pending!
task: <Task pending name='Task-5' coro=<WebSocketCommonProtocol.keepalive_ping() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:1234> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-6' coro=<WebSocketCommonProtocol.close_connection() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:1289> wait_for=<Task pending name='Task-4' coro=<WebSocketCommonProtocol.transfer_data() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py:945> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-7' coro=<Connection._receiver() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/connection.py:531> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-12' coro=<Connection._pinger() running at /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/juju/client/connection.py:579> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Exception ignored in: <coroutine object WebSocketCommonProtocol.close_connection at 0x7fe9f6ef7450>
Traceback (most recent call last):
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py", line 1329, in close_connection
await self.close_transport()
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py", line 1347, in close_transport
if await self.wait_for_connection_lost():
File "/home/jdkandersson/src/jenkins-agent-operator/.tox/integration/lib/python3.10/site-packages/websockets/legacy/protocol.py", line 1372, in wait_for_connection_lost
await asyncio.wait_for(
File "/usr/lib/python3.10/asyncio/tasks.py", line 405, in wait_for
loop = events.get_running_loop()
RuntimeError: no running event loop
ERROR: InvocationError for command /home/jdkandersson/src/jenkins-agent-operator/.tox/integration/bin/pytest -v --tb native --ignore=/home/jdkandersson/src/jenkins-agent-operator/tests/unit --log-cli-level=INFO -s --jenkins-agent-image localhost:32000/jenkins-agent-k8s:85f19a452481f845507e9457796022a751aa039b --cloud micro-lxd --controller lxd --jenkins-controller-name lxd --jenkins-model-name jenkins --jenkins-unit-number 1 (exited with code 1)
___________________________________________________________________________________________________________ summary ___________________________________________________________________________________________________________
ERROR: integration: commands failed
I was able to deploy charms to the kubernetes controlled from lxd fine using normal juju commands.
What I got working was to have the integration tests run on a separate microk8s controller and use cross-model and cross-controller relations. However, this required the use of ops_test.juju
because I could not find a way to create the relations natively e.g. using ops_test.model.relate
if the relation is on another model (and in this case the model is on another controller).
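For reference, the CLI sequence that ops_test.juju makes reachable looks roughly like the following; all controller, model, application, and endpoint names here are hypothetical, and the exact consume syntax should be checked against the juju release in use:

```shell
# On the lxd controller: offer the application's endpoint.
juju offer lxd-model.jenkins:master
# On the k8s side: consume the offer from the other controller, then relate.
juju consume lxd:admin/lxd-model.jenkins
juju add-relation jenkins-agent jenkins
```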
A few requests to make writing integration tests easier:
- Better typing, matching pythonlib-juju
's signatures, to help writing int tests. Ideally I would like to rely on pytest-operator
and not another library that is wrapped by it.
- Documentation of ops_test.model/unit/app
with some supported usage examples?
- The docs for pytest-operator
are REALLY hard to read, and it's very easy to miss things; could these be improved in a more readable / less manual format?
When developing integration tests, the current typing implementation doesn't support peeking (in my case, vim.lsp.buf.signature_help()
using pyright
as the LSP).
This is because of 'challenging' typing from pythonlib-juju
Although tightening up that typing is out of scope for pytest-operator
, the common workflow building int tests looks like:
1. Access ops_test.model
or ops_test.model.applications
2. Dig into the pythonlib-juju
source code for juju.applications
, juju.unit
, juju.model
etc. (which often does not have types)
3. Fall back to vars(questionable_object)
or dir(questionable_object)
, all of which takes a lot of time and trial-and-error that doesn't belong in test-writing.
Overall, this makes building integration tests really slow, and makes it more likely to blindly copy code from other charm repos that have done the work already. Not bad on its own, but probably undesirable.
For every integration test run in our 20 charms I'm getting these warnings. I'm aware that more people are experiencing the same. Any fix?
================== 1 passed, 3 warnings in 341.19s (0:05:41) ===================
/home/ubuntu/actions-runner/_work/charmed-magma/charmed-magma/orchestrator-bundle/orc8r-accessd-operator/.tox/integration/lib/python3.8/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
Task was destroyed but it is pending!
task: <Task pending name='Task-11' coro=<Connection._pinger() running at /home/ubuntu/actions-runner/_work/charmed-magma/charmed-magma/orchestrator-bundle/orc8r-accessd-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py:572> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f0a204f4310>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-6' coro=<Connection._receiver() running at /home/ubuntu/actions-runner/_work/charmed-magma/charmed-magma/orchestrator-bundle/orc8r-accessd-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py:524> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f0a21da8460>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-4' coro=<WebSocketCommonProtocol.keepalive_ping() running at /home/ubuntu/actions-runner/_work/charmed-magma/charmed-magma/orchestrator-bundle/orc8r-accessd-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:977> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f0a2055fdc0>()]>>
Fatal error on SSL transport
protocol: <asyncio.sslproto.SSLProtocol object at 0x7f0a21de78b0>
transport: <_SelectorSocketTransport closing fd=11>
Traceback (most recent call last):
File "/usr/lib/python3.8/asyncio/selector_events.py", line 910, in write
n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.8/asyncio/sslproto.py", line 685, in _process_write_backlog
self._transport.write(chunk)
File "/usr/lib/python3.8/asyncio/selector_events.py", line 916, in write
self._fatal_error(exc, 'Fatal write error on socket transport')
File "/usr/lib/python3.8/asyncio/selector_events.py", line 711, in _fatal_error
self._force_close(exc)
File "/usr/lib/python3.8/asyncio/selector_events.py", line 723, in _force_close
self._loop.call_soon(self._call_connection_lost, exc)
File "/usr/lib/python3.8/asyncio/base_events.py", line 719, in call_soon
self._check_closed()
File "/usr/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
It seems that if a test is marked with abort_on_fail
but also xfail
and it fails as expected, it does not also abort, which makes subsequent tests have a real FAILURE
rather than an XFAIL
.
It seems that pytest-operator version 0.17.0 breaks integration tests on teardown.
Failing job without pytest-operator being constrained: https://github.com/claudiubelu/waltz-integration-juju/runs/6065841857?check_suite_focus=true
Job passing after constraining pytest-operator<0.17.0: https://github.com/claudiubelu/waltz-integration-juju/runs/6073143526?check_suite_focus=true
I have an integration test that deletes a kubernetes pod using either the kubectl
command or lightkube
(the issue happens with both), as follows:
# Deleting the primary pod using lightkube
from lightkube import AsyncClient
from lightkube.resources.core_v1 import Pod

k8s_client = AsyncClient(namespace=ops_test.model_name)
await k8s_client.delete(Pod, name=replica_name)
The test suite passes successfully.
During the pytest-operator
's teardown operation, the following exception is thrown inconsistently :
INFO pytest_operator.plugin:plugin.py:477 Juju error logs:
controller-0: 14:09:52 ERROR juju.worker.caasapplicationprovisioner.runner exited "mongodb-k8s": Operation cannot be fulfilled on pods "mongodb-k8s-0": the object has been modified; please apply your changes to the latest version and try again
As a result, the model created by pytest-operator is not deleted; it remains stuck in the destroying
status until it is force-deleted manually.
Environment:
- 0.14.0
- ubuntu-20.04
(also happening on ubuntu-22.04
)
- 2.9.32
- 1.24.0
(also happening on 1.23.6
)