
sdcore-upf-k8s-operator's People

Contributors

danielarndt, dariofaccin, dependabot[bot], gatici, ghislainbourgeois, gmerold, gruyaume, markbeierl, patriciareinoso, renovate[bot], saltiyazan, telcobot, tonyandrewmeyer


Forkers

tonyandrewmeyer

sdcore-upf-k8s-operator's Issues

Connection Error during `config-changed` event

Describe the bug

An error occurs during the `config-changed` event.

To Reproduce

  1. Pack the charm:
charmcraft pack --verbose
  2. Deploy the charm:
juju deploy ./sdcore-upf-k8s_ubuntu-22.04-amd64.charm --trust --resource bessd-image=ghcr.io/canonical/sdcore-upf-bess:1.3 --resource pfcp-agent-image=ghcr.io/canonical/sdcore-upf-pfcpiface:1.3

Expected behavior

No error

Logs

unit-sdcore-upf-k8s-0: 10:13:55 ERROR unit.sdcore-upf-k8s/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/./src/charm.py", line 1040, in <module>
    main(UPFOperatorCharm)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/ops/main.py", line 456, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/ops/main.py", line 144, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/ops/framework.py", line 351, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/ops/framework.py", line 853, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/ops/framework.py", line 943, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/./src/charm.py", line 513, in _on_config_changed
    self._on_bessd_pebble_ready(event)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/./src/charm.py", line 535, in _on_bessd_pebble_ready
    self._configure_bessd_workload()
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/./src/charm.py", line 593, in _configure_bessd_workload
    self._run_bess_configuration()
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/./src/charm.py", line 601, in _run_bess_configuration
    if not self._is_bessctl_executed():
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/./src/charm.py", line 637, in _is_bessctl_executed
    return self._bessd_container.exists(path=f"/{BESSCTL_CONFIGURE_EXECUTED_FILE_NAME}")
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/ops/model.py", line 2578, in exists
    self._pebble.list_files(str(path), itself=True)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/ops/pebble.py", line 2318, in list_files
    resp = self._request('GET', '/v1/files', query)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/ops/pebble.py", line 1754, in _request
    response = self._request_raw(method, path, query, headers, data)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/ops/pebble.py", line 1803, in _request_raw
    raise ConnectionError(
ops.pebble.ConnectionError: Could not connect to Pebble: socket not found at '/charm/containers/bessd/pebble.socket' (container restarted?)

Environment

  • Charm / library version (if relevant):
  • Juju version (output from juju --version): 3.4.0
  • Cloud Environment: MicroK8s
  • Kubernetes version (output from kubectl version --short): v1.28.7

Additional context
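
One possible guard (a sketch only, reusing the attribute and constant names visible in the traceback above): check Container.can_connect() and catch ops.pebble.ConnectionError so the charm returns early instead of erroring when the Pebble socket disappears mid-hook.

from ops import pebble

def _is_bessctl_executed(self) -> bool:
    # If the Pebble socket is gone (e.g. the container was just restarted),
    # report "not executed yet" and let a later event retry instead of raising.
    if not self._bessd_container.can_connect():
        return False
    try:
        return self._bessd_container.exists(path=f"/{BESSCTL_CONFIGURE_EXECUTED_FILE_NAME}")
    except pebble.ConnectionError:
        return False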

Cannot remove UPF if charm goes into blocked state

If the charm goes into a blocked state, for example due to an incompatible CPU, it cannot be removed cleanly. It goes into an error state on cleanup with the following in the debug log:

unit-upf-0: 16:32:10 INFO unit.upf/0.juju-log HTTP Request: DELETE https://10.152.183.1/api/v1/namespaces/core/services/upf-external "HTTP/1.1 404 Not Found"
unit-upf-0: 16:32:10 ERROR unit.upf/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/generic_client.py", line 188, in raise_for_status
    resp.raise_for_status()
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/httpx/_models.py", line 759, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '404 Not Found' for url 'https://10.152.183.1/api/v1/namespaces/core/services/upf-external'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
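
A possible mitigation (a sketch; the method name, client attribute, and namespace handling are assumptions, not the charm's actual code): treat a 404 from Kubernetes as "already gone" during cleanup so the remove hook can complete.

import logging

from lightkube.core.exceptions import ApiError
from lightkube.resources.core_v1 import Service

logger = logging.getLogger(__name__)

def _delete_external_service(self) -> None:
    try:
        self._kubernetes_client.delete(Service, name="upf-external", namespace=self._namespace)
    except ApiError as e:
        if e.status.code == 404:
            # The service was never created (or is already deleted); nothing to do.
            logger.info("Service upf-external not found, skipping deletion")
            return
        raise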

SD-Core UPF operator goes to error state when Multus is not enabled

Describe the bug

Using Multus is a prerequisite for sdcore-upf-k8s-operator. However, when it is not available, the operator crashes instead of failing gracefully.

To Reproduce

  1. Deploy SD-Core UPF on a Kubernetes cluster that doesn't have Multus installed or enabled.

Expected behavior

The operator stops gracefully and goes to a Blocked status.

Logs

unit-sdcore-upf-k8s-0: 16:09:50 INFO unit.sdcore-upf-k8s/0.juju-log HTTP Request: GET https://10.152.183.1/apis/k8s.cni.cncf.io/v1/namespaces/user-plane/network-attachment-definitions/access-net "HTTP/1.1 404 Not Found"
unit-sdcore-upf-k8s-0: 16:09:50 ERROR unit.sdcore-upf-k8s/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/lib/charms/kubernetes_charm_libraries/v0/multus.py", line 245, in network_attachment_definition_is_created
    existing_nad = self.client.get(
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/lightkube/core/client.py", line 140, in get
    return self._client.request("get", res=res, name=name, namespace=namespace)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/lightkube/core/generic_client.py", line 245, in request
    return self.handle_response(method, resp, br)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/lightkube/core/generic_client.py", line 196, in handle_response
    self.raise_for_status(resp)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/lightkube/core/generic_client.py", line 190, in raise_for_status
    raise transform_exception(e)
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/lightkube/core/generic_client.py", line 188, in raise_for_status
    resp.raise_for_status()
  File "/var/lib/juju/agents/unit-sdcore-upf-k8s-0/charm/venv/httpx/_models.py", line 759, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '404 Not Found' for url 'https://10.152.183.1/apis/k8s.cni.cncf.io/v1/namespaces/user-plane/network-attachment-definitions/access-net'

Environment

  • Charm version : edge (rev. 6)
  • Juju version: 3.1.7
  • Cloud Environment: Microk8s (1.27-strict/stable)
  • Kubernetes version: v1.27.7
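
One way to fail gracefully (a sketch, assuming a lightkube client is available on the charm as self._kubernetes_client): detect whether the Multus CRD exists before touching NetworkAttachmentDefinitions, and go to Blocked instead of raising.

from lightkube.core.exceptions import ApiError
from lightkube.resources.apiextensions_v1 import CustomResourceDefinition

def _multus_is_installed(self) -> bool:
    # The NetworkAttachmentDefinition CRD is only present when Multus is installed.
    try:
        self._kubernetes_client.get(
            CustomResourceDefinition,
            name="network-attachment-definitions.k8s.cni.cncf.io",
        )
    except ApiError as e:
        if e.status.code == 404:
            return False
        raise
    return True

The event handlers could then set BlockedStatus("Multus is not installed") and return early instead of crashing with an uncaught 404.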

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

github-actions
.github/workflows/codeql-analysis.yml
.github/workflows/issues.yaml
.github/workflows/lint-pr.yaml
  • amannn/action-semantic-pull-request v5
.github/workflows/main.yaml
  • actions/checkout v4
  • dorny/paths-filter v3
  • actions/checkout v4
  • dorny/paths-filter v3
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/promote.yaml
.github/workflows/update-libs.yaml
  • actions/checkout v4
  • peter-evans/create-pull-request v6.0.0
pip_requirements
requirements.txt
  • juju ==3.3.1.1
  • pydantic >=2
  • PyYAML >=6.0.1
test-requirements.txt
  • macaroonbakery ==1.3.4
  • pytest-asyncio ==0.21.1
tox.ini
terraform
terraform/terraform.tf
  • juju ~> 0.10.1

  • Check this box to trigger a request for Renovate to run again on this repository

Charm sets status from init instead of relying on CollectStatus event

Bug Description

During charm `__init__`, config validation is performed: if any option is invalid, the charm sets the status to Blocked directly instead of relying on the CollectStatus event callback.
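
For reference, a minimal sketch of the collect-status pattern in ops (the _config_is_valid helper is hypothetical):

import ops

class UPFOperatorCharm(ops.CharmBase):
    def __init__(self, framework: ops.Framework):
        super().__init__(framework)
        self.framework.observe(self.on.collect_unit_status, self._on_collect_status)

    def _on_collect_status(self, event: ops.CollectStatusEvent) -> None:
        # Status is evaluated once, at the end of every hook, rather than in __init__.
        if not self._config_is_valid():  # hypothetical validation helper
            event.add_status(ops.BlockedStatus("invalid configuration"))
            return
        event.add_status(ops.ActiveStatus())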

To Reproduce

  1. Navigate to src/charm.py, line 110
  2. Unit status should not be handled here

Environment

  • Juju version (output from juju --version): 3.4.2-genericlinux-amd64
  • Cloud Environment: MicroK8s v1.28.7 revision 6532

Relevant log output

No relevant log is required.

Additional context

No response

Hook error when MetalLB does not have available IP addresses

Describe the bug
We get a hook failed error when relating UPF to NMS (N4 interface) whenever all the MetalLB IP addresses have been assigned.

Expected behavior
Charm should go to blocked status.

Logs

Unit                         Workload  Agent  Address       Ports  Message
amf/0*                       active    idle   10.1.110.186         
ausf/0*                      active    idle   10.1.110.190         
gnbsim/0*                    waiting   idle   10.1.110.184         Waiting for N2 information
grafana-agent-k8s/0*         blocked   idle   10.1.110.132         grafana-cloud-config: off, logging-consumer: off
mongodb-k8s/0*               active    idle   10.1.110.142         Primary
nrf/0*                       active    idle   10.1.110.149         
nssf/0*                      active    idle   10.1.110.155         
pcf/0*                       active    idle   10.1.110.156         
router/0*                    active    idle   10.1.110.183         
sdcore-nms/0*                blocked   idle   10.1.110.187         Waiting for `sdcore-management` relation to be created
self-signed-certificates/0*  active    idle   10.1.110.140         
smf/0*                       active    idle   10.1.110.163         
udm/0*                       active    idle   10.1.110.166         
udr/0*                       active    idle   10.1.110.172         
upf/0*                       error     idle   10.1.110.177         hook failed: "fiveg_n4-relation-joined"
unit-upf-0: 16:20:42 ERROR unit.upf/0.juju-log fiveg_n4:28: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 980, in <module>
    main(UPFOperatorCharm)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/main.py", line 436, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/main.py", line 144, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 340, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 842, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 931, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/lib/charms/sdcore_upf/v0/fiveg_n4.py", line 231, in _on_relation_joined
    self.on.fiveg_n4_request.emit(relation_id=event.relation.id)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 340, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 842, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 931, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 252, in _on_fiveg_n4_request
    self._update_fiveg_n4_relation_data()
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 276, in _update_fiveg_n4_relation_data
    upf_hostname=self._get_n4_upf_hostname(),
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 293, in _get_n4_upf_hostname
    elif lb_hostname := self._upf_load_balancer_service_hostname():
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 751, in _upf_load_balancer_service_hostname
    return service.status.loadBalancer.ingress[0].hostname  # type: ignore[attr-defined]
TypeError: 'NoneType' object is not subscriptable

The EXTERNAL-IP for the upf-external service stays as pending.

$ kubectl -n core2 get svc

upf-external                         LoadBalancer   10.152.183.188   <pending>     8805:30196/UDP                29m

How to reproduce

  1. Disable MetalLB (that should be enough?), or make sure that all the IPs in its range have already been assigned.
  2. Deploy SD-Core.
  3. Deploy NMS.
  4. juju integrate sdcore-nms:sdcore-management sdcore-webui:sdcore-management

Environment

  • Juju version: 3.1.6
  • Cloud Environment: MicroK8s
  • Kubernetes version: v1.27.6
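
A possible guard (a sketch; the method name mirrors the traceback, while the client attribute and namespace are assumptions): return None when the LoadBalancer has no ingress yet, so the charm can report a Blocked or Waiting status rather than error out.

from typing import Optional

from lightkube.resources.core_v1 import Service

def _upf_load_balancer_service_hostname(self) -> Optional[str]:
    service = self._kubernetes_client.get(
        Service, name="upf-external", namespace=self._namespace
    )
    ingress_points = (
        service.status.loadBalancer.ingress
        if service.status and service.status.loadBalancer
        else None
    )
    if not ingress_points:
        # MetalLB has not assigned an address yet; EXTERNAL-IP is still <pending>.
        return None
    return ingress_points[0].hostname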

UPF goes into error state without meaningful message on invalid CPU

While the message about the CPU does show briefly in juju status, the charm itself just shows an error for the config-changed hook. This also prevents the charm from being removed cleanly, as the user must juju resolve the unit a couple of times before it is removed.
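
An illustrative sketch of a friendlier failure mode, building on the collect-status pattern above (the helper name and the required CPU flag are assumptions): surface the incompatibility through the unit status instead of raising, so the reason stays visible in juju status and removal is not blocked.

def _cpu_is_compatible(self) -> bool:
    # BESS needs specific instruction set extensions (AVX2 is used here as an example).
    with open("/proc/cpuinfo") as cpuinfo:
        return "avx2" in cpuinfo.read()

def _on_collect_status(self, event: ops.CollectStatusEvent) -> None:
    if not self._cpu_is_compatible():
        event.add_status(ops.BlockedStatus("CPU is not compatible, see logs for details"))
        return
    event.add_status(ops.ActiveStatus())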

Scaling UPF down to 0 then back to 1 results in failure

If the UPF needs to be stopped, for example to conserve CPU resources when not in use, it cannot be scaled back up to 1 unit as Multus errors seem to occur. The following is observed on describing the pod after scaling back to 1 unit:

  Warning  NoNetworkFound          10s   multus             cannot find a network-attachment-definition (access-net) in namespace (core): network-attachment-definitions.k8s.cni.cncf.io "access-net" not found
  Warning  FailedCreatePodSandBox  10s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b6d1bb486054a040c13ad71fa794b362e810f89221ef0ac66682096241f9cd0b": plugin type="multus" name="multus-cni-network" failed (add): Multus: [core/upf-0/581b7566-b041-4324-a337-7e1a50307ed4]: error loading k8s delegates k8s args: TryLoadPodDelegates: error in getting k8s network for pod: GetNetworkDelegates: failed getting the delegate: getKubernetesDelegate: cannot find a network-attachment-definition (access-net) in namespace (core): network-attachment-definitions.k8s.cni.cncf.io "access-net" not found

Integration tests are flaky because of an error at the `start` event

Describe the bug

Integration tests are flaky because of an error at the start event. Here is an example CI run:

To Reproduce

  1. Run integration tests multiple times

Expected behavior

Integration tests run reliably

Logs

INFO     juju.model:model.py:2957 Waiting for model:
  sdcore-upf-k8s/0 [idle] error: hook failed: "start"
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 INFO juju.worker.uniter awaiting error resolution for "start" hook
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 DEBUG juju.worker.uniter [AGENT-STATUS] error: hook failed: "start"
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 DEBUG juju.worker.uniter.remotestate storage attachment change for sdcore-upf-k8s/0: {storage-config-0 {2 alive true /var/lib/juju/storage/config/0}}
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 INFO juju.worker.uniter awaiting error resolution for "start" hook
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 DEBUG juju.worker.uniter.remotestate storage attachment change for sdcore-upf-k8s/0: {storage-shared-app-1 {2 alive true /var/lib/juju/storage/shared-app/0}}
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 INFO juju.worker.uniter awaiting error resolution for "start" hook
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 DEBUG juju.worker.uniter.remotestate workloadEvent enqueued for sdcore-upf-k8s/0: 0
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 DEBUG juju.worker.uniter.remotestate workloadEvent enqueued for sdcore-upf-k8s/0: 1
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 INFO juju.worker.uniter awaiting error resolution for "start" hook
unit-sdcore-upf-k8s-0: 2024-02-29 19:18:32 INFO juju.worker.uniter awaiting error resolution for "start" hook
model-1a9263d9-23d8-4079-8a19-5d11729b3134: 2024-02-29 19:19:00 DEBUG juju.worker.caasadmission received admission request for sdcore-upf-k8s-0.17b86b939ef4126c of /v1, Kind=Event in namespace test-integration-t0k5
model-1a9263d9-23d8-4079-8a19-5d11729b3134: 2024-02-29 19:19:01 DEBUG juju.worker.caasadmission received admission request for sdcore-upf-k8s-0.17b86b939ef4126c of /v1, Kind=Event in namespace test-integration-t0k5
unit-sdcore-upf-k8s-0: 2024-02-29 19:19:01 DEBUG juju.worker.uniter.remotestate storage attachment change for sdcore-upf-k8s/0: {storage-config-0 {2 alive true /var/lib/juju/storage/config/0}}
unit-sdcore-upf-k8s-0: 2024-02-29 19:19:01 INFO juju.worker.uniter awaiting error resolution for "start" hook
unit-sdcore-upf-k8s-0: 2024-02-29 19:19:01 DEBUG juju.worker.uniter.remotestate storage attachment change for sdcore-upf-k8s/0: {storage-shared-app-1 {2 alive true /var/lib/juju/storage/shared-app/0}}
unit-sdcore-upf-k8s-0: 2024-02-29 19:19:01 INFO juju.worker.uniter awaiting error resolution for "start" hook

Environment

  • Charm / library version (if relevant): 5baf9d5
  • Juju version (output from juju --version): 3.4.0
  • Cloud Environment: MicroK8s
  • Kubernetes version (output from kubectl version --short): 1.29-strict/stable

UPF startup delayed ~15 minutes

As the UPF is deployed, the charm reconfigures Multus to add the additional network attachment definitions and then restarts the pod. An error seems to prevent the charm from restarting: the following error appears repeatedly in the Juju debug log, and the pod stays in the Terminating state.

unit-upf-0: 17:08:30 ERROR juju.worker.uniter.operation hook "stop" (via hook dispatching script: dispatch) failed: exit status 1
unit-upf-0: 17:08:30 WARNING unit.upf/0.juju-log 2 containers are present in metadata.yaml and refresh_event was not specified. Defaulting to update_status. Metrics IP may not be set in a timely fashion.
unit-upf-0: 17:08:30 INFO unit.upf/0.juju-log HTTP Request: GET https://10.152.183.1/apis/k8s.cni.cncf.io/v1/namespaces/user-plane/network-attachment-definitions "HTTP/1.1 401 Unauthorized"
unit-upf-0: 17:08:30 ERROR unit.upf/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/generic_client.py", line 188, in raise_for_status
    resp.raise_for_status()
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/httpx/_models.py", line 749, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '401 Unauthorized' for url 'https://10.152.183.1/apis/k8s.cni.cncf.io/v1/namespaces/user-plane/network-attachment-definitions'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-upf-0/charm/lib/charms/kubernetes_charm_libraries/v0/multus.py", line 292, in list_network_attachment_definitions
    return list(
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/generic_client.py", line 252, in list
    cont, chunk = self.handle_response('list', resp, br)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/generic_client.py", line 196, in handle_response
    self.raise_for_status(resp)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/generic_client.py", line 190, in raise_for_status
    raise transform_exception(e)
lightkube.core.exceptions.ApiError: Unauthorized

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 961, in <module>
    main(UPFOperatorCharm)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/main.py", line 439, in main
    framework.reemit()
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 849, in reemit
    self._reemit()
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 928, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 356, in _on_config_changed
    self.on.nad_config_changed.emit()
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 342, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 839, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 928, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/lib/charms/kubernetes_charm_libraries/v0/multus.py", line 533, in _configure_multus
    self._configure_network_attachment_definitions()
  File "/var/lib/juju/agents/unit-upf-0/charm/lib/charms/kubernetes_charm_libraries/v0/multus.py", line 572, in _configure_network_attachment_definitions
    ) in self.kubernetes.list_network_attachment_definitions():
  File "/var/lib/juju/agents/unit-upf-0/charm/lib/charms/kubernetes_charm_libraries/v0/multus.py", line 296, in list_network_attachment_definitions
    raise KubernetesMultusError("Could not list NetworkAttachmentDefinitions")
charms.kubernetes_charm_libraries.v0.multus.KubernetesMultusError: Could not list NetworkAttachmentDefinitions

This message is repeated every second for ~15 minutes. It appears that something in Juju or K8s gives up at this point and the charm is forcefully restarted. Once that happens, it is able to proceed with creating the NetworkAttachmentDefinitions and it starts up normally.
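
A possible mitigation (a sketch; the handler and cleanup helper are assumptions): once the unit's Kubernetes credentials have been revoked, API calls made from the stop/remove path will keep returning 401, so catching the error and logging it lets the hook complete instead of retrying for ~15 minutes.

import logging

import ops
from lightkube.core.exceptions import ApiError

logger = logging.getLogger(__name__)

def _on_stop(self, event: ops.StopEvent) -> None:
    try:
        self._remove_network_attachment_definitions()  # hypothetical cleanup helper
    except ApiError as e:
        if e.status.code == 401:
            # Credentials are already gone; namespace teardown will clean up the NADs.
            logger.warning("Unauthorized against the Kubernetes API, skipping Multus cleanup")
            return
        raise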

Error removing UPF application (DPDK)

Describe the bug

Upon destroying the model, the UPF went into an error state.

To Reproduce

  1. Deploy the full SD-Core CUPS as in the mastering tutorial.
  2. Destroy the control-plane model.
  3. Destroy the user-plane model.
  4. Note that the UPF remains and the model is not destroyed.

Expected behavior

Model should be removed without user intervention.

Logs

The following is observed in the debug-log:

unit-upf-0: 20:14:42 INFO unit.upf/0.juju-log HTTP Request: GET https://10.152.183.1/apis/apps/v1/namespaces/user-plane/statefulsets/upf "HTTP/1.1 200 OK"
unit-upf-0: 20:14:42 INFO unit.upf/0.juju-log HTTP Request: PATCH https://10.152.183.1/apis/apps/v1/namespaces/user-plane/statefulsets/upf?fieldManager=KubernetesClient "HTTP/1.1 409 Conflict"
unit-upf-0: 20:14:42 ERROR unit.upf/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/generic_client.py", line 188, in raise_for_status
    resp.raise_for_status()
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/httpx/_models.py", line 761, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '409 Conflict' for url 'https://10.152.183.1/apis/apps/v1/namespaces/user-plane/statefulsets/upf?fieldManager=KubernetesClient'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-upf-0/charm/lib/charms/kubernetes_charm_libraries/v0/multus.py", line 427, in unpatch_statefulset
    self.client.patch(
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/client.py", line 325, in patch
    return self._client.request("patch", res=res, name=name, namespace=namespace, obj=obj,
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/generic_client.py", line 245, in request
    return self.handle_response(method, resp, br)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/generic_client.py", line 196, in handle_response
    self.raise_for_status(resp)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/lightkube/core/generic_client.py", line 190, in raise_for_status
    raise transform_exception(e)
lightkube.core.exceptions.ApiError: Apply failed with 1 conflict: conflict with "python-httpx" using apps/v1: .spec.template.spec.containers[name="bessd"].securityContext.privileged

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 1153, in <module>
    main(UPFOperatorCharm)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/main.py", line 544, in main
    manager.run()
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/main.py", line 520, in run
    self._emit()
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/main.py", line 509, in _emit
    _emit_charm_event(self.charm, self.dispatcher.event_name)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/main.py", line 143, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 352, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 851, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 941, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/lib/charms/kubernetes_charm_libraries/v0/multus.py", line 736, in _on_remove
    self.kubernetes.unpatch_statefulset(
  File "/var/lib/juju/agents/unit-upf-0/charm/lib/charms/kubernetes_charm_libraries/v0/multus.py", line 436, in unpatch_statefulset
    raise KubernetesMultusError(f"Could not remove patches from statefulset {name}")
charms.kubernetes_charm_libraries.v0.multus.KubernetesMultusError: Could not remove patches from statefulset upf
unit-upf-0: 20:14:42 ERROR juju.worker.uniter.operation hook "remove" (via hook dispatching script: dispatch) failed: exit status 1

Environment

  • Charm / library version (if relevant): 1.4/edge, rev 135
  • Juju version (output from juju --version): 3.4.2-genericlinux-amd64
  • Cloud Environment: Microk8s 1.29 (classic)
  • Kubernetes version (output from kubectl version --short): v1.29.2
  • Terraform version (output from terraform version): v1.8.1-dev

Additional context

UPF was deployed in DPDK mode, and therefore with HugePages as well.
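
The conflict comes from two server-side-apply field managers ("KubernetesClient" and "python-httpx") owning the same field. One possible workaround (a sketch, not necessarily the right fix for the multus library) is to force the apply when unpatching the StatefulSet, which recent lightkube versions support for apply-type patches:

from lightkube import Client
from lightkube.resources.apps_v1 import StatefulSet
from lightkube.types import PatchType

client = Client()
client.patch(
    StatefulSet,
    name="upf",
    namespace="user-plane",
    obj=statefulset_delta,  # the unpatched spec, built elsewhere (placeholder)
    patch_type=PatchType.APPLY,
    field_manager="KubernetesClient",
    force=True,  # take ownership of fields held by the conflicting manager
)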
