traefik-k8s-operator's Introduction

Traefik Kubernetes Charmed Operator

This Juju charmed operator, written with the Charmed Operator Framework, powers ingress controller-like capabilities on Kubernetes. By "ingress controller-like capabilities", we mean that the Traefik Kubernetes charmed operator exposes Juju applications outside of a Kubernetes cluster without relying on the Kubernetes Ingress resource. Rather, Traefik is instructed to expose Juju applications by means of relations with them.

Setup

These instructions assume you will run the charm on MicroK8s and rely on a few add-ons, specifically:

sudo snap install microk8s
microk8s enable storage dns
# The following line is required unless you plan to use the `external_hostname` configuration option
microk8s enable metallb 192.168.0.10-192.168.0.100 # You likely want to change these IP ranges

Usage

juju deploy ./traefik-k8s_ubuntu-20.04-amd64.charm traefik-ingress --trust --resource traefik-image=ghcr.io/canonical/traefik:2.10.4

Configurations

  • external_hostname allows you to specify the hostname that Traefik will assume is its externally-visible URL, and that will be used to generate the URLs passed to the proxied applications. Note that this has to be a 'bare' hostname, i.e., no http:// prefix and no :port suffix; neither is configurable at the moment. (see ) If external_hostname is unspecified, Traefik will use the ingress IP of its Kubernetes service, and the charm will go into WaitingStatus if it does not discover an ingress IP on its Kubernetes service. The Setup section shows how to optionally set up metallb with MicroK8s so that Traefik's Kubernetes service will receive an ingress IP.

  • routing_mode: an enumeration that allows you to select how Traefik will generate routes:

    • path: Traefik will use its externally-visible URL and create a route for the requester that will be structured like:

      http://<external_hostname>:<port>/<requester_model_name>-<requester_application_name>-<requester-unit-index>
      

      For example, an ingress-per-unit provider with the external URL http://foo will provide to the unit my-unit/2 in the my-model model the following URL:

      http://foo/my-model-my-unit-2
      
    • subdomain: Traefik will use its externally-visible URL, based on external_hostname or, failing that, the ingress IP, and create a route for the requester that will be structured like:

      http://<requester_model_name>-<requester_application_name>-<requester-unit-index>.<external_hostname>:<port>/
      

      For example, an ingress-per-unit provider with the external URL http://foo:8080 will provide to the unit my-unit/2 in the my-model model the following URL:

      http://my-model-my-unit-2.foo:8080
      

      IMPORTANT: With the subdomain routing mode, incoming HTTP requests have the Host header set to match one of the routes. Considering the example above, incoming requests are expected to have the following HTTP header:

      Host: my-model-my-unit-2.foo
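The URL shapes produced by the two routing modes can be sketched as a small helper. This is a hypothetical illustration, not the charm's actual code; the charm may omit the port when it is implicit.

```python
def ingress_url(mode, external_hostname, port, model, app, unit_index):
    """Build the proxied URL for one unit, mirroring the two routing modes
    described above (illustrative sketch, not the charm's implementation)."""
    prefix = f"{model}-{app}-{unit_index}"
    if mode == "path":
        return f"http://{external_hostname}:{port}/{prefix}"
    if mode == "subdomain":
        return f"http://{prefix}.{external_hostname}:{port}/"
    raise ValueError(f"unknown routing mode: {mode}")
```

For the subdomain example above, `ingress_url("subdomain", "foo", 8080, "my-model", "my-unit", 2)` yields `http://my-model-my-unit-2.foo:8080/`.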
      

Relations

Providing ingress proxying

This charmed operator supports two types of proxying:

  • per-app: This is the "classic" proxying logic of an ingress-controller, load-balancing incoming connections to the various units of the Juju application related via the ingress relation by routing over the latter's Kubernetes service.
  • per-unit: Traefik will have routes to the individual pods of the proxied Juju application related to it via the ingress-per-unit relation. This type of routing, while somewhat unconventional in Kubernetes, is necessary for applications like Prometheus (where each remote-write endpoint needs to be routed to separately) and beneficial to databases, whose clients can perform client-side load balancing.
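To make the per-unit mode concrete, here is a rough sketch, in plain Python, of the kind of Traefik dynamic configuration it implies: one router and one service per pod. The field names follow Traefik's HTTP provider schema; the exact configuration the charm renders may differ.

```python
def per_unit_config(prefix, unit_urls):
    """Sketch of per-unit Traefik dynamic config: one router and one
    service per proxied pod (illustrative only; not the charm's output)."""
    routers, services = {}, {}
    for unit, url in unit_urls.items():
        # e.g. "my-model" + "prom/0" -> "my-model-prom-0"
        name = f"{prefix}-{unit}".replace("/", "-")
        routers[name] = {"rule": f"PathPrefix(`/{name}`)", "service": name}
        services[name] = {"loadBalancer": {"servers": [{"url": url}]}}
    return {"http": {"routers": routers, "services": services}}
```

In per-app mode there would instead be a single service whose load balancer targets the application's Kubernetes service.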

Monitoring Traefik itself

The metrics endpoint exposed by Traefik can be scraped by Prometheus over the prometheus_scrape relation interface with:

juju add-relation traefik-ingress:metrics-endpoint prometheus

Contributing

Please see the Juju SDK docs for guidelines on enhancements to this charm following best practice guidelines, and CONTRIBUTING.md for developer guidance.

traefik-k8s-operator's People

Contributors

abuelodelanada, benhoyt, danielarndt, dnplas, dstathis, gatici, ghislainbourgeois, javacruft, jnsgruk, johnsca, lucabello, mmkay, natalian98, nishant-dash, observability-noctua-bot, pietropasotti, rbarry82, sed-i, simskij, weiiwang01, yanksyoon


traefik-k8s-operator's Issues

Error with traefik lib directory (change `-` with `_`)

Traefik lib directory uses - instead of _

$ ls -1 lib/charms
alertmanager_k8s
grafana_k8s
loki_k8s
observability_libs
traefik-k8s

and this leads to a SyntaxError

unit-loki-k8s-0: 17:16:32 WARNING unit.loki-k8s/0.install   File "./src/charm.py", line 24
unit-loki-k8s-0: 17:16:32 WARNING unit.loki-k8s/0.install     from charms.traefik-k8s.v0.ingress_per_unit import IngressPerUnitRequirer
unit-loki-k8s-0: 17:16:32 WARNING unit.loki-k8s/0.install                        ^
unit-loki-k8s-0: 17:16:32 WARNING unit.loki-k8s/0.install SyntaxError: invalid syntax
unit-loki-k8s-0: 17:16:32 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1
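The failure is a plain Python naming rule: a hyphen is not legal in an identifier, so a module path containing `traefik-k8s` can never be imported with an ordinary import statement. A quick demonstration:

```python
# A hyphen is parsed as subtraction, so this import line is a SyntaxError
# before the charm code even runs:
try:
    compile("from charms.traefik-k8s.v0.ingress_per_unit import IngressPerUnitRequirer",
            "<demo>", "exec")
    hyphen_ok = True
except SyntaxError:
    hyphen_ok = False

# With an underscore the statement is syntactically valid:
compile("from charms.traefik_k8s.v0.ingress_per_unit import IngressPerUnitRequirer",
        "<demo>", "exec")
print("hyphenated path compiles:", hyphen_ok)
```

Hence the fix is renaming the lib directory from `traefik-k8s` to `traefik_k8s`.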

Error description when deploying without metallb

Bug Description

When deploying the charm (from the COS bundle) without metallb enabled, the charm runs into an exception.

To Reproduce

  1. Enable only dns and hostpath-storage on microk8s
  2. juju deploy cos-lite --channel edge --trust
  • This deploys revision 93 of the traefik-k8s charm

Environment

juju: 2.9.35 (bootstrapped agent: 2.9.29)
microk8s: 1.25.3

Relevant log output

unit-traefik-0: 10:53:55 ERROR unit.traefik/0.juju-log traefik-route:5: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 710, in <module>
    main(TraefikIngressCharm)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/main.py", line 431, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/main.py", line 142, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 316, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 784, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 857, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-traefik-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 166, in _on_relation_changed
    self._update_requirers_with_external_host()
  File "/var/lib/juju/agents/unit-traefik-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 175, in _update_requirers_with_external_host
    relation.data[self.charm.app]["external_host"] = self._stored.external_host
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/model.py", line 945, in __setitem__
    raise RelationDataError('relation data values must be strings')
ops.model.RelationDataError: relation data values must be strings
unit-traefik-0: 10:53:55 ERROR juju.worker.uniter.operation hook "traefik-route-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-traefik-0: 10:53:55 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook

Additional context

No response
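The traceback shows `external_host` being written to relation data while it is still None (no metallb, so no ingress IP yet). Relation data values must be strings, so a guard along these lines would avoid the crash; this is a hypothetical sketch of the shape of a fix, not the charm's actual code.

```python
def external_host_value(external_host):
    """Relation data values must be strings: coerce an unset external host
    to "" instead of writing None (hypothetical guard, not the charm's fix)."""
    return str(external_host) if external_host is not None else ""
```

The requirer side then treats an empty string as "no external host published yet".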

Remote "rewrite" and "prefix" from the ingress_per_unit

The prefix and rewrite parts of this spec are underspecified, and a holdover from specific implementation details of the istio operators. There is a great, undeniable need to provide flexibility in specifying the route used by the ingress to route to the proxied application, but that requires careful design to not be (1) bound to specific ingress implementations or (2) something that charm authors need to leak into the configuration options of their applications.

Remove the `serialized_data_interface` dependency

At present, this charm depends on the serialized-data-interface (SDI) library to perform version negotiation and schema validation on relation data.

While this has some attractive properties, the implementation is such that it obscures the way that relations are actually formed, and how charms really communicate through the use of 'relation data bags'. Concepts such as requests and responses do not really exist in the context of relations, and including such language will be confusing to the ecosystem.

Juju will soon include the native ability to perform some of the function that SDI aims to provide, and this charm is likely to proliferate beyond just internal Canonical products, and as such we should aim to implement what we need using constructs natively available in ops.

My suggestion is that we fix ingress-per-unit first, and then ingress - in line with the Observability team's current priorities.

`ready_for_unit` is emitted too early

Bug Description

Currently, ready_for_unit is emitted based on relation events only:

if this_unit_name in changed:
    self.on.ready_for_unit.emit(  # type: ignore
        self.relation, current_urls[this_unit_name]
    )

This is racy with the traefik workload: sometimes the remote app processes the event before the traefik workload is in fact ready.

To Reproduce

Relate prometheus and traefik.

Environment

  • traefik-k8s edge 84

Relevant log output

# Prometheus gets the "ready-for-unit" event, but http requests still fail

Ingress for unit ready on 'http://pd-ssd-4cpu-8gb.us-central1-a.c.lma-light-load-testing.internal:80/cos-lite-load-test-prometheus-0'

config reload error via http://localhost:9090/cos-lite-load-test-prometheus-0/-/reload: HTTPConnectionPool(host='localhost', port=9090): Read timed out. (read timeout=2.0)


# After a while update-status fires, at which point the traefik workload is really ready and http reqs to prom work

Emitting Juju event update_status.

Starting new HTTP connection (1): localhost:9090
http://localhost:9090 "GET /cos-lite-load-test-prometheus-0/api/v1/status/buildinfo HTTP/1.1" 200 188

Additional context

No response

possible issue with IPURequirer.on.revoked emission

Bug Description

At the moment we uniformly handle changed and broken; as a result we do:

current_urls = self.urls or {}

We cannot guarantee, however, that if we're in a broken context, urls will be None as we expect (Traefik wipes the data, but it might be that juju hasn't deleted it yet, or the requirer can otherwise still see it).

Instead, we could force current_urls to be {} if isinstance(event, BrokenEvent) to generate, regardless of traefik's data, the correct delta.
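That suggestion can be sketched as a small delta computation; the function and its names are hypothetical, for illustration only.

```python
def compute_url_delta(current_urls, previous_urls, relation_broken):
    """Compute which units' URLs changed, treating a broken relation as
    'no URLs' regardless of what is still readable in the databag
    (sketch of the proposal above; names are hypothetical)."""
    if relation_broken:
        current_urls = {}
    changed = {
        unit
        for unit in set(current_urls) | set(previous_urls)
        if current_urls.get(unit) != previous_urls.get(unit)
    }
    return current_urls, changed
```

On a broken event this always produces a revocation delta for every previously ingressed unit, even if stale data is still visible.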

To Reproduce

Unclear yet whether this issue results in a real reproducible bug.

Environment

--

Relevant log output

--

Additional context

No response

Uncaught exception while using ingress_per_unit lib

While integrating ingress_per_unit lib in a charm, I'm getting:

unit-loki-k8s-0: 17:25:00 ERROR unit.loki-k8s/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 235, in <module>
    main(LokiOperatorCharm)
  File "/var/lib/juju/agents/unit-loki-k8s-0/charm/venv/ops/main.py", line 414, in main
    charm = charm_class(framework)
  File "./src/charm.py", line 53, in __init__
    self.ingress_per_unit = IngressPerUnitRequirer(self, port=80)
  File "/var/lib/juju/agents/unit-loki-k8s-0/charm/lib/charms/traefik_k8s/v0/ingress_per_unit.py", line 308, in __init__
    super().__init__(charm, endpoint)
  File "/var/lib/juju/agents/unit-loki-k8s-0/charm/venv/serialized_data_interface/relation.py", line 102, in __init__
    self._validate_relation_meta()
  File "/var/lib/juju/agents/unit-loki-k8s-0/charm/venv/serialized_data_interface/relation.py", line 153, in _validate_relation_meta
    assert (
AssertionError: IngressPerUnitRequirer must be used on a 'limit: 1' relation endpoint
unit-loki-k8s-0: 17:25:00 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1

In the charm metadata.yaml file I have:

requires:
  ingress-per-unit:
    interface: ingress_per_unit

With the following metadata.yaml:

requires:
  ingress-per-unit:
    interface: ingress_per_unit
    limit: 1

This error disappears.

This exception should be caught, and the documentation should be explicit about the limit.

Documentation improvement

In the ingress_per_unit docstring we say:

Getting Started

To get started using the library, you just need to fetch the library using charmcraft. Note
that you also need to add the serialized_data_interface dependency to your charm's
requirements.txt.

cd some-charm
charmcraft fetch-lib charms.traefik-k8s.v0.ingress_per_unit
echo "serialized_data_interface" >> requirements.txt

If requirements.txt has no newline at the end of the file, this leads to an error in the file, for instance:

ops
kubernetes
requests
lightkube
lightkube-models
aiohttpserialized_data_interface
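A defensive appender avoids the concatenation by inserting a newline first when the file does not already end with one. This is an illustrative sketch (the docs simply use `echo >>`, which appends its own trailing newline but cannot repair a missing one on the previous line).

```python
from pathlib import Path

def append_requirement(path, package):
    """Append a dependency to requirements.txt, inserting a newline first
    if the file does not already end with one (avoids entries like
    'aiohttpserialized_data_interface')."""
    p = Path(path)
    text = p.read_text() if p.exists() else ""
    if text and not text.endswith("\n"):
        text += "\n"
    p.write_text(text + package + "\n")
```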

traefik_k8s.v1.ingress relation relies on deprecated controller storage

Bug Description

The interface relies on deprecated controller storage (canonical/operator#882). If a charm respects this deprecation notice and sets use_juju_for_storage=False then the state stored in self._stored.current_url (for example) can be lost.

To Reproduce

Deploy a charm which uses the ingress relation and amend the charm to echo the value of url property of the ingress requires interface. This should return a url. Build a new copy of the charm with use_juju_for_storage=False and upgrade the existing charm. The url property will now be None.

A more long-winded way:

$ git clone  https://opendev.org/openstack/charm-placement-k8s 
$ cd charm-placement-k8s/
$ juju download --series jammy --channel yoga/edge placement-k8s
$ mv placement*.charm  placement-k8s.charm
$ tox -e func-smoke
...
$ juju status placement
Model              Controller  Cloud/Region        Version  SLA          Timestamp
zaza-df54e5a6cf87  micro       microk8s/localhost  2.9.34   unsupported  11:42:18Z

App        Version  Status  Scale  Charm          Channel  Rev  Address        Exposed  Message
placement           active      1  placement-k8s             0  10.152.183.48  no       

Unit          Workload  Agent  Address      Ports  Message
placement/0*  active    idle   10.1.124.51         

$ mkdir /tmp/charm-tmp
$ unzip -d /tmp/charm-tmp  placement-k8s.charm

$ sed -i -e 's/use_juju_for_storage=True/use_juju_for_storage=False/' /tmp/charm-tmp/src/charm.py 
$ cd /tmp/charm-tmp
$ zip -r /tmp/placement.charm *
$ juju upgrade-charm --switch /tmp/placement.charm placement
$ juju status placement
Model              Controller  Cloud/Region        Version  SLA          Timestamp
zaza-df54e5a6cf87  micro       microk8s/localhost  2.9.34   unsupported  11:48:15Z

App        Version  Status   Scale  Charm          Channel  Rev  Address        Exposed  Message
placement           waiting      1  placement-k8s             1  10.152.183.48  no       waiting for units to settle down

Unit          Workload  Agent  Address      Ports  Message
placement/0*  waiting   idle   10.1.124.32         (ingress-internal) integration incomplete

This is because the placement charm checks the url property of the relation to check it is ready.
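One direction for a fix is to recompute the URL from the relation databag on every access instead of caching it in StoredState, so nothing is lost when use_juju_for_storage is False. The sketch below is hypothetical; the key name and JSON encoding only loosely follow the v1 ingress interface and may not match the library exactly.

```python
import json

def url_from_relation(app_databag):
    """Recompute the ingress URL from the provider's app databag on each
    access, instead of relying on locally stored state (illustrative
    sketch; the real databag layout may differ)."""
    raw = app_databag.get("ingress")
    if not raw:
        return None
    return json.loads(raw).get("url")
```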

Environment

Running in microk8s. The library version is:

LIBAPI = 1
LIBPATCH = 5

Relevant log output

N/A

Additional context

N/A

Remove SDI-inherited testing infrastructure

Bug Description

I made this change, which seemed perfectly innocuous (screenshot omitted).

I bumped into an issue: test_lib_helpers was patching the IngressRequest object to such an extent that _check_unit_belongs_to_relation(self.units[0]) was raising.
Bottom line is: all the mocking and patching we're doing in test_lib_helpers is hard to maintain and debug.

To Reproduce

Check out purge-sdi-polish at 18c7b99 and run the unit tests.

Environment

dev setup

Relevant log output

n.a.

Additional context

No response

Docstring error

Bug Description

In #35 serialized_data_interface was removed as a dependency, but in ingress_per_unit.py we still have:

## Getting Started

To get started using the library, you just need to fetch the library using `charmcraft`.
**Note that you also need to add the `serialized_data_interface` dependency to your
charm's `requirements.txt`.**

To Reproduce

Environment

Relevant log output

-

Additional context

No response

Cannot index relation data with "None"

Bug Description

When removing a relation (traefik_route interface) between traefik-k8s and grafana-k8s, a KeyError exception is raised:

KeyError: 'Cannot index relation data with "None". Are you trying to access remote app data during a relation-broken event? This is not allowed.'

To Reproduce

  1. Deploy cos: juju deploy cos-lite --trust
  2. Wait until all units are in active status
  3. Remove relations, for instance: jhack imatrix clear
  4. See in juju debug-log the errors

Environment

juju: 3.0.0 (and 2.9.35 as well)
microk8s: 1.25-strict

Relevant log output

unit-grafana-0: 00:56:30.249 INFO juju.worker.uniter.operation ran "grafana-source-relation-broken" hook (via hook dispatching script: dispatch)
unit-grafana-0: 00:56:32.056 ERROR unit.grafana/0.juju-log ingress:5: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 1156, in <module>
    main(GrafanaCharm, use_juju_for_storage=True)
  File "/var/lib/juju/agents/unit-grafana-0/charm/venv/ops/main.py", line 426, in main
    charm = charm_class(framework)
  File "./src/charm.py", line 210, in __init__
    url=self.external_url,
  File "./src/charm.py", line 1068, in external_url
    if self.ingress.external_host:
  File "/var/lib/juju/agents/unit-grafana-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 220, in external_host
    self._update_stored_external_host()
  File "/var/lib/juju/agents/unit-grafana-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 235, in _update_stored_external_host
    external_host = relation.data[relation.app].get("external_host", "")
  File "/var/lib/juju/agents/unit-grafana-0/charm/venv/ops/model.py", line 914, in __getitem__
    raise KeyError(
KeyError: 'Cannot index relation data with "None". Are you trying to access remote app data during a relation-broken event? This is not allowed.'
unit-grafana-0: 00:56:32.564 ERROR juju.worker.uniter.operation hook "ingress-relation-broken" (via hook dispatching script: dispatch) failed: exit status 1

Additional context

No response
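The traceback shows relation data being indexed with `relation.app`, which is None during relation-broken. A guard along these lines would tolerate that; this is a hypothetical sketch of the needed fix in traefik_route.py, modeled with plain dicts rather than the real ops objects.

```python
def read_external_host(relation_app, app_databags):
    """Return the published external_host, tolerating relation.app being
    None during relation-broken (hypothetical guard; plain-dict stand-in
    for ops relation data)."""
    if relation_app is None:
        return ""
    return app_databags.get(relation_app, {}).get("external_host", "")
```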

traefik in error state while adding relation

Bug Description

While adding a relation between traefik and a charm using ingress_per_unit, traefik ends up in an error state.

To Reproduce

  1. Deploy 2 units of the loki charm
  2. Deploy the traefik charm (juju deploy ./traefik-k8s_ubuntu-20.04-amd64.charm traefik-ingress --trust --resource traefik-image=docker.io/jnsgruk/traefik:2.6.1)
  3. Add a relation between both charms.

Environment

  • I built traefik charm from main.
  • Using multipass workflow charm-dev

Relevant log output

unit-traefik-ingress-0: 19:49:05 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-traefik-ingress-0: 19:49:06 ERROR unit.traefik-ingress/0.juju-log ingress-per-unit:4: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 501, in <module>
    main(TraefikIngressCharm)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/main.py", line 426, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/main.py", line 142, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 276, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 736, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 783, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/serialized_data_interface/relation.py", line 254, in _handle_relation
    self.on.ready.emit(event.relation)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 276, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 736, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 783, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/lib/charms/traefik_k8s/v0/ingress_per_unit.py", line 162, in _emit_request_event
    self.on.request.emit(event.relation)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 276, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 736, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 783, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 241, in _handle_ingress_request
    self._process_ingress_relation(event.relation)
  File "./src/charm.py", line 273, in _process_ingress_relation
    config, unit_url = self._generate_per_unit_config(request, gateway_address, unit)
  File "./src/charm.py", line 300, in _generate_per_unit_config
    unit_name = request.get_unit_name(unit).replace("/", "-")
AttributeError: 'NoneType' object has no attribute 'replace'

Additional context

juju status:

Model  Controller  Cloud/Region        Version  SLA          Timestamp
cos    charm-dev   microk8s/localhost  2.9.25   unsupported  19:51:34-03:00

App              Version  Status   Scale  Charm              Channel  Rev  Address         Exposed  Message
agent                     active       1  grafana-agent-k8s             1  10.152.183.172  no
loki                      active       2  loki-k8s                      0  10.152.183.98   no
traefik-ingress           waiting      1  traefik-k8s                   0  192.168.122.10  no       installing agent

Unit                Workload  Agent  Address       Ports  Message
agent/0*            active    idle   10.1.157.75       
loki/0*             active    idle   10.1.157.126       
loki/1              active    idle   10.1.157.108       
traefik-ingress/0*  error     idle   10.1.157.94          hook failed: "ingress-per-unit-relation-changed" for loki:ingress-per-unit

Relation provider                 Requirer                Interface         Type     Message
loki:logging                      agent:logging-consumer  loki_push_api     regular
loki:loki                         loki:loki               loki              peer
traefik-ingress:ingress-per-unit  loki:ingress-per-unit   ingress_per_unit  regular 

TLS encryption to proxied workloads

It should be possible for requesters of the ingress-per-unit relation to ask for Traefik to establish TLS when routing requests to them. This, in turn, requires extending the relation data protocol to allow the specification of CA and Server certificates submitted by single units and/or Juju applications.

Note: This issue is not about providing TLS to Traefik itself from the outside of the Kubernetes cluster, which is covered by #6.

Charm goes into error state on startup

When deploying the cos-lite bundle, traefik enters error state because of an assertion error:

Example 1: Manual bundle deployment

unit-traefik-0: 11:53:36.750 ERROR unit.traefik/0.juju-log traefik-route:5: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 693, in <module>
    main(TraefikIngressCharm)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/main.py", line 431, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/main.py", line 142, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 316, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 784, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 857, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-traefik-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 153, in _on_relation_changed
    self.on.ready.emit(event.relation)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 316, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 784, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 857, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 361, in _handle_traefik_route_ready
    self._process_ingress_relation(event.relation)
  File "./src/charm.py", line 373, in _process_ingress_relation
    assert gateway_address, "No gateway address available"

def _process_ingress_relation(self, relation: Relation):
    # There's a chance that we're processing a relation event which was deferred until after
    # the relation was broken. Select the right per_app/per_unit provider and check it is ready
    # before continuing. However, the provider will NOT be ready if there are no units on the
    # other side, which is the case for the RelationDeparted for the last unit (i.e., the
    # proxied application scales to zero).
    gateway_address = self._external_host
    assert gateway_address, "No gateway address available"

Example 2: CI

https://github.com/canonical/cos-lite-bundle/actions/runs/3189444890/jobs/5203256288

Typo in `ingress_per_unit` lib documentation

In ingress_per_unit lib docstring we have:

    self.framework.observe(
        self.ingress_per_unit.on.ingress_change, self._handle_ingress_per_unit
    )

This snippet produces this error:

unit-loki-k8s-0: 17:45:12 ERROR unit.loki-k8s/0.juju-log Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 235, in <module>
    main(LokiOperatorCharm)
  File "/var/lib/juju/agents/unit-loki-k8s-0/charm/venv/ops/main.py", line 414, in main
    charm = charm_class(framework)
  File "./src/charm.py", line 68, in __init__
    self.ingress_per_unit.on.ingress_change, self._handle_ingress_per_unit
AttributeError: 'IngressPerUnitRequirerEvents' object has no attribute 'ingress_change'

The snippet should be:

    self.framework.observe(
        self.ingress_per_unit.on.ingress_changed, self._handle_ingress_per_unit
    )

hook failed: "ingress-relation-broken" for trfk:ingress-per-unit

Bug Description

With prom related to trfk, running juju remove-application trfk produced AttributeError: 'NoneType' object has no attribute 'name'.

Both prom and trfk are a couple of revision back, so this may have been solved already.

To Reproduce

  • Relate prom to trfk
  • Offer remote write
  • Remove traefik

Environment

Model    Controller           Cloud/Region        Version  SLA          Timestamp
welcome  charm-dev-batteries  microk8s/localhost  3.0.2    unsupported  16:46:04-04:00

App   Version  Status      Scale  Charm           Channel  Rev  Address        Exposed  Message
prom  2.42.0   waiting         1  prometheus-k8s  edge     110  10.152.183.46  no       waiting for units to settle down
trfk  2.9.6    terminated    0/1  traefik-k8s     edge     117  10.43.8.34     no       unit stopped by the cloud

Unit     Workload  Agent  Address     Ports  Message
prom/0*  error     idle   10.1.55.6          hook failed: "ingress-relation-broken" for trfk:ingress-per-unit
trfk/0   unknown   lost   10.1.55.63         agent lost, see 'juju show-status-log trfk/0'

Offer  Application  Charm           Rev  Connected  Endpoint              Interface                Role
prom   prom         prometheus-k8s  110  0/0        receive-remote-write  prometheus_remote_write  provider

Relevant log output

unit-prom-0: 16:42:12.516 ERROR unit.prom/0.juju-log ingress:18: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 763, in <module>
    main(PrometheusCharm)
  File "/var/lib/juju/agents/unit-prom-0/charm/venv/ops/main.py", line 423, in main
    charm = charm_class(framework)
  File "./src/charm.py", line 110, in __init__
    external_url = urlparse(self.external_url)
  File "./src/charm.py", line 256, in external_url
    if ingress_url := self.ingress.url:
  File "/var/lib/juju/agents/unit-prom-0/charm/lib/charms/traefik_k8s/v1/ingress_per_unit.py", line 843, in url
    urls = self.urls
  File "/var/lib/juju/agents/unit-prom-0/charm/lib/charms/traefik_k8s/v1/ingress_per_unit.py", line 834, in urls
    current_urls = self._urls_from_relation_data
  File "/var/lib/juju/agents/unit-prom-0/charm/lib/charms/traefik_k8s/v1/ingress_per_unit.py", line 804, in _urls_from_relation_data
    if not relation.app and not relation.app.name:  # type: ignore
AttributeError: 'NoneType' object has no attribute 'name'

Additional context

No response

Permission denied error when providing ingress requirements in IPU

Bug Description

ModelErrors are fired when prometheus tries to access self.ingress.relation.data, because self.relation simply does self.relations[0] and, apparently, relations contains not one but two relations, the first of which is a ghost (possible juju bug?).

The second relation is the one we want.

To Reproduce

juju deploy prometheus-k8s --channel beta
juju deploy traefik-k8s --channel edge
juju relate prometheus-k8s:ingress traefik

juju remove-application traefik-k8s
juju deploy traefik-k8s --channel edge --application-name='trfk'
juju relate prometheus-k8s:ingress trfk

Environment

edge

Relevant log output

  File "/var/lib/juju/agents/unit-prometheus-k8s-0/charm/lib/charms/traefik_k8s/v1/ingress_per_unit.py", line 703, in _handle_relation
    self._publish_auto_data(typing.cast(Relation, event.relation))
  File "/var/lib/juju/agents/unit-prometheus-k8s-0/charm/lib/charms/traefik_k8s/v1/ingress_per_unit.py", line 711, in _publish_auto_data
    self.provide_ingress_requirements(host=self._host, port=self._port)
  File "/var/lib/juju/agents/unit-prometheus-k8s-0/charm/lib/charms/traefik_k8s/v1/ingress_per_unit.py", line 751, in provide_ingress_requirements
    self.relation.data[self.unit].update(data)
  File "/usr/lib/python3.8/_collections_abc.py", line 832, in update
    self[key] = other[key]
  File "/var/lib/juju/agents/unit-prometheus-k8s-0/charm/venv/ops/model.py", line 938, in __setitem__
    self._backend.relation_set(self.relation.id, key, value, self._is_app)
  File "/var/lib/juju/agents/unit-prometheus-k8s-0/charm/venv/ops/model.py", line 2137, in relation_set
    return self._run(*args)
  File "/var/lib/juju/agents/unit-prometheus-k8s-0/charm/venv/ops/model.py", line 2036, in _run
    raise ModelError(e.stderr)
ops.model.ModelError: b'ERROR cannot read relation settings: permission denied\n'


Additional context

unclear whether it's a juju bug or not

deploy each itest in its own model

    I think deploying all itests in the same model is not a justification but rather something to avoid.

The introduction of deploy_traefik_if_not_deployed and deploy_charm_if_not_deployed seems to me like an antipattern that should be avoided.

In the past we were trying out reset but it wasn't very stable. If we add a preserve keyword to reset perhaps we could try again.

Originally posted by @sed-i in #103 (comment)

Add integration tests for `ingress` and `ingress-per-unit`

We should have some basic tests in place that deploy traefik and relate it over both the ingress and the ingress-per-unit interfaces.

More advanced tests should follow that switch the subdomain/path mode and test the available routes.

`ingress_changed` event is emitted 3 times and the first one with `None`

Bug Description

When relating Traefik with another charm using the ingress_per_unit interface, you can see that the ingress_changed event is emitted 3 times, and the first time the ingress URL produced is None

To Reproduce

  1. Integrate Traefik in your charm following this guide
  2. Pack traefik and your consumer charm.
  3. juju deploy ./traefik-k8s_ubuntu-20.04-amd64.charm traefik-ingress --trust --resource traefik-image=docker.io/jnsgruk/traefik:2.6.1
  4. juju deploy ./*.charm loki --resource loki-image=grafana/loki:2.4.1
  5. juju add-relation traefik-ingress loki
  6. Check logs with: juju debug-log --replay | grep "This unit's ingress URL"

Environment

  • multipass workflow: charm-dev
    • juju: 2.9.25
    • microk8s: v1.23.4

Relevant log output

$ juju debug-log --replay | grep "This unit's ingress URL"
...
unit-loki-0: 18:07:19 INFO unit.loki/0.juju-log ingress-per-unit:29: This unit's ingress URL: None
unit-loki-0: 18:07:23 INFO unit.loki/0.juju-log ingress-per-unit:29: This unit's ingress URL: http://192.168.122.10:80/inna-loki-0
unit-loki-0: 18:07:26 INFO unit.loki/0.juju-log ingress-per-unit:29: This unit's ingress URL: http://192.168.122.10:80/inna-loki-0

Additional context

In the `IngressPerUnitRequirer` class we have:

    def _emit_ingress_change_event(self, event):
        # TODO Avoid spurious events, emit only when URL changes
        self.on.ingress_changed.emit(self.relation)

I think that the event should be emitted only when the URL changes and it is not None
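A minimal sketch of that de-duplication, with `emit` standing in for `self.on.ingress_changed.emit` (class and method names are illustrative, not the actual library API):

```python
# De-duplicate ingress_changed emissions: remember the last URL published and
# emit only on a real, non-None change.
from typing import Callable, Optional


class ChangeNotifier:
    def __init__(self, emit: Callable[[str], None]):
        self._emit = emit
        self._last_url: Optional[str] = None

    def url_updated(self, url: Optional[str]) -> None:
        if url is None or url == self._last_url:
            return  # skip the initial None and repeated identical URLs
        self._last_url = url
        self._emit(url)
```

With the log above, this would collapse the three emissions into a single one carrying the real URL.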

Error in docstring. Follow standards

In an `ingress_per_unit` docstring we have

    def _handle_ingress_per_unit(self, event):
        log.info("This unit's ingress URL: %s", self.ingress_per_unit.url)

Since the standard is to name the module-level logger `logger`, the docstring should be:

    def _handle_ingress_per_unit(self, event):
        logger.info("This unit's ingress URL: %s", self.ingress_per_unit.url)

Allow stripping of path prefix when forwarding to backend

Bug Description

When configured to use path-based routing, the traefik operator sends requests to the backend on the same path that came into the frontend. This requires the backend service to be able to reconfigure itself based on the path prefix chosen by the traefik operator. This is generally possible; however, not all backends can reconfigure their root path without putting a proxy in the way to rewrite the URL. Instead, the traefik operator should allow the requires side of the relation to indicate whether the path prefix should be stripped or left intact.

This is fairly simple to do, and requires that a middleware option be injected/included into the yaml configuration for the service (relevant documentation). In fact, this should perhaps be the default, since backend services may not be aware that they need to handle an HTTP request forwarded on the original path determined by the traefik operator.

To Reproduce

I think it's fairly self-explanatory, but ...

  1. juju deploy --channel beta traefik
  2. juju deploy
  3. juju add-relation traefik
  4. curl URL provided on the relation
  5. Observe the traefik logs do not include the X-Forwarded-Prefix header, which is set when the path prefix is stripped:
2022-04-18T23:04:38.084Z [traefik] time="2022-04-18T23:04:38Z" level=debug msg="vulcand/oxy/roundrobin/rr: Forwarding this request to URL" Request="{\"Method\":\"GET\",\"URL\":{\"Scheme\":\"\",\"Opaque\":\"\",\"User\":null,\"Host\":\"\",\"Path\":\"/openstack-nova/v2.1/os-hypervisors/detail\",\"RawPath\":\"\",\"ForceQuery\":false,\"RawQuery\":\"\",\"Fragment\":\"\",\"RawFragment\":\"\"},\"Proto\":\"HTTP/1.1\",\"ProtoMajor\":1,\"ProtoMinor\":1,\"Header\":{\"Accept\":[\"application/json\"],\"Accept-Encoding\":[\"gzip, deflate\"],\"Connection\":[\"keep-alive\"],\"User-Agent\":[\"python-novaclient\"],\"X-Auth-Token\":[\"gAAAAABiXe6FWsObpnFYyAUq26ZavlDHi5bvGG1I0PFWmL6OkKdHe1HmSsmqIIaYfvQnEOsrtplRVtpOLeQPT8uEobez5VLp5uWu8lxPlncTYTFYIelnuQFUo2H5mFrWgCfqGoIbwByT9QtLCOgcfJnn35QhUT2hbjVU-BRaJgc7flM1XuqzJ9Q\"],\"X-Forwarded-Host\":[\"192.168.8.2\"],\"X-Forwarded-Port\":[\"80\"],\"X-Forwarded-Proto\":[\"http\"],\"X-Forwarded-Server\":[\"traefik-0\"],\"X-Openstack-Nova-Api-Version\":[\"2.1\"],\"X-Real-Ip\":[\"10.1.38.1\"]},\"ContentLength\":0,\"TransferEncoding\":null,\"Host\":\"192.168.8.2\",\"Form\":null,\"PostForm\":null,\"MultipartForm\":null,\"Trailer\":null,\"RemoteAddr\":\"10.1.38.1:3050\",\"RequestURI\":\"/openstack-nova/v2.1/os-hypervisors/detail\",\"TLS\":null}" ForwardURL="http://nova.openstack.svc.cluster.local:8774"
2022-04-18T23:04:40.984Z [traefik] time="2022-04-18T23:04:40Z" level=debug msg="vulcand/oxy/roundrobin/rr: completed ServeHttp on request" Request="{\"Method\":\"GET\",\"URL\":{\"Scheme\":\"\",\"Opaque\":\"\",\"User\":null,\"Host\":\"\",\"Path\":\"/openstack-nova/v2.1/os-hypervisors/detail\",\"RawPath\":\"\",\"ForceQuery\":false,\"RawQuery\":\"\",\"Fragment\":\"\",\"RawFragment\":\"\"},\"Proto\":\"HTTP/1.1\",\"ProtoMajor\":1,\"ProtoMinor\":1,\"Header\":{\"Accept\":[\"application/json\"],\"Accept-Encoding\":[\"gzip, deflate\"],\"Connection\":[\"keep-alive\"],\"User-Agent\":[\"python-novaclient\"],\"X-Auth-Token\":[\"gAAAAABiXe6FWsObpnFYyAUq26ZavlDHi5bvGG1I0PFWmL6OkKdHe1HmSsmqIIaYfvQnEOsrtplRVtpOLeQPT8uEobez5VLp5uWu8lxPlncTYTFYIelnuQFUo2H5mFrWgCfqGoIbwByT9QtLCOgcfJnn35QhUT2hbjVU-BRaJgc7flM1XuqzJ9Q\"],\"X-Forwarded-Host\":[\"192.168.8.2\"],\"X-Forwarded-Port\":[\"80\"],\"X-Forwarded-Proto\":[\"http\"],\"X-Forwarded-Server\":[\"traefik-0\"],\"X-Openstack-Nova-Api-Version\":[\"2.1\"],\"X-Real-Ip\":[\"10.1.38.1\"]},\"ContentLength\":0,\"TransferEncoding\":null,\"Host\":\"192.168.8.2\",\"Form\":null,\"PostForm\":null,\"MultipartForm\":null,\"Trailer\":null,\"RemoteAddr\":\"10.1.38.1:3050\",\"RequestURI\":\"/openstack-nova/v2.1/os-hypervisors/detail\",\"TLS\":null}"

Reconfiguring the yaml to include the necessary stripPrefix middleware, results in the proper request and the backend knowing how to service the path without reconfiguring the backend application:

2022-04-18T23:09:01.542Z [traefik] time="2022-04-18T23:09:01Z" level=debug msg="vulcand/oxy/roundrobin/rr: Forwarding this request to URL" Request="{\"Method\":\"GET\",\"URL\":{\"Scheme\":\"\",\"Opaque\":\"\",\"User\":null,\"Host\":\"\",\"Path\":\"/v2.1/os-hypervisors/detail\",\"RawPath\":\"\",\"ForceQuery\":false,\"RawQuery\":\"\",\"Fragment\":\"\",\"RawFragment\":\"\"},\"Proto\":\"HTTP/1.1\",\"ProtoMajor\":1,\"ProtoMinor\":1,\"Header\":{\"Accept\":[\"application/json\"],\"Accept-Encoding\":[\"gzip, deflate\"],\"Connection\":[\"keep-alive\"],\"User-Agent\":[\"python-novaclient\"],\"X-Auth-Token\":[\"gAAAAABiXe-Nan9s3Mo6BygUvoiauekQFcnWbMBOHGU5aD5XInjpc6zlVCT2MeA6e0ucbTlWFUnXNlTHpo0OkHd3IfHrpwBtNr1-pwGotTi_5Sx0xW_DPPz1MdmC_rXZnOOYvNyVa3quCQar18pyOktTP42QJxP-cM6unim9omPvj3iE1MKIyDE\"],\"X-Forwarded-Host\":[\"192.168.8.2\"],\"X-Forwarded-Port\":[\"80\"],\"X-Forwarded-Prefix\":[\"/openstack-nova\"],\"X-Forwarded-Proto\":[\"http\"],\"X-Forwarded-Server\":[\"traefik-0\"],\"X-Openstack-Nova-Api-Version\":[\"2.1\"],\"X-Real-Ip\":[\"10.1.38.1\"]},\"ContentLength\":0,\"TransferEncoding\":null,\"Host\":\"192.168.8.2\",\"Form\":null,\"PostForm\":null,\"MultipartForm\":null,\"Trailer\":null,\"RemoteAddr\":\"10.1.38.1:7822\",\"RequestURI\":\"/v2.1/os-hypervisors/detail\",\"TLS\":null}" ForwardURL="http://nova.openstack.svc.cluster.local:8774"
2022-04-18T23:09:05.230Z [traefik] time="2022-04-18T23:09:05Z" level=debug msg="vulcand/oxy/roundrobin/rr: completed ServeHttp on request" Request="{\"Method\":\"GET\",\"URL\":{\"Scheme\":\"\",\"Opaque\":\"\",\"User\":null,\"Host\":\"\",\"Path\":\"/v2.1/os-hypervisors/detail\",\"RawPath\":\"\",\"ForceQuery\":false,\"RawQuery\":\"\",\"Fragment\":\"\",\"RawFragment\":\"\"},\"Proto\":\"HTTP/1.1\",\"ProtoMajor\":1,\"ProtoMinor\":1,\"Header\":{\"Accept\":[\"application/json\"],\"Accept-Encoding\":[\"gzip, deflate\"],\"Connection\":[\"keep-alive\"],\"User-Agent\":[\"python-novaclient\"],\"X-Auth-Token\":[\"gAAAAABiXe-Nan9s3Mo6BygUvoiauekQFcnWbMBOHGU5aD5XInjpc6zlVCT2MeA6e0ucbTlWFUnXNlTHpo0OkHd3IfHrpwBtNr1-pwGotTi_5Sx0xW_DPPz1MdmC_rXZnOOYvNyVa3quCQar18pyOktTP42QJxP-cM6unim9omPvj3iE1MKIyDE\"],\"X-Forwarded-Host\":[\"192.168.8.2\"],\"X-Forwarded-Port\":[\"80\"],\"X-Forwarded-Prefix\":[\"/openstack-nova\"],\"X-Forwarded-Proto\":[\"http\"],\"X-Forwarded-Server\":[\"traefik-0\"],\"X-Openstack-Nova-Api-Version\":[\"2.1\"],\"X-Real-Ip\":[\"10.1.38.1\"]},\"ContentLength\":0,\"TransferEncoding\":null,\"Host\":\"192.168.8.2\",\"Form\":null,\"PostForm\":null,\"MultipartForm\":null,\"Trailer\":null,\"RemoteAddr\":\"10.1.38.1:7822\",\"RequestURI\":\"/v2.1/os-hypervisors/detail\",\"TLS\":null}"

Environment

microk8s: v1.23.5
juju: 2.9.28
traefik: beta (rev 22)

Relevant log output

Relevant logs are included in the output from the reproducer. This is a design change request.

Additional context

I believe something along the lines of the following would suffice for the per-app ingress scenario:

Added to the _generate_per_app_config method after the config dict has been created:

        if request.strip_prefix and self._routing_mode == _RoutingMode.path:
            traefik_middleware_name = f"juju-{prefix}-stripprefix"
            config['http']['routers'][traefik_router_name]['middlewares'] = [traefik_middleware_name]
            config['http']['middlewares'] = {
                traefik_middleware_name: {
                    'stripPrefix': {
                        'prefixes': [f"/{prefix}"],
                    }
                }
            }
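
For completeness, here is a self-contained, runnable version of that idea; it also uses `setdefault` so an existing middlewares section would not be clobbered. The config skeleton mirrors Traefik's file-provider layout, and the router/middleware names follow the juju-&lt;prefix&gt;-* convention used above but are otherwise illustrative:

```python
# Inject a stripPrefix middleware into a Traefik dynamic-config dict.
def add_strip_prefix(config: dict, router_name: str, prefix: str) -> dict:
    middleware_name = f"juju-{prefix}-stripprefix"
    config["http"]["routers"][router_name]["middlewares"] = [middleware_name]
    # setdefault instead of plain assignment, so other middlewares survive
    config["http"].setdefault("middlewares", {})[middleware_name] = {
        "stripPrefix": {"prefixes": [f"/{prefix}"]}
    }
    return config


config = {
    "http": {
        "routers": {
            "juju-mymodel-myapp-router": {"rule": "PathPrefix(`/mymodel-myapp`)"}
        },
        "services": {},
    }
}
add_strip_prefix(config, "juju-mymodel-myapp-router", "mymodel-myapp")
```

Serialized to YAML, this produces the `middlewares` block Traefik needs to strip `/mymodel-myapp` before forwarding to the backend.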

Traefik in error state while deploying COS-Lite

Bug Description

Sometimes Traefik ends up in an error state while deploying COS-Lite.

╭─ubuntu@charm-dev-juju-3 ~ 
╰─$ juju status --color --relations           
Model  Controller           Cloud/Region        Version  SLA          Timestamp
cos    charm-dev-batteries  microk8s/localhost  3.0.2    unsupported  14:59:03-03:00

App           Version  Status   Scale  Charm             Channel  Rev  Address         Exposed  Message
alertmanager  0.23.0   active       1  alertmanager-k8s  stable    36  10.152.183.24   no       
catalogue              active       1  catalogue-k8s     stable     4  10.152.183.99   no       
grafana       9.2.1    active       1  grafana-k8s       stable    52  10.152.183.27   no       
loki          2.4.1    active       1  loki-k8s          stable    47  10.152.183.221  no       
prometheus    2.33.5   active       1  prometheus-k8s    stable    79  10.152.183.84   no       
traefik                waiting      1  traefik-k8s       stable    93  10.152.183.203  no       installing agent

Unit             Workload  Agent  Address      Ports  Message
alertmanager/0*  active    idle   10.1.36.120         
catalogue/0*     active    idle   10.1.36.109         
grafana/0*       active    idle   10.1.36.123         
loki/0*          active    idle   10.1.36.122         
prometheus/0*    active    idle   10.1.36.121         
traefik/0*       error     idle   10.1.36.119         hook failed: "traefik-route-relation-changed" for grafana:ingress

To Reproduce

  1. Deploy cos-lite: juju deploy cos-lite --trust
  2. Check juju status

Environment

  • juju: 3.0.2
  • microk8s: v1.25.4 (tracking: 1.25-strict/stable)
  • Multipass VM: 4 vCPU - 6 Gb RAM

Relevant log output

unit-traefik-0: 14:57:28.646 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook
unit-traefik-0: 14:57:30.712 ERROR unit.traefik/0.juju-log traefik-route:5: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 710, in <module>
    main(TraefikIngressCharm)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/main.py", line 431, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/main.py", line 142, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 316, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 784, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 857, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-traefik-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 166, in _on_relation_changed
    self._update_requirers_with_external_host()
  File "/var/lib/juju/agents/unit-traefik-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 175, in _update_requirers_with_external_host
    relation.data[self.charm.app]["external_host"] = self._stored.external_host
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/model.py", line 945, in __setitem__
    raise RelationDataError('relation data values must be strings')
ops.model.RelationDataError: relation data values must be strings
unit-traefik-0: 14:57:31.044 ERROR juju.worker.uniter.operation hook "traefik-route-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1
unit-traefik-0: 14:57:31.046 INFO juju.worker.uniter awaiting error resolution for "relation-changed" hook

Additional context

No response
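
Assuming `external_host` is simply still unset (None) when the hook runs, a guard along these lines would avoid the RelationDataError. Here `databag` stands in for `relation.data[self.charm.app]` and the function name is made up:

```python
# Relation databag values must be strings: skip publishing while the value is
# still unset, and coerce to str otherwise.
from typing import MutableMapping, Optional


def publish_external_host(
    databag: MutableMapping[str, str], external_host: Optional[str]
) -> None:
    if external_host is None:
        return  # nothing to publish yet; avoids RelationDataError
    databag["external_host"] = str(external_host)
```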

Clean up IPU requirer API

The IPU requirer's use case is to support only one relation at a time.
(What would it mean to have two ingress relations as a requirer?)

Therefore it has a `.relation -> Relation` property that assumes at most one relation exists.
However, the `is_ready`, `is_failed`, `is_...` methods all take a relation as an argument, as if there could be many. This results in complexity and ugliness.
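
A sketch of what the cleaned-up surface could look like, with relations modeled as plain dicts (class and method names here are illustrative, not the actual library API):

```python
# Argument-less status checks built on a single `relation` property, instead of
# is_ready(relation)/is_failed(relation) variants.
from typing import List, Optional


class IngressPerUnitRequirerSketch:
    def __init__(self, relations: List[dict]):
        self._relations = relations

    @property
    def relation(self) -> Optional[dict]:
        # A requirer only ever has one meaningful ingress relation.
        return self._relations[0] if self._relations else None

    def is_ready(self) -> bool:
        rel = self.relation
        return bool(rel and rel.get("url"))
```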

Replace jsonschema with pydantic

The traefik libraries would be excellent candidates for trying out replacing jsonschema with pydantic. According to lessons learned by the charmcraft team, pydantic plays a lot nicer with more complex data structures, is faster in the general use case and has better error logging.

In addition to the swap, it would be excellent if we generated the pydantic model from the json schemas of the interface spec.
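
A minimal sketch of what a pydantic model for such a databag might look like; the field names are assumptions, not the actual interface spec:

```python
# Validate ingress requirer data with pydantic instead of jsonschema.
# Validation failures raise a ValidationError with per-field messages.
from typing import Optional

from pydantic import BaseModel, ValidationError


class IngressRequirerUnitData(BaseModel):
    host: str
    port: int


def validate(data: dict) -> Optional[IngressRequirerUnitData]:
    try:
        return IngressRequirerUnitData(**data)
    except ValidationError:
        return None
```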

traefik hook error

Bug Description

unit-traefik-0: 22:43:11 ERROR unit.traefik/0.juju-log traefik-route:5: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 710, in <module>
    main(TraefikIngressCharm)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/main.py", line 431, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/main.py", line 142, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 316, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 784, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/framework.py", line 857, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-traefik-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 166, in _on_relation_changed
    self._update_requirers_with_external_host()
  File "/var/lib/juju/agents/unit-traefik-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 175, in _update_requirers_with_external_host
    relation.data[self.charm.app]["external_host"] = self._stored.external_host
  File "/var/lib/juju/agents/unit-traefik-0/charm/venv/ops/model.py", line 945, in __setitem__
    raise RelationDataError('relation data values must be strings')
ops.model.RelationDataError: relation data values must be strings

To Reproduce

#!/bin/bash

# Install microk8s, juju and lxd snaps
sudo snap install microk8s --classic
sudo snap install juju --classic
sudo snap install lxd --channel latest/stable

lxd init --auto
lxc network set lxdbr0 ipv6.address none

sudo zpool create lxd-pool /dev/nvme0n1p3
lxc storage create lxd-pool zfs source=lxd-pool

# Uncomment and configure your own zfs pool name
lxc profile edit default <<EOY
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: lxd-pool
    type: disk
name: default
EOY

#
# Bootstrap LXD
juju bootstrap lxd

# Enable microk8s hostpath storage and dns
microk8s.enable hostpath-storage dns

# Get our primary ip and add the k8s to our lxd controller
my_ip=`hostname --ip-address | awk '{print $2}'`
#ip route get 8.8.8.8 | head -1 | cut -d' ' -f7
microk8s.kubectl config view --raw \
  | sed "s/127.0.0.1/${my_ip}/g" \
  | juju add-k8s microk8s-localhost --cluster-name=microk8s-cluster --controller=localhost-localhost

# Add a k8s model
juju add-model cos-lite-testing microk8s-localhost

# Deploy COS lite
juju deploy cos-lite --trust

Environment

hitting this on latest/stable and latest/edge

Relevant log output

https://paste.ubuntu.com/p/32cvWD8rhW/

Additional context

No response

Support TCP routing

For database endpoints that need to be exposed outside of a Kubernetes cluster (e.g., through most cross-model relations), we should update the relation interfaces and the libraries to provide means of requesting TCP routing (through Traefik's TCP routers) rather than just HTTP. @wolsen care to provide more details for your use-cases?
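
For reference, a sketch of the dynamic-config shape Traefik would need for this: a `tcp` section instead of `http`, where `HostSNI(`*`)` is the catch-all rule for non-TLS TCP routers and services take raw address:port entries rather than URLs. The router/service/entrypoint names below are made up:

```python
# Build the `tcp` section of a Traefik file-provider config for a raw TCP route.
def generate_tcp_config(prefix: str, host: str, port: int) -> dict:
    router = f"juju-{prefix}-tcp-router"
    service = f"juju-{prefix}-tcp-service"
    return {
        "tcp": {
            "routers": {
                router: {
                    "entryPoints": [f"{prefix}-tcp"],
                    # TCP routers require a rule; HostSNI(`*`) matches any
                    # connection when TLS/SNI is not in play.
                    "rule": "HostSNI(`*`)",
                    "service": service,
                }
            },
            "services": {
                service: {
                    # TCP load balancers take address:port, not URLs
                    "loadBalancer": {"servers": [{"address": f"{host}:{port}"}]}
                }
            },
        }
    }
```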

hook failed: "traefik-route-relation-changed" for grafana:ingress

Bug Description

When I try to connect traefik-k8s to grafana-k8s, the traefik-route-relation-changed hook fails.

To Reproduce

juju deploy grafana-k8s
juju deploy traefik-k8s
juju relate grafana-k8s traefik-k8s

Environment

Juju 3.0 installed from source

$ juju version --all
+ juju version --all
version: 3.0.3-ubuntu-amd64
git-commit: 10fda27abdb2682beef8e47f87de77251cb16864
git-tree-state: clean
compiler: gc

MicroK8s installed from snap

$ microk8s version
+ microk8s version
MicroK8s v1.25.4 revision 4221

grafana-k8s rev 52 (latest/stable)
traefik-k8s rev 93 (latest/stable)

Relevant log output

unit-traefik-k8s-0: 10:30:33 ERROR unit.traefik-k8s/0.juju-log traefik-route:23: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 710, in <module>
    main(TraefikIngressCharm)
  File "/var/lib/juju/agents/unit-traefik-k8s-0/charm/venv/ops/main.py", line 431, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-traefik-k8s-0/charm/venv/ops/main.py", line 142, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-traefik-k8s-0/charm/venv/ops/framework.py", line 316, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-k8s-0/charm/venv/ops/framework.py", line 784, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-k8s-0/charm/venv/ops/framework.py", line 857, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-traefik-k8s-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 166, in _on_relation_changed
    self._update_requirers_with_external_host()
  File "/var/lib/juju/agents/unit-traefik-k8s-0/charm/lib/charms/traefik_route_k8s/v0/traefik_route.py", line 175, in _update_requirers_with_external_host
    relation.data[self.charm.app]["external_host"] = self._stored.external_host
  File "/var/lib/juju/agents/unit-traefik-k8s-0/charm/venv/ops/model.py", line 945, in __setitem__
    raise RelationDataError('relation data values must be strings')
ops.model.RelationDataError: relation data values must be strings
unit-traefik-k8s-0: 10:30:34 ERROR juju.worker.uniter.operation hook "traefik-route-relation-changed" (via hook dispatching script: dispatch) failed: exit status 1

Additional context

No response

Traefik route config file disappears from (doesn't appear in the first place on) traefik filesystem

Bug Description

I deployed the load test with the "grafana workaround"

sudo -u ubuntu juju remove-relation grafana:ingress traefik
sleep 30
sudo -u ubuntu juju relate grafana:ingress traefik

after which grafana was reachable via ingress address.

After about 6 hours of operation, something went wrong (see logs section below); it seems all charms restarted, and grafana became unreachable.

Traefik still had grafana details in relation data,

$ juju show-unit traefik/0
  - relation-id: 27
    endpoint: traefik-route
    related-endpoint: ingress
    application-data:
      config: |
        http:
          routers:
            juju-cos-lite-load-test-grafana-router:
              entryPoints:
              - web
              rule: PathPrefix(`/cos-lite-load-test-grafana`)
              service: juju-cos-lite-load-test-grafana-service
          services:
            juju-cos-lite-load-test-grafana-service:
              loadBalancer:
                servers:
                - url: http://grafana-0.grafana-endpoints.cos-lite-load-test.svc.cluster.local:3000/

But the config file is not on the filesystem (and curling the grafana endpoint gives 404):

ubuntu@pd-ssd-4cpu-8gb:~$ juju ssh --container traefik traefik/0 ls /opt/traefik/juju
juju_ingress_ingress-per-unit_4_prometheus.yaml
juju_ingress_ingress-per-unit_5_loki.yaml
juju_ingress_ingress_20_catalogue.yaml
juju_ingress_ingress_7_alertmanager.yaml

After re-relating grafana:ingress traefik the file re-appears:

ubuntu@pd-ssd-4cpu-8gb:~$ juju ssh --container traefik traefik/0 ls /opt/traefik/juju
juju_ingress_ingress-per-unit_4_prometheus.yaml
juju_ingress_ingress-per-unit_5_loki.yaml
juju_ingress_ingress_20_catalogue.yaml
juju_ingress_ingress_7_alertmanager.yaml
juju_ingress_traefik-route_28_grafana.yaml

To Reproduce

Deploy the bundle (same thing happens on startup).

Environment

Bundle from edge.

Relevant log output

unit-grafana-0: 2022-12-08 01:02:25 ERROR juju.worker.dependency "log-sender" manifold worker returned unexpected error: sending log message: write tcp 10.1.79.222:43324->10.152.183.132:17070: i/o timeout: write tcp 10.1.79.222:43324->10.152.183.132:17070: i/o timeout
unit-grafana-0: 2022-12-08 01:02:48 ERROR juju.worker.dependency "api-caller" manifold worker returned unexpected error: api connection broken unexpectedly

unit-traefik-0: 2022-12-08 01:05:02 ERROR juju.worker.dependency "api-caller" manifold worker returned unexpected error: [40652e] "unit-traefik-0" cannot open api: try was stopped

unit-grafana-0: 2022-12-08 01:05:14 ERROR juju.worker.dependency "api-caller" manifold worker returned unexpected error: [40652e] "unit-grafana-0" cannot open api: cannot resolve "controller-service.controller-uk8s.svc.cluster.local": lookup controller-service.controller-uk8s.svc.cluster.local: i/o timeout

unit-grafana-0: 2022-12-08 01:05:32 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""

unit-traefik-0: 2022-12-08 01:05:35 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""

unit-traefik-0: 2022-12-08 01:05:46 WARNING juju.worker.uniter.context could not retrieve the cloud API version: unable to determine legacy status for namespace cos-lite-load-test: Get "https://10.152.183.1:443/api/v1/namespaces/cos-lite-load-test": net/http: TLS handshake timeout
unit-grafana-0: 2022-12-08 01:05:46 WARNING juju.worker.uniter.context could not retrieve the cloud API version: unable to determine legacy status for namespace cos-lite-load-test: Get "https://10.152.183.1:443/api/v1/namespaces/cos-lite-load-test": net/http: TLS handshake timeout

unit-grafana-0: 2022-12-08 01:05:59 INFO unit.grafana/0.juju-log Running legacy hooks/start.
unit-traefik-0: 2022-12-08 01:05:59 INFO unit.traefik/0.juju-log Running legacy hooks/start.

Additional context

No response

On relation_broken, `IngressPerUnitRequirer` still returns a URL that is no longer available

Bug Description

When removing an ingress_per_unit relation, the IngressPerUnitRequirer.url property returns the last ingress URL.

To Reproduce

  1. juju deploy traefik-k8s --channel=edge --trust
  2. juju deploy prometheus-k8s --channel=edge --trust
  3. juju add-relation traefik-k8s prometheus-k8s:ingress
  4. Ensure that traefik is in Active status (e.g., by setting up microk8s enable metallb)
  5. juju remove-relation traefik-k8s prometheus-k8s:ingress
  6. microk8s.kubectl exec prometheus-k8s-0 -n cos -c prometheus -- ps aux

Environment

My nightmares

Relevant log output

Adding `print(f"Update grafana source: {self._external_url}")` in the constructor of Prometheus:

unit-prometheus-k8s-0: 13:43:38 DEBUG unit.prometheus-k8s/0.ingress-relation-broken External URL: http://192.168.2.10:80/cos-prometheus-k8s-0
unit-prometheus-k8s-0: 13:43:38 INFO unit.prometheus-k8s/0.juju-log ingress:7: Prometheus configuration reloaded
unit-prometheus-k8s-0: 13:43:38 DEBUG unit.prometheus-k8s/0.juju-log ingress:7: Setting Grafana data sources: {'model': 'cos', 'model_uuid': '3f8cf4e2-dca6-4bef-8e4f-4fa67728d12b', 'application': 'prometheus-k8s', 'type': 'prometheus'}
unit-prometheus-k8s-0: 13:43:38 DEBUG unit.prometheus-k8s/0.ingress-relation-broken Update grafana source: http://192.168.2.10:80/cos-prometheus-k8s-0
unit-prometheus-k8s-0: 13:43:39 INFO juju.worker.uniter.operation ran "ingress-relation-broken" hook (via hook dispatching script: dispatch)

Additional context

My guess is that the relation data bags are still accessible during the relation_broken event, and that leads to spurious results.

ingress_per_unit lacks a proper relation_departed hook

In IPU, when a unit leaves, it would be useful for the charm to be notified in a more 'native' way.
Proposal: to add a

IngressPerUnitProvider.on.departed

hook, which can be used by Traefik to wipe ingress for the departing unit.
At the moment one is forced to:

self.framework.observe(self.on.ingress_per_unit_relation_departed, self._handle_ingress_request)

Hook failed: "storage-attached"

Congratulations for Charmed ingress on k8s.

I think now maybe I can get access to applications without Nginx Ingress.

I'm on MicroK8s/AWS and for

juju deploy traefik-k8s --to 0

the return is:

hook failed: "storage-attached"

The log shows:

File "/var/lib/juju/agents/unit-traefik-k8s-1/charm/lib/charms/observability_libs/v0/kubernetes_service_patch.py", line 279, in _namespace
    with open("/var/run/secrets/kubernetes.io/serviceaccount/namespace", "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/var/run/secrets/kubernetes.io/serviceaccount/namespace'
unit-traefik-k8s-1: 13:16:49 ERROR juju.worker.uniter.operation hook "configurations-storage-attached" (via hook dispatching script: dispatch) failed: exit status 1
unit-traefik-k8s-1: 13:16:49 INFO juju.worker.uniter awaiting error resolution for "storage-attached" hook
I also tried deploying zinc-k8s and it returned the same error.

I am using cStor openebs for PVC in addition to the Microk8s default volumes configuration.

What can I do to get out of this impasse, @PietroPasotti?

Thanks if you can answer

Flaky integration tests

Bug Description

Occasionally an itest fails on gh (but not locally)

To Reproduce

Run CI on gh

Environment

CI

Relevant log output

INFO     juju.model:model.py:2715 Waiting for model:
  tcp-tester/0 [allocating] waiting: installing agent
=========================== short test summary info ============================
ERROR tests/integration/test_tcp_ipu_compatibility.py::test_tcp_ipu_compatibility - asyncio.exceptions.TimeoutError: Timed out waiting for model:

Additional context

No response

Consider abstracting schema validation logic from `ingress_per_unit`

Right now the schema validation logic all lives inside the ingress-per-unit lib. As we move toward solving #37, we will likely need to solve very similar (if not the same) problems. As much of that serialisation/validation logic as possible should be taken out into a lib of its own and tested separately, so that it can be reused here and elsewhere.

(It may not end up living in this charm by default...)

Unhandled exception on removing related app

Bug Description

The traefik k8s operator throws an unhandled exception when a related app is removed.

To Reproduce

  1. Build and deploy traefik from current git main
  2. Set and external hostname
  3. Add a relation with an IngressPerAppRequirer
  4. Remove the relation after it is established.

Environment

Built and deployed from github on microk8s

Relevant log output

unit-traefik-ingress-0: 10:20:19 ERROR unit.traefik-ingress/0.juju-log ingress:1: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 603, in <module>
    main(TraefikIngressCharm)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/main.py", line 426, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/main.py", line 142, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 276, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 736, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 783, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/lib/charms/traefik_k8s/v0/ingress.py", line 250, in _handle_relation_broken
    self.on.data_removed.emit(event.relation)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 276, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 736, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-traefik-ingress-0/charm/venv/ops/framework.py", line 783, in _reemit
    custom_handler(event)
  File "./src/charm.py", line 273, in _handle_ingress_data_removed
    self._wipe_ingress_for_relation(event.relation)
  File "./src/charm.py", line 498, in _wipe_ingress_for_relation
    if request := self.ingress_per_app.get_request(relation):
AttributeError: 'IngressPerAppProvider' object has no attribute 'get_request'
unit-traefik-ingress-0: 10:20:19 ERROR juju.worker.uniter.operation hook "ingress-relation-broken" (via hook dispatching script: dispatch) failed: exit status 1

Additional context

No response

Improve docs about `external_hostname`

Manual testing works flawlessly for the happy path; however, setting external_hostname to "http://external-address.local" makes the scheme appear twice in the resulting URL. The same happens if you specify a port, or both.

$ juju config traefik external_hostname=external-address.local:9090
$ juju show-unit prometheus/0 

...
 url: http://external-address.local:9090:80/traefik-test-prometheus-k8s-0
...
$ juju config traefik external_hostname=http://external-address.local
$ juju show-unit prometheus/0 

...
 url: http://http://external-address.local:80/traefik-test-prometheus-k8s-0
...
$ juju config traefik external_hostname=http://external-address.local:9090
$ juju show-unit prometheus/0 

...
 url: http://http://external-address.local:9090:80/traefik-test-prometheus-k8s-0
...
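Since the option only accepts a bare hostname, the charm could reject anything carrying a scheme or port up front. A minimal sketch of such a check, using only the standard library (validate_external_hostname is a hypothetical helper, not part of the actual charm code):

```python
from urllib.parse import urlparse

def validate_external_hostname(value: str) -> str:
    """Reject values that are not a bare hostname (hypothetical helper)."""
    # A bare hostname has no scheme, port, or path: parsing it as a netloc
    # must yield the original value back as the hostname.
    parsed = urlparse(f"//{value}")
    if parsed.scheme or parsed.hostname != value.lower():
        raise ValueError(f"external_hostname must be a bare hostname, got {value!r}")
    return value
```

With this guard, the three broken configurations from the transcript above would be rejected at config-changed time instead of producing doubled schemes and ports in relation data.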

Originally posted by @simskij in #63 (review)

When downscaling a related consumer app, `IngressPerUnitRequirer` does not sync endpoints.

Bug Description

Let's say we deploy one traefik-k8s unit and three loki-k8s units and relate both apps. The traefik charm will generate 3 endpoints and send them over relation data.

In relation data we see those 3 endpoints:

$ juju show-unit loki/0
loki/0:
...
  - endpoint: ingress-per-unit
    related-endpoint: ingress-per-unit
    application-data:
      _supported_versions: '- v1'
      data: |-
        ingress:
          loki/0:
            url: http://192.168.122.10:80/cos-loki-0
          loki/1:
            url: http://192.168.122.10:80/cos-loki-1
          loki/2:
            url: http://192.168.122.10:80/cos-loki-2
    related-units:
      traefik-k8s/0:
        in-scope: true
        data:
          egress-subnets: 192.168.122.10/32
          ingress-address: 192.168.122.10
          private-address: 192.168.122.10
  provider-id: loki-0
  address: 10.1.157.69

But if we downscale loki to 1 unit (juju scale-application loki 1), we will still have 3 endpoints:

$ juju show-unit loki/0
loki/0:
...
  - endpoint: ingress-per-unit
    related-endpoint: ingress-per-unit
    application-data:
      _supported_versions: '- v1'
      data: |-
        ingress:
          loki/0:
            url: http://192.168.122.10:80/cos-loki-0
          loki/1:
            url: http://192.168.122.10:80/cos-loki-1
          loki/2:
            url: http://192.168.122.10:80/cos-loki-2
    related-units:
      traefik-k8s/0:
        in-scope: true
        data:
          egress-subnets: 192.168.122.10/32
          ingress-address: 192.168.122.10
          private-address: 192.168.122.10
  provider-id: loki-0
  address: 10.1.157.69

And what happens if we upscale loki to 4 units? Will we still have 3 endpoints in relation data?
No: upscaling works as expected, and we get 4 endpoints:

$ juju show-unit loki/0
loki/0:
...
  - endpoint: ingress-per-unit
    related-endpoint: ingress-per-unit
    application-data:
      _supported_versions: '- v1'
      data: |-
        ingress:
          loki/0:
            url: http://192.168.122.10:80/cos-loki-0
          loki/1:
            url: http://192.168.122.10:80/cos-loki-1
          loki/2:
            url: http://192.168.122.10:80/cos-loki-2
          loki/3:
            url: http://192.168.122.10:80/cos-loki-3
    related-units:
      traefik-k8s/0:
        in-scope: true
        data:
          egress-subnets: 192.168.122.10/32
          ingress-address: 192.168.122.10
          private-address: 192.168.122.10
  provider-id: loki-0
  address: 10.1.157.69

To Reproduce

  1. Deploy traefik
  2. Deploy 3 units of a charm that uses IngressPerUnitRequirer
  3. Relate both charms
  4. Check you have 3 endpoints in relation data
  5. Downscale the consumer charm to, say, 2 units.
  6. Check you still have 3 endpoints in relation data.
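The stale entries remain because the provider never prunes departed units from its application databag. A minimal, ops-free sketch of the kind of cleanup the provider could run on a departed/broken event (prune_departed_units is a hypothetical helper, not the actual lib API):

```python
def prune_departed_units(ingress_data: dict, current_units: set) -> dict:
    """Hypothetical helper: drop URL entries for units that left the relation.

    ingress_data maps unit names ("loki/0") to {"url": ...} entries, as seen
    in the provider's application databag; current_units is the set of unit
    names still present on the relation.
    """
    return {unit: entry for unit, entry in ingress_data.items() if unit in current_units}

# Databag from the bug report, after downscaling loki to 1 unit:
data = {
    "loki/0": {"url": "http://192.168.122.10:80/cos-loki-0"},
    "loki/1": {"url": "http://192.168.122.10:80/cos-loki-1"},
    "loki/2": {"url": "http://192.168.122.10:80/cos-loki-2"},
}
data = prune_departed_units(data, {"loki/0"})  # only loki/0 remains
```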

Environment

EVERYWHERE

Relevant log output

--

Additional context

No response

(Re)consider conditional yaml dump

The charm is being (overly?) smart about relation data:

if config:
    yaml_config = yaml.dump(config) if not isinstance(config, str) else config

The charm lib isn't:

app_databag["config"] = yaml.safe_dump(config)

For consistency, consider:

-        app_databag['config'] = yaml.safe_dump(config)
+        app_databag['config'] = yaml.safe_dump(config) if not isinstance(config, str) else config
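The guard matters because re-dumping an already-serialized string wraps it in a YAML scalar, so the consumer loads a string instead of the original mapping. A quick illustration with PyYAML:

```python
import yaml

config = {"server": {"port": 80}}
once = yaml.safe_dump(config)   # "server:\n  port: 80\n"
twice = yaml.safe_dump(once)    # the string gets re-quoted as a YAML scalar

# Loading `twice` yields the *string* `once`, not the original mapping --
# exactly the mismatch the isinstance guard avoids.
roundtrip = yaml.safe_load(twice)
```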

Static checker fails for `ingress_per_unit`

Prometheus static-charm check fails for ingress_per_unit.

In the meantime, I added the following override to pyproject.toml:

[[tool.mypy.overrides]]
module = ["charms.traefik_k8s.*"]
follow_imports = "silent"

Might be a good idea to add static checks here.

lib/charms/traefik_k8s/v0/ingress_per_unit.py:229: error: Type variable "charms.traefik_k8s.v0.ingress_per_unit._IngressPerUnitBase._IngressPerUnitEventType" is unbound  [valid-type]
        on: _IngressPerUnitEventType
            ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:229: note: (Hint: Use "Generic[_IngressPerUnitEventType]" or "Protocol[_IngressPerUnitEventType]" base class to bind "_IngressPerUnitEventType" inside a class)
lib/charms/traefik_k8s/v0/ingress_per_unit.py:229: note: (Hint: Use "_IngressPerUnitEventType" in function signature to bind "_IngressPerUnitEventType" inside a function)
lib/charms/traefik_k8s/v0/ingress_per_unit.py:263: error: _IngressPerUnitEventType? has no attribute "ready"  [attr-defined]
                self.on.ready.emit(relation)
                ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:265: error: _IngressPerUnitEventType? has no attribute "available"  [attr-defined]
                self.on.available.emit(relation)
                ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:267: error: _IngressPerUnitEventType? has no attribute "failed"  [attr-defined]
                self.on.failed.emit(relation)
                ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:288: error: _IngressPerUnitEventType? has no attribute "broken"  [attr-defined]
            self.on.broken.emit(event.relation)
            ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:301: error: Item "None" of "Optional[Any]" has no attribute "name"  [union-attr]
            if not relation.app.name:
                   ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:504: error: Incompatible return value type (got "Dict[Any, Dict[Any, Any]]", expected "Dict[Unit, RequirerData]")  [return-value]
            return requirer_unit_data
                   ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:604: error: Incompatible types in assignment (expression has type "None", variable has type "Tuple[Optional[str], int]")  [assignment]
                self._auto_data = None
                                  ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:667: error: Item "None" of "Optional[Relation]" has no attribute "data"  [union-attr]
                raw = self.relation.data[self.unit].get("data")
                      ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:710: error: Item "None" of "Optional[Relation]" has no attribute "data"  [union-attr]
            self.relation.data[self.unit]["data"] = yaml.safe_dump(data)
            ^
lib/charms/traefik_k8s/v0/ingress_per_unit.py:320: error: Unused "type: ignore" comment
            if not relation.app.name:  # type: ignore
            ^
Found 11 errors in 1 file (checked 2 source files)

TLS termination by Traefik

Traefik should provide an option to expose a TLS-protected endpoint. Ideally, this would occur when Traefik is related to a charm providing certificates, like Vault.

external_hostname silently accepts a port and updates reldata with invalid address

Bug Description

When running juju config traf external_hostname=192.168.1.10:8080, the other end of the relation sees:

  - relation-id: 1
    endpoint: ingress
    related-endpoint: ingress-per-unit
    application-data:
      ingress: |-
        prom/0:
          url: http://192.168.1.10:8080:80/welcome-prom-0

To Reproduce

juju deploy --channel=edge prometheus-k8s prom
juju deploy --channel=edge traefik-k8s traf
juju relate prom:ingress traf
juju config traf external_hostname=192.168.1.10:8080

Environment

microk8s/localhost 2.9.33
traefik-k8s edge 84

Relevant log output

unit-prom-0: 14:49:14.331 ERROR unit.prom/0.juju-log ingress:1: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 610, in <module>
    main(PrometheusCharm)
  File "/var/lib/juju/agents/unit-prom-0/charm/venv/ops/main.py", line 419, in main
    charm = charm_class(framework)
  File "./src/charm.py", line 139, in __init__
    endpoint_port=external_url.port or self._port,
  File "/usr/lib/python3.8/urllib/parse.py", line 177, in port
    raise ValueError(message) from None
ValueError: Port could not be cast to integer value as '8080:80'
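The failure is reproducible with the standard library alone: urlparse splits the netloc at the first colon, so everything after it becomes the "port", which then fails the integer cast:

```python
from urllib.parse import urlparse

# The URL that ends up in relation data when external_hostname already
# carries a port and Traefik appends its own:
url = urlparse("http://192.168.1.10:8080:80/welcome-prom-0")

try:
    url.port
except ValueError as e:
    message = str(e)  # Port could not be cast to integer value as '8080:80'
```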

Additional context

No response

"ERROR permission denied" when removing relation between Traefik and consumer charm

Bug Description

When a relation between the Traefik operator and a consumer charm that uses IngressPerUnitRequirer (let's say Loki) is removed, an ops.model.ModelError exception is raised

To Reproduce

  1. Deploy this charm
  2. Deploy a consumer charm that uses IngressPerUnitRequirer (let's say Loki)
  3. Run juju add-relation traefik-ingress loki
  4. Run juju remove-relation traefik-ingress loki
  5. Check logs with juju debug-log

Environment

Both charms are packed from source code, running in charm-dev Multipass workflow.

Relevant log output

unit-loki-0: 16:18:56 ERROR unit.loki/0.juju-log logging:16: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-loki-0/charm/venv/ops/model.py", line 1595, in _run
    result = run(args, **kwargs)
  File "/usr/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '('/var/lib/juju/tools/unit-loki-0/network-get', 'ingress-per-unit', '--format=json')' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./src/charm.py", line 290, in <module>
    main(LokiOperatorCharm)
  File "/var/lib/juju/agents/unit-loki-0/charm/venv/ops/main.py", line 419, in main
    charm = charm_class(framework)
  File "./src/charm.py", line 59, in __init__
    self.ingress_per_unit = IngressPerUnitRequirer(
  File "/var/lib/juju/agents/unit-loki-0/charm/lib/charms/traefik_k8s/v0/ingress_per_unit.py", line 353, in __init__
    self.auto_data = self._complete_request(host or "", port)
  File "/var/lib/juju/agents/unit-loki-0/charm/lib/charms/traefik_k8s/v0/ingress_per_unit.py", line 376, in _complete_request
    host = str(binding.network.bind_address)
  File "/var/lib/juju/agents/unit-loki-0/charm/venv/ops/model.py", line 556, in network
    self._network = Network(self._backend.network_get(self.name, self._relation_id))
  File "/var/lib/juju/agents/unit-loki-0/charm/venv/ops/model.py", line 1851, in network_get
    return self._run(*cmd, return_output=True, use_json=True)
  File "/var/lib/juju/agents/unit-loki-0/charm/venv/ops/model.py", line 1597, in _run
    raise ModelError(e.stderr)
ops.model.ModelError: b'ERROR unit "loki/0" not found (not found)\n'

Additional context

The problem seems to be in the _complete_request method, when executing host = str(binding.network.bind_address):

    def _complete_request(self, host: Optional[str], port: int):
        if not host:
            binding = self.charm.model.get_binding(self.endpoint)
            host = str(binding.network.bind_address)

        return {
            self.charm.unit: {
                "model": self.model.name,
                "name": self.charm.unit.name,
                "host": host,
                "port": port,
            },
        }
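One way to make the requirer resilient here is to treat a failing network-get as recoverable rather than fatal. A minimal, ops-free sketch (resolve_host is hypothetical; ModelError is stubbed so the example runs standalone):

```python
# Stand-in for ops.model.ModelError, so this sketch runs without ops.
class ModelError(Exception):
    pass

def resolve_host(get_bind_address, fallback=""):
    """Hypothetical guard: tolerate a failing network-get instead of
    letting the exception crash the hook."""
    try:
        return str(get_bind_address())
    except ModelError:
        # The unit is departing; leave the host empty and let a later
        # event fill it in.
        return fallback

def departing_unit():
    raise ModelError('ERROR unit "loki/0" not found (not found)')

host = resolve_host(departing_unit)  # "" instead of an uncaught ModelError
```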

Charm should set blocked (or somehow report an anomaly) if an ingress relation 'fails'

Bug Description

https://github.com/jnsgruk/hello-kubecon integrates with nginx via an 'ingress' interface.
Relating hello-kubecon to traefik reports an all-green status.

[screenshot: juju status, all applications green]

Even though, obviously, the databags show that the integration is not working as it should

[screenshot: relation databags]

and the charm does not have ingress.

To Reproduce

  1. juju deploy hello-kubecon
  2. juju deploy traefik-k8s
  3. juju relate traefik-k8s:ingress hello-kubecon:ingress

Environment

microk8s, juju3

Relevant log output

--

Additional context

consider using a StatusPool (https://github.com/PietroPasotti/status-pool-example) to tell the user that 'traefik is all good BUT...'
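Beyond StatusPool, the provider could compute a status from which related apps actually supplied ingress data. A minimal, ops-free sketch (ingress_status is hypothetical, not the charm's real logic):

```python
def ingress_status(proxied_endpoints: dict, related_apps: list):
    """Hypothetical check: flag related apps that never produced ingress
    data instead of reporting an all-green status."""
    missing = sorted(app for app in related_apps if app not in proxied_endpoints)
    if missing:
        return ("blocked", "no ingress data from: " + ", ".join(missing))
    return ("active", "")

status = ingress_status({}, ["hello-kubecon"])
# ("blocked", "no ingress data from: hello-kubecon")
```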
