
data-platform-libs's Introduction

Data Platform Libraries for Operator Framework Charms

Charmhub Release Tests

Description

The data-platform-libs charm provides a set of charm libraries offering convenience methods for interacting with charmed databases, as well as for writing your own database-consuming application charms, through relations.

This charm is not meant to be deployed itself, and is used as a mechanism for hosting libraries only.

Usage

This charm is not intended to be deployed. It is a container for standalone charm libraries, which can be fetched with charmcraft fetch-lib, after which they may be imported and used like any other charm library. For example:

charmcraft fetch-lib charms.data_platform_libs.v0.data_interfaces

The following libraries are available in this repository:

  • data_interfaces - Library to manage the relation for the data-platform products.
  • data_models - Library to introduce pydantic models for handling configuration, action parameters and databags.
  • database_provides - [DEPRECATED] Library offering custom events and methods for the provider side of the relation (e.g. mysql)
  • database_requires - [DEPRECATED] Library offering custom events and methods for the requirer side of the relation (e.g. wordpress)

Note: data_interfaces is not compatible with ops<=1.5.4; it requires ops>=2.0.0.

The charms in the tests/integration folder aren't meant to be used for anything beyond testing; they serve as examples of how to use the charm libraries.

Contributing

Please see the Juju SDK docs for guidelines on enhancements to this charm following best practice guidelines, and CONTRIBUTING.md for developer guidance.

data-platform-libs's People

Contributors

a-velasco, batalex, beliaev-maksim, carlcsaposs-canonical, delgod, deusebio, dmitry-ratushnyy, dragomirp, juditnovak, marceloneppel, marcoppenheimer, mthaddon, paulomach, taurus-forever, welpaolo, wrfitch, yanksyoon, zmraul


data-platform-libs's Issues

Update data-platform-workflows used in this repository to avoid Node16 warnings

Steps to reproduce

The following test run has a bunch of warnings about the deprecation of Node 16, caused by the use of actions/checkout@v3, which originates from data-platform-workflows@v2

Expected behavior

No warnings

Actual behavior

A lot of warnings, because we are using a very old version of data-platform-workflows

Versions

n/a

Log output

n/a

Additional context

n/a

For upgrade rollback on k8s, the `upgrade-relation-changed` handler will defer indefinitely

On k8s, in a rollback, the leader unit will hold the recovery state, rendering the cluster_state with the recovery value.

This will defer the on_upgrade_changed handler indefinitely because of this test.

The defer makes sense for VM cases, where on charm refresh all units set their state to ready in the upgrade-charm handler.
On K8s, only the upgrading unit receives the upgrade-charm event, and if the leader unit is not the upgrading unit, the cluster_state will remain in the recovery state.

Deferring on_upgrade_changed indefinitely prevents the upgrade_stack from being popped, and a subsequent resume-upgrade will set the wrong partition value, stalling the rollback process.

[CodeQL] Change log messages that trigger CodeQL false-positives

CodeQL keeps thinking that the code below distributes sensitive information.

            except KeyError:
                logger.error(
                    "Non-existing field '%s' was attempted to be removed from the databag (relation ID: %s)",
                    str(field),

This is an issue for all charm developers who use data-platform-libs, as they ALL need to manually suppress the error in their pipelines.
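
One way to sidestep the false positive is to log without echoing the field name at all. This is a hypothetical reworded message, not the library's actual code:

```python
import logging

logger = logging.getLogger("data_interfaces")


def log_missing_field(relation_id: int) -> str:
    """Sketch: report a failed databag removal without including the
    field name, so CodeQL's clear-text-logging check has nothing to flag."""
    message = (
        "Tried to remove a non-existing field from the databag "
        f"(relation ID: {relation_id})"
    )
    logger.error(message)
    return message
```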

Running ops.testing or ops.scenario tests fails when a charm uses the DatabaseRequires class

When using the relations_aliases option, dynamic events are created. For some reason, unlike other events, these seem to persist across test invocations and cause event-type duplication errors. I'm raising this against this interface as it's the only place I see the issue, but it's possible the root cause of the bug is in ops.

Steps to reproduce

Run an ops.scenario (or ops.testing) pytest test which uses DatabaseRequires with the relations_aliases option. For example:

import pytest
from scenario import State, Context, Container, Relation
from ops.charm import CharmBase
from ops.model import ActiveStatus, UnknownStatus

from charms.data_platform_libs.v0.database_requires import (
    DatabaseRequires,
    DatabaseCreatedEvent,
)


class ApplicationCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.database = DatabaseRequires(
            self,
            relation_name="database",
            database_name="database",
            relations_aliases=["an_alias"],
        )
        self.framework.observe(
            self.database.on.database_created, self._on_database_created
        )

    def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
        self.unit.status = ActiveStatus("received database credentials")


@pytest.mark.parametrize("leader", (True, False))
def test_charm_no_deps(leader):
    metadata = {
        "name": "myapp",
        "version": "3",
        "subordinate": False,
        "requires": {"database": {"interface": "mysql_client", "limit": 1}},
    }
    state = State(
        leader=leader,
        config={},
        containers=[],
        relations=[
            Relation(
                endpoint="database",
                interface="mysql_client",
                remote_app_name="mysql",
                local_unit_data={},
                remote_app_data={},
            )
        ],
    )
    ctxt = Context(charm_type=ApplicationCharm, meta=metadata)
    out = ctxt.run("config-changed", state)
    assert out.unit_status == UnknownStatus()

Expected behavior

2 passed in Xs

Actual behavior

...
E               RuntimeError: unable to define an event with event_kind that overlaps with an existing type <class 'charms.data_platform_libs.v0.database_requires.DatabaseEvents'> attribute: an_alias_database_created
...
1 failed, 1 passed in Xs

Versions

ops: 2.4.1
ops-scenario: 4.0.4
charms.data_platform_libs.v0.database_requires: 
    LIBID = "0241e088ffa9440fb4e3126349b2fb62"
    LIBAPI = 0
    LIBPATCH = 4

DatabaseRequestedEvent doesn't set `app` or `unit`

Bug Description

The emitter of DatabaseRequestedEvent doesn't set the event.unit or event.app properties inherited from RelationEvent, so they both default to None.

For MySQL, the remote unit.name is used to index the unit-specific user within the database (e.g. user@address).

Also, without these properties set, direct access to the relation's remote app/unit databag is not possible, e.g.:

(Pdb) event.relation.data[event.unit]
*** KeyError: None

To Reproduce

1. Add a handler for DatabaseRequestedEvent
2. Inside the handler, try to access event.app or event.unit

Environment

Tested with v0.1 from charmhub

Relevant log output

2022-07-04 21:14:26,327 ERROR    Uncaught exception while in charm code:
Traceback (most recent call last):
  File "./src/charm.py", line 281, in <module>
    main(MySQLOperatorCharm)
  File "/var/lib/juju/agents/unit-mysql-81/charm/venv/ops/main.py", line 431, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-mysql-81/charm/venv/ops/main.py", line 142, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-mysql-81/charm/venv/ops/framework.py", line 316, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-mysql-81/charm/venv/ops/framework.py", line 784, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-mysql-81/charm/venv/ops/framework.py", line 857, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-mysql-81/charm/lib/charms/data_platform_libs/v0/database_provides.py", line 187, in _on_relation_changed
    self.on.database_requested.emit(event.relation)
  File "/var/lib/juju/agents/unit-mysql-81/charm/venv/ops/framework.py", line 316, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-mysql-81/charm/venv/ops/framework.py", line 784, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-mysql-81/charm/venv/ops/framework.py", line 857, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-mysql-81/charm/src/relations/provides.py", line 68, in _on_database_requested
    remote_host = event.relation.data[event.unit].get("private-address")
  File "/var/lib/juju/agents/unit-mysql-81/charm/venv/ops/model.py", line 890, in __getitem__
    return self._data[key]
KeyError: None

Additional context

No response

No resume-upgrade support for VM

For the sake of a common UI, add support for resume-upgrade action on VM.

Optionally, one could opt-in to the resume-upgrade by setting a parameter to the pre-upgrade-check action.

[DESIGN] Secret fields shouldn't appear in the databag

Currently (re-using the code from Cross-charm Relations), the pre-requested secret fields are saved in the databag.

They should be dynamically determined instead.

The overhead shouldn't be significant. It's true that this means we need to fetch ALL secrets on the first occasion the secret_fields attribute is referred to (which happens in pretty much all operations).

However, since we strictly limit the number of secrets being used, we'll probably have to fetch no more than 2-3 secrets from the Juju Secret Store. This should not amount to a massive overhead.
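
A minimal sketch of the dynamic approach, with a hypothetical `fetch_secret` callable standing in for the Juju Secret Store lookup (class and group names are illustrative, not the library's API):

```python
from functools import cached_property
from typing import Callable, Dict, Optional, Set


class DynamicSecretFields:
    """Sketch: derive secret_fields from the secrets themselves,
    instead of persisting the field list in the databag."""

    # The small, fixed set of secret groups actually in use.
    SECRET_GROUPS = ("user", "tls")

    def __init__(self, fetch_secret: Callable[[str], Optional[Dict[str, str]]]):
        self._fetch_secret = fetch_secret  # hypothetical store lookup

    @cached_property
    def secret_fields(self) -> Set[str]:
        # Fetch each group once, the first time the attribute is used;
        # at most 2-3 lookups, so the overhead stays small.
        fields: Set[str] = set()
        for group in self.SECRET_GROUPS:
            fields |= set(self._fetch_secret(group) or {})
        return fields
```

`cached_property` ensures the store is only queried on first access, matching the "no more than 2-3 fetches" argument above.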

[Model] Make (de)serialization a first class feature

Currently, we have two intertwined issues: (1) the databag format changes between charm revisions and we need to handle that during upgrades; and (2) predictable serialization, i.e. making sure a given class with the same data always gets transformed into exactly the same string.

The former problem will occur every time we change the way we handle databags. During upgrades, we will also have to provide a way to move from "model-revision-X" to "model-revision-X+1", and as @MiaAltieri highlighted, the two will have to coexist while the upgrade is in progress.

The latter is an old issue that recurrently bubbles up in Juju. As we build more complex data structures that are serialized to the databag, we can no longer guarantee that the final string remains the same, even if the original struct held exactly the same data.

Proposal

To have a BaseModel class that can cover both scenarios.

This class would look like:

class RootDataPlatformModel(BaseModel):

    revision: int

    @root_validator
    def convert_data_following_rev(...):
        # Sequence of if-elif-else that formats the data according to the incoming revision code

    def __str__(...):
        data = self.model_dump()  # define custom serializers with @model_serializer

        # Iterate over the revisions this class knows about: the final string must follow the `revision` field
        data_following_rev = ...

        # Now that we know the data formatted as a dict for revision X, iterate over its structure and convert
        # each piece to a string
        for key, value in data_following_rev.items():
            # For each literal type, the conversion to string is already done
            # For each object within the "key" or "value" that implements RootDataPlatformModel, call str(this object)
            # For structures like raw Lists, Sets, Tuples, make sure they are ordered

Using Metaclasses to Implement Different Revisions

This code will rapidly evolve into a complex web of if-elif-else as the number of exceptions between revisions grows. Ideally, we do not want to deal with the object RootDataPlatformModel, but rather with RootDataPlatformModelRevisionX when we are working on revision X.

So, what we could do instead is have a metaclass to decide which final object we will create, based on the revision:

class RootModelSelector(type):
    def __new__(cls, *args, **kwargs):
        # This method will return a different subclass of `BaseModel` depending on which revision code has been provided


class RootDataPlatformModel(metaclass=RootModelSelector):

    revision: int

    @root_validator
    def deserializer(...):
        # Now, this method is drastically simplified, as each revision-specific subclass handles only its own format

    def __str__(...):
        # Also simplified: we always serialize for the appropriate revision

And the way we'd use it is:

class PeerRelationRev95Model(RootPeerRelationModel):
    ...


class PeerRelationRev99Model(RootPeerRelationModel):
    ...

class PeerRelationSelector(RootModelSelector):
    # Implements __new__ to select the subclass as follows:
    # if revision <= 95: return PeerRelationRev95Model()
    # elif 95 < revision <= 99: return PeerRelationRev99Model()


class RootPeerRelationModel(BaseModel, metaclass=PeerRelationSelector):

    revision: int

    @root_validator
    def deserializer(...):
        # Now, this method is drastically simplified, as each revision-specific subclass handles only its own format

    def __str__(...):
        # Also simplified: we always serialize for the appropriate revision

Any "upper class" exclusively calls RootPeerRelationModel.

How to Select the Revision

The actual selection of the revision is done by the upper classes that create BaseModel objects. These are the classes that have access to the databag and can also query the main charm to know which revision they should take into consideration.

If we are in the middle of an upgrade, databag revisions X and X+1 may coexist for some time. Therefore, a class that decides to load the databag should first check whether all its peers have finished moving to X+1. If not, it should continue to use (de)serialization for revision X.

Once the upgrade is finished, the units can use revision X+1 (de)serialization and persist that new format in the databag instead.
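
The selection rule above can be sketched as a small helper (names and signature are illustrative, not the library's API):

```python
from typing import List


def revision_to_use(peer_revisions: List[int], old: int, new: int) -> int:
    """Sketch: during an upgrade, keep (de)serializing with the old
    revision until every peer reports the new one; only then switch."""
    if peer_revisions and all(rev >= new for rev in peer_revisions):
        return new
    # Some peer is still on the old format (or we have no peer data yet),
    # so both formats coexist: stay on the old revision.
    return old
```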

Updated secret still at earlier content in observers (that labelled them)

Steps to reproduce

Remove pytest.mark.xfail from test tests/integration/test_charm.py -> test_provider_get_set_delete_fields_secrets for the TLS parameter

Expected behavior

The secret in application/<unit_id> is rolled to peek_content(). It does not return the old value.

Actual behavior

It is returning the old value instead of the new one.

Versions

Operating system:

Juju CLI: 3.2.2

Juju agent: 3.1.6

Log output

Juju debug log: , to be verified with juju debug-code

Additional context

[BUG] `DataPeerOtherUnit` is instantiating the wrong handler

Steps to reproduce

  1. Instantiate more than one DataPeerOtherUnit instances

Expected behavior

Works

Actual behavior

RuntimeError: two objects claiming to be OpensearchDasboardsCharm/DataPeerOtherUnit[dashboard_peers] have been created

Additional context

diff --git a/lib/charms/data_platform_libs/v0/data_interfaces.py b/lib/charms/data_platform_libs/v0/data_interfaces.py
index d24aa6f..bf7dd40 100644
--- a/lib/charms/data_platform_libs/v0/data_interfaces.py
+++ b/lib/charms/data_platform_libs/v0/data_interfaces.py
@@ -1820,7 +1820,7 @@ class DataPeerOtherUnit(DataPeerOtherUnitData, DataPeerOtherUnitEventHandlers):
             secret_field_name,
             deleted_label,
         )
-        DataPeerEventHandlers.__init__(self, charm, self, unique_key)
+        DataPeerOtherUnitEventHandlers.__init__(self, charm, self, unique_key)
 
 
 # General events

DataPeer(Unit) `generate_secret_label()` should be a public interface function

The (peer) secret label has to be reproducible within charms.
Typically, a secret-changed handler may need to take action depending on which of the secrets the charm observes was the one that changed. See MongoDB's on_secret_changed().

NOTE: We may want to consider the same for DatabaseProvides, DatabaseRequires!

VM single unit upgrade will stall on "other units upgrading first"

On VM, with a single-unit application, on refresh the unit will get stuck with the status message "other units upgrading first".

This is because the upgrade library relies on the upgrade relation's config-changed being triggered by other units setting their unit databag state to ready, which never happens in a single-unit deployment.
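
A sketch of one possible fix, treating a deployment with no peer units as immediately ready (function and state names are illustrative, not the library's API):

```python
from typing import Dict


def other_units_ready(peer_unit_states: Dict[str, str]) -> bool:
    """Sketch: a single-unit app has no peers whose databag updates
    could trigger config-changed, so don't wait for them."""
    if not peer_unit_states:
        # No peers at all: nothing to wait for, proceed with the upgrade.
        return True
    return all(state == "ready" for state in peer_unit_states.values())
```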

FEATURE REQUEST - No-op secret.set_content() if the new content is the same as the current content

Context

Full context can be found in this Matrix thread

In summary, IS raised an issue of too many secret revisions slowing down a production Juju MongoDB snap, causing an outage.
After debugging, the cause was the Kafka charm, using the DP libs, calling the following on every update-status:

self.kafka_provider.set_tls(relation_id=relation.id, tls=tls)

After discussion with the Juju team, it appears that every time we secret.set_content(), it creates a new revision, regardless of whether or not the contents have actually changed.

Issue

Too many secret revisions cause issues for the Juju MongoDB snap.
Currently, there are no protections in ops or the DP libs against repeatedly updating a secret field with the same content, which is a fairly reasonable use case, already supported for 'Config' and 'Relation Data' setting.
Secrets should work the same way as charm 'Config' and 'Relation Data', where setting the same value ends up as a no-op.
The Juju team said this won't be possible to implement on their end.

Request

Make DP Libs check secret content during a desired update to a secret field, and if the update is the same as the current content, do nothing.

Desired Behavior

In pseudo, the main gist would be to support the following:

def update_clients(self):
    for relation in self.model.relations[CLIENT_RELATION_NAME]:
        secret = get_secret(relation.name)  # let's assume this func exists

        secret.set_content({"some-state": self.get_up_to_date_state()})

def _on_update_status(self, event: UpdateStatusEvent):
    self.update_clients()

The actual get_secret and set_content are of course abstracted by the lib. But essentially, whenever the lib would write a secret, it should check the content and skip the write if it's unchanged.
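
The check itself can be as small as comparing the pending content before writing. This is a sketch: FakeSecret is a stand-in for the real ops `Secret` (which does expose `peek_content()` and `set_content()`), so the snippet runs without ops installed:

```python
from typing import Dict


class FakeSecret:
    """Stand-in for ops.model.Secret, tracking revisions for the demo."""

    def __init__(self, content: Dict[str, str]):
        self._content = dict(content)
        self.revision = 1

    def peek_content(self) -> Dict[str, str]:
        return dict(self._content)

    def set_content(self, content: Dict[str, str]) -> None:
        self._content = dict(content)
        self.revision += 1  # every real set_content creates a new revision


def set_content_if_changed(secret, new_content: Dict[str, str]) -> bool:
    """Skip secret.set_content() when the content is unchanged, so no
    new secret revision is created. Returns True if a write happened."""
    if secret.peek_content() == new_content:
        return False
    secret.set_content(new_content)
    return True
```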

Failure to remove a field results in no other secret fields getting removed

Code causing issue

Steps to reproduce

self.database_provides.delete_relation_data(
    relation.id, fields=["username", "password", "uris", "field-that-cannot-be-removed"]
)

Expected behavior

juju show-unit app-name/0

shows no secrets for "username", "password", or "uris"

Actual behavior

juju show-unit app-name/0

still has secrets for "username", "password", and "uris"

Versions

Juju 3.1.6, latest lib

Exception when fetch relation data and no relation exists

Bug Description

An error is raised if no relation exists. This is returned from ops; however, I believe this error should be handled on the lib side rather than the charm side.

To Reproduce

Call data = self.database.fetch_relation_data() on a charm without a psql integration.

Environment

PSQL: latest edge
data_interfaces: LIBPATCH = 1

Relevant log output

Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/./src/charm.py", line 173, in <module>
    main(FastAPIDemoCharm)
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/venv/ops/main.py", line 438, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/venv/ops/main.py", line 150, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/venv/ops/framework.py", line 355, in emit
    framework._emit(event)  # noqa
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/venv/ops/framework.py", line 856, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/venv/ops/framework.py", line 931, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/./src/charm.py", line 63, in _on_database_relation_removed
    self._update_layer_and_restart(None)
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/./src/charm.py", line 79, in _update_layer_and_restart
    new_layer = self._pebble_layer.to_dict()
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/./src/charm.py", line 147, in _pebble_layer
    "environment": self.app_environment,
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/./src/charm.py", line 99, in app_environment
    db_data = self.fetch_postgres_relation_data()
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/./src/charm.py", line 110, in fetch_postgres_relation_data
    data = self.database.fetch_relation_data()
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 512, in fetch_relation_data
    key: value for key, value in relation.data[relation.app].items() if key != "data"
  File "/var/lib/juju/agents/unit-demo-api-charm-0/charm/venv/ops/model.py", line 914, in __getitem__
    raise KeyError(
KeyError: 'Cannot index relation data with "None". Are you trying to access remote app data during a relation-broken event? This is not allowed.'

Additional context

No response

[UX] Block charm on deployment with a message "This charm is NOT meant to be deployed as Juju application."

Hi,

It is a UX proposal: this charm is not supposed to be deployed, but confusing results are achieved if it is deployed by mistake:

ubuntu@juju3:~$ juju status
Model   Controller  Cloud/Region        Version  SLA          Timestamp
demo18  microk8s    microk8s/localhost  3.1.6    unsupported  18:07:42+01:00

App                 Version  Status   Scale  Charm               Channel  Rev  Address         Exposed  Message
data-platform-libs           waiting      1  data-platform-libs  stable    26  10.152.183.245  no       installing agent

Unit                   Workload  Agent  Address       Ports  Message
data-platform-libs/0*  unknown   idle   10.1.141.221  

Proposal: block the charm with the message: This charm is NOT meant to be deployed as Juju application.

ubuntu@juju3:~$ juju status
Model   Controller  Cloud/Region        Version  SLA          Timestamp
demo18  microk8s    microk8s/localhost  3.1.6    unsupported  18:07:42+01:00

App                 Version  Status   Scale  Charm               Channel  Rev  Address         Exposed  Message
data-platform-libs           blocked      1  data-platform-libs  stable    26  10.152.183.245  no       This charm is NOT meant to be deployed as Juju application.

Unit                   Workload  Agent  Address       Ports  Message
data-platform-libs/0*  blocked   idle   10.1.141.221         This charm is NOT meant to be deployed as Juju application.

Low priority UX improvement.
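
A minimal sketch of the proposed behavior. The BlockedStatus and unit classes below are stand-ins so the snippet is self-contained; a real charm would import BlockedStatus from ops.model and observe the install event via self.framework.observe:

```python
class BlockedStatus:
    """Stand-in for ops.model.BlockedStatus."""

    def __init__(self, message: str):
        self.name = "blocked"
        self.message = message


class _Unit:
    """Stand-in for the charm's unit object."""

    def __init__(self):
        self.status = None


class DataPlatformLibsCharm:
    """Sketch of the proposed UX: block on install with a clear message."""

    BLOCK_MESSAGE = "This charm is NOT meant to be deployed as Juju application."

    def __init__(self):
        self.unit = _Unit()

    def _on_install(self, _event=None):
        # In a real charm, this handler would be registered with
        # self.framework.observe(self.on.install, self._on_install).
        self.unit.status = BlockedStatus(self.BLOCK_MESSAGE)
```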

[secret-changed] Not to lock ourselves out of our DB (DataPeer)

Steps to reproduce

Assume as below:

  1. The DB admin password changes (gets "committed" to the Juju Secret store).
  2. Whenever the DB password changes, we may need to execute "some actions", like updating the admin password on the database level.
     Now we have a "chicken and egg" problem: if either of those operations fails, we lose the old/new password.
     This is why additional actions (like updating the password in the DB) must go in the secret-changed event.
     These actions MUST go in the secret-changed event by convention (which is equally received by the owner and observers).
  3. The secret-changed event handler (the single place where we've got the "memory", i.e. the old value of the secret) fails and/or gets deferred.
  4. The charm tries to use the new secret value (which is what the Juju Secret store now returns) -- however the DB never succeeded in setting the new admin password, so the new credentials aren't in place... while we have no trace of the old credentials anymore.

Expected behavior

As long as not all of the charm-side actions relating to the secret update have been performed, the secret MUSTN'T be updated from the perspective of the charm. It has to be "stuck" on the old value.

(At this point we may want to consider whether so-called callback functions should be linked to secrets or not.)

Actual behavior

The "horror" scenario above applies.

Versions

Operating system: doesn't matter

Juju CLI: 3.1+

Juju agent: corresponding

[BUG][BACKWARDS_COMPATIBILITY] Updates from URI-in-databag broken

See: https://github.com/canonical/pgbouncer-operator/actions/runs/8639099075/job/23697059499#step:21:456

The deeper issue (or an additional consequence?) was that the secret with the internal label (URI) got the first secret group assigned as its label.

ubuntu@vm-lxd:~$ juju show-secret --reveal coc30v0ap9rj9bc4n6a0
coc30v0ap9rj9bc4n6a0:
  revision: 4
  owner: pgbouncer
  label: pgbouncer.app.user
  created: 2024-04-11T18:48:29Z
  updated: 2024-04-11T21:17:41Z
  content:
    auth-file: |-
      "pgbouncer_auth_relation_6" "md5e56f74ca75a73e9746b891a61f9fd888"
      "pgbouncer_stats_pgbouncer" "md59c1005d6dde73f766e6b03bac3294efc"
    cfg-file: "[databases]\npostgresql_test_app_first_database = host=10.99.134.44
      dbname=postgresql_test_app_first_database port=5432 auth_user=pgbouncer_auth_relation_6\n\n[pgbouncer]\nlisten_addr
      = 127.0.0.1\nlisten_port = 6432\nlogfile = /var/lib/postgresql/pgbouncer/pgbouncer.log\npidfile
      = /var/lib/postgresql/pgbouncer/pgbouncer.pid\nadmin_users = \nstats_users =
      pgbouncer_stats_pgbouncer\nauth_type = md5\nuser = snap_daemon\nmax_client_conn
      = 10000\nignore_startup_parameters = extra_float_digits,options\nserver_tls_sslmode
      = prefer\nso_reuseport = 1\nunix_socket_dir = /var/lib/postgresql/pgbouncer\npool_mode
      = session\nmax_db_connections = 100\ndefault_pool_size = 5\nmin_pool_size =
      3\nreserve_pool_size = 3\nauth_query = SELECT username, password FROM pgbouncer_auth_relation_6.get_auth($1)\nauth_file
      = /var/snap/charmed-postgresql/current/etc/pgbouncer/pgbouncer/userlist.txt\n\n"
    monitoring-password: rVhoyYk3uGU0xbgHy4XISneV
    relation-id-7: xSoOaBtBmvXJ870bSYTGB2ZR

For more context a more detailed explanation of the problem here: #157 (comment)

metaclass conflict

When trying to import the library, we get an exception:

2023-04-14 09:57:40 WARNING install Traceback (most recent call last):
2023-04-14 09:57:40 WARNING install   File "./src/charm.py", line 13, in <module>
2023-04-14 09:57:40 WARNING install     from charms.data_platform_libs.v0.data_interfaces import DatabaseRequires
2023-04-14 09:57:40 WARNING install   File "/var/lib/juju/agents/unit-mlflow-server-0/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 356, in <module>
2023-04-14 09:57:40 WARNING install     class DataProvides(Object, ABC):
2023-04-14 09:57:40 WARNING install TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
2023-04-14 09:57:40 ERROR juju.worker.uniter.operation runhook.go:140 hook "install" (via hook dispatching script: dispatch) failed: exit status 1

One potential issue might be that our charm requires ops<2, which we set in our charm's requirements.txt, while this lib requires ops>=2. Not sure how charmcraft is going to resolve that.
But the issue might be different; worth investigating.

is_resource_created should return False where there is no relation

Bug Description

When there are no relations, is_resource_created returns True since

all([]) == True

The UX that I would find more intuitive is:

  • when calling is_resource_created() with no relations, we return False
  • when calling is_resource_created(relation_id) with no relation, we raise an exception
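
A sketch of the proposed semantics, with the relation bookkeeping simplified to a dict mapping relation_id to "resource created" flags (not the library's actual signature):

```python
from typing import Dict, Optional


def is_resource_created(
    relations: Dict[int, bool], relation_id: Optional[int] = None
) -> bool:
    """Sketch of the proposed UX: False when there are no relations
    at all, and an exception for an unknown relation_id."""
    if relation_id is not None:
        if relation_id not in relations:
            raise IndexError(f"relation id {relation_id} cannot be accessed")
        return relations[relation_id]
    if not relations:
        return False  # instead of relying on all([]) == True
    return all(relations.values())
```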

To Reproduce

N/A

Environment

N/A

Relevant log output

N/A

Additional context

No response

[DESIGN] Evaluate whether Relation Data could be split from Events, and whether `charm` could be non-mandatory for the Relation Data representation

Following the Charm Structure Patterns spec, it would be excellent to offer the same functionality (Relation Data abstraction, specific event handlers) in a way that the two could be handled separately (instantiated in separate modules, etc.).

At this point, it would also be useful to make the interfaces "independent" of the charm, in the sense that we use framework, app, and unit instead.

All of the above should be possible in a backwards-compatible way -- which should be feasible based on the current code.

`TypeError: metaclass conflict` encountered when using data_interfaces charm library

Bug Description

My charm slurmdbd-operator fails to install because the error TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases is thrown when importing the data_interfaces charm library.

Looking through the charm library, the issue is being caused by the DataProvides and DataRequires base classes trying to inherit from both ops.framework.Object and abc.ABC. ops.framework.Object and abc.ABC use different metaclasses so conflicts are encountered when trying to define DataProvides and DataRequires at charm runtime.

To Reproduce

  1. git clone [email protected]:NucciTheBoss/slurmdbd-operator.git@mysql-migration
  2. cd slurmdbd-operator && charmcraft -v pack
  3. juju deploy ./slurmdbd_ubuntu-20.04-amd64_centos-7-amd64.charm slurmdbd
  4. See error once install hook is executed.

Environment

I am running Juju locally using the LXD server hosted on my workstation. I am using v0.6 of the data_interfaces charm library and Juju version 3.1, and my slurmdbd-operator currently uses Ubuntu 20.04 LTS as its base.

Relevant log output

unit-slurmdbd-1: 14:57:56 INFO juju.worker.uniter found queued "install" hook                                                                                                                                      
unit-slurmdbd-1: 14:57:56 WARNING unit.slurmdbd/1.install Traceback (most recent call last):                                                                                                                       
unit-slurmdbd-1: 14:57:56 WARNING unit.slurmdbd/1.install   File "./src/charm.py", line 12, in <module>                                                                                                            
unit-slurmdbd-1: 14:57:56 WARNING unit.slurmdbd/1.install     from charms.data_platform_libs.v0.data_interfaces import (                                                                                           
unit-slurmdbd-1: 14:57:56 WARNING unit.slurmdbd/1.install   File "/var/lib/juju/agents/unit-slurmdbd-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 354, in <module>                           
unit-slurmdbd-1: 14:57:56 WARNING unit.slurmdbd/1.install     class DataProvides(Object, ABC):                                                                                                                     
unit-slurmdbd-1: 14:57:56 WARNING unit.slurmdbd/1.install TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases                      
unit-slurmdbd-1: 14:57:56 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1                                                                          
unit-slurmdbd-1: 14:57:56 INFO juju.worker.uniter awaiting error resolution for "install" hook

Additional context

To resolve the metaclass conflict, you need to create a new metaclass that inherits from both ops.framework._Metaclass and abc.ABCMeta.
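
The fix described above can be demonstrated in miniature. FrameworkMeta and Object below are stand-ins for ops.framework._Metaclass and ops.framework.Object, so the snippet runs without ops installed:

```python
import abc


class FrameworkMeta(type):
    """Stand-in for ops.framework._Metaclass."""


class Object(metaclass=FrameworkMeta):
    """Stand-in for ops.framework.Object."""


# class DataProvides(Object, abc.ABC) would raise:
#   TypeError: metaclass conflict: the metaclass of a derived class must
#   be a (non-strict) subclass of the metaclasses of all its bases


class _DataMeta(FrameworkMeta, abc.ABCMeta):
    """Combined metaclass: a subclass of both bases' metaclasses,
    which resolves the conflict."""


class DataProvides(Object, abc.ABC, metaclass=_DataMeta):
    @abc.abstractmethod
    def _on_relation_changed(self, event): ...
```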

Non-existing secret error message shouldn't display set contents such as "{non-existing-secret}"

Steps to reproduce

Use mongodb-operator unittests -> test_delete_password()

Expected behavior

Instead of the expected "Non-existing secret {'ca-secret'}" it should be "Non-existing secret(s) 'ca-secret', 'any-other-secret'..."

Actual behavior

See above
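The suggested wording could be produced by a small helper along these lines (the function name is an assumption, not the library's API):

```python
def missing_secret_message(fields) -> str:
    """Format the missing-secret fields without leaking Python set syntax."""
    joined = ", ".join(f"'{field}'" for field in sorted(fields))
    return f"Non-existing secret(s) {joined}"

print(missing_secret_message({"ca-secret", "any-other-secret"}))
# Non-existing secret(s) 'any-other-secret', 'ca-secret'
```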

Versions

Operating system: any

Juju CLI: any with secrets

Juju agent: any with secrets

Log output

Juju debug log: None

Additional context

[BUG][BACKWARDS_COMPATIBILITY] URI in databag is removed on first read not write

In case the secret URI was stored in the databag (the Peer Relation Data solution before moving to secret labels), don't remove the databag field after the first read, BUT on the first write.

This could result in issues if we do an upgrade (from a version that was retrieving secrets via the URI in the databag), read our secrets (thus lose the databag internal-field key that was referring to the secret URI), and then do a rollback.
(NOTE: in case no rollback is applied, all works well.)
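A sketch of the requested ordering, with hypothetical field and method names (this is not the library's actual code): the legacy URI field survives any number of reads and is only dropped once data has been written under the new label scheme.

```python
class PeerData:
    """Sketch only: keep the legacy secret-URI databag field until the
    first WRITE, so a rollback can still locate the secret by its URI."""

    URI_FIELD = "internal-secret-uri"  # hypothetical legacy field name

    def __init__(self, databag):
        self.databag = databag

    def read_secret_uri(self):
        # Reads must NOT pop the field -- a later rollback still needs it.
        return self.databag.get(self.URI_FIELD)

    def write_secret(self, content, store):
        # Only the first write migrates fully to the label scheme; at that
        # point the legacy URI field can safely be dropped.
        store.update(content)
        self.databag.pop(self.URI_FIELD, None)

bag = {"internal-secret-uri": "secret:abc123"}
peer = PeerData(bag)
peer.read_secret_uri()                      # field still present afterwards
peer.write_secret({"password": "x"}, {})    # field removed only here
```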

Review `delete_relation_data()` workflow

Steps to reproduce

  1. Juju3
  2. Delete a secret field, like username

Expected behavior

No attempt to remove it from the databag

Actual behavior

I think we currently also evaluate it as a 'normal' field
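A sketch of the expected routing inside a deletion workflow (function and parameter names are assumptions): secret-backed fields are split out before the databag is touched, so no databag deletion is ever attempted for them.

```python
def delete_fields(databag, secret_fields, fields):
    """Sketch: split fields to delete so that secret-backed ones are
    never treated as 'normal' databag entries."""
    secret_backed = [f for f in fields if f in secret_fields]
    normal = [f for f in fields if f not in secret_fields]
    for field in normal:
        databag.pop(field, None)  # only plain fields touch the databag
    return secret_backed  # the caller removes these from the Juju secret
```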

Versions

Operating system: independent of the OS

Juju CLI: any version > 3

Juju agent: independent of libjuju

Log output

Juju debug log:

Additional context

Support for enabling database extensions

There's currently no way to enable database extensions as the connection user. The database charm will not be aware of the user the related application is requesting, so setting up the extensions through the database charm configuration is not an option.

IMHO the relation library should support enabling these extensions. Otherwise, an SQL client library would need to be included to execute the corresponding SQL statements. One possible solution would be for the library to support this SQL execution.
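One possible shape for this, sketched under assumptions (the "extensions" relation field and the helper below are hypothetical, not part of the current interface): the requirer lists the extensions it needs in the relation data, and the provider, which holds admin credentials, turns them into SQL it executes itself.

```python
# Hypothetical relation field "extensions": a comma-separated list set by
# the requirer; the provider enables each one on the requested database.
def extension_statements(relation_data):
    raw = relation_data.get("extensions", "")
    extensions = [e.strip() for e in raw.split(",") if e.strip()]
    return [f'CREATE EXTENSION IF NOT EXISTS "{ext}";' for ext in extensions]

for stmt in extension_statements({"database": "wordpress",
                                  "extensions": "pg_trgm, hstore"}):
    print(stmt)
# CREATE EXTENSION IF NOT EXISTS "pg_trgm";
# CREATE EXTENSION IF NOT EXISTS "hstore";
```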

Feature request: Support tls ca change propagation to DatabaseRequires

This issue outlines a feature request for supporting tls change propagation for apps on the DatabaseRequires side of the relation.

Currently, the tls-ca and tls info is only passed on the DatabaseCreatedEvent, which is triggered once during database relation setup. When tls is later configured or changed, there is currently no propagation of such an event to the application side. Yet when working with self-signed certificates, the certificates may need to be added to the trusted root CA store, hence the requirement for the change to be propagated.
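A sketch of what requirer-side propagation could look like (all names here are assumptions): track the last-seen tls-ca value on relation-changed and surface a custom event only when it actually differs.

```python
class CaWatcher:
    """Sketch: requirer-side tracking that turns a changed tls-ca value
    in the remote databag into a custom event."""

    def __init__(self):
        self._last_ca = None
        self.events = []

    def on_relation_changed(self, remote_data):
        ca = remote_data.get("tls-ca")
        if ca is not None and ca != self._last_ca:
            self._last_ca = ca
            # A real library would emit e.g. a CertificateUpdatedEvent here.
            self.events.append(("certificate_updated", ca))
```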

Thank you very much! :)

[DESIGN] Should ALL peer data be handled as secrets?

In order to eliminate the potential for developer error (i.e. "forgetting to register a peer relation field as a secret field"), should we consider a specific interface function that makes sure that a field is added as a secret?

I.e. one that could be called from the charm's set_secret() method, ensuring that the field will be considered a secret.
It would dynamically add the field to secret-fields if needed (see #127) and add the value specified.

This should be very simple to do: literally call the new function to be added, let's say update_secret_fields(new_field), followed by update_relation_data().
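A sketch of the proposed helper (class, method, and attribute names are assumptions): registering a field as secret is folded into the write path, so a charm author cannot forget it.

```python
class PeerRelationData:
    """Sketch of the proposal: set_secret() registers the field as a
    secret field before storing its value."""

    def __init__(self, secret_fields):
        self.secret_fields = list(secret_fields)
        self.secret_store = {}  # stand-in for the Juju secret content

    def update_secret_fields(self, new_field):
        # Proposed new function: idempotently register the field.
        if new_field not in self.secret_fields:
            self.secret_fields.append(new_field)

    def set_secret(self, field, value):
        self.update_secret_fields(field)   # register first...
        self.secret_store[field] = value   # ...then store as a secret
```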

data_interfaces: DataPeerUnit adding secret is broken

Steps to reproduce

  1. Try to add a unit-level secret

Expected behavior

The secret is added

Actual behavior

Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-postgresql-1/charm/./src/charm.py", line 1475, in <module>
    main(PostgresqlOperatorCharm)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/venv/ops/main.py", line 436, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/venv/ops/main.py", line 144, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/venv/ops/framework.py", line 340, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/venv/ops/framework.py", line 842, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/venv/ops/framework.py", line 931, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/postgresql_k8s/v0/postgresql_tls.py", line 117, in _on_tls_relation_joined
    self._request_certificate(None)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/postgresql_k8s/v0/postgresql_tls.py", line 97, in _request_certificate
    self.charm.set_secret(SCOPE, "key", key.decode("utf-8"))
  File "/var/lib/juju/agents/unit-postgresql-1/charm/./src/charm.py", line 264, in set_secret
    self.peer_relation_unit.update_relation_data(peers.id, {key: value})
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 451, in wrapper
    return f(self, *args, **kwargs)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 999, in update_relation_data
    return self._update_relation_data(relation, data)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 1582, in _update_relation_data
    _, normal_fields = self._process_secret_fields(
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 800, in _process_secret_fields
    if group_result := operation(relation, group, secret_fields, *args, **kwargs):
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 1096, in _add_or_update_relation_secrets
    return self._add_relation_secret(relation, group, secret_fields, data, uri_to_databag)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 462, in wrapper
    return f(self, *args, **kwargs)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 1050, in _add_relation_secret
    secret = self.secrets.add(label, content, relation)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 571, in add
    secret.add_secret(content, relation)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/lib/charms/data_platform_libs/v0/data_interfaces.py", line 494, in add_secret
->>>>>>>>>>>>>>>>>>>>>    secret = self.charm.app.add_secret(content, label=self.label)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/venv/ops/model.py", line 442, in add_secret
    id = self._backend.secret_add(
  File "/var/lib/juju/agents/unit-postgresql-1/charm/venv/ops/model.py", line 3372, in secret_add
    result = self._run('secret-add', *args, return_output=True)
  File "/var/lib/juju/agents/unit-postgresql-1/charm/venv/ops/model.py", line 2948, in _run
    raise ModelError(e.stderr) from e
ops.model.ModelError: ERROR this unit is not the leader

Versions

Operating system: Ubuntu LTS

Juju CLI: 3.1.7

Juju agent: 3.1.6

Log output

Juju debug log: see above

Additional context
