mariadb-operator's Introduction

🦭 mariadb-operator

Run and operate MariaDB in a cloud native way. Declaratively manage your MariaDB using Kubernetes CRDs rather than imperative commands.

Please refer to the documentation, the API reference, and the example suite for further detail.

Bare minimum installation

This installation flavour provides the minimum resources required to run mariadb-operator in your cluster.

helm repo add mariadb-operator https://mariadb-operator.github.io/mariadb-operator
helm install mariadb-operator mariadb-operator/mariadb-operator

Recommended installation

The recommended installation includes the following features:

  • Metrics: Leverage the Prometheus operator to scrape mariadb-operator's internal metrics.
  • Webhook certificate renewal: Automatic webhook certificate issuance and renewal using cert-manager. By default, a static self-signed certificate is generated.

helm repo add mariadb-operator https://mariadb-operator.github.io/mariadb-operator
helm install mariadb-operator mariadb-operator/mariadb-operator \
  --set metrics.enabled=true --set webhook.cert.certManager.enabled=true

OpenShift

The OpenShift installation is managed separately in the mariadb-operator-helm repository, which contains a Helm-based operator that allows you to install mariadb-operator via OLM.

Quickstart

Let's see mariadb-operator 🦭 in action! First of all, install the following configuration manifests, which will be referenced later by the CRDs:

kubectl apply -f examples/manifests/config

Next, you can proceed with the installation of a MariaDB instance:

kubectl apply -f examples/manifests/mariadb.yaml
kubectl get mariadbs
NAME      READY   STATUS    PRIMARY POD     AGE
mariadb   True    Running   mariadb-0       3m57s

kubectl get statefulsets
NAME      READY   AGE
mariadb   1/1     2m12s

kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
mariadb      ClusterIP   10.96.235.145   <none>        3306/TCP,9104/TCP   2m17s

Up and running 🚀, we can now create our first logical database and grant access to users:

kubectl apply -f examples/manifests/database.yaml
kubectl apply -f examples/manifests/user.yaml
kubectl apply -f examples/manifests/grant.yaml
kubectl get databases
NAME        READY   STATUS    CHARSET   COLLATE           AGE
data-test   True    Created   utf8      utf8_general_ci   22s

kubectl get users
NAME              READY   STATUS    MAXCONNS   AGE
user              True    Created   20         29s

kubectl get grants
NAME              READY   STATUS    DATABASE   TABLE   USERNAME          GRANTOPT   AGE
user              True    Created   *          *       user              true       36s

At this point, we can run our database initialization scripts:

kubectl apply -f examples/manifests/sqljobs
kubectl get sqljobs
NAME       COMPLETE   STATUS    MARIADB   AGE
01-users   True       Success   mariadb   2m47s
02-repos   True       Success   mariadb   2m47s
03-stars   True       Success   mariadb   2m47s

kubectl get jobs
NAME                  COMPLETIONS   DURATION   AGE
01-users              1/1           10s        3m23s
02-repos              1/1           11s        3m13s
03-stars-28067562     1/1           10s        106s

kubectl get cronjobs
NAME       SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
03-stars   */1 * * * *   False     0        57s             2m33s

Now that the database has been initialized, let's take a backup:

kubectl apply -f examples/manifests/backup.yaml
kubectl get backups
NAME               COMPLETE   STATUS    MARIADB   AGE
backup             True       Success   mariadb   15m

kubectl get jobs
NAME               COMPLETIONS   DURATION   AGE
backup-27782894    1/1           4s         3m2s

Last but not least, let's provision a second MariaDB instance bootstrapping from the previous backup:

kubectl apply -f examples/manifests/mariadb_from_backup.yaml
kubectl get mariadbs
NAME                  READY   STATUS    PRIMARY POD             AGE
mariadb               True    Running   mariadb-0               7m47s
mariadb-from-backup   True    Running   mariadb-from-backup-0   53s

kubectl get restores
NAME                                         COMPLETE   STATUS    MARIADB               AGE
bootstrap-restore-mariadb-from-backup        True       Success   mariadb-from-backup   72s

kubectl get jobs
NAME                                         COMPLETIONS   DURATION   AGE
backup                                       1/1           9s         12m
bootstrap-restore-mariadb-from-backup        1/1           5s         84s

Documentation

GitOps

You can embrace GitOps best practices by using this operator: just place your CRDs in a git repo and reconcile them with your favorite tool. See an example with Flux below:
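As a minimal sketch (assuming Flux is installed; the repository URL, path, and API versions below are placeholders and may differ in your setup), a GitRepository plus a Kustomization reconciling a directory of MariaDB resources could look like this:

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: mariadb-resources
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/mariadb-resources  # placeholder repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: mariadb-resources
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: mariadb-resources
  path: ./databases  # placeholder path containing your MariaDB, Database, User and Grant resources
  prune: true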

Roadmap

Take a look at our roadmap and feel free to open an issue to suggest new features.

Adopters

Please create a PR and add your company or project to our ADOPTERS.md file if you are using our project!

Contributing

We welcome and encourage contributions to this project! Please check our contributing and development guides. PRs welcome!

Community

Get in touch

mariadb-operator's People

Contributors

addreas, dependabot[bot], djakobczak1, gg-kialo, grilix, grooverdan, harunkucuk5, hoega, iangilfillan, jescarri, kosmoz, ksankeerth, kvaps, lgohyipex, luohaha3123, macno, mariadb-mmontes, mariadb-pieterhumphrey, martinweise, matthieugusmini, melledouwsma, mmontes11, nunnatsa, peterjanroes, pmig, roccazzella, stefan-bergstein, vidalee, wittdennis


mariadb-operator's Issues

`ServiceTemplate` to provide `Service` type and annotations

Be able to provide a Service template in the MariaDB CRD to customize how it is exposed in the cluster. For example, if we wanted to expose it using Metallb:

apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  ...
  service:
    type: LoadBalancer
    annotations:
      metallb.universe.tf/address-pool: sandbox

It should default to type ClusterIP and no annotations.

[Feature] Management UI

Is your feature request related to a problem? Please describe.

Many operators out there, such as Zalando's Postgres Operator, StackGres and MinIO, have a UI to create and manage deployments. I like this for monitoring purposes, because it gives a better overview of the available options and deployments.

Describe the solution you'd like

Some kind of management UI, to create and view deployments, change any mutable options, restart deployments to apply those configurations, and anything else that makes sense to have in an operator UI. Additionally, it could integrate with OIDC or Kubernetes RBAC for authentication.

Describe alternatives you've considered

Well, using kubectl and the CRDs, but sometimes a UI is just simpler.

Additional context

I'd love to help out with this, if there's a vision for this project then collaborating with its creators would make sense. I have skills in both Go and React, so building a dashboard and management UI shouldn't be too hard. Would love to hear if this is on the roadmap, and how I can help!

warning "Aborted connection .. to db: '...' user: '...' host: '10.1.156.63' (Got an error reading communication packets)

Kubernetes version: microk8s v1.26.1
mariadb-operator version: latest
Install method: helm
Install flavour: minimal

Hi, when I connect an application (Firefly III), the warning "Aborted connection .. to db: '...' user: '...' host: '10.1.156.63' (Got an error reading communication packets)" sometimes appears. The application does not receive the expected data in that case.
Thank you for any help.

[Feature] Inherit MariaDB labels

Is your feature request related to a problem? Please describe.

Our infrastructure relies on some Kubernetes labels where we expect specific values:

  • app.kubernetes.io/component
  • app.kubernetes.io/instance
  • app.kubernetes.io/name
  • app.kubernetes.io/version

Describe the solution you'd like
We'd like to be able to override the labels set by mariadb-operator with custom labels.
One way to do this would be to inherit labels from the MariaDB object's manifest into child objects (StatefulSet, Pods, ...).

Something like:

apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
  labels:
    app.kubernetes.io/component: mycomponent
    app.kubernetes.io/instance: myinstance
    app.kubernetes.io/name: myname
    app.kubernetes.io/version: 1.0.0

would create a StatefulSet and the associated Pods with the same labels.

Potentially, a configuration option to specify which labels should be inherited could allow backward compatibility.

Something like the configuration done by Zalando with their Postgres Operator:

inherited_labels list of label keys that can be inherited from the cluster manifest, and added to each child objects (Deployment, StatefulSet, Pod, PVCs, PDB, Service, Endpoints and Secrets) created by the operator. Typical use case is to dynamically pass labels that are specific to a given Postgres cluster, in order to implement NetworkPolicy. The default is empty.

https://opensource.zalando.com/postgres-operator/docs/reference/operator_parameters.html

[Bug] config configmap name is constructed as "config-" + mariadb.Name

The generated MariaDB Pod is configured with a ConfigMap different from the one I specified in myCnfConfigMapKeyRef.name.

My configuration

  myCnfConfigMapKeyRef:
    name: mariadb-my-cnf
    key: my.cnf

Resulting pod manifest:

  - configMap:
      defaultMode: 420
      items:
      - key: my.cnf
        path: my.cnf
      name: config-mariadb
    name: config

I think it comes from here:

❯ rg config-
controllers/mariadb_controller.go
401:            Name:      fmt.Sprintf("config-%s", mariadb.Name),

Expected behaviour

The name of the ConfigMap should be exactly the one provided in myCnfConfigMapKeyRef.name, or the documentation should explain that the ConfigMap name is built using the "config-" prefix and the name of the MariaDB object.

Steps to reproduce the bug

Configure MariaDB object as described above.

Environment details:

  • Kubernetes version: 1.26.4
  • mariadb-operator version: 0.0.12
  • Install method: kustomize
  • Install flavour: custom

[Ha replication] enable two way replication / circular async

Hi!

First of all, thanks for your work. It's really a nice solution for databases in OpenShift!

We are going to migrate from an old physical infrastructure to OpenShift.
We currently use circular replication (master -> slave in both directions / ring replication) between multiple sites.

I would like to use the same type of replication between different OpenShift clusters.

I already use Submariner to link my clusters with services (exporting services through a VPN).

For now, if I understand it well, replication will only work in the local cluster with a special connection name?
Would it be possible to use the same concept with external resources?

Best regards

[Feature] Affinity for Backup and Restore Jobs

Describe the solution you'd like
Ability to specify affinity, nodeSelector and tolerations in Backup and Restore Jobs.

Additional context
We have supported HA features recently in the MariaDB CRD; we should support this in Backup and Restore as well, so we can schedule the Pods on the same node as MariaDB. This is important when it comes to physical backups, as we need access to the MariaDB PVC, which won't be possible if the Backup Pod is scheduled on a different node.
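One possible shape for this, sketched against the Backup CRD (the affinity, nodeSelector and tolerations fields are the proposed additions and mirror the core PodSpec; the kind name and mariaDbRef follow the other examples on this page, and field names are illustrative rather than a final API):

apiVersion: mariadb.mmontes.io/v1alpha1
kind: Backup
metadata:
  name: backup
spec:
  mariaDbRef:
    name: mariadb
  # Proposed scheduling fields, passed through to the generated Job's PodSpec
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/instance: mariadb
          topologyKey: kubernetes.io/hostname
  nodeSelector:
    kubernetes.io/os: linux
  tolerations:
    - key: database
      operator: Exists
      effect: NoSchedule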

chart broken: does not install CRDs

They are included in the Helm chart (in mariadb-operator/crds/crds.yaml) and I am not using `--skip-crds`. On this cluster I installed the operator before, so maybe the old CRDs are in conflict. But deleting the CRDs and trying to reinstall via Helm did not change anything.

[Feature] support read-only root fs

Is your feature request related to a problem? Please describe.

I would like to have MariaDB running with a read-only root filesystem, like all containers should.

Describe the solution you'd like

Either more direct support for it, or at least the ability to define additional custom volumes for the pod (e.g. /tmp, /var/lib).

Describe alternatives you've considered

There don't seem to be any options today to achieve this. The read-only root fs mode can be set via securityContext, but there seems to be no way to provide the additional (emptyDir) volumes to the MariaDB pod.
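A rough sketch of what this could look like on the MariaDB CRD (the securityContext field appears elsewhere on this page; the volumes/volumeMounts fields are hypothetical and are precisely what this request asks for):

apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  securityContext:
    readOnlyRootFilesystem: true
  # Hypothetical fields: back the writable paths with emptyDir volumes
  volumes:
    - name: tmp
      emptyDir: {}
  volumeMounts:
    - name: tmp
      mountPath: /tmp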

"Error creating ConfigMap" when using myCnfConfigMapKeyRef

Get "Error creating ConfigMap" when using myCnfConfigMapKeyRef

Reproduce steps:

  1. kubectl apply -f config/samples/config
  2. kubectl apply -f config/samples/mariadb_v1alpha1_mariadb_config.yaml
  3. Get "Error creating ConfigMap"

mariadb-operator: v0.11.0 installed via Helm

$ kubectl get mariadb
NAME      READY   STATUS                     AGE
mariadb   False   Error creating ConfigMap   6s
$ kubectl describe mariadb
Name:         mariadb
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  mariadb.mmontes.io/v1alpha1
Kind:         MariaDB
Metadata:
  Creation Timestamp:  2023-03-21T03:52:39Z
  Generation:          1
  Managed Fields:
    API Version:  mariadb.mmontes.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:image:
          .:
          f:pullPolicy:
          f:repository:
          f:tag:
        f:myCnfConfigMapKeyRef:
        f:port:
        f:rootPasswordSecretKeyRef:
        f:volumeClaimTemplate:
          .:
          f:accessModes:
          f:resources:
            .:
            f:requests:
              .:
              f:storage:
          f:storageClassName:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2023-03-21T03:52:39Z
    API Version:  mariadb.mmontes.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
    Manager:         mariadb-operator
    Operation:       Update
    Subresource:     status
    Time:            2023-03-21T03:52:39Z
  Resource Version:  5080193
  UID:               a22ab51e-7353-48d6-b25a-98709cce1e46
Spec:
  Image:
    Pull Policy:  IfNotPresent
    Repository:   mariadb
    Tag:          10.7.4
  My Cnf Config Map Key Ref:
    Key:   my.cnf
    Name:  mariadb-my-cnf
  Port:    3306
  Root Password Secret Key Ref:
    Key:   root-password
    Name:  mariadb
  Volume Claim Template:
    Access Modes:
      ReadWriteOnce
    Resources:
      Requests:
        Storage:         100Mi
    Storage Class Name:  standard
Status:
  Conditions:
    Last Transition Time:  2023-03-21T03:52:39Z
    Message:               Error creating ConfigMap
    Reason:                Failed
    Status:                False
    Type:                  Ready
Events:                    <none>

Reuse `sql.DB` objects in `TemplateReconciler`

We are creating database connections on the fly in the TemplateReconciler whenever a resource needs to be reconciled, see:

sql.Open returns a sql.DB that maintains its own pool of connections, but the problem here is that we might have multiple MariaDB instances with different connection details, so we cannot reuse the connection.

The idea would be introducing an LRU cache of sql.DB objects, keeping only the most recently used ones. It would also need to be indexable by the MariaDB's types.NamespacedName so we can efficiently get the right instance on each reconciliation cycle.

webhook - Address is not allowed

$ kubectl apply -f config/samples/mariadb_v1alpha1_mariadb.yaml

Error from server (InternalError): error when creating "config/samples/mariadb_v1alpha1_mariadb.yaml": Internal error occurred: failed calling webhook "vmariadb.kb.io": failed to call webhook: Post "https://mariadb-operator-webhook.mariadb-operator.svc:443/validate-mariadb-mmontes-io-v1alpha1-mariadb?timeout=10s": Address is not allowed

[Feature] HA via Galera

Support for HA via MariaDB Galera following the same approach as this PoC.

A possible solution might be:

  • When Galera is enabled in MariaDB resources, create the StatefulSet and Pods with the mariadb.mmontes.io/galera: enabled annotation
  • Create a new GaleraReconciler that will watch StatefulSets and Pods with the mariadb.mmontes.io/galera: enabled annotation
  • The Pod will have an extra sidecar container, the Galera reloader, which has an API to reload the /etc/mysql/mariadb.conf.d/galera.conf file and gracefully restart the mariadbd process by sending a signal.
  • Whenever a Pod goes down, the GaleraReconciler queries the StatefulSet to get the total number of pods, and based on their availability makes calls to the Galera reloaders to update the configuration. It will use the StatefulSet FQDN to selectively talk to specific instances (mariadb-0.mariadb.default.svc.cluster.local).
  • Check the PoC for more details: https://github.com/mmontes11/mariadb-galera-poc

Related to:

Additional resources:

Further improvements:

[Bug] Issues getting Quickstarted - Pod getting killed off, readiness probe failed

Describe the bug
The Quickstart process results in pods being launched that get killed off. The pod's events suggest that the service does not run:

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  42s                default-scheduler  Successfully assigned mariadb/mariadb-0 to k3s-n3
  Warning  Unhealthy  22s (x3 over 32s)  kubelet            Liveness probe failed: ERROR 2002 (HY000): Can't connect to local server through socket '/run/mysqld/mysqld.sock' (2)
  Normal   Killing    22s                kubelet            Container mariadb failed liveness probe, will be restarted
  Warning  Unhealthy  2s (x8 over 32s)   kubelet            Readiness probe failed: ERROR 2002 (HY000): Can't connect to local server through socket '/run/mysqld/mysqld.sock' (2)
  Normal   Pulled     1s (x2 over 42s)   kubelet            Container image "mariadb:10.7.4" already present on machine
  Normal   Created    1s (x2 over 42s)   kubelet            Created container mariadb
  Normal   Started    1s (x2 over 42s)   kubelet            Started container mariadb

Expected behaviour
A working MariaDB should be created, in a running state.

Steps to reproduce the bug
cert-manager and prometheus are both installed in the cluster.

  1. Install mariadb-operator from the Helm chart: helm install -n mariadb mariadb-operator mariadb-operator/mariadb-operator -f values.yaml, using the following values.yaml:
nameOverride: mariadb

metrics:
  enabled: true

ha:
  enabled: false
  2. Install the config manifests defined in the quickstart
kubectl -n mariadb apply -f samples/config
  3. Provision a MariaDB instance
kubectl -n mariadb apply -f samples/mariadb_v1alpha1_mariadb.yaml

I've modified mariadb_v1alpha1_mariadb.yaml to include a 'standard' storage class:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  namespace: mariadb
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mariadb-mariadb
  labels:
    app: mariadb-mariadb
spec:
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  # storageClassName: "mariadb-mariadb"
  # storageClassName: local-path
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/cephfs/apps/mariadb/mariadb"

---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  rootPasswordSecretKeyRef:
    name: mariadb
    key: root-password

  database: mariadb
  username: mariadb
  passwordSecretKeyRef:
    name: mariadb
    key: password

  image:
    repository: mariadb
    tag: "10.7.4"
    pullPolicy: IfNotPresent

  port: 3306

  volumeClaimTemplate:
    resources:
      requests:
        storage: 100Mi
    storageClassName: standard
    accessModes:
      - ReadWriteOnce

  myCnf: |
    [mysqld]
    bind-address=0.0.0.0
    default_storage_engine=InnoDB
    binlog_format=row
    innodb_autoinc_lock_mode=2
    max_allowed_packet=256M
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 300m
      memory: 512Mi

  env:
    - name: TZ
      value: SYSTEM

  envFrom:
    - configMapRef:
        name: mariadb

  podSecurityContext:
    runAsUser: 0

  securityContext:
    allowPrivilegeEscalation: false

Additional context

Environment details:

  • Kubernetes version: v1.24.11+k3s1
  • mariadb-operator version: 0.11
  • Install method: helm
  • Install flavour: Recommended I think.

Helm repo down ?

Hi,

Seems like the helm repo charts.mmontes-dev.duckdns.org is down, it times out after some time.

Cheers,

[Feature] Add possibility to specify user name in User CRD spec

Is your feature request related to a problem? Please describe.

Currently the name of the user in the database will be the name of the CRD. I have multiple MariaDB instances running in the same namespace and I want some users in the databases to have the same name without sharing the same CRD (since a CRD name is unique in its namespace).

Describe the solution you'd like

Add an (optional?) "username"/"name" field in the spec of the CRD that will define the name of the user in MariaDB.
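For example (the spec.name field is the proposed, hypothetical addition; the rest follows the existing User examples on this page):

apiVersion: mariadb.mmontes.io/v1alpha1
kind: User
metadata:
  name: foo-database-a  # CR name, unique per namespace
spec:
  mariaDbRef:
    name: mariadb-a
  # Proposed field: the actual user name created in MariaDB
  name: foo
  passwordSecretKeyRef:
    name: foo-password
    key: password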

Describe alternatives you've considered

Splitting the MariaDB CRDs across multiple namespaces, which is not possible on my end.

Additional context

Example: If I create a "foo" user in database A, I cannot create a user "foo" in database B since there is already a User CRD named "foo" in the namespace.

Environment details:

  • mariadb-operator version: v0.0.11

[Bug] Can't create user due to Host ... is not allowed to connect to this MariaDB server

Describe the bug

mariadb-operator can't create a user due to Error 1130: Host '10.244.1.39' is not allowed to connect to this MariaDB server. E.g.:

{"level":"error","ts":1682341296.5456517,"msg":"Reconciler error","controller":"user","controllerGroup":"mariadb.mmontes.io","controllerKind":"User","user":{"name":"photoprism","namespace":"mariadb"},"namespace":"mariadb","name":"photoprism","reconcileID":"9854f729-1653-428c-9f38-c431fd603f65","error":"error reconciling in TemplateReconciler: error creating MariaDB client: 1 error occurred:\n\t* Error 1130: Host '10.244.1.39' is not allowed to connect to this MariaDB server\n\n","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234"}

I can presumably log in to MariaDB to fix this manually.

Expected behaviour
mariadb-operator creates a user.

Steps to reproduce the bug

Install the 0.11.0 helm chart with no customisations, then apply the yaml below.

Additional context

Deployed CRs:

apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  rootPasswordSecretKeyRef:
    name: passwords
    key: root-password

  image:
    repository: mariadb
    tag: "10.7.4" # "10.11.2"
    pullPolicy: IfNotPresent

  port: 3306

  volumeClaimTemplate:
    resources:
      requests:
        storage: 10Gi
    storageClassName: rook-ceph-block
    accessModes:
      - ReadWriteOnce

  env:
    - name: TZ
      value: UTC
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: User
metadata:
  name: photoprism
spec:
  mariaDbRef:
    name: mariadb
  passwordSecretKeyRef:
    name: passwords
    key: photoprism
  # This field is immutable and defaults to 10
  maxUserConnections: 20
---
apiVersion: v1
kind: Secret
metadata:
  name: passwords
stringData:
  photoprism: photoprismsecret
  root-password: supersecret

Logged by MariaDB:

2023-04-24 12:23:05 2134 [Warning] Aborted connection 2134 to db: 'unconnected' user: 'unauthenticated' host: '10.244.1.39' (This connection closed normally without authentication)

Environment details:

  • Kubernetes version: v1.26.0
  • mariadb-operator version: 0.11.0
  • Install method: helm
  • Install flavour: default? I haven't customised any helm default values yet.

custom my.cnf file

Hi,
I don't see any way to use a custom my.cnf file or inject custom configuration for MariaDB.
Can anyone tell me how to do this with this operator?

OCP Support. Publish operator in OperatorHub.io

Make sure the operator behaves as expected in OCP, for example, by running the automated tests and then publish the operator in OperatorHub:

More detailed instructions can be found in this comment.

It's worth mentioning that OpenShift might require a different set of RBAC permissions:

Default values for Interval and ScrapeTimeout in ServiceMonitor are not working

Creating a MariaDB resource with the minimal metrics configuration allowed by the CRD causes an error.

Metrics configuration:

  metrics:
    exporter:
      image:
        repository: prom/mysqld-exporter
        tag: v0.14.0
    serviceMonitor:
      prometheusRelease: kube-prometheus-stack

Error:

{
    "level": "error",
    "ts": 1675436926.6985223,
    "msg": "Reconciler error",
    "controller": "mariadb",
    "controllerGroup": "mariadb.mmontes.io",
    "controllerKind": "MariaDB",
    "mariaDB": {
        "name": "mariadb",
        "namespace": "mariadb-test"
    },
    "namespace": "mariadb-test",
    "name": "mariadb",
    "reconcileID": "b27c0a29-a642-42ab-aa4c-90778e24f3bd",
    "error": "error creating ServiceMonitor: 1 error occurred:\n\t* error creating Service Monitor: ServiceMonitor.monitoring.coreos.com \"mariadb\" is invalid: [spec.endpoints[0].scrapeTimeout: Invalid value: \"'10s'\": spec.endpoints[0].scrapeTimeout in body should match '^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$', spec.endpoints[0].interval: Invalid value: \"'10s'\": spec.endpoints[0].interval in body should match '^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$']\n\n",
    "stacktrace": "sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234"
}

I think the problem has to do with quoting, as the resource looks like this after the failed reconciliation:

  metrics:
    exporter:
      image:
        pullPolicy: IfNotPresent
        repository: prom/mysqld-exporter
        tag: v0.14.0
    serviceMonitor:
      interval: '''10s'''
      prometheusRelease: kube-prometheus-stack
      scrapeTimeout: '''10s''' 

The version used to reproduce this error was v0.0.6.

mariadb_v1alpha1_mariadb_minimal.yaml fails

Kubernetes version: microk8s v1.26.1
mariadb-operator version: latest
Install method: helm
Install flavour: minimal

kubectl apply -f config/samples/mariadb_v1alpha1_mariadb_minimal.yaml

kubectl describe pod/mariadb-0

Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Normal   Scheduled               40s                default-scheduler        Successfully assigned mariadb-operator/mariadb-0 to kvmub02
  Normal   SuccessfulAttachVolume  39s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-06c18fac-12b5-4808-8b0d-9c4bee6849b9"
  Normal   Pulled                  11s (x3 over 31s)  kubelet                  Container image "mariadb:10.7.4" already present on machine
  Normal   Created                 11s (x3 over 31s)  kubelet                  Created container mariadb
  Normal   Started                 11s (x3 over 31s)  kubelet                  Started container mariadb
  Warning  BackOff                 7s (x7 over 29s)   kubelet                  Back-off restarting failed container mariadb in pod mariadb-0_mariadb-operator(96449cb0-be28-4855-89d3-4363d0217ece)

kubectl logs pod/mariadb-0

2023-02-28 15:29:32+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.7.4+maria~focal started.
2023-02-28 15:29:32+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2023-02-28 15:29:32+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.7.4+maria~focal started.
2023-02-28 15:29:32+00:00 [Note] [Entrypoint]: Initializing database files
2023-02-28 15:29:32 0 [ERROR] InnoDB: The Auto-extending data file './ibdata1' is of a different size 767 pages than specified by innodb_data_file_path
2023-02-28 15:29:32 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2023-02-28 15:29:33 0 [ERROR] Plugin 'InnoDB' init function returned error.
2023-02-28 15:29:33 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2023-02-28 15:29:33 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2023-02-28 15:29:33 0 [ERROR] Aborting

Installation of system tables failed! Examine the logs in
/var/lib/mysql/ for more information.

The problem could be conflicting information in an external
my.cnf files. You can ignore these by doing:

shell> /usr/bin/mariadb-install-db --defaults-file=~/.my.cnf

You can also try to start the mysqld daemon with:

shell> /usr/sbin/mariadbd --skip-grant-tables --general-log &

and use the command line tool /usr/bin/mariadb
to connect to the mysql database and look at the grant tables:

shell> /usr/bin/mysql -u root mysql
mysql> show tables;

Try 'mysqld --help' if you have problems with paths. Using
--general-log gives you a log in /var/lib/mysql/ that may be helpful.

The latest information about mysql_install_db is available at
https://mariadb.com/kb/en/installing-system-tables-mysql_install_db
You can find the latest source at https://downloads.mariadb.org and
the maria-discuss email list at https://launchpad.net/~maria-discuss

Please check all of the above before submitting a bug report
at https://mariadb.org/jira

[Feature] HA via Galera with Arbitrator

Describe the solution you'd like
I'd love to scale Galera with the use of a Galera Arbitrator. This makes it so we're able to run, for example, 2 Galera nodes with an Arbitrator for the 3rd vote.

Describe alternatives you've considered
Running 3 Galera nodes instead. Capacity-wise this consumes more resources.

Additional context

  • A feature request for after/during the integration of #4.
  • The Cluster Loses its Primary State Due to Split Brain section of the Crash Recovery documentation

[Feature] Ability to specify the securityContext for the MariaDB instance pod

Is your feature request related to a problem? Please describe.
I wanted to experiment with the operator using AWS EFS volumes, but the folders in this type of volume are owned by user 1000.
When MariaDB starts, it chowns /var/lib/mysql to its own user, but it doesn't have permission to do so since the folders are owned by user 1000 (chown: changing ownership of '/var/lib/mysql/': Operation not permitted).

Running the pod with

securityContext:
  runAsUser: 1000

would solve this issue. Unfortunately, while it's possible to define the securityContext for the controller pod and the controller container in the values.yaml file, it doesn't seem like it's possible for the MariaDB pod.

Describe the solution you'd like
The possibility to define the securityContext of the MariaDB pod in the values.yaml file.
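For reference, newer CRD versions appear to expose this directly on the MariaDB spec (see the quickstart bug report earlier on this page); a minimal sketch, assuming those fields are available:

apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  podSecurityContext:
    runAsUser: 1000
    fsGroup: 1000
  securityContext:
    runAsUser: 1000
    allowPrivilegeEscalation: false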

Describe alternatives you've considered
Exploring the template files of the Helm chart to hardcode the securityContext, without success.

Additional context
None.

Environment details:

  • Kubernetes version: v1.25.6-eks-48e63af
  • mariadb-operator version: v0.0.9
  • Install method: helm
  • Install flavour: minimal

Improve observability

At the moment, the controller provides limited observability by:

  • Setting status.conditions and mapping them to printer columns, allowing the user and the developer to know the internal state of the CRDs
  • Logging reconciling errors, this is the default behaviour of kubebuilder

This could be notably improved by:

Support for `schedule` in `BackupMariaDB`

It should be possible to specify a cron expression in a new schedule field in order to take backups periodically. The operator should reconcile a CronJob instead of a Job for this case.
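Something along these lines (the schedule field is the proposed addition; its exact shape is illustrative and the cron syntax follows Kubernetes CronJobs):

apiVersion: mariadb.mmontes.io/v1alpha1
kind: BackupMariaDB
metadata:
  name: backup-nightly
spec:
  mariaDbRef:
    name: mariadb
  # Proposed field: take a backup every day at 03:00
  schedule: "0 3 * * *"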

help needed configuration

Hello,
I'm new to Kubernetes operators, and I'm not sure I understand how to configure this.
I want to configure the username, database name and user password, but I'm not sure how to edit config/*.yml, and/or if I even have to edit the samples.

Thank you in advance.

[Feature] Check `Endpoints` state to determine `Connection` health

Describe the solution you'd like
Connection healthiness can be determined by checking the Endpoints state related to the spec.ServiceName field. If no endpoints are available, no connection should be made to MariaDB and the Connection can be marked as unhealthy.

Additional context
This could potentially save a lot of connections to MariaDB and fix race conditions when creating Connections that depend on Services.

[Feature] HA via replication

Describe the solution you'd like
Ability to have multiple replicas of MariaDB that are asynchronously replicated. We could potentially support multiple topologies and multiple degrees of asynchronism:

Describe alternatives you've considered
HA via Galera, but it might be worth having a simpler alternative for HA.

Additional context

Support for db initialization scripts

Hello

It would be nice to have out-of-the-box support for loading SQL scripts during initialization of a MariaDB instance.

AFAIK you could achieve this simply with ConfigMaps, if the scripts loaded are < 1Mb, or by implementing a container that clones a git repository and mounts the scripts in a volume, but it seems quite cumbersome.

thank you

backup tries to mount mariadb-data volume which fails with ReadWriteOnce volumes

AFAICT it does not need the data dir.

It does indeed need something. Unfortunately I did not notice straight away, because the job completed successfully, but the log said:

💾 Taking physical backup
[00] 2023-01-16 12:35:50 Connecting to MariaDB server host: mariadb, user: root, password: set, port: 3306, socket: /run/mysqld/mysqld.sock
[00] 2023-01-16 12:35:50 Using server version 10.10.2-MariaDB-1:10.10.2+maria~ubu2204
mariabackup based on MariaDB server 10.10.2-MariaDB debian-linux-gnu (x86_64)
[00] 2023-01-16 12:35:50 uses posix_fadvise().
[00] 2023-01-16 12:35:50 cd to /var/lib/mysql/
[00] 2023-01-16 12:35:50 open files limit requested 0, set to 1048576
[00] 2023-01-16 12:35:50 mariabackup: using the following InnoDB configuration:
[00] 2023-01-16 12:35:50 innodb_data_home_dir =
[00] 2023-01-16 12:35:50 innodb_data_file_path = ibdata1:12M:autoextend
[00] 2023-01-16 12:35:50 innodb_log_group_home_dir = ./
[00] 2023-01-16 12:35:50 InnoDB: Using liburing
2023-01-16 12:35:50 0 [Note] InnoDB: Number of transaction pools: 1
mariabackup: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
2023-01-16 12:35:50 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
2023-01-16 12:35:50 0 [ERROR] InnoDB: File ./ib_logfile0 was not found
[00] 2023-01-16 12:35:50 Error: cannot read redo log header
🧹 Cleaning up old backups
📜 Backup history

This is not as successful as I hoped.

Support for `MARIADB_PASSWORD_HASH` and `MARIADB_ROOT_PASSWORD_HASH`

The official image of MariaDB now supports passing the passwords as a hash; check the environment variables documentation:

This implies that the user would provide a hash instead of the actual password in the Secret, so they will basically need to run this command beforehand in another MariaDB instance:

SELECT PASSWORD('thepassword')

This could be a security enhancement, since Kubernetes Secrets are just base64 encoded. However, it also requires the user to do extra work, so we should support both methods (the way we do now + hash).
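A rough sketch of the user-facing workflow under this proposal (the Secret key name and hash value are placeholders; the hash is whatever SELECT PASSWORD(...) returns):

apiVersion: v1
kind: Secret
metadata:
  name: mariadb
stringData:
  # Paste the output of: SELECT PASSWORD('thepassword');  (run in another MariaDB instance)
  root-password-hash: "*HASH_RETURNED_BY_SELECT_PASSWORD"

The operator would then pass this value to the container as MARIADB_ROOT_PASSWORD_HASH instead of MARIADB_ROOT_PASSWORD.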

Functionality to customize commands when executing mysqldumps (backups)

Is your feature request related to a problem? Please describe.
When performing backups of large databases we would like the ability to not lock tables.
Before using the operator we were creating backups using: mysqldump --single-transaction

Describe the solution you'd like

The ability to customize the logical backup options, such as locking tables, using --single-transaction, and other flags.
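One possible API shape (the args field is hypothetical and only illustrates passing extra mysqldump flags through the Backup spec):

apiVersion: mariadb.mmontes.io/v1alpha1
kind: Backup
metadata:
  name: backup
spec:
  mariaDbRef:
    name: mariadb
  # Hypothetical field: extra flags appended to the mysqldump invocation
  args:
    - "--single-transaction"
    - "--skip-lock-tables"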

Alternatives would be running a different backup process, using a standalone cron job.

Environment details:

  • Kubernetes version: v1.23.5
  • Openshift version: 4.10.0-0.okd-2022-07-09-073606
  • mariadb-operator version: v0.0.5
  • Install method: static manifest synced with argoCD
  • Install flavour: custom

Improve `Job` resiliency

Make use of an initContainer to wait until the MariaDB StatefulSet is ready in the BackupMariaDB and RestoreMariaDB jobs.
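A minimal sketch of what the generated Job's pod template could include (image and command are illustrative; any client image able to reach the MariaDB Service would do):

initContainers:
  - name: wait-for-mariadb
    image: mariadb:10.7.4
    command:
      - sh
      - -c
      - |
        # Retry until the MariaDB Service accepts connections
        until mysqladmin ping -h mariadb -P 3306 --silent; do
          echo "waiting for mariadb..."
          sleep 2
        done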

Error Creating Restore Job

1.6759603221844468e+09 ERROR Reconciler error {"controller": "restoremariadb", "controllerGroup": "database.mmontes.io", "controllerKind": "RestoreMariaDB", "restoreMariaDB": {"name":"restore","namesp: 1 error occurred:\n\t* error reconciling batch: error creating Job: jobs.batch "restore" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: ,
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234

Looks like the fix is (I'm still on the old CRD naming convention):

- apiGroups:
    - database.mmontes.io
  resources:
    - restoremariadbs/finalizers
  verbs:
    - update

Perform upgrades in `MariaDB` CRD

Some fields of the StatefulSet are immutable; for example, updating the volumeClaimTemplates will result in the following error:

# statefulsets.apps "mariadb" was not valid:
# * spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy' and 'minReadySeconds' are forbidden

As an alternative, the operator could perform a blue-green deployment by provisioning the new StatefulSet first and decommissioning the old one afterwards.

[Bug] Grant controller fails to grant/revoke privileges if database name contains dashes

Describe the bug

Given the following CRs

---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Database
metadata:
  name: ak-test
spec:
  mariaDbRef:
    name: mariadb
  characterSet: utf8
  collate: utf8_general_ci
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: User
metadata:
  name: ak-test-user
spec:
  mariaDbRef:
    name: mariadb
  passwordSecretKeyRef:
    name: ak-test-user
    key: password
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Grant
metadata:
  name: ak-test-grant
spec:
  mariaDbRef:
    name: mariadb
  privileges:
    - "ALL"
  database: "ak-test"
  table: "*"
  username: ak-test-user
  grantOption: false
---
apiVersion: v1
kind: Secret
metadata:
  name: ak-test-user
data:
  password: NjRiYzJlZWYyYzgyN2ZlY2JmYjFiYzMzOWIyMTFkZTQ=

the controller fails to add grant with the following error

error reconciling in TemplateReconciler: error creating ak-test-grant: 1 error occurred:\n\t* error granting privileges in MariaDB: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '-test.* TO 'ak-test-user'@'%' WITH GRANT OPTION'

Expected behaviour

I expect the controller to successfully assign the grant

Environment details:

  • Kubernetes version: 1.25.
  • mariadb-operator version: mmontes11/mariadb-operator:v0.0.9
  • Install method: helm
  • Install flavour: recommended

[Feature] Replication together with Galera

It might be useful to support both these cases from the MariaDB documentation:

  • Configuring a Cluster Node as a Replication Master
  • Configuring a Cluster Node as a Replication Slave

This would allow us to:

Seamlessly migrate large datasets to a new MariaDB cluster managed by the Operator.
This use case is really useful in migrations. For example, you could set up replication from the old cluster to the new cluster (where the new cluster is managed by the Operator). In this case the dataset is constantly synced, so you're able to switch seamlessly to the new cluster.

Spin up additional read-only nodes that never get promoted to master.
In our setup, developers have access to a slave MariaDB server. This ensures that they're not able to impact production workloads by doing heavy queries or queries that apply certain locks.

If you'd like to split this up in different issues, please let me know.
