
helm's People

Contributors

beryju, chandonpierre, channel-42, cubic3d, dependabot[bot], dirtycajunrice, eugene-davis, genofire, halkeye, icekom, ikogan, issy, jjgadgets, jr0dd, mareo, marvkis, mircogericke, rissson, rojikku, schmoaaaaah, sherif-fanous, valkenburg-prevue-ch, watcherwhale, wrenix


helm's Issues

Version discrepancies

The readme (see screenshot) shows appVersion 2022.6.1, but the Chart.yaml has appVersion: 2022.6.2.

Is there a reason to track the chart version independently, as opposed to just automating releases for each upstream?

Error when using PR dev images due to tag size

From goauthentik/authentik#6061

Error: 2 errors occurred:
        * Deployment.apps "authentik-worker" is invalid: spec.template.labels: Invalid value: "gh-stages-authenticator_validate-fix-regression-1687776866-22f27e4": must be no more than 63 characters
        * Deployment.apps "authentik-server" is invalid: spec.template.labels: Invalid value: "gh-stages-authenticator_validate-fix-regression-1687776866-22f27e4": must be no more than 63 characters

cc @shmanubhav
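
A common Helm-side mitigation for this class of error (a sketch only, not necessarily how this chart resolves it; the helper name is made up for illustration) is to clamp the version label to Kubernetes' 63-character limit before emitting it:

# _helpers.tpl (sketch): clamp a label value to the 63-character limit
{{- define "authentik.versionLabel" -}}
{{- default .Chart.AppVersion .Values.image.tag | trunc 63 | trimSuffix "-" -}}
{{- end }}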

GitOps Support Docs

Hey Authentik team.

I just wanted to say thank you for all your hard work, I am loving Authentik, and I am keen to see it grow!

In my particular case I want to declaratively define an outpost, an application, and a provider. The first has a documented example for manual creation in Kubernetes; the latter two do not, nor do they have any blueprint examples or breakdowns (if this is even possible).

I believe blueprints are the key to this in authentik, enabling GitOps for consistent spin-up and tear-down of clusters in testing and production. If possible I would also like to contribute a Kubernetes-specific GitOps documentation page, since in k8s it is preferable (in my opinion) to come up with all the desired flows, configuration, theming, etc. out of the gate, without any clicks. Environment variables (they work well) and sealed-secrets (my own addition) seem to cover half of this GitOps use case, and I believe blueprints are the final hurdle for me.
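
A rough sketch of what this could look like today with the chart's existing blueprints value (which mounts ConfigMaps into the worker, as discussed in another issue below); the ConfigMap name and the blueprint body are illustrative only:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-blueprint
data:
  application.yaml: |
    version: 1
    metadata:
      name: example-application
    entries:
      - model: authentik_core.application
        identifiers:
          slug: example-app
        attrs:
          name: Example App

and in the chart values:

blueprints:
  - my-app-blueprint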

Support for kubernetes secrets

Is this Helm chart configured to support using Kubernetes secrets to define the DB password and the authentik secret_key value? I'd like to check my config into a repo, but I'm concerned about inadvertently committing secrets.
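
For reference, the chart's envValueFrom value (used in several issues below) appears to cover exactly this; a minimal sketch, where the Secret names and keys are assumptions:

envValueFrom:
  AUTHENTIK_SECRET_KEY:
    secretKeyRef:
      name: authentik-secrets        # pre-created Secret, name is illustrative
      key: secret-key
  AUTHENTIK_POSTGRESQL__PASSWORD:
    secretKeyRef:
      name: authentik-postgresql
      key: postgresql-password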

Upgrading from 2023.3.x to 2023.4.x

Hello. I've been looking for some detailed information on upgrading from 2023.3.x to 2023.4.x. The HelmRelease status on attempting the upgrade is:

Helm upgrade failed: template: authentik/charts/serviceAccount/templates/service-account.yaml:4:11: executing "authentik/charts/serviceAccount/templates/service-account.yaml" at <include "authentik-remote-cluster.fullname" .>: error calling include: template: authentik/charts/serviceAccount/templates/_helpers.tpl:14:17: executing "authentik-remote-cluster.fullname" at <.Chart.IsRoot>: can't evaluate field IsRoot in type interface {}

The values.yaml files haven't changed between those versions in any way apparently related to remote clusters or service accounts. The release page for 2023.4 says there's a breaking change, (Kubernetes only) Changes to RBAC objects created by helm, but doesn't say anything about what to do if I don't care whether I keep the old behavior or not (I'm only installing proxies/outposts in the authentik namespace). In fact, the upgrading section says "This release does not introduce any new requirements", so I didn't expect to have to do anything special. I'm simply trying to upgrade from 2023.3 to 2023.4 without changing anything I don't need to change.

Do I need to disable/change some value? Do I need to change charts if I already have the ClusterRole/etc. configured from previous releases? Is there an upgrade documentation page I missed? Thanks.

Regression: Service Account Name

Somewhere between #150 and #163 a regression was introduced where the serviceAccount name created by the authentik-remote-cluster chart does not match the service account being mounted into the worker container:

# Source: authentik/charts/serviceAccount/templates/service-account.yaml
kind: ServiceAccount
metadata:
  name: release-name
# Source: authentik/templates/worker-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-authentik-worker
.......
    spec:
      serviceAccountName: release-name-authentik

One of the two naming helpers, authentik.names.fullname or authentik-remote-cluster.fullname, should change to fix this.

redis affinity does not work

I know this is not related to authentik itself, but you maintain the Helm chart.
The nodeAffinityPreset in Bitnami's Helm charts (redis, common) does not work in version 15.7.6.
The postgres chart has no such problem.
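
For reference, recent Bitnami redis charts expose the preset under master (and replica); a sketch of passing it through this chart's redis block, with the key and values purely illustrative and no claim about why 15.7.6 ignores it:

redis:
  enabled: true
  master:
    nodeAffinityPreset:
      type: hard
      key: kubernetes.io/arch
      values:
        - amd64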

`.Values.envValueFrom` is templated incorrectly since 2022.8.2

As of 2022.8.2, .Values.envValueFrom gets templated incorrectly.

If I template a chart with the following in my values.yaml:

envValueFrom:
  AUTHENTIK_POSTGRESQL__PASSWORD:
    secretKeyRef:
      name: postgresql
      key: postgres-password

The previous chart would create:

env:
  - name: AUTHENTIK_POSTGRESQL__PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgresql
        key: postgres-password

But as of 2022.8.2, the following gets templated

env:
  - name: "name"
    value: "AUTHENTIK_POSTGRESQL__PASSWORD"
  - name: "valueFrom"
    value: "secretKeyRef:\n  key: postgres-password\n  name: postgresql"

It seems like the most straightforward fix would be to revert the env changes, but I don't want to revert them if it was changed for a reason. I am working on a better fix, but much of the env functionality would need to change. Was this modified to fix another issue?
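
For comparison, a minimal template sketch that would render the expected output above (the general Helm pattern, not the chart's actual helper):

{{- range $name, $source := .Values.envValueFrom }}
- name: {{ $name }}
  valueFrom:
    {{- toYaml $source | nindent 4 }}
{{- end }}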

Install for Remote Outpost on K8S needs updating

Describe the bug
When installing a Remote Outpost (see following link) via Helm, the post-install script it generates will not work with Kubernetes >= v1.24, as K8S no longer automatically generates service account tokens, so a step needs to be added for this; in addition, this seems to break the current method of getting the CA.

To Reproduce

  1. Install the remote outpost using:

    helm install my-authentik-remote-cluster goauthentik/authentik-remote-cluster --version

  2. A script will be output that generates the YAML to import into authentik

Possible Alternative Script

I've made some modifications to the original script, which now works with v1.24:

        # your server name goes here
        KUBE_API=https://localhost:8443
        SERVICE_ACCOUNT=svr-authentik-authentik-remote-cluster

        KUBE_CA=$(kubectl config view --minify --raw --output 'jsonpath={..cluster.certificate-authority-data}')
        KUBE_TOKEN=$(kubectl create token $SERVICE_ACCOUNT )

        echo "apiVersion: v1
        kind: Config
        clusters:
        - name: default-cluster
        cluster:
            certificate-authority-data: ${KUBE_CA}
            server: ${KUBE_API}
        contexts:
        - name: default-context
        context:
            cluster: default-cluster
            namespace: default
            user: authentik-user
        current-context: default-context
        users:
        - name: authentik-user
        user:
            token: ${KUBE_TOKEN}" 

The old script probably still works with < v1.24, so you may want to output this one as well, with a message saying "If you're using v1.24+ then use this instead:"

Chart version 5.2.2 populates empty env vars, causing envFrom not to work

Chart version: 5.2.2

This is a portion of my deployment spec:

...

    spec:
      containers:
      - args:
        - server
        env:
        - name: AUTHENTIK_AVATARS
          value: gravatar
        - name: AUTHENTIK_EMAIL__FROM
        - name: AUTHENTIK_EMAIL__HOST
        - name: AUTHENTIK_EMAIL__PASSWORD
        - name: AUTHENTIK_EMAIL__PORT
          value: "587"
        - name: AUTHENTIK_EMAIL__TIMEOUT
          value: "30"
        - name: AUTHENTIK_EMAIL__USE_SSL
          value: "false"
        - name: AUTHENTIK_EMAIL__USE_TLS
          value: "false"
        - name: AUTHENTIK_EMAIL__USERNAME
        - name: AUTHENTIK_ERROR_REPORTING__ENABLED
          value: "false"
        - name: AUTHENTIK_ERROR_REPORTING__ENVIRONMENT
          value: k8s
        - name: AUTHENTIK_ERROR_REPORTING__SEND_PII
          value: "false"
        - name: AUTHENTIK_GEOIP
          value: /geoip/GeoLite2-City.mmdb
        - name: AUTHENTIK_LOG_LEVEL
          value: debug
        - name: AUTHENTIK_OUTPOSTS__CONTAINER_IMAGE_BASE
          value: ghcr.io/goauthentik/%(type)s:%(version)s
        - name: AUTHENTIK_POSTGRESQL__HOST
          value: authentik-postgres
        - name: AUTHENTIK_POSTGRESQL__NAME
          value: postgres
        - name: AUTHENTIK_POSTGRESQL__PORT
          value: "5432"
        - name: AUTHENTIK_POSTGRESQL__S3_BACKUP__ACCESS_KEY
        - name: AUTHENTIK_POSTGRESQL__S3_BACKUP__BUCKET
        - name: AUTHENTIK_POSTGRESQL__S3_BACKUP__HOST
        - name: AUTHENTIK_SECRET_KEY
        - name: AUTHENTIK_POSTGRESQL__PASSWORD
        - name: AUTHENTIK_POSTGRESQL__S3_BACKUP__INSECURE_SKIP_VERIFY
          value: "false"
        - name: AUTHENTIK_POSTGRESQL__S3_BACKUP__LOCATION
        - name: AUTHENTIK_POSTGRESQL__S3_BACKUP__REGION
        - name: AUTHENTIK_POSTGRESQL__S3_BACKUP__SECRET_KEY
        - name: AUTHENTIK_POSTGRESQL__USER
          value: postgres
        - name: AUTHENTIK_REDIS__HOST
          value: authentik-redis-master
        envFrom:
        - secretRef:
            name: authentik

My envFrom secret has important values like AUTHENTIK_SECRET_KEY, but they are getting overwritten by the blank env vars.
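
Since env entries take precedence over envFrom per variable in Kubernetes, even an empty value masks the secret. A hedged sketch of skipping unset values on the template side (illustrative only; $envMap is a hypothetical map of derived env vars, not the chart's real helper):

{{- /* sketch: only emit variables that actually have a value */}}
{{- range $name, $value := $envMap }}
{{- if $value }}
- name: {{ $name }}
  value: {{ $value | quote }}
{{- end }}
{{- end }}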

Leaked secrets in environment variables

The deployment template puts secrets like authentik.secret_key and authentik.postgresql.password into plain environment variables without using Kubernetes Secrets.

It would be better to handle secrets explicitly and not put them as env vars by default.

As a workaround I have created secrets manually and used envValueFrom to specify them.

using envFrom clears env

I am using ArgoCD to deploy and want to separate the secrets out of the Helm values file.
As such I have:

helm:
      values: |
        authentik:
        envFrom:
        - secretRef:
            name: authentik-secrets

This seems to be read, but I think it has the unfortunate effect of clearing all the config from the authentik server.

Deployment ends up looking like

spec:
      containers:
        - args:
            - server
          env: null
          envFrom:
            - secretRef:
                name: authentik-secrets

and the authentik server pod logs.

{"event": "PostgreSQL connection failed, retrying... (could not connect to server: Connection refused\n\tIs the server running on host \"localhost\" (::1) and accepting\n\tTCP/IP connections on port 5432?\ncould not connect to server: Connection refused\n\tIs the server running on host \"localhost\" (127.0.0.1) and accepting\n\tTCP/IP connections on port 5432?\n)", "level": "info", "logger": "__main__", "timestamp": 1648550029.4643047}``

So it strikes me as ending up in a very 'default' state.

Is there an example of using secrets along with the default config? I could retype it all under env, but there should be a way to specify just the key as a secret and use the sane defaults for the rest.
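
For what it's worth, envFrom is a top-level chart value (see the full values.yaml quoted in later issues), so a sketch of values that only add the secret reference might look like this, with the authentik: block either omitted entirely (keeping the chart defaults) or fully populated rather than left as an empty key:

helm:
  values: |
    envFrom:
      - secretRef:
          name: authentik-secrets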

[Easy Fix] Ingress template for the Helm chart not covering https

Describe the bug
The value of spec.rules[0].http.paths[0].backend.service.port.name should be {{ .Values.service.name }} and not http, which prevents the user from using authentik on HTTPS ports.

Not sure where in the repo the Helm charts are situated, but I used this for reference.

To Reproduce
NA

Expected behavior
NA

Screenshots
NA

Logs
NA

Version and Deployment (please complete the following information):
authentik version: 2022.3.1
Deployment: helm

Additional context
NA
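
A sketch of the suggested template change (the fullname helper name is borrowed from another issue in this list and may not match the actual template):

# ingress.yaml (sketch)
backend:
  service:
    name: {{ include "authentik.names.fullname" . }}
    port:
      name: {{ .Values.service.name }}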

Feature Request: Add support to import blueprints from Secrets

Use case: we are storing our secrets in external Vaults and use the external-secrets operator to sync them with Kubernetes.

As we store various secrets in the authentik blueprints (e.g. certs, keys), in my opinion it would make sense to allow importing blueprints from k8s Secrets as well, not just from ConfigMaps.
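
Until such an option exists, a possible workaround sketch using the chart's generic volumes / volumeMounts values (the Secret name and mount path are assumptions, and these volumes apply to both server and worker):

volumes:
  - name: blueprint-secrets
    secret:
      secretName: authentik-blueprints
volumeMounts:
  - name: blueprint-secrets
    mountPath: /blueprints/mounted/from-secret
    readOnly: true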

Add ability to add additional sidecars and init containers in the Helm chart

To support Cloud SQL connectivity in GKE, the Authentik pod needs to have an additional sidecar that does the proxying. Currently, the helm chart does not support adding additional sidecars.

A great example from the Vault helm chart:

https://github.com/hashicorp/vault-helm/blob/main/values.yaml#L326-L346

  # extraInitContainers is a list of init containers. Specified as a YAML list.
  # This is useful if you need to run a script to provision TLS certificates or
  # write out configuration files in a dynamic way.
  extraInitContainers: null
    # # This example installs a plugin pulled from github into the /usr/local/libexec/vault/oauthapp folder,
    # # which is defined in the volumes value.
    # - name: oauthapp
    #   image: "alpine"
    #   command: [sh, -c]
    #   args:
    #     - cd /tmp &&
    #       wget https://github.com/puppetlabs/vault-plugin-secrets-oauthapp/releases/download/v1.2.0/vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64.tar.xz -O oauthapp.xz &&
    #       tar -xf oauthapp.xz &&
    #       mv vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64 /usr/local/libexec/vault/oauthapp &&
    #       chmod +x /usr/local/libexec/vault/oauthapp
    #   volumeMounts:
    #     - name: plugins
    #       mountPath: /usr/local/libexec/vault

  # extraContainers is a list of sidecar containers. Specified as a YAML list.
  extraContainers: null

And inside the deployment:

{{- if .Values.server.extraInitContainers }}
initContainers:
  {{ toYaml .Values.server.extraInitContainers | nindent 8}}
{{- end }}
{{- if .Values.server.extraContainers }}
  {{ toYaml .Values.server.extraContainers | nindent 8}}
{{- end }}

PostgreSQL Password

I'm having some problems deploying this helm chart. Running version 5.2.3

Logs:
authentik-worker

{"event": "PostgreSQL connection failed, retrying... (connection to server at \"authentik-postgresql\" (10.43.143.115), port 5432 failed: FATAL:  password authentication failed for user \"authentik\"\n)", "level": "info", "logger": "__main__", "timestamp": 1647281978.839573}
{"event": "PostgreSQL connection successful", "level": "info", "logger": "__main__", "timestamp": 1647281978.8397024}
{"event": "PostgreSQL connection failed, retrying... (connection to server at \"authentik-postgresql\" (10.43.143.115), port 5432 failed: FATAL:  password authentication failed for user \"authentik\"\n)", "level": "info", "logger": "__main__", "timestamp": 1647281979.8466108}
{"event": "PostgreSQL connection successful", "level": "info", "logger": "__main__", "timestamp": 1647281979.846741}
{"event": "PostgreSQL connection failed, retrying... (connection to server at \"authentik-postgresql\" (10.43.143.115), port 5432 failed: FATAL:  password authentication failed for user \"authentik\"\n)", "level": "info", "logger": "__main__", "timestamp": 1647281980.8527002}
{"event": "PostgreSQL connection successful", "level": "info", "logger": "__main__", "timestamp": 1647281980.8528295}
// repeated endlessly

authentik-server

{"event": "PostgreSQL connection failed, retrying... (connection to server at \"authentik-postgresql\" (10.43.143.115), port 5432 failed: FATAL:  password authentication failed for user \"authentik\"\n)", "level": "info", "logger": "__main__", "timestamp": 1647281849.623484}
{"event": "PostgreSQL connection successful", "level": "info", "logger": "__main__", "timestamp": 1647281849.623618}
{"event": "PostgreSQL connection failed, retrying... (connection to server at \"authentik-postgresql\" (10.43.143.115), port 5432 failed: FATAL:  password authentication failed for user \"authentik\"\n)", "level": "info", "logger": "__main__", "timestamp": 1647281850.6303198}
{"event": "PostgreSQL connection successful", "level": "info", "logger": "__main__", "timestamp": 1647281850.630462}
{"event": "PostgreSQL connection failed, retrying... (connection to server at \"authentik-postgresql\" (10.43.143.115), port 5432 failed: FATAL:  password authentication failed for user \"authentik\"\n)", "level": "info", "logger": "__main__", "timestamp": 1647281851.6366813}
{"event": "PostgreSQL connection successful", "level": "info", "logger": "__main__", "timestamp": 1647281851.6368034}
// repeated endlessly

authentik-postgresql

2022-03-14 18:15:35.212 GMT [112] FATAL:  password authentication failed for user "authentik"
2022-03-14 18:15:35.212 GMT [112] DETAIL:  Password does not match for user "authentik".
	Connection matched pg_hba.conf line 1: "host     all             all             0.0.0.0/0               md5"

Describing the secrets, both authentik-postgresql and authentik-secrets are the same.

Here is my Helm release:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: authentik
  namespace: auth
spec:
  interval: 5m
  chart:
    spec:
      chart: authentik
      version: 5.2.3
      sourceRef:
        kind: HelmRepository
        name: authentik-charts
        namespace: flux-system
  install:
    createNamespace: true
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
  values:
    image:
      repository: ghcr.io/goauthentik/server
      tag: 2022.3.1
    authentik:
      error_reporting:
        enabled: false
      log_level: "debug"
    ingress:
      enabled: true
      ingressClassName: "traefik"
      hosts:
        - host: "authentik.${SECRET_DOMAIN}"
          paths:
            - path: "/"
              pathType: Prefix
      tls:
        - hosts:
            - "authentik.${SECRET_DOMAIN}"
          secretName: "authentik-tls"
      annotations:
        cert-manager.io/cluster-issuer: "letsencrypt-production"
        hajimari.io/enable: "true"
        hajimari.io/icon: "shield-key"
        traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
    postgresql:
      enabled: true
    redis:
      enabled: true
    prometheus:
      rules:
        create: true
      serviceMonitor:
        create: true

  valuesFrom:
    - kind: Secret
      name: authentik-secrets

and here my secrets:

apiVersion: v1
kind: Secret
metadata:
    name: authentik-secrets
    namespace: auth
type: Opaque
stringData:
    values.yaml: |
        postgresql:
          postgresqlPassword: "password" # not real
        authentik:
          postgresql:
            password: "password"
          secret_key: "karaoke" # not real

EnvValueFrom Secret EMAIL Password Not being Used

Error Message:

{"event": "Task published", "level": "info", "logger": "authentik.root.celery", "pid": 90, "task_id": "6d0b5841-aaa1-4706-918d-35a315e1eb7a", "task_name": "authentik.blueprints.v1.tasks.clear_failed_blueprints", "timestamp": "2023-02-20T20:47:51.000522"}
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/manage.py", line 31, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 440, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 402, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 448, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 96, in wrapped
    res = handle_func(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/authentik/stages/email/management/commands/test_email.py", line 37, in handle
    send_mail(message.__dict__, stage.pk)
  File "/usr/local/lib/python3.11/site-packages/celery/local.py", line 188, in __call__
    return self._get_current_object()(*a, **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/celery/app/task.py", line 392, in __call__
    return self.run(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/celery/app/autoretry.py", line 54, in run
    ret = task.retry(exc=exc, **retry_kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/celery/app/task.py", line 701, in retry
    raise_with_context(exc or Retry('Task can be retried', None))
  File "/usr/local/lib/python3.11/site-packages/celery/app/autoretry.py", line 34, in run
    return task._orig_run(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/authentik/stages/email/tasks.py", line 103, in send_mail
    raise exc
  File "/authentik/stages/email/tasks.py", line 73, in send_mail
    backend.open()
  File "/usr/local/lib/python3.11/site-packages/django/core/mail/backends/smtp.py", line 91, in open
    self.connection.login(self.username, self.password)
  File "/usr/local/lib/python3.11/smtplib.py", line 739, in login
    (code, resp) = self.auth(
                   ^^^^^^^^^^
  File "/usr/local/lib/python3.11/smtplib.py", line 642, in auth
    (code, resp) = self.docmd("AUTH", mechanism + " " + response)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/smtplib.py", line 432, in docmd
    return self.getreply()
           ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/smtplib.py", line 405, in getreply
    raise SMTPServerDisconnected("Connection unexpectedly closed")
smtplib.SMTPServerDisconnected: Connection unexpectedly closed

My configuration:

authentik:
    # -- Secret key used for cookie signing and unique user IDs,
    # don't change this after the first install
    # pulled in from secrets
    secret_key: ""
    # -- Mode for the avatars. Defaults to gravatar. Possible options 'gravatar' and 'none'
    avatars: none
    email:
      # -- SMTP Server emails are sent from, fully optional
      host: "smtp.sendgrid.net"
      port: 587
      # -- SMTP credentials, when left empty, no authentication will be done
      username: "apikey"
      # -- SMTP credentials, when left empty, no authentication will be done (can use ENVFrom)
      password: "" # <-- This is where I suspect the error is
      # -- Enable either use_tls or use_ssl, they can't be enabled at the same time.
      use_tls: true
      # -- Enable either use_tls or use_ssl, they can't be enabled at the same time.
      use_ssl: false
      # -- Connection timeout
      timeout: 30
      # -- Email from address, can either be in the format "[email protected]" or "authentik <[email protected]>"
      from: "[email protected]"
    outposts:
      # -- Template used for managed outposts. The following placeholders can be used
      # %(type)s - the type of the outpost
      # %(version)s - version of your authentik install
      # %(build_hash)s - only for beta versions, the build hash of the image
      container_image_base: ghcr.io/goauthentik/%(type)s:%(version)s
    error_reporting:
      # -- This sends anonymous usage-data, stack traces on errors and
      # performance data to sentry.beryju.org, and is fully opt-in
      enabled: false
      # -- This is a string that is sent to sentry with your error reports
      environment: "k3s"
    postgresql:
      port: 5432
    redis:
      # -- set the redis hostname to talk to
      # @default -- `{{ .Release.Name }}-redis-master`
      host: '{{ .Release.Name }}-redis-master'

  envValueFrom:
    AUTHENTIK_SECRET_KEY:
      secretKeyRef:
        name: authentik-secrets
        key: secret-key
    AUTHENTIK_POSTGRESQL__PASSWORD:
      secretKeyRef:
        name: authentik-postgresql
        key: postgresql-password
    AUTHENTIK_EMAIL__PASSWORD:
      secretKeyRef:
        name: authentik-secrets
        key: email-password

The strange thing is that when I pass the email.password value directly in plain text in the config, it works. However, when I leave email.password empty and instead pass the value from a secret using the envValueFrom construct, the environment variable still shows up correctly on the Pod, but when I attempt to send the email I get the above error message.

Can anyone with more knowledge help me out here? I am new to Authentik.

Thanks :)

Outdated postgres chart dependency

It seems that the postgres chart version in use (10.16.2) is two major versions behind the latest release (12.1.3).

Being new to Helm charts, I've spent quite some time overriding values according to the latest docs, to no avail, and then trying to figure out where the version 10 docs are, since the Bitnami repo is a monorepo with no apparent tagging consistency. These are of course my own problems of not knowing the ecosystem well enough, or issues I should address to the Bitnami org. Still, it would help newcomers to at least use the latest version of chart dependencies, with links to the respective docs on how to override common configuration.

Are there any blockers to update to the latest postgres chart?

Running authentik using ArgoCD fails with an error about a missing secret key

Scenario:

  1. A private repository exists from which Argo reads application information. Argo is installed on an Oracle VPS => Kubernetes.
    In my case I had one application which reads the other applications in a specific folder => apps
    Definition:
project: default
source:
  repoURL: 'git@github.com:<censored>'
  path: apps
  targetRevision: HEAD
  directory:
    recurse: true
    jsonnet: {}
destination:
  server: 'https://kubernetes.default.svc'
  namespace: argocd
syncPolicy:
  automated:
    prune: true
    selfHeal: true
  syncOptions:
    - CreateNamespace=true
    - Validate=true
    - PruneLast=true
    - RespectIgnoreDifferences=false
    - ApplyOutOfSyncOnly=false
    - ServerSideApply=true
    - Replace=false
  retry:
    limit: 3
    backoff:
      duration: 60s
      factor: 2
      maxDuration: 3m0s
  2. In apps I have a folder: authentik
    That folder has several files, such as:
    Chart.yaml
apiVersion: v2
name: goauthentik
description: An Umbrella Helm chart
type: application
version: 0.1.0
appVersion: "1.0"

dependencies:
- name: authentik
  version: 2023.*
  repository: https://charts.goauthentik.io/

application.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: authentik
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: authentik
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    name: ''
    namespace: authentik
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/authentik
    repoURL: 'git@github.com:<censored>'
    targetRevision: HEAD
    helm:
      valueFiles:
      - values.yaml
  sources: []
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground

values.yaml

replicas: 1
priorityClassName:
securityContext: {}

worker:
  replicas: 1
  priorityClassName:
  securityContext: {}

image:
  repository: ghcr.io/goauthentik/server
  digest: ""
  pullPolicy: IfNotPresent
  pullSecrets: []

initContainers: {}

additionalContainers: {}

ingress:
  enabled: false
  ingressClassName: "traefik-ingress"
  annotations: {
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  }
  labels: {}
  hosts:
    - host: <censored my domain>
      paths:
        - path: "/"
          pathType: Prefix
  tls: []

annotations: {}

podAnnotations: {}

authentik:
  log_level: error
  secret_key: "<censored some value>"
  geoip: /geoip/GeoLite2-City.mmdb
  email:
    host: ""
    port: 587
    username: ""
    password: ""
    use_tls: false
    use_ssl: false
    timeout: 30
    from: ""
  outposts:
    container_image_base: ghcr.io/goauthentik/%(type)s:%(version)s
  error_reporting:
    enabled: false
    environment: "k8s"
    send_pii: false
  redis:
    host: "{{ .Release.Name }}-redis-master"
    password: ""
  geoip:
    enabled: false

blueprints: []

#secret to avoid add information about DB
envFrom:
  - secretRef:
       name: authentik-secret

envValueFrom: {}

service:
  enabled: true
  type: ClusterIP
  port: 80
  name: http
  protocol: TCP
  labels: {}
  annotations: {}

volumes: []

volumeMounts: []

affinity: {}

tolerations: []

nodeSelector: {}

resources:
  server: {}
  worker: {}

autoscaling:
  server:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
  worker:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 80

livenessProbe:
  enabled: true
  httpGet:
    path: /-/health/live/
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10

startupProbe:
  enabled: true
  httpGet:
    path: /-/health/live/
    port: http
  failureThreshold: 60
  periodSeconds: 5

readinessProbe:
  enabled: true
  httpGet:
    path: /-/health/ready/
    port: http
  periodSeconds: 10

serviceAccount:
  create: true
  annotations: {}
  serviceAccountSecret:
    enabled: false

prometheus:
  serviceMonitor:
    create: false
    interval: 30s
    scrapeTimeout: 3s
    labels: {}
  rules:
    create: false
    labels: {}

postgresql:
  enabled: false

redis:
  enabled: true

sealed-psql-secret.yml

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: authentik-secret
  namespace: authentik
spec:
  encryptedData:
    AUTHENTIK_POSTGRESQL__HOST: <censored>
    AUTHENTIK_POSTGRESQL__NAME: <censored>
    AUTHENTIK_POSTGRESQL__PASSWORD: <censored>
    AUTHENTIK_POSTGRESQL__USER: <censored>
    AUTHENTIK_SECRET_KEY: <censored>
  template:
    metadata:
      creationTimestamp: null
      name: authentik-secret
      namespace: authentik
    type: stringData
---

  3. Commit the changes and check whether Argo runs authentik correctly.

Reality: no, it finished with errors on the pods:
authentik-server-
authentik-worker-



{"event": "Loaded config", "level": "debug", "logger": "authentik.lib.config", "timestamp": 1698178364.4328628, "file": "/authentik/lib/default.yml"}

{"event": "Loaded environment variables", "level": "debug", "logger": "authentik.lib.config", "timestamp": 1698178364.4334147, "count": 28}

{"event": "Starting authentik bootstrap", "level": "info", "logger": "authentik.lib.config", "timestamp": 1698178364.4335992}

{"event": "----------------------------------------------------------------------", "level": "info", "logger": "authentik.lib.config", "timestamp": 1698178364.4336236}

{"event": "Secret key missing, check https://goauthentik.io/docs/installation/.", "level": "info", "logger": "authentik.lib.config", "timestamp": 1698178364.433638}

{"event": "----------------------------------------------------------------------", "level": "info", "logger": "authentik.lib.config", "timestamp": 1698178364.4336486}

However, as you can see, I provided the secret key in the secret (and in values).
So why does running the application via ArgoCD finish with a failed status and complain about a missing secret key?

Deprecation of Pod Security Policies

  • Removed feature
    PodSecurityPolicy was deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25.

Cause

I tried to install authentik via the instructions on the website.
However, the helm upgrade --install was not successful.
My kubectl version is the latest and drops support for setting up this securityContext.
I tried downgrading my kubectl version to match my Kubernetes version, but it did not work.

Possibly caused by these values:

allowPrivilegeEscalation: false

readOnlyRootFilesystem: false

Error

helm upgrade --install authentik authentik/authentik -f values.yaml --namespace  authentik  --debug
history.go:56: [debug] getting history for release authentik
Release "authentik" does not exist. Installing it now.
install.go:200: [debug] Original chart version: ""
install.go:217: [debug] CHART PATH: /Users/evsio0n/Library/Caches/helm/repository/authentik-2023.8.3.tgz

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Deployment.spec.template.spec.securityContext): unknown field "allowPrivilegeEscalation" in io.k8s.api.core.v1.PodSecurityContext, ValidationError(Deployment.spec.template.spec.securityContext): unknown field "capabilities" in io.k8s.api.core.v1.PodSecurityContext, ValidationError(Deployment.spec.template.spec.securityContext): unknown field "privileged" in io.k8s.api.core.v1.PodSecurityContext, ValidationError(Deployment.spec.template.spec.securityContext): unknown field "readOnlyRootFilesystem" in io.k8s.api.core.v1.PodSecurityContext]
helm.go:84: [debug] error validating "": error validating data: [ValidationError(Deployment.spec.template.spec.securityContext): unknown field "allowPrivilegeEscalation" in io.k8s.api.core.v1.PodSecurityContext, ValidationError(Deployment.spec.template.spec.securityContext): unknown field "capabilities" in io.k8s.api.core.v1.PodSecurityContext, ValidationError(Deployment.spec.template.spec.securityContext): unknown field "privileged" in io.k8s.api.core.v1.PodSecurityContext, ValidationError(Deployment.spec.template.spec.securityContext): unknown field "readOnlyRootFilesystem" in io.k8s.api.core.v1.PodSecurityContext]
helm.sh/helm/v3/pkg/kube.scrubValidationError
	helm.sh/helm/v3/pkg/kube/client.go:809
helm.sh/helm/v3/pkg/kube.(*Client).Build
	helm.sh/helm/v3/pkg/kube/client.go:350
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
	helm.sh/helm/v3/pkg/action/install.go:305
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:287
main.newUpgradeCmd.func2
	helm.sh/helm/v3/cmd/helm/upgrade.go:130
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:1044
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:968
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_arm64.s:1172
unable to build kubernetes objects from release manifest
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
	helm.sh/helm/v3/pkg/action/install.go:307
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:287
main.newUpgradeCmd.func2
	helm.sh/helm/v3/cmd/helm/upgrade.go:130
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:1044
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:968
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_arm64.s:1172
  • My Kustomize Version
kubectl version --short
Client Version: v1.27.4
Kustomize Version: v5.0.1
Server Version: v1.23.10
  • My Kubernetes Version
 kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
k8s-master-1   Ready    control-plane,master   3d7h   v1.23.10
k8s-master-2   Ready    control-plane,master   3d7h   v1.23.10
k8s-master-3   Ready    control-plane,master   3d7h   v1.23.10
k8s-worker-1   Ready    worker                 3d7h   v1.23.10
k8s-worker-2   Ready    worker                 3d7h   v1.23.10
k8s-worker-3   Ready    worker                 14h    v1.23.10
  • After downgrading kubectl
❯ helm upgrade --install authentik authentik/authentik -f values.yaml --namespace  authentik  --reset-values
Release "authentik" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Deployment.spec.template.spec.securityContext): unknown field "allowPrivilegeEscalation" in io.k8s.api.core.v1.PodSecurityContext, ValidationError(Deployment.spec.template.spec.securityContext): unknown field "capabilities" in io.k8s.api.core.v1.PodSecurityContext, ValidationError(Deployment.spec.template.spec.securityContext): unknown field "privileged" in io.k8s.api.core.v1.PodSecurityContext, ValidationError(Deployment.spec.template.spec.securityContext): unknown field "readOnlyRootFilesystem" in io.k8s.api.core.v1.PodSecurityContext]
โฏ helm search repo authentik
NAME                              	CHART VERSION	APP VERSION	DESCRIPTION
authentik/authentik               	2023.8.3     	2023.8.3   	authentik is an open-source Identity Provider f...
authentik/authentik-remote-cluster	1.2.2        	2023.6.0   	RBAC required for a remote cluster to be connec...
authentik/postgresql              	12.8.2       	15.4.0     	PostgreSQL (Postgres) is an open source object-...
authentik/redis                   	15.7.6       	6.2.6      	Open source, advanced key-value store. It is of...
โฏ helm get values -n authentik
โฏ kubectl version --short
Client Version: v1.23.10
Server Version: v1.23.10
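
For context, the rejected fields are container-level SecurityContext fields being rendered at the pod level; a plain Kubernetes sketch of where each field is valid (values are illustrative):

spec:
  # pod-level PodSecurityContext: these fields are valid here
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
  containers:
    - name: server
      # container-level SecurityContext: allowPrivilegeEscalation, capabilities,
      # privileged and readOnlyRootFilesystem belong here instead
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: false
        privileged: false
        capabilities:
          drop: ["ALL"]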

Service Account and Volume Mounts Post-Deployment Split

I ran into a slight issue that has existed since the Helm chart split the worker and server deployments.
The worker now happily generates a service account (with a non-configurable name), but the server deployment uses default. Combined with the volume configuration applying to both deployments, if you are using the CSI secret store to load secrets from HashiCorp Vault, there are now multiple SAs that need access where there used to be just one.

The most straightforward way to fix this would probably be to have the worker and server use the same service account once it is created, but for maximum-security deployments it might make more sense to make both the SAs and the volumes + volumeMounts individually configurable.

Env Var Override

I've got a question, possibly a behavior question, about setting environment variables for the deployment:

env:
  AUTHENTIK_POSTGRESQL__HOSTNAME: [redacted]
  AUTHENTIK_POSTGRESQL__NAME: [redacted]

However, when I create a release like this, the deployment ends up failing because it can't reach postgres, and when I look at the deployment the AUTHENTIK_POSTGRESQL__HOSTNAME and AUTHENTIK_POSTGRESQL__NAME variables are defined twice: once with my values, and once with seemingly default values. From the deployment template it looks like that's happening because of

{{- include "authentik.env" (dict "root" $ "values" $.Values.authentik) | indent 12 }}
So I guess what I'm wondering is whether it's possible to merge those two sources of env vars rather than concatenating them.
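
A rough sketch of that merge idea in Helm template terms (the defaults map and names below are hypothetical, not the chart's real authentik.env helper):

{{- /* sketch: build one map so user-supplied env wins, then emit each name only once */}}
{{- $defaults := dict "AUTHENTIK_POSTGRESQL__HOST" (printf "%s-postgresql" .Release.Name) "AUTHENTIK_POSTGRESQL__NAME" "authentik" }}
{{- $merged := mergeOverwrite $defaults .Values.env }}
{{- range $name, $value := $merged }}
- name: {{ $name }}
  value: {{ $value | quote }}
{{- end }}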

Multi-architecture support, arm64

Currently the Helm chart doesn't support arm64, as the way you're supposed to specify which redis and postgresql images to use is not documented.

$ k logs -n security authentik-redis-master-0
standard_init_linux.go:228: exec user process caused: exec format error
$ k logs -n security authentik-postgresql-0
standard_init_linux.go:228: exec user process caused: exec format error
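
The Bitnami subcharts do expose image overrides, though which repository and tag actually provide arm64 builds for the pinned chart versions is left open here; the values below are placeholders, not tested images:

postgresql:
  image:
    registry: docker.io
    repository: some-arm64-postgres-image   # placeholder, not a tested image
    tag: "11"                               # placeholder
redis:
  image:
    registry: docker.io
    repository: some-arm64-redis-image      # placeholder, not a tested image
    tag: "6.2"                              # placeholder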

Specify the minimum system resource requirements in the chart

If the node the authentik worker ends up assigned to happens to have a less-than-optimal amount of RAM, k8s has no way to know not to schedule it there unless resource requests are specified, so what happens is a system OOM.

The node is a Raspberry Pi 4 with 2GB of RAM, even though there are others that have more.

Events:
  Type     Reason                   Age                   From     Message
  ----     ------                   ----                  ----     -------
  Warning  SystemOOM                33m                   kubelet  System OOM encountered, victim process: celery, pid: 3602044
  Warning  SystemOOM                30m                   kubelet  System OOM encountered, victim process: celery, pid: 3603940
  Warning  SystemOOM                21m                   kubelet  System OOM encountered, victim process: celery, pid: 3612872
  Normal   NodeHasSufficientPID     21m (x40 over 16h)    kubelet  Node k8s-worker3 status is now: NodeHasSufficientPID
  Normal   NodeReady                21m (x40 over 16h)    kubelet  Node k8s-worker3 status is now: NodeReady
  Warning  SystemOOM                6m28s                 kubelet  System OOM encountered, victim process: celery, pid: 3632477
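
The chart's values already split resources into server and worker blocks (see the values.yaml quoted elsewhere in this list); a sketch with purely illustrative numbers:

resources:
  server:
    requests:
      cpu: 100m
      memory: 512Mi
  worker:
    requests:
      cpu: 100m
      memory: 512Mi
    limits:
      memory: 1Gi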

Option to disable default Blueprints

When deploying authentik via Helm, there is an option to specify a list of blueprints with ConfigMaps that are mounted into the Worker.

The specified blueprints are mounted at /blueprints/mounted/{{ $name }} as you can see here.

So there is no way to overwrite the default blueprints in /blueprints/default, /blueprints/example and /blueprints/system here.

The developer documentation says here that you can disable the default blueprints by mounting other (e.g. empty) files or directories in the corresponding paths.

This is not possible with this chart. I don't want many of the default objects (e.g. the default flows) in my deployment, because I configure my objects myself (and differently). As it stands, the default objects have to be deleted manually after deployment, which is inconvenient.

It would be nice if there was a way to disable default Blueprints in this Chart.
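
A possible workaround sketch in the meantime, following the developer docs' suggestion, using the chart's generic volumes / volumeMounts values to shadow a default directory with an empty one (whether the worker tolerates an empty directory there is untested here):

volumes:
  - name: empty-blueprints
    emptyDir: {}
volumeMounts:
  - name: empty-blueprints
    mountPath: /blueprints/default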

Redis: Error 111 connecting to None:6379. Connection refused.

Hi,

I wanted to give authentik a shot and ran into this issue.

authentik-server:

{"event": "Redis Connection failed, retrying... (Error 111 connecting to None:6379. Connection refused.)", "level": "info", "logger": "__main__", "timestamp": 1637770878.299152}

So I guess the redis hostname is not set correctly. But I checked the env and it seems correct: AUTHENTIK_REDIS__HOST: authentik-redis-master

Here are the rest of my values:

ingress:
  enabled: true
  ingressClassName: ""
  annotations:
    cert-manager.io/cluster-issuer: vault-issuer
  hosts:
    - host: auth.dc
      paths:
        - path: "/"
          pathType: Prefix
  tls:
    - hosts: ['auth.dc']
      secretName: "auth-tls"

authentik:
  # -- Log level for server and worker
  log_level: info
  # -- Secret key used for cookie signing and unique user IDs,
  # don't change this after the first install
  secret_key: ""
  # -- Path for the geoip database. If the file doesn't exist, GeoIP features are disabled.
  geoip: /geoip/GeoLite2-City.mmdb
  # -- Mode for the avatars. Defaults to gravatar. Possible options 'gravatar' and 'none'
  avatars: none

  postgresql:
    # -- set the postgresql hostname to talk to
    # if unset and .Values.postgresql.enabled == true, will generate the default
    # @default -- `{{ .Release.Name }}-postgresql`
    host: '{{ .Release.Name }}-postgresql'
    # -- postgresql Database name
    # @default -- `authentik`
    name: "authentik"
    # -- postgresql Username
    # @default -- `authentik`
    user: "authentik"
    port: 5432
  redis:
    # -- set the redis hostname to talk to
    # @default -- `{{ .Release.Name }}-redis-master`
    host: '{{ .Release.Name }}-redis-master'


envValueFrom:
  AUTHENTIK_SECRET_KEY:
    secretKeyRef:
      key: secret_key
      name: authentik
  AUTHENTIK_POSTGRESQL__PASSWORD:
    secretKeyRef:
      key: postgresql-password
      name: postgres
  AUTHENTIK_REDIS__PASSWORD:
    secretKeyRef:
      key: password
      name: redis


postgresql:
  # -- enable the bundled bitnami postgresql chart
  enabled: true
  postgresqlUsername: "authentik"
  # postgresqlPassword: ""
  postgresqlDatabase: "authentik"
  persistence:
    enabled: true
  #  storageClass:
    accessModes:
      - ReadWriteOnce
  existingSecret: postgres

redis:
  # -- enable the bundled bitnami redis chart
  enabled: true
  architecture: standalone
  auth:
    enabled: true
    existingSecret: redis
    existingSecretPasswordKey: password

Chart Version: 4.0.3

Server and worker are using same labels and wrong selector on Service

Currently, on 1.0.0-RC5, the server and worker deployments are templated by a single template, resulting in the same labels:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-authentik-worker
  labels:
    helm.sh/chart: authentik-1.0.0-RC5
    app.kubernetes.io/name: authentik
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "2021.4.5"
    app.kubernetes.io/managed-by: Helm
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-authentik-server
  labels:
    helm.sh/chart: authentik-1.0.0-RC5
    app.kubernetes.io/name: authentik
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "2021.4.5"
    app.kubernetes.io/managed-by: Helm

which makes it impossible for the Service to select only the server (it matches both):

apiVersion: v1
kind: Service
...
  labels:
    helm.sh/chart: authentik-1.0.0-RC5
    app.kubernetes.io/name: authentik
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "2021.4.5"
    app.kubernetes.io/managed-by: Helm

Services should also not select by Helm-related labels; see https://github.com/k8s-at-home/library-charts/blob/main/charts/stable/common/templates/lib/chart/_labels.tpl#L16
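
A sketch of the usual fix, adding a component label so the Service can select only the server (the label key is chosen here for illustration):

# server Deployment labels and Service selector (sketch)
app.kubernetes.io/name: authentik
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/component: server

# worker Deployment labels (sketch)
app.kubernetes.io/name: authentik
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/component: worker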

Add License file

Chart.yaml says it's GPL-3.0-only, but it's good practice to have a top-level LICENSE file to make things clear.

Running authentik using ArgoCD raises a validation error - geoip

Scenario:

  1. A private repository exists from which Argo reads application information. Argo is installed on an Oracle VPS => Kubernetes.
    In my case I had one application which reads the other applications in a specific folder => apps
    Definition:
project: default
source:
  repoURL: 'git@github.com:<censored>'
  path: apps
  targetRevision: HEAD
  directory:
    recurse: true
    jsonnet: {}
destination:
  server: 'https://kubernetes.default.svc'
  namespace: argocd
syncPolicy:
  automated:
    prune: true
    selfHeal: true
  syncOptions:
    - CreateNamespace=true
    - Validate=true
    - PruneLast=true
    - RespectIgnoreDifferences=false
    - ApplyOutOfSyncOnly=false
    - ServerSideApply=true
    - Replace=false
  retry:
    limit: 3
    backoff:
      duration: 60s
      factor: 2
      maxDuration: 3m0s

  2. In apps I have a folder: authentik
    That folder has several files, such as:
    Chart.yaml
apiVersion: v2
name: goauthentik
description: An Umbrella Helm chart
type: application
version: 0.1.0
appVersion: "1.0"

dependencies:
- name: authentik
  version: 2023.*
  repository: https://charts.goauthentik.io/

application.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: authentik
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: authentik
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    name: ''
    namespace: authentik
    server: 'https://kubernetes.default.svc'
  source:
    path: apps/authentik
    repoURL: 'git@github.com:<censored>'
    targetRevision: HEAD
    helm:
      valueFiles:
      - values.yaml
  sources: []
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground

values.yaml

replicas: 1
priorityClassName:
securityContext: {}

worker:
  replicas: 1
  priorityClassName:
  securityContext: {}

image:
  repository: ghcr.io/goauthentik/server
  digest: ""
  pullPolicy: IfNotPresent
  pullSecrets: []

initContainers: {}

additionalContainers: {}

ingress:
  enabled: false
  ingressClassName: "traefik-ingress"
  annotations: {
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  }
  labels: {}
  hosts:
    - host: <censored my domain>
      paths:
        - path: "/"
          pathType: Prefix
  tls: []

annotations: {}

podAnnotations: {}

authentik:
  log_level: error
  secret_key: "<censored some value>"
  geoip: /geoip/GeoLite2-City.mmdb
  email:
    host: ""
    port: 587
    username: ""
    password: ""
    use_tls: false
    use_ssl: false
    timeout: 30
    from: ""
  outposts:
    container_image_base: ghcr.io/goauthentik/%(type)s:%(version)s
  error_reporting:
    enabled: false
    environment: "k8s"
    send_pii: false
  redis:
    host: "{{ .Release.Name }}-redis-master"
    password: ""

blueprints: []

#secret to avoid add information about DB
envFrom:
  - secretRef:
       name: authentik-secret

envValueFrom: {}

service:
  enabled: true
  type: ClusterIP
  port: 80
  name: http
  protocol: TCP
  labels: {}
  annotations: {}

volumes: []

volumeMounts: []

affinity: {}

tolerations: []

nodeSelector: {}

resources:
  server: {}
  worker: {}

autoscaling:
  server:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
  worker:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 80

livenessProbe:
  enabled: true
  httpGet:
    path: /-/health/live/
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10

startupProbe:
  enabled: true
  httpGet:
    path: /-/health/live/
    port: http
  failureThreshold: 60
  periodSeconds: 5

readinessProbe:
  enabled: true
  httpGet:
    path: /-/health/ready/
    port: http
  periodSeconds: 10

serviceAccount:
  create: true
  annotations: {}
  serviceAccountSecret:
    enabled: false

geoip:
  enabled: false

prometheus:
  serviceMonitor:
    create: false
    interval: 30s
    scrapeTimeout: 3s
    labels: {}
  rules:
    create: false
    labels: {}

postgresql:
  enabled: false

redis:
  enabled: true

Be aware that I added geoip to the main (top-level) object, similar to what is described in these places:
https://artifacthub.io/packages/helm/goauthentik/authentik
https://github.com/goauthentik/helm/blob/main/charts/authentik/values.yaml

  3. Commit the changes and check whether Argo runs authentik correctly.

Reality: no, it finished with an error:

ComparisonError: Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = Manifest generation error (cached): `helm template . --name-template authentik --namespace authentik --kube-version 1.27 --values <path to cached source>/apps/authentik/values.yaml --api-versions admissionregistration.k8s.io/v1 --api-versions admissionregistration.k8s.io/v1/MutatingWebhookConfiguration --api-versions admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration --api-versions apiextensions.k8s.io/v1 --api-versions apiextensions.k8s.io/v1/CustomResourceDefinition --api-versions apiregistration.k8s.io/v1 --api-versions apiregistration.k8s.io/v1/APIService --api-versions apps/v1 --api-versions apps/v1/ControllerRevision --api-versions apps/v1/DaemonSet --api-versions apps/v1/Deployment --api-versions apps/v1/ReplicaSet --api-versions apps/v1/StatefulSet --api-versions argoproj.io/v1alpha1 --api-versions argoproj.io/v1alpha1/AppProject --api-versions argoproj.io/v1alpha1/Application --api-versions argoproj.io/v1alpha1/ApplicationSet --api-versions autoscaling/v1 --api-versions autoscaling/v1/HorizontalPodAutoscaler --api-versions autoscaling/v2 --api-versions autoscaling/v2/HorizontalPodAutoscaler --api-versions batch/v1 --api-versions batch/v1/CronJob --api-versions batch/v1/Job --api-versions bitnami.com/v1alpha1 --api-versions bitnami.com/v1alpha1/SealedSecret --api-versions certificates.k8s.io/v1 --api-versions certificates.k8s.io/v1/CertificateSigningRequest --api-versions coordination.k8s.io/v1 --api-versions coordination.k8s.io/v1/Lease --api-versions discovery.k8s.io/v1 --api-versions discovery.k8s.io/v1/EndpointSlice --api-versions events.k8s.io/v1 --api-versions events.k8s.io/v1/Event --api-versions flowcontrol.apiserver.k8s.io/v1beta2 --api-versions flowcontrol.apiserver.k8s.io/v1beta2/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta2/PriorityLevelConfiguration --api-versions flowcontrol.apiserver.k8s.io/v1beta3 --api-versions flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration --api-versions helm.cattle.io/v1 --api-versions helm.cattle.io/v1/HelmChart --api-versions helm.cattle.io/v1/HelmChartConfig --api-versions k3s.cattle.io/v1 --api-versions k3s.cattle.io/v1/Addon --api-versions longhorn.io/v1beta1 --api-versions longhorn.io/v1beta1/BackingImage --api-versions longhorn.io/v1beta1/BackingImageDataSource --api-versions longhorn.io/v1beta1/BackingImageManager --api-versions longhorn.io/v1beta1/Backup --api-versions longhorn.io/v1beta1/BackupTarget --api-versions longhorn.io/v1beta1/BackupVolume --api-versions longhorn.io/v1beta1/Engine --api-versions longhorn.io/v1beta1/EngineImage --api-versions longhorn.io/v1beta1/InstanceManager --api-versions longhorn.io/v1beta1/Node --api-versions longhorn.io/v1beta1/RecurringJob --api-versions longhorn.io/v1beta1/Replica --api-versions longhorn.io/v1beta1/Setting --api-versions longhorn.io/v1beta1/ShareManager --api-versions longhorn.io/v1beta1/Volume --api-versions longhorn.io/v1beta2 --api-versions longhorn.io/v1beta2/BackingImage --api-versions longhorn.io/v1beta2/BackingImageDataSource --api-versions longhorn.io/v1beta2/BackingImageManager --api-versions longhorn.io/v1beta2/Backup --api-versions longhorn.io/v1beta2/BackupTarget --api-versions longhorn.io/v1beta2/BackupVolume --api-versions longhorn.io/v1beta2/Engine --api-versions longhorn.io/v1beta2/EngineImage --api-versions 
longhorn.io/v1beta2/InstanceManager --api-versions longhorn.io/v1beta2/Node --api-versions longhorn.io/v1beta2/Orphan --api-versions longhorn.io/v1beta2/RecurringJob --api-versions longhorn.io/v1beta2/Replica --api-versions longhorn.io/v1beta2/Setting --api-versions longhorn.io/v1beta2/ShareManager --api-versions longhorn.io/v1beta2/Snapshot --api-versions longhorn.io/v1beta2/SupportBundle --api-versions longhorn.io/v1beta2/SystemBackup --api-versions longhorn.io/v1beta2/SystemRestore --api-versions longhorn.io/v1beta2/Volume --api-versions longhorn.io/v1beta2/VolumeAttachment --api-versions metallb.io/v1alpha1 --api-versions metallb.io/v1alpha1/AddressPool --api-versions metallb.io/v1beta1 --api-versions metallb.io/v1beta1/AddressPool --api-versions metallb.io/v1beta1/BFDProfile --api-versions metallb.io/v1beta1/BGPAdvertisement --api-versions metallb.io/v1beta1/BGPPeer --api-versions metallb.io/v1beta1/Community --api-versions metallb.io/v1beta1/IPAddressPool --api-versions metallb.io/v1beta1/L2Advertisement --api-versions metallb.io/v1beta2 --api-versions metallb.io/v1beta2/BGPPeer --api-versions networking.k8s.io/v1 --api-versions networking.k8s.io/v1/Ingress --api-versions networking.k8s.io/v1/IngressClass --api-versions networking.k8s.io/v1/NetworkPolicy --api-versions node.k8s.io/v1 --api-versions node.k8s.io/v1/RuntimeClass --api-versions policy/v1 --api-versions policy/v1/PodDisruptionBudget --api-versions postgresql.cnpg.io/v1 --api-versions postgresql.cnpg.io/v1/Backup --api-versions postgresql.cnpg.io/v1/Cluster --api-versions postgresql.cnpg.io/v1/Pooler --api-versions postgresql.cnpg.io/v1/ScheduledBackup --api-versions rbac.authorization.k8s.io/v1 --api-versions rbac.authorization.k8s.io/v1/ClusterRole --api-versions rbac.authorization.k8s.io/v1/ClusterRoleBinding --api-versions rbac.authorization.k8s.io/v1/Role --api-versions rbac.authorization.k8s.io/v1/RoleBinding --api-versions scheduling.k8s.io/v1 --api-versions scheduling.k8s.io/v1/PriorityClass --api-versions storage.k8s.io/v1 --api-versions storage.k8s.io/v1/CSIDriver --api-versions storage.k8s.io/v1/CSINode --api-versions storage.k8s.io/v1/CSIStorageCapacity --api-versions storage.k8s.io/v1/StorageClass --api-versions storage.k8s.io/v1/VolumeAttachment --api-versions traefik.containo.us/v1alpha1 --api-versions traefik.containo.us/v1alpha1/IngressRoute --api-versions traefik.containo.us/v1alpha1/IngressRouteTCP --api-versions traefik.containo.us/v1alpha1/IngressRouteUDP --api-versions traefik.containo.us/v1alpha1/Middleware --api-versions traefik.containo.us/v1alpha1/MiddlewareTCP --api-versions traefik.containo.us/v1alpha1/ServersTransport --api-versions traefik.containo.us/v1alpha1/TLSOption --api-versions traefik.containo.us/v1alpha1/TLSStore --api-versions traefik.containo.us/v1alpha1/TraefikService --api-versions traefik.io/v1alpha1 --api-versions traefik.io/v1alpha1/IngressRoute --api-versions traefik.io/v1alpha1/IngressRouteTCP --api-versions traefik.io/v1alpha1/IngressRouteUDP --api-versions traefik.io/v1alpha1/Middleware --api-versions traefik.io/v1alpha1/MiddlewareTCP --api-versions traefik.io/v1alpha1/ServersTransport --api-versions traefik.io/v1alpha1/ServersTransportTCP --api-versions traefik.io/v1alpha1/TLSOption --api-versions traefik.io/v1alpha1/TLSStore --api-versions traefik.io/v1alpha1/TraefikService --api-versions v1 --api-versions v1/ConfigMap --api-versions v1/Endpoints --api-versions v1/Event --api-versions v1/LimitRange --api-versions v1/Namespace --api-versions v1/Node --api-versions 
v1/PersistentVolume --api-versions v1/PersistentVolumeClaim --api-versions v1/Pod --api-versions v1/PodTemplate --api-versions v1/ReplicationController --api-versions v1/ResourceQuota --api-versions v1/Secret --api-versions v1/Service --api-versions v1/ServiceAccount --include-crds` failed exit status 1: Error: template: goauthentik/charts/authentik/templates/worker-deployment.yaml:28:43: executing "goauthentik/charts/authentik/templates/worker-deployment.yaml" at <include (print $.Template.BasePath "/secret.yaml") .>: error calling include: template: goauthentik/charts/authentik/templates/secret.yaml:13:10: executing "goauthentik/charts/authentik/templates/secret.yaml" at <$.Values.geoip.enabled>: can't evaluate field enabled in type interface {} Use --debug flag to render out invalid YAML

The most important part of that output seems to be:

<include (print $.Template.BasePath "/secret.yaml") .>: error calling include: template: goauthentik/charts/authentik/templates/secret.yaml:13:10: executing "goauthentik/charts/authentik/templates/secret.yaml" at <$.Values.geoip.enabled>: can't evaluate field enabled in type interface {} Use --debug flag to render out invalid YAML

As far as I understand, at this point:
https://github.com/goauthentik/helm/blob/9faeb471a4be825b617443ffb3b2c1f9f8f14f51/charts/authentik/templates/secret.yaml#L12C1-L12C1
the chart takes values from Values.authentik and includes them in secret.yaml.
On line 13 it then reads .Values.geoip, but geoip sits outside the authentik object in my values file, and that is why the error is raised?
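
A hedged guess for anyone hitting this after the upgrade: the 2023.4+ templates read a geoip block at the top level of the chart's values (next to authentik, not inside it), so if your values file, or the values passed through a wrapper chart, ends up with geoip missing or set to null, the template can fail exactly like this. The block the template appears to expect looks roughly like the following (key names taken from my reading of the chart's values.yaml, adjust as needed):

geoip:
  enabled: false      # set to true only if you supply MaxMind credentials
  accountId: ""
  licenseKey: ""
  updateInterval: 8   # hours between database updates (assumed default)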

[Feature Request] add additionalObjects

Hello! I have a suggestion for the authentik helm chart: define an additionalObjects: [] value for creating extra manifests via values (for example an extra PV, PVC, ConfigMap, or any other Kubernetes object). Thanks!

A concrete use case: creating the PV/PVC for authentik-media in the same chart, without depending on manually created objects or a separate chart. It would also cover Issue https://github.com/goauthentik/authentik/issues/5205#issuecomment-1573388459 and any other extra Kubernetes object we need to create.

Example:

values.yaml:

# -- Additional API resources to deploy
additionalObjects: []

additionalObjects.yaml:

{{- range .Values.additionalObjects }}
---
{{ toYaml . }}
{{- end }}
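
To make the proposal concrete, a hypothetical values file using it could look like this (the PVC is purely illustrative, names and sizes are placeholders):

additionalObjects:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: authentik-media        # illustrative name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi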

GitOps (Flux) Authentik "secret_key" encryption at rest.

I suspect most people deploying in Kubernetes will be using GitOps, something like Flux or Argo. I would propose that consideration be given to populating authentik->secret_key and the email details from a Kubernetes secret, which can then be encrypted in the cluster's git repository.

If this already exists, feel free to correct me, but I don't think exposing any keys in git is a good idea, particularly for something designed to act as an SSO. It would be great if these were instead pulled from a secret, so they can at least be encrypted in whatever git repo they end up in. As it stands, those of us using Flux (and I suspect ArgoCD) are forced to expose this in git :(

EDIT: Did some more digging in values.yaml; this looks like it might be what I'm after, although there still doesn't seem to be anything covering the email config:

envValueFrom: {}
#  AUTHENTIK_VAR_NAME:
#    secretKeyRef:
#      key: password
#      name: my-secret
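
For anyone landing here: the envValueFrom block above should already cover the secret key, and most likely the SMTP password too, assuming you create the secret yourself. A hedged example follows; the secret name and keys are placeholders, and you would leave authentik.secret_key unset so the chart does not also render it in plain text (behaviour may vary by chart version):

envValueFrom:
  AUTHENTIK_SECRET_KEY:
    secretKeyRef:
      name: authentik-secrets      # placeholder secret, created outside the chart
      key: secret-key
  AUTHENTIK_EMAIL__PASSWORD:
    secretKeyRef:
      name: authentik-secrets
      key: smtp-password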

HTTPS with LoadBalancer?

I'm trying to set the service type to LoadBalancer, but with this setup the HTTPS port isn't mapped, and I don't see any way to reach the ingress other than over HTTP. What am I missing?
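
Not a maintainer answer, but one workaround is to point an extra LoadBalancer Service at the server pods' https container port yourself. The sketch below is hedged: it assumes a release named authentik and the labels this chart applies to the server pods, and forwards 443 to the container's 9443:

apiVersion: v1
kind: Service
metadata:
  name: authentik-https            # illustrative name
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: authentik
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/component: server
  ports:
    - name: https
      port: 443
      targetPort: 9443
      protocol: TCP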

Why does Authentik need a ClusterRole?

Hi!

Thanks for providing this nice chart!

I was working on setting up Authentik in a multi-tenant environment and realized that this chart grants the "authentik" user A LOT of privileges in the cluster.
Specifically, the included ClusterRole grants permissions to read/modify/delete ALL secrets/deployments/ingresses in the entire cluster:
https://github.com/goauthentik/helm/blob/main/charts/authentik/templates/rbac.yaml#L30

Could someone explain why this is necessary?
As far as I can tell, Authentik only needs to be able to manage these in its own namespace (for managing the outpost deployments).
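
For comparison, a namespace-scoped variant of most of those rules would look roughly like the sketch below. This is not something the chart offers today as far as I can tell, it just illustrates the question; note that the customresourcedefinitions rule is cluster-scoped by nature, so at least that part cannot move into a Role.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: authentik
  namespace: authentik
rules:
  - apiGroups: [""]
    resources: [secrets, services, configmaps]
    verbs: [get, create, delete, list, patch]
  - apiGroups: [extensions, apps]
    resources: [deployments]
    verbs: [get, create, delete, list, patch]
  - apiGroups: [extensions, networking.k8s.io]
    resources: [ingresses]
    verbs: [get, create, delete, list, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: authentik
  namespace: authentik
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: authentik
subjects:
  - kind: ServiceAccount
    name: authentik
    namespace: authentik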

Invalid Postgres Password

Same postgres errors seen in #61.

I am not sure why the chart is not deploying with matching passwords. I have tried with and without persistent storage enabled, fully deleting any PVs and PVCs along the way.

Here is my helm release (all included secrets are randomly generated and not important)

---
# Source: authentik/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: authentik
  namespace: authentik
---
# Source: authentik/charts/postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: authentik-postgresql
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-10.16.2
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/managed-by: Helm
  namespace: authentik
type: Opaque
data:
  postgresql-postgres-password: "T1lqbFN0d1p0UA=="
  postgresql-password: "NWFHS0JDVnU2Vw=="
---
# Source: authentik/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: authentik
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
      - services
      - configmaps
    verbs:
      - get
      - create
      - delete
      - list
      - patch
  - apiGroups:
      - extensions
      - apps
    resources:
      - deployments
    verbs:
      - get
      - create
      - delete
      - list
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - create
      - delete
      - list
      - patch
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
    verbs:
      - get
      - create
      - delete
      - list
      - patch
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
    verbs:
      - get
      - create
      - delete
      - list
      - patch
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - list
---
# Source: authentik/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: authentik
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: authentik
subjects:
  - kind: ServiceAccount
    name: authentik
    namespace: authentik
---
# Source: authentik/charts/postgresql/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: authentik-postgresql-headless
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-10.16.2
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/managed-by: Helm
    # Use this annotation in addition to the actual publishNotReadyAddresses
    # field below because the annotation will stop being respected soon but the
    # field is broken in some versions of Kubernetes:
    # https://github.com/kubernetes/kubernetes/issues/58662
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  namespace: authentik
spec:
  type: ClusterIP
  clusterIP: None
  # We want all pods in the StatefulSet to have their addresses published for
  # the sake of the other Postgresql pods even before they're ready, since they
  # have to be able to talk to each other in order to become ready.
  publishNotReadyAddresses: true
  ports:
    - name: tcp-postgresql
      port: 5432
      targetPort: tcp-postgresql
  selector:
    app.kubernetes.io/name: postgresql
    app.kubernetes.io/instance: authentik
---
# Source: authentik/charts/postgresql/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: authentik-postgresql
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-10.16.2
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/managed-by: Helm
  annotations:
  namespace: authentik
spec:
  type: ClusterIP
  ports:
    - name: tcp-postgresql
      port: 5432
      targetPort: tcp-postgresql
  selector:
    app.kubernetes.io/name: postgresql
    app.kubernetes.io/instance: authentik
    role: primary
---
# Source: authentik/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: authentik
  labels:
    helm.sh/chart: authentik-2022.7.3
    app.kubernetes.io/name: authentik
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/version: "2022.7.2"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 9100
      name: http-metrics
      protocol: TCP
      targetPort: http-metrics
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: authentik
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/component: "server"
---
# Source: authentik/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authentik-server
  labels:
    helm.sh/chart: authentik-2022.7.3
    app.kubernetes.io/name: authentik
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/version: "2022.7.2"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: "server"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: authentik
      app.kubernetes.io/instance: authentik
      app.kubernetes.io/component: "server"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: authentik
        app.kubernetes.io/instance: authentik
        app.kubernetes.io/component: "server"
        app.kubernetes.io/version: "2022.7.2"
    spec:
      serviceAccountName: authentik
      enableServiceLinks: true
      priorityClassName:
      securityContext:
        {}
      containers:
        - name: authentik
          image: "ghcr.io/goauthentik/server:2022.7.2"
          imagePullPolicy: "IfNotPresent"
          args: ["server"]
          env:
            - name: AUTHENTIK_AVATARS
              value: "gravatar"
            - name: AUTHENTIK_EMAIL__PORT
              value: "587"
            - name: AUTHENTIK_EMAIL__TIMEOUT
              value: "30"
            - name: AUTHENTIK_EMAIL__USE_SSL
              value: "false"
            - name: AUTHENTIK_EMAIL__USE_TLS
              value: "false"
            - name: AUTHENTIK_ERROR_REPORTING__ENABLED
              value: "true"
            - name: AUTHENTIK_ERROR_REPORTING__ENVIRONMENT
              value: "k8s"
            - name: AUTHENTIK_ERROR_REPORTING__SEND_PII
              value: "false"
            - name: AUTHENTIK_GEOIP
              value: "/geoip/GeoLite2-City.mmdb"
            - name: AUTHENTIK_LOG_LEVEL
              value: "info"
            - name: AUTHENTIK_OUTPOSTS__CONTAINER_IMAGE_BASE
              value: "ghcr.io/goauthentik/%(type)s:%(version)s"
            - name: AUTHENTIK_POSTGRESQL__HOST
              value: "authentik-postgresql"
            - name: AUTHENTIK_POSTGRESQL__NAME
              value: "authentik"
            - name: AUTHENTIK_POSTGRESQL__PASSWORD
              value: "5aGKBCVu6W"
            - name: AUTHENTIK_POSTGRESQL__PORT
              value: "5432"
            - name: AUTHENTIK_POSTGRESQL__S3_BACKUP__INSECURE_SKIP_VERIFY
              value: "false"
            - name: AUTHENTIK_POSTGRESQL__USER
              value: "authentik"
            - name: AUTHENTIK_REDIS__HOST
              value: "authentik-redis-master"
            - name: AUTHENTIK_REDIS__PASSWORD
              value: "5aGKBCVu6W"
            - name: AUTHENTIK_SECRET_KEY
              value: "5aGKBCVu6W"
          volumeMounts:
            - name: geoip-db
              mountPath: /geoip
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
            - name: http-metrics
              containerPort: 9300
              protocol: TCP
            - name: https
              containerPort: 9443
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /-/health/live/
              port: http
            initialDelaySeconds: 50
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /-/health/ready/
              port: http
            initialDelaySeconds: 50
            periodSeconds: 10
      volumes:
        - name: geoip-db
          emptyDir: {}
---
# Source: authentik/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authentik-worker
  labels:
    helm.sh/chart: authentik-2022.7.3
    app.kubernetes.io/name: authentik
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/version: "2022.7.2"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: "worker"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: authentik
      app.kubernetes.io/instance: authentik
      app.kubernetes.io/component: "worker"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: authentik
        app.kubernetes.io/instance: authentik
        app.kubernetes.io/component: "worker"
        app.kubernetes.io/version: "2022.7.2"
    spec:
      serviceAccountName: authentik
      enableServiceLinks: true
      priorityClassName:
      securityContext:
        {}
      containers:
        - name: authentik
          image: "ghcr.io/goauthentik/server:2022.7.2"
          imagePullPolicy: "IfNotPresent"
          args: ["worker"]
          env:
            - name: AUTHENTIK_AVATARS
              value: "gravatar"
            - name: AUTHENTIK_EMAIL__PORT
              value: "587"
            - name: AUTHENTIK_EMAIL__TIMEOUT
              value: "30"
            - name: AUTHENTIK_EMAIL__USE_SSL
              value: "false"
            - name: AUTHENTIK_EMAIL__USE_TLS
              value: "false"
            - name: AUTHENTIK_ERROR_REPORTING__ENABLED
              value: "true"
            - name: AUTHENTIK_ERROR_REPORTING__ENVIRONMENT
              value: "k8s"
            - name: AUTHENTIK_ERROR_REPORTING__SEND_PII
              value: "false"
            - name: AUTHENTIK_GEOIP
              value: "/geoip/GeoLite2-City.mmdb"
            - name: AUTHENTIK_LOG_LEVEL
              value: "info"
            - name: AUTHENTIK_OUTPOSTS__CONTAINER_IMAGE_BASE
              value: "ghcr.io/goauthentik/%(type)s:%(version)s"
            - name: AUTHENTIK_POSTGRESQL__HOST
              value: "authentik-postgresql"
            - name: AUTHENTIK_POSTGRESQL__NAME
              value: "authentik"
            - name: AUTHENTIK_POSTGRESQL__PASSWORD
              value: "5aGKBCVu6W"
            - name: AUTHENTIK_POSTGRESQL__PORT
              value: "5432"
            - name: AUTHENTIK_POSTGRESQL__S3_BACKUP__INSECURE_SKIP_VERIFY
              value: "false"
            - name: AUTHENTIK_POSTGRESQL__USER
              value: "authentik"
            - name: AUTHENTIK_REDIS__HOST
              value: "authentik-redis-master"
            - name: AUTHENTIK_REDIS__PASSWORD
              value: "5aGKBCVu6W"
            - name: AUTHENTIK_SECRET_KEY
              value: "5aGKBCVu6W"
          volumeMounts:
            - name: geoip-db
              mountPath: /geoip
      volumes:
        - name: geoip-db
          emptyDir: {}
---
# Source: authentik/charts/postgresql/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: authentik-postgresql
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-10.16.2
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: primary
  annotations:
  namespace: authentik
spec:
  serviceName: authentik-postgresql-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: postgresql
      app.kubernetes.io/instance: authentik
      role: primary
  template:
    metadata:
      name: authentik-postgresql
      labels:
        app.kubernetes.io/name: postgresql
        helm.sh/chart: postgresql-10.16.2
        app.kubernetes.io/instance: authentik
        app.kubernetes.io/managed-by: Helm
        role: primary
        app.kubernetes.io/component: primary
    spec:
      affinity:
        podAffinity:

        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: postgresql
                    app.kubernetes.io/instance: authentik
                    app.kubernetes.io/component: primary
                namespaces:
                  - "authentik"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:

      securityContext:
        fsGroup: 1001
      automountServiceAccountToken: false
      initContainers:
        - name: init-chmod-data
          image: docker.io/bitnami/bitnami-shell:10-debian-10-r305
          imagePullPolicy: "IfNotPresent"
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
          command:
            - /bin/sh
            - -cx
            - |
              chmod -R 777 /dev/shm
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm
      containers:
        - name: authentik-postgresql
          image: docker.io/bitnami/postgresql:11.14.0-debian-10-r28
          imagePullPolicy: "IfNotPresent"
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
          securityContext:
            runAsUser: 1001
          env:
            - name: BITNAMI_DEBUG
              value: "true"
            - name: POSTGRESQL_PORT_NUMBER
              value: "5432"
            - name: POSTGRESQL_VOLUME_DIR
              value: "/bitnami/postgresql"
            - name: PGDATA
              value: "/bitnami/postgresql/data"
            - name: POSTGRES_POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: authentik-postgresql
                  key: postgresql-postgres-password
            - name: POSTGRES_USER
              value: "authentik"
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: authentik-postgresql
                  key: postgresql-password
            - name: POSTGRES_DB
              value: "authentik"
            - name: POSTGRESQL_ENABLE_LDAP
              value: "no"
            - name: POSTGRESQL_ENABLE_TLS
              value: "no"
            - name: POSTGRESQL_LOG_HOSTNAME
              value: "false"
            - name: POSTGRESQL_LOG_CONNECTIONS
              value: "false"
            - name: POSTGRESQL_LOG_DISCONNECTIONS
              value: "false"
            - name: POSTGRESQL_PGAUDIT_LOG_CATALOG
              value: "off"
            - name: POSTGRESQL_CLIENT_MIN_MESSAGES
              value: "error"
            - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
              value: "pgaudit"
          ports:
            - name: tcp-postgresql
              containerPort: 5432
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - exec pg_isready -U "authentik" -d "dbname=authentik" -h 127.0.0.1 -p 5432
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - -e
                - |
                  exec pg_isready -U "authentik" -d "dbname=authentik" -h 127.0.0.1 -p 5432
                  [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
        - name: data
          emptyDir: {}
---
# Source: authentik/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: authentik
  labels:
    helm.sh/chart: authentik-2022.7.3
    app.kubernetes.io/name: authentik
    app.kubernetes.io/instance: authentik
    app.kubernetes.io/version: "2022.7.2"
    app.kubernetes.io/managed-by: Helm
spec:
  tls:
    - hosts: []
      secretName: ""
  rules:
    - host: "authentik.redacted"
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: authentik
                port:
                  name: http

values.yaml

authentik:
    secret_key: "N0fPsgPCYHJ0yh2HoIWM1mzaRs97QdMBTKQ87f5E2G3RzRJ5"

    # This sends anonymous usage-data, stack traces on errors and
    # performance data to sentry.beryju.org, and is fully opt-in
    error_reporting:
        enabled: true
    postgresql:
        password: "N0fPsgPCYHJ0yh2HoIWM1mzaRs97QdMBTKQ87f5E2G3RzRJ5"
    redis:
        password: "N0fPsgPCYHJ0yh2HoIWM1mzaRs97QdMBTKQ87f5E2G3RzRJ5"

ingress:
    enabled: true
    hosts:
      - host: authentik.redacted
        paths:
          - path: "/"
            pathType: Prefix

postgresql:
    enabled: true
    postgresqlPassword: "N0fPsgPCYHJ0yh2HoIWM1mzaRs97QdMBTKQ87f5E2G3RzRJ5"
    volumePermissions:
      ## @param volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume
      ##
      enabled: true

    primary:
      extraEnvVars:
      - name: BITNAMI_DEBUG
        value: "true"
    
    image:
      debug: true
    
    persistence:
      enabled: false
      storageClass: nfs-main-pool
      accessModes:
        - ReadWriteMany

redis:
    enabled: false
    storageClass: nfs-main-pool
    auth:
      enabled: false
    password: "N0fPsgPCYHJ0yh2HoIWM1mzaRs97QdMBTKQ87f5E2G3RzRJ5"
    accessModes:
      - ReadWriteMany

Running the following to get the secrets:

kubectl get secrets -o json | jq '.items[] | {name: .metadata.name,data: .data|map_values(@base64d)}' | grep postgres

  "name": "authentik-postgresql",
    "postgresql-password": "N0fPsgPCYHJ0yh2HoIWM1mzaRs97QdMBTKQ87f5E2G3RzRJ5",
    "postgresql-postgres-password": "OYjlStwZtP"

It looks like postgresql-postgres-password is being generated somewhere else, as it changes with each full reinstall. I don't know if this was a Bitnami change, but I have tried various variable combinations and cannot get that value to populate.
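
For what it's worth, in the Bitnami postgresql 10.x chart vendored here, postgresql-postgres-password is the password of the postgres superuser, which the chart generates randomly unless you set it explicitly. If I remember the value name correctly it is postgresqlPostgresPassword, so something like the hedged snippet below should pin it (double-check against the vendored chart's values.yaml):

postgresql:
    enabled: true
    postgresqlPassword: "<password for the authentik user>"
    postgresqlPostgresPassword: "<password for the postgres superuser>"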

Passing additional values to inner Postgres helmchart?

I would like to configure the Postgres with a pre-existing claim

  postgresql:
    enabled: true
    postgresqlPassword: "7ucC2kxRgHbr9oh7KdYyxb3I"
    primary:
      persistence:
        existingClaim: authentik-pvc

Intuitively this should work (passing values through to the dependent chart), but it doesn't seem to do anything.

(This inevitably raises the question of why the chart is vendored instead of just adding the upstream Bitnami chart as a dependency with a few more lines of example values.yaml.)
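
A hedged guess: if the vendored dependency is still Bitnami postgresql 10.x, persistence is not nested under primary in that chart, so the values would look more like this:

postgresql:
  enabled: true
  postgresqlPassword: "7ucC2kxRgHbr9oh7KdYyxb3I"
  persistence:
    enabled: true
    existingClaim: authentik-pvc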

Update to release 2021.8

I noticed the deployment didn't have the embedded outpost, so I was a bit confused while following the setup docs.
It looks like the embedded outpost is only in 2021.8, and the chart is still on 2021.7.

Funnily enough, the docs imply you can just update the helm chart, but I don't think that has landed yet.

My current workaround was to add the following to values.yaml:

image:
  tag: 2021.8.3

Add contributing information

This could be a CONTRIBUTING.md file, or just items in the README.

Do you want chart versions bumped? Do you want commits squashed? etc.

OCI registry-backed blueprints

It appears that the helm chart has support for defining blueprints via a configmap. Is there a way to configure authentik to apply OCI registry-backed blueprints via the chart?

helm values for geoip get misparsed as float64

Describe the bug
Helm values for geoip are not parsed properly

To Reproduce
Steps to reproduce the behavior:

  1. Use helm
  2. Fill in the values for geoip accountId, licenseKey and updateInterval

Expected behavior
Helm deployment should complete

Screenshots
If applicable, add screenshots to help explain your problem.

Logs

Helm upgrade failed: template: authentik/templates/worker-deployment.yaml:28:43: executing "authentik/templates/worker-deployment.yaml" at <include (print $.Template.BasePath "/secret.yaml") .>: error calling include: template: authentik/templates/secret.yaml:14:93: executing "authentik/templates/secret.yaml" at <b64enc>: wrong type for value; expected string; got float64 Last Helm logs: preparing upgrade for authentik resetting values to the chart's original version

Version and Deployment (please complete the following information):

  • authentik version: 2023.5.3
  • Deployment: helm
  • CD: fluxcd

Additional context
This issue seems very similar to helm/helm#2434, where the parsing of numerical values gets confused.
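
A hedged user-side workaround until the template coerces these values to strings: quote the numeric-looking entries so Helm parses them as strings rather than float64 before they reach b64enc:

geoip:
  enabled: true
  accountId: "123456"          # quoted on purpose, placeholder value
  licenseKey: "<license key>"
  updateInterval: 8            # quote this too if it is also rendered into the secret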

The Redis HPA is no longer supported in the latest version of K8S

It looks like the Bitnami chart being used has not been updated to a version that supports autoscaling/v2, following the deprecation of autoscaling/v2beta1 in Kubernetes v1.22.

It should be a quick dependency bump against the Bitnami charts to fix.

Incorrect service account name between charts configuration

Describe the bug
Hello 👋

I'm currently trying the project out, and I have an issue with the authentik worker pods not running. After a bit of debugging, it seems the cause is that the service account used by the worker deployment is never created.

captain@glados:~$ kubectl describe rs/auth-authentik-worker-796d85c7dc -n infra
Name:           auth-authentik-worker-796d85c7dc
# ...
Events:
  Type     Reason        Age                 From                   Message
  ----     ------        ----                ----                   -------
  Warning  FailedCreate  14m (x19 over 35m)  replicaset-controller  Error creating: pods "auth-authentik-worker-796d85c7dc-" is forbidden: error looking up service account infra/auth-authentik: serviceaccount "auth-authentik" not found

And sure enough, the service account does not have the expected name:

captain@glados:~$ kubectl get sa -n infra
NAME         SECRETS   AGE
auth         0         41m
auth-redis   0         41m
default      0         5d3h
# Exhaustive list

I don't really know the chart templating system, so correct me if I'm wrong: it seems that

name: {{ include "authentik-remote-cluster.fullname" . }}

uses the Release.Name value

{{- define "authentik-remote-cluster.fullname" -}}
{{- if not .Chart.IsRoot }}
{{- .Release.Name }}

which in this case happens to be auth, causing the mismatch between the resource names.
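
If that reading is right, one possible shape of a fix, purely a sketch and not necessarily what the maintainers will do, would be to have the sub-chart helper derive the same <release>-authentik name the worker deployment references, instead of the bare release name:

{{- define "authentik-remote-cluster.fullname" -}}
{{- if not .Chart.IsRoot }}
{{- /* sketch: align with the parent chart's "<release>-authentik" naming */ -}}
{{- printf "%s-authentik" .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- /* root-chart branch guessed; the real helper may differ here */ -}}
{{- .Release.Name }}
{{- end }}
{{- end }}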

To Reproduce
Steps to reproduce the behavior:

I didn't note down the precise steps, since this is happening on a currently-running cluster, but IIRC it's just

helm upgrade --install auth -n infra goauthentik/ -f goauthentik.yaml

Where goauthentik/ is just the chart extracted via $ helm pull --untar --untardir goauthentik/ authentik/authentik --version 2023.5.4, and goauthentik.yaml is just the default values from the Installation docs page.

Expected behavior
Service account should be created with the correct name (or the configuration value changed to follow the service account name).

Version and Deployment:

  • authentik version: 2023.5.4
  • Deployment: helm
