
helm's People

Contributors

bennettpiatershs, camaeel, dependabot[bot], elthariel, fabyouloosefab, gllb, hnicke, igusev, jidckii, joooostb, kryddd, leosussan, lyz-code, mart-kuc, nijel, pre-commit-ci[bot], renovate-bot, renovate[bot], rustymunkey, sgrzemski, skellla, svenstaro, tarioch, thpham, uip9av6y, yann-j

helm's Issues

Helm Version 0.4.2 doesn't print Postgres or Redis env vars into deployment.

Describe the issue

I recently updated to version 0.4.2 and deployed. The app wasn't working, so I inspected the rendered YAML and noticed that none of the Postgres or Redis env vars were there, despite the relevant section of my values file looking like this:

postgresql:
  enabled: false
  postgresqlHost: // removed for issue
  postgresqlUsername: "postgres"
  postgresqlDatabase: "weblate"

redis:
  enabled: false
  password: "" // My redis instance has no password
  redisHost: // removed for issue

The issue is clearly this commit: b1aa48d

But the docs contradict this commit by saying

External postgres database endpoint, to be used if postgresql.enabled == false

and

External redis database endpoint, to be used if redis.enabled == false

Seems like there was a miscommunication about what enabled means. Perhaps rename that configuration key from enabled to external to make things clearer?
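To illustrate the semantics being discussed: the commit in question gates the env vars on the subchart toggle. A hypothetical template guard that honors an external host even when the subchart is disabled could look like this (a sketch only, not the chart's actual template; the default host expression is an assumption):

```yaml
# Hypothetical deployment.yaml fragment: emit the connection env var
# whenever the subchart is enabled OR an external host is configured.
{{- if or .Values.postgresql.enabled .Values.postgresql.postgresqlHost }}
- name: POSTGRES_HOST
  value: {{ default (printf "%s-postgresql" .Release.Name) .Values.postgresql.postgresqlHost | quote }}
{{- end }}
```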

Workaround

Just append --version 0.4.1 to your helm command.
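Concretely, pinning the chart version looks like this (release name and values file are illustrative):

```shell
helm repo update
helm install weblate weblate/weblate --version 0.4.1 -f values.yaml
# or, for an existing release:
helm upgrade weblate weblate/weblate --version 0.4.1 -f values.yaml
```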

wrong port in NOTES.txt

Describe the issue

After installing the chart, the NOTES.txt message says to use port-forward to port 80 on the pod.
Result: connection refused.
The container is actually configured to listen on port 8080.
Changing the command to kubectl port-forward $POD_NAME 8080:8080 works.

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

  1. Install the chart.
  2. Follow the advice from the welcome message to port-forward.
  3. Open localhost:8080.
     => connection refused

Expected behavior

See the web UI.

Screenshots

No response

Exception traceback

No response

Additional context

No response

Redis Service Name not matching Pod requirement

Describe the issue

Dear community,

I run the helm chart via Argo CD:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: translation
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  labels:
    type: translation

spec:
  project: default

  sources:
    - repoURL: "https://helm.weblate.org"
      chart: weblate
      targetRevision: 0.4.29
      helm:
        valueFiles:
          - $values/kubernetes/translation/weblate/values.yaml

    - repoURL: "https://github.com/XXXXX/XXXXXX.git"
      targetRevision: main
      ref: values

  # Destination cluster and namespace to deploy the application
  destination:
    name: in-cluster
    namespace: translation
  # Sync policy
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: false
    syncOptions:
      - Validate=true
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    managedNamespaceMetadata:
      labels:
        enviroment: production
        type: translation

  revisionHistoryLimit: 5

and use the stock values for the DB and Redis setup:

image:
  repository: weblate/weblate
  tag: 4.18.2.1
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ''
fullnameOverride: ''
updateStrategy: Recreate
...
postgresql:
  auth:
  # PostgreSQL user should be a superuser to
  # be able to install pg_trgm extension. Alternatively you can install it
  # manually prior starting Weblate.
    userName: ''
    enablePostgresUser: true
    postgresPassword: weblate
    database: weblate
    existingSecret: ''
    secretKeys:
      userPasswordKey: postgresql-password
  service:
    ports:
      postgresql: 5432
  enabled: true
  # postgresql.postgresqlHost -- External postgres database endpoint, to be
  # used if `postgresql.enabled == false`
  # @default -- `None`
  postgresqlHost:

redis:
  architecture: standalone
  auth:
    enabled: true
    password: weblate
    existingSecret: ''
    existingSecretPasswordKey: redis-password
  db: 1
  enabled: true
  # redis.redisHost -- External redis database endpoint, to be
  # used if `redis.enabled == false`
  # @default -- `None`
  redisHost:

and noticed that the weblate pod fails to start because it tries to connect to a Redis service named
translation-weblate-redis-master, while the Redis service that actually gets created is named translation-redis-master.


  File "/usr/local/lib/python3.11/site-packages/django_redis/client/default.py", line 670, in has_key
    raise ConnectionInterrupted(connection=client) from e
django_redis.exceptions.ConnectionInterrupted: Redis ConnectionError: Error -2 connecting to translation-weblate-redis-master:6379. Name or service not known.
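The mismatch stems from how the parent chart composes the subchart's service name. One possible workaround, assuming the Bitnami Redis subchart (which honors fullnameOverride), is to pin the subchart's name to what Weblate expects; this is a sketch, not an officially documented fix:

```yaml
redis:
  # Forces the subchart's resources to be named translation-weblate-redis-*,
  # so the master service becomes translation-weblate-redis-master.
  fullnameOverride: translation-weblate-redis
```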

Celery SIGKILL

Hi,
we are trying to deploy Weblate with the current helm charts, but the pod does not start correctly.
Right now we receive a lot of 502s and assume the problem is with celery, because we see a lot of SIGKILL log entries.

2021-04-27 10:37:00,929 INFO success: celery-notify entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-04-27 10:37:00,929 INFO exited: celery-celery (terminated by SIGKILL; not expected)
2021-04-27 10:37:00,934 INFO spawned: 'celery-celery' with pid 29994
2021-04-27 10:37:01,363 INFO success: celery-translate entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-04-27 10:37:01,364 INFO exited: celery-memory (terminated by SIGKILL; not expected)
celery-beat stdout | celery beat v5.0.5 (singularity) is starting.

2021-04-27 10:36:55,925 INFO exited: celery-translate (terminated by SIGKILL; not expected)
celery-translate stderr | [2021-04-27 10:36:55,922: ERROR/ForkPoolWorker-1] Process ForkPoolWorker-1
celery-translate stderr | Traceback (most recent call last):
celery-translate stderr |   File "/usr/local/lib/python3.7/dist-packages/billiard/process.py", line 327, in _bootstrap
celery-translate stderr |     self.run()
celery-translate stderr |   File "/usr/local/lib/python3.7/dist-packages/billiard/process.py", line 114, in run
celery-translate stderr |     self._target(*self._args, **self._kwargs)
celery-translate stderr |   File "/usr/local/lib/python3.7/dist-packages/billiard/pool.py", line 290, in __call__
celery-translate stderr |     self.on_loop_start(pid=pid)  # callback on loop start
celery-translate stderr |   File "/usr/local/lib/python3.7/dist-packages/celery/concurrency/asynpool.py", line 240, in on_loop_start
celery-translate stderr |     self.outq.put((WORKER_UP, (pid,)))
celery-translate stderr |   File "/usr/local/lib/python3.7/dist-packages/billiard/queues.py", line 366, in put
celery-translate stderr |     self.send_payload(ForkingPickler.dumps(obj))
celery-translate stderr |   File "/usr/local/lib/python3.7/dist-packages/billiard/queues.py", line 358, in send_payload
celery-translate stderr |     self._writer.send_bytes(value)
celery-translate stderr |   File "/usr/local/lib/python3.7/dist-packages/billiard/connection.py", line 227, in send_bytes
celery-translate stderr |     self._send_bytes(m[offset:offset + size])
celery-translate stderr |   File "/usr/local/lib/python3.7/dist-packages/billiard/connection.py", line 453, in _send_bytes
celery-translate stderr |     self._send(header + buf)
celery-translate stderr |   File "/usr/local/lib/python3.7/dist-packages/billiard/connection.py", line 406, in _send
celery-translate stderr |     n = write(self._handle, buf)
celery-translate stderr | BrokenPipeError: [Errno 32] Broken pipe

Has anyone seen similar issues?
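Unexpected SIGKILLs of supervisord-managed celery workers are often the kernel OOM killer rather than celery itself. One quick way to check is to look at the container's last state (pod name is illustrative):

```shell
# If the container was OOM-killed, "Last State" shows Reason: OOMKilled.
kubectl describe pod <weblate-pod> | grep -A5 "Last State"
```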

How to add addons

Hi, how can I add addons using values.yaml? I tried to add them with extraConfig and configOverride, but Weblate seems to ignore them.

extraConfig: {
  WEBLATE_ADDONS = (
    "weblate.addons.gettext.GenerateMoAddon",
    "weblate.addons.gettext.UpdateLinguasAddon",
    "weblate.addons.gettext.UpdateConfigureAddon",
    "weblate.addons.gettext.MsgmergeAddon",
    "weblate.addons.gettext.GettextCustomizeAddon",
    "weblate.addons.gettext.GettextAuthorComments",
    "weblate.addons.cleanup.CleanupAddon",
    "weblate.addons.consistency.LangaugeConsistencyAddon",
    "weblate.addons.discovery.DiscoveryAddon",
    "weblate.addons.flags.SourceEditAddon",
    "weblate.addons.flags.TargetEditAddon",
    "weblate.addons.flags.SameEditAddon",
    "weblate.addons.flags.BulkEditAddon",
    "weblate.addons.generate.GenerateFileAddon",
    "weblate.addons.json.JSONCustomizeAddon",
    "weblate.addons.properties.PropertiesSortAddon",
    "weblate.addons.git.GitSquashAddon",
    "weblate.addons.removal.RemoveComments",
    "weblate.addons.removal.RemoveSuggestions",
    "weblate.addons.resx.ResxUpdateAddon",
    "weblate.addons.autotranslate.AutoTranslateAddon",
    "weblate.addons.yaml.YAMLCustomizeAddon",
    "weblate.addons.cdn.CDNJSAddon",
    "weblate.addons.example.ExampleAddon",
  ),
}

or

configOverride: |
  WEBLATE_ADDONS = (
      # Built-in addons
      "weblate.addons.gettext.GenerateMoAddon",
      "weblate.addons.gettext.UpdateLinguasAddon",
      "weblate.addons.gettext.UpdateConfigureAddon",
      "weblate.addons.gettext.MsgmergeAddon",
      "weblate.addons.gettext.GettextCustomizeAddon",
      "weblate.addons.gettext.GettextAuthorComments",
      "weblate.addons.cleanup.CleanupAddon",
      "weblate.addons.consistency.LangaugeConsistencyAddon",
      "weblate.addons.discovery.DiscoveryAddon",
      "weblate.addons.flags.SourceEditAddon",
      "weblate.addons.flags.TargetEditAddon",
      "weblate.addons.flags.SameEditAddon",
      "weblate.addons.flags.BulkEditAddon",
      "weblate.addons.generate.GenerateFileAddon",
      "weblate.addons.json.JSONCustomizeAddon",
      "weblate.addons.properties.PropertiesSortAddon",
      "weblate.addons.git.GitSquashAddon",
      "weblate.addons.removal.RemoveComments",
      "weblate.addons.removal.RemoveSuggestions",
      "weblate.addons.resx.ResxUpdateAddon",
      "weblate.addons.autotranslate.AutoTranslateAddon",
      "weblate.addons.yaml.YAMLCustomizeAddon",
      "weblate.addons.cdn.CDNJSAddon",
      # Add-on you want to include
      "weblate.addons.example.ExampleAddon",
  )
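For reference, WEBLATE_ADDONS is a Django setting holding a plain Python tuple of dotted class paths, so whatever mechanism injects it must produce valid Python settings code, not a YAML mapping as in the first attempt above. A minimal sketch of the expected shape:

```python
# WEBLATE_ADDONS as Django settings code: a tuple of dotted import paths.
WEBLATE_ADDONS = (
    "weblate.addons.gettext.GenerateMoAddon",
    "weblate.addons.cleanup.CleanupAddon",
    # Custom add-on appended to the built-in list:
    "weblate.addons.example.ExampleAddon",
)

# Each entry must be an importable "package.module.ClassName" path.
for path in WEBLATE_ADDONS:
    module, _, cls = path.rpartition(".")
    assert module and cls[0].isupper(), path
```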

Ability to use existing secret for config

Hi,
This looks promising!

I see you already support setting some "secret" environment variables via values.

In my case, I'd like to be able to configure these and additional environment variables, for example:

GITHUB_TOKEN
WEBLATE_AUTH_LDAP_SERVER_URI

I would also like to do this without needing to put them in my values.yaml file.

...for example by adding a secret beforehand and configuring this chart to use my predefined secret instead of creating a new one.

As this chart is using postgres, here's an example of how the postgres chart either creates a secret or uses an existing one:
https://github.com/helm/charts/blob/12a061a/stable/postgresql/templates/_helpers.tpl#L210-L232

What do you think, would you be open to supporting something like this?
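The linked postgres helper pattern could be adapted for this chart roughly like so. This is a sketch under assumptions: the define name and the "weblate.fullname" include are hypothetical, not existing chart templates.

```yaml
{{/* Hypothetical helper: prefer a user-supplied secret over the generated one */}}
{{- define "weblate.secretName" -}}
{{- if .Values.existingSecret -}}
{{ .Values.existingSecret }}
{{- else -}}
{{ include "weblate.fullname" . }}
{{- end -}}
{{- end -}}
```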



Looking for a maintainer

Currently, this repo doesn't have an active maintainer. I try to keep it up-to-date, but my Kubernetes knowledge is close to zero, so I won't make many changes besides updating containers.

Want to help? Just pick one of the issues or pull requests and contribute.

Want to be in charge? I'm happy to give you control over this repo once you show reasonable contributions.

Switch to StatefulSet

I saw the official Helm chart the other day and one thing stood out: it models Weblate as a Deployment rather than a StatefulSet, which is better suited for stateful services. As far as I know, Weblate is currently a stateful service and can't be scaled horizontally. We started using Weblate on Kubernetes well before the official Helm chart was released, and we first modeled it as a Deployment too, but upgrades were somewhat problematic: it kept failing when trying to re-attach the persistent disk to the newly spun-up container. We used the "Recreate" rollout strategy, but it would still fail; once we switched over to a StatefulSet, this issue was gone for good.

Anyway, the idea is: should we remodel Weblate as a StatefulSet? Is there any specific reason why we're using the Deployment object? I'm assuming you've already considered it and that there are some reasons I might not have thought of.

Originally posted by @mareksuscak in WeblateOrg/weblate#4806
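For comparison, the StatefulSet variant described above pins each replica to its own PVC via volumeClaimTemplates, which avoids the disk re-attach race during rollouts. A minimal sketch (names, image tag, and sizes illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: weblate
spec:
  serviceName: weblate
  replicas: 1
  selector:
    matchLabels:
      app: weblate
  template:
    metadata:
      labels:
        app: weblate
    spec:
      containers:
        - name: weblate
          image: weblate/weblate
          volumeMounts:
            - name: data
              mountPath: /app/data
  # Each replica gets its own PVC, created and re-bound by the controller.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```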

Length of translation

Hi,

I am trying to increase the maximum translation length in Weblate.

I changed the environment variable per the documentation to max-length:500.
After the change I can see that the English (source language) translation limit is now 10000, but for all other languages it's still the same.

Can you please help me increase the limit for the other languages as well?

Installation on AWS via helm fails to connect to redis

Describe the issue

I am following the helm installation instructions here:
https://docs.weblate.org/en/latest/admin/install/kubernetes.html
to install weblate onto an existing AWS cluster.
Installation command: helm install i18n weblate/weblate
Pods created:

NAME                                            READY   STATUS    RESTARTS   AGE
i18n-postgresql-0                               1/1     Running   0          32s
i18n-redis-master-0                             1/1     Running   0          32s
i18n-weblate-6bb6698799-6fqd7                   0/1     Running   0          32s

The redis and postgres pods start up ok, but the weblate pod fails. Running kubectl logs i18n-weblate-6bb6698799-6fqd7 shows the following error:

Starting...
Generating Django secret...
Creating /app/data/python/customize
[2023-01-20 18:20:20,748: WARNING/13] Handled exception: ConnectionError: Error -2 connecting to i18n-weblate-redis-master:6379. Name or service not known.
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/django_redis/cache.py", line 31, in _decorator
    return method(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django_redis/cache.py", line 137, in has_key
    return self.client.has_key(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django_redis/client/default.py", line 666, in has_key
    raise ConnectionInterrupted(connection=client) from e
django_redis.exceptions.ConnectionInterrupted: Redis ConnectionError: Error -2 connecting to i18n-weblate-redis-master:6379. Name or service not known.

It looks like weblate is trying to connect to i18n-weblate-redis-master:6379 when it should be connecting to: i18n-redis-master-0:6379.

Is there some way to specify the redis host that weblate should connect to?

Thanks in advance,

-Don
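Per the chart's own values documentation quoted in an earlier issue, the external endpoint is only honored when the subchart is disabled. To point Weblate at a Redis instance you manage yourself, the values would look like this (hostname illustrative):

```yaml
redis:
  enabled: false                                 # do not deploy the Redis subchart
  redisHost: my-redis.default.svc.cluster.local  # external endpoint Weblate should use
```

Note this replaces the subchart-deployed Redis rather than renaming it; see the related service-name issues above for that case.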

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

Using AWS with an EKS cluster, follow the instructions here:
https://docs.weblate.org/en/latest/admin/install/kubernetes.html
Run "kubectl get pods" to list the pods and notice that the weblate pod does not start up cleanly.
Use "kubectl logs " to inspect the logs.

Expected behavior

weblate should start up.

Screenshots

No response

Exception traceback

[ec2-user@ip-10-0-2-136 weblate]$ kubectl logs i18n-weblate-6bb6698799-6fqd7
Starting...
[2023-01-20 18:55:44,173: WARNING/10] Handled exception: ConnectionError: Error -2 connecting to i18n-weblate-redis-master:6379. Name or service not known.
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/django_redis/cache.py", line 31, in _decorator
    return method(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django_redis/cache.py", line 137, in has_key
    return self.client.has_key(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django_redis/client/default.py", line 666, in has_key
    raise ConnectionInterrupted(connection=client) from e
django_redis.exceptions.ConnectionInterrupted: Redis ConnectionError: Error -2 connecting to i18n-weblate-redis-master:6379. Name or service not known.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/weblate", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/usr/local/lib/python3.11/site-packages/weblate/runner.py", line 19, in main
    utility.execute()
  File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 440, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 402, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 448, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django/core/management/commands/shell.py", line 117, in handle
    exec(options["command"], globals())
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.11/site-packages/django_redis/cache.py", line 38, in _decorator
    raise e.__cause__
  File "/usr/local/lib/python3.11/site-packages/django_redis/client/default.py", line 664, in has_key
    return client.exists(key) == 1
           ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/redis/commands/core.py", line 1697, in exists
    return self.execute_command("EXISTS", *names)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/redis/client.py", line 1255, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/redis/connection.py", line 1427, in get_connection
    connection.connect()
  File "/usr/local/lib/python3.11/site-packages/redis/connection.py", line 630, in connect
    raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -2 connecting to i18n-weblate-redis-master:6379. Name or service not known.
redis at i18n-weblate-redis-master is unavailable - retrying 30
[2023-01-20 18:55:48,083: WARNING/27] Handled exception: ConnectionError: Error -2 connecting to i18n-weblate-redis-master:6379. Name or service not known.
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/django_redis/cache.py", line 31, in _decorator
    return method(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django_redis/cache.py", line 137, in has_key
    return self.client.has_key(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django_redis/client/default.py", line 666, in has_key
    raise ConnectionInterrupted(connection=client) from e
django_redis.exceptions.ConnectionInterrupted: Redis ConnectionError: Error -2 connecting to i18n-weblate-redis-master:6379. Name or service not known.

During handling of the above exception, another exception occurred:

How do you run Weblate?

Other

Weblate versions

No response

Weblate deploy checks

No response

Additional context

Helm Version: v3.8.2
Kubernetes Version: 1.19.

0.4.2 breaks support for external Postgres & Redis usage

Describe the issue

b1aa48d disables postgres & redis env vars if the subcharts are disabled.
In our setup, we do not want to deploy Redis & Postgres via the subcharts, but relied on the env vars to configure the connection to our external Postgres & Redis resources.

Since the Postgres & Redis charts don't export their values, I think there isn't any better way than duplicating the connection settings for both the parent chart and the subcharts.

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

Our values.yml file:

[...]
postgresql:
  postgresqlHost: azure.com
  postgresqlUsername: [email protected]
  postgresqlPassword: <pw>
  postgresqlDatabase: <db>
  enabled: false

redis:
  # same as postgres 

Expected behavior

POSTGRES_* env vars are still injected into weblate container

Screenshots

No response

Exception traceback

No response

Additional context

No response

Incompatible changes break upgrade

Describe the issue

Currently I'm running 4.14.2 and trying to upgrade to the latest 4.14 release, which fails.

The first error is this:

$ helm upgrade -n weblate weblate weblate/weblate -f values.yaml
Error: UPGRADE FAILED: execution error at (weblate/charts/postgresql/templates/secrets.yaml:17:24): 
PASSWORDS ERROR: The secret "weblate-postgresql" does not contain the key "postgres-password"

After duplicating the old entry postgresql-password to postgres-password using kubectl -n weblate edit secret/weblate-postgresql, I now get the next error (reformatted for readability):

$ helm upgrade -n weblate weblate weblate/weblate -f values.yaml
Error: UPGRADE FAILED:
cannot patch "weblate-postgresql" with kind StatefulSet: StatefulSet.apps "weblate-postgresql" is invalid: spec:
  Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
&&
cannot patch "weblate-redis-master" with kind StatefulSet: StatefulSet.apps "weblate-redis-master" is invalid: spec:
  Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
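A common way around forbidden StatefulSet spec updates is to delete only the StatefulSet objects while orphaning their pods and PVCs, then let helm recreate them. This is a general Kubernetes technique, not a documented upgrade path for this chart, so back up your data first:

```shell
# --cascade=orphan deletes the StatefulSet object only;
# the running pods and their PVCs are left untouched.
kubectl -n weblate delete statefulset weblate-postgresql weblate-redis-master --cascade=orphan
helm upgrade -n weblate weblate weblate/weblate -f values.yaml
```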

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

  1. helm upgrade -n weblate weblate weblate/weblate -f values.yaml

Expected behavior

helm upgrading Weblate

Screenshots

No response

Exception traceback

$ helm upgrade -n weblate weblate weblate/weblate -f values.yaml --debug
…
client.go:250: [debug] error updating the resource "weblate-postgresql":
         cannot patch "weblate-postgresql" with kind StatefulSet: StatefulSet.apps "weblate-postgresql" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
client.go:510: [debug] Patch StatefulSet "weblate-redis-master" in namespace weblate
client.go:250: [debug] error updating the resource "weblate-redis-master":
         cannot patch "weblate-redis-master" with kind StatefulSet: StatefulSet.apps "weblate-redis-master" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
$ helm upgrade -n weblate weblate weblate/weblate -f values.yaml --debug --dry-run
…

Additional context

No response

Redis 6 compatibility

Describe the issue

When updating the chart to Redis 6 it fails to start:

Traceback (most recent call last):
  File "/usr/local/bin/weblate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/site-packages/weblate/runner.py", line 34, in main
    utility.execute()
  File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 420, in execute
    django.setup()
  File "/usr/local/lib/python3.10/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python3.10/site-packages/django/apps/registry.py", line 124, in populate
    app_config.ready()
  File "/usr/local/lib/python3.10/site-packages/weblate/vcs/apps.py", line 86, in ready
    with lockfile:
  File "/usr/local/lib/python3.10/site-packages/weblate/utils/lock.py", line 81, in __enter__
    if not self._lock.acquire(timeout=self._timeout):
  File "/usr/local/lib/python3.10/site-packages/redis_lock/__init__.py", line 226, in acquire
    if self._held:
  File "/usr/local/lib/python3.10/site-packages/redis_lock/__init__.py", line 197, in _held
    return self.id == self.get_owner_id()
  File "/usr/local/lib/python3.10/site-packages/redis_lock/__init__.py", line 210, in get_owner_id
    owner_id = self._client.get(self._name)
  File "/usr/local/lib/python3.10/site-packages/redis/commands/core.py", line 1600, in get
    return self.execute_command("GET", name)
  File "/usr/local/lib/python3.10/site-packages/redis/client.py", line 1215, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
  File "/usr/local/lib/python3.10/site-packages/redis/connection.py", line 1386, in get_connection
    connection.connect()
  File "/usr/local/lib/python3.10/site-packages/redis/connection.py", line 626, in connect
    self.on_connect()
  File "/usr/local/lib/python3.10/site-packages/redis/connection.py", line 716, in on_connect
    auth_response = self.read_response()
  File "/usr/local/lib/python3.10/site-packages/redis/connection.py", line 836, in read_response
    raise response
redis.exceptions.ResponseError: WRONGPASS invalid username*** or user is disabled.

It is most likely caused by a wrongly generated configuration, either in the Docker image itself or in the chart. We do test the Docker images with Redis 6, but AFAIK no password is used there, while the chart does use one, so that might make the difference.

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

  1. Updated the redis chart to latest one
  2. See the error https://github.com/WeblateOrg/helm/runs/6769378543?check_suite_focus=true#step:9:269

Expected behavior

No response

Screenshots

No response

Exception traceback

No response

Additional context

No response

Ability to select ingress pathType

Describe the problem

We can't control the pathType of the ingress resource; it's hardcoded to "ImplementationSpecific".

Describe the solution you'd like

Be able to select the pathType we need for the ingress resource.
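A typical way charts expose this is a value with a backward-compatible default. A sketch of what the change could look like (the ingress.pathType value name is hypothetical, not an existing chart value):

```yaml
# values.yaml (hypothetical new value)
ingress:
  pathType: Prefix

# templates/ingress.yaml fragment
- path: /
  pathType: {{ .Values.ingress.pathType | default "ImplementationSpecific" }}
```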

Describe alternatives you've considered

No response

Screenshots

No response

Additional context

No response

Ability to add own ca-certificate

Hi,

if you use a self-hosted GitLab instance, for example, you need to add its CA certificate for communication over HTTPS.
Yet there is no way to add one via this chart's values. I currently have to patch the deployment afterwards to mount the CA certificate into the Weblate container as a secret:

kubectl patch deployment weblate -n weblate --type='json' -p='[{"op": "add", "path": "/spec/template/spec/volumes/0", "value": {"name": "own-ca-certificate", "secret": {"secretName": "own-ca-certificate"}}}]'

kubectl patch deployment weblate -n weblate --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts/0", "value": {"mountPath": "/etc/ssl/certs/own-ca-certificate.pem", "subPath": "own-ca-certificate.pem", "name": "own-ca-certificate"}}]'

It would be nice if it were possible to create a secret containing the CA certificate directly via values.yaml.
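For reference, the two patches above are equivalent to creating the secret with kubectl create secret generic own-ca-certificate --from-file=own-ca-certificate.pem and then adding these fragments to the pod spec (this remains a post-render patch today; the chart exposes no value for it):

```yaml
# Pod spec fragments the patches add:
volumes:
  - name: own-ca-certificate
    secret:
      secretName: own-ca-certificate
volumeMounts:
  - mountPath: /etc/ssl/certs/own-ca-certificate.pem
    subPath: own-ca-certificate.pem
    name: own-ca-certificate
```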

Chart not installable due to removed dependencies

Describe the issue

As a result of bitnami/charts#10539 the dependencies of the chart are gone.

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

I've tried to update the dependencies, but then Weblate fails to authenticate with redis, see https://github.com/WeblateOrg/helm/runs/6769378543?check_suite_focus=true#step:9:269

Traceback (most recent call last):
  File "/usr/local/bin/weblate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/site-packages/weblate/runner.py", line 34, in main
    utility.execute()
  File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 420, in execute
    django.setup()
  File "/usr/local/lib/python3.10/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python3.10/site-packages/django/apps/registry.py", line 124, in populate
    app_config.ready()
  File "/usr/local/lib/python3.10/site-packages/weblate/vcs/apps.py", line 86, in ready
    with lockfile:
  File "/usr/local/lib/python3.10/site-packages/weblate/utils/lock.py", line 81, in __enter__
    if not self._lock.acquire(timeout=self._timeout):
  File "/usr/local/lib/python3.10/site-packages/redis_lock/__init__.py", line 226, in acquire
    if self._held:
  File "/usr/local/lib/python3.10/site-packages/redis_lock/__init__.py", line 197, in _held
    return self.id == self.get_owner_id()
  File "/usr/local/lib/python3.10/site-packages/redis_lock/__init__.py", line 210, in get_owner_id
    owner_id = self._client.get(self._name)
  File "/usr/local/lib/python3.10/site-packages/redis/commands/core.py", line 1600, in get
    return self.execute_command("GET", name)
  File "/usr/local/lib/python3.10/site-packages/redis/client.py", line 1215, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
  File "/usr/local/lib/python3.10/site-packages/redis/connection.py", line 1386, in get_connection
    connection.connect()
  File "/usr/local/lib/python3.10/site-packages/redis/connection.py", line 626, in connect
    self.on_connect()
  File "/usr/local/lib/python3.10/site-packages/redis/connection.py", line 716, in on_connect
    auth_response = self.read_response()
  File "/usr/local/lib/python3.10/site-packages/redis/connection.py", line 836, in read_response
    raise response
redis.exceptions.ResponseError: WRONGPASS invalid username*** or user is disabled.

Expected behavior

  • The chart dependencies should be kept up to date (is there something like Dependabot for this?).
  • Set up regular testing of the chart to get notified of such breakages before somebody submits a PR.

Screenshots

No response

Exception traceback

No response

Additional context

No response

Evicted pods on GKE

I've noticed that I'm getting a lot of evicted weblate pods in Google Kubernetes Engine using this helm chart. Currently I'm on 4.1.1.


Is this a misconfiguration that I missed somewhere? Or perhaps a memory leak? That's 2.17 GB of RAM, which feels like quite a bit.

The container consistently uses 1.94-1.97 GB, so perhaps there are limits I'm unaware of.
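Evictions on GKE are usually driven by node memory pressure combined with missing requests/limits on the pod. Assuming the chart exposes a standard resources value, a sketch sized to the ~2 GB usage observed above (numbers illustrative):

```yaml
resources:
  requests:
    memory: 2Gi   # scheduler reserves enough headroom on the node
  limits:
    memory: 3Gi   # pod is OOM-killed, not evicted, if it exceeds this
```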

Fails to start if data volume is mounted via CIFS or NFS

Describe the issue

I'm trying to deploy Weblate on an Azure Kubernetes cluster and want to mount the /app/data directory from a volume based on Azure Files. This results in the volume being mounted using either the CIFS or NFS protocol. However, when I try this, Weblate fails to start because celery beat is not able to (re-)create its database file.

It works if I provide a volume backed by a disk or the default emptyDir.

This issue comment mentions that the underlying gdbm library shows problems like this when used on network filesystems.

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

Trying to generalize the instructions so that it may show the issue on a generic cluster:

  1. Create a Persistent Volume Claim based on a volume backed by a CIFS or NFS file system.
  2. Deploy the chart with the value persistence.existingClaim set to this PVC

Expected behavior

The pod should start.

Screenshots

No response

Exception traceback

Starting...
Postgres 130008 is up
Starting database migration...
Operations to perform:
  Apply all migrations: accounts, addons, admin, auth, authtoken, checks, configuration, contenttypes, fonts, gitexport, glossary, lang, memory, metrics, screenshots, sessions, social_django, trans, utils, vcs, weblate_auth, wladmin
Running migrations:
  No migrations to apply.
Updating user admin
Refreshing stats...
Removing corrupted schedule file: error(2, 'No such file or directory')
[2023-01-19 09:45:13,511: WARNING/164] Handled exception: FileNotFoundError: [Errno 2] No such file or directory: '/app/data/celery/beat-schedule'
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/celery/beat.py", line 533, in setup_schedule
    self._store = self._open_schedule()
                  ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/celery/beat.py", line 523, in _open_schedule
    return self.persistence.open(self.schedule_filename, writeback=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/shelve.py", line 243, in open
    return DbfilenameShelf(filename, flag, protocol, writeback)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/shelve.py", line 227, in __init__
    Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
                         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dbm/__init__.py", line 95, in open
    return mod.open(file, flag, mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
_gdbm.error: [Errno 11] Resource temporarily unavailable: '/app/data/celery/beat-schedule'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/weblate/utils/management/commands/cleanup_celery.py", line 45, in handle
    self.setup_schedule()
  File "/usr/local/lib/python3.11/site-packages/weblate/utils/management/commands/cleanup_celery.py", line 41, in setup_schedule
    scheduler.setup_schedule()
  File "/usr/local/lib/python3.11/site-packages/celery/beat.py", line 541, in setup_schedule
    self._store = self._destroy_open_corrupted_schedule(exc)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/celery/beat.py", line 529, in _destroy_open_corrupted_schedule
    return self._open_schedule()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/celery/beat.py", line 523, in _open_schedule
    return self.persistence.open(self.schedule_filename, writeback=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/shelve.py", line 243, in open
    return DbfilenameShelf(filename, flag, protocol, writeback)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/shelve.py", line 227, in __init__
    Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
                         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dbm/__init__.py", line 95, in open
    return mod.open(file, flag, mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
_gdbm.error: [Errno 2] No such file or directory: '/app/data/celery/beat-schedule'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/weblate", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/usr/local/lib/python3.11/site-packages/weblate/runner.py", line 34, in main
    utility.execute()
  File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 440, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 402, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python3.11/site-packages/weblate/utils/management/base.py", line 42, in execute
    return super().execute(*args, **options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 448, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/weblate/utils/management/commands/cleanup_celery.py", line 48, in handle
    self.try_remove(settings.CELERY_BEAT_SCHEDULE_FILENAME)
  File "/usr/local/lib/python3.11/site-packages/weblate/utils/management/commands/cleanup_celery.py", line 35, in try_remove
    os.remove(filename)
FileNotFoundError: [Errno 2] No such file or directory: '/app/data/celery/beat-schedule'

Additional context

No response

Weblate pod not running - probe failing and unable to connect to redis

Describe the issue

I am using the Weblate service in our infrastructure and installed it with the commands from https://docs.weblate.org/en/latest/admin/install/kubernetes.html.
After that I just added the admin name, site domain and a few other setup parameters to the deployment manifest. The PostgreSQL and Redis pods are running, but the Weblate container is not: it shows liveness and readiness probe failures, and the logs say it is unable to connect to Redis.

POD STATUS
# kubectl get pods | grep -i release
my-release-postgresql-0 1/1 Running 0 21h
my-release-redis-master-0 1/1 Running 0 21h
my-release-weblate-74c484d9d5-9kbm8 0/1 Running 7 (5m59s ago) 25m

POD DESCRIPTION:
# kubectl describe pods my-release-weblate-74c484d9d5-9kbm8
Normal Scheduled 26m default-scheduler Successfully assigned default/my-release-weblate-74c484d9d5-9kbm8 to gke-production-clust-production-clust-acab2557-dnk6
Normal Started 21m (x3 over 26m) kubelet Started container weblate
Warning Unhealthy 19m (x5 over 24m) kubelet Liveness probe failed: Get "http://10.28.1.5:8080/healthz/": dial tcp 10.28.1.5:8080: connect: connection refused
Normal Pulled 18m (x4 over 26m) kubelet Container image "weblate/weblate:4.15.2-1" already present on machine
Normal Created 18m (x4 over 26m) kubelet Created container weblate
Warning BackOff 5m55s (x21 over 21m) kubelet Back-off restarting failed container
Warning Unhealthy 3s (x18 over 24m) kubelet Readiness probe failed: Get "http://10.28.1.5:8080/healthz/": dial tcp 10.28.1.5:8080: connect: connection refused

POD LOGS:
# kubectl logs my-release-weblate-74c484d9d5-9kbm8
  File "/usr/local/lib/python3.11/site-packages/redis/client.py", line 1255, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
  File "/usr/local/lib/python3.11/site-packages/redis/connection.py", line 1427, in get_connection
    connection.connect()
  File "/usr/local/lib/python3.11/site-packages/redis/connection.py", line 630, in connect
    raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 111 connecting to my-release-redis-master:6379. Connection refused.
    raise ConnectionInterrupted(connection=client) from e
django_redis.exceptions.ConnectionInterrupted: Redis ConnectionError: Error 111 connecting to my-release-redis-master:6379. Connection refused.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/weblate", line 8, in <module>
    sys.exit(main())

A few months back I deployed Weblate using the same process and faced the same Postgres and Redis connection errors (though I did not face liveness/readiness probe failures back then). It was solved by changing the REDIS and POSTGRES hostname values in the deployment manifest to:
REDIS_HOST: my-release-redis-master
POSTGRES_DATABASE: weblate
POSTGRES_HOST: my-release-postgresql

But this time these changes don't work either.

Also, at the Weblate pod level the /healthz/ path does not respond with 200 OK on port 8080, which is why the liveness/readiness probes fail.

# kubectl exec -it my-release-weblate-74c484d9d5-9kbm8 -- curl http://localhost:8080/healthz
curl: (7) Failed to connect to localhost port 8080: Connection refused

$ hostname
my-release-weblate-74c484d9d5-9kbm8
$ curl localhost:8080/healthz
curl: (7) Failed to connect to localhost port 8080: Connection refused

Please help us fix this issue as soon as possible; we would be really grateful.

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

  1. Go to '...'
  2. Scroll down to '...'
  3. Click on '...'
  4. See error

Expected behavior

No response

Screenshots

No response

Exception traceback

No response

How do you run Weblate?

Other

Weblate versions

No response

Weblate deploy checks

No response

Additional context

No response

Where to find the backup files for Weblate

Hi

I want to know where to find the automated backup files for Weblate.
I have successfully deployed Weblate via a Docker Compose file and it's working fine. We need to export the instance data but cannot find the backup files. We checked the path "/tmp/backups/" by connecting to the Weblate container, and it's empty.
Can you help me find where the backup files are stored? Attaching a screenshot for reference, which shows that the backups are created successfully.

Thanks In Advance!!

(screenshot: backup service status showing successfully created backups)

Correctly set `settings-override.py` permissions

When creating the service with the Helm chart, the following error is raised:

weblate.E027 	The path /app/data/settings-override.py is owned by different user, check your DATA_DIR settings.

I've seen that the file is mounted in the container with the following permissions:

-rw-r--r--  1 root    weblate     0 Jun 15 15:30 settings-override.py

While debugging this I've seen that the weblate user belongs to the root group, though it doesn't have permission to edit root-owned files. Wouldn't it be better if it didn't belong to that group?

weblate@weblate-547cf6f5b-pjwvz:/etc$ id
uid=1000(weblate) gid=1000(weblate) groups=1000(weblate),0(root),5(tty)
weblate@weblate-547cf6f5b-pjwvz:/etc$ echo "# hi" >> /etc/hosts
bash: hosts: Permission denied


Upgrade Weblate version in the chart

Currently the chart is using weblate 4.0.4, but when started it states:

INFOS:
?: (weblate.I031) New Weblate version is available, please upgrade to 4.1.

I haven't dived into the documentation to see if it's straightforward, sorry :(

Confusing default values for SMTP (TLS/SSL)

Describe the issue

The default values for SMTP are confusing:

# emailTLS -- Use TLS when sending emails
emailTLS: true
# emailSSL -- Use SSL when sending emails
emailSSL: true

You can only ever set one of these to true; otherwise celery complains:

EMAIL_USE_TLS/EMAIL_USE_SSL are mutually exclusive, so only set one of those settings to True.

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

  1. Create a helm release without setting either emailTLS or emailSSL to false
  2. You won't be able to send emails with the exception below.

Expected behavior

Since port 587 is used in the default values, I would suggest setting only emailTLS to true.

See also here:
https://docs.djangoproject.com/en/4.0/ref/settings/#std:setting-EMAIL_USE_TLS
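Concretely, the suggested defaults would be a values block like the following (sketch; `emailPort` is assumed to be the chart's key for the SMTP port and should be checked against the chart's documented values):

```yaml
# Port 587 expects STARTTLS, so only emailTLS should default to true:
emailPort: 587
emailTLS: true
emailSSL: false

# For implicit TLS on port 465 it would instead be:
# emailPort: 465
# emailTLS: false
# emailSSL: true
```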

Screenshots

No response

Exception traceback

celery-notify stderr | [2022-02-28 15:31:11,160: ERROR/ForkPoolWorker-1] Failure while executing task
celery-notify stderr | Traceback (most recent call last):
celery-notify stderr |   File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 451, in trace_task
celery-notify stderr |     R = retval = fun(*args, **kwargs)
celery-notify stderr |   File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 734, in __protected_call__
celery-notify stderr |     return self.run(*args, **kwargs)
celery-notify stderr |   File "/usr/local/lib/python3.10/site-packages/weblate/accounts/tasks.py", line 138, in send_mails
celery-notify stderr |     connection = get_connection()
celery-notify stderr |   File "/usr/local/lib/python3.10/site-packages/django/core/mail/__init__.py", line 35, in get_connection
celery-notify stderr |     return klass(fail_silently=fail_silently, **kwds)
celery-notify stderr |   File "/usr/local/lib/python3.10/site-packages/django/core/mail/backends/smtp.py", line 31, in __init__
celery-notify stderr |     raise ValueError(
celery-notify stderr | ValueError: EMAIL_USE_TLS/EMAIL_USE_SSL are mutually exclusive, so only set one of those settings to True.

Additional context

No response

Ability to add Init container

Describe the problem

Hello
I have a problem after installing weblate by helm chart

/app/bin/start: 14: /app/bin/start: cannot create /app/data/secret: Permission denied

Describe the solution you'd like

Usually I use an init container to change filesystem permissions on the PVC.

Could you add the ability to use an init container via the Helm chart's values?
Thank you
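For illustration, a typical permission-fixing init container would look like the sketch below. The `extraInitContainers` key and the `data` mount name are hypothetical: the chart would need to grow such a value and wire it into the pod spec.

```yaml
extraInitContainers:        # hypothetical key; not an existing chart value
  - name: fix-permissions
    image: busybox:1.36
    # 1000 is the weblate UID/GID used by the official image
    command: ["sh", "-c", "chown -R 1000:1000 /app/data"]
    volumeMounts:
      - name: data          # assumed to match the chart's data volume name
        mountPath: /app/data
```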

Describe alternatives you've considered

No response

Screenshots

No response

Additional context

No response

Excessive Migration Time and Readiness/Liveness Timeouts During System Update

Describe the issue

When performing an update, the migration process takes an excessive amount of time, leading to readiness/liveness timeouts and subsequent pod restarts.

The migration process exceeds the anticipated timeframe, resulting in readiness/liveness timeouts and the subsequent restart of the pod.

The readiness and liveness timeouts are not adjustable via the values yaml file, which limits our ability to mitigate the issue.

Recommendation:

  1. Introduce the option to modify the readiness and liveness timeout values as configurable variables in the values yaml file.
  2. Alternatively, consider increasing the default readiness and liveness timeout values to accommodate scenarios where the migration process requires additional time due to limited hardware resources.

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

  1. Initiate an update on the system.
  2. Observe the migration process during the update.

Expected behavior

The migration process completes within a reasonable timeframe, allowing the pod to maintain its readiness and liveness.

Screenshots

(screenshot: migration log output)

Exception traceback

No response

Additional context

livenessProbe:
  httpGet:
    path: {{ .Values.sitePrefix }}/healthz/
    port: http
  failureThreshold: 10
  initialDelaySeconds: 90
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: {{ .Values.sitePrefix }}/healthz/
    port: http
  failureThreshold: 5
  initialDelaySeconds: 90
  periodSeconds: 30

The extended duration of the migration process on our system can be attributed to the constrained hardware resources available.
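One way to tolerate long migrations without loosening the steady-state probes is a `startupProbe`: liveness and readiness checks only begin once it succeeds, so a slow migration no longer triggers restarts. A sketch of what the deployment template could add (thresholds illustrative, not chart defaults):

```yaml
startupProbe:
  httpGet:
    path: {{ .Values.sitePrefix }}/healthz/
    port: http
  # Kubernetes holds off liveness/readiness until this probe passes,
  # allowing up to 60 * 30s = 30 minutes for migrations to finish.
  failureThreshold: 60
  periodSeconds: 30
```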

Getting a 502 when loading some pages on weblate

I am installing weblate with the following command:

helm install weblate weblate/weblate -f path\to\values.yaml

And the content of values.yaml as follows:

adminEmail: "[email protected]"
adminUser: "my.user"
adminPassword: "my.password"

service:
  type: LoadBalancer

All 3 pods are successfully created and running, but I keep getting 502 Bad Gateway responses for some actions on the page, namely every time I try to delete a project, and often when just trying to access the page.

Any idea how I can fix this issue?

Deploy Weblate's components separately to allow for horizontal scaling

Describe the problem

Right now, the Weblate Helm chart deploys as a single monolithic instance: every component runs in one container, which causes slow startup times because every component has to initialize in the same container.

But according to https://docs.weblate.org/en/latest/admin/install/docker.html#scaling-horizontally, you can actually scale Weblate horizontally by deploying multiple instances of Weblate, each with extra arguments to run a specific component.

Doing this allows for high availability, as individual components can be restarted without affecting the whole instance: when a certain component (i.e. Celery, or the web server) fails, it can quickly restart and reconnect without causing downtime.

Describe the solution you'd like

Extend the Helm chart to allow Weblate components to be deployed separately instead of running everything in a single binary.

There is an example of a similar distributed Helm deployment, from Grafana Mimir, which uses a single Grafana Mimir container image with extra arguments to target a specific component: https://github.com/grafana/mimir/tree/main/operations/helm/charts/mimir-distributed

While that deployment has its own hash ring clustering, seems like our distributed Docker compose only relies on a common database backend, so this should be fine enough.

Describe alternatives you've considered

No response

Screenshots

No response

Additional context

No response

Change repository name

Hi, I understand that using helm as the repository name for your organization makes sense, but it can lead to conflicts for people who fork your repository.

Something like weblate-helm or weblate-chart would solve it.

Thanks

Release notes for helm chart

Describe the problem

Currently I cannot find any release notes for new releases of the Weblate Helm chart. The only way to see what happened, and what must be changed in the YAML deployment or values files, is to go through the commit history manually and read through the code changes line by line. This is very inefficient and still not guaranteed to catch all changes.

Describe the solution you'd like

I'd like to have release notes for each new Weblate Helm chart release that mention what has changed, and also whether and what needs to be changed to migrate from the previous version to the new one. Something like this would be amazing: https://github.com/WeblateOrg/weblate/releases

Describe alternatives you've considered

read through the git commit history manually, line by line (current approach)

Screenshots

No response

Additional context

No response

Get automated PRs for dependency updates

Describe the problem

The chart dependencies should be kept up to date.

Describe the solution you'd like

Is there something like dependabot for updating helm charts?

Describe alternatives you've considered

No response

Screenshots

No response

Additional context

Part of #163

Ability to add custom labels

Describe the problem

I need to add some custom labels to the resources deployed by the chart

Describe the solution you'd like

Allow user to add custom labels via chart values

Describe alternatives you've considered

No response

Screenshots

No response

Additional context

No response

Authentication fails in clean weblate helm repo installation

I am installing the latest weblate version on an AKS cluster, via the following command:

helm install weblate weblate/weblate -f C:\path\to\values.yaml

The content of values.yaml is as follows:

# adminEmail -- Email of Admin Account
adminEmail: "[email protected]"
# adminUser -- Name of Admin Account
adminUser: "my-user"
# adminPassword -- Password of Admin Account
adminPassword: *********

service:
  type: LoadBalancer

Weblate goes into a crash loop after installation, and I can see this in the logs:

psql: error: FATAL: password authentication failed for user "postgres"

What do you think is the issue here?

The path /app/data/settings-override.py is owned by different user, check your DATA_DIR settings.

Describe the issue

Chart won't properly start with the following error:

check stderr | SystemCheckError: System check identified some issues:
check stderr | 
check stderr | CRITICALS:
check stderr | ?: (weblate.E027) The path /app/data/settings-override.py is owned by different user, check your DATA_DIR settings.
check stderr | 	HINT: https://docs.weblate.org/en/weblate-4.4.2/admin/install.html#file-permissions

What is the proper way to address this issue and have Weblate start properly? (It currently returns 502 Bad Gateway.)

Looks related to #16 but there is no detailed fix there.

I already tried

podSecurityContext:
  fsGroup: 0

But it does not seem to work.

  • I've read and searched the docs and did not find the answer there.
    If you didn’t try already, try to search there what you wrote above.

To Reproduce the issue

Install the chart with default persistence values.

CHART_VERSION:="0.3.0"
APP_VERSION:="4.4.2"

Expected behavior

Weblate properly starts.

Screenshots

Exception traceback

Additional context

Logs me out after adding 2 new translations

Describe the issue

Hi Team,

Whenever I try to add new translations, it logs me out after adding 2 of them, and I need to wait for an hour; when I add more translations, it logs me out again.

Can you please help !!

Regards,
Raman Muthu

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

  1. Go to '...'
  2. Scroll down to '...'
  3. Click on '...'
  4. See error

Expected behavior

No response

Screenshots

No response

Exception traceback

No response

Additional context

No response

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Awaiting Schedule

These updates are awaiting their schedule. Click on a checkbox to get an update now.

  • chore(deps): update helm release postgresql to v15.5.21
  • chore(deps): update helm release redis to v20

Detected dependencies

github-actions
.github/workflows/closing.yml
  • peter-evans/create-or-update-comment v4
  • peter-evans/create-or-update-comment v4
  • ubuntu 22.04
.github/workflows/helm-release.yaml
  • actions/checkout v4
  • azure/setup-helm v3.5
  • helm/chart-releaser-action v1.6.0
  • ubuntu 22.04
.github/workflows/helm-test.yaml
  • actions/checkout v4
  • azure/setup-helm v3.5
  • actions/setup-python v5
  • helm/chart-testing-action v2.6.1
  • actions/checkout v4
  • azure/setup-helm v3.5
  • actions/setup-python v5
  • helm/chart-testing-action v2.6.1
  • helm/kind-action v1.10.0
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/label-sync.yml
  • actions/checkout v4
  • srealmoreno/label-sync-action v1
  • ubuntu 22.04
.github/workflows/labels.yml
  • peter-evans/create-or-update-comment v4
  • peter-evans/create-or-update-comment v4
  • peter-evans/create-or-update-comment v4
  • peter-evans/create-or-update-comment v4
  • peter-evans/create-or-update-comment v4
  • ubuntu 22.04
.github/workflows/pre-commit.yml
  • actions/checkout v4
  • actions/cache v4
  • actions/setup-python v5
  • ubuntu 22.04
.github/workflows/pull_requests.yaml
  • peter-evans/enable-pull-request-automerge v3
.github/workflows/stale.yml
  • actions/stale v9
  • ubuntu 22.04
helm-values
charts/weblate/values.yaml
  • weblate/weblate 5.6.2.0
helmv3
charts/weblate/Chart.yaml
  • postgresql 15.5.20
  • redis 19.6.4
pip_requirements
requirements-lint.txt
  • pre-commit ==3.8.0
pre-commit
.pre-commit-config.yaml
  • pre-commit/pre-commit-hooks v4.6.0
  • macisamuele/language-formatters-pre-commit-hooks v2.14.0
  • executablebooks/mdformat 0.7.17
  • adrienverge/yamllint v1.35.1
  • norwoodj/helm-docs v1.14.2
regex
.pre-commit-config.yaml
  • mdformat-gfm 0.3.6
  • mdformat-black 0.1.1
  • mdformat-shfmt 0.1.0
charts/weblate/Chart.yaml
  • weblate/weblate 5.6.2.0
.github/workflows/helm-test.yaml
  • helm/helm v3.15.3
  • helm/helm v3.15.3

  • Check this box to trigger a request for Renovate to run again on this repository

Authentication with Azure AD

Hi,
I have deployed Weblate with the Helm chart on AKS and am trying to set up authentication of the Weblate application with my Azure Active Directory. I followed the steps provided in the documentation and got stuck at a step where we need to edit "weblate/settings.py".

I am not able to find the file "weblate/settings.py" in my Weblate pod. Please help us with the path to "weblate/settings.py".

Support existing claims for redis and postgresql

Describe the problem

Both the redis and postgresql charts support configuring an existing claim, but it seems impossible to use this through the weblate Helm chart.

Describe the solution you'd like

Add support for existingClaims for both redis and postgresql subcharts

  • Postgresql: primary.persistence.existingClaim
  • Redis: master.persistence.existingClaim
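Assuming the parent chart forwards these values to the subcharts unmodified, the requested configuration would look like this in the release values (PVC names are placeholders):

```yaml
postgresql:
  primary:
    persistence:
      existingClaim: my-postgres-pvc   # placeholder PVC name
redis:
  master:
    persistence:
      existingClaim: my-redis-pvc      # placeholder PVC name
```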

Describe alternatives you've considered

No response

Screenshots

No response

Additional context

No response

Using external PostgreSQL database doesn't work

Describe the issue

When setting postgresql.enabled to false, the entire POSTGRES_* environment variable block is missing.

In the values file it says:

  # postgresql.postgresqlHost -- External postgres database endpoint, to be
  # used if `postgresql.enabled == false`

But in the deployment the entire block is omitted if postgresql.enabled is false:

{{- if .Values.postgresql.enabled }}
- name: POSTGRES_DATABASE
  value: "{{ .Values.postgresql.postgresqlDatabase }}"
- name: POSTGRES_HOST
  value: "{{ .Values.postgresql.postgresqlHost | default (include "weblate.postgresql.fullname" .) }}"
- name: POSTGRES_PORT
  value: "{{ .Values.postgresql.service.port }}"
- name: POSTGRES_USER
  valueFrom:
    secretKeyRef:
      name: {{ include "weblate.fullname" . }}
      key: postgresql-user
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ include "weblate.fullname" . }}
      key: postgresql-password
{{- end }}

The value of postgresqlHost would have been used in the following line:

value: "{{ .Values.postgresql.postgresqlHost | default (include "weblate.postgresql.fullname" .) }}"

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

  1. Create a helm release with values similar to this one:
postgresql:
  enabled: false
  postgresqlUsername: weblate
  postgresqlDatabase: weblate
  postgresqlHost: my-external-psql-server.com
  2. The release will fail with an error:
KeyError: 'POSTGRES_DATABASE'

Expected behavior

All POSTGRES_* environment variables are still set even if postgresql.enabled is false.
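A minimal fix, sketched from the template quoted above: only the host default should depend on the bundled subchart, while the rest of the `POSTGRES_*` block is emitted unconditionally.

```yaml
# Sketch: emit POSTGRES_* always; only the host fallback is conditional.
- name: POSTGRES_DATABASE
  value: "{{ .Values.postgresql.postgresqlDatabase }}"
- name: POSTGRES_HOST
{{- if .Values.postgresql.enabled }}
  value: "{{ .Values.postgresql.postgresqlHost | default (include "weblate.postgresql.fullname" .) }}"
{{- else }}
  value: "{{ .Values.postgresql.postgresqlHost }}"
{{- end }}
```

The remaining entries (port, user, password) would stay as in the current template, just moved outside the `if`.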

Screenshots

No response

Exception traceback

Traceback (most recent call last):
  File "/usr/local/bin/weblate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/site-packages/weblate/runner.py", line 36, in main
    from weblate.utils.errors import report_error
  File "/usr/local/lib/python3.10/site-packages/weblate/utils/errors.py", line 32, in <module>
    import weblate.utils.version
  File "/usr/local/lib/python3.10/site-packages/weblate/utils/version.py", line 23, in <module>
    from weblate.vcs.base import RepositoryException
  File "/usr/local/lib/python3.10/site-packages/weblate/vcs/base.py", line 38, in <module>
    from weblate.utils.lock import WeblateLock
  File "/usr/local/lib/python3.10/site-packages/weblate/utils/lock.py", line 23, in <module>
    from django_redis.cache import RedisCache
  File "/usr/local/lib/python3.10/site-packages/django_redis/cache.py", line 12, in <module>
    DJANGO_REDIS_SCAN_ITERSIZE = getattr(settings, "DJANGO_REDIS_SCAN_ITERSIZE", 10)
  File "/usr/local/lib/python3.10/site-packages/django/conf/__init__.py", line 84, in __getattr__
    self._setup(name)
  File "/usr/local/lib/python3.10/site-packages/django/conf/__init__.py", line 71, in _setup
    self._wrapped = Settings(settings_module)
  File "/usr/local/lib/python3.10/site-packages/django/conf/__init__.py", line 179, in __init__
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/usr/local/lib/python3.10/site-packages/weblate/settings_docker.py", line 67, in <module>
    "NAME": os.environ["POSTGRES_DATABASE"],
  File "/usr/local/lib/python3.10/os.py", line 679, in __getitem__
    raise KeyError(key) from None

Additional context

No response

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: .github/renovate.json
Error type: The renovate configuration file contains some invalid settings
Message: Invalid regExp for regexManagers[4].fileMatch: '^\.github/workflows)/[^/]+\.ya?ml$'

Issue with bitnami redis chart (pod not starting)

Describe the issue

The redis-master-0 pod doesn't start.

redis 14:08:02.92 INFO  ==> ** Starting Redis **
1:C 31 Aug 2021 14:08:02.935 # Can't chdir to '/data': Permission denied

I already tried

Already tried to drop the whole namespace and reinstall.

To Reproduce the issue

I am using a manually created PVC for weblate itself:

          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: "weblate-pvc"
            namespace: "weblate"
          spec:
            accessModes:
              - ReadWriteMany
            storageClassName: longhorn-2r
            resources:
              requests:
                storage: 1Gi

And my helm install call/definition (through ansible):

        release_name: weblate
        chart_ref: weblate/weblate
        release_values:
          fullnameOverride: "weblate"
          emailHost: "10.0.0.111"
          serverEmail: weblate@x
          defaultFromEmail: weblate@x
          allowedHosts: weblate.x,localhost
          siteDomain: weblate.x
          adminEmail: x@x
          adminPassword: 123456
          debug: "0"
          postgresql:
            enabled: false
            postgresqlHost: "10.0.0.12"
            postgresqlPassword: 123456
          ingress:
            enabled: false
          persistence:
            existingClaim: "weblate-pvc"
          extraConfig:
            WEBLATE_GITHUB_USERNAME: 123456
            WEBLATE_SOCIAL_AUTH_GITHUB_KEY: 123456
            WEBLATE_SOCIAL_AUTH_GITHUB_SECRET: 123456
            WEBLATE_REGISTRATION_OPEN: "1"
            WEBLATE_TIME_ZONE: 'Europe/Paris'
            WEBLATE_ENABLE_HTTPS: "1"

I can see a PVC for Redis which is green (redis-data-weblate-redis-master-0), but the pod doesn't start because of permission issues.
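The Bitnami Redis chart ships an opt-in init container for exactly this kind of permission problem on pre-provisioned volumes. Through the parent chart it should be reachable under the `redis` key (path per Bitnami's values; worth double-checking against the chart version bundled here):

```yaml
redis:
  volumePermissions:
    enabled: true   # runs an init container that chowns /data on the PVC
```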

allow to disable security context

Describe the issue

Installing Weblate via the Helm chart on OpenShift is currently non-trivial. OpenShift manages the UID and GID of each container by default and ensures it is not running as root.

In some contexts we include Weblate as a subchart to be able to deploy other things at the same time, like an externalSecretName.

The current default values.yaml explicitly defines:

podSecurityContext:
  fsGroup: 1000

securityContext: {}

which is NOT possible to override due to a current Helm bug.

I've seen many projects add a boolean flag to enable or disable this,
a bit like what you can do with your Bitnami Redis and PostgreSQL subcharts.

postgresql:
  securityContext:
    enabled: false
  containerSecurityContext:
    enabled: false

redis:
  securityContext:
    enabled: false
  containerSecurityContext:
    enabled: false

Would you be ready to consider such a change for the chart?

thank you for your considerations.
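One way to implement such a flag in the chart templates, sketched with a hypothetical `enabled` key inside `podSecurityContext` (the `omit` call drops the flag itself before rendering the rest of the map):

```yaml
# values.yaml (hypothetical shape)
podSecurityContext:
  enabled: true
  fsGroup: 1000

# templates/deployment.yaml excerpt
{{- if .Values.podSecurityContext.enabled }}
securityContext:
  {{- omit .Values.podSecurityContext "enabled" | toYaml | nindent 8 }}
{{- end }}
```

With `podSecurityContext.enabled: false`, the whole block is omitted and OpenShift can assign its own UID/GID.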

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

Expected behavior

Screenshots

No response

Exception traceback

No response

Additional context

No response

Update ingress to networking.k8s.io/v1

Is your feature request related to a problem? If so, please describe.
Currently I get the following warning from Kubernetes:

networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress

Describe the solution you'd like
Update to networking.k8s.io/v1 for the ingress.
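For reference, the v1 API moves the backend to a nested `service` object and makes `pathType` mandatory. A minimal sketch (host and service name are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: weblate
spec:
  rules:
    - host: weblate.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: weblate
                port:
                  number: 80
```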

Pod takes too long to become ready

Describe the issue

Lately the readiness probe's initial delay has been raised to 300 seconds. This is too high.
The chart is currently designed in a way that zero-downtime deployments are not possible, so the startup time needs to improve.

In contrast to the liveness probe, a failing readiness probe is not a problem during container startup, i.e. Kubernetes will not restart the container. A failing readiness probe only excludes the container from being load-balanced in the weblate service.

Both readiness probe initial delay and liveness probe initial delay should be configurable and use a sensible default value.
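A sketch of how the delays could be exposed in the chart, assuming hypothetical `readinessProbe`/`livenessProbe` keys in values.yaml and Weblate's health endpoint (path shown is an assumption, verify against the image):

```yaml
# values.yaml (hypothetical defaults)
readinessProbe:
  initialDelaySeconds: 30
livenessProbe:
  initialDelaySeconds: 300

# templates/deployment.yaml excerpt
readinessProbe:
  httpGet:
    path: /healthz/
    port: http
  initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
livenessProbe:
  httpGet:
    path: /healthz/
    port: http
  initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
```

Another option would be a `startupProbe`, which pauses both other probes until the app has started, so the liveness delay would not need to be inflated at all.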

I already tried

  • I've read and searched the documentation.
  • I've searched for similar issues in this repository.

Steps to reproduce the behavior

Deploy the app; it takes too long until it is ready.

Expected behavior

No response

Screenshots

No response

Exception traceback

No response

Additional context

No response
