
charts's Introduction

Sentry helm charts

Sentry is a cross-platform crash reporting and aggregation platform.

This repository aims to support Sentry >=10 and move out from the deprecated Helm charts official repo.

Big thanks to the maintainers of the deprecated chart. This work has been partly inspired by it.

How this chart works

helm repo add sentry https://sentry-kubernetes.github.io/charts
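Once the repository is added, the chart can be installed into its own namespace. A minimal sketch, assuming a release name of sentry and a namespace of sentry (both arbitrary); the install can take a while because of the setup hooks, hence the generous timeout:

helm repo update
helm install sentry sentry/sentry -n sentry --create-namespace --wait --timeout 20m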

Values

For now, the full list of values is not documented, but you can take inspiration from the values.yaml specific to each directory.

Upgrading from 22.x.x Version of This Chart to 23.x.x

This version introduces changes to the definitions of ingest-consumers and workers. These changes allow the ingestion pipeline to be balanced with more granularity.

Major Changes

  • Ingest consumers: Templates for Deployment and HPA manifests are now separate for ingest-consumer-events, ingest-consumer-attachments and ingest-consumer-transactions.
  • Workers: Templates for two additional worker Deployments added, each with its own HPA. By default they're configured for processing error- and transaction-related tasks, but the queues to consume can be redefined for both.

Migration Guide

Since labels are immutable in Kubernetes Deployments, helm upgrade --force should be used to recreate the ingest-consumer Deployments. Alternatively, the existing ingest-consumer Deployments can be removed manually with kubectl delete before upgrading the Helm release.
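A hedged sketch of both options, assuming a release named sentry installed in the sentry namespace from this repository (adjust names and values file to your setup):

# Option A: force recreation of the immutable Deployments during the upgrade
helm upgrade sentry sentry/sentry -n sentry -f values.yaml --force

# Option B: delete the existing ingest-consumer Deployments first, then upgrade normally
kubectl get deployments -n sentry | grep ingest-consumer
kubectl delete deployment <ingest-consumer deployment names> -n sentry
helm upgrade sentry sentry/sentry -n sentry -f values.yaml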

Upgrading from 21.x.x Version of This Chart to 22.x.x

This version introduces a significant change by dropping support for Kafka Zookeeper and transitioning to Kafka Kraft mode. This change requires action on your part to ensure a smooth upgrade.

Major Changes

  • Kafka Upgrade: The Kafka chart dependency has been upgraded from 23.0.7 to 27.1.2. This involves moving from Zookeeper to Kraft and requires a fresh setup of Kafka.

Migration Guide

  1. Backup Your Data: Ensure all your data is backed up before starting the migration process.

  2. Retrieve the Cluster ID from Zookeeper by executing:

    kubectl exec -it <your-zookeeper-pod> -- zkCli.sh get /cluster/id
  3. Deploy at least one Kraft controller-only node in your deployment with zookeeperMigrationMode=true. The Kraft controllers will migrate the data from your Kafka ZkBroker to Kraft mode.

    To do this, add the following values to your Zookeeper deployment when upgrading:

    controller:
        replicaCount: 1
        controllerOnly: true
        zookeeperMigrationMode: true
    broker:
        zookeeperMigrationMode: true
    kraft:
        enabled: true
        clusterId: "<your_cluster_id>"
  4. Wait until all brokers are ready. You should see the following log in the broker logs:

    INFO [KafkaServer id=100] Finished catching up on KRaft metadata log, requesting that the KRaft controller unfence this broker (kafka.server.KafkaServer)
    INFO [BrokerLifecycleManager id=100 isZkBroker=true] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)

    In the controllers, the following message should show up:

    Transitioning ZK migration state from PRE_MIGRATION to MIGRATION (org.apache.kafka.controller.FeatureControlManager)
  5. Once all brokers have been successfully migrated, set 'broker.zookeeperMigrationMode=false' to fully migrate them.

    broker:
      zookeeperMigrationMode: false
  6. To conclude the migration, switch off migration mode on controllers and stop Zookeeper:

    controller:
        zookeeperMigrationMode: false
    zookeeper:
        enabled: false

    After the migration is complete, you should see the following message in your controllers:

    [2023-07-13 13:07:45,226] INFO [QuorumController id=1] Transitioning ZK migration state from MIGRATION to POST_MIGRATION (org.apache.kafka.controller.FeatureControlManager)
  7. (Optional) If you would like to switch to a non-dedicated cluster, set 'controller.controllerOnly=false'. This will cause controller-only nodes to switch to controller+broker nodes.

    At this point, you could manually decommission broker-only nodes by reassigning their partitions to controller-eligible nodes.

    For more information about decommissioning a Kafka broker, check the official documentation.

Upgrading from 20.x.x version of this Chart to 21.x.x

Bumped dependencies:

  • memcached > 6.5.9
  • kafka > 23.0.7 - This is a major update, but only kafka version is updated. See bitnami charts' update note
  • clickhouse > 3.7.0 - Supports priorityClassName and max_suspicious_broken_parts config.
  • zookeeper > 11.4.11 - 2 Major updates from v9 to v11. See To v10 upgrade notes and To v11 upgrade notes
  • rabbitmq > 11.16.2

Upgrading from 19.x.x version of this Chart to 20.x.x

Bumped dependencies:

  • kafka > 22.1.3 - now supports Kraft. Note that the upgrade is breaking and that you have to start a new kafka from scratch to use it.

Example:

kafka:
  zookeeper:
    enabled: false
  kraft:
    enabled: true

Upgrading from 18.x.x version of this Chart to 19.x.x

Chart dependencies have been upgraded because of Sentry requirements. Changes:

  • The minimum required version of Postgresql is 14.5 (works with 15.x too)

Bumped dependencies:

  • postgresql > 12.5.1 - latest version of the chart with Postgres 15

Upgrading from 17.x.x version of this Chart to 18.x.x

If Kafka is complaining about an unknown or missing topic, please connect to kafka-0 and run

/opt/bitnami/kafka/bin/kafka-topics.sh --create --topic ingest-replay-recordings --bootstrap-server localhost:9092
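If you prefer not to open a shell in the pod, the same command can be run through kubectl exec. A sketch, assuming the bundled Kafka and a release named sentry, so the pod is typically named sentry-kafka-0:

kubectl exec -it sentry-kafka-0 -n sentry -- \
  /opt/bitnami/kafka/bin/kafka-topics.sh --create --topic ingest-replay-recordings --bootstrap-server localhost:9092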

Upgrading from 16.x.x version of this Chart to 17.x.x

Sentry versions from 22.10.0 onwards should use chart 17.x.x.

  • the post-process-forwarder events and transactions topics are split in Sentry 22.10.0

You can delete the deployment "sentry-post-process-forward" as it's no longer needed.

sentry-worker may fail to start because of #774. If you encounter this issue, please reset the counters-0 and triggers-0 queues.
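If you are running the bundled RabbitMQ as the broker, resetting those queues can be done by purging them with rabbitmqctl. A sketch, assuming a release named sentry and the default vhost (pod name and queue names may differ in your setup):

kubectl exec -it sentry-rabbitmq-0 -n sentry -- rabbitmqctl purge_queue counters-0
kubectl exec -it sentry-rabbitmq-0 -n sentry -- rabbitmqctl purge_queue triggers-0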

Upgrading from 15.x.x version of this Chart to 16.x.x

system.secret-key is removed

See https://github.com/sentry-kubernetes/charts/tree/develop/sentry#sentry-secret-key

Upgrading from 14.x.x version of this Chart to 15.x.x

Chart dependencies have been upgraded because of the Bitnami charts removal. Changes:

  • nginx.service.port: 80 > nginx.service.ports.http: 80
  • kafka.service.port > kafka.service.ports.client
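For example, values that used the old keys would be rewritten as follows; a minimal sketch showing only the renamed keys, with the default ports from this chart:

# before (chart 14.x.x)
nginx:
  service:
    port: 80
kafka:
  service:
    port: 9092

# after (chart 15.x.x)
nginx:
  service:
    ports:
      http: 80
kafka:
  service:
    ports:
      client: 9092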

Bumped dependencies:

  • redis > 16.12.1 - latest version of chart
  • kafka > 16.3.2 - chart aligned with zookeeper dependency, upgraded kafka to 3.11
  • rabbit > 8.32.2 - latest 3.9.* image version of chart
  • postgresql > 10.16.2 - latest version of the chart with Postgres 11
  • nginx > 12.0.4 - latest version of chart

Upgrading from 13.x.x version of this Chart to 14.0.0

ClickHouse was reconfigured with sharding and replication in mind. If you are using an external ClickHouse, you don't need to do anything.

Otherwise, you should delete the old ClickHouse volumes in order to upgrade to this version.
WARNING: you will lose current event data.
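A sketch of removing the old volumes before upgrading, assuming the bundled ClickHouse running in the sentry namespace; check the actual PVC names in your cluster first:

kubectl get pvc -n sentry | grep clickhouse
kubectl delete pvc <old clickhouse pvc names> -n sentry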

Upgrading from 12.x.x version of this Chart to 13.0.0

The service annotations have been moved from the service section to the respective service's service sub-section. So what was:

service:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: /_health/
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port

will now be set per service:

sentry:
  web:
    service:
      annotations:
        alb.ingress.kubernetes.io/healthcheck-path: /_health/
        alb.ingress.kubernetes.io/healthcheck-port: traffic-port

relay:
  service:
    annotations:
      alb.ingress.kubernetes.io/healthcheck-path: /api/relay/healthcheck/ready/
      alb.ingress.kubernetes.io/healthcheck-port: traffic-port

Upgrading from 10.x.x version of this Chart to 11.0.0

If you were using ClickHouse Tabix externally: it is now disabled by default.

Upgrading from 9.x.x version of this Chart to 10.0.0

If you were using ClickHouse imagePullSecrets, we unified the way they are used.

Upgrading from 8.x.x version of this Chart to 9.0.0

To simplify first-time installations, the backup value for ClickHouse has been changed to false:

clickhouse.clickhouse.configmap.remote_servers.replica.backup
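If you relied on the previous behaviour, the old default can be restored explicitly in your values. A minimal sketch of that override, using the key path above:

clickhouse:
  clickhouse:
    configmap:
      remote_servers:
        replica:
          backup: true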

Upgrading from 7.x.x version of this Chart to 8.0.0

  • the default value of features.orgSubdomains is now "false"

Upgrading from 6.x.x version of this Chart to 7.0.0

  • the default mode of Relay is now "proxy". You can change it through the values.yaml file
  • we removed the githubSso variable for the GitHub OAuth configuration. It was using an old environment variable that no longer works with Sentry. Just use the common github.xxxx configuration for both OAuth and the application integration (a sketch follows this list).
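A sketch of that common GitHub configuration, based on the commented example further down in this document; all values are placeholders:

github:
  appId: "xxxx"
  appName: MyAppName
  clientId: "xxxxx"
  clientSecret: "xxxxx"
  privateKey: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpA"  # don't forget a trailing \n
  webhookSecret: "xxxxx"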

Upgrading from 5.x.x version of this Chart to 6.0.0

  • The sentry.configYml value is now in a real yaml format
  • If you were previously using relay.asHook, the value is now asHook (see the sketch below)
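A sketch of the renamed value, assuming you previously enabled the hook:

# before (chart 5.x.x)
relay:
  asHook: true

# after (chart 6.x.x)
asHook: true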

Upgrading from 4.x.x version of this Chart to 5.0.0

As Relay is now part of this chart, you need to make sure you enable either NGINX or the Ingress. Please read the next paragraph for more information.

If you are using an ingress gateway (like istio), you have to change your inbound path from sentry-web to nginx.

NGINX and/or Ingress

By default, NGINX is enabled to route incoming requests to Sentry Relay or the Django backend depending on the path. When Sentry is meant to be exposed outside of the Kubernetes cluster, it is recommended to disable NGINX and let the Ingress do the routing instead. The go-to Ingress controller, NGINX Ingress, is recommended, but others should work as well.

Note: if you are using NGINX Ingress, please set this annotation on your ingress: nginx.ingress.kubernetes.io/use-regex: "true". If you are using additionalHostNames, the nginx.ingress.kubernetes.io/upstream-vhost annotation might also come in handy; it sets the Host header to the value you provide to avoid CSRF issues.
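A minimal sketch of those annotations on the chart's ingress values; the hostname is illustrative and the upstream-vhost line is only needed when additionalHostNames is set:

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: sentry.example.com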

Letsencrypt on NGINX Ingress Controller

nginx:
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    enabled: true
    hostname: fqdn
    ingressClassName: "nginx"
    tls: true

Clickhouse warning

Snuba only supports a UTC timezone for Clickhouse. Please keep the initial value!

Upgrading from 3.1.0 version of this Chart to 4.0.0

Following Helm Chart best practices the new version introduces some breaking changes, all configuration for external resources moved to separate config branches: externalClickhouse, externalKafka, externalRedis, externalPostgresql.

Here is a mapping table of old values and new values:

Before → After
postgresql.postgresqlHost → externalPostgresql.host
postgresql.postgresqlPort → externalPostgresql.port
postgresql.postgresqlUsername → externalPostgresql.username
postgresql.postgresqlPassword → externalPostgresql.password
postgresql.postgresqlDatabase → externalPostgresql.database
postgresql.postgresSslMode → externalPostgresql.sslMode
redis.host → externalRedis.host
redis.port → externalRedis.port
redis.password → externalRedis.password
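For example, an external PostgreSQL configured the old way would move as follows; a sketch with placeholder values, showing only the renamed keys:

# before (3.1.0)
postgresql:
  postgresqlHost: my-postgres.example.com
  postgresqlPassword: <password>

# after (4.0.0)
externalPostgresql:
  host: my-postgres.example.com
  password: <password>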

Upgrading from deprecated 9.0 -> 10.0 Chart

As this chart runs on Helm 3 and also tries its best to follow on from the original Sentry chart, there are some steps that need to be taken in order to upgrade correctly.

From your previous installation, make sure to get the following:

  • Redis password (if Redis auth was enabled)
  • PostgreSQL password

Both should be in the secrets of your original 9.0 release. Make a note of both of these values.

Upgrade Steps

Due to an issue when transferring from Helm 2 to 3, StatefulSets that use heritage: {{ .Release.Service }} in the metadata field will error out with a Forbidden error during the upgrade. The only workaround is to delete the existing StatefulSets (don't worry, the PVCs will be retained):

kubectl delete --all sts -n <Sentry Namespace>

Once the StatefulSets are deleted, the next step is to convert the Helm release from version 2 to 3 using the Helm 3 plugin:

helm3 2to3 convert <Sentry Release Name>

Finally, it's just a case of upgrading and ensuring the correct parameters are used:

If Redis auth enabled:

helm upgrade -n <Sentry namespace> <Sentry Release> . --set redis.usePassword=true --set redis.password=<Redis Password> --set postgresql.postgresqlPassword=<Postgresql Password>

If Redis auth is disabled:

helm upgrade -n <Sentry namespace> <Sentry Release> . --set postgresql.postgresqlPassword=<Postgresql Password>

Please also follow the steps for the major version 3 to 4 migration.

PostgreSQL

By default, PostgreSQL is installed as part of the chart. To use an external PostgreSQL server, set postgresql.enabled to false and then set postgresql.postgresHost and postgresql.postgresqlPassword. The other options (postgresql.postgresqlDatabase, postgresql.postgresqlUsername and postgresql.postgresqlPort) may also need to be changed from their default values.
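A minimal sketch of such an external PostgreSQL configuration with placeholder values; the exact key names (for example postgresHost vs postgresqlHost) should be double-checked against the values.yaml of your chart version:

postgresql:
  enabled: false
  postgresqlHost: my-postgres.example.com
  postgresqlPassword: <password>
  # optional overrides
  postgresqlDatabase: sentry
  postgresqlUsername: sentry
  postgresqlPort: 5432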

To avoid issues when upgrading this chart, provide postgresql.postgresqlPassword for subsequent upgrades. This is due to an issue in the PostgreSQL chart where the password would otherwise be overwritten with a randomly generated one. See https://github.com/helm/charts/tree/master/stable/postgresql#upgrade for more detail.
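A sketch of passing the existing password on upgrade; release name, namespace and password are placeholders:

helm upgrade sentry sentry/sentry -n sentry \
  --set postgresql.postgresqlPassword=<existing password>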

Persistence

This chart is capable of mounting the sentry-data PV in the Sentry worker and cron pods. This feature is disabled by default, but is needed for some advanced features such as private sourcemaps.

You may enable mounting of the sentry-data PV across worker and cron pods by changing filestore.filesystem.persistence.persistentWorkers to true. If you plan on deploying Sentry containers across multiple nodes, you may need to change your PVC's access mode to ReadWriteMany and check that your PV supports mounting across multiple nodes.
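A minimal sketch of the relevant values for the filesystem backend, mirroring the commented values further down in this document:

filestore:
  backend: filesystem
  filesystem:
    persistence:
      enabled: true
      accessMode: ReadWriteMany  # needed if web, worker and cron run on different nodes
      size: 10Gi
      persistentWorkers: true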

Roadmap

  • Lint in Pull requests
  • Public availability through Github Pages
  • Automatic deployment through Github Actions
  • Symbolicator deployment
  • Testing the chart in a production environment
  • Improving the README

charts's People

Contributors

adonskoy, agalin, alekitto, alexef, christiangonre, dandydeveloper, farodin91, j0sh0nat0r, jandersen-plaid, joscha-alisch, kimxogus, klengyel, mariusmitrofanbostontr, mokto, nikolaiiuzhaninys, nishaero, pauvos, phylu, pschwager, pyama86, renovate[bot], sagor999, shcherbak, sue445, tartanlegrand, vaivanov, wtayyeb, xvilo, yoan-adfinis, z0rc


charts's Issues

GeoIP Support

We should probably support GeoIP, potentially through a sidecar using this Docker image.

Stuck on sentry-db-init with no logs

Made it pretty far, but getting stuck here. Just stays pending forever with no logs:

sentry-db-init-j56m6                0/1     Pending     0          4m19s
snuba-db-init-mkzgt                 0/1     Completed   0          4m49s
snuba-migrate-4tm7d                 0/1     Completed   0          4m37s
client.go:234: [debug] Starting delete for "sentry-db-init" Job
client.go:98: [debug] creating 1 resource(s)
client.go:439: [debug] Watching for changes to Job sentry-db-init with timeout of 20m0s
client.go:467: [debug] Add/Modify event for sentry-db-init: ADDED
client.go:506: [debug] sentry-db-init: Jobs active: 1, jobs failed: 0, jobs succeeded: 0

I'm running bare-metal on digital ocean, any ideas?

db-init fails with fe_sendauth: no password supplied

Seems to be the same issue as #14 but I am still experiencing it with 0.11.1.

db-init log:

13:49:11 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
13:49:16 [INFO] sentry.plugins.github: apps-not-configured
Traceback (most recent call last):
  File "/usr/local/bin/sentry", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/site-packages/sentry/runner/__init__.py", line 166, in main
    cli(prog_name=get_prog(), obj={}, max_content_width=100)
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/sentry/runner/decorators.py", line 30, in inner
    return ctx.invoke(f, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 168, in upgrade
    _upgrade(not noinput, traceback, verbosity, not no_repair)
  File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 121, in _upgrade
    _migrate_from_south(verbosity)
  File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 93, in _migrate_from_south
    if not _has_south_history(connection):
  File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 78, in _has_south_history
    cursor = connection.cursor()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 254, in cursor
    return self._cursor()
  File "/usr/local/lib/python2.7/site-packages/sentry/db/postgres/decorators.py", line 44, in inner
    return func(self, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/sentry/db/postgres/base.py", line 99, in _cursor
    cursor = super(DatabaseWrapper, self)._cursor()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 229, in _cursor
    self.ensure_connection()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
    self.connect()
  File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
    self.connect()
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 189, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 126, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: fe_sendauth: no password supplied

Using these values:

prefix:

  user:
    create: true
    email: [email protected]
    password: aaaa

  images:
    sentry:
      repository: getsentry/sentry
      tag: b15b1512a59051c84775e5eb7186a4da505e0ac4
      pullPolicy: IfNotPresent
      # imagePullSecrets: []
    snuba:
      repository: getsentry/snuba
      tag: 9d678c2551045a696e5701d845a83c77dc528bd0
      pullPolicy: IfNotPresent
      # imagePullSecrets: []

  sentry:
    web:
      replicas: 1
      env: {}
      probeInitialDelaySeconds: 10
      resources: {}
      affinity: {}
      nodeSelector: {}
      # tolerations: []
      # podLabels: []

      autoscaling:
        enabled: false
        minReplicas: 2
        maxReplicas: 5
        targetCPUUtilizationPercentage: 50

    worker:
      replicas: 3
      # concurrency: 4
      env: {}
      resources: {}
      affinity: {}
      nodeSelector: {}
      # tolerations: []
      # podLabels: []
    cron:
      env: {}
      resources: {}
      affinity: {}
      nodeSelector: {}
      # tolerations: []
      # podLabels: []
    postProcessForward:
      replicas: 1
      env: {}
      resources: {}
      affinity: {}
      nodeSelector: {}
      # tolerations: []
      # podLabels: []

  snuba:
    api:
      replicas: 1
      env: {}
      probeInitialDelaySeconds: 10
      resources: {}
      affinity: {}
      nodeSelector: {}
      # tolerations: []
      # podLabels: []

      autoscaling:
        enabled: false
        minReplicas: 2
        maxReplicas: 5
        targetCPUUtilizationPercentage: 50

    consumer:
      replicas: 1
      env: {}
      resources: {}
      affinity: {}
      nodeSelector: {}
      # tolerations: []
      # podLabels: []
    replacer:
      env: {}
      resources: {}
      affinity: {}
      nodeSelector: {}
      # tolerations: []
      # podLabels: []


  hooks:
    enabled: true
    dbInit:
      resources:
        limits:
          memory: 2048Mi
        requests:
          cpu: 300m
          memory: 2048Mi
    snubaInit:
      resources:
        limits:
          cpu: 2000m
          memory: 1Gi
        requests:
          cpu: 700m
          memory: 1Gi

  system:
    url: ""
    adminEmail: ""
    secretKey: 'icLq77rCyY_qrMMpXa6TQNjkDV6mU!c'
    public: false #  This should only be used if you’re installing Sentry behind your company’s firewall.

  mail:
    backend: dummy # smtp
    useTls: false
    username: ""
    password: ""
    port: 25
    host: ""
    from: ""

  symbolicator:
    enabled: false

  auth:
    register: false

  service:
    name: sentry
    type: ClusterIP
    externalPort: 9000
    annotations: {}
    # externalIPs:
    # - 192.168.0.1
    # loadBalancerSourceRanges: []

  github: {} # https://github.com/settings/apps (Create a Github App)
  # github:
  #   appId: "xxxx"
  #   appName: MyAppName
  #   clientId: "xxxxx"
  #   clientSecret: "xxxxx"
  #   privateKey: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpA" !!!! Don't forget a trailing \n
  #   webhookSecret:  "xxxxx`"

  githubSso: {} # https://github.com/settings/developers (Create a OAuth App)
    # clientId: "xx"
    # clientSecret: "xx"

  slack: {}
  # slack:
  #   clientId:
  #   clientSecret:
  #   verificationToken:

  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-production
      kubernetes.io/tls-acme: "true"
    hostname: sentry.test.dev
    tls:
    - secretName: sentry-certs
      hosts:
        - sentry.test.dev

  filestore:
    # Set to one of filesystem, gcs or s3 as supported by Sentry.
    backend: filesystem

    filesystem:
      path: /var/lib/sentry/files

      ## Enable persistence using Persistent Volume Claims
      ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
      ##
      persistence:
        enabled: true
        ## database data Persistent Volume Storage Class
        ## If defined, storageClassName: <storageClass>
        ## If set to "-", storageClassName: "", which disables dynamic provisioning
        ## If undefined (the default) or set to null, no storageClassName spec is
        ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
        ##   GKE, AWS & OpenStack)
        ##
        # storageClass: "-"
        accessMode: ReadWriteOnce
        size: 10Gi

        ## Whether to mount the persistent volume to the Sentry worker and
        ## cron deployments. This setting needs to be enabled for some advanced
        ## Sentry features, such as private source maps. If you disable this
        ## setting, the Sentry workers will not have access to artifacts you upload
        ## through the web deployment.
        ## Please note that you may need to change your accessMode to ReadWriteMany
        ## if you plan on having the web, worker and cron deployments run on
        ## different nodes.
        persistentWorkers: false

    ## Point this at a pre-configured secret containing a service account. The resulting
    ## secret will be mounted at /var/run/secrets/google
    gcs:
      # credentialsFile: credentials.json
      #  secretName:
      #  bucketName:

    ## Currently unconfigured and changing this has no impact on the template configuration.
    s3: {}
    #  accessKey:
    #  secretKey:
    #  bucketName:
    #  endpointUrl:
    #  signature_version:
    #  region_name:
    #  default_acl:

  config:
    configYml: |
      # No YAML Extension Config Given
    sentryConfPy: |
      # No Python Extension Config Given


  clickhouse:
    enabled: true
    clickhouse:
      imageVersion: "19.16"

  kafka:
    enabled: true
    replicaCount: 3
    allowPlaintextListener: true
    defaultReplicationFactor: 3
    offsetsTopicReplicationFactor: 3
    transactionStateLogReplicationFactor: 3
    transactionStateLogMinIsr: 3

    service:
      port: 9092

  redis:
    ## Required if the Redis component of this chart is disabled. (Existing Redis)
    #
    hostOverride: ""
    enabled: true
    nameOverride: sentry-redis
    usePassword: false
    # Only used when internal redis is disabled
    # host: redis
    # Just omit the password field if your redis cluster doesn't use password
    # password: redis
    # port: 6379
    master:
      persistence:
        enabled: true

  postgresql:
    ## Required if the Postgresql component of this chart is disabled. (Existing Postgres)
    #
    hostOverride: ""
    enabled: true
    nameOverride: sentry-postgresql
    postgresqlUsername: postgres
    postgresqlDatabase: sentry
    # Only used when internal PG is disabled
    # postgresqlHost: postgres
    # postgresqlPassword: postgres
    # postgresqlPort: 5432
    # postgresSslMode: require
    replication:
      enabled: false
      slaveReplicas: 2
      synchronousCommit: "on"
      numSynchronousReplicas: 1

  rabbitmq:
    ## If disabled, Redis will be used instead as the broker.
    enabled: true
    forceBoot: true
    replicaCount: 3
    rabbitmqErlangCookie: pHgpy3Q6adTskzAT6bLHCFqFTF7lMxhA
    rabbitmqUsername: guest
    rabbitmqPassword: guest
    nameOverride: ""

    podDisruptionBudget:
      minAvailable: 1

    persistentVolume:
      enabled: true
    resources: {}
    # rabbitmqMemoryHighWatermark: 600MB
    # rabbitmqMemoryHighWatermarkType: absolute

    definitions:
      policies: |-
       {
         "name": "ha-all",
         "pattern": "^((?!celeryev.*).)*$",
         "vhost": "/",
         "definition": {
           "ha-mode": "all",
           "ha-sync-mode": "automatic",
           "ha-sync-batch-size": 1
         }
       }

Sentry configmap:

apiVersion: v1
data:
  config.yml: |-
    system.secret-key: icLq77rCyY_qrMMpXa6TQNjkDV6mU!c

    symbolicator.enabled: false
    symbolicator.options:
      url: "http://sentry-symbolicator:3021"

    mail.backend: "dummy"
    mail.use-tls: false
    mail.username: ""
    mail.password: ""
    mail.port: 25
    mail.host: ""
    mail.from: ""

    ################
    #    Redis     #
    ################
    redis.clusters:
      default:
        hosts:
          0:
            host: "sentry-sentry-redis-master"
            port: 6379

    ################
    # File storage #
    ################
    # Uploaded media uses these `filestore` settings. The available
    # backends are either `filesystem` or `s3`.
    filestore.backend: 'filesystem'
    filestore.options:
      location: '/var/lib/sentry/files'


    # No YAML Extension Config Given
  sentry.conf.py: "from sentry.conf.server import *  # NOQA\n\nDATABASES = {\n    \"default\":
    {\n        \"ENGINE\": \"sentry.db.postgres\",\n        \"NAME\": \"sentry\",\n
    \       \"USER\": \"postgres\",\n        \"PASSWORD\": os.environ.get(\"POSTGRES_PASSWORD\",
    \"\"),\n        \"HOST\": \"sentry-sentry-postgresql\",\n        \"PORT\": \"5432\",\n
    \   }\n}\n\n# You should not change this setting after your database has been
    created\n# unless you have altered all schemas first\nSENTRY_USE_BIG_INTS = True\n\n###########\n#
    General #\n###########\n\n# Instruct Sentry that this install intends to be run
    by a single organization\n# and thus various UI optimizations should be enabled.\nSENTRY_SINGLE_ORGANIZATION
    = True\n\nSENTRY_OPTIONS[\"system.event-retention-days\"] = int(env('SENTRY_EVENT_RETENTION_DAYS')
    or 90)\n\n#########\n# Queue #\n#########\n\n# See https://docs.getsentry.com/on-premise/server/queue/
    for more\n# information on configuring your queue broker and workers. Sentry relies\n#
    on a Python framework called Celery to manage queues.\nBROKER_URL = os.environ.get(\"BROKER_URL\",
    \"amqp://guest:guest@sentry-rabbitmq:5672//\")\n  \n#########\n# Cache #\n#########\n\n#
    Sentry currently utilizes two separate mechanisms. While CACHES is not a\n# requirement,
    it will optimize several high throughput patterns.\n\n# CACHES = {\n#     \"default\":
    {\n#         \"BACKEND\": \"django.core.cache.backends.memcached.MemcachedCache\",\n#
    \        \"LOCATION\": [\"memcached:11211\"],\n#         \"TIMEOUT\": 3600,\n#
    \    }\n# }\n\n# A primary cache is required for things such as processing events\nSENTRY_CACHE
    = \"sentry.cache.redis.RedisCache\"\n\nDEFAULT_KAFKA_OPTIONS = {\n    \"bootstrap.servers\":
    \"sentry-kafka:9092\",\n    \"message.max.bytes\": 50000000,\n    \"socket.timeout.ms\":
    1000,\n}\n\nSENTRY_EVENTSTREAM = \"sentry.eventstream.kafka.KafkaEventStream\"\nSENTRY_EVENTSTREAM_OPTIONS
    = {\"producer_configuration\": DEFAULT_KAFKA_OPTIONS}\n\nKAFKA_CLUSTERS[\"default\"]
    = DEFAULT_KAFKA_OPTIONS\n\n###############\n# Rate Limits #\n###############\n\n#
    Rate limits apply to notification handlers and are enforced per-project\n# automatically.\n\nSENTRY_RATELIMITER
    = \"sentry.ratelimits.redis.RedisRateLimiter\"\n\n##################\n# Update
    Buffers #\n##################\n\n# Buffers (combined with queueing) act as an
    intermediate layer between the\n# database and the storage API. They will greatly
    improve efficiency on large\n# numbers of the same events being sent to the API
    in a short amount of time.\n# (read: if you send any kind of real data to Sentry,
    you should enable buffers)\n\nSENTRY_BUFFER = \"sentry.buffer.redis.RedisBuffer\"\n\n##########\n#
    Quotas #\n##########\n\n# Quotas allow you to rate limit individual projects or
    the Sentry install as\n# a whole.\n\nSENTRY_QUOTAS = \"sentry.quotas.redis.RedisQuota\"\n\n########\n#
    TSDB #\n########\n\n# The TSDB is used for building charts as well as making things
    like per-rate\n# alerts possible.\n\nSENTRY_TSDB = \"sentry.tsdb.redissnuba.RedisSnubaTSDB\"\n\n#########\n#
    SNUBA #\n#########\n\nSENTRY_SEARCH = \"sentry.search.snuba.EventsDatasetSnubaSearchBackend\"\nSENTRY_SEARCH_OPTIONS
    = {}\nSENTRY_TAGSTORE_OPTIONS = {}\n\n###########\n# Digests #\n###########\n\n#
    The digest backend powers notification summaries.\n\nSENTRY_DIGESTS = \"sentry.digests.backends.redis.RedisBackend\"\n\n##############\n#
    Web Server #\n##############\n\nSENTRY_WEB_HOST = \"0.0.0.0\"\nSENTRY_WEB_PORT
    = 9000\nSENTRY_PUBLIC = False\nSENTRY_WEB_OPTIONS = {\n    \"http\": \"%s:%s\"
    % (SENTRY_WEB_HOST, SENTRY_WEB_PORT),\n    \"protocol\": \"uwsgi\",\n    # This
    is needed to prevent https://git.io/fj7Lw\n    \"uwsgi-socket\": None,\n    \"http-keepalive\":
    True,\n    \"memory-report\": False,\n    # 'workers': 3,  # the number of web
    workers\n}\n\n###########\n# SSL/TLS #\n###########\n\n# If you're using a reverse
    SSL proxy, you should enable the X-Forwarded-Proto\n# header and enable the settings
    below\n\n# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')\n# SESSION_COOKIE_SECURE
    = True\n# CSRF_COOKIE_SECURE = True\n# SOCIAL_AUTH_REDIRECT_IS_HTTPS = True\n\n#
    End of SSL/TLS settings\n\n############\n# Features #\n############\n\n\nSENTRY_FEATURES
    = {\n  \"auth:register\": True\n}\nSENTRY_FEATURES[\"projects:sample-events\"]
    = False\nSENTRY_FEATURES.update(\n    {\n        feature: True\n        for feature
    in (\n            \"organizations:discover\",\n            \"organizations:events\",\n
    \           \"organizations:global-views\",\n            \"organizations:integrations-issue-basic\",\n
    \           \"organizations:integrations-issue-sync\",\n            \"organizations:invite-members\",\n
    \           \"organizations:new-issue-ui\",\n            \"organizations:repos\",\n
    \           \"organizations:require-2fa\",\n            \"organizations:sentry10\",\n
    \           \"organizations:sso-basic\",\n            \"organizations:sso-rippling\",\n
    \           \"organizations:sso-saml2\",\n            \"organizations:suggested-commits\",\n
    \           \"projects:custom-inbound-filters\",\n            \"projects:data-forwarding\",\n
    \           \"projects:discard-groups\",\n            \"projects:plugins\",\n
    \           \"projects:rate-limits\",\n            \"projects:servicehooks\",\n
    \       )\n    }\n)\n\n\n######################\n# GitHub Integration #\n#####################\n\nGITHUB_APP_ID
    = ''\nGITHUB_API_SECRET = ''\n\n#########################\n# Bitbucket Integration
    #\n########################\n\n# BITBUCKET_CONSUMER_KEY = 'YOUR_BITBUCKET_CONSUMER_KEY'\n#
    BITBUCKET_CONSUMER_SECRET = 'YOUR_BITBUCKET_CONSUMER_SECRET'\n\n# No Python Extension
    Config Given"
kind: ConfigMap
metadata:
  creationTimestamp: "2020-04-14T13:48:03Z"
  labels:
    app: sentry
    chart: sentry-0.11.1
    heritage: Helm
    release: sentry
  name: sentry-sentry
  namespace: sentry
  resourceVersion: "48576753"
  selfLink: /api/v1/namespaces/sentry/configmaps/sentry-sentry
  uid: 87cd326e-464e-47a2-8de0-eb97092dcda6

Snuba consumer CrashLoopBackOff

sentry-snuba-consumer-6d8ccf79ff-lcdjj 0/1 CrashLoopBackOff 8 27m

version:
release 0.12.3

  snuba:
    repository: getsentry/snuba
    tag: 477561d1d5ae17ed9b88ed67cf208d0029dd07b4
    pullPolicy: IfNotPresent
clickhouse:
  enabled: true
  clickhouse:
    imageVersion: "19.16"

error log:

2020-04-23 14:11:42,904 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 538}
2020-04-23 14:11:43,912 Flushing 8 items (from {Partition(topic=Topic(name='events'), index=0): Offsets(lo=538, hi=545)}): forced:False size:False time:True
2020-04-23 14:11:43,933 Worker flush took 20ms
2020-04-23 14:12:14,119 Flushing 1 items (from {Partition(topic=Topic(name='events'), index=0): Offsets(lo=546, hi=546)}): forced:False size:False time:True
Traceback (most recent call last):
  File "/usr/local/bin/snuba", line 11, in <module>
    load_entry_point('snuba', 'console_scripts', 'snuba')()
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/src/snuba/snuba/cli/consumer.py", line 178, in consumer
    consumer.run()
  File "/usr/src/snuba/snuba/utils/streams/batching.py", line 137, in run
    self._run_once()
  File "/usr/src/snuba/snuba/utils/streams/batching.py", line 142, in _run_once
    self._flush()
  File "/usr/src/snuba/snuba/utils/streams/batching.py", line 242, in _flush
    self.worker.flush_batch(self.__batch_results)
  File "/usr/src/snuba/snuba/consumer.py", line 101, in flush_batch
    self.__writer.write(inserts)
  File "/usr/src/snuba/snuba/clickhouse/http.py", line 83, in write
    raise ClickhouseError(int(code), message)
snuba.clickhouse.errors.ClickhouseError: [60] Table default.sentry_local doesn't exist. (version 19.16.17.1 (official build))

Unclear installation instructions

Hi, I'm not all too familiar with the deprecated chart, but I'm having some trouble installing this chart as a Helm dependency in my project. When doing a helm search sentry after adding the repo from the README.md, the chart version is 3.1.0 as of now. But when inserting it as a chart dependency, doing a helm dep update and looking at the values.yaml in the downloaded sentry-3.1.0.tgz, the file looks very different from the current release in this GitHub repo. And it's not just missing comments; the whole values.yaml has a completely different structure. So now I'm not sure if the 3.1.0 chart version is an old version or just a completely different chart altogether. Even the Chart.yaml looks very different from the one in the latest 3.0.1 release in this repo.

I'm now quite confused as to how I could get the latest chart of this repo without downloading it from the releases page and inserting it manually. Maybe it's just confusion on my part, but I can imagine others might have the same issue.
Can anyone clear that up, please?

Add example values.yaml with resource recommendations

When you install this, it has nearly no resource limits/requests and can easily take down some of your nodes. I tried ;)

It would be cool to have an example template with all the resource definitions of the main and subcharts. Guessing them is always so hard and maybe somebody has experience with some useful default recommended values.

Helm upgrade failed

I'm not able to upgrade the chart.

Error: UPGRADE FAILED: failed to replace object: Service "sentry-sentry-clickhouse-replica" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-clickhouse" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-clickhouse-tabix" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-zookeeper" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-kafka" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-rabbitmq" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-web" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-sentry-snuba" is invalid: spec.clusterIP: Invalid value: "": field is immutable

Volume existing Claims

It would be nice to have these options in order to use an existing volume:

filestore:
  filesystem:
    persistence:
      enabled: true
      existingClaim: "sentry"

db-init job failed with default options

Hello,

I'm trying to install Sentry with the Helm chart, and I'm having an issue with it.

The 'db-init' job always fails.

Here are the commands that I used:

> helm repo add sentry https://sentry-kubernetes.github.io/charts
> helm repo update

// installing sentry with default values.
> helm install test --wait sentry/sentry

When I enter 'kubectl get pods' I get a few failed pods created by the db-init job.

> kubectl get pods
...
sentry-test-db-init-bbdrl                              0/1     Error       0          17s
sentry-test-db-init-f5szm                              0/1     Error       0          27s
sentry-test-db-init-hvtqq                              0/1     Error       0          34s

Here is the full log of these pods; it says password authentication failed.

2020-07-05T07:24:33.699691445Z 07:24:33 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
2020-07-05T07:24:37.096954703Z 07:24:37 [INFO] sentry.plugins.github: apps-not-configured
2020-07-05T07:24:37.44814423Z Traceback (most recent call last):
2020-07-05T07:24:37.448184276Z   File "/usr/local/bin/sentry", line 8, in <module>
2020-07-05T07:24:37.448190958Z     sys.exit(main())
2020-07-05T07:24:37.448195843Z   File "/usr/local/lib/python2.7/site-packages/sentry/runner/__init__.py", line 166, in main
2020-07-05T07:24:37.44820079Z     cli(prog_name=get_prog(), obj={}, max_content_width=100)
2020-07-05T07:24:37.448205407Z   File "/usr/local/lib/python2.7/site-packages/click/core.py", line 722, in __call__
2020-07-05T07:24:37.44825461Z     return self.main(*args, **kwargs)
2020-07-05T07:24:37.448325686Z   File "/usr/local/lib/python2.7/site-packages/click/core.py", line 697, in main
2020-07-05T07:24:37.448336165Z     rv = self.invoke(ctx)
2020-07-05T07:24:37.4483415Z   File "/usr/local/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
2020-07-05T07:24:37.448575913Z     return _process_result(sub_ctx.command.invoke(sub_ctx))
2020-07-05T07:24:37.448593774Z   File "/usr/local/lib/python2.7/site-packages/click/core.py", line 895, in invoke
2020-07-05T07:24:37.448725785Z     return ctx.invoke(self.callback, **ctx.params)
2020-07-05T07:24:37.448741583Z   File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
2020-07-05T07:24:37.448800729Z     return callback(*args, **kwargs)
2020-07-05T07:24:37.448815546Z   File "/usr/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
2020-07-05T07:24:37.448821339Z     return f(get_current_context(), *args, **kwargs)
2020-07-05T07:24:37.448825948Z   File "/usr/local/lib/python2.7/site-packages/sentry/runner/decorators.py", line 30, in inner
2020-07-05T07:24:37.448887917Z     return ctx.invoke(f, *args, **kwargs)
2020-07-05T07:24:37.448898106Z   File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
2020-07-05T07:24:37.44891772Z     return callback(*args, **kwargs)
2020-07-05T07:24:37.44892334Z   File "/usr/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
2020-07-05T07:24:37.448940672Z     return f(get_current_context(), *args, **kwargs)
2020-07-05T07:24:37.44894626Z   File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 174, in upgrade
2020-07-05T07:24:37.449034581Z     _upgrade(not noinput, traceback, verbosity, not no_repair, with_nodestore)
2020-07-05T07:24:37.449047054Z   File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 121, in _upgrade
2020-07-05T07:24:37.449053366Z     _migrate_from_south(verbosity)
2020-07-05T07:24:37.449068934Z   File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 93, in _migrate_from_south
2020-07-05T07:24:37.449096207Z     if not _has_south_history(connection):
2020-07-05T07:24:37.449104958Z   File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/upgrade.py", line 78, in _has_south_history
2020-07-05T07:24:37.449110233Z     cursor = connection.cursor()
2020-07-05T07:24:37.449114757Z   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 254, in cursor
2020-07-05T07:24:37.449188661Z     return self._cursor()
2020-07-05T07:24:37.449199228Z   File "/usr/local/lib/python2.7/site-packages/sentry/db/postgres/decorators.py", line 44, in inner
2020-07-05T07:24:37.449205047Z     return func(self, *args, **kwargs)
2020-07-05T07:24:37.449210168Z   File "/usr/local/lib/python2.7/site-packages/sentry/db/postgres/base.py", line 97, in _cursor
2020-07-05T07:24:37.449257896Z     return super(DatabaseWrapper, self)._cursor()
2020-07-05T07:24:37.449267889Z   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 229, in _cursor
2020-07-05T07:24:37.449317058Z     self.ensure_connection()
2020-07-05T07:24:37.449326793Z   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
2020-07-05T07:24:37.44936076Z     self.connect()
2020-07-05T07:24:37.449368767Z   File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
2020-07-05T07:24:37.449403008Z     six.reraise(dj_exc_type, dj_exc_value, traceback)
2020-07-05T07:24:37.449413707Z   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
2020-07-05T07:24:37.449441583Z     self.connect()
2020-07-05T07:24:37.44945057Z   File "/usr/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 189, in connect
2020-07-05T07:24:37.449481718Z     self.connection = self.get_new_connection(conn_params)
2020-07-05T07:24:37.449489393Z   File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
2020-07-05T07:24:37.449537086Z     connection = Database.connect(**conn_params)
2020-07-05T07:24:37.449544749Z   File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 127, in connect
2020-07-05T07:24:37.449556407Z     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
2020-07-05T07:24:37.449650157Z django.db.utils.OperationalError: FATAL:  password authentication failed for user "postgres"

Events are missing

Hi,
Wondering if anyone has encountered a similar issue and might know what's causing this.
Sentry receives an event and sends an email about the event being created. But if you follow the link to Sentry, it says:
Sorry, the events for this issue could not be found.
Even though it shows:
Events: 1
but if you click on Events tab, it is also empty.

It happens to some events only. At some point something changes, and then events are coming through and showing up just fine.

Nothing in logs shows any errors or anything that would explain this.
snuba consumer is working fine and no errors.
workers are working fine and not backlogged.
Queue is completely empty.

The only thing that differs from the default chart is that we don't use RabbitMQ, instead using Redis as the broker. Could that cause such behavior?

Any help or ideas would be greatly appreciated.

Chart doesn't work if mail settings are not set

Hi, it fails with this error when you leave the defaults empty.

File "/usr/local/lib/python2.7/site-packages/sentry/options/manager.py", line 270, in validate_option
    raise TypeError("%r: got %r, expected %r" % (key, _type(value), opt.type))
TypeError: 'mail.username': got <type 'NoneType'>, expected string

Unable to enable tabix ingress.

I am getting the following error while trying to enable the Tabix ingress using the values.yaml file.

Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Ingress.spec.rules): invalid type for io.k8s.api.extensions.v1beta1.IngressSpec.rules: got "map", expected "array"

The template file https://github.com/sentry-kubernetes/charts/blob/develop/clickhouse/templates/ingress-clickhouse.yaml seems to be the issue. spec/rules should be an array instead of a map.

DSN not being generated.

When creating a new project, the DSN is not generated. Both the DSN and the deprecated DSN fields are blank.
And clicking "Generate new key" doesn't do it either.
Has anyone else run into this issue?

Workers unable to reconnect to RabbitMQ after unexpected restart

So I appear to have a problem in a cluster I'm working with where the readiness probes are failing and the mq components keep restarting (This is my issue to deal with).

However, the workers appear to struggle to reconnect as consumers to the MQ server, even though RabbitMQ seems to be OK. This MAY be because the service doesn't see the pod as "ready", since the readiness probe isn't finishing.

This may be an entirely localised issue, but I'm jotting it down here just in case.

@Mokto

Revert separation of the database / Readd database dependencies.

I'll undertake this in a local fork for the time being, but I really think this chart should offer a direct upgrade path if it is replacing the official stable repository.

In my case, we'd like to go from Sentry 9 -> Sentry 10, but at present, out of the box, this chart does not offer that. I'll be making updates to support this.

Updating this last statement: It's a helm3 chart. Deps exist. Egg on face!

Using helm3 I get `helm.go:75: [debug] failed pre-install: timed out waiting for the condition`

helm install sentry . --debug
install.go:158: [debug] Original chart version: ""
install.go:175: [debug] CHART PATH: /Users/arian/repos/charts/sentry

client.go:234: [debug] Starting delete for "sentry" ConfigMap
client.go:98: [debug] creating 1 resource(s)
client.go:234: [debug] Starting delete for "snuba" ConfigMap
client.go:98: [debug] creating 1 resource(s)
client.go:234: [debug] Starting delete for "snuba-db-init" Job
client.go:259: [debug] jobs.batch "snuba-db-init" not found
client.go:98: [debug] creating 1 resource(s)
client.go:439: [debug] Watching for changes to Job snuba-db-init with timeout of 5m0s
client.go:467: [debug] Add/Modify event for snuba-db-init: ADDED
client.go:506: [debug] snuba-db-init: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:467: [debug] Add/Modify event for snuba-db-init: MODIFIED
client.go:506: [debug] snuba-db-init: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
Error: failed pre-install: timed out waiting for the condition
helm.go:75: [debug] failed pre-install: timed out waiting for the condition
helm version
version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}

Cleanup Verbosity of "values.yaml" in latest chart.

I'm almost certain a number of our dependencies already use the values we're passing through the sentry values.

The current root values file is 1159 lines; I'm certain there's a ton of fat we can trim by using the defaults from the dependency charts.

I assume that for people coming to this chart, it's pretty overwhelming to have all these options, 90% of which they probably don't need.

PVC for Sentry Web component. Why?

Why do we have a PVC for the sentry-web component? This is bad because:

a. Replicas can't be spawned
b. Deletion of these resources can be cumbersome for backend storage provisioners.

Just want to understand the reasoning behind this?

I'm not sure if the web component actually has any need for persistence? But I don't know the deeper architectural design of the web component. My impression was that there were no persistent files written to disk.

Operation cannot be fulfilled on resourcequotas

When I did

➜  ~ helm install sentry/sentry --generate-name

I've got

Error: Operation cannot be fulfilled on resourcequotas "iskiridomov": the object has been modified; please apply your changes to the latest version and try again

Any help?

sentry-snuba-consumer and sentry-snuba-outcomes-consumer crashing

Hey

I recently installed sentry on-premise but I'm experiencing weird errors when attempting to load projects/issue streams.

sentry-snuba-consumer and sentry-snuba-outcomes-consumer both crash and have the following logs:

Consumer:

2020-07-17 19:08:15,209 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0, Partition(topic=Topic(name='events'), index=1): 2, Partition(topic=Topic(name='events'), index=2): 0, Partition(topic=Topic(name='events'), index=3): 0, Partition(topic=Topic(name='events'), index=4): 0, Partition(topic=Topic(name='events'), index=5): 0, Partition(topic=Topic(name='events'), index=6): 19, Partition(topic=Topic(name='events'), index=7): 0, Partition(topic=Topic(name='events'), index=8): 0, Partition(topic=Topic(name='events'), index=9): 0, Partition(topic=Topic(name='events'), index=10): 0, Partition(topic=Topic(name='events'), index=11): 69}

2020-07-17 19:08:16,316 Flushing 13 items (from {Partition(topic=Topic(name='events'), index=11): Offsets(lo=69, hi=70), Partition(topic=Topic(name='events'), index=6): Offsets(lo=19, hi=29)}): forced:False size:False time:True

2020-07-17 19:08:16,434 Worker flush took 118ms

2020-07-17 19:08:21,255 Flushing 1 items (from {Partition(topic=Topic(name='events'), index=11): Offsets(lo=71, hi=71)}): forced:False size:False time:True

Traceback (most recent call last):

  File "/usr/local/bin/snuba", line 11, in <module>

    load_entry_point('snuba', 'console_scripts', 'snuba')()

  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 722, in __call__

    return self.main(*args, **kwargs)

  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 697, in main

    rv = self.invoke(ctx)

  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke

    return _process_result(sub_ctx.command.invoke(sub_ctx))

  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 895, in invoke

    return ctx.invoke(self.callback, **ctx.params)

  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 535, in invoke

    return callback(*args, **kwargs)

  File "/usr/src/snuba/snuba/cli/consumer.py", line 162, in consumer

    consumer.run()

  File "/usr/src/snuba/snuba/utils/streams/batching.py", line 137, in run

    self._run_once()

  File "/usr/src/snuba/snuba/utils/streams/batching.py", line 142, in _run_once

    self._flush()

  File "/usr/src/snuba/snuba/utils/streams/batching.py", line 242, in _flush

    self.worker.flush_batch(self.__batch_results)

  File "/usr/src/snuba/snuba/consumer.py", line 109, in flush_batch

    self.__writer.write(inserts)

  File "/usr/src/snuba/snuba/clickhouse/http.py", line 84, in write

    raise ClickhouseError(int(code), message)

snuba.clickhouse.errors.ClickhouseError: [60] Table default.sentry_dist doesn't exist. (version 19.16.19.85 (official build))

Outcomes Consumer:

2020-07-17 19:09:06,207 New partitions assigned: {Partition(topic=Topic(name='outcomes'), index=0): 21, Partition(topic=Topic(name='outcomes'), index=1): 11, Partition(topic=Topic(name='outcomes'), index=2): 1, Partition(topic=Topic(name='outcomes'), index=3): 9, Partition(topic=Topic(name='outcomes'), index=4): 2, Partition(topic=Topic(name='outcomes'), index=5): 7, Partition(topic=Topic(name='outcomes'), index=6): 7, Partition(topic=Topic(name='outcomes'), index=7): 32, Partition(topic=Topic(name='outcomes'), index=8): 5, Partition(topic=Topic(name='outcomes'), index=9): 10, Partition(topic=Topic(name='outcomes'), index=10): 11, Partition(topic=Topic(name='outcomes'), index=11): 10}

2020-07-17 19:09:09,779 Flushing 1 items (from {Partition(topic=Topic(name='outcomes'), index=8): Offsets(lo=5, hi=5)}): forced:False size:False time:True

Traceback (most recent call last):
  File "/usr/local/bin/snuba", line 11, in <module>
    load_entry_point('snuba', 'console_scripts', 'snuba')()
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/src/snuba/snuba/cli/consumer.py", line 162, in consumer
    consumer.run()
  File "/usr/src/snuba/snuba/utils/streams/batching.py", line 137, in run
    self._run_once()
  File "/usr/src/snuba/snuba/utils/streams/batching.py", line 142, in _run_once
    self._flush()
  File "/usr/src/snuba/snuba/utils/streams/batching.py", line 242, in _flush
    self.worker.flush_batch(self.__batch_results)
  File "/usr/src/snuba/snuba/consumer.py", line 109, in flush_batch
    self.__writer.write(inserts)
  File "/usr/src/snuba/snuba/clickhouse/http.py", line 84, in write
    raise ClickhouseError(int(code), message)
snuba.clickhouse.errors.ClickhouseError: [60] Table default.outcomes_raw_dist doesn't exist. (version 19.16.19.85 (official build))

I have custom values for the chart template; they are:

system:
  url: "xxxxxx"
  adminEmail: "xxxxxxx"
  secretKey: 'xxxxxxxx'
  public: false #  This should only be used if you’re installing Sentry behind your company’s firewall.

# Auth disabled because it'll be using Azure.
auth:
  register: false

# Don't want a user created by default
user:
  create: false

## -- MICROSERVICE CONFIGURATION --
kafka:
  enabled: false

## This value is only used when kafka.enabled is set to false
externalKafka:
  host: "kafka-0.kafka-headless.databases.svc.cluster.local"
  port: 9092

redis:
  enabled: false

## This value is only used when redis.enabled is set to false
externalRedis:
  host: "redis-master.databases.svc.cluster.local"
  database: 4
  prefix: 'turret'
  port: 6379

postgresql:
  enabled: false

## This value is only used when postgresql.enabled is set to false
externalPostgresql:
  host: "xxxxxxxx"
  port: 0000
  username: "xxxx"
  password: "xxxxx"
  database: xxxxx
  # sslMode: require

rabbitmq:
  ## If disabled, Redis will be used instead as the broker.
  enabled: false

filestore:
  # Set to one of filesystem, gcs or s3 as supported by Sentry.
  backend: s3

  s3:
    accessKey: 'xxx'
    secretKey: 'xxxx'
    bucketName: xxx
    endpointUrl: 'xxxx'
    # signature_version:
    region_name: 'xxxxx'
    # default_acl:

I can provide any extra info if needed.

Thanks,
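For anyone hitting the same ClickhouseError, a minimal way to check the ClickHouse side is sketched below. It assumes the snuba image ships the standard snuba CLI; the pod names are placeholders, not the chart's exact resource names.

  # List the tables that exist in the default database on ClickHouse
  kubectl exec -it <clickhouse-pod> -- clickhouse-client --query "SHOW TABLES FROM default"

  # If sentry_dist / outcomes_raw_dist are missing, re-running the snuba
  # bootstrap from any snuba pod should (re)create the topics and tables
  kubectl exec -it <snuba-pod> -- snuba bootstrap --force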

Attempting to update to the latest sentry tag caused internal errors for Sentry

I attempted to use the latest tags for sentry and snuba:
getsentry/sentry:00ae12b89402f10158e36413d35a94d642b56082
getsentry/snuba:3b11c3b64901d91fbbd0622ca718bcb7d871dd4a
But once rolled out, Sentry became unusable, with every page giving an internal error.

snuba-api and snuba-consumer were logging this error:
clickhouse_driver.errors.NetworkError: Code: 210. Cannot assign requested address (localhost:9000)
I wonder if something changed with regard to how they expect the ClickHouse address to be provided, since they shouldn't be using localhost to connect to ClickHouse.

@J0sh0nat0r, you mentioned previously that you are running the latest tags for sentry. Did you do anything differently on your end to make it work?

This is using chart version 1.3.1
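A quick way to narrow this down is to check which ClickHouse host the snuba containers were actually rendered with; snuba reads CLICKHOUSE_HOST from its environment, and a missing value would explain a localhost fallback. The resource names below are guesses based on the default release name, so adjust as needed.

  # Inspect the ClickHouse-related env vars on the snuba API deployment
  kubectl get deploy sentry-snuba-api -o yaml | grep -i -A1 clickhouse

  # Or look at the live environment inside any running snuba pod
  kubectl exec -it <snuba-pod> -- env | grep -i clickhouse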

OpenShift unable to deploy because of privileged users

Hello,

It would be great to ship all StatefulSets in a way that makes it possible to run them on OpenShift.

For example:
create Pod sentry-clickhouse-replica-0 in StatefulSet sentry-clickhouse-replica failed error: pods "sentry-clickhouse-replica-0" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1016890000, 1016899999] spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
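For reference, the restricted SCC in that error rejects two things: runAsUser: 0 and privileged: true. A container securityContext that such an SCC would accept looks roughly like the sketch below; these are plain Kubernetes fields, and which values keys expose them differs per subchart.

  securityContext:
    runAsNonRoot: true
    # Leave runAsUser unset so OpenShift can assign a UID from the namespace's
    # allowed range (1016890000-1016899999 in the error above)
    privileged: false
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]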

A redeploy is detecting changes to the services

Hi there,

When redeploying the Helm chart with the same values file, I am experiencing the following:

 upgrade.go:291: [debug] warning: Upgrade "sentry" failed: failed to replace object: Service "sentry-kafka" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-web" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-snuba" is invalid: spec.clusterIP: Invalid value: "": field is immutable
 Error: UPGRADE FAILED: failed to replace object: Service "sentry-kafka" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-web" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-snuba" is invalid: spec.clusterIP: Invalid value: "": field is immutable
 helm.go:75: [debug] failed to replace object: Service "sentry-kafka" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-web" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "sentry-snuba" is invalid: spec.clusterIP: Invalid value: "": field is immutable

This is how I am invoking Helm:

helm upgrade --install --debug \
      $APP_NAME \
      sentry/sentry \
      --namespace=$APP_NAME \
      --version 3.0.1 \
      -f values.yaml

Any ideas?

Kubernetes 1.16 support

Hello, I just upgraded to Kubernetes 1.16 and got stuck with the Sentry Helm chart.

Error: unable to build kubernetes objects from current release manifest: unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2"

I guess StatefulSet in apps/v1beta2 is deprecated and apps/v1 should be used instead. Any chance of a fix?

https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
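For context, the template-side fix is essentially the apiVersion bump sketched below; note that apps/v1 also makes spec.selector mandatory and it must match the pod template labels (names and labels here are illustrative only).

  # Before (removed in Kubernetes 1.16): apiVersion: apps/v1beta2
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: sentry-example
  spec:
    serviceName: sentry-example
    # apps/v1 requires an explicit selector matching the template labels
    selector:
      matchLabels:
        app: sentry-example
    template:
      metadata:
        labels:
          app: sentry-example
      spec:
        containers:
          - name: example
            image: busybox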

Metrics Support

The previous chart had support for statsd/Prometheus via

helm/charts#14355

Are there any plans for this type of 'native' integration in this chart, à la metrics.enabled=true?
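If that integration were added, one hypothetical shape for it in values.yaml might be the snippet below; these keys are purely illustrative and are not confirmed chart values.

  # Hypothetical values sketch only -- key names are illustrative, not confirmed
  metrics:
    enabled: true          # deploy a statsd/Prometheus exporter alongside Sentry
    serviceMonitor:
      enabled: true        # create a Prometheus Operator ServiceMonitor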

Worker cannot connect to RabbitMQ

The Sentry worker cannot connect to RabbitMQ; I guess this causes issues not to be processed.

11:36:23 [ERROR] celery.worker.consumer: consumer: Cannot connect to amqp://guest:**@sentry-rabbitmq:5672//: Error opening socket: hostname lookup failed.
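The "hostname lookup failed" part points at DNS or Service resolution rather than RabbitMQ credentials; a quick check from the cluster is sketched below (the pod name is a placeholder).

  # Confirm the RabbitMQ Service exists in the release namespace
  kubectl get svc sentry-rabbitmq

  # Confirm the name resolves from inside a worker pod
  kubectl exec -it <sentry-worker-pod> -- getent hosts sentry-rabbitmq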

sentry-web is getting stuck in pending state after update if using PVC with ReadWriteOnce

If the filestore is configured with a PVC using access mode ReadWriteOnce, then during an update sentry-web will get stuck in the Pending state, as by default it attempts a rolling update. It creates a new instance of sentry-web while the old one is still running, and the new instance never reaches the Running state, because it cannot mount the PVC while the old instance still holds it.

The fix for this is simple. Add the following to deployment-sentry-web.yaml:

  strategy:
    type: Recreate

That will enforce shutting down the old sentry-web before creating a new instance of sentry-web.
It might be worth adding this only if filestore.filesystem.persistence.accessMode is set to ReadWriteOnce.
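A possible way to make that conditional in the chart template is sketched below; it uses the values path mentioned above and standard Helm templating, but it is only a sketch, not the chart's actual template.

  # deployment-sentry-web.yaml (sketch): only force Recreate when the shared
  # filestore PVC cannot be mounted by two pods at once
  {{- if eq .Values.filestore.filesystem.persistence.accessMode "ReadWriteOnce" }}
    strategy:
      type: Recreate
  {{- end }}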
