ONLYOFFICE Docs for Kubernetes

This repository contains a set of files to deploy ONLYOFFICE Docs into a Kubernetes cluster or OpenShift cluster.

Introduction

  • You must have a Kubernetes or OpenShift cluster installed. Please check out the reference to set up Kubernetes and the reference to set up OpenShift.
  • You should also have a locally configured copy of kubectl. See this guide on how to install and configure kubectl.
  • You should install Helm v3.7+. Please follow the instructions here to install it.
  • If you use OpenShift, you can use both oc and kubectl to manage the deployment.
  • If components external to Docs are installed from Helm charts in an OpenShift cluster, it is recommended to install them as a user who has the cluster-admin role, in order to avoid possible problems with access rights. See this guide to add the necessary roles to the user.

Deploy prerequisites

Note: It may be required to apply a SecurityContextConstraints (SCC) policy when installing into an OpenShift cluster, which adds permission to run containers as a user whose ID is 1001.

To do this, run the following commands:

$ oc apply -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/scc/helm-components.yaml
$ oc adm policy add-scc-to-group scc-helm-components system:authenticated

1. Add Helm repositories

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo add nfs-server-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner
$ helm repo add onlyoffice https://download.onlyoffice.com/charts/stable
$ helm repo update

2. Install Persistent Storage

If you want to use Amazon S3 as a cache, please skip this step.

Install NFS Server Provisioner

Note: When installing NFS Server Provisioner, an nfs Storage Class is created. When installing to an OpenShift cluster, the user must have a role that allows creating Storage Classes in the cluster. Read more here.

$ helm install nfs-server nfs-server-provisioner/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set persistence.size=PERSISTENT_SIZE
  • PERSISTENT_STORAGE_CLASS is a Persistent Storage Class available in your Kubernetes cluster.

    Persistent Storage Classes for different providers:

    • Amazon EKS: gp2
    • Digital Ocean: do-block-storage
    • IBM Cloud: Default ibmc-file-bronze. More storage classes
    • Yandex Cloud: yc-network-hdd or yc-network-ssd. More details
    • minikube: standard
  • PERSISTENT_SIZE is the total size of all Persistent Storages for the nfs Persistent Storage Class. You can express the size as a plain integer with one of these suffixes: T, G, M, Ti, Gi, Mi. For example: 9Gi. A filled-in example follows this list.
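
For example, a filled-in sketch for minikube, assuming the standard Storage Class and a 9Gi allocation (adjust both values for your provider):

$ helm install nfs-server nfs-server-provisioner/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.storageClass=standard \
  --set persistence.size=9Gi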

See more details about installing NFS Server Provisioner via Helm here.

Configure a Persistent Volume Claim

Note: The default nfs Persistent Volume Claim is 8Gi. You can change it in the values.yaml file in the persistence.storageClass and persistence.size sections. It should be smaller than PERSISTENT_SIZE by at least about 5%. It's recommended to use 8Gi or more of persistent storage for every 100 active users of ONLYOFFICE Docs.

The PersistentVolume type used for PVC placement must support the ReadWriteMany access mode. The PersistentVolume must also be owned by the user that ONLYOFFICE Docs runs as; by default this is ds (101:101).

Note: If you want to enable WOPI, please set the parameter wopi.enabled=true. In this case, the Persistent Storage must be mounted on the cluster nodes with caching disabled for the mounted directory on the clients. For NFS Server Provisioner, this can be achieved by adding the noac option to the storageClass.mountOptions parameter, as in the sketch below. Please find more information here.
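
For example, a sketch of the NFS Server Provisioner installation with the noac mount option added (the other values are the same placeholders as above):

$ helm install nfs-server nfs-server-provisioner/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set persistence.size=PERSISTENT_SIZE \
  --set "storageClass.mountOptions[0]=noac"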

3. Deploy RabbitMQ

To install RabbitMQ to your cluster, run the following command:

$ helm install rabbitmq bitnami/rabbitmq \
  --set persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set metrics.enabled=false

Note: Set the metrics.enabled=true to enable exposing RabbitMQ metrics to be gathered by Prometheus.

See more details about installing RabbitMQ via Helm here.

4. Deploy Redis

To install Redis to your cluster, run the following command:

$ helm install redis bitnami/redis \
  --set architecture=standalone \
  --set master.persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set metrics.enabled=false

Note: Set the metrics.enabled=true to enable exposing Redis metrics to be gathered by Prometheus.

See more details about installing Redis via Helm here.

5. Deploy Database

As a database server, you can use PostgreSQL, MySQL, or MariaDB.

If PostgreSQL is selected as the database server, then follow these steps

To install PostgreSQL to your cluster, run the following command:

$ helm install postgresql bitnami/postgresql \
  --set auth.database=postgres \
  --set primary.persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set primary.persistence.size=PERSISTENT_SIZE \
  --set metrics.enabled=false

See more details about installing PostgreSQL via Helm here.

If MySQL is selected as the database server, then follow these steps

To install MySQL to your cluster, run the following command:

$ helm install mysql bitnami/mysql \
  --set auth.database=onlyoffice \
  --set auth.username=onlyoffice \
  --set primary.persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set primary.persistence.size=PERSISTENT_SIZE \
  --set metrics.enabled=false

See more details about installing MySQL via Helm here.

Here, PERSISTENT_SIZE is the size of the database persistent volume. For example: 8Gi.

It's recommended to use at least 2Gi of persistent storage for every 100 active users of ONLYOFFICE Docs.

Note: Set the metrics.enabled=true to enable exposing Database metrics to be gathered by Prometheus.
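
For example, a filled-in sketch of the PostgreSQL variant (the storage class and size here are illustrative, not recommendations):

$ helm install postgresql bitnami/postgresql \
  --set auth.database=postgres \
  --set primary.persistence.storageClass=standard \
  --set primary.persistence.size=8Gi \
  --set metrics.enabled=false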

6. Deploy StatsD exporter

This step is optional. You can skip step #6 entirely if you don't want to run the StatsD exporter.

6.1 Add Helm repositories

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo add kube-state-metrics https://kubernetes.github.io/kube-state-metrics
$ helm repo update

6.2 Installing Prometheus

To install Prometheus to your cluster, run the following command:

$ helm install prometheus -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/extraScrapeConfigs.yaml prometheus-community/prometheus \
  --set server.global.scrape_interval=1m

To change the scrape interval, specify the server.global.scrape_interval parameter.

See more details about installing Prometheus via Helm here.

6.3 Installing StatsD exporter

To install StatsD exporter to your cluster, run the following command:

$ helm install statsd-exporter prometheus-community/prometheus-statsd-exporter \
  --set statsd.udpPort=8125 \
  --set statsd.tcpPort=8126 \
  --set statsd.eventFlushInterval=30000ms

See more details about installing Prometheus StatsD exporter via Helm here.

To enable the StatsD metrics in ONLYOFFICE Docs, follow step 5.2.

7. Make changes to Node-config configuration files

This step is optional. You can skip step #7 entirely if you don't need to make changes to the configuration files.

7.1 Create a ConfigMap containing a json file

In order to create a ConfigMap from a file that contains the local.json structure, you need to run the following command:

$ kubectl create configmap local-config \
  --from-file=./local.json

Note: Any name can be used instead of local-config.
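
For illustration, a minimal local.json might override a single setting, for example (the keys below are an assumption shown only to illustrate the expected shape; consult the ONLYOFFICE Docs configuration documentation for the actual structure):

$ cat > ./local.json << 'EOF'
{
  "services": {
    "CoAuthoring": {
      "autoAssembly": {
        "enable": true,
        "interval": "5m"
      }
    }
  }
}
EOF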

7.2 Specify parameters when installing ONLYOFFICE Docs

When installing ONLYOFFICE Docs, specify the extraConf.configMap=local-config and extraConf.filename=local.json parameters.

Note: If you need to add a configuration file after ONLYOFFICE Docs is already installed, execute step 7.1 and then run the helm upgrade documentserver onlyoffice/docs --set extraConf.configMap=local-config --set extraConf.filename=local.json --no-hooks command, or helm upgrade documentserver -f ./values.yaml onlyoffice/docs --no-hooks if the parameters are specified in the values.yaml file.

8. Add custom Fonts

This step is optional. You can skip step #8 entirely if you don't need to add your fonts.

In order to add fonts to the images, you need to rebuild the images. Refer to the relevant steps in this manual. Then specify your images when installing ONLYOFFICE Docs.

9. Add Plugins

This step is optional. You can skip step #9 entirely if you don't need to add plugins.

In order to add plugins to the images, you need to rebuild the images. Refer to the relevant steps in this manual. Then specify your images when installing ONLYOFFICE Docs.

10. Change interface themes

This step is optional. You can skip step #10 entirely if you don't need to change the interface themes.

10.1 Create a ConfigMap containing a json file

To create a ConfigMap with a json file that contains the interface themes, you need to run the following command:

$ kubectl create configmap custom-themes \
  --from-file=./custom-themes.json

Note: Instead of custom-themes and custom-themes.json you can use any other names.
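
As an illustration only, the file might be shaped like this (the field names are assumptions, not a verified schema; see the ONLYOFFICE interface themes documentation for the real format):

$ cat > ./custom-themes.json << 'EOF'
{
  "themes": [
    {
      "id": "theme-custom",
      "name": "Custom theme",
      "type": "light"
    }
  ]
}
EOF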

10.2 Specify parameters when installing ONLYOFFICE Docs

When installing ONLYOFFICE Docs, specify the extraThemes.configMap=custom-themes and extraThemes.filename=custom-themes.json parameters.

Note: If you need to add interface themes after ONLYOFFICE Docs is already installed, execute step 10.1 and then run the helm upgrade documentserver onlyoffice/docs --set extraThemes.configMap=custom-themes --set extraThemes.filename=custom-themes.json --no-hooks command, or helm upgrade documentserver -f ./values.yaml onlyoffice/docs --no-hooks if the parameters are specified in the values.yaml file.

11. Connecting Amazon S3 bucket as a cache to ONLYOFFICE Helm Docs

In order to connect Amazon S3 bucket as a cache, you need to create a configuration file or edit the existing one in accordance with this guide and change the value of the parameter persistence.storageS3 to true.
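
For example, a sketch that passes the flag at install time, assuming the configuration file from step 7 (local-config / local.json) already contains your S3 settings:

$ helm install documentserver onlyoffice/docs \
  --set extraConf.configMap=local-config \
  --set extraConf.filename=local.json \
  --set persistence.storageS3=true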

Deploy ONLYOFFICE Docs

Note: It may be required to apply a SecurityContextConstraints (SCC) policy when installing into an OpenShift cluster, which adds permission to run containers as a user whose ID is 101.

To do this, run the following commands:

$ oc apply -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/scc/docs-components.yaml
$ oc adm policy add-scc-to-group scc-docs-components system:authenticated

Also, you must set the podSecurityContext.enabled parameter to true:

$ helm install documentserver onlyoffice/docs --set podSecurityContext.enabled=true

1. Deploy the ONLYOFFICE Docs license

1.1. Create secret

If you have a valid ONLYOFFICE Docs license, create a secret license from the file:

$ kubectl create secret generic license --from-file=path/to/license.lic

Note: The source license file name must be license.lic, because this name is used as the key in the created secret.

1.2. Specify parameters when installing ONLYOFFICE Docs

When installing ONLYOFFICE Docs, specify the license.existingSecret=license parameter.

$ helm install documentserver onlyoffice/docs --set license.existingSecret=license

Note: If you need to add a license after ONLYOFFICE Docs is already installed, execute step 1.1 and then run the helm upgrade documentserver onlyoffice/docs --set license.existingSecret=license --no-hooks command, or helm upgrade documentserver -f ./values.yaml onlyoffice/docs --no-hooks if the parameters are specified in the values.yaml file.

2. Deploy ONLYOFFICE Docs

To deploy ONLYOFFICE Docs with the release name documentserver:

$ helm install documentserver onlyoffice/docs

The command deploys ONLYOFFICE Docs on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Note: When installing ONLYOFFICE Docs in a private k8s cluster behind a Web proxy or with no internet access, see the notes below.

3. Uninstall ONLYOFFICE Docs

To uninstall/delete the documentserver deployment:

$ helm delete documentserver

Executing the helm delete command launches hooks that perform preparatory actions before completely deleting ONLYOFFICE Docs: stopping the server and cleaning up the used PVC and database tables. The default hook execution time is 300s. The execution time can be changed using --timeout [time], for example:

$ helm delete documentserver --timeout 25m

Note: When deleting ONLYOFFICE Docs in a private k8s cluster behind a Web proxy or with no internet access, see the notes below.

If you want to delete the ONLYOFFICE Docs without any preparatory actions, run the following command:

$ helm delete documentserver --no-hooks

The helm delete command removes all the Kubernetes components associated with the chart and deletes the release.

4. Parameters

Parameter Description Default
connections.dbType The database type. Possible values are postgres, mariadb or mysql postgres
connections.dbHost The IP address or the name of the Database host postgresql
connections.dbUser Database user postgres
connections.dbPort Database server port number 5432
connections.dbName Name of the database the application will be connected with postgres
connections.dbPassword Database user password. If set, it takes priority over connections.dbExistingSecret ""
connections.dbSecretKeyName The name of the key that contains the Database user password postgres-password
connections.dbExistingSecret Name of existing secret to use for Database passwords. Must contain the key specified in connections.dbSecretKeyName postgresql
connections.redisConnectorName Defines which connector to use to connect to Redis. If you need to connect to Redis Sentinel, set the value ioredis redis
connections.redisHost The IP address or the name of the Redis host redis-master
connections.redisPort The Redis server port number 6379
connections.redisUser The Redis user name. The value of this parameter overrides the value set in the options object in local.json if you add a custom configuration file default
connections.redisDBNum Number of the Redis logical database to be selected. The value of this parameter overrides the value set in the options object in local.json if you add a custom configuration file 0
connections.redisClusterNodes List of nodes in the Redis cluster. There is no need to specify every node in the cluster, 3 should be enough. You can specify multiple values. It must be specified in the host:port format []
connections.redisSentinelGroupName Name of a group of Redis instances composed of a master and one or more slaves. Used if connections.redisConnectorName is set to ioredis mymaster
connections.redisPassword The password set for the Redis account. If set, it takes priority over connections.redisExistingSecret. The value of this parameter overrides the value set in the options object in local.json if you add a custom configuration file ""
connections.redisSecretKeyName The name of the key that contains the Redis user password redis-password
connections.redisExistingSecret Name of existing secret to use for Redis passwords. Must contain the key specified in connections.redisSecretKeyName. The password from this secret overrides the password set in the options object in local.json redis
connections.redisNoPass Defines whether to connect to Redis without authentication. If the connection to the Redis server does not require a password, set the value to true false
connections.amqpType Defines the AMQP server type. Possible values are rabbitmq or activemq rabbitmq
connections.amqpHost The IP address or the name of the AMQP server rabbitmq
connections.amqpPort The port for the connection to AMQP server 5672
connections.amqpVhost The virtual host for the connection to AMQP server /
connections.amqpUser The username for the AMQP server account user
connections.amqpProto The protocol for the connection to AMQP server amqp
connections.amqpPassword AMQP server user password. If set, it takes priority over connections.amqpExistingSecret ""
connections.amqpSecretKeyName The name of the key that contains the AMQP server user password rabbitmq-password
connections.amqpExistingSecret The name of existing secret to use for AMQP server passwords. Must contain the key specified in connections.amqpSecretKeyName rabbitmq
persistence.existingClaim Name of an existing PVC to use. If not specified, a PVC named "ds-files" will be created ""
persistence.annotations Defines annotations that will be additionally added to the "ds-files" PVC. If set, it takes priority over the commonAnnotations {}
persistence.storageClass PVC Storage Class for ONLYOFFICE Docs data volume nfs
persistence.size PVC Storage Request for ONLYOFFICE Docs volume 8Gi
persistence.storageS3 Defines whether S3 will be used as cache storage. Set to true if you will use S3 as cache storage false
namespaceOverride The name of the namespace in which ONLYOFFICE Docs will be deployed. If not set, the name will be taken from .Release.Namespace ""
commonLabels Defines labels that will be additionally added to all the deployed resources. You can also use tpl as the value for the key {}
commonAnnotations Defines annotations that will be additionally added to all the deployed resources. You can also use tpl as the value for the key. Some resources may override the values specified here with their own {}
serviceAccount.create Enable ServiceAccount creation false
serviceAccount.name Name of the ServiceAccount to be used. If not set and serviceAccount.create is true, the name will be taken from .Release.Name; if serviceAccount.create is false, the name will be "default" ""
serviceAccount.annotations Map of annotations to add to the ServiceAccount. If set, it takes priority over the commonAnnotations {}
serviceAccount.automountServiceAccountToken Enable auto mount of ServiceAccountToken on the serviceAccount created. Used only if serviceAccount.create is true true
license.existingSecret Name of the existing secret that contains the license. Must contain the key license.lic ""
license.existingClaim Name of the existing PVC in which the license is stored. Must contain the file license.lic ""
log.level Defines the type and severity of a logged event. Possible values are ALL, TRACE, DEBUG, INFO, WARN, ERROR, FATAL, MARK, OFF WARN
log.type Defines the format of a logged event. Possible values are pattern, json, basic, coloured, messagePassThrough, dummy pattern
log.pattern Defines the log pattern if log.type=pattern [%d] [%p] %c - %.10000m
wopi.enabled Defines if WOPI is enabled. If the parameter is enabled, then caching attributes for the mounted directory (PVC) should be disabled for the client false
metrics.enabled Enables StatsD for ONLYOFFICE Docs false
metrics.host Defines StatsD listening host statsd-exporter-prometheus-statsd-exporter
metrics.port Defines StatsD listening port 8125
metrics.prefix Defines StatsD metrics prefix for backend services ds.
extraConf.configMap The name of the ConfigMap containing the json file that overrides the default values ""
extraConf.filename The name of the json file that contains custom values. Must be the same as the key name in extraConf.configMap local.json
extraThemes.configMap The name of the ConfigMap containing the json file that contains the interface themes ""
extraThemes.filename The name of the json file that contains custom interface themes. Must be the same as the key name in extraThemes.configMap custom-themes.json
podAntiAffinity.type Types of Pod antiaffinity. Allowed values: soft or hard soft
podAntiAffinity.topologyKey Node label key to match kubernetes.io/hostname
podAntiAffinity.weight Priority when selecting node. It is in the range from 1 to 100 100
nodeSelector Node labels for pods assignment {}
tolerations Tolerations for pods assignment []
imagePullSecrets Container image registry secret name ""
requestFilteringAgent.allowPrivateIPAddress Defines whether connecting to private IP addresses is allowed. requestFilteringAgent parameters are used if JWT is disabled: jwt.enabled=false false
requestFilteringAgent.allowMetaIPAddress Defines whether connecting to the meta IP address is allowed false
requestFilteringAgent.allowIPAddressList Defines the list of IP addresses allowed to connect. These values take priority over requestFilteringAgent.denyIPAddressList []
requestFilteringAgent.denyIPAddressList Defines the list of IP addresses that are denied to connect []
docservice.annotations Defines annotations that will be additionally added to the Docservice Deployment. If set, it takes priority over the commonAnnotations {}
docservice.podAnnotations Map of annotations to add to the Docservice deployment pods rollme: "{{ randAlphaNum 5 | quote }}"
docservice.replicas Docservice replicas quantity. If the docservice.autoscaling.enabled parameter is enabled, it is ignored 2
docservice.updateStrategy.type Docservice deployment update strategy type Recreate
docservice.customPodAntiAffinity Prohibiting the scheduling of Docservice Pods relative to other Pods containing the specified labels on the same node {}
docservice.podAffinity Defines Pod affinity rules for Docservice Pods scheduling by nodes relative to other Pods {}
docservice.nodeAffinity Defines Node affinity rules for Docservice Pods scheduling by nodes {}
docservice.initContainers Defines containers that run before docservice and proxy containers in the Docservice deployment pod. For example, a container that changes the owner of the PersistentVolume []
docservice.image.repository Docservice container image repository* onlyoffice/docs-docservice-de
docservice.image.tag Docservice container image tag 8.0.1-1
docservice.image.pullPolicy Docservice container image pull policy IfNotPresent
docservice.resources.requests The requested resources for the Docservice container {}
docservice.resources.limits The resources limits for the Docservice container {}
docservice.readinessProbe.enabled Enable readinessProbe for Docservice container true
docservice.livenessProbe.enabled Enable livenessProbe for Docservice container true
docservice.startupProbe.enabled Enable startupProbe for Docservice container true
docservice.autoscaling.enabled Enable Docservice deployment autoscaling false
docservice.autoscaling.annotations Defines annotations that will be additionally added to the Docservice deployment HPA. If set, it takes priority over the commonAnnotations {}
docservice.autoscaling.minReplicas Docservice deployment autoscaling minimum number of replicas 2
docservice.autoscaling.maxReplicas Docservice deployment autoscaling maximum number of replicas 4
docservice.autoscaling.targetCPU.enabled Enable autoscaling of Docservice deployment by CPU usage percentage true
docservice.autoscaling.targetCPU.utilizationPercentage Docservice deployment autoscaling target CPU percentage 70
docservice.autoscaling.targetMemory.enabled Enable autoscaling of Docservice deployment by memory usage percentage false
docservice.autoscaling.targetMemory.utilizationPercentage Docservice deployment autoscaling target memory percentage 70
docservice.autoscaling.customMetricsType Custom, additional or external autoscaling metrics for the Docservice deployment []
docservice.autoscaling.behavior Configuring Docservice deployment scaling behavior policies for the scaleDown and scaleUp fields {}
proxy.accessLog Defines the nginx config access_log format directive off
proxy.gzipProxied Defines the nginx config gzip_proxied directive off
proxy.clientMaxBodySize Defines the nginx config client_max_body_size directive 100m
proxy.workerConnections Defines the nginx config worker_connections directive 4096
proxy.secureLinkSecret Defines secret for the nginx config directive secure_link_md5 verysecretstring
proxy.infoAllowedIP Defines the IP addresses allowed to access the info page []
proxy.infoAllowedUser Defines the user name for accessing the info page. If not set, Nginx Basic Authentication will not be applied to access the info page. For more details, see here ""
proxy.infoAllowedPassword Defines the user password for accessing the info page. Used if proxy.infoAllowedUser is set password
proxy.infoAllowedSecretKeyName The name of the key that contains the info auth user password. Used if proxy.infoAllowedUser is set info-auth-password
proxy.infoAllowedExistingSecret Name of existing secret to use for the info auth password. Used if proxy.infoAllowedUser is set. Must contain the key specified in proxy.infoAllowedSecretKeyName. If set, it takes priority over proxy.infoAllowedPassword ""
proxy.welcomePage.enabled Defines whether the welcome page will be displayed true
proxy.image.repository Docservice Proxy container image repository* onlyoffice/docs-proxy-de
proxy.image.tag Docservice Proxy container image tag 8.0.1-1
proxy.image.pullPolicy Docservice Proxy container image pull policy IfNotPresent
proxy.resources.requests The requested resources for the Proxy container {}
proxy.resources.limits The resources limits for the Proxy container {}
proxy.readinessProbe.enabled Enable readinessProbe for Proxy container true
proxy.livenessProbe.enabled Enable livenessProbe for Proxy container true
proxy.startupProbe.enabled Enable startupProbe for Proxy container true
converter.annotations Defines annotations that will be additionally added to the Converter Deployment. If set, it takes priority over the commonAnnotations {}
converter.podAnnotations Map of annotations to add to the Converter deployment pods rollme: "{{ randAlphaNum 5 | quote }}"
converter.replicas Converter replicas quantity. If the converter.autoscaling.enabled parameter is enabled, it is ignored 2
converter.updateStrategy.type Converter deployment update strategy type RollingUpdate
converter.updateStrategy.rollingUpdate.maxUnavailable Maximum number of Converter Pods unavailable during the update process. Used only when converter.updateStrategy.type=RollingUpdate 25%
converter.updateStrategy.rollingUpdate.maxSurge Maximum number of Converter Pods created over the desired number of Pods. Used only when converter.updateStrategy.type=RollingUpdate 25%
converter.customPodAntiAffinity Prohibiting the scheduling of Converter Pods relative to other Pods containing the specified labels on the same node {}
converter.podAffinity Defines Pod affinity rules for Converter Pods scheduling by nodes relative to other Pods {}
converter.nodeAffinity Defines Node affinity rules for Converter Pods scheduling by nodes {}
converter.initContainers Defines containers that run before the converter container in the Converter deployment pod. For example, a container that changes the owner of the PersistentVolume []
converter.image.repository Converter container image repository* onlyoffice/docs-converter-de
converter.image.tag Converter container image tag 8.0.1-1
converter.image.pullPolicy Converter container image pull policy IfNotPresent
converter.resources.requests The requested resources for the Converter container {}
converter.resources.limits The resources limits for the Converter container {}
converter.autoscaling.enabled Enable Converter deployment autoscaling false
converter.autoscaling.annotations Defines annotations that will be additionally added to the Converter deployment HPA. If set, it takes priority over the commonAnnotations {}
converter.autoscaling.minReplicas Converter deployment autoscaling minimum number of replicas 2
converter.autoscaling.maxReplicas Converter deployment autoscaling maximum number of replicas 16
converter.autoscaling.targetCPU.enabled Enable autoscaling of converter deployment by CPU usage percentage true
converter.autoscaling.targetCPU.utilizationPercentage Converter deployment autoscaling target CPU percentage 70
converter.autoscaling.targetMemory.enabled Enable autoscaling of Converter deployment by memory usage percentage false
converter.autoscaling.targetMemory.utilizationPercentage Converter deployment autoscaling target memory percentage 70
converter.autoscaling.customMetricsType Custom, additional or external autoscaling metrics for the Converter deployment []
converter.autoscaling.behavior Configuring Converter deployment scaling behavior policies for the scaleDown and scaleUp fields {}
example.enabled Enables the installation of Example false
example.annotations Defines annotations that will be additionally added to the Example StatefulSet. If set, it takes priority over the commonAnnotations {}
example.podAnnotations Map of annotations to add to the example pod rollme: "{{ randAlphaNum 5 | quote }}"
example.updateStrategy.type Example StatefulSet update strategy type RollingUpdate
example.customPodAntiAffinity Prohibiting the scheduling of Example Pod relative to other Pods containing the specified labels on the same node {}
example.podAffinity Defines Pod affinity rules for Example Pod scheduling by nodes relative to other Pods {}
example.nodeAffinity Defines Node affinity rules for Example Pod scheduling by nodes {}
example.image.repository Example container image name onlyoffice/docs-example
example.image.tag Example container image tag 8.0.1-1
example.image.pullPolicy Example container image pull policy IfNotPresent
example.dsUrl ONLYOFFICE Docs external address. It should be changed only if it is necessary to check the operation of the conversion in Example (e.g. http://<documentserver-address>/) /
example.resources.requests The requested resources for the Example container {}
example.resources.limits The resources limits for the Example container {}
example.extraConf.configMap The name of the ConfigMap containing the json file that overrides the default values. See an example of creation here ""
example.extraConf.filename The name of the json file that contains custom values. Must be the same as the key name in example.extraConf.configMap local.json
jwt.enabled Enables JSON Web Token validation by ONLYOFFICE Docs. Common for inbox and outbox requests true
jwt.secret Defines the secret key to validate the JSON Web Token in the request to the ONLYOFFICE Docs. Common for inbox and outbox requests MYSECRET
jwt.header Defines the http header that will be used to send the JSON Web Token. Common for inbox and outbox requests Authorization
jwt.inBody Enables token validation in the request body to ONLYOFFICE Docs false
jwt.inbox JSON Web Token validation parameters for inbox requests only. If not specified, the values of the parameters of the common jwt are used {}
jwt.outbox JSON Web Token validation parameters for outbox requests only. If not specified, the values of the parameters of the common jwt are used {}
jwt.existingSecret The name of an existing secret containing variables for jwt. If not specified, a secret named jwt will be created ""
service.existing The name of an existing service for ONLYOFFICE Docs. If not specified, a service named documentserver will be created ""
service.annotations Map of annotations to add to the ONLYOFFICE Docs service. If set, it takes priority over the commonAnnotations {}
service.type ONLYOFFICE Docs service type ClusterIP
service.port ONLYOFFICE Docs service port 8888
service.sessionAffinity Session Affinity for ONLYOFFICE Docs service. If not set, None will be set as the default value ""
service.sessionAffinityConfig Configuration for ONLYOFFICE Docs service Session Affinity. Used if the service.sessionAffinity is set {}
ingress.enabled Enable the creation of an ingress for the ONLYOFFICE Docs false
ingress.annotations Map of annotations to add to the Ingress. If set, it takes priority over the commonAnnotations nginx.ingress.kubernetes.io/proxy-body-size: 100m
ingress.ingressClassName Used to reference the IngressClass that should be used to implement this Ingress nginx
ingress.host Ingress hostname for the ONLYOFFICE Docs ingress ""
ingress.ssl.enabled Enable ssl for the ONLYOFFICE Docs ingress false
ingress.ssl.secret Secret name for ssl to mount into the Ingress tls
ingress.path Specifies the path where ONLYOFFICE Docs will be available /
grafana.enabled Enable the installation of resources required for the visualization of metrics in Grafana false
grafana.namespace The name of the namespace in which RBAC components and Grafana resources will be deployed. If not set, the name will be taken from namespaceOverride if set, or .Release.Namespace ""
grafana.ingress.enabled Enable the creation of an ingress for the Grafana. Used if you set grafana.enabled to true and want to use Nginx Ingress to access Grafana false
grafana.ingress.annotations Map of annotations to add to the Grafana Ingress. If set, it takes priority over the commonAnnotations nginx.ingress.kubernetes.io/proxy-body-size: 100m
grafana.dashboard.enabled Enable the installation of ready-made Grafana dashboards. Used if you set grafana.enabled to true false
podSecurityContext.enabled Enable security context for the pods false
podSecurityContext.converter.runAsUser User ID for the Converter pods 101
podSecurityContext.converter.runAsGroup Group ID for the Converter pods 101
podSecurityContext.docservice.runAsUser User ID for the Docservice pods 101
podSecurityContext.docservice.runAsGroup Group ID for the Docservice pods 101
podSecurityContext.jobs.runAsUser User ID for pods created by jobs 101
podSecurityContext.jobs.runAsGroup Group ID for pods created by jobs 101
podSecurityContext.example.runAsUser User ID for the Example pod 1001
podSecurityContext.example.runAsGroup Group ID for the Example pod 1001
podSecurityContext.tests.runAsUser User ID for the Test pod 0
podSecurityContext.tests.runAsGroup Group ID for the Test pod 0
webProxy.enabled Specifies whether a Web proxy is used in your network for the Pods of the k8s cluster to access the Internet false
webProxy.http Web Proxy address for HTTP traffic http://proxy.example.com
webProxy.https Web Proxy address for HTTPS traffic https://proxy.example.com
webProxy.noProxy Patterns for IP addresses or k8s services name or domain names that shouldn’t use the Web Proxy localhost,127.0.0.1,docservice
privateCluster Specify whether the k8s cluster is used in a private network without internet access false
upgrade.job.enabled Enable the execution of the pre-upgrade job before upgrading ONLYOFFICE Docs true
upgrade.job.annotations Defines annotations that will be additionally added to the pre-upgrade Job. If set, it takes priority over the commonAnnotations {}
upgrade.job.image.repository Pre-upgrade job image repository onlyoffice/docs-utils
upgrade.job.image.tag Pre-upgrade job image tag 8.0.1-1
upgrade.job.image.pullPolicy Pre-upgrade job image pull policy IfNotPresent
upgrade.job.resources.requests The requested resources for the pre-upgrade job container {}
upgrade.job.resources.limits The resources limits for the pre-upgrade job container {}
upgrade.existingConfigmap.tblRemove.name The name of the existing ConfigMap that contains the sql file for deleting tables from the database remove-db-scripts
upgrade.existingConfigmap.tblRemove.keyName The name of the sql file containing instructions for deleting tables from the database. Must be the same as the key name in upgrade.existingConfigmap.tblRemove.name removetbl.sql
upgrade.existingConfigmap.tblCreate.name The name of the existing ConfigMap that contains the sql file for creating tables in the database init-db-scripts
upgrade.existingConfigmap.tblCreate.keyName The name of the sql file containing instructions for creating tables in the database. Must be the same as the key name in upgrade.existingConfigmap.tblCreate.name createdb.sql
upgrade.existingConfigmap.dsStop The name of the existing ConfigMap that contains the ONLYOFFICE Docs upgrade script. If set, the four previous parameters are ignored. Must contain a key stop.sh ""
rollback.job.enabled Enable the execution of the pre-rollback job before rolling back ONLYOFFICE Docs true
rollback.job.annotations Defines annotations that will be additionally added to the pre-rollback Job. If set, it takes priority over the commonAnnotations {}
rollback.job.image.repository Pre-rollback job image repository onlyoffice/docs-utils
rollback.job.image.tag Pre-rollback job image tag 8.0.1-1
rollback.job.image.pullPolicy Pre-rollback job image pull policy IfNotPresent
rollback.job.resources.requests The requested resources for the pre-rollback job container {}
rollback.job.resources.limits The resources limits for the pre-rollback job container {}
rollback.existingConfigmap.tblRemove.name The name of the existing ConfigMap that contains the sql file for deleting tables from the database remove-db-scripts
rollback.existingConfigmap.tblRemove.keyName The name of the sql file containing instructions for deleting tables from the database. Must be the same as the key name in rollback.existingConfigmap.tblRemove.name removetbl.sql
rollback.existingConfigmap.tblCreate.name The name of the existing ConfigMap that contains the sql file for creating tables in the database init-db-scripts
rollback.existingConfigmap.tblCreate.keyName The name of the sql file containing instructions for creating tables in the database. Must be the same as the key name in rollback.existingConfigmap.tblCreate.name createdb.sql
rollback.existingConfigmap.dsStop The name of the existing ConfigMap that contains the ONLYOFFICE Docs rollback script. If set, the four previous parameters are ignored. Must contain a key stop.sh ""
delete.job.enabled Enable the execution of the pre-delete job before deleting ONLYOFFICE Docs true
delete.job.annotations Defines annotations that will be additionally added to the pre-delete Job. If set, it takes priority over the commonAnnotations {}
delete.job.image.repository Pre-delete job image repository onlyoffice/docs-utils
delete.job.image.tag Pre-delete job image tag 8.0.1-1
delete.job.image.pullPolicy Pre-delete job image pull policy IfNotPresent
delete.job.resources.requests The requested resources for the pre-delete job container {}
delete.job.resources.limits The resources limits for the pre-delete job container {}
delete.existingConfigmap.tblRemove.name The name of the existing ConfigMap that contains the sql file for deleting tables from the database remove-db-scripts
delete.existingConfigmap.tblRemove.keyName The name of the sql file containing instructions for deleting tables from the database. Must be the same as the key name in delete.existingConfigmap.tblRemove.name removetbl.sql
delete.existingConfigmap.dsStop The name of the existing ConfigMap that contains the ONLYOFFICE Docs delete script. If set, the two previous parameters are ignored. Must contain a key stop.sh ""
install.job.enabled Enable the execution of the pre-install job before installing ONLYOFFICE Docs true
install.job.annotations Defines annotations that will be additionally added to the pre-install Job. If set, it takes priority over the commonAnnotations {}
install.job.image.repository Pre-install job image repository onlyoffice/docs-utils
install.job.image.tag Pre-install job image tag 8.0.1-1
install.job.image.pullPolicy Pre-install job image pull policy IfNotPresent
install.job.resources.requests The requested resources for the pre-install job container {}
install.job.resources.limits The resources limits for the pre-install job container {}
install.existingConfigmap.tblCreate.name The name of the existing ConfigMap that contains the sql file for creating tables in the database init-db-scripts
install.existingConfigmap.tblCreate.keyName The name of the sql file containing instructions for creating tables in the database. Must be the same as the key name in install.existingConfigmap.tblCreate.name createdb.sql
install.existingConfigmap.initdb The name of the existing ConfigMap that contains the initdb script. If set, the two previous parameters are ignored. Must contain a key initdb.sh ""
grafanaDashboard.job.annotations Defines annotations that will be additionally added to the Grafana Dashboard Job. If set, it takes priority over the commonAnnotations {}
grafanaDashboard.job.image.repository Grafana Dashboard job image repository onlyoffice/docs-utils
grafanaDashboard.job.image.tag Grafana Dashboard job image tag 8.0.1-1
grafanaDashboard.job.image.pullPolicy Grafana Dashboard job image pull policy IfNotPresent
grafanaDashboard.job.resources.requests The requested resources for the Grafana Dashboard job container {}
grafanaDashboard.job.resources.limits The resources limits for the Grafana Dashboard job container {}
tests.enabled Enable the creation of the resources necessary for testing the ONLYOFFICE Docs launch and the availability of connected dependencies. These resources will be used when running the helm test command true
tests.annotations Defines annotations that will be additionally added to the Test Pod. If set, it takes priority over the commonAnnotations {}
tests.image.repository Test container image name onlyoffice/docs-utils
tests.image.tag Test container image tag 8.0.1-1
tests.image.pullPolicy Test container image pull policy IfNotPresent
tests.resources.requests The requested resources for the test container {}
tests.resources.limits The resources limits for the test container {}
  • *Note: The -de suffix specified in the value of the image repository indicates the solution type. Possible options:

    • Nothing is specified. For the open-source community version
    • -de. For commercial Developer Edition
    • -ee. For commercial Enterprise Edition

    If you use the community version, there may be problems with co-editing documents.

    The default value of this parameter refers to the ONLYOFFICE Document Server Developer Edition. To learn more about this edition and compare it with other editions, please see the comparison table on this page.

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install documentserver onlyoffice/docs --set ingress.enabled=true,ingress.ssl.enabled=true,ingress.host=example.com

This command exposes ONLYOFFICE Docs via HTTPS.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install documentserver -f values.yaml onlyoffice/docs

Tip: You can use the default values.yaml file.

5. Configuration and installation details

5.1 Example deployment (optional)

To deploy the example, set the example.enabled parameter to true:

$ helm install documentserver onlyoffice/docs --set example.enabled=true

5.2 Metrics deployment (optional)

To deploy metrics, set metrics.enabled to true:

$ helm install documentserver onlyoffice/docs --set metrics.enabled=true

If you want to use Grafana to visualize metrics, set grafana.enabled to true. If you want to use Nginx Ingress to access Grafana, set grafana.ingress.enabled to true:

$ helm install documentserver onlyoffice/docs --set grafana.enabled=true --set grafana.ingress.enabled=true

5.3 Expose ONLYOFFICE Docs

5.3.1 Expose ONLYOFFICE Docs via Service (HTTP Only)

You should skip step #5.3.1 if you are going to expose ONLYOFFICE Docs via HTTPS.

This type of exposure has the lowest performance overhead; it creates a load balancer to get access to ONLYOFFICE Docs. Use this type of exposure if you use external TLS termination and don't have other web applications in the k8s cluster.

To expose ONLYOFFICE Docs via service, set the service.type parameter to LoadBalancer:

$ helm install documentserver onlyoffice/docs --set service.type=LoadBalancer,service.port=80

Run the following command to get the documentserver service IP:

$ kubectl get service documentserver -o jsonpath="{.status.loadBalancer.ingress[*].ip}"

After that, ONLYOFFICE Docs will be available at http://DOCUMENTSERVER-SERVICE-IP/.

If the service IP is empty, try getting the documentserver service hostname:

$ kubectl get service documentserver -o jsonpath="{.status.loadBalancer.ingress[*].hostname}"

In this case, ONLYOFFICE Docs will be available at http://DOCUMENTSERVER-SERVICE-HOSTNAME/.

5.3.2 Expose ONLYOFFICE Docs via Ingress

5.3.2.1 Installing the Kubernetes Nginx Ingress Controller

To install the Nginx Ingress Controller to your cluster, run the following command:

$ helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true,controller.replicaCount=2

Note: To install Nginx Ingress with the same parameters and to enable exposing ingress-nginx metrics to be gathered by Prometheus, run the following command:

$ helm install nginx-ingress -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/ingress_values.yaml ingress-nginx/ingress-nginx

See more details about installing Nginx Ingress via Helm here.

5.3.2.2 Expose ONLYOFFICE Docs via HTTP

You should skip step #5.3.2.2 if you are going to expose ONLYOFFICE Docs via HTTPS.

This type of exposure has more performance overhead than exposure via service, and it also creates a load balancer to get access to ONLYOFFICE Docs. Use this type if you use external TLS termination and have several web applications in the k8s cluster: you can use one set of ingress instances and one load balancer for all of them. This can optimize the entry point performance and reduce your cluster costs, because providers can charge a fee for each load balancer.

To expose ONLYOFFICE Docs via ingress HTTP, set the ingress.enabled parameter to true:

$ helm install documentserver onlyoffice/docs --set ingress.enabled=true

Run the following command to get the documentserver ingress IP:

$ kubectl get ingress documentserver -o jsonpath="{.status.loadBalancer.ingress[*].ip}"

After that, ONLYOFFICE Docs will be available at http://DOCUMENTSERVER-INGRESS-IP/.

If the ingress IP is empty, try getting the documentserver ingress hostname:

$ kubectl get ingress documentserver -o jsonpath="{.status.loadBalancer.ingress[*].hostname}"

In this case, ONLYOFFICE Docs will be available at http://DOCUMENTSERVER-INGRESS-HOSTNAME/.

5.3.2.3 Expose ONLYOFFICE Docs via HTTPS

This type of exposure allows you to enable internal TLS termination for ONLYOFFICE Docs.

Create the tls secret with an ssl certificate inside.

Put the ssl certificate and the private key into the tls.crt and tls.key files and then run:

$ kubectl create secret generic tls \
  --from-file=./tls.crt \
  --from-file=./tls.key
$ helm install documentserver onlyoffice/docs --set ingress.enabled=true,ingress.ssl.enabled=true,ingress.host=example.com

Run the following command to get the documentserver ingress IP:

$ kubectl get ingress documentserver -o jsonpath="{.status.loadBalancer.ingress[*].ip}"

If the ingress IP is empty, try getting the documentserver ingress hostname:

$ kubectl get ingress documentserver -o jsonpath="{.status.loadBalancer.ingress[*].hostname}"

Associate the documentserver ingress IP or hostname with your domain name through your DNS provider.

After that, ONLYOFFICE Docs will be available at https://your-domain-name/.
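
To quickly verify that the deployment responds, you can query the standard ONLYOFFICE Docs healthcheck endpoint, which should return true (assuming the default path):

$ curl -k https://your-domain-name/healthcheck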

6. Scale ONLYOFFICE Docs (optional)

This step is optional. You can skip step 6 entirely if you want to use default deployment settings.

6.1 Horizontal Pod Autoscaling

You can enable Autoscaling so that the number of replicas of docservice and converter deployments is calculated automatically based on the values and type of metrics.

For resource metrics, API metrics.k8s.io must be registered, which is generally provided by metrics-server. It can be launched as a cluster add-on.

To use the target utilization value (target.type==Utilization), it is necessary that the values for resources.requests are specified in the deployment.

For more information about Horizontal Pod Autoscaling, see here.

To enable HPA for the docservice deployment, specify the docservice.autoscaling.enabled=true parameter. In this case, the docservice.replicas parameter is ignored and the number of replicas is controlled by HPA.

Similarly, to enable HPA for the converter deployment, specify the converter.autoscaling.enabled=true parameter. In this case, the converter.replicas parameter is ignored and the number of replicas is controlled by HPA.

With the autoscaling.enabled parameter enabled, by default Autoscaling will adjust the number of replicas based on the average percentage of CPU Utilization. For other configurable Autoscaling parameters, see the Parameters table.
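
For example, a sketch that enables HPA for the docservice deployment and sets the resource requests needed for Utilization-based targets (the request values are illustrative, not recommendations):

$ helm upgrade --install documentserver onlyoffice/docs \
  --set docservice.autoscaling.enabled=true \
  --set docservice.resources.requests.cpu=100m \
  --set docservice.resources.requests.memory=256Mi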

6.2 Manual scaling

The docservice and converter deployments consist of 2 pods each by default.

To scale the docservice deployment, use the following command:

$ kubectl scale -n default deployment docservice --replicas=POD_COUNT

where POD_COUNT is the number of the docservice pods.

Do the same to scale the converter deployment:

$ kubectl scale -n default deployment converter --replicas=POD_COUNT

7. Update ONLYOFFICE Docs

When updating, it's necessary to set the parameters that should change. For example,

$ helm upgrade documentserver onlyoffice/docs \
  --set docservice.image.tag=[version]

Note: You also need to specify the parameters that were set during installation.

Or modify the values.yaml file and run the command:

$ helm upgrade documentserver -f values.yaml onlyoffice/docs

Running the helm upgrade command runs a hook that shuts down the ONLYOFFICE Docs and cleans up the database. This is needed when updating the version of ONLYOFFICE Docs. The default hook execution time is 300s. The execution time can be changed using --timeout [time], for example:

$ helm upgrade documentserver -f values.yaml onlyoffice/docs --timeout 15m

Note: When upgrading ONLYOFFICE Docs in a private k8s cluster behind a Web proxy or with no internet access, see the notes below.

If you want to update any parameter other than the version of the ONLYOFFICE Docs, then run the helm upgrade command without hooks, for example:

$ helm upgrade documentserver onlyoffice/docs --set jwt.enabled=false --no-hooks

To rollback updates, run the following command:

$ helm rollback documentserver

Note: When rolling back ONLYOFFICE Docs in a private k8s cluster behind a Web proxy or with no internet access, see the notes below.

8. Shutdown ONLYOFFICE Docs (optional)

To perform the shutdown, run the following command:

$ kubectl apply -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/shutdown-ds.yaml -n <NAMESPACE>

Where:

  • <NAMESPACE> - Namespace where ONLYOFFICE Docs is installed. If not specified, the default value will be used: default.

For example:

$ kubectl apply -f https://raw.githubusercontent.com/ONLYOFFICE/Kubernetes-Docs/master/sources/shutdown-ds.yaml -n onlyoffice

After the shutdown-ds Pod created by the Job has completed successfully, delete the Job with the following command:

$ kubectl delete job shutdown-ds -n <NAMESPACE>

If you need to start ONLYOFFICE Docs again after stopping it, restart the docservice and converter pods. For example, using the following command:

$ kubectl delete pod converter-*** docservice-*** -n <NAMESPACE>
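
Since the pod names include generated suffixes, one way to restart both deployments in a single command is to list and delete the matching pods (a sketch):

$ kubectl delete -n <NAMESPACE> $(kubectl get pods -n <NAMESPACE> -o name | grep -E 'docservice|converter')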

9. Update ONLYOFFICE Docs license (optional)

In order to update the license, you need to perform the following steps:

  • Place the license.lic file containing the new key in some directory
  • Run the following commands:
$ kubectl delete secret license -n <NAMESPACE>
$ kubectl create secret generic license --from-file=path/to/license.lic -n <NAMESPACE>
  • Restart docservice and converter pods. For example, using the following command:
$ kubectl delete pod converter-*** docservice-*** -n <NAMESPACE>

10. ONLYOFFICE Docs installation test (optional)

You can test ONLYOFFICE Docs availability and access to connected dependencies by running the following command:

$ helm test documentserver -n <NAMESPACE>

The output should have the following line:

Phase: Succeeded

To view the log of the Pod running as a result of the helm test command, run the following command:

$ kubectl logs -f test-ds -n <NAMESPACE>

The ONLYOFFICE Docs availability check has the highest priority, so if it fails with an error, the whole test is considered failed.

After this, you can delete the test-ds Pod by running the following command:

$ kubectl delete pod test-ds -n <NAMESPACE>

Note: This testing is for informational purposes only and cannot guarantee 100% availability results. It may be that even though all checks are completed successfully, an error occurs in the application. In this case, more detailed information can be found in the application logs.

11. Run Jobs in a private k8s cluster (optional)

When running the Jobs for installation, update, rollback and deletion, the container being launched needs Internet access to download the latest sql scripts. If container access to the external network is prohibited in your k8s cluster, you can perform these Jobs by setting the privateCluster=true parameter and manually creating ConfigMaps with the necessary sql scripts.

To do this, run the following commands:

If your cluster already has the remove-db-scripts and init-db-scripts configmaps, delete them:

$ kubectl delete cm remove-db-scripts init-db-scripts

Download the ONLYOFFICE Docs database scripts for cleaning the database and creating the database tables:

If PostgreSQL is selected as the database server:

$ wget -O removetbl.sql https://raw.githubusercontent.com/ONLYOFFICE/server/master/schema/postgresql/removetbl.sql
$ wget -O createdb.sql https://raw.githubusercontent.com/ONLYOFFICE/server/master/schema/postgresql/createdb.sql

If MySQL is selected as the database server:

$ wget -O removetbl.sql https://raw.githubusercontent.com/ONLYOFFICE/server/master/schema/mysql/removetbl.sql
$ wget -O createdb.sql https://raw.githubusercontent.com/ONLYOFFICE/server/master/schema/mysql/createdb.sql

Create configmaps from them:

$ kubectl create configmap remove-db-scripts --from-file=./removetbl.sql
$ kubectl create configmap init-db-scripts --from-file=./createdb.sql

Note: If you specified a different name for the ConfigMap and for the file from which it is created, set the appropriate parameters for the corresponding Jobs (see the example after this list):

  • existingConfigmap.tblRemove.name and existingConfigmap.tblRemove.keyName for scripts for database cleaning
  • existingConfigmap.tblCreate.name and existingConfigmap.tblCreate.keyName for scripts for database tables creating
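
For example, a sketch for the pre-upgrade Job with custom names (my-remove-scripts and my-scripts.sql are hypothetical names used only for illustration):

$ kubectl create configmap my-remove-scripts --from-file=my-scripts.sql=./removetbl.sql
$ helm upgrade documentserver onlyoffice/docs \
  --set privateCluster=true \
  --set upgrade.existingConfigmap.tblRemove.name=my-remove-scripts \
  --set upgrade.existingConfigmap.tblRemove.keyName=my-scripts.sql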

Next, when executing the helm install|upgrade|rollback|delete commands, set the parameter privateCluster=true.

Note: If a Web Proxy can be used in your network to give the Pod containers access to the Internet, you can leave privateCluster=false and skip manually creating the configmaps with sql scripts; instead, set the parameter webProxy.enabled=true along with the appropriate Web Proxy parameters.
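
For example, a sketch with the Web Proxy parameters set at install time (the proxy addresses are illustrative; note that commas inside a --set value must be escaped):

$ helm install documentserver onlyoffice/docs \
  --set webProxy.enabled=true \
  --set webProxy.http=http://proxy.example.com \
  --set webProxy.https=https://proxy.example.com \
  --set webProxy.noProxy="localhost\,127.0.0.1\,docservice"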

12. Access to the info page (optional)

Access to the /info page is limited by default. In order to allow access to it, you need to specify the IP addresses or subnets (which will be the Proxy container's clients in this case) using the proxy.infoAllowedIP parameter. Taking into consideration the specifics of Kubernetes networking, it is possible to get the original IP of the user (the Proxy client), though this is not a standard scenario. Generally, the Pod / Node / Load Balancer addresses will actually be the clients, so these addresses are to be used; in this case, the access to the info page will be available to everyone. You can further limit access to the info page using Nginx Basic Authentication: turn it on by setting the proxy.infoAllowedUser parameter and set the password using the proxy.infoAllowedPassword parameter; alternatively, you can use an existing secret with a password by setting its name in the proxy.infoAllowedExistingSecret parameter.
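
For example, a sketch that allows a subnet and additionally turns on Basic Authentication (the subnet, user name, and password are illustrative):

$ helm install documentserver onlyoffice/docs \
  --set "proxy.infoAllowedIP[0]=10.244.0.0/16" \
  --set proxy.infoAllowedUser=admin \
  --set proxy.infoAllowedPassword=MYPASSWORD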

Using Grafana to visualize metrics (optional)

This step is optional. You can skip this section if you don't want to install Grafana.

1. Deploy Grafana

Note: It is assumed that step #6.2 has already been completed.

1.1 Deploy Grafana without installing ready-made dashboards

You should skip step #1.1 if you want to deploy Grafana with the installation of ready-made dashboards.

To install Grafana to your cluster, run the following command:

$ helm install grafana bitnami/grafana \
  --set service.ports.grafana=80 \
  --set config.useGrafanaIniFile=true \
  --set config.grafanaIniConfigMap=grafana-ini \
  --set datasources.secretName=grafana-datasource

1.2 Deploy Grafana with the installation of ready-made dashboards

1.2.1 Installing ready-made Grafana dashboards

To install ready-made Grafana dashboards, set the grafana.enabled and grafana.dashboard.enabled parameters to true. If ONLYOFFICE Docs is already installed, run the helm upgrade documentserver onlyoffice/docs --set grafana.enabled=true --set grafana.dashboard.enabled=true --no-hooks command, or helm upgrade documentserver -f ./values.yaml onlyoffice/docs --no-hooks if the parameters are specified in the values.yaml file. As a result, ready-made dashboards in the JSON format will be downloaded from the Grafana website, the necessary edits will be made to them, and configmaps will be created from them. A dashboard will also be added to visualize metrics coming from ONLYOFFICE Docs (it is assumed that step #6 has already been completed).

1.2.2 Installing Grafana

To install Grafana to your cluster, run the following command:

$ helm install grafana bitnami/grafana \
  --set service.ports.grafana=80 \
  --set config.useGrafanaIniFile=true \
  --set config.grafanaIniConfigMap=grafana-ini \
  --set datasources.secretName=grafana-datasource \
  --set dashboardsProvider.enabled=true \
  --set dashboardsConfigMaps[0].configMapName=dashboard-node-exporter \
  --set dashboardsConfigMaps[0].fileName=dashboard-node-exporter.json \
  --set dashboardsConfigMaps[1].configMapName=dashboard-deployment \
  --set dashboardsConfigMaps[1].fileName=dashboard-deployment.json \
  --set dashboardsConfigMaps[2].configMapName=dashboard-redis \
  --set dashboardsConfigMaps[2].fileName=dashboard-redis.json \
  --set dashboardsConfigMaps[3].configMapName=dashboard-rabbitmq \
  --set dashboardsConfigMaps[3].fileName=dashboard-rabbitmq.json \
  --set dashboardsConfigMaps[4].configMapName=dashboard-postgresql \
  --set dashboardsConfigMaps[4].fileName=dashboard-postgresql.json \
  --set dashboardsConfigMaps[5].configMapName=dashboard-nginx-ingress \
  --set dashboardsConfigMaps[5].fileName=dashboard-nginx-ingress.json \
  --set dashboardsConfigMaps[6].configMapName=dashboard-documentserver \
  --set dashboardsConfigMaps[6].fileName=dashboard-documentserver.json \
  --set dashboardsConfigMaps[7].configMapName=dashboard-cluster-resourses \
  --set dashboardsConfigMaps[7].fileName=dashboard-cluster-resourses.json

After executing this command, the following dashboards will be imported into Grafana:

  • Node Exporter
  • Deployment Statefulset Daemonset
  • Redis Dashboard for Prometheus Redis Exporter
  • RabbitMQ-Overview
  • PostgreSQL Database
  • NGINX Ingress controller
  • ONLYOFFICE Docs
  • Resource usage by Pods and Containers

Note: You can see the description of the ONLYOFFICE Docs metrics that are visualized in Grafana here.

See more details about installing Grafana via Helm here.

2. Access to Grafana via Ingress

Note: It is assumed that step #5.3.2.1 has already been completed.

If ONLYOFFICE Docs was installed with the grafana.ingress.enabled parameter (step #5.2), Grafana will be available at: http://INGRESS-ADDRESS/grafana/

If Ingress was installed using a secure connection (step #5.3.2.3), Grafana will be available at: https://your-domain-name/grafana/

3. View gathered metrics in Grafana

Go to the address http(s)://your-domain-name/grafana/

Login: admin

To get the password, run the following command:

$ kubectl get secret grafana-admin --namespace default -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 --decode

In the dashboard section, you will see the added dashboards that will display the metrics received from Prometheus.

kubernetes-docs's People

Contributors

agolybev, alesuiss, cyger, danilapog, jpxd, kireevdmitry, shockwavenn, svetlana81, t0rtila, vyacheslavsemin, xrtrx


kubernetes-docs's Issues

Unknown error while downloading rtf -> docx -> docx

Steps to reproduce:

  1. Upload the attached rtf to ONLYOFFICE Community, connected to kube-documentserver
  2. ONLYOFFICE Community asks to convert this file to docx; agree
  3. Open the resulting docx in DocumentEditor
  4. Save As -> Docx from DocumentEditor

(screenshot)

The problem is not reproducible on a CentOS-based machine with the rpm DocumentServer via the DocumentServer test example.

01068.zip

Different list of files on test example

Started the example service and randomly created some files.

It seems that on each refresh of the example page I get one of three random states:

  1. one docx file
  2. one docx, one xlsx file
  3. no files at all

`WARNING: This chart is deprecated` for several dependencies

Commands like

helm install nfs-server stable/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.storageClass=PERSISTENT_STORAGE_CLASS \
  --set persistence.size=PERSISTENT_SIZE
helm install rabbitmq stable/rabbitmq
helm install redis stable/redis \
  --set cluster.enabled=false \
  --set usePassword=false
helm install postgresql stable/postgresql \
  --set initdbScriptsConfigMap=init-db-scripts \
  --set postgresqlDatabase=postgres \
  --set persistence.size=8Gi

show the warning "WARNING: This chart is deprecated".

Not sure how critical this is, but sometime in the future we may lose those dependencies.

Maybe related to #11

No auto-scaling support

If some pod is under heavy performance load, currently the only way to add more power is to manually spin up more nodes.

Autoscaling support is needed; a sketch of what this could look like follows.
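
For illustration, a minimal HorizontalPodAutoscaler sketch of what such support could look like, assuming a Deployment named docservice (names and thresholds are placeholders, not current chart options):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: docservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: docservice   # assumed deployment name
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70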

permission error on proxy image

15#15: *26 open() "/var/www/onlyoffice/documentserver/sdkjs/common/Images/content_controls/[email protected]" failed (13: Permission denied),

This is what the proxy images currently return; is it possible that they have been wrongly built? Because those are static files in the container itself, but they are owned by the ds user and not by the nginx user running the whole thing. It seems like this image wasn't tested.

extraConf.configMap for pgPoolExtraOptions

I am deploying onlyoffice in kubernetes using argo-cd.
While this is working fine in a lab environment where the external postgresql server accepts unencrypted connections, it is not working in prod env where the hosted postgresql only accepts tls connections.

So when I try to deploy, the docservice containers do not come up, and I always see this error:
[2023-06-12T10:12:22.336] [WARN] [docId] [userId] nodeJS - sqlQuery error sqlCommand: SELECT column_name FROM information_schema.COLUMNS: error: pg_hba.conf rejects connection for host "10.7.226.14", user "onlyoffice", database "onlyoffice", no encryption

The information I found says that I have to set ssl in pgPoolExtraOptions to true, and I can confirm with a small nodejs test script that with ssl set to false I get the same error there, while with ssl set to true it works.

So I created a configmap accordingly that ends up in the docservice container like this:

sh-4.2$ cat /etc/onlyoffice/documentserver/local.json
{ "sql": { "pgPoolExtraOptions": { "ssl": true } } }

and added this to the values passed by argo-cd to helm:

values: |
  extraConf:
    configMap: local-config

But when I try to deploy I still get the same error. What is the correct format for this configmap?

ingress annotations does not handle multiline

In values.yaml, you cannot use multiline annotations, as they fail YAML syntax. This is required, for example, with the HAProxy ingress controller to add multiple headers.

ingress:
  enabled: true
  annotations:
    haproxy.org/response-set-header: |
      X-Content-Type-Options "nosniff"
      Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
Error: Failed to render chart: exit status 1: Error: YAML parse error on docs/templates/ingresses/documentserver.yaml: error converting YAML to JSON: yaml: line 12: could not find expected ':'

The syntax error is here: https://github.com/ONLYOFFICE/Kubernetes-Docs/blob/master/templates/ingresses/documentserver.yaml#L12
I guess the correct template should be (to be tested):

  annotations:
    {{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{- $value | toYaml | nindent 4 }}
    {{- end }}

Source: https://stackoverflow.com/questions/50951124/multiline-string-to-a-variable-in-a-helm-template

Fails to install when deploying with ArgoCD

Due to the way that ArgoCD converts Helm hooks, this chart fails to deploy in ArgoCD. In the chart on this page, one can see the helm.sh/hook: pre-upgrade get converted to an argocd.argoproj.io/hook: PreSync, which gets executed on every sync. ArgoCD under the hood does not use helm in the way one would think, and it's likely that other GitOps tools will have similar issues. This causes the pre-upgrade job to be executed on first run before the PVC it needs is created. It gets stuck in a progressing state and will not complete unless you selectively sync everything except the two applications.

(screenshot)
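
One workaround pattern sometimes suggested for this class of problem (a hedged sketch, not verified against this chart) is to override the hook annotation on the affected job template so that ArgoCD skips the resource entirely, Skip being a valid ArgoCD hook type:

metadata:
  annotations:
    # hypothetical override: tells ArgoCD not to apply this resource as a hook
    argocd.argoproj.io/hook: Skip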

onlyoffice <=> nextcloud

Hi all,

We have an instance with 2 docservice replicas and 2 nextcloud replicas (everything is deployed with the onlyoffice and nextcloud helm charts).
onlyoffice becomes unhealthy after some delay and no editing can continue.

Two extracts from the onlyoffice logs:

nodeJS - sendServerRequest error: url = https://box.mydomain.com/apps/onlyoffice/track?doc=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ…;data = {"key":"1743264896","status":3,"users":["oc992s7o213w_*.*@*.*.fr"],"actions":[{"type":0,"userid":"oc992s7o213w_*.*@*.*.fr"}],"lastsave":"2024-05-07T11:24:40.000Z","notmodified":false,"token":"eyJhbGci…"} Error: Error response: statusCode:400; headers:{"server":"nginx/1.25.4","date":"Tue, 07 May 2024 12:26:57 GMT","content-type":"application/json; charset=utf-8","content-length":"27","set-cookie":["oc_sessionPassphrase=LA4Q9…; path=/; secure; HttpOnly; SameSite=Lax","__Host-nc_sameSiteCookielax=true; path=/; httponly;secure; expires=Fri, 31-Dec-2100 23:59:59 GMT; SameSite=lax","__Host-nc_sameSiteCookiestrict=true; path=/; httponly;secure; expires=Fri, 31-Dec-2100 23:59:59 GMT; SameSite=strict","oc992s7o213w=2113bb256dd0…; path=/; secure; HttpOnly; SameSite=Lax","da2677c5e2708…=1fa0282a5b1b207…; path=/; HttpOnly; Secure; SameSite=None"],"expires":"Thu, 19 Nov 1981 08:52:00 GMT","pragma":"no-cache","x-request-id":"hvducjxZmavEXHF1QTM8","cache-control":"no-cache, no-store, must-revalidate","content-security-policy":"default-src 'none';base-uri 'none';manifest-src 'self';frame-ancestors 'none'","feature-policy":"autoplay 'none';camera 'none';fullscreen 'none';geolocation 'none';microphone 'none';payment 'none'","x-robots-tag":"noindex, nofollow, noindex, nofollow","referrer-policy":"no-referrer","x-content-type-options":"nosniff","x-download-options":"noopen","x-frame-options":"SAMEORIGIN","x-permitted-cross-domain-policies":"none","x-xss-protection":"1; mode=block","connection":"close"}; body:
…
sqlQuery error sqlCommand: INSERT INTO task_result (tenant, id, status, statu: error: duplicate key value violates unique constraint "task_result_pkey"
…

It worked well before with one replica on each side.

Any ideas?

Regards

Add resources for Job templates

Hello,

Our OpenShift cluster rejects anything that doesn't have resources.limits.memory set; currently, that means the install/upgrade/rollback/delete jobs don't work for us. It would be nice to add the option to specify resources for the jobs; a rough sketch follows.
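
For illustration, the requested option might look something like this in values.yaml (the jobs.resources key is hypothetical, not an existing chart parameter):

jobs:
  resources:
    requests:
      memory: 128Mi
    limits:
      memory: 256Mi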

Cheers,
Arthur

Starting service in minikube failing

Using the default storage class, according to the output:

# kubectl get sc
standard (default)   k8s.io/minikube-hostpath                          Delete          Immediate           false                  25m

Executed all the commands and got this result:
(screenshot)
Containers hang on ContainerCreating.

It seems the nfs mount is the reason:

# kubectl describe pods converter-79c7ff8c89-8lnpv
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/30066470-e9bc-4cbc-86db-93581f571b21/volumes/kubernetes.io~nfs/pvc-66216c1b-f7f5-4afc-8d23-904f02ae843b --scope -- mount -t nfs -o vers=3 10.106.252.253:/export/pvc-66216c1b-f7f5-4afc-8d23-904f02ae843b /var/lib/kubelet/pods/30066470-e9bc-4cbc-86db-93581f571b21/volumes/kubernetes.io~nfs/pvc-66216c1b-f7f5-4afc-8d23-904f02ae843b
Output: Running scope as unit: run-r88e121c8cd7b45f59cfdda7eab24b50b.scope
mount: /var/lib/kubelet/pods/30066470-e9bc-4cbc-86db-93581f571b21/volumes/kubernetes.io~nfs/pvc-66216c1b-f7f5-4afc-8d23-904f02ae843b: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
  Warning  FailedMount  17m  kubelet, lobashov-minikube  MountVolume.SetUp failed for volume "pvc-66216c1b-f7f5-4afc-8d23-904f02ae843b" : mount failed: exit status 32

Roadmap and release plans for feature/release-7.0.1

Hello guys,

I am very interested in the changes that you are implementing on feature/release-7.0.1.
Do you have a planned release date for this branch?
Do you also plan to package the chart and publish it in a helm repo? This would be of great help; otherwise I always need to check out the sources and package it myself.
Additionally, is there a roadmap of future changes to the helm chart?

Thanks in advance.

Best regards,
Martin

Docservice: Unable to verify the first certificate

I have installed the whole stack as advised in the readme. After that, I added a Let's Encrypt certificate manually to the ingress and turned off the rejectUnauthorized flag by mounting the default.json file as a ConfigMap. Afterwards, I set hostAliases on the docservice, converter, and nextcloud deployments so they can resolve the hostname.
Everything was fine: I was able to connect nextcloud to the documentserver and created a docx file, but when I open it, the following error appears in the logs of the docservice container:

[2022-01-31T16:06:41.318] [ERROR] nodeJS - sendServerRequest error: docId = 2064873878;url = https://cloud.mydomain.com/apps/onlyoffice/track?doc=eyJ0eXAiOmZpbGVJZCI6MjY5LCJmaWxlUGF0aCI6IlwvRG9jdW1lbnQuZG9jeCIsInNoYXJlVG9rZW4iOm51bGwsImFjdGlvbiI6InRyYWNrIn0.mIAPa0jDYqvzojVaXXtr1IYkUipVaEh-3WbUG7aBGQA;data = {"key":"2064873878","status":2,"url":"https://documentserver.mydomain.com/cache/files/2064873878_1811/output.docx/output.docx?md5=PsPlPZyEMDGqxrkV6PBIwA&expires=1643646102&filename=output.docx","history":{},"users":["ocjsd52tu6j7_admin"],"actions":[{"type":0,"userid":"ocjsd_admin"}],"lastsave":"2022-01-31T15:35:08.000Z","notmodified":false,"filetype":"docx"}
Error: unable to verify the first certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1514:34)
at TLSSocket.emit (events.js:400:28)
at TLSSocket._finishInit (_tls_wrap.js:936:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:708:12)

Is this usable standalone?

Heylo,

I am quite new to the onlyoffice galaxy and have only deployed it as a docker-compose deployment (install.sh script) for testing. As a permanent solution I would like to move to kubernetes instead, and I tried out this helm deployment, but I can't get out of the example view and can't save anything. Is this deployment usable standalone, or is it supposed to be integrated into an application, e.g. the community server?

In that case I would likely have to build a complete workspace deployment, i.e. an umbrella chart for community, documents, (mail) and control-panel?

Since the document server image can use an integrated rabbitmq and postgres, these don't have to be deployed if just one instance of the document server is running? So, as an idea, rebuild the docker deployment as a helm chart? (no HA and so on, I know)

Sorry for the maybe obvious and blunt questions.

Thanks for any feedback,
Jakob

file permissions issue in converter

I have followed the Helm install.

The trouble is with the files directory in the PVC:

ls -la /var/lib/onlyoffice/documentserver/App_Data/cache/
total 12K
drwxr-xr-x 3 ds ds 4.0K Jun 15 08:33 .
drwxr-xr-x 4 ds ds 4.0K Jun 15 08:33 ..
drwxr-xr-x 8 root root 4.0K Jul 1 09:40 files

The converter can't write the cache files.

I've bypassed that with an initContainer to chown the directory (see the sketch below), but it's temporary because I've edited the deployment directly in the cluster, and it's "hacky".
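
For reference, a rough sketch of such a workaround, assuming the chart's default ds user (101:101); the volume name is a guess, not taken from the chart:

initContainers:
  - name: fix-cache-permissions
    image: busybox
    # chown the cache directory to the ds user (101:101) before the converter starts
    command: ["sh", "-c", "chown -R 101:101 /var/lib/onlyoffice/documentserver/App_Data/cache"]
    volumeMounts:
      - name: ds-files            # assumed volume name
        mountPath: /var/lib/onlyoffice/documentserver/App_Data/cache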

Am I missing something?

Problems with access rights when deploying to an OpenShift cluster

OpenShift uses RBAC policies by default to define and apply permissions.
OpenShift users who do not have the cluster-admin role may have problems deploying manifests that perform various actions (verbs: "get", "list", "create", etc.) on resources ("pods", "deployments", "statefulsets", "endpoints", etc.).
These problems are usually caused by the user lacking the rights to perform the requested actions.
To fix this without giving the user the cluster-admin role, you can create a role with the required actions on the resources and then bind it to the user. Read more here.
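
For illustration, a role covering the verbs and resources mentioned above could be created and bound like this (the role name and user are placeholders):

$ oc create role ds-deployer --verb=get,list,create --resource=pods,deployments,statefulsets,endpoints
$ oc adm policy add-role-to-user ds-deployer <username>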
