
redis-cluster-operator's Introduction

redis-cluster-operator

Overview

Redis Cluster Operator manages Redis Cluster atop Kubernetes.

The operator itself is built with the Operator Framework.

Redis Cluster atop Kubernetes

Each master node and its slave nodes are managed by a StatefulSet; a headless service is created for each StatefulSet, and a ClusterIP service is created for all nodes.

Each StatefulSet uses PodAntiAffinity to ensure that the master and its slaves are dispersed across different nodes. In addition, when the operator selects the master in each StatefulSet, it preferentially selects pods running on different Kubernetes nodes as masters.
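
For illustration, a pod anti-affinity rule of roughly the following shape keeps pods of the same shard on different Kubernetes nodes. This is a minimal sketch using the standard Kubernetes pod spec; the label selector and topology key are assumptions, not taken from the operator's source:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            redis.kun/name: example-distributedrediscluster   # label used elsewhere in this README; assumed here
        topologyKey: kubernetes.io/hostname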


Prerequisites

  • go version v1.13+.
  • Access to a Kubernetes v1.13.10 cluster.

Features

  • Customize the number of master nodes and the number of replica nodes per master

  • Password

  • Safely Scaling the Redis Cluster

  • Backup and Restore

  • Persistent Volume

  • Custom Configuration

  • Prometheus Discovery

Quick Start

Deploy redis cluster operator

Install Step by step

Register the DistributedRedisCluster and RedisClusterBackup custom resource definitions (CRDs).

$ kubectl create -f deploy/crds/redis.kun_distributedredisclusters_crd.yaml
$ kubectl create -f deploy/crds/redis.kun_redisclusterbackups_crd.yaml

A namespace-scoped operator watches and manages resources in a single namespace, whereas a cluster-scoped operator watches and manages resources cluster-wide. You can choose to run your operator as namespace-scoped or cluster-scoped.

// cluster-scoped
$ kubectl create -f deploy/service_account.yaml
$ kubectl create -f deploy/cluster/cluster_role.yaml
$ kubectl create -f deploy/cluster/cluster_role_binding.yaml
$ kubectl create -f deploy/cluster/operator.yaml

// namespace-scoped
$ kubectl create -f deploy/service_account.yaml
$ kubectl create -f deploy/namespace/role.yaml
$ kubectl create -f deploy/namespace/role_binding.yaml
$ kubectl create -f deploy/namespace/operator.yaml

Install using helm chart

Add Helm repository

helm repo add ucloud-operator https://ucloud.github.io/redis-cluster-operator/
helm repo update

Install chart

helm install --generate-name ucloud-operator/redis-cluster-operator

Verify that the redis-cluster-operator is up and running:

$ kubectl get deployment
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
redis-cluster-operator   1/1     1            1           1d

Usage

Deploy a sample Redis Cluster

NOTE: Only Redis clusters that use persistent storage (PVC) can recover after accidental deletion or a rolling update. Even if you do not use persistence (such as RDB or AOF), you need to configure a PVC for Redis.

$ kubectl apply -f deploy/example/redis.kun_v1alpha1_distributedrediscluster_cr.yaml
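
Based on the CR reproduced later in this README, the example file contains a minimal DistributedRedisCluster along these lines (field values are illustrative):

apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  name: example-distributedrediscluster
spec:
  masterSize: 3
  clusterReplicas: 1
  image: redis:5.0.4-alpine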

Verify that the cluster instance and its components are running.

$ kubectl get distributedrediscluster
NAME                              MASTERSIZE   STATUS    AGE
example-distributedrediscluster   3            Scaling   11s

$ kubectl get all -l redis.kun/name=example-distributedrediscluster
NAME                                          READY   STATUS    RESTARTS   AGE
pod/drc-example-distributedrediscluster-0-0   1/1     Running   0          2m48s
pod/drc-example-distributedrediscluster-0-1   1/1     Running   0          2m8s
pod/drc-example-distributedrediscluster-1-0   1/1     Running   0          2m48s
pod/drc-example-distributedrediscluster-1-1   1/1     Running   0          2m13s
pod/drc-example-distributedrediscluster-2-0   1/1     Running   0          2m48s
pod/drc-example-distributedrediscluster-2-1   1/1     Running   0          2m15s

NAME                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
service/example-distributedrediscluster     ClusterIP   172.17.132.71   <none>        6379/TCP,16379/TCP   2m48s
service/example-distributedrediscluster-0   ClusterIP   None            <none>        6379/TCP,16379/TCP   2m48s
service/example-distributedrediscluster-1   ClusterIP   None            <none>        6379/TCP,16379/TCP   2m48s
service/example-distributedrediscluster-2   ClusterIP   None            <none>        6379/TCP,16379/TCP   2m48s

NAME                                                     READY   AGE
statefulset.apps/drc-example-distributedrediscluster-0   2/2     2m48s
statefulset.apps/drc-example-distributedrediscluster-1   2/2     2m48s
statefulset.apps/drc-example-distributedrediscluster-2   2/2     2m48s

$ kubectl get distributedrediscluster
NAME                              MASTERSIZE   STATUS    AGE
example-distributedrediscluster   3            Healthy   4m

Scaling Up the Redis Cluster

Increase the masterSize to trigger the scaling up.

apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  annotations:
    # if your operator runs as cluster-scoped, add this annotation
    redis.kun/scope: cluster-scoped
  name: example-distributedrediscluster
spec:
  # Increase the masterSize to trigger the scaling.
  masterSize: 4
  clusterReplicas: 1
  image: redis:5.0.4-alpine

Scaling Down the Redis Cluster

Decrease the masterSize to trigger the scaling down.

apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  annotations:
    # if your operator runs as cluster-scoped, add this annotation
    redis.kun/scope: cluster-scoped
  name: example-distributedrediscluster
spec:
  # Decrease the masterSize to trigger the scaling.
  masterSize: 3
  clusterReplicas: 1
  image: redis:5.0.4-alpine

Backup and Restore

NOTE: Only Ceph S3 object storage and PVC are supported now

Backup

$ kubectl create -f deploy/example/backup-restore/redisclusterbackup_cr.yaml
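
The backup CR references the target cluster and the storage backend. The following sketch mirrors the S3 example reproduced in an issue later in this document; the endpoint, bucket, and secret name here are illustrative:

apiVersion: redis.kun/v1alpha1
kind: RedisClusterBackup
metadata:
  name: example-redisclusterbackup
spec:
  image: redis-tools:5.0.4
  redisClusterName: example-distributedrediscluster
  storageSecretName: example-backup-secret
  s3:
    endpoint: my-s3-endpoint
    bucket: rediscluster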

Restore from backup

$ kubectl create -f deploy/example/backup-restore/restore.yaml

Prometheus Discovery

$ kubectl create -f deploy/example/prometheus-exporter.yaml
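
Prometheus discovery relies on the metrics exporter being scrapeable. Based on the exporter issue later in this document, the exporter listens on port 9121 and the pods are expected to carry scrape annotations along these lines (a sketch; the exact annotations the operator applies may differ):

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9121"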

Create Redis Cluster with password

$ kubectl create -f deploy/example/custom-password.yaml
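
The password comes from a Kubernetes Secret referenced by the CR's passwordSecret field. A minimal sketch, mirroring the examples reproduced in the issues later in this document (the Secret name and value are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: redis-auth
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm   # base64-encoded password
---
apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  name: example-distributedrediscluster
spec:
  masterSize: 3
  clusterReplicas: 1
  image: redis:5.0.4-alpine
  passwordSecret:
    name: redis-auth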

Persistent Volume

$ kubectl create -f deploy/example/persistent.yaml
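
Persistent volumes are requested through the storage block of the CR spec. The following sketch mirrors the storage spec shown in an issue later in this document; the size and storage class are illustrative:

spec:
  storage:
    type: persistent-claim
    size: 1Gi
    class: gp2
    deleteClaim: true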

Custom Configuration

$ kubectl create -f deploy/example/custom-config.yaml
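
Custom Redis settings are passed as plain redis.conf key/value pairs under config in the CR spec, for example (a sketch based on the config shown in an issue later in this document):

spec:
  config:
    appendonly: "no"
    save: ""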

Custom Service

$ kubectl create -f deploy/example/custom-service.yaml
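
The CR spec exposes a serviceName field (visible in the CRD schema reproduced later in this document), which is presumably what custom-service.yaml sets; a minimal sketch with an illustrative name:

spec:
  serviceName: my-redis-service

Note also the issue below reporting that serviceName cannot be changed after the cluster is created.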

Custom Resource

$ kubectl create -f deploy/example/custom-resources.yaml
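
Container resource requests and limits are set through a standard Kubernetes resources block in the CR spec (a sketch, mirroring the example reproduced in an issue later in this document):

spec:
  resources:
    limits:
      cpu: 200m
      memory: 200Mi
    requests:
      cpu: 200m
      memory: 100Mi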

ValidatingWebhook

see ValidatingWebhook

End to end tests

see e2e

redis-cluster-operator's People

Contributors

alexeynsorokin, gaopenghigh, hufon, oldwang12, polefishu


redis-cluster-operator's Issues

Redis Connectivity

Hi,
Could you provide any documentation to help us understand the operator, or can we have a section about Redis connectivity in the README?

We are looking for some assistance in understanding the operator's details so we can use it in production.
Regards,

HTTP support for S3 backup

Hi,

Error message:

 Events:
  Type     Reason        Age        From                           Message
  ----     ------        ----       ----                           -------
  Warning  BakcupFailed  <invalid>  redis-cluster-operator-backup  GetBucketLocation: RequestError: send request failed
caused by: Get https://my-s3-endpoint:80/rediscluster?location=: http: server gave HTTP response to HTTPS client

Can you please consider allowing HTTP when connecting to the S3 endpoint? I tried setting insecure to true, but it looks like it's getting ignored. Is it possible to support an HTTP endpoint by honoring the insecure parameter, as in the YAML config below?

apiVersion: redis.kun/v1alpha1
kind: RedisClusterBackup
metadata:
  annotations:
    # if your operator run as cluster-scoped, add this annotations
    #redis.kun/scope: cluster-scoped
  name: redisclusterbackup
  namespace: redis
spec:
  image: redis-tools:5.0.4
  redisClusterName: rediscluster
  storageSecretName: rediscluster-secret
  # Replace this with the s3 info
  s3:
    endpoint: my-s3-endpoint:80
    bucket: rediscluster
    insecure: true

insecure: true is set in the above YAML config file

Infinite error requeue in V0.2.0 during scale down.

Version 0.2.0 should be able to handle scale down as per the commit history. When scaling down, an infinite requeue loop occurs with the cluster in an inconsistent state.

The IP address of the pod isn't static. I am working on setting up an environment to turn on debug logging as I believe the error is related to the Admin.getInfos() function but I am unable to see any debug output other than the logs below.

It seems that the deleted master is still in the cluster node list but failing which is taking down the whole cluster. When this occurs, the scale down code is never reached as the error is caught and requeued earlier in the controller.

To reproduce:

  • Deploy the operator
  • Deploy the example with 4 nodes
  • Update the deployment to 3 nodes
  • Redeploy

Logs:

CheckRedisNodeNum: redis pods are not all ready

followed by an infinite loop of the following:

Sending CLUSTER MEET messages to join the cluster
node xx.xx.x.x:6379 attached properly
wait for cluster join: Cluster view is inconsistent
...
Sending CLUSTER MEET messages to join the cluster
node xy.xx.x.x:6379 attached properly
wait for cluster join: Cluster view is inconsistent
...
Sending CLUSTER MEET messages to join the cluster
node zx.xx.x.x:6379 attached properly
wait for cluster join: Cluster view is inconsistent

make build-image fail

make build-image

docker build --build-arg VERSION=v0.2.3-26 --build-arg GIT_SHA=a01fa97 -t /redis-cluster-operator:v0.2.3-26 .
invalid argument "/redis-cluster-operator:v0.2.3-26" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
make: *** [build-image] Error 125

CLUSTER FORGET errors

My cluster got in a bad state, it's unclear why.

  • very recent redis-cluster-operator (from 2 weeks ago)
  • OpenShift 3.11, which is built on Kubernetes 1.11.
  • redis-5.0.8
  • 3 master 3 replicas
{"level":"info","ts":1592421784.707177,"logger":"controller_distributedrediscluster.redis_util","msg":"Forget node done","Request.Namespace":"cobra-bench","Request.Name":"redis","node":"dea890e5848908b58d70b2911ce69e037cf88772"}
{"level":"info","ts":1592421784.707182,"logger":"controller_distributedrediscluster","msg":"[FixUntrustedNodes] found untrust node","Request.Namespace":"cobra-bench","Request.Name":"redis","node":"{Redis ID: 712df7a404a1aa19fd966f3f7d0ccc68aacb48bb, role: None, master: , link: connected, status: [handshake], addr: 172.25.197.16:6379, slots: [], len(migratingSlots): 0, len(importingSlots): 0}"}
{"level":"info","ts":1592421784.7072167,"logger":"controller_distributedrediscluster","msg":"[FixUntrustedNodes] try to forget node","Request.Namespace":"cobra-bench","Request.Name":"redis","nodeId":"712df7a404a1aa19fd966f3f7d0ccc68aacb48bb"}
{"level":"info","ts":1592421785.1034586,"logger":"controller_distributedrediscluster.redis_util","msg":"CLUSTER FORGET","Request.Namespace":"cobra-bench","Request.Name":"redis","id":"712df7a404a1aa19fd966f3f7d0ccc68aacb48bb","from":"172.28.152.78:6379"}
{"level":"info","ts":1592421785.105386,"logger":"controller_distributedrediscluster.redis_util","msg":"CLUSTER FORGET","Request.Namespace":"cobra-bench","Request.Name":"redis","id":"712df7a404a1aa19fd966f3f7d0ccc68aacb48bb","from":"172.29.215.140:6379"}
{"level":"error","ts":1592421785.1057348,"logger":"controller_distributedrediscluster","msg":"Unable to execute FORGET command: unexpected error on node 172.29.215.140:6379","Request.Namespace":"cobra-bench","Request.Name":"redis","error":"ERR Unknown node 712df7a404a1aa19fd966f3f7d0ccc68aacb48bb","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/ucloud/redis-cluster-operator/pkg/redisutil.(*AdminConnections).ValidateResp\n\t/src/pkg/redisutil/connections.go:296\ngithub.com/ucloud/redis-cluster-operator/pkg/redisutil.(*Admin).ForgetNode\n\t/src/pkg/redisutil/admin.go:480\ngithub.com/ucloud/redis-cluster-operator/pkg/controller/heal.(*CheckAndHeal).FixUntrustedNodes\n\t/src/pkg/controller/heal/untrustenodes.go:46\ngithub.com/ucloud/redis-cluster-operator/pkg/controller/manager.(*realHeal).Heal\n\t/src/pkg/controller/manager/healer.go:31\ngithub.com/ucloud/redis-cluster-operator/pkg/controller/distributedrediscluster.(*ReconcileDistributedRedisCluster).Reconcile\n\t/src/pkg/controller/distributedrediscluster/distributedrediscluster_controller.go:248\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}
{"level":"info","ts":1592421785.105893,"logger":"controller_distributedrediscluster.redis_util","msg":"CLUSTER FORGET","Request.Namespace":"cobra-bench","Request.Name":"redis","id":"712df7a404a1aa19fd966f3f7d0ccc68aacb48bb","from":"172.28.5.80:6379"}
{"level":"error","ts":1592421785.1062799,"logger":"controller_distributedrediscluster","msg":"Unable to execute FORGET command: unexpected error on node 172.28.5.80:6379","Request.Namespace":"cobra-bench","Request.Name":"redis","error":"ERR Unknown node 712df7a404a1aa19fd966f3f7d0ccc68aacb48bb","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/ucloud/redis-cluster-operator/pkg/redisutil.(*AdminConnections).ValidateResp\n\t/src/pkg/redisutil/connections.go:296\ngithub.com/ucloud/redis-cluster-operator/pkg/redisutil.(*Admin).ForgetNode\n\t/src/pkg/redisutil/admin.go:480\ngithub.com/ucloud/redis-cluster-operator/pkg/controller/heal.(*CheckAndHeal).FixUntrustedNodes\n\t/src/pkg/controller/heal/untrustenodes.go:46\ngithub.com/ucloud/redis-cluster-operator/pkg/controller/manager.(*realHeal).Heal\n\t/src/pkg/controller/manager/healer.go:31\ngithub.com/ucloud/redis-cluster-operator/pkg/controller/distributedrediscluster.(*ReconcileDistributedRedisCluster).Reconcile\n\t/src/pkg/controller/distributedrediscluster/distributedrediscluster_controller.go:248\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}

There were 0 restarts:

NAME                                      READY   STATUS    RESTARTS   AGE
cobra-6-mbfqx                             1/1     Running   0          18d
drc-redis-0-0                             1/1     Running   0          18d
drc-redis-0-1                             1/1     Running   0          18d
drc-redis-1-0                             1/1     Running   0          18d
drc-redis-1-1                             1/1     Running   0          18d
drc-redis-2-0                             1/1     Running   0          15d
drc-redis-2-1                             1/1     Running   0          15d
redis-cluster-operator-7cc9cd4489-cmd4g   1/1     Running   0          18d

But I see 4 master and slaves with CLUSTER NODES

/data # redis-cli -u redis://172.16.101.247 -c CLUSTER NODES | awk '{print $2, $3}'
172.25.197.16:6379@16379 slave,fail
172.29.215.140:6379@16379 slave
172.28.5.80:6379@16379 slave
172.31.121.77:6379@16379 master
172.30.225.140:6379@16379 master
172.28.152.78:6379@16379 master
172.30.160.203:6379@16379 master,fail
172.30.112.12:6379@16379 myself,master

Any ideas or guesses on how to investigate this?

How to configure affinity or NodeSelector?

In my cluster, the nodes are logically grouped. I would like the Redis instances created by the operator to run only on a specified logical group of nodes. How should I configure this? Please help.

Add Support to Set Hostnetwork True and Choose Different Ports

Is it possible to set hostNetwork to true?
Also, to choose a different port for the Redis deployment?

We are planning to use multiple Redis clusters, and setting hostNetwork to true (when allowed by the ucloud operator) requires a lot of nodes. So we are planning to choose a different port for each Redis cluster.

Kubernetes 1.11 support

Hi there!

How hard would it be to support Kubernetes 1.11?

At work we are currently stuck on OpenShift 3.11, which is based on Kubernetes 1.11, but this operator claims that it needs Kubernetes 1.13.

masterSize should be less than or equal to 10

Hi,

I am testing this operator as a solution for my environment, where I have up to 40 masters, but when I try to apply the following resource:

apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  annotations:
    # if your operator run as cluster-scoped, add this annotations
    redis.kun/scope: cluster-scoped
  name: redis-cluster-ucloud
spec:
  # Add fields here
  masterSize: 20
  clusterReplicas: 1
  image: redis:5.0.4-alpine
  config:
    appendonly: "no"
    save: ""
  storage:
    type: persistent-claim
    size: 50Mi
    class: gp2
    deleteClaim: true

I get this error:

The DistributedRedisCluster "redis-cluster-ucloud" is invalid: spec.masterSize: Invalid value: 10: spec.masterSize in body should be less than or equal to 10

Is it possible to raise this limit by editing the chart values?

Thanks
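
For reference, this limit is enforced by the openAPIV3Schema validation on spec.masterSize in the distributedredisclusters.redis.kun CRD (a CRD dump later in this document shows maximum: 12). Raising it would mean patching that validation, roughly as sketched below; this is not an officially documented knob, so treat it as an assumption:

# excerpt of the CRD spec; only the masterSize bounds are changed
validation:
  openAPIV3Schema:
    properties:
      spec:
        properties:
          masterSize:
            format: int32
            minimum: 3
            maximum: 40   # raised from the shipped default; illustrative value
            type: integer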

fix failed nodes

Remove nodes that are still known by some Redis nodes but no longer exist, meaning they have failed and their pods are not in Kubernetes, or have no targetable IP.

eg:

9f022683ce0648d2ece63f8228c841af9cf17f32 10.25.245.155:6379@16379 master,fail - 1571727438167 1571727435000 7 connected 10240-12287
21f9762f617fa7015eabc93f5a4f572dd10984d0 10.25.42.216:6379@16379 master,fail - 1571727438570 1571727433000 10 connected 956-1365 4916-5461 9012-9557 13108-13653
3ef0fe113df780567435bcc260ec11e6d99a0889 10.25.131.198:6379@16379 master,fail - 1571727438671 1571727435000 21 connected 4508-4915 8192-9011 12288-13107
b31fa1fc343a2e35454879fbd7766a05ed737595 10.25.148.113:6379@16379 master - 0 1571727891672 23 connected
19d921e27cf9310466fa47dd878d897cd60abe36 10.25.158.242:6379@16379 slave,fail 32bd732c4e64131c40d97699acb2b67857d5e154 1571727417746 1571727411000 15 connected
c99cf754651352d3924c11535aea5dbff897d035 10.25.102.60:6379@16379 slave,fail 9f022683ce0648d2ece63f8228c841af9cf17f32 1571727418046 1571727412936 7 connected
0545c6aa7be1edac7699dafcac8e74ae42a27fe9 10.25.155.37:6379@16379 master - 0 1571727896686 18 connected
914f4a8913c98e2109ba7e52d74c05d605ef746b 10.25.179.30:6379@16379 myself,master - 0 1571727881000 1 connected
9ae71d8f27b1fb549b91a84bc99e1b7018344b92 10.25.15.34:6379@16379 slave,fail 3ef0fe113df780567435bcc260ec11e6d99a0889 1571727438369 1571727435058 21 connected
316bb73f32ca8c9145636f75793d356766fe33b4 10.25.25.24:6379@16379 master,fail - 1571727438470 1571727432000 6 connected 2048-4095
c683896392e38c65597840ab4a92198b1abfc8d6 10.25.44.46:6379@16379 master - 0 1571727897000 0 connected
664f54e000664947fb8d0db1fe65f4e321b7dd7e 10.25.202.241:6379@16379 master,fail - 1571727438671 1571727437566 16 connected 1366-2047 4506-4507 5462-6143 9558-10239
116650d29e02ccbc31dc66ab628bc55a7f96adeb 10.25.188.223:6379@16379 slave,fail e1449e08e4b4fe086478ea505be8a322b977707c 1571727418249 1571727416000 13 connected
e1449e08e4b4fe086478ea505be8a322b977707c 10.25.9.93:6379@16379 master,fail - 1571727438671 1571727432048 13 connected 14336-16383
fc87c2ddb5df7947dbc17fae11745610dedeabd4 10.25.188.250:6379@16379 slave,fail 664f54e000664947fb8d0db1fe65f4e321b7dd7e 1571727418148 1571727413000 16 connected
2ff1b5f51c760b098e6841c5bbad96a27c2dee87 10.25.96.66:6379@16379 slave,fail 72e0dd24c9a8b316d84fd7e0e5f6c2d9013f495a 1571727417946 1571727416544 2 connected
32bd732c4e64131c40d97699acb2b67857d5e154 10.25.43.116:6379@16379 master,fail - 1571727437866 1571727437065 15 connected 0-955 4096-4505 13654-14335
3a60d6efef2b7926da156366ede8a390f56d786a 10.25.18.171:6379@16379 master - 0 1571727898191 17 connected
6970871b843c542292633a1271edc81320bd50bc 10.25.3.46:6379@16379 master - 0 1571727898692 22 connected
3f04760c7f9d1c5b2274cbf35de52d66f500e99d 10.25.222.230:6379@16379 master - 0 1571727894680 19 connected
8c39a041fc77318e8b96920f976cfc9649a2070d 10.25.193.225:6379@16379 master - 0 1571727898000 20 connected
72e0dd24c9a8b316d84fd7e0e5f6c2d9013f495a 10.25.174.2:6379@16379 master,fail - 1571727438369 1571727435000 2 connected 6144-8191
cf95d11b5990fe1969bab7c35ca01dcbb7d9e2c2 10.25.86.64:6379@16379 slave,fail 21f9762f617fa7015eabc93f5a4f572dd10984d0 1571727418451 1571727417545 10 connected
6b62334219867fc925ddf3e238eb0951678b2583 10.25.198.105:6379@16379 slave,fail 316bb73f32ca8c9145636f75793d356766fe33b4 1571727417946 1571727413000 6 connected

how to use redis user to run pod

I used the redis:5.0.4-alpine image.
When I run this image with docker or kubectl run, the user is redis,
but the user is root when the pod is run by redis-cluster-operator.

redis exporter

  • add container for exporter
  • add annotations to ensure prometheus can get metrics automatically.
"prometheus.io/scrape": "true",
"prometheus.io/port":   "9121",

status.nodes is null?

status:
  nodes: null
  reason: 'DispatchMasters: missing statefulset drc-back-ad-srv-redis-cluster-0'
  restore:
    backup: null
  status: Failed

but the StatefulSet is up and running, and the cluster remains running

kubectl get sts |grep drc-back-ad-srv-redis-cluster-0
drc-back-ad-srv-redis-cluster-0   3/3     141d

What causes this issue? Any ideas on how to resolve it?

PostStartHookError

I want to use a custom password, so I changed custom-password.yaml, replacing password: MWYyZDFlMmU2N2Rm with password: MTIzNDU2Cg==.
kubectl apply -f custom-password.yaml then fails with the error "PostStartHookError: command '/bin/sh -c echo ${REDIS_PASSWORD} > /data/redis_password' exited with 126: 0".
kubectl logs -f drc-redis-cluster-0-0 shows:

*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 4

'requirepass '123456'
Unbalanced quotes in configuration line

Redis operator deployment works fine, but I can't get a working redis cluster

I tried with both modes: namespace and cluster but I get the same result:

kubectl get distributedrediscluster
NAME        MASTERSIZE   STATUS   AGE
redis-one   3                     15s

So here is how I proceed:

Git

git clone https://github.com/ucloud/redis-cluster-operator.git
cd redis-cluster-operator
git checkout v0.2.0

Kubectl

kubectl create -f deploy/crds/redis.kun_distributedredisclusters_crd.yaml
kubectl create -f deploy/crds/redis.kun_redisclusterbackups_crd.yaml
kubectl create -f deploy/service_account.yaml
kubectl create -f deploy/namespace/role.yaml
kubectl create -f deploy/namespace/role_binding.yaml
kubectl create -f deploy/namespace/operator.yaml

Checking the operator:

$ kubectl get deployment
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
redis-cluster-operator   1/1     1            1           78s
$ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
redis-cluster-operator-d866cbd6c-rqxfp   1/1     Running   0          12m

So far, so good.

I need the password mode, so I'm creating a Secret file:

apiVersion: v1
kind: Secret
metadata:
  name: redis-auth
  namespace: p6y
type: Opaque
data:
  password: QmhvbyEK
kubectl apply -f secret.yaml

And finally the DistributedRedisCluster file:

apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  name: redis-one
  namespace: p6y
spec:
  image: uhub.service.ucloud.cn/operator/redis:5.0.4-alpine
  masterSize: 3
  clusterReplicas: 1
  passwordSecret:
    name: redis-auth
  resources:
    limits:
      cpu: 200m
      memory: 200Mi
    requests:
      cpu: 200m
      memory: 100Mi
kubectl apply -f drc.yaml

Now checking the health of the cluster:

$ kubectl get distributedrediscluster -n p6y
NAME        MASTERSIZE   STATUS   AGE
redis-one   3                     100s

Here are the operator logs:

$ kubectl logs -f redis-cluster-operator-d866cbd6c-rqxfp
{"level":"info","ts":1579703505.3296764,"logger":"cmd","msg":"Go Version: go1.13.3"}
{"level":"info","ts":1579703505.3298333,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1579703505.329855,"logger":"cmd","msg":"Version of operator-sdk: v0.13.0"}
{"level":"info","ts":1579703505.3298721,"logger":"cmd","msg":"Version of operator: 0.1.1+0000000"}
{"level":"info","ts":1579703505.330385,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1579703506.30127,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1579703506.3117602,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1579703507.2682493,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"0.0.0.0:8383"}
{"level":"info","ts":1579703507.2686155,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1579703511.1277542,"logger":"metrics","msg":"Metrics Service object created","Service.Name":"redis-cluster-operator-metrics","Service.Namespace":"default"}
{"level":"info","ts":1579703512.079728,"logger":"cmd","msg":"Could not create ServiceMonitor object","error":"no ServiceMonitor registered with the API"}
{"level":"info","ts":1579703512.079791,"logger":"cmd","msg":"Install prometheus-operator in your cluster to create ServiceMonitor objects","error":"no ServiceMonitor registered with the API"}
{"level":"info","ts":1579703512.0798006,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1579703512.080263,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1579703512.0804253,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"distributedrediscluster-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1579703512.080463,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"redisclusterbackup-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1579703512.1810398,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"distributedrediscluster-controller"}
{"level":"info","ts":1579703512.1811314,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"distributedrediscluster-controller","worker count":4}
{"level":"info","ts":1579703512.1819806,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"redisclusterbackup-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1579703512.2825923,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"redisclusterbackup-controller"}
{"level":"info","ts":1579703512.2826486,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"redisclusterbackup-controller","worker count":2}

Other information:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"clean", BuildDate:"2019-09-18T14:41:55Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

RBAC issue: User "system:serviceaccount:dev:redis-cluster-operator" cannot get resource "pods" in API group

Hello,

I have deployed the helm chart with --version=0.1.0 --set operator.namespace=dev.

This is currently failing:

➤  kubectl get deployments.apps redis-cluster-operator
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
redis-cluster-operator   0/1     1            0           46s

The operator logs shows the following:

➤  kubectl logs redis-cluster-operator-6b89b7c7c7-xxt4k
{"level":"info","ts":1608217532.1860723,"logger":"cmd","msg":"Go Version: go1.13.3"}
{"level":"info","ts":1608217532.1861515,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1608217532.18617,"logger":"cmd","msg":"Version of operator-sdk: v0.13.0"}
{"level":"info","ts":1608217532.1861897,"logger":"cmd","msg":"Version of operator: v0.2.0-62+12f703c"}
{"level":"info","ts":1608217532.186407,"logger":"leader","msg":"Trying to become the leader."}
{"level":"error","ts":1608217533.4404905,"logger":"k8sutil","msg":"Failed to get Pod","Pod.Namespace":"dev","Pod.Name":"redis-cluster-operator-6b89b7c7c7-xxt4k","error":"pods \"redis-cluster-operator-6b89b7c7c7-xxt4k\" is forbidden: User \"system:serviceaccount:dev:redis-cluster-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"dev\": RBAC: clusterrole.rbac.authorization.k8s.io \"redis-cluster-operator\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/operator-framework/operator-sdk/pkg/k8sutil.GetPod\n\t/go/pkg/mod/github.com/operator-framework/[email protected]/pkg/k8sutil/k8sutil.go:128\ngithub.com/operator-framework/operator-sdk/pkg/leader.myOwnerRef\n\t/go/pkg/mod/github.com/operator-framework/[email protected]/pkg/leader/leader.go:160\ngithub.com/operator-framework/operator-sdk/pkg/leader.Become\n\t/go/pkg/mod/github.com/operator-framework/[email protected]/pkg/leader/leader.go:67\nmain.main\n\t/src/cmd/manager/main.go:99\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
{"level":"error","ts":1608217533.4405937,"logger":"cmd","msg":"","error":"pods \"redis-cluster-operator-6b89b7c7c7-xxt4k\" is forbidden: User \"system:serviceaccount:dev:redis-cluster-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"dev\": RBAC: clusterrole.rbac.authorization.k8s.io \"redis-cluster-operator\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nmain.main\n\t/src/cmd/manager/main.go:101\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}

Any ideas?

redis cluster is down after node recover from NotReady

I deployed a Redis cluster with 3 masters and 3 slaves.
I killed one node's kubelet, then waited for the pods on that node to turn to Terminating.
After starting the kubelet again, the StatefulSet pods were rescheduled and reached the Running state,
but nodes.conf has 8 nodes, and all of them are masters.

Missing STATUS when executing `kubectl get distributedrediscluster`

$ kubectl get distributedrediscluster
NAME                              MASTERSIZE   STATUS    AGE
example-distributedrediscluster   3                      98s

Notice how STATUS does not display anything.

OS: macOS
I have tested this with minikube (1.18) and with Kubernetes enabled in Docker Desktop (1.16).

I am using the steps to run the operator in a namespace, not at the cluster level.

$ kubectl get -o yaml distributedrediscluster
apiVersion: v1
items:
- apiVersion: redis.kun/v1alpha1
  kind: DistributedRedisCluster
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"redis.kun/v1alpha1","kind":"DistributedRedisCluster","metadata":{"annotations":{"redis.kun/scope":"cluster-scoped"},"name":"example-distributedrediscluster","namespace":"default"},"spec":{"clusterReplicas":1,"image":"redis:5.0.4-alpine","masterSize":3}}
      redis.kun/scope: cluster-scoped
    creationTimestamp: 2020-05-26T18:30:30Z
    generation: 1
    name: example-distributedrediscluster
    namespace: default
    resourceVersion: "20898"
    selfLink: /apis/redis.kun/v1alpha1/namespaces/default/distributedredisclusters/example-distributedrediscluster
    uid: 3e3435af-c515-4049-bc87-655d8a797888
  spec:
    clusterReplicas: 1
    image: redis:5.0.4-alpine
    masterSize: 3
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The fact that resourceVersion and selfLink are empty is a smell.

Can't find method "NewHealer"

There is a method "NewHealer" referenced in the Go file "pkg\controller\distributedrediscluster\distributedrediscluster_controller.go", but it is not found in the target package "github.com/ucloud/redis-cluster-operator/pkg/controller/manager".

Capable of dispersing master nodes?

Hi, for Redis Cluster, master nodes have to be dispersed enough that no single machine hosts half of the master nodes (this is required for failover master election).

Do we have this built in?

serviceName can't change after cluster creation

Hello people, and thank you for your nice work!

Posting an observation here to track it as a possible issue.

We observed that if a cluster is created and we then edit the spec to change serviceName, the actual Kubernetes Service name does not change after reconciliation.

memory set policy

Should I configure maxmemory, or just leave it at 0? I have already specified a memory limit in resources.

The rename of the config command creates issues with redis-py-cluster

rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True, password='XXXXX')

Because the config command has been renamed, the connection fails.

rediscluster.exceptions.RedisClusterException: ERROR sending 'config get cluster-require-full-coverage' command to redis server: {'host': '10.249.9.38', 'port': 6379, 'name': '10.249.9.38:6379', 'server_type': 'slave'}

As a workaround, you have to set skip_full_coverage_check=True.

This might work for software built in-house, but other software might not expose this configuration to the user.

rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True, skip_full_coverage_check=True, password='XXXXX')

Removing authentication from an existing cluster doesn't work

Adding authentication to an existing cluster works and triggers the recycling of the pods as expected.

The other way around (removing authentication from an existing cluster) does not seem to work.

It doesn't trigger any recycling of the pods and even if you delete the pods manually they get recreated with the authentication.

Changing the password of an existing cluster doesn't work either.

Cluster doesn't start when using Istio

When creating a DistributedRedisCluster in a namespace with Istio sidecar injection enabled, the cluster fails to start.

Basically, the readiness probe fails because the command redis-cli -h $(hostname) ping cannot succeed: Envoy intercepts connections to :6379 and applies mTLS, which the client knows nothing about.

An easy workaround would be to allow specifying annotations that the operator applies to all pods. That way we could add sidecar.istio.io/inject: "false", which Istio recognizes, so it would not inject sidecars and the Redis cluster would operate as if there were no Istio.

Ideally, though, the probes would need to be changed to support running under a service mesh like Istio.

Maybe we can open a second Redis port and add annotations to the pods so that Istio does not try to intercept that one?
Maybe the probes could use localhost?
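
For illustration, the suggested workaround boils down to an annotation like the following on the Redis pods, assuming the operator gains a way to propagate pod annotations (Istio recognizes this annotation and skips sidecar injection):

metadata:
  annotations:
    sidecar.istio.io/inject: "false"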

Redundant Headless service

This package creates n headless Services (one per StatefulSet). What is the use of these headless Services, given that the pods are accessed by their IPs without going through the headless Service?

How to connect to Redis Pods during development?

I am trying to debug this package using
operator-sdk up local
and I am not able to connect to the Redis pods during development.

I think the main reason is that the operator runs locally while the pods are inside the Kubernetes cluster.

It works fine when the operator runs as a Deployment inside the Kubernetes cluster, but it would be a tiring and long job to redeploy to the cluster every time you change the operator while debugging, which increases development time.

How can I connect to the Redis pods during development using just their IPs?

The update of the operator gets stuck


When I try to change the version of the Docker image, the recycling of the pods gets stuck because of a lock file.

{"level":"info","ts":1589462653.6875901,"logger":"cmd","msg":"Go Version: go1.13.3"} ││ {"level":"info","ts":1589462653.6877007,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"} │
│ {"level":"info","ts":1589462653.6877236,"logger":"cmd","msg":"Version of operator-sdk: v0.13.0"} ││ {"level":"info","ts":1589462653.6877367,"logger":"cmd","msg":"Version of operator: v0.2.0-34-dirty+7494848"} │
│ {"level":"info","ts":1589462653.77739,"logger":"leader","msg":"Trying to become the leader."} ││ {"level":"info","ts":1589462654.48222,"logger":"leader","msg":"Found existing lock","LockOwner":"redis-cluster-operator-56f8d58767-rn8nd"} │
│ {"level":"info","ts":1589462654.4928203,"logger":"leader","msg":"Not the leader. Waiting."}

Redis cluster cannot recover from failure

I deployed a 3-master, 3-slave Redis cluster in my staging environment and it was running well.
But after I deleted one of these six pods, the cluster can't recover. The IP address of the deleted pod is still there, in "fail" status. So there are 7 nodes in my view: 1 failed master, 1 running master with no slots, 3 masters with slots, and 2 slaves.
How do I get it back to a normal state? Could you add a feature to solve this problem?
Thank you very much!

Image versions should be fixed in manifest files

I'm deploying version v0.2.0 from the v0.2.0 tag, which uses the latest image tag for both the cluster-scoped and the namespace-scoped manifests; that doesn't look right to me.

The master branch using the latest Docker image tag is no problem, but releases should have a fixed Docker image tag.

Update

I found this issue where @polefishu suggests using the v0.2.0 tag of the Docker image, which seems to confirm the content of this issue.

Using default storage class

Hello,

Currently when using a persistent volume, it seems we cannot use the default cluster storage class.

If the class specification is missing in the storage specification, the resulting PVC will have its class set to "", which means pre-binding a PVC (see Reserving a PersistentVolume) instead of not defining storageClassName at all.

Is this intentional?

validating failed on creating new drc

Hi, I deployed the operator as cluster-scoped following the quick start guide, and everything seemed fine, but when I tried to create a DRC with kubectl create -f deploy/example/custom-password.yaml, it threw a validation error:

> kubectl create -f deploy/example/custom-password.yaml
error validating "deploy/example/custom-password.yaml": error validating data: [ValidationError(DistributedRedisCluster.spec): unknown field "image" in kun.redis.v1alpha1.DistributedRedisCluster.spec, ValidationError(DistributedRedisCluster.spec): unknown field "passwordSecret" in kun.redis.v1alpha1.DistributedRedisCluster.spec, ValidationError(DistributedRedisCluster.spec): unknown field "resources" in kun.redis.v1alpha1.DistributedRedisCluster.spec]; if you choose to ignore these errors, turn validation off with --validate=false

CRDs I have created:

> kubectl get crd
NAME                                 CREATED AT
distributedredisclusters.redis.kun   2020-07-09T08:56:53Z
redisclusterbackups.redis.kun        2020-07-09T09:03:34Z

> kubectl get deployments -A
NAMESPACE     NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
default       redis-cluster-operator   1/1     1            1           5h59m
kube-system   coredns                  1/1     1            1           21d
kube-system   kubernetes-dashboard     1/1     1            1           21d
kube-system   metrics-server           1/1     1            1           21d

The distributedredisclusters.redis.kun CRD definition:

> kubectl get crd distributedredisclusters.redis.kun -oyaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apiextensions.k8s.io/v1beta1","kind":"CustomResourceDefinition","metadata":{"annotations":{},"name":"distributedredisclusters.redis.kun"},"spec":{"additionalPrinterColumns":[{"JSONPath":".spec.masterSize","description":"The number of redis master node in the ensemble","name":"MasterSize","type":"integer"},{"JSONPath":".status.status","description":"The status of redis cluster","name":"Status","type":"string"},{"JSONPath":".metadata.creationTimestamp","name":"Age","type":"date"},{"JSONPath":".status.numberOfMaster","description":"The current master number of redis cluster","name":"CurrentMasters","priority":1,"type":"integer"},{"JSONPath":".spec.image","description":"The image of redis cluster","name":"Images","priority":1,"type":"string"}],"group":"redis.kun","names":{"kind":"DistributedRedisCluster","listKind":"DistributedRedisClusterList","plural":"distributedredisclusters","shortNames":["drc"],"singular":"distributedrediscluster"},"scope":"Namespaced","subresources":{"status":{}},"validation":{"openAPIV3Schema":{"description":"DistributedRedisCluster is the Schema for the distributedredisclusters API","properties":{"apiVersion":{"description":"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources","type":"string"},"kind":{"description":"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds","type":"string"},"metadata":{"type":"object"},"spec":{"description":"DistributedRedisClusterSpec defines the desired state of DistributedRedisCluster","properties":{"clusterReplicas":{"format":"int32","maximum":3,"minimum":1,"type":"integer"},"masterSize":{"format":"int32","maximum":12,"minimum":3,"type":"integer"},"serviceName":{"pattern":"[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*","type":"string"}},"type":"object"},"status":{"description":"DistributedRedisClusterStatus defines the observed state of DistributedRedisCluster","type":"object"}},"type":"object"}},"version":"v1alpha1","versions":[{"name":"v1alpha1","served":true,"storage":true}]}}
  creationTimestamp: "2020-07-09T08:56:53Z"
  generation: 1
  name: distributedredisclusters.redis.kun
  resourceVersion: "3190816"
  selfLink: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/distributedredisclusters.redis.kun
  uid: 2fa61f63-9a31-46b7-b51c-82a2e8c30578
spec:
  additionalPrinterColumns:
  - JSONPath: .spec.masterSize
    description: The number of redis master node in the ensemble
    name: MasterSize
    type: integer
  - JSONPath: .status.status
    description: The status of redis cluster
    name: Status
    type: string
  - JSONPath: .metadata.creationTimestamp
    name: Age
    type: date
  - JSONPath: .status.numberOfMaster
    description: The current master number of redis cluster
    name: CurrentMasters
    priority: 1
    type: integer
  - JSONPath: .spec.image
    description: The image of redis cluster
    name: Images
    priority: 1
    type: string
  conversion:
    strategy: None
  group: redis.kun
  names:
    kind: DistributedRedisCluster
    listKind: DistributedRedisClusterList
    plural: distributedredisclusters
    shortNames:
    - drc
    singular: distributedrediscluster
  preserveUnknownFields: true
  scope: Namespaced
  subresources:
    status: {}
  validation:
    openAPIV3Schema:
      description: DistributedRedisCluster is the Schema for the distributedredisclusters
        API
      properties:
        apiVersion:
          description: 'APIVersion defines the versioned schema of this representation
            of an object. Servers should convert recognized schemas to the latest
            internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
          type: string
        kind:
          description: 'Kind is a string value representing the REST resource this
            object represents. Servers may infer this from the endpoint the client
            submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
          type: string
        metadata:
          type: object
        spec:
          description: DistributedRedisClusterSpec defines the desired state of DistributedRedisCluster
          properties:
            clusterReplicas:
              format: int32
              maximum: 3
              minimum: 1
              type: integer
            masterSize:
              format: int32
              maximum: 12
              minimum: 3
              type: integer
            serviceName:
              pattern: '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'
              type: string
          type: object
        status:
          description: DistributedRedisClusterStatus defines the observed state of
            DistributedRedisCluster
          type: object
      type: object
  version: v1alpha1
  versions:
  - name: v1alpha1
    served: true
    storage: true
status:
  acceptedNames:
    kind: DistributedRedisCluster
    listKind: DistributedRedisClusterList
    plural: distributedredisclusters
    shortNames:
    - drc
    singular: distributedrediscluster
  conditions:
  - lastTransitionTime: "2020-07-09T08:56:53Z"
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: null
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
  storedVersions:
  - v1alpha1

Kubernetes version being v1.15.0

Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Add k8s event

Add create, update, failed, and other events when syncing the Redis cluster.

Rolling update drops all data without persistence

If persistence isn't enabled, then any change to the DistributedRedisCluster resource (e.g. an image or resource allocation update) causes the Redis cluster to lose all of its data.

It appears that the operator waits until all the Redis nodes in a shard (StatefulSet) are updated before configuring them, instead of doing it one by one for each node. This means that new nodes can't replicate from the existing ones when they're brought up, making this operator impossible to use without persistence enabled if any maintenance is ever to be done.

To address this, one option is to join nodes to the cluster through an init container when updating, and to have the health check ensure replication is complete before declaring a node healthy.

Congratulations to UCloud on its IPO; a question

go: finding sigs.k8s.io/controller-runtime v0.4.0
go: finding sigs.k8s.io/controller-tools v0.2.2
go: finding sigs.k8s.io/kustomize v2.0.3+incompatible
go: finding sigs.k8s.io/structured-merge-diff v0.0.0-20190817042607-6149e4549fca
go: finding sigs.k8s.io/testing_frameworks v0.1.2
go: finding sigs.k8s.io/yaml v1.1.0
go: finding vbom.ml/util v0.0.0-20160121211510-db5cfe13f5cc ---->
vbom.ml/util never finishes downloading when building the Dockerfile; it gets stuck here. A proxy is already set.
