
redis's Issues

Add explicit version tag to latest images

Currently, the redis, redis-exporter and redis-sentinel images that are pushed to quay.io do not have a corresponding tag with the explicit version.
We prefer to be able to refer to a specific version to avoid unintended upgrades when a new version is published.
Please tag your images with the explicit version number, in addition to the 'latest' tag.

What is the last tag created on quay.io/opstree?

We use Renovate to run updates over the redis images from quay.io/opstree/redis.
While the latest tag in the repository is 7.2.3, Renovate also reports a 7.2.5 version.
Last Thursday, while testing, we saw a 7.2.4 tag being created in the repository; most likely it was deleted later the same day.
Unfortunately, those tags most probably remain in Renovate's database.
We are trying to understand what is happening with this container image repository and whether it can be used as a reliable source for updates.

Transparent Huge Pages (THP) warning

When deploying this docker image (via Redis Operator - thanks for that one!) I get a warning:

WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.

Is there a way to configure it/set THP settings by default?
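THP is a node-level kernel setting, so the image itself can't change it. A commonly used workaround on Kubernetes is a privileged initContainer that disables it on the node before redis starts. A sketch of that pattern follows (a plain pod-spec fragment; whether the Redis Operator CRD lets you inject initContainers is an assumption, and all names are illustrative):

```yaml
# Sketch: privileged initContainer writing the host's THP setting.
# Names below are illustrative, not operator-defined fields.
initContainers:
  - name: disable-thp
    image: busybox
    command: ["sh", "-c", "echo never > /host-sys/kernel/mm/transparent_hugepage/enabled"]
    securityContext:
      privileged: true
    volumeMounts:
      - name: host-sys
        mountPath: /host-sys
volumes:
  - name: host-sys
    hostPath:
      path: /sys
```

Note this requires a cluster that permits privileged containers, which security policies may forbid.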

Push updated images to Quay.io

Hello,

I'm using your Redis operator (v0.14.0) on OpenShift/OKD clusters and I realized that the default image (i.e. quay.io/opstree/redis:v7.0.5) used by the operator and helm charts was built several months ago (November 2022). As such, it does not contain the latest bug fixes from the newest Redis releases, nor the much-needed write permission fix that would allow proper deployment on OpenShift clusters without having to lower security (see #28).

I know I can build it myself but I believe that it would be a smoother on-boarding experience if working images were provided by default to the users of your Redis operator.

Thanks a lot for what you're doing.

Lack of persistence for the /node-conf path

Hi,
Starting from this change (7459d11), we have been facing issues with node configuration consistency.
Followers failed to rejoin the cluster once their pod had been replaced: they couldn't find the node configuration because it did not persist across the restart.
As a temporary fix we moved /node-conf into /data, which resolved the issue since /data is a mounted volume.
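The temporary fix described above can also be expressed as a mount, by backing /node-conf with the same persistent volume as /data (a sketch; the volume name and subPath are illustrative, and the exact wiring depends on how the chart/operator defines its volumes):

```yaml
# Sketch: reuse the /data persistent volume for /node-conf via subPath,
# so node configuration survives pod replacement.
volumeMounts:
  - name: redis-data
    mountPath: /data
  - name: redis-data
    mountPath: /node-conf
    subPath: node-conf
```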

Kind Regards,
Guy

[feat]: Fix Makefile

We should fix commands in README.md, like:

make build-redis-image
make build-redis-exporter-image

These should be changed to:

make build-redis
make build-redis-exporter

redis 7

Hi! Love y'all's work.
Any plans for Redis 7 or for pushing the most recent version to quay that supports TLS?

rootless container

When starting this container as a non-root user (e.g. 65532:65532), entrypoint.sh fails to configure redis:

mkdir: can't create directory '/opt/redis': Permission denied
/usr/bin/entrypoint.sh: line 21: /etc/redis/redis.conf: Permission denied
/usr/bin/entrypoint.sh: line 32: /etc/redis/redis.conf: Permission denied
sed: /data/nodes.conf: No such file or directory
/usr/bin/entrypoint.sh: line 52: /etc/redis/redis.conf: Permission denied

It would be nice to be able to run this container in rootless mode, since our Kubernetes cluster enforces securityContext.runAsNonRoot=true. In combination with the helm chart, it would also be nice to be able to set securityContext.readOnlyRootFilesystem=true.
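Until the image supports this out of the box, one stopgap is to mount writable directories over the paths the entrypoint writes to (paths taken from the errors above; the invocation below is a sketch that assumes Docker and the v7.0.5 tag):

```shell
# Pre-create host directories the arbitrary UID can write to, then mount
# them over /etc/redis and /data, which entrypoint.sh needs to modify.
mkdir -p redis-etc redis-data
chmod 0777 redis-etc redis-data   # or chown 65532:65532 if you have privileges
if command -v docker >/dev/null 2>&1; then
  # Copy the stock config out first so the entrypoint has a file to edit.
  docker run --rm quay.io/opstree/redis:v7.0.5 \
    cat /etc/redis/redis.conf > redis-etc/redis.conf
  docker run --user 65532:65532 \
    -v "$PWD/redis-etc:/etc/redis" \
    -v "$PWD/redis-data:/data" \
    quay.io/opstree/redis:v7.0.5
else
  echo "docker not available; commands shown for illustration"
fi
```

Whether the entrypoint handles every path this way is untested; a proper fix in the image would still be preferable.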

Cannot start redis when running make build_redis: /data dir permission denied

➜ redis git:(master) ✗ docker run -it --name redis6.2 -d quay.io/opstree/redis:v6.2.8
78c22d5b2712de83227d0e148e8ce850cae801fa2c91f30fd681270df6b75ff7
➜ redis git:(master) ✗ docker logs redis6.2
Redis is running without password which is not recommended
Setting up redis in standalone mode
Running without TLS mode
Starting redis service in standalone mode.....
9:C 17 Jan 2023 07:12:09.936 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
9:C 17 Jan 2023 07:12:09.936 # Redis version=6.2.8, bits=64, commit=00000000, modified=0, pid=9, just started
9:C 17 Jan 2023 07:12:09.936 # Configuration loaded
9:M 17 Jan 2023 07:12:09.937 * monotonic clock: POSIX clock_gettime
9:M 17 Jan 2023 07:12:09.940 # Can't open the append-only file: Permission denied
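The "Can't open the append-only file" error indicates the redis user inside the container can't write to /data. One hedged workaround is to mount a writable host directory there (the directory name is illustrative; assumes Docker is available):

```shell
# Create a host directory any UID can write to and mount it at /data,
# where the append-only file is created.
mkdir -p redis-data
chmod 0777 redis-data
if command -v docker >/dev/null 2>&1; then
  docker run -d --name redis6.2 \
    -v "$PWD/redis-data:/data" \
    quay.io/opstree/redis:v6.2.8
else
  echo "docker not available; command shown for illustration"
fi
```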

Issues with deploying a RedisCluster on OpenShift

I've seen a few mentions of this issue here, but all issues are closed, and creating a RedisCluster on OpenShift using this operator still doesn't seem to work.

Note that I'm using the Redis Operator 0.13.0, the latest one that is available in the Operator Hub on OpenShift.

I'm using the following Manifest to create the cluster.

kind: RedisCluster
apiVersion: redis.redis.opstreelabs.in/v1beta1
metadata:
  name: redis-cluster
spec:
  clusterSize: 2
  clusterVersion: v7
  securityContext:
    fsGroup: 1000
    runAsUser: 1000
  persistenceEnabled: true
  kubernetesConfig:
    image: 'quay.io/opstree/redis:v7.0.5'
    imagePullPolicy: IfNotPresent
  redisExporter:
    enabled: false
    image: 'quay.io/opstree/redis-exporter:v1.44.0'
    imagePullPolicy: IfNotPresent
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

I can see it creates a RedisCluster named redis-cluster and a StatefulSet named redis-cluster-leader. However, the StatefulSet doesn't contain any pods and shows the following error, which seems to be due to the runAsUser parameter in the securityContext.

create Pod redis-cluster-leader-0 in StatefulSet redis-cluster-leader failed

error: pods "redis-cluster-leader-0" is forbidden: unable to validate against any security context constraint: [
provider "anyuid": Forbidden: not usable by user or serviceaccount,
provider restricted-v2: .spec.securityContext.fsGroup: Invalid value: []int64{1000}: 1000 is not an allowed group, spec.containers[0].securityContext.runAsUser: Invalid value: 1000: must be in the ranges: [1000800000, 1000809999],
provider "restricted": Forbidden: not usable by user or serviceaccount,
provider "nonroot-v2": Forbidden: not usable by user or serviceaccount,
provider "nonroot": Forbidden: not usable by user or serviceaccount,
provider "noobaa": Forbidden: not usable by user or serviceaccount,
provider "noobaa-endpoint": Forbidden: not usable by user or serviceaccount,
provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount,
provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount,
provider "hostnetwork": Forbidden: not usable by user or serviceaccount,
provider "hostaccess": Forbidden: not usable by user or serviceaccount,
provider "ocs-metrics-exporter": Forbidden: not usable by user or serviceaccount,
provider "rook-ceph": Forbidden: not usable by user or serviceaccount,
provider "node-exporter": Forbidden: not usable by user or serviceaccount,
provider "rook-ceph-csi": Forbidden: not usable by user or serviceaccount,
provider "privileged": Forbidden: not usable by user or serviceaccount
]
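The range in that error comes from the namespace's SCC annotations, which can be inspected directly; any runAsUser/fsGroup you set must fall inside it (a sketch assuming the oc CLI; my-namespace is a placeholder):

```shell
# Read the UID range the restricted SCC accepts for a given namespace.
NS=my-namespace   # placeholder: replace with the project running redis
if command -v oc >/dev/null 2>&1; then
  oc get namespace "$NS" \
    -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'
else
  echo "oc not available; command shown for illustration"
fi
```

On OpenShift it is usually simpler to omit runAsUser/fsGroup entirely and let the SCC assign a UID from that range, as the empty securityContext below demonstrates.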

When I set the securityContext to an empty object (like below), I can see that two pods are created, but I get the permission errors at startup.

kind: RedisCluster
apiVersion: redis.redis.opstreelabs.in/v1beta1
metadata:
  name: redis-cluster
spec:
  clusterSize: 2
  clusterVersion: v7
  securityContext: {}
...
Redis is running without password which is not recommended
/usr/bin/entrypoint.sh: line 22: /etc/redis/redis.conf: Permission denied
/usr/bin/entrypoint.sh: line 32: /etc/redis/redis.conf: Permission denied
sed: /data/nodes.conf: No such file or directory
/usr/bin/entrypoint.sh: line 72: /etc/redis/redis.conf: Permission denied
Running without TLS mode
Starting redis service in cluster mode.....
10:C 05 Apr 2023 09:36:00.658 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
10:C 05 Apr 2023 09:36:00.658 # Redis version=7.0.5, bits=64, commit=00000000, modified=0, pid=10, just started
10:C 05 Apr 2023 09:36:00.658 # Configuration loaded
10:M 05 Apr 2023 09:36:00.659 * monotonic clock: POSIX clock_gettime
10:M 05 Apr 2023 09:36:00.659 * Running mode=standalone, port=6379.
10:M 05 Apr 2023 09:36:00.660 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
10:M 05 Apr 2023 09:36:00.660 # Server initialized
10:M 05 Apr 2023 09:36:00.660 * Ready to accept connections

Inside the container, I observe these permissions:

/data $ id
uid=1000800000(1000800000) gid=0(root) groups=1000800000

/data $ ls -l /etc/redis
total 4K     
-rwxr-xr-x    1 redis    redis        114 Oct 30 13:58 redis.conf

/data $ ls -ln /etc/redis
total 4
-rwxr-xr-x    1 1000     1000           114 Oct 30 13:58 redis.conf

So it seems that this change isn't applied: #4

When I run the image locally with Docker, it seems that it wasn't modified as in the PR above.

$ docker run -it --entrypoint=/bin/bash quay.io/opstree/redis:v7.0.5
bash-5.1$ ls -lah /etc/redis/
total 12K
drwxr-xr-x    1 redis    redis       4.0K Oct 30 13:58 .
drwxr-xr-x    1 root     root        4.0K Apr  5 09:51 ..
-rwxr-xr-x    1 redis    redis        114 Oct 30 13:58 redis.conf
bash-5.1$

Operator crashes with segfault on K8s 1.30.1

When creating a simple Redis resource, the operator crashes with a segfault.

Kubernetes: v1.30.1
Vendor: Talos
Architecture: amd64

# redis.yaml
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: Redis
metadata:
  name: test
spec:
  kubernetesConfig:
    image: quay.io/redis/redis:v7.2.3

Logs:

I0622 09:51:13.831039       1 leaderelection.go:250] attempting to acquire leader lease redis-operator/6cab913b.redis.opstreelabs.in...
I0622 09:51:29.430462       1 leaderelection.go:260] successfully acquired lease redis-operator/6cab913b.redis.opstreelabs.in
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"redis","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"Redis","source":"kind source: *v1beta2.Redis"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting Controller","controller":"redis","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"Redis"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"redisreplication","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisReplication","source":"kind source: *v1beta2.RedisReplication"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting Controller","controller":"redisreplication","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisReplication"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"rediscluster","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisCluster","source":"kind source: *v1beta2.RedisCluster"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting Controller","controller":"rediscluster","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisCluster"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"redissentinel","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisSentinel","source":"kind source: *v1beta2.RedisSentinel"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting EventSource","controller":"redissentinel","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisSentinel","source":"kind source: *v1beta2.RedisReplication"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting Controller","controller":"redissentinel","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisSentinel"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting workers","controller":"redisreplication","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisReplication","worker count":1}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting workers","controller":"rediscluster","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisCluster","worker count":1}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting workers","controller":"redissentinel","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"RedisSentinel","worker count":1}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Starting workers","controller":"redis","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"Redis","worker count":1}
{"level":"info","ts":"2024-06-22T09:51:29Z","logger":"controllers.Redis","msg":"Reconciling opstree redis controller","Request.Namespace":"default","Request.Name":"test"}
{"level":"info","ts":"2024-06-22T09:51:29Z","msg":"Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference","controller":"redis","controllerGroup":"redis.redis.opstreelabs.in","controllerKind":"Redis","Redis":{"name":"test","namespace":"default"},"namespace":"default","name":"test","reconcileID":"e8277a10-9345-4793-ad38-6613217ecd24"}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x17a70e8]

goroutine 230 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:116 +0x1e5
panic({0x19cf900?, 0x2cc0ca0?})
	/usr/local/go/src/runtime/panic.go:914 +0x21f
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.getProbeInfo(0x0, 0x0?, 0x0, 0x0)
	/workspace/k8sutils/statefulset.go:617 +0x3e8
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.generateContainerDef({_, _}, {{0xc000524260, 0x1a}, {0x0, 0x0}, 0x0, 0x0, {0x0, 0x0}, ...}, ...)
	/workspace/k8sutils/statefulset.go:369 +0x159
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.generateStatefulSetsDef({{0xc0009101b8, 0x4}, {0x0, 0x0}, {0xc0009101c0, 0x7}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...)
	/workspace/k8sutils/statefulset.go:234 +0x467
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CreateOrUpdateStateFul({_, _}, {{_, _}, _}, {_, _}, {{0xc0009101b8, 0x4}, {0x0, ...}, ...}, ...)
	/workspace/k8sutils/statefulset.go:100 +0x1a5
github.com/OT-CONTAINER-KIT/redis-operator/k8sutils.CreateStandaloneRedis(0xc0004cf680, {0x1f12bd0, 0xc000103380})
	/workspace/k8sutils/redis-standalone.go:59 +0x853
github.com/OT-CONTAINER-KIT/redis-operator/controllers.(*RedisReconciler).Reconcile(0xc0005ab4a0, {0x0?, 0x0?}, {{{0xc0009101c0?, 0x5?}, {0xc0009101b8?, 0xc00080fd08?}}})
	/workspace/controllers/redis_controller.go:67 +0x346
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1efc1d0?, {0x1ef8ed0?, 0xc0006ff8c0?}, {{{0xc0009101c0?, 0xb?}, {0xc0009101b8?, 0x0?}}})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000129b80, {0x1ef8f08, 0xc0002139a0}, {0x1a86860?, 0xc000622800?})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316 +0x3cc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000129b80, {0x1ef8f08, 0xc0002139a0})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x1af
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 96
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x565

redis 6.2 adds a new feature: disable-thp

#################### KERNEL transparent hugepage CONTROL ######################
# Usually the kernel Transparent Huge Pages control is set to "madvise" or
# "never" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which
# case this config has no effect. On systems in which it is set to "always",
# redis will attempt to disable it specifically for the redis process in order
# to avoid latency problems specifically with fork(2) and CoW.
# If for some reason you prefer to keep it enabled, you can set this config to
# "no" and the kernel global to "always".
disable-thp yes

It's very useful.
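To see which case applies on a given host, you can read the sysfs file the comment refers to; the bracketed word is the active mode, and disable-thp only matters when it is "always" (guarded, since the file may not exist on every kernel):

```shell
# Print the kernel's current THP mode, e.g. "always madvise [never]".
THP_FILE=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$THP_FILE" ]; then
  cat "$THP_FILE"
else
  echo "THP interface not present on this kernel"
fi
```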
