mongodb / mongodb-kubernetes-operator
MongoDB Community Kubernetes Operator
License: Other
Hi,
I'm using the operator on OpenShift 4.3 and I'm seeing a readiness probe failure on, I think, the mongodb-agent pod.
I've followed the instructions in the readme to deploy. Due to the probe failure the pod isn't added to the created service.
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.0+b4261e0", GitCommit:"b4261e07ed", GitTreeState:"clean", BuildDate:"2019-07-06T03:16:01Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.1", GitCommit:"b9b84e0", GitTreeState:"clean", BuildDate:"2020-04-26T20:16:35Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
$ cat tournamentdb.yml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: tournamentdb
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"
$ oc get pods
NAME READY STATUS RESTARTS AGE
mongodb-kubernetes-operator-687d7bfcb8-c9z6l 1/1 Running 0 10m
tournamentdb-0 1/2 Running 0 4m36s
0s Warning Unhealthy pod/tournamentdb-0 Readiness probe failed: panic: couldn't open sink "/var/log/mongodb-mms-automation/readiness.log": open /var/log/mongodb-mms-automation/readiness.log: permission denied
goroutine 1 [running]:
main.main()
/Users/cianhatton/go/src/github.com/10gen/ops-manager-kubernetes/probe/readiness.go:239 +0x277
$ oc describe pod tournamentdb-0
Name: tournamentdb-0
Namespace: service-binding-demo
Priority: 0
PriorityClassName:
Node: crc-dv9sm-master-0/192.168.126.11
Start Time: Sun, 05 Jul 2020 13:23:59 +0100
Labels: app=tournamentdb-svc
controller-revision-hash=tournamentdb-55cf6df467
statefulset.kubernetes.io/pod-name=tournamentdb-0
Annotations: k8s.v1.cni.cncf.io/networks-status:
[{
"name": "openshift-sdn",
"interface": "eth0",
"ips": [
"10.128.0.183"
],
"dns": {},
"default-route": [
"10.128.0.1"
]
}]
openshift.io/scc: restricted
Status: Running
IP: 10.128.0.183
Controlled By: StatefulSet/tournamentdb
Init Containers:
mongod-prehook:
Container ID: cri-o://ccb200a7439fd36b7b34600f6a7e259721b102143cc24cba8125b24a954e2775
Image: quay.io/mongodb/mongodb-kubernetes-operator-pre-stop-hook:1.0.1
Image ID: quay.io/mongodb/mongodb-kubernetes-operator-pre-stop-hook@sha256:4b7a96acb9e8a936412a87fbdcdb3ccac7b785069bd096c09f7d175648a27ca1
Port:
Host Port:
Command:
cp
pre-stop-hook
/hooks/pre-stop-hook
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 05 Jul 2020 13:24:04 +0100
Finished: Sun, 05 Jul 2020 13:24:04 +0100
Ready: True
Restart Count: 0
Environment:
Mounts:
/hooks from hooks (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-jfgzx (ro)
Containers:
mongodb-agent:
Container ID: cri-o://bf54b34dd87f4e67fa572c414e29f77b8af54fecca87251fb010354fe9e38522
Image: quay.io/mongodb/mongodb-agent:10.15.1.6468-1
Image ID: quay.io/mongodb/mongodb-agent@sha256:b24548bf5104acd7fed88791542d9c8eaf80ee47714f1cb248670c3735995928
Port:
Host Port:
Command:
agent/mongodb-agent
-cluster=/var/lib/automation/config/automation-config
-skipMongoStart
-noDaemonize
-healthCheckFilePath=/var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
-serveStatusPort=5000
State: Running
Started: Sun, 05 Jul 2020 13:24:07 +0100
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 500M
Requests:
cpu: 500m
memory: 400M
Readiness: exec [/var/lib/mongodb-mms-automation/probes/readinessprobe] delay=5s timeout=1s period=10s #success=1 #failure=240
Environment:
AGENT_STATUS_FILEPATH: /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
Mounts:
/data from data-volume (rw)
/var/lib/automation/config from automation-config (ro)
/var/log/mongodb-mms-automation/healthstatus from healthstatus (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-jfgzx (ro)
mongod:
Container ID: cri-o://b94b3d41b5172bdabd02d3ad7a36332b92e05e03814334056030b8566315b553
Image: mongo:4.2.6
Image ID: docker.io/library/mongo@sha256:8c48baa1571469d7f5ae6d603b92b8027ada5eb39826c009cb33a13b46864908
Port:
Host Port:
Command:
/bin/sh
-c
while [ ! -f /data/automation-mongod.conf ]; do sleep 3 ; done ; sleep 2 ;
# start mongod with this configuration
mongod -f /data/automation-mongod.conf ;
# start the pre-stop-hook to restart the Pod when needed
# If the Pod does not require to be restarted, the pre-stop-hook will
# exit(0) for Kubernetes to restart the container.
/hooks/pre-stop-hook ;
State: Running
Started: Sun, 05 Jul 2020 13:24:07 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 500M
Requests:
cpu: 500m
memory: 400M
Environment:
AGENT_STATUS_FILEPATH: /healthstatus/agent-health-status.json
PRE_STOP_HOOK_LOG_PATH: /hooks/pre-stop-hook.log
Mounts:
/data from data-volume (rw)
/healthstatus from healthstatus (rw)
/hooks from hooks (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-jfgzx (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-volume-tournamentdb-0
ReadOnly: false
healthstatus:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
hooks:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
automation-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: tournamentdb-config
Optional: false
mongodb-kubernetes-operator-token-jfgzx:
Type: Secret (a volume populated by a Secret)
SecretName: mongodb-kubernetes-operator-token-jfgzx
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Normal Scheduled 58s default-scheduler Successfully assigned service-binding-demo/tournamentdb-0 to crc-dv9sm-master-0
Normal Pulling 56s kubelet, crc-dv9sm-master-0 Pulling image "quay.io/mongodb/mongodb-kubernetes-operator-pre-stop-hook:1.0.1"
Normal Pulled 53s kubelet, crc-dv9sm-master-0 Successfully pulled image "quay.io/mongodb/mongodb-kubernetes-operator-pre-stop-hook:1.0.1"
Normal Created 53s kubelet, crc-dv9sm-master-0 Created container mongod-prehook
Normal Started 53s kubelet, crc-dv9sm-master-0 Started container mongod-prehook
Normal Pulling 53s kubelet, crc-dv9sm-master-0 Pulling image "quay.io/mongodb/mongodb-agent:10.15.1.6468-1"
Normal Pulled 50s kubelet, crc-dv9sm-master-0 Successfully pulled image "quay.io/mongodb/mongodb-agent:10.15.1.6468-1"
Normal Created 50s kubelet, crc-dv9sm-master-0 Created container mongodb-agent
Normal Started 50s kubelet, crc-dv9sm-master-0 Started container mongodb-agent
Normal Pulled 50s kubelet, crc-dv9sm-master-0 Container image "mongo:4.2.6" already present on machine
Normal Created 50s kubelet, crc-dv9sm-master-0 Created container mongod
Normal Started 50s kubelet, crc-dv9sm-master-0 Started container mongod
Warning Unhealthy 1s (x5 over 41s) kubelet, crc-dv9sm-master-0 Readiness probe failed: panic: couldn't open sink "/var/log/mongodb-mms-automation/readiness.log": open /var/log/mongodb-mms-automation/readiness.log: permission denied
goroutine 1 [running]:
main.main()
/Users/cianhatton/go/src/github.com/10gen/ops-manager-kubernetes/probe/readiness.go:239 +0x277
After creating the cluster resource, a headless service is created and added to /var/lib/automation/config/automation-config.
But it uses cluster.local as the suffix, not my cluster's DNS suffix (kubet-cluster.internal).
E.g. test-mongodb-0.test-mongodb-svc.test-mongo.svc.cluster.local
is created instead of test-mongodb-0.test-mongodb-svc.test-mongo.svc.kubet-cluster.internal.
Is there anywhere we can change that?
When deploying the operator on OpenShift, the mongodb Pods from the StatefulSet fail to start:
$ oc logs mongodb-0 -c mongodb-agent
panic: Failed to get current user: user: unknown userid 1001130000
goroutine 1 [running]:
com.tengen/cm/util.init.3()
/data/mci/bd62cd177fa9529ccacb53e13b1bf2a3/mms-automation/build/go-dependencies/src/com.tengen/cm/util/user.go:14 +0xe5
Please add support for Kubernetes 1.19.
On applying the CustomResourceDefinition I get an error:
kubectl apply -f deploy/crds/mongodb.com_mongodb_crd.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/mongodb.mongodb.com created
With the readme I was able to deploy the operator fine.
However, trying to deploy a replica set fails.
I can see a service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/example-mongodb-svc ClusterIP None 27017/TCP 37s
the statefulset
NAME READY AGE
statefulset.apps/example-mongodb 0/3 37s
and of course the mongodb CR
NAME PHASE VERSION
example-mongodb
but nothing else happens.
The operator log shows it checking every 10 seconds for the replica set to come up, but it never does.
Is there anything else I can look at to figure out what is wrong?
Also, more information on what you can put in the MongoDB CR would really be helpful ;-)
Hi, I want to expose MongoDB publicly, but I'm not sure where to set the MongoDB service to type LoadBalancer.
Can anyone provide an example of this? Thanks.
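The operator itself only creates a headless service for the replica set, so one option is to add your own Service of type LoadBalancer in front of the pods. A minimal sketch, assuming a MongoDB resource named example-mongodb and that the operator labels the pods app: example-mongodb-svc (the pod describe output earlier in this tracker shows that labelling pattern); verify the label with kubectl get pods --show-labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-mongodb-external
spec:
  type: LoadBalancer
  selector:
    app: example-mongodb-svc   # assumed pod label; check with --show-labels
  ports:
    - port: 27017
      targetPort: 27017
```

Note that replica-set-aware drivers will still try to reconnect to the members' internal hostnames after discovery, so a single external endpoint may in practice require one service per pod.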
Is there currently any way to specify which mongo image will be pulled?
It would be great if we could specify images in ECR for example.
Hi,
After removing the MongoDB resource, everything gets removed except for the "*-svc" service.
Is that on purpose or a bug?
I am trying to deploy mongodb.com_v1_mongodb_scram_cr.yaml, which comes with the repo by default, on an AKS cluster. I have only changed the replica set member count from 3 to 1. Do I need to create a secret named "my-user-password"?
I even tried creating a secret with that name, but still no luck.
kubectl create secret generic my-user-password --from-literal=password=pass@word1
Error message from the pod describe (pod has unbound immediate PersistentVolumeClaims; readiness probe failed):
Warning FailedScheduling 31m (x2 over 31m) default-scheduler pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 31m default-scheduler Successfully assigned default/example-scram-mongodb-0 to aks-nodepool1-42666287-vmss000000
Normal SuccessfulAttachVolume 30m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-cfae6a47-5
Error from the console:
[2020-07-22T12:58:24.214-00] [.error] [cm/director/director.go:updateCurrentState:719] [12:58:24.214] Error determining if process=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false) is up : [12:58:24.214] Error executing WithClientFor() for cp=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false) connectMode=SingleConnect : [12:58:24.214] Error checking out client (0x0) for connParam=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false) connectMode=SingleConnect : [12:58:24.214] Error dialing to connParams=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false): tried 3 identities, but none of them worked. They were (mms-automation@admin[[SCRAM-SHA-256]][20], __system@local[[MONGODB-CR/SCRAM-SHA-1 SCRAM-SHA-256]][668], )
[2020-07-22T12:58:24.214-00] [.error] [cm/director/director.go:planAndExecute:525] [12:58:24.214] Failed to compute states : [12:58:24.214] Error determining if process=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false) is up : [12:58:24.214] Error executing WithClientFor() for cp=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false) connectMode=SingleConnect : [12:58:24.214] Error checking out client (0x0) for connParam=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false) connectMode=SingleConnect : [12:58:24.214] Error dialing to connParams=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false): tried 3 identities, but none of them worked. They were (mms-automation@admin[[SCRAM-SHA-256]][20], __system@local[[MONGODB-CR/SCRAM-SHA-1 SCRAM-SHA-256]][668], )
[2020-07-22T12:58:24.214-00] [.error] [cm/director/director.go:mainLoop:398] [12:58:24.214] Failed to planAndExecute : [12:58:24.214] Failed to compute states : [12:58:24.214] Error determining if process=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false) is up : [12:58:24.214] Error executing WithClientFor() for cp=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false) connectMode=SingleConnect : [12:58:24.214] Error checking out client (0x0) for connParam=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false) connectMode=SingleConnect : [12:58:24.214] Error dialing to connParams=example-scram-mongodb-0.example-scram-mongodb-svc.default.svc.cluster.local:27017 (local=false): tried 3 identities, but none of them worked. They were (mms-automation@admin[[SCRAM-SHA-256]][20], __system@local[[MONGODB-CR/SCRAM-SHA-1 SCRAM-SHA-256]][668], )
[12:58:33.230] Still in error state. iteration 245
YAML used:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: example-scram-mongodb
spec:
  members: 1
  type: ReplicaSet
  version: "4.2.6"
  security:
    authentication:
      enabled: true
      modes: ["SCRAM"]
  users:
    - name: my-user
      db: admin
      passwordSecretRef:
        name: my-user-password
      roles:
        - name: clusterAdmin
          db: admin
        - name: userAdminAnyDatabase
          db: admin
I have been trying to get this working on a local minikube cluster. While doing so, I realized that this operator only works in the namespace it has been deployed to.
So my question is: does this operator work cluster-wide?
Thank you.
I have followed the instruction from the README to install the Operator.
Then I applied the following configuration to my cluster:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: event-store
spec:
  members: 2
  type: ReplicaSet
  version: '4.2.7'
  featureCompatibilityVersion: '4.0'
So far everything works fine, and running k get po -n mongodb returns:
NAME READY STATUS RESTARTS AGE
event-store-0 2/2 Running 0 49m
event-store-1 2/2 Running 0 44m
mongodb-kubernetes-operator-85cb7f7b87-5sbwk 1/1 Running 0 56m
The service is also up and running:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
event-store-svc ClusterIP None <none> 27017/TCP 51m
Now I deployed my NodeJs application, which tries to connect to MongoDb like this:
const client = new MongoClient('mongodb://event-store-svc.mongodb.svc.cluster.local:27017', {
useNewUrlParser: true,
useUnifiedTopology: true,
});
However, I cannot connect to the database. I get this error:
MongoServerSelectionError: connect ECONNREFUSED 10.1.87.44:27017
at Timeout._onTimeout (node_modules/mongodb/lib/core/sdam/topology.js:430:30)
at listOnTimeout (internal/timers.js:531:17)
at processTimers (internal/timers.js:475:7)
Did I forget something? What am I doing wrong?
Thanks for your great work!
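Connecting to the headless service name resolves to one pod IP; if that pod refuses the connection, or the driver needs replica-set discovery, the attempt fails. A common alternative is to list the per-pod hostnames explicitly with the replicaSet option. This helper is purely illustrative (not part of the operator); the naming scheme <name>-<i>.<name>-svc.<namespace> is inferred from the pod and service names above:

```javascript
// Hypothetical helper: builds a replica-set connection string from the
// per-pod DNS names that the headless service publishes.
// Assumes the operator's naming scheme <name>-<i>.<name>-svc.<namespace>.
function replicaSetUri(name, namespace, members, port = 27017) {
  const hosts = Array.from({ length: members }, (_, i) =>
    `${name}-${i}.${name}-svc.${namespace}.svc.cluster.local:${port}`);
  return `mongodb://${hosts.join(',')}/?replicaSet=${name}`;
}

// For the event-store deployment above (2 members in namespace "mongodb"):
console.log(replicaSetUri('event-store', 'mongodb', 2));
```

The resulting string can be passed straight to new MongoClient(...) in place of the single-service URL.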
In release v0.2.0, spec.statefulSet.spec.volumeClaimTemplates.metadata.name must be data-volume, otherwise two claims are created.
git clone https://github.com/mongodb/mongodb-kubernetes-operator.git #clone repo
git checkout v0.2.0 #use v.0.2.0
kubectl create -f deploy/crds/mongodb.com_mongodb_crd.yaml #deploy crd
kubectl create -f deploy/ #deploy operator
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: mongo-test
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"
  persistent: true
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: my-user
      db: admin
      passwordSecretRef:
        name: my-user-password
      roles:
        - name: clusterAdmin
          db: admin
        - name: userAdminAnyDatabase
          db: admin
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume-2
          spec:
            accessModes: [ "ReadWriteOnce", "ReadWriteMany" ]
            resources:
              requests:
                storage: 20Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: my-user-password
type: Opaque
stringData:
  password: 58LObjiMpxcjP1sMDW
kubectl apply -f test.yaml #deploy test
The PersistentVolumeClaims used should be named data-volume-2-mongo-test-x.
Instead, two PVCs are created for every pod.
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-2-mongo-test-0 Bound pvc-d83db90b-5d2d-4b7a-9c63-849d0b2ad0cb 20Gi RWO,RWX standard 66s
data-volume-2-mongo-test-1 Bound pvc-d63c3c8e-95fa-4b1f-9eeb-7de9a7936c08 20Gi RWO,RWX standard 38s
data-volume-2-mongo-test-2 Bound pvc-ca003ee3-3ce6-450c-a752-3c0046d3b5f3 20Gi RWO,RWX standard 6s
data-volume-mongo-test-0 Bound pvc-d23d1a73-27a4-4e47-bb64-cccdd1adf311 10G RWO standard 66s
data-volume-mongo-test-1 Bound pvc-291fd348-262c-435f-beb3-047c882d40f8 10G RWO standard 38s
data-volume-mongo-test-2 Bound pvc-0277ef48-faa3-478b-955f-676a6d7838bb 10G RWO standard 6s
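A workaround consistent with the report above: name the template data-volume so the override applies to the claim the operator creates itself, instead of adding a second one. A sketch of the corrected override (same spec as above, only the name changed):

```yaml
statefulSet:
  spec:
    volumeClaimTemplates:
      - metadata:
          name: data-volume   # matches the operator's default claim name
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 20Gi
```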
Hi,
I am trying to deploy MongoDB on OpenShift without the Enterprise Operator. I followed the README files, but they are a bit confusing for me, so I would like to explain what I did step by step and then which error I got.
Here are the steps I took:
After this, when I ran "kubectl get mongodb --namespace ", I could not see any status in the phase column, and there was an error in the operator: error creating automation config config map: Secret "example-openshift-mongodb-my-user-scram-credentials" not found
It looked like there was no secret with that name, so I created a secret named example-openshift-mongodb-my-user-scram-credentials, and the operator then gave this error: error creating automation config config map: credentials secret did not have all of the required keys. So I think I am missing some secret definitions or have incorrect secrets. Do you have any idea what might be the problem? Any help is appreciated, thank you!
Adding a kustomization file would enable the deployment to be consumed as "kustomization module".
A very lightweight one is by no means a stance for or against any specific deployment mode; it just enables this kind of consumption.
This consumption in a kustomization.yaml is what I'm aiming for:
resources:
- github.com/mongodb/mongodb-kubernetes-operator/deploy?ref=master
Given this appears to be implemented using a headless service with no cluster IP and no kube-proxy support as a result, how does one using docker desktop on Windows connect via an IDE tool like Robo 3T etc? I have tried just about everything I can think of with no success. Any guidance would be much appreciated!
I have a cluster with 2 physical machines and no cloud provider. I am trying to deploy the StatefulSet with persistent storage. Because only local storage is available, I have manually created some PersistentVolumes, and I am trying to bind to them with volume claim templates. Here is my extra code:
volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
      storageClassName: "localstorage"
Are there specific instructions for local storage and manually created persistent volumes?
For better understanding, here are my storage YAML and mongo deployment YAML:
mongo.zip
As per the title I cannot connect to the replicaset using the supplied password.
Am I using the correct user/password combination? Or am I missing something?
ubuntu@test-1-1:~/mongodb-kubernetes-operator$ k exec -it mongodb-0 -c mongod -- mongo --host mongodb://admin:58LObjiMpxcjP1sMDW@mongodb-0:27017/?replicaSet=mongodb
MongoDB shell version v4.2.6
connecting to: mongodb://mongodb-0:27017/?compressors=disabled&gssapiServiceName=mongodb&replicaSet=mongodb
2020-09-16T19:00:14.430+0000 I NETWORK [js] Starting new replica set monitor for mongodb/mongodb-0:27017
2020-09-16T19:00:14.430+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to mongodb-0:27017
2020-09-16T19:00:14.431+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for mongodb is mongodb/mongodb-0.mongodb-svc.default.svc.cluster.local:27017,mongodb-1.mongodb-svc.default.svc.cluster.local:27017,mongodb-2.mongodb-svc.default.svc.cluster.local:27017
2020-09-16T19:00:14.432+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to mongodb-0.mongodb-svc.default.svc.cluster.local:27017
2020-09-16T19:00:14.432+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to mongodb-2.mongodb-svc.default.svc.cluster.local:27017
2020-09-16T19:00:14.432+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to mongodb-1.mongodb-svc.default.svc.cluster.local:27017
2020-09-16T19:00:14.433+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for mongodb is mongodb/mongodb-0.mongodb-svc.default.svc.cluster.local:27017,mongodb-1.mongodb-svc.default.svc.cluster.local:27017,mongodb-2.mongodb-svc.default.svc.cluster.local:27017
2020-09-16T19:00:14.436+0000 I NETWORK [js] Marking host mongodb-0.mongodb-svc.default.svc.cluster.local:27017 as failed :: caused by :: Location40659: can't connect to new replica set master [mongodb-0.mongodb-svc.default.svc.cluster.local:27017], err: AuthenticationFailed: Authentication failed.
2020-09-16T19:00:14.437+0000 E QUERY [js] Error: can't connect to new replica set master [mongodb-0.mongodb-svc.default.svc.cluster.local:27017], err: AuthenticationFailed: Authentication failed. :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-09-16T19:00:14.438+0000 F - [main] exception: connect failed
2020-09-16T19:00:14.438+0000 E - [main] exiting with code 1
command terminated with exit code 1
Deployed using;
ubuntu@test-1-1:~/mongodb-kubernetes-operator$ cat ./deploy/crds/mongo-scram-cr.yml
---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"
  persistent: true
  podSpec:
    persistence:
      single:
        labelSelector: "mongodb"
        storage: "16Gi"
        storageClass: "local-path"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: my-user
      db: admin
      passwordSecretRef: # a reference to the secret that will be used to generate the user's password
        name: my-user-password
      roles:
        - name: clusterAdmin
          db: admin
        - name: userAdminAnyDatabase
          db: admin
# the user credentials will be generated from this secret
# once the credentials are generated, this secret is no longer required
---
apiVersion: v1
kind: Secret
metadata:
  name: my-user-password
type: Opaque
stringData:
  password: 58LObjiMpxcjP1sMDW
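One likely cause is the username: the CR above creates a user named my-user in the admin database, not admin, so the admin:<password> combination has no matching user. A hedged sketch of the corrected attempt against the live cluster (same password secret value):

```
kubectl exec -it mongodb-0 -c mongod -- \
  mongo "mongodb://my-user:58LObjiMpxcjP1sMDW@mongodb-0:27017/admin?replicaSet=mongodb"
```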
I looked at the YAML reference at https://docs.mongodb.com/kubernetes-operator/master/tutorial/deploy-replica-set/ and have been trying to tweak the following YAML:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: rocketchat-mdb
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"
  persistent: true
  podSpec:
    memory: 512M
    persistence:
      multiple:
        data:
          storage: 1Gi
        journal:
          storage: 500M
        logs:
          storage: 500M
However, each member of the replica set still gets a 10Gi PVC. How can I change this correctly?
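One possible explanation: the linked YAML reference documents the Enterprise operator's CRD, and the community operator may simply ignore podSpec.persistence. A hedged alternative, assuming the community operator honours a spec.statefulSet override and names its default claim data-volume (as other reports in this tracker indicate):

```yaml
spec:
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume   # the operator's default claim name
          spec:
            resources:
              requests:
                storage: 1Gi
```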
Hello,
1 - We currently have taints on a node pool in our AKS cluster, and we want the MongoDB community database deployed to that specific node pool. How do we specify the tolerations?
2 - Also, do you plan to support standalone deployments of MongoDB?
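For (1), a hedged sketch, assuming the community operator merges a spec.statefulSet override into the generated StatefulSet (as the volumeClaimTemplates overrides elsewhere in this tracker suggest); the node-pool label and taint key/value here are placeholders for your cluster's actual ones:

```yaml
spec:
  statefulSet:
    spec:
      template:
        spec:
          nodeSelector:
            agentpool: mongopool          # placeholder node-pool label
          tolerations:
            - key: "workload"             # placeholder taint key
              operator: "Equal"
              value: "mongodb"            # placeholder taint value
              effect: "NoSchedule"
```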
Hi,
and thanks for working on this.
I find it confusing to have the repo under the Apache license and the agent under a proprietary license, especially since you can't run this repo without the agent.
I'd kindly recommend changing the approach to make it clearer for everybody and avoid confusion. (When people are confused or something doesn't match their expectations, they sometimes become angry, which is frustrating for both parties. I'd like to avoid this; that's why I offer my little piece of advice.)
Either relicense the agent as Apache as well, in which case everything is clearly open source. This would be my preferred option :)
Or relicense this repo under the same license as the agent, making it clear that you can't use this repo without proprietary software (which is confusing right now, because it is Apache).
Hope you understand my wish :)
Currently, I'd qualify this repo as faux open source: it looks like open source but isn't, and that creates frustration for people who arrive at this conclusion after some digging.
IMHO, you'd gain by labeling it clearly as either proprietary or open source; either way, but clearly.
Thanks for spending time reading me and best wishes :)
If you set resource limits as described in #176 and then change the values, the operator doesn't notice any changes.
The expected behaviour would be for it to apply the new values.
I haven't tested in detail whether that is true for everything.
Top-level spec fields like version changes work; below spec.containers, changes apparently no longer get noticed.
I just tried release v0.2.0, which includes a users: block in the file deploy/crds/mongodb.com_v1_mongodb_cr.yaml, and it does not work. I got this warning: WARN mongodb/replica_set_controller.go:174 error creating automation config config map: Secret "digital-factory-mongodb-mdfdb-admin-scram-credentials" not found
No pod is created, I suppose because of this warning. When I remove the users: block from deploy/crds/mongodb.com_v1_mongodb_cr.yaml, it works and the pods get created.
Environment: GKE
Scenario: deploy/crds/mongodb.com_v1_mongodb_cr.yaml, with the resource name changed from example-mongodb to digital-factory-mongodb and the user name changed from my-user to mdfdb-admin.
We have installed a MongoDB replica set cluster on a k8s cluster. We are using the mysql client to connect to the BI Connector. The schema is not getting created when we use the mongosqld command "mongosqld --config /mongosqld.conf". mongosqld.conf is as below:
security:
  enabled: true
  defaultSource: "admin"
mongodb:
  net:
    uri: "pod-ip:27017"
    auth:
      username: "admin"
      password: "password"
      mechanism: "SCRAM-SHA-1"
net:
  bindIp: 127.0.0.1
  port: 3307
processManagement:
  service:
    name: mongosqld
    displayName: mongosqld
    description: "BI Connector SQL proxy server"
The mysql command used to connect: "mysql --host 127.0.0.1 --port 3307 -u admin?source=admin -p --ssl-mode required --enable-cleartext-plugin". The error we are getting is "Unable to connect to foreign data source: MongoDB".
Kindly suggest if we are missing out on some configurations.
Thanks
I was wondering what the differences are between this operator and the enterprise one? I couldn't find any documentation on the topic: no pricing comparisons, no feature comparisons, no license comparisons... Is this community operator production ready? Clearly the enterprise one should be...
Hi there,
I am trying to deploy a mongo instance in kind with the scram_cr (with members set to 1):
---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: example-scram-mongodb
spec:
  members: 1
  type: ReplicaSet
  version: "4.2.6"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: my-user
      db: admin
      passwordSecretRef: # a reference to the secret that will be used to generate the user's password
        name: my-user-password
      roles:
        - name: clusterAdmin
          db: admin
        - name: userAdminAnyDatabase
          db: admin
# the user credentials will be generated from this secret
# once the credentials are generated, this secret is no longer required
---
apiVersion: v1
kind: Secret
metadata:
  name: my-user-password
type: Opaque
stringData:
  password: 58LObjiMpxcjP1sMDW
I can then connect to mongodb with Robo3T or Datagrip. Authentication succeeds and I can view the admin database.
However, when I try to create a new database/collection, I get an unauthorized message:
Failed to create database 'test'.
Error:
ListCollections failed: { operationTime: Timestamp(1598951458, 1), ok: 0.0, errmsg: "command listCollections requires authentication", code: 13, codeName: "Unauthorized", $clusterTime: { clusterTime: Timestamp(1598951458, 1), signature: { hash: BinData(0, 811ABEADC3EBD1CFC3EE8256396A6D7A6EB06BBF), keyId: 6867154498687598596 } } }
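This is expected MongoDB behaviour rather than an operator bug: clusterAdmin and userAdminAnyDatabase grant administrative rights but no read/write access to data, so listCollections on a new database is unauthorized. A sketch of the users block with a built-in data role added (readWriteAnyDatabase):

```yaml
users:
  - name: my-user
    db: admin
    passwordSecretRef:
      name: my-user-password
    roles:
      - name: clusterAdmin
        db: admin
      - name: userAdminAnyDatabase
        db: admin
      - name: readWriteAnyDatabase   # added: read/write on all non-system databases
        db: admin
```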
Following the installation procedure from the readme shows an error and does not create the example-mongodb pod. Looking into the logs for the crd/mongodb.mongodb.com shows:
error: no kind "CustomResourceDefinition" is registered for version "apiextensions.k8s.io/v1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
After doing a git bisect, it seems this has been broken since 82f4dc9 (2020-08-25, "CLOUDP-66799: Configure SCRAM users").
Hey,
I wanted to put my MongoDB instances into the same namespace as my micro-services. How could I achieve that?
I saw we can change the WATCH_NAMESPACE variable in the operator YAML file, but can we pass a wildcard or a list of namespaces?
Thanks!
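For reference, the stock operator deployment sets WATCH_NAMESPACE from the downward API, which is why it defaults to watching its own namespace. A sketch of the env block; whether a wildcard or a comma-separated list is honoured depends on the operator version, so treat those variants as assumptions to verify:

```yaml
env:
  - name: WATCH_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace   # default: the operator's own namespace
# or, to watch a specific namespace explicitly:
# - name: WATCH_NAMESPACE
#   value: "my-services-namespace"
```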
Hi,
I tried to deploy the pods following the steps in the readme file and encountered a "pod has unbound immediate PersistentVolumeClaims" error.
In the docker logs, it states "Error reading cluster config from /var/lib/automation/config/automation-config : [10:46:02.528] Cluster config did not pass validation for pre-expansion semantics : mongoDbTools field is required if there are any managed MongoDB processes"
I'm new to Kubernetes; what should I do here?
Another question: to connect to the database, is it correct to use 'mongodb://example-mongodb-svc:27017'?
Thanks for the help!
Regards,
James Zheng
Hello,
I got a weird issue where my MongoDB can't start. The pods were running before (and crashed?), and on restart the mongodb-agent container is marked as failing to start.
If I delete everything (claimed volumes and the MongoDB resource), everything starts and runs correctly, but after some time the pods stop and can't restart.
Note: I had 3 replicas and scaled down to 1 for budget reasons; could that be the cause?
The resource:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
name: order-db
namespace: mongodb
spec:
members: 1 # was 3
type: ReplicaSet
version: 4.2.8
The mongodb-agent logs are (cut):
[2020-08-11T08:49:01.281-00] [.info] [main/components/agent.go:LoadClusterConfig:219] [08:49:01.281] clusterConfig unchanged
[2020-08-11T08:49:02.235-00] [.info] [cm/mongoctl/processctl.go:Update:3298] <order-db-0> [08:49:02.235] <DB_WRITE> Updated with query map[] and update [{$set [{agentFeatures [StateCache]} {nextVersion 3}]}] and upsert=true on local.clustermanager
[2020-08-11T08:49:02.271-00] [.info] [cm/director/director.go:computePlan:279] <order-db-0> [08:49:02.271] ... process has a plan : WaitPrimary,RsReconfig
<order-db-0> [08:49:02.270] ... process has a plan : WaitPrimary,RsReconfig
<order-db-0> [08:49:02.272] Running step 'WaitPrimary' as part of move 'WaitPrimary'
[2020-08-11T08:49:02.272-00] [.info] [cm/director/director.go:executePlan:867] <order-db-0> [08:49:02.272] Running step 'WaitPrimary' as part of move 'WaitPrimary'
[2020-08-11T08:49:02.272-00] [.info] [cm/director/director.go:tracef:772] <order-db-0> [08:49:02.272] Precondition of 'WaitPrimary' applies because
[All the following are true:
['currentState.NeedToStepDownCurrentPrimary' = false]
['currentState.Up' = true]
]
[2020-08-11T08:49:02.274-00] [.info] [cm/director/director.go:planAndExecute:563] <order-db-0> [08:49:02.274] Step=WaitPrimary as part of Move=WaitPrimary in plan failed : <order-db-0> [08:49:02.274] Postcondition not yet met for step WaitPrimary because ['currentState.IsPrimary' = false].
The mongod container logs are (cut):
2020-08-11T08:55:11.164+0000 I CONNPOOL [Replication] Connecting to order-db-1.order-db-svc.mongodb.svc.cluster.local:27017
2020-08-11T08:55:12.225+0000 I CONNPOOL [Replication] Connecting to order-db-1.order-db-svc.mongodb.svc.cluster.local:27017
2020-08-11T08:55:12.239+0000 I REPL_HB [replexec-0] Heartbeat to order-db-1.order-db-svc.mongodb.svc.cluster.local:27017 failed after 2 retries, response status: HostUnreachable: Error connecting to order-db-1.order-db-svc.mongodb.svc.cluster.local:27017 :: caused by :: Could not find address for order-db-1.order-db-svc.mongodb.svc.cluster.local:27017: SocketException: Host not found (authoritative)
Do you have any idea?
Thanks,
Hi,
How do I alter the storage class so that it does not use the default one but another one instead? I'm basically trying to get the persistent volumes created with AWS EBS.
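One possible approach, if the operator's `statefulSet.spec` override merges `volumeClaimTemplates` into the generated StatefulSet (a sketch; the template name `data-volume` and the `gp2` StorageClass backed by AWS EBS are assumptions, not operator defaults):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume        # must match the volume name the operator mounts (assumed)
          spec:
            storageClassName: gp2    # non-default StorageClass backed by AWS EBS
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi
```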
It looks like version 0.1.1 for mongodb-kubernetes-operator was deleted from quay.io, so the latest release from this repo fails when trying to pull the image. I'm not sure if this is intentional.
https://quay.io/repository/mongodb/mongodb-kubernetes-operator?tab=history
root@secondary-1:~# docker pull quay.io/mongodb/mongodb-kubernetes-operator:0.1.1
Error response from daemon: unknown: Tag 0.1.1 was deleted or has expired. To pull, revive via time machine
Hey,
Is it possible to access all the nodes in a ReplicaSet with just a single Service name? This would be very useful.
Suppose my ReplicaSet pods are:
mongo-0 mongo-1 mongo-2
So to access them I will have the host
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017
Instead, if we could just do
mongodb://mongo:27017
that would be cool, as we wouldn't have to change the client connection string whenever we scale up or down.
PS: Here mongo is the service.
I'm installing a replica set using the example from the readme, and I get the following error:
kubectl logs mongodb/example-scram-mongod
error: no kind "MongoDB" is registered for version "mongodb.com/v1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
I have the CRDs:
NAME CREATED AT
mongodb.mongodb.com 2020-09-29T09:40:19Z
Using minikube and Google Cloud. Same error on both...
It would be nice if you could add some examples of how to use host mounts with this operator.
I want to build a script for mongodump, which my backup solution triggers, and I want to back up the whole MongoDB data dir.
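A sketch of one possible approach, using a Kubernetes CronJob instead of a host mount (the service name, connection URI, image tag, and PVC name are all assumptions for illustration):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mongodb-backup
spec:
  schedule: "0 2 * * *"              # run nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: mongodump
              image: mongo:4.2.6
              command:
                - sh
                - -c
                # dump via the replica set's headless service; host is an assumption
                - mongodump --uri="mongodb://example-mongodb-svc:27017" --archive=/backup/dump.archive
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: mongodb-backup-pvc   # pre-created PVC (assumed)
```

Dumping over the network this way avoids mounting the mongod data directory from the host at all.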
I tried to update my MongoDB to the latest 4.2.10 but nothing happened.
I noticed the CR has an annotation "mongodb.com/v1.lastVersion: 4.2.9". After updating that to 4.2.10, the operator ran the upgrade.
When and how is that annotation supposed to get updated?
Hi there,
First, may I say it is awesome that you're all spinning up a MongoDB community operator!
Second, I noticed the CA cert is expected to be in the ConfigMap, as ca.crt, per #75 and #81. May I ask why you look for the ca.crt in the ConfigMap, and not the Secret?
As background: my understanding is that when cert-manager provisions a Certificate and persists it to the k8s cluster within a Secret, it stores the ca.crt as a property in the related Secret, too. I just submitted this PR to update the cert-manager docs, but wanted to give you all a heads up.
In other words, if you leave the reference to the ca.crt in the ConfigMap (instead of moving it to the same Secret where the tls.crt and tls.key live), it'll create more work for folks that use cert-manager, because they'll have to copy the ca.crt's value from the Secret to a ConfigMap.
It's entirely possible I'm missing something here, but wanted to throw it out there for discussion. Thanks a lot for your help!
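For reference, the shape the operator currently expects (a sketch; the ConfigMap name must match the CR's `caConfigMapRef.name`, and the certificate body is a placeholder copied from the cert-manager Secret's `ca.crt` field):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ca                  # referenced by spec.security.tls.caConfigMapRef.name
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...                     # PEM body copied from the cert-manager Secret's ca.crt
    -----END CERTIFICATE-----
```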
Please help and thanks in advance!
Version:
https://github.com/mongodb/mongodb-kubernetes-operator/tree/37c8442101bfe371fe7f7a5bb2b5525b5b9e69f0
Reproduce Steps:
git clone https://github.com/mongodb/mongodb-kubernetes-operator.git
cd mongodb-kubernetes-operator
kubectl create -f deploy/crds/mongodb.com_mongodb_crd.yaml
kubectl create -f deploy/ --namespace test-mongo
kubectl apply -f deploy/crds/mongodb.com_v1_mongodb_scram_cr.yaml --namespace test-mongo
kubectl logs -n mongo example-scram-mongodb-0 mongodb-agent
Error Log:
[2020-09-04T05:11:28.267-00] [.info] [cm/util/sysdep_unix.go:LockAutomationLockFile:321] [05:11:28.267] Locking automation lock file at /tmp/mongodb-mms-automation.lock
[2020-09-04T05:11:28.268-00] [.info] [main/components/agent.go:NewAgent:121] [05:11:28.268] Constructing new agent object with desiredClusterConfigPath=/var/lib/automation/config/automation-config
[2020-09-04T05:11:28.269-00] [.info] [cm/mongosqld/custodian.go:NewCustodian:137] <mongosqld custodian> [05:11:28.268] Started
[2020-09-04T05:11:28.269-00] [.info] [cm/dataexplorer/dataexplorer.go:controlLoop:108] <dataExplorer> [05:11:28.269] Starting control loop
[2020-09-04T05:11:28.269-00] [.info] [realtime/rtcollector/rtcollector.go:controlLoop:87] <rtCollector> [05:11:28.269] Starting control loop
[2020-09-04T05:11:28.270-00] [.info] [cm/kmipproxy/custodian.go:mainLoop:205] <kmipProxyMaintainer> [05:11:28.269] Starting main loop
[2020-09-04T05:11:29.710-00] [.info] [main/components/agent.go:LoadClusterConfig:234] [05:11:29.710] New cluster config received! 0 (<nil>) -> 1 (2020-09-04 05:11:13.804572918 +0000 UTC)
[2020-09-04T05:11:29.710-00] [.info] [cm/modules/agents_unix.go:KillAllAgents:42] [05:11:29.710] Killing all running mongodb-mms-monitoring-agent agents at /var/lib/mongodb-mms-automation/mongodb-mms-monitoring-agent-.+\..+_.+/mongodb-mms-monitoring-agent *$
[2020-09-04T05:11:29.715-00] [.info] [cm/modules/agents_unix.go:KillAllAgents:42] [05:11:29.715] Killing all running mongodb-mms-backup-agent agents at /var/lib/mongodb-mms-automation/mongodb-mms-backup-agent-.+\..+_.+/mongodb-mms-backup-agent *$
[2020-09-04T05:11:29.934-00] [.info] [main/components/agent.go:removeUnusedVersionsHelper:700] [05:11:29.934] Removing MongoDBTools version = 100.1.0-f83cdc73a56d5a3454d2fa30f84b89612313459b() from /var/lib/mongodb-mms-automation/mongodb-database-tools-ubuntu1604-x86_64-100.1.0 because it is no longer in the cluster config.
[2020-09-04T05:11:29.935-00] [.info] [main/components/agent.go:launchNewDirectors:514] [05:11:29.935] Launched director example-scram-mongodb-0
[2020-09-04T05:11:29.935-00] [.info] [cm/dataexplorer/dataexplorer.go:controlLoop:114] <dataExplorer> [05:11:29.935] Received new cluster config!
[2020-09-04T05:11:29.935-00] [.info] [realtime/rtcollector/rtcollector.go:controlLoop:95] <rtCollector> [05:11:29.935] Received new cluster config!
[2020-09-04T05:11:29.935-00] [.info] [cm/director/director.go:incorporateConfig:301] <example-scram-mongodb-0> [05:11:29.935] clusterConfig edition is different (<nil> -> 2020-09-04 05:11:13.804572918 +0000 UTC). Incorporating...
[2020-09-04T05:11:29.939-00] [.info] [cm/mongoclientservice/mongoclientservice.go:sendClientToRequesters:710] [05:11:29.939] Server at example-scram-mongodb-0.example-scram-mongodb-svc.mongo.svc.cluster.local:27017 (local=false) is down. Informing all client requests and disposing of client (0x0)
[2020-09-04T05:11:29.954-00] [.info] [cm/mongoclientservice/mongoclientservice.go:sendClientToRequesters:710] [05:11:29.954] Server at example-scram-mongodb-1.example-scram-mongodb-svc.mongo.svc.cluster.local:27017 (local=false) is down. Informing all client requests and disposing of client (0x0)
[2020-09-04T05:11:29.954-00] [.info] [cm/maintainers/externalmaintainer.go:mainLoop:467] [05:11:29.954] Got a new descriptor
[2020-09-04T05:11:29.969-00] [.info] [cm/mongoclientservice/mongoclientservice.go:sendClientToRequesters:710] [05:11:29.969] Server at example-scram-mongodb-2.example-scram-mongodb-svc.mongo.svc.cluster.local:27017 (local=false) is down. Informing all client requests and disposing of client (0x0)
[2020-09-04T05:11:30.238-00] [.info] [cm/director/director.go:computePlan:279] <example-scram-mongodb-0> [05:11:30.237] ... process has a plan : DownloadMongoDBTools,Start,WaitAllRsMembersUp,RsInit,WaitFeatureCompatibilityVersionCorrect
[2020-09-04T05:11:30.238-00] [.info] [cm/director/director.go:tracef:772] <example-scram-mongodb-0> [05:11:30.238] Running step: 'DownloadMongoDBTools' of move 'DownloadMongoDBTools' because
['currentState.MongoDBToolsDownloaded' = false]
[2020-09-04T05:11:30.238-00] [.info] [cm/action/downloadmongo.go:downloadUngzipUntar:201] <example-scram-mongodb-0> [05:11:30.238] Starting to download and extract https://dummy into /var/lib/mongodb-mms-automation
[2020-09-04T05:11:30.463-00] [.error] [cm/util/download.go:DownloadCustomClient:197] <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy : resp=<nil> : Get "https://dummy": dial tcp: lookup dummy on 10.96.0.10:53: no such host
[2020-09-04T05:11:30.463-00] [.error] [cm/action/downloadmongo.go:downloadUngzipUntar:210] <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy to /var/lib/mongodb-mms-automation/mongodb-database-tools-linux-x86_64-100.0.2 : <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy : resp=<nil> : Get "https://dummy": dial tcp: lookup dummy on 10.96.0.10:53: no such host
[2020-09-04T05:11:30.463-00] [.info] [cm/action/downloadmongo.go:downloadMongoBinary:154] <example-scram-mongodb-0> [05:11:30.463] Error downloading https://dummy : sleeping for 30 seconds and trying the download again.
err = <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy to /var/lib/mongodb-mms-automation/mongodb-database-tools-linux-x86_64-100.0.2 : <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy : resp=<nil> : Get "https://dummy": dial tcp: lookup dummy on 10.96.0.10:53: no such host
[2020-09-04T05:11:30.956-00] [.info] [main/components/agent.go:LoadClusterConfig:237] [05:11:30.956] clusterConfig unchanged
[2020-09-04T05:11:31.957-00] [.info] [main/components/agent.go:LoadClusterConfig:237] [05:11:31.957] clusterConfig unchanged
[... 27 nearly identical "clusterConfig unchanged" log lines (05:11:32 through 05:11:59) omitted ...]
[2020-09-04T05:12:00.002-00] [.info] [main/components/agent.go:LoadClusterConfig:237] [05:12:00.002] clusterConfig unchanged
[2020-09-04T05:12:00.464-00] [.error] [cm/director/director.go:executePlan:939] <example-scram-mongodb-0> [05:12:00.464] Failed to apply action. Result = <nil> : <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy to /var/lib/mongodb-mms-automation/mongodb-database-tools-linux-x86_64-100.0.2 : <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy : resp=<nil> : Get "https://dummy": dial tcp: lookup dummy on 10.96.0.10:53: no such host
[2020-09-04T05:12:00.464-00] [.error] [cm/director/director.go:planAndExecute:566] <example-scram-mongodb-0> [05:12:00.464] Plan execution failed on step DownloadMongoDBTools as part of move DownloadMongoDBTools : <example-scram-mongodb-0> [05:12:00.464] Failed to apply action. Result = <nil> : <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy to /var/lib/mongodb-mms-automation/mongodb-database-tools-linux-x86_64-100.0.2 : <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy : resp=<nil> : Get "https://dummy": dial tcp: lookup dummy on 10.96.0.10:53: no such host
[2020-09-04T05:12:00.464-00] [.error] [cm/director/director.go:mainLoop:398] <example-scram-mongodb-0> [05:12:00.464] Failed to planAndExecute : <example-scram-mongodb-0> [05:12:00.464] Plan execution failed on step DownloadMongoDBTools as part of move DownloadMongoDBTools : <example-scram-mongodb-0> [05:12:00.464] Failed to apply action. Result = <nil> : <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy to /var/lib/mongodb-mms-automation/mongodb-database-tools-linux-x86_64-100.0.2 : <example-scram-mongodb-0> [05:11:30.463] Error downloading url=https://dummy : resp=<nil> : Get "https://dummy": dial tcp: lookup dummy on 10.96.0.10:53: no such host
[2020-09-04T05:12:01.003-00] [.info] [main/components/agent.go:LoadClusterConfig:237] [05:12:01.003] clusterConfig unchanged
[2020-09-04T05:12:01.466-00] [.info] [cm/mongoclientservice/mongoclientservice.go:sendClientToRequesters:710] [05:12:01.466] Server at example-scram-mongodb-0.example-scram-mongodb-svc.mongo.svc.cluster.local:27017 (local=false) is down. Informing all client requests and disposing of client (0x0)
[2020-09-04T05:12:01.720-00] [.info] [cm/mongoclientservice/mongoclientservice.go:sendClientToRequesters:710] [05:12:01.720] Server at example-scram-mongodb-1.example-scram-mongodb-svc.mongo.svc.cluster.local:27017 (local=false) is down. Informing all client requests and disposing of client (0x0)
[2020-09-04T05:12:01.990-00] [.info] [cm/mongoclientservice/mongoclientservice.go:sendClientToRequesters:710] [05:12:01.990] Server at example-scram-mongodb-2.example-scram-mongodb-svc.mongo.svc.cluster.local:27017 (local=false) is down. Informing all client requests and disposing of client (0x0)
[2020-09-04T05:12:02.257-00] [.info] [cm/director/director.go:computePlan:279] <example-scram-mongodb-0> [05:12:02.257] ... process has a plan : DownloadMongoDBTools,Start,WaitAllRsMembersUp,RsInit,WaitFeatureCompatibilityVersionCorrect
[2020-09-04T05:12:02.257-00] [.info] [cm/director/director.go:tracef:772] <example-scram-mongodb-0> [05:12:02.257] Running step: 'DownloadMongoDBTools' of move 'DownloadMongoDBTools' because
['currentState.MongoDBToolsDownloaded' = false]
[2020-09-04T05:12:02.258-00] [.info] [cm/action/downloadmongo.go:downloadUngzipUntar:201] <example-scram-mongodb-0> [05:12:02.258] Starting to download and extract https://dummy into /var/lib/mongodb-mms-automation
[2020-09-04T05:12:02.807-00] [.error] [cm/util/download.go:DownloadCustomClient:197] <example-scram-mongodb-0> [05:12:02.807] Error downloading url=https://dummy : resp=<nil> : Get "https://dummy": dial tcp: lookup dummy on 10.96.0.10:53: no such host
[2020-09-04T05:12:02.807-00] [.error] [cm/action/downloadmongo.go:downloadUngzipUntar:210] <example-scram-mongodb-0> [05:12:02.807] Error downloading url=https://dummy to /var/lib/mongodb-mms-automation/mongodb-database-tools-linux-x86_64-100.0.2 : <example-scram-mongodb-0> [05:12:02.807] Error downloading url=https://dummy : resp=<nil> : Get "https://dummy": dial tcp: lookup dummy on 10.96.0.10:53: no such host
[2020-09-04T05:12:02.807-00] [.info] [cm/action/downloadmongo.go:downloadMongoBinary:154] <example-scram-mongodb-0> [05:12:02.807] Error downloading https://dummy : sleeping for 30 seconds and trying the download again.
err = <example-scram-mongodb-0> [05:12:02.807] Error downloading url=https://dummy to /var/lib/mongodb-mms-automation/mongodb-database-tools-linux-x86_64-100.0.2 : <example-scram-mongodb-0> [05:12:02.807] Error downloading url=https://dummy : resp=<nil> : Get "https://dummy": dial tcp: lookup dummy on 10.96.0.10:53: no such host
[2020-09-04T05:12:11.004-00] [.info] [main/components/agent.go:LoadClusterConfig:237] [05:12:11.004] clusterConfig unchanged
[... 8 nearly identical "clusterConfig unchanged" log lines (05:12:12 through 05:12:19) omitted ...]
[2020-09-04T05:12:20.016-00] [.info] [main/components/agent.go:LoadClusterConfig:237] [05:12:20.016] clusterConfig unchanged
Since the default values for CPU and memory are somewhat low, can you provide an example (probably by extending the readme) of how to set those?
https://github.com/mongodb/mongodb-kubernetes-operator/pull/157/files looks like it should be possible.
How do I configure the resource limits for the MongoDB pods?
I didn't find any documentation about this.
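Based on the StatefulSet spec override referenced in that PR, something like the following may work (a sketch; the container names `mongod` and `mongodb-agent` match those used in this operator's examples, and the resource values are purely illustrative):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: example-mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"
  statefulSet:
    spec:
      template:
        spec:
          containers:
            - name: mongod
              resources:
                requests:
                  cpu: "500m"       # illustrative values only
                  memory: 1Gi
                limits:
                  cpu: "1"
                  memory: 2Gi
            - name: mongodb-agent
              resources:
                requests:
                  cpu: "100m"
                  memory: 256Mi
```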
Hi, sadly my installation broke again.
The MongoDB Kubernetes operator 1.7 worked fine for about a month and then out of nowhere it broke; if I check the logs of the broken Ops Manager I get this error:
/opt/scripts/agent-launcher.sh: line 114: splittedAgentFlags[*]: unbound variable
I really don't understand! I removed the entire namespace and the enterprise operator, reinstalled everything again, and I get the same error. It's really weird! Why has this happened, and how do I resolve it?
Hi
Thanks for sharing the MongoDB operator. I'm learning how to build a k8s operator by studying MongoDB's these days. I tried to create the MongoDB operator following the readme instructions, but it failed:
[failure description]
There is an error in the operator while checking the status of the replica set. Reconcile is called again, and an error is returned by r.client.CreateOrUpdate(&svc):
The service already exists... moving forward: Service "example-mongodb-svc" is invalid: metadata.resourceVersion: Invalid value: "": must be specified for an update
I tried to fix it following "Kubernetes Service invalid clusterIP or resourceVersion", but it didn't work.
my k8s version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Any response/help will be much appreciated.
There is a Kube version compatibility issue because of the kubeVersion referred to in Chart.yaml.
A potential fix is to ignore this, or to change it to kubeVersion: '>=1.13-0' so that it works with a wider range of version strings.
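For example (a sketch of the suggested Chart.yaml change; the other fields shown are placeholders):

```yaml
# Chart.yaml
apiVersion: v2
name: mongodb-kubernetes-operator
version: 0.1.0                # placeholder chart version
# Accept any Kubernetes >= 1.13; the "-0" suffix allows pre-release and
# vendor-suffixed semvers such as v1.18.2+k3s1 to satisfy the constraint:
kubeVersion: '>=1.13-0'
```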
Hi,
Currently StatefulSets don't support changing volumeClaimTemplates after creation.
This is (especially for a database) a bad thing, because if storage needs grow it is hard to increase the volume size.
kubernetes/kubernetes#68737
kubernetes/enhancements#660
kubernetes/enhancements#661
kubernetes/enhancements#1848
It would be nice if the operator can detect changes to the volumeClaimTemplates and patch the existing PVCs accordingly as a workaround.
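As a prerequisite for any such workaround, the underlying StorageClass has to allow expansion so that a PVC's requested size can be raised in place (a sketch; the AWS EBS provisioner and class name are examples):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true   # lets spec.resources.requests.storage on bound PVCs be increased
```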
Hi,
Following the guides and examples I created the following yaml and created a new deployment with kubectl create:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
name: mongodb
spec:
members: 3
type: ReplicaSet
version: "4.2.7"
users:
- name: mongoRoot
db: admin
passwordSecretRef:
name: mongo-admin-password
roles:
- name: clusterAdmin
db: admin
- name: userAdminAnyDatabase
db: admin
security:
tls:
enabled: true
certificateKeySecretRef:
name: mongo-pod-cert
caConfigMapRef:
name: ca
statefulSet:
spec:
template:
spec:
containers:
- name: "mongodb-agent"
env:
- name: MANAGED_SECURITY_CONTEXT
value: "true"
- name: "mongod"
env:
- name: MANAGED_SECURITY_CONTEXT
value: "true"
However, when checking the ConfigMap created, I can see that TLS and auth are disabled:
"auth": {
"disabled": true,
"authoritativeSet": false,
"autoAuthMechanism": "MONGODB-CR"
},
"tls": {
"CAFilePath": "",
"clientCertificateMode": "OPTIONAL"
},
Have I missed a step?
Thanks
Mark
I tried scaling a replica set yesterday. It works fine when scaling up. However, when scaling down, the remaining mongod instances keep trying to reconnect to the removed replicas.
Logs after downscale:
2020-09-25T07:22:12.265+0000 I CONNPOOL [Replication] Connecting to mongodb-core-1.mongodb-core-svc.mongodb.svc.cluster.local:27017
2020-09-25T07:22:12.274+0000 I REPL_HB [replexec-0] Heartbeat to mongodb-core-1.mongodb-core-svc.mongodb.svc.cluster.local:27017 failed after 2 retries, response status: HostUnreachable: Error connecting to mongodb-core-1.mongodb-core-svc.mongodb.svc.cluster.local:27017 :: caused by :: Could not find address for mongodb-core-1.mongodb-core-svc.mongodb.svc.cluster.local:27017: SocketException: Host not found (authoritative)
I tried to deploy a cluster on kind with the version set to 4.4.0 and it got stuck on the agent health check.
It would be cool to support the newest MongoDB 4.4 too.
So I bootstrapped the cluster on three nodes running k3s. dmzkubectl is just an alias to access the right cluster in dev.
lars@d04:~/kubernetes-cluster/test$ git clone https://github.com/mongodb/mongodb-kubernetes-operator.git
Cloning into 'mongodb-kubernetes-operator'...
remote: Enumerating objects: 224, done.
remote: Counting objects: 100% (224/224), done.
remote: Compressing objects: 100% (135/135), done.
remote: Total 4158 (delta 139), reused 149 (delta 82), pack-reused 3934
Receiving objects: 100% (4158/4158), 17.86 MiB | 13.09 MiB/s, done.
Resolving deltas: 100% (2429/2429), done.
lars@d04:~/kubernetes-cluster/test$ dmzkubectl create namespace testmongo
namespace/testmongo created
lars@d04:~/kubernetes-cluster/test$ dmzkubectl -n testmongo apply -f mongodb-kubernetes-operator/deploy/
deployment.apps/mongodb-kubernetes-operator created
deployment.apps/mongodb-kubernetes-operator configured
role.rbac.authorization.k8s.io/mongodb-kubernetes-operator created
rolebinding.rbac.authorization.k8s.io/mongodb-kubernetes-operator created
serviceaccount/mongodb-kubernetes-operator created
lars@d04:~/kubernetes-cluster/test$ dmzkubectl -n testmongo apply -f mongodb-kubernetes-operator/deploy/crds/mongodb.com_v1_mongodb_scram_cr.yaml
mongodb.mongodb.com/example-scram-mongodb created
secret/my-user-password created
lars@d04:~/kubernetes-cluster/test$ dmzkubectl -n testmongo get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongodb-kubernetes-operator-7f5ff99dd4-hqzkx 1/1 Running 0 4m33s 10.42.2.92 ef2 <none> <none>
pod/example-scram-mongodb-0 2/2 Running 0 2m11s 10.42.0.97 ef3 <none> <none>
pod/example-scram-mongodb-1 2/2 Running 0 113s 10.42.1.76 ef1 <none> <none>
pod/example-scram-mongodb-2 2/2 Running 0 71s 10.42.2.94 ef2 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/example-scram-mongodb-svc ClusterIP None <none> 27017/TCP 2m12s app=example-scram-mongodb-svc
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/mongodb-kubernetes-operator 1/1 1 1 4m34s mongodb-kubernetes-operator quay.io/mongodb/mongodb-kubernetes-operator:0.2.0 name=mongodb-kubernetes-operator
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/mongodb-kubernetes-operator-7f5ff99dd4 1 1 1 4m33s mongodb-kubernetes-operator quay.io/mongodb/mongodb-kubernetes-operator:0.2.0 name=mongodb-kubernetes-operator,pod-template-hash=7f5ff99dd4
replicaset.apps/mongodb-kubernetes-operator-978bb9f8c 0 0 0 4m33s mongodb-kubernetes-operator quay.io/mongodb/mongodb-kubernetes-operator:0.2.0 name=mongodb-kubernetes-operator,pod-template-hash=978bb9f8c
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/example-scram-mongodb 3/3 2m12s mongodb-agent,mongod quay.io/mongodb/mongodb-agent:10.15.1.6468-1,mongo:4.2.6
So everything looks good, but the operator is stuck in a reconcile loop...
lars@d04:~/kubernetes-cluster/test$ dmzkubectl -n testmongo logs pod/mongodb-kubernetes-operator-7f5ff99dd4-hqzkx
2020-09-08T14:21:24.498Z INFO manager/main.go:49 Watching namespace: testmongo
2020-09-08T14:21:24.953Z INFO manager/main.go:66 Registering Components.
2020-09-08T14:21:24.953Z INFO manager/main.go:78 Starting the Cmd.
2020-09-08T14:23:43.432Z INFO mongodb/replica_set_controller.go:171 Reconciling MongoDB {"ReplicaSet": "testmongo/example-scram-mongodb", "MongoDB.Spec": {"members":3,"type":"ReplicaSet","version":"4.2.6","security":{"authentication":{"modes":["SCRAM"]},"tls":{"enabled":false,"optional":false,"certificateKeySecretRef":{"name":""},"caConfigMapRef":{"name":""}}},"users":[{"name":"my-user","db":"admin","passwordSecretRef":{"name":"my-user-password","key":""},"roles":[{"db":"admin","name":"clusterAdmin"},{"db":"admin","name":"userAdminAnyDatabase"}]}],"statefulSet":{"spec":{"selector":null,"template":{"metadata":{"creationTimestamp":null},"spec":{"containers":null}},"serviceName":"","updateStrategy":{}}},"additionalMongodConfig":{}}, "MongoDB.Status": {"mongoUri":"","phase":""}}
2020-09-08T14:23:43.466Z DEBUG scram/scram.go:106 password secret was not found, reading from credentials from secret/example-scram-mongodb-my-user-scram-credentials
2020-09-08T14:23:43.466Z WARN mongodb/replica_set_controller.go:174 error creating automation config config map: Secret "example-scram-mongodb-my-user-scram-credentials" not found {"ReplicaSet": "testmongo/example-scram-mongodb"}
github.com/mongodb/mongodb-kubernetes-operator/pkg/controller/mongodb.(*ReplicaSetReconciler).Reconcile
/go/pkg/controller/mongodb/replica_set_controller.go:174
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
2020-09-08T14:23:44.467Z INFO mongodb/replica_set_controller.go:171 Reconciling MongoDB {"ReplicaSet": "testmongo/example-scram-mongodb", "MongoDB.Spec": {"members":3,"type":"ReplicaSet","version":"4.2.6","security":{"authentication":{"modes":["SCRAM"]},"tls":{"enabled":false,"optional":false,"certificateKeySecretRef":{"name":""},"caConfigMapRef":{"name":""}}},"users":[{"name":"my-user","db":"admin","passwordSecretRef":{"name":"my-user-password","key":""},"roles":[{"db":"admin","name":"clusterAdmin"},{"db":"admin","name":"userAdminAnyDatabase"}]}],"statefulSet":{"spec":{"selector":null,"template":{"metadata":{"creationTimestamp":null},"spec":{"containers":null}},"serviceName":"","updateStrategy":{}}},"additionalMongodConfig":{}}, "MongoDB.Status": {"mongoUri":"","phase":""}}
2020-09-08T14:23:44.504Z DEBUG scram/scram.go:149 No existing credentials found, generating new credentials
2020-09-08T14:23:44.504Z DEBUG scram/scram.go:127 Generating new credentials and storing in secret/example-scram-mongodb-my-user-scram-credentials
2020-09-08T14:23:44.549Z DEBUG scram/scram.go:138 Successfully generated SCRAM credentials
2020-09-08T14:23:44.564Z DEBUG mongodb/replica_set_controller.go:178 Ensuring the service exists {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:23:44.574Z DEBUG mongodb/replica_set_controller.go:194 Creating/Updating StatefulSet {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:23:45.699Z INFO mongodb/replica_set_controller.go:171 Reconciling MongoDB {"ReplicaSet": "testmongo/example-scram-mongodb", "MongoDB.Spec": {"members":3,"type":"ReplicaSet","version":"4.2.6","security":{"authentication":{"modes":["SCRAM"]},"tls":{"enabled":false,"optional":false,"certificateKeySecretRef":{"name":""},"caConfigMapRef":{"name":""}}},"users":[{"name":"my-user","db":"admin","passwordSecretRef":{"name":"my-user-password","key":""},"roles":[{"db":"admin","name":"clusterAdmin"},{"db":"admin","name":"userAdminAnyDatabase"}]}],"statefulSet":{"spec":{"selector":null,"template":{"metadata":{"creationTimestamp":null},"spec":{"containers":null}},"serviceName":"","updateStrategy":{}}},"additionalMongodConfig":{}}, "MongoDB.Status": {"mongoUri":"","phase":""}}
2020-09-08T14:23:45.767Z DEBUG scram/scram.go:122 Credentials have not changed, using credentials stored in: secret/example-scram-mongodb-my-user-scram-credentials
2020-09-08T14:23:45.781Z DEBUG mongodb/replica_set_controller.go:178 Ensuring the service exists {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:23:45.789Z INFO mongodb/replica_set_controller.go:311 The service already exists... moving forward: services "example-scram-mongodb-svc" already exists {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:23:45.789Z DEBUG mongodb/replica_set_controller.go:194 Creating/Updating StatefulSet {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:23:45.800Z DEBUG mongodb/replica_set_controller.go:208 Ensuring StatefulSet is ready, with type: RollingUpdate {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:23:45.801Z INFO mongodb/replica_set_controller.go:298 StatefulSet Readiness {"ReplicaSet": "testmongo/example-scram-mongodb", "isReady": false, "hasPerformedUpgrade": false, "areEqual": true}
2020-09-08T14:23:45.801Z INFO mongodb/replica_set_controller.go:216 StatefulSet testmongo/example-scram-mongodb is not yet ready, retrying in 10 seconds {"ReplicaSet": "testmongo/example-scram-mongodb"}
[... the same reconcile cycle — "Reconciling MongoDB" → "Credentials have not changed" → "The service already exists... moving forward" → "Creating/Updating StatefulSet" → "StatefulSet testmongo/example-scram-mongodb is not yet ready, retrying in 10 seconds" — repeats every ~10 seconds from 14:23:55 to 14:25:26, identical except for timestamps ...]
2020-09-08T14:25:36.852Z INFO mongodb/replica_set_controller.go:171 Reconciling MongoDB {"ReplicaSet": "testmongo/example-scram-mongodb", "MongoDB.Spec": {"members":3,"type":"ReplicaSet","version":"4.2.6","security":{"authentication":{"modes":["SCRAM"]},"tls":{"enabled":false,"optional":false,"certificateKeySecretRef":{"name":""},"caConfigMapRef":{"name":""}}},"users":[{"name":"my-user","db":"admin","passwordSecretRef":{"name":"my-user-password","key":""},"roles":[{"db":"admin","name":"clusterAdmin"},{"db":"admin","name":"userAdminAnyDatabase"}]}],"statefulSet":{"spec":{"selector":null,"template":{"metadata":{"creationTimestamp":null},"spec":{"containers":null}},"serviceName":"","updateStrategy":{}}},"additionalMongodConfig":{}}, "MongoDB.Status": {"mongoUri":"","phase":""}}
2020-09-08T14:25:36.923Z DEBUG scram/scram.go:122 Credentials have not changed, using credentials stored in: secret/example-scram-mongodb-my-user-scram-credentials
2020-09-08T14:25:36.936Z DEBUG mongodb/replica_set_controller.go:178 Ensuring the service exists {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:25:36.943Z INFO mongodb/replica_set_controller.go:311 The service already exists... moving forward: services "example-scram-mongodb-svc" already exists {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:25:36.943Z DEBUG mongodb/replica_set_controller.go:194 Creating/Updating StatefulSet {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:25:36.954Z DEBUG mongodb/replica_set_controller.go:208 Ensuring StatefulSet is ready, with type: RollingUpdate {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:25:36.955Z INFO mongodb/replica_set_controller.go:298 StatefulSet Readiness {"ReplicaSet": "testmongo/example-scram-mongodb", "isReady": true, "hasPerformedUpgrade": false, "areEqual": true}
2020-09-08T14:25:36.955Z DEBUG mongodb/replica_set_controller.go:220 Resetting StatefulSet UpdateStrategy {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:25:36.955Z DEBUG mongodb/replica_set_controller.go:226 Setting MongoDB Annotations {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:25:36.969Z DEBUG mongodb/replica_set_controller.go:242 Updating MongoDB Status {"ReplicaSet": "testmongo/example-scram-mongodb"}
2020-09-08T14:25:36.977Z WARN mongodb/replica_set_controller.go:245 Error updating the status of the MongoDB resource: could not update status: Operation cannot be fulfilled on mongodb.mongodb.com "example-scram-mongodb": the object has been modified; please apply your changes to the latest version and try again {"ReplicaSet": "testmongo/example-scram-mongodb"}
github.com/mongodb/mongodb-kubernetes-operator/pkg/controller/mongodb.(*ReplicaSetReconciler).Reconcile
/go/pkg/controller/mongodb/replica_set_controller.go:245
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
So how can I follow this instruction: "please apply your changes to the latest version and try again" {"ReplicaSet": "testmongo/example-scram-mongodb"}?
lars@d04:~/kubernetes-cluster/test$ dmzkubectl -n testmongo get mdb example-scram-mongodb
NAME PHASE VERSION
example-scram-mongodb
lars@d04:~/kubernetes-cluster/test$ dmzkubectl -n testmongo describe mdb example-scram-mongodb
Name: example-scram-mongodb
Namespace: testmongo
Labels: <none>
Annotations: mongodb.com/v1.hasLeftReadyStateAnnotationKey: false
mongodb.com/v1.lastVersion: 4.2.6
API Version: mongodb.com/v1
Kind: MongoDB
Metadata:
Creation Timestamp: 2020-09-08T14:23:43Z
Generation: 2
Managed Fields:
API Version: mongodb.com/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:members:
f:security:
.:
f:authentication:
.:
f:modes:
f:type:
f:version:
Manager: kubectl
Operation: Update
Time: 2020-09-08T14:23:43Z
API Version: mongodb.com/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:mongodb.com/v1.hasLeftReadyStateAnnotationKey:
f:mongodb.com/v1.lastVersion:
f:spec:
f:additionalMongodConfig:
f:security:
f:tls:
.:
f:caConfigMapRef:
.:
f:name:
f:certificateKeySecretRef:
.:
f:name:
f:enabled:
f:optional:
f:statefulSet:
.:
f:spec:
.:
f:selector:
f:serviceName:
f:template:
.:
f:metadata:
.:
f:creationTimestamp:
f:spec:
.:
f:containers:
f:updateStrategy:
f:users:
f:status:
.:
f:mongoUri:
f:phase:
Manager: mongodb-kubernetes-operator
Operation: Update
Time: 2020-09-08T14:36:41Z
Resource Version: 6206201
Self Link: /apis/mongodb.com/v1/namespaces/testmongo/mongodb/example-scram-mongodb
UID: c34562e2-ec3c-48a0-869a-fcc6748b6d67
Spec:
Additional Mongod Config: <nil>
Members: 3
Security:
Authentication:
Modes:
SCRAM
Tls:
Ca Config Map Ref:
Name:
Certificate Key Secret Ref:
Name:
Enabled: false
Optional: false
Stateful Set:
Spec:
Selector: <nil>
Service Name:
Template:
Metadata:
Creation Timestamp: <nil>
Spec:
Containers: <nil>
Update Strategy:
Type: ReplicaSet
Users:
Db: admin
Name: my-user
Password Secret Ref:
Key:
Name: my-user-password
Roles:
Db: admin
Name: clusterAdmin
Db: admin
Name: userAdminAnyDatabase
Version: 4.2.6
Events: <none>
Hi mongodb team,
I'm just curious: are there plans to package this operator as a Helm chart?
If not, why not? I assume it may simply be too early to package it while the project is still in flux, or perhaps there's some other set of reasons.
In the meantime, I'll follow the instructions to clone the repo and then kubectl create & get. Thank you very much for putting this together; it is much appreciated.
Take care,
Kyle
How can we create users with SCRAM-SHA-1 authentication mechanism?
Hello,
How can we specify the storageClassName for the StatefulSet created by the operator? In the example manifests and in the CRD definition I don't see any place to set it.
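The MongoDB CRD does expose a `spec.statefulSet.spec` passthrough (it appears in the spec dump earlier in this thread), so one possible approach — unverified here, and note that the claim name `data-volume` and the class name `my-storage-class` are assumptions, not confirmed operator defaults — is to override the volume claim template:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: example-scram-mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            # assumed claim name; check the generated StatefulSet
            # (kubectl get sts <name> -o yaml) for the real one
            name: data-volume
          spec:
            storageClassName: my-storage-class  # hypothetical class name
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi
```

If the claim name here doesn't match the one the operator generates, the override won't merge onto the existing template, so verifying the generated StatefulSet first is the safer path.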