Hello! 👋 This is such a cool project, and I'm excited to use it! It's clear that a lot of hard work and careful planning has gone into making the UX and DX top-notch! ❤️
Sorry to open an issue, but perhaps it's user error on my part. Can you provide some guidance?
## Environment

### OS

```shell
ubuntu@shp1:~/opentdf-chart$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS"
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
### Files

```shell
ubuntu@shp1:~/opentdf-chart$ ls -la
total 32
drwxrwxr-x 2 ubuntu ubuntu 4096 May 9 14:04 .
drwxr-x--- 9 ubuntu ubuntu 4096 May 9 14:04 ..
-rw-rw-r-- 1 ubuntu ubuntu   75 May 9 14:01 ecparams.tmp
-rw-rw-r-- 1 ubuntu ubuntu 1099 May 9 14:01 kas-cert.pem
-rw-rw-r-- 1 ubuntu ubuntu  562 May 9 14:01 kas-ec-cert.pem
-rw------- 1 ubuntu ubuntu  241 May 9 14:01 kas-ec-private.pem
-rw------- 1 ubuntu ubuntu 1704 May 9 14:01 kas-private.pem
-rw-rw-r-- 1 ubuntu ubuntu  562 May 9 14:04 myvalues.yaml
```
### Kubernetes distro

```shell
ubuntu@shp1:~/opentdf-chart$ k3s --version
k3s version v1.29.4+k3s1 (94e29e2e)
go version go1.21.9
```
### Secrets

```shell
$ kubectl get secrets -A
NAMESPACE     NAME                      TYPE                DATA   AGE
kube-system   k3s-serving               kubernetes.io/tls   2      23m
kube-system   shp1.node-password.k3s    Opaque              1      23m
default       kas-private-keys          Opaque              4      9m32s
default       platform-1715263488-tls   kubernetes.io/tls   2      6m46s
```
### Values file

```yaml
# cat myvalues.yaml
playground: true # Enable playground mode
# Only need to configure keycloak ingress and adminIngress
keycloak:
  proxy: edge # Your keycloak proxy (edge, passthrough, reencrypt)
  ingress:
    enabled: true
    selfSigned: true
    annotations: {}
    # route.openshift.io/termination: edge
    hostname: # Your keycloak hostname (e.g. keycloak.example.com)
  adminIngress:
    enabled: true
    selfSigned: true
    annotations: {}
    # route.openshift.io/termination: edge
    hostname: # Your keycloak admin hostname (e.g. keycloak-admin.example.com)
```
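As a sanity check, the chart can also be rendered locally to see exactly which image references these values produce (hypothetical invocation; it assumes the `opentdf` repo is already added with `helm repo add`):

```shell
# Render the chart without installing it and list every image it references
helm template opentdf/platform -f myvalues.yaml | grep 'image:'
```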
## Expected behavior

Some kind of quick, confidence-building first-run UX for a newcomer. For instance, `helm install ...` with the Playground configuration should "just work".
## Actual behavior

It looks like things work (e.g. `helm install` completes and reports success), but on closer inspection the install fails silently:
```shell
ubuntu@shp1:~/opentdf-chart$ helm install opentdf/platform --generate-name -f myvalues.yaml
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/ubuntu/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/ubuntu/.kube/config
NAME: platform-1715264152
LAST DEPLOYED: Thu May 9 14:15:52 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
 ______ _      _____ _______ ______ _____ _____  _   _ _______ _____  _   _  ___  ______ _     ___________
 | ___ \ |    / _ \ \ / / __ \| ___ \ _ | | | | \ | | _ \ | ___| \ | | / _ \ | ___ \ | | ___| _ \
 | |_/ / |   / /_\ \ V /| |  \/| |_/ / | | | | | | \| | | | | | |__ | \| |/ /_\ \| |_/ / | | |__ | | | |
 |  __/| |   |  _  |\ / | | __ |    /| | | | | | | . ` | | | | |  __|| . ` ||  _  ||  ___ \ | | __|| | | |
 | |   | |___| | | || | | |_\ \| |\ \\ \_/ / |_| | |\  | |/ /  | |___| |\ || | | || |_/ / |____| |___| |/ /
 \_|   \_____/\_| |_/\_/  \____/\_| \_|\___/ \___/\_| \_/___/  \____/\_| \_/\_| |_/\____/\_____/\____/|___/

Keycloak Application: keycloak.local
Keycloak Admin Application: keycloak.local

1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=platform,app.kubernetes.io/instance=platform-1715264152" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
```
Checking the exit status for due diligence:

```shell
ubuntu@shp1:~/opentdf-chart$ echo $?
0
```
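The zero exit status is expected, strictly speaking: by default `helm install` only submits the manifests and reports `deployed` without waiting for the workloads to become Ready. Adding `--wait` would have surfaced the failure (a sketch of the same install with Helm's standard readiness flags):

```shell
# Same install, but block until all pods are Ready and exit non-zero
# on timeout -- this would have caught the ImagePullBackOff below.
helm install opentdf/platform --generate-name -f myvalues.yaml \
  --wait --timeout 5m
```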
### Looking at pods

```shell
ubuntu@shp1:~$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS             RESTARTS      AGE
kube-system   local-path-provisioner-6c86858495-7ztwx   1/1     Running            2 (22m ago)   30m
kube-system   coredns-6799fbcd5-fldhz                   1/1     Running            2 (22m ago)   30m
kube-system   metrics-server-54fd9b65b-jkwvq            1/1     Running            2 (22m ago)   30m
default       platform-db-0                             1/1     Running            0             3m20s
default       platform-keycloak-0                       1/1     Running            0             3m20s
default       platform-1715264152-7696569549-h7d45      0/1     ImagePullBackOff   0             3m20s
```
### Looking at events

```shell
ubuntu@shp1:~$ kubectl get events -A
NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE
default 31m Normal Starting node/shp1 Starting kubelet.
default 31m Warning InvalidDiskCapacity node/shp1 invalid capacity 0 on image filesystem
default 31m Normal NodeHasSufficientMemory node/shp1 Node shp1 status is now: NodeHasSufficientMemory
default 31m Normal NodeHasNoDiskPressure node/shp1 Node shp1 status is now: NodeHasNoDiskPressure
default 31m Normal NodeHasSufficientPID node/shp1 Node shp1 status is now: NodeHasSufficientPID
default 31m Normal NodeAllocatableEnforced node/shp1 Updated Node Allocatable limit across pods
default 31m Normal NodeReady node/shp1 Node shp1 status is now: NodeReady
default 31m Normal NodePasswordValidationComplete node/shp1 Deferred node password secret validation complete
kube-system 31m Normal ApplyingManifest addon/ccm Applying manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
kube-system 31m Normal AppliedManifest addon/ccm Applied manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
kube-system 31m Normal ApplyingManifest addon/coredns Applying manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
kube-system 31m Normal AppliedManifest addon/coredns Applied manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
kube-system 31m Normal ApplyingManifest addon/local-storage Applying manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
kube-system 31m Normal AppliedManifest addon/local-storage Applied manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
kube-system 31m Normal ApplyingManifest addon/aggregated-metrics-reader Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
kube-system 31m Normal AppliedManifest addon/aggregated-metrics-reader Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
kube-system 31m Normal ApplyingManifest addon/auth-delegator Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
kube-system 31m Normal AppliedManifest addon/auth-delegator Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
kube-system 31m Normal ApplyingManifest addon/auth-reader Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
kube-system 31m Normal AppliedManifest addon/auth-reader Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
kube-system 31m Normal ApplyingManifest addon/metrics-apiservice Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
kube-system 31m Normal AppliedManifest addon/metrics-apiservice Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
kube-system 31m Normal ApplyingManifest addon/metrics-server-deployment Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
kube-system 31m Normal AppliedManifest addon/metrics-server-deployment Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
kube-system 31m Normal ApplyingManifest addon/metrics-server-service Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
kube-system 31m Normal AppliedManifest addon/metrics-server-service Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
kube-system 31m Normal ApplyingManifest addon/resource-reader Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
kube-system 31m Normal AppliedManifest addon/resource-reader Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
kube-system 31m Normal ApplyingManifest addon/rolebindings Applying manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
default 31m Normal Synced node/shp1 Node synced successfully
kube-system 31m Normal AppliedManifest addon/rolebindings Applied manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
kube-system 31m Normal ApplyingManifest addon/runtimes Applying manifest at "/var/lib/rancher/k3s/server/manifests/runtimes.yaml"
kube-system 31m Normal AppliedManifest addon/runtimes Applied manifest at "/var/lib/rancher/k3s/server/manifests/runtimes.yaml"
default 31m Normal Starting node/shp1
default 31m Normal RegisteredNode node/shp1 Node shp1 event: Registered Node shp1 in Controller
kube-system 31m Normal ScalingReplicaSet deployment/local-path-provisioner Scaled up replica set local-path-provisioner-6c86858495 to 1
kube-system 31m Normal ScalingReplicaSet deployment/coredns Scaled up replica set coredns-6799fbcd5 to 1
kube-system 31m Normal ScalingReplicaSet deployment/metrics-server Scaled up replica set metrics-server-54fd9b65b to 1
kube-system 31m Normal SuccessfulCreate replicaset/local-path-provisioner-6c86858495 Created pod: local-path-provisioner-6c86858495-7ztwx
kube-system 31m Normal Scheduled pod/local-path-provisioner-6c86858495-7ztwx Successfully assigned kube-system/local-path-provisioner-6c86858495-7ztwx to shp1
kube-system 31m Normal Scheduled pod/coredns-6799fbcd5-fldhz Successfully assigned kube-system/coredns-6799fbcd5-fldhz to shp1
kube-system 31m Normal Scheduled pod/metrics-server-54fd9b65b-jkwvq Successfully assigned kube-system/metrics-server-54fd9b65b-jkwvq to shp1
kube-system 31m Normal SuccessfulCreate replicaset/metrics-server-54fd9b65b Created pod: metrics-server-54fd9b65b-jkwvq
kube-system 31m Normal SuccessfulCreate replicaset/coredns-6799fbcd5 Created pod: coredns-6799fbcd5-fldhz
kube-system 31m Normal Pulling pod/local-path-provisioner-6c86858495-7ztwx Pulling image "rancher/local-path-provisioner:v0.0.26"
kube-system 31m Normal Pulling pod/metrics-server-54fd9b65b-jkwvq Pulling image "rancher/mirrored-metrics-server:v0.7.0"
kube-system 31m Normal Pulling pod/coredns-6799fbcd5-fldhz Pulling image "rancher/mirrored-coredns-coredns:1.10.1"
kube-system 31m Normal Pulled pod/local-path-provisioner-6c86858495-7ztwx Successfully pulled image "rancher/local-path-provisioner:v0.0.26" in 2.009s (2.009s including waiting)
kube-system 31m Normal Created pod/local-path-provisioner-6c86858495-7ztwx Created container local-path-provisioner
kube-system 31m Normal Started pod/local-path-provisioner-6c86858495-7ztwx Started container local-path-provisioner
kube-system 31m Normal Pulled pod/coredns-6799fbcd5-fldhz Successfully pulled image "rancher/mirrored-coredns-coredns:1.10.1" in 2.326s (2.326s including waiting)
kube-system 31m Normal Created pod/coredns-6799fbcd5-fldhz Created container coredns
kube-system 31m Normal Started pod/coredns-6799fbcd5-fldhz Started container coredns
kube-system 31m Normal Pulled pod/metrics-server-54fd9b65b-jkwvq Successfully pulled image "rancher/mirrored-metrics-server:v0.7.0" in 2.506s (2.506s including waiting)
kube-system 31m Normal Created pod/metrics-server-54fd9b65b-jkwvq Created container metrics-server
kube-system 31m Normal Started pod/metrics-server-54fd9b65b-jkwvq Started container metrics-server
kube-system 31m Warning Unhealthy pod/metrics-server-54fd9b65b-jkwvq Readiness probe failed: HTTP probe failed with statuscode: 500
kube-system 31m Warning Unhealthy pod/coredns-6799fbcd5-fldhz Readiness probe failed: Get "http://10.42.0.4:8181/ready": dial tcp 10.42.0.4:8181: connect: connection refused
kube-system 31m Warning Unhealthy pod/metrics-server-54fd9b65b-jkwvq Readiness probe failed: Get "https://10.42.0.3:10250/readyz": dial tcp 10.42.0.3:10250: connect: connection refused
default 29m Normal Starting node/shp1 Starting kubelet.
default 29m Warning InvalidDiskCapacity node/shp1 invalid capacity 0 on image filesystem
default 29m Normal NodeHasSufficientMemory node/shp1 Node shp1 status is now: NodeHasSufficientMemory
default 29m Normal NodeHasNoDiskPressure node/shp1 Node shp1 status is now: NodeHasNoDiskPressure
default 29m Normal NodeHasSufficientPID node/shp1 Node shp1 status is now: NodeHasSufficientPID
default 29m Warning Rebooted node/shp1 Node shp1 has been rebooted, boot id: 29977563-566c-4af6-99aa-24c209273ff3
default 29m Normal NodeNotReady node/shp1 Node shp1 status is now: NodeNotReady
default 29m Normal NodeAllocatableEnforced node/shp1 Updated Node Allocatable limit across pods
kube-system 29m Normal SandboxChanged pod/local-path-provisioner-6c86858495-7ztwx Pod sandbox changed, it will be killed and re-created.
kube-system 29m Normal SandboxChanged pod/metrics-server-54fd9b65b-jkwvq Pod sandbox changed, it will be killed and re-created.
kube-system 29m Normal SandboxChanged pod/coredns-6799fbcd5-fldhz Pod sandbox changed, it will be killed and re-created.
kube-system 29m Normal ApplyingManifest addon/ccm Applying manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
kube-system 29m Normal AppliedManifest addon/ccm Applied manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
kube-system 29m Normal ApplyingManifest addon/coredns Applying manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
kube-system 29m Normal AppliedManifest addon/coredns Applied manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
kube-system 29m Normal ApplyingManifest addon/local-storage Applying manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
kube-system 29m Normal AppliedManifest addon/local-storage Applied manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
kube-system 29m Normal ApplyingManifest addon/aggregated-metrics-reader Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
kube-system 29m Normal Pulled pod/local-path-provisioner-6c86858495-7ztwx Container image "rancher/local-path-provisioner:v0.0.26" already present on machine
kube-system 29m Normal AppliedManifest addon/aggregated-metrics-reader Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
kube-system 29m Normal Pulled pod/metrics-server-54fd9b65b-jkwvq Container image "rancher/mirrored-metrics-server:v0.7.0" already present on machine
kube-system 29m Normal ApplyingManifest addon/auth-delegator Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
kube-system 29m Normal AppliedManifest addon/auth-delegator Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
kube-system 29m Normal Pulled pod/coredns-6799fbcd5-fldhz Container image "rancher/mirrored-coredns-coredns:1.10.1" already present on machine
kube-system 29m Normal ApplyingManifest addon/auth-reader Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
kube-system 29m Normal AppliedManifest addon/auth-reader Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
kube-system 29m Normal ApplyingManifest addon/metrics-apiservice Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
kube-system 29m Normal Created pod/local-path-provisioner-6c86858495-7ztwx Created container local-path-provisioner
kube-system 29m Normal AppliedManifest addon/metrics-apiservice Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
kube-system 29m Normal Created pod/coredns-6799fbcd5-fldhz Created container coredns
kube-system 29m Normal Created pod/metrics-server-54fd9b65b-jkwvq Created container metrics-server
kube-system 29m Normal Started pod/local-path-provisioner-6c86858495-7ztwx Started container local-path-provisioner
kube-system 29m Normal ApplyingManifest addon/metrics-server-deployment Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
kube-system 29m Normal Started pod/coredns-6799fbcd5-fldhz Started container coredns
kube-system 29m Normal Started pod/metrics-server-54fd9b65b-jkwvq Started container metrics-server
kube-system 29m Normal AppliedManifest addon/metrics-server-deployment Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
kube-system 29m Normal ApplyingManifest addon/metrics-server-service Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
kube-system 29m Normal AppliedManifest addon/metrics-server-service Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
kube-system 29m Normal ApplyingManifest addon/resource-reader Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
kube-system 29m Normal AppliedManifest addon/resource-reader Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
default 29m Normal NodePasswordValidationComplete node/shp1 Deferred node password secret validation complete
kube-system 29m Normal ApplyingManifest addon/rolebindings Applying manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
kube-system 29m Normal AppliedManifest addon/rolebindings Applied manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
kube-system 29m Normal ApplyingManifest addon/runtimes Applying manifest at "/var/lib/rancher/k3s/server/manifests/runtimes.yaml"
kube-system 29m Normal AppliedManifest addon/runtimes Applied manifest at "/var/lib/rancher/k3s/server/manifests/runtimes.yaml"
kube-system 28m Warning Unhealthy pod/coredns-6799fbcd5-fldhz Readiness probe failed: Get "http://10.42.0.6:8181/ready": dial tcp 10.42.0.6:8181: connect: connection refused
default 28m Normal Starting node/shp1
kube-system 28m Warning Unhealthy pod/metrics-server-54fd9b65b-jkwvq Readiness probe failed: Get "https://10.42.0.5:10250/readyz": dial tcp 10.42.0.5:10250: connect: connection refused
kube-system 28m Warning Unhealthy pod/coredns-6799fbcd5-fldhz Readiness probe failed: HTTP probe failed with statuscode: 503
default 28m Normal NodeReady node/shp1 Node shp1 status is now: NodeReady
default 28m Normal RegisteredNode node/shp1 Node shp1 event: Registered Node shp1 in Controller
kube-system 28m Warning FailedToUpdateEndpoint endpoints/metrics-server Failed to update endpoint kube-system/metrics-server: Operation cannot be fulfilled on endpoints "metrics-server": the object has been modified; please apply your changes to the latest version and try again
kube-system 28m Warning Unhealthy pod/metrics-server-54fd9b65b-jkwvq Readiness probe failed: HTTP probe failed with statuscode: 500
default 23m Normal Starting node/shp1 Starting kubelet.
default 23m Warning InvalidDiskCapacity node/shp1 invalid capacity 0 on image filesystem
default 23m Normal NodeHasSufficientMemory node/shp1 Node shp1 status is now: NodeHasSufficientMemory
default 23m Normal NodeHasNoDiskPressure node/shp1 Node shp1 status is now: NodeHasNoDiskPressure
default 23m Normal NodeHasSufficientPID node/shp1 Node shp1 status is now: NodeHasSufficientPID
default 23m Warning Rebooted node/shp1 Node shp1 has been rebooted, boot id: b519e901-07f4-4b04-a95c-c029a58a9dd4
default 23m Normal NodeNotReady node/shp1 Node shp1 status is now: NodeNotReady
default 23m Normal NodeAllocatableEnforced node/shp1 Updated Node Allocatable limit across pods
default 23m Normal NodeReady node/shp1 Node shp1 status is now: NodeReady
kube-system 23m Normal ApplyingManifest addon/ccm Applying manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
kube-system 23m Normal AppliedManifest addon/ccm Applied manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
kube-system 23m Normal ApplyingManifest addon/coredns Applying manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
kube-system 23m Normal SandboxChanged pod/metrics-server-54fd9b65b-jkwvq Pod sandbox changed, it will be killed and re-created.
kube-system 23m Normal AppliedManifest addon/coredns Applied manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
kube-system 23m Normal SandboxChanged pod/coredns-6799fbcd5-fldhz Pod sandbox changed, it will be killed and re-created.
kube-system 23m Normal ApplyingManifest addon/local-storage Applying manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
kube-system 23m Normal SandboxChanged pod/local-path-provisioner-6c86858495-7ztwx Pod sandbox changed, it will be killed and re-created.
kube-system 23m Normal AppliedManifest addon/local-storage Applied manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
kube-system 23m Normal ApplyingManifest addon/aggregated-metrics-reader Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
kube-system 23m Normal AppliedManifest addon/aggregated-metrics-reader Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
kube-system 23m Normal ApplyingManifest addon/auth-delegator Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
kube-system 23m Normal AppliedManifest addon/auth-delegator Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
kube-system 23m Normal ApplyingManifest addon/auth-reader Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
kube-system 23m Normal AppliedManifest addon/auth-reader Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
kube-system 23m Normal ApplyingManifest addon/metrics-apiservice Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
kube-system 23m Normal AppliedManifest addon/metrics-apiservice Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
kube-system 23m Normal ApplyingManifest addon/metrics-server-deployment Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
kube-system 23m Normal AppliedManifest addon/metrics-server-deployment Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
kube-system 23m Normal ApplyingManifest addon/metrics-server-service Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
kube-system 23m Normal Pulled pod/local-path-provisioner-6c86858495-7ztwx Container image "rancher/local-path-provisioner:v0.0.26" already present on machine
kube-system 23m Normal Pulled pod/coredns-6799fbcd5-fldhz Container image "rancher/mirrored-coredns-coredns:1.10.1" already present on machine
kube-system 23m Normal Pulled pod/metrics-server-54fd9b65b-jkwvq Container image "rancher/mirrored-metrics-server:v0.7.0" already present on machine
kube-system 23m Normal AppliedManifest addon/metrics-server-service Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
kube-system 23m Normal Created pod/coredns-6799fbcd5-fldhz Created container coredns
kube-system 23m Normal Created pod/local-path-provisioner-6c86858495-7ztwx Created container local-path-provisioner
kube-system 23m Normal Created pod/metrics-server-54fd9b65b-jkwvq Created container metrics-server
kube-system 23m Normal Started pod/coredns-6799fbcd5-fldhz Started container coredns
kube-system 23m Normal Started pod/metrics-server-54fd9b65b-jkwvq Started container metrics-server
kube-system 23m Normal Started pod/local-path-provisioner-6c86858495-7ztwx Started container local-path-provisioner
kube-system 23m Normal ApplyingManifest addon/resource-reader Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
kube-system 23m Normal AppliedManifest addon/resource-reader Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
default 23m Normal NodePasswordValidationComplete node/shp1 Deferred node password secret validation complete
kube-system 23m Normal ApplyingManifest addon/rolebindings Applying manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
kube-system 23m Normal AppliedManifest addon/rolebindings Applied manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
kube-system 23m Normal ApplyingManifest addon/runtimes Applying manifest at "/var/lib/rancher/k3s/server/manifests/runtimes.yaml"
kube-system 23m Normal AppliedManifest addon/runtimes Applied manifest at "/var/lib/rancher/k3s/server/manifests/runtimes.yaml"
default 23m Normal Starting node/shp1
kube-system 23m Warning Unhealthy pod/coredns-6799fbcd5-fldhz Readiness probe failed: Get "http://10.42.0.8:8181/ready": dial tcp 10.42.0.8:8181: connect: connection refused
kube-system 23m Warning Unhealthy pod/metrics-server-54fd9b65b-jkwvq Readiness probe failed: Get "https://10.42.0.9:10250/readyz": dial tcp 10.42.0.9:10250: connect: connection refused
default 22m Normal RegisteredNode node/shp1 Node shp1 event: Registered Node shp1 in Controller
kube-system 22m Warning Unhealthy pod/metrics-server-54fd9b65b-jkwvq Readiness probe failed: HTTP probe failed with statuscode: 500
default 15m Normal ScalingReplicaSet deployment/platform-1715263488 Scaled up replica set platform-1715263488-7f75649476 to 1
default 15m Normal SuccessfulCreate replicaset/platform-1715263488-7f75649476 Created pod: platform-1715263488-7f75649476-4bz5j
default 15m Normal Scheduled pod/platform-1715263488-7f75649476-4bz5j Successfully assigned default/platform-1715263488-7f75649476-4bz5j to shp1
default 15m Normal SuccessfulCreate statefulset/platform-db create Claim data-platform-db-0 Pod platform-db-0 in StatefulSet platform-db success
default 15m Normal Scheduled pod/platform-keycloak-0 Successfully assigned default/platform-keycloak-0 to shp1
default 15m Normal SuccessfulCreate statefulset/platform-keycloak create Pod platform-keycloak-0 in StatefulSet platform-keycloak successful
default 15m Normal WaitForFirstConsumer persistentvolumeclaim/data-platform-db-0 waiting for first consumer to be created before binding
default 15m Normal SuccessfulCreate statefulset/platform-db create Pod platform-db-0 in StatefulSet platform-db successful
default 15m Normal Provisioning persistentvolumeclaim/data-platform-db-0 External provisioner is provisioning volume for claim "default/data-platform-db-0"
default 15m Normal ExternalProvisioning persistentvolumeclaim/data-platform-db-0 Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
default 15m Normal SuccessfulCreate job/platform-keycloak-keycloak-config-cli Created pod: platform-keycloak-keycloak-config-cli-s7g9r
default 15m Normal Scheduled pod/platform-keycloak-keycloak-config-cli-s7g9r Successfully assigned default/platform-keycloak-keycloak-config-cli-s7g9r to shp1
default 15m Normal Pulling pod/platform-keycloak-0 Pulling image "docker.io/bitnami/keycloak:24.0.3-debian-12-r0"
default 15m Normal Pulling pod/platform-keycloak-keycloak-config-cli-s7g9r Pulling image "docker.io/bitnami/keycloak-config-cli:5.12.0-debian-12-r1"
kube-system 15m Normal Pulling pod/helper-pod-create-pvc-1c1d97c9-63dc-475e-8ab4-b8ec6ffb5302 Pulling image "rancher/mirrored-library-busybox:1.36.1"
kube-system 15m Normal Pulled pod/helper-pod-create-pvc-1c1d97c9-63dc-475e-8ab4-b8ec6ffb5302 Successfully pulled image "rancher/mirrored-library-busybox:1.36.1" in 1.286s (1.286s including waiting)
kube-system 15m Normal Created pod/helper-pod-create-pvc-1c1d97c9-63dc-475e-8ab4-b8ec6ffb5302 Created container helper-pod
kube-system 15m Normal Started pod/helper-pod-create-pvc-1c1d97c9-63dc-475e-8ab4-b8ec6ffb5302 Started container helper-pod
default 15m Normal ProvisioningSucceeded persistentvolumeclaim/data-platform-db-0 Successfully provisioned volume pvc-1c1d97c9-63dc-475e-8ab4-b8ec6ffb5302
default 15m Normal Scheduled pod/platform-db-0 Successfully assigned default/platform-db-0 to shp1
default 15m Normal Pulling pod/platform-db-0 Pulling image "docker.io/bitnami/os-shell:12-debian-12-r18"
default 15m Normal Pulled pod/platform-keycloak-keycloak-config-cli-s7g9r Successfully pulled image "docker.io/bitnami/keycloak-config-cli:5.12.0-debian-12-r1" in 14.948s (14.948s including waiting)
default 15m Normal Created pod/platform-keycloak-keycloak-config-cli-s7g9r Created container keycloak-config-cli
default 15m Normal Started pod/platform-keycloak-keycloak-config-cli-s7g9r Started container keycloak-config-cli
default 15m Normal Pulled pod/platform-db-0 Successfully pulled image "docker.io/bitnami/os-shell:12-debian-12-r18" in 7.101s (7.101s including waiting)
default 15m Normal Created pod/platform-db-0 Created container copy-certs
default 15m Normal Started pod/platform-db-0 Started container copy-certs
default 14m Normal Pulled pod/platform-keycloak-0 Successfully pulled image "docker.io/bitnami/keycloak:24.0.3-debian-12-r0" in 22.282s (22.282s including waiting)
default 14m Normal Created pod/platform-keycloak-0 Created container init-quarkus-directory
default 14m Normal Started pod/platform-keycloak-0 Started container init-quarkus-directory
default 14m Normal Pulling pod/platform-db-0 Pulling image "docker.io/bitnami/postgresql:16.2.0-debian-12-r14"
default 14m Normal Pulled pod/platform-keycloak-0 Container image "docker.io/bitnami/keycloak:24.0.3-debian-12-r0" already present on machine
default 14m Normal Created pod/platform-keycloak-0 Created container keycloak
default 14m Normal Started pod/platform-keycloak-0 Started container keycloak
default 14m Normal Pulled pod/platform-db-0 Successfully pulled image "docker.io/bitnami/postgresql:16.2.0-debian-12-r14" in 8.244s (8.244s including waiting)
default 14m Normal Created pod/platform-db-0 Created container postgresql
default 14m Normal Started pod/platform-db-0 Started container postgresql
default 14m Warning Unhealthy pod/platform-db-0 Readiness probe failed: 127.0.0.1:5432 - no response
default 13m Normal Pulling pod/platform-1715263488-7f75649476-4bz5j Pulling image "registry.opentdf.io/platform:0.1.0"
default 13m Warning Failed pod/platform-1715263488-7f75649476-4bz5j Failed to pull image "registry.opentdf.io/platform:0.1.0": rpc error: code = NotFound desc = failed to pull and unpack image "registry.opentdf.io/platform:0.1.0": failed to resolve reference "registry.opentdf.io/platform:0.1.0": registry.opentdf.io/platform:0.1.0: not found
default 13m Warning Failed pod/platform-1715263488-7f75649476-4bz5j Error: ErrImagePull
default 13m Warning Failed pod/platform-1715263488-7f75649476-4bz5j Error: ImagePullBackOff
default 12m Warning Unhealthy pod/platform-keycloak-0 Readiness probe failed: Get "http://10.42.0.12:8080/realms/master": dial tcp 10.42.0.12:8080: connect: connection refused
default 12m Normal SuccessfulCreate job/platform-keycloak-keycloak-config-cli Created pod: platform-keycloak-keycloak-config-cli-2l9tk
default 12m Normal Scheduled pod/platform-keycloak-keycloak-config-cli-2l9tk Successfully assigned default/platform-keycloak-keycloak-config-cli-2l9tk to shp1
default 12m Normal Pulled pod/platform-keycloak-keycloak-config-cli-2l9tk Container image "docker.io/bitnami/keycloak-config-cli:5.12.0-debian-12-r1" already present on machine
default 12m Normal Created pod/platform-keycloak-keycloak-config-cli-2l9tk Created container keycloak-config-cli
default 12m Normal Started pod/platform-keycloak-keycloak-config-cli-2l9tk Started container keycloak-config-cli
default 12m Normal Completed job/platform-keycloak-keycloak-config-cli Job completed
default 10m Normal BackOff pod/platform-1715263488-7f75649476-4bz5j Back-off pulling image "registry.opentdf.io/platform:0.1.0"
default 10m Normal Killing pod/platform-db-0 Stopping container postgresql
default 10m Normal Killing pod/platform-keycloak-0 Stopping container keycloak
default 4m16s Normal ScalingReplicaSet deployment/platform-1715264152 Scaled up replica set platform-1715264152-7696569549 to 1
default 4m16s Normal SuccessfulCreate replicaset/platform-1715264152-7696569549 Created pod: platform-1715264152-7696569549-h7d45
default 4m15s Normal Scheduled pod/platform-1715264152-7696569549-h7d45 Successfully assigned default/platform-1715264152-7696569549-h7d45 to shp1
default 4m16s Normal SuccessfulCreate statefulset/platform-keycloak create Pod platform-keycloak-0 in StatefulSet platform-keycloak successful
default 4m16s Normal SuccessfulCreate statefulset/platform-db create Pod platform-db-0 in StatefulSet platform-db successful
default 4m15s Normal Scheduled pod/platform-keycloak-0 Successfully assigned default/platform-keycloak-0 to shp1
default 4m15s Normal Scheduled pod/platform-db-0 Successfully assigned default/platform-db-0 to shp1
default 4m16s Normal SuccessfulCreate job/platform-keycloak-keycloak-config-cli Created pod: platform-keycloak-keycloak-config-cli-t77rm
default 4m15s Normal Scheduled pod/platform-keycloak-keycloak-config-cli-t77rm Successfully assigned default/platform-keycloak-keycloak-config-cli-t77rm to shp1
default 4m15s Normal Pulled pod/platform-keycloak-keycloak-config-cli-t77rm Container image "docker.io/bitnami/keycloak-config-cli:5.12.0-debian-12-r1" already present on machine
```
default 4m15s Normal Pulled pod/platform-keycloak-0 Container image "docker.io/bitnami/keycloak:24.0.3-debian-12-r0" already present on machine
default 4m15s Normal Created pod/platform-keycloak-keycloak-config-cli-t77rm Created container keycloak-config-cli
default 4m15s Normal Created pod/platform-keycloak-0 Created container init-quarkus-directory
default 4m15s Normal Started pod/platform-keycloak-keycloak-config-cli-t77rm Started container keycloak-config-cli
default 4m15s Normal Started pod/platform-keycloak-0 Started container init-quarkus-directory
default 4m15s Normal Pulled pod/platform-db-0 Container image "docker.io/bitnami/os-shell:12-debian-12-r18" already present on machine
default 4m15s Normal Created pod/platform-db-0 Created container copy-certs
default 4m15s Normal Started pod/platform-db-0 Started container copy-certs
default 4m15s Normal Pulled pod/platform-keycloak-0 Container image "docker.io/bitnami/keycloak:24.0.3-debian-12-r0" already present on machine
default 4m15s Normal Pulled pod/platform-db-0 Container image "docker.io/bitnami/postgresql:16.2.0-debian-12-r14" already present on machine
default 4m14s Normal Created pod/platform-db-0 Created container postgresql
default 4m14s Normal Created pod/platform-keycloak-0 Created container keycloak
default 4m14s Normal Started pod/platform-keycloak-0 Started container keycloak
default 4m14s Normal Started pod/platform-db-0 Started container postgresql
default 2m56s Warning Unhealthy pod/platform-keycloak-0 Readiness probe failed: Get "http://10.42.0.18:8080/realms/master": dial tcp 10.42.0.18:8080: connect: connection refused
default 2m52s Normal Pulling pod/platform-1715264152-7696569549-h7d45 Pulling image "registry.opentdf.io/platform:0.1.0"
default 2m52s Warning Failed pod/platform-1715264152-7696569549-h7d45 Failed to pull image "registry.opentdf.io/platform:0.1.0": rpc error: code = NotFound desc = failed to pull and unpack image "registry.opentdf.io/platform:0.1.0": failed to resolve reference "registry.opentdf.io/platform:0.1.0": registry.opentdf.io/platform:0.1.0: not found
default 2m52s Warning Failed pod/platform-1715264152-7696569549-h7d45 Error: ErrImagePull
default 2m46s Warning Unhealthy pod/platform-keycloak-0 Readiness probe failed: Get "http://10.42.0.18:8080/realms/master": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
default 2m39s Normal Completed job/platform-keycloak-keycloak-config-cli Job completed
default 2m25s Warning Failed pod/platform-1715264152-7696569549-h7d45 Error: ImagePullBackOff
default 2m11s Normal BackOff pod/platform-1715264152-7696569549-h7d45 Back-off pulling image "registry.opentdf.io/platform:0.1.0"
Description
The README itself calls out the fact that the default chart doesn't work:

> **_NOTE:_** Until a stable platform release is available, set the `image.tag` to `nightly` to use the latest nightly build.
Is there a reason we aren't providing a working default and instead calling attention to the planned/future configuration in the README text?
Is this user error, or is there a reason the current configuration is preferred?
Would you accept a PR that sets a working default `image.tag` and rewords this bit of the README?
For instance, it could say:

> Currently, we are using an unstable `nightly` image as the default. In the future, expect this to be a pinned version.
Or perhaps we could just provide some context and the extra configuration in the "Playground" YAML?
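As a sketch, the playground values could carry the workaround explicitly. The `image.tag` key path here is an assumption based on the README's note; the actual structure may differ:

```yaml
# myvalues.yaml — explicit override until a stable platform tag is published
# (key path assumed from the README's `image.tag` note)
playground: true
image:
  tag: nightly  # latest nightly build; swap for a pinned version once one exists
```

With a snippet like this in the documented playground values, a first-time `helm install -f myvalues.yaml` would not hit the `registry.opentdf.io/platform:0.1.0: not found` pull error shown in the events above.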