Comments (4)
Hey Brandon,
I'm new to open source and k3s. After going through this issue, I understood why local-path-provisioner needs to read pods' logs, and I also learned a bit about persistent volumes. I think I can solve this by adding the permission to the ClusterRole manifest.
But I noticed that you have self-assigned this issue. If possible, please assign it to me; it would be my first contribution to k3s. If not, please point me to the next beginner-friendly issue.
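For context on the fix being described: it amounts to adding `pods/log` to the provisioner's read-only ClusterRole rule. A minimal sketch of the changed rule (names taken from the k3s local-storage manifest):

```yaml
# Read-only rule from the local-path-provisioner ClusterRole, with the
# missing subresource added so the provisioner can read helper-pod logs.
- apiGroups:
  - ""
  resources:
  - nodes
  - persistentvolumeclaims
  - configmaps
  - pods/log   # the added permission
  verbs:
  - get
  - list
  - watch
```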
This has already been fixed in the above-linked PR. Note that this is a backport issue: the fix has already been merged to master, and this issue exists to track backporting it.
oh sorry, thanks for your reply @brandond.
## Environment Details
Reproduced using VERSION=v1.27.12+k3s1
Validated using COMMIT=2d48b19624efec5082f1864e64f3a13ca4124354
Infrastructure
- Cloud
Node(s) CPU architecture, OS, and version:
Linux 5.14.21-150500.53-default x86_64 GNU/Linux
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP5"
Cluster Configuration:
NAME STATUS ROLES AGE VERSION
ip-21-23-27-25 Ready control-plane,etcd,master 3h35m v1.27.12+k3s-2d48b196
Config.yaml:
node-external-ip: 3.2.1.1
token: YOUR_TOKEN_HERE
write-kubeconfig-mode: 644
debug: true
profile: cis
protect-kernel-defaults: true
cluster-init: true
embedded-registry: true
Reproduction
$ curl https://get.k3s.io --output install-k3s.sh
$ sudo chmod +x install-k3s.sh
$ sudo groupadd --system etcd && sudo useradd -s /sbin/nologin --system -g etcd etcd
$ sudo modprobe ip_vs_rr
$ sudo modprobe ip_vs_wrr
$ sudo modprobe ip_vs_sh
$ printf "vm.panic_on_oom=0\nvm.overcommit_memory=1\nkernel.panic=10\nkernel.panic_on_oops=1\n" > ~/90-kubelet.conf
$ sudo cp 90-kubelet.conf /etc/sysctl.d/
$ sudo systemctl restart systemd-sysctl
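A side note on the printf step: `sudo` would apply only to `printf`, not to the shell redirection, which is why the file goes to `$HOME` first and is then copied into place with `sudo cp`. The same step as a heredoc, with the four CIS kubelet sysctls spelled out:

```shell
# Write the kubelet sysctl settings to a local file first; copying into
# /etc/sysctl.d and reloading still needs the privileged steps above.
cat > 90-kubelet.conf <<'EOF'
vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
EOF
grep -c '=' 90-kubelet.conf   # prints 4: one line per setting
```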
$ COMMIT=2d48b19624efec5082f1864e64f3a13ca4124354
$ sudo INSTALL_K3S_COMMIT=$COMMIT INSTALL_K3S_EXEC=server ./install-k3s.sh
$ set_kubefig
$ vim pvc-test.yaml
$ vim pod-test.yaml
$ k get deploy -n kube-system local-path-provisioner -o jsonpath='{$.spec.template.spec.containers[:1].image}'
$ k apply -f pvc-test.yaml
$ k apply -f pod-test.yaml
$ kgp -A -o wide
$ k delete -f pod-test.yaml -f pvc-test.yaml
$ kg pv -A
$ k logs pod/local-path-provisioner
$ k logs pod/local-path-provisioner-79ffd768b5-hbr5f -n kube-system
$ kg clusterrole local-path-provisioner-role -o yaml
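For anyone puzzled by the shorthand: `k`, `kg`, and `kgp` are not k3s built-ins; they are assumed to be personal kubectl aliases along these lines:

```shell
# Hypothetical aliases matching the shorthand in the transcript above.
alias k='kubectl'
alias kg='kubectl get'
alias kgp='kubectl get pods'
alias kgp   # show the stored definition
```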
Results:
// both the new COMMIT build and the existing release ship the same local-path-provisioner image version
$ k get deploy -n kube-system local-path-provisioner -o jsonpath='{$.spec.template.spec.containers[:1].image}'
rancher/local-path-provisioner:v0.0.26
// existing release ClusterRole permissions; note the missing resource: pods/log
$ kg clusterrole local-path-provisioner-role -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: local-storage
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2024-04-15T18:42:20Z"
  labels:
    objectset.rio.cattle.io/hash: 183f35c65ffbc3064603f43f1580d8c68a2dabd4
  name: local-path-provisioner-role
  resourceVersion: "273"
  uid: 6c447fa9-505f-43f3-b3d7-fa289476146f
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - persistentvolumeclaims
  - configmaps
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  - persistentvolumes
  - pods
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch
// an install from the latest commit now includes the pods/log resource in the ClusterRole
$ kg clusterrole local-path-provisioner-role -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAYDAAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: local-storage
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2024-04-15T18:42:22Z"
  labels:
    objectset.rio.cattle.io/hash: 183f35c65ffbc3064603f43f1580d8c68a2dabd4
  name: local-path-provisioner-role
  resourceVersion: "266"
  uid: d17fd944-63fd-48a2-8091-8ef1fab134a5
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - persistentvolumeclaims
  - configmaps
  - pods/log
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  - persistentvolumes
  - pods
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch
For what it's worth, I did not hit the error in the pod logs during reproduction. But since the change is a permissions change on the ClusterRole, it is straightforward to check that the role has the right permissions via the kubectl API.
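One way to do that check, impersonating the provisioner's service account (the account name here is an assumption based on the default k3s local-storage manifest):

```shell
# Asks the API server whether the provisioner's service account may read
# pod logs; prints "yes" with the fixed ClusterRole, "no" without it.
SA=system:serviceaccount:kube-system:local-path-provisioner-service-account
kubectl auth can-i get pods --subresource=log --as="$SA" 2>/dev/null || true
# `auth can-i` exits non-zero on "no", hence the trailing `|| true`
```

Since k3s bundles kubectl, `k3s kubectl auth can-i ...` works the same way.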
$ k logs pod/local-path-provisioner-79ffd768b5-hbr5f -n kube-system
I0415 18:42:40.149547 1 controller.go:811] Starting provisioner controller rancher.io/local-path_local-path-provisioner-79ffd768b5-hbr5f_1ad2a9f4-b6df-45a0-86d9-cf3b0e708a01!
I0415 18:42:40.249973 1 controller.go:860] Started provisioner controller rancher.io/local-path_local-path-provisioner-79ffd768b5-hbr5f_1ad2a9f4-b6df-45a0-86d9-cf3b0e708a01!
I0415 21:31:29.435853 1 controller.go:1337] provision "default/test-pvc" class "local-path": started
time="2024-04-15T21:31:29Z" level=info msg="Creating volume pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f at ip-21-23-27-25:/var/lib/rancher/k3s/storage/pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f_default_test-pvc"
time="2024-04-15T21:31:29Z" level=info msg="create the helper pod helper-pod-create-pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f into kube-system"
I0415 21:31:29.438960 1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-pvc", UID:"f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f", APIVersion:"v1", ResourceVersion:"39115", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/test-pvc"
time="2024-04-15T21:31:33Z" level=info msg="Volume pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f has been created on ip-21-23-27-25:/var/lib/rancher/k3s/storage/pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f_default_test-pvc"
time="2024-04-15T21:31:33Z" level=info msg="Start of helper-pod-create-pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f logs"
time="2024-04-15T21:31:33Z" level=info msg="Illegal option -a"
time="2024-04-15T21:31:33Z" level=info msg="End of helper-pod-create-pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f logs"
I0415 21:31:33.529520 1 controller.go:1442] provision "default/test-pvc" class "local-path": volume "pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f" provisioned
I0415 21:31:33.529544 1 controller.go:1455] provision "default/test-pvc" class "local-path": succeeded
I0415 21:31:33.529552 1 volume_store.go:212] Trying to save persistentvolume "pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f"
I0415 21:31:33.535082 1 volume_store.go:219] persistentvolume "pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f" saved
I0415 21:31:33.535313 1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-pvc", UID:"f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f", APIVersion:"v1", ResourceVersion:"39115", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f
I0415 21:34:45.922683 1 controller.go:1471] delete "pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f": started
time="2024-04-15T21:34:45Z" level=info msg="Deleting volume pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f at ip-21-23-27-25:/var/lib/rancher/k3s/storage/pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f_default_test-pvc"
time="2024-04-15T21:34:45Z" level=info msg="create the helper pod helper-pod-delete-pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f into kube-system"
time="2024-04-15T21:34:48Z" level=info msg="Volume pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f has been deleted on ip-21-23-27-25:/var/lib/rancher/k3s/storage/pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f_default_test-pvc"
time="2024-04-15T21:34:48Z" level=info msg="Start of helper-pod-delete-pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f logs"
time="2024-04-15T21:34:48Z" level=info msg="Illegal option -a"
time="2024-04-15T21:34:48Z" level=info msg="End of helper-pod-delete-pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f logs"
I0415 21:34:48.962630 1 controller.go:1486] delete "pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f": volume deleted
I0415 21:34:48.967323 1 controller.go:1531] delete "pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f": persistentvolume deleted
I0415 21:34:48.967342 1 controller.go:1536] delete "pvc-f98d7cb8-4aa8-41ee-ac8b-eccdbaab8e2f": succeeded