JuiceFS CSI Driver allows you to use JuiceFS in Kubernetes, learn more at:
JuiceFS CSI Driver is open-sourced under Apache License 2.0, see LICENSE.
JuiceFS CSI Driver
Home Page: https://github.com/juicedata/juicefs
License: Apache License 2.0
juicedata/juicefs-csi-driver: v0.10.5
pod status:
juicefs-<nodeName>-<pvName> CrashLoopBackOff
incorrect command:
/bin/mount.juicefs mysql://root:123456@(10.99.100.166:3306)/juicefs /jfs/<pvName>
err message:
Syntax error: "(" unexpected
correct command:
/bin/mount.juicefs mysql://root:123456@\(10.99.100.166:3306\)/juicefs /jfs/<pvName>
success
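The failure above is shell quoting, not JuiceFS itself: the unescaped parentheses in the MySQL meta URL are shell metacharacters, so /bin/sh reports a syntax error before mount.juicefs ever runs. A minimal sketch of keeping them literal (URL copied from the report):

```shell
# Parentheses must be escaped or quoted before the URL reaches a shell.
# Single quotes keep every character literal, equivalent to the \( \)
# escaping shown in the corrected command above.
META_URL='mysql://root:123456@(10.99.100.166:3306)/juicefs'
echo "$META_URL"
```
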
I'm installing the CSI driver chart with the command below:
helm upgrade juicefs-csi-driver juicefs-csi-driver/juicefs-csi-driver --install -f juicefs_values.yaml
and the content of juicefs_values.yaml is:
- name: juicefs
  enabled: true
  reclaimPolicy: Retain
  backend:
    name: juicefs
    metaurl: redis://:[email protected]
    storage: s3
    accessKey: minio
    secretKey: minio123
    bucket: http://minio-service.default.svc:9000/milvus-test
I'm using juicefs as the name parameter, and I've verified that JuiceFS itself is working correctly with the output below:
...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: juicefs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Pi
  storageClassName: juicefs-sc
...
The example uses juicefs-sc as the storageClassName, but in fact we created the StorageClass with the name juicefs.
Right now, we need to shut down the application completely to upgrade this CSI driver.
Something like the "mountOptions" in the YAML below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc
  namespace: kube-system
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
mountOptions:
  - allow-other
parameters:
  csi.storage.k8s.io/node-publish-secret-name: juicefs-6hh8cg24g8
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: juicefs-6hh8cg24g8
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
provisioner: csi.juicefs.com
Warning FailedMount 16s (x7 over 47s) kubelet, minikube MountVolume.SetUp failed for volume "registration-dir" : hostPath type check failed: /var/lib/kubelet/plugins_registry/ is not a directory
Workaround:
minikube ssh
sudo mkdir -p /var/lib/kubelet/plugins_registry/
Package juicefs-cli and util-linux with https://github.com/GoogleContainerTools/distroless to reduce image size and improve determinism.
When the meta engine is down, the driver has problems cleaning up existing mount points.
I0314 03:32:54.829147 1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/159f0ab4-1337-4249-8577-9b446909246d/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.829214 1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/159f0ab4-1337-4249-8577-9b446909246d/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.829312 1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/babd88dc-ed30-4fe5-a814-9c71b1e53de2/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.829343 1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/babd88dc-ed30-4fe5-a814-9c71b1e53de2/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.849151 1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/7635a844-d00d-4208-a03e-eb95a7493e92/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.849200 1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/7635a844-d00d-4208-a03e-eb95a7493e92/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.849258 1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/311d1dbf-ddd1-43a4-9e13-f9828803cd7f/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.849290 1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/311d1dbf-ddd1-43a4-9e13-f9828803cd7f/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.849333 1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/0d627b6c-156e-4ed6-8fcf-8a907c2e9f24/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.849359 1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/0d627b6c-156e-4ed6-8fcf-8a907c2e9f24/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.849401 1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/9ec56f3f-2ac1-4d0d-9b37-738a9215a8ac/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.849429 1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/9ec56f3f-2ac1-4d0d-9b37-738a9215a8ac/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
The juicefs-plugin container in the juicefs-csi-node pod enters the CrashLoopBackOff state because of liveness probe failures after some chaos injection.
Pod status
Warning Unhealthy 3m6s (x10 over 4m46s) kubelet, ip-172-20-53-84.us-west-1.compute.internal Liveness probe failed: Get http://172.20.53.84:9808/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Pod logs
juicefs-plugin 2020-03-01T08:53:01.71694209Z I0301 08:53:01.716596 1 driver.go:29 Driver: csi.juicefs.com Version: 0.4.0
juicefs-plugin 2020-03-01T08:53:01.718131295Z I0301 08:53:01.718053 1 mount_linux.go:174 Cannot run systemd-run, assuming non-systemd OS
juicefs-plugin 2020-03-01T08:53:01.718274488Z I0301 08:53:01.718069 1 mount_linux.go:175 systemd-run failed with: exit status 1
juicefs-plugin 2020-03-01T08:53:01.718301342Z I0301 08:53:01.718078 1 mount_linux.go:176 systemd-run output: Failed to create bus connection: No such file or directory
juicefs-plugin 2020-03-01T08:53:01.719807055Z I0301 08:53:01.719195 1 mount_linux.go:174 Cannot run systemd-run, assuming non-systemd OS
juicefs-plugin 2020-03-01T08:53:01.719821081Z I0301 08:53:01.719211 1 mount_linux.go:175 systemd-run failed with: exit status 1
juicefs-plugin 2020-03-01T08:53:01.719826043Z I0301 08:53:01.719222 1 mount_linux.go:176 systemd-run output: Failed to create bus connection: No such file or directory
juicefs-plugin 2020-03-01T08:53:01.719830592Z I0301 08:53:01.719384 1 driver.go:66 Listening for connection on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
juicefs-plugin 2020-03-01T08:53:03.319457137Z I0301 08:53:03.319356 1 node.go:190 NodeGetInfo: called with args
juicefs-plugin 2020-03-01T08:53:19.819314863Z I0301 08:53:19.817523 1 node.go:144 NodeUnpublishVolume: called with args volume_id:"pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e" target_path:"/var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount"
juicefs-plugin 2020-03-01T08:53:19.819337825Z I0301 08:53:19.818022 1 node.go:153 NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount
juicefs-plugin 2020-03-01T08:53:19.819344287Z I0301 08:53:19.818039 1 mount_linux.go:211 Unmounting /var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount
juicefs-plugin 2020-03-01T08:53:19.891456554Z E0301 08:53:19.891354 1 driver.go:53 GRPC error: rpc error: code = Internal desc = Could not unmount "/var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount": Unmount failed: exit status 32
juicefs-plugin 2020-03-01T08:53:19.891475673Z Unmounting arguments: /var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount
juicefs-plugin 2020-03-01T08:53:19.891481783Z Output: umount: /var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount: not mounted.
juicefs-plugin 2020-03-01T08:53:19.891486586Z
node-driver-registrar 2020-03-01T08:53:01.917351514Z I0301 08:53:01.916474 1 main.go:110 Version: v1.1.0-0-g80a94421
node-driver-registrar 2020-03-01T08:53:01.917378114Z I0301 08:53:01.916518 1 main.go:120 Attempting to open a gRPC connection with: "/csi/csi.sock"
node-driver-registrar 2020-03-01T08:53:01.917384312Z I0301 08:53:01.916531 1 connection.go:151 Connecting to unix:///csi/csi.sock
node-driver-registrar 2020-03-01T08:53:01.917478996Z I0301 08:53:01.917414 1 main.go:127 Calling CSI driver to discover driver name
node-driver-registrar 2020-03-01T08:53:01.917538499Z I0301 08:53:01.917462 1 connection.go:180 GRPC call: /csi.v1.Identity/GetPluginInfo
node-driver-registrar 2020-03-01T08:53:01.918413895Z I0301 08:53:01.917472 1 connection.go:181 GRPC request: {}
node-driver-registrar 2020-03-01T08:53:01.920875388Z I0301 08:53:01.919200 1 connection.go:183 GRPC response: {"name":"csi.juicefs.com","vendor_version":"0.4.0"}
node-driver-registrar 2020-03-01T08:53:01.920893792Z I0301 08:53:01.919732 1 connection.go:184 GRPC error: <nil>
node-driver-registrar 2020-03-01T08:53:01.920899347Z I0301 08:53:01.919739 1 main.go:137 CSI driver name: "csi.juicefs.com"
node-driver-registrar 2020-03-01T08:53:01.92090414Z I0301 08:53:01.919938 1 node_register.go:54 Starting Registration Server at: /registration/csi.juicefs.com-reg.sock
node-driver-registrar 2020-03-01T08:53:01.920909174Z I0301 08:53:01.920024 1 node_register.go:61 Registration Server started at: /registration/csi.juicefs.com-reg.sock
node-driver-registrar 2020-03-01T08:53:02.318863064Z I0301 08:53:02.318241 1 main.go:77 Received GetInfo call: &InfoRequest{}
node-driver-registrar 2020-03-01T08:53:03.318440053Z I0301 08:53:03.318150 1 main.go:77 Received GetInfo call: &InfoRequest{}
node-driver-registrar 2020-03-01T08:53:03.39350671Z I0301 08:53:03.393407 1 main.go:87 Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
liveness-probe 2020-03-01T08:53:02.075990243Z W0301 08:53:02.075794 1 deprecatedflags.go:53 Warning: option connection-timeout="3s" is deprecated and has no effect
liveness-probe 2020-03-01T08:53:02.076034383Z I0301 08:53:02.075838 1 connection.go:151 Connecting to unix:///csi/csi.sock
liveness-probe 2020-03-01T08:53:02.07712102Z I0301 08:53:02.077055 1 main.go:86 calling CSI driver to discover driver name
liveness-probe 2020-03-01T08:53:02.077885125Z I0301 08:53:02.077820 1 main.go:91 CSI driver name: "csi.juicefs.com"
liveness-probe 2020-03-01T08:53:02.077923772Z I0301 08:53:02.077872 1 main.go:100 Serving requests to /healthz on: 0.0.0.0:9808
liveness-probe 2020-03-01T08:53:58.655528503Z E0301 08:53:58.655238 1 connection.go:129 Lost connection to unix:///csi/csi.sock.
node-driver-registrar 2020-03-01T08:53:58.655487491Z E0301 08:53:58.654969 1 connection.go:129 Lost connection to unix:///csi/csi.sock.
node-driver-registrar 2020-03-01T08:54:58.655237845Z E0301 08:54:58.654795 1 connection.go:129 Lost connection to unix:///csi/csi.sock.
liveness-probe 2020-03-01T08:54:58.655148872Z E0301 08:54:58.654839 1 connection.go:129 Lost connection to unix:///csi/csi.sock.
PV & PVC status
~ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e 10Pi RWX Delete Bound chaos-sut/juicefs-pvc juicefs-sc 3d22h
~ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
juicefs-pvc Bound pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e 10Pi RWX juicefs-sc 3d22h
~ kubectl get po
No resources found in chaos-sut namespace.
Prevent the plugin from draining all node resources under heavy load.
Does it support Redis Sentinel mode? If so, how do we set it up? Thanks.
version: juicefs-csi-driver/v0.10.7
After modifying the mountOptions parameter cache-size in the StorageClass and PV from 500 to 1000, without restarting the business container, I force-restarted the corresponding JuiceFS mount pod juicefs-qsy05-pvc-c7f6e1c4-5c3d-4fc7-8955-63ffdebbedc5, but its mount parameters were unchanged, still the old value of 500:
/bin/mount.juicefs redis://xxxxx:6379/1 /jfs/pvc-c7f6e1c4-5c3d-4fc7-8955-63ffdebbedc5
-o writeback,enable-xattr,max-uploads=60,cache-size=500
It should be 1000.
Only when I restarted the business container did the cache-size in the mount pod's mount parameters change to 1000.
Because we need to upgrade the mountOptions parameters smoothly, it would be unfriendly if the business container had to be restarted just to update them.
To enhance resource control, logging and isolation.
When the mount pod name is longer than 63 characters, it is truncated to 63 and the pod is still created.
But you then can't get the pod info by its original name, so the mount action fails.
We SHOULD check the length of the pod name and return a failure as soon as it exceeds 63 characters.
In our production environment the node name is often very long, so the pod name can easily become overlong.
Can we use status.hostIP instead of spec.nodeName to format the pod name? I don't know if it would cause any new problems!
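As a stopgap on the caller side, the length can be validated before the pod is created. A hypothetical shell sketch (the node and PV names are placeholders taken from elsewhere in this document):

```shell
# Kubernetes names used as DNS labels are limited to 63 characters;
# fail fast instead of creating a truncated, unfindable mount pod.
NODE_NAME="ip-172-20-53-84.us-west-1.compute.internal"  # placeholder
PV_NAME="pvc-467cfe22-0099-442f-8165-61be09295da1"      # placeholder
POD_NAME="juicefs-${NODE_NAME}-${PV_NAME}"
if [ "${#POD_NAME}" -gt 63 ]; then
  echo "mount pod name exceeds 63 chars (${#POD_NAME}): $POD_NAME" >&2
fi
```
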
We use IAM roles for service accounts to grant access to S3 buckets. Both juicefs-csi-controller and juicefs-csi-node work fine when manually mounting the S3 backend storage.
The problem is that the jfs-mount pods are provisioned with the default service account, which doesn't have access to the S3 backend, leading to a mounted but unusable PV/PVC.
Maybe more configuration of jfs-mount could be exposed in .values.storageClasses.mountPod.*?
BTW, what are the minimal S3 privileges needed to make JuiceFS work?
I found a problem with juicefs-csi-driver today. I created a PVC and bound it to a pod; the PV ID was "pvc-d7562c2d-a68c-4ca4-b012-b5772a2eebf5". I wrote some files, stopped the pod, and deleted the PVC. Regardless of whether the reclaim policy was Retain or Delete, creating a new PVC produced a new PV with ID "pvc-c53e25e6-8004-4444-b6c6-2a2726cce4f0", which I bound to a pod for use. The newly created PV directory /var/lib/juicefs/volume/pvc-c53e25e6-8004-4444-b6c6-2a2726cce4f0 still contained the old PV directory "pvc-d7562c2d-a68c-4ca4-b012-b5772a2eebf5" that had just been deleted.
I think the metadata and data are not cleaned up after deleting the PVC and PV. When the PV's reclaim policy is Delete, deleting the PVC should automatically delete the PV and clean up the data in the metadata engine and object storage.
Also, when the PV's reclaim policy is Retain, after deleting the PVC and manually deleting the PV, the metadata and data in the object storage should be cleaned up automatically.
https://github.com/juicedata/juicefs-csi-driver/blob/master/charts/juicefs-csi-driver/templates/daemonset.yaml#L99, this configuration occupies a host port on each node, so the driver cannot be deployed twice.
I am trying out the CSI driver on a local cluster I created using Kind. I followed the basic.yaml tutorial; everything looks fine, but the pod is having issues.
kubectl get pv,pvc,sc:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-f8452e9e-e3c1-4c8a-8f75-382d522815a4 1Gi RWX Delete Bound default/juicefs-pvc juicefs-sc 23m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/juicefs-pvc Bound pvc-f8452e9e-e3c1-4c8a-8f75-382d522815a4 1Gi RWX juicefs-sc 23m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/juicefs-sc csi.juicefs.com Delete Immediate false 23m
storageclass.storage.k8s.io/standard (default) rancher.io/local-path Delete WaitForFirstConsumer false 38h
kubectl describe pod juicefs-app | tail:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 25m default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 25m default-scheduler Successfully assigned default/juicefs-app to zityspace-control-plane
Warning FailedMount 9m29s (x5 over 23m) kubelet, zs-control-plane Unable to attach or mount volumes: unmounted volumes=[juicefs-pv], unattached volumes=[juicefs-pv kube-api-access-wxdgk]: timed out waiting for the condition
Warning FailedMount 2m38s (x5 over 18m) kubelet, zs-control-plane Unable to attach or mount volumes: unmounted volumes=[juicefs-pv], unattached volumes=[kube-api-access-wxdgk juicefs-pv]: timed out waiting for the condition
Warning FailedMount 117s (x19 over 25m) kubelet, zs-control-plane MountVolume.SetUp failed for volume "pvc-f8452e9e-e3c1-4c8a-8f75-382d522815a4" : rpc error: code = Internal desc = Could not mount juicefs: rpc error: code = Internal desc = Could not format juicefs: exit status 1
I did generate the correct secret and started a Redis server on my local machine. It works well on my local machine when I use juicefs to format a volume, mount it, and interact with it.
Any thoughts on what could be wrong when I try it in the Kind cluster? Because the Kind cluster runs inside a Docker container, I don't know whether the connection from the cluster to the Redis server on my local machine is causing the issue.
Also, what is the default directory path on the node for a dynamically provisioned volume? I couldn't find it in the StorageClass definition. Thanks.
E.g. v0.5.0-juicefs-4.4.0
Currently the JuiceFS mount process's log is lost; we need to capture it to a log file for troubleshooting.
#143
Removing the target before deleting the pod can cause a pod leak:
remove target succeeds -> deleting the ref or pod fails -> kubelet retries -> target no longer exists -> umount returns success -> mount pod leaks
We need to go through the following code to delete the pod even when the target does not exist.
Commit 86aca1e forces the --no-update option. What should we do if the access credential is rotated or revoked, or any other configuration is changed?
There seems to be something wrong with the version workflow. The Docker image tag v0.6.0-dirty-juicefs4.4.1 has a dirty state.
It would be nice if this config were configurable in the values.yaml file for the Helm chart.
It would be useful to have a safe default, e.g. /tmp, if the user doesn't specify one in the mountOptions.
jfs-default-cache:
Type: HostPath (bare host directory volume)
Path: /var/jfsCache
HostPathType: DirectoryOrCreate
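Until the chart exposes this, the cache location can already be pinned per StorageClass with the juicefs cache-dir mount option. A hedged sketch (names are placeholders, and this sets the path inside the mount container rather than the hostPath shown above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc          # placeholder name
mountOptions:
  - cache-dir=/tmp/jfsCache # explicit cache path instead of the default /var/jfsCache
provisioner: csi.juicefs.com
```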
Serve two versions of gRPC service in the same server.
When installing the JuiceFS CSI driver, all parameters are currently required. But if the user has already formatted a volume, only name and metaurl are needed; the other parameters should be optional.
Right now there are only two access modes:
ReadWriteOnce
ReadWriteMany
Can ReadOnlyMany be supported?
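If it were supported, usage would presumably mirror the existing PVC examples with only the access mode changed. A sketch of the request, not a configuration that works today:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: juicefs-ro-pvc      # placeholder name
spec:
  accessModes:
    - ReadOnlyMany          # the mode being requested in this issue
  resources:
    requests:
      storage: 10Pi
  storageClassName: juicefs-sc
```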
Given the following situation all within the same namespace.
NodeA --- PodA --- Volume A ---+
| ---- PersistentVolumeClaim (RWM) ---- StorageClass
NodeB --- PodB --- Volume B ---+
Volume A will successfully mount and you see this pod running:
juicefs-k3d-nodeA-pvc-467cfe22-0099-442f-8165-61be09295da1
But PodB will fail to start with this error:
Could not mount juicefs: rpc error: code = Internal desc = Could not mount juicefs: pods "juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1" not found
And this error in the juicefs-csi-node for NodeB:
I1110 11:04:53.566859 1 juicefs.go:328] Mount: skip mounting for existing mount point "/jfs/pvc-467cfe22-0099-442f-8165-61be09295da1"
I1110 11:04:53.566869 1 juicefs.go:331] Mount: add mount ref of configMap of volumeId "pvc-467cfe22-0099-442f-8165-61be09295da1"
I1110 11:04:53.566880 1 client.go:65] Get pod juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1
I1110 11:04:53.571303 1 client.go:68] Can't get pod juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1 namespace system-filesystem: pods "juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1" not found
E1110 11:04:53.571324 1 driver.go:60] GRPC error: rpc error: code = Internal desc = Could not mount juicefs: rpc error: code = Internal desc = Could not mount juicefs: pods "juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1" not
I1110 11:06:55.655175 1 node.go:217] NodeGetCapabilities: called with args
I am trying this out in an Alibaba Cloud ASK cluster. After I run kubectl apply -f k8s.yaml, the csi-controller pod has status ProviderFailed, and describing the pod shows Message: The specified parameter HostPathVolume is not valid. I guess it's because hostPath is being used in k8s.yaml? So my question is: does the JuiceFS CSI driver support ASK? Thanks.
Pods will be evicted by the kubelet when the node is under memory pressure or a pod's resource usage exceeds its requests.
The JuiceFS mount pod provides file system services and must not be evicted before the app pods that use it. The mount pod can be given the system-node-critical priority class; per the Kubernetes docs, larger numbers are reserved for critical system pods that should not normally be preempted or evicted.
Add more logs when auth fails, such as the output returned by the auth process.
juicefs hangs on juicefs-csi-driver/pkg/juicefs/juicefs.go line 144 (commit 841f401).
juicefs-csi-controller-0 juicefs-plugin I1022 15:15:25.417448 1 juicefs.go:129] CreateVol: volume not existed
juicefs-csi-controller-0 juicefs-plugin I1022 15:15:25.417904 1 juicefs.go:139] CreateVol: making directory "/jfs/tencentcloud-ap-hongkong-d41d8cd98f00b204e9800998ecf8427e/pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3"
juicefs-csi-controller-0 juicefs-plugin I1022 15:15:25.523984 1 controller.go:106] DeleteVolume: Deleting volume "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3"
juicefs-csi-controller-0 juicefs-plugin I1022 15:15:25.810776 1 juicefs.go:143] CreateVol: writing meta to "/jfs/tencentcloud-ap-hongkong-d41d8cd98f00b204e9800998ecf8427e/pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3/.juicefs"
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.089644 1 connection.go:183] GRPC response: {}
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.091426 1 connection.go:184] GRPC error: <nil>
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.091626 1 controller.go:1361] delete "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3": volume deleted
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.134818 1 controller.go:1407] delete "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3": persistentvolume deleted
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.140481 1 controller.go:1409] delete "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3": succeeded
juicefs-csi-controller-0 csi-attacher I1022 15:15:27.141501 1 controller.go:205] Started PV processing "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3"
juicefs-csi-controller-0 csi-provisioner I1022 15:16:19.045560 1 connection.go:183] GRPC response: {}
juicefs-csi-controller-0 csi-provisioner I1022 15:16:19.052369 1 connection.go:184] GRPC error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
juicefs-csi-controller-0 csi-provisioner I1022 15:16:19.052462 1 controller.go:979] Final error received, removing PVC be4e1e7e-f4de-11e9-8bc4-080027bf2ce3 from claims in progress
juicefs-csi-controller-0 csi-provisioner W1022 15:16:19.052474 1 controller.go:886] Retrying syncing claim "be4e1e7e-f4de-11e9-8bc4-080027bf2ce3", failure 0
juicefs-csi-controller-0 csi-provisioner E1022 15:16:19.087483 1 controller.go:908] error syncing claim "be4e1e7e-f4de-11e9-8bc4-080027bf2ce3": failed to provision volume with StorageClass "juicefs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
juicefs-csi-controller-0 csi-provisioner I1022 15:16:19.095413 1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"juicefs-pvc", UID:"be4e1e7e-f4de-11e9-8bc4-080027bf2ce3", APIVersion:"v1", ResourceVersion:"8609", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "juicefs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.087913 1 controller.go:1196] provision "default/juicefs-pvc" class "juicefs-sc": started
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.094726 1 controller.go:442] CreateVolumeRequest {Name:pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3 CapacityRange:required_bytes:11258999068426240 VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > ] Parameters:map[csi.storage.k8s.io/node-publish-secret-name:juicefs-gd72m9k54b csi.storage.k8s.io/node-publish-secret-namespace:default csi.storage.k8s.io/provisioner-secret-name:juicefs-gd72m9k54b csi.storage.k8s.io/provisioner-secret-namespace:default] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:<nil> XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.099855 1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"juicefs-pvc", UID:"be4e1e7e-f4de-11e9-8bc4-080027bf2ce3", APIVersion:"v1", ResourceVersion:"8609", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/juicefs-pvc"
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.105243 1 connection.go:180] GRPC call: /csi.v1.Controller/CreateVolume
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.105285 1 connection.go:181] GRPC request: {"capacity_range":{"required_bytes":11258999068426240},"name":"pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3","secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":5}}]}
juicefs-csi-controller-0 juicefs-plugin I1022 15:16:20.115357 1 controller.go:61] CreateVolume: Secrets contains keys [bucket name secretkey token accesskey]
juicefs-csi-controller-0 juicefs-plugin I1022 15:16:20.115402 1 controller.go:65] CreateVolume: Mounting juicefs "pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3"
juicefs-csi-controller-0 juicefs-plugin I1022 15:16:20.115423 1 juicefs.go:240] AuthFs: cmd "/usr/bin/juicefs", args []string{"auth", "tencentcloud-ap-hongkong", "--accesskey=AKIDLXWLyOicQ1OYWH48GCvQkrlSwEJAbPkr", "--accesskey2=", "--bucket=juicefs-tencentcloud-ap-hongkong-1252455339", "--bucket2=", "--token=[secret]", "--secretkey=[secret]", "--secretkey2=[secret]", "--passphrase=[secret]"}
juicefs-csi-controller-0 juicefs-plugin I1022 15:16:22.346516 1 juicefs.go:182] MountFs: authentication output is ''
What feature you'd like to add:
Currently JuiceFS supports using the Redis meta engine with Sentinel, but there is no way to pass the Sentinel password in the Redis URL.
On bare metal I could set the SENTINEL_PASSWORD environment variable, which juicefs reads to connect to Sentinel.
But with the CSI driver, I cannot achieve that by simply editing values.yaml or the Helm chart.
Why is this feature needed:
It would be useful to be able to use the CSI driver with the Redis Sentinel meta engine.
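For reference, the bare-metal behavior described above looks like this; the password and Sentinel host are placeholders, and the CSI driver currently has no equivalent knob:

```shell
# juicefs reads the Sentinel password from the SENTINEL_PASSWORD
# environment variable rather than from the meta URL.
export SENTINEL_PASSWORD='sentinel-secret'   # placeholder value
# juicefs mount redis://sentinel-host:26379/1 /jfs
echo "$SENTINEL_PASSWORD"
```
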
It would be helpful if the official images were built for arm64 as well. It would greatly simplify deployments to AWS Graviton and/or Raspberry Pi 4B based environments.
Normal SuccessfulAttachVolume 63s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-bbd05807-f1a9-11e9-9590-0800273e5475"
Warning FailedMount 31s (x7 over 62s) kubelet, minikube MountVolume.MountDevice failed for volume "pvc-bbd05807-f1a9-11e9-9590-0800273e5475" : driver name csi.juicefs.com not found in the list of registered CSI drivers
I can't find any place that uses the events API.
Existing volumes should reconcile even when plugin pods are killed and recreated.
It would be nice to have it for debugging
juicefs-plugin I1017 23:57:20.609725 1 driver.go:29] Driver: csi.juicefs.com Version:
juicefs-plugin I1017 23:57:20.613619 1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS
What happened:
When trying to mount a JuiceFS volume I get the following error from my Pod.
kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient:
driver name csi.juicefs.com not found in the list of registered CSI drivers
What you expected to happen:
It would mount.
Anything else:
I do not have a custom kubelet path but I do run the following to make it shared:
sudo mount -o bind --make-shared /var/lib/kubelet /var/lib/kubelet
Environment:
JuiceFS version (./juicefs --version) or Hadoop Java SDK version: Using Helm Chart.
k3d v5.0.1
OS (cat /etc/os-release): Ubuntu 21.10 (Impish Indri)
Kernel (uname -a): Linux 5.13.0-19-generic
S3 - AP Southeast 2
6.0.15 - self maintained
Redis is running on the same host as k3d.
Logs
csi-controller
juicefs-plugin I1018 12:17:02.039058 1 driver.go:29] Driver: csi.juicefs.com version v0.10.6 commit b2bc111627ba895fe72429525e77525ed8a5b2f7 date 2021-09-27T08:31:49Z
juicefs-plugin I1018 12:17:02.292936 1 controller.go:48] Controller: /bin/sh: 1: /sbin/modprobe: not found
juicefs-plugin JuiceFS version 4.5.4 (2021-09-27 9c8a644)
juicefs-plugin I1018 12:17:02.511823 1 node.go:62] Node: /bin/sh: 1: /sbin/modprobe: not found
juicefs-plugin JuiceFS version 4.5.4 (2021-09-27 9c8a644)
juicefs-plugin I1018 12:17:02.512858 1 driver.go:73] Listening for connection on address: &net.UnixAddr{Name:"/var/lib/csi/sockets/pluginproxy/csi.sock", Net:"unix"}
juicefs-plugin I1018 12:17:03.148773 1 controller.go:145] ControllerGetCapabilities: called with args &csi.ControllerGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8
csi-provisioner I1018 12:17:02.136402 1 feature_gate.go:243] feature gates: &{map[]}
csi-provisioner I1018 12:17:02.136428 1 csi-provisioner.go:107] Version: v1.6.0-0-g321fa5c1c
csi-provisioner I1018 12:17:02.136437 1 csi-provisioner.go:121] Building kube configs for running in cluster...
csi-provisioner I1018 12:17:02.143679 1 connection.go:153] Connecting to unix:///var/lib/csi/sockets/pluginproxy/csi.sock
csi-provisioner I1018 12:17:03.145324 1 common.go:111] Probing CSI driver for readiness
csi-provisioner I1018 12:17:03.145342 1 connection.go:182] GRPC call: /csi.v1.Identity/Probe
csi-provisioner I1018 12:17:03.145348 1 connection.go:183] GRPC request: {}
csi-provisioner I1018 12:17:03.146655 1 connection.go:185] GRPC response: {}
csi-provisioner I1018 12:17:03.146919 1 connection.go:186] GRPC error: <nil>
csi-provisioner I1018 12:17:03.146925 1 connection.go:182] GRPC call: /csi.v1.Identity/GetPluginInfo
csi-provisioner I1018 12:17:03.146927 1 connection.go:183] GRPC request: {}
csi-provisioner I1018 12:17:03.147341 1 connection.go:185] GRPC response: {"name":"csi.juicefs.com","vendor_version":"v0.10.6"}
csi-provisioner I1018 12:17:03.147569 1 connection.go:186] GRPC error: <nil>
csi-provisioner I1018 12:17:03.147577 1 csi-provisioner.go:163] Detected CSI driver csi.juicefs.com
csi-provisioner W1018 12:17:03.147581 1 metrics.go:142] metrics endpoint will not be started because `metrics-address` was not specified.
csi-provisioner I1018 12:17:03.147586 1 connection.go:182] GRPC call: /csi.v1.Identity/GetPluginCapabilities
csi-provisioner I1018 12:17:03.147589 1 connection.go:183] GRPC request: {}
csi-provisioner I1018 12:17:03.148043 1 connection.go:185] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}}]}
csi-provisioner I1018 12:17:03.148470 1 connection.go:186] GRPC error: <nil>
csi-provisioner I1018 12:17:03.148475 1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
csi-provisioner I1018 12:17:03.148478 1 connection.go:183] GRPC request: {}
csi-provisioner I1018 12:17:03.148920 1 connection.go:185] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
csi-provisioner I1018 12:17:03.149735 1 connection.go:186] GRPC error: <nil>
csi-provisioner I1018 12:17:03.149925 1 controller.go:709] Using saving PVs to API server in background
csi-provisioner I1018 12:17:03.150045 1 reflector.go:153] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:135
csi-provisioner I1018 12:17:03.150052 1 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
csi-provisioner I1018 12:17:03.150045 1 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:135
csi-provisioner I1018 12:17:03.150088 1 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
csi-provisioner I1018 12:17:03.250029 1 shared_informer.go:227] caches populated
csi-provisioner I1018 12:17:03.250045 1 shared_informer.go:227] caches populated
csi-provisioner I1018 12:17:03.250054 1 controller.go:799] Starting provisioner controller csi.juicefs.com_juicefs-csi-controller-0_bda7e207-0518-4545-8cc0-231491ff50b2!
csi-provisioner I1018 12:17:03.250074 1 clone_controller.go:58] Starting CloningProtection controller
csi-provisioner I1018 12:17:03.250104 1 clone_controller.go:74] Started CloningProtection controller
csi-provisioner I1018 12:17:03.250117 1 volume_store.go:97] Starting save volume queue
csi-provisioner I1018 12:17:03.250193 1 reflector.go:153] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:836
csi-provisioner I1018 12:17:03.250202 1 reflector.go:188] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:836
│ csi-provisioner I1018 12:17:03.250199 1 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (15m0s) from github.com/kubernetes-csi/external-provisioner/pkg/controller/clone_controller.go: │
│ csi-provisioner I1018 12:17:03.250206 1 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from github.com/kubernetes-csi/external-provisioner/pkg/controller/clone_controller.go:72 │
│ csi-provisioner I1018 12:17:03.250316 1 reflector.go:153] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:833 │
│ csi-provisioner I1018 12:17:03.250331 1 reflector.go:188] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:833 │
│ csi-provisioner I1018 12:17:03.350172 1 shared_informer.go:227] caches populated │
│ csi-provisioner I1018 12:17:03.350325 1 controller.go:848] Started provisioner controller csi.juicefs.com_juicefs-csi-controller-0_bda7e207-0518-4545-8cc0-231491ff50b2! │
│ csi-provisioner I1018 12:24:10.254544 1 reflector.go:432] github.com/kubernetes-csi/external-provisioner/pkg/controller/clone_controller.go:72: Watch close - *v1.PersistentVolumeClaim total 0 items │
│ csi-provisioner I1018 12:24:14.253241 1 reflector.go:432] sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:836: Watch close - *v1.StorageClass total 0 items received │
│ csi-provisioner I1018 12:25:22.154599 1 reflector.go:432] k8s.io/client-go/informers/factory.go:135: Watch close - *v1.PersistentVolumeClaim total 0 items received │
│ csi-provisioner I1018 12:25:29.256043 1 reflector.go:432] sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:833: Watch close - *v1.PersistentVolume total 0 items received │
│ csi-provisioner I1018 12:26:45.152943 1 reflector.go:432] k8s.io/client-go/informers/factory.go:135: Watch close - *v1.StorageClass total 0 items received │
csi-node
│ juicefs-plugin I1018 10:08:38.742096 1 driver.go:29] Driver: csi.juicefs.com version v0.10.6 commit b2bc111627ba895fe72429525e77525ed8a5b2f7 date 2021-09-27T08:31:49Z │
│ juicefs-plugin I1018 10:08:38.957467 1 controller.go:48] Controller: /bin/sh: 1: /sbin/modprobe: not found │
│ juicefs-plugin JuiceFS version 4.5.4 (2021-09-27 9c8a644) │
│ juicefs-plugin I1018 10:08:39.171010 1 node.go:62] Node: /bin/sh: 1: /sbin/modprobe: not found │
│ juicefs-plugin JuiceFS version 4.5.4 (2021-09-27 9c8a644) │
│ juicefs-plugin I1018 10:08:39.171479 1 driver.go:73] Listening for connection on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"} │
│ juicefs-plugin I1018 10:08:45.148241 1 node.go:234] NodeGetInfo: called with args │
│ node-driver-registrar I1018 10:08:44.589148 1 main.go:110] Version: v1.1.0-0-g80a94421 │
│ node-driver-registrar I1018 10:08:44.589170 1 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock" │
│ node-driver-registrar I1018 10:08:44.589178 1 connection.go:151] Connecting to unix:///csi/csi.sock │
│ node-driver-registrar I1018 10:08:44.589426 1 main.go:127] Calling CSI driver to discover driver name │
│ node-driver-registrar I1018 10:08:44.589435 1 connection.go:180] GRPC call: /csi.v1.Identity/GetPluginInfo │
│ node-driver-registrar I1018 10:08:44.589437 1 connection.go:181] GRPC request: {} │
│ node-driver-registrar I1018 10:08:44.590538 1 connection.go:183] GRPC response: {"name":"csi.juicefs.com","vendor_version":"v0.10.6"} │
│ node-driver-registrar I1018 10:08:44.590804 1 connection.go:184] GRPC error: <nil> │
│ node-driver-registrar I1018 10:08:44.590807 1 main.go:137] CSI driver name: "csi.juicefs.com" │
│ node-driver-registrar I1018 10:08:44.590817 1 node_register.go:54] Starting Registration Server at: /registration/csi.juicefs.com-reg.sock │
│ node-driver-registrar I1018 10:08:44.590855 1 node_register.go:61] Registration Server started at: /registration/csi.juicefs.com-reg.sock │
│ node-driver-registrar I1018 10:08:45.147290 1 main.go:77] Received GetInfo call: &InfoRequest{} │
│ node-driver-registrar I1018 10:08:45.171885 1 main.go:87] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,} │
As the documentation says, the CSI driver supports Kubernetes v1.13+. However, in Kubernetes 1.14/1.15 the CSI APIs are still Beta, so we need an install file that uses:
apiVersion: storage.k8s.io/v1beta1
# apiVersion: storage.k8s.io/v1
kind: CSIDriver
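A minimal sketch of such a manifest, using the Beta API group (the spec fields shown are assumptions based on common FUSE-style drivers, not confirmed driver defaults; verify against your cluster version):

```yaml
apiVersion: storage.k8s.io/v1beta1  # switch to storage.k8s.io/v1 on newer clusters
kind: CSIDriver
metadata:
  name: csi.juicefs.com
spec:
  # Assumption: a FUSE filesystem needs no separate attach step.
  attachRequired: false
  podInfoOnMount: false
```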
E0426 08:15:27.168818 2261 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins/csi.juicefs.com/csi.sock" failed. No retries permitted until 2020-04-26 08:17:29.168797504 +0000 UTC m=+22096.921279600 (durationBeforeRetry 2m2s). Error: "RegisterPlugin error -- failed to get plugin info using RPC GetInfo at socket /var/lib/kubelet/plugins/csi.juicefs.com/csi.sock, err: rpc error: code = Unimplemented desc = unknown service pluginregistration.Registration"
$ kubectl apply -k examples/static-provisioning
$ kubectl describe pod app
...
Warning FailedMount 0s (x9 over 2m19s) kubelet, ip-172-20-32-62.ec2.internal MountVolume.SetUp failed for volume "juicefs-aws-us-east-1" : rpc error: code = Internal desc = Could not bind "/jfs/aws-us-east-1/aws-us-east-1" at "/var/lib/kubelet/pods/dc64632f-aa22-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t none -o bind /jfs/aws-us-east-1/aws-us-east-1 /var/lib/kubelet/pods/dc64632f-aa22-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount
Output: mount: /var/lib/kubelet/pods/dc64632f-aa22-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount: special device /jfs/aws-us-east-1/aws-us-east-1 does not exist.
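A quick diagnostic sketch for this error, run on the affected node. The path comes from the log above; the helper function is illustrative, not part of the driver.

```shell
# Sketch: report "present" or "missing" for a given bind-source path,
# mirroring the existence check the failing bind mount implies.
check_bind_source() {
  if [ -d "$1" ]; then echo present; else echo missing; fi
}

# Path taken from the error log above; "missing" means the earlier
# `juicefs mount` on this node likely failed, so check the
# juicefs-plugin container logs there.
check_bind_source /jfs/aws-us-east-1/aws-us-east-1
```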
What feature you'd like to add:
Support for arm64 docker images
Why is this feature needed:
With juicefs-csi-driver becoming part of the Kubernetes ecosystem, it would be helpful to publish arm64 Docker images.
Google Cloud Storage requires a credentials file referenced by an environment variable, but the JuiceFS CSI driver currently supports only command-line arguments for authentication.
We need a solution for file-based auth.
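One possible shape for this, purely illustrative (the Secret name, key, and mount path are assumptions, not an existing driver feature): project the credentials file from a Secret and point the standard GOOGLE_APPLICATION_CREDENTIALS variable at it.

```yaml
# Hypothetical: a Secret carrying the GCP service-account key...
apiVersion: v1
kind: Secret
metadata:
  name: gcs-credentials          # illustrative name
  namespace: kube-system
type: Opaque
stringData:
  key.json: |
    {"type": "service_account", "project_id": "my-project"}
---
# ...and a fragment of the plugin container spec wiring it up:
# env:
# - name: GOOGLE_APPLICATION_CREDENTIALS
#   value: /etc/gcs/key.json
# volumeMounts:
# - name: gcs-credentials
#   mountPath: /etc/gcs
#   readOnly: true
```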
We should build the JuiceFS client with Ceph (RADOS) support by default.
Currently, mounting is skipped for all already-mounted mount points. We need to allow mounting the same volume multiple times with different options.
I0720 02:53:55.397828 1 node.go:76] NodePublishVolume: volume_id is aws-us-east-1
I0720 02:53:55.397847 1 node.go:87] NodePublishVolume: volume_capability is mount:<fs_type:"juicefs" > access_mode:<mode:MULTI_NODE_MULTI_WRITER >
I0720 02:53:55.397880 1 node.go:93] NodePublishVolume: creating dir /var/lib/kubelet/pods/9ad50b95-aa99-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount
I0720 02:53:55.397902 1 node.go:120] NodePublishVolume: mounting juicefs with secret [accesskey name secretkey token], options [metacache cache-size=100 cache-dir=/var/foo]
I0720 02:53:56.864199 1 juicefs.go:175] MountFs: authentication output is ''
I0720 02:53:56.865338 1 juicefs.go:253] Mount: skip mounting for existing mount point "/jfs/aws-us-east-1"
I0720 02:53:56.865361 1 node.go:130] NodePublishVolume: binding /jfs/aws-us-east-1 at /var/lib/kubelet/pods/9ad50b95-aa99-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount with options [metacache cache-size=100 cache-dir=/var/foo]
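One way to lift the skip would be to make the mount path depend on the options, so the same volume mounted with different options lands on a distinct mount point while identical options still reuse the existing one. A hedged sketch of that scheme (the path layout and hashing are assumptions, not the driver's actual behavior):

```shell
# Hypothetical: hash the mount options into the mount path, using the
# volume and options from the log above.
volume_id="aws-us-east-1"
options="metacache,cache-size=100,cache-dir=/var/foo"
opts_hash=$(printf '%s' "$options" | sha256sum | cut -c1-8)
mount_path="/jfs/${volume_id}-${opts_hash}"
echo "$mount_path"
```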
The stderr of the juicefs-plugin container in juicefs-csi-node:
I0202 05:58:55.348033 1 node.go:179] NodeGetCapabilities: called with args
I0202 05:58:58.669430 1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-17d66882-7689-45d8-98fd-791d69ef2ed5" target_path:"/var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount"
I0202 05:58:58.669881 1 node.go:159] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
I0202 05:58:58.669893 1 mount_linux.go:211] Unmounting /var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
E0202 05:58:58.670538 1 driver.go:53] GRPC error: rpc error: code = Internal desc = Could not unmount "/var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount": Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
Output: umount: /var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount: not mounted.
I0202 05:58:59.673526 1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-17d66882-7689-45d8-98fd-791d69ef2ed5" target_path:"/var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount"
I0202 05:58:59.673980 1 node.go:159] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
I0202 05:58:59.673991 1 mount_linux.go:211] Unmounting /var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
E0202 05:58:59.674608 1 driver.go:53] GRPC error: rpc error: code = Internal desc = Could not unmount "/var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount": Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
Output: umount: /var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount: not mounted.
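The unmount could be made idempotent by treating an already-unmounted target as success instead of failing with exit status 32. A minimal shell sketch of that check (the real cleanup lives in the driver's Go code; this only illustrates the idea, with a demo path):

```shell
# Skip umount when the target no longer appears in /proc/mounts.
target=/tmp/jfs-target-demo    # illustrative path
mkdir -p "$target"
if grep -qs " $target " /proc/mounts; then
  umount "$target"
  status="unmounted"
else
  status="already unmounted"
fi
echo "$status: $target"
```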