
juicefs-csi-driver's Issues

When MySQL is the metadata storage engine, error: "Syntax error: "(" unexpected"

version:

juicedata/juicefs-csi-driver: v0.10.5

detail:

pod status:
juicefs-<nodeName>-<pvName>     CrashLoopBackOff

incorrect command (the unescaped parentheses are interpreted by /bin/sh):
/bin/mount.juicefs mysql://root:123456@(10.99.100.166:3306)/juicefs /jfs/<pvName>
error message:
Syntax error: "(" unexpected

correct command (parentheses escaped), which succeeds:
/bin/mount.juicefs mysql://root:123456@\(10.99.100.166:3306\)/juicefs /jfs/<pvName>
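
The root cause is that the meta URL is interpolated into a /bin/sh command line, where bare parentheses are shell syntax. Below is a minimal sketch of one possible fix in Go, assuming a hypothetical escapeMetaURL helper applied before the command is built (names and call site are illustrative, not the actual driver code):

package main

import (
	"fmt"
	"strings"
)

// escapeMetaURL escapes the characters in a MySQL meta URL that /bin/sh
// would otherwise parse as syntax, e.g. the parentheses around host:port.
func escapeMetaURL(metaURL string) string {
	r := strings.NewReplacer("(", `\(`, ")", `\)`)
	return r.Replace(metaURL)
}

func main() {
	raw := "mysql://root:123456@(10.99.100.166:3306)/juicefs"
	fmt.Printf("/bin/mount.juicefs %s /jfs/<pvName>\n", escapeMetaURL(raw))
}

An alternative that avoids the problem entirely would be to exec /bin/mount.juicefs directly with an argument vector instead of going through the shell.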

Improvement to the Helm chart's output message.

I'm installing the CSI driver chart with the command below:

helm upgrade juicefs-csi-driver  juicefs-csi-driver/juicefs-csi-driver --install -f  juicefs_values.yaml

The content of juicefs_values.yaml is:

- name: juicefs
  enabled: true
  reclaimPolicy: Retain
  backend:
    name: juicefs
    metaurl: redis://:[email protected]
    storage: s3
    accessKey: minio
    secretKey: minio123
    bucket: http://minio-service.default.svc:9000/milvus-test

I'm using juicefs as the name parameter, and the chart's output suggests verifying that JuiceFS works with the example below:


...

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: juicefs-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Pi
  storageClassName: juicefs-sc

...

The example uses juicefs-sc as the storageClassName, but the StorageClass we actually created is named juicefs.

Mount options such as allow-other don't seem to be supported

Something like the mountOptions field in the YAML below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc
  namespace: kube-system
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
mountOptions:
  - allow-other
parameters:
  csi.storage.k8s.io/node-publish-secret-name: juicefs-6hh8cg24g8
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: juicefs-6hh8cg24g8
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
provisioner: csi.juicefs.com

Failed to deploy on minikube

Warning  FailedMount  16s (x7 over 47s)  kubelet, minikube  MountVolume.SetUp failed for volume "registration-dir" : hostPath type check failed: /var/lib/kubelet/plugins_registry/ is not a directory

Workaround:

minikube ssh
sudo mkdir -p /var/lib/kubelet/plugins_registry/

Massive errors when losing connection to meta

When the meta service is down, the driver has trouble cleaning up existing mount points:

I0314 03:32:54.829147       1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/159f0ab4-1337-4249-8577-9b446909246d/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.829214       1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/159f0ab4-1337-4249-8577-9b446909246d/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.829312       1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/babd88dc-ed30-4fe5-a814-9c71b1e53de2/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.829343       1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/babd88dc-ed30-4fe5-a814-9c71b1e53de2/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.849151       1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/7635a844-d00d-4208-a03e-eb95a7493e92/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.849200       1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/7635a844-d00d-4208-a03e-eb95a7493e92/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.849258       1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/311d1dbf-ddd1-43a4-9e13-f9828803cd7f/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.849290       1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/311d1dbf-ddd1-43a4-9e13-f9828803cd7f/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.849333       1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/0d627b6c-156e-4ed6-8fcf-8a907c2e9f24/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.849359       1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/0d627b6c-156e-4ed6-8fcf-8a907c2e9f24/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
I0314 03:32:54.849401       1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94" target_path:"/var/lib/kubelet/pods/9ec56f3f-2ac1-4d0d-9b37-738a9215a8ac/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount"
E0314 03:32:54.849429       1 driver.go:53] GRPC error: rpc error: code = Internal desc = Error checking "/var/lib/kubelet/pods/9ec56f3f-2ac1-4d0d-9b37-738a9215a8ac/volumes/kubernetes.io~csi/pvc-719bbf50-9ca9-41cd-b537-9301d1b07d94/mount" is not mount point
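
Below is a hedged sketch of a more tolerant unmount path, assuming the driver uses k8s.io/utils/mount (a common choice for CSI drivers; the helper name is illustrative). When the mount-point check fails because the FUSE connection died along with the meta service, the path can be treated as a corrupted mount and unmounted anyway, instead of returning an Internal error on every kubelet retry:

package node

import (
	"k8s.io/utils/mount"
)

// cleanupTarget sketches NodeUnpublishVolume cleanup that survives a dead
// meta connection: a broken FUSE mount makes IsLikelyNotMountPoint fail,
// so detect that case and unmount instead of erroring out.
func cleanupTarget(mounter mount.Interface, target string) error {
	notMnt, err := mounter.IsLikelyNotMountPoint(target)
	if err != nil {
		if !mount.IsCorruptedMnt(err) {
			return err // genuine failure: surface it
		}
		notMnt = false // corrupted mount: fall through and unmount
	}
	if notMnt {
		return nil // nothing mounted at the target
	}
	return mounter.Unmount(target)
}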

node driver plugin enters `CrashLoopBackOff` on chaos

The juicefs-plugin container in the juicefs-csi-node pod enters the CrashLoopBackOff state because of liveness probe failures after some chaos injection.

Pod status

Warning  Unhealthy  3m6s (x10 over 4m46s)  kubelet, ip-172-20-53-84.us-west-1.compute.internal  Liveness probe failed: Get http://172.20.53.84:9808/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Pod logs

juicefs-plugin 2020-03-01T08:53:01.71694209Z I0301 08:53:01.716596       1 driver.go:29] Driver: csi.juicefs.com Version: 0.4.0
juicefs-plugin 2020-03-01T08:53:01.718131295Z I0301 08:53:01.718053       1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS
juicefs-plugin 2020-03-01T08:53:01.718274488Z I0301 08:53:01.718069       1 mount_linux.go:175] systemd-run failed with: exit status 1
juicefs-plugin 2020-03-01T08:53:01.718301342Z I0301 08:53:01.718078       1 mount_linux.go:176] systemd-run output: Failed to create bus connection: No such file or directory
juicefs-plugin 2020-03-01T08:53:01.719807055Z I0301 08:53:01.719195       1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS
juicefs-plugin 2020-03-01T08:53:01.719821081Z I0301 08:53:01.719211       1 mount_linux.go:175] systemd-run failed with: exit status 1
juicefs-plugin 2020-03-01T08:53:01.719826043Z I0301 08:53:01.719222       1 mount_linux.go:176] systemd-run output: Failed to create bus connection: No such file or directory
juicefs-plugin 2020-03-01T08:53:01.719830592Z I0301 08:53:01.719384       1 driver.go:66] Listening for connection on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
juicefs-plugin 2020-03-01T08:53:03.319457137Z I0301 08:53:03.319356       1 node.go:190] NodeGetInfo: called with args
juicefs-plugin 2020-03-01T08:53:19.819314863Z I0301 08:53:19.817523       1 node.go:144] NodeUnpublishVolume: called with args volume_id:"pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e" target_path:"/var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount"
juicefs-plugin 2020-03-01T08:53:19.819337825Z I0301 08:53:19.818022       1 node.go:153] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount
juicefs-plugin 2020-03-01T08:53:19.819344287Z I0301 08:53:19.818039       1 mount_linux.go:211] Unmounting /var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount
juicefs-plugin 2020-03-01T08:53:19.891456554Z E0301 08:53:19.891354       1 driver.go:53] GRPC error: rpc error: code = Internal desc = Could not unmount "/var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount": Unmount failed: exit status 32
juicefs-plugin 2020-03-01T08:53:19.891475673Z Unmounting arguments: /var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount
juicefs-plugin 2020-03-01T08:53:19.891481783Z Output: umount: /var/lib/kubelet/pods/e2bf0edf-5da3-4d34-bb51-80a5458862b5/volumes/kubernetes.io~csi/pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e/mount: not mounted.
juicefs-plugin 2020-03-01T08:53:19.891486586Z
node-driver-registrar 2020-03-01T08:53:01.917351514Z I0301 08:53:01.916474       1 main.go:110] Version: v1.1.0-0-g80a94421
node-driver-registrar 2020-03-01T08:53:01.917378114Z I0301 08:53:01.916518       1 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock"
node-driver-registrar 2020-03-01T08:53:01.917384312Z I0301 08:53:01.916531       1 connection.go:151] Connecting to unix:///csi/csi.sock
node-driver-registrar 2020-03-01T08:53:01.917478996Z I0301 08:53:01.917414       1 main.go:127] Calling CSI driver to discover driver name
node-driver-registrar 2020-03-01T08:53:01.917538499Z I0301 08:53:01.917462       1 connection.go:180] GRPC call: /csi.v1.Identity/GetPluginInfo
node-driver-registrar 2020-03-01T08:53:01.918413895Z I0301 08:53:01.917472       1 connection.go:181] GRPC request: {}
node-driver-registrar 2020-03-01T08:53:01.920875388Z I0301 08:53:01.919200       1 connection.go:183] GRPC response: {"name":"csi.juicefs.com","vendor_version":"0.4.0"}
node-driver-registrar 2020-03-01T08:53:01.920893792Z I0301 08:53:01.919732       1 connection.go:184] GRPC error: <nil>
node-driver-registrar 2020-03-01T08:53:01.920899347Z I0301 08:53:01.919739       1 main.go:137] CSI driver name: "csi.juicefs.com"
node-driver-registrar 2020-03-01T08:53:01.92090414Z I0301 08:53:01.919938       1 node_register.go:54] Starting Registration Server at: /registration/csi.juicefs.com-reg.sock
node-driver-registrar 2020-03-01T08:53:01.920909174Z I0301 08:53:01.920024       1 node_register.go:61] Registration Server started at: /registration/csi.juicefs.com-reg.sock
node-driver-registrar 2020-03-01T08:53:02.318863064Z I0301 08:53:02.318241       1 main.go:77] Received GetInfo call: &InfoRequest{}
node-driver-registrar 2020-03-01T08:53:03.318440053Z I0301 08:53:03.318150       1 main.go:77] Received GetInfo call: &InfoRequest{}
node-driver-registrar 2020-03-01T08:53:03.39350671Z I0301 08:53:03.393407       1 main.go:87] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
liveness-probe 2020-03-01T08:53:02.075990243Z W0301 08:53:02.075794       1 deprecatedflags.go:53] Warning: option connection-timeout="3s" is deprecated and has no effect
liveness-probe 2020-03-01T08:53:02.076034383Z I0301 08:53:02.075838       1 connection.go:151] Connecting to unix:///csi/csi.sock
liveness-probe 2020-03-01T08:53:02.07712102Z I0301 08:53:02.077055       1 main.go:86] calling CSI driver to discover driver name
liveness-probe 2020-03-01T08:53:02.077885125Z I0301 08:53:02.077820       1 main.go:91] CSI driver name: "csi.juicefs.com"
liveness-probe 2020-03-01T08:53:02.077923772Z I0301 08:53:02.077872       1 main.go:100] Serving requests to /healthz on: 0.0.0.0:9808
liveness-probe 2020-03-01T08:53:58.655528503Z E0301 08:53:58.655238       1 connection.go:129] Lost connection to unix:///csi/csi.sock.
node-driver-registrar 2020-03-01T08:53:58.655487491Z E0301 08:53:58.654969       1 connection.go:129] Lost connection to unix:///csi/csi.sock.
node-driver-registrar 2020-03-01T08:54:58.655237845Z E0301 08:54:58.654795       1 connection.go:129] Lost connection to unix:///csi/csi.sock.
liveness-probe 2020-03-01T08:54:58.655148872Z E0301 08:54:58.654839       1 connection.go:129] Lost connection to unix:///csi/csi.sock.

PV & PVC status

~ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e   10Pi       RWX            Delete           Bound    chaos-sut/juicefs-pvc   juicefs-sc              3d22h

~ kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
juicefs-pvc   Bound    pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e   10Pi       RWX            juicefs-sc     3d22h

~ kubectl get po
No resources found in chaos-sut namespace.

The mount parameter of JuiceFS mount pod has not changed

version: juicefs-csi-driver/v0.10.7
After changing the mountOptions parameter cache-size from 500 to 1000 in both the StorageClass and the PV, the business container was not restarted. After force-restarting the corresponding JuiceFS mount pod juicefs-qsy05-pvc-c7f6e1c4-5c3d-4fc7-8955-63ffdebbedc5, its mount command still carried the old value of 500:
/bin/mount.juicefs redis://xxxxx:6379/1 /jfs/pvc-c7f6e1c4-5c3d-4fc7-8955-63ffdebbedc5 -o writeback,enable-xattr,max-uploads=60,cache-size=500
It should be cache-size=1000.

Only after restarting the business container did the mount pod's cache-size change to 1000.
Since mountOptions should be upgradable smoothly, requiring a restart of the business container to pick up new mountOptions is unfriendly.

Mount pod name too long

When the mount pod name is longer than 63 characters, it is truncated to 63 and the pod is still created.
But the pod can no longer be looked up by its original name, so the mount action fails.

  1. The driver SHOULD check the pod name length and fail fast once it exceeds 63 characters (see the sketch below).

  2. In our production environment, node names are often very long, so the pod name can easily become overlong.
    Could we use status.hostIP instead of spec.nodeName to format the pod name? I don't know whether it would cause any new problems!
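
A minimal sketch of such a length check, with a hypothetical fallback that replaces the long node name with a short, stable hash (helper name and layout are illustrative, not the driver's actual code):

package main

import (
	"crypto/sha256"
	"fmt"
)

// mountPodName builds the mount pod name and keeps it within Kubernetes'
// 63-character name limit by hashing the (often long) node name.
func mountPodName(nodeName, pvName string) (string, error) {
	name := fmt.Sprintf("juicefs-%s-%s", nodeName, pvName)
	if len(name) <= 63 {
		return name, nil
	}
	// Fall back to a short, stable digest of the node name.
	sum := sha256.Sum256([]byte(nodeName))
	name = fmt.Sprintf("juicefs-%x-%s", sum[:6], pvName)
	if len(name) > 63 {
		// Still too long (e.g. an unusually long PV name): fail fast
		// instead of letting the name be truncated silently.
		return "", fmt.Errorf("mount pod name %q exceeds 63 characters", name)
	}
	return name, nil
}

func main() {
	fmt.Println(mountPodName("ip-172-20-53-84.us-west-1.compute.internal",
		"pvc-36642d7f-f240-43ed-a220-c8b8c0afbf9e"))
}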

ServiceAccount of jfs-mount pods

We use IAM roles for service accounts to grant access to S3 buckets. Both juicefs-csi-controller and juicefs-csi-node work fine when the S3 backend storage is mounted manually.

The problem is that the jfs-mount pods are provisioned with the default service account, which doesn't have access to the S3 backend, leading to a mounted but unusable PV/PVC.

Maybe more configuration of jfs-mount could be exposed in .values.storageClasses.mountPod.*?

By the way, what are the minimal S3 privileges needed to make JuiceFS work?

Metadata and data in the object storage should be automatically cleaned up after deleting pvc and pv

I found a problem with juicefs-csi-driver today. I created a PVC and bound it to a pod application (the PV ID was "pvc-d7562c2d-a68c-4ca4-b012-b5772a2eebf5"), wrote some files, and stopped the pod. After deleting the PVC, a new PV ("pvc-c53e25e6-8004-4444-b6c6-2a2726cce4f0") was created and bound to a pod, regardless of whether the reclaim policy was Retain or Delete. Under the volume directory /var/lib/juicefs/volume/pvc-c53e25e6-8004-4444-b6c6-2a2726cce4f0, the newly created PV directory still contains the old PV directory "pvc-d7562c2d-a68c-4ca4-b012-b5772a2eebf5" that was just deleted.
I think the metadata and data are not cleaned up after deleting the PVC and PV. When the PV's RECLAIM POLICY is Delete, deleting the PVC should automatically delete the PV and clean up the metadata and the data in the object storage.
Also, when the PV's RECLAIM POLICY is Retain, after deleting the PVC and manually deleting the PV, the metadata and data in the object storage should be cleaned up automatically.

FailedMount in Kind cluster

I am trying out the CSI driver on a local cluster created with Kind. I followed the basic.yaml tutorial; everything looks fine, but the pod is having issues.

kubectl get pv,pvc,sc:

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
persistentvolume/pvc-f8452e9e-e3c1-4c8a-8f75-382d522815a4   1Gi        RWX            Delete           Bound    default/juicefs-pvc   juicefs-sc              23m

NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/juicefs-pvc   Bound    pvc-f8452e9e-e3c1-4c8a-8f75-382d522815a4   1Gi        RWX            juicefs-sc     23m

NAME                                             PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/juicefs-sc           csi.juicefs.com         Delete          Immediate              false                  23m
storageclass.storage.k8s.io/standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  38h

kubectl describe pod juicefs-app | tail:

Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
                             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From                              Message
  ----     ------            ----                 ----                              -------
  Warning  FailedScheduling  25m                  default-scheduler                 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         25m                  default-scheduler                 Successfully assigned default/juicefs-app to zityspace-control-plane
  Warning  FailedMount       9m29s (x5 over 23m)  kubelet, zs-control-plane  Unable to attach or mount volumes: unmounted volumes=[juicefs-pv], unattached volumes=[juicefs-pv kube-api-access-wxdgk]: timed out waiting for the condition
  Warning  FailedMount       2m38s (x5 over 18m)  kubelet, zs-control-plane  Unable to attach or mount volumes: unmounted volumes=[juicefs-pv], unattached volumes=[kube-api-access-wxdgk juicefs-pv]: timed out waiting for the condition
  Warning  FailedMount       117s (x19 over 25m)  kubelet, zs-control-plane  MountVolume.SetUp failed for volume "pvc-f8452e9e-e3c1-4c8a-8f75-382d522815a4" : rpc error: code = Internal desc = Could not mount juicefs: rpc error: code = Internal desc = Could not format juicefs: exit status 1

I did generate the correct secret and started a Redis server on my local machine. Everything works well on my local machine when I use juicefs directly to format a volume, mount it, and interact with it.

Any thoughts on what could be wrong when I try it in a Kind cluster? Since the Kind cluster runs inside a Docker container, I don't know whether the connection from the cluster to the Redis server on my local machine is causing the issue.

Also, what is the default directory path on the node for a dynamically provisioned volume? I couldn't find it in the StorageClass definition. Thanks.

Mount Pod leak

#143
Removing the target before deleting the pod can leak the mount pod:

remove target succeeds -> deleting the ref or the pod fails -> kubelet retries -> target no longer exists -> umount returns success -> mount pod leaks

We need to go through the relevant code path so the pod is still deleted when the target no longer exists (a hedged sketch follows).
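
The sketch below illustrates the idea with client-go; the reference representation (an annotation on the mount pod keyed by the target path) and all names are illustrative stand-ins, not the driver's actual bookkeeping:

package node

import (
	"context"
	"fmt"

	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupMountPod sketches the missing step: even if the target path no
// longer exists, drop this target's reference from the mount pod and
// delete the pod once no references remain.
func cleanupMountPod(ctx context.Context, c kubernetes.Interface, ns, podName, refKey string) error {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if k8serrors.IsNotFound(err) {
		return nil // pod already gone: nothing to leak
	}
	if err != nil {
		return err
	}
	delete(pod.Annotations, refKey) // drop this target's reference
	if len(pod.Annotations) == 0 {
		// Last reference removed: delete the mount pod itself.
		return c.CoreV1().Pods(ns).Delete(ctx, podName, metav1.DeleteOptions{})
	}
	if _, err := c.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		return fmt.Errorf("update mount pod refs: %w", err)
	}
	return nil
}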

JFS Default Cache should be configurable in Helm Chart

It would be nice if this config were configurable in the values.yaml file for the Helm chart.

It would be useful to have a safe default, e.g. /tmp, if the user doesn't specify it in the mountOptions.

jfs-default-cache:                                                                                                                                                                                                                      
  Type:          HostPath (bare host directory volume)                                                                                                                                                                                  
  Path:          /var/jfsCache
  HostPathType:  DirectoryOrCreate

more access mode support

Now there are only two access modes:
ReadWriteOnce
ReadWriteMany

Can ReadOnlyMany be supported? (See the sketch below.)
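
For reference, a CSI driver advertises its access modes as an enum from the CSI spec package; a sketch of what adding ReadOnlyMany might look like (the variable name is illustrative, not the driver's actual code):

package node

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// supportedAccessModes sketches advertising ReadOnlyMany
// (MULTI_NODE_READER_ONLY) alongside the two modes supported today.
var supportedAccessModes = []csi.VolumeCapability_AccessMode_Mode{
	csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,      // ReadWriteOnce
	csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER, // ReadWriteMany
	csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY,  // ReadOnlyMany (proposed)
}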

ReadWriteMany mode fails to mount

Given the following situation, all within the same namespace:

NodeA --- PodA --- Volume A  ---+
                                | ---- PersistentVolumeClaim (RWM) ---- StorageClass
NodeB --- PodB --- Volume B  ---+

Volume A will successfully mount and you see this pod running:

juicefs-k3d-nodeA-pvc-467cfe22-0099-442f-8165-61be09295da1

But PodB will fail to start with this error:

Could not mount juicefs: rpc error: code = Internal desc = Could not mount juicefs: pods "juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1" not found

And this error in the juicefs-csi-node for NodeB:

I1110 11:04:53.566859       1 juicefs.go:328] Mount: skip mounting for existing mount point "/jfs/pvc-467cfe22-0099-442f-8165-61be09295da1"
I1110 11:04:53.566869       1 juicefs.go:331] Mount: add mount ref of configMap of volumeId "pvc-467cfe22-0099-442f-8165-61be09295da1"
I1110 11:04:53.566880       1 client.go:65] Get pod juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1
I1110 11:04:53.571303       1 client.go:68] Can't get pod juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1 namespace system-filesystem: pods "juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1" not found
E1110 11:04:53.571324       1 driver.go:60] GRPC error: rpc error: code = Internal desc = Could not mount juicefs: rpc error: code = Internal desc = Could not mount juicefs: pods "juicefs-k3d-nodeB-pvc-467cfe22-0099-442f-8165-61be09295da1" not found
I1110 11:06:55.655175       1 node.go:217] NodeGetCapabilities: called with args

Does the csi driver support serverless kubernetes like ASK?

I am trying it out in an Alibaba Cloud ASK cluster. After I run kubectl apply -f k8s.yaml, the csi-controller pod has status ProviderFailed, and describing the pod shows Message: The specified parameter HostPathVolume is not valid. I guess that's because hostPath is used in k8s.yaml? So my question is: does the JuiceFS CSI driver support ASK? Thanks.

juicefs hangs when writing pvc meta file

juicefs hangs at the following line (quoted from the driver source) when provisioning a PVC on Tencent Cloud:

if ioutil.WriteFile(metaPath, meta, 0644) != nil {

juicefs-csi-controller-0 juicefs-plugin I1022 15:15:25.417448       1 juicefs.go:129] CreateVol: volume not existed
juicefs-csi-controller-0 juicefs-plugin I1022 15:15:25.417904       1 juicefs.go:139] CreateVol: making directory "/jfs/tencentcloud-ap-hongkong-d41d8cd98f00b204e9800998ecf8427e/pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3"
juicefs-csi-controller-0 juicefs-plugin I1022 15:15:25.523984       1 controller.go:106] DeleteVolume: Deleting volume "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3"
juicefs-csi-controller-0 juicefs-plugin I1022 15:15:25.810776       1 juicefs.go:143] CreateVol: writing meta to "/jfs/tencentcloud-ap-hongkong-d41d8cd98f00b204e9800998ecf8427e/pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3/.juicefs"
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.089644       1 connection.go:183] GRPC response: {}
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.091426       1 connection.go:184] GRPC error: <nil>
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.091626       1 controller.go:1361] delete "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3": volume deleted
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.134818       1 controller.go:1407] delete "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3": persistentvolume deleted
juicefs-csi-controller-0 csi-provisioner I1022 15:15:27.140481       1 controller.go:1409] delete "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3": succeeded
juicefs-csi-controller-0 csi-attacher I1022 15:15:27.141501       1 controller.go:205] Started PV processing "pvc-4e6f0bc0-f4dd-11e9-8bc4-080027bf2ce3"
juicefs-csi-controller-0 csi-provisioner I1022 15:16:19.045560       1 connection.go:183] GRPC response: {}
juicefs-csi-controller-0 csi-provisioner I1022 15:16:19.052369       1 connection.go:184] GRPC error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
juicefs-csi-controller-0 csi-provisioner I1022 15:16:19.052462       1 controller.go:979] Final error received, removing PVC be4e1e7e-f4de-11e9-8bc4-080027bf2ce3 from claims in progress
juicefs-csi-controller-0 csi-provisioner W1022 15:16:19.052474       1 controller.go:886] Retrying syncing claim "be4e1e7e-f4de-11e9-8bc4-080027bf2ce3", failure 0
juicefs-csi-controller-0 csi-provisioner E1022 15:16:19.087483       1 controller.go:908] error syncing claim "be4e1e7e-f4de-11e9-8bc4-080027bf2ce3": failed to provision volume with StorageClass "juicefs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
juicefs-csi-controller-0 csi-provisioner I1022 15:16:19.095413       1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"juicefs-pvc", UID:"be4e1e7e-f4de-11e9-8bc4-080027bf2ce3", APIVersion:"v1", ResourceVersion:"8609", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "juicefs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.087913       1 controller.go:1196] provision "default/juicefs-pvc" class "juicefs-sc": started
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.094726       1 controller.go:442] CreateVolumeRequest {Name:pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3 CapacityRange:required_bytes:11258999068426240  VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > ] Parameters:map[csi.storage.k8s.io/node-publish-secret-name:juicefs-gd72m9k54b csi.storage.k8s.io/node-publish-secret-namespace:default csi.storage.k8s.io/provisioner-secret-name:juicefs-gd72m9k54b csi.storage.k8s.io/provisioner-secret-namespace:default] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:<nil> XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.099855       1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"juicefs-pvc", UID:"be4e1e7e-f4de-11e9-8bc4-080027bf2ce3", APIVersion:"v1", ResourceVersion:"8609", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/juicefs-pvc"
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.105243       1 connection.go:180] GRPC call: /csi.v1.Controller/CreateVolume
juicefs-csi-controller-0 csi-provisioner I1022 15:16:20.105285       1 connection.go:181] GRPC request: {"capacity_range":{"required_bytes":11258999068426240},"name":"pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3","secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":5}}]}
juicefs-csi-controller-0 juicefs-plugin I1022 15:16:20.115357       1 controller.go:61] CreateVolume: Secrets contains keys [bucket name secretkey token accesskey]
juicefs-csi-controller-0 juicefs-plugin I1022 15:16:20.115402       1 controller.go:65] CreateVolume: Mounting juicefs "pvc-be4e1e7e-f4de-11e9-8bc4-080027bf2ce3"
juicefs-csi-controller-0 juicefs-plugin I1022 15:16:20.115423       1 juicefs.go:240] AuthFs: cmd "/usr/bin/juicefs", args []string{"auth", "tencentcloud-ap-hongkong", "--accesskey=AKIDLXWLyOicQ1OYWH48GCvQkrlSwEJAbPkr", "--accesskey2=", "--bucket=juicefs-tencentcloud-ap-hongkong-1252455339", "--bucket2=", "--token=[secret]", "--secretkey=[secret]", "--secretkey2=[secret]", "--passphrase=[secret]"}
juicefs-csi-controller-0 juicefs-plugin I1022 15:16:22.346516       1 juicefs.go:182] MountFs: authentication output is ''

[FEATURES] add sentinel password option

What feature you'd like to add:
Currently JuiceFS supports using the Redis meta engine with Sentinel, but there is no way to pass the Sentinel password in the Redis URL.
On bare metal I can set the SENTINEL_PASSWORD environment variable, which juicefs reads to connect to Sentinel.
But with the CSI driver, I cannot achieve that by simply editing values.yaml or the Helm chart.

Why is this feature needed:
It would make the CSI driver usable with a Redis Sentinel meta engine (a hedged sketch of plumbing the variable through follows).
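
One possible shape of the fix, sketched with client-go types: inject SENTINEL_PASSWORD into the mount pod's environment from the volume secret, mirroring the bare-metal setup. The secret key name here is hypothetical:

package node

import (
	corev1 "k8s.io/api/core/v1"
)

// sentinelPasswordEnv sketches injecting SENTINEL_PASSWORD into the mount
// pod from the volume's secret, mirroring what one exports on bare metal.
func sentinelPasswordEnv(secretName string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: "SENTINEL_PASSWORD",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				Key:                  "sentinel-password", // hypothetical key
			},
		},
	}
}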

Build arm64 docker image

It would be helpful if the official images were built for arm64 as well. It would greatly simplify deployments to AWS Graviton and/or Raspberry Pi 4B based environments.

Failed to mount device in kubelet

Normal   SuccessfulAttachVolume  63s                  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-bbd05807-f1a9-11e9-9590-0800273e5475"
Warning  FailedMount             31s (x7 over 62s)    kubelet, minikube        MountVolume.MountDevice failed for volume "pvc-bbd05807-f1a9-11e9-9590-0800273e5475" : driver name csi.juicefs.com not found in the list of registered CSI drivers

CSI version not displayed in log message

It would be nice to have it for debugging:

juicefs-plugin I1017 23:57:20.609725       1 driver.go:29] Driver: csi.juicefs.com Version:
juicefs-plugin I1017 23:57:20.613619       1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS

csi.juicefs.com not found in the list of registered CSI drivers

What happened:
When trying to mount a JuiceFS volume I get the following error from my Pod.

kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: 
driver name csi.juicefs.com not found in the list of registered CSI drivers

What you expected to happen:
It would mount.

Anything else:

I do not have a custom kubelet path but I do run the following to make it shared:

sudo mount -o bind --make-shared /var/lib/kubelet /var/lib/kubelet

Environment:

  • JuiceFS version (use ./juicefs --version) or Hadoop Java SDK version:
Using Helm Chart.
  • Cloud provider or hardware configuration running JuiceFS:
k3d v5.0.1
  • OS (e.g: cat /etc/os-release):
Ubuntu 21.10 (Impish Indri)
  • Kernel (e.g. uname -a):
Linux 5.13.0-19-generic
  • Object storage (cloud provider and region):
S3 - AP Southeast 2
  • Redis info (version, cloud provider managed or self maintained):
6.0.15 - self maintained
  • Network connectivity (JuiceFS to Redis, JuiceFS to object storage):
Redis is running on the same host as k3d.

Logs

csi-controller

juicefs-plugin I1018 12:17:02.039058       1 driver.go:29] Driver: csi.juicefs.com version v0.10.6 commit b2bc111627ba895fe72429525e77525ed8a5b2f7 date 2021-09-27T08:31:49Z
juicefs-plugin I1018 12:17:02.292936       1 controller.go:48] Controller: /bin/sh: 1: /sbin/modprobe: not found
juicefs-plugin JuiceFS version 4.5.4 (2021-09-27 9c8a644)
juicefs-plugin I1018 12:17:02.511823       1 node.go:62] Node: /bin/sh: 1: /sbin/modprobe: not found
juicefs-plugin JuiceFS version 4.5.4 (2021-09-27 9c8a644)
juicefs-plugin I1018 12:17:02.512858       1 driver.go:73] Listening for connection on address: &net.UnixAddr{Name:"/var/lib/csi/sockets/pluginproxy/csi.sock", Net:"unix"}
juicefs-plugin I1018 12:17:03.148773       1 controller.go:145] ControllerGetCapabilities: called with args &csi.ControllerGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8
csi-provisioner I1018 12:17:02.136402       1 feature_gate.go:243] feature gates: &{map[]}
csi-provisioner I1018 12:17:02.136428       1 csi-provisioner.go:107] Version: v1.6.0-0-g321fa5c1c
csi-provisioner I1018 12:17:02.136437       1 csi-provisioner.go:121] Building kube configs for running in cluster...
csi-provisioner I1018 12:17:02.143679       1 connection.go:153] Connecting to unix:///var/lib/csi/sockets/pluginproxy/csi.sock
csi-provisioner I1018 12:17:03.145324       1 common.go:111] Probing CSI driver for readiness
csi-provisioner I1018 12:17:03.145342       1 connection.go:182] GRPC call: /csi.v1.Identity/Probe
csi-provisioner I1018 12:17:03.145348       1 connection.go:183] GRPC request: {}
csi-provisioner I1018 12:17:03.146655       1 connection.go:185] GRPC response: {}
csi-provisioner I1018 12:17:03.146919       1 connection.go:186] GRPC error: <nil>
csi-provisioner I1018 12:17:03.146925       1 connection.go:182] GRPC call: /csi.v1.Identity/GetPluginInfo
csi-provisioner I1018 12:17:03.146927       1 connection.go:183] GRPC request: {}
csi-provisioner I1018 12:17:03.147341       1 connection.go:185] GRPC response: {"name":"csi.juicefs.com","vendor_version":"v0.10.6"}
csi-provisioner I1018 12:17:03.147569       1 connection.go:186] GRPC error: <nil>
csi-provisioner I1018 12:17:03.147577       1 csi-provisioner.go:163] Detected CSI driver csi.juicefs.com
csi-provisioner W1018 12:17:03.147581       1 metrics.go:142] metrics endpoint will not be started because `metrics-address` was not specified.
csi-provisioner I1018 12:17:03.147586       1 connection.go:182] GRPC call: /csi.v1.Identity/GetPluginCapabilities
csi-provisioner I1018 12:17:03.147589       1 connection.go:183] GRPC request: {}
csi-provisioner I1018 12:17:03.148043       1 connection.go:185] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}}]}
csi-provisioner I1018 12:17:03.148470       1 connection.go:186] GRPC error: <nil>
csi-provisioner I1018 12:17:03.148475       1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
csi-provisioner I1018 12:17:03.148478       1 connection.go:183] GRPC request: {}
csi-provisioner I1018 12:17:03.148920       1 connection.go:185] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
csi-provisioner I1018 12:17:03.149735       1 connection.go:186] GRPC error: <nil>
csi-provisioner I1018 12:17:03.149925       1 controller.go:709] Using saving PVs to API server in background
csi-provisioner I1018 12:17:03.150045       1 reflector.go:153] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:135
csi-provisioner I1018 12:17:03.150052       1 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
csi-provisioner I1018 12:17:03.150045       1 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:135
csi-provisioner I1018 12:17:03.150088       1 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
csi-provisioner I1018 12:17:03.250029       1 shared_informer.go:227] caches populated
csi-provisioner I1018 12:17:03.250045       1 shared_informer.go:227] caches populated
csi-provisioner I1018 12:17:03.250054       1 controller.go:799] Starting provisioner controller csi.juicefs.com_juicefs-csi-controller-0_bda7e207-0518-4545-8cc0-231491ff50b2!
csi-provisioner I1018 12:17:03.250074       1 clone_controller.go:58] Starting CloningProtection controller
csi-provisioner I1018 12:17:03.250104       1 clone_controller.go:74] Started CloningProtection controller
csi-provisioner I1018 12:17:03.250117       1 volume_store.go:97] Starting save volume queue
csi-provisioner I1018 12:17:03.250193       1 reflector.go:153] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:836
csi-provisioner I1018 12:17:03.250202       1 reflector.go:188] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:836
csi-provisioner I1018 12:17:03.250199       1 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (15m0s) from github.com/kubernetes-csi/external-provisioner/pkg/controller/clone_controller.go:
csi-provisioner I1018 12:17:03.250206       1 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from github.com/kubernetes-csi/external-provisioner/pkg/controller/clone_controller.go:72
csi-provisioner I1018 12:17:03.250316       1 reflector.go:153] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:833
csi-provisioner I1018 12:17:03.250331       1 reflector.go:188] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:833
csi-provisioner I1018 12:17:03.350172       1 shared_informer.go:227] caches populated
csi-provisioner I1018 12:17:03.350325       1 controller.go:848] Started provisioner controller csi.juicefs.com_juicefs-csi-controller-0_bda7e207-0518-4545-8cc0-231491ff50b2!
csi-provisioner I1018 12:24:10.254544       1 reflector.go:432] github.com/kubernetes-csi/external-provisioner/pkg/controller/clone_controller.go:72: Watch close - *v1.PersistentVolumeClaim total 0 items
csi-provisioner I1018 12:24:14.253241       1 reflector.go:432] sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:836: Watch close - *v1.StorageClass total 0 items received
csi-provisioner I1018 12:25:22.154599       1 reflector.go:432] k8s.io/client-go/informers/factory.go:135: Watch close - *v1.PersistentVolumeClaim total 0 items received
csi-provisioner I1018 12:25:29.256043       1 reflector.go:432] sigs.k8s.io/sig-storage-lib-external-provisioner/v5/controller/controller.go:833: Watch close - *v1.PersistentVolume total 0 items received
csi-provisioner I1018 12:26:45.152943       1 reflector.go:432] k8s.io/client-go/informers/factory.go:135: Watch close - *v1.StorageClass total 0 items received

csi-node

juicefs-plugin I1018 10:08:38.742096       1 driver.go:29] Driver: csi.juicefs.com version v0.10.6 commit b2bc111627ba895fe72429525e77525ed8a5b2f7 date 2021-09-27T08:31:49Z
juicefs-plugin I1018 10:08:38.957467       1 controller.go:48] Controller: /bin/sh: 1: /sbin/modprobe: not found
juicefs-plugin JuiceFS version 4.5.4 (2021-09-27 9c8a644)
juicefs-plugin I1018 10:08:39.171010       1 node.go:62] Node: /bin/sh: 1: /sbin/modprobe: not found
juicefs-plugin JuiceFS version 4.5.4 (2021-09-27 9c8a644)
juicefs-plugin I1018 10:08:39.171479       1 driver.go:73] Listening for connection on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
juicefs-plugin I1018 10:08:45.148241       1 node.go:234] NodeGetInfo: called with args
node-driver-registrar I1018 10:08:44.589148       1 main.go:110] Version: v1.1.0-0-g80a94421
node-driver-registrar I1018 10:08:44.589170       1 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock"
node-driver-registrar I1018 10:08:44.589178       1 connection.go:151] Connecting to unix:///csi/csi.sock
node-driver-registrar I1018 10:08:44.589426       1 main.go:127] Calling CSI driver to discover driver name
node-driver-registrar I1018 10:08:44.589435       1 connection.go:180] GRPC call: /csi.v1.Identity/GetPluginInfo
node-driver-registrar I1018 10:08:44.589437       1 connection.go:181] GRPC request: {}
node-driver-registrar I1018 10:08:44.590538       1 connection.go:183] GRPC response: {"name":"csi.juicefs.com","vendor_version":"v0.10.6"}
node-driver-registrar I1018 10:08:44.590804       1 connection.go:184] GRPC error: <nil>
node-driver-registrar I1018 10:08:44.590807       1 main.go:137] CSI driver name: "csi.juicefs.com"
node-driver-registrar I1018 10:08:44.590817       1 node_register.go:54] Starting Registration Server at: /registration/csi.juicefs.com-reg.sock
node-driver-registrar I1018 10:08:44.590855       1 node_register.go:61] Registration Server started at: /registration/csi.juicefs.com-reg.sock
node-driver-registrar I1018 10:08:45.147290       1 main.go:77] Received GetInfo call: &InfoRequest{}
node-driver-registrar I1018 10:08:45.171885       1 main.go:87] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}

Errors under Kubernetes 1.16

E0426 08:15:27.168818 2261 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins/csi.juicefs.com/csi.sock" failed. No retries permitted until 2020-04-26 08:17:29.168797504 +0000 UTC m=+22096.921279600 (durationBeforeRetry 2m2s). Error: "RegisterPlugin error -- failed to get plugin info using RPC GetInfo at socket /var/lib/kubelet/plugins/csi.juicefs.com/csi.sock, err: rpc error: code = Unimplemented desc = unknown service pluginregistration.Registration"

Failed to mount static provisioned volume

$ kubectl apply -k examples/static-provisioning
$ kubectl describe pod app
...
  Warning  FailedMount             0s (x9 over 2m19s)  kubelet, ip-172-20-32-62.ec2.internal  MountVolume.SetUp failed for volume "juicefs-aws-us-east-1" : rpc error: code = Internal desc = Could not bind "/jfs/aws-us-east-1/aws-us-east-1" at "/var/lib/kubelet/pods/dc64632f-aa22-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount": mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t none -o bind /jfs/aws-us-east-1/aws-us-east-1 /var/lib/kubelet/pods/dc64632f-aa22-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount
Output: mount: /var/lib/kubelet/pods/dc64632f-aa22-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount: special device /jfs/aws-us-east-1/aws-us-east-1 does not exist.

[FEATURES] support for arm64 docker images

What feature you'd like to add:

Support for arm64 docker images

Why is this feature needed:

With juicefs-csi-driver becoming part of the kubernetes ecosystem, it would be helpful to support arm64 docker images.

Support GCP credentials for GS access

Google Cloud Storage requires a credentials file referenced by an environment variable, but the JuiceFS CSI driver currently supports only command-line arguments for authentication.

We need a solution for file-based auth (a hedged sketch follows).
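
A minimal sketch of file-based auth, assuming the service-account JSON arrives via the volume secret and that pointing the standard GOOGLE_APPLICATION_CREDENTIALS variable at a file the driver writes is sufficient for juicefs to authenticate (helper name and paths are illustrative):

package node

import (
	"os"
	"path/filepath"
)

// setupGCPCredentials writes the service-account JSON to disk and exports
// GOOGLE_APPLICATION_CREDENTIALS, the standard variable Google client
// libraries read, before the mount process is started.
func setupGCPCredentials(dir string, credsJSON []byte) (string, error) {
	path := filepath.Join(dir, "gcp-credentials.json")
	if err := os.WriteFile(path, credsJSON, 0600); err != nil {
		return "", err
	}
	return path, os.Setenv("GOOGLE_APPLICATION_CREDENTIALS", path)
}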

mount options not effective for already mounted volumes

Currently mounting is skipped for all existing mount points. The driver needs to allow multiple mounts with different options (see the sketch after the log below).

I0720 02:53:55.397828       1 node.go:76] NodePublishVolume: volume_id is aws-us-east-1
I0720 02:53:55.397847       1 node.go:87] NodePublishVolume: volume_capability is mount:<fs_type:"juicefs" > access_mode:<mode:MULTI_NODE_MULTI_WRITER >
I0720 02:53:55.397880       1 node.go:93] NodePublishVolume: creating dir /var/lib/kubelet/pods/9ad50b95-aa99-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount
I0720 02:53:55.397902       1 node.go:120] NodePublishVolume: mounting juicefs with secret [accesskey name secretkey token], options [metacache cache-size=100 cache-dir=/var/foo]
I0720 02:53:56.864199       1 juicefs.go:175] MountFs: authentication output is ''
I0720 02:53:56.865338       1 juicefs.go:253] Mount: skip mounting for existing mount point "/jfs/aws-us-east-1"
I0720 02:53:56.865361       1 node.go:130] NodePublishVolume: binding /jfs/aws-us-east-1 at /var/lib/kubelet/pods/9ad50b95-aa99-11e9-83c9-0a1af3b5d5c6/volumes/kubernetes.io~csi/juicefs-aws-us-east-1/mount with options [metacache cache-size=100 cache-dir=/var/foo]
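
The log above shows the skip at juicefs.go:253 even though new options were requested. A hedged sketch of the missing check, with illustrative names: remember which options each mount point was created with, and skip only when the requested options match:

package node

import (
	"sort"
	"strings"
)

// optionsKey normalizes a mount-option list so two orderings compare equal.
func optionsKey(opts []string) string {
	s := append([]string(nil), opts...)
	sort.Strings(s)
	return strings.Join(s, ",")
}

// mountedOptions maps mount point -> normalized options it was mounted with.
var mountedOptions = map[string]string{}

// shouldSkipMount sketches the fix: skip only when the existing mount at
// mountPoint already carries the requested options; otherwise remount (or
// mount again at a new path) with the new options.
func shouldSkipMount(mountPoint string, requested []string) bool {
	existing, ok := mountedOptions[mountPoint]
	return ok && existing == optionsKey(requested)
}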

NodeUnpublishVolume failed as the target path is not mounted

The stderr of the juicefs-plugin container in juicefs-csi-node:

I0202 05:58:55.348033       1 node.go:179] NodeGetCapabilities: called with args
I0202 05:58:58.669430       1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-17d66882-7689-45d8-98fd-791d69ef2ed5" target_path:"/var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount"
I0202 05:58:58.669881       1 node.go:159] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
I0202 05:58:58.669893       1 mount_linux.go:211] Unmounting /var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
E0202 05:58:58.670538       1 driver.go:53] GRPC error: rpc error: code = Internal desc = Could not unmount "/var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount": Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
Output: umount: /var/lib/kubelet/pods/46444c82-1d4d-44ec-a5a2-95e7e5a6e9aa/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount: not mounted.

I0202 05:58:59.673526       1 node.go:150] NodeUnpublishVolume: called with args volume_id:"pvc-17d66882-7689-45d8-98fd-791d69ef2ed5" target_path:"/var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount"
I0202 05:58:59.673980       1 node.go:159] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
I0202 05:58:59.673991       1 mount_linux.go:211] Unmounting /var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
E0202 05:58:59.674608       1 driver.go:53] GRPC error: rpc error: code = Internal desc = Could not unmount "/var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount": Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount
Output: umount: /var/lib/kubelet/pods/59f0ee08-e965-4b33-b546-75782fa4ad41/volumes/kubernetes.io~csi/pvc-17d66882-7689-45d8-98fd-791d69ef2ed5/mount: not mounted.
