synology-csi's People

Contributors

bokysan, jparklab, masayaaoyama, squishykid

synology-csi's Issues

Unable to mount volume on k8s 1.17

Using the 1.16 deployment manifests, with a simple 1 GB PVC on a test Deployment.

Startup cycles with the following errors:

Warning FailedMount MountVolume.SetUp failed for volume "pvc-691477a2-ee2c-4c8e-ac82-66bfba218f2b" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Unable to list existing sessions: exit status 21 a minute ago
Warning FailedMount Unable to attach or mount volumes: unmounted volumes=[1gb], unattached volumes=[1gb default-token-cdhl5]: timed out waiting for the condition 2 minutes ago

Running on Ubuntu 18.04.
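
For context, with open-iscsi an exit status of 21 generally means "no objects found", i.e. there are no existing sessions or records to list. A rough diagnostic sketch (not an official fix) to see what the node plugin is running into, executed on the affected node, assuming open-iscsi is installed there:

# Rough diagnostic on the affected node.
iscsiadm -m session
echo "exit status: $?"    # 21 usually indicates "no records/sessions found" rather than a hard failure
systemctl status iscsid   # also confirm the iscsid service is installed and running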

Pod fails with error 19

I installed synology-csi on a Kubernetes 1.20 cluster, with a Synology NAS running DSM 6.2.3, using the following configuration file:

host: 192.168.15.25           # ip address or hostname of the Synology NAS
port: 5000                 # change this if you use a port other than the default one
sslVerify: false           # set this true to use https
username:          # username
password:        # password
loginApiVersion: 2         # Optional. Login version. From 2 to 6. Defaults to "2".
loginHttpMethod: auto  # Optional. Method. "GET", "POST" or "auto" (default). "auto" uses POST on version >= 6
sessionName: Core          # You won't need to touch this value
enableSynoToken: no        # Optional. Set to 'true' to enable syno token. Only for versions 3 and above.
enableDeviceToken: yes     # Optional. Set to 'true' to enable device token. Only for versions 6 and above.
#deviceId: <device-id>      # Optional. Only for versions 6 and above. If not set, DEVICE_ID environment var is read.
deviceName: k8s-demo         # Optional. Only for versions 6 and above.

I created a storage class, then created a MySQL pod and PVC. The LUN and target were created properly. However, the pod isn't coming up, and it reports the following errors:

  Warning  FailedMount             8m21s                  kubelet                  MountVolume.SetUp failed for volume "pvc-3d03fefc-5121-4830-a864-07a2058c602a" : rpc error: code = Internal desc = Failed to mount /dev/disk/by-path/ip-192.168.15.25:3260-iscsi-iqn.2000-01.com.synology:kube-csi-pvc-3d03fefc-5121-4830-a864-07a2058c602a-lun-1 to /var/lib/kubelet/pods/0a7e3a14-b669-405a-be33-bad7aeb78c7b/volumes/kubernetes.io~csi/pvc-3d03fefc-5121-4830-a864-07a2058c602a/mount(fstype: ext4, options: [rw]): mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o rw,defaults /dev/disk/by-path/ip-192.168.15.25:3260-iscsi-iqn.2000-01.com.synology:kube-csi-pvc-3d03fefc-5121-4830-a864-07a2058c602a-lun-1 /var/lib/kubelet/pods/0a7e3a14-b669-405a-be33-bad7aeb78c7b/volumes/kubernetes.io~csi/pvc-3d03fefc-5121-4830-a864-07a2058c602a/mount
Output: mount: /var/lib/kubelet/pods/0a7e3a14-b669-405a-be33-bad7aeb78c7b/volumes/kubernetes.io~csi/pvc-3d03fefc-5121-4830-a864-07a2058c602a/mount: mount point does not exist.
  Warning  FailedMount  5m                   kubelet  MountVolume.WaitForAttach failed for volume "pvc-3d03fefc-5121-4830-a864-07a2058c602a" : volume 7.1 has GET error for volume attachment csi-32c2ef0ef3c3a72f742c3751c5f6030fc5c2fc9c49e794831218199319a8b4ce: Get "https://192.168.15.214:6443/apis/storage.k8s.io/v1/volumeattachments/csi-32c2ef0ef3c3a72f742c3751c5f6030fc5c2fc9c49e794831218199319a8b4ce": http2: client connection force closed via ClientConn.Close
  Warning  FailedMount  25s (x2 over 6m22s)  kubelet  Unable to attach or mount volumes: unmounted volumes=[test-vol], unattached volumes=[test-vol default-token-wzllc]: timed out waiting for the condition
  Warning  FailedMount  22s (x8 over 8m19s)  kubelet  MountVolume.SetUp failed for volume "pvc-3d03fefc-5121-4830-a864-07a2058c602a" : rpc error: code = Internal desc = Failed to mount /dev/disk/by-path/ip-192.168.15.25:3260-iscsi-iqn.2000-01.com.synology:kube-csi-pvc-3d03fefc-5121-4830-a864-07a2058c602a-lun-1 to /var/lib/kubelet/pods/0a7e3a14-b669-405a-be33-bad7aeb78c7b/volumes/kubernetes.io~csi/pvc-3d03fefc-5121-4830-a864-07a2058c602a/mount(fstype: ext4, options: [rw]): failed to get disk format of disk /dev/disk/by-path/ip-192.168.15.25:3260-iscsi-iqn.2000-01.com.synology:kube-csi-pvc-3d03fefc-5121-4830-a864-07a2058c602a-lun-1: blkid returns invalid output: /dev/disk/by-path/ip-192.168.15.25:3260-iscsi-iqn.2000-01.com.synology:kube-csi-pvc-3d03fefc-5121-4830-a864-07a2058c602a-lun-1: UUID="b358f104-4463-431d-94ff-b8996ae6920e" TYPE="ext4"
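
The last event looks like the node plugin failed to parse blkid output for an already-formatted LUN (similar to the Alpine blkid issue reported further down). A rough way to reproduce the check by hand on the worker node, purely diagnostic, with the device path copied from the events above:

# Diagnostic only: reproduce the formatting check the node plugin performs.
DEV=/dev/disk/by-path/ip-192.168.15.25:3260-iscsi-iqn.2000-01.com.synology:kube-csi-pvc-3d03fefc-5121-4830-a864-07a2058c602a-lun-1
blkid "$DEV"   # the plugin parses this output to decide whether to mkfs or just mount
ls -ld /var/lib/kubelet/pods/0a7e3a14-b669-405a-be33-bad7aeb78c7b/volumes/kubernetes.io~csi/pvc-3d03fefc-5121-4830-a864-07a2058c602a/mount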

Issues creating more than one LUN and target

Hi @jparklab ,

After your explanation in #42, and since no one has asked before, I'm creating this issue.
I tried to execute the following command:

csc controller create-volume \
    --req-bytes 2147483648 \
    -e tcp://127.0.0.1:10000 \
    test-volume
"8.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="8"

but somehow it creates more than one LUN and target, as shown in the screenshots (attachment), and it also gives me the output below:

Target: (screenshot)

LUN: (screenshot)

ubuntu@node1:~/synology-csi$ csc controller create-volume \
    --req-bytes 2147483648 \
    -e tcp://127.0.0.1:10000 \
    test-volume
"8.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="8"
"2.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="2"
"3.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-8.1" "mappingIndex"="1" "targetID"="3"
"4.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-2147483648" "mappingIndex"="1" "targetID"="4"
Failed to create a LUN(name: kube-csi-iqn=iqn.2000-01.com.synology:kube-csi-test-volume, location: /volume1, size: 2147483648, type: BLUN): Failed to create: (18990503)

I expected only one target and LUN to be created for this. Or is that a wrong assumption, and is the result above the correct behaviour?

Thanks

CSI node fails to log into an iSCSI target

A pod with a volume created by the synology-csi driver got stuck in the ContainerCreating state because csi-node failed to log into the iSCSI target:

I0817 19:59:34.286245       1 iscsi.go:64] [EXECUTING] -m node -T iqn.2000-01.com.synology:kube-csi-pvc-72c686af-41ab-45a2-a985-b69a0b304e60 -p jpnas --login
I0817 19:59:34.296846       1 iscsi.go:68] Error running iscsiadm login: exit status 12

When I ran the login command manually inside the csi-node pod, I got the following error:

[root@cluster-1 /]# iscsiadm  -m node -T iqn.2000-01.com.synology:kube-csi-pvc-72c686af-41ab-45a2-a985-b69a0b304e60 -p jpnas --login
Logging in to [iface: default, target: iqn.2000-01.com.synology:kube-csi-pvc-72c686af-41ab-45a2-a985-b69a0b304e60, portal: 192.168.1.196,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2000-01.com.synology:kube-csi-pvc-72c686af-41ab-45a2-a985-b69a0b304e60, portal: 192.168.1.196,3260].
iscsiadm: initiator reported error (12 - iSCSI driver not found. Please make sure it is loaded, and retry the operation)
iscsiadm: Could not log into all portals
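
Error 12 means the initiator could not find an iSCSI transport driver in the kernel. A rough check and workaround on the node host, assuming the standard iscsi_tcp module (a sketch, not the project's documented fix):

# On the node host (not inside the container): check and load the iSCSI kernel modules.
lsmod | grep -E 'iscsi_tcp|libiscsi'
modprobe iscsi_tcp                                   # load the TCP transport if it is missing
echo iscsi_tcp > /etc/modules-load.d/iscsi_tcp.conf  # optional: load it on boot (path may vary by distro)
systemctl restart iscsid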

syno-config.yml: link to documentation for the differing API version parameters

I opted to use v2 since it seemed like it didn't need a lot of extra config. However, could you add a link to the API docs you used, to help those who want to use the newer API versions obtain the appropriate settings? Alternatively, adding more comments to the syno-config.yml documentation would be another great option.

For example, I didn't know where to find the deviceId and deviceName settings for the v6 API; clarifying where to get these values would be helpful. I also had no idea what enableSynoToken means or what it relates to, other than that it's a v3+ API option.

Minikube compatibility

Hello,

Is this driver supposed to work on Minikube too? I get plenty of errors; I can post logs if you wish.

My environment:
Minikube 1.2.0 (macOS 10.14.5, hyperkit driver)
Kubernetes v1.15.0 on Docker 18.09.6

Failed to mount existing volumes

Since the GetDiskFormat function in k8s.io/utils is not compatible with the blkid binary of Alpine Linux (kubernetes/utils#122), the synology-csi-node plugin fails to mount existing volumes when a pod is recreated.

One workaround would be to use an Ubuntu-based image instead of the Alpine image until k8s.io/utils is updated.
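
A rough sketch of that workaround: rebuild the node plugin image on an Ubuntu base so blkid behaves the way GetDiskFormat expects. The binary name, package list, and registry below are placeholders, not the project's actual build recipe:

# Sketch only: rebuild the plugin image on Ubuntu instead of Alpine.
cat > Dockerfile.ubuntu <<'EOF'
FROM ubuntu:20.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends open-iscsi e2fsprogs util-linux ca-certificates && \
    rm -rf /var/lib/apt/lists/*
# "synology-csi-driver" is a placeholder for the binary built from this repository
COPY synology-csi-driver /synology-csi-driver
ENTRYPOINT ["/synology-csi-driver"]
EOF
docker build -f Dockerfile.ubuntu -t my-registry.example.com/synology-csi:ubuntu .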

Failed to find device

MountVolume.SetUp failed for volume "pvc-eb1e2360-f321-4653-aeed-967d1bdcad46" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Unknown desc = Failed to find device for iqn.2000-01.com.synology:kube-csi-pvc-eb1e2360-f321-4653-aeed-967d1bdcad46-lun-1

exist: iqn.2000-01.com.synology:kube-csi-pvc-eb1e2360-f321-4653-aeed-967d1bdcad46

The PV and PVC seem to be correct.

kubectl describe pvc katib-mysql -n kubeflow

Name:          katib-mysql
Namespace:     kubeflow
StorageClass:  synology-iscsi-storage
Status:        Bound
Volume:        pvc-eb1e2360-f321-4653-aeed-967d1bdcad46
Labels:        app.kubernetes.io/component=katib
               app.kubernetes.io/name=katib-controller
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.synology.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    katib-mysql-5754b5dd66-w7gl6
Events:
  Type     Reason                 Age                 From                                                                              Message
  ----     ------                 ----                ----                                                                              -------
  Normal   ExternalProvisioning   18m (x62 over 33m)  persistentvolume-controller                                                       waiting for a volume to be created, either by external provisioner "csi.synology.com" or manually created by system administrator
  Warning  ProvisioningFailed     13m                 persistentvolume-controller                                                       storageclass.storage.k8s.io "synology-iscsi-storage" not found
  Normal   Provisioning           11m                 csi.synology.com_synology-csi-provisioner-0_503fa873-e55b-407a-91b4-c947eda5c855  External provisioner is provisioning volume for claim "kubeflow/katib-mysql"
  Normal   ProvisioningSucceeded  11m                 csi.synology.com_synology-csi-provisioner-0_503fa873-e55b-407a-91b4-c947eda5c855  Successfully provisioned volume pvc-eb1e2360-f321-4653-aeed-967d1bdcad46
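
"Failed to find device" after a successful provision usually means the target was logged in but no block device has shown up under /dev/disk/by-path yet. A rough diagnostic sketch for the node where the pod is scheduled (IQN copied from the error above):

# Diagnostic only, run on the node where the pod is scheduled:
iscsiadm -m session                    # confirm a session to the NAS exists
iscsiadm -m session --rescan           # ask the initiator to rescan sessions for new LUNs
ls -l /dev/disk/by-path/ | grep kube-csi-pvc-eb1e2360-f321-4653-aeed-967d1bdcad46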

Occasional volume attach failure

The initial StatefulSet will create and start running successfully (on apollo [node 1]), then the subsequent ones will fail as follows (on starbuck [node 2]):

Warning  FailedAttachVolume  26s                attachdetach-controller  Multi-Attach error for volume "pvc-e4056d51-0609-4140-8f87-ceb4c660895e" Volume is already exclusively attached to one node and can't be attached to another

and if I cordon the node and schedule them on the same node, I get the following:

Warning  FailedMount  1s (x5 over 9s)  kubelet, apollo    MountVolume.WaitForAttach failed for volume "pvc-e4056d51-0609-4140-8f87-ceb4c660895e" : volume attachment is being deleted

If I scale the StatefulSet to 0 and delete the PVC and then rescale to 1 and let it recreate the PVC and deploy to the same as the original node (apollo) I get the following:

Warning  FailedMount       19s                    kubelet, apollo    Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data config default-token-ws2vb]: timed out waiting for the condition

As of k8s 1.17, this is also starting to occur on regular Deployments.
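
Since the volumes are RWO, the Multi-Attach error typically means a VolumeAttachment object still binds the PV to the old node. A rough way to inspect it (and remove it only if it is clearly stale); the attachment name placeholder below is hypothetical:

# Diagnostic: list attachments for the affected PV and inspect the one that is stuck.
kubectl get volumeattachments | grep pvc-e4056d51-0609-4140-8f87-ceb4c660895e
kubectl describe volumeattachment <attachment-name-from-previous-output>
# kubectl delete volumeattachment <attachment-name>   # last resort; only if the old node has really released the volume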

Reference Issue

Hi @jparklab ,

Thanks for your great CSI driver for Kubernetes. I'm trying to use it with v1.18.
There are several things that I don't understand:

  1. In syno-config.yml:
     deviceId: <device-id> # Optional. Only for versions 6 and above. If not set, DEVICE_ID environment var is read.
     deviceName: <name>    # Optional. Only for versions 6 and above.
     • How do I know which version I'm using?
     • What are deviceId and deviceName, and how do I look up values for them?
  2. When I try to debug with:
     csc controller create-volume \
         --req-bytes 2147483648 \
         -e tcp://127.0.0.1:10000 \
         test-volume
     "8.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="8"
     • What are mappingIndex and targetID, and how do I look up values for them?

Help is very much appreciated. Thanks.

After upgrade to k8s 1.24 - Could not log into all portals

Hi,

I just upgraded to Kubernetes 1.24. Everything seems to be alright except one PVC (all others are working fine).

MountVolume.MountDevice failed for volume "pvc-303fb8ea-5777-401e-9ab3-a907e33675da" : 
rpc error: code = Internal desc = rpc error: code = Internal desc = Failed to login with target iqn [iqn.2000-01.com.synology:nas01.pvc-303fb8ea-5777-401e-9ab3-a907e33675da], 
err: iscsiadm: Could not login to [iface: default, target: iqn.2000-01.com.synology:nas01.pvc-303fb8ea-5777-401e-9ab3-a907e33675da, portal: 10.x.x.xxx,3260]. 
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure) 
iscsiadm: Could not log into all portals Logging in to [iface: default, target: iqn.2000-01.com.synology:nas01.pvc-303fb8ea-5777-401e-9ab3-a907e33675da, portal: 10.x.x.xxx,3260] (multiple) (exit status 19)

Does anyone have an idea how to fix this?
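
Not an authoritative fix, but one thing that sometimes clears a single failing target after an upgrade is removing the stale node record for that IQN on the worker and letting the plugin log in fresh again (sketch only; the portal address stays redacted as above):

# Sketch, run on the affected worker node:
iscsiadm -m node -T iqn.2000-01.com.synology:nas01.pvc-303fb8ea-5777-401e-9ab3-a907e33675da --logout || true
iscsiadm -m node -T iqn.2000-01.com.synology:nas01.pvc-303fb8ea-5777-401e-9ab3-a907e33675da -o delete
# then delete the affected pod so kubelet retries the mount and the plugin rediscovers the target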

Dockerhub image needs to be updated

The latest tags of the jparklab/synology-csi image on Docker Hub don't include #36, the fix for #35. I can confirm that a newly built image resolves the problem, but deploying straight from deploy/kubernetes/... currently results in a broken install.
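
Until the Docker Hub tags are refreshed, a rough workaround is to build the image from source and point the manifests at your own tag. This assumes the Dockerfile at the repository root builds the plugin image; the registry name is a placeholder:

# Sketch: build and publish your own image, then reference it before applying the manifests.
git clone https://github.com/jparklab/synology-csi.git
cd synology-csi
docker build -t my-registry.example.com/synology-csi:fix-35 .
docker push my-registry.example.com/synology-csi:fix-35
# then edit the image: fields under deploy/kubernetes/... to use that tag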

Create Volume Parameters

Hi @jparklab ,

In the README.md you said we can simulate volume creation by executing this command:

csc controller create-volume \
    --req-bytes 2147483648 \
    -e tcp://127.0.0.1:10000 \
    test-volume \
    "8.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="8"
I understand the command up to test-volume. It is like the one I saw in the gocsi create-volume example, which only says:

csc controller new --endpoint /csi/server.sock \
    --cap 1,block \
    --cap MULTI_NODE_MULTI_WRITER,mount,xfs,uid=500 \
    --params region=us,zone=texas \
    --params disabled=false \
    MyNewVolume1 MyNewVolume2

Therefore, what is the meaning of:

"8.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="8"

Failed to run ISCSI login: exit status 19

Hi,

I had a working synology-csi setup a few weeks ago, but after upgrading the Synology, I get the following message when trying to connect the PVs to it:

MountVolume.SetUp failed for volume "pvc-b46bb6d2-f144-44cc-b283-ee9b95712f78" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Failed to run ISCSI login: exit status 19

Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[local-archive data felix-db-token-np76r shared-memory]: timed out waiting for the condition

Here is my Synology config with login details removed:

# syno-config.yml file
host: 10.0.3.50        # ip address or hostname of the Synology NAS
port: 5000              # change this if you use a port other than the default one
username:  
password:
sessionName: Core       # You won't need to touch this value
sslVerify: false        # set this true to use https

Here is my Synology model number and DSM version:

RS4017xs+
DSM 6.2.2-24922 Update 6

I have iscsiadm installed on all nodes, and I installed the v1.17 version of synology-csi on Kubernetes 1.17 (Rancher 2.3.5) running on a local cluster. Any ideas?

CreateVolume times out and controller gets stuck in already exists loop

Thanks for your work on this!

I think the provisioning of my LUN/target is taking longer than some hard-coded 10-second timeout.

Once the timeout occurs, the controller continues trying to create the same objects on the Synology, but constantly just receives the already exists error.

I feel like there should be better handling of the timeout case; would it be possible to make the timeout interval configurable? (A possible mitigation via the provisioner sidecar is sketched after the log below.)

I1205 22:21:38.562909       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0", UID:"f3621cf5-893e-4c33-a920-863d0713df96", APIVersion:"v1", ResourceVersion:"63903", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0"
I1205 22:21:38.562324       1 controller.go:122] GRPC request: 
I1205 22:21:38.568163       1 controller.go:124] GRPC response: capabilities:<service:<type:CONTROLLER_SERVICE > > 
I1205 22:21:38.568361       1 controller.go:125] GRPC error: <nil>
I1205 22:21:38.568416       1 controller.go:121] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I1205 22:21:38.568433       1 controller.go:122] GRPC request: 
I1205 22:21:38.570195       1 controller.go:124] GRPC response: capabilities:<rpc:<type:LIST_VOLUMES > > capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > 
I1205 22:21:38.570474       1 controller.go:125] GRPC error: <nil>
I1205 22:21:38.570988       1 controller.go:121] GRPC call: /csi.v1.Identity/GetPluginInfo
I1205 22:21:38.571049       1 controller.go:122] GRPC request: 
I1205 22:21:38.573108       1 controller.go:124] GRPC response: name:"csi.synology.com" vendor_version:"0.1.0" 
I1205 22:21:38.573209       1 controller.go:125] GRPC error: <nil>
I1205 22:21:38.573255       1 controller.go:447] CreateVolumeRequest {Name:pvc-f3621cf5-893e-4c33-a920-863d0713df96 CapacityRange:required_bytes:53687091200  VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:<nil> XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1205 22:21:38.573742       1 controller.go:121] GRPC call: /csi.v1.Controller/CreateVolume
I1205 22:21:38.573773       1 controller.go:122] GRPC request: name:"pvc-f3621cf5-893e-4c33-a920-863d0713df96" capacity_range:<required_bytes:53687091200 > volume_capabilities:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > 
I1205 22:21:48.573869       1 controller.go:124] GRPC response: 
I1205 22:21:48.574026       1 controller.go:125] GRPC error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
W1205 22:21:48.574065       1 controller.go:490] CreateVolume timeout: 10s has expired, operation will be retried
I1205 22:21:53.574593       1 controller.go:121] GRPC call: /csi.v1.Controller/CreateVolume
I1205 22:21:53.574661       1 controller.go:122] GRPC request: name:"pvc-f3621cf5-893e-4c33-a920-863d0713df96" capacity_range:<required_bytes:53687091200 > volume_capabilities:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > 
I1205 22:21:55.412035       1 controller.go:124] GRPC response: 
I1205 22:21:55.412223       1 controller.go:125] GRPC error: rpc error: code = AlreadyExists desc = Volume pvc-f3621cf5-893e-4c33-a920-863d0713df96 already exists, found LUN kube-csi-pvc-f3621cf5-893e-4c33-a920-863d0713df96
W1205 22:21:55.412489       1 controller.go:686] Retrying syncing claim "default/prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0" because failures 0 < threshold 15
E1205 22:21:55.412849       1 controller.go:701] error syncing claim "default/prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0": failed to provision volume with StorageClass "synology-iscsi-storage": rpc error: code = AlreadyExists desc = Volume pvc-f3621cf5-893e-4c33-a920-863d0713df96 already exists, found LUN kube-csi-pvc-f3621cf5-893e-4c33-a920-863d0713df96
I1205 22:21:55.414065       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0", UID:"f3621cf5-893e-4c33-a920-863d0713df96", APIVersion:"v1", ResourceVersion:"63903", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "synology-iscsi-storage": rpc error: code = AlreadyExists desc = Volume pvc-f3621cf5-893e-4c33-a920-863d0713df96 already exists, found LUN kube-csi-pvc-f3621cf5-893e-4c33-a920-863d0713df96
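
As a possible mitigation while the slow CreateVolume is investigated: the external-provisioner sidecar accepts a --timeout flag, and raising it above 10s may at least avoid the retry loop. The namespace and StatefulSet name below are taken from these issues, and the container index is an assumption (sketch only):

# Sketch: raise the provisioner sidecar's CreateVolume timeout (verify the container index against your manifest).
kubectl -n synology-csi patch statefulset synology-csi-provisioner --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--timeout=60s"}]'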

Support ext4 Synology volumes

Currently, this project only supports BTRFS. I would like to see ext4 compatibility added.

Using CSC, this can be done with the following parameters:

Thick Provision:
csc controller create-volume --req-bytes 2147483648 -e tcp://127.0.0.1:10000 --params type=FILE test-volume

Thin Provision:
csc controller create-volume --req-bytes 2147483648 -e tcp://127.0.0.1:10000 --params type=THIN test-volume
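
Since the external-provisioner forwards StorageClass parameters to CreateVolume, the same type parameter could presumably be set per storage class instead of via csc. A hedged sketch, assuming the driver reads a type parameter exactly as in the commands above:

# Sketch: a StorageClass that passes the LUN/volume type as a CreateVolume parameter.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi-thin           # arbitrary name
provisioner: csi.synology.com
parameters:
  type: THIN                           # assumption: forwarded to the driver like "--params type=THIN"
reclaimPolicy: Delete
EOF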

Support expanding volumes

Check if synology has APIs to expand volumes, and update the driver to support expanding volumes if so.

PVs don't reattach after power outage

Sometimes I'll get a power outage and my persistent volumes don't re-attach automatically. I typically have to kill the csi-attacher and synology-csi pods to resolve it manually.

It seems like something is needed in the control loop to re-attempt attaching after a failure every so often, to fix this scenario.
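
Until the driver retries attachments on its own, the manual recovery described above can at least be scripted. The namespace and workload names below are assumptions based on the deploy manifests referenced in these issues:

# Sketch of the manual recovery after an outage:
kubectl -n synology-csi rollout restart statefulset/synology-csi-attacher
kubectl -n synology-csi rollout restart daemonset/synology-csi-node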

Self-signed certificate for Synology DSM

I tried to set up the CSI driver on my single-node (microk8s.io) Kubernetes cluster, which is based on Ubuntu 19.10.

To fit my security needs, I changed the port and enabled sslVerify. But this raises the issue that the self-signed certificate from the Synology NAS cannot be verified.
I added the CA certificate to /usr/local/share/ca-certificates/ and executed update-ca-certificates, which added the new CA certificate. To be sure, I rebooted the whole single-node cluster.

But even with this, the driver is still complaining. The IP and password have been replaced.

$ kubectl logs synology-csi-attacher-0 -n synology-csi -c csi-plugin
I0104 11:51:25.024914       1 driver.go:66] Driver: csi.synology.com
I0104 11:51:25.025339       1 driver.go:77] Use synology: https://1.1.1.1:5001/webapi
I0104 11:51:25.026033       1 session.go:128] Logging in via https://1.1.1.1:5001/webapi/auth.cgi?account=k8s&api=SYNO.API.Auth&format=sid&method=login&passwd=xxx&session=Core&version=2
I0104 11:51:25.223472       1 driver.go:82] Failed to login: Get https://1.1.1.1:5001/webapi/auth.cgi?account=k8s&api=SYNO.API.Auth&format=sid&method=login&passwd=xxx&session=Core&version=2: x509: certificate signed by unknown authority
Failed to create driver: Get https://1.1.1.1:5001/webapi/auth.cgi?account=k8s&api=SYNO.API.Auth&format=sid&method=login&passwd=xxx&session=Core&version=2: x509: certificate signed by unknown authority

Does somebody know how to solve this?
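
One thing worth checking: update-ca-certificates updates the host's trust store, but the csi-plugin container carries its own CA bundle, so the custom CA most likely also needs to be present inside that container (or sslVerify left disabled). A rough check from inside the plugin container, assuming wget is available in the image:

# Sketch: verify TLS trust from inside the plugin container, not just on the host.
kubectl -n synology-csi exec synology-csi-attacher-0 -c csi-plugin -- \
  wget -q -O /dev/null https://1.1.1.1:5001/ \
  && echo "certificate trusted inside the container" \
  || echo "certificate still untrusted inside the container"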

Multiarch not working on arm64

1.) I found that the CSI sidecar containers from quay.io are not multi-arch. I'm building my own, so it's not a big deal at the moment.

2.) The jparklab/synology-csi:v1.0.0-kubernetes-1.18.0 image from the k8s v1.18 deployment (node.yml) is only an amd64 build. You also have a jparklab/synology-csi:v1.18.0 tag that seems to have the multi-arch builds. I edited the DaemonSet to use the 1.18.0 tag, but I'm also getting standard_init_linux.go:211: exec user process caused "exec format error" on that image. It seems like something is not quite right in the build, in addition to the deployment having the incorrect tag for multi-arch use.

Let me know if I can help out in any way on this; I'm happy to assist with debugging, rebuilding, etc.
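
On the rebuild side, a multi-arch image can be produced with docker buildx; this is a generic sketch, not the project's official build process, and the registry/tag are placeholders:

# Sketch: build and push an amd64 + arm64 image with docker buildx.
docker buildx create --name multiarch --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t my-registry.example.com/synology-csi:multiarch --push .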

Installation as Helm chart

Very nice job! I love it!!!

Do you have any plans to support installation as a Helm chart, and maybe provide an easier way (an operator, perhaps) to place the CSI binary on the hosts without manual intervention?

Authentication (2FA and administrator group)

I tried to get the CSI driver working but failed twice. I'm not sure whether this could be changed or whether it's simply the way Synology works.

  1. If 2FA is enabled and/or enforced for users, then no successful authentication is possible. I'm aware that the URL has an additional otp attribute to provide this token, but by its nature it changes every minute. Other services like FTP, SMB, etc. allow login without providing this token. Is something similar possible for the API?
    It's arguable which behaviour is better: only interactive login on DSM requiring 2FA (all other services not), versus strict enforcement for all services (which is not the case today).

  2. Is there a way to grant a user just the privileges for the API and iSCSI? I don't like the fact that a fully privileged user has to be configured. On the one hand, the password is in plain text, and everyone with access to Kubernetes also has full access to the Synology NAS. On the other hand, the full authentication request (URL + username + password) is logged in plain text, so even though read access to the secret is restricted, the password is still accessible through the logs.

I guess these limitations are driven by the feature set offered by Synology and not by the author of the driver. But maybe someone has an idea how to make it more convenient and secure.
