A Container Storage Interface Driver for Synology NAS
make
# e.g. docker build -t jparklab/synology-csi .
docker build -t <repo>[:<tag>] .
Build a Docker image using Ubuntu Stretch as the base image.
# e.g. docker build -f Dockerfile.ubuntu -t jparklab/synology-csi .
docker build -f Dockerfile.ubuntu -t <repo>[:<tag>] .
Here we use gocsi to test the driver.
You need to create a config file that contains the information needed to connect to the Synology NAS API. See "Create a config file" below.
# You can specify any name for nodeid
$ go run cmd/syno-csi-plugin/main.go \
--nodeid CSINode \
--endpoint tcp://127.0.0.1:10000 \
--synology-config syno-config.yml
$ csc identity plugin-info -e tcp://127.0.0.1:10000
$ csc controller create-volume \
--req-bytes 2147483648 \
-e tcp://127.0.0.1:10000 \
test-volume
"8.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="8"
The first column in the output is the volume ID.
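Judging from the sample output, the volume ID appears to combine the targetID and mappingIndex ("8" and "1" yielding "8.1"); this is an observation from the output above, not a documented format. If a script needs the parts separately, a shell sketch:

```shell
# Split a csc volume ID of the apparent form <targetID>.<mappingIndex>
VOLUME_ID="8.1"
TARGET_ID="${VOLUME_ID%%.*}"      # text before the first dot
MAPPING_INDEX="${VOLUME_ID##*.}"  # text after the last dot
echo "targetID=$TARGET_ID mappingIndex=$MAPPING_INDEX"
```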
$ csc controller list-volumes -e tcp://127.0.0.1:10000
"8.1" 2147483648 "iqn"="iqn.2000-01.com.synology:kube-csi-test-volume" "mappingIndex"="1" "targetID"="8"
# e.g.
# csc controller delete-volume -e tcp://127.0.0.1:10000 8.1
$ csc controller delete-volume -e tcp://127.0.0.1:10000 <volume id>
For Kubernetes v1.12 and v1.13, feature gates need to be enabled to use CSI drivers. Follow the instructions at https://kubernetes-csi.github.io/docs/csi-driver-object.html and https://kubernetes-csi.github.io/docs/csi-node-object.html to set up your Kubernetes cluster.
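On those versions the relevant gates are CSIDriverRegistry and CSINodeInfo. A sketch of enabling them (exact flag placement depends on how your cluster components are launched; see the linked docs):

```shell
# Enable the CSI feature gates on both the API server and the kubelet
kube-apiserver --feature-gates=CSIDriverRegistry=true,CSINodeInfo=true ...
kubelet --feature-gates=CSIDriverRegistry=true,CSINodeInfo=true ...
```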
---
# syno-config.yml file
host: <hostname> # ip address or hostname of the Synology NAS
port: 5000 # change this if you use a port other than the default one
username: <login> # username
password: <password> # password
sessionName: Core # You won't need to touch this value
sslVerify: false # set this to true to use https
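Before wiring the file into the driver, a quick sanity check that the required keys are present can save a failed start. This is just a grep sketch, not part of the project, and the key list is taken from the example above:

```shell
# Check that syno-config.yml defines the expected top-level keys (sketch)
CONFIG=syno-config.yml
for key in host port username password; do
    grep -q "^${key}:" "$CONFIG" || echo "missing key: $key"
done
```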
kubectl create secret generic synology-config --from-file=syno-config.yml
kubectl apply -f deploy/kubernetes/v1.15
(v1.12 has also been tested; v1.13 has not been tested)
NOTE:
synology-csi-attacher and synology-csi-provisioner need to run on the same node (this has not been fully verified).
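One way to satisfy the same-node expectation, assuming the attacher pod carries a label such as app: synology-csi-attacher (the label name here is hypothetical; match it to your manifests), is pod affinity on the provisioner's pod spec:

```yaml
# Sketch: schedule the provisioner onto the node running the attacher.
# The label selector below is an assumption, not taken from the manifests.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: synology-csi-attacher
        topologyKey: kubernetes.io/hostname
```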