
moosefs-csi's Introduction

Container Storage Interface (CSI) driver for MooseFS

The Container Storage Interface is an industry standard that enables storage vendors to develop a plugin once and have it work across a number of container orchestration systems.

MooseFS is a petabyte-scale, open-source distributed file system. It aims to be a fault-tolerant, highly available, high-performance, scalable, general-purpose network distributed file system for data centers.

MooseFS source code can be found on GitHub.


Note that a pool of MooseFS Clients available for use by containers is created on each node. By default, the number of MooseFS Clients in the pool is 1.

Installation on Kubernetes

Prerequisites

  • MooseFS Cluster up and running

  • --allow-privileged=true flag set for both API server and kubelet (default value for kubelet is true)
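    On kubeadm-based clusters the API server flag is typically set in its static pod manifest; a minimal sketch (the path and flag below are standard Kubernetes defaults, the rest of the manifest is omitted):

    # /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
    spec:
      containers:
        - command:
            - kube-apiserver
            - --allow-privileged=true
            # ...other flags...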

Deployment

  1. Complete deploy/kubernetes/csi-moosefs-config.yaml configuration file with your settings:

    • master_host – domain name (recommended) or IP address of your MooseFS Master Server(s). It is the equivalent of -H master_host or -o mfsmaster=master_host passed to the MooseFS Client.
    • master_port – port number of your MooseFS Master Server. It is the equivalent of -P master_port or -o mfsport=master_port passed to the MooseFS Client.
    • k8s_root_dir – each mount's root directory on MooseFS. Each path is relative to this one. Equivalent to -S k8s_root_dir or -o mfssubfolder=k8s_root_dir passed to the MooseFS Client.
    • driver_working_dir – the driver's working directory inside MooseFS where persistent volumes, logs and metadata are stored (the actual path is: k8s_root_dir/driver_working_dir)
    • mount_count – number of pre-created MooseFS Clients running on each node
    • mfs_logging – the driver can create logs from each component in the k8s_root_dir/driver_working_dir/logs directory. Boolean "true"/"false" value.

    Then apply the configuration:

    $ kubectl apply -f deploy/kubernetes/csi-moosefs-config.yaml
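    For reference, a minimal sketch of what the completed ConfigMap might look like (the key names follow the parameters above; the host, directories and values are illustrative assumptions, not defaults):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: csi-moosefs-config
      namespace: kube-system
    data:
      master_host: "mfsmaster.example.com"   # assumption: your Master Server address
      master_port: "9421"
      k8s_root_dir: "/kubernetes"            # assumption: root directory for this cluster
      driver_working_dir: "csi"              # assumption: driver working directory
      mount_count: "2"
      mfs_logging: "true"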
    
  2. ConfigMap should now be created:

    $ kubectl get configmap -n kube-system
    NAME                                 DATA   AGE
    csi-moosefs-config                   6      42s
    
  3. Update the deploy/kubernetes/csi-moosefs.yaml file with the image that matches the required MooseFS or MooseFS Pro version and the MooseFS CSI Plugin version. The default images use the latest version of the plugin and the latest version of MooseFS (Community):

    • Find plugin named csi-moosefs-plugin

    • Update the image version suffix in the plugin's section accordingly, for example:

      • 0.9.4-3.0.117 – for plugin version 0.9.4 and MooseFS Community 3.0.117
      • 0.9.4-4.44.4-pro – for plugin version 0.9.4 and MooseFS Pro 4.44.4

      You can find a complete list of available images at:
      https://registry.moosefs.com/v2/moosefs-csi-plugin/tags/list.

      Note that there are two occurrences of csi-moosefs-plugin in the csi-moosefs.yaml file; it is necessary to update the image version in both places.
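      For illustration, the relevant line in each csi-moosefs-plugin container section would look roughly like this (the registry host and repository name follow the tags URL above; the tag and surrounding fields are assumptions about your chosen versions):

      - name: csi-moosefs-plugin
        image: registry.moosefs.com/moosefs-csi-plugin:0.9.4-3.0.117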

  4. Deploy CSI MooseFS plugin along with CSI Sidecar Containers:

    $ kubectl apply -f deploy/kubernetes/csi-moosefs.yaml
    
  5. Ensure that all the containers are ready, up and running

    kube@k-master:~$ kubectl get pods -n kube-system | grep csi-moosefs
    csi-moosefs-controller-0                   4/4     Running   0          44m
    csi-moosefs-node-7h4pj                     2/2     Running   0          44m
    csi-moosefs-node-8n5hj                     2/2     Running   0          44m
    csi-moosefs-node-n4prg                     2/2     Running   0          44m
    

    You should see a single csi-moosefs-controller-x running and one csi-moosefs-node-xxxxx per node.

    You may also take a look at your MooseFS CGI Monitoring Interface ("Mounts" tab) to check if new Clients are connected – mount points: /mnt/controller and /mnt/${nodeId}[_${mountId}].

Verification

  1. Create a persistent volume claim for 5 GiB:

    $ kubectl apply -f examples/kubernetes/dynamic-provisioning/pvc.yaml
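    A minimal sketch of such a claim, assuming the values shown in the next step (name my-moosefs-pvc, storage class moosefs-storage, 5 GiB, ReadWriteMany); the file shipped in the repository is authoritative:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: my-moosefs-pvc
    spec:
      storageClassName: moosefs-storage
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi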
    
  2. Verify that the persistent volume claim exists and wait until its STATUS is Bound:

    $ kubectl get pvc
    NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
    my-moosefs-pvc   Bound    pvc-a62451d4-0d75-4f81-bfb3-8402c59bfc25   5Gi        RWX            moosefs-storage   69m
    
  3. Once it is in the Bound state, create a sample workload that mounts the volume:

    $ kubectl apply -f examples/kubernetes/dynamic-provisioning/pod.yaml
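    A minimal sketch of such a workload, assuming the pod name and mount path seen in the next step (my-moosefs-pod, /data); the image, command and volume name are illustrative and the repository's example may differ:

    kind: Pod
    apiVersion: v1
    metadata:
      name: my-moosefs-pod
    spec:
      containers:
        - name: my-app                        # assumption: any container works for this check
          image: busybox
          command: [ "sleep", "1000000" ]
          volumeMounts:
            - name: my-volume
              mountPath: "/data"
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-moosefs-pvc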
    
  4. Verify the storage is mounted:

    $ kubectl exec my-moosefs-pod -- df -h /data
    Filesystem                Size      Used Available Use% Mounted on
    172.17.2.80:9421          4.2T      1.4T      2.8T  33% /data
    

    You may take a look at the MooseFS CGI Monitoring Interface ("Quotas" tab) to check if a quota for 5 GiB has been set on the newly created volume directory. Dynamically provisioned volumes are stored on MooseFS in the k8s_root_dir/driver_working_dir/volumes directory.

  5. Clean up:

    $ kubectl delete -f examples/kubernetes/dynamic-provisioning/pod.yaml
    $ kubectl delete -f examples/kubernetes/dynamic-provisioning/pvc.yaml
    

More examples and capabilities

Volume Expansion

Volume expansion can be done by updating and applying the corresponding PVC specification.

Note: the volume size can only be increased. Any attempt to decrease it will result in an error. It is not recommended to resize the MooseFS quotas allocated for Persistent Volumes via MooseFS native tools, as such changes will not be visible in your Container Orchestrator.
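For example, assuming the my-moosefs-pvc claim from the verification steps above and an illustrative new size, the expansion can be triggered by patching the requested storage:

$ kubectl patch pvc my-moosefs-pvc --type merge \
    -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'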

Static provisioning

Volumes can also be provisioned statically by creating or using an existing directory in k8s_root_dir/driver_working_dir/volumes. The example PersistentVolume definition in examples/kubernetes/static-provisioning/pv.yaml requires an existing volume in the volumes directory.
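A minimal sketch of such a PersistentVolume, assuming the volumeHandle names an existing directory under k8s_root_dir/driver_working_dir/volumes (the definition in examples/kubernetes/static-provisioning/pv.yaml is authoritative):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: my-moosefs-pv-static          # assumption: any unique PV name
spec:
  storageClassName: ""                # empty Storage Class
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: csi.moosefs.com
    volumeHandle: my-static-volume    # assumption: name of an existing directory in .../volumes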

Mount MooseFS inside containers

It is possible to mount any MooseFS directory inside containers using static provisioning.

  1. Create a Persistent Volume (examples/kubernetes/mount-volume/pv.yaml):

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: my-moosefs-pv-mount
    spec:
      storageClassName: ""               # empty Storage Class
      capacity:
        storage: 1Gi                     # required, however does not have any effect
      accessModes:
        - ReadWriteMany
      csi:
        driver: csi.moosefs.com
        volumeHandle: my-mount-volume   # unique volume name
        volumeAttributes:
          mfsSubDir: "/"                 # subdirectory to be mounted as a rootdir (inside k8s_root_dir)
    
  2. Create corresponding Persistent Volume Claim (examples/kubernetes/mount-volume/pvc.yaml):

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: my-moosefs-pvc-mount
    spec:
      storageClassName: ""               # empty Storage Class
      volumeName: my-moosefs-pv-mount
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi                   # at least as much as in PV, does not have any effect
    
  3. Apply both configurations:

    $ kubectl apply -f examples/kubernetes/mount-volume/pv.yaml
    $ kubectl apply -f examples/kubernetes/mount-volume/pvc.yaml
    
  4. Verify that PVC exists and wait until it is bound to the previously created PV:

    $ kubectl get pvc
    NAME                    STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    my-moosefs-pvc-mount    Bound     my-moosefs-pv-mount   1Gi        RWX                           23m
    
  5. Create a sample workload that mounts the volume:

    $ kubectl apply -f examples/kubernetes/mount-volume/pod.yaml
    
  6. Verify that the storage is mounted:

    $ kubectl exec -it my-moosefs-pod-mount -- ls /data
    

You should see the content of k8s_root_dir/mfsSubDir.

  7. Clean up:

    $ kubectl delete -f examples/kubernetes/mount-volume/pod.yaml
    $ kubectl delete -f examples/kubernetes/mount-volume/pvc.yaml
    $ kubectl delete -f examples/kubernetes/mount-volume/pv.yaml
    

By using the containers[*].volumeMounts[*].subPath field of the PodSpec, it is possible to mount a specific MooseFS subdirectory using only one PV/PVC pair, without creating a new pair for each subdirectory:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-site-app
spec:
  selector:
    matchLabels:
      app: my-site-app
  template:
    metadata:
      labels:
        app: my-site-app
    spec:
      containers:
        - name: my-frontend
          # ...
          volumeMounts:
            - name: my-moosefs-mount
              mountPath: "/var/www/my-site/assets/images"
              subPath: "resources/my-site/images"
            - name: my-moosefs-mount
              mountPath: "/var/www/my-site/assets/css"
              subPath: "resources/my-site/css"
      volumes:
        - name: my-moosefs-mount
          persistentVolumeClaim:
            claimName: my-moosefs-pvc-mount

Version Compatibility

Kubernetes   MooseFS CSI Driver
v1.26.2      v0.9.4
v1.24.2      v0.9.4

Copyright

Copyright (c) 2020-2023 Saglabs SA

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and limitations under the License.

License

Apache License, Version 2.0

Code of conduct

Participation in this project is governed by the Kubernetes/CNCF code of conduct.

moosefs-csi's People

Contributors

karolmajek, maniankara, pkonopelko, xandrus


moosefs-csi's Issues

sourceMountPath or subPath(Expr) for volumes

Hi,

I have an issue integrating moosefs-csi into a StatefulSet. I need a way to determine the source path to mount. I already know I can set a subPath under volumeMounts, such as:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: moosefs-volume
        subPath: subdir  # <--------- here
      command: [ "sleep", "1000000" ]
  volumes:
    - name: moosefs-volume
      persistentVolumeClaim:
        claimName: moosefs-csi-pvc

but I need to do this under Volume:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: moosefs-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: moosefs-volume
      persistentVolumeClaim:
        claimName: moosefs-csi-pvc
      subPath: subdir # <--------- here

Even if it looks the same, it is fundamentally different. I need to be able to choose which directory in the source I want to mount, with the ability to add a POD_NAME in the case of a StatefulSet.
My use case is OpenLDAP, which persists its data in /var/lib/openldap. I cannot mount /var/lib/openldap/$(POD_NAME), or I would need to modify the OpenLDAP configuration to refer to a dynamic path.

Example of what I want for my use case:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: moosefs-volume
      command: [ "sleep", "1000000" ]
      env:
        - name: POD_NAME # <--------- here
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
  volumes:
    - name: moosefs-volume
      persistentVolumeClaim:
        claimName: moosefs-csi-pvc
      subPathExpr: $(POD_NAME) # <--------- here

Tag

Hi,

The last commit was over a year ago. Can we expect a tag? Thank you!

Ubuntu14.04 Support?

Kubernetes version: 1.15.3.
Unable to deploy on an Ubuntu 14.04 system?

Error: failed to start container "csi-moosefs-plugin": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused "rootfs_linux.go:58: mounting \"/data/k8s/docker/data/containers/6f49623a0d7dec934bbed2c40546e89fd27eb1ce8dddcfe4cb8c02629230b1e2/mounts/shm\" to rootfs \"/data/k8s/docker/data/overlay2/dedbeb9ad481e91e40c43c18b39bf97e641926c3cec8ac9c42620e02391f7a86/merged\" at \"/dev/shm\" caused \"SecureJoin: too many links\""": unknown

Subnet hardcoded: rpc error: code = Unknown desc = InvalidParameterException: Error retrieving subnet information

Bug from @karolmajek:

Subnet is hardcoded. After looking into the code there are a few occurrences.
I get the following error:
rpc error: code = Unknown desc = InvalidParameterException: Error retrieving subnet information: com.amazonaws.services.ec2m.model.AmazonEC2Exception: The subnet ID 'subnet-xyz' does not exist (Service: AmazonEC2; Status Code: 400; Error Code: InvalidSubnetID.NotFound

k3s support?

Is there any reason why this shouldn't work on k3s?

I think it's just a problem with the Kubernetes version and the version of the APIs MooseFS is trying to use, but I might be wrong.

I'm not sure, however, how I'd convert the .yml to work with v1.

kubectl apply -f moosefs-csi-ep.yaml -n kube-system
customresourcedefinition.apiextensions.k8s.io/csinodeinfos.csi.storage.k8s.io created
storageclass.storage.k8s.io/moosefs-block-storage unchanged
serviceaccount/csi-moosefs-controller-sa created
serviceaccount/csi-moosefs-node-sa created
clusterrole.rbac.authorization.k8s.io/csi-moosefs-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-moosefs-provisioner-binding created
clusterrole.rbac.authorization.k8s.io/csi-moosefs-attacher-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-moosefs-attacher-binding created
clusterrole.rbac.authorization.k8s.io/csi-moosefs-snapshotter-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-moosefs-snapshotter-binding created
clusterrole.rbac.authorization.k8s.io/csi-moosefs-driver-registrar-controller-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-moosefs-driver-registrar-controller-binding created
clusterrole.rbac.authorization.k8s.io/csi-moosefs-driver-registrar-node-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-moosefs-driver-registrar-node-binding created
unable to recognize "moosefs-csi-ep.yaml": no matches for kind "StatefulSet" in version "apps/v1beta1"
unable to recognize "moosefs-csi-ep.yaml": no matches for kind "DaemonSet" in version "apps/v1beta2"

root@k3s:~/moosefs-csi/deploy/kubernetes# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3+k3s1", GitCommit:"5b17a175ce333dfb98cb8391afeb1f34219d9275", GitTreeState:"clean", BuildDate:"2020-02-27T07:28:53Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3+k3s1", GitCommit:"5b17a175ce333dfb98cb8391afeb1f34219d9275", GitTreeState:"clean", BuildDate:"2020-02-27T07:28:53Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Kubernetes v1.20.2

Deployment fails with StatefulSet and DaemonSet with the following errors:
unable to recognize "deploy/kubernetes/moosefs-csi-ep.yaml": no matches for kind "StatefulSet" in version "apps/v1beta1"
unable to recognize "deploy/kubernetes/moosefs-csi-ep.yaml": no matches for kind "DaemonSet" in version "apps/v1beta2"

I changed DaemonSet to apps/v1 and that seems to resolve the error message for DaemonSet but no DaemonSet is created when I use "kubectl get daemonsets -A"

But StatefulSet complains with "error: error validating "deploy/kubernetes/moosefs-csi-ep.yaml": error validating data: ValidationError(StatefulSet.spec): missing required field "selector" in io.k8s.api.apps.v1.StatefulSetSpec; if you choose to ignore these errors, turn validation off with --validate=false"

Does anyone know what the update needs to be for DaemonSet and StatefulSet to get this working in Kubernetes v1.20.2 with containerd?
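A minimal sketch of the manifest changes those errors point to (the label value app: csi-moosefs is an illustrative assumption; the same apiVersion bump applies to the DaemonSet):

kind: StatefulSet
apiVersion: apps/v1              # was apps/v1beta1
spec:
  selector:                      # required field in apps/v1
    matchLabels:
      app: csi-moosefs           # assumption: must match the pod template labels
  template:
    metadata:
      labels:
        app: csi-moosefs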

mountpoint err when restart csi node-plugin

Starting the FUSE process using systemd is one way to solve the problem.

This is a systemd example; the CSI node-plugin should create it during NodeStageVolume:

[root@k8s1 test]# cat  /run/systemd/transient/mfs-fuse-pvc-5be4a515-13a1-4c81-b3ac-b93713bc04de.service
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Unit]
Description=mfs fuse mount of pvc-5be4a515-13a1-4c81-b3ac-b93713bc04de

[Service]
ExecStart=
ExecStart="mount" "-f" "-t" "moosefs" "10.109.148.254:9421:/liuchang-test" "/var/lib/kubelet/plugins/kubernetes.io/csi/liuchang-test.csi.moosefs.com/e7673584d954b10a7e08aea91ae532734ebc4300f3f2a621c7e907d03025b0ac/globalmount"
ExecStopPost=
ExecStopPost="/bin/umount" "-f" "-l" "/var/lib/kubelet/plugins/kubernetes.io/csi/liuchang-test.csi.moosefs.com/e7673584d954b10a7e08aea91ae532734ebc4300f3f2a621c7e907d03025b0ac/globalmount"

[Unit]
CollectMode=inactive-or-failed
[root@k8s1 test]#

I have fixed this in my environment; here is a test that recreates the node-plugin while the mountpoint stays healthy:

[root@k8s1 demo]# kubectl  get  pod  -n mfs   -owide
NAME                                     READY   STATUS    RESTARTS   AGE     IP             NODE                              NOMINATED NODE   READINESS GATES
csi-moosefs-controller-liuchang-test-0   4/4     Running   0          20m     10.55.140.59   k8s1  <none>           <none>
csi-moosefs-node-liuchang-test-894jr     2/2     Running   0          20m     10.55.140.60   k8s2 <none>           <none>
csi-moosefs-node-liuchang-test-965c9     2/2     Running   0          20m     10.55.140.61   k8s3 <none>           <none>
csi-moosefs-node-liuchang-test-pq96r     2/2     Running   0          9m37s   10.55.140.59   k8s1  <none>           <none>
[root@k8s1 demo]# kubectl  delete  pod  -n mfs  csi-moosefs-node-liuchang-test-pq96r
pod "csi-moosefs-node-liuchang-test-pq96r" deleted
[root@k8s1 demo]#
[root@k8s1 demo]#
[root@k8s1 demo]# systemctl  status  mfs-fuse-pvc-79be2cc4-411e-47ed-947f-dbd2150b40e6
● mfs-fuse-pvc-79be2cc4-411e-47ed-947f-dbd2150b40e6.service - mfs fuse mount of pvc-79be2cc4-411e-47ed-947f-dbd2150b40e6
     Loaded: loaded (/run/systemd/transient/mfs-fuse-pvc-79be2cc4-411e-47ed-947f-dbd2150b40e6.service; transient)
  Transient: yes
     Active: active (running) since Tue 2023-12-26 14:40:18 CST; 1min 56s ago
   Main PID: 203854 (mount)
      Tasks: 17 (limit: 153589)
     Memory: 263.6M
        CPU: 534ms
     CGroup: /system.slice/mfs-fuse-pvc-79be2cc4-411e-47ed-947f-dbd2150b40e6.service
             ├─203854 mount -f -t moosefs 10.109.148.254:9421:/liuchang-test /var/lib/kubelet/plugins/kubernetes.io/csi/liuchang-test.csi.moosefs.com/73d5febbce3f883aac9afb4f4311ff27dafc29eb61b4fd9c24a1a68327d59a9f/globalmount
             └─203855 "mfsmount (mounted on: /data1/kubelet/plugins/kubernetes.io/csi/liuchang-test.csi.moosefs.com/73d5febbce3f883aac9afb4f4311ff27dafc29eb61b4fd9c24a1a68327d59a9f/globalmount)"

Dec 26 14:40:18 k8s1 mount[203855]: mfsmount[203855]: out of memory killer disabled
Dec 26 14:40:18 k8s1 mount[203855]: mfsmount[203855]: monotonic clock function: clock_gettime
Dec 26 14:40:18 k8s1 mfsmount[203855]: setting glibc malloc arena max to 4
Dec 26 14:40:18 k8s1 mfsmount[203855]: setting glibc malloc arena test to 4
Dec 26 14:40:18 k8s1 mfsmount[203855]: out of memory killer disabled
Dec 26 14:40:18 k8s1 mfsmount[203855]: monotonic clock function: clock_gettime
Dec 26 14:40:18 k8s1 mount[203855]: mfsmount[203855]: monotonic clock speed: 331293 ops / 10 mili seconds
Dec 26 14:40:18 k8s1 mfsmount[203855]: monotonic clock speed: 331293 ops / 10 mili seconds
Dec 26 14:40:19 k8s1 mount[203855]: mfsmount[203855]: my st_dev: 2097256
Dec 26 14:40:19 k8s1 mfsmount[203855]: my st_dev: 2097256
[root@k8s1 demo]#
[root@k8s1 demo]# kubectl  exec -it   mfs-subpath-6db68d88d4-6z8vw   -- df  -h
Filesystem                Size      Used Available Use% Mounted on
overlay                 198.9G     33.6G    165.3G  17% /
tmpfs                    64.0M         0     64.0M   0% /dev
mfs#10.109.148.254:9421
                         10.0G         0     10.0G   0% /data
/dev/sda2               198.9G     33.6G    165.3G  17% /etc/hosts
/dev/sda2               198.9G     33.6G    165.3G  17% /dev/termination-log
/dev/sda2               198.9G     33.6G    165.3G  17% /etc/hostname
/dev/sda2               198.9G     33.6G    165.3G  17% /etc/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                    18.6G     12.0K     18.6G   0% /run/secrets/kubernetes.io/serviceaccount
tmpfs                    11.7G         0     11.7G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    11.7G         0     11.7G   0% /proc/scsi
tmpfs                    11.7G         0     11.7G   0% /sys/firmware

fail umount when delete deployment

After utilizing existing on-premise MooseFS storage for your Kubernetes cluster:
kubectl create -f deployment.yaml creates a deployment.
cat /proc/mounts then shows some new mounts:

+/var/lib/kubelet/pods/743c5dca-43d5-11e9-811f-1866daf4cfd0/volumes/kubernetes.io~secret/default-token-lzx7d
+/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5662bfb2-43d5-11e9-811f-1866daf4cfd0/globalmount
+/var/lib/kubelet/pods/743c5dca-43d5-11e9-811f-1866daf4cfd0/volumes/kubernetes.io~csi/pvc-5662bfb2-43d5-11e9-811f-1866daf4cfd0/mount
+/var/lib/docker/overlay2/3bedd9d4263fa4a4d09f1790e513a52054a434d37261c85b19c969c08bb98c40/merged
+/var/lib/docker/containers/c1f010eb59c9f05e09aba6c5d07710279474beeed8ffec7b6f70d8ec5d12597c/shm
+/var/lib/docker/overlay2/fbadb2bd74d171298ff95fb8252a8aa3cb1a5c2c41638472cac61dc0a8d3d861/merged
+/var/lib/docker/overlay2/eb25ab4ed13ff23788a02c7e1b908656d069f84621ab9ec185963fb90a825f4f/merged

kubectl delete -f deployment.yaml deletes the deployment.
cat /proc/mounts shows that the MooseFS mounts cannot be unmounted:

-/var/lib/kubelet/pods/743c5dca-43d5-11e9-811f-1866daf4cfd0/volumes/kubernetes.io~secret/default-token-lzx7d
-/var/lib/docker/overlay2/3bedd9d4263fa4a4d09f1790e513a52054a434d37261c85b19c969c08bb98c40/merged
-/var/lib/docker/containers/c1f010eb59c9f05e09aba6c5d07710279474beeed8ffec7b6f70d8ec5d12597c/shm
-/var/lib/docker/overlay2/fbadb2bd74d171298ff95fb8252a8aa3cb1a5c2c41638472cac61dc0a8d3d861/merged
-/var/lib/docker/overlay2/eb25ab4ed13ff23788a02c7e1b908656d069f84621ab9ec185963fb90a825f4f/merged

moosefs-csi doesn't work on armv7/armhf

Hi, I have MooseFS installed on the armv7/armhf nodes and my main k3s cluster is consuming it without any issues.
After validating overall performance/stability, I've also thought about adding the MFS CSI driver to my armv7/armhf k3s cluster; however, each of the pods' containers fails with a single error in its log.
For example, for the controller:

provisioner:
exec /csi-provisioner: exec format error
attacher:
exec /csi-attacher: exec format error
resizer:
exec /csi-resizer: exec format error
csi-moosefs-plugin:
exec /bin/moosefs-csi-plugin: exec format error

The same goes for the node pods.
It looks like those images aren't built for armhf/armv7 but are tagged as if they were, or am I missing something?

Server Version: version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2+k3s1", GitCommit:"6330a5b49cfe3030af9c26ec29471e1d5c288cdd", GitTreeState:"clean", BuildDate:"2023-09-20T23:41:19Z", GoVersion:"go1.20.8 X:nounified", Compiler:"gc", Platform:"linux/arm"}

-o mfspassword in the CSI

mfsmount allows the user to specify the following option to mount to a MooseFS master server that requires password authentication to view a specific directory.

-o mfspassword=PASSWORD

This is useful in a production environment where certain directories on the Master server need to be restricted from the clients. I am just wondering whether such a feature exists in moosefs-csi?
