
directpv's Introduction

DirectPV

DirectPV is a CSI driver for Direct Attached Storage. In a simpler sense, it is a distributed persistent volume manager, and not a storage system like SAN or NAS. It is useful to discover, format, mount, schedule and monitor drives across servers.

Distributed data stores such as object storage, databases and message queues are designed for direct attached storage, and they handle high availability and data durability by themselves. Running them on traditional SAN or NAS based CSI drivers (Network PV) adds yet another layer of replication/erasure coding and extra network hops in the data path. This additional layer of disaggregation results in increased complexity and poor performance.

Architecture Diagram

Quickstart

  1. Install the DirectPV Krew plugin
$ kubectl krew install directpv
  2. Install DirectPV in your Kubernetes cluster
$ kubectl directpv install
  3. Get information about the installation
$ kubectl directpv info
  4. Add drives
# Probe and save drive information to the drives.yaml file.
$ kubectl directpv discover

# Initialize selected drives.
$ kubectl directpv init drives.yaml
  5. Deploy a demo MinIO server
$ curl -sfL https://github.com/minio/directpv/raw/master/functests/minio.yaml | kubectl apply -f -
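
Once drives are initialized, workloads request storage through the DirectPV storage class. A minimal sketch, assuming the default class installed by DirectPV is named directpv-min-io (the PVC name is hypothetical; the claim typically stays Pending until a pod consuming it is scheduled):

cat <<EOF > example-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc            # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Mi
  storageClassName: directpv-min-io
EOF

kubectl apply -f example-pvc.yml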

Further information

Refer to the detailed documentation.

Join Community

DirectPV is a MinIO project. You can contact the authors on the Slack channel.

License

DirectPV is released under the GNU AGPLv3 license. Refer to the LICENSE document for a complete copy of the license.

directpv's People

Contributors

aead, balamurugana, bortek, buccella, cniackz, dependabot[bot], dlay42, gecube, gnanderson, harshavardhana, haslersn, juneezee, kaplanlior, nitisht, nontster, praveenrajmani, ravindk89, sinhaashish, sstubbs, wlan0


directpv's Issues

Tiered Storage

How would I differentiate between live and archive storage? I was thinking that I would need a separate MinIO tenant set up via the operator, which is why I asked this in #12, but perhaps I'm thinking about this the wrong way. Is there any way to achieve this via this driver and the MinIO operator?

About MinIO and the Kubernetes CSI driver

  1. Is this interface still being maintained?
  2. Must it be JBOD? Is an x86 server OK?
  3. Does it support ReadWriteMany?

Fix volume-node affinity conflict

Steps to reproduce:

Try to create volumes when direct-csi is down, and bring it back up after some time.

You will see the following log lines for the MinIO pod:

  Warning  FailedScheduling  <unknown>        0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  <unknown>        0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  <unknown>        0/4 nodes are available: 4 node(s) had volume node affinity conflict.

Possible solution:

Configure selectors for the statefulsets to match the nodeAffinity in corresponding PVs.
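
A minimal sketch of that idea, assuming the PVs pin volumes to nodes via a kubernetes.io/hostname term (the node name, labels and image below are hypothetical, and a single-replica StatefulSet is shown for brevity):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node-1            # must match the node recorded in the PV's nodeAffinity
      containers:
      - name: minio
        image: minio/minio          # illustrative image
        args: ["server", "/data"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: direct.csi.min.io
      resources:
        requests:
          storage: 10Gi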

About minio combining with K8S

4 servers, each with 4 hard disks: one disk is used to install the operating system (CentOS 7), and the remaining three disks are combined into a single logical drive with RAID 0 or LVM. The logical disk is formatted as XFS and mounted at /data. MinIO is then deployed on the 4 servers through Kubernetes, using hostPath so that MinIO can use the full capacity of each server's logical disk.

I would like to ask whether the above deployment is feasible.

If it is feasible, how can I deploy my app via the JBOD CSI driver to use the MinIO cluster, for example to create buckets or write data to buckets, like in test-app.yaml?

Does Direct CSI support the PVC access mode "ReadWriteMany"?

I am wondering whether Direct CSI supports the ReadWriteMany access mode for PVCs.

I got some errors while running the Spark example Pi job on Kubernetes.
First, I created PVCs like this:

cat <<EOF > spark-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-driver-pvc
  namespace: spark
  labels: {}
  annotations: {}
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: direct.csi.min.io
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-exec-pvc
  namespace: spark
  labels: {}
  annotations: {}
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: direct.csi.min.io
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-driver-localdir-pvc
  namespace: spark
  labels: {}
  annotations: {}
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: direct.csi.min.io
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-exec-localdir-pvc
  namespace: spark
  labels: {}
  annotations: {}
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: direct.csi.min.io
EOF

kubectl apply -f spark-pvc.yml;

and run spark pi job:

spark-submit \
--master k8s://https://10.233.0.1:443 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--driver-memory 1g \
--executor-memory 4g \
--executor-cores 2 \
--num-executors 2 \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.mount.path=/checkpoint \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.mount.subPath=checkpoint \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.mount.readOnly=false \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.options.claimName=spark-driver-pvc \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.checkpointpvc.mount.path=/checkpoint \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.checkpointpvc.mount.subPath=checkpoint \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.checkpointpvc.mount.readOnly=false \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.checkpointpvc.options.claimName=spark-exec-pvc \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.mount.path=/localdir \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.mount.readOnly=false \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.options.claimName=spark-driver-localdir-pvc \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.mount.path=/localdir \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.mount.readOnly=false \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.options.claimName=spark-exec-localdir-pvc \
--conf spark.kubernetes.container.image=mykidong/spark:v3.0.0 \
--conf spark.kubernetes.driver.container.image=mykidong/spark:v3.0.0 \
--conf spark.kubernetes.executor.container.image=mykidong/spark:v3.0.0 \
--conf spark.kubernetes.namespace=spark \
--conf spark.kubernetes.container.image.pullPolicy=IfNotPresent \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar 100;

My Spark Pi job works fine if the number of executors is set to 1 with --num-executors 1.
But with two or more executors, I got the following errors in the pod description:

[pcp@master-0 ~]$ kubectl get po -n spark
NAME                               READY   STATUS              RESTARTS   AGE
spark-pi-4d9d707399602358-driver   1/1     Running             0          21s
spark-pi-edbda97399605e6a-exec-1   1/1     Running             0          6s
spark-pi-edbda97399605e6a-exec-2   0/1     ContainerCreating   0          6s
[pcp@master-0 ~]$ kubectl describe po spark-pi-edbda97399605e6a-exec-2 -n spark
Name:           spark-pi-edbda97399605e6a-exec-2
Namespace:      spark
Priority:       0
Node:           minion-0/10.240.0.5
Start Time:     Wed, 29 Jul 2020 07:01:39 +0000
Labels:         spark-app-selector=spark-18df56262b474f8989411d72b84ba81a
                spark-exec-id=2
                spark-role=executor
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  Pod/spark-pi-4d9d707399602358-driver
Containers:
  spark-kubernetes-executor:
    Container ID:
    Image:         mykidong/spark:v3.0.0
    Image ID:
    Port:          7079/TCP
    Host Port:     0/TCP
    Args:
      executor
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  4505Mi
    Requests:
      cpu:     2
      memory:  4505Mi
    Environment:
      SPARK_USER:             pcp
      SPARK_DRIVER_URL:       spark://CoarseGrainedScheduler@spark-pi-4d9d707399602358-driver-svc.spark.svc:7078
      SPARK_EXECUTOR_CORES:   2
      SPARK_EXECUTOR_MEMORY:  4g
      SPARK_APPLICATION_ID:   spark-18df56262b474f8989411d72b84ba81a
      SPARK_CONF_DIR:         /opt/spark/conf
      SPARK_EXECUTOR_ID:      2
      SPARK_EXECUTOR_POD_IP:   (v1:status.podIP)
      SPARK_JAVA_OPT_0:       -Dspark.driver.blockManager.port=7079
      SPARK_JAVA_OPT_1:       -Dspark.driver.port=7078
      SPARK_LOCAL_DIRS:       /localdir
    Mounts:
      /checkpoint from checkpointpvc (rw,path="checkpoint")
      /localdir from spark-local-dir-localdirpvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from spark-token-5m2hb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  spark-local-dir-localdirpvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  spark-exec-localdir-pvc
    ReadOnly:   false
  checkpointpvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  spark-exec-pvc
    ReadOnly:   false
  spark-token-5m2hb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  spark-token-5m2hb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age               From               Message
  ----     ------       ----              ----               -------
  Normal   Scheduled    <unknown>         default-scheduler  Successfully assigned spark/spark-pi-edbda97399605e6a-exec-2 to minion-0
  Warning  FailedMount  2s (x6 over 18s)  kubelet, minion-0  MountVolume.SetUp failed for volume "pvc-7568f3e6-931e-4249-a9f2-1803c45a1a1b" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = mount failed: exit status 255
Mounting command: mount
Mounting arguments: -t ext4 -o bind /data/minio/data4/46a6c696-d169-11ea-8dc4-0a5f797c5014 /var/lib/kubelet/pods/4dabe5be-17d8-4d52-862e-572e2d7ac9c6/volumes/kubernetes.io~csi/pvc-7568f3e6-931e-4249-a9f2-1803c45a1a1b/mount
Output: mount: mounting /data/minio/data4/46a6c696-d169-11ea-8dc4-0a5f797c5014 on /var/lib/kubelet/pods/4dabe5be-17d8-4d52-862e-572e2d7ac9c6/volumes/kubernetes.io~csi/pvc-7568f3e6-931e-4249-a9f2-1803c45a1a1b/mount failed: No such file or directory
  Warning  FailedMount  2s (x6 over 18s)  kubelet, minion-0  MountVolume.SetUp failed for volume "pvc-0d62e28b-8473-4b8a-aa34-bc13715dd53b" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = mount failed: exit status 255
Mounting command: mount
Mounting arguments: -t ext4 -o bind /data/minio/data1/46429e3c-d169-11ea-8dc4-0a5f797c5014 /var/lib/kubelet/pods/4dabe5be-17d8-4d52-862e-572e2d7ac9c6/volumes/kubernetes.io~csi/pvc-0d62e28b-8473-4b8a-aa34-bc13715dd53b/mount
Output: mount: mounting /data/minio/data1/46429e3c-d169-11ea-8dc4-0a5f797c5014 on /var/lib/kubelet/pods/4dabe5be-17d8-4d52-862e-572e2d7ac9c6/volumes/kubernetes.io~csi/pvc-0d62e28b-8473-4b8a-aa34-bc13715dd53b/mount failed: No such file or directory

I think multiple pods, such as Spark executors, should be able to access volumes with the ReadWriteMany access mode.
Any idea about that?
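
For comparison, a sketch of a claim that sidesteps the question, on the assumption that this driver behaves like other local-drive CSI drivers and only guarantees ReadWriteOnce semantics (the PVC name is hypothetical):

cat <<EOF > spark-exec-rwo-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-exec-rwo-pvc     # hypothetical name
  namespace: spark
spec:
  accessModes:
  - ReadWriteOnce              # one node at a time, which matches a directly attached drive
  resources:
    requests:
      storage: 50Gi
  storageClassName: direct.csi.min.io
EOF

kubectl apply -f spark-exec-rwo-pvc.yml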

directcsi plugin not available in krew

kubectl krew install directcsi
Updated the local copy of plugin index.
F1203 09:34:04.832111 19397 root.go:79] plugin "directcsi" does not exist in the plugin index

Is there a different index that needs to be added to krew? If so, why is it not documented anywhere?
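
A hedged workaround sketch: the plugin is published under the name directpv (as in the quickstart above), so refreshing the index and searching for that name shows whether it is available:

$ kubectl krew update
$ kubectl krew search directpv
$ kubectl krew install directpv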

Docker daemon where direct-csi is running died

Docker daemon logs

Dec 01 07:15:52 ip-172-31-43-209 dockerd[4910]: time="2020-12-01T07:15:52.508902012Z" level=error msg="Handler for POST /v1.40/containers/5ce3637bab01922aca6774e1fd3a2aa07d5407e17a6361d47c9ffb15782ef97b/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/lib/docker/containers/86c307ce255f8f24033e778a1656a49c4e2cc99c9bf37a3392c5996b3cdf0f44/mounts/shm\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/c77f98a178c9059f65e24da6a5a20e785acd8c9f6d29229bb8e191297bf6c606/merged\\\\\\\" at \\\\\\\"/var/lib/docker/overlay2/c77f98a178c9059f65e24da6a5a20e785acd8c9f6d29229bb8e191297bf6c606/merged/dev/shm\\\\\\\" caused \\\\\\\"no space left on device\\\\\\\"\\\"\": unknown"
Dec 01 07:21:40 ip-172-31-43-209 dockerd[4910]: time="2020-12-01T07:21:40.376812630Z" level=error msg="Handler for POST /v1.40/containers/34ae010c1f782e3673168bf953c63e68b6a99ba1ecce479d231d23ed0da016e5/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/lib/docker/containers/86c307ce255f8f24033e778a1656a49c4e2cc99c9bf37a3392c5996b3cdf0f44/mounts/shm\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/3922e3bd805916558623dcdf1f25f9ffec8cb498276ac98de2180f4efa9bc3a8/merged\\\\\\\" at \\\\\\\"/var/lib/docker/overlay2/3922e3bd805916558623dcdf1f25f9ffec8cb498276ac98de2180f4efa9bc3a8/merged/dev/shm\\\\\\\" caused \\\\\\\"no space left on device\\\\\\\"\\\"\": unknown"
Dec 01 07:27:01 ip-172-31-43-209 dockerd[4910]: time="2020-12-01T07:27:01.199969198Z" level=error msg="Handler for POST /v1.40/containers/ce5c284b152ff1c6adf2ecec8cca29f6dabe50c9ff24cfb94d34feaf76503310/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/lib/docker/containers/86c307ce255f8f24033e778a1656a49c4e2cc99c9bf37a3392c5996b3cdf0f44/mounts/shm\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/afed8879ffcbf12bc7b3169794ded2bc24a4022a8dbfc76608ca82021c6c9a24/merged\\\\\\\" at \\\\\\\"/var/lib/docker/overlay2/afed8879ffcbf12bc7b3169794ded2bc24a4022a8dbfc76608ca82021c6c9a24/merged/dev/shm\\\\\\\" caused \\\\\\\"no space left on device\\\\\\\"\\\"\": unknown"

Quick googling tells us that the issue is with mounting procfs. Since we do not do that anymore, this issue should not persist. However, documenting it here in case others face this.

Add e2e tests

Run e2e tests similar to the k8s prow bot. This should be done as part of CI.

Liveness Probe Error

I'm getting this error on Kubernetes 1.19: Liveness probe errored: strconv.Atoi: parsing "healthz": invalid syntax.

Seems to be working fine though.

All pods show errors when deploying direct-csi from current master.

I deployed direct-csi from the current master via the following standalone kustomization.yaml file:

namespace: direct-csi

bases:
  - github.com/minio/direct-csi?ref=99d420a9a179b80ce784b7391bfe120ed5b9fb7b

Unfortunately all pods show errors:

$ kubectl get pod -n direct-csi
NAME                                            READY   STATUS             RESTARTS   AGE
direct-csi-controller-min-io-7f865bff65-2m6nf   1/2     CrashLoopBackOff   9          25m
direct-csi-controller-min-io-7f865bff65-2q65s   1/2     CrashLoopBackOff   9          25m
direct-csi-controller-min-io-7f865bff65-bgd7x   1/2     CrashLoopBackOff   9          25m
direct-csi-min-io-948qx                         2/3     CrashLoopBackOff   9          25m
direct-csi-min-io-glbcn                         2/3     CrashLoopBackOff   9          25m
direct-csi-min-io-mxpzv                         2/3     CrashLoopBackOff   9          25m
$ kubectl describe pod -n direct-csi direct-csi-controller-min-io-7f865bff65-2m6nf | grep Warning
  Warning  BackOff    23m (x2 over 23m)     kubelet, kube02  Back-off restarting failed container
  Warning  Unhealthy  4m6s (x593 over 23m)  kubelet, kube02  Liveness probe errored: strconv.Atoi: parsing "healthz": invalid syntax
$ kubectl logs -n direct-csi direct-csi-controller-min-io-7f865bff65-2m6nf csi-provisioner
I1127 16:36:00.550953       1 feature_gate.go:226] feature gates: &{map[Topology:true]}
I1127 16:36:00.551024       1 csi-provisioner.go:98] Version: v1.2.1-0-g971feacb
I1127 16:36:00.551043       1 csi-provisioner.go:112] Building kube configs for running in cluster...
I1127 16:36:00.562733       1 connection.go:151] Connecting to unix:///csi/csi.sock
W1127 16:36:10.562993       1 connection.go:170] Still connecting to unix:///csi/csi.sock
W1127 16:36:20.563011       1 connection.go:170] Still connecting to unix:///csi/csi.sock
W1127 16:36:30.562958       1 connection.go:170] Still connecting to unix:///csi/csi.sock
[...]
$ kubectl logs -n direct-csi direct-csi-controller-min-io-7f865bff65-2m6nf direct-csi-controller
Error: unknown flag: --controller
$ kubectl describe pod -n direct-csi direct-csi-min-io-948qx | grep Warning
  Warning  BackOff    4m52s (x99 over 24m)  kubelet, kube03  Back-off restarting failed container
$ kubectl logs -n direct-csi direct-csi-min-io-948qx node-driver-registrar
I1127 16:35:55.587635   30549 main.go:110] Version: v1.3.0-0-g6e9fff3e
I1127 16:35:55.587703   30549 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock"
I1127 16:35:55.587723   30549 connection.go:151] Connecting to unix:///csi/csi.sock
W1127 16:36:05.587958   30549 connection.go:170] Still connecting to unix:///csi/csi.sock
W1127 16:36:15.587979   30549 connection.go:170] Still connecting to unix:///csi/csi.sock
W1127 16:36:25.587946   30549 connection.go:170] Still connecting to unix:///csi/csi.sock
[...]
$ kubectl logs -n direct-csi direct-csi-min-io-948qx direct-csi
Error: unknown flag: --procfs
kubectl logs -n direct-csi direct-csi-min-io-948qx liveness-probe
I1127 16:36:08.462467   31529 connection.go:151] Connecting to unix:///csi/csi.sock
W1127 16:36:18.462759   31529 connection.go:170] Still connecting to unix:///csi/csi.sock
W1127 16:36:28.462670   31529 connection.go:170] Still connecting to unix:///csi/csi.sock
W1127 16:36:38.462772   31529 connection.go:170] Still connecting to unix:///csi/csi.sock
[...]

Better document what DIRECT CSI is for and what it does

Currently the README.md doesn't explain to me, as someone who has never heard of DIRECT CSI before, what exactly it does. Through some research I found out that it is a Kubernetes operator that

  • provides a storage class,
  • provisions a PV whenever a PVC referencing the storage class is created and
  • uses block devices directly attached to the Kubernetes node as backing storage for the created PVs.

Is this correct?

However, for me there are some open questions:

  • Can this be seen as an alternative to local-path-provisioner?
  • What are the differences, advantages and disadvantages compared to local-path-provisioner?
  • Does DIRECT CSI provide file storage or block storage, or can we even choose per PVC?
  • If DIRECT CSI supports file storage but uses block devices as backing storage, then what file system does it use internally?
  • Most important: Are the provisioned PVs so-called local volumes (i.e. pods claiming them can only be scheduled on one node)? (See the sketch after this list.)
  • Can I use a different number and size of backing block devices per Kubernetes node?
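
On the local-volume question: a PV provisioned from a directly attached drive normally carries a nodeAffinity that pins it to a single node, roughly like this sketch (all names and values below are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  csi:
    driver: direct.csi.min.io
    volumeHandle: example-volume-id    # hypothetical handle
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1                     # pods claiming this PV can only be scheduled here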

About minio combining with K8S

[structure diagram image]

I drew a simple structure diagram to express my idea more clearly.
You replied very quickly; I hadn't had time to upload this structure drawing yet.

I would like to ask whether the above deployment is feasible.

The question mark in the structure drawing marks my question.

I want to know how to use the MinIO cluster when Kubernetes deploys an app that doesn't use S3.

Investigate conversion webhook

To support type conversion for the central controller and the child controllers, implement a conversion webhook in the central controller.

Multiple Storage Classes

Currently I use OpenEBS local PVs for databases. I will be using this driver for MinIO, but I am wondering: is it possible to create multiple storage classes, and if so, how would I do that? I would rather use just this driver instead of both OpenEBS and this, to keep things simpler.
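
A minimal sketch of an additional storage class pointing at the same provisioner (the class name is hypothetical, and which parameters and binding mode the driver actually honours would need to be verified):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: direct-csi-databases       # hypothetical second class
provisioner: direct.csi.min.io     # same CSI driver as the default class
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete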

Add drive access tiers

kubectl direct-csi drives access-tier --drives *** --nodes *** Hot/Warm/Cold

This should set drive.status.accessTier to "Hot"/"Cold"/"Warm" in the drive object. Drives of a particular tier can then be chosen by setting the following parameter in the storage class:

direct.csi.min.io/access-tier: "Hot" or "Cold" or "Warm"
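
A sketch of what such a storage class could look like, assuming the parameter is honoured exactly as written above (the class name is hypothetical):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: direct-csi-hot             # hypothetical class selecting "Hot" drives
provisioner: direct.csi.min.io
parameters:
  direct.csi.min.io/access-tier: "Hot"
volumeBindingMode: WaitForFirstConsumer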

Microk8s Compatibility

Hi,

I'm trying to install this on microk8s but I get the following error:

MountVolume.SetUp failed for volume "registration-dir" : hostPath type check failed: /var/lib/kubelet/plugins_registry is not a directory

I had a similar issue with Rook using Ceph, which was fixed by changing the kubelet directory in the config. It would be great if there was a way to do this. I've installed it with:

DIRECT_CSI_DRIVES=data{1...4} DIRECT_CSI_DRIVES_DIR=/mnt kubectl apply -k github.com/minio/direct-csi
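
A hedged sketch of the same install with the kubelet directory overridden via the KUBELET_DIR_PATH variable that appears in another issue's command below (the MicroK8s path shown is an assumption; verify it on your node):

DIRECT_CSI_DRIVES=data{1...4} DIRECT_CSI_DRIVES_DIR=/mnt \
KUBELET_DIR_PATH=/var/snap/microk8s/common/var/lib/kubelet \
kubectl apply -k github.com/minio/direct-csi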

Invalid CRD spec, creation fails

Coming from minio/operator#241, I tried to deploy direct-csi in the cluster, but I get an error for the CRD when applying the config.

DIRECT_CSI_DRIVES=data{1...4} DIRECT_CSI_DRIVES_DIR=/srv/minio-pv KUBELET_DIR_PATH=/var/lib/kubelet kubectl create --dry-run=client -o yaml -k github.com/minio/direct-csi > minio-direct-csi.yaml
kubectl apply -f minio-direct-csi.yaml
namespace/direct-csi created
storageclass.storage.k8s.io/direct.csi.min.io created
serviceaccount/direct-csi-min-io created
clusterrole.rbac.authorization.k8s.io/direct-csi-min-io created
clusterrolebinding.rbac.authorization.k8s.io/direct-csi-min-io created
configmap/direct-csi-config created
secret/direct-csi-min-io created
service/direct-csi-min-io created
deployment.apps/direct-csi-controller-min-io created
daemonset.apps/direct-csi-min-io created
csidriver.storage.k8s.io/direct.csi.min.io created
The CustomResourceDefinition "storagetopologies.direct.csi.min.io" is invalid: spec.validation.openAPIV3Schema.properties[metadata]: Forbidden: must not specify anything other than name and generateName, but metadata is implicitly specified
kubectl get all
NAME                                               READY   STATUS    RESTARTS   AGE
pod/direct-csi-controller-min-io-57dbfcdb4-9jxjj   2/2     Running   0          16s
pod/direct-csi-controller-min-io-57dbfcdb4-qbf4r   2/2     Running   0          16s
pod/direct-csi-controller-min-io-57dbfcdb4-qthm2   2/2     Running   0          16s
pod/direct-csi-min-io-26t2r                        3/3     Running   0          16s
pod/direct-csi-min-io-lxc84                        3/3     Running   0          16s
pod/direct-csi-min-io-plmll                        3/3     Running   0          16s

NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/direct-csi-min-io   ClusterIP   10.43.70.228   <none>        12345/TCP   16s

NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/direct-csi-min-io   3         3         3       3            3           <none>          16s

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/direct-csi-controller-min-io   3/3     3            3           16s

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/direct-csi-controller-min-io-57dbfcdb4   3         3         3       16s

If I delete the deployment, it says that the CRD was never created, due to the original error.

kubectl delete -f minio-direct-csi.yaml 
namespace "direct-csi" deleted
storageclass.storage.k8s.io "direct.csi.min.io" deleted
serviceaccount "direct-csi-min-io" deleted
clusterrole.rbac.authorization.k8s.io "direct-csi-min-io" deleted
clusterrolebinding.rbac.authorization.k8s.io "direct-csi-min-io" deleted
configmap "direct-csi-config" deleted
secret "direct-csi-min-io" deleted
service "direct-csi-min-io" deleted
deployment.apps "direct-csi-controller-min-io" deleted
daemonset.apps "direct-csi-min-io" deleted
csidriver.storage.k8s.io "direct.csi.min.io" deleted
Error from server (NotFound): error when deleting "minio-direct-csi.yaml": customresourcedefinitions.apiextensions.k8s.io "storagetopologies.direct.csi.min.io" not found

Kubernetes 1.17.0: no matches for kind "CSIDriver" in version "storage.k8s.io/v1"

Hi,
In my 1.18.0 cluster, the deployment gives this error message. Any idea how to resolve it?

# DIRECT_CSI_DRIVES=data{1...4} DIRECT_CSI_DRIVES_DIR=/mnt kubectl apply -k github.com/minio/direct-csi
namespace/direct-csi unchanged
storageclass.storage.k8s.io/direct.csi.min.io unchanged
serviceaccount/direct-csi-min-io unchanged
clusterrole.rbac.authorization.k8s.io/direct-csi-min-io unchanged
clusterrolebinding.rbac.authorization.k8s.io/direct-csi-min-io unchanged
configmap/direct-csi-config unchanged
secret/direct-csi-min-io unchanged
service/direct-csi-min-io unchanged
deployment.apps/direct-csi-controller-min-io unchanged
daemonset.apps/direct-csi-min-io unchanged
error: unable to recognize "github.com/minio/direct-csi": # no matches for kind "CSIDriver" in version "storage.k8s.io/v1"

Thanks
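
For context, CSIDriver only reached storage.k8s.io/v1 in Kubernetes 1.18; older clusters serve it only as v1beta1, so there the object would have to be created roughly like this sketch (field values are assumptions):

apiVersion: storage.k8s.io/v1beta1   # v1 is served only on Kubernetes >= 1.18
kind: CSIDriver
metadata:
  name: direct.csi.min.io
spec:
  attachRequired: false              # assumed: directly attached volumes need no attach step
  podInfoOnMount: true               # assumed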

Installation of new-master branch

Hi,

I've tried quite a few different ways to get the master branch to work with multiple storage classes, but I'm having no luck. How would I install the new-master branch? It doesn't look like it has kustomize files in it.
