
sig-storage-local-static-provisioner's Introduction

Local Persistent Volume Static Provisioner


The local volume static provisioner manages PersistentVolume lifecycle for pre-allocated disks by detecting and creating PVs for each local disk on the host, and cleaning up the disks when released. It does not support dynamic provisioning.


Overview

Local persistent volumes allow users to access local storage through the standard PVC interface in a simple and portable way. The PV contains node affinity information that the system uses to schedule pods to the correct nodes.

An external static provisioner is provided here to help simplify local storage management once the local volumes are configured. Note that the local storage provisioner is different from most provisioners and does not support dynamic provisioning. Instead, it requires that administrators preconfigure the local volumes on each node; depending on the intended volumeMode:

  1. Filesystem volumeMode (default) PVs - mount them under the discovery directories.
  2. Block volumeMode PVs - create a symbolic link under the discovery directory pointing to the block device on the node.

The provisioner will manage the volumes under the discovery directories by creating and cleaning up PersistentVolumes for each volume.
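
For illustration, a minimal sketch of preparing both kinds of volumes on a node (the discovery directory /mnt/fast-disks, the device names, and the filesystem type are assumptions; use whatever your configuration defines):

    # Filesystem volumeMode: mount a formatted disk under the discovery directory
    mkdir -p /mnt/fast-disks/ssd0
    mkfs.ext4 /dev/sdb                      # only if the disk is not formatted yet
    mount -t ext4 /dev/sdb /mnt/fast-disks/ssd0

    # Block volumeMode: symlink the raw block device into the discovery directory
    ln -s /dev/sdc /mnt/fast-disks/raw0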

A caveat of scheduling a Pod on the same node as its local PV is that when the node hosting the PV is deleted, the data is likely lost, but the PV object still exists, so the system will indefinitely try to schedule the Pod to the deleted node. See our local volume node cleanup documentation, which explains how to make your workloads automatically recover from node deletion.

User Guide

Getting started

To get started with local static provisioning, you can follow our getting started guide to bring up a Kubernetes cluster with some local disks, deploy local-volume-provisioner to provision local volumes, and use a PVC in your pod to request a local PV.
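
As a quick illustration (a sketch only; fast-disks is assumed to match the example StorageClass shipped in this repo, so adjust names to your deployment), a discovered local PV is requested through an ordinary StorageClass and PVC:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: fast-disks              # assumed example class name; match your provisioner config
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-local-claim     # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: fast-disks
      resources:
        requests:
          storage: 5Gi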

Managing your local volumes

See our operations documentation, which covers preparing, setting up, and cleaning up local volumes on the nodes.

Deploying

See our Helm documentation for how to deploy and configure local-volume-provisioner in a Kubernetes cluster with Helm.

If you want to manage the provisioner with plain YAML files, you can refer to our example YAMLs. Helm-generated YAMLs are good sources of examples too. Here is a full explanation of the provisioner configuration.
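
For instance, a minimal provisioner ConfigMap might look like the sketch below (the class name, namespace, and paths are assumptions; the configuration docs list all available options):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: local-provisioner-config
      namespace: kube-system          # assumed namespace
    data:
      storageClassMap: |
        fast-disks:                   # assumed storage class name
          hostDir: /mnt/fast-disks    # directory on the host to discover volumes in
          mountDir: /mnt/fast-disks   # path at which hostDir is mounted inside the provisioner pod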

Upgrading

See our upgrading documentation for how to upgrade the provisioner version or update its configuration in a Kubernetes cluster.

FAQs

See FAQs.

Best Practices

See Best Practices.

Version Compatibility

Recommended provisioner versions with Kubernetes versions

Provisioner version   K8s version   Reason
2.6.0                 1.12+
2.5.0                 1.12+
2.4.0                 1.12+         fs on block support
2.2.0                 1.10          Beta API default, block
2.0.0                 1.8, 1.9      Mount propagation
1.0.1                 1.7

K8s Feature Status

Also see known issues and CHANGELOG.

1.14: GA

  • No new features added

1.12: Beta

  • Added support for automatically formatting a filesystem on the given block device in localVolumeSource.path

1.10: Beta

  • New PV.NodeAffinity field added.
  • Important: Alpha PV NodeAffinity annotation is deprecated. Users must manually update their PVs to use the new NodeAffinity field or run a one-time update job.
  • Alpha: Raw block support added.

1.9: Alpha

  • New StorageClass volumeBindingMode parameter that will delay PVC binding until a pod is scheduled.

1.7: Alpha

  • New local PersistentVolume source that allows specifying a directory or mount point with node affinity (see the sketch after this list).
  • A Pod using a PVC that is bound to this PV will always be scheduled to that node.
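
For reference, a sketch of a local PV using this source together with the current (1.10+) nodeAffinity field; the path, capacity, storage class, and hostname are assumptions, and the provisioner normally generates such objects automatically:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-local-pv          # hypothetical name; the provisioner generates names like local-pv-<hash>
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: fast-disks
      local:
        path: /mnt/fast-disks/ssd0
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - example-node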

Future features

  • Local block devices as a volume source, with partitioning and fs formatting
  • Dynamic provisioning for shared local persistent storage
  • Local PV health monitoring, taints and tolerations
  • Inline PV (use dedicated local disk as ephemeral storage)

E2E Tests

Running

Run ./hack/e2e.sh -h to view help.

View CI Results

Check testgrid sig-storage-local-static-provisioner dashboard.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.



sig-storage-local-static-provisioner's Issues

Support for PodSecurityPolicies

In order to install the provisioner in clusters which have a PodSecurityPolicy in effect, a policy fulfilling the configured security contexts (a privileged container and hostPath volumes) is required.
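
A sketch of a PodSecurityPolicy that would admit the provisioner pod spec shown later on this page (privileged container, /dev and discovery-directory hostPath mounts); the name and paths are assumptions:

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: local-volume-provisioner   # hypothetical name
    spec:
      privileged: true                 # the provisioner container runs privileged
      volumes:
        - hostPath
        - configMap
        - secret
      allowedHostPaths:
        - pathPrefix: /dev
        - pathPrefix: /mnt/disks       # assumed discovery directory
      runAsUser:
        rule: RunAsAny
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      hostNetwork: false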

Path "/mnt/disks/by-uuid" is not an actual mountpoint

I just ran this on a new GKE cluster (1.12.5-gke.5 without RBAC) and while it appears that the PVs are all created correctly, each pod logs out an error every 10 seconds saying that Path "/mnt/disks/by-uuid" is not an actual mountpoint. Is that something I need to be worried about causing data integrity problems?

It feels like it's probably iterating the by-uuid directory to get the disks, but not excluding the parent directory from the iteration.
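
For what it's worth, the message is logged for every entry under hostDir that is not itself a mountpoint; such entries are simply skipped rather than turned into PVs, so by itself it should not indicate a data-integrity problem (this reading is an assumption based on the log, not a maintainer answer). A quick way to see what the provisioner sees under the configured hostDir /mnt/disks:

    # Entries that are real mountpoints become PVs; anything else only produces the log message.
    for d in /mnt/disks/*; do mountpoint "$d"; done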

logs on all DaemonSet pods

common.go:316] StorageClass "local-scsi" configured with MountDir "/mnt/disks", HostDir "/mnt/disks", VolumeMode "Filesystem", FsType "", BlockCleanerCommand ["/scripts/quick_reset.sh"]
main.go:60] Loaded configuration: {StorageClassConfig:map[local-scsi:{HostDir:/mnt/disks MountDir:/mnt/disks BlockCleanerCommand:[/scripts/quick_reset.sh] VolumeMode:Filesystem FsType:}] NodeLabelsForPV:[] UseAlphaAPI:false UseJobForCleaning:false MinResyncPeriod:{Duration:5m0s} UseNodeNameOnly:true}
main.go:61] Ready to run...
common.go:378] Creating client using in-cluster config
main.go:82] Starting controller
main.go:96] Starting metrics server at :8080
controller.go:45] Initializing volume cache
cache.go:55] Added pv "local-pv-a1e255f1" to cache
cache.go:55] Added pv "local-pv-18439af4" to cache
cache.go:55] Added pv "local-pv-df57f635" to cache
cache.go:55] Added pv "local-pv-6384a96b" to cache
cache.go:55] Added pv "local-pv-1e0913d8" to cache
cache.go:55] Added pv "local-pv-709ace0e" to cache
cache.go:55] Added pv "local-pv-2d4262cf" to cache
cache.go:55] Added pv "local-pv-d216f352" to cache
controller.go:108] Controller started
discovery.go:269] Path "/mnt/disks/by-uuid" is not an actual mountpoint
discovery.go:269] Path "/mnt/disks/by-uuid" is not an actual mountpoint
...
---
(repeats every 10 seconds forever)
discovery.go:269] Path "/mnt/disks/by-uuid" is not an actual mountpoint
---

---
(repeats every 10 minutes forever)
cache.go:64] Updated pv "local-pv-a1e255f1" to cache
cache.go:64] Updated pv "local-pv-1e0913d8" to cache
cache.go:64] Updated pv "local-pv-709ace0e" to cache
cache.go:64] Updated pv "local-pv-18439af4" to cache
cache.go:64] Updated pv "local-pv-df57f635" to cache
cache.go:64] Updated pv "local-pv-2d4262cf" to cache
cache.go:64] Updated pv "local-pv-d216f352" to cache
cache.go:64] Updated pv "local-pv-6384a96b" to cache
---

Helm-generated YAML

---
# Source: provisioner/templates/provisioner.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: local-volume-provisioner
  labels:
    heritage: "Tiller"
    release: "release-name"
    chart: provisioner-2.3.0
data:
  useNodeNameOnly: "true"
  storageClassMap: |
    local-scsi:
       hostDir: /mnt/disks
       mountDir: /mnt/disks
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: local-volume-provisioner
  labels:
    app: local-volume-provisioner
    heritage: "Tiller"
    release: "release-name"
    chart: provisioner-2.3.0
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin
      nodeSelector:
        cloud.google.com/gke-local-ssd: "true"
      tolerations:
        - operator: Exists
      containers:
        - image: "quay.io/external_storage/local-volume-provisioner:v2.3.0"
          name: provisioner
          securityContext:
            privileged: true
          env:
          - name: MY_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: MY_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: JOB_CONTAINER_IMAGE
            value: "quay.io/external_storage/local-volume-provisioner:v2.3.0"
          volumeMounts:
            - mountPath: /etc/provisioner/config
              name: provisioner-config
              readOnly: true
            - mountPath: /dev
              name: provisioner-dev
            - mountPath: /mnt/disks
              name: local-scsi
              mountPropagation: "HostToContainer"
      volumes:
        - name: provisioner-config
          configMap:
            name: local-provisioner-config
        - name: provisioner-dev
          hostPath:
            path: /dev
        - name: local-scsi
          hostPath:
            path: /mnt/disks

Cut 2.3.0

  • build, tag and test container image
  • Update changelog
  • Update helm
  • git tag

/assign

local-volume-provisioner for arm

Hi,
I need to configure the local-storage-provisioner for an ARM cluster, but the quay.io/external_storage/local-volume-provisioner image does not work on ARM devices.
Is it possible to do that? And, if it is, how? Can I get the Dockerfile to change the base image, or does an image for ARM already exist? Thanks.

Publish Helm Chart to a Repo

Feature Request:

It would be nice to be able to use the Helm chart from the official incubator/stable repository of Helm - https://github.com/helm/charts.

Being able to run helm install local-static-provisioner -f myconfig.yaml is more convenient than checking out the code and running numerous helm/kubectl commands to install the local static provisioner.
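
For users, that would look roughly like the sketch below (the repository URL is a hypothetical placeholder since no published chart repo exists at the time of this issue; Helm 2 syntax to match the Tiller-era chart above):

    helm repo add provisioner https://example.com/charts    # placeholder URL
    helm install provisioner/local-static-provisioner --name local-static-provisioner -f myconfig.yaml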

kubectl get pv shows "No resources found" [help]

I followed these steps:

    mkdir
    kubectl create -f deployment/kubernetes/example/default_example_storageclass.yaml
    helm template ./helm/provisioner > deployment/kubernetes/provisioner_generated.yaml
    kubectl create -f deployment/kubernetes/provisioner_generated.yaml
    kubectl get pv

but the last command only shows:

    No resources found.
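
A likely cause (an assumption; the steps above never mount anything): the provisioner only creates PVs for entries under the discovery directory that are actual mountpoints, so a directory created with mkdir alone is not discovered. For a quick test, a directory can be bind-mounted onto itself (assuming the discovery directory from the generated config is /mnt/fast-disks; substitute whatever hostDir your config uses):

    mkdir -p /mnt/fast-disks/vol1
    mount --bind /mnt/fast-disks/vol1 /mnt/fast-disks/vol1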

Switch to standard go project layout

Why:

  • simpler
  • newbie friendly

e.g.

  • /provisioner/cmd -> /cmd/local-volume-provisioner
  • /utils/ -> cmd/utils/
  • merge /provisioner/Makefile with /Makefile
  • put some docs under docs/ (new) directory
    • /provisioner/README.md -> /docs/Provisioner.md
    • /provisioner/RELEASE.md -> /docs/RELEASE.md
    • /provisioner/TODO.md -> /TODO.md
  • /provisioner/deployment -> /deployment
  • /provisioner/pkg -> /pkg

cc @msau42 What do you think?

e2e tests fail on GKE because account lacks permission

Testgrid: https://testgrid.k8s.io/sig-storage-local-static-provisioner#master-gke-default
Example: https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-sig-storage-local-static-provisioner-master-gke-default/415
Reason: Cannot create the local-storage-provisioner-jobs-role Role object here.

Error log:

/home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/test/e2e/e2e_test.go:285
Expected error:
    <*errors.StatusError | 0xc0010401b0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "roles.rbac.authorization.k8s.io \"local-storage-provisioner-jobs-role\" is forbidden: attempt to grant extra privileges: [{[*] [batch] [jobs] [] []}] user=&{[email protected]  [system:authenticated] map[user-assertion.cloud.google.com:[AKUJVpl4/VP79aN4mEN2QMVLDAeQd9+ylFX4ocaQzWfRhRPqnBr/tP3SpvcSYE2Bm/aMNmf2pXrS7dt3YuI8/Q6vMXPg8apHgE0qGKmVUnzY9KafsDuqlVRANQiTCsYjZefCU3txWYR8hLxO1rJ90aOYyDBkACBY9BnWYNNOEpEOYvx8xlB7EWjNXo97N9LhH8vv3QvjEY6kQwsL+FVKRnttwoNCbUfBWQ+OSYNKOpCik9FE6fITqL2mprhdxcHfBuE0wyzmV/W/j9XNpAlt6ojE1s5+]]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]}] ruleResolutionErrors=[]",
            Reason: "Forbidden",
            Details: {
                Name: "local-storage-provisioner-jobs-role",
                Group: "rbac.authorization.k8s.io",
                Kind: "roles",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    roles.rbac.authorization.k8s.io "local-storage-provisioner-jobs-role" is forbidden: attempt to grant extra privileges: [{[*] [batch] [jobs] [] []}] user=&{[email protected]  [system:authenticated] map[user-assertion.cloud.google.com:[AKUJVpl4/VP79aN4mEN2QMVLDAeQd9+ylFX4ocaQzWfRhRPqnBr/tP3SpvcSYE2Bm/aMNmf2pXrS7dt3YuI8/Q6vMXPg8apHgE0qGKmVUnzY9KafsDuqlVRANQiTCsYjZefCU3txWYR8hLxO1rJ90aOYyDBkACBY9BnWYNNOEpEOYvx8xlB7EWjNXo97N9LhH8vv3QvjEY6kQwsL+FVKRnttwoNCbUfBWQ+OSYNKOpCik9FE6fITqL2mprhdxcHfBuE0wyzmV/W/j9XNpAlt6ojE1s5+]]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]}] ruleResolutionErrors=[]
not to have occurred
/home/prow/go/src/sigs.k8s.io/sig-storage-local-static-provisioner/test/e2e/e2e_test.go:540

pod directory is not created

The directory for the pod does not get created; I have to create the directory manually on the node.

FailedMount | MountVolume.SetUp failed for volume "local-pv-3a82f760" : mount failed: exit status 32 Mounting command: mount Mounting arguments: -o bind /mnt/disks/disk1 /var/lib/kubelet/pods/27153f2b-7664-11e9-95b0-000af7854ba0/volumes/kubernetes.io~local-volume/local-pv-3a82f760 Output: mount: special device /mnt/disks/disk1 does not exist

FIX

mkdir /var/lib/kubelet/pods/27153f2b-7664-11e9-95b0-000af7854ba0/volumes/kubernetes.io~local-volume/local-pv-3a82f760

Fix release process

hack/run-e2e.sh will update the Docker configuration via gcloud auth configure-docker, but DOCKER_CONFIG in the release job is for quay.io and is mounted read-only.

A deployment issue when docker --init flag is enabled

We mount /dev into the pod for device discovery. This conflicts with the docker --init flag, because Docker will mount its init binary at /dev/init.

I reported this issue to moby project (moby/moby#37645) and it is fixed in moby/moby#37665. But it takes some time for this to be fixed in docker CE/EE.

Workarounds:

There may be a better workaround, but I don't have time to investigate.

Migrate all e2e tests from k/k repo

/assign

e2e tests from k/k repo:

  • Local volume provisioner [Serial] should create and recreate local persistent volume
  • Local volume provisioner [Serial] should not create local persistent volume for filesystem volume that was not bind mounted
  • Local volume provisioner [Serial] should discover dynamically created local persistent volume mountpoint in discovery directory
  • Stress with local volume provisioner [Serial] should use be able to process many pods and reuse local volumes (done in #31)

Could not get node information

I am struggling to get this running. I'm following these instructions:

https://docs.openshift.com/container-platform/3.11/install_config/configuring_local.html

I got everything done, but when the pod attempts to start, I get this error:

F0312 02:19:33.183835 1 main.go:115] Could not get node information: nodes "okdtest.panasas.com" is forbidden: User "system:serviceaccount:default:panfs-storage-admin" cannot get nodes at the cluster scope: no RBAC policy matched

What am I missing?
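
The error indicates that the provisioner's service account lacks cluster-scoped permission to get nodes. A minimal sketch of RBAC objects that would grant it (the role and binding names are hypothetical; the service account name and namespace are taken from the error message, and in practice you should verify against the ClusterRole shipped with this repo's deployment YAMLs):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: panfs-storage-admin-node-reader   # hypothetical role name
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: panfs-storage-admin-node-reader   # hypothetical binding name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: panfs-storage-admin-node-reader
    subjects:
      - kind: ServiceAccount
        name: panfs-storage-admin
        namespace: default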

Print version at startup

Also, it could print the version when a -v flag is provided.

It helps to detect the version of the running program.

Consider support for RAID in local provisioner

For context, this issue stems from a Slack conversation with @msau42 that we wanted to capture here.

It would be awesome if the local static provisioner could support RAID'ing devices together.

Currently, if a user wants to RAID local disks together they must do so manually before presenting the provisioner with a filesystem or block device. Some ways this can currently be achieved include but are not limited to:

  1. Constructing the RAID volume at node provisioning time, then formatting it and passing it to the provisioner as an FS, or passing it as a block device (see the mdadm sketch after this list).
  2. Adding an init container to the local static provisioner that RAIDs disks together, either as block devices or after dismantling FSes (this was @msau42's idea).
  3. Using some other process to accomplish RAID'ing/formatting, and then labeling nodes that have been set up and using NodeAffinity to only run the local provisioner on those nodes.
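
As a sketch of what the manual RAID preparation in option (1) looks like today, outside the provisioner (device names, RAID level, and the discovery directory are assumptions):

    # Assemble two local NVMe devices into a RAID0 array, format it,
    # and expose it to the provisioner via the discovery directory.
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    mkfs.ext4 /dev/md0
    mkdir -p /mnt/fast-disks/md0
    mount /dev/md0 /mnt/fast-disks/md0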

Option (1) is unfortunately not suitable for managed Kubernetes platforms, where the only knobs the user has may be how many local FS/block volumes they want, not how they are formatted. The other options have the benefit of potentially being compatible with a managed platform; however, they require manual intervention from the user. Many of these goals may be accomplished with the LVM provisioner, but it sounds like that might be far enough off to warrant work in the meantime.

If it aligns with the goals of the local static provisioner, it would be really helpful if the provisioner could handle being presented with block devices and RAID'ing (and potentially formatting) them before creating the PV. To relax the constraint of requiring block devices maybe this same functionality could eventually include deconstructing FS's as well. I noticed "Local block devices as a volume source, with partitioning and fs formatting" is on the roadmap so maybe this could fit in there?

I wanted to start this issue as a place to discuss some of these issues. If we can reach consensus on how to proceed I'd be happy to help contribute.

PersistentVolume naming

To simplify administration, it would be helpful to have a friendly name embedded in the generated PV's name.

This could be accomplished by adding the mount point/device name before the checksum.

For example, the following PV:

Labels:            <none>
Annotations:       pv.kubernetes.io/provisioned-by: local-volume-provisioner-dev-b7cb269c-21f6-11e9-8d64-000c291c888b
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      hdd
Status:            Available
Claim:
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          19Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [dev]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt/disks/gitlab
Events:    <none>

would become local-pv-gitlab-27f18fc5

This shouldn't cause any problems, as the name is already used to generate the checksum - any changes in filename would already cause a change in PV name.

Cleanup Job fails

The cleanup job for block devices returns the error "/mnt/kubernetes/vol_10g_b is not a block device.", and the PV remains stuck in the "Released" state.

The error is triggered by the validateBlockDevice function in the cleanup pod (local-volume-provisioner:v2.3.0). This pod correctly mounts /mnt/kubernetes (mountPath), but does not mount /dev, hence the symlinks are invalid.

Creating and using the local volumes works, because the local-volume-provisioner pod does mount /dev.

kubectl get jobs -n kube-system cleanup-local-pv-4579e45f -o yaml
apiVersion: batch/v1
kind: Job
-- snip -- 
  name: cleanup-local-pv-4579e45f
  namespace: kube-system
-- snip -- 
      containers:
      - command:
        - /scripts/quick_reset.sh
        env:
        - name: LOCAL_PV_BLKDEVICE
          value: /mnt/kubernetes/vol_10g_b
        image: quay.io/external_storage/local-volume-provisioner:v2.3.0
        imagePullPolicy: IfNotPresent
        name: cleaner
        resources: {}
        securityContext:
          privileged: true
          procMount: Default
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /mnt/kubernetes
          name: mount-25626bfb
  -- snip -- 
      volumes:
      - hostPath:
          path: /mnt/kubernetes
          type: ""
        name: mount-25626bfb
-- snip -- 
kubectl get pods -n kube-system local-volume-provisioner-pjwsg -o yaml
apiVersion: v1
kind: Pod
-- snip -- 
  name: local-volume-provisioner-pjwsg
  namespace: kube-system
-- snip -- 
    image: quay.io/external_storage/local-volume-provisioner:v2.3.0
    imagePullPolicy: IfNotPresent
    name: provisioner
-- snip -- 
    securityContext:
      privileged: true
      procMount: Default
-- snip -- 
    volumeMounts:
    - mountPath: /etc/provisioner/config
      name: provisioner-config
      readOnly: true
    - mountPath: /dev
      name: provisioner-dev
    - mountPath: /mnt/kubernetes
      mountPropagation: HostToContainer
      name: local-disks
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: local-storage-admin-token-wwp9b
      readOnly: true
-- snip -- 
  volumes:
  - configMap:
      defaultMode: 420
      name: local-provisioner-config
    name: provisioner-config
  - hostPath:
      path: /dev
      type: ""
    name: provisioner-dev
  - hostPath:
      path: /mnt/kubernetes
      type: ""
    name: local-disks
  - name: local-storage-admin-token-wwp9b
    secret:
      defaultMode: 420
      secretName: local-storage-admin-token-wwp9b
-- snip -- 
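
Comparing the two specs above, a plausible fix (an assumption, not a confirmed patch) is for the generated cleanup Job to mirror the provisioner pod's /dev hostPath mount, roughly:

    # fragment of the cleanup Job's pod template
    containers:
    - name: cleaner
      volumeMounts:
      - mountPath: /mnt/kubernetes
        name: mount-25626bfb
      - mountPath: /dev              # added so the block-device symlinks resolve
        name: provisioner-dev
    volumes:
    - name: mount-25626bfb
      hostPath:
        path: /mnt/kubernetes
    - name: provisioner-dev
      hostPath:
        path: /dev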

Configure prow jobs

Current:

  • pull-sig-storage-local-static-provisioner
    • make
  • pull-sig-storage-local-static-provisioner-verify
    • make verify
  • pull-sig-storage-local-static-provisioner-test
    • make test

After e2e tests are done:

  • pull-sig-storage-local-static-provisioner-e2e
    • make e2e

/assign

Able to migrate volumes to another storage class, and a safer mechanism to prevent paths from being discovered again

Motivation

Be able to migrate PVs (actually the underlying volumes) to another storage class without fear that volumes may be discovered more than once.

Problem

The current PV name (generated from path + class + node) is used as a unique identifier for the path on the node. This has some drawbacks: the PV naming convention cannot be changed anymore, and users cannot change the storage class name, otherwise volumes will be discovered again.

For example, a user first creates a storage class foo to discover volumes under /mnt/disks, but later finds that foo is not a good name and wants to change it, or simply changes the configuration by accident. Volumes under /mnt/disks will then be discovered again and new PVs with the same paths will be created. Pods which expect to use different PVs under different storage classes may mount the same volume.

Solution

  • Check PV.spec.path and PV.nodeAffinity, if there is a matching PV in PV cache, skip discovering it
  • A discovery path can be configured under more than one storage class, but only one can be enabled for discovering
    • Note that all storage classes are able to be enabled for deleting

Benefits:

  • We no longer depend on the PV name, so any naming mechanism can be used in the future
  • It helps the user migrate volumes to a new storage class

Use cases

Name of storage class is changed by accident

In the beginning, alice as system admin uses the following configuration for provisioner:

    foo:
      hostDir: /mnt/disks
      mountDir: /mnt/disks

Some volumes are discovered for storage class foo. Alice changes the storage class name to bar in provisioner configuration:

    bar:
      hostDir: /mnt/disks
      mountDir: /mnt/disks

Volumes under storage class foo will not be created under storage class bar. Only new volumes added in /mnt/disks will be created.

This is safer than the current behavior.

Note that old volumes under storage class foo cannot be deleted by the provisioner in this scenario, because the current provisioner skips a volume if its storage class is not found in its configuration. (For in-process deleting, the provisioner has no way to know the mount path inside the provisioner container without the storage class configuration.) However, users can add the old configuration back to recover deletion.

Migrate volumes to new storage class

In the beginning, alice as system admin uses the following configuration for provisioner:

    foo:
      hostDir: /mnt/disks
      mountDir: /mnt/disks

Alice wants to migrate the volumes under /mnt/disks to another storage class bar; she can add a new storage class:

    foo:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
      disabledForDiscovery: true # only one storage class for same `hostDir` can be active for discovery (default)
    bar:
      hostDir: /mnt/disks
      mountDir: /mnt/disks

Provisioner now can discover new volumes from /mnt/disks under storage class bar.

When volumes under foo are released and marked persistentVolumeReclaimPolicy=Delete, they will be deleted by provisioner.

When all volumes under foo are deleted, foo storage class can be removed from provisioner configuration. Of course, this is optional.

    bar:
      hostDir: /mnt/disks
      mountDir: /mnt/disks

Prepare a new release

Decide if 2.3.1 or 2.4.0. Taking a quick glance at the commits, it was mostly release tooling and helm chart fixes that went in. cc @cofyc if there was something major I missed.

/assign

Dynamic Provisioning Implementation

Is this repo the right place to submit a PR with dynamic local volume provisioning?
I read in the documentation that dynamic provisioning is on the roadmap, but wanted to double check because the name of the repo has "static" in it which doesn't look like a good omen.

Operation/troubleshooting documentation

In addition to the getting started documentation, we need some operations docs to guide users operating the provisioner in a production cluster.
