
csi-digitalocean's Introduction

csi-digitalocean

A Container Storage Interface (CSI) Driver for DigitalOcean Block Storage. The CSI plugin allows you to use DigitalOcean Block Storage with your preferred Container Orchestrator.

The DigitalOcean CSI plugin is mostly tested on Kubernetes. In theory it should also work on other Container Orchestrators, such as Mesos or Cloud Foundry. Feel free to test it on other COs and give us feedback.

Releases

The DigitalOcean CSI plugin follows semantic versioning. The version will be bumped following the rules below:

  • Bug fixes will be released as a PATCH update.
  • New features (such as CSI spec bumps with no breaking changes) will be released as a MINOR update.
  • Significant breaking changes will be released as a MAJOR update.

Features

Below is a list of functionality implemented by the plugin. In general, CSI features implementing an aspect of the specification are available on any DigitalOcean Kubernetes version for which beta support for the feature is provided.

See also the project examples for use cases.

Volume Expansion

Volumes can be expanded by updating the storage request value of the corresponding PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
  namespace: default
spec:
  [...]
  resources:
    requests:
      # The field below can be increased.
      storage: 10Gi
      [...]

After successful expansion, the status section of the PVC object will reflect the actual volume capacity.
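
For example, assuming the PVC above, the expansion can be triggered and verified from the command line (the target size is illustrative):

$ kubectl patch pvc csi-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
# Prints the expanded size once the resize has completed:
$ kubectl get pvc csi-pvc -o jsonpath='{.status.capacity.storage}'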

Important notes:

  • Volumes can only be increased in size, not decreased; attempts to do so will lead to an error.
  • Requesting an expansion smaller than the volume's current size has no effect. The PVC object status section will continue to reflect the actual volume capacity.
  • Resizing volumes by means other than the PVC object (e.g., through the DigitalOcean cloud control panel) is not recommended as this can potentially cause conflicts. Additionally, such size updates will not be reflected in the PVC object status section immediately; the section will eventually show the actual volume capacity.

Raw Block Volume

Volumes can be used in raw block device mode by setting the volumeMode on the corresponding PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
  namespace: default
spec:
  [...]
  volumeMode: Block
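
A Pod can then consume the claim as a raw device through volumeDevices instead of volumeMounts. This is a minimal sketch; the Pod name and device path are illustrative:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-block-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      command: [ "sleep", "1000000" ]
      volumeDevices:
        # The raw device is exposed inside the container at this path.
        - name: my-do-volume
          devicePath: /dev/xvda
  volumes:
    - name: my-do-volume
      persistentVolumeClaim:
        claimName: csi-pvc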

Important notes:

  • If using the volume expansion functionality, only expansion of the underlying persistent volume is guaranteed. We do not guarantee that the filesystem will be expanded automatically if you have formatted the device.

Volume Snapshots

Snapshots can be created and restored through VolumeSnapshot objects.

Note:

Version 1 of the CSI driver supports v1alpha1 Volume Snapshots only.

Versions 2 and 3 of the CSI driver support v1beta1 Volume Snapshots only.

Version 4 and later of the CSI driver support v1 Volume Snapshots only, which is backwards compatible with v1beta1. However, version 3 renders unusable any snapshots that had previously been marked as invalid. See the csi-snapshotter documentation on the validating webhook and the v1beta1 to v1 upgrade notes.


See also the example.
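
For illustration, a snapshot of the csi-pvc claim and a PVC restored from it could look like the sketch below. This assumes the v1 snapshot API and a VolumeSnapshotClass named do-block-storage; check your cluster for the actual class name:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: csi-pvc-snapshot
spec:
  volumeSnapshotClassName: do-block-storage
  source:
    persistentVolumeClaimName: csi-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-restored
spec:
  storageClassName: do-block-storage
  dataSource:
    name: csi-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi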

Volume Statistics

Volume statistics are exposed through the CSI-conformant endpoints. Monitoring systems such as Prometheus can scrape metrics and provide insights into volume usage.
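
As a quick check that per-volume statistics are being reported, the kubelet metrics endpoint can be queried through the API server proxy (the node name is a placeholder); the kubelet_volume_stats_* series are standard kubelet metrics, not specific to this driver:

$ kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics | grep kubelet_volume_stats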

Volume Transfer

Volumes can be transferred across clusters. The exact steps are outlined in our example.

Installing to Kubernetes

Kubernetes Compatibility

The following table describes the required DigitalOcean CSI driver version per supported Kubernetes release.

Kubernetes Release   DigitalOcean CSI Driver Version
1.19                 v3
1.20                 v3
1.21                 v3
1.22                 v4
1.23                 v4.2.0+
1.24                 v4.3.0+
1.25                 v4.4.0+
1.26                 v4.5.0+
1.27                 v4.6.0+
1.28                 v4.7.0+
1.29                 v4.8.0+
1.30                 v4.9.0+

Note:

The DigitalOcean Kubernetes product comes with the CSI driver pre-installed and no further steps are required.


Driver modes

By default, the driver supports both the controller and the node mode. It can manage DigitalOcean Volumes via the cloud API and mount them on the required node. The mode actually used is determined by how the driver is deployed and configured. The suggested release manifests provide separate deployments for the controller and node modes, respectively.

When running outside of DigitalOcean droplets, the driver can only function in controller mode. This requires setting the --region flag to a valid DigitalOcean region slug in addition to the other flags.

The --region flag must not be set when running the driver on DigitalOcean droplets.

Alternatively, the driver can be run in node-only mode on DigitalOcean droplets. In this mode it only handles node-related requests, such as mounting volumes. The driver runs in node-only mode when the --token flag is not provided.

Skip secret creation (step 1 in the deployment instructions below) when using node-only mode, as the API token is not required.

Mode                                         --token flag    --region flag
Controller and Node mode (in DigitalOcean)   required        must not be set
Controller only mode (not in DigitalOcean)   required        required
Node only mode (in DigitalOcean)             not provided    must not be set

Requirements

  • --allow-privileged flag must be set to true for the API server
  • --allow-privileged flag must be set to true for the kubelet in Kubernetes 1.14 and below (flag does not exist in later releases)
  • --feature-gates=KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true feature gate flags must be set to true for both the API server and the kubelet
  • Mount Propagation needs to be enabled. If you use Docker, the Docker daemon of the cluster nodes must allow shared mounts.

1. Create a secret with your DigitalOcean API Access Token

Replace the placeholder string starting with a05... with your own secret and save it as secret.yml:

apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system
stringData:
  access-token: "a05dd2f26b9b9ac2asdas__REPLACE_ME____123cb5d1ec17513e06da"

and create the secret using kubectl:

$ kubectl create -f ./secret.yml
secret "digitalocean" created

You should now see the digitalocean secret in the kube-system namespace along with other secrets:

$ kubectl -n kube-system get secrets
NAME                  TYPE                                  DATA      AGE
default-token-jskxx   kubernetes.io/service-account-token   3         18h
digitalocean          Opaque                                1         18h
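
Alternatively, the same secret can be created directly from the token value without writing a manifest file (equivalent to the YAML above, assuming the token is exported as DIGITALOCEAN_ACCESS_TOKEN):

$ kubectl -n kube-system create secret generic digitalocean \
    --from-literal=access-token=$DIGITALOCEAN_ACCESS_TOKEN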

2. Provide authentication data for the snapshot validation webhook

Snapshots are validated through a ValidatingWebhookConfiguration which requires proper CA, certificate, and key data. The manifests in snapshot-validation-webhook.yaml should provide sufficient scaffolding to inject the data accordingly. However, the details on how to create and manage them are up to the user and depend on the exact environment the webhook runs in. See the XXX-marked comments in the manifests file for user-required injection points.

The official snapshot webhook example offers a non-production-ready solution suitable for testing. For full production readiness, something like cert-manager can be leveraged.
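
As an illustration of the cert-manager route, a self-signed setup could look like the sketch below. The resource names, namespace, and DNS name are hypothetical and must match however you deploy the webhook Service; cert-manager's CA injector can then populate the webhook's caBundle via the cert-manager.io/inject-ca-from annotation on the ValidatingWebhookConfiguration:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: snapshot-validation-selfsigned
  namespace: kube-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: snapshot-validation-cert
  namespace: kube-system
spec:
  # Certificate and key are written to this secret, which the webhook
  # deployment is expected to mount.
  secretName: snapshot-validation-tls
  dnsNames:
    - snapshot-validation.kube-system.svc
  issuerRef:
    name: snapshot-validation-selfsigned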

3. Deploy the CSI plugin and sidecars

Always use the latest release compatible with your Kubernetes release (see the compatibility information).

The releases directory holds manifests for all plugin releases. You can deploy a specific version by executing the command

# Do *not* add a blank space after -f
kubectl apply -fhttps://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-vX.Y.Z/{crds.yaml,driver.yaml,snapshot-controller.yaml}

where vX.Y.Z is the plugin target version. (Note that for releases older than v2.0.0, the driver was contained in a single YAML file. If you'd like to deploy an older release you need to use kubectl apply -fhttps://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-vX.Y.Z.yaml)

If you see any issues during the installation, it could be because the newly created CRDs have not been established yet. If that happens, invoke kubectl apply -f on the same file again to create the missing resources.

The above does not include the snapshot validating webhook which needs extra configuration as outlined above. You may append ,snapshot-validation-webhook.yaml to the {...} list if you want to install a (presumably configured) webhook as well.
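
For example, to deploy driver v4.9.0 together with a configured webhook (the version is illustrative; pick the release matching your Kubernetes version):

# Do *not* add a blank space after -f
kubectl apply -fhttps://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v4.9.0/{crds.yaml,driver.yaml,snapshot-controller.yaml,snapshot-validation-webhook.yaml}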

4. Test and verify

Create a PersistentVolumeClaim. This makes sure a volume is created and provisioned on your behalf:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage

Check that a new PersistentVolume is created based on your claim:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS       REASON    AGE
pvc-0879b207-9558-11e8-b6b4-5218f75c62b9   5Gi        RWO            Delete           Bound     default/csi-pvc   do-block-storage             3m

The above output means that the CSI plugin successfully created (provisioned) a new volume on your behalf. You should be able to see this newly created volume under the Volumes tab in the DigitalOcean UI.

The volume is not attached to any node yet. It will only be attached to a node once a workload (i.e., a Pod) using it is scheduled to that node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted to the specified container:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-do-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-do-volume
      persistentVolumeClaim:
        claimName: csi-pvc

Check if the pod is running successfully:

kubectl describe pods/my-csi-app

Write inside the app container:

$ kubectl exec -ti my-csi-app /bin/sh
/ # touch /data/hello-world
/ # exit
$ kubectl exec -ti my-csi-app /bin/sh
/ # ls /data
hello-world

Upgrading

When upgrading to a new Kubernetes minor version, you should upgrade the CSI driver to match. See the table above for which driver version is used with each Kubernetes version.

Special consideration is necessary when upgrading from Kubernetes 1.11 or earlier, which uses CSI driver version 0.2 or earlier. In these early releases, the driver name was com.digitalocean.csi.dobs, while in all subsequent releases it is dobs.csi.digitalocean.com. When upgrading, use the command-line flag --driver-name to force the new driver to use the old name. Failing to do so will cause any existing PVs to be unusable since the new driver will not manage them and the old driver is no longer running.
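
As a sketch of what this looks like in practice, the flag is added to the plugin container's arguments in the deployed manifests (the container name and image tag are illustrative; only the relevant excerpt is shown):

containers:
  - name: csi-do-plugin
    image: digitalocean/do-csi-plugin:vX.Y.Z
    args:
      # ... keep the flags already present in your manifest, and add:
      - "--driver-name=com.digitalocean.csi.dobs"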

Configuration

Default volumes paging size

Some CSI driver operations require paging through the volumes returned from the DO Volumes API. By default, the page size is not defined and causes the DO API to choose a value as specified in the API reference. In the vast majority of cases, this should work fine. However, for accounts with a very large number of volumes, the API server-chosen default page size may be too small to return all volumes within the configured (sidecar-provided) timeout.

For that reason, the default page size can be customized by passing a positive number to the --default-volumes-page-size flag.


Notes:

  1. The user is responsible for selecting a value below the maximum limit mandated by the DO API. Please see the API reference link above to see the current limit.
  2. The configured sidecar timeout values may need to be aligned with the chosen page size. In particular, csi-attacher invokes ListVolumes to periodically synchronize the API and cluster-local volume states; as such, its timeout must be large enough to account for the expected number of volumes in the given account and region.
  3. The default page size does not become effective if an explicit page size (more precisely, max entries in CSI spec speak) is passed to a given gRPC method.

API rate limiting

DO API usage is subject to certain rate limits. In order to protect against running out of quota for extremely heavy regular usage or pathological cases (e.g., bugs or API thrashing due to an interfering third-party controller), a custom rate limit can be configured via the --do-api-rate-limit flag. It accepts a float value, e.g., --do-api-rate-limit=3.5 to restrict API usage to 3.5 queries per second.
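
Both this flag and the paging flag above are passed as arguments to the plugin container in the controller manifest. A minimal excerpt (values, container name, and image tag are illustrative):

containers:
  - name: csi-do-plugin
    image: digitalocean/do-csi-plugin:vX.Y.Z
    args:
      # ... existing flags ...
      - "--default-volumes-page-size=200"
      - "--do-api-rate-limit=3.5"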

Flags

Name                    Description                                                                               Default
--validate-attachment   Validate that the attachment has fully completed before formatting/mounting the device   false

The --validate-attachment option adds an additional validation that checks the content of the /sys/class/block/<device name>/device/state file for the running status. Enabling this flag prevents a race condition where a DOBS volume that is not yet fully attached can be misinterpreted by the CSI implementation, causing a forced format of the volume and resulting in data loss.


Development

Requirements:

  • Go at the version specified in .github/workflows/test.yaml
  • Docker (for building via the Makefile, post-unit testing, and publishing)

Dependencies are managed via Go modules.

PRs from the code-hosting repository are automatically unit- and end-to-end-tested in our CI (implemented by Github Actions). See the .github/workflows directory for details.

For every green build of the master branch, the container image digitalocean/do-csi-plugin:master is updated and pushed at the end of the CI run. This makes it easy to test the latest commit.

Steps to run the tests manually are outlined below.

Unit Tests

To execute the unit tests locally, run:

make test

End-to-End Tests

If you do not have write permissions to digitalocean/do-csi-plugin on Docker Hub or are worried about conflicting usage of that tag, you can also publish under a different (presumably personal) organization:

DOCKER_REPO=johndoe/do-csi-plugin VERSION=latest-feature make publish

This would yield the published container image johndoe/do-csi-plugin:latest-feature.

Assuming you have your DO API token assigned to the DIGITALOCEAN_ACCESS_TOKEN environment variable, you can then spin up a DOKS cluster on-the-fly and execute the upstream end-to-end tests for a given set of Kubernetes versions like this:

make test-e2e E2E_ARGS="-driver-image johndoe/do-csi-plugin:latest-feature 1.30 1.29 1.28"

See our documentation for an overview on how the end-to-end tests work as well as usage instructions.

Integration Tests

There is a set of custom integration tests which are mostly useful for Kubernetes pre-1.14 installations as these are not covered by the upstream end-to-end tests.

To run the integration tests on a DOKS cluster, follow the instructions.

Prepare CSI driver for a new Kubernetes minor version

  1. Review recently merged PRs and any in-progress / planned work to ensure any bugs scheduled for the release have been fixed and merged.
  2. Bump kubernetes dependency versions
    1. If needed, update the deploy/kubernetes/releases/csi-digitalocean-dev images to their latest stable version.
  3. Support running e2e on new $MAJOR.$MINOR
    1. Since we only support three minor versions at a time, e2e tests for the oldest supported version can be removed.
  4. Verify e2e tests pass - see here about running tests locally
  5. Prepare for release
  6. If necessary, update the Go version to the latest stable Go binary
    1. Update Dockerfile
    2. Update go.mod
  7. Perform release

See e2e test README on how to run conformance tests locally.

Updating the Kubernetes dependencies

Run

make NEW_KUBERNETES_VERSION=X.Y.Z update-k8s

to update the Kubernetes dependencies to version X.Y.Z.

Note: Make sure to also add support to the e2e tests for the new Kubernetes version, following these instructions.

Releasing

Releases may happen either for the latest minor version of the CSI driver maintained in the master branch, or an older minor version still maintained in one of the release-* branches. In this section, we will call that branch the release branch.

To release a new version vX.Y.Z, first check out the release branch and bump the version:

make NEW_VERSION=vX.Y.Z bump-version

This will create the set of files specific to a new release. Make sure everything looks good; in particular, ensure that the change log is up-to-date and is not missing any important, user-facing changes.

Create a new branch with all changes:

git checkout -b prepare-release-vX.Y.Z
git add .
git commit -m "prepare release vX.Y.Z"
git push origin prepare-release-vX.Y.Z

After it is merged to the release branch, wait for the release branch build to go green. (This will entail another run of the entire test suite.)

Finally, check out the release branch again, tag the release, and push it:

git checkout <release branch>
git pull
git tag vX.Y.Z
git push origin vX.Y.Z

(This works for non-master release branches as well since the checkout Github Action we use defaults to checking out the ref/SHA that triggered the workflow.)

The CI will publish the container image digitalocean/do-csi-plugin:vX.Y.Z and create a Github Release under the name vX.Y.Z automatically. Nothing else needs to be done.

Contributing

At DigitalOcean we value and love our community! If you have any issues or would like to contribute, feel free to open an issue or PR.

csi-digitalocean's Issues

DO client plugin uses private network, unable to reach DO API

What did you do?

Created a cluster in DO with private networking between the nodes, using private IP as apiserver advertising address, then followed the instructions in the README (created secret, applied yaml, created csi-pvc).

After that, the PVC exists, but the PV doesn't get created, and the claim remains in the Pending state indefinitely.

What did you expect to happen?

PV gets created.

Configuration (MUST fill this out):

  • system logs:

cluster-info dump

  • manifests: directly from the readme, namely: created a secret, then applied csi config, then tried to create a csi-pvc with 5Gi.

  • CSI Version: 0.2.0

  • Kubernetes Version: 1.11.3

  • Cloud provider/framework version, if applicable (such as Rancher): DigitalOcean, installed k8s via latest kubeadm.

Additional logs

From the logs of csi-do-plugin it looks to me that it cannot reach the DO API because it tries to do so over the private network.

Unable to mount a volume in a pod

What did you do?

Just followed the instructions from the CSI-DO readme file in order to add a volume to a pod.

What did you expect to happen?

I should have a pod with the running status and a working mounted volume.
Instead, my pod is in the ContainerCreating status.

Note: The Volume has been correctly created and I can see it in the DO back-office.

Configuration:

  • system logs:

https://gist.github.com/did/39fc1726017e4b7f04343a48fe02b149

  • manifests, such as pvc, deployments, etc.. you used to reproduce:

Exactly the same files used in the README.

  • CSI Version:

v0.2.0 (https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.2.0.yaml)

  • Kubernetes Version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-28T15:18:13Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider/framework version, if applicable (such as Rancher):

DigitalOcean without Rancher.

Important: I'm using the new Kubernetes cluster (in BETA) from DigitalOcean.

Thanks for your help!

Attachment Issues

What did you do? (required. The issue will be closed when not provided.)

Had problems attaching and detaching volumes for pods, along with some odd 500 errors, during what we later figured out was an API outage for DigitalOcean. We were deleting pods with PVCs and creating new pods with PVCs, and receiving problems with almost all actions. We were also left in a state where the CSI resources seemed to be trying to delete resources that do not exist.

Some of this was due to getting rate limited once these requests started to pile up.

We tried restarting the controller pods and even deleting the whole CSI system and re-installing it, but the problems persisted.

What did you expect to happen?

Not sure, I think that the API issues caused some problems with this software and I'm not sure that these problems are really preventable. I am mostly hoping this information can be useful.

Configuration (MUST fill this out):

  • system logs:

https://gist.github.com/ZachGlassman/380978ff77b2cf7fdb2560afc954aa1a

  • CSI Version: 0.2.0

  • Kubernetes Version: 1.11.2

  • Cloud provider/framework version, if applicable (such as Rancher):
    Plain Kubernetes with kubeadm

Panic when deleting

Hey, I already have 180 restarts on my pod csi-provisioner-doplugin-0.

Here are the logs:

time="2018-05-26T16:01:21Z" level=info msg="get plugin info called" method=get_plugin_info node_id=94784XXX region=fra1 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"0.1.0\" "
time="2018-05-26T16:01:21Z" level=info msg="delete volume called" method=delete_volume node_id=94784XXX region=fra1 volume_id=252ec68a-4664-11e8-9dc3-0242ac110XXX
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7c8e35]

goroutine 32 [running]:
github.com/digitalocean/csi-digitalocean/driver.(*Driver).DeleteVolume(0xc420094900, 0x94f040, 0xc4201b24e0, 0xc420170760, 0xc420094900, 0x0, 0x0)
	/Users/fatih/go/src/github.com/digitalocean/csi-digitalocean/driver/controller.go:152 +0x335
github.com/digitalocean/csi-digitalocean/vendor/github.com/container-storage-interface/spec/lib/go/csi/v0._Controller_DeleteVolume_Handler.func1(0x94f040, 0xc4201b24e0, 0x88a220, 0xc420170760, 0x7fbaabe886c8, 0x0, 0x8a7ac0, 0xc420170740)
	/Users/fatih/go/src/github.com/digitalocean/csi-digitalocean/vendor/github.com/container-storage-interface/spec/lib/go/csi/v0/csi.pb.go:2148 +0x86
github.com/digitalocean/csi-digitalocean/driver.(*Driver).Run.func1(0x94f040, 0xc4201b24e0, 0x88a220, 0xc420170760, 0xc420170780, 0xc4201707a0, 0x853ca0, 0xbe0bc8, 0x8d0260, 0xc4204112c0)
	/Users/fatih/go/src/github.com/digitalocean/csi-digitalocean/driver/driver.go:129 +0x78
github.com/digitalocean/csi-digitalocean/vendor/github.com/container-storage-interface/spec/lib/go/csi/v0._Controller_DeleteVolume_Handler(0x8cfbe0, 0xc420094900, 0x94f040, 0xc4201b24e0, 0xc4201822a0, 0xc4200771c0, 0x0, 0x0, 0x267d01a97ab537a3, 0x1bfdf1598e0ef9d6)
	/Users/fatih/go/src/github.com/digitalocean/csi-digitalocean/vendor/github.com/container-storage-interface/spec/lib/go/csi/v0/csi.pb.go:2150 +0x167
github.com/digitalocean/csi-digitalocean/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc4200d5600, 0x9507a0, 0xc420001200, 0xc4204112c0, 0xc4201b26f0, 0xbb92f8, 0x0, 0x0, 0x0)
	/Users/fatih/go/src/github.com/digitalocean/csi-digitalocean/vendor/google.golang.org/grpc/server.go:923 +0x889
github.com/digitalocean/csi-digitalocean/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4200d5600, 0x9507a0, 0xc420001200, 0xc4204112c0, 0x0)
	/Users/fatih/go/src/github.com/digitalocean/csi-digitalocean/vendor/google.golang.org/grpc/server.go:1148 +0x1318
github.com/digitalocean/csi-digitalocean/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc42002a040, 0xc4200d5600, 0x9507a0, 0xc420001200, 0xc4204112c0)
	/Users/fatih/go/src/github.com/digitalocean/csi-digitalocean/vendor/google.golang.org/grpc/server.go:637 +0x9f
created by github.com/digitalocean/csi-digitalocean/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
	/Users/fatih/go/src/github.com/digitalocean/csi-digitalocean/vendor/google.golang.org/grpc/server.go:635 +0xa1

I can send you the node_id or volume_id via Slack, in case you need them. I don't think so.

StatefulSet volume_claim_template does not provision volume

We have a setup where we have 3 different namespaces : backend, frontend, data

The data namespace contains everything in regards to replicated mongo, percona, ... and the way these are replicated is through stateful sets with volume claims.

When trying to use a volume claim from this namespace the csi does not respond to any claims.

I checked and the csi is perfectly running (in the default namespace according to the yml spec provided here).

However when trying to provision from a volume claim in a stateful set it doesn't seem to provision anything :/

I get provisioning failed unable to authenticate

I have a valid token, added as a secret per the docs. When I kubectl describe pvc/csi-pvc, I get the error message StorageClass "do-block-storage": rpc error: code = Internal desc = GET https://api.digitalocean.com/v2/volumes?name=pvc-xxx&region=sfo2: 401 Unable to authenticate you. doctl work with this token without issue.

Attachment sometimes fails with `GRPC error: <nil>`

Sometimes attach fails sporadically, possibly when creating multiple services with volumes

I0422 14:27:29.258025       1 connection.go:235] GRPC call: /csi.v0.Controller/ControllerPublishVolume
I0422 14:27:29.258038       1 connection.go:236] GRPC request: volume_id:"fake-id" node_id:"88877937" volume_capability:<mount:<fs_t
ype:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_attributes:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"fake-8081-com.digital
ocean.csi.dobs" >
I0422 14:27:32.762527       1 connection.go:238] GRPC response:
I0422 14:27:32.762575       1 connection.go:239] GRPC error: <nil>

Stuck on "waiting for a volume to be created"

Overview

I am trying to follow the instructions on the README.md, but am getting a pod has unbound PersistentVolumeClaims warning on the Pod, and a waiting for a volume to be created, either by external provisioner "com.digitalocean.csi.dobs" or manually created by system administrator message when I inspect the PVC.

Details

Environment

$ uname -a
Linux 4.15.0-23-generic #25-Ubuntu SMP Wed May 23 18:02:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

$ minikube version
minikube version: v0.27.0

Steps to Reproduce

I went to https://cloud.digitalocean.com/settings/api/tokens and clicked on the "Generate New Token" button to generate the token.

screenshot from 2018-06-20 23-07-19

Then I reset everything.

$ sudo kubeadm reset

And started my local cluster with Minikube, passing in the --allow-privileged flag.

$ sudo -E minikube start --vm-driver=none --extra-config=apiserver.allow-privileged=true --extra-config=kubelet.allow-privileged=true

Then, I created the Secret Object using the token I got.

$ kubectl  get secrets
NAME                  TYPE                                  DATA      AGE
default-token-h9xtj   kubernetes.io/service-account-token   3         1h

$ kubectl -n kube-system get secrets
NAME                 TYPE         DATA      AGE
...
digitalocean         Opaque       1         37m
...

And more details:

$ kubectl get secrets/digitalocean -n kube-system -o json
{
    "apiVersion": "v1",
    "data": {
        "access-token": "xyz"
    },
    "kind": "Secret",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Secret\",\"metadata\":{\"annotations\":{},\"name\":\"digitalocean\",\"namespace\":\"kube-system\"},\"stringData\":{\"access-token\":\"abc\"}}\n"
        },
        "creationTimestamp": "2018-06-20T21:32:54Z",
        "name": "digitalocean",
        "namespace": "kube-system",
        "resourceVersion": "3471",
        "selfLink": "/api/v1/namespaces/kube-system/secrets/digitalocean",
        "uid": "7decac6e-74d1-11e8-a79b-54e1ad13e25a"
    },
    "type": "Opaque"
}

Then, I installed the CSI plugin, using a variation of the .yaml where the default storage class annotation is removed

annotations:
    storageclass.kubernetes.io/is-default-class: "true"

Then, I applied exactly the same PVC as in the README.md

$ kubectl apply -f manifests/elasticsearch/test-pvc.yaml
persistentvolumeclaim "csi-pvc" created

But when I inspect the PVC, the message says waiting for a volume to be created

$ kubectl describe pvc
Name:          csi-pvc
Namespace:     default
StorageClass:  do-block-storage
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"csi-pvc","namespace":"default"},"spec":{"accessModes":["ReadWrit...
               volume.beta.kubernetes.io/storage-provisioner=com.digitalocean.csi.dobs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ----              ----                         -------
  Normal  ExternalProvisioning  9s (x3 over 39s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "com.digitalocean.csi.dobs" or manually created by system administrator

And if I try to run the Pod, it gives me a pod has unbound PersistentVolumeClaims warning.

$ kubectl describe pod my-csi-app
Name:         my-csi-app
Namespace:    default
Node:         <none>
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-csi-app","namespace":"default"},"spec":{"containers":[{"command":["sleep","1000...
Status:       Pending
IP:
Containers:
  my-frontend:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      1000000
    Environment:  <none>
    Mounts:
      /data from my-do-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h9xtj (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  my-do-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  csi-pvc
    ReadOnly:   false
  default-token-h9xtj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-h9xtj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  1m (x57 over 16m)  default-scheduler  pod has unbound PersistentVolumeClaims

I have tried using the kube-system namespace for the PVC, but the results are the same.

Thanks in advance for any help you are able to provide.

PersistentVolumeClaim stuck at "pending" state

What did you do?

I've initialized my cluster with kubeadm init --pod-network-cidr=192.168.0.0/16 and followed the instructions provided in the README, stopping after the creation of the pvc.

What did you expect to happen?

For the PV to be created.

Configuration:

My PVC configuration:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  labels:
    app: cramkle
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  • CSI Version: v0.1.4

  • Kubernetes Version: v1.11.2

Intermittent MountVolume.MountDevice errors on Pod creation

I've set up a Rancher 2.0 cluster on DO to test CSI-DO. At the first attempt I followed the README.md and succeeded with the example app. However, after trying to do my own stuff I started consistently getting Pod creation errors. To rule out the unknowns I wiped all my stuff, returned to the example app, and confirmed that I'm consistently getting this result:

Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Warning  FailedScheduling        21s (x4 over 22s)  default-scheduler        pod has unbound PersistentVolumeClaims
  Normal   Scheduled               19s                default-scheduler        Successfully assigned my-csi-app to node-1
  Normal   SuccessfulMountVolume   19s                kubelet, node-1          MountVolume.SetUp succeeded for volume "default-token-jw8hg"
  Normal   SuccessfulAttachVolume  15s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-4b1897fb-5f66-11e8-8f45-c2743f7bbff2"
  Warning  FailedMount             5s (x4 over 10s)   kubelet, node-1          MountVolume.MountDevice failed for volume "pvc-4b1897fb-5f66-11e8-8f45-c2743f7bbff2" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-4b1897fb-5f66-11e8-8f45-c2743f7bbff2' output: "mke2fs 1.43.7 (16-Oct-2017)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-4b1897fb-5f66-11e8-8f45-c2743f7bbff2 does not exist and no size was specified.\n"

However, after waiting a couple of hours the problem was gone. I wonder if it was a Block Storage degradation I just happened to witness, or if this is something related to CSI-DO. Unfortunately I've wiped out that cluster, and after setting up a new one the example app deploys just fine. I will provide any additional info you might need if I witness this problem again.

Resizing Persistent Volumes

What did you do?

I created a persistent volume claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    field.cattle.io/creatorId: u-xxx
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: dobs.csi.digitalocean.com
  creationTimestamp: 2018-11-19T17:11:59Z
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    cattle.io/creator: norman
  name: mypvc
  namespace: mynamespace
  resourceVersion: "xxx"
  selfLink: /api/v1/namespaces/mynamespace/persistentvolumeclaims/mypvc
  uid: xxx
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 8Gi
  storageClassName: do-block-storage
  volumeMode: Filesystem
  volumeName: pvc-xxx
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  phase: Bound

I tried to resize the persistent volume claim as described here.

spec:
  ...
  resources:
    requests:
      storage: 12Gi
  ...

But I got the following error:

> kubectl edit pvc mypvc -n mynamespace
error: persistentvolumeclaims "mypvc" could not be patched: persistentvolumeclaims "mypvc" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize

What did you expect to happen?

I wanted to expand the existing volume.

Configuration

  • CSI Version:

    • quay.io/k8scsi/csi-provisioner:v0.4.1
    • digitalocean/do-csi-plugin:v0.3.1
    • quay.io/k8scsi/csi-attacher:v0.4.1
  • Kubernetes Version: v1.12.1

  • Cloud provider/framework version, if applicable (such as Rancher):

    • cluster created via DigitalOcean Kubernetes Beta
    • managed via another Rancher 2.1.1 instance (imported cluster)

Feature Request : Allow created resources to have a name prefix

At the moment when for example a volume is provisioned through CSI it gets assigned a name that is bound to be unique.

Such as pvc-123456789ewqioequwq

This approach works beautifully for a purely machine-based operator. But since there is no way to prefix a resource name it leads to confusion when looking at the digitalocean volumes list, especially when the volumes that are provisioned have different use cases.

As such it would/could be beneficial to allow for a prefix to be set ( a prefix, not an override ).

For example: let's say a PVC creates a 10Gb volume for a MySQL backup and one for a Mongo backup; the resources created by CSI would be something like

  • pvc-ewqio4234eew423qquwq
  • pvc-jsaldjwui4233oe4234234

Which would be understandable for an automated system, but is not understandable to a human operator just looking at the digitalocean volume list either through doctl or the web interface.

If a prefix is allowed, it could become something like:

  • staging-mysql-pvc-ewqio4234eew423qquwq
  • staging-mongo-pvc-jsaldjwui4233oe4234234

I have opened a similar feature request on CCM that I might do a PR for. If possible, the implementation should align with how the CCM implementation works (if it's a desirable feature request, of course) so that developers using both the CCM and CSI can easily understand it.

Issue on CCM : digitalocean/digitalocean-cloud-controller-manager#102

AttachVolume.Attach failed for volume: node has no NodeID annotation

What did you do?

  • Installed csi-digitalocean-v0.2.0 on Kubernetes 1.11.2
  • Created PersistentVolumeClaim
  • Added volume to a Pod

What did you expect to happen?

  • Volume is mounted into Pod

What happened

  • DO volume is created

  • v1/PersistentVolumeClaim is bound

    my-volume1 Bound pvc-xxxx 5Gi RWO do-block-storage 48m

  • Pod does not start

    AttachVolume.Attach failed for volume "pvc-xxxx" : node "kubernetes-node-ng-1" has no NodeID annotation
    Unable to mount volumes for pod "db-xxxx/xxx-xx_mynamespace(xxxx-xxxx-xxxx-xxx-xxxx)": timeout expired waiting for volumes to attach or mount for pod "mynamespace"/"db-xxxx". list of unmounted volumes=[my-volume1]. list of unattached volumes=[plone-volume1 default-token-xxxx]

Configuration (MUST fill this out):

  • CSI Version: 0.2.0

  • Kubernetes Version: 1.11.2

  • Cloud provider/framework version, if applicable (such as Rancher):

Plain Kubernetes deployed with kubeadm

Switch from dep to Go modules

With Go v1.11, Go modules are now an experimental mode integrated into the Go toolchain. They allow native dependency management without the need for an additional tool. Go modules also allow us to develop the plugin without needing a GOPATH, which makes it easier for first-time contributors or Go users. Overall, they have many benefits compared to other tools.

CSI didn't check whether the volumeHandle ID is duplicate

What did you do? (required. The issue will be closed when not provided.)

Create 2 persistent volumes with the same volumeHandle ID and attached it to the pods in the same node.

What did you expect to happen?

I expect CSI will check whether the volumeHandle ID is duplicate, the container will not be created.

Configuration (MUST fill this out):

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{"pv.kubernetes.io/provisioned-by":"dobs.csi.digitalocean.com"},"name":"wordpress-webserver-pv","namespace":""},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"3Gi"},"csi":{"driver":"dobs.csi.digitalocean.com","fsType":"ext4","volumeAttributes":{"com.digitalocean.csi/format":"true"},"volumeHandle":"dd208124-e9df-11e8-9963-0a58ac14b61a"},"storageClassName":"do-block-storage"}}
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
  creationTimestamp: 2018-11-20T13:51:14Z
  finalizers:
  - kubernetes.io/pv-protection
  - external-attacher/dobs-csi-digitalocean-com
  name: wordpress-webserver-pv
  resourceVersion: "452634"
  selfLink: /api/v1/persistentvolumes/wordpress-webserver-pv
  uid: 58769042-eccb-11e8-9629-7a95cce61e52
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 3Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: wordpress-webserver-pvc
    namespace: wordpress
    resourceVersion: "452576"
    uid: 55f9cc28-eccb-11e8-9629-7a95cce61e52
  csi:
    driver: dobs.csi.digitalocean.com
    fsType: ext4
    volumeAttributes:
      com.digitalocean.csi/format: "true"
    volumeHandle: dd208124-e9df-11e8-9963-0a58ac14b61a
  persistentVolumeReclaimPolicy: Retain
  storageClassName: do-block-storage
  volumeMode: Filesystem
status:
  phase: Bound
---
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{"pv.kubernetes.io/provisioned-by":"dobs.csi.digitalocean.com"},"name":"wordpress-mysql-pv","namespace":""},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"3Gi"},"csi":{"driver":"dobs.csi.digitalocean.com","fsType":"ext4","volumeAttributes":{"com.digitalocean.csi/noformat":"true"},"volumeHandle":"cb0ac84f-e9df-11e8-9963-0a58ac14b61a"},"storageClassName":"do-block-storage"}}
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
  creationTimestamp: 2018-11-20T13:51:13Z
  finalizers:
  - kubernetes.io/pv-protection
  - external-attacher/dobs-csi-digitalocean-com
  name: wordpress-mysql-pv
  resourceVersion: "452635"
  selfLink: /api/v1/persistentvolumes/wordpress-mysql-pv
  uid: 58399a2a-eccb-11e8-9629-7a95cce61e52
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 3Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: wordpress-mysql-pvc
    namespace: wordpress
    resourceVersion: "452582"
    uid: 561fe82f-eccb-11e8-9629-7a95cce61e52
  csi:
    driver: dobs.csi.digitalocean.com
    fsType: ext4
    volumeAttributes:
      com.digitalocean.csi/noformat: "true"
    volumeHandle: dd208124-e9df-11e8-9963-0a58ac14b61a
  persistentVolumeReclaimPolicy: Retain
  storageClassName: do-block-storage
  volumeMode: Filesystem
status:
  phase: Bound
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"wordpress-mysql-pvc","namespace":"wordpress"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"3Gi"}},"volumeName":"wordpress-mysql-pv"}}
      pv.kubernetes.io/bind-completed: "yes"
    creationTimestamp: 2018-11-20T13:51:10Z
    finalizers:
    - kubernetes.io/pvc-protection
    name: wordpress-mysql-pvc
    namespace: wordpress
    resourceVersion: "452629"
    selfLink: /api/v1/namespaces/wordpress/persistentvolumeclaims/wordpress-mysql-pvc
    uid: 561fe82f-eccb-11e8-9629-7a95cce61e52
  spec:
    accessModes:
    - ReadWriteOnce
    dataSource: null
    resources:
      requests:
        storage: 3Gi
    storageClassName: do-block-storage
    volumeMode: Filesystem
    volumeName: wordpress-mysql-pv
  status:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 3Gi
    phase: Bound
---
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"wordpress-webserver-pvc","namespace":"wordpress"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"3Gi"}},"volumeName":"wordpress-webserver-pv"}}
      pv.kubernetes.io/bind-completed: "yes"
    creationTimestamp: 2018-11-20T13:51:09Z
    finalizers:
    - kubernetes.io/pvc-protection
    name: wordpress-webserver-pvc
    namespace: wordpress
    resourceVersion: "452622"
    selfLink: /api/v1/namespaces/wordpress/persistentvolumeclaims/wordpress-webserver-pvc
    uid: 55f9cc28-eccb-11e8-9629-7a95cce61e52
  spec:
    accessModes:
    - ReadWriteOnce
    dataSource: null
    resources:
      requests:
        storage: 3Gi
    storageClassName: do-block-storage
    volumeMode: Filesystem
    volumeName: wordpress-webserver-pv
  status:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 3Gi
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"wordpress"},"name":"wordpress-mysql","namespace":"wordpress"},"spec":{"selector":{"matchLabels":{"app":"wordpress","tier":"mysql"}},"strategy":{"type":"Recreate"},"template":{"metadata":{"labels":{"app":"wordpress","tier":"mysql"}},"spec":{"containers":[{"env":[{"name":"MYSQL_ROOT_PASSWORD","valueFrom":{"secretKeyRef":{"key":"password","name":"mysql-pass"}}}],"image":"mysql:5.6","name":"mysql","ports":[{"containerPort":3306,"name":"mysql"}],"volumeMounts":[{"mountPath":"/var/lib/mysql","name":"mysql-pvc"},{"mountPath":"/etc/mysql/conf.d/","name":"mysql-extra-config"}]}],"volumes":[{"name":"mysql-pvc","persistentVolumeClaim":{"claimName":"wordpress-mysql-pvc"}},{"configMap":{"name":"mysql-extra-config"},"name":"mysql-extra-config"}]}}}}
  creationTimestamp: 2018-11-20T13:57:31Z
  generation: 1
  labels:
    app: wordpress
  name: wordpress-mysql
  namespace: wordpress
  resourceVersion: "453288"
  selfLink: /apis/extensions/v1beta1/namespaces/wordpress/deployments/wordpress-mysql
  uid: 399e12f7-eccc-11e8-9629-7a95cce61e52
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: mysql-pass
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        name: mysql
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-pvc
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: mysql-pvc
        persistentVolumeClaim:
          claimName: wordpress-mysql-pvc
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"wordpress"},"name":"wordpress-webserver","namespace":"wordpress"},"spec":{"selector":{"matchLabels":{"app":"wordpress","tier":"frontend"}},"strategy":{"type":"Recreate"},"template":{"metadata":{"labels":{"app":"wordpress","tier":"frontend"}},"spec":{"containers":[{"args":["www-data-sftp:pass:33:33:www-data"],"image":"atmoz/sftp:latest","name":"sftp","ports":[{"containerPort":22,"name":"ssh"}],"volumeMounts":[{"mountPath":"/home/www-data-sftp/www-data","name":"webserver"}]},{"env":null,"image":"wordpress:4.9.8-php7.2-apache","name":"wordpress","ports":[{"containerPort":80,"name":"wordpress"}],"volumeMounts":[{"mountPath":"/var/www/html","name":"webserver"}]}],"volumes":[{"name":"webserver","persistentVolumeClaim":{"claimName":"wordpress-webserver-pvc"}}]}}}}
  creationTimestamp: 2018-11-20T13:57:33Z
  generation: 2
  labels:
    app: wordpress
  name: wordpress-webserver
  namespace: wordpress
  resourceVersion: "487549"
  selfLink: /apis/extensions/v1beta1/namespaces/wordpress/deployments/wordpress-webserver
  uid: 3ab6471b-eccc-11e8-9629-7a95cce61e52
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - args:
        - www-data-sftp:pass:33:33:www-data
        image: atmoz/sftp:latest
        imagePullPolicy: Always
        name: sftp
        ports:
        - containerPort: 22
          name: ssh
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /home/www-data-sftp/www-data
          name: webserver
      - image: wordpress:4.9.8-php7.2-apache
        imagePullPolicy: IfNotPresent
        name: wordpress
        ports:
        - containerPort: 80
          name: wordpress
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: webserver
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: webserver
        persistentVolumeClaim:
          claimName: wordpress-webserver-pvc
  • CSI Version:
    do-csi-plugin:v0.3.1
    csi-attacher:v0.4.1
  • Kubernetes Version:
    v1.12.1

com.digitalocean.csi/format: "true" dangerous?

I just came across this new example on how to use existing volumes

I'm glad this works now! But I'm worried about this part:

    volumeAttributes:
      com.digitalocean.csi/format: "true"

volumeAttributes has a special, csi-digitalocean-specific annotation called com.digitalocean.csi/format. If you add this key, the CSI plugin makes sure not to format the volume. If you don't add it, the volume will be formatted.

I'm not aware of the internals, but formatting existing volumes by default seems very harsh ;-).
Wouldn't it be better if existing volumes would not be formatted by default, only if you explicitly say so in the config?

Another thing I noticed. If I see the config for the PV and read format: "true" I would think this will format the volume, not the opposite; I can imagine people accidentally formatting their volumes by setting it to "false" intuitively.

Error when upgrading workload

What did you do? (required. The issue will be closed when not provided.)

I deployed a blog app using ghost and everything ok here.
Then I did an upgrade just setting a new env value and at this moment I got an error to attach the volume.

What did you expect to happen?

To upgrade without error when attaching and detaching volumes.

I'm using Rancher 2.0.8

Configuration (MUST fill this out):

  • system logs:

Please provide the following logs:


kubectl cluster-info dump > kubernetes-dump.log

This will output everthing from your cluster. Please use a private gist via
https://gist.github.com/ to share this dump with us

  • manifests, such as pvc, deployments, etc.. you used to reproduce:

workload

volume claim

volume

events

Everything ok until now.

Now, an upgrade was made just to change an env value. and...

upgrade events

To fix, I manually deleted the pod.

manual deleted

  • CSI Version:
    0.2.0

  • Kubernetes Version:
    1.11.2

  • Cloud provider/framework version, if applicable (such as Rancher):
    Rancher 2.0.8

Is there support for force deleting persistent volumes?

I'm experiencing #32 on only one pod (a similar one works), so I tried to delete the persistent volume claim, then force delete the persistent volume itself (a manual delete didn't work and it's been stuck Terminating for more than an hour).

From the logs I see:

si-provisioner-doplugin-0 csi-provisioner I0822 18:57:53.526521       1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync
csi-provisioner-doplugin-0 csi-provisioner I0822 18:57:53.529329       1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: forcing resync
csi-provisioner-doplugin-0 csi-provisioner E0822 18:57:53.529400       1 controller.go:685] Exceeded failedDeleteThreshold threshold: 15, for volume "pvc-b154104f-a4e7-11e8-b870-16ebc57da236", provisioner will not attempt retries for this volume
csi-provisioner-doplugin-0 csi-provisioner I0822 18:57:54.111269       1 reflector.go:286] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync

which stays in place even when I detach the volume and delete manually. Is there any way to "purge" a persistent volume? Thanks!

CSI: Add ErrorLog interceptor for the gRPC servers

We have multiple methods in the plugin that might return an error. We should add an interceptor that automatically logs the error returned by all functions. I haven't looked into it, but I believe something like this might already exist.

csi-doplugin daemonset pods fail to start after upgrade to 0.1.3

We've been running the CSI fine for the past 3 months, but about a week ago one of our PVCs suddenly got disconnected on DigitalOcean (not attached to the droplet anymore).

The CPU and memory suddenly started spiking on that particular droplet due to the csi-provisioner trying to fix the issue:

I0818 13:27:03.875395       1 controller.go:93] Connecting to /var/lib/csi/sockets/pluginproxy/csi.sock
I0818 13:27:03.875588       1 controller.go:120] Still trying, connection is CONNECTING
I0818 13:27:03.875733       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0818 13:27:04.875988       1 controller.go:120] Still trying, connection is CONNECTING
I0818 13:27:06.009125       1 controller.go:120] Still trying, connection is CONNECTING
I0818 13:27:06.009453       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0818 13:27:07.117278       1 controller.go:120] Still trying, connection is CONNECTING
I0818 13:27:07.117483       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0818 13:27:08.300558       1 controller.go:120] Still trying, connection is CONNECTING
I0818 13:27:09.250929       1 controller.go:120] Still trying, connection is CONNECTING
I0818 13:27:09.251189       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0818 13:27:10.259481       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0818 13:27:11.377851       1 controller.go:120] Still trying, connection is CONNECTING
I0818 13:27:11.377939       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0818 13:27:12.384654       1 controller.go:120] Still trying, connection is CONNECTING
I0818 13:27:12.384811       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0818 13:27:13.527965       1 controller.go:120] Still trying, connection is TRANSIENT_FAILURE
I0818 13:27:13.875619       1 controller.go:113] Connection timed out

(This is about the 300th restart of the csi-provisioner plugin pod, all with the same output.)

I've tried deleting the pod/PV/PVC that were causing the issue, but for some reason the PVC from the provisioner is now stuck in Pending?

running on digitalocean/do-csi-plugin 0.1.0

Files not persisted/written to block storage volume

I have a kubernetes cluster setup using Rancher 2.0 (https://rancher.com/).

I followed the install instructions and created the Persistent Volume Claim and pod. I can see in DigitalOcean the volume is created and I can access the mounted directory in the busybox container shell and write files to it.

However when the pod is removed and a new pod is created using the same persistent volume claim, the volume is empty and all the previous files are lost.

Also, if I delete the pod and, instead of creating a new pod, attach the volume to a new droplet directly in DigitalOcean, I see that the volume is empty. The files created in the /data directory of the busybox pod are not on the block storage volume.

I tested the same setup in rancher using a different storage driver (rancher/longhorn) and the files were correctly persisted across different pods.

Is there a setting I'm missing to make the files be written to the volume? I followed the instructions step by step copying the yaml as is.

driver name com.digitalocean.csi.dobs not found in the list of registered CSI drivers

What did you do? (required. The issue will be closed when not provided.)

  • Created new Kubernetes cluster (v1.12.0) with one master, one worker and calico network using kubeadm
  • Installed csi-digitalocean-v0.2.0 using instructions in readme.md
  • Created PVC csi-pvc in section 3. Test and verify of instructions
  • Created POD my-csi-app
  • Pod is in state ContainerCreating with these events (kubectl describe):
Events:
  Type     Reason                  Age                 From                     Message
  ----     ------                  ----                ----                     -------
  Normal   Scheduled               22m                 default-scheduler        Successfully assigned default/my-csi-app to worker1
  Normal   SuccessfulAttachVolume  21m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c3738d2cc47711e8"
  Warning  FailedMount             106s (x9 over 20m)  kubelet, worker1         Unable to mount volumes for pod "my-csi-app_default(e9139ae0-c477-11e8-917b-de5efd564cc5)": timeout expired waiting for volumes to attach or mount for pod "default"/"my-csi-app". list of unmounted volumes=[my-do-volume]. list of unattached volumes=[my-do-volume default-token-gbtz2]
  Warning  FailedMount             80s (x18 over 21m)  kubelet, worker1         MountVolume.MountDevice failed for volume "pvc-c3738d2cc47711e8" : driver name com.digitalocean.csi.dobs not found in the list of registered CSI drivers

What did you expect to happen?

  • Pod created and volume attached to it

Configuration (MUST fill this out):

  • system logs:

https://gist.github.com/Barvoj/6f15f342a1db88bd1717e4d92ec2915a

  • manifests, such as pvc, deployments, etc. that you used to reproduce:

I created a new cluster using these steps:
create 2 new DigitalOcean droplets (master and worker)

Run on all nodes:

apt-get update
apt-get upgrade

# Install docker as described https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-ce
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce

# Install kubeadm as described https://kubernetes.io/docs/setup/independent/install-kubeadm/

# Disable swap
swapoff -a

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

Run on master:

# Create cluster as described https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=68.183.36.31

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

Run on worker:
kubeadm join from output of kubeadm init

Make kubelet privileged on all nodes
Add --allow-privileged=true in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf for KUBELET_KUBECONFIG_ARGS and then systemctl restart kubelet

Install CSI
I followed https://github.com/digitalocean/csi-digitalocean#1-create-a-secret-with-your-digitalocean-api-access-token

  • CSI Version: v0.2.0
  • Kubernetes Version: v1.12.0
  • Cloud provider/framework version, if applicable (such as Rancher): kubeadm v1.12.0
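
For reference, a couple of quick checks to confirm the node plugin actually registered on the worker (the daemonset name and socket path below are taken from the manifests quoted elsewhere in this document; adjust them if your deployment differs):

# Confirm the csi-doplugin daemonset pod is running on the worker node
kubectl -n kube-system get pods -o wide | grep csi-doplugin

# On the worker node itself, confirm the driver socket directory exists
ls /var/lib/kubelet/plugins/com.digitalocean.csi.dobs/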

PVC fails to mount using RancherOS

Hi,

When trying to create a PVC and attach it, it fails with the error below:

MountVolume.MountDevice failed for volume "pvc-2be219db-955f-11e8-9a13-e6af7cff535a" : rpc error: code = Unavailable desc = grpc: the connection is unavailable

Tried with both 1.10 and 1.11

OS: RancherOS 1.4
Kubernetes 1.10

Something wrong with roles and accesses

What did you do? (required. The issue will be closed when not provided.)

Followed README instructions.

What did you expect to happen?

Volume provisioning when PVC is created.

What happened?

Nothing got provisioned, with no apparent error.

The clue is in the provisioner container log:

E1015 06:09:56.572929       1 reflector.go:205] github.com/kubernetes-csi/external-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:kube-system:csi-do-controller-sa" cannot list persistentvolumes at the cluster scope: [clusterrole.rbac.authorization.k8s.io "system:csi-external-provisioner" not found, clusterrole.rbac.authorization.k8s.io "system:csi-external-attacher" not found]

Configuration (MUST fill this out):

Dump is too big for gist.
Let me know if you need it, I will find a way to host it.
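
For what it's worth, the error message above names the missing cluster roles explicitly, so a quick check (role and service account names taken from the error itself) is:

# Do the cluster roles referenced in the error exist?
kubectl get clusterrole system:csi-external-provisioner system:csi-external-attacher

# What is the controller service account actually bound to?
kubectl get clusterrolebinding -o wide | grep csi-do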

Driver does not support reclaimPolicy: Retain

Currently the driver always formats volumes before mounting and does not respect reclaimPolicy: Retain.
For the Retain policy, the driver should not format volumes and should leave any existing data on them intact.
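
For context, reclaimPolicy is set on the StorageClass; a minimal sketch of a class requesting Retain looks like the following (the provisioner name matches the one used elsewhere in this document; whether the driver then preserves the data is exactly what this issue is about):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: do-block-storage-retain
provisioner: com.digitalocean.csi.dobs
reclaimPolicy: Retain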

strange log from stateful pod

What did you do? (required. The issue will be closed when not provided.)

I was just testing an installation on DO + K8s:
2 workers with 8 GB RAM and an Elasticsearch cluster using block storage.
After a few days I stressed the cluster and Elasticsearch got out of control.

What did you expect to happen?

I thought the stateful pod could crash and restart, but not end up asking to format the block storage.

Is there a limit on block storage volumes mapped to nodes? Maybe I mapped more than one disk and this crashed the DO storage plugin? I'm sure I've done something wrong installing it.

Configuration (MUST fill this out):

  • system logs:
    MountVolume.MountDevice failed for volume "pvc-2601e322bb2611e8" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2601e322bb2611e8' output: "mke2fs 1.43.7 (16-Oct-2017)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2601e322bb2611e8 does not exist and no size was specified.\n"
    Readiness probe failed: Get http://10.2.2.151:9200/_cluster/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

Please provide the following logs:

kubectl cluster-info dump > kubernetes-dump.log
ahem... too many gigabytes...

This will output everything from your cluster. Please use a private gist via
https://gist.github.com/ to share this dump with us

  • CSI Version:
    0.20

  • Kubernetes Version:
    1.10

  • Cloud provider/framework version, if applicable (such as Rancher):
installed via Typhoon, then added csi-digitalocean

Add method name to the log fields

It's better to have a method:foo field in our logs instead of logging "foo is called". This makes it easier to follow the logs for a given method.
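
A minimal sketch of the idea, assuming a structured logger such as logrus (the field and message names are illustrative):

package driver

import log "github.com/sirupsen/logrus"

func logCreateVolumeCall() {
	// Instead of logging "create volume is called" as a bare message,
	// attach the method name as a structured field:
	log.WithField("method", "create_volume").Info("create volume called")
}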

Support Docker Swarm

Has anyone used this with Docker Swarm mode?
Will it work, and if so, could someone provide an example in a stack file?

Thank you very much.

Connect existing block storage volumes?

Hi there,

Is it possible to use csi-digitalocean to connect a Kubernetes pod to an existing block storage volume?

I looked at the docs, examples and the issues, but I couldn't find anything related to this.
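
One generic CSI technique for this is static provisioning: create a PersistentVolume by hand that points at the existing volume via its ID, plus a PVC bound to it. The sketch below is unverified against this driver, and the volume ID, size, and names are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-do-volume
spec:
  storageClassName: do-block-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: com.digitalocean.csi.dobs
    volumeHandle: "<existing-volume-id>"  # DigitalOcean volume ID (placeholder)
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-do-volume-claim
  namespace: default
spec:
  storageClassName: do-block-storage
  volumeName: existing-do-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi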

pv/pvc not being deleted - "no finalizer, ignoring"

I have a zombie PV + PVC which are not being deleted; I'm getting this in the logs:

I0418 09:36:13.409772       1 reflector.go:286] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:87: forcing resync
I0418 09:36:13.410725       1 controller.go:167] Started VA processing "csi-fake-id-1"
I0418 09:36:13.411232       1 csi_handler.go:76] CSIHandler: processing VA "csi-fake-id-1"
I0418 09:36:13.411641       1 csi_handler.go:98] "csi-fake-id-1" is already attached
I0418 09:36:13.412087       1 csi_handler.go:92] CSIHandler: finished processing "csi-fake-id-1"
I0418 09:36:13.412500       1 controller.go:167] Started VA processing "csi-fake-id-2"
I0418 09:36:13.412908       1 csi_handler.go:76] CSIHandler: processing VA "csi-fake-id-2"
I0418 09:36:13.413357       1 csi_handler.go:98] "csi-fake-id-2" is already attached
I0418 09:36:13.413759       1 csi_handler.go:92] CSIHandler: finished processing "csi-fake-id-2"
I0418 09:36:13.413823       1 controller.go:167] Started VA processing "csi-fake-id-3"
I0418 09:36:13.413866       1 csi_handler.go:76] CSIHandler: processing VA "csi-fake-id-3"
I0418 09:36:13.413881       1 csi_handler.go:98] "csi-fake-id-3" is already attached
I0418 09:36:13.413896       1 csi_handler.go:92] CSIHandler: finished processing "csi-fake-id-3"
I0418 09:36:13.413938       1 controller.go:167] Started VA processing "csi-fake-id-4"
I0418 09:36:13.413957       1 csi_handler.go:76] CSIHandler: processing VA "csi-fake-id-4"
I0418 09:36:13.413971       1 csi_handler.go:98] "csi-fake-id-4" is already attached
I0418 09:36:13.413986       1 csi_handler.go:92] CSIHandler: finished processing "csi-fake-id-4"
I0418 09:36:13.420868       1 reflector.go:286] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:87: forcing resync
I0418 09:36:13.421003       1 controller.go:197] Started PV processing "kubernetes-dynamic-pv-fake-id-5"
I0418 09:36:13.421026       1 csi_handler.go:350] CSIHandler: processing PV "kubernetes-dynamic-pv-fake-id-5"
I0418 09:36:13.421064       1 csi_handler.go:354] CSIHandler: processing PV "kubernetes-dynamic-pv-fake-id-5": no deletion timestamp, ignoring
I0418 09:36:13.421101       1 controller.go:197] Started PV processing "kubernetes-dynamic-pv-fake-id-6"
I0418 09:36:13.421117       1 csi_handler.go:350] CSIHandler: processing PV "kubernetes-dynamic-pv-fake-id-6"
I0418 09:36:13.421131       1 csi_handler.go:354] CSIHandler: processing PV "kubernetes-dynamic-pv-fake-id-6": no deletion timestamp, ignoring
I0418 09:36:13.421178       1 controller.go:197] Started PV processing "kubernetes-dynamic-pv-fake-id-7"
I0418 09:36:13.421194       1 csi_handler.go:350] CSIHandler: processing PV "kubernetes-dynamic-pv-fake-id-7"
I0418 09:36:13.421207       1 csi_handler.go:354] CSIHandler: processing PV "kubernetes-dynamic-pv-fake-id-7": no deletion timestamp, ignoring
I0418 09:36:13.421250       1 controller.go:197] Started PV processing "kubernetes-dynamic-pv-fake-id-8"
I0418 09:36:13.421266       1 csi_handler.go:350] CSIHandler: processing PV "kubernetes-dynamic-pv-fake-id-8"
I0418 09:36:13.421359       1 csi_handler.go:370] CSIHandler: processing PV "kubernetes-dynamic-pv-fake-id-8": no finalizer, ignoring

The relevant one is kubernetes-dynamic-pv-fake-id-8.

Able to create volume, but unable to mount it to pod

What did you do? (required. The issue will be closed when not provided.)

I tried to create a new DO storage class using this plugin for my OpenShift 3.10 cluster.

What did you expect to happen?

A new volume gets created and mounted to the pod.

Configuration (MUST fill this out):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: do-block-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: com.digitalocean.csi.dobs


---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: digitalocean-csi-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["create", "delete", "get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: digitalocean-csi-role
subjects:
  - kind: ServiceAccount
    name: digitalocean-csi
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: digitalocean-csi-role
  apiGroup: rbac.authorization.k8s.io

---

kind: Secret
apiVersion: v1
type: Opaque
stringData:
  access-token: "xyz"
metadata:
  name: digitalocean

---

kind: Deployment
apiVersion: apps/v1
metadata:
  name: digitalocean-csi-controller
spec:
  replicas: 2
  selector:
    matchLabels:
      app: digitalocean-csi-controllers
  template:
    metadata:
      labels:
        app: digitalocean-csi-controllers
    spec:
      serviceAccount: digitalocean-csi
      containers:
        - name: csi-attacher
          image: lakshminp/csi-attacher:v3.10
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election"
            - "--leader-election-namespace=$(MY_NAMESPACE)"
            - "--leader-election-identity=$(MY_NAME)"
          env:
            - name: MY_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-provisioner
          image: lakshminp/csi-provisioner:v3.10
          args:
            - "--v=5"
            - "--provisioner=com.digitalocean.csi.dobs"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: digitalocean-driver
          image: digitalocean/do-csi-plugin:v0.2.0
          args :
            - "--endpoint=unix://$(ADDRESS)"
            - "--token=$(DIGITALOCEAN_ACCESS_TOKEN)"
            - "--url=$(DIGITALOCEAN_API_URL)"
          env:
            - name: NODEID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName            
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DIGITALOCEAN_API_URL
              value: https://api.digitalocean.com/
            - name: DIGITALOCEAN_ACCESS_TOKEN
              valueFrom:
                secretKeyRef:
                  name: digitalocean
                  key: access-token
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: socket-dir
          emptyDir:

---

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: digitalocean-csi-ds
spec:
  selector:
    matchLabels:
      app: digitalocean-csi-driver
  template:
    metadata:
      labels:
        app: digitalocean-csi-driver
    spec:
      nodeSelector:
          role: node
      serviceAccount: digitalocean-csi
      containers:
        - name: csi-driver-registrar
          image: registry.access.redhat.com/openshift3/csi-driver-registrar:v3.10
          securityContext:
            privileged: true
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: digitalocean-driver
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: digitalocean/do-csi-plugin:v0.2.0
          args :
            - "--endpoint=unix://$(ADDRESS)"
            - "--token=$(DIGITALOCEAN_ACCESS_TOKEN)"
            - "--url=$(DIGITALOCEAN_API_URL)"
          env:
            - name: NODEID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DIGITALOCEAN_API_URL
              value: https://api.digitalocean.com/
            - name: DIGITALOCEAN_ACCESS_TOKEN
              valueFrom:
                secretKeyRef:
                  name: digitalocean
                  key: access-token
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: mountpoint-dir
              mountPath: /var/lib/origin/openshift.local.volumes/pods/
              mountPropagation: "Bidirectional"
            - name: cloud-metadata
              mountPath: /var/lib/cloud/data/
            - name: dev
              mountPath: /dev
      volumes:
        - name: cloud-metadata
          hostPath:
            path: /var/lib/cloud/data/
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/com.digitalocean.csi.dobs
            type: DirectoryOrCreate
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/origin/openshift.local.volumes/pods/
            type: Directory
        - name: dev
          hostPath:
            path: /dev

  • system logs:

from pod:

I1024 16:04:13.418610       1 csi_handler.go:330] Saved attach error to "csi-c8af0f7c0ba46bafd1a40b16396a1f03edaeeb51acdc7da1666c2a0db8d6e2b6"
I1024 16:04:13.418658       1 csi_handler.go:86] Error processing "csi-c8af0f7c0ba46bafd1a40b16396a1f03edaeeb51acdc7da1666c2a0db8d6e2b6": failed to attach: node "openshift-master" has no NodeID annotation

from web console:


 
12:09:43 PM | postgresql-2-1-9ll8k | Pod | Warning | Failed Mount | Unable to mount volumes for pod "postgresql-2-1-9ll8k_validate(fac2a09e-d7a5-11e8-8c1a-1a5063441dd8)": timeout expired waiting for volumes to attach or mount for pod "validate"/"postgresql-2-1-9ll8k". list of unmounted volumes=[postgresql-2-data]. list of unattached volumes=[postgresql-2-data default-token-fbws6] (4 times in the last 21 minutes)
12:09:09 PM | postgresql-2-1-9ll8k | Pod | Warning | Failed Attach Volume | AttachVolume.Attach failed for volume "kubernetes-dynamic-pv-f8864c5ed7a511e8" : node "openshift-master" has no NodeID annotation

Please provide the following logs:


kubectl cluster-info dump > kubernetes-dump.log

This will output everything from your cluster. Please use a private gist via
https://gist.github.com/ to share this dump with us

  • manifests, such as pvc, deployments, etc. that you used to reproduce:

Please provide the total set of manifests that are needed to reproduce the
issue. Just providing the pvc is not helpful. If you cannot provide it due to
privacy concerns, please try creating a reproducible case.

  • CSI Version:

v0.2.0

  • Kubernetes Version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0+b81c8f8", GitCommit:"b81c8f8", GitTreeState:"clean", BuildDate:"2018-08-03T07:21:49Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0+b81c8f8", GitCommit:"b81c8f8", GitTreeState:"clean", BuildDate:"2018-10-12T17:04:15Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider/framework version, if applicable (such as Rancher):

DigitalOcean with OpenShift 3.10.

I followed the same instructions as in the OpenShift CSI configuration document.

Change container/k8s resources to a consistent naming scheme

Daemonset name: csi-doplugin
Container name: csi-doplugin

Statefulset name: csi-attacher-doplugin
Container name: digitalocean-csi-plugin

Statefulset name: csi-provisioner-doplugin
Container name: digitalocean-csi-plugin

Our image name: digitalocean/do-csi-plugin


Currently these are the names we're using and it's confusing. We should name them consistently. Suggestions:

  • csi-node-plugin (daemonset), the container inside the pod will be called: csi-plugin
  • csi-controller-plugin (statefulset)
    • Combine csi-attacher-doplugin and csi-provisioner-doplugin into a single statefulset
    • name the container csi-plugin
  • rename the Docker image to digitalocean/csi-plugin

This is not the final decision, please comment if you have concerns or any suggestions.

Feature Improvement: Verify volume_limit when making a provisioning call to the DO API

I just had an issue when rolling out a production stack where I went over the volume_limit (only allowed 10 volumes... srsly???).

I've raised the issue with the DO support line to have the limit for our account increased, because you really can't run any large stack with that limit :/.

This limit was also not checked in the CSI, hence my reasoning for opening a ticket here :).
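
Until the driver checks this itself, a rough way to see how close an account is to the limit is to count the existing volumes, e.g. with doctl (assuming doctl is configured; the limit to compare against is whatever DO support has set for the account):

# Count block storage volumes currently in the account
doctl compute volume list --format ID --no-header | wc -l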
