gcp-filestore-csi-driver's Introduction

gcp-filestore-csi-driver

Google Cloud Filestore CSI driver for use in Kubernetes and other container orchestrators.

Disclaimer: Deploying this driver manually is not an officially supported Google product. For a fully managed and supported Filestore experience on Kubernetes, use GKE with the managed Filestore CSI driver.

Project Overview

This driver allows volumes backed by Google Cloud Filestore instances to be dynamically created and mounted by workloads.

Project Status

Status: GA

Latest image: registry.k8s.io/cloud-provider-gcp/gcp-filestore-csi-driver:v1.6.13

Also see known issues and CHANGELOG.

The manifest bundle that captures all the driver components (the driver pod, which includes the csi-external-provisioner, csi-external-resizer, csi-external-snapshotter, gcp-filestore-driver, and csi-driver-registrar containers, plus the CSI driver object, RBACs, pod security policies, etc.) can be picked up from the master branch overlays directory. We structure the overlays directory per Kubernetes minor version because not all driver components can be used with all Kubernetes versions. For example, volume snapshots are supported on Kubernetes 1.17+, so the stable-1-16 driver manifests do not contain the snapshotter sidecar. Read more about overlays here.

Example: the stable-1-19 overlays bundle can be used to deploy all the components of the driver on Kubernetes 1.19, and the stable-master overlays bundle can be used to deploy all the components of the driver on Kubernetes master.

CSI Compatibility

This plugin is compatible with CSI version 1.3.0.

Plugin Features

Supported CreateVolume parameters

This version of the driver creates a new Cloud Filestore instance per volume. Customizable parameters for volume creation include:

| Parameter | Values | Default | Description |
| --- | --- | --- | --- |
| tier | "standard"/"basic_hdd", "premium"/"basic_ssd", "enterprise", "high_scale_ssd"/"zonal" | "standard" | Storage performance tier. |
| network | string | "default" | VPC name. When using "PRIVATE_SERVICE_ACCESS" connect-mode, network needs to be the full VPC name. |
| reserved-ipv4-cidr | string | "" | CIDR range to allocate Filestore IP ranges from. The CIDR must be large enough to accommodate multiple Filestore IP ranges of /29 each, or /26 if the enterprise tier is used. |
| reserved-ip-range | string | "" | IP range to allocate Filestore IP ranges from. This parameter is used instead of "reserved-ipv4-cidr" when "connect-mode" is set to "PRIVATE_SERVICE_ACCESS", and the value must be an allocated IP address range. The IP range must be large enough to accommodate multiple Filestore IP ranges of /29 each, or /26 if the enterprise tier is used. |
| connect-mode | "DIRECT_PEERING", "PRIVATE_SERVICE_ACCESS" | "DIRECT_PEERING" | The network connect mode of the Filestore instance. To provision a Filestore instance with a shared VPC from a service project, PRIVATE_SERVICE_ACCESS mode must be used. |
| instance-encryption-kms-key | string | "" | Fully qualified resource identifier for the key to use to encrypt new instances. |

For Kubernetes clusters, these parameters are specified in the StorageClass.
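
For example, a StorageClass might look like the following. This is a minimal sketch, not the canonical example from the docs: the class name, network, and CIDR are placeholders, and only parameters from the table above are set.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-example                 # placeholder name
provisioner: filestore.csi.storage.gke.io
allowVolumeExpansion: true                # enables the volume expansion feature described below
parameters:
  tier: standard
  network: default                        # VPC name; must be the full VPC name with PRIVATE_SERVICE_ACCESS
  reserved-ipv4-cidr: 10.10.0.0/24        # placeholder CIDR large enough for multiple /29 Filestore ranges
volumeBindingMode: WaitForFirstConsumer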

Note that non-default networks require extra firewall setup.

Current supported Features

  • Volume resizing: The CSI Filestore driver supports volume expansion for all supported Filestore tiers. See the user guide here. The volume expansion feature is beta in Kubernetes 1.16+.

  • Labels: Filestore supports labels per instance, which are a map of key-value pairs. The Filestore CSI driver enables user-provided labels to be stamped on the instance. Users can provide labels by using the 'labels' key in StorageClass.parameters (see the StorageClass sketch after this list). In addition, the Filestore instance can be labelled with information about the PVC/PV the instance was created for. To obtain the PVC/PV information, the '--extra-create-metadata' flag needs to be set on the CSI external-provisioner sidecar. User-provided label keys and values must comply with the naming convention specified here. Please see these storage class examples to apply custom user-provided labels to the Filestore instance.

  • Topology preferences: Filestore performance and network usage are affected by topology. For example, it is recommended to run workloads in the same zone where the Cloud Filestore instance is provisioned. The following table describes how provisioning can be tuned by topology. The volumeBindingMode is specified in the StorageClass used for provisioning. 'strict-topology' is a flag passed to the CSI provisioner sidecar. 'allowedTopology' is also specified in the StorageClass. The Filestore driver will use the first topology in the preferred list, or, if that is empty, the first in the requisite list. If the topology feature is not enabled in the CSI provisioner (--feature-gates=Topology=false), CreateVolume.accessibility_requirements will be nil, and the driver simply creates the instance in the zone where the driver deployment is running. See the user guide here. The topology feature is GA in Kubernetes 1.17+.

  | SC Bind Mode | 'strict-topology' | SC allowedTopology | CSI provisioner behavior |
  | --- | --- | --- | --- |
  | WaitForFirstConsumer | true | Present | If the topology of the node selected by the scheduler is not in allowedTopology, provisioning fails and the scheduler will continue with a different node. Otherwise, CreateVolume is called with requisite and preferred topologies set to that of the selected node. |
  | WaitForFirstConsumer | false | Present | If the topology of the node selected by the scheduler is not in allowedTopology, provisioning fails and the scheduler will continue with a different node. Otherwise, CreateVolume is called with requisite set to allowedTopology and preferred set to allowedTopology rearranged with the selected node topology as the first parameter. |
  | WaitForFirstConsumer | true | Not present | CreateVolume is called with requisite set to the selected node topology, and preferred set to the same. |
  | WaitForFirstConsumer | false | Not present | CreateVolume is called with requisite set to the aggregated topology across all nodes that matches the topology of the selected node, and preferred set to the sorted and shifted version of requisite, with the selected node topology as the first parameter. |
  | Immediate | N/A | Present | CreateVolume is called with requisite set to allowedTopology and preferred set to the sorted and shifted version of requisite at a randomized index. |
  | Immediate | N/A | Not present | CreateVolume is called with requisite set to the aggregated topology across nodes which contain the topology keys of CSINode objects, and preferred set to requisite sorted and shifted at a randomized index. |
  • Volume Snapshot: The CSI driver currently supports CSI VolumeSnapshots on a GCP Filestore instance using the GCP Filestore Backup feature. CSI VolumeSnapshot is a beta feature in Kubernetes, enabled by default in 1.17+. The GCP Filestore Snapshot alpha is not currently supported, but will be in the future via the type parameter in the VolumeSnapshotClass. For more details see the user guide here.

  • Volume Restore: The CSI driver supports out-of-place restore of a new GCP Filestore instance from a given GCP Filestore Backup. See the user guide restore steps here and the GCP Filestore Backup restore documentation here. This feature needs Kubernetes 1.17+.

  • Pre-provisioned Filestore instance: Pre-provisioned Filestore instances can be leveraged and consumed by workloads by mapping a given Filestore instance to a PersistentVolume and PersistentVolumeClaim (see the PersistentVolume sketch after this list). See the user guide here and the Filestore documentation here.

  • FsGroup: CSIVolumeFSGroupPolicy is a Kubernetes feature, beta in 1.20, which allows CSI drivers to opt into fsGroup policies. The stable-master overlay of the Filestore CSI driver now supports this. See the user guide here on how to apply fsGroup to volumes backed by Filestore instances. For a workaround to apply fsGroup on 1.19 clusters (with the CSIVolumeFSGroupPolicy feature gate disabled), and on clusters <= 1.18, see the user guide here.

  • Resource Tags: Filestore supports resource tags for instance and backup resources, which are a map of key-value pairs. The Filestore CSI driver enables user-defined tags to be attached to instance and backup resources created by the driver. Users can provide resource tags by using the resource-tags key in StorageClass.parameters or the --resource-tags command line option (see the StorageClass sketch after this list). Tags should be defined as comma-separated values of the form <parent_id>/<tagKey_shortname>/<tagValue_shortname>, where parent_id is the ID of the Organization or Project resource where the tag key and tag value resources exist, tagKey_shortname is the short name of the tag key resource, and tagValue_shortname is the short name of the tag value resource. A maximum of 50 tags can be attached per resource. See https://cloud.google.com/resource-manager/docs/tags/tags-creating-and-managing for more details. Please see the storage class example to define resource tags to be attached to the Filestore instance resources.
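
As referenced in the Labels and Resource Tags items above, here is a sketch of a StorageClass passing both keys. The 'labels' and 'resource-tags' parameter names come from those items; the values shown are placeholders, and the exact value syntax (shown here as comma-separated entries) is an assumption, so follow the linked storage class examples for the authoritative format.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-labeled                              # placeholder name
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard
  labels: key1=value1,key2=value2                      # assumed comma-separated key=value syntax; placeholder values
  resource-tags: parent_id/tagKey_short/tagValue_short # <parent_id>/<tagKey_shortname>/<tagValue_shortname>, placeholders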
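
And for the pre-provisioned instance item above, a minimal PersistentVolume/PersistentVolumeClaim sketch. The zone, instance name, share, IP, and capacity are placeholders; the volumeHandle follows the modeInstance/<zone>/<instance>/<share> pattern visible in the driver logs elsewhere on this page, and the linked user guide remains the authoritative reference.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: preprov-filestore-pv                   # placeholder name
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: filestore.csi.storage.gke.io
    volumeHandle: modeInstance/us-central1-c/my-instance/my-share   # placeholder zone/instance/share
    volumeAttributes:
      ip: 10.0.0.2                             # placeholder Filestore instance IP
      volume: my-share                         # placeholder file share name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preprov-filestore-pvc                  # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                         # empty class so the claim binds to the pre-created PV
  volumeName: preprov-filestore-pv
  resources:
    requests:
      storage: 1Ti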

Future Features

  • Non-root access: By default, GCFS instances are only writable by the root user and readable by all users. Provide a CreateVolume parameter to set non-root owners.
  • Subdirectory provisioning: Given an existing Cloud Filestore instance, provision a subdirectory as a volume. This provisioning mode does not provide capacity isolation. Quota support needs investigation. For now, the nfs-client external provisioner can be used to provide similar functionality for Kubernetes clusters.
  • Windows support: the current version of the driver supports volumes mounted to Linux nodes only.

Deploying the Driver

  • Clone the repository in Cloud Shell using the following commands:
mkdir -p $GOPATH/src/sigs.k8s.io
cd $GOPATH/src/sigs.k8s.io
git clone https://github.com/kubernetes-sigs/gcp-filestore-csi-driver.git
  • Set up a service account with appropriate role bindings, and download a service account key. This service account will be used by the driver to provision Filestore instances and otherwise access GCP APIs. This can be done by running ./deploy/project_setup.sh and pointing it to a directory to store the SA key. To prevent your key from leaking, do not make this directory publicly accessible!
$ PROJECT=<your-gcp-project> GCFS_SA_DIR=<your-directory-to-store-credentials-by-default-home-dir> ./deploy/project_setup.sh
  • Choose a stable overlay that matches your cluster version, e.g. stable-1-19. If you are running a more recent cluster version than those given here, use stable-master. The prow-* overlays are for testing, and the dev overlay is for driver development. ./deploy/kubernetes/cluster_setup.sh will install the driver pods, as well as the necessary RBAC and resources.

  • If deploying new changes from the master branch, update the overlay file with a new custom tag to identify the image:

apiVersion: builtin
kind: ImageTagTransformer
metadata:
  name: imagetag-gce-fs-driver
imageTag:
  name: k8s.gcr.io/cloud-provider-gcp/gcp-filestore-csi-driver
  newName: gcr.io/<your-project>/gcp-filestore-csi-driver # Add newName
  newTag: "<your-custom-tag>" # Change to your custom tag

Build and push the image if deploying new changes from the master branch:

GCP_FS_CSI_STAGING_VERSION=<your-custom-tag> GCP_FS_CSI_STAGING_IMAGE=gcr.io/<your-project>/gcp-filestore-csi-driver make build-image-and-push

Once the image is pushed, it can be verified by visiting https://pantheon.corp.google.com/gcr/images/<your-project>/global/gcp-filestore-csi-driver

$ PROJECT=<your-gcp-project> DEPLOY_VERSION=<your-overlay-choice> GCFS_SA_DIR=<your-directory-to-store-credentials-by-default-home-dir> ./deploy/kubernetes/cluster_setup.sh

After this, the driver can be used. See ./docs/kubernetes for further instructions and examples.

  • For cleanup of the driver, run the following:
$ PROJECT=<your-gcp-project> DEPLOY_VERSION=<your-overlay-choice> ./deploy/kubernetes/cluster_cleanup.sh

Kubernetes Development

  • Set up a service account. Most development uses the dev overlay, where a service account key is not needed. Otherwise use GCFS_SA_DIR as described above.
$ PROJECT=<your-gcp-project> DEPLOY_VERSION=dev ./deploy/project_setup.sh
  • To build the latest Filestore CSI driver image and push it to a container registry:
$ PROJECT=<your-gcp-project> make build-image-and-push
  • The base manifests, like the core driver manifests and RBAC role bindings, are listed under here. The overlays (e.g. prow-gke-release-staging-head, prow-gke-release-staging-rc-{k8s version}, stable-{k8s version}, dev) listed under deploy/kubernetes/overlays apply transformations on top of the base manifests.

  • The 'dev' overlay uses the default service account for communicating with GCP services. The https://www.googleapis.com/auth/cloud-platform scope allows full access to all Google Cloud APIs, and the given node scope will allow any pod to reach GCP services as the provided service account, so it should only be used for testing and development, not production clusters. cluster_setup.sh installs kustomize, creates the driver manifests package, and deploys it to the cluster. Bring up a GCE cluster with the following:

$ NODE_SCOPES=https://www.googleapis.com/auth/cloud-platform KUBE_GCE_NODE_SERVICE_ACCOUNT=<SERVICE_ACCOUNT_NAME>@$PROJECT.iam.gserviceaccount.com kubetest --up
  • Deploy the driver.
$ PROJECT=<your-gcp-project> DEPLOY_VERSION=dev ./deploy/kubernetes/cluster_setup.sh

Gcloud Application Default Credentials and scopes

See here, here and here

Filestore IAM roles and permissions

See here

Driver Release [Google internal only]

gcp-filestore-csi-driver's People

Contributors

ajayk, alexiondev, amacaskill, annapendleton, axisofentropy, bharath-b-rh, bishal7679, dannawang0221, dependabot[bot], hsadoyan, ikarldasan, judemars, k8s-ci-robot, kenthua, krunaljain, leiyiz, mattcary, msau42, pshahonica, pwschuurman, romanbednar, saikat-royc, sivanzcw, sneha-at, songjiaxun, sunnylovestiramisu, tsmetana, tyuchn, vishnukarthikl, yashsingh74


gcp-filestore-csi-driver's Issues

Add labels from command line option to filestore backup resource

/kind feature

Currently the filestore backup resource is created with the labels configured in the StorageClass object, and the labels from the command line option --extra-labels are not included.
Labels from the command line will be given lower precedence and will be overwritten by labels defined by the driver and labels in the StorageClass object, in that order.

Describe the solution you'd like
To add labels from command line option to the created filestore backup resources.

Filestore CSI driver doesn't work on Shared VPC

From documentation

Filestore instances on the shared VPC network cannot be created from service projects
If you attempt to create a Filestore instance from a service project that is attached to a shared VPC host project, the shared VPC network is not listed under Authorized network. Similarly, attempting to create the instance using gcloud or the REST API results in the following error:
ERROR: (gcloud.filestore.instances.create) INVALID_ARGUMENT: network '[SHARED_VPC_NETWORK]' does not exist.
Workaround
You can create Filestore instances from the host project with the shared VPC as the authorized network. Once created, clients in any service project can mount the instance as usual.
The caveats to this workaround include:
The host project must be involved in the creation of Filestore instances.
Costs for Filestore instances are charged to the host project instead of the service projects that use them.

E0604 18:10:37.941382       1 utils.go:55] GRPC error: rpc error: code = Internal desc = CreateInstance operation failed: googleapi: Error 400: network 'shared-vpc' does not exist., badRequest

So the question is: how can I run the CSI controller on GKE in project Y (which is the service project) with a shared VPC in project X (the host project)?
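
Not an authoritative answer, but based on the connect-mode and network parameters documented in the README above, the intended path is a StorageClass along these lines; the host project, VPC, and reserved range names are placeholders.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-shared-vpc                   # placeholder name
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard
  connect-mode: PRIVATE_SERVICE_ACCESS
  network: projects/<host-project>/global/networks/<shared-vpc>   # the full VPC name is required with PRIVATE_SERVICE_ACCESS
  reserved-ip-range: <allocated-psa-range-name>                   # an allocated IP address range, not a CIDR
volumeBindingMode: Immediate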

nfs-services errors on COS

When deploying, I have seen the following error from the nfs-services container on COS nodes. It seems to be benign (everything seems to work), but it causes confusion and doubt.

Starting RPC port mapper daemon: rpcbind.
Statd service already running!
Starting NFS common utilities: statd failed!
~/go/src/sigs.k8s.io/gcp-filestore-csi-driver$ kubectl logs gcp-filestore-csi-node-qzh4k -n gcp-filestore-csi-driver -c nfs-services
Starting RPC port mapper daemon: rpcbind.
/etc/init.d/nfs-common: line 101: mountpoint: command not found
mount: rpc_pipefs is write-protected, mounting read-only
mount: cannot mount rpc_pipefs read-only
Starting NFS common utilities: statd.

project_setup.sh unbound variable

Command and error message

./deploy/project_setup.sh

+ set -o nounset
+ set -o errexit
++ dirname ./deploy/project_setup.sh
+ mydir=./deploy
+ source ./deploy/common.sh
++ set -o nounset
++ set -o errexit
++ GCFS_SA_DIR=/Users/ericlugo
++ GCFS_SA_FILE=/Users/ericlugo/gcp_filestore_csi_driver_sa.json
++ GCFS_SA_NAME=gcp-filestore-csi-driver-sa
++ GCFS_NS=gcp-filestore-csi-driver
./deploy/project_setup.sh: line 10: PROJECT: unbound variable

Fix:
Add PROJECT=$(gcloud info --format='value(config.project)') within the project_setup.sh script,
or add clearer documentation to set the $PROJECT variable before running the script.

Add Kubernetes e2e

So that the Dockerfile and Kubernetes specs are also tested. Maybe minikube or a local cluster could work.

Support to add ResourceManagerTags to GCP Filestore resources

/kind feature

GCP Tags are key-value pairs that are bound to GCP resources. Unlike the currently supported labels, tags are not part of the resource metadata but are resources in themselves. Tag keys, tag values, and tag bindings are all discrete resources. Tags are used for defining IAM policy conditions, organization conditional policies, and integration with Cloud Billing for cost management, none of which are supported by labels.

Describe the solution you'd like
Be able to define the list of resource tags to add to the Filestore resources created by the driver.

Anything else you would like to add:
Currently, tag key and tag value resources can only be created at the Organization and Project level with the required permissions. Tag keys and tag values will be created by the user; only the tag bindings to the filestore resources are to be created by the driver, which would require (propose) the below changes:

  1. Driver to accept a new argument (--extra-tags) for the list of tags, where tags would be provided as CSV of the form <parent_id>/<key1>/<value1>,...,<parent_id>/<keyN>/<valueN>, where
    parent_id : Organization or Project ID where the tag key and tag value resources exist.
    key : Tag key short name.
    value : Tag value short name.
    N : A maximum of 50 tag bindings is allowed per resource.

  2. The below resources created by the driver need to be updated with tags:

  • Filestore Instance
  • Filestore Backup

Reference Links:

Dynamic PersistentVolume from pre-provisioned StorageClass

The documentation for pre-provisioning is confusing. First, it states that it's necessary to create a StorageClass, and then it creates a PersistentVolume without referencing the StorageClass; rather, it references the Filestore instance directly.

Is it possible to create the PersistentVolume dynamically using a PersistentVolumeClaim and a StorageClass, without referencing the Filestore instance?
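
For contrast, a minimal dynamic-provisioning sketch in which nothing references a Filestore instance: a StorageClass for the driver plus a PVC that uses it. Names and sizes are placeholders.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-dynamic                      # placeholder name
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc                            # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: filestore-dynamic
  resources:
    requests:
      storage: 1Ti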

Multiple pre-provisioned PVs on the same node

I am trying to reuse the same pre-provisioned Filestore instance and share across multiple namespaces within the same cluster. For each namespace, I create a PV and PVC that points to the same IP address and handle. All of the PVCs and PVs bind with no issue, but when a pod is scheduled to a node that already has the NFS mounted from a different namespace, I get an error like this:

MountVolume.SetUp failed for volume "namespace3-shared-volume" : rpc error: code = Internal desc = mount "/var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount" failed: mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/namespace3-shared-volume/globalmount /var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount Output: mount: /var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount: special device /var/lib/kubelet/plugins/kubernetes.io/csi/pv/namespace3-shared-volume/globalmount does not exist.

These correspond to the CSI driver logs:

I0403 16:05:31.158128       1 utils.go:55] GRPC call: /csi.v1.Node/NodePublishVolume
I0403 16:05:31.158284       1 utils.go:56] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/namespace3-shared-volume/globalmount","target_path":"/var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"redacted","csi.storage.k8s.io/pod.namespace":"namespace3","csi.storage.k8s.io/pod.uid":"8b55fb58-037b-4d06-a658-22516200bb73","csi.storage.k8s.io/serviceAccount.name":"redacted","ip":"10.X.X.X","volume":"share1"},"volume_id":"modeInstance/us-central1-c/filestore-instance-name/share1"}
E0403 16:05:31.160819       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/namespace3-shared-volume/globalmount /var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount
Output: mount: /var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount: special device /var/lib/kubelet/plugins/kubernetes.io/csi/pv/namespace3-shared-volume/globalmount does not exist.

E0403 16:05:31.160861       1 node.go:135] Mount "/var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount" failed, cleaning up
W0403 16:05:31.160906       1 mount_helper_common.go:34] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/namespace3-shared-volume/globalmount
E0403 16:05:31.160935       1 utils.go:59] GRPC error: rpc error: code = Internal desc = mount "/var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount" failed: mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/namespace3-shared-volume/globalmount /var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount
Output: mount: /var/lib/kubelet/pods/8b55fb58-037b-4d06-a658-22516200bb73/volumes/kubernetes.io~csi/namespace3-shared-volume/mount: special device /var/lib/kubelet/plugins/kubernetes.io/csi/pv/namespace3-shared-volume/globalmount does not exist.

When the pods get scheduled on different nodes, the shared volume works as expected.

Is this the expected behavior? Are there any alternatives or workarounds that I should consider?

README links to an archived repository

The Future Features section of the README links to an archived NFS provisioner repository. Should it link to https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner instead?

isolation. Quota support needs investigation. For now, the
[nfs-client](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client)
external provisioner can be used to provide similar functionality for
Kubernetes clusters.

Handle NodePublish in compliance with CSI spec

Reference: kubernetes/kubernetes#75535

During NodePublish, Kubernetes today creates the target path (e.g. /var/lib/kubelet/pods//volumes/kubernetes.io~csi//mount).
This behavior will go away in a future Kubernetes version, where k8s would only create the parent dir /var/lib/kubelet/pods//volumes/kubernetes.io~csi/

It is the driver's responsibility to ensure the target path is created (and deleted during node unpublish).
Update the current node publish code to check for the existence of the target path and create one if needed (for Linux), before proceeding to the mount call.

Use external provisioner 2.0.x which provides a default-fstype flag

Use the external provisioner change which provides a default-fstype flag. For the Filestore driver this should be set to 'nfs'.

Also add a stricter check in ValidateVolumeCapabilities for fstype == 'nfs'.
https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/blob/master/pkg/csi_driver/gcfs_driver.go#L154

The current implementation ignores the fstype passed by the CSI calls and uses a hard-coded nfs type in the mount operation.
https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/blob/master/pkg/csi_driver/node.go#L216

fsGroup does not change permissions and ownership of the volume

Hello 👋

I created a Pod with fsGroup as documented, but the root group remains the owner of the files.

Kubernetes uses fsGroup to change permissions and ownership of the volume to match user requested fsGroup in the pod's [SecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod). Kubernetes feature `CSIVolumeFSGroupPolicy` is a beta feature in K8s 1.20+ by which CSI drivers can explicitly declare support for fsgroup. Read more about `CSIVolumeFSGroupPolicy` [here](https://kubernetes-csi.github.io/docs/csi-driver-object.html) and [here](https://kubernetes-csi.github.io/docs/support-fsgroup.html).

Are there any other requirements not documented?

Version: v1.5.8-gke.2
GKE version: v1.25.13-gke.200
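
For reference, the fsGroup in question is set in the Pod securityContext roughly like this; a minimal sketch in which the image and claim names are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-test                           # placeholder name
spec:
  securityContext:
    fsGroup: 2000                              # group expected to own the files on the mounted volume
  containers:
    - name: app
      image: nginx                             # placeholder image
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-filestore-pvc            # placeholder claim backed by the Filestore CSI driver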

Add ability to manage IP based access

Hello,

I totally love the idea of dynamic NFS volume provisioning; however, in the case of a shared VPC the instance becomes open to the whole network, which in our case means a lot of different projects. It would have been fantastic if it were possible to specify a list of networks that should have access to the file share instance as part of the storage class definition. Something like:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-rwx-vpc
allowVolumeExpansion: true
parameters:
  connect-mode: PRIVATE_SERVICE_ACCESS
  network: projects/<project>/global/networks/<vpc>
  reserved-ip-range: c0-private-services-access-1
  tier: standard
  nfs-export-options:
    - access-mode: "READ_WRITE"
      ip-ranges:
        - "10.0.0.0/29"
        - "10.2.0.0/29"
      squash-mode: "ROOT_SQUASH"
      anon_uid: 1003
      anon_gid: 1003
provisioner: filestore.csi.storage.gke.io
reclaimPolicy: Delete
volumeBindingMode: Immediate

Support for labels in Filestore driver

  1. Driver should accept user provided labels for filestore instance
  2. Driver should accept PV/PVC metadata labels provided by external provisioner sidecar and plumb it to the filestore instance

Switch from k8s.gcr.io to registry.k8s.io

From kubernetes-announce:
"On the 3rd of April 2023, the old registry k8s.gcr.io will be frozen and no further images for Kubernetes and related subprojects will be pushed to the old registry.

This registry registry.k8s.io replaced the old one and has been generally available for several months. We have published a blog post about its benefits to the community and the Kubernetes project."

Fix gofmt error

log

Error:

I1108 20:26:13.379] Initialized empty Git repository in /go/src/sigs.k8s.io/gcp-filestore-csi-driver/.git/
I1108 20:26:13.380] process 37 exited with code 0 after 0.0m
I1108 20:26:13.380] Call:  git clean -dfx
I1108 20:26:13.384] process 38 exited with code 0 after 0.0m
I1108 20:26:13.385] Call:  git reset --hard
I1108 20:26:13.391] process 39 exited with code 0 after 0.0m
I1108 20:26:13.392] Call:  git config --local user.name 'K8S Bootstrap'
I1108 20:26:13.398] process 40 exited with code 0 after 0.0m
I1108 20:26:13.398] Call:  git config --local user.email k8s_bootstrap@localhost
I1108 20:26:13.402] process 41 exited with code 0 after 0.0m
I1108 20:26:13.403] Call:  git fetch --quiet --tags https://github.com/kubernetes-sigs/gcp-filestore-csi-driver master +refs/pull/378/head:refs/pr/378
I1108 20:26:15.539] process 42 exited with code 0 after 0.0m
I1108 20:26:15.540] Call:  git checkout -B test 7f9927c5d280fafb9bb909f36ccd72f1f5bfbba0
W1108 20:26:16.325] Switched to a new branch 'test'
I1108 20:26:16.329] process 54 exited with code 0 after 0.0m
I1108 20:26:16.330] Call:  git show -s --format=format:%ct HEAD
I1108 20:26:16.334] process 55 exited with code 0 after 0.0m
I1108 20:26:16.335] Call:  git merge --no-ff -m 'Merge +refs/pull/378/head:refs/pr/378' b47451d8d3a96a44da53fa79106d22c2e0f27c18
I1108 20:26:16.605] Merge made by the 'recursive' strategy.
I1108 20:26:16.609]  README.md                                         | 3 ++-
I1108 20:26:16.609]  deploy/kubernetes/images/stable-master/image.yaml | 2 +-
I1108 20:26:16.610]  2 files changed, 3 insertions(+), 2 deletions(-)
I1108 20:26:16.610] process 56 exited with code 0 after 0.0m
I1108 20:26:16.610] Configure environment...
I1108 20:26:16.610] Call:  git show -s --format=format:%ct HEAD
I1108 20:26:16.615] process 72 exited with code 0 after 0.0m
I1108 20:26:16.616] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1108 20:26:17.781] Activated service account credentials for: [[email protected]]
I1108 20:26:18.116] process 73 exited with code 0 after 0.0m
I1108 20:26:18.116] Call:  gcloud config get-value account
I1108 20:26:19.396] process 82 exited with code 0 after 0.0m
I1108 20:26:19.396] Will upload results to gs://kubernetes-jenkins/pr-logs using [email protected]
I1108 20:26:19.397] Start 1590077971178196992 at unknown...
I1108 20:26:19.403] Call:  gsutil -q -h Content-Type:application/json cp /tmp/gsutil_ITby2C gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_gcp-filestore-csi-driver/378/pull-gcp-filestore-csi-driver-verify/1590077971178196992/started.json
I1108 20:26:22.419] process 91 exited with code 0 after 0.1m
I1108 20:26:22.420] Call:  gsutil -q -h Content-Type:text/plain -h 'x-goog-meta-link: gs://kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_gcp-filestore-csi-driver/378/pull-gcp-filestore-csi-driver-verify/1590077971178196992' cp /tmp/gsutil_0V8Zcg gs://kubernetes-jenkins/pr-logs/directory/pull-gcp-filestore-csi-driver-verify/1590077971178196992.txt
I1108 20:26:25.249] process 256 exited with code 0 after 0.0m
I1108 20:26:25.250] Call:  /workspace/./test-infra/jenkins/../scenarios/execute.py hack/verify_all.sh
W1108 20:26:25.276] Run: ('hack/verify_all.sh',)
I1108 20:26:25.377] Verifying gofmt
I1108 20:26:25.410] diff ./pkg/csi_driver/multishare_ops_manager.go.orig ./pkg/csi_driver/multishare_ops_manager.go
I1108 20:26:25.410] --- ./pkg/csi_driver/multishare_ops_manager.go.orig
I1108 20:26:25.411] +++ ./pkg/csi_driver/multishare_ops_manager.go
I1108 20:26:25.411] @@ -679,21 +679,22 @@
I1108 20:26:25.411]  
I1108 20:26:25.411]  // A source instance will be considered as "matched" with the target instance
I1108 20:26:25.411]  // if and only if the following requirements were met:
I1108 20:26:25.411] -// 1. Both source and target instance should have a label with key
I1108 20:26:25.411] -//    "storage_gke_io_storage-class-id", and the value should be the same.
I1108 20:26:25.411] -// 2. (Check if exists) The ip address of the target instance should be
I1108 20:26:25.411] -//    within the ip range specified in "reserved-ipv4-cidr".
I1108 20:26:25.412] -// 3. (Check if exists) The ip address of the target instance should be
I1108 20:26:25.412] -//    within the ip range specified in "reserved-ip-range".
I1108 20:26:25.412] -// 4. Both source and target instance should be in the same location.
I1108 20:26:25.412] -// 5. Both source and target instance should be under the same tier.
I1108 20:26:25.412] -// 6. Both source and target instance should be in the same VPC network.
I1108 20:26:25.412] +//  1. Both source and target instance should have a label with key
I1108 20:26:25.413] +//     "storage_gke_io_storage-class-id", and the value should be the same.
I1108 20:26:25.413] +//  2. (Check if exists) The ip address of the target instance should be
I1108 20:26:25.413] +//     within the ip range specified in "reserved-ipv4-cidr".
I1108 20:26:25.413] +//  3. (Check if exists) The ip address of the target instance should be
I1108 20:26:25.413] +//     within the ip range specified in "reserved-ip-range".
I1108 20:26:25.414] +//  4. Both source and target instance should be in the same location.
I1108 20:26:25.414] +//  5. Both source and target instance should be under the same tier.
I1108 20:26:25.414] +//  6. Both source and target instance should be in the same VPC network.
I1108 20:26:25.414] +//
I1108 20:26:25.414]  // 7, Both source and target instance should have the same connect mode.
I1108 20:26:25.414] -// 8. Both source and target instance should have the same KmsKeyName.
I1108 20:26:25.415] -// 9. Both source and target instance should have a label with key
I1108 20:26:25.415] -//    "gke_cluster_location", and the value should be the same.
I1108 20:26:25.415] -// 10. Both source and target instance should have a label with key
I1108 20:26:25.415] -//    "gke_cluster_name", and the value should be the same.
I1108 20:26:25.415] +//  8. Both source and target instance should have the same KmsKeyName.
I1108 20:26:25.415] +//  9. Both source and target instance should have a label with key
I1108 20:26:25.415] +//     "gke_cluster_location", and the value should be the same.
I1108 20:26:25.416] +//  10. Both source and target instance should have a label with key
I1108 20:26:25.416] +//     "gke_cluster_name", and the value should be the same.
I1108 20:26:25.416]  func isMatchedInstance(source, target *file.MultishareInstance, req *csi.CreateVolumeRequest) (bool, error) {
I1108 20:26:25.416]  	matchLabels := [3]string{util.ParamMultishareInstanceScLabelKey, tagKeyClusterLocation, tagKeyClusterName}
I1108 20:26:25.416]  	for _, labelKey := range matchLabels {
I1108 20:26:25.416] diff ./test/k8s-integration/version.go.orig ./test/k8s-integration/version.go
I1108 20:26:25.417] --- ./test/k8s-integration/version.go.orig
I1108 20:26:25.417] +++ ./test/k8s-integration/version.go
I1108 20:26:25.417] @@ -115,9 +115,10 @@
I1108 20:26:25.417]  }
I1108 20:26:25.417]  
I1108 20:26:25.417]  // Helper function to compare versions.
I1108 20:26:25.417] -//  -1 -- if left  < right
I1108 20:26:25.417] -//   0 -- if left == right
I1108 20:26:25.417] -//   1 -- if left  > right
I1108 20:26:25.417] +//
I1108 20:26:25.417] +//	-1 -- if left  < right
I1108 20:26:25.417] +//	 0 -- if left == right
I1108 20:26:25.418] +//	 1 -- if left  > right
I1108 20:26:25.418]  func (v *version) compare(right *version) int {
I1108 20:26:25.418]  	for i, b := range v.version {
I1108 20:26:25.418]  		if b > right.version[i] {
I1108 20:26:25.418] 
I1108 20:26:25.418] Please run hack/update-gofmt.sh
W1108 20:26:25.418] Traceback (most recent call last):
W1108 20:26:25.418]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 50, in <module>
W1108 20:26:25.418]     main(ARGS.env, ARGS.cmd + ARGS.args)
W1108 20:26:25.419]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 41, in main
W1108 20:26:25.419]     check(*cmd)
W1108 20:26:25.419]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W1108 20:26:25.419]     subprocess.check_call(cmd)
W1108 20:26:25.419]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W1108 20:26:25.419]     raise CalledProcessError(retcode, cmd)

open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

After deploying on a GKE (1.16.8-gke.15) cluster using terraform-google-modules/kubernetes-engine/google//modules/beta-public-cluster, I get this error:
kubectl -n gcp-filestore-csi-driver logs gcp-filestore-csi-controller-0 csi-external-attacher
I0529 13:23:03.802819 1 main.go:77] Version: v0.4.1-0-ga3fb29c
E0529 13:23:03.803086 1 main.go:82] failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
and
kubectl -n gcp-filestore-csi-driver logs gcp-filestore-csi-node-hxdmz csi-driver-registrar
Version: v0.4.1-0-gb3ef1f6
(...)
main.go:168] failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

New release required

Post-May, there are some important changes merged to master, like the two below:
Upgrade the driver to use CSI 1.3.x spec
PR for csi 1.3.0
Can you please release a new build for us from the master branch?
Thanks
Shankar KC

kubetest --up step is failing with base64 error when deploying with multishare overlay

Hi,
I am trying to set up a multishare Filestore storage class on a Gardener cluster deployed on GCP.
When I use the stable overlays, Filestore works, except for multishare, which seems to be available only in the multishare overlay.
And the multishare overlay doesn't work with "Deploying the Driver"; based on this, I need to use "Kubernetes Development" for it.
So I went to that section and started following the steps.
But I get the below error when I run the following command (the project placeholder was replaced with the correct project name before execution):

NODE_SCOPES=https://www.googleapis.com/auth/cloud-platform KUBE_GCE_NODE_SERVICE_ACCOUNT=gcp-filestore-csi-driver-sa@.iam.gserviceaccount.com kubetest --up

Output:
.....

...
Generating certs for alternate-names: IP:34.30.34.12,IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:e2e-test-i443733-master
base64: invalid argument /var/folders/52/l0d4zy4s7z72y62704y00x840000gn/T/tmp.h5bNLsa8/easy-rsa-master/easyrsa3/pki/private/ca.key
Usage: base64 [-hDd] [-b num] [-i in_file] [-o out_file]
-h, --help display this message
-Dd, --decode decodes input
-b, --break break encoded string into num character lines
-i, --input input file (default: "-" for stdin)
-o, --output output file (default: "-" for stdout)
2023/09/07 16:07:49 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 2m52.770690236s
2023/09/07 16:07:49 main.go:324: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 64

Any idea what is causing this error? Any pointer to the code that is calling base64 in this scenario?
Or even better, is there any easier way to deploy the driver for the enterprise multishare storage class?

Add details of SA key expiry and rotation

If the GCP SA token expires or otherwise needs to be rotated, the corresponding secret can be updated and the driver should pick that up automatically.

This needs to be confirmed and the documentation updated to make this clear.

/assign @mattcary

Filestore driver sends error while creating snapshot

The Filestore driver sends an error while creating a snapshot. Now, I am not sure whether this is expected behaviour or not. While polling for the VolumeSnapshot status using the dynamic Kube client, it gets confusing whether to consider it an error or not. The VolumeSnapshot eventually sets the ReadyToUse flag to true. If the VolumeSnapshot is getting created, is it necessary to send an error stating the current state is CREATING? I have shared sample logs and the YAML files I have used.

Deployment with Filestore CSI PVC.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: web-server
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: pvc-1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti

Sample VolumeSnapshot yaml -

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  volumeSnapshotClassName: csi-gcp-filestore-backup-snap-class
  source:
    persistentVolumeClaimName: pvc-1

VolumeSnapshotClass yaml -

apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Delete
driver: filestore.csi.storage.gke.io
kind: VolumeSnapshotClass
metadata:
  annotations:
  creationTimestamp: "2022-02-10T09:57:33Z"
  generation: 1
  name: csi-gcp-filestore-backup-snap-class
  resourceVersion: "104941"
  uid: b50cb0b0-c2ea-4c48-bb50-b8167b4fba4a
parameters:
  type: backup

Error I found in controller pod -

E0221 13:46:40.708660       1 snapshot_controller.go:122] checkandUpdateContentStatus [snapcontent-a0e5a9d7-dbaa-48cb-b560-9939e360e6b9]: error occurred failed to take snapshot of the volume modeInstance/us-west1-c/pvc-a99c1776-34dd-4691-a415-18c7c3b1bea4/vol1: "rpc error: code = DeadlineExceeded desc = Backup <redacted> not yet ready, current state CREATING"


E0221 13:46:56.127621       1 snapshot_controller.go:122] checkandUpdateContentStatus [snapcontent-a0e5a9d7-dbaa-48cb-b560-9939e360e6b9]: error occurred failed to take snapshot of the volume modeInstance/us-west1-c/pvc-a99c1776-34dd-4691-a415-18c7c3b1bea4/vol1: "rpc error: code = DeadlineExceeded desc = Backup <redacted> not yet ready, current state FINALIZING"

I haven't shared all the logs, just a few as a sample.

Please let me know if more information is needed.

Take advantage of `csi_secret` in CSI 1.0

CSI 1.0 decorates sensitive fields with csi_secret. Let's take advantage of this feature to programmatically ensure no sensitive fields are ever logged by this driver.
