purestorage / helm-charts
Pure Storage Helm Charts
License: Apache License 2.0
With this PR merged, it becomes possible for the Pure CSI driver to make use of the PVC name and namespace and the PV name.
Currently, the generated volume names follow the format k8s-{pvc-name}, which makes it difficult to tell at a glance which PVC maps to which volume.
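For context, the standard mechanism most CSI drivers use for this is the external-provisioner sidecar's --extra-create-metadata flag, which adds the PVC and PV names as CreateVolume parameters; whether this PR relies on that exact mechanism is an assumption. A rough sketch of the sidecar args (image and tag are illustrative):
  # With this flag set, the driver receives these parameters and could fold them into volume names:
  #   csi.storage.k8s.io/pvc/name, csi.storage.k8s.io/pvc/namespace, csi.storage.k8s.io/pv/name
  containers:
    - name: csi-provisioner
      image: quay.io/k8scsi/csi-provisioner:v1.6.0
      args:
        - "--csi-address=$(ADDRESS)"
        - "--extra-create-metadata=true"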
We recently upgraded to k8s 1.20. I am having trouble creating VolumeSnapshots. Here is the error:
spec:
  snapshotClassName: pure-snapshotclass
  snapshotContentName: ""
  source:
    apiGroup: null
    kind: PersistentVolumeClaim
    name: pvc-nginx
status:
  creationTime: "2021-04-05T13:22:48Z"
  error:
    message: 'Failed to create snapshot: selfLink was empty, can''t make reference'
    time: "2021-04-05T13:23:11Z"
  readyToUse: false
  restoreSize: null
I am using snapshot.storage.k8s.io/v1alpha1, which is still included in k8s 1.20.
$ kubectl api-versions | grep snap
snapshot.storage.k8s.io/v1alpha1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-31T15:33:39Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-31T16:42:32Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
We are using: image: purestorage/k8s:5.2.0
It looks like this issue is referenced here, but I'm not sure how that helps us. Is an upgrade to PSO 6.x required?
kubernetes/kubernetes#94660
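For reference, a hedged sketch of what the same snapshot would look like on the beta/GA snapshot API (available after moving to PSO 6.x and the external snapshot controller); the class name is carried over from above, and which apiVersion (v1beta1 vs v1) is supported depends on the controller version:
  apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    name: pvc-nginx-snap
  spec:
    volumeSnapshotClassName: pure-snapshotclass
    source:
      persistentVolumeClaimName: pvc-nginx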
Receiving the following error when I deploy or upgrade to v1.0.6 via Rancher.
Wait helm template failed. 2020/02/11 12:28:55 warning: skipped value for arrays: Not a table. Error: parse error in "pure-csi/templates/node.yaml": template: pure-csi/templates/node.yaml:141: unexpected "=" in operand : exit status 1
When I try to use the 2.2.1 helm chart that defaults to the 2.2.1 image I get the following errors from the pure-provisioner
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=debug msg="Initializing Mount config, conf = '<nil>'"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=debug msg="Done initializing config, conf = '&{DefaultBlkFSType:xfs DefaultBlkFSOpts:-q DefaultBlkMountOpts:}'"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=debug msg="Initializing discovery config, conf = '<nil>'"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=debug msg="Done initializing discovery config, conf = '&{DiscoveryType:json Pure1APIHost:localhost:8080}'"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=info msg="namespace = k8s"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=info msg="Starting pureProvisioner" provisionerName=pure-provisioner serverVersion=v1.12.3
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=info msg="Provision volume" id="{k8s pvc-f3052460-1ea0-11e9-bef5-005056975897}" spec="{map[size:107374182400 volume_label_selector:purestorage.com/backend=block]}"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=debug msg="No backends discovered, unable to schedule volume." spec="{map[size:107374182400 volume_label_selector:purestorage.com/backend=block]}"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=info msg="Provisioned volume for PVC" annotations="map[]" backend=block error="no storage backends were able to meet the request specification" k8sNamespace=monitoring pvname=pvc-f3052460-1ea0-11e9-bef5-005056975897 size="{{0 0} {0xc420417590} 100Gi BinarySI}" volume="<nil>"
pure-provisioner-85b786974b-w4n74 pure-provisioner E0123 00:08:56.597730 1 controller.go:672] Failed to provision volume for claim "monitoring/prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0" with StorageClass "pure": no storage backends were able to meet the request specification
pure-provisioner-85b786974b-w4n74 pure-provisioner E0123 00:08:56.598362 1 goroutinemap.go:151] operation for "provision-monitoring/prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0[f3052460-1ea0-11e9-bef5-005056975897]" failed with: no storage backends were able to meet the request specification
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=info msg="Provision volume" id="{k8s pvc-ef113003-1ea0-11e9-bef5-005056975897}" spec="{map[volume_label_selector:purestorage.com/backend=block size:53687091200]}"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=debug msg="No backends discovered, unable to schedule volume." spec="{map[size:53687091200 volume_label_selector:purestorage.com/backend=block]}"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=info msg="Provisioned volume for PVC" annotations="map[]" backend=block error="no storage backends were able to meet the request specification" k8sNamespace=monitoring pvname=pvc-ef113003-1ea0-11e9-bef5-005056975897 size="{{0 0} {0xc42014c3f0} 50Gi BinarySI}" volume="<nil>"
The values.yaml file I was using looks something like
orchestrator:
  k8s:
    flexPath: /var/lib/kubelet/volumeplugins
arrays:
  FlashBlades:
    - MgmtEndPoint: "10.123.456.140"
      APIToken: "T-obfuscat-e989-4ade-beaa-obfuscated0e"
      NfsEndPoint: "d-gp2-nas-1.domain.com"
storageclass:
  isPureDefault: true
  pureBackend: file
I resolved it by using the previous image in the values.yaml
image:
  name: purestorage/k8s
  tag: 2.1.2
  pullPolicy: IfNotPresent
orchestrator:
  k8s:
    flexPath: /var/lib/kubelet/volumeplugins
arrays:
  FlashBlades:
    - MgmtEndPoint: "10.123.456.140"
      APIToken: "T-obfuscat-e989-4ade-beaa-obfuscated0e"
      NfsEndPoint: "d-gp2-nas-1.domain.com"
storageclass:
  isPureDefault: true
  pureBackend: file
Since the ConfigMap just takes what is in the YAML and converts it to JSON, perhaps the JSON format the image accepts changed between releases?
If that is the case, what would the acceptable JSON look like?
The orchestrator creates the volume with a blank ... (rw,no_root_squash) export rule on the volume. It would be good if we could limit this to our k8s subnets.
Using the operator-k8s-plugin/install.sh script on OpenShift platform.
When attempting to delete the PSOPlugin/psoplugin-operator, the deletion hangs. I believe this is due to the metadata "finalizers" parameter: the finalizer value "uninstall-helm-release" does not correspond to anything that can be removed, so it is never cleared.
Example:
oc delete PSOPlugin/psoplugin-operator -n 'pure-k8s-operator-installed-namespace'
'hangs'
Expected Behavior:
The PSOPlugin/psoplugin-operator gets deleted.
Version: Pulled from Master on Feb. 13th, 2020
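A workaround commonly used for stuck finalizers (not necessarily the supported fix) is to clear the finalizer before deleting; the namespace is a placeholder:
  oc patch PSOPlugin/psoplugin-operator -n <namespace> --type=merge -p '{"metadata":{"finalizers":[]}}'
  oc delete PSOPlugin/psoplugin-operator -n <namespace>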
If the default StorageClass is set to use a backend of file, the default fstype should be set to nfs. It currently defaults to xfs, which causes PV creation to fail.
The fault is here: https://github.com/purestorage/helm-charts/blob/master/pure-csi/templates/storageclass.yaml#L13.
The error created can be seen here:
time="2020-02-20T23:45:27Z" level=error msg="GRPC handler error" error="rpc error: code = Internal desc = exit status 32" method=/csi.v1.Node/NodeStageVolume request="volume_id:\"k8s-ch1prd/pvc-9102f0df-cf4c-425e-b26a-b1ed9477aebe\" staging_target_path:\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9102f0df-cf4c-425e-b26a-b1ed9477aebe/globalmount\" volume_capability:<mount:<fs_type:\"xfs\" mount_flags:\"discard\" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > volume_context:<key:\"backend\" value:\"file\" > volume_context:<key:\"createoptions\" value:\"-q\" > volume_context:<key:\"namespace\" value:\"k8s-ch1prd\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1582150723465-8081-pure-csi\" > volume_context:<key:\"volumeName\" value:\"pvc-9102f0df-cf4c-425e-b26a-b1ed9477aebe\" > " span="{}"
It still seems easier to deploy with Helm. Do we have to have the generator set to helm in _helpers.tpl when the installation uses the Operator rather than helm install? This seems a little confusing...
When running the documented process for the plugin using Helm, the upgrade fails:
# helm upgrade pure-storage-driver pure/pure-k8s-plugin -f values.yaml --version 2.1.0 --recreate-pods
Error: UPGRADE FAILED: DaemonSet.apps "pure-flex" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"generator":"helm", "release":"pure-storage-driver", "app":"pure-flex", "chart":"pure-k8s-plugin", "date":"2018-07-31"}: selector does not match template labels
&& Deployment.apps "pure-provisioner" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"release":"pure-storage-driver", "app":"pure-provisioner", "chart":"pure-k8s-plugin", "date":"2018-07-31", "generator":"helm"}: selector does not match template labels
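Because spec.selector is immutable on these workloads, a workaround often used (as opposed to a clean uninstall/reinstall) is to delete the old objects while orphaning their pods, then re-run the upgrade; on older kubectl the flag is --cascade=false. The workload names come from the error above and the namespace is a placeholder:
  kubectl delete daemonset pure-flex -n <namespace> --cascade=orphan
  kubectl delete deployment pure-provisioner -n <namespace> --cascade=orphan
  helm upgrade pure-storage-driver pure/pure-k8s-plugin -f values.yaml --version 2.1.0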
I'm having an issue with the pureflex driver removing all multipaths on my root and /var volumes. These volumes are located on the array but should not be touched by the pureflex driver as they are not PVCs. When this happens, the machine locks up because it cannot connect to those volumes any more. Does anyone know how to keep it from disconnecting our boot and /var volumes?
Here is an excerpt from the log.
May 30 18:42:41 lpul-k8sprdwrk2 pureflex[1657]: time="2019-05-31T01:42:41Z" level=warning msg="Found multipath map that should NOT be attached, adding to list for cleanup" connection="{Name: LUN:0 Volume: Hgroup:}" dev="DeviceInfo{BlockDev: dm-0, LUN: , Serial: , WWID: 3624A9370D5BC07893D61B51B00255378}" shouldBeConnected=false
Apparently there has been an internal discussion, and we believe that the CSI driver supports Ubuntu 18.04. If this is genuinely the case, can the README be updated to reflect this?
Currently, the CSI driver in PSO 5 uses the storage.k8s.io/v1beta1 API, which is removed in Kubernetes 1.22.
Can PSO 5 support the stable API, storage.k8s.io/v1, for existing users who cannot upgrade to PSO 6?
We are trying to use our FlashArray to provide persistent storage for our OpenShift cluster, but I keep getting:
Unable to mount volumes for pod "docker-registry-7-2wr7g_default(d19d07e2-1e88-11e9-8fbd-0050569a062c)": timeout expired waiting for volumes to attach or mount for pod
The volume was created just fine on the array, so the plugin can create the volume, but it wasn't able to mount it into the pod.
We did change sanType in values.yaml to FC.
The nodes also boot from the same array, so they can see the array via FC.
Not sure what I'm missing here.
I'm getting the following error trying to mount a file system:
csi_attacher.go:330] kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name pure-csi not found in the list of registered CSI drivers
However, the driver shows up under csidrivers:
$ oc get csidrivers.storage.k8s.io
NAME       CREATED AT
pure-csi   2019-08-22T17:00:13Z
Note that this is for FlashBlade, using the 'operator-csi-plugin' and some snapshot of Kubernetes v1.14.0.
Hi,
I am implementing a FlashArray on Kubernetes. I have deployed the CSI Helm chart, and I can see that PersistentVolumeClaims are created correctly.
When I try to use the PVC in a pod, it stays in ContainerCreating. The logs on the node are not really useful; all I have is:
(durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-f33cc299-070a-11ea-b821-54802852354e\" (UniqueName: \"kubernetes.io/csi/pure-csi^k8s/pvc-f33cc299-070a-11ea-b821-54802852354e\") pod \"postgres-56448cf857-p2s44\" (UID: \"f9c8408d-070a-11ea-b9c7-5480284e548e\") "
Nothing really useful from the CSI pods either.
My setup looks like this:
We have one VLAN NIC for iSCSI traffic. I can reach the array without any issue; I verified that a volume created on the SAN array is visible from iscsiadm, and it works.
We have multipath too, but it is not really configured for the FlashArray.
On the SAN array side, I am using an array-admin API token (more privilege than I'd like, but since it's not working, who cares...). I can't see any hosts added (which, according to a Pure Storage presales engineer, should happen automatically).
I feel a little stuck; I don't see where I could be wrong.
Do you have any advice for getting more detailed logs?
Or if I'm wrong, please tell me; nobody's perfect :-)
Thanks
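One way to get more verbose output (the pod and container names here come from other reports in this thread and may differ in your deployment) is to pull logs directly from the node plugin and the provisioner:
  kubectl logs -n <pso-namespace> <pure-csi-node-pod> -c pure-csi-container --timestamps
  kubectl logs -n <pso-namespace> pure-provisioner-0 --timestamps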
The install.sh script for the operator-csi-plugin references the following:
IMAGE=quay.io/purestorage/pso-operator:v5.0.0
However, the highest tagged image on quay.io/purestorage is:
quay.io/purestorage/pso-operator:v0.0.5
Because of this, the operator-csi-plugin installer does not succeed: the pod it creates is stuck forever trying to pull an image that doesn't exist.
If a provisioned PVC is running low on space, what is the recommended approach to expanding it? The current StorageClass does not specify allowVolumeExpansion: true
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
Would this be a possibility?
Thank you!
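Assuming the installed driver version actually supports expansion (an assumption), the change would look roughly like this, with the parameters kept as the chart already defines them:
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: pure-block
  provisioner: pure-csi
  allowVolumeExpansion: true
  parameters:
    backend: block
After that, growing a claim is a matter of patching its request, e.g. kubectl patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'.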
I have a cluster provisioned with rancher and applying an update to values.yaml causes the pure-csi containers to restart but they fail to start because they can't mount a directory.
container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/var/lib/kubelet/pods/2a8b5dcb-6d88-4264-a7a9-444482467dda/containers/pure-csi-container/8dfa651d\\\" to rootfs \\\"/var/lib/docker/devicemapper/mnt/2a11a0313018b8dcb9f43925ca8264ba6c23e3b1411d366edf30cdab456ee741/rootfs\\\" at \\\"/var/lib/docker/devicemapper/mnt/2a11a0313018b8dcb9f43925ca8264ba6c23e3b1411d366edf30cdab456ee741/rootfs/dev/termination-log\\\" caused \\\"no such file or directory\\\"\""
Restarting the Worker node fixes the issue.
Hello,
In the old FlexVolume plugin we used to be able to create a PVC from a snapshot via the annotation snapshot.beta.purestorage.com/name: <snapshot_name>. It doesn't matter how the snapshot was created as long as the prefix matches. We have been trying the new pure-csi plugin, and this annotation no longer seems to work.
We have an application in k8s which consumes snapshots taken outside of the k8s cluster (named <prefix>-<id> so they can be used in the k8s cluster). This used to work with the snapshot annotation, but I can't seem to find a way to do it the "CSI way" after reading https://github.com/purestorage/helm-charts/blob/master/docs/csi-snapshot-clones.md
Any help is appreciated :)
Thanks
Xinghong
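For what it's worth, the generic CSI pattern for consuming a snapshot that already exists on the backend is a pre-provisioned VolumeSnapshotContent plus a VolumeSnapshot that binds to it; whether PSO accepts an array-side snapshot name as the snapshotHandle (and in what format) is an assumption the maintainers would need to confirm:
  apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshotContent
  metadata:
    name: imported-snap-content
  spec:
    deletionPolicy: Retain
    driver: pure-csi
    source:
      snapshotHandle: <array-side-snapshot-name>   # format is an assumption
    volumeSnapshotRef:
      name: imported-snap
      namespace: default
  ---
  apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    name: imported-snap
    namespace: default
  spec:
    source:
      volumeSnapshotContentName: imported-snap-content
A PVC could then reference imported-snap via spec.dataSource to clone from it.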
Helm chart installs, but fails to provision volume. This is the error message reported via
kubectl describe pvc ...:
failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
This is echoed in the pure-provisioner-0 pod:
I0414 15:18:19.675302 1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pure-claim", UID:"1429eb88-e318-4162-992a-b25c25c91349", APIVersion:"v1", ResourceVersion:"24418135", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
I0414 15:20:27.675779 1 controller.go:1199] provision "default/pure-claim" class "pure": started
I0414 15:20:27.680404 1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pure-claim", UID:"1429eb88-e318-4162-992a-b25c25c91349", APIVersion:"v1", ResourceVersion:"24418135", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/pure-claim"
W0414 15:20:27.683067 1 controller.go:887] Retrying syncing claim "1429eb88-e318-4162-992a-b25c25c91349", failure 8
E0414 15:20:27.683112 1 controller.go:910] error syncing claim "1429eb88-e318-4162-992a-b25c25c91349": failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
I0414 15:20:27.683147 1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pure-claim", UID:"1429eb88-e318-4162-992a-b25c25c91349", APIVersion:"v1", ResourceVersion:"24418135", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
This is my test pvc manifest:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # Referenced in nginx-pod.yaml for the volume spec
  name: pure-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: pure
Here is the output from k get sc:
NAME PROVISIONER AGE
pure pure-csi 17m
pure-block pure-csi 17m
pure-file pure-csi 17m
Deployed via helm 3:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
pure-storage storage 1 2020-04-14 16:07:51.941290552 +0100 BST deployed pure-csi-1.1.0 1.1.0
I enabled debug: true in the chart to see if that gave me any more information; it did not (that I could see).
I also recreated the API token, just in case I had it wrong. It still gives the same result.
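For comparison, the "(root): Invalid type. Expected: object, given: null" error is typically what shows up when the arrays block never reaches the driver. A minimal sketch of the values passed at install time (endpoints and tokens are placeholders, and whether FlashArrays is the exact key in this chart version should be checked against its bundled values.yaml):
  arrays:
    FlashArrays:
      - MgmtEndPoint: "1.2.3.4"
        APIToken: "<api-token>"
    FlashBlades:
      - MgmtEndPoint: "1.2.3.5"
        APIToken: "<api-token>"
        NfsEndPoint: "1.2.3.6"
supplied via helm install/upgrade with -f values.yaml.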
Can you please state in the README whether or not there is a minimum version of Kubernetes that PSO will work with?
Currently, the API token is specified as a string literal in the PSOPlugin object in purestorage.com/v1.
Since we check in all cluster objects in a git repository, this would mean that we have to expose the token to everyone who has read permission to the repository. This is not very secure.
Kubernetes Secrets are designed for managing sensitive information, and there are many options that allow us to safely version-control secrets in git in encrypted form.
Can we allow the token to be referenced as, for example, a v1.SecretKeySelector?
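A hypothetical shape for what is being asked; the APITokenSecretRef field does not exist in the current CRD and the names are illustrative only:
  apiVersion: purestorage.com/v1
  kind: PSOPlugin
  metadata:
    name: psoplugin-operator
  spec:
    arrays:
      FlashArrays:
        - MgmtEndPoint: "10.0.0.1"
          APITokenSecretRef:          # hypothetical field
            name: fa1-credentials
            key: api-token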
Hi! Could you add some more explanation on what this option does? Thanks!
Hi,
Is it possible (using FlexVolume or CSI) to segregate the cluster so that the Pure storage class can only be used on specific nodes?
I am implementing a FlashArray SAN on Kubernetes and want to connect it using FC.
One of our big GPU servers doesn't support FC cards, and I don't want to use iSCSI.
So my question is: I can see in the chart that there is a nodeSelector parameter; will defining a node selector restrict mounts to the selected nodes?
Thanks a lot.
I am on k8s 1.17 and am getting an error during deploy.
The CustomResourceDefinition "psoplugins.purestorage.com" is invalid: spec.versions[0].schema.openAPIV3Schema: Required value: schemas are required
From what I can understand from https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#specifying-a-structural-schema, schemas are now required when using apiextensions.k8s.io/v1.
Also, I'm not very familiar with CRDs at this stage, but this indentation looked off:
https://github.com/purestorage/helm-charts/blob/master/operator-k8s-plugin/install.sh#L125
It results in an unknown field "subresources" unless the indentation is moved into the name scope.
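For reference, a hedged sketch of the minimal shape apiextensions.k8s.io/v1 expects: every served version carries a schema, and subresources sits inside the version entry (alongside name) rather than at the spec level:
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: psoplugins.purestorage.com
  spec:
    group: purestorage.com
    names:
      kind: PSOPlugin
      listKind: PSOPluginList
      plural: psoplugins
      singular: psoplugin
    scope: Namespaced
    versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
        subresources:
          status: {}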
pso deployments are broken:
{"level":"error","ts":1575570508.7013617,"logger":"helm.controller","msg":"Release failed","namespace":"pso-operator","name":"psoplugin-operator","apiVersion":"purestorage.com/v1","kind":"PSOPlugin","release":"psoplugin-operator-5igmuk6pvf7od89wp3m0z75rr","error":"failed to install release: validation failed: [unable to recognize "": no matches for kind "DaemonSet" in version "extensions/v1beta1", unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1"]",
helm chart deployments are broken for the same reason.
It's been almost three months since the release of K8s 1.16 and I figured you guys would be on it by now.
https://kubernetes.io/blog/2019/09/18/kubernetes-1-16-release-announcement/
With the Role defined in install.sh, the operator logs the following error when it starts:
{"level":"info","ts":1581545947.1596415,"logger":"cmd","msg":"failed to initialize service object for metrics: replicasets.apps \"pso-operator-697946fc65\" is forbidden: User \"system:serviceaccount:pure-yuhao:default\" cannot get resource \"replicasets\" in API group \"apps\" in the namespace \"pure-yuhao\"","Namespace":"pure-yuhao"}
I have to give it additional permission to get replicasets:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pso-operator
rules:
...
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - statefulsets
  - replicasets    # <---------- added this
  verbs:
  - "*"
...
The k8s provisioner does not support setting the PURE_ISCSI_ALLOWED_CIDRS option.
The flexPath parameter was moved out from under orchestrator/k8s to the base of the YAML in 9871d64. This causes upgrades past 2.2.1 to look successful but then fail to actually mount filesystems, because the volume plugin gets installed incorrectly. You might want to note in the release notes that values.yaml will have to be updated to work around this.
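A sketch of the corresponding values.yaml change (verify exact placement against the values.yaml bundled with your chart version):
  # 2.2.1 and earlier
  orchestrator:
    k8s:
      flexPath: /var/lib/kubelet/volumeplugins

  # after the commit referenced above
  flexPath: /var/lib/kubelet/volumeplugins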
I can't seem to find any metrics for the PVCs that are attached to Pure storage.
Is pure-csi exposing metrics like kubelet_volume_stats_available_bytes?
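For what it's worth, kubelet_volume_stats_* series are reported by the kubelet (not by the driver pods) and only appear for drivers that implement NodeGetVolumeStats; assuming they are present, they can be checked either straight from a node's metrics endpoint or, if the kubelet is scraped, in Prometheus:
  kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics" | grep kubelet_volume_stats
  kubelet_volume_stats_available_bytes{persistentvolumeclaim="<pvc-name>"}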
Provide a way to enable/disable the three different storage classes. For example, I'm not setting up pure-file, yet the Helm chart assumes I am and creates a storage class for it.
@sdizer to add details ...
flasharray:
  sanType: ISCSI
  defaultFSType: xfs
  defaultFSOpt: "-q"
  defaultMountOpt:
    - discard
  preemptAttachments: "true"
  iSCSILoginTimeout: 20
  iSCSIAllowedCIDR: ""
seems to be out of sync with the docs...
We have a requirement to share a volume with different PODs in different Kubernetes namespaces.
In Kubernetes, a volume can only be shared within one namespace by using a PersistentVolumeClaim. It is not easy to share a Pure Storage volume among different namespaces.
Currently, we are using some workaround to share the same purestorage among different namespaces. Is there any problem with it?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
  flexVolume:
    driver: pure/flex-norelabel
    fsType: nfs
    options:
      accessMode: exclusive
      backend: file
      labels: ""
      namespace: k8s
      volumeMode: ""
      volumeName: pvc-xxxx
  storageClassName: pure
Issue:
pure-csi (DaemonSet) pod fails to stop due to a preStop hook exec failure (stat /bin/sh: no such file or directory).
Cause:
The preStop hook tries to run /bin/sh in the image "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"
https://github.com/purestorage/helm-charts/blob/5.0.7/pure-csi/templates/node.yaml#L84-L90
which stopped including /bin/sh as of version v1.1.0.
How to reproduce:
OR
docker run --rm -ti --entrypoint /bin/sh quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
Environment:
OS: Centos 7
Kubernetes: v1.14.5
Pure-csi: 5.0.6 (also a problem for 5.0.7)
The .Values.arrays section contains both non-sensitive data, like the management endpoint, and sensitive data, like the API token. Ideally, credentials such as the API token should be stored as a Secret instead of a ConfigMap.
In the header for the pure-csi helm chart there is a message:
Feature Frozen: Pure Service Orchestrator 5.x is in feature freeze
What does this mean for the non-operator parts of the helm-charts repo, and how does it apply to the pure-csi chart?