
helm-charts's People

Contributors

2vcps, bcdonadio, caryli-ps, carysyd, david-enli, dinathom, dsupure, haibinxie, japaniel, javefang, kengibous, patrick-east, pure-adamukaapan, pure-garyyang, pure-jliao, pure-yesmat, rdeenik, rlondner, sdodsley, stefanmcshane, sun7927, taherv, trevormeiss, wstr-ncrum, yuha0


helm-charts's Issues

failed to create snapshot in k8s 1.20

We recently upgraded to k8s 1.20 and I am now having trouble creating VolumeSnapshots. Here is the error:

spec:
  snapshotClassName: pure-snapshotclass
  snapshotContentName: ""
  source:
    apiGroup: null
    kind: PersistentVolumeClaim
    name: pvc-nginx
status:
  creationTime: "2021-04-05T13:22:48Z"
  error:
    message: 'Failed to create snapshot: selfLink was empty, can''t make reference'
    time: "2021-04-05T13:23:11Z"
  readyToUse: false
  restoreSize: null

I am using snapshot.storage.k8s.io/v1alpha1, which is still included in k8s 1.20.

$ kubectl api-versions | grep snap
snapshot.storage.k8s.io/v1alpha1

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-31T15:33:39Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-31T16:42:32Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}

We are using: image: purestorage/k8s:5.2.0

It looks like this issue is referenced here, but I'm not sure how that helps us. Is an upgrade to PSO 6.x required?
kubernetes/kubernetes#94660
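For anyone hitting the same thing: selfLink was deprecated in Kubernetes 1.20, which breaks the old v1alpha1 external-snapshotter that the 5.x plugin relies on. Newer snapshot controllers (which PSO 6.x is presumably built against) use the snapshot.storage.k8s.io/v1beta1 or v1 API instead; under that API a snapshot manifest would look roughly like this (a sketch, assuming the v1 snapshot CRDs and an up-to-date snapshot controller are installed; the snapshot name is illustrative):

```yaml
# Sketch of a VolumeSnapshot under the newer snapshot.storage.k8s.io/v1 API.
# Assumes the v1 snapshot CRDs and snapshot controller are installed.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: nginx-snap            # illustrative name
spec:
  volumeSnapshotClassName: pure-snapshotclass
  source:
    persistentVolumeClaimName: pvc-nginx
```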

Error deploying or upgrading to v1.0.6

Receiving the following error when I deploy or upgrade to v1.0.6 via Rancher.

Wait helm template failed. 2020/02/11 12:28:55 warning: skipped value for arrays: Not a table. Error: parse error in "pure-csi/templates/node.yaml": template: pure-csi/templates/node.yaml:141: unexpected "=" in operand : exit status 1

Plugin image 2.2.1 states my json is malformed

When I try to use the 2.2.1 helm chart, which defaults to the 2.2.1 image, I get the following errors from the pure-provisioner:

pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=debug msg="Initializing Mount config, conf = '<nil>'"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=debug msg="Done initializing config, conf = '&{DefaultBlkFSType:xfs DefaultBlkFSOpts:-q DefaultBlkMountOpts:}'"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=debug msg="Initializing discovery config, conf = '<nil>'"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=debug msg="Done initializing discovery config, conf = '&{DiscoveryType:json Pure1APIHost:localhost:8080}'"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=info msg="namespace = k8s"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:39Z" level=info msg="Starting pureProvisioner" provisionerName=pure-provisioner serverVersion=v1.12.3
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=info msg="Provision volume" id="{k8s pvc-f3052460-1ea0-11e9-bef5-005056975897}" spec="{map[size:107374182400 volume_label_selector:purestorage.com/backend=block]}"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=debug msg="No backends discovered, unable to schedule volume." spec="{map[size:107374182400 volume_label_selector:purestorage.com/backend=block]}"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=info msg="Provisioned volume for PVC" annotations="map[]" backend=block error="no storage backends were able to meet the request specification" k8sNamespace=monitoring pvname=pvc-f3052460-1ea0-11e9-bef5-005056975897 size="{{0 0} {0xc420417590} 100Gi BinarySI}" volume="<nil>"
pure-provisioner-85b786974b-w4n74 pure-provisioner E0123 00:08:56.597730       1 controller.go:672] Failed to provision volume for claim "monitoring/prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0" with StorageClass "pure": no storage backends were able to meet the request specification
pure-provisioner-85b786974b-w4n74 pure-provisioner E0123 00:08:56.598362       1 goroutinemap.go:151] operation for "provision-monitoring/prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0[f3052460-1ea0-11e9-bef5-005056975897]" failed with: no storage backends were able to meet the request specification
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=info msg="Provision volume" id="{k8s pvc-ef113003-1ea0-11e9-bef5-005056975897}" spec="{map[volume_label_selector:purestorage.com/backend=block size:53687091200]}"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="JSON validation failed" err="NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=error msg="Error validating json: NFSEndPoint: NFSEndPoint is required\t"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=debug msg="No backends discovered, unable to schedule volume." spec="{map[size:53687091200 volume_label_selector:purestorage.com/backend=block]}"
pure-provisioner-85b786974b-w4n74 pure-provisioner time="2019-01-23T00:08:56Z" level=info msg="Provisioned volume for PVC" annotations="map[]" backend=block error="no storage backends were able to meet the request specification" k8sNamespace=monitoring pvname=pvc-ef113003-1ea0-11e9-bef5-005056975897 size="{{0 0} {0xc42014c3f0} 50Gi BinarySI}" volume="<nil>"

The values.yaml file I was using looks something like

orchestrator:
  k8s:
    flexPath: /var/lib/kubelet/volumeplugins
arrays:
  FlashBlades:
    - MgmtEndPoint: "10.123.456.140"
      APIToken: "T-obfuscat-e989-4ade-beaa-obfuscated0e"
      NfsEndPoint: "d-gp2-nas-1.domain.com"
storageclass:
  isPureDefault: true
  pureBackend: file

I resolved it by pinning the previous image in values.yaml:

image:
  name: purestorage/k8s
  tag: 2.1.2
  pullPolicy: IfNotPresent
orchestrator:
  k8s:
    flexPath: /var/lib/kubelet/volumeplugins
arrays:
  FlashBlades:
    - MgmtEndPoint: "10.123.456.140"
      APIToken: "T-obfuscat-e989-4ade-beaa-obfuscated0e"
      NfsEndPoint: "d-gp2-nas-1.domain.com"
storageclass:
  isPureDefault: true
  pureBackend: file

Since the config map just takes what is in the YAML and converts it to JSON, perhaps the JSON format the image accepts changed?

If that is the case, what would the acceptable JSON look like?
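One guess, based purely on the error text: the validator complains about NFSEndPoint (all caps), while the values file uses NfsEndPoint. If the 2.2.1 image changed the expected key casing, the shape it wants might correspond to something like the following (unverified speculation; the endpoint and token are the obfuscated placeholders from above):

```yaml
arrays:
  FlashBlades:
    - MgmtEndPoint: "10.123.456.140"        # placeholder from above
      APIToken: "T-obfuscat-e989-4ade-beaa-obfuscated0e"
      NFSEndPoint: "d-gp2-nas-1.domain.com" # casing taken from the error message; unverified
```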

[Bug] - Deletion of PSOPlugin resource hangs

Using the operator-k8s-plugin/install.sh script on OpenShift platform.

When attempting to delete the PSOPlugin/psoplugin-operator, the deletion hangs. I believe this is due to the metadata "finalizers" parameter: the finalizer value "uninstall-helm-release" is never cleared, so the delete never completes.

Example:
oc delete PSOPlugin/psoplugin-operator -n 'pure-k8s-operator-installed-namespace'
(hangs)

Expected Behavior:
The PSOPlugin/psoplugin-operator gets deleted.

Version: Pulled from Master on Feb. 13th, 2020
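A generic workaround for any Kubernetes resource stuck on a finalizer is to clear the finalizers list first and then delete. A sketch, not an official PSO procedure (the namespace is a placeholder):

```shell
# Clear the stuck finalizer so the API server can complete the delete.
oc patch PSOPlugin/psoplugin-operator -n <namespace> \
  --type merge -p '{"metadata":{"finalizers":[]}}'
oc delete PSOPlugin/psoplugin-operator -n <namespace>
```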

Default storageclass has incorrect fs_type when set to use FB as the backend

If the default storageclass is set to use a backend of file, the default fstype should be nfs. It currently defaults to xfs, which causes PV creation to fail.
The fault is here: https://github.com/purestorage/helm-charts/blob/master/pure-csi/templates/storageclass.yaml#L13.
The error created can be seen here:

time="2020-02-20T23:45:27Z" level=error msg="GRPC handler error" error="rpc error: code = Internal desc = exit status 32" method=/csi.v1.Node/NodeStageVolume request="volume_id:\"k8s-ch1prd/pvc-9102f0df-cf4c-425e-b26a-b1ed9477aebe\" staging_target_path:\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9102f0df-cf4c-425e-b26a-b1ed9477aebe/globalmount\" volume_capability:<mount:<fs_type:\"xfs\" mount_flags:\"discard\" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > volume_context:<key:\"backend\" value:\"file\" > volume_context:<key:\"createoptions\" value:\"-q\" > volume_context:<key:\"namespace\" value:\"k8s-ch1prd\" > volume_context:<key:\"storage.kubernetes.io/csiProvisionerIdentity\" value:\"1582150723465-8081-pure-csi\" > volume_context:<key:\"volumeName\" value:\"pvc-9102f0df-cf4c-425e-b26a-b1ed9477aebe\" > " span="{}"

create a git repo for operator

  • Please clarify your position/best practice concerning the deprecation of Helm charts, now that Helm 3 runs without Tiller.

Deploying with Helm still seems easier.

  • Operator deployment is part of the "helm-charts" repo which is quite confusing

Change generator name

Do we have to have the generator set to helm in _helpers.tpl when the installation uses the Operator rather than helm install? This seems a little confusing...

Upgrade from 2.0.1 to 2.1.0 fails

When running the documented upgrade process for the plugin using Helm, the upgrade fails:

# helm upgrade pure-storage-driver pure/pure-k8s-plugin -f values.yaml --version 2.1.0 --recreate-pods

Error: UPGRADE FAILED: DaemonSet.apps "pure-flex" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"generator":"helm", "release":"pure-storage-driver", "app":"pure-flex", "chart":"pure-k8s-plugin", "date":"2018-07-31"}: selector does not match template labels && Deployment.apps "pure-provisioner" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"release":"pure-storage-driver", "app":"pure-provisioner", "chart":"pure-k8s-plugin", "date":"2018-07-31", "generator":"helm"}: selector does not match template labels
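Label selectors on apps/v1 DaemonSets and Deployments are immutable, so a chart release that changes template labels cannot be upgraded in place. One possible workaround, assuming it is acceptable to briefly orphan the running pods (a sketch, not an official upgrade procedure; test outside production first):

```shell
# Delete the controllers but keep their pods running (--cascade=false is
# spelled --cascade=orphan on newer kubectl), then re-run the upgrade so
# Helm recreates them with the new selectors.
kubectl delete daemonset pure-flex --cascade=false
kubectl delete deployment pure-provisioner --cascade=false
helm upgrade pure-storage-driver pure/pure-k8s-plugin -f values.yaml --version 2.1.0
```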

Pureflex unmounting root volume

I'm having an issue with the pureflex driver removing all multipath maps for my root and /var volumes. These volumes are located on the array, but they are not PVCs and should not be touched by the pureflex driver. When this happens, the machine locks up because it can no longer reach those volumes. Does anyone know how to stop it from disconnecting our boot and /var volumes?

Here is an excerpt from the log.

May 30 18:42:41 lpul-k8sprdwrk2 pureflex[1657]: time="2019-05-31T01:42:41Z" level=warning msg="Found multipath map that should NOT be attached, adding to list for cleanup" connection="{Name: LUN:0 Volume: Hgroup:}" dev="DeviceInfo{BlockDev: dm-0, LUN: , Serial: , WWID: 3624A9370D5BC07893D61B51B00255378}" shouldBeConnected=false

Ubuntu 18.04 support

Apparently there has been an internal discussion and we believe the CSI driver supports Ubuntu 18.04. If this is genuinely the case, can the README be updated to reflect it?

Volume mount timeout

We are trying to use our FlashArray to provide persistent storage for our OpenShift cluster, but I keep getting:

Unable to mount volumes for pod "docker-registry-7-2wr7g_default(d19d07e2-1e88-11e9-8fbd-0050569a062c)": timeout expired waiting for volumes to attach or mount for pod

The volume was created just fine on the array, so the plugin can create volumes, but it wasn't able to mount the volume into the pod.
We did change sanType in values.yaml to FC.
The nodes also boot from the same array, so they can see the array via FC.
Not sure what I'm missing here.

pure-csi not found in the list of registered CSI drivers

I'm getting the following error trying to mount a file system:

csi_attacher.go:330] kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name pure-csi not found in the list of registered CSI drivers

However, the driver shows up under csidrivers:

$ oc get csidrivers.storage.k8s.io
NAME       CREATED AT
pure-csi   2019-08-22T17:00:13Z

Note that this is for FlashBlade, using the 'operator-csi-plugin' and some snapshot of Kubernetes v1.14.0.

ISCSI CSI Plugin mount fail

Hi,
I am implementing a FlashArray on Kubernetes. I have deployed the CSI Helm chart and I can see that PersistentVolumeClaims are created correctly.
When I try to use the PVC in a pod, it stays in ContainerCreating. The logs on the node are not really useful; all I have is:
(durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-f33cc299-070a-11ea-b821-54802852354e\" (UniqueName: \"kubernetes.io/csi/pure-csi^k8s/pvc-f33cc299-070a-11ea-b821-54802852354e\") pod \"postgres-56448cf857-p2s44\" (UID: \"f9c8408d-070a-11ea-b9c7-5480284e548e\") "

Nothing really useful on the csi pods part.

My setup looks like :

  • 11 bare-metal servers running kubernetes 1.13.12
  • Calico as CNI (network policies)
  • Docker CE 18.09
  • Ubuntu 18.04

We have one VLAN NIC for iSCSI traffic and I can reach the array without any issues; I verified that a volume created on the SAN array is visible from iscsiadm, and it works.

Multipath is set up too, but not specifically configured for the FlashArray.

On the SAN array side, I am using an array admin API token (more privilege than I'd like, but since it's not working anyway...). I can't see any hosts being added, which a Pure Storage presales engineer told me should be automatic.

I feel a little bit stuck, I don't see where I could be wrong.

Do you have any advice to get more detailed logs ?

Or if I'm wrong please tell me nobody's perfect :-)

Thanks

install.sh references non-existent image on quay.io

The install.sh script for the operator-csi-plugin references the following:
IMAGE=quay.io/purestorage/pso-operator:v5.0.0

However, the highest tagged image on quay.io/purestorage is:
quay.io/purestorage/pso-operator:v0.0.5

Because of this, the operator-csi-plugin installer does not succeed: the pod it creates is stuck forever trying to pull an image that doesn't exist.

[pure-csi] daemon set fails to start on update.sh

I have a cluster provisioned with Rancher, and applying an update to values.yaml causes the pure-csi containers to restart, but they fail to start because they can't mount a directory.

container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/var/lib/kubelet/pods/2a8b5dcb-6d88-4264-a7a9-444482467dda/containers/pure-csi-container/8dfa651d\\\" to rootfs \\\"/var/lib/docker/devicemapper/mnt/2a11a0313018b8dcb9f43925ca8264ba6c23e3b1411d366edf30cdab456ee741/rootfs\\\" at \\\"/var/lib/docker/devicemapper/mnt/2a11a0313018b8dcb9f43925ca8264ba6c23e3b1411d366edf30cdab456ee741/rootfs/dev/termination-log\\\" caused \\\"no such file or directory\\\"\""

Restarting the Worker node fixes the issue.

[pure-csi] create pvc from an externally created snapshots

Hello,

In the old FlexVolume plugin we used to be able to create a PVC from a snapshot via the annotation snapshot.beta.purestorage.com/name: <snapshot_name>. It didn't matter how the snapshot was created, as long as the prefix matched. We have now been trying the new pure-csi plugin, and this annotation no longer seems to work.

We have an application in k8s that consumes snapshots created outside of the k8s cluster (named <prefix>-<id> so they can be used in the cluster). This used to work with the snapshot annotation, but I can't find a way to do it the "CSI way" after reading https://github.com/purestorage/helm-charts/blob/master/docs/csi-snapshot-clones.md

Any help is appreciated :)

Thanks
Xinghong

failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null

The Helm chart installs, but fails to provision a volume. This is the error message reported via kubectl describe pvc ...:

failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null

This is echoed in the pure-provisioner-0 pod:

I0414 15:18:19.675302       1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pure-claim", UID:"1429eb88-e318-4162-992a-b25c25c91349", APIVersion:"v1", ResourceVersion:"24418135", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
I0414 15:20:27.675779       1 controller.go:1199] provision "default/pure-claim" class "pure": started
I0414 15:20:27.680404       1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pure-claim", UID:"1429eb88-e318-4162-992a-b25c25c91349", APIVersion:"v1", ResourceVersion:"24418135", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/pure-claim"
W0414 15:20:27.683067       1 controller.go:887] Retrying syncing claim "1429eb88-e318-4162-992a-b25c25c91349", failure 8
E0414 15:20:27.683112       1 controller.go:910] error syncing claim "1429eb88-e318-4162-992a-b25c25c91349": failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null
I0414 15:20:27.683147       1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pure-claim", UID:"1429eb88-e318-4162-992a-b25c25c91349", APIVersion:"v1", ResourceVersion:"24418135", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "pure": rpc error: code = Internal desc = (root): Invalid type. Expected: object, given: null

This is my test pvc manifest:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # Referenced in nginx-pod.yaml for the volume spec
  name: pure-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: pure

Here is the output from k get sc:

NAME         PROVISIONER   AGE
pure         pure-csi      17m
pure-block   pure-csi      17m
pure-file    pure-csi      17m

Deployed via helm 3:

NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
pure-storage    storage         1               2020-04-14 16:07:51.941290552 +0100 BST deployed        pure-csi-1.1.0  1.1.0      

I enabled debug: true in the chart to see if that gave me any more information; it did not (that I could see).

I also recreated the API token, just in case I had it wrong. It still gives the same result.
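For what it's worth, this particular message ((root): Invalid type. Expected: object, given: null) often means the driver received an empty configuration object, e.g. an arrays: section that ended up null in the values Helm passed in. A minimal sketch of the shape the chart expects (assuming the pure-csi 1.x values layout; the endpoint and token are placeholders):

```yaml
arrays:
  FlashArrays:
    - MgmtEndPoint: "1.2.3.4"                         # placeholder management endpoint
      APIToken: "00000000-0000-0000-0000-000000000000" # placeholder API token
```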

[Feature Request] Reference API token from a kubernetes secret object

Currently, the API token is specified as a string literal in the PSOPlugin object (purestorage.com/v1).

Since we check all cluster objects into a git repository, this means the token is exposed to everyone who has read permission on the repository. This is not very secure.

Kubernetes Secrets are designed for managing sensitive information, and there are many options that allow us to safely version-control secrets in git in encrypted form.

Could the token instead be referenced via, for example, a v1.SecretKeySelector?
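As a purely hypothetical illustration of what this could look like (the apiTokenSecretRef field below does not exist in the current CRD; it is invented here, modeled on v1.SecretKeySelector):

```yaml
apiVersion: purestorage.com/v1
kind: PSOPlugin
metadata:
  name: psoplugin-operator
spec:
  arrays:
    FlashArrays:
      - MgmtEndPoint: "1.2.3.4"        # placeholder
        apiTokenSecretRef:             # hypothetical field, not implemented today
          name: pure-array-creds       # Secret holding the token
          key: api-token
```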

support for zones in a cluster ?

Hi,

Is it possible (using FlexVolume or CSI) to segregate the cluster so that the Pure storage class can only be used on specific nodes?

I am implementing a FlashArray SAN on Kubernetes and want to connect it using FC.
One of our big GPU servers doesn't support FC cards, and I don't want to use iSCSI.

So my question is: I can see a nodeSelector parameter in the chart; will defining a node selector restrict mounts to the selected nodes?

Thanks a lot.

kubernetes 1.17 requires schema

I am on k8s 1.17 and am getting an error during deploy.

The CustomResourceDefinition "psoplugins.purestorage.com" is invalid: spec.versions[0].schema.openAPIV3Schema: Required value: schemas are required

From what I can understand from https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#specifying-a-structural-schema,
if using apiextensions.k8s.io/v1, schemas are now required.

Also, I'm not very familiar with CRDs at this stage, but this indentation looked off:
https://github.com/purestorage/helm-charts/blob/master/operator-k8s-plugin/install.sh#L125

It results in an unknown field "subresources" error unless the indentation is moved into the scope of the version name.
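For context, apiextensions.k8s.io/v1 requires a structural schema on every served version. A minimal sketch that should satisfy the validator while still allowing arbitrary spec contents (an assumption on my part that a free-form spec is acceptable here):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: psoplugins.purestorage.com
spec:
  group: purestorage.com
  scope: Namespaced
  names:
    kind: PSOPlugin
    listKind: PSOPluginList
    plural: psoplugins
    singular: psoplugin
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # Permit arbitrary fields rather than enumerating the full spec.
          x-kubernetes-preserve-unknown-fields: true
```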

These K8s charts are still using deprecated apiVersions

pso deployments are broken:

{"level":"error","ts":1575570508.7013617,"logger":"helm.controller","msg":"Release failed","namespace":"pso-operator","name":"psoplugin-operator","apiVersion":"purestorage.com/v1","kind":"PSOPlugin","release":"psoplugin-operator-5igmuk6pvf7od89wp3m0z75rr","error":"failed to install release: validation failed: [unable to recognize "": no matches for kind "DaemonSet" in version "extensions/v1beta1", unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1"]",

helm chart deployments are broken for the same reason.

It's been almost three months since the release of K8s 1.16 and I figured you guys would be on it by now.
https://kubernetes.io/blog/2019/09/18/kubernetes-1-16-release-announcement/

Failed to initialize service object for metrics: replicasets.apps

With the Role defined in install.sh, the operator logs the following error when it starts:

{"level":"info","ts":1581545947.1596415,"logger":"cmd","msg":"failed to initialize service object for metrics: replicasets.apps \"pso-operator-697946fc65\" is forbidden: User \"system:serviceaccount:pure-yuhao:default\" cannot get resource \"replicasets\" in API group \"apps\" in the namespace \"pure-yuhao\"","Namespace":"pure-yuhao"}

I had to give it additional permission to get replicasets:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pso-operator
rules:
...
  - apiGroups:
    - apps
    resources:
    - deployments
    - daemonsets
    - statefulsets
    - replicasets <---------- added this
    verbs:
    - "*"
...

flexPath moved in values.yaml Can Break Upgrade Past 2.2.1

The flexPath parameter was moved out from under orchestrator/k8s to the top level of the YAML in 9871d64. This causes upgrades past 2.2.1 to look successful but then fail to actually mount filesystems, because the volume plugin is installed incorrectly. You might want to note in the release notes that values.yaml has to be updated to work around this.

Volume metrics in pure-csi

I can't seem to find any metrics on the PVCs that are attached to Pure storage.

Is pure-csi exposing metrics like kubelet_volume_stats_available_bytes?

Only create specified storage classes.

Provide a way to enable/disable the three different storage classes. For example, I'm not setting up pure-file, but the Helm chart assumes I am and creates a storage class for it.

Clarify details in the docs for flashArrays

@sdizer to add details ...

flasharray:
  sanType: ISCSI
  defaultFSType: xfs
  defaultFSOpt: "-q"
  defaultMountOpt:
    - discard
  preemptAttachments: "true"
  iSCSILoginTimeout: 20
  iSCSIAllowedCIDR: ""

seems to be out of sync with the docs...

Support for Sharing a Volume Among Different Kubernetes Namespaces

Requirement

We have a requirement to share a volume with different PODs in different Kubernetes namespaces.

Problem

In Kubernetes, a volume can only be shared within one namespace by using a PersistentVolumeClaim. It is not easy to share a Pure Storage volume among different namespaces.

Can you provide an official way to fulfill this requirement?

Workaround

Currently, we are using a workaround to share the same Pure Storage volume among different namespaces. Is there any problem with it?

  1. Create a PersistentVolumeClaim in one namespace. Because a purestorage StorageClass is already configured, a new PV and Pure Storage volume are provisioned.
  2. Run "kubectl get pv <the_new_pv_name> -o yaml" on the new PV;
    the output would look like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
  flexVolume:
    driver: pure/flex-norelabel
    fsType: nfs
    options:
      accessMode: exclusive
      backend: file
      labels: ""
      namespace: k8s
      volumeMode: ""
      volumeName: pvc-xxxx
  storageClassName: pure
  3. Create a new PV in a different namespace with the same flexVolume settings retrieved from the first PV.
    Because this new PV has the same flexVolume settings, it will actually use the same Pure Storage volume.
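Concretely, the last step amounts to creating a second PV whose flexVolume block is copied verbatim from the first; only the PV name differs (pv1-shared is an illustrative name):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1-shared            # illustrative; everything below mirrors pv1
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
  flexVolume:
    driver: pure/flex-norelabel
    fsType: nfs
    options:
      accessMode: exclusive
      backend: file
      labels: ""
      namespace: k8s
      volumeMode: ""
      volumeName: pvc-xxxx    # same backing volume as pv1
  storageClassName: pure
```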

node-driver-registrar preStop hook: stat /bin/sh: no such file or directory

Issue:

The pure-csi (DaemonSet) pod fails to stop due to a preStop hook exec failure (stat /bin/sh: no such file or directory).

Cause:

The preStop hook tries to run a shell on the image "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"
https://github.com/purestorage/helm-charts/blob/5.0.7/pure-csi/templates/node.yaml#L84-L90
which stopped shipping /bin/sh as of v1.1.0.

How to reproduce:

  1. Install the pure-csi driver 5.0.6
  2. Wait for pure-csi daemonset pod to start
  3. Delete one of the pure-csi pods
  4. "k describe pod pure-csi-"

OR

docker run --rm -ti --entrypoint /bin/sh quay.io/k8scsi/csi-node-driver-registrar:v1.1.0

Environment:

OS: Centos 7
Kubernetes: v1.14.5
Pure-csi: 5.0.6 (also a problem for 5.0.7)

Storing api token in Secret instead of ConfigMap

The .Values.arrays section contains both non-sensitive data, like the management endpoint, and sensitive data, like the API token. Ideally, credentials such as the API token should be stored in a Secret instead of a ConfigMap.

Clarification on the future of the pure-csi helm chart

In the header for the pure-csi helm chart there is a message:

Feature Frozen: Pure Service Orchestrator 5.x is in feature freeze

What does this mean for the non-operator parts of the helm-charts repo, and how does it apply to the pure-csi chart?
