helm's Introduction

Portworx is a software-defined persistent storage solution designed and purpose-built for applications deployed as containers, via container orchestrators such as Kubernetes, Marathon, and Swarm. It is a clustered block storage solution that provides a cloud-native layer from which containerized stateful applications programmatically consume block, file, and object storage services directly through the scheduler.

Helm Chart for Portworx

Follow these instructions to install Portworx on your Kubernetes cluster using Helm.

Contributing

The specification and code is licensed under the Apache 2.0 license found in the LICENSE file of this repository.

Join us on Slack!

Contact us to share feedback, work with us, or request features.

helm's People

Contributors

adasilva70, adityadani, aks-px, cnbu-jenkins, dahuang-purestorage, diptiranjanpx, franciosi, hr1sh1kesh, hrish1kesh, hrishipx, jsilberm, kravichandran-px, matanbaru, nikolaypopov, prabhu-px, prashanthpx, pure-jliao, px-kesavan, saheienko, satchpx, sean-mcgimpsey, sensre, siva-portworx, ss-px, subinsebastian, svijaykumar-px, t-nidhin, tushar-lb, vikasit12, zoxpx

helm's Issues

Acquire leader lease via specific resource object instead of ConfigMap for the portworx-pvc-controller deployment

Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT
What happened:
Leader election fails in some cases for the portworx-pvc-controller pods. The --leader-elect-resource-lock should point to a dedicated object created specifically for the portworx-pvc-controller.
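
A hedged sketch of what the controller args might look like with a dedicated lock object; the --leader-elect-resource-name/--leader-elect-resource-namespace flags only exist on newer kube-controller-manager versions, and the object name below is hypothetical:

containers:
  - name: portworx-pvc-controller-manager
    command:
      - kube-controller-manager
      - --leader-elect=true
      # hypothetical fix: lock on a dedicated object instead of the shared default
      - --leader-elect-resource-lock=leases
      - --leader-elect-resource-name=portworx-pvc-controller
      - --leader-elect-resource-namespace=kube-system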

What you expected to happen:
Leader election for the portworx-pvc-controller deployment should succeed consistently.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Syntax error in preinstall hook

Is this a BUG REPORT or FEATURE REQUEST?:
BUG

What happened:
Syntax error in preinstall hook

Initializing...
/usr/bin/etcdStatus.sh: line 34: syntax error near unexpected token `else'
/usr/bin/etcdStatus.sh: line 34: `    else'

What you expected to happen:
Should there be a then after the if?
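
For reference, a minimal sketch of the shape bash requires (the variable names are hypothetical, not necessarily the script's):

# bash needs `then` before the first branch body
if [ -n "$ETCD_CACERT" ] && [ -z "$ETCD_CERT" ]; then
    # CA provided but no client cert/key: verify the server cert only
    curl --cacert "$ETCD_CACERT" "$ETCD_URL/version"
else
    curl --cacert "$ETCD_CACERT" --cert "$ETCD_CERT" --key "$ETCD_CERT_KEY" "$ETCD_URL/version"
fi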

How to reproduce it (as minimally and precisely as possible):
Supply etcd cacert but not cert/key

Anything else we need to know?:
Seems to have been introduced by #78

Environment:

  • Container Orchestrator and version: k8s 1.10
  • Cloud provider or hardware configuration: IKS
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Add portworx-api Daemonset to the helm chart

Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT

What happened:
The portworx-api DaemonSet is missing from the Helm chart, but it does get created when upgrading.

What you expected to happen:
The portworx-api DaemonSet should be aligned with what the spec generator / talisman upgrade produces.

How to reproduce it (as minimally and precisely as possible):
Install Portworx using the Helm chart, then observe that the portworx-api DaemonSet is missing.

Environment:

  • Container Orchestrator and version: Kubernetes > 1.8
  • Cloud provider or hardware configuration: On-Prem
  • OS (e.g. from /etc/os-release): Ubuntu 18.04 LTS
  • Kernel (e.g. uname -a): Linux svvr-mng77 2.6.32-642.6.2.el6.x86_64 #1 SMP Mon Oct 24 10:22:33 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: Helm Chart

"Invalid volume specification" on /var/lib/kubelet

Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT

What happened:
Portworx pods fail to start because of an "Invalid volume specification" error on /var/lib/kubelet.

What you expected to happen:
To work.

How to reproduce it (as minimally and precisely as possible):
Install the Helm chart on Kubernetes > 1.10.

Anything else we need to know?:
Preparing a PR.
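
One plausible shape for the fix on Kubernetes >= 1.10, using the mountPropagation field rather than a docker-style ":shared" suffix in the volume spec (a sketch under those assumptions, not the actual PR):

containers:
  - name: portworx
    volumeMounts:
      - name: kubelet
        mountPath: /var/lib/kubelet
        mountPropagation: Bidirectional   # replaces the ":shared" volume suffix
volumes:
  - name: kubelet
    hostPath:
      path: /var/lib/kubelet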

Environment:

  • Container Orchestrator and version: kubernetes 1.13
  • Cloud provider or hardware configuration: -
  • OS (e.g. from /etc/os-release): -
  • Kernel (e.g. uname -a): -
  • Install tools: helm
  • Others:

Support for adding a journal device to the chart

Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST

What happened:
Add support for adding a journal device

What you expected to happen:
Enhance the helm chart to support an option to specify a journaling device.
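
A hedged example of the requested values.yaml knob; the journalDevice key matches what later chart versions expose, and the device path is a placeholder:

journalDevice: /dev/sde   # dedicated journal device
# or let Portworx create one automatically on cloud drives:
# journalDevice: auto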

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Helm chart doesn't include PodSecurityPolicies

Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST

What happened:
Pods fail to be scheduled in a Kubernetes cluster with the PodSecurityPolicy admission controller enabled.

What you expected to happen:
Enhance the Helm chart with a specific PodSecurityPolicy for all Portworx components.
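
A sketch of the kind of permissive PodSecurityPolicy the Portworx components would likely need, given that the DaemonSet runs privileged with hostNetwork (the name and exact permissions are assumptions):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: px-privileged   # hypothetical name
spec:
  privileged: true
  allowedCapabilities: ["*"]
  volumes: ["*"]
  hostNetwork: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny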

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

px-lighthouse init-config container won't start, can't list resource "nodes"

Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT

What happened:

time="2019-02-03T16:38:13Z" level=info msg="Creating new default config for lighthouse"
2019/02/03 16:38:13 nodes is forbidden: User "system:serviceaccount:kube-system:px-lh-account" cannot list resource "nodes" in API group "" at the cluster scope Next retry in: 10s

What you expected to happen:
To work.

How to reproduce it (as minimally and precisely as possible):
Just run a default installation.

Anything else we need to know?:
The role binding of px-lh-account is against px-lh-role which only has:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
   name: px-lh-role
   namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "create", "update"]

There is no resources: ["nodes"] entry, so obviously it won't be able to list them. Should this be the same as the node-get-put-list-role that is bound to the px-account ServiceAccount?
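
Note that nodes are cluster-scoped, so a namespaced Role cannot grant access to them; a sketch of the additional ClusterRole px-lh-account would need (the name is hypothetical, and it would have to be bound via a ClusterRoleBinding):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: px-lh-node-role   # hypothetical
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]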

Environment:

  • Container Orchestrator and version: kubernetes 1.13
  • Cloud provider or hardware configuration: -
  • OS (e.g. from /etc/os-release): Ubuntu Bionic
  • Kernel (e.g. uname -a): -
  • Install tools: helm
  • Others: -

Add Pod AntiAffinity for Cassandra

Something like this would be useful to prevent Cassandra pods from being placed on the same Kubernetes node:

podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - cassandra
      topologyKey: "kubernetes.io/hostname"

helm install fails with Error: jobs.batch "px-etcd-preinstall-hook" already exists

BUG REPORT: Performed helm install with an incorrect etcd URL. The next retry of helm install failed with:

$ helm install --name my-release -f ./helm/charts/px/values.yaml ./helm/charts/px
Error: jobs.batch "px-etcd-preinstall-hook" already exists

What happened:

$ helm install --name my-release -f ./helm/charts/px/values.yaml ./helm/charts/px
Error: Job failed: BackoffLimitExceeded
harsh@harsh-lnx:~/work/src/github.com/portworx$ kubectl logs tiller-deploy-5b48764ff7-gjww4 -n kube-system
[main] 2018/02/11 00:25:09 Starting Tiller v2.8.1 (tls=false)
[main] 2018/02/11 00:25:09 GRPC listening on :44134
[main] 2018/02/11 00:25:09 Probes listening on :44135
[main] 2018/02/11 00:25:09 Storage driver is ConfigMap
[main] 2018/02/11 00:25:09 Max history per release is 0
[tiller] 2018/02/11 00:25:19 preparing install for my-release
[storage] 2018/02/11 00:25:19 getting release history for "my-release"
[tiller] 2018/02/11 00:25:19 rendering portworx-helm chart using values
2018/02/11 00:25:19 info: manifest "portworx-helm/templates/portworx-stork.yaml" is empty. Skipping.
2018/02/11 00:25:19 info: manifest "portworx-helm/templates/portworx-oc.yaml" is empty. Skipping.
[tiller] 2018/02/11 00:25:19 performing install for my-release
[tiller] 2018/02/11 00:25:19 executing 3 pre-install hooks for my-release
[kube] 2018/02/11 00:25:19 building resources from manifest
[kube] 2018/02/11 00:25:19 creating 1 resource(s)
[kube] 2018/02/11 00:25:19 Watching for changes to Job px-etcd-preinstall-hook with timeout of 5m0s
[kube] 2018/02/11 00:25:19 Add/Modify event for px-etcd-preinstall-hook: ADDED
[kube] 2018/02/11 00:25:19 px-etcd-preinstall-hook: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
[kube] 2018/02/11 00:25:24 Add/Modify event for px-etcd-preinstall-hook: MODIFIED
[tiller] 2018/02/11 00:25:24 warning: Release my-release pre-install portworx-helm/templates/hooks/pre-install/etcd-preinstall-hook.yaml could not complete: Job failed: BackoffLimitExceeded
[tiller] 2018/02/11 00:25:24 failed install perform step: Job failed: BackoffLimitExceeded
harsh@harsh-lnx:~/work/src/github.com/portworx$ kubectl get pods -nkube-system -a | grep preinstall
px-etcd-preinstall-hook-w5dz7    0/1       Error     0          1m
harsh@harsh-lnx:~/work/src/github.com/portworx$ kubectl logs  -nkube-system po/px-etcd-preinstall-hook-w5dz7
Initializing...
Verifying if the provided etcd url is accessible: //192.168.56.70:2379
Response Code: 000
Incorrect ETCD URL provided. It is either not reachable or is incorrect...

$ helm install --name my-release -f ./helm/charts/px/values.yaml ./helm/charts/px
Error: jobs.batch "px-etcd-preinstall-hook" already exists

What you expected to happen:

A retry of helm install should have succeeded.
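
Until the chart cleans up after itself, a manual workaround is to delete the leftover hook job before retrying (assuming the hook runs in kube-system):

kubectl delete job px-etcd-preinstall-hook -n kube-system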

Installing on EKS, what to specify for the etcd endpoint in AWS EKS?

Is this a BUG REPORT or FEATURE REQUEST?:
Clarification
What happened:
The etcd endpoint is a required field; what should be specified when installing on EKS, where the control plane is managed by AWS?
What you expected to happen:
Tried the clusterIP address and the API cluster DNS name; neither works.
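
The values file quoted elsewhere in this tracker exposes an internalKVDB option; a hedged sketch of using it so no external etcd endpoint is needed on EKS:

internalKVDB: true   # run the KVDB inside the cluster instead of pointing at an external etcd
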
How to reproduce it (as minimally and precisely as possible):
Try to install on EKS
Anything else we need to know?:

Environment:

  • Container Orchestrator and version: 1.15
  • Cloud provider or hardware configuration: AWS EKS
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Error: <not>: wrong number of args for not: want 1 got 2

Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT

What happened:
When trying to install using the following command:
helm install --debug --name portworx-release --set etcdEndPoint=etcd:http://$etcdendpoint:2379,clusterName=ncp_portworx,AKSorEKSInstall=true ./portworx_helm/charts/portworx/;
faced this error:
Error: render error in "portworx/templates/portworx-controller.yaml": template: portworx/templates/portworx-controller.yaml:94:13: executing "portworx/templates/portworx-controller.yaml" at <not>: wrong number of args for not: want 1 got 2

What you expected to happen:
The install should complete without errors.

How to reproduce it (as minimally and precisely as possible):
It was done using an out-of-the-box clone of the git repo and the helm command mentioned above.

Anything else we need to know?:
The error was fixed after changing line 94 of portworx/templates/portworx-controller.yaml from

      {{- if not empty .Values.registrySecret }}

to

      {{- if not (empty .Values.registrySecret) }}

Environment:

  • Container Orchestrator and version: kubernetes
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", GitCommit:"6f482974b76db3f1e0f5d24605a9d1d38fad9a2b", GitTreeState:"clean", BuildDate:"2019-03-29T16:15:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: EKS
  • OS (e.g. from /etc/os-release): EKS
  • Kernel (e.g. uname -a): EKS
  • Install tools: helm
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
  • Others: N/A

Support vsphere disk provisioning

Is this a BUG REPORT or FEATURE REQUEST?: Feature request

The portworx helm chart does not currently support setting up vsphere disk provisioning.

Allow user to supply Environment variables to the chart on the command line

Is this a BUG REPORT or FEATURE REQUEST?:

What happened:
Environment variables that are passed into the Portworx specification should be settable via the --set option during helm install.

What you expected to happen:
helm install --debug --set envVars=myENVVAR1=val1, myENVVAR2=val2 ./portworx-helm
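
Commas inside a --set value collide with helm's own list separator, so they would need to be backslash-escaped; alternatively the chart's values.yaml (quoted later in this tracker) documents a ";"-separated form. A hedged sketch of both:

# escape the comma so helm does not split the value
helm install --debug --set envVars="myENVVAR1=val1\,myENVVAR2=val2" ./portworx-helm
# or use the ";"-separated form
helm install --debug --set envVars="myENVVAR1=val1;myENVVAR2=val2" ./portworx-helm
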
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Remove static kubectl path from hooks

BUG REPORT: The pre- and post-delete hooks use /usr/bin/kubectl as a host path. This will fail on setups where kubectl is at a path other than /usr/bin.

Since we are using the kubectl container, we can directly invoke kubectl commands as container args without specifying the kubectl binary path.
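
A sketch of what the hook container could look like with kubectl as the image entrypoint and only args supplied (container name and image tag are illustrative):

containers:
  - name: px-hook-post-delete   # hypothetical name
    image: lachlanevenson/k8s-kubectl:v1.8.7
    args: ["delete", "job", "px-etcd-preinstall-hook", "--namespace", "kube-system"]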

etcd-preinstall job/container needs to be able to mount the etcd cert in case of secure etcd

Is this a BUG REPORT or FEATURE REQUEST?:
BUG report

What happened:
etcd-preinstall-hook fails when using secure etcd

What you expected to happen:
The Helm chart should be able to handle secure etcd. The etcd-preinstall container needs to mount the etcd cert(s) specified in values.yaml.

How to reproduce it (as minimally and precisely as possible):
Install the Helm chart against a secure etcd.
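
A sketch of how the hook pod could mount the etcd certs from a Kubernetes secret; the secret name is an assumption and the mount path mirrors the chart's etcd.certPath default:

containers:
  - name: px-etcd-preinstall-hook
    volumeMounts:
      - name: etcd-certs
        mountPath: /etc/pwx/etcdcerts
        readOnly: true
volumes:
  - name: etcd-certs
    secret:
      secretName: px-etcd-certs   # hypothetical secret holding ca.pem/client.pem/client-key.pem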

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
    docker 1.12, kubernetes (RKE) 1.8.7
  • Cloud provider or hardware configuration:
    VMware ESXi 6.0U2
  • OS (e.g. from /etc/os-release):
    CoreOS
  • Kernel (e.g. uname -a):
    4.13.9
  • Install tools:
  • Others:

helm install doesn't work for k8s 1.7 as the job spec has the invalid field backoffLimit

BUG REPORT:

What happened:

helm install --name my-release -f ./helm/charts/px/values.yaml ./helm/charts/px
Error: error validating "": error validating data: ValidationError(Job.spec): unknown field "backoffLimit" in io.k8s.kubernetes.pkg.apis.batch.v1.JobSpec
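
backoffLimit was only added to JobSpec in Kubernetes 1.8, so the chart would need to gate the field on the cluster version; a hedged Helm template sketch (the limit value is a placeholder):

{{- if semverCompare ">=1.8" .Capabilities.KubeVersion.GitVersion }}
  backoffLimit: 0
{{- end }}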

Environment:

vagrant@k1m:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.12", GitCommit:"3bda299a6414b4866f179921610d6738206a18fe", GitTreeState:"clean", BuildDate:"2017-12-29T08:39:49Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Error when installing on OpenShift

Is this a BUG REPORT or FEATURE REQUEST?:

BUG REPORT

What happened:

Tried installing on OpenShift and received an error, installation did not complete.

helm install --debug my-release --set etcdEndPoint=etcd:http://192.168.70.90:2379,clusterName=$(uuidgen) ./helm/charts/portworx/ --set openshiftInstall=true --set registrySecret=sbu-pipeline

install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /helm/charts/portworx

I0706 10:00:32.149409     662 request.go:645] Throttling request took 1.0930126s, request: GET:https://c108-e.eu-gb.containers.cloud.ibm.com:30469/apis/tekton.dev/v1alpha1?timeout=32s
Error: template: portworx/templates/portworx-ds.yaml:219:23: executing "portworx/templates/portworx-ds.yaml" at <(semverCompare ">=1.9" .Capabilities.KubeVersion.GitVersion) or (.Values.openshiftInstall and semverCompare ">=1.8" .Capabilities.KubeVersion.GitVersion)>: can't give argument to non-function semverCompare ">=1.9" .Capabilities.KubeVersion.GitVersion
helm.go:81: [debug] template: portworx/templates/portworx-ds.yaml:219:23: executing "portworx/templates/portworx-ds.yaml" at <(semverCompare ">=1.9" .Capabilities.KubeVersion.GitVersion) or (.Values.openshiftInstall and semverCompare ">=1.8" .Capabilities.KubeVersion.GitVersion)>: can't give argument to non-function semverCompare ">=1.9" .Capabilities.KubeVersion.GitVersion

What you expected to happen:
I expect the install to be performed.

How to reproduce it (as minimally and precisely as possible):
I did the following:

git clone https://github.com/portworx/helm.git
helm install --debug my-release --set etcdEndPoint=etcd:http://192.168.70.90:2379,clusterName=$(uuidgen) ./helm/charts/portworx/ --set openshiftInstall=true --set registrySecret=sbu-pipeline

Anything else we need to know?:
I believe this can be fixed by amending https://github.com/portworx/helm/blob/master/charts/portworx/templates/portworx-ds.yaml#L219 from:

{{- if (semverCompare ">=1.9" .Capabilities.KubeVersion.GitVersion) or (.Values.openshiftInstall and semverCompare ">=1.8" .Capabilities.KubeVersion.GitVersion) }}

to

{{- if or (semverCompare ">=1.9" .Capabilities.KubeVersion.GitVersion) (and (.Values.openshiftInstall) (semverCompare ">=1.8" .Capabilities.KubeVersion.GitVersion)) }}

I have done this in my environment and the installation was then performed successfully.

Environment:

  • Container Orchestrator and version: OpenShift 4.6.34 on IBM Cloud
  • Cloud provider or hardware configuration: IBM Cloud
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Enable deletion of Portworx Runc containers upon helm delete

Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST
What happened:
Enable deletion of the PX runc containers upon helm delete.
What you expected to happen:
When a user runs helm delete <release-name>, the Portworx runc containers that run on the nodes should also get deleted.
How to reproduce it (as minimally and precisely as possible):
Run helm delete <release-name>; the Kubernetes pods and the applied spec get deleted, but the Portworx runc containers on the nodes are left running.

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

helm chart install on GKE does not install the PVC controller

BUG REPORT: When installing on GKE, helm install should detect GKE using the k8s version and also install the PVC controller. The spec generator at install.portworx.com currently uses the "gke" field in the kubectl version output to detect that the cluster is GKE and includes the PVC controller in the spec response.

What happened:
As the PVC controller does not get installed, PVC creations will fail.
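
Since GKE server versions carry a "-gke" suffix in their GitVersion (e.g. v1.8.7-gke.1), a hedged template-level detection could look like:

{{- if .Capabilities.KubeVersion.GitVersion | contains "gke" }}
# render the portworx-pvc-controller manifests here
{{- end }}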

Helm chart doesn't install correctly for CoreOS

Is this a BUG REPORT or FEATURE REQUEST?:

What happened:

What you expected to happen:
CoreOS installation of the chart fails. The volume mounts are incorrect when using a docker container. Needs validation on Tectonic.

How to reproduce it (as minimally and precisely as possible):
Install the Helm chart on Tectonic.

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Please add CPU and memory resource constraints

Is this a BUG REPORT or FEATURE REQUEST?:
This is a feature request

What happened:

What you expected to happen:
The DaemonSet spec should contain CPU and memory requests and limits for oci_monitor. Similarly, the generated systemd unit files should have memory and CPU constraints for portworx.service.
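
A sketch of the requested constraints on the oci-monitor container; the numbers are placeholders, not recommendations:

resources:
  requests:
    cpu: "1"
    memory: 4Gi
  limits:
    cpu: "2"
    memory: 8Gi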

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
    Kubernetes 1.8.7

  • Cloud provider or hardware configuration:
    VMware ESXi

  • OS (e.g. from /etc/os-release):
    CoreOS and RHEL Atomic

  • Kernel (e.g. uname -a):
    4.13.9 (CoreOS)
    3.10 (RHEL Atomic)

  • Install tools:

  • Others:

Move preinstall hook to a portworx repository

Is this a BUG REPORT or FEATURE REQUEST?:
BUG

What happened:
Move the pre-install hook container to a Portworx repository.

What you expected to happen:
Move the container from hrishi/px-etcd-preinstalll-hook to portworx/etcd-preinstall-hook.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Update default values to more recent releases of Portworx

Is this a BUG REPORT or FEATURE REQUEST?: minor update

What happened: versions listed in the values.yaml file for Portworx, Stork and Lighthouse should be updated to newer releases

What you expected to happen: updated default values in the values.yaml file

How to reproduce it (as minimally and precisely as possible): this is not a bug, so there is nothing to reproduce

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Multiple drives do not get recognised when passed via command line.

Is this a BUG REPORT or FEATURE REQUEST?:

What happened:
helm install --debug --name portworx-icp --set clusterName=portworx-test,etcdEndPoint=etcd:http://10.50.88.130:4001,drive='/dev/sdi,/dev/sdj,/dev/sdk'

fails with the error
Error: failed parsing --set data: key "/dev/sdj" has no value (cannot end with ,)

What you expected to happen:
The drives should be parsed and used in the spec generation.
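
As with any comma-containing --set value, helm needs the commas backslash-escaped so they are not treated as key separators; a hedged form of the same command:

helm install --debug --name portworx-icp \
  --set clusterName=portworx-test,etcdEndPoint=etcd:http://10.50.88.130:4001 \
  --set drive="/dev/sdi\,/dev/sdj\,/dev/sdk"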

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

If etcd server has self signed cert, but does not require client certs, the `pre-install` hook fails

Is this a BUG REPORT or FEATURE REQUEST?: BUG Report

What happened: If etcd server has self signed cert, but does not require client certs, the pre-install hook fails.

What you expected to happen: The pre-install hook should pass

How to reproduce it (as minimally and precisely as possible):

  • Run an etcd server with self signed TLS cert
  • Create the secret with only ca-cert.pem. Leave client.pem & client-key.pem blank
  • Try to deploy the helm chart

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4+IKS", GitCommit:"27c949c1ae36d3c99246bde7ef68ff8bd3bb4ad3", GitTreeState:"clean", BuildDate:"2019-01-09T10:08:47Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: IBM Kubernetes Service
  • OS (e.g. from /etc/os-release):
$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • Kernel (e.g. uname -a):
$ uname -a
Linux VirtualBox 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:

RBAC issue in PVC controller for list and watch configmap

Is this a BUG REPORT or FEATURE REQUEST?:
Bug
What happened:

The Portworx PVC controller is unable to list and watch ConfigMaps; see the error below for reference.

E0831 13:10:36.287111 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:kube-system:portworx-pvc-controller-account" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
0831 13:12:49.796335 1 reflector.go:309] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)

What you expected to happen:

The controller should have access to list and watch ConfigMaps.
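
A sketch of the extra Role the portworx-pvc-controller-account would need in kube-system (the Role name is an assumption; it would be attached via a matching RoleBinding):

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: portworx-pvc-controller-configmap-role   # hypothetical
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]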

How to reproduce it (as minimally and precisely as possible):
Set up Portworx from the master helm branch and tail the logs of portworx-pvc-controller.

Anything else we need to know?:

Environment:

  • Container Orchestrator and version: Kubernetes version 1.17.9
  • Cloud provider or hardware configuration: Cloud
  • OS (e.g. from /etc/os-release): Linux 16.04
  • Kernel (e.g. uname -a): 4.15.0-1092-azure #102~16.04.1-Ubuntu SMP Tue Jul 14 20:28:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:

helm install fails due to invalid kubectl image

BUG REPORT

What happened:
The Helm chart constructs the kubectl container image tag incorrectly. Below, it was trying to pull
docker.io/lachlanevenson/k8s-kubectl:v1.8.7-rancher1, which doesn't exist.

Events:
  Type     Reason                 Age                 From                  Message
  ----     ------                 ----                ----                  -------
  Normal   Scheduled              10m                 default-scheduler     Successfully assigned px-hook-preinstall-clean-djmb2 to worker2-1-1
  Normal   SuccessfulMountVolume  10m                 kubelet, worker2-1-1  MountVolume.SetUp succeeded for volume "tiller-token-4pnhd"
  Warning  Failed                 10m                 kubelet, worker2-1-1  Failed to pull image "lachlanevenson/k8s-kubectl:v1.8.7-rancher1": rpc error: code = Unknown desc = Tag v1.8.7-rancher1 not found in repository docker.io/lachlanevenson/k8s-kubectl
  Normal   SandboxChanged         9m (x18 over 10m)   kubelet, worker2-1-1  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                9m (x2 over 10m)    kubelet, worker2-1-1  pulling image "lachlanevenson/k8s-kubectl:v1.8.7-rancher1"
  Warning  FailedSync             20s (x43 over 10m)  kubelet, worker2-1-1  Error syncing pod

Environment: RKE

  • Container Orchestrator and version: RKE 1.8.7

Can't install GKE 1.10.6 /opt/pwx Read-only

Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT

What happened: oci-monitor container fails to start

What you expected to happen: oci-monitor container to start

How to reproduce it (as minimally and precisely as possible):
GKE
ContainerOS
Kubernetes 1.10.6
ETCD

Any help to get this installed would be appreciated.

Auto journal creation

Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST

What happened:
Auto-creation of the journal does not work when nothing is set here:
https://github.com/portworx/helm/blob/master/charts/portworx/values.yaml#L35

What you expected to happen:
If I don't specify anything, I would expect the journal to be auto-created. If not, then I need an option to create the journal automatically on cloud drives.

How to reproduce it (as minimally and precisely as possible):
Use value defaults

Node label px/enabled=false constraint should apply always

Is this a BUG REPORT or FEATURE REQUEST?:
BUG

What happened:
Default helm-installation of PX does not configure the px/enabled=false node constraint.

What you expected to happen:
The px/enabled=false constraint should apply unconditionally, regardless of the flavor of installation.

How to reproduce it (as minimally and precisely as possible):
Use a straightforward installation on a Kubernetes cluster (openshiftInstall=false and deployOnMaster=false).
Inspect the generated YAML and note that the px/enabled=false constraint is not added.

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Portworx Pod Fails Looking for Mounted Directories that Don't Exist

BUG REPORT

What happened:
After deployment, the main portworx pod keeps crashing with these errors below. It is looking for mounted volumes that I don't think it should be looking for. Why does it use the oci-monitor Docker image when I chose no monitoring and no OCI?

time="2020-08-18T19:53:22Z" level=info msg="Input arguments: /px-oci-mon -s /dev/sdc -k etcd:http://server1:2379,http://server2:2379 -c ebdc-poc -secret_type k8s -ca /etc/pwx/etcdcerts/ca.pem -cert /etc/pwx/etcdcerts/client.pem -key /etc/pwx/etcdcerts/client-key.pem -x kubernetes"
time="2020-08-18T19:53:22Z" level=info msg="Updated arguments: /px-oci-mon -s /dev/sdc -k etcd:http://server1:2379,http://server2:2379 -c ebdc-poc -secret_type k8s -ca /etc/pwx/etcdcerts/ca.pem -cert /etc/pwx/etcdcerts/client.pem -key /etc/pwx/etcdcerts/client-key.pem -x kubernetes"
time="2020-08-18T19:53:22Z" level=info msg="OCI-Monitor computed version v2.5.1-g1e7eecc3-dirty"
time="2020-08-18T19:53:22Z" level=error msg="Following dirs/files must be mounted into this container: /opt/pwx, /etc/systemd/system, /host_proc/1/ns"
time="2020-08-18T19:53:22Z" level=info msg="> run-host: /bin/sh -c systemctl show-environment"
time="2020-08-18T19:53:22Z" level=info msg="> run-host: /bin/sh -c initctl version"
time="2020-08-18T19:53:22Z" level=error msg="No suitable service controller for this platform and/or mounts configuration"
time="2020-08-18T19:53:22Z" level=error msg="Could not init service controls: Could not initialize service controller"

What you expected to happen:
Since I chose Docker type of deployment and not OCI in my values file, I expected that these mounts wouldn't be necessary:

/opt/pwx, /etc/systemd/system, /host_proc/1/ns

How to reproduce it (as minimally and precisely as possible):
My answer file is like this:

# Please uncomment and specify values for these options as per your requirements.

deploymentType: docker                     # accepts "oci" or "docker"
imageType: docker
imageVersion: 2.5.1                     # Version of the PX Image.

openshiftInstall: false                 # Defaults to false for installing Portworx on Openshift .
isTargetOSCoreOS: false                 # Is your target OS CoreOS? Defaults to false.
pksInstall: false                       # installation on PKS (Pivotal Container Service)
EKSInstall: false                     # installation on EKS.
AKSInstall: false                      # installation on AKS
etcdEndPoint: etcd:http://server1:2379;http://server2:2379
clusterName: ebdc-poc
usefileSystemDrive: false             # true/false Instructs PX to use an unmounted Drive even if it has a filesystem.
usedrivesAndPartitions: true          # Defaults to false. Change to true and PX will use unmounted drives and partitions.
secretType: k8s                       # Defaults to k8s, but can be kvdb/k8s/aws-kms/vault/ibm-kp. It is autopopulated to ibm-kp if the environment is IKS.
drives: /dev/sdc
dataInterface: none                   # Name of the interface <ethX>
managementInterface: none             # Name of the interface <ethX>
envVars: none                         # NOTE: This is a ";" separated list of environment variables. For eg: MYENV1=myvalue1;MYENV2=myvalue2

stork: true                           # Use Stork https://docs.portworx.com/scheduler/kubernetes/stork.html for hyperconvergence.
storkVersion: 2.4.0

customRegistryURL:
registrySecret:

lighthouse: true
lighthouseVersion: 2.0.7
lighthouseSyncVersion: 2.0.7
lighthouseStorkConnectorVersion: 2.0.7
monitoring: false

journalDevice:

deployOnMaster: true
csi: false

internalKVDB: false                   # internal KVDB

etcd:
  credentials: none:none              # Username and password for ETCD authentication in the form user:password
  certPath: /etc/pwx/etcdcerts
  ca: /etc/pwx/etcdcerts/ca.pem
  cert: /etc/pwx/etcdcerts/client.pem
  key: /etc/pwx/etcdcerts/client-key.pem
consul:
  token: none                         # ACL token value used for Consul authentication. (example: 398073a8-5091-4d9c-871a-bbbeb030d1f6)

tolerations:                          # Add tolerations
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

serviceAccount:
  hook:
    create: true
    name:

clusterToken:
  create: false                    # Create cluster token
  secretName: px-vol-encryption    # Name of kubernetes secret to be created. Requires clusterToken.create to be true.
  serviceAccountName: px-create-cluster-token # Service account name to use for post-install hook to create cluster token

replicas: 2

#requirePxEnabledTag: true               # if set to true, portworx will only install on nodes with px/enabled: true label. Not required in most scenarios.

Anything else we need to know?:

Environment:

  • Container Orchestrator and version: Docker Community 18.09.7
  • Cloud provider or hardware configuration: Local virtual machines
  • OS (e.g. from /etc/os-release): RHEL 7
  • Kernel (e.g. uname -a): 3.10.0-1062
  • Install tools: Helm Chart

Portworx monitoring charts don't work with new releases of Kubernetes

Is this a BUG REPORT or FEATURE REQUEST?: BUG

What happened: Portworx Prometheus fails to start when deploying Portworx on Kubernetes 1.18.x

What you expected to happen: Expected Prometheus to run when deploying Portworx.

How to reproduce it (as minimally and precisely as possible): Make sure you have "monitoring: true" in your values.yaml file and deploy Portworx on Kubernetes 1.18.x.

Anything else we need to know?:

Environment:

  • Container Orchestrator and version: N/A
  • Cloud provider or hardware configuration: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Kernel (e.g. uname -a): N/A
  • Install tools: N/A
  • Others: N/A

pre-install etcd check hook fails on etcd cluster with only ca cert

BUG REPORT:

The pre-install etcd check hook fails when run against an etcd cluster that only has a CA cert. The etcd uses username and password auth, hence the etcd cert and key files are not present.

What happened:

Pull #78 introduced a regression where "none" values were passed to etcdStatus.sh for the etcd key and cert. The script then tried to use these "none" files and failed the status check.

What you expected to happen:
The etcd status check should skip the key and cert arguments when those files are absent.
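
A minimal sketch of the guard the status script presumably needs, skipping the cert arguments when the values are absent or literally "none" (variable names are hypothetical):

CERT_ARGS=""
if [ -n "$ETCD_CERT" ] && [ "$ETCD_CERT" != "none" ]; then
    CERT_ARGS="--cert $ETCD_CERT --key $ETCD_CERT_KEY"
fi
curl --cacert "$ETCD_CA" $CERT_ARGS "$ETCD_URL/version"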

How to reproduce it (as minimally and precisely as possible):

Create a Compose etcd with username/password auth and a k8s secret containing just the CA file.

Environment:

  • Container Orchestrator and version: Kubernetes
  • Cloud provider or hardware configuration: IBM cloud

helm chart did not recognize/handle etcdEndpoint passed into "helm install"

Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT

What happened:
Executed the command below:
helm install --name=px --namespace kube-system --set deploymentType=oci,imageVersion=1.2.16,etcdEndpoint=etcd:https://kube-etcd.kube-system:2379,clusterName=myCluster px-1.0.0.tgz
Failed with:
Error: render error in "px/templates/portworx-ds.yaml": template: px/templates/portworx-ds.yaml:93:18: executing "px/templates/portworx-ds.yaml" at <required "A valid ET...>: error calling required: A valid ETCD url in the format etcd:http:// is required. Verify that the key is correct and there isnt any typo in specifying that, also ensure it is accessible from all node of your kubernetes cluster

What you expected to happen:
helm install should work
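
One likely cause, judging by the commands used elsewhere in this tracker: --set keys are case-sensitive, and the chart's key is spelled etcdEndPoint (capital P), not etcdEndpoint. The same command with the corrected key:

helm install --name=px --namespace kube-system --set deploymentType=oci,imageVersion=1.2.16,etcdEndPoint=etcd:https://kube-etcd.kube-system:2379,clusterName=myCluster px-1.0.0.tgz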

How to reproduce it (as minimally and precisely as possible):
Run the command mentioned above

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
    docker 1.12, helm 2.7, kubernetes RKE-1.8.7
  • Cloud provider or hardware configuration:
    VMware ESXi 6.0 Update 2
  • OS (e.g. from /etc/os-release):
    CoreOS
  • Kernel (e.g. uname -a):
    4.13.9
  • Install tools:
  • Others:

Cleanup previously failed jobs upon helm install

Is this a BUG REPORT or FEATURE REQUEST?:

What happened:
If the preinstall hook fails it stays in the Error state. Subsequent installs are blocked stating the preinstall hook is already present. The user needs to manually delete the previously failed hook.

What you expected to happen:
The install process should be idempotent for the user; previously failed hook jobs should be cleaned up so reinstallation can proceed.
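
Helm's hook annotations can express this directly; a sketch of the annotation the hook job template could carry so a failed run is replaced on the next install:

annotations:
  "helm.sh/hook": pre-install
  "helm.sh/hook-delete-policy": before-hook-creation,hook-failed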

How to reproduce it (as minimally and precisely as possible):
Run helm install with an erroneous etcd URL; this causes the pre-install hook to fail.
Run helm install again with the correct etcd URL; the install errors out stating the preinstall hook already exists.

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Install autopilot via helm chart

Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST

What happened:
Autopilot is not installed after the Helm chart is installed.

What you expected to happen:
Autopilot should either be installed by default, or at least be configurable to install, e.g. autopilot_enabled: true.

How to reproduce it (as minimally and precisely as possible):
Install helm chart

How DaemonSet portworx-api works?

https://github.com/portworx/helm/blob/master/charts/portworx/templates/portworx-ds.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: portworx-api
  namespace: kube-system
  labels:
    name: portworx-api
spec:
  selector:
    matchLabels:
      name: portworx-api
  minReadySeconds: 0
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100%
  template:
    metadata:
      labels:
        name: portworx-api
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: px/enabled
                    operator: {{ template "px.affinityPxEnabledOperator" . }}
                    values:
                      -  {{ template "px.affinityPxEnabledValue" . }}
                  - key: node-role.kubernetes.io/master
                    operator: DoesNotExist
      hostNetwork: true
      hostPID: false
      containers:
        - name: portworx-api
          image: "{{ template "px.getPauseImage" . }}/pause:3.1"
          imagePullPolicy: {{ $pullPolicy }}
          readinessProbe:
            periodSeconds: 10
            httpGet:
              host: 127.0.0.1
              path: /status
              port: 9001
      restartPolicy: Always
      serviceAccountName: px-account

I don't understand how this works. As far as I can see, there is only one pause container, which does nothing. It doesn't listen on port 9001, so the readinessProbe always fails.
Could you explain how it should work?

px-backup fails for non-SSL backup

Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT
What happened:
Added S3 backup storage without SSL (with the "disable SSL" checkbox checked); it validated successfully, but backups are failing.

" Error uploading resources: blob (key "backup/bkup-6-435607e/6ad9dfc6-88db-4a3f-bb23-72984c9737a7/namespaces.json") (code=Unknown): RequestError: send request failed caused by: Put "https://bastion.baremetal.xyz:8020/portworx/dbackup/bkup-6-435607e/6ad9dfc6-88db-4a3f-bb23-72984c9737a7/namespaces.json": http: server gave HTTP response to HTTPS client "
What you expected to happen:
With SSL disabled and validated in the backup location, the backup should complete instead of going via HTTPS and failing.
How to reproduce it (as minimally and precisely as possible):
Create an S3 bucket (in our case Scality CloudServer), go to Add Cluster in px-backup, add the backup location and credentials in cloud settings, and perform a backup of any object.
Anything else we need to know?:
Is there a way to add self-signed certs to the px-backup config so backups can be performed using them?
Environment:

  • Container Orchestrator and version: openshift 4.9
    oc version
    Client Version: 4.9.13
    Server Version: 4.9.13
    Kubernetes Version: v1.22.3+e790d7f

  • Cloud provider or hardware configuration: Azure

  • OS (e.g. from /etc/os-release): Coreos

  • Kernel (e.g. uname -a):

  • Install tools: helm

  • Others: px-backup app version 2.2.0, without a Portworx cluster

Develop a Helm chart which would ease the deployment of Portworx

Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST
What happened:
Develop a Helm chart which would ease the deployment of Portworx on Kubernetes.
What you expected to happen:
Should be able to deploy Portworx using the helm client.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

helm install did not validate etcd endpoint correctness

BUG REPORT: I ran the helm install command with an incorrect etcd endpoint. The install should have failed in the pre-install hook that validates the etcd endpoint.

What happened:

Ran command:

helm install --debug --name my-release --set etcdEndPoint=etcd:http://192.168.70.90:2379,clusterName=mycluster5 ./helm/charts/px/

There is no etcd at the above endpoint. Still, the Portworx pods went ahead and started with the incorrect etcd.

kubectl get pods -n kube-system -l name=portworx
NAME             READY     STATUS    RESTARTS   AGE
portworx-nbthc   0/1       Running   0          1m
portworx-snrfl   0/1       Running   0          1m
portworx-xc2tc   0/1       Running   0          1m

journalctl on the nodes show

Feb 08 02:36:25 k2n1 px-runc[23981]: time="2018-02-08 02:36:25" level=error msg="Could not load config file /etc/pwx/config.json due to: Error in obtaining etcd version: Get http://192.168.70.90:2379/version: dial tcp 192.168.70.90:2379: i/o timeout.  Please visit http://docs.portworx.com for more information."`

What you expected to happen:
The pre-install hook should have failed and blocked the install.

Environment:

  • Container Orchestrator and version: Kubernetes 1.8.7
  • Cloud provider or hardware configuration: Virtualbox
  • OS (e.g. from /etc/os-release): Centos

image pull issue for some images which are injected on runtime in On-premise Air-gapped Environment

BUG REPORT

What happened:

Image pull issues occur for some images, like 'docker.io/portworx/px-enterprise:2.7.4', 'quay.io/prometheus/alertmanager:v0.21.0', 'quay.io/prometheus/prometheus' and 'grafana/grafana:5.3.3'. Even after configuring "customRegistryURL", the locations of these images, which are injected at runtime, are not overridden in the air-gapped environment.

What you expected to happen:

Image location should be overridden after configuring "customRegistryURL".

How to reproduce it (as minimally and precisely as possible):

In the Helm values.yaml, set a value for "customRegistryURL".

Anything else we need to know?:

Environment:

  • Container Orchestrator and version: containerd v1.3.9

  • Cloud provider or hardware configuration: On-premise Air-gapped Environment

  • OS (e.g. from /etc/os-release):

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

  • Kernel (e.g. uname -a):
    Linux xxx-worker-rgu00u.xxx 3.10.0-1160.31.1.el7.x86_64 #1 SMP Thu Jun 10 13:32:12 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools: Installed Portworx through Helm

  • Others:

px-backup chart installation fails with error `couldn't find key OIDC_CLIENT_SECRET in Secret px-backup/pxc-backup-secret`

Is this a BUG REPORT or FEATURE REQUEST?:

BUG REPORT

What happened:

When installing the chart with generated values from PX-Central, the pxcentral-frontend and pxcentral-backend pods fail with the error: CreateContainerConfigError

The pod description shows the error message:

 Error: couldn't find key OIDC_CLIENT_SECRET in Secret px-backup/pxc-backup-secret

What you expected to happen:

Chart deploys normally with the provided values.

How to reproduce it (as minimally and precisely as possible):

Install the chart with the generated values-px-backup.yaml from PX-Central, with the command:

helm install px-backup portworx/px-backup --namespace px-backup --create-namespace --version 1.2.3 -f values-px-backup.yaml
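
A possible manual workaround, taking the secret name and key straight from the error message (supply your own OIDC client secret; if the secret already exists without the key, it would need to be patched instead of created):

kubectl -n px-backup create secret generic pxc-backup-secret \
  --from-literal=OIDC_CLIENT_SECRET=<your-oidc-client-secret>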

Anything else we need to know?:

Environment:

  • Container Orchestrator and version: EKS 1.19
  • Cloud provider or hardware configuration: AWS

Helm delete gives an error skipping delete; object not found, skipping delete

Is this a BUG REPORT or FEATURE REQUEST?:

What happened:
Helm delete deletes the objects but still gives an error
Tiller logs:
2018/02/02 10:59:32 uninstall: Failed deletion of "braided-frog": no objects visited [tiller] 2018/02/02 10:59:32 error: object not found, skipping delete
All the resources do get deleted however.

What you expected to happen:
The error message shouldn't occur upon successful deletion of the resources from Kubernetes.

Similar to : helm/helm#1807

How to reproduce it (as minimally and precisely as possible):
Run helm delete <release-name>.
Verify that the objects get deleted but you still get an error message as stated above.

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

When `stork=true` in the helm chart, the Stork pods should wait for Portworx to come up.

Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST

What happened:
The Stork pods start running and log errors about being unable to connect to Portworx. Instead, it would be better if they waited for Portworx to be in the READY state.

What you expected to happen:

Stork should wait for Portworx to be READY.
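
One way to express the wait, sketched as an init container polling the portworx-service status endpoint (the image and endpoint are assumptions):

initContainers:
  - name: wait-for-portworx   # hypothetical
    image: curlimages/curl
    command: ["sh", "-c", "until curl -sf http://portworx-service.kube-system:9001/status; do echo waiting for portworx; sleep 5; done"]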

How to reproduce it (as minimally and precisely as possible):

helm install --debug --set stork=true, etcdEndpoint=etcd:http://192.168.70.90:2379 ./portworx-helm

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

pre-delete / post-delete hooks do not complete

The pre-delete and post-delete tasks do not complete, as the service account px-hook is not created.

  Warning  FailedCreate  16s (x10 over 4m16s)  job-controller  Error creating: pods "px-hook-predelete-nodelabel-" is forbidden: error looking up service account kube-system/px-hook: serviceaccount "px-hook" not found

Just to confirm, .Values.serviceAccount.hook.create is true, and I'm running helm v3.1.0.
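
A manual workaround until the chart creates it, reconstructing the ServiceAccount named in the error:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: px-hook
  namespace: kube-system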

To reproduce:

  • Install portworx charts
  • Run uninstall with the following command (replacing release_name and namespace):
    helm uninstall _release_name_ -n _namespace_ --debug

Please provide the capability to pull from a private Docker registry

Is this a BUG REPORT or FEATURE REQUEST?:
Feature request

What happened:
In disconnected / air-gapped environment setups and installs, the container images are hosted in a private Docker registry.

What you expected to happen:
The Helm chart should be capable of handling a private Docker registry.
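
The chart values referenced elsewhere in this tracker (customRegistryURL, registrySecret) suggest the shape this took; a hedged values.yaml example:

customRegistryURL: registry.example.com/portworx   # hypothetical private registry
registrySecret: regcred                            # pull secret created beforehand in kube-system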

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Container Orchestrator and version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
