
k8s-pvc-tagger

NOTE: This project was originally named k8s-aws-ebs-tagger but was renamed to k8s-pvc-tagger as the scope has expanded to more than aws ebs volumes.

A utility to tag PVC volumes based on the PVC's k8s-pvc-tagger/tags annotation


The k8s-pvc-tagger watches for new PersistentVolumeClaims and, when new AWS EBS/EFS volumes are created, adds tags based on the PVC's k8s-pvc-tagger/tags annotation to the created EBS/EFS volume. Support for other cloud providers and volume types is coming soon.

How to set tags

cmdline args

--default-tags - A json or csv encoded key/value map of the tags to set by default on EBS/EFS Volumes. Values can be overwritten by the k8s-pvc-tagger/tags annotation.

--tag-format - Either json or csv for the format the k8s-pvc-tagger/tags and --default-tags are in.

--allow-all-tags - Allow all tags to be set via the PVC, even those used by the EBS/EFS controllers. Use with caution!

--copy-labels - A csv encoded list of label keys from the PVC that will be used to set tags on Volumes. Use * to copy all labels from the PVC.
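Since --default-tags and the annotation accept either json or csv, a small sketch may help show how the two formats could be decoded into the same key/value map. This is illustrative only; parseTags is a hypothetical helper, not the project's actual function.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// parseTags decodes a json- or csv-formatted tag string into a map.
// Hypothetical helper mirroring the --tag-format behavior described above.
func parseTags(raw, format string) (map[string]string, error) {
	tags := map[string]string{}
	switch format {
	case "json":
		if err := json.Unmarshal([]byte(raw), &tags); err != nil {
			return nil, err
		}
	case "csv":
		for _, pair := range strings.Split(raw, ",") {
			kv := strings.SplitN(pair, "=", 2)
			if len(kv) != 2 {
				return nil, fmt.Errorf("invalid csv pair: %q", pair)
			}
			tags[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
		}
	default:
		return nil, fmt.Errorf("unknown tag format: %q", format)
	}
	return tags, nil
}

func main() {
	jsonTags, _ := parseTags(`{"me": "touge"}`, "json")
	csvTags, _ := parseTags("me=touge,team=frontend", "csv")
	fmt.Println(jsonTags, csvTags)
}
```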

Annotations

k8s-pvc-tagger/ignore - When this annotation is set (to any value), the PVC is ignored and no tags are added to it

k8s-pvc-tagger/tags - A json encoded key/value map of the tags to set on the EBS/EFS Volume (in addition to the --default-tags). It can also be used to override the values set in the --default-tags

NOTE: Until version v1.2.0 the legacy annotation prefix of aws-ebs-tagger will continue to be supported for aws-ebs volumes ONLY.

Examples

  1. The cmdline arg --default-tags={"me": "touge"} and no annotation will set the tag me=touge

  2. The cmdline arg --default-tags={"me": "touge"} and the annotation k8s-pvc-tagger/tags: | {"me": "someone else", "another tag": "some value"} will create the tags me=someone else and another tag=some value on the EBS/EFS Volume

  3. The cmdline arg --default-tags={"me": "touge"} and the annotation k8s-pvc-tagger/ignore: "" will not set any tags on the EBS/EFS Volume

  4. The cmdline arg --default-tags={"me": "touge"} and the annotation k8s-pvc-tagger/tags: | {"cost-center": "abc", "environment": "prod"} will create the tags me=touge, cost-center=abc and environment=prod on the EBS/EFS Volume

  5. The cmdline arg --copy-labels '*' will create a tag from each label on the PVC, with the exception of those used by the controllers unless --allow-all-tags is specified.

  6. The cmdline arg --copy-labels 'cost-center,environment' will copy the cost-center and environment labels from the PVC onto the cloud volume.
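The precedence in examples 1-4 can be sketched in a few lines: start from the --default-tags map, then overlay the tags decoded from the k8s-pvc-tagger/tags annotation so that annotation values win. The mergeTags helper below is illustrative, not the project's actual code.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergeTags overlays annotation-provided tags on top of default tags,
// so a value in the annotation overrides the same key in the defaults.
func mergeTags(defaults map[string]string, annotation string) map[string]string {
	merged := map[string]string{}
	for k, v := range defaults {
		merged[k] = v
	}
	annTags := map[string]string{}
	if err := json.Unmarshal([]byte(annotation), &annTags); err == nil {
		for k, v := range annTags {
			merged[k] = v
		}
	}
	return merged
}

func main() {
	defaults := map[string]string{"me": "touge"}
	// As in example 2 above: the annotation overrides "me" and adds "another tag".
	fmt.Println(mergeTags(defaults, `{"me": "someone else", "another tag": "some value"}`))
}
```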

ignored tags

The following tags are ignored by default

  • kubernetes.io/*
  • KubernetesCluster
  • Name
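The ignore list above combines exact keys with one wildcard entry. A minimal sketch of such a check, assuming kubernetes.io/* means a simple prefix match (the function name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// isIgnoredTag reports whether a tag key falls in the default ignore
// list: anything under kubernetes.io/, plus two exact keys.
func isIgnoredTag(key string) bool {
	if strings.HasPrefix(key, "kubernetes.io/") {
		return true
	}
	switch key {
	case "KubernetesCluster", "Name":
		return true
	}
	return false
}

func main() {
	fmt.Println(isIgnoredTag("kubernetes.io/created-for/pvc/name")) // true
	fmt.Println(isIgnoredTag("cost-center"))                        // false
}
```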

Tag Templates

Tag values can be Go templates using values from the PVC's Name, Namespace, Annotations, and Labels.

Some examples:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: touge-test
  namespace: touge
  labels:
    TeamID: "Frontend"
  annotations:
    CostCenter: "1234"
    k8s-pvc-tagger/tags: |
      {"Owner": "{{ .Labels.TeamID }}-{{ .Annotations.CostCenter }}"}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-1
  namespace: my-app
  annotations:
    k8s-pvc-tagger/tags: |
      {"OwnerID": "{{ .Namespace }}/{{ .Name }}"}
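Since tag values are Go templates, the first example above can be reproduced directly with the standard library's text/template package. The pvcMeta struct and renderTag helper below are illustrative names, assuming the template context exposes Name, Namespace, Annotations, and Labels as documented.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// pvcMeta mirrors the fields available to tag templates.
type pvcMeta struct {
	Name        string
	Namespace   string
	Annotations map[string]string
	Labels      map[string]string
}

// renderTag executes a tag-value template against a PVC's metadata.
func renderTag(tmpl string, pvc pvcMeta) (string, error) {
	t, err := template.New("tag").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, pvc); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	pvc := pvcMeta{
		Name:        "touge-test",
		Namespace:   "touge",
		Labels:      map[string]string{"TeamID": "Frontend"},
		Annotations: map[string]string{"CostCenter": "1234"},
	}
	out, _ := renderTag("{{ .Labels.TeamID }}-{{ .Annotations.CostCenter }}", pvc)
	fmt.Println(out) // Frontend-1234
}
```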

Multi-cloud support

Currently supported clouds: AWS, GCP.

Only one mode is active at a given time. Specify the cloud k8s-pvc-tagger is running in with the --cloud flag: either aws or gcp.

If not specified, --cloud aws is the default mode.

NOTE: GCP labels have constraints that do not match the constraints allowed by Kubernetes labels. When running in GCP mode, labels will be modified to fit GCP's constraints, if necessary. The main difference is that . and / are not allowed, so a label such as dom.tld/key will be converted to dom-tld_key.
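The dom.tld/key to dom-tld_key conversion suggests a simple character mapping: . becomes - and / becomes _. A minimal sketch under that assumption (sanitizeGCPLabelKey is a hypothetical name; the real implementation may handle more constraints, such as length limits or uppercase characters):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeGCPLabelKey rewrites characters GCP label keys disallow:
// '.' becomes '-' and '/' becomes '_'.
func sanitizeGCPLabelKey(key string) string {
	key = strings.ReplaceAll(key, ".", "-")
	return strings.ReplaceAll(key, "/", "_")
}

func main() {
	fmt.Println(sanitizeGCPLabelKey("dom.tld/key")) // dom-tld_key
}
```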

Installation

AWS IAM Role

You need to create an AWS IAM Role that can be used by k8s-pvc-tagger. For EKS clusters, an IAM Role for Service Accounts should be used instead of using an AWS access key/secret. For non-EKS clusters, I recommend using a tool like kube2iam. An example policy is in examples/iam-role.json.

GCP Service Account

You need a GCP Service Account (GSA) that can be used by k8s-pvc-tagger. For GKE clusters, Workload Identity should be used instead of a static JSON key.

It is recommended you create a custom IAM role for use by k8s-pvc-tagger. The permissions needed are:

  • compute.disks.get
  • compute.disks.list
  • compute.disks.setLabels

An example Terraform resource is in examples/gcp-custom-role.tf.

Or, with gcloud:

gcloud iam roles create CustomDiskRole \
    --project=<your-project-id> \
    --title="k8s-pvc-tagger" \
    --description="Custom role to manage disk permissions" \
    --permissions="compute.disks.get,compute.disks.list,compute.disks.setLabels" \
    --stage="GA"

Install via helm

helm repo add mtougeron https://mtougeron.github.io/helm-charts/
helm repo update
helm install k8s-pvc-tagger mtougeron/k8s-pvc-tagger

Container Image

Images are available on the GitHub Container Registry and DockerHub. Containers are published for linux/amd64 & linux/arm64.

The container images are signed with sigstore/cosign and can be verified by running COSIGN_EXPERIMENTAL=1 cosign verify ghcr.io/mtougeron/k8s-pvc-tagger:<tag>

Licensing

This project is licensed under the Apache V2 License. See LICENSE for more information.

k8s-pvc-tagger's People

Contributors

dependabot[bot], dinhkim, dol3y, jantzenallphin, joemiller, khartahk, liorfranko, mberga14, mtougeron, wadhwakabir, yurrriq


k8s-pvc-tagger's Issues

Cannot access PVCs in namespaces in watchNamespace

Describe the bug
First of all, thank you for this! Works great with a minor issue I noticed when using the helm chart.

When setting watchNamespace to a namespace different from where the helm-generated manifests are, the tagger pods can't access the PVCs. For example, I deploy the resources in ns1 and set watchNamespace: "monitoring". This results in errors in the tagger pod logs:

k8s-aws-ebs-tagger-bdd94dd85-t8xqk k8s-aws-ebs-tagger E0930 01:04:41.988660       1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:serviceaccount:ns1:k8s-aws-ebs-tagger" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "monitoring"

I believe it just lacks rolebindings in the target namespaces in watchNamespace.

To Reproduce
Steps to reproduce the behavior:

  1. Set watchNamespace: monitoring in chart values.
  2. Deploy resources to ns1.
  3. Print logs of k8s-aws-ebs-tagger pods and see the error above.

Expected behavior
k8s-aws-ebs-tagger pods should be able to access all PVCs in target namespaces in watchNamespace.

Additional context
I'm not a Go developer but it looks like this is purely helm chart-related which I'm familiar with. I could open a PR for this if needed.

Tag EFS access Points in aws for PVC created using EFS storage class

Is your feature request related to a problem? Please describe.
The tool currently tags PVCs created in AWS using the EBS storage class; functionality is needed to tag PVCs created using the EFS storage class.

Describe the solution you'd like
Using the existing Informer that is watching all PVCs, add a condition for EFS, fetch the EFS access point details from the persistent volume, and tag that access point using the same annotation logic.

Describe alternatives you've considered
Rename the repo to k8s-aws-pvc-tagger for generic use

Additional context
I have made changes in my fork project

Can't use aws-ebs-tagger/tags annotation got "map", expected "string"

Hi
I have a problem: I can't use the aws-ebs-tagger/tags annotation. This is my PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    aws-ebs-tagger/tags: {"me": "someone else", "another tag": "some value"}
  name: foo
spec:
  storageClassName: gp3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

error: error validating "pvc.yaml": error validating data: ValidationError(PersistentVolumeClaim.metadata.annotations.aws-ebs-tagger/tags): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false

It follows the example here:
https://grepmymind.com/introducing-the-k8s-aws-ebs-tagger-3ec2502cf40e
Maybe something changed?

Here's the doc:
https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/

k8s-pvc-tagger crashing

Describe the bug
Getting the following error after a PVC has been deleted and recreated:

E0117 23:07:54.796497       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 69 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1c1ace0?, 0x32f11e0})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00013b800?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1c1ace0, 0x32f11e0})
	/usr/local/go/src/runtime/panic.go:838 +0x207
main.processPersistentVolumeClaim(0xc0001694d0)
	/build/kubernetes.go:367 +0x31f
main.watchForPersistentVolumeClaims.func1({0x1f14a20?, 0xc0001694d0})
	/build/kubernetes.go:114 +0x1f8
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:818 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x10000000011?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000523f38?, {0x235c4e0, 0xc000216060}, 0x1, 0xc000734000)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc000523f88?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005c4500?)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x85
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1935b1f]

To Reproduce
We are currently using k8s-pvc-tagger:v1.0.1.
We installed the k8s-pvc-tagger, which worked fine for a while. We installed the EBS CSI driver afterwards.
Previously, the PVCs were created by the kubernetes.io/aws-ebs storage provisioner (PVC annotation: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs).
We recreated one of the PVCs and it got provisioned by the ebs-csi-driver this time (PVC annotation: volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com).

The k8s-pvc-tagger started crashing afterwards.

Copy tags from EC2 instance

Is your feature request related to a problem? Please describe.
It would be great to automatically tag volumes with the same tags (or a subset of the tags) of the instance the EBS volume is attached to. This would make it easier to handle tags related to, for example, cost allocation, business department, and so on.

Mandatory tags

Is your feature request related to a problem? Please describe.
Let's say I would like, for billing purposes, to enforce some mandatory tags such as customer, billing-id, department, etc. It would be nice to report (using events + Prometheus) the PVCs which do not fit a "standard".

Describe the solution you'd like

  • Add a cli parameter --mandatory-tags which contains a JSON of the mandatory tags + the format
  • When a PVC is handled and does not meet the mandatory tags, an event is sent in the namespace (this is just a warning; the other tags are still added)

Describe alternatives you've considered
N/A

Additional context
N/A

I can work on a PR if needed

Setup prometheus metrics

Track the following:

  • tags actions (add and remove)
  • PVC processed
  • PVC ignored

Are there others?

Tag existing AWS resources

Problem description
Whenever k8s-pvc-tagger is deployed there might already be many untagged PVCs (and dynamically provisioned PVs / EBS / EFS).

Solution
If a flag (e.g. tag-existing-resources) is set, the tagger lists all the PVCs and tries to tag the underlying EBS/EFS.

Parsing AWS volumeID is failing

Describe the bug
When creating a new EBS volume (EKS v1.23) using the in-tree provisioner, the request is redirected to the CSI driver, which creates the PV object with volumeID: vol-xxxxxx without the aws:// prefix. This is expected according to this. However, the k8s-pvc-tagger fails to parse the PV object since the regex doesn't cover that case.
I think the regex can be changed to something like: ^(?:aws:\/\/\w{2}-\w{4,9}-\d\w\/){0,1}(vol-\w+){1}$
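The proposed pattern can be sanity-checked with Go's regexp package. This sketch uses the exact regex suggested above; parseVolumeID is an illustrative helper name, not the project's actual function.

```go
package main

import (
	"fmt"
	"regexp"
)

// volIDRe matches both the legacy aws://<az>/vol-... form and a bare
// vol-... ID, capturing the volume ID either way.
var volIDRe = regexp.MustCompile(`^(?:aws:\/\/\w{2}-\w{4,9}-\d\w\/){0,1}(vol-\w+){1}$`)

// parseVolumeID extracts the vol-... ID from a PV's volumeID field.
func parseVolumeID(raw string) (string, bool) {
	m := volIDRe.FindStringSubmatch(raw)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	fmt.Println(parseVolumeID("aws://us-east-1a/vol-0abc123")) // vol-0abc123 true
	fmt.Println(parseVolumeID("vol-0abc123"))                  // vol-0abc123 true
}
```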
To Reproduce
Steps to reproduce the behavior:

  1. Create a new PVC using the intree provisioner
  2. Watch the k8s-pvc-tagger logs

Expected behavior
k8s-pvc-tagger should be able to tag the ebs volume.

Add support to specify labels for ServiceMonitor

Is your feature request related to a problem? Please describe.
Yes, Prometheus Operator will only read ServiceMonitor resources labeled with a specific label, in my case.

Describe the solution you'd like
Add support to specify labels in the helm chart

Describe alternatives you've considered
Manually add those labels

Refactor project from k8s-aws-ebs-tagger to k8s-pvc-tagger

Refactor project from k8s-aws-ebs-tagger to k8s-pvc-tagger so that it can be used for multiple PVC types and used for multiple cloud providers. While this rebranding/renaming sucks, it's better to do it now than later.

  • document timeline (maybe mid-July?)
  • rename project
  • rename metrics to k8s_pvc_tagger
  • rename annotations to k8s-pvc-tagger
  • support legacy annotations & metrics for 2 releases
  • add storage class label to metrics
  • update documentation
  • create new docker hub repo
  • create new github artifact repo (is this possible?)
  • create deprecation plan for legacy docker/github image repos

EBS Tagger crashes whenever we add a new non-EBS PVC

Describe the bug
When EBS tagger is running and you add a new EBS volume that it should monitor, the pod crashes.

Expected behavior
Tagger should discover the new EBS volume and check if it needs to tag it then tag it.

Error log

E1209 09:18:32.383947       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 65 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1cc9be0, 0x305f4e0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x86
panic(0x1cc9be0, 0x305f4e0)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
main.processPersistentVolumeClaim(0xc00034cf78, 0xc000000004, 0xc0006c5c28, 0x1, 0x1, 0x0)
    /build/kubernetes.go:253 +0x677
main.watchForPersistentVolumeClaims.func1(0x1f890a0, 0xc00034cf78)
    /build/kubernetes.go:102 +0x24c
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0xc2
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00005ff60)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006c5f60, 0x22c56c0, 0xc00038e330, 0x1c6d901, 0xc000640000)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00005ff60, 0x3b9aca00, 0x0, 0x1, 0xc000640000)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005da380)
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000598c30, 0xc000614910)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x65
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1a24ab7]
goroutine 65 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x109
panic(0x1cc9be0, 0x305f4e0)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
main.processPersistentVolumeClaim(0xc00034cf78, 0xc000000004, 0xc0006c5c28, 0x1, 0x1, 0x0)
    /build/kubernetes.go:253 +0x677
main.watchForPersistentVolumeClaims.func1(0x1f890a0, 0xc00034cf78)
    /build/kubernetes.go:102 +0x24c
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0xc2
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00005ff60)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006c5f60, 0x22c56c0, 0xc00038e330, 0x1c6d901, 0xc000640000)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00005ff60, 0x3b9aca00, 0x0, 0x1, 0xc000640000)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005da380)
    /go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000598c30, 0xc000614910)
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
    /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x65

Annotate PVs

Is your feature request related to a problem? Please describe.
It could be nice to have a way to tag PV with the tags provided

Describe the solution you'd like

Describe alternatives you've considered
N/A

Additional context
N/A

I can work on an PR if needed

Crash when adding any new PVC

Describe the bug
Any time I create a pod with a new pvc, the k8s-pvc-tagger pod crashes:

E1207 21:21:37.388710       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 121 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1c1ace0?, 0x32f11e0})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0003ca000?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1c1ace0, 0x32f11e0})
	/usr/local/go/src/runtime/panic.go:838 +0x207
main.processPersistentVolumeClaim(0xc0004d08f0)
	/build/kubernetes.go:367 +0x31f
main.watchForPersistentVolumeClaims.func1({0x1f14a20?, 0xc0004d08f0})
	/build/kubernetes.go:114 +0x1f8
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:818 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00051ae98?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00051af38?, {0x235c4e0, 0xc0004284b0}, 0x1, 0xc0005a60c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc00051af88?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005d6600?)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x85
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1935b1f]

goroutine 121 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0003ca000?})
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:56 +0xd8
panic({0x1c1ace0, 0x32f11e0})
	/usr/local/go/src/runtime/panic.go:838 +0x207
main.processPersistentVolumeClaim(0xc0004d08f0)
	/build/kubernetes.go:367 +0x31f
main.watchForPersistentVolumeClaims.func1({0x1f14a20?, 0xc0004d08f0})
	/build/kubernetes.go:114 +0x1f8
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:232
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:818 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00051ae98?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00051af38?, {0x235c4e0, 0xc0004284b0}, 0x1, 0xc0005a60c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc00051af88?)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005d6600?)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x85

I don't see any useful debugging info there, so I'm not sure how to debug.

Additional context

Here's how k8s-pvc-tagger was deployed via Terraform and Helm:

resource "helm_release" "k8s-pvc-tagger" {
  name = "k8s-pvc-tagger"
  namespace = "k8s-pvc-tagger"
  create_namespace = true
  repository = "https://mtougeron.github.io/helm-charts"
  chart = "k8s-pvc-tagger"
  version = "2.0.1"
  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = aws_iam_role.foo.arn
  }

  set {
    name  = "serviceAccount.name"
    value = "k8s-pvc-tagger-sa"
  }

}

(plus of course the respective IAM service policy and role)

Our pods are also created via Helm. Here's their PVC definition, which seems to trigger the crash:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    volume.beta.kubernetes.io/storage-class: gp2
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
    k8s-pvc-tagger/tags: |
      {
        "foo/bu": "dc",
        "foo/consumer": "bar",
        "foo/expiry": "9999-01-01",
        "foo/created_by": "helm",
        "foo/environment": "unknown"
      }
  labels:
    managed-by: helm
    foo/bu: dc
    foo/consumer: bar
    foo/stage: unknown
    foo/expiry: "9999-01-01"
    foo/created_by: helm
    foo/environment: unknown
    app: {{ .Release.Name }}
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.requestedSize | default "60Gi" }}

This error seems awfully similar to #37

Can't create Default tags

When I try to create a volume with a Name tag, it just ignores that tag and moves to the next one.
Is this on purpose? If so, can you change it, because this is a very valuable tag that we need.

Allow templated tags

Allow setting a tag that uses tpl vars to substitute values from a label or annotation. e.g., "mytag": "{{ metadata.namespace}}-{{ labels.app }}" or something like that.
