helm-operator's Introduction

Helm Operator

This repository contains the source code of the Helm Operator, part of Flux Legacy (v1).

Flux v1 has reached end of life and has been replaced by fluxcd/flux2 and its controllers entirely.

If you are on Flux Legacy, please see the migration guide. If you need hands-on help migrating, you can contact one of the companies listed here.

History

The Helm Operator was initially developed by Weaveworks as an extension to Flux, to allow for the declarative management of Helm releases.

For an extensive historical overview of Flux, please refer to https://github.com/fluxcd/flux/#History.

helm-operator's People

Contributors

2opremio, aaron7, awh, dimitropoulos, ellieayla, eochs, hiddeco, lelenanam, marccarre, ncabatoff, paulbellamy, peterbourgon, petervandenabeele, philwinder, pjbgf, rade, richardcase, rndstr, rytswd, sa-spag, samb1729, scjudd, seaneagan, squaremo, stefanprodan, stefansedich, stephenmoloney, swade1987, tamarakaufler, yebyen


helm-operator's Issues

valuesFromFile to create key's value sourced from file content

valuesFromFile:
  chartFileRef:
    myScript: path/to/file.js
  externalSourceRef:
    defaultScript: https://raw.githubusercontent.com/cdenneen/example/master/script.js

would result in the equivalent of:

helm install --set-file key=path/to/file.js

myScript: |
  const { events, Job } = require("brigadier")
  function run(e, project) {
    console.log("this is my script")
  }
  events.on("run", run)
defaultScript: |
  const { events, Job } = require("brigadier")
  function run(e, project) {
    console.log("hello default script")
  }
  events.on("run", run)

Helm operator: Could not read from remote repository.

Describe the bug

Problem 1
For some reason the Flux helm operator can't connect to our GitLab anymore...

ts=2019-10-15T15:45:26.832760944Z caller=chartsync.go:533 component=chartsync info="chart repo not ready yet" resource=logging:helmrelease/fluentd-elasticsearch status=new err="git clone --mirror: fatal: Could not read from remote repository., full output:\n Cloning into bare repository '/tmp/flux-gitclone975152289'...\nGitLab: The project you were looking for could not be found.\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n"

However, Flux still works and deploys standalone YAMLs perfectly. It's only the helm operator that has connectivity issues. Even after I restarted it multiple times, the error persists.

The above warning repeats itself for each helm release present on the cluster. Even for the ones that were NOT deployed by this helm operator.
For instance the fluentd-elasticsearch chart in the above message is not part of the git repository. I don't get why Flux would try to clone something that is NOT in the repository...

Problem 2

I also have this issue on charts that are in the repository:

ts=2019-10-15T15:45:17.433825329Z caller=chartsync.go:533 component=chartsync info="chart repo not ready yet" resource=opa-integ:helmrelease/gitlab-runner-integ status=new err="git clone --mirror: fatal: Could not read from remote repository., full output:\n Cloning into bare repository '/tmp/flux-gitclone597278666'...\nGitLab: The project you were looking for could not be found.\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n"

Always the same error for each deployed chart...

FYI, I have a second Flux which is working perfectly in another namespace, synchronizing a second repository connected to the same GitLab instance.

To Reproduce

Steps to reproduce the behaviour:

  • Install 2 Flux via Helm in your cluster
  • Connect them to the same hosted Gitlab on 2 separate repositories
  • Install and remove charts
  • Wait for something to go wrong (quick step)

Expected behavior

Helm operator should still work and sync resources.

Additional context


  • Flux version: 1.13.2
  • Helm Operator version: 0.10.0
  • Kubernetes version: 1.14.3
  • Git provider: Gitlab
  • Container registry provider: Nexus

It must be an error on my end.
Any ideas are welcome ;-)
Thx

Filter namespaces by annotation rather than `--allow-namespace`

Describe the feature
Create an annotation like helm.fluxcd.io/enabled: false instead of using the --allow-namespace flag to specify a namespace. This follows the pattern of other major k8s projects: Flux's fluxcd.io/automated: "false", Istio, Jaeger, cert-manager, etc.

Background
We have a multi-tenant cluster where we want to have two completely separate instances of tiller, flux, flux-helm-operator. We would like the flux-helm-operator managed by ops/eng to have access to all namespaces outside of the developers' namespaces so that we don't trample on their instances.

Workaround
Restrict flux-helm-operator to flux namespace, and use the targetNamespace field to deploy things into other namespaces.

What would the new user story look like?
What would the new interaction with the Flux Helm operator look like? E.g.
0. By default all namespaces would be enabled; the annotation would only disable namespaces. -- Make this configurable by users.

  1. User creates a namespace that they don't want helm-operator to process and annotates it with helm.fluxcd.io/enabled: false.
  2. A new HelmRelease is mistakenly created in that namespace.
  3. Flux notes the release and rejects it due to namespace conditions.

Expected behavior
Flux retrieves all namespaces and reads their annotations. Any namespace carrying the annotation is dropped from processing, but its releases are noted in the log.
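
As an illustration, opting a namespace out might look like the manifest below; the helm.fluxcd.io/enabled annotation is the one proposed in this request and is hypothetical, not something the operator understands today:

apiVersion: v1
kind: Namespace
metadata:
  name: dev-team-a                       # hypothetical developer namespace
  annotations:
    helm.fluxcd.io/enabled: "false"      # proposed opt-out annotation (not implemented)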

Send chart release notifications upstream from helm operator

This has come up a couple of times in weave-community slack.

Step one: figure out what should be sent, and when -- neither of these is straightforward, since usually we are taking action to reconcile states, rather than enacting a user's command.

'no repository definition' error, but repos have been configured

I'm trying to use the Helm Operator to release a Helm Chart (v3 syntax) that defines a dependency. I get a Helm error about 'no repository definition', but I think I've added the chart repo correctly. If I remove the dependency section, the chart installs.

Perhaps related to #18.

When I deployed the helm operator (via a git checkout d42759d), I added a couple of chart repos:

helm upgrade -i helm-operator ./chart/helm-operator --wait \
--namespace fluxcd \
--set git.ssh.secretName=flux-git-deploy \
--set git.pollInterval=1m \
--set chartsSyncInterval=1m \
--set configureRepositories.enable=true \
--set configureRepositories.repositories\[0\].name=stable \
--set configureRepositories.repositories\[0\].url=https://kubernetes-charts.storage.googleapis.com \
--set configureRepositories.repositories\[1\].name=bitnami \
--set configureRepositories.repositories\[1\].url=https://charts.bitnami.com/bitnami \
--set extraEnvs\[0\].name=HELM_VERSION \
--set extraEnvs\[0\].value=v3 \
--set image.repository=docker.io/fluxcd/helm-operator \
--set image.tag=helm-v3-9908b258

I checked that the repositories are in the secret/flux-helm-repositories secret:

kubectl get -n fluxcd secret flux-helm-repositories --output="jsonpath={.data.repositories\.yaml}" | base64 --decode

apiVersion: v1
generated: 0001-01-01T00:00:00Z
repositories:
- name: stable
  url: https://kubernetes-charts.storage.googleapis.com
  cache: /var/fluxd/helm/repository/cache/stable-index.yaml
  caFile: ""
  certFile: ""
  keyFile: ""
  password: ""
  username: ""
- name: bitnami
  url: https://charts.bitnami.com/bitnami
  cache: /var/fluxd/helm/repository/cache/bitnami-index.yaml
  caFile: ""
  certFile: ""
  keyFile: ""
  password: ""
  username: ""% 

My Helm chart defines a dependency in Chart.yaml:

apiVersion: v2
name: mygreatapp
description: A Helm chart for mygreatapp
type: application
version: 0.1.0
appVersion: 1.0.0
dependencies:
  - name: mongodb
    version: 7.3.0
    repository: https://charts.bitnami.com/bitnami
    condition: mongodb.enabled

My HelmRelease manifest looks like:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: mygreatapp-stage
  namespace: fluxcd
  annotations:
    fluxcd.io/ignore: "false"
spec:
  releaseName: mygreatapp-stage
  targetNamespace: stage
  chart:
    git: [email protected]:myorg/infrastructure
    path: charts/mygreatapp

I get a Helm error reported by the helm-operator:

ts=2019-10-10T09:13:19.2217093Z caller=operator.go:322 component=operator info="enqueuing release" resource=fluxcd:helmrelease/mygreatapp-stage
ts=2019-10-10T09:13:19.2807463Z caller=chartsync.go:555 component=chartsync release=mygreatapp-stage targetNamespace=stage resource=fluxcd:helmrelease/mygreatapp-stage warning="failed to update chart dependencies" err="no repository definition for https://charts.bitnami.com/bitnami. Please add them via 'helm repo add'"

And the HelmRelease message is similar:

NAME                                            RELEASE   STATUS   MESSAGE                                                                                                AGE
helmrelease.helm.fluxcd.io/mygreatapp-stage                        no repository definition for https://charts.bitnami.com/bitnami. Please add them via 'helm repo add'   35m

Error: at <.Values.global.app.my.name>: nil pointer evaluating interface {}.name

Describe the bug
The value of global.app.my.name is not picked up, and the following error appears in the helm operator logs:

at <.Values.global.app.my.name>: nil pointer evaluating interface {}.name

The value is provided as follows in the HelmRelease definition:

  values:
    global.app.my.name: "${my_name}"

To Reproduce

Expected behavior
The value of global.app.my.name should be used as provided, and no error should appear.

Logs

ts=2019-08-06T18:07:53.149745284Z caller=release.go:215 component=release error="Chart release failed: test-one: &status.statusError{Code:2, Message:\"render error in \\\"...\\\" at <.Values.global.global.app.my.name>: nil pointer evaluating interface {}.name\", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}"

Additional context

  • Flux version: flux-0.11.0 1.13.2
  • Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-eks-c57ff8", GitCommit:"c57ff8e35590932c652433fab07988da79265d5b", GitTreeState:"clean", BuildDate:"2019-06-11T03:47:47Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.8-eks-a977ba", GitCommit:"a977bab148535ec195f12edc8720913c7b943f9c", GitTreeState:"clean", BuildDate:"2019-07-29T20:47:04Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

End-to-end tests

Given that there has been much progress on the end-to-end testing side of Flux, and we can use a lot of the libraries there to scaffold the tests here, the easiest way to get a validation of all (new) features is to test them in an end-to-end setting.

The minimum tests required to get an overview are:

  • Installation
    • Helm v2 git chart installation
    • Helm v2 repo chart installation
    • Helm v3 git chart installation
    • Helm v3 repo chart installation
  • Upgrade
    • Helm v2 upgrade
    • Helm v2 upgrade triggered by git chart source change
    • Helm v3 upgrade
    • Helm v3 upgrade triggered by git chart source change
  • Rollback
    • Helm v2 rollback
    • Helm v3 rollback
    • Rollback halts sequential releases
    • Change to chart and/or values continues release
  • Uninstall
    • Helm v2 uninstall
    • Helm v3 uninstall
  • Dependencies
    • Helm v2 git chart source dependencies are updated
    • Helm v3 git chart source dependencies are updated
  • Other
    • Default Helm version setting
    • HTTP sync request
    • ...

Helm Operator ignores .spec.targetNamespace when --allowed-namespace is set

Describe the bug
Helm Operator is ignoring a HelmRelease's .spec.targetNamespace value when the operator itself is scoped to watch HelmReleases in a single namespace with the --allowed-namespace flag.

To Reproduce
Deploy the operator using the latest Flux Helm chart (0.14.1) with the following values snippet:

helmOperator:
  create: true
  allowNamespace: namespace1

This results in the --allowed-namespace=namespace1 flag being passed to the operator binary.

Next, create a HelmRelease with the following content:

apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: myapp
  namespace: namespace1
spec:
  releaseName: myapp
  targetNamespace: namespace2

Instead of being deployed to namespace2 as per the targetNamespace field, the chart is instead released to namespace1:

$ helm ls
NAME            REVISION        UPDATED                         STATUS          CHART           APP VERSION     NAMESPACE
myapp           1               Mon Sep 23 18:56:14 2019        DEPLOYED        myapp-0.1.0     1.0.0           namespace1

Expected behavior
I expect that the chart is released to namespace2, respecting the targetNamespace.

Logs
Logs don't indicate much:

ts=2019-09-23T16:56:13.62040964Z caller=release.go:183 component=release info="processing release myapp (as myapp)" action=CREATE options="{DryRun:false ReuseName:false}" timeout=600s
ts=2019-09-23T16:58:49.69646849Z caller=operator.go:309 component=operator info="enqueuing release" resource=namespace1:helmrelease/myapp

Additional context

  • Helm operator version: 0.10.1
  • Kubernetes version: 1.14.3

Question: How to use common environment variables or shared config-maps across charts

Not sure if this will become a feature request, whether it is possible with the version we have, or whether it will be possible with the new Kustomize feature. We have started using Flux in development and production clusters. For development we create new clusters on an (almost) weekly basis. All development clusters point to the same branch and config in the repo. Before we installed Flux we installed the components to the cluster through a custom script, but we want to move most of this to Flux config.

We have started to see that it would be nice to have a mechanism to share config across components. However, we struggle when components live in different namespaces and/or when the config for each component is named differently in the different charts. An example of such a config value is the clusterName. For the first component we flux'ified, our own controller which runs in the default namespace, we created a configmap which feeds the values into the HelmRelease; for that component the clusterName is a top-level value in the values file. The second component was Velero, which needed to be inside a velero namespace; its clusterName went into configuration -> backupStorageLocation -> bucket. The third component, kubed, needs to have its clusterName config in config -> clusterName, and the fourth, external-dns, gets clusterName in txtOwnerId.

The way we know how to solve this would be to create a configmap for each namespace holding a flux-managed component that needs dynamic variables from outside, and, for components living in the same namespace, to have a shared configmap where the clusterName is duplicated into the different sections of that configmap.

What would be nice would be to either:

  • Tell flux about cluster-global environment variables, e.g. through extraEnvs that we could set when we install to that specific cluster, and refer to that variable in the HelmRelease when setting the specific value
  • Or have a configMapKeyRef to a shared configmap in the cluster, i.e. a configmap in the default namespace holding all cluster-global environment variables (even if values are duplicated into different sections), and allow flux to read from that configmap (see the sketch below)
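
For illustration, the second option might look roughly like the sketch below, reusing the existing valuesFrom / configMapKeyRef mechanism shown elsewhere on this page; the configmap name, chart repository, and values are made up:

# Hypothetical shared configmap holding cluster-global values for Velero
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-globals
  namespace: velero
data:
  values.yaml: |
    configuration:
      backupStorageLocation:
        bucket: my-cluster-name          # clusterName duplicated into Velero's expected key
---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: velero
  namespace: velero
spec:
  releaseName: velero
  chart:
    repository: https://example.com/charts/   # placeholder repository
    name: velero
    version: 2.0.0
  valuesFrom:
  - configMapKeyRef:
      name: cluster-globals
      key: values.yaml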

Recommendations for alerting on failed releases

I'd like to be able to alert on failed releases. I looked into hooking into the kubernetes events that the operator emits, but the only event emitted is chartsynced and it doesn't contain any error information.

Object:  { kind: 'Event',
  apiVersion: 'v1',
  metadata:
   { name: 'bad-hr-test.15cada07d1ccac62',
     namespace: 'new-ns',
     selfLink:
      '/api/v1/namespaces/new-ns/events/bad-hr-test.15cada07d1ccac62',
     uid: '507be1a7-e7ae-11e9-9058-42010a8000c0',
     resourceVersion: '1006825',
     creationTimestamp: '2019-10-05T20:25:47Z' },
  involvedObject:
   { kind: 'HelmRelease',
     namespace: 'new-ns',
     name: 'bad-hr-test',
     uid: '5077e09e-e7ae-11e9-9058-42010a8000c0',
     apiVersion: 'flux.weave.works/v1beta1',
     resourceVersion: '39704805' },
  reason: 'ChartSynced',
  message: 'Chart managed by HelmRelease processed',
  source: { component: 'helm-operator' },
  firstTimestamp: '2019-10-05T20:25:47Z',
  lastTimestamp: '2019-10-05T20:35:31Z',
  count: 7,
  type: 'Normal',
  eventTime: null,
  reportingComponent: '',
  reportingInstance: '' }

The only way I've found to discover bad releases is by introspecting the logs of the helm-operator. For example:

ts=2019-10-05T20:30:13.497815163Z caller=release.go:208 component=release error="Chart release failed: bad-hr-test: &status.statusError{Code:2, Message:\"render error in \\\"blah/templates/ui/deployment.yaml\\\": template: blah/templates/ui/deployment.yaml:21:55: executing \\\"blah/templates/ui/deployment.yaml\\\" at <required \\\"ui image tag required\\\" .Values.ui.image.tag>: error calling required: ui image tag required\", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}"

This sometimes appears in the Status of the HelmRelease, but it looks like it sometimes isn't reported:

Name:         bad-hr-test
Namespace:    new-ns
Labels:       <none>
Annotations:  flux.weave.works/automated: false
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"flux.weave.works/v1beta1","kind":"HelmRelease","metadata":{"annotations":{"flux.weave.works/automated":"false"},"name":"bad...
API Version:  flux.weave.works/v1beta1
Kind:         HelmRelease
Metadata:
  Creation Timestamp:  2019-10-05T20:25:47Z
  Generation:          1
  Resource Version:    39705744
  Self Link:           /apis/flux.weave.works/v1beta1/namespaces/new-ns/helmreleases/bad-hr-test
  UID:                 5077e09e-e7ae-11e9-9058-42010a8000c0
Spec:
  Chart:
    Git:         [email protected]:something/cluster-config
    Path:        charts/blah
  Release Name:  bad-hr-test
Status:
  Conditions:
    Last Transition Time:  2019-10-05T20:30:08Z
    Message:               successfully cloned git repo
    Reason:                GitRepoCloned
    Status:                True
    Type:                  ChartFetched
    Last Transition Time:  2019-10-05T20:30:13Z
    Message:               rpc error: code = Unknown desc = release: "bad-hr-test" not found
    Reason:                HelmInstallFailed
    Status:                False
    Type:                  Released

I can think of a couple of different ways this might get solved:

  • Publishing more kubernetes events (a hypothetical example follows this list)
  • Adding a failed install status to the release status (currently it will only report a status if the install actually happened, not if there's a render issue)
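
For the first option, a hypothetical failure event could carry the render error in its message. The reason name and fields below are made up for illustration and are not emitted by the operator today:

apiVersion: v1
kind: Event
metadata:
  name: bad-hr-test.failure-example      # hypothetical event name
  namespace: new-ns
type: Warning
reason: ChartSyncFailed                  # hypothetical reason
message: 'render error in "blah/templates/ui/deployment.yaml": ui image tag required'
involvedObject:
  kind: HelmRelease
  namespace: new-ns
  name: bad-hr-test
source:
  component: helm-operator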

If you're in favour of adding more kube events, how best should I add the event broadcaster to release.go and the other Helm-related files?

Thanks!

-Kyle

Helm client in operator gets namespace wrong

Describe the bug
I try to install a chart from the operator pod, but it seems the helm client gets the namespace wrong.
Get pods

kubectl get pods -n flux

Output

flux-5878766c4b-2mf42                 1/1     Running   0          37m
flux-helm-operator-77d464dbc4-pb5ns   1/1     Running   0          37m
flux-memcached-676b496574-28f8l       1/1     Running   0          37m

Open a shell to operator

keti -n flux flux-helm-operator-77d464dbc4-pb5ns /bin/sh

Output

/home/flux # 

With ls, it works:

/home/flux # helm --tls --tls-verify \
  --tls-ca-cert /etc/fluxd/helm-ca/ca.crt \
  --tls-cert /etc/fluxd/helm/tls.crt \
  --tls-key /etc/fluxd/helm/tls.key \
  --tls-hostname tiller-deploy.kube-system \
  ls

Output

audit-logging         	1       	Tue Oct 29 11:22:23 2019	DEPLOYED	audit-logging-3.2.0         	           	kube-system 
auth-apikeys          	1       	Tue Oct 29 11:17:41 2019	DEPLOYED	auth-apikeys-3.2.0          	           	kube-system 
auth-idp              	1       	Tue Oct 29 11:17:39 2019	DEPLOYED	auth-idp-3.2.0              	           	kube-system 
auth-pap              	1       	Tue Oct 29 11:17:43 2019	DEPLOYED	auth-pap-3.2.0              	           	kube-system 
....

But when installing:

/home/flux # helm --tls --tls-verify \
  --tls-ca-cert /etc/fluxd/helm-ca/ca.crt \
  --tls-cert /etc/fluxd/helm/tls.crt \
  --tls-key /etc/fluxd/helm/tls.key \
  --tls-hostname tiller-deploy.kube-system \
  install --name dokuwiki-test stable/dokuwiki --namespace default

Output:

Error: Not authorized: Client is not authorized to install a release, into namespace:
dokuwiki-test default

Install works when tried from another helm client of the same version.

Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3+icp", GitCommit:"34e12adfe271fd157db8f9745affe84c0f603809", GitTreeState:"clean"}

To Reproduce
Steps to reproduce the behaviour:

  1. Install v1 CRD, flux-operator 1.0.0-rc3 (custom build with helm version set to 2.12.3)
  2. Install flux-operator
  3. Open a shell to deployment/flux-helm-operator
  4. Use helm to install any chart

Expected behavior
Expected the chart to be installed successfully

Additional context

  • Helm operator version: 1.0.0-rc3
  • Kubernetes version: v1.13.5+icp
  • Git provider: github
  • Container registry provider: docker.io

Let flux deploy a bundle of helmreleases

Hi,

Not sure if I am right here.
I am wondering whether the flux operator is able to deploy a bundle of HelmReleases via another HelmRelease.
I have defined HelmReleases for several services and want to deploy them as a bundle of services.
Is that already possible, and if so, how does that work / what should the "master-helmrelease" look like?

Thanks in advance and best regards
timokorny
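
For what it's worth, one shape such a bundle could take (not an official answer, just a sketch using standard Helm features shown elsewhere on this page) is an umbrella chart whose requirements.yaml pulls in the service charts, released by a single HelmRelease; all names and paths below are made up:

# charts/bundle/requirements.yaml -- umbrella chart depending on the service charts
dependencies:
- name: service-a
  version: 0.1.0
  repository: file://../service-a
- name: service-b
  version: 0.1.0
  repository: file://../service-b

# A single HelmRelease then releases the whole bundle
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: bundle
  namespace: default
spec:
  releaseName: bundle
  chart:
    git: git@example.com:myorg/charts    # placeholder repository
    path: charts/bundle
    ref: master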

flux operator does not produce deployment from helmchart, though tiller does

Describe the bug
Although when executing

helm install

(either with --dry-run option or not)

a k8s Deployment is produced,

when flux takes over and the GitOps process kicks in, there are no Deployment resources created (albeit there is a Helm chart installed).

To Reproduce
Steps to reproduce the behaviour:

I have set up the typical GitOps flow where the flux operator is monitoring a GitHub repo with HelmRelease files pointing to helm charts.
The k8s cluster is on GKE.

Expected behavior
I would expect to see in my k8s all the resources I see when executing

helm install <path/to/my/helm/chart>

However only some of them are created.

Logs

endgame is the name of both my HelmChart and HelmRelease

Tiller logs

[storage] 2019/08/24 12:15:45 getting last revision of "endgame"
[storage] 2019/08/24 12:15:45 getting release history for "endgame"
[tiller] 2019/08/24 12:15:50 rendering endgame chart using values
2019/08/24 12:15:50 info: manifest "endgame/templates/deployment.yaml" is empty. Skipping.
[storage] 2019/08/24 12:15:52 getting last revision of "endgame"
[storage] 2019/08/24 12:15:52 getting release history for "endgame"

Output of

stern flux* | grep -i endgame
➢  stern 'flux*' | grep -i endgame
+ flux-helm-operator-6f74bc4d6-qkc79 › flux-helm-operator
+ flux-d8d4897c5-4qrmk › flux
+ flux-memcached-788df4d497-vg8jn › memcached
flux-helm-operator-6f74bc4d6-qkc79 flux-helm-operator ts=2019-08-24T11:46:04.725307514Z caller=operator.go:172 component=operator debug="PROCESSING item [\"mycompany/endgame\"]"
flux-helm-operator-6f74bc4d6-qkc79 flux-helm-operator ts=2019-08-24T11:46:04.726174013Z caller=operator.go:229 component=operator debug="Starting to sync cache key mycompany/endgame"
flux-d8d4897c5-4qrmk flux ts=2019-08-24T11:46:04.966052368Z caller=sync.go:483 component=cluster method=Sync cmd="kubectl apply -f -" took=741.961201ms err=null output="helmrelease.flux.weave.works/endgame created"
flux-d8d4897c5-4qrmk flux ts=2019-08-24T11:46:05.017626012Z caller=daemon.go:619 component=daemon event="Sync: fe1463a, mycompany:helmrelease/endgame" logupstream=false
flux-helm-operator-6f74bc4d6-qkc79 flux-helm-operator ts=2019-08-24T11:46:07.478950415Z caller=operator.go:214 component=operator info="Successfully synced 'mycompany/endgame'"
flux-helm-operator-6f74bc4d6-qkc79 flux-helm-operator I0824 11:46:07.479224       9 event.go:221] Event(v1.ObjectReference{Kind:"HelmRelease", Namespace:"mycompany", Name:"endgame", UID:"c0deba1c-c664-11e9-8aed-42010a790d08", APIVersion:"flux.weave.works/v1beta1", ResourceVersion:"58194264", FieldPath:""}): type: 'Normal' reason: 'ChartSynced' Chart managed by HelmRelease processed successfully
flux-helm-operator-6f74bc4d6-qkc79 flux-helm-operator ts=2019-08-24T11:46:52.133866679Z caller=release.go:147 component=release info="processing release endgame (as endgame)" action=CREATE options="{DryRun:false ReuseName:false}" timeout=300s
flux-helm-operator-6f74bc4d6-qkc79 flux-helm-operator ts=2019-08-24T11:47:49.556805847Z caller=release.go:147 component=release info="processing release endgame (as c0deba1c-c664-11e9-8aed-42010a790d08)" action=CREATE options="{DryRun:true ReuseName:false}" timeout=300s
flux-d8d4897c5-4qrmk flux ts=2019-08-24T11:48:10.698516869Z caller=sync.go:483 component=cluster method=Sync cmd="kubectl apply -f -" took=673.073612ms err=null unchanged\nhelmrelease.flux.weave.works/endgame

Additional context

  • Flux version:
  flux-helm-operator:
    Container ID:  docker://2d88c55b6bd2d24feea8eea05118465099bf5ba254a08d226512bf0b1e2e8134
    Image:         docker.io/weaveworks/helm-operator:0.7.1
    Image ID:      docker-pullable://weaveworks/helm-operator@sha256:f18e615338680934580978530987530908090875487455a3d9bac09936fac7d2b0
    Port:          3030/TCP
    Host Port:     0/TCP
    Args:
      --git-timeout=20s
      --charts-sync-interval=2m
      --update-chart-deps=true
      --log-release-diffs=false
      --tiller-namespace=kube-system
      --tiller-tls-enable=true
      --tiller-tls-key-path=/etc/fluxd/helm/tls.key
      --tiller-tls-cert-path=/etc/fluxd/helm/tls.crt
  • Helm Operator version: tiller:v2.11.0
  • Kubernetes version:
➢  kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.9-gke.7", GitCommit:"b6001a5d99c235723fc19342d347eee4394f2005", GitTreeState:"clean", BuildDate:"2019-06-18T12:12:30Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
  • Git provider: Github
  • Container registry provider: GCR

Helm 3 support

With the release of Helm 3, we would like to track progress for this integration.

Debug helmrelease before committing to GitOps

Feature request: Add a command to fluxctl that can debug a helmrelease yaml without first having to apply it to the cluster and ensure all the chart references are pushed.

Say I have a myrelease.yaml file:

---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: my-release
  namespace: dev
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.chart-image: glob:master-*
spec:
  releaseName: my-release-name
  chart:
    git: somewhere:/foo/bar.git
    ref: master
    path: charts/myserver
  values:
    image:
      repository: my-image-repo/my-imagebamappserver
      tag: foobar
    replicaCount: 2
    ingress:
      enabled: true
[...]

I'd like something like fluxctl helm-template ./myrelease.yaml /path/to/my/chart to run the moral equivalent of:

helm template --values=- --namespace=dev --name=my-release-name /path/to/my/chart

where the stdin (used by the --values=- switch), is generated like:

image:
  repository: my-image-repo/my-image
  tag: foobar
replicaCount: 2
ingress:
  enabled: true
[...]

This would make debugging flux helmreleases a lot more pleasant!

If you are willing to take a PR to implement this (or a modified proposal), let me know, and I can try to cook one up.

Thanks,
Michael.

Flux doesn't upgrade helm release

Describe the bug
Automated helm releases are deployed, but not upgraded when a new image is found.

To Reproduce
Steps to reproduce the behaviour:
0. Kubernetes 1.14.6, Flux 1.14.1, Flux helm operator 0.10.1, connected to a private gitlab repo and private gitlab registry

  1. Push a commit to a repo, to trigger the CI
  2. Launch fluxctl sync
  3. Fluxctl sees the new image and commits the release file with the new image tag
  4. Flux helm operator triggers the helm upgrade, but it fails

Expected behavior
Flux helm operator upgrades the release when a new image is found.

Logs

Helm operator logs:

 ts=2019-09-02T09:19:51.799888885Z caller=release.go:183 component=release info="processing release api-1 (as 8e1e9a2e-cb06-11e9-9e30-ac1f6b90cfa0)" action=CREATE options="{DryRun:true ReuseName:false}" timeout=500s
 ts=2019-09-02T09:22:51.49430209Z caller=operator.go:309 component=operator info="enqueuing release" resource=development:helmrelease/api-2                      
 ts=2019-09-02T09:22:51.49458506Z caller=operator.go:309 component=operator info="enqueuing release" resource=development:helmrelease/api-1
 ts=2019-09-02T09:22:51.638421148Z caller=release.go:183 component=release info="processing release api-2 (as c828c2f2-cb1f-11e9-9e30-ac1f6b90cfa0)" action=CREATE options="{DryRun:true ReuseName:false}" timeout=400s
 ts=2019-09-02T09:22:51.819936819Z caller=release.go:183 component=release info="processing release api-1 (as 8e1e9a2e-cb06-11e9-9e30-ac1f6b90cfa0)" action=CREATE options="{DryRun:true ReuseName:false}" timeout=500s
 ts=2019-09-02T09:25:51.494503037Z caller=operator.go:309 component=operator info="enqueuing release" resource=development:helmrelease/api-1
 ts=2019-09-02T09:25:51.494737742Z caller=operator.go:309 component=operator info="enqueuing release" resource=development:helmrelease/api-2
 ts=2019-09-02T09:25:52.221917531Z caller=release.go:183 component=release info="processing release api-1 (as 8e1e9a2e-cb06-11e9-9e30-ac1f6b90cfa0)" action=CREATE options="{DryRun:true ReuseName:false}" timeout=500s
 ts=2019-09-02T09:25:52.383190672Z caller=release.go:183 component=release info="processing release api-2 (as c828c2f2-cb1f-11e9-9e30-ac1f6b90cfa0)" action=CREATE options="{DryRun:true ReuseName:false}" timeout=400s
 ts=2019-09-02T09:28:51.494605759Z caller=operator.go:309 component=operator info="enqueuing release" resource=development:helmrelease/api-1
 ts=2019-09-02T09:28:51.49476549Z caller=operator.go:309 component=operator info="enqueuing release" resource=development:helmrelease/api-2
 ts=2019-09-02T09:28:51.668150398Z caller=release.go:183 component=release info="processing release api-1 (as 8e1e9a2e-cb06-11e9-9e30-ac1f6b90cfa0)" action=CREATE options="{DryRun:true ReuseName:false}" timeout=500s
 ts=2019-09-02T09:28:51.848429055Z caller=release.go:183 component=release info="processing release api-2 (as c828c2f2-cb1f-11e9-9e30-ac1f6b90cfa0)" action=CREATE options="{DryRun:true ReuseName:false}" timeout=400s

Tiller logs

[storage] 2019/09/02 09:55:31 getting last revision of "api-2"
[storage] 2019/09/02 09:55:31 getting release history for "api-2"
[kube] 2019/09/02 09:55:31 Doing get for ConfigMap: "api-2-config"
[kube] 2019/09/02 09:55:31 get relation pod of object: development/ConfigMap/api-2-config
[kube] 2019/09/02 09:55:31 Doing get for Deployment: "api-2"
[kube] 2019/09/02 09:55:31 get relation pod of object: development/Deployment/api-2

Helm chart deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  namespace: {{ .Values.namespace }}
  labels:
    app: api-2
    fonction: api
    version: v1
spec:
  strategy:
    rollingUpdate:
      maxUnavailable: 0
    type: RollingUpdate
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: api-2
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        prometheus.io/scrape: "false"
        sidecar.istio.io/inject: "false"
      labels:
        app: api-2
    spec:
      containers:
      - name: apollon
        image: "redacted/api-2:tag"
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: {{ .Values.ressources.limits.cpu }}
            memory: {{ .Values.ressources.limits.memory }}
          requests:
            cpu: {{ .Values.ressources.requests.cpu }}
            memory: {{ .Values.ressources.requests.memory }}
        ports:
        -  containerPort: 8080
           name: http
           protocol: TCP
        envFrom:
        - configMapRef:
            name: api-2-config
        env:
        - name: REDACTED
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.redacted }}
              key: REDACTED
        - name: REDACTED
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.redacted }}
              key: REDACTED
        - name: REDACTED
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.redacted }}
              key: REDACTED
      imagePullSecrets:
      - name: regcred

Release file:

---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: api-2
  namespace: development
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.chart-image: glob:nightly-*
spec:
  releaseName: api-2
  chart:
    git: git@redacted:redacted/repo.git
    path: charts/api-2
    ref: master
  values:
    image: redacted/api-2:tag
    replicaCount: 3
  timeout: 400

Send helm-operator events to upstream

We currently send Flux events to Weave Cloud as events allowing upstreams to be aware of syncs, releases, and other events in Flux.

It would be great to be able to produce events from the helm-operator whenever it makes changes (or fails to make changes).

Are there any plans for sending events from the helm-operator upstream?

Microservices best practices

I have an architectural question regarding flux and helm. I'm following the https://github.com/fluxcd/helm-operator-get-started/ tutorial.

In my project, I have tens of microservices which cannot live without each other. What would be your suggestion? Should I create as many charts and HelmReleases as I have microservices, or just create one generic chart which contains all of the microservices' Helm YAML files (deployment, service, etc.)?

Implement chartPullSecrets

The design arrived at in fluxcd/flux#1382 includes the use of Secrets to provide keys (for git repos) and repositories.yaml files (for Helm repos), but this isn't implemented.

It is still possible to use repos of either type that need credentials, but it entails mounting them into the helm-operator container, which is 1. pretty fiddly to get right and 2. arguably the wrong place to supply credentials.
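
For reference, the mounting workaround mentioned above is roughly a volume/volumeMount pair on the operator Deployment; the secret name matches the one used elsewhere on this page, but the mount path is an assumption and depends on the chart and Helm version in use:

# Illustrative snippet of the helm-operator Deployment spec
spec:
  template:
    spec:
      volumes:
      - name: repositories-yaml
        secret:
          secretName: flux-helm-repositories        # secret containing repositories.yaml
      containers:
      - name: flux-helm-operator
        volumeMounts:
        - name: repositories-yaml
          mountPath: /var/fluxd/helm/repository     # assumed Helm home location; verify for your setup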

Error messages without further information

Describe the bug
I get the error "Object 'Kind' is missing in 'null'". Without any reference I am unable to fix this. We need a better error message that points to the corresponding manifest file or something like that.

Logs

flux/flux-78dc8cc7d8-sr526[flux]: ts=2019-08-21T20:16:15.395032417Z caller=warming.go:198 component=warmer info="refreshing image" image=quay.io/prometheus/prometheus tag_count=95 to_update=95 of_which_refresh=0 of_which_missing=95
flux/flux-helm-operator-dff8b66b4-fhf6x[flux-helm-operator]: ts=2019-08-21T20:16:17.803965983Z caller=release.go:568 component=release err="Object 'Kind' is missing in 'null'"
flux/flux-helm-operator-dff8b66b4-fhf6x[flux-helm-operator]: ts=2019-08-21T20:16:17.851680342Z caller=release.go:568 component=release err="Object 'Kind' is missing in 'null'"
flux/flux-helm-operator-dff8b66b4-fhf6x[flux-helm-operator]: ts=2019-08-21T20:16:17.866927428Z caller=release.go:568 component=release err="Object 'Kind' is missing in 'null'"
flux/flux-helm-operator-dff8b66b4-fhf6x[flux-helm-operator]: ts=2019-08-21T20:16:17.884564751Z caller=release.go:568 component=release err="Object 'Kind' is missing in 'null'"
flux/flux-helm-operator-dff8b66b4-fhf6x[flux-helm-operator]: ts=2019-08-21T20:16:17.898444433Z caller=release.go:568 component=release err="Object 'Kind' is missing in 'null'"

Additional context

  • Helm operator version: 0.10.1
  • Kubernetes version: 1.14
  • Git provider: github
  • Container registry provider: docker hub

Helm operator does not notice git changes to child charts

In a Helm chart, it is allowed to have relative file:// references in your requirements.yaml, e.g.:

❯ tree charts
charts
├── child/..
└── parent/..

❯ cat charts/parent/requirements.yaml
dependencies:
- name: child
  version: 0.1.0
  repository: file://../child

The Helm operator does however only look for commits touching the configured parent chart path (e.g. charts/parent) on git mirror updates, which causes changes to child charts to go unnoticed. Subsequent dry-runs do not pick up the change either, as the clone for a HelmRelease is only replaced when a change is detected.

The easiest and neatest fix would probably be to always replace the clone of a chart, but to only schedule a release on changes to the parent chart. This would ensure changes to the parent chart result in a priority release, while changes to child charts are still picked up after a while by the sync / mutation prevention mechanism.

Don't require values.yaml key in secret, pull in values directly

The feature
Having to encode the entire contents of the values.yaml file as a string in a k8s secret or config map makes it difficult to work with or change. It would be better if you could just specify the secret or config map and the helm operator pulled in all the keys instead.

Before

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  values.yaml: YWRtaW4=

After

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm

Nested values can be handled with dot notation

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  db.username: YWRtaW4=
  db.password: MWYyZDFlMmU2N2Rm
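
Under this proposal the dot-notation keys would presumably be merged into nested chart values; decoding the example above, the effective values passed to the chart would look something like:

db:
  username: admin
  password: 1f2d1e2e67df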

New user story
Precondition: A helm chart with values stored in a configmap or secret

  1. Store values directly in the configmap or secret as above
  2. When the helm release is created, these values are merged with any values specified directly in the helm release.

It looks like the load happens here:

key = "values.yaml"

I'm happy to look into creating a PR if this looks like a change you would support.

Compare Operator SDK helm vs helm-operator

This is mostly an informational issue to get an understanding from the fluxcd team. I am wondering how the Operator SDK's Helm operator and this helm-operator work similarly.
How could we compare the two and decide when to choose which? Are there any fundamental differences between them?

There's not a simple way to add a helm repo with a custom CA

It's not easy to add your own repository configuration to the chart.

The setting Values.helmOperator.configureRepositories.enable allows you to inject a secret; however, if there's a repo that needs a CA, there's no standard way to add the CA into the deployment provided by the chart.

Note that Values.helmOperator.tls.verify is used as a flag to configure tiller tls and not repo.

If you jump into the helmOperator pod and create the file ca.crt and then execute:

helm repo add myrepo https://myhelmrepo.com --ca-file ca.crt

the HelmRelease can use a helm repo with a self-signed cert. This means that the operator needs 2 things to install releases from a helm repo under a self-signed cert:

  • The CA certificate.
  • The entry in repositories.yaml with the location of the CA certificate (a sketch follows below).
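
For illustration, the repositories.yaml entry would follow the same format shown earlier on this page, with caFile pointing at wherever the CA certificate is mounted; the repository name and mount path below are made up:

repositories:
- name: myrepo
  url: https://myhelmrepo.com
  cache: /var/fluxd/helm/repository/cache/myrepo-index.yaml
  caFile: /etc/fluxd/helm-repo-ca/ca.crt    # hypothetical mount path for the CA secret
  certFile: ""
  keyFile: ""
  username: ""
  password: ""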

Allow helm template feature

Describe the feature
In some environments tiller is a no go, hence all helm charts are deployed via helm template command. It would be great if this operator allowed that.

valueFileSecrets works but valuesFrom.secretKeyRef doesn't

Describe the bug

valuesFrom.secretKeyRef didn't work for me.

I thought I must have been doing something wrong with the formatting of my b64 encoded secret, but I just tried switching to valueFileSecrets and it worked with the same secret that failed for the valuesFrom.secretKeyRef

To Reproduce
Steps to reproduce the behaviour:

Take a secret like this:

data:
  values.yaml: eyJncmFmYW5hLmluaSI6IHsiYXV0aC5nZW5lcmljX29hdXRoIjogeyJjbGllbnRfc2VjcmV0IjogInJlZGFjdGVkIn0sICJkYXRhYmFzZSI6IHsidXJsIjogInJlZGFjdGVkIn19fQ==
kind: Secret
metadata:
  creationTimestamp: "2019-11-08T16:13:07Z"
  name: grafana-yaml
  namespace: production
  ownerReferences:
  - apiVersion: bitnami.com/v1alpha1
    controller: true
    kind: SealedSecret
    name: grafana-yaml
    uid: somenumber
  resourceVersion: "3652725"
  selfLink: /api/v1/namespaces/production/secrets/grafana-yaml
  uid: somenumber
type: Opaque

Try to import values from that secret using valuesFrom.secretKeyRef in a HelmRelease:

apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: grafana
  namespace: production
  annotations:
    flux.weave.works/automated: "true"
spec:
  releaseName: grafana
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: grafana
    version: 3.4.2
  valuesFrom:
  - secretKeyRef:
      # Name of the secret, must be in the same namespace as the
      # HelmRelease
    name: grafana-yaml # mandatory
      # Key in the secret to get thre values from
    key: values.yaml     # optional; defaults to values.yaml

Expected behavior

I expected the base64-decoded values from the secret to get loaded into the Helm release.

Logs

I've had two different errors come out in the logs.

First I got this error:

validation failure list:\nspec.valuesFrom.secretKeyRef in body must be of type object: \"null\"

Later I got this error:
validation failure list:\n\"spec.valuesFrom\" must validate one and only one schema (oneOf). Found none valid\nspec.valuesFrom.configMapKeyRef in body is required"

Between the errors, I deployed a version of the chart without the secret as a sanity check, and also changed the formatting of the secret multiple times.

Additional context

I was not using a configMapKeyRef. Just the secretKeyRef
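
For what it's worth, in the HelmRelease snippet above the name and key fields are indented at the same level as secretKeyRef rather than nested under it, which would make secretKeyRef parse as null and matches the first validation error. A correctly nested sketch (purely illustrative) would be:

  valuesFrom:
  - secretKeyRef:
      name: grafana-yaml   # must be in the same namespace as the HelmRelease
      key: values.yaml     # optional; defaults to values.yaml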

  • Helm operator version:

docker.io/fluxcd/helm-operator:0.10.1

  • Kubernetes version:
  • Git provider:
  • Container registry provider:

Specify installation order of charts

Describe the feature

A cool feature would be a file allowing to set the order of installation of charts.

Why?
Because a lot of Helm charts are cut in half: the first half is the CRDs and the second half is the chart itself.
Take Istio for example: it has istio-init for the CRDs and istio for the chart itself. But if you install istio before istio-init it will crash because it has not found any CRDs.
The same happens when installing prometheus-operator: it fails because the CRDs are not installed correctly by Helm, an error which is hard to prevent because it's not possible to set the order of installation.

What would the new user story look like?

    1. Create a new YAML file with a list of the charts and their order of installation,
       or add an integer to each HelmRelease file to set the order of installation
    2. Use fluxctl as always
    3. Installation of each chart follows the specified order of installation
    4. All Helm charts are deployed correctly and we all live happily ever after

Thx !

Helm release revision created every minute if secretKeyRef used

If a secretKeyRef is defined in the HelmRelease manifest, the helm-operator creates a new helm release every minute.

with secretKeyRef

With the following HelmRelease file pushed to my repo:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: stage
  annotations:
    fluxcd.io/ignore: "false"
spec:
  releaseName: podinfo-stage
  targetNamespace: stage
  chart:
    git: [email protected]:myorg/infrastructure
    ref: 14/gitops
    path: k8s/charts/podinfo
  valuesFrom:
  - configMapKeyRef:
      name: podinfo-stage-values
      key: stage.values.yaml
  - secretKeyRef:
      name: podinfo-stage-secrets
      key: stage.secrets.yaml      

The helmrelease is created, and podinfo workloads created as expected, including a helm release secret secret/podinfo-stage.v1. After a minute another helm release secret is created, secret/podinfo-stage.v2. This carries on every minute.

NAME                           TYPE                                  DATA   AGE
secret/default-token-whxz6     kubernetes.io/service-account-token   3      27h
secret/podinfo-stage-secrets   Opaque                                1      19m
secret/podinfo-stage.v1        helm.sh/release                       1      2m23s
secret/podinfo-stage.v2        helm.sh/release                       1      91s
secret/podinfo-stage.v3        helm.sh/release                       1      30s

without secretKeyRef

Clean up the release by setting fluxcd.io/ignore: "true", commit and push, then use kubectl to delete the helmrelease.

Edit the HelmRelease file to remove the secretKeyRef section (reset fluxcd.io/ignore: "false"), and commit and push it.

This creates the helmrelease again, and the podinfo workloads. This time, only one helm release secret is created, and no other releases are created after a minute.

Additional context

  • Helm operator version: chart from commit 8f5bbeb with the following values:
git:
  ssh:
    secretName: flux-git-deploy
  pollInterval: 1m

chartsSyncInterval: 1m

configureRepositories:
  enable: true
  repositories:
    - name: https://charts.bitnami.com/bitnami
      url: https://charts.bitnami.com/bitnami
    - name: https://kubernetes-charts.storage.googleapis.com/
      url: https://kubernetes-charts.storage.googleapis.com/

extraEnvs:
  - name: HELM_VERSION
    value: v3

image:
  repository: docker.io/fluxcd/helm-operator-prerelease
  tag: helm-v3-71bc9d62

Decouple release reconciliation from chart source sync

In the current state of the code base the chartsync package tries to take care of too many things to make it easy to work with, as it:

  1. keeps track of chart source changes in Git
  2. provides the tools to fetch chart sources from HTTP(S) Helm chart registries
  3. reconciles HelmRelease resources to match the expected Helm release state

As 3 has much overlap with the release package, and a client abstraction was introduced for Helm v3 support, moving this logic into the release package would lead to a significant improvement in code readability and reduce the overall complexity.

The end result would make the context of the chartsync package chart source sync only, while the release package takes care of releasing the HelmRelease resource with Helm (==syncing) using the synced chart.

Helm releases deleted directly won't be upgraded

Summary

When I delete a release externally (by running helm delete) and run fluxctl sync without deleting the release definition from the GitOps repo, it won't be installed again and gives the following error:

unable to proceed with the release:
current state prevents it from being upgraded (DELETED)

It looks like the state of the release is stuck on DELETED, but I couldn't find any way to fix it back.

To Reproduce

Steps to reproduce the behaviour:

  1. Install cert-manager by adding release to the repository and running fluxctl sync
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  releaseName: cert-manager
  chart:
    repository: https://charts.jetstack.io
    name: cert-manager
    version: v0.9.1
  2. Delete it using helm directly
 $ helm delete cert-manager
  3. Run fluxctl sync without removing the definition
  4. Remove it from the repository and run fluxctl sync again
  5. Try to add it to the repository again and run fluxctl sync

It gives the current state prevents it from being upgraded (DELETED) error.

Expected behavior

Reinstalling cert-manager.

Logs

ts=2019-09-03T04:56:29.520785129Z caller=operator.go:309 component=operator info="enqueuing release" resource=cert-manager:helmrelease/cert-manager
ts=2019-09-03T04:56:29.537843659Z caller=chartsync.go:328 component=chartsync warning="unable to proceed with release" resource=cert-manager:helmrelease/cert-manager release=cert-manager err="current state prevents it from being upgraded (DELETED)"
ts=2019-09-03T04:59:29.520920531Z caller=operator.go:309 component=operator info="enqueuing release" resource=cert-manager:helmrelease/cert-manager
ts=2019-09-03T04:59:29.541024412Z caller=chartsync.go:328 component=chartsync warning="unable to proceed with release" resource=cert-manager:helmrelease/cert-manager release=cert-manager err="current state prevents it from being upgraded (DELETED)"

Additional context

  • Helm operator version: 0.10.1
  • Kubernetes version: 1.13

Release does not belong to HelmRelease, this may be an indication that multiple HelmReleases with the same release name exist.

Describe the bug
The automated deployment stopped working. The status of the HelmRelease is deployed, but it doesn't detect changes in image versions from the Docker repository.
To Reproduce
Steps to reproduce the behaviour:
https://github.com/fluxcd/helm-operator-get-started

Expected behavior
The automated releases based on image versions should work.
Logs

textPayload: "ts=2019-10-21T13:17:31.407051541Z caller=chartsync.go:374 component=chartsync warning="release 'office-hub-stg' does not belong to HelmRelease, this may be an indication that multiple HelmReleases with the same release name exist" resource=stg:helmrelease/office-hub-stg

Additional context

  • Helm operator version: 1.14.2
  • Kubernetes version: 1.13.7-gke.24
  • Git provider: Bitbucket
  • Container registry provider: Docker Hub

Record failed resource annotations on `HelmRelease`

The helm-op annotator, a secondary system that tries to mark up resources with Helm information, to be interpreted by the flux daemon, tends to spam the logs when something's wrong. It runs every few seconds, and reports any failures.

To fix this, we could

  • come up with a better way of annotating resources, or otherwise getting the information across*
  • suppress some log messages, e.g., only report failures the first time.

*The requirement is that fluxd, which knows nothing about Helm, somehow knows which resources belong to a chart release, so it can mark them as such. Instead of marking each of the resources, we could stick a list of them in the FHR status.

Promote the HelmRelease CRD to GA

As part of the Flux Helm Operator GA release we should bump the HelmRelease CRD version from v1beta1 to v1. The beta version covers all the Helm options that make sense from a GitOps perspective. We should revisit the CRD when Helm v3 becomes available.

kubectl shows wrong CRD status

Describe the bug
I checked the flux-helm-operator behaviour against a namespace-restricted tiller, using a different targetNamespace within the helmrelease file.
The operator behaves as desired, but the helmreleases.helm.fluxcd.io status message is a bit confusing.

The helm-operator logs:

ts=2019-10-09T15:20:20.432426741Z caller=release.go:216 component=release error="Chart release failed: redis-demo-test: &status.statusError{Code:2, Message:\"release redis-demo-test failed: namespaces \\\"demo\\\" is forbidden: User \\\"system:serviceaccount:test:tiller\\\" cannot get resource \\\"namespaces\\\" in API group \\\"\\\" in the namespace \\\"demo\\\"\", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}"
ts=2019-10-09T15:20:20.450130849Z caller=release.go:220 component=release info="Deleting failed release: [redis-demo-test]"

But the CRD shows:

kubectl get helmreleases.helm.fluxcd.io
NAME                       RELEASE   STATUS   MESSAGE                  AGE
redis-demo-test                      helm install succeeded   3m1s

@hiddeco also asked for the hr output:

kubectl describe hr redis-demo-test
Name:         redis-demo-test
Namespace:    test
Labels:       <none>
Annotations:  flux.weave.works/tag.chart-image: semver:~5.0.0
              fluxcd.io/automated: true
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"helm.fluxcd.io/v1","kind":"HelmRelease","metadata":{"annotations":{"flux.weave.works/tag.chart-image":"semver:~5.0.0","flux...
API Version:  helm.fluxcd.io/v1
Kind:         HelmRelease
Metadata:
  Creation Timestamp:  2019-10-09T15:31:09Z
  Generation:          1
  Resource Version:    26980434
  Self Link:           /apis/helm.fluxcd.io/v1/namespaces/test/helmreleases/redis-demo-test
  UID:                 181b9778-0ea8-4b1b-87cc-2cb9deff3443
Spec:
  Chart:
    Name:            redis
    Repository:      https://kubernetes-charts.storage.googleapis.com/
    Version:         9.2.2
  Release Name:      redis-demo-test
  Target Namespace:  demo
  Values:
    Cluster:
      Enabled:      true
      Slave Count:  1
    Image:
      Pull Policy:  IfNotPresent
      Registry:     docker-registry/public
      Repository:   bitnami/redis
      Tag:          5.0.5
    Master:
      Persistence:
        Enabled:  false
      Resources:
        Limits:
          Cpu:     100m
          Memory:  256Mi
        Requests:
          Cpu:     100m
          Memory:  256Mi
    Slave:
      Persistence:
        Enabled:  false
      Resources:
        Limits:
          Cpu:     100m
          Memory:  256Mi
        Requests:
          Cpu:     100m
          Memory:  256Mi
Status:
  Conditions:
    Last Transition Time:  2019-10-09T15:31:09Z
    Last Update Time:      2019-10-09T15:32:21Z
    Message:               chart fetched: redis-9.2.2.tgz
    Reason:                RepoChartInCache
    Status:                True
    Type:                  ChartFetched
    Last Transition Time:  2019-10-09T15:31:10Z
    Last Update Time:      2019-10-09T15:32:21Z
    Message:               helm install succeeded
    Reason:                HelmSuccess
    Status:                True
    Type:                  Released
  Observed Generation:     1
  Release Name:            
  Release Status:          
  Revision:                9.2.2
  Values Checksum:         ec90a4f2595193036ab91451612c2b7eb858479e4fa9e6db941da93586fe15de
Events:
  Type    Reason       Age               From           Message
  ----    ------       ----              ----           -------
  Normal  ChartSynced  5s (x2 over 76s)  helm-operator  Chart managed by HelmRelease processed

To Reproduce
Steps to reproduce the behaviour:
Connect the helm-operator to a namespace-restricted Tiller and try to deploy a HelmRelease whose targetNamespace is a namespace Tiller is not allowed to access.
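
For reference, a minimal HelmRelease that reproduces this (chart, namespaces and release name taken from the describe output above, trimmed to the relevant fields):

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: redis-demo-test
  namespace: test            # namespace the restricted Tiller is allowed to manage
spec:
  releaseName: redis-demo-test
  targetNamespace: demo      # namespace Tiller is not allowed to touch
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: redis
    version: 9.2.2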

Expected behavior
The CRD status message should not show helm install succeeded when the release actually failed and was deleted.
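
For illustration, with a failed install one would expect the Released condition reported by kubectl describe hr to look roughly like the following instead (the reason string and timestamp are illustrative, not necessarily the exact values the operator would use):

Status:
  Conditions:
    Last Update Time:  2019-10-09T15:32:21Z
    Message:           release redis-demo-test failed: namespaces "demo" is forbidden: ...
    Reason:            HelmInstallFailed
    Status:            False
    Type:              Released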

Logs

ts=2019-10-09T15:47:21.077505305Z caller=release.go:184 component=release info="processing release redis-demo-test (as redis-demo-test)" action=CREATE options="{DryRun:false ReuseName:false}" timeout=300s
ts=2019-10-09T15:47:21.276260647Z caller=release.go:216 component=release error="Chart release failed: redis-demo-test: &status.statusError{Code:2, Message:\"release redis-demo-test failed: namespaces \\\"demo\\\" is forbidden: User \\\"system:serviceaccount:test:tiller\\\" cannot get resource \\\"namespaces\\\" in API group \\\"\\\" in the namespace \\\"demo\\\"\", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}"
ts=2019-10-09T15:47:21.300359986Z caller=release.go:220 component=release info="Deleting failed release: [redis-demo-test]"

Additional context

  • Helm operator version: v1.0.0-rc2
  • Kubernetes version: v1.15.4
  • Git provider: private
  • Container registry provider: private

Improve leakage prevention / garbage collection of git mirrors

The refactor in #99 ultimately took care of preventing git mirrors from sticking around after all HelmRelease resources that referred to them were removed.

The solution, although an improvement over what we had, is not optimal and can be improved further, as it relies on receiving a signal from the mirror before cleaning it up, which will not work for stale repositories or repositories that receive almost no updates.

What makes the garbage collection a challenge in its current form is the triangular relationship described in #99 (comment); a reverse lookup table likely needs to be put in place so that look-ups can be done directly, without relying on the HelmRelease resources in the HelmReleaseLister.

Specify an alternative default branch for HelmRelease objects

Preamble: the below is part question, part proposed solution. I've dug around the documentation and can't find any advice or examples on how to solve this.


Describe the feature

The Helm Operator should be able to be configured with a default Git branch/ref for HelmRelease resources that specify the .spec.chart.git key.

As fluxcd/flux can be configured with a Git branch via --git-branch, the operator should also offer the option to be configured with a default Git branch/ref other than master for when the .spec.chart.ref field is unspecified.

What would the new user story look like?

  1. Add an extra command-line and chart flag, something along the lines of --git-chart-ref-default. To preserve backwards compatibility, when the operator is started without this flag, the default value is set to master to match the current default here.
  2. The user starts the operator with the --git-chart-ref-default flag, specifying some branch name or reference (e.g. dev).
  3. A new HelmRelease is applied with the .spec.chart.ref field unspecified. The operator then retrieves the chart from the default branch specified via the flag (in this case, dev); see the sketch after this list.
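
As a minimal sketch of how this could look (the --git-chart-ref-default flag is the proposal above and does not exist today; the repository URL, chart path and release name are illustrative):

# helm-operator container args (proposed flag, not an existing option)
args:
  - --git-chart-ref-default=dev
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app
  namespace: dev
spec:
  chart:
    git: ssh://git@example.com/org/config.git
    path: charts/my-app
    # ref omitted: the operator would fall back to the configured default (dev)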

Expected behavior

I'll now go on to explain what problem this solves. Given a typical pipeline of three levels: dev, staging and production:

dev --> staging --> production

the relevant changes inside the config repository (charts/ and releases/) travel along the pipeline, with each stage represented by a Git branch. To keep the pipeline history clean, production should be a subset of the changes in staging, and staging a subset of the changes in dev. To simplify the pipeline, the values provided to each chart are fixed and each pipeline level is represented by one cluster.

The Flux instances could be configured in such a way that staging and production do not write changes to Git, but mirror a previously verified dev configuration:

Pipeline Level   Flux Flags
dev              --git-branch=dev
staging          --git-branch=staging --git-readonly=true
production       --git-branch=production --git-readonly=true

Currently, there is no way (that I can find, at least) to fully implement this, as .spec.chart.ref is fixed to one value across all stages of the pipeline - meaning, in this case, that a change to the chart on the dev branch is immediately reflected in production (as the .spec.chart.ref would be set to dev). The only way I can see to implement this currently is to maintain each branch with a different history and a different set of HelmRelease objects, which doesn't seem particularly automatable when moving changes through the pipeline. From my perspective, it's important that staging and production run exactly what has been verified in dev.
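
To make the limitation concrete, this is how the chart reference has to be written today (repository URL and path are illustrative):

spec:
  chart:
    git: ssh://git@example.com/org/config.git
    path: charts/my-app
    ref: dev   # the same manifest, mirrored to the staging and production branches,
               # keeps pulling the chart from the dev branch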

If I can solve the problem with what's implemented already, that's great too! Thanks and keep up the great work!

Git over HTTPS support

The Helm Operator allows for reusing the Git SSH key that may already be in use in a Flux setup; however, Flux also supports Git over HTTPS to fetch its state:
https://github.com/fluxcd/flux/blob/master/chart/flux/README.md#flux-with-git-over-https
which is the method I initially adopted.

However, as per the Helm Operator's readme, the only method supported is the Git deploy key:
https://github.com/fluxcd/helm-operator/blob/master/chart/helm-operator/README.md#use-fluxs-git-deploy-key

Is using flux-git-auth instead of flux-git-deploy supported somehow? In any case, what would the spec.chart.git entry look like in our HelmRelease?
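
For context, this is roughly the spec.chart.git entry I would like to be able to use (URL and path are illustrative); the open question is whether the credentials from the flux-git-auth secret can be picked up for it:

spec:
  chart:
    git: https://git.example.com/org/charts.git
    ref: master
    path: charts/my-app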
