carvel-dev / kapp-controller
Continuous delivery and package management for Kubernetes.
Home Page: https://carvel.dev/kapp-controller
License: Apache License 2.0
Describe the problem/challenge you have
Right now every path under ytt or similar template steps in the App CR is required, and the App CR will fail if it's not present:
template:
  error: 'Templating dir: exit status 1'
  exitCode: 1
  stderr: |
    ytt: Error: Checking file '/etc/kappctrl-mem-tmp/kapp-controller-fetch-template-deploy695734710/config/aws': lstat /etc/kappctrl-mem-tmp/kapp-controller-fetch-template-deploy695734710/config/aws: no such file or directory
  updatedAt: "2020-12-14T16:27:39Z"
Describe the solution you'd like
We have a standardized way we write out App CRs, providing an easy-to-override mechanism for different environments. Not all App CRs need that overriding capability, but having the folders there reduces complexity. If there was a way to say "this path is optional, don't fail if it's not there, just proceed without it", that would reduce the need to populate a bunch of folders with a .gitkeep file or similar.
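As a purely hypothetical sketch (an "optional" flag does not exist today; the field names below are invented for illustration), the ytt step could mark individual paths as skippable:

template:
- ytt:
    paths:
    - config
    # hypothetical: proceed without this path if the directory is absent
    - path: config/aws
      optional: true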
Anything else you would like to add:
Not a high priority, as almost 99% of our apps do need some type of configuration per environment; just wanted to track it for posterity.
AppDefinition allows exposing App creation as a dedicated CRD so that "configuration details" are hidden from the App creator. For example, an AppDefinition named my-app specifies how to create MyApp/v1alpha1 by defining an App template (represented via the App CRD's spec section). Users that want to create a new instance of MyApp create a MyApp CR and fill it out with data values (if any are required).
apiVersion: kappctrl.k14s.io/v1alpha1
kind: AppDefinition
metadata:
  name: myapp
spec:
  crd:
    apiGroup: apps.co.com
    version: v1alpha1
    name: MyApp
  defaults: # <-- ytt data values?
    ...
  template:
    spec: # App CRD spec
      fetch:
      - git: ...
      template:
      - ytt: ...
      deploy:
      - kapp: ...
apiVersion: apps.co.com/v1alpha1
kind: MyApp
metadata:
  name: app1
  namespace: apps
spec:
  host: blah.com
status:
  conditions:
  - type: Reconciling
  - type: ReconcileSucceeded
  - type: ReconcileFailed
    reason: Invalid
    message: ...
this would make syncPeriod configuration more useful (e.g. it could be set higher, and App can try to react to "input" changes proactively). i'd like to have this mechanism pave the way for "watching" of other resources used by App.
When using kubectl 1.19.1 to install kapp-controller in a Kubernetes v1.19.1 cluster, we got this warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition.
Using kubectl 1.18.x doesn’t get the warning.
Shall we provide the apiextensions.k8s.io/v1 App CRD? Thanks.
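For reference, a minimal apiextensions.k8s.io/v1 rendition of the App CRD could look like the sketch below (the schema is a permissive placeholder, not the real App schema; v1 requires a structural schema, unlike v1beta1):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: apps.kappctrl.k14s.io
spec:
  group: kappctrl.k14s.io
  scope: Namespaced
  names:
    plural: apps
    singular: app
    kind: App
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # placeholder schema; accepts arbitrary fields
        x-kubernetes-preserve-unknown-fields: true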
Could it be possible to supply a service account in the App resource that would be used to run kapp rather than giving kapp-controller cluster-admin privileges?
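Based on the App CRs shown in other issues here, the shape for this is the spec.serviceAccountName field, so kapp acts with a namespace-scoped service account rather than the controller's own privileges (the SA name below is illustrative):

apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: app1
  namespace: team-a
spec:
  # kapp runs with this service account's permissions
  serviceAccountName: team-a-deployer
  fetch:
  - git: ...
  template:
  - ytt: {}
  deploy:
  - kapp: {}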
Describe the problem/challenge you have
I would like to use Vault to provide my secrets for my YTT templates. To achieve this, one can encrypt the secrets YAML with the Vault Transit engine and store it in their repository.
Describe the solution you'd like
A templater in kapp controller that supports Hashicorp Vault's transit engine to decrypt files similar to the SOPS templater. This way I can decrypt secrets, template with ytt, and deploy with kapp.
Additional Details
It is also possible to use Vault to inject secrets; you just would not be able to use those secrets with ytt. See here and the examples.
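A hypothetical template step, modeled loosely on the existing sops step (none of these field names exist; they are invented for illustration):

template:
# hypothetical: decrypt transit-encrypted files before templating
- vaultTransit:
    keyName: my-transit-key      # hypothetical: transit key used for decryption
    connectionSecretRef:
      name: vault-connection     # hypothetical: secret holding Vault address/token
- ytt: {}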
The kubectl explain command is a great way to get documentation about resources. Currently, the App CRD does not provide documentation via kubectl explain:
$ kubectl explain app.spec
KIND: App
VERSION: kappctrl.k14s.io/v1alpha1
DESCRIPTION:
<empty>
It would be nice to be able to run any docker image as a template engine.
Configuration may look like this:
# ...
spec:
  # ...
  template:
  - docker:
      image: my-renderer:v1
      values:
        some-value: 123
      valuesFrom:
      - secretRef:
          name: secret-name
      - configMapRef:
          name: configmap-name
Have you considered something like this?
In the App CR, the documentation states that Fetch must have one *or more* directives. However, when specifying multiple fetch directives (e.g. two directives of type git) only the last one is used. This is because of https://github.com/k14s/kapp-controller/blob/17b8d5d7e6b25bd382b94146511bbb7908fe9913/pkg/app/app_fetch.go#L79 (Extract empties the dstPath before copying).
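One possible fix, sketched with a hypothetical per-directive destination field (not present at the time of this issue), would be to extract each directive into its own subdirectory so one Extract cannot clobber another's output:

fetch:
- git:
    url: https://github.com/org/repo-a
    ref: origin/main
    path: repo-a   # hypothetical: destination subdirectory for this directive
- git:
    url: https://github.com/org/repo-b
    ref: origin/main
    path: repo-b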
I'm depending on a ytt feature added in 0.23.0 (released shortly after the first and only kapp-controller release). Please rebuild the docker image https://github.com/k14s/kapp-controller/blob/a4a1e6491218350e71c5c149c3ff8a9d6363d0de/Dockerfile#L7 and create a new release.
kapp-controller was created before vendir. since vendir has the same fetch methods and more, it makes sense to use it in kapp-controller instead of maintaining duplicate code.
Using kapp-controller in corporate on-premise environments requires being able to set a corporate CA cert in the git configuration.
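As an illustration of one possible shape (hypothetical; the secret name and key are invented), a controller-level config could carry the CA bundle used for fetch steps:

apiVersion: v1
kind: Secret
metadata:
  name: kapp-controller-config
  namespace: kapp-controller
stringData:
  # hypothetical key: PEM bundle appended to the trusted roots for git/http fetches
  caCerts: |
    -----BEGIN CERTIFICATE-----
    ...corporate CA cert...
    -----END CERTIFICATE-----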
With the project being moved to a new location, I don't believe you will be able to do a go get github.com/vmware-tanzu/carvel-kapp-controller. It'll fail because the module declared in go.mod is github.com/k14s/kapp-controller. In the short term, folks can still do a go get github.com/k14s/kapp-controller. Perhaps this could be highlighted in the readme.
Hi,
we tried out the sops approach according to your documentation but got the following error:
template:
  error: 'Building config paths: Generating secring.gpg: Serializing pk: openpgp:
    invalid argument: unknown private key type'
  exitCode: -1
We have created the PGP private key with "gpg (GnuPG) 2.2.23".
gpg --list-secret-keys --keyid-format LONG
/Users/d040602/.gnupg/pubring.kbx
---------------------------------
sec rsa4096/8C321E51694F531E 2020-10-09 [SC] [expires: 2022-10-09]
F3599FD662F75135637911E88C321E51694F531E
uid [ultimate] ...
The secret with the private key looks as follows (only used for testing :-)):
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: "2020-10-08T07:56:53Z"
  name: pot-gitops-sops-key
  namespace: garden-hubtest
  resourceVersion: "1343617448"
  selfLink: /api/v1/namespaces/garden-hubtest/secrets/pot-gitops-sops-key
  uid: d6135486-1cc0-4c91-88fe-84e3bb62515b
type: Opaque
data:
my.pk: LS0tLS1CRUdJTiBQR1AgUFJJVkFURSBLRVkgQkxPQ0stLS0tLQoKbFFkR0JGOS8rcEFCRUFDOEcwVkNaZWZ2blJJdnJCM2FpTUtra3ptc0grUmErQzJUZnF6QXFEcmNJNG9RNzhmNQpLOTJTenpqeTcycHQwUlJrNVhqRXZSTHJJS2pUamJJUHQ2ZER1OXNxSnZ6dzJ6bzg5UVJHVFlxbG9tTnY1U2ZmClZPdEM5MFpVOEJFYXNjWTlycDZneTlBVGdOS09aTGNDenpmcjY0U2x4Y25Ka2pkb0hJWmFXUXplRE9TdWNJK3YKaitiWHkzRlAxUy9COTN2ZzJ0MVBtMVhTcHlXV1ZMbTRrQjlhVkU3Y1lrTUNxTklUV1VOM2JSV1o2SmJZNU85QgpuL292LzVSUnM0N25Zc2dRbE9CMlh1bGtFdzBITXNuTS8rOENyaDYwSkhPVVQxNDdNeWFRUXBqT3huRGhqK3E5Ci9FUmtTcW5PQkQ4Q09UWDYrZFZrVXpDTTMvdTJ2a3VIaHJjN2V3SUhNVFNSbFVjS1dST1VLaVE4MzFlRGt2V0YKbDF6VkQwVnZveTRDMmF2R3psbFdLVU1McytLUlN1Ym8ydU0zRllwcFRtV1dUNTA0QnVHUTVnUDhFUWRZb0E4NwplSXZ3ZkFmYi8zVzZoKzJSb3Q3YzhYZDlRZVNYZXRRS1U0RmdYTGV6VzhaZFNnMWJHUWF5dmtGS1J6SVk4RndXCmhSWStGRXEzbll2b1plY3c0ck1ycFM1VitDQWZmTDdKZWlsMk0wL0QzU054VldETDhkbStVOWZFUmU1SkVxSGsKQ2ZqT3Y2bk5MeDBYT2pvT1FDbjJsdlB0YStqNm43RENJNzFNYkJ0MEdDTEVTbDFJQmkvSFF6WWpQcitQNElLVgoyWmtycnU1MHQ2Q0phbE5Zd0JWRFNKc25zeEJZaGJGaWw0OFNRVVR4T2JqUmFVeXM2MTdJQVEwRjJ3QVJBUUFCCi9nY0RBaDhobDRoVWE5WWE3WHdlZnJvcUZPQkhkQ0hRbmU5a2pReGZoV0d2YXJQM1lxTjZGZ041SkNqY2lrUEcKOXBmSk8xTTlyTG0yVXRVRjdBSHRBL3F1MG1qSzF6eDROSU9oaWxVRzEwUkFkVGlha01VT2ppUW9zZTB5N0N1QgpJK0lFbHVlMldMZXdZOFdZNFBEaFVqOFEzZ0xpbm9FYTVoUWU3S3g4aXVEQ00xaGhOdGlHNVA4UWhwaWZQMmZTCkViOWRjME05WUcwZ0tXelZISUlCUDJjN09HNmdqMjE1V3ExQ0N5NE1Dbm5zSVpUWTV2ZGR6MExwcTRxVVRkOFMKdEdWTmRTU3doNGh5RzNVR0kwcE9DQW1jMkNwME9CSnhLT2xVd0JHc0ZIMVhpaENjd3k2b2FwTjBucTdsZXA2bgpnUjk4TmdjaE9FV0NVMVFkRzhRNHJockZ6c24xTlkyRnVUL2RnOUE4MnVpNW5DMzBxRTNwbkYwVkVMQVgvZDZtCnhSWUtIK2Q5MFhBNXZvUUMxelVOdDI5UmdXSkdCZmFNMjkrKzJvQnFEbFhISFpXYVZsV1BJYWFHMkxtNUI5WFMKRlpvcU9KbFhSVWlRbzVyaXFKdUFnT3dFbjYwNlFvQTBCK1dUek9hdWFsVERTa0pEdnRjSjZnaEhHTTJCMUplOQoxeXF0VDFOOHl1RURVcGNDNVlVeHpqQXdMNTd5V0lSM3Y2SUkrT3hmcDVaMENNeG9LVVVtUmY3MVRCaUIwMzQwCnB1QWY4SFU4UXhmblhqZDFXV0pBdVNyMjBnK1M2aVMvZTVlOFhSV3ZQN21FazdwNnBxeWVJSHl6K2NlSVdhTW0KZVJURXJMVVBIeDBLNXl3YnBRbWphTGVZMkt5MC9EYmwvU2FpMlB6L1liSFZYREE5cXpxRTFBdXFrQ0YyNldLNgo1dlk4ZE1YQmhaM0U1czBuVTcyT0VUcjFkcFJSVXNmRUlFT2w3MVJmRGNVR0gwR2lNZEhIZ2l5ZStORXhRZVl3ClBielVjcWFaZFJJY2R3M2pNaUx2KzJ5bjB6a3d1dVlpSUFLcmFGaUx5MEtLZXhjeXVRY0lTaEg4R1pNUW1ydVMKWEx6VkgwME8waW5LaTRKajRWZkhoWUtxZmRHYzJIbU1OSFRic3ByRlVOdjdQM002NXhjK3gzVGRNdVcxc3BKOQpUdVJVYWZURy9MUFM5cE1lZDFILzUxV0EwYldBZmkzZENtblRyMGZCUk1pdmNWYllpRXpZNnFHcTF3SjJqNDhTCm9HYUt1bnVPeW9WbitmWkpvakdROU1hcjE3amZJRkc2RitVY3o4RGowdkJHb1RqNDNXeDJHZ0hYYVYzdGRYWHEKUWR4VUxyaXVKYnFnNzJjb3hyNzk5ZjgrdGJTWkpNMmp4NjdHUURYNGFoazZhTzBuQVI1bkozZTdTL2FMYVl5UQpua25Ib0dJaEUrMWh0c2xEczhKLzFxSTZ3UDllTFYxTDVKcVNLWGFYRnRIYmVTVHlnUnJzekVUQ3lXSHV4engvCnhlNTcxSE0zM1VvRTNDTndOQVR2aFIzdjlUNi9lU1c4YzRETmtQSHcvOE5RUFI1SmFDRTE3MmgrTHdhSnp6bEgKUElsV3M1WmFPcmFYWnR2dXZwSkwzcCtPd1pIc2ExWHc1bjVMWnB2TmxWNzdIclVSY0JCbGVVc2pRUEIwMGNRLwpIVGRHWjlxTmtlRDhUUmZsZExLQTVnU1RUTEVlWVo5TGsrR0ZpL1YzbDhSRVVSME5HVmI3N3psSFcxQ2N3MXJqCmY4aGlhUFJ5cGdFbi9zV0VFNGlUN3NBRWkyVWVmck1idk9Iazhlamx5dnA0YnJqbXFFT0xwQkx4Tnc4Q05DWWIKSG1hWS9jQmF6WXdZcXozSkVWN3JHdVpDTnMyT0NjL2xKS3dEYUk2OVZJVnYyMWtpaTVhZDJyaTdkL1VFYmFkMQppNlQwUWhVcGJSWUFJdEd5RjNsMFRQcE42R3lTSW9HcGJYOVpxM3ZxV1l6dUFGRkgvUVE0Ly9TTGMweGtpckpQCmdHSDdWaThiVlk4eVpNVHp3U05uSnZtdzlxMlBGcmJuOFdqRS9YQ2JEM2N2dmY0T3Fiek55S3VZZ2VkYTlRc08KbzNXNjR5WTRFYWZiWEV4ZmJSaEpMcVRtMS9hQlFOd3U2ZXJYN2Qvd201bHh1YVVJWVp1U2cxVmMxRVppTEl1MwpxU1ZVNldlazI0VnRJSGh6RldXTnoyY0V2aTJwSkVsdFFBU0dhcFFCMEtQQ3ZuZ29SbFc4Vy8rM29xY0gwRW91CnhodGVKbElBV2FaVDBlL0VTZmlDRktIOWo1eUpBNXRibnM1VHB3azFXR2RCc0xqSjFQbDBkU2EwSEdGamFHbHQKSUR4aFkyaHBiUzUzWldsblpXeEFjMkZ3TG1OdmJUNkpBbFFFRXdFQ0FENFdJUVR6V1ovV1l2ZFJOV041RWVpTQpNaDVSYVU5VEhnVUNYMy82a0FJYkF3
VUpBOEpuQUFVTENRZ0hBZ1lWQ2drSUN3SUVGZ0lEQVFJZUFRSVhnQUFLCkNSQ01NaDVSYVU5VEhoZ3pELzlhSWFCZmJhR1czdG9RWW1yYW5Za3lMRnlJcm15SVBPTVJKRUorbDJHWUkxRVcKWVVhSm5hb0RvcmlNa1hZTG9iRjNCcUhneCtlcS9vZ3F6RkxncTJlWVJvRitkVkY4bkVFUHYxTUxtdTR2YzVTTwp6YUNqZmU2cDRrK0laYWYybXQva0I1NENqSVk4LzhRa2RBTHRyRlZJSlhaM1ZOdm5aR2Z2MmNwdURKSEVpYzVmCkd2SWFBVzlwVlgrZW10b3p0TTJnOHBpa0pCYmhvUWpMMUdIUWExSEJmRFVtb1JSdzNIUEMrRndDeGtNNWZ6QWEKWUlHNFpVM2JMRTJPZ0ZCK1FOYmJTeFFoSWdmbnkveDFGYXovZnp5NHNzeHVoRlN1bmJuTmdDQng1ZmlqYTlkZwp3c0xiNS9Qb1UxVm50SGlkRElJajJXdGREWFVLMlc5SnlDd1NvSldRc1V1T29lSW9FZXdXeEpFZzJQSWlSSEFLCm5neVgyMjZNUjArMWRyTFVmWWhraFlCZlBFWi9VZHF5UmRVdEF3RUxkd1Y3M0VXVVlzK2xadWdoLy9qOUYrMEgKRlRORXFlckx6eElaTE9vNEtNNW5tOGloekxleXc2Zy9yZ3pubE15L25NVUhRUVlnL1kvR2pWUkJyakNnN1dwZgpXbUJkdW11dXAvemE3R1FqZXlUamhQZlZweStFT2hEeTRiUDkvbHByM3lTWG5jM3djS3dHbnRwZVhKZkVBbFJzCkJNazhGbml3Rml4ZndKLzdnNWxUeHBGWE5VTDRYNHhxYVg4TFFVRGE3ZklmcnM3MTlYcTZzeEt4cnFud1VralMKZlRaS29aR3FmaitvZHpJeUJweWNySmY4L2hJdUxtLzNaTWxvYnBWWXBNUGFweDBlc1FDSnhUQ0ZCcDJ3UGc9PQo9QWg0QwotLS0tLUVORCBQR1AgUFJJVkFURSBLRVkgQkxPQ0stLS0tLQo=
Do you perhaps have any idea what is going wrong here?
Thank you very much in advance and best regards,
Achim
What steps did you take:
I read the readme.md file
What happened:
I saw a typo.
What did you expect:
I expected words to be spelled correctly.
Anything else you would like to add:
In the sentence, "It will install, and continiously apply updates.", continiously should be continuously.
Environment:
n/a
Proposed changes:
- helmTemplate:
    name: external-dns
    path: helm/chart #! Tell helm template where to find the chart location
    valuesFrom:
    - fileRef: helm/external-dns-values.yaml #! Tell helm an additional file to pass as a -f option
    - fileRef: helm/secret-external-dns-values.yaml #! Tell helm an additional file to pass as a -f option
    - secretRef:
        name: secrets-that-were-setup-ahead-of-time-for-helm
    rawOptions:
    - "--namespace external-dns"
    - "--include-crds"
I saw the commit "minor stylistic fixes to git creds error messages" 04835f7, which actually makes things worse according to https://github.com/golang/go/wiki/CodeReviewComments#error-strings. Because this project uses capitalized error messages everywhere, that commit might still be valid for consistency.
@cppforlife I can create a PR which changes all errors to lower-case, if you want.
figure out how to best connect AWS/AKS/etc auth to App CR's service account so that each App CR does not get "global" KMS auth.
What steps did you take:
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: simple-app
spec:
  syncPeriod: 10s
  serviceAccountName: default
  fetch:
  - inline:
      paths:
        deploy.yaml: |
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            labels:
              app: bad
            name: bad
          spec:
            selector:
              matchLabels:
                app: bad
            template:
              metadata:
                labels:
                  app: bad
              spec:
                containers:
                - image: bad-image
                  name: bad-image
  template:
  - ytt: {}
  deploy:
  - kapp: {}
kubectl delete app simple-app
What happened:
The app is stuck reconciling
What did you expect:
The app is deleted
This is similar to #42
What steps did you take:
Create an App CR as follows:
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: test
  namespace: default
spec:
  fetch:
  - image:
      url: some-image-with-files
  template:
  - foo: {}
What happened:
Kapp-controller panics with a nil pointer deref:
E0106 15:08:52.526837 15 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 174 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x14b6920, 0x236bbc0)
k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x82
panic(0x14b6920, 0x236bbc0)
runtime/panic.go:679 +0x1b2
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*App).template(0xc00043d680, 0xc0003aefa0, 0x46, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/app_template.go:50 +0x56c
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*App).reconcileFetchTemplateDeploy(0xc00043d680, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/app_reconcile.go:127 +0x48a
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*App).reconcileDeploy(0xc00043d680, 0x167d2f5, 0xe)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/app_reconcile.go:81 +0x1a6
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*App).Reconcile(0xc00043d680, 0xc000437400, 0x0, 0x0, 0x0)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/app_reconcile.go:44 +0x37f
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*CRDApp).Reconcile(...)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/crd_app.go:112
github.com/vmware-tanzu/carvel-kapp-controller/cmd/controller.(*AppsReconciler).Reconcile(0xc00025e580, 0xc000806788, 0x7, 0xc000806784, 0x4, 0xc00000d9c0, 0xc00046bba0, 0x1344e55, 0xc000364888)
github.com/vmware-tanzu/carvel-kapp-controller@/cmd/controller/apps_reconciler.go:38 +0x353
github.com/vmware-tanzu/carvel-kapp-controller/cmd/controller.(*ErrReconciler).Reconcile(0xc00043b4c0, 0xc000806788, 0x7, 0xc000806784, 0x4, 0xc0008067f0, 0xc, 0xc00046bc30, 0x403744)
github.com/vmware-tanzu/carvel-kapp-controller@/cmd/controller/err_reconciler.go:24 +0x177
github.com/vmware-tanzu/carvel-kapp-controller/cmd/controller.(*UniqueReconciler).Reconcile(0xc000364870, 0xc000806788, 0x7, 0xc000806784, 0x4, 0xc00046bcd8, 0xc0000d4360, 0xc0000d42d8, 0x186fd60)
github.com/vmware-tanzu/carvel-kapp-controller@/cmd/controller/unique_reconciler.go:40 +0x19e
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00003a180, 0x15081c0, 0xc00000d840, 0x0)
sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00003a180, 0x0)
sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc00003a180)
sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0002be280)
k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002be280, 0x3b9aca00, 0x0, 0x1, 0xc00021c000)
k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc0002be280, 0x3b9aca00, 0xc00021c000)
k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:193 +0x328
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x11ec9bc]
goroutine 174 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x105
panic(0x14b6920, 0x236bbc0)
runtime/panic.go:679 +0x1b2
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*App).template(0xc00043d680, 0xc0003aefa0, 0x46, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/app_template.go:50 +0x56c
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*App).reconcileFetchTemplateDeploy(0xc00043d680, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/app_reconcile.go:127 +0x48a
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*App).reconcileDeploy(0xc00043d680, 0x167d2f5, 0xe)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/app_reconcile.go:81 +0x1a6
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*App).Reconcile(0xc00043d680, 0xc000437400, 0x0, 0x0, 0x0)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/app_reconcile.go:44 +0x37f
github.com/vmware-tanzu/carvel-kapp-controller/pkg/app.(*CRDApp).Reconcile(...)
github.com/vmware-tanzu/carvel-kapp-controller@/pkg/app/crd_app.go:112
github.com/vmware-tanzu/carvel-kapp-controller/cmd/controller.(*AppsReconciler).Reconcile(0xc00025e580, 0xc000806788, 0x7, 0xc000806784, 0x4, 0xc00000d9c0, 0xc00046bba0, 0x1344e55, 0xc000364888)
github.com/vmware-tanzu/carvel-kapp-controller@/cmd/controller/apps_reconciler.go:38 +0x353
github.com/vmware-tanzu/carvel-kapp-controller/cmd/controller.(*ErrReconciler).Reconcile(0xc00043b4c0, 0xc000806788, 0x7, 0xc000806784, 0x4, 0xc0008067f0, 0xc, 0xc00046bc30, 0x403744)
github.com/vmware-tanzu/carvel-kapp-controller@/cmd/controller/err_reconciler.go:24 +0x177
github.com/vmware-tanzu/carvel-kapp-controller/cmd/controller.(*UniqueReconciler).Reconcile(0xc000364870, 0xc000806788, 0x7, 0xc000806784, 0x4, 0xc00046bcd8, 0xc0000d4360, 0xc0000d42d8, 0x186fd60)
github.com/vmware-tanzu/carvel-kapp-controller@/cmd/controller/unique_reconciler.go:40 +0x19e
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00003a180, 0x15081c0, 0xc00000d840, 0x0)
sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00003a180, 0x0)
sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc00003a180)
sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0002be280)
k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002be280, 0x3b9aca00, 0x0, 0x1, 0xc00021c000)
k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc0002be280, 0x3b9aca00, 0xc00021c000)
k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:193 +0x328
{"level":"error","ts":1609945732.5325596,"logger":"kc.init","msg":"Could not start controller","error":"exit status 2","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tgithub.com/go-logr/[email protected]/zapr.go:128\ngithub.com/vmware-tanzu/carvel-kapp-controller/cmd/controllerinit.Run\n\tgithub.com/vmware-tanzu/carvel-kapp-controller@/cmd/controllerinit/run.go:36\nmain.main\n\tgithub.com/vmware-tanzu/carvel-kapp-controller@/cmd/main.go:43\nruntime.main\n\truntime/proc.go:203"}
We would love to see support for Helm v3 charts. The main feature we are missing is the support for a separate crds folder, apart from the usual templates folder. Documentation for this is available here. One example chart that uses this feature is Dynatrace's dynatrace-oneagent-controller.
Tagging my pair @Haegi
kapp-controller could have the ability to perform a rollback. I'm currently unsure how this could be accomplished, but if the controller can somehow persist change requests, a previous one could be selected.
app.yml includes App CR describing what/how to deploy:
kappctrl deploy -a app1 -f app.yml
It might be worth adding "shortcut" commands
kappctrl deploy -a app1 --helm stable/postgres
kappctrl deploy -a app1 --git github.com/cloudfoundry/cf-for-k8s --ytt-...
related: carvel-dev/kapp#96
limit kapp-controller scope by:
conflicting resources during kapp-controller install:
Describe the problem/challenge you have
There is currently no way to fetch assets stored in an imgpkg bundle.
Describe the solution you'd like
A section of the AppCRD that contains info for fetching an imgpkg bundle.
spec:
  fetch:
  - imgpkgBundle:
      # Image url; unqualified, tagged, or
      # digest references supported (required)
      url: host.com/username/image:v0.1.0
      # secret with auth details (optional)
      secretRef:
        name: secret-name
      # grab only portion of image (optional)
      subPath: inside-dir/dir2
This would allow users to leverage the benefits of imgpkg bundles over images, for example, using the ImagesLock file during the template stage for relocated assets.
Error message:
Stderr: kapp: Error: Validation errors:
- Expected 'kind' on resource '/ () cluster' to be non-empty (stdin doc 10)
- Expected 'apiVersion' on resource '/ () cluster' to be non-empty (stdin doc 10)
- Expected 'metadata.name' on resource '/ () cluster' to be non-empty (stdin doc 10)
Okay, this took a long time and was pretty much trial and error; I would love to find some way to possibly make that easier. My secret that I was using for inline.pathsFrom with the ytt section didn't have #@data/values at the top.
how do we improve finding this kind of error? will schemas help with this?
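For reference, the file carried in the secret needs the ytt annotation as its very first line so ytt treats it as a data values document rather than a plain template; a minimal working example (names are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: extra-values
stringData:
  values.yaml: |
    #@data/values
    ---
    some_key: some_value

Without the #@data/values line the file is rendered as an ordinary document, which is consistent with the empty kind/apiVersion validation errors shown above.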
What steps did you take:
kubectl apply -f ingress/contour/namespace-role.yaml
kubectl create secret generic contour-data-values --from-file=values.yaml=ingress/contour/vsphere/contour-data-values.yaml -n tanzu-system-ingress-4
kubectl apply -f ingress/contour/contour-extension.yaml
kubectl get app contour -n tanzu-system-ingress
What happened:
root@tkg [ ~ ] tkg12workloadcluster-01-admin@tkg12workloadcluster-01:default) # k get app contour -n tanzu-system-ingress-4
NAME DESCRIPTION SINCE-DEPLOY AGE
contour Reconcile failed: Fetching (0): Fetching registry image: Imgpkg: exit status 1 (stderr: Error: Collecting images: Working with registry.tkg.vmware.run/tkg-extensions-templates:v1.2.0_vmware.1: Get https://registry.tkg.vmware.run/v2/: dial tcp: i/o timeout
) 64m
What did you expect:
NAME DESCRIPTION SINCE-DEPLOY AGE
contour Reconcile succeeded 15s 72s
Anything else you would like to add:
Environment:
- kapp-controller version (kubectl get deployment -n kapp-controller kapp-controller -o yaml, and the annotation is kbld.k14s.io/images):
- Kubernetes version (kubectl version):
consider supporting include CRDs flag. still want to support helm v2. should we make this consistent with how vendir selects helm version?
As discovered in #45, TestHelm is failing. Logs from GitHub actions are shown here. This is related to Helm 2 no longer being supported. The test should be updated to account for stable/redis no longer being available.
Describe the problem/challenge you have
in cases when an App CR no longer has access to its service account (e.g. remote cluster is deleted or disconnected), we don't have a natural way to delete App CRs. one "solution" today is to initiate deletion and then delete the finalizer. this feels more like a hack.
Describe the solution you'd like
not sure. suggestions previously were:
Anything else you would like to add:
this issue may also arise when the service account resource is deleted before the App CR (so not just in the case of a remote cluster).
seems like this should be per controller configuration? at the least, document how to configure env vars.
example env vars:
- name: http_proxy
  value: http://proxy.corp.com:8888
- name: HTTP_PROXY
  value: http://proxy.corp.com:8888
- name: https_proxy
  value: http://proxy.corp.com:8888
- name: HTTPS_PROXY
  value: http://proxy.corp.com:8888
- name: no_proxy
  value: 127.0.0.1,localhost,kubernetes.default.svc,.svc,.corp.com
- name: NO_PROXY
  value: 127.0.0.1,localhost,kubernetes.default.svc,.svc,.corp.com
- name: PROXY_ENABLED
  value: "yes"
The kubernetes project started using all lower-case import strings. Because the version of client-go that we are using (v0.17.2) uses an upper case version of an import string, if we ever need to add a dependency, or bump one of the other k8s dependencies, that uses that same package, we will get the following go mod error: case-insensitive import collision: "github.com/googleapis/gnostic/openapiv2" and "github.com/googleapis/gnostic/OpenAPIv2".
Describe the problem/challenge you have
Assume we define an App CR to deploy some artefacts on some target cluster. If the deployment fails, e.g. because the target cluster is hibernated and not reachable, the deployment is retried every few seconds without any backoff. This results in a flood of retries if there is more than one such App CR.
Describe the solution you'd like
Perhaps it would be a good idea to slow down the retries, e.g. using the standard backoff strategy of the controller framework. Not sure if this should be configurable?
Right now, kapp-controller builds always package the latest version of kapp. This makes it impossible to infer the kapp version from kapp-controller's version.
Should we consider pinning kapp's version per kapp-controller release?
tagging my pair @Haegi
When kapp-controller is reconciling, I want the ability to issue a command that will allow me to track the progress of the reconciliation as if I did the deployment myself.
Essentially I want to stream the progress in a similar way that I get with kapp deploy...
Output would stream the progress of the reconciliation until complete.
I encountered a bug where KAPP_KUBECONFIG_YAML is set to empty in versions >0.6, which somehow leads to yaml formatting errors for my yaml specs.
Just changing https://github.com/k14s/kapp-controller/blob/develop/pkg/deploy/kapp.go#L196 to just do != "" fixes the issue. Is there a reason for using the len approach?
It would be really cool if the git source could verify a commit was signed with a GPG signature. This seems like a nice middle ground between not really wanting to give our GitLab access to a serviceAccount with really broad permissions, but also not wanting to set up really fine grained deployment roles per application.
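A hypothetical sketch of what that could look like on the git fetch directive (the verification fields are invented for illustration, loosely following vendir's style):

fetch:
- git:
    url: https://gitlab.corp.com/team/app.git
    ref: origin/main
    # hypothetical: fail the fetch unless the checked-out commit is signed
    # by one of the public keys stored in this secret
    verification:
      publicKeysSecretRef:
        name: trusted-gpg-keys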
currently if you deploy an app cr and associated service account and delete them together, there is a race between kapp controller deleting the workload and k8s deleting the associated service account. once the service account is deleted, kapp controller of course cannot delete the app workload.
currently if you are deploying appcr+sa with kapp, you can add an order rule (see the sketch below); however, in more basic cases (using kubectl for example), we should probably enforce the "dependency" in the api server via a finalizer on the service account and associated secrets. to support usage of the same service account by multiple app crs, it's probably needed to add one finalizer per app cr to the service account.
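For the kapp case, the order rule can be expressed with kapp's change-group/change-rule annotations; a sketch with illustrative names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app1-sa
  annotations:
    kapp.k14s.io/change-group: "app1-sa"
---
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: app1
  annotations:
    # delete the App (letting kapp-controller tear down the workload)
    # before the service account it depends on is deleted
    kapp.k14s.io/change-rule: "delete before deleting app1-sa"
spec:
  serviceAccountName: app1-sa
  fetch:
  - git: ...
  template:
  - ytt: {}
  deploy:
  - kapp: {}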
it might be helpful to indicate that resources are consistently drifting. it should probably kick in if we see that X number of consecutive reconciliations happened without external change (though how does one know what is external...).
/kind feature
As a user, I would like to reconcile an App CR deployed in Cluster A into Cluster B, to support a few use cases:
Given a secret of the form:
apiVersion: v1
data:
  value: YXBpVmVyc... (base64 encoded kubeconfig file)
kind: Secret
metadata:
  name: another-kubeconfig
  namespace: some-namespace
type: Opaque
I can put into the App CR:
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: my-app-to-another
  namespace: some-namespace
spec:
  context:
  - secretRef:
      name: another-kubeconfig
      subPath: value
  - secretRef:
      matchLabels:
        somelabel: on-mysecrets
      subPath: value
  fetch:
  - git:
      ...
  template:
  - ytt: {}
  deploy:
  - kapp: {}
Update 1: removed namespace reference so that you can only reference a secret in the same namespace.
Update 2: changed context to an array, allowing you to define multiple contexts.
Recreate:
Create a kubeconfig that is valid but points to an invalid endpoint (like api.example.com). On first deployment you will get something similar to this:
Status:
  Conditions:
    Message:  Deploying: exit status 1
    Status:   True
    Type:     ReconcileFailed
  Deploy:
    Error:      Deploying: exit status 1
    Exit Code:  1
    Finished:   true
    Started At: 2020-07-04T12:08:10Z
    Stderr:     kapp: Error: Creating app:
      Post https://api.example.com:6443/api/v1/namespaces/kube-system/configmaps: dial tcp 10.234.2.190:6443: i/o timeout
    Stdout:     Target cluster 'https://api.example.com:6443'
    Updated At: 2020-07-04T12:09:10Z
  Fetch:
    Exit Code:  0
    Started At: 2020-07-04T12:08:10Z
    Updated At: 2020-07-04T12:08:10Z
  Friendly Description: Reconcile failed: Deploying: exit status 1
  Inspect:
    Error:      Inspecting: exit status 1
    Exit Code:  1
    Stderr:     kapp: Error: Getting app:
      Get https://api.example.com:6443/api/v1/namespaces/kube-system/configmaps/a-postcreate-ctrl: dial tcp 10.234.0.52:6443: i/o timeout
    Stdout:     Target cluster 'https://api.example.com:6443'
    Updated At: 2020-07-04T12:10:11Z
  Observed Generation: 1
  Template:
    Exit Code:  0
    Updated At: 2020-07-04T12:08:10Z
App CR will now sit around indefinitely, because it cannot ever get the status of the app.
We are using the following configuration to install cf-for-k8s:
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: cf
spec:
  cluster:
    kubeconfigSecretRef:
      name: "cf4k8s-kapp.kubeconfig"
      key: "kubeconfig"
  fetch:
  - git:
      url: https://github.tools.sap/cki/cf-for-k8s-scp.git
      ref: origin/master
      secretRef:
        name: github-tools-sap-token
      subPath: config
  template:
  - ytt:
      inline:
        pathsFrom:
        - secretRef:
            name: cf-values
  deploy:
  - kapp:
      inspect:
        rawOptions: ["--json=true"]
When we change ref: origin/master to ref: 11c6cd21ff7100de5bf208aa391912f38c830b32 we get the following error:
Reconcile failed: Fetching (0): Fetching git repo: Git [checkout 11c6cd21ff7100de5bf208aa391912f38c830b32 --recurse-submodules .]: exit status 128 (stderr: fatal: reference is not a tree: 11c6cd21ff7100de5bf208aa391912f38c830b32)
although the commit 11c6cd21ff7100de5bf208aa391912f38c830b32 exists in the repo. git checkout 11c6cd21ff7100de5bf208aa391912f38c830b32 --recurse-submodules . on the command line works fine.
Running
cd /tmp
rm -rf cf-for-k8s
mkdir cf-for-k8s
cd cf-for-k8s/
git init .
git remote add origin https://github.tools.sap/cki/cf-for-k8s-scp.git
git fetch origin
git checkout 11c6cd21ff7100de5bf208aa391912f38c830b32
inside the kapp-controller docker container also works.
AppSet is meant to represent the same App deployed in several variations (analogous to the Pod-to-Deployment relation, where each Pod gets a unique name).
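A hypothetical sketch of such a CR (the kind and field names are invented; it mirrors the template idea from AppDefinition above):

apiVersion: kappctrl.k14s.io/v1alpha1
kind: AppSet            # hypothetical kind
metadata:
  name: my-app
spec:
  template:
    spec: # App CRD spec
      fetch:
      - git: ...
      template:
      - ytt: ...
      deploy:
      - kapp: ...
  # hypothetical: one uniquely named App is stamped out per variation
  variations:
  - name: eu
    values:
      region: eu
  - name: us
    values:
      region: us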
Following the install instructions for kapp-controller, there is no mention of needing to create the kapp-controller namespace before using kapp or kubectl.
One approach to resolving this would be to actually include the creation of the kapp-controller namespace as part of the script. An alternative is to update the documentation so users are aware of the step before installing.
EDIT: This issue is actually about how kubectl needs the namespace resource to come before all other dependent resources in the release script as shown here. The kapp install process should still work as expected.
allow to configure sync period via spec.syncPeriod: <duration+unit>.
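For example (the 10m value is arbitrary; the field already appears as syncPeriod: 10s in other issues here):

apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: app1
spec:
  # refetch/re-template/redeploy at most every 10 minutes
  syncPeriod: 10m
  fetch:
  - git: ...
  template:
  - ytt: {}
  deploy:
  - kapp: {}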
What steps did you take:
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: simple-app
  namespace: default
spec:
  serviceAccountName: default-ns-sa
  fetch:
  - http:
      url: i-dont-exist
  template:
  - ytt: {}
  deploy:
  - kapp: {}
kubectl get apps -n <namespace you deployed to> -oyaml
What happened:
Seeing the inspect error, which isn't the actual error causing reconciliation to fail, is misleading:
fetch:
  error: 'Fetching resources: exit status 1'
  exitCode: 1
  startedAt: "2021-01-27T19:33:30Z"
  stderr: |
    Error: Syncing directory '0': Syncing directory '.' with HTTP contents: Downloading URL: Initiating URL download: Get i-dont-exist: unsupported protocol scheme ""
  updatedAt: "2021-01-27T19:33:30Z"
friendlyDescription: 'Reconcile failed: Fetching resources: exit status 1'
inspect:
  error: 'Inspecting: exit status 1'
  exitCode: 1
  stderr: 'kapp: Error: App ''simple-app-ctrl'' (namespace: default) does not
    exist: configmaps "simple-app-ctrl" not found'
  stdout: Target cluster 'https://10.96.0.1:443'
  updatedAt: "2021-01-27T19:33:30Z"
What did you expect:
To just see the error that relates to why the app is failing (in this case the fetch)
We should not run inspect if the app has never been deployed.
I really like doing the following fairly often:
template:
- ytt:
    ignoreUnknownComments: true
    inline:
      paths:
        somemorevalues.yaml: |
          #@data/values
          ---
          stuff: that
          can: go
          here: true
Unfortunately you lose the power of native yaml syntax highlighting and other convenience stuff. If you added in support for just inline.values you could instead have the following:
template:
- ytt:
    ignoreUnknownComments: true
    inline:
      values:
        stuff: that
        can: go
        here: true
It's more of a convenience thing, but it also lowers the barrier, because otherwise someone would need to know the little hack of including a data/values file directly.