carvel-dev / kbld
kbld seamlessly incorporates image building and image pushing into your development and deployment workflows
Home Page: https://carvel.dev/kbld
License: Apache License 2.0
kbld relocation commands should use lock files as an interface for relocating images. A lock file will consist of the original image reference from the configuration manifests (image: nginx) along with the current location of the image (my.internal.registry/app1@). The workflow will look like:
kbld -f manifest.yml --lock-output images.lock // lock file contains manifest images => digest references in source registry
kbld pkg -f images.lock -o images.tar
kbld unpkg -f images.lock --input images.tar --repository <new repo> --lock-output relocated-images.lock // new lock file contains manifest images => digest references in new repo
And for all in one relocation:
kbld -f manifest.yml --lock-output images.lock // lock file contains manifest images => digest references in source registry
kbld relocate -f images.lock --src <external registry> --dst <internal registry> --lock-output relocated-images.lock // new lock file contains manifest images => digest references in new repo
See packaging docs for more details on the workflow
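For illustration, such a lock file could look like the following. This is a sketch based on the Config lock format shown elsewhere in this document; the digest reference under my.internal.registry/app1 is a placeholder.

```yaml
apiVersion: kbld.k14s.io/v1alpha1
kind: Config
overrides:
- image: nginx
  newImage: my.internal.registry/app1@sha256:...   # placeholder digest reference
  preresolved: true
```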
kbld reports success given an unknown command:
workspace/kbld » kbld i-dont-exist
Succeeded
This should error, telling the user that i-dont-exist is an unknown command.
Introduce a command kbld relocate -f install.yml --repository my.internal.reg that will skip writing the image layers to disk and will stream them directly to the new repository.
kbld relocate supports the --lock-output flag to maintain a mapping from manifest image locations to current image locations. See packaging docs for more details on the workflow
# new lock file contains manifest images => digest references in new repo
kbld relocate -f images.lock --repository <dst registry> --lock-output relocated-images.lock
kbld relocate -f images.lock -r <dst registry> --lock-output relocated-images.lock
I bumped into the issue with quay.io/thanos/thanos:v0.8.1 (on the tags page, use filter by tag to find exactly that one). kbld resolves the wrong digest for that image. I tested another one from quay.io (prometheus/node-exporter) and its digest was resolved correctly.
Along with the tagged image, I put another one with the correct digest in the example below.
test.yaml:
---
kind: 'Object'
spec:
- name: thanos
image: 'quay.io/thanos/thanos:v0.8.1'
- name: thanos-digest
image: 'quay.io/thanos/thanos@sha256:e008f9f98a403d6e872baf4b97ca85e7be79d401a43c6f85cf5ad170f1c21646'
$ kbld -f test.yaml
resolve | final: quay.io/thanos/thanos@sha256:e008f9f98a403d6e872baf4b97ca85e7be79d401a43c6f85cf5ad170f1c21646 -> quay.io/thanos/thanos@sha256:e008f9f98a403d6e872baf4b97ca85e7be79d401a43c6f85cf5ad170f1c21646
resolve | final: quay.io/thanos/thanos:v0.8.1 -> quay.io/thanos/thanos@sha256:d6bcedf93f1a2ef27f3a0c8dd8bfb6bd86e6ae89352fdbb79354fd59bce6fc1b
---
kind: Object
metadata:
annotations:
kbld.k14s.io/images: |
- Metas:
- Tag: v0.8.1
Type: resolved
URL: quay.io/thanos/thanos:v0.8.1
URL: quay.io/thanos/thanos@sha256:d6bcedf93f1a2ef27f3a0c8dd8bfb6bd86e6ae89352fdbb79354fd59bce6fc1b
- Metas: null
URL: quay.io/thanos/thanos@sha256:e008f9f98a403d6e872baf4b97ca85e7be79d401a43c6f85cf5ad170f1c21646
spec:
- image: quay.io/thanos/thanos@sha256:d6bcedf93f1a2ef27f3a0c8dd8bfb6bd86e6ae89352fdbb79354fd59bce6fc1b
name: thanos
- image: quay.io/thanos/thanos@sha256:e008f9f98a403d6e872baf4b97ca85e7be79d401a43c6f85cf5ad170f1c21646
name: thanos-digest
Succeeded
After looking through the code I found out that kbld uses an outdated dependency for digest resolution. The wrong digest is calculated by taking the sha256 of the image's manifest, whereas the correct digest is sent in headers by the registry.
It was fixed in google/go-containerregistry.
Just updating the dependency is not enough; the API has changed and with the new version kbld can't be built anymore. Unfortunately, I have little understanding of this project and can't rapidly fix the API usage.
As a workaround I just use the correct digest instead of the tag.
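To illustrate why the two digests can disagree: a registry digest is the sha256 of the exact manifest bytes served (also returned in the Docker-Content-Digest response header), so hashing a re-serialized copy of the manifest, as the report above suggests the outdated dependency effectively did, yields a different value. A minimal illustration, not kbld's actual code:

```python
import hashlib
import json

def manifest_digest(raw: bytes) -> str:
    # A registry digest is the sha256 of the exact manifest bytes as served.
    return "sha256:" + hashlib.sha256(raw).hexdigest()

# Manifest bytes exactly as a registry might serve them (whitespace matters).
raw = b'{\n  "schemaVersion": 2\n}'
# Re-serializing the parsed JSON changes whitespace/key layout, hence the digest.
reserialized = json.dumps(json.loads(raw)).encode()

d1 = manifest_digest(raw)
d2 = manifest_digest(reserialized)
print(d1 != d2)  # the two digests differ even though the JSON content is "the same"
```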
==> Upgrading k14s/tap/kbld
==> Downloading https://github.com/k14s/kbld/releases/download/v0.7.0/kbld-darwin-amd64
curl: (22) The requested URL returned error: 404 Not Found
Error: An exception occurred within a child process:
DownloadError: Failed to download resource "kbld"
Download failed: https://github.com/k14s/kbld/releases/download/v0.7.0/kbld-darwin-amd64
This is useful for additional workflow steps that may need to know what got processed. For example: determining the digest for a particular image so that it can be used elsewhere.
The change should possibly be in the underlying go-containerregistry library.
--- FAIL: TestPkgUnpkgSuccessful (39.76s)
kbld.go:72: Failed to successfully execute 'kbld package -f - --output /tmp/kbld-test-pkg-unpkg-successful --yes': Execution error: stdout: '' stderr: 'package | exporting 2 images...
package | will export index.docker.io/library/redis@sha256:000339fb57e0ddf2d48d72f3341e47a8ca3b1beae9bdcb25a96323095b72a79b
package | will export gcr.io/cloud-builders/gcs-fetcher@sha256:055519529bf1ba12bf916fa42d6d3f68bdc581413621c269425bb0fee2467a93
package | exported 2 images
Error: Getting compressed layer: Get https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/97/97a473b37fb2921176dcdeb10cecd0171a8b2ef20ea51fcbf330a8ccd9c7efb3/data?verify=1564597134-8w3hML1hGiv1%2Fwzqpms%2BhwqDGXo%3D: net/http: timeout awaiting response headers
' error: 'exit status 1'
Introduce a --concurrency flag (default to 5?).
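A bounded worker pool is one way such a flag could behave. A hypothetical sketch (export_all and export_fn are made-up names, not kbld's API):

```python
from concurrent.futures import ThreadPoolExecutor

def export_all(images, export_fn, concurrency=5):
    """Run export_fn over images with at most `concurrency` exports in flight."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        # pool.map preserves input order while limiting parallelism.
        return list(pool.map(export_fn, images))

results = export_all(["redis", "gcs-fetcher"], lambda img: f"exported {img}")
print(results)
```

Limiting in-flight registry operations would also reduce the chance of timeouts like the one in the test failure above.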
error: fork/exec /usr/local/bin/docker: not a directory
We already support Docker and pack; let's add https://github.com/pivotal/kpack.
to avoid unnecessary annotation changes
kbld --lock-output produces a file with a resource of Config kind. But in the docs I don't see a mention that kind: Config has an overrides key. I'd like to clarify which kind is preferable for overriding images: ImageOverrides or Config.
Here is a snippet I used for testing:
echo "image: postgres" | kbld -f - --lock-output /dev/fd/2 > /dev/null
resolve | final: postgres -> index.docker.io/library/postgres@sha256:8f7c3c9b61d82a4a021da5d9618faf056633e089302a726d619fa467c73609e4
apiVersion: kbld.k14s.io/v1alpha1
kind: Config
minimumRequiredVersion: 0.23.0
overrides:
- image: postgres
newImage: index.docker.io/library/postgres@sha256:8f7c3c9b61d82a4a021da5d9618faf056633e089302a726d619fa467c73609e4
preresolved: true
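For comparison, the same override could presumably be expressed with an ImageOverrides document (a sketch based on the ImageOverrides shape shown elsewhere in this document; whether it supports preresolved is exactly the question being asked):

```yaml
apiVersion: kbld.k14s.io/v1alpha1
kind: ImageOverrides
overrides:
- image: postgres
  newImage: index.docker.io/library/postgres@sha256:8f7c3c9b61d82a4a021da5d9618faf056633e089302a726d619fa467c73609e4
  preresolved: true
```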
First off, I wanted to say thank you for putting together this tool. The ability to specify local build context and have it transformed into images is great!
I have been using Docker buildx bake as a similar tool, using the Kubernetes executor to build images on k8s and push images.
I have been appreciating the ability to parallelize the build DAG as well as easily add caching for certain steps. See my generated docker bake JSON for an example of the caching and tags.
So I am curious about two points:
These are larger discussion points, so I more just wanted to get a sense from you all how you are thinking about the relationship between the two tools.
I am very thankful that there is so much great work going on this space right now!
It seems that AWS ECR does not support Docker v2 manifest lists.
Some users want to use ko as a backend (i.e. an image-building option).
Proposed workflow: stay as close to existing kbld behavior as possible, so we'll only use ko for building images and not for publishing images. To use ko for building, we can simply use ko publish --local; this will build images to the local Docker. kbld can then reference the local image for publishing (as with pack).
Example CR:
---
apiVersion: kbld.k14s.io/v1alpha1
kind: Sources
sources:
- image: image1
path: src/
ko:
build:
rawOptions: ["--disable-optimizations"]
ko.build.rawOptions ([]string): refer to ko publish -h for all available flags.
We didn't know what the rules/patterns were for docker and pack build options, so we just opted for the simplest form.
(spawned from conversation in #18; thanks to @Zebradil)
A user may want to verify that digest references to images, in fact, resolve to an actual image.
Right now, when an image reference has already been resolved to a digest, kbld does not check for the presence of the image in the registry pointed to by the reference.
Allow the user to direct kbld to perform that check with a flag (e.g. --verify-digest-refs). When this flag is set, kbld would confirm the image exists in the registry (e.g. via a HEAD request for that image).
I am trying to run kbld before ytt. Is there any way to have kbld not strip out the ytt templating?
On Linux, I am seeing auth failures when using the following docker config. This works fine authing against Docker Hub, etc.:
$ cat config.json
{
"auths": {
"https://index.docker.io/v1/": {}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/18.09.6 (linux)"
},
"credsStore": "secretservice"
}
When running the following command:
kbld unpkg -f resolved-manifest.yaml --input data-flow-image.tar --repository chrisjs/spring-cloud-dataflow-server
It fails with:
unpackage | importing 1 images...
unpackage | importing index.docker.io/springcloud/spring-cloud-dataflow-server@sha256:64807655037fa1dd90f4841e9bff0994bfe37edeaf473fbac722a2412ccfbe89 -> index.docker.io/chrisjs/spring-cloud-dataflow-server@sha256:64807655037fa1dd90f4841e9bff0994bfe37edeaf473fbac722a2412ccfbe89...
unpackage | imported 0 images
kbld: Error: Importing image index.docker.io/springcloud/spring-cloud-dataflow-server@sha256:64807655037fa1dd90f4841e9bff0994bfe37edeaf473fbac722a2412ccfbe89: Importing image as index.docker.io/chrisjs/spring-cloud-dataflow-server@sha256:64807655037fa1dd90f4841e9bff0994bfe37edeaf473fbac722a2412ccfbe89: Writing image: Retried 5 times: unsupported status code 401; body:
docker-credential-secretservice is on my $PATH and I can see my proper credentials in it.
When changing my docker config.json to use base64-encoded user:pass, i.e.:
"auths": {
"https://index.docker.io/v1/": {
"auth": "<base64 encoded user:pass>"
}
},
It then auths and relocates properly and I can see the image in dockerhub:
$ kbld unpkg -f resolved-manifest.yaml --input data-flow-image.tar --repository chrisjs/spring-cloud-dataflow-server
unpackage | importing 1 images...
unpackage | importing index.docker.io/springcloud/spring-cloud-dataflow-server@sha256:64807655037fa1dd90f4841e9bff0994bfe37edeaf473fbac722a2412ccfbe89 -> index.docker.io/chrisjs/spring-cloud-dataflow-server@sha256:64807655037fa1dd90f4841e9bff0994bfe37edeaf473fbac722a2412ccfbe89...
unpackage | imported 1 images
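As a note on the workaround: the value of the auth key is just the base64 encoding of user:pass. A quick sketch for generating it (user:pass is a placeholder; substitute real credentials):

```shell
# "auth" value is base64("user:pass"); printf avoids a trailing newline in the input.
printf '%s' 'user:pass' | base64
```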
---
..
..
Background:
I use kbld to dynamically replace docker image:tag notations with image@digest references before applying the manifests in a pipeline like this:
custom-script-to-render-multiple-helm-charts | kbld -f - | kubectl apply -f -
According to the YAML spec, a YAML document may start with ---.
Adding --- at the top of every YAML file (even empty ones) has proven specifically useful when concatenating the output of tools like kubectl, helm template and also kbld into a single file, and is hence somewhat considered a best practice (at least in my filter bubble).
kbld does not (yet) add --- on top of its output by default and does not have an option to enable it either. It would be really helpful to have it.
I am happy to provide a PR if you point me in the right direction.
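For reference, the requested behavior amounts to something like this trivial sketch (not kbld code; ensure_doc_start is a made-up name):

```python
def ensure_doc_start(yaml_text: str) -> str:
    """Prepend the YAML document-start marker (---) if it is missing."""
    return yaml_text if yaml_text.startswith("---") else "---\n" + yaml_text

print(ensure_doc_start("kind: Config\n"))
```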
Error message:
Importing image as <some-wacky-registry>@sha256:41cdb94b4dbd0c70a8a31ccfc886b1451219e823e65bb6a4e1270263cf16eb04:
Writing image:
Retried 5 times:
Get https://<some-wacky-registry>/v2/: http: server gave HTTP response to HTTPS client
Could support an --insecure-registry flag, which causes kbld to create a new registry with the insecure option, or just always include the option, because ggcr will try https first and then fall back to http.
For convenience, it may be helpful to allow the destination.newImage field of an ImageDestination document to contain a tag. Currently it errors when trying to auto-generate the upload tag:
kbld: Error: Importing image <some-image>:
Importing image as <some-relocated-image>:
Writing image:
Retried 5 times:
Patch <some-image-blob-url>: net/http:
HTTP/1.x transport connection broken: close /tmp/packaged-dependencies.tar: file already closed
kbld consistently faced this issue against Artifactory installed into a k8s namespace. We couldn't reproduce this issue in our own environments, or after a few days within the same Artifactory env. Filing this here for anyone else to report if they have seen it.
I'm currently using the nix toolchain for building my Docker images from source. It outputs a single file which I pass to docker load. Is it possible to integrate that with kbld?
This issue seems to be similar to #19 but the setup is different.
Have a typical setup for kbld where it is doing an image build and pushing the image to a remote registry. When it fills out the image digest reference in the deployment, it uses the digest which matched what was stored in the local image build cache. The digest which the image was stored under in the remote image registry is different though, meaning the image cannot be found by the deployment.
The final step of kbld outputs:
resolve | final: custom-jupyterhub -> lab-jupyter-on-k8s-03-w01-s003-registry.training.getwarped.org/custom-jupyterhub@sha256:0fc073a041e341ca3db2fe40a570d5c22eb529b591fc90a490737b4d48ca53d6
Inspection of the image in the local build cache yields:
[
{
"Id": "769906fbf88ba54ba5a5267749edda68c2aea96a630b2e2c9aee3b7042eaa860",
"Digest": "sha256:0fc073a041e341ca3db2fe40a570d5c22eb529b591fc90a490737b4d48ca53d6",
"RepoTags": [
"localhost/kbld:custom-jupyterhub-769906fbf88ba54ba5a5267749edda68c2aea96a630b2e2c9aee3b7042eaa860",
"lab-jupyter-on-k8s-03-w01-s003-registry.training.getwarped.org/custom-jupyterhub:kbld-rand-1586680932345334383-93222136210142"
],
"RepoDigests": [
"lab-jupyter-on-k8s-03-w01-s003-registry.training.getwarped.org/custom-jupyterhub@sha256:0fc073a041e341ca3db2fe40a570d5c22eb529b591fc90a490737b4d48ca53d6",
"localhost/kbld@sha256:0fc073a041e341ca3db2fe40a570d5c22eb529b591fc90a490737b4d48ca53d6"
],
The image reference in the deployment is:
metadata:
annotations:
kbld.k14s.io/images: |
- Metas:
- Path: /home/eduk8s/hub-v3
Type: local
URL: lab-jupyter-on-k8s-03-w01-s003-registry.training.getwarped.org/custom-jupyterhub@sha256:0fc073a041e341ca3db2fe40a570d5c22eb529b591fc90a490737b4d48ca53d6
...
spec:
containers:
- image: lab-jupyter-on-k8s-03-w01-s003-registry.training.getwarped.org/custom-jupyterhub@sha256:0fc073a041e341ca3db2fe40a570d5c22eb529b591fc90a490737b4d48ca53d6
The error from kapp when deploying is:
8:47:22AM: ^ Pending: ErrImagePull (message: rpc error: code = Unknown desc = Error response from daemon: manifest for lab-jupyter-on-k8s-03-w01-s003-registry.training.getwarped.org/custom-jupyterhub@sha256:0fc073a041e341ca3db2fe40a570d5c22eb529b591fc90a490737b4d48ca53d6 not found: manifest unknown: manifest unknown)
If you rerun just kbld with nothing changed, you get the error:
kbld: Error:
- Resolving image 'custom-jupyterhub': Expected to find same repo digest, but found []string{"lab-jupyter-on-k8s-03-w01-s003-registry.training.getwarped.org/custom-jupyterhub@sha256:0fc073a041e341ca3db2fe40a570d5c22eb529b591fc90a490737b4d48ca53d6", "lab-jupyter-on-k8s-03-w01-s003-registry.training.getwarped.org/custom-jupyterhub@sha256:484209beb53e3dae50fc828fa215ac596f9f6f35d6398c113b7ce45b59dec5d2", "localhost/kbld@sha256:0fc073a041e341ca3db2fe40a570d5c22eb529b591fc90a490737b4d48ca53d6", "localhost/kbld@sha256:484209beb53e3dae50fc828fa215ac596f9f6f35d6398c113b7ce45b59dec5d2"}
which also appears to be picking up the mismatch.
As far as the setup goes, the builds are being done with podman using the podman-docker wrapper on Fedora, which results in podman being invoked when docker is run. Thus podman is used to push the image to the remote registry. The remote registry is a self-hosted latest version of the standard Docker image registry.
The image registry was empty before starting. After the kbld run, we have:
$ curl -u xxx:xxx -X GET https://$REGISTRY_HOST/v2/_catalog
{"repositories":["custom-jupyterhub"]}
$ curl -u xxx:xxx -X GET https://$REGISTRY_HOST/v2/custom-jupyterhub/tags/list
{"name":"custom-jupyterhub","tags":["kbld-rand-1586680932345334383-93222136210142"]}
Running skopeo to inspect the remote image gives:
"Name": "lab-jupyter-on-k8s-03-w01-s003-registry.training.getwarped.org/custom-jupyterhub",
"Digest": "sha256:cdaeaddf70987d0b895efccaa735ea6efcdb00228a0bdd9f586c78f663c42b3c",
"RepoTags": [
"kbld-rand-1586680932345334383-93222136210142"
],
...
"Layers": [
"sha256:462816986e4db9b3ba3ff89114709053ed939e75339c1bf64bb6c8c20bb0ac09",
"sha256:8a14c0ec1aad906c9703b3be9a4807671a6367d6496d1941ff4a40f488887816",
"sha256:ea892e6e4eade778ab7349dafb7a078676f4367c7bed5067c2456c204705aeaa",
"sha256:a8abb9d778483d211484c35e61002e6dde8ca8755c0577a82fe265c65590a786",
"sha256:f5266c3eb07a9a052ccde05abe2bd4e763a8ffe316dcaef1c3f1b2935515ec14",
"sha256:255f20c7a19d5ec58bc992bcf3c56cd58c6cd9ffd144c3cbd854a5e4efc72a94",
"sha256:3699fa7667edd14c6e99940406dd9fb4f2d33e0bced900623bd973bd822ab4d5",
"sha256:cf5bbb6d527ad8bc3eb29a4b73fa7acfc1fcb2d4bbf17df1b3d5e4176aad33bd",
"sha256:021fc1973d41319bdb76e723c4f89424d5a9c7a6d3bd5b0d79560722a70e53e3",
"sha256:b971891c534ff0a70e03c988a9e254b6e33d192e61d1e1352061edb69bba7469",
"sha256:5fd45b49b7e12c875ca5110ea78b8ff04ac90c79db1c0bbced7a13d55d6f820a",
"sha256:f291ee79eb30aedf1cf37386b3c3233e193c3a95255a8e8b55e9d5ff41e07409",
"sha256:028d992643957f2d6b967697e91e7c986eb919016aa88fd997e8b91237cb5fed",
"sha256:716df0b1c063d005b933667274c2edd3a0d6bf5405ac1390812066543c812a36",
"sha256:68d6bea7dd2ed44cb44cf70365c75f69e6b30877332dac5d08b333c4de56c6b1"
],
So it looks like kbld should work out the digest from the image on the remote image registry after it has been pushed, rather than assuming the image will have the same digest value.
Today, when I use kbld with an unknown command, such as kbld blah, I receive a "Succeeded" message and it exits 0. This is surprising, so we should consider fixing it to inform the user that the command does not exist.
I installed ytt v0.19.0 and, at kbld develop (0f13c7b), I get the following (on macOS with Go 1.12):
$ ./hack/build.sh
+ go fmt ./cmd/... ./pkg/... ./test/...
+ build_values_path=../../.././hack/build-values-default.yml
+ cd pkg/kbld/website
+ ytt version
Version: 0.19.0
+ ytt -f . -f ../../.././hack/build-values-default.yml --file-mark generated.go.txt:exclusive-for-output=true --output ../../../tmp/
Error: Unknown output type '../../../tmp/'
Similarly:
$ ./hack/test-all.sh
+ ./hack/build.sh
+ go fmt ./cmd/... ./pkg/... ./test/...
+ build_values_path=../../.././hack/build-values-default.yml
+ cd pkg/kbld/website
+ ytt version
Version: 0.19.0
+ ytt -f . -f ../../.././hack/build-values-default.yml --file-mark generated.go.txt:exclusive-for-output=true --output ../../../tmp/
Error: Unknown output type '../../../tmp/'
From the simple app example, step 4, using an internal Harbor repo:
$ ytt template -f config-step-4-build-and-push/ -v hello_msg="k14s user" -v push_images=true -v push_images_repo=https://harbor.internal.com/mygroup/test-cavel | kbld -f-
Fails:
$ ytt template -f config-step-4-build-and-push/ -v hello_msg="k14s user" -v push_images=true -v push_images_repo=https://harbor.internal.com/mygroup/test-cavel | kbld -f- | kapp deploy -a simple-app -f- --diff-changes --yes
docker.io/dkalinin/k8s-simple-app | starting build (using Docker): . -> kbld:rand-1601525838182306000-812220914045-docker-io-dkalinin-k8s-simple-app
docker.io/dkalinin/k8s-simple-app | Sending build context to Docker daemon 264.2kB
docker.io/dkalinin/k8s-simple-app | Step 1/8 : FROM golang:1.12
docker.io/dkalinin/k8s-simple-app | ---> ffcaee6f7d8b
docker.io/dkalinin/k8s-simple-app | Step 2/8 : WORKDIR /go/src/github.com/k14s/k8s-simple-app-example/
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> 408dba10421f
docker.io/dkalinin/k8s-simple-app | Step 3/8 : COPY . .
docker.io/dkalinin/k8s-simple-app |
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> 4e4c0c0d2fbd
docker.io/dkalinin/k8s-simple-app | Step 4/8 : RUN CGO_ENABLED=0 GOOS=linux go build -v -o app
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> c42243670670
docker.io/dkalinin/k8s-simple-app | Step 5/8 : FROM scratch
docker.io/dkalinin/k8s-simple-app | --->
docker.io/dkalinin/k8s-simple-app | Step 6/8 : COPY --from=0 /go/src/github.com/k14s/k8s-simple-app-example/app .
docker.io/dkalinin/k8s-simple-app |
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> 2effdd09ffab
docker.io/dkalinin/k8s-simple-app | Step 7/8 : EXPOSE 80
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> 70a6756e6cfa
docker.io/dkalinin/k8s-simple-app | Step 8/8 : ENTRYPOINT ["/app"]
docker.io/dkalinin/k8s-simple-app |
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> c8157a2b346a
docker.io/dkalinin/k8s-simple-app | Successfully built c8157a2b346a
docker.io/dkalinin/k8s-simple-app | Successfully tagged kbld:rand-1601525838182306000-812220914045-docker-io-dkalinin-k8s-simple-app
docker.io/dkalinin/k8s-simple-app | Untagged: kbld:rand-1601525838182306000-812220914045-docker-io-dkalinin-k8s-simple-app
docker.io/dkalinin/k8s-simple-app | finished build (using Docker)
https://harbor.internal.com/mygroup/test-cavel | starting push (using Docker): kbld:docker-io-dkalinin-k8s-simple-app-sha256-c8157a2b346ad40e711a2eeb1c666761a2aa7ade4949a130b197fde417e3bf3b -> https://harbor.internal.com/mygroup/test-cavel:kbld-rand-1601525839289730000-62652921773
https://harbor.internal.com/mygroup/test-cavel | Error parsing reference: "https://harbor.internal.com/mygroup/test-cavel:kbld-rand-1601525839289730000-62652921773" is not a valid repository/tag: invalid reference format
https://harbor.internal.com/mygroup/test-cavel | tag error: exit status 1
https://harbor.internal.com/mygroup/test-cavel | finished push (using Docker)
kbld: Error:
- Resolving image 'docker.io/dkalinin/k8s-simple-app': exit status 1
Error: Trying to apply empty set of resources which will delete cluster resources. Refusing to continue unless --dangerous-allow-empty-list-of-resources is specified.
Changing to
$ ytt template -f config-step-4-build-and-push/ -v hello_msg="k14s user" -v push_images=true -v push_images_repo=harbor.internal.com/mygroup/test-cavel | kbld -f- | kapp deploy -a simple-app -f- --diff-changes --yes
fixes it. That is, just remove the https:// scheme.
Output after removing the scheme:
$ ytt template -f config-step-4-build-and-push/ -v hello_msg="k14s user" -v push_images=true -v push_images_repo=harbor.internal.com/mygroup/test-cavel | kbld -f- | kapp deploy -a simple-app -f- --diff-changes --yes
docker.io/dkalinin/k8s-simple-app | starting build (using Docker): . -> kbld:rand-1601525851460904000-1378915685101-docker-io-dkalinin-k8s-simple-app
docker.io/dkalinin/k8s-simple-app | Sending build context to Docker daemon 264.2kB
docker.io/dkalinin/k8s-simple-app | Step 1/8 : FROM golang:1.12
docker.io/dkalinin/k8s-simple-app | ---> ffcaee6f7d8b
docker.io/dkalinin/k8s-simple-app | Step 2/8 : WORKDIR /go/src/github.com/k14s/k8s-simple-app-example/
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> 408dba10421f
docker.io/dkalinin/k8s-simple-app | Step 3/8 : COPY . .
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> 4e4c0c0d2fbd
docker.io/dkalinin/k8s-simple-app | Step 4/8 : RUN CGO_ENABLED=0 GOOS=linux go build -v -o app
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> c42243670670
docker.io/dkalinin/k8s-simple-app | Step 5/8 : FROM scratch
docker.io/dkalinin/k8s-simple-app |
docker.io/dkalinin/k8s-simple-app | --->
docker.io/dkalinin/k8s-simple-app | Step 6/8 : COPY --from=0 /go/src/github.com/k14s/k8s-simple-app-example/app .
docker.io/dkalinin/k8s-simple-app |
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> 2effdd09ffab
docker.io/dkalinin/k8s-simple-app | Step 7/8 : EXPOSE 80
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> 70a6756e6cfa
docker.io/dkalinin/k8s-simple-app | Step 8/8 : ENTRYPOINT ["/app"]
docker.io/dkalinin/k8s-simple-app | ---> Using cache
docker.io/dkalinin/k8s-simple-app | ---> c8157a2b346a
docker.io/dkalinin/k8s-simple-app | Successfully built c8157a2b346a
docker.io/dkalinin/k8s-simple-app | Successfully tagged kbld:rand-1601525851460904000-1378915685101-docker-io-dkalinin-k8s-simple-app
docker.io/dkalinin/k8s-simple-app | Untagged: kbld:rand-1601525851460904000-1378915685101-docker-io-dkalinin-k8s-simple-app
docker.io/dkalinin/k8s-simple-app | finished build (using Docker)
harbor.internal.com/mygroup/test-cavel | starting push (using Docker): kbld:docker-io-dkalinin-k8s-simple-app-sha256-c8157a2b346ad40e711a2eeb1c666761a2aa7ade4949a130b197fde417e3bf3b -> harbor.internal.com/mygroup/test-cavel:kbld-rand-1601525852551414000-1089543465
harbor.internal.com/mygroup/test-cavel | The push refers to repository [harbor.internal.com/mygroup/test-cavel]
harbor.internal.com/mygroup/test-cavel | 7f0382251eda: Preparing
harbor.internal.com/mygroup/test-cavel | 7f0382251eda: Layer already exists
harbor.internal.com/mygroup/test-cavel | kbld-rand-1601525852551414000-1089543465: digest: sha256:399b0f724d117b6525cc7857d92d1d5e11cac88b14112ed16265b3e2ac30a22c size: 528
harbor.internal.com/mygroup/test-cavel | finished push (using Docker)
resolve | final: docker.io/dkalinin/k8s-simple-app -> harbor.internal.com/mygroup/test-cavel@sha256:399b0f724d117b6525cc7857d92d1d5e11cac88b14112ed16265b3e2ac30a22c
--- update deployment/simple-app (apps/v1) namespace: default
...
12, 12 Type: git
13 - URL: kbld:docker-io-dkalinin-k8s-simple-app-sha256-c8157a2b346ad40e711a2eeb1c666761a2aa7ade4949a130b197fde417e3bf3b
13 + URL: harbor.internal.com/mygroup/test-cavel@sha256:399b0f724d117b6525cc7857d92d1d5e11cac88b14112ed16265b3e2ac30a22c
14, 14 creationTimestamp: "2020-09-30T04:19:09Z"
15, 15 generation: 18
...
39, 39 value: k14s user
40 - image: kbld:docker-io-dkalinin-k8s-simple-app-sha256-c8157a2b346ad40e711a2eeb1c666761a2aa7ade4949a130b197fde417e3bf3b
40 + image: harbor.internal.com/mygroup/test-cavel@sha256:399b0f724d117b6525cc7857d92d1d5e11cac88b14112ed16265b3e2ac30a22c
41, 41 name: simple-app
42, 42 status:
Changes
Namespace Name Kind Conds. Age Op Wait to Rs Ri
default simple-app Deployment 1/2 t 23h update reconcile fail Deployment is not progressing:
ProgressDeadlineExceeded (message:
ReplicaSet "simple-app-7445899497"
has timed out progressing.)
Op: 0 create, 0 delete, 1 update, 0 noop
Wait to: 1 reconcile, 0 delete, 0 noop
12:17:37AM: ---- applying 1 changes [0/1 done] ----
12:17:37AM: update deployment/simple-app (apps/v1) namespace: default
12:17:37AM: ---- waiting on 1 changes [0/1 done] ----
12:17:37AM: ongoing: reconcile deployment/simple-app (apps/v1) namespace: default
12:17:37AM: ^ Waiting for generation 20 to be observed
12:17:37AM: L ok: waiting on replicaset/simple-app-857fcccfb9 (apps/v1) namespace: default
12:17:37AM: L ok: waiting on replicaset/simple-app-84fc496fd5 (apps/v1) namespace: default
12:17:37AM: L ok: waiting on replicaset/simple-app-7bfb9ffcd (apps/v1) namespace: default
12:17:37AM: L ok: waiting on replicaset/simple-app-7b8b7d7d86 (apps/v1) namespace: default
12:17:37AM: L ok: waiting on replicaset/simple-app-7445899497 (apps/v1) namespace: default
12:17:37AM: L ok: waiting on replicaset/simple-app-66489544f9 (apps/v1) namespace: default
12:17:37AM: L ok: waiting on replicaset/simple-app-55566d7464 (apps/v1) namespace: default
12:17:37AM: L ok: waiting on pod/simple-app-7b8b7d7d86-wbsh6 (v1) namespace: default
12:17:37AM: L ongoing: waiting on pod/simple-app-7445899497-9zzrn (v1) namespace: default
12:17:37AM: ^ Pending: ImagePullBackOff (message: Back-off pulling image "kbld:docker-io-dkalinin-k8s-simple-app-sha256-c8157a2b346ad40e711a2eeb1c666761a2aa7ade4949a130b197fde417e3bf3b")
12:17:38AM: ongoing: reconcile deployment/simple-app (apps/v1) namespace: default
12:17:38AM: ^ Waiting for 1 unavailable replicas
12:17:38AM: L ok: waiting on replicaset/simple-app-857fcccfb9 (apps/v1) namespace: default
12:17:38AM: L ok: waiting on replicaset/simple-app-84fc496fd5 (apps/v1) namespace: default
12:17:38AM: L ok: waiting on replicaset/simple-app-7bfb9ffcd (apps/v1) namespace: default
12:17:38AM: L ok: waiting on replicaset/simple-app-7b8b7d7d86 (apps/v1) namespace: default
12:17:38AM: L ok: waiting on replicaset/simple-app-7445899497 (apps/v1) namespace: default
12:17:38AM: L ok: waiting on replicaset/simple-app-66489544f9 (apps/v1) namespace: default
12:17:38AM: L ok: waiting on replicaset/simple-app-55566d7464 (apps/v1) namespace: default
12:17:38AM: L ongoing: waiting on pod/simple-app-857fcccfb9-tct2p (v1) namespace: default
12:17:38AM: ^ Pending: ContainerCreating
12:17:38AM: L ok: waiting on pod/simple-app-7b8b7d7d86-wbsh6 (v1) namespace: default
12:17:38AM: L ongoing: waiting on pod/simple-app-7445899497-9zzrn (v1) namespace: default
12:17:38AM: ^ Deleting
12:17:41AM: ongoing: reconcile deployment/simple-app (apps/v1) namespace: default
12:17:41AM: ^ Waiting for 1 unavailable replicas
12:17:41AM: L ok: waiting on replicaset/simple-app-857fcccfb9 (apps/v1) namespace: default
12:17:41AM: L ok: waiting on replicaset/simple-app-84fc496fd5 (apps/v1) namespace: default
12:17:41AM: L ok: waiting on replicaset/simple-app-7bfb9ffcd (apps/v1) namespace: default
12:17:41AM: L ok: waiting on replicaset/simple-app-7b8b7d7d86 (apps/v1) namespace: default
12:17:41AM: L ok: waiting on replicaset/simple-app-7445899497 (apps/v1) namespace: default
12:17:41AM: L ok: waiting on replicaset/simple-app-66489544f9 (apps/v1) namespace: default
12:17:41AM: L ok: waiting on replicaset/simple-app-55566d7464 (apps/v1) namespace: default
12:17:41AM: L ongoing: waiting on pod/simple-app-857fcccfb9-tct2p (v1) namespace: default
12:17:41AM: ^ Pending: ContainerCreating
12:17:41AM: L ok: waiting on pod/simple-app-7b8b7d7d86-wbsh6 (v1) namespace: default
12:17:44AM: ok: reconcile deployment/simple-app (apps/v1) namespace: default
12:17:44AM: ---- applying complete [1/1 done] ----
12:17:44AM: ---- waiting complete [1/1 done] ----
Succeeded
i.e.
image: some/image@sha256:12312312...
gets processed by kbld and resolves to itself. This also leaves an unnecessary annotation on some objects, i.e.:
kbld.k14s.io/images: |
- Metas: null
URL: some/image@sha256:12312312...
When you package, I wonder why this does not do a repo-name-to-repo-name mapping. Instead, it pushes everything into the same repo.
https://github.com/k14s/kbld/blob/develop/docs/packaging.md
"Images will be imported under a single new repository docker.io/dkalinin/app1. You are guaranteed that images are exactly the same as they are referenced by the same digests in produced YAML configuration (though under a different repository name)."
It would be nice if the build Source had an option for target. The use case is multi-stage builds where you might have separate development vs production targets.
https://docs.docker.com/engine/reference/commandline/build/#specifying-target-build-stage---target
I imagine it might look something like:
---
kind: Object
spec:
- image: myimg
---
apiVersion: kbld.k14s.io/v1alpha1
kind: Sources
sources:
- image: myimg
path: .
target: production
Related to #6.
We should move from the ghodss yaml package to the k8s yaml package, which is a more maintained fork.
Described in https://skaffold.dev/docs/pipeline-stages/taggers/
I ran into the following issues:
$ kbld version
kbld version 0.24.0
Succeeded
v0.23.0+ supports environment variables, and it looks like it honors those if they are present. Our suspicion is that kbld is not honoring the credentials across redirects.
kbld: Error: Importing image gcr.io/cf-networking-images/cf-k8s-networking/routecontroller@sha256:ed4b3e351a31313ebf974439e4fb43210281a02f0a9125cb8ea880c572385b5f:
Importing image as example.com/pull-requests/tas-for-kubernetes@sha256:ed4b3e351a31313ebf974439e4fb43210281a02f0a9125cb8ea880c572385b5f:
Writing image:
Retried 5 times:
HEAD https://example.com/v2/pull-requests/tas-for-kubernetes/blobs/sha256:2278c072850b5981cc98736548509e528ee6fca05bda90d75119ee953c8facf9: unsupported status code 401
Note: Edited internal host for privacy.
Extend kbld to support a semver lock on arbitrary images. Basically, when an image is being resolved it would be constrained to some range of tags (e.g. nginx:1.3.x) and the latest tag within that range is picked.
This is very useful if you have to deal with dynamic environments. Instead of manually maintaining and communicating non-breaking changes, we could automate it.
apiVersion: kbld.k14s.io/v1alpha1
kind: ImageOverrides
overrides:
- image: team-B-order-service
semverLock: 1.14.x
Related: https://docs.fluxcd.io/en/latest/tutorials/driving-flux/
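A minimal sketch of how such a semverLock could pick a tag (illustrative only; kbld has no such feature today, and a real implementation would use a proper semver library rather than this regex translation):

```python
import re

def pick_semver_locked(tags, lock):
    """Pick the newest tag matching a wildcard lock such as '1.14.x'."""
    # Turn '1.14.x' into a regex, capturing each wildcard position.
    pattern = re.compile(
        "^" + lock.replace(".", r"\.").replace("x", r"(\d+)") + "$"
    )
    candidates = []
    for tag in tags:
        m = pattern.match(tag)
        if m:
            # Compare wildcard positions numerically, not lexically.
            candidates.append((tuple(int(g) for g in m.groups()), tag))
    if not candidates:
        raise ValueError(f"no tag satisfies lock {lock!r}")
    return max(candidates)[1]
```

For example, `pick_semver_locked(["1.13.9", "1.14.0", "1.14.2", "1.15.1"], "1.14.x")` picks `"1.14.2"`.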
github.com/k14s/imgpkg is a builder that can create images from a set of files. Let's have that as a backend. There probably need to be two options: regular or bundle.
For bundles, I think we should have an option to run kbld to update images.yml.
Imagine following use case:
helm template my-app ./umbrella-chart | kbld -f - -f umbrella-chart/kbld-sources.yaml --lock-output .umbrella-state/kbld.lock.yml --registry-verify-certs=false > ./.umbrella-state/state.yaml
An error from kbld will clear the file ./.umbrella-state/state.yaml (the shell truncates it before kbld runs). It's not possible to handle this case in an OS-independent way. Providing a flag to write the output to a file would be handy:
kbld -f - -f umbrella-chart/kbld-sources.yaml -o .umbrella-state/state.yaml
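The failure mode can be reproduced with plain shell redirection (nothing kbld-specific; `false` stands in for a failing kbld run):

```shell
echo "previous good state" > state.yaml
# Redirection truncates state.yaml before the command even runs,
# so a failing command still clobbers the file:
false > state.yaml || echo "kbld failed"
wc -c < state.yaml   # 0 bytes: the previous state is gone
```

An `-o` flag would let kbld write the file only after a successful run.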
The note at https://github.com/k14s/kbld/blob/ddd2561de7d0a07eaee903767adc30a23c4fb1b8/docs/packaging.md#using-with-aws-ecr might soon be out of date.
According to aws/containers-roadmap#505 manifest support is rolling out with docs being updated: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-multi-architecture-image.html
Since kbld resolves image refs to digests after building them, it's possible that there is no way for a user to see which images are used within a registry UI. It's a minor detail, but probably still nice to add.
As a side benefit, registries probably do not garbage collect images referenced with tags.
Hi, the vendor dir isn't in sync. Do we use vendoring
in this project?
Howdy! I am following the example found on https://get-kbld.io/, deploying a simple nginx server to a new GKE cluster. I've used the following command with the YAML provided in the example:
$ kbld -f k8s/ | kubectl apply -f -
resolve | final: nginx:1.7.9 -> index.docker.io/library/nginx@sha256:03bf9c90c36067a6d328b184f6c6068766fa5c60681adcced2509ae85c14b983
deployment.apps/nginx-deployment created
And I'm receiving the following error on the nginx pods:
Warning Failed 3m2s (x4 over 4m33s) kubelet, gke-johns-test-default-pool-9c3cb85a-bzrj Failed to pull image "index.docker.io/library/nginx@sha256:03bf9c90c36067a6d328b184f6c6068766fa5c60681adcced2509ae85c14b983": rpc error: code = Unknown desc = Error response from daemon: manifest for nginx@sha256:03bf9c90c36067a6d328b184f6c6068766fa5c60681adcced2509ae85c14b983 not found
When attempting to pull the nginx image locally:
$ docker pull nginx:1.7.9
1.7.9: Pulling from library/nginx
Image docker.io/library/nginx:1.7.9 uses outdated schema1 manifest format. Please upgrade to a schema2 image for better future compatibility. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
a3ed95caeb02: Pull complete
6f5424ebd796: Pull complete
d15444df170a: Pull complete
e83f073daa67: Pull complete
a4d93e421023: Pull complete
084adbca2647: Pull complete
c9cec474c523: Pull complete
Digest: sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
Status: Downloaded newer image for nginx:1.7.9
docker.io/library/nginx:1.7.9
Could this be related to the old schema1 manifest not working with k8s? I also note that the digest I get from the local docker pull is different from the digest kbld resolves.
Changing the image line in the YAML to
image: nginx:latest
is successful.
Let me know if there are questions or if I can provide more information!
kbld: v0.13.0
Kubernetes:
Client - v1.15.3
Server - v1.14.10-gke.17
Docker:
macOS desktop - 2.2.0.3
Engine - 19.03.5
Hi team,
I'm exploring this tool to improve our existing CD pipelines and found an issue while trying to set up a POC.
The problem is that even though kbld finishes successfully and the output manifest contains the image referenced by a digest (metadata is added, etc.), the digest used doesn't exist.
Even worse, trying with an incorrect tag gives the same result: successful completion but an incorrect digest.
Any ideas?
About my environment:
Harbor as registry and kbld v 0.21.0.
My current application has multiple Dockerfiles that are housed in a sub-directory (such as docker/images). It would be nice if there was a way to specify the Dockerfile separately from the build context.
Maybe like:
---
kind: Object
spec:
- image: myimg
---
apiVersion: kbld.k14s.io/v1alpha1
kind: Sources
sources:
- image: myimg
path: .
dockerfile: sandbox/containers/fake.docker
I have the images already built by the CI server, but due to technical reasons the production environment does not see the registry where it is.
kbld can both pull (when packaging) and push (after building) images. But can I use it to just push the images to the target registry without processing the manifest multiple times and creating the intermediate tar? The server running the deployment does see both registries, so I don't really need a package.
I tried using just ImageDestinations with no sources and setting imageRepo to the name without a tag, but it didn't seem to work.
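For reference, this is roughly the shape I attempted (based on kbld's documented ImageDestinations kind; the image names and registry are illustrative):

```
apiVersion: kbld.k14s.io/v1alpha1
kind: ImageDestinations
destinations:
- image: my-app                        # image as it appears in the manifest
  newImage: internal.registry/my-app   # push target, no tag
```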
https://github.com/deislabs/cnab-spec/blob/master/103-bundle-runtime.md#image-relocation
Example:
/cnab/app/relocation-mapping.json:
{
"gabrtv/microservice@sha256:cca460afa270d4c527981ef9ca4989346c56cf9b20217dcea37df1ece8120687": "my.registry/microservice@sha256:cca460afa270d4c527981ef9ca4989346c56cf9b20217dcea37df1ece8120687",
"technosophos/helloworld:0.1.0": "my.registry/helloworld:0.1.0"
}
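Consuming that mapping file is straightforward; a minimal sketch in Python (only one entry from the example is kept for brevity, and real CNAB tooling also normalizes references, e.g. implicit docker.io/library/ prefixes, before lookup):

```python
import json

# One entry from the relocation-mapping.json example above.
mapping = json.loads("""{
  "technosophos/helloworld:0.1.0": "my.registry/helloworld:0.1.0"
}""")

def relocate(image_ref: str, mapping: dict) -> str:
    """Return the relocated reference, or the original if unmapped."""
    return mapping.get(image_ref, image_ref)
```

For example, `relocate("technosophos/helloworld:0.1.0", mapping)` returns the my.registry reference, while an unmapped ref passes through unchanged.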
For example, include the git URL, git SHA (+ dirty status), and maybe the git tag (if present)?
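Hypothetically, the existing kbld.k14s.io/images annotation could carry that metadata; the field names and values below are illustrative, not an actual kbld schema:

```
kbld.k14s.io/images: |
  - URL: index.docker.io/me/app@sha256:4c2fa1f46d9a4e85a1dbbb1e30dd129b8f4b4efc
    Metas:
    - Type: git
      RemoteURL: git@github.com:me/app.git
      SHA: 4c2fa1f46d9a4e85a1dbbb1e30dd129b8f4b4efc
      Dirty: true
      Tags: ["v1.2.0"]
```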