builder's Introduction

builder

The image run by build pods to execute image building+pushing

Looking for contribution guidelines?

Glad to have you here!
Please check CONTRIBUTING.md for detailed steps to develop & test the changes.

builder's People

Contributors

adambkaplan, apoorvajagtap, bparees, coreydaley, csrwng, deads2k, dependabot[bot], gabemontero, guangxuli, jhadvig, jkhelil, jupierce, kargakis, liggitt, mfojtik, mtrmac, nak3, nalind, openshift-bot, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, otaviof, rhcarvalho, ricardomaraschini, smarterclayton, soltysh, stevekuznetsov, vbehar, wanghaoran1988


builder's Issues

Can't pull from registry.redhat.io in builder pod

I have a pull secret configured at cluster creation for pulling images from registry.redhat.io. That works fine for running pods, but when I start a build from an image hosted on registry.redhat.io, the builder pod is unable to pull it:

[nferraro@localhost camel-k]$ oc logs camel-k-kit-bpv2ejjrmvh2e42n94pg-1-build
Caching blobs under "/var/cache/blobs".

Pulling image registry.redhat.io/openjdk/openjdk-11-rhel8:1.2 ...
Warning: Pull failed, retrying in 5s ...
Warning: Pull failed, retrying in 5s ...
Warning: Pull failed, retrying in 5s ...
error: build error: failed to pull image: After retrying 2 times, Pull image still failed due to error: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication

Shouldn't pull auth tokens be propagated to builders automatically?

Using OpenShift 4.3.5

retry image pulls

We should retry image pulls in the same way that we retry image pushes, so that builds do not fail on network flakes.

missing /var/run/secrets/kubernetes.io in builder pod

Hi Folks,
While trying out OpenShift BuildConfigs we have run into an oddball issue. When we specify a user (USER ####) in the inline Dockerfile strategy for running the assemble script, we find that the pod produced by the BuildConfig has nothing at /var/run/secrets/kubernetes.io (the directory is missing), and the permissions on /var/run/secrets/rhsm are "drwx------ root root". The pod spec does have a volume mount with the mountPath /var/run/secrets/kubernetes.io/serviceaccount.

automountServiceAccountToken is not configured. Any ideas on why we may be in this state? I was expecting the pod to have /var/run/secrets/kubernetes.io/<> available to USER ####. Thanks for any pointers.

The vulnerability CVE-2021-3344 has been fixed, but no specific tag denotes the patched version.

Hello, we are a team researching the dependency management mechanism of Golang. During our analysis we came across your project and noticed that you have fixed a vulnerability (Snyk reference; CVE: CVE-2021-3344; CWE: CWE-522; fix commit: d9d9f89). However, we observed that you have not tagged the fixing commit or any of its subsequent commits. As a result, users are unable to obtain the patched version through the Go tool 'go list'.

We kindly request your assistance in addressing this issue. Tagging the fixing commit or its subsequent commits will greatly benefit users who rely on your project and are seeking the patched version to address the vulnerability.

We greatly appreciate your attention to this matter and collaboration in resolving it. Thank you for your time and for your valuable contributions to our research.

Docker builds are failing if HEALTHCHECK has CMD with exec array format.

Create a BuildConfig with the config below and make sure the Dockerfile contains a HEALTHCHECK whose CMD uses the exec array format (HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]).

$ cat buildvault
kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: vaultwarden-build
  namespace: vaultwarden
  labels:
    name: vaultwarden-build
spec:
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: 'vaultwarden:latest'
  resources: {}
  successfulBuildsHistoryLimit: 5
  failedBuildsHistoryLimit: 5
  strategy:
    type: Docker
    dockerStrategy: {}
  postCommit: {}
  source:
    type: Git
    git:
      uri: 'https://github.com/dani-garcia/vaultwarden.git'
  triggers:
    - type: ConfigChange
  runPolicy: Serial
Trigger the build:
--> fc34d50da62
[3/3] STEP 4/15: VOLUME /data
--> 54691e1261a
[3/3] STEP 5/15: EXPOSE 80
--> 34a62581ba7
[3/3] STEP 6/15: EXPOSE 3012
--> 86ff5bb9597
[3/3] STEP 7/15: WORKDIR /
--> bed03445f77
[3/3] STEP 8/15: COPY --from=vault /web-vault ./web-vault
--> 0958dd943a3
[3/3] STEP 9/15: COPY --from=build /app/target/release/vaultwarden .
--> e5f562e9da6
[3/3] STEP 10/15: COPY docker/healthcheck.sh /healthcheck.sh
--> c988d42f8c5
[3/3] STEP 11/15: COPY docker/start.sh /start.sh
--> 32477e7a6c5
[3/3] STEP 12/15: HEALTHCHECK --interval=60s --timeout=10s ["CMD","/healthcheck.sh"]
error: build error: error building at STEP "HEALTHCHECK --interval=60s --timeout=10s ["CMD","/healthcheck.sh"]": Unknown type "[\"CMD\",\"/HEALTHCHECK.SH\"]" in HEALTHCHECK (try CMD)

From code:

The command after the CMD keyword can be either a shell command (e.g. HEALTHCHECK CMD /bin/check-running) or an exec array (as with other Dockerfile commands; see e.g. ENTRYPOINT for details).

In our case we are setting it as an array:

 HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

Below is seen in openshift-docker-build debug log:

Command:healthcheck Args:[["CMD","echo hello"]] Flags:[--interval=60s --timeout=10s] Attrs:map[] Message:HEALTHCHECK --interval=60s --timeout=10s ["CMD","echo hello"] Original:HEALTHCHECK --interval=60s --timeout=10s ["CMD","echo hello"]} 

Instead of splitting the array into CMD and the remainder, it is taken as one string (which becomes our first element, args[0]).

From builder code:

less vendor/github.com/openshift/imagebuilder/dispatchers.go

// Set the default healthcheck command to run in the container (which may be empty).
// Argument handling is the same as RUN.
//
func healthcheck(b *Builder, args []string, attributes map[string]bool, flagArgs []string, original string) error {
        if len(args) == 0 {
                return errAtLeastOneArgument("HEALTHCHECK")
        }

        typ := strings.ToUpper(args[0]) <---- 

        args = args[1:]

Here args[0] will be the entire ["CMD","/healthcheck.sh"] string, since the JSON array was kept as a single element of the list.
Due to strings.ToUpper the string is then converted to upper case: [\"CMD\",\"/HEALTHCHECK.SH\"]

Later we have a switch case where the detection for CMD fails, because typ is [\"CMD\",\"/HEALTHCHECK.SH\"] and not CMD.

                switch typ {
                case "CMD":
                        cmdSlice := handleJSONArgs(args, attributes)
                        if len(cmdSlice) == 0 {
                                return fmt.Errorf("Missing command after HEALTHCHECK CMD")
                        }

                        if !attributes["json"] {
                                typ = "CMD-SHELL"
                        }

                        healthcheck.Test = strslice.StrSlice(append([]string{typ}, cmdSlice...))
                default:
                        return fmt.Errorf("Unknown type %#v in HEALTHCHECK (try CMD)", typ) <----
                }

Possible solutions:

One fix would be to detect that args[0] holds more than one entry, meaning the JSON array was not split:

if len(args[0]) != 1 <----

and then split it: CMD for typ, and the rest as the arguments.

Alternatively, we can fix this earlier, in the parseHealthConfig() function:

# less vendor/github.com/openshift/imagebuilder/dockerfile/parser/line_parsers.go
// The HEALTHCHECK command is like parseMaybeJSON, but has an extra type argument.
func parseHealthConfig(rest string, d *Directive) (*Node, map[string]bool, error) {
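The first option above can be sketched as follows; this is a hypothetical helper, not the actual imagebuilder code, and it assumes the JSON-array form arrives as a single element:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// splitHealthcheckType is a sketch of the suggested fix: if the
// parser handed us the whole exec array as a single element
// (e.g. `["CMD","/healthcheck.sh"]`), decode it first, then take
// the keyword for typ and keep the rest as the command.
func splitHealthcheckType(args []string) (typ string, rest []string) {
	if len(args) == 1 && strings.HasPrefix(strings.TrimSpace(args[0]), "[") {
		var parsed []string
		if err := json.Unmarshal([]byte(args[0]), &parsed); err == nil && len(parsed) > 0 {
			args = parsed
		}
	}
	return strings.ToUpper(args[0]), args[1:]
}

func main() {
	typ, rest := splitHealthcheckType([]string{`["CMD","/healthcheck.sh"]`})
	fmt.Println(typ, rest) // CMD [/healthcheck.sh]
}
```

With this shape, both the shell form (HEALTHCHECK CMD /bin/check-running) and the exec-array form yield typ == "CMD", so the existing switch statement matches.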

Support for private submodules with different secrets

Hello,

I am trying to build a Dockerfile app from a private GitHub repo which includes various git submodules, some of which are private.

My .gitmodules:

[submodule "sub-repo"]
	path = src/sub-repo
	url = ssh://[email protected]/myorg/sub-repo.git
	branch = master

I created 2 secrets:

kind: Secret
apiVersion: v1
metadata:
  name: github-main-repo
  namespace: myapp
  selfLink: /api/v1/namespaces/myapp/secrets/github-main-repo
  annotations:
    build.openshift.io/source-secret-match-uri-1: 'ssh://[email protected]/myorg/main-repo*'
type: kubernetes.io/ssh-auth
kind: Secret
apiVersion: v1
metadata:
  name: github-sub-repo
  namespace: myapp
  selfLink: /api/v1/namespaces/myapp/secrets/github-sub-repo
  annotations:
    build.openshift.io/source-secret-match-uri-1: 'ssh://[email protected]/myorg/sub-repo*'
type: kubernetes.io/ssh-auth

I linked them to the builder service account and added them to my build config:

    sourceSecret:
      name: github-main-repo
    secrets:
      - secret:
          name: github-main-repo
      - secret:
          name: github-sub-repo

I can clone the main repo with the secret in sourceSecret, but it then fails at cloning the submodule.

Is this a supported scenario? Is it something that could be supported? Otherwise, what are my options?

cpu limits should be detectable during s2i build

During the s2i build, the cpu limit doesn't ripple down to the build process.

.NET, for example, uses the cpu limit to determine how many things it should build in parallel.
When there is no cpu limit, .NET falls back to use the number of physical cores on the machine.

When the build machine has many cores (e.g. 64 or more), the amount of parallelism may be completely inappropriate. This causes the build to stall, consume massive amounts of memory, and finally crash.

.NET determines the cpu limit by dividing cfs_quota_us by cfs_period_us and rounding it up.
In the s2i build container, cfs_quota_us is -1.

cc @bparees @nalind @coreydaley

Push secret not found if pull secret is not present

If a build config has a push secret but does not have a pull secret, the push secret is not found. The builder looks for a config.json file, which is not created by oc create secret docker-registry ...

Steps to reproduce

  1. Create a docker-registry secret: oc create secret docker-registry ${secret} --docker-email=${email} --docker-username=${username} --docker-password=${password} --docker-server=${registry}
  2. Link the secret to the builder service account: oc secret link builder ${secret}
  3. Create a build config that pushes an image to a repo that requires registry auth (ex: Docker Hub) with the above secret, but does not use a pull secret.

Expected result
Push secret is found and used to push image upon successful build.

Actual result
Failure to find the push secret:

ERROR: logging before flag.Parse: I1022 17:49:23.993670       1 sti.go:393] Locating docker auth for image docker.io/adambkaplan/ruby-ex:latest and type PUSH_DOCKERCFG_PATH
ERROR: logging before flag.Parse: I1022 17:49:23.993696       1 sti.go:393] Getting docker auth in paths : [/var/run/secrets/openshift.io/push]
ERROR: logging before flag.Parse: I1022 17:49:23.993705       1 config.go:131] looking for config.json at /var/run/secrets/openshift.io/push/config.json
ERROR: logging before flag.Parse: I1022 17:49:23.993879       1 builder.go:300] No push secret provided

Pushing image docker.io/adambkaplan/ruby-ex:latest ...
Pushing image "docker.io/adambkaplan/ruby-ex:latest" from local storage.
No authentication secret provided for pushing to registry.
Registry "docker.io" is marked as secure in the registries configuration.
Getting image source signatures
Copying blob sha256:1d31b5806ba40b5f67bde96f18a181668348934a44c9253b420d5f04cfb4e37a

 0 B / 198.64 MiB 
 8 B / 198.64 MiB  0s
ERROR: logging before flag.Parse: F1022 17:49:24.411146       1 helpers.go:119] error: build error: Failed to push image: error copying layers and metadata from "containers-storage:[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.skip_mount_home=false,overlay.mountopt=nodev,overlay.override_kernel_check=false]docker.io/adambkaplan/ruby-ex:latest" to "docker://adambkaplan/ruby-ex:latest": Error writing blob: Error initiating layer upload to /v2/adambkaplan/ruby-ex/blobs/uploads/ in registry-1.docker.io: errors:
denied: requested access to the resource is denied
unauthorized: authentication required

Entitled builds broken

Commit 7901cb3 breaks entitled builds, because entitlement certificates are no longer passed through to the buildah process.

Host entitlements are linked in /usr/share/rhel/secrets on the build host, which is mounted as /run/secrets (defined in /usr/share/containers/mounts.conf) in the build container. With the commit above, only the rhsm portion is copied to the buildah process; the entitlement certificates are not, which results in failed entitled builds.

Panic in Build

Hi, I am deploying the following BuildConfig:

kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: "test"
  namespace: "test"
spec:
  runPolicy: "Serial"
  source:
    git:
      uri: "<REPO_URI>"
      ref: "master"
    contextDir: "build"
  strategy:
    dockerStrategy:
      from:
        kind: "DockerImage"
        name: "debian:latest"
    type: "Docker"
  output:
    to:
      kind: "ImageStreamTag"
      name: "test:latest"

And when I try to run the build, I get a Go panic (the image doesn't even start building):

Cloning "<REPO_URI>" ...
	Commit:	<commit> (<commit_msg>)
	Author:	<author> <email>
	Date:	Wed Feb 5 13:43:23 2020 +0000
Replaced Dockerfile FROM image base
panic: assignment to entry in nil map

goroutine 1 [running]:
github.com/openshift/builder/vendor/github.com/openshift/imagebuilder.arg(0xc000afe000, 0xc00031d180, 0x1, 0x1, 0x0, 0x2b0e858, 0x0, 0x0, 0xc000ad5080, 0x22, ...)
	/go/src/github.com/openshift/builder/vendor/github.com/openshift/imagebuilder/dispatchers.go:557 +0x179
github.com/openshift/builder/vendor/github.com/openshift/imagebuilder.(*Builder).Run(0xc000afe000, 0xc0009fa080, 0x1ab21a0, 0x2b0e858, 0x0, 0xc000a82c00, 0x10)
	/go/src/github.com/openshift/builder/vendor/github.com/openshift/imagebuilder/builder.go:323 +0x106
github.com/openshift/builder/vendor/github.com/openshift/imagebuilder.(*Builder).extractHeadingArgsFromNode(0xc000afe000, 0xc000cbf3b0, 0xc000d071e0, 0x122d1df)
	/go/src/github.com/openshift/builder/vendor/github.com/openshift/imagebuilder/builder.go:242 +0x32a
github.com/openshift/builder/vendor/github.com/openshift/imagebuilder.NewStages(0xc000cbf3b0, 0xc000afe000, 0x70, 0x70, 0x1799da0, 0x0, 0x0)
	/go/src/github.com/openshift/builder/vendor/github.com/openshift/imagebuilder/builder.go:201 +0x4d
github.com/openshift/builder/pkg/build/builder.replaceImagesFromSource(0xc000cbf3b0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/openshift/builder/pkg/build/builder/common.go:451 +0x1a6
github.com/openshift/builder/pkg/build/builder.addBuildParameters(0xc000a545c0, 0x11, 0xc000cb8700, 0xc000cb43c0, 0x0, 0x0)
	/go/src/github.com/openshift/builder/pkg/build/builder/common.go:428 +0x337
github.com/openshift/builder/pkg/build/builder.ManageDockerfile(0xc000a545c0, 0x11, 0xc000cb8700, 0xc0002cd500, 0x0)
	/go/src/github.com/openshift/builder/pkg/build/builder/source.go:123 +0x2c5
github.com/openshift/builder/pkg/build/builder/cmd.RunManageDockerfile(0x1a8ae60, 0xc00000e020, 0x0, 0x0)
	/go/src/github.com/openshift/builder/pkg/build/builder/cmd/builder.go:411 +0xd9
main.NewCommandManageDockerfile.func1(0xc000a8a500, 0xc000cc20b0, 0x0, 0x1)
	/go/src/github.com/openshift/builder/cmd/builder.go:118 +0x43
github.com/openshift/builder/vendor/github.com/spf13/cobra.(*Command).execute(0xc000a8a500, 0xc00000c090, 0x1, 0x1, 0xc000a8a500, 0xc00000c090)
	/go/src/github.com/openshift/builder/vendor/github.com/spf13/cobra/command.go:833 +0x2cc
github.com/openshift/builder/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000a8a500, 0x2b, 0xc000047a6c, 0x30)
	/go/src/github.com/openshift/builder/vendor/github.com/spf13/cobra/command.go:917 +0x2f8
github.com/openshift/builder/vendor/github.com/spf13/cobra.(*Command).Execute(0xc000a8a500, 0x1b, 0xc000a8a500)
	/go/src/github.com/openshift/builder/vendor/github.com/spf13/cobra/command.go:867 +0x2b
main.main()
	/go/src/github.com/openshift/builder/cmd/main.go:58 +0x536

I am using CRC because I am developing an operator and need this build as part of the process. Maybe builds do not work in CRC and I need a real cluster? (I have had a lot of problems in the past spinning up OpenShift clusters, so I hope not.)

crc version: 1.4.0+d5bb3a3
OpenShift version: 4.2.13 (embedded in binary)
Docker version 19.03.2, build 6a30dfc

Any idea? Thanks in advance.

OpenShift 4 - builds don't work if a whitelist is defined in registrySources

I have OCP 4.1 RC installed:

$ oc4 get clusterversion
NAME      VERSION      AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0-rc.0   True        False         8d      Cluster version is 4.1.0-rc.0

And I have image policies implemented with a list of allowed registries as build sources:

apiVersion: config.openshift.io/v1
kind: Image
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"config.openshift.io/v1","kind":"Image","metadata":{"annotations":{},"name":"cluster","namespace":""},"spec":{"allowedRegistriesForImport":[{"domainName":"registry.redhat.io","insecure":false},{"domainName":"quay.io","insecure":false}],"registrySources":{"allowedRegistries":["registry.redhat.io","registry.access.redhat.com","quay.io"]}}}
    release.openshift.io/create-only: "true"
  name: cluster
spec:
  allowedRegistriesForImport:
  - domainName: registry.redhat.io
    insecure: false
  - domainName: quay.io
    insecure: false
  - domainName: registry.access.redhat.com
    insecure: false
  registrySources:
    allowedRegistries:
    - registry.redhat.io
    - registry.access.redhat.com
    - image-registry.openshift-image-registry.svc:5000
    - quay.io
status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000

All my builds (for example, the sample Ruby application build) fail with the same error:

error: build error: error copying layers and metadata for container "ceed796969e4947a471e4c866606d1fb5067055f7c0d8d9e3b174e3906fa37d7": Source image rejected: Running image containers-storage:ruby-working-container is rejected by policy.

Builder pod has this generated configuration:

{"default":[{"type":"reject"}],"transports":{"atomic":{"image-registry.openshift-image-registry.svc:5000":[{"type":"insecureAcceptAnything"}],"quay.io":[{"type":"insecureAcceptAnything"}],"registry.access.redhat.com":[{"type":"insecureAcceptAnything"}],"registry.redhat.io":[{"type":"insecureAcceptAnything"}]},"docker":{"image-registry.openshift-image-registry.svc:5000":[{"type":"insecureAcceptAnything"}],"quay.io":[{"type":"insecureAcceptAnything"}],"registry.access.redhat.com":[{"type":"insecureAcceptAnything"}],"registry.redhat.io":[{"type":"insecureAcceptAnything"}]}}}

with the default type - reject.
As I understand it, if the default is reject it also applies to all transport types, including "containers-storage" - and this can be the reason why my builds are failing.

If I tweak the policy.json file (in the builder pod, using oc debug) by adding containers-storage as a transport:

sh-4.2# cat policy.json 
{
        "default": [{
                "type": "reject"
        }],
        "transports": {
                "atomic": {
                        "image-registry.openshift-image-registry.svc:5000": [{
                                "type": "insecureAcceptAnything"
                        }],
                        "quay.io": [{
                                "type": "insecureAcceptAnything"
                        }],
                        "registry.access.redhat.com": [{
                                "type": "insecureAcceptAnything"
                        }],
                        "registry.redhat.io": [{
                                "type": "insecureAcceptAnything"
                        }]
                },
                "docker": {
                        "image-registry.openshift-image-registry.svc:5000": [{
                                "type": "insecureAcceptAnything"
                        }],
                        "quay.io": [{
                                "type": "insecureAcceptAnything"
                        }],
                        "registry.access.redhat.com": [{
                                "type": "insecureAcceptAnything"
                        }],
                        "registry.redhat.io": [{
                                "type": "insecureAcceptAnything"
                        }]
                },
                "containers-storage": {
                        "": [{
                                "type": "insecureAcceptAnything"
                        }]
                }
        }
}

the build then finishes successfully.

With a blacklist in the image policy configuration:

apiVersion: config.openshift.io/v1
kind: Image
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"config.openshift.io/v1","kind":"Image","metadata":{"annotations":{},"name":"cluster","namespace":""},"spec":{"allowedRegistriesForImport":[{"domainName":"registry.redhat.io","insecure":false},{"domainName":"quay.io","insecure":false}],"registrySources":{"allowedRegistries":["registry.redhat.io","registry.access.redhat.com","quay.io"]}}}
    release.openshift.io/create-only: "true"
  name: cluster
spec:
  allowedRegistriesForImport:
  - domainName: registry.redhat.io
    insecure: false
  - domainName: quay.io
    insecure: false
  - domainName: registry.access.redhat.com
    insecure: false
  registrySources:
    blockedRegistries:
    - docker.io

All builds complete successfully. With these settings, policy.json in the builder pod is:

{"default":[{"type":"insecureAcceptAnything"}],"transports":{"atomic":{"docker.io":[{"type":"reject"}]},"docker":{"docker.io":[{"type":"reject"}]}}}

with the default type insecureAcceptAnything - which allows container tools to use the "containers-storage" transport.

ports not handled properly in CA mapping

Since CAs for hostnames with a port have to be provided as configmap keys like "host..port" (a colon is not allowed in a configmap key), the logic that copies the CA to the final destination needs to replace the final ".." with a colon (assuming a final ".." is present).

Note that "foo.bar." is a valid hostname, which means "foo.bar...5000" (3 dots) is valid, so the logic needs to account for that and replace the correct pair of dots.
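A minimal sketch of that replacement (the helper name is hypothetical, not the builder's actual code): taking the LAST occurrence of ".." handles the trailing-dot case automatically, because the last ".." is always the encoded colon.

```go
package main

import (
	"fmt"
	"strings"
)

// caKeyToHostname reverses the configmap-key encoding described
// above: a hostname:port pair is stored with ".." in place of the
// colon. Replacing the last ".." keeps trailing-dot hostnames
// intact; in "foo.bar...5000" the last two dots are the colon.
func caKeyToHostname(key string) string {
	i := strings.LastIndex(key, "..")
	if i < 0 {
		return key // no port encoded
	}
	return key[:i] + ":" + key[i+2:]
}

func main() {
	fmt.Println(caKeyToHostname("myregistry..5000")) // myregistry:5000
	fmt.Println(caKeyToHostname("foo.bar...5000"))   // foo.bar.:5000
}
```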
