googlecloudplatform / cloud-builders

Builder images and examples commonly used for Google Cloud Build

Home Page: https://cloud.google.com/cloud-build/

License: Apache License 2.0

Shell 7.70% Go 84.20% JavaScript 0.33% Python 0.19% Java 1.05% C# 1.33% Dockerfile 2.99% Starlark 0.68% Slim 0.29% HTML 0.35% BitBake 0.89%
google-cloud-platform google-containers build docker google-cloud-build

cloud-builders's Introduction

Google Cloud Build official builder images

This repository contains source code for official builders used with the Google Cloud Build API.

Pre-built images are available at gcr.io/cloud-builders/... and include:

  • aactl: runs the aactl tool
  • bazel: runs the bazel tool
  • curl: runs the curl tool
  • docker: runs the docker tool
  • dotnet: runs the dotnet tool
  • gcloud: runs the gcloud tool
  • gcs-fetcher: efficiently fetches objects from Google Cloud Storage
  • git: runs the git tool
  • gke-deploy: deploys an application to a Kubernetes cluster, following Google's recommended best practices
  • go: runs the go tool
  • gradle: runs the gradle tool
  • gsutil: runs the gsutil tool
  • javac: runs the javac tool
  • kubectl: runs the kubectl tool
  • mvn: runs the maven tool
  • npm: runs the npm tool
  • twine: runs the twine tool
  • wget: runs the wget tool
  • yarn: runs the yarn tool

Builders contributed by the public are available in the Cloud Builders Community repo.

Each builder includes a cloudbuild.yaml that will push your images to Artifact Registry. To build with this default cloudbuild.yaml, you will need to first create an Artifact Registry repository with gcr.io domain support.

To file issues and feature requests against these builder images, create an issue in this repo. If you are experiencing an issue with the Cloud Build service or have a feature request, e-mail [email protected] or see our Getting support documentation.


Alternatives to official images

Most of the tools in this repo are also available in community-supported publicly available repositories. Such repos also generally support multiple versions and platforms, available by tag.

The following community-supported images are compatible with the hosted Cloud Build service and function well as build steps; note that some require that you specify an entrypoint for the image. Additional details about the alternatives to each official image are available in the README.md of the corresponding Cloud Builder.

Container Registry Deprecation

Google announced on May 15, 2023 that Container Registry is deprecated and superseded by Artifact Registry. The deprecation won't affect the use of official cloud builder images. Artifact Registry automatically redirects gcr.io requests for Container Registry hosts to corresponding Artifact Registry repositories.

Future Direction

You may have already noticed that most of the images in this repo now carry notices pointing to the alternative images above. For the hosted Cloud Build service, we are formulating plans for both improved support for existing cloud-builder images and documentation for alternative community-supported images that may be more appropriate for some users. Both this page and the related open issues will be updated with details soon.

cloud-builders's People

Contributors

abhinavrau, arvinddayal, austinzhao-go, bendory, chengyuanzhao, chitrangpatel, chrisge4, codrienne, dependabot[bot], eeertekin, giangn, gleeper, haroonc, ichaelm, imjasonh, ivannaranjo, joonlim, khalkie, kmontg, leeonlee, michaeledgar, mwiczer, nof20, palmerj3, philmod, ronanww, sanastos, skelterjohn, spencerc, squee1945


cloud-builders's Issues

Publish builders to eu.gcr.io

It would feel better if the builders were available in the European Union as well:

  • legally
  • bandwidth-wise

So this is a suggestion to publish the images on eu.gcr.io as well as gcr.io.

Please @-mention me if it's urgent that I reply to this issue.

.ignore list

I'm not sure if this is the place for a feature request for GCCB, so here we go.

If you have a build step like:

- name: 'gcr.io/cloud-builders/git'
  args:
    - clone
    - https://github.com/googleapis/googleapis
...

and you work on your local computer, you obviously have that second repo cloned in the same place; but if you submit the build job, it will always fail unless you first remove the directory from your local working directory.

I suggest an ignore-list file, similar to .gitignore, that would exclude matching patterns from the uploaded tarball.

Thanks
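
Since this issue was filed, gcloud has grown support for exactly this: a .gcloudignore file in the source directory, with .gitignore-style syntax, controls which files are excluded from the tarball uploaded to Cloud Build. A minimal sketch (the directory names below are placeholders echoing the example above):

```
# .gcloudignore -- patterns excluded from the source upload
.gcloudignore
.git
.gitignore
# the locally-cloned second repo from the example above
googleapis/
```

If I recall correctly, gcloud also understands a `#!include:.gitignore` directive so an existing .gitignore can be reused.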

Container Builder is ignoring env variables

Hey I really like the idea to have container builder available.

For some reason it doesn't work for us atm, though.

We have a cloudbuild.yaml like that:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-project', '.']
  env:
  - 'RAILS_ENV=staging'
  - 'SECRET_KEY_BASE=abc'
  - 'ASSETS_PROVIDER=Google'
  - 'GOOGLE_STORAGE_ACCESS_KEY=MY'
  - 'GOOGLE_STORAGE_SECRET_KEY=KEY'
images:
- 'gcr.io/$PROJECT_ID/my-project'
timeout: '1200s'

But for some reason it's not using the env variables.

[screenshot: container_registry_-_boxes]

Did I make a mistake, or is that a beta bug?

golang-project: WORKSPACE must be set

With a cloudbuild.yaml file like this:

steps:
  - name: 'gcr.io/cloud-builders/golang-project'
    args:
      - the/package/for/a/binary/abcdef
      - --tag=us.gcr.io/$PROJECT_ID/abcdef:$REVISION_ID
    env: ['GOPATH=./go']
images:
  - 'us.gcr.io/$PROJECT_ID/abcdef:$REVISION_ID'

I get this when the build runs:

Step #0: Already have image (with digest): gcr.io/cloud-builders/golang-project
Starting Step #0
Step #0: Documentation at https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/golang-project/README.md
Step #0: WORKSPACE must be set
Finished Step #0
ERROR
ERROR: build step "gcr.io/cloud-builders/golang-project@sha256:ad8294b30cd0b8d1ff35409c3f5e75314e6e3e67caea36fe08f9c04f2a8d6439" failed: exit status 1

On the history page on console.cloud.google.com/gcr/builds/<build-id> it shows:

Directory   /workspace/ 

Setting env: ['GOPATH=./go', 'WORKSPACE=/workspace/'] seems to fix the issue, but the documentation never mentions that I am expected to set WORKSPACE.
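
For reference, the workaround spelled out as a full config (everything here is taken from the report above, with only the extra WORKSPACE entry added):

```yaml
steps:
  - name: 'gcr.io/cloud-builders/golang-project'
    args:
      - the/package/for/a/binary/abcdef
      - --tag=us.gcr.io/$PROJECT_ID/abcdef:$REVISION_ID
    # WORKSPACE is undocumented but apparently required:
    env: ['GOPATH=./go', 'WORKSPACE=/workspace/']
images:
  - 'us.gcr.io/$PROJECT_ID/abcdef:$REVISION_ID'
```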

bazel builder is broken

When I run gcloud container builds submit --config=cloudbuild.yaml . with the bazel builder, it fails in Step #2 with this error (build label: 0.4.5):

Step #2: ____[22 / 32] GoCompile subdir/hello.a
Step #2: ERROR: /workspace/examples/subdir/BUILD:8:1: null failed: hello.a.GoCompileFile.params failed: error executing command 
Step #2:   (exec env - \
Step #2:     GOARCH=amd64 \
Step #2:     GOOS=linux \
Step #2:   bazel-out/local-fastbuild/bin/subdir/bazel-out/local-fastbuild/bin/subdir/hello.a.GoCompileFile.params)
Step #2: 
Step #2: Use --sandbox_debug to see verbose messages from the sandbox.
Step #2: open github.com/GoogleCloudPlatform/cloud-builders/bazel/examples/subdir/main.go: open github.com/GoogleCloudPlatform/cloud-builders/bazel/examples/subdir/main.go: permission denied
Step #2: Use --strategy=GoCompile=standalone to disable sandboxing for the failing actions.
Step #2: ____Building complete.
Step #2: Target //subdir:target failed to build
Step #2: ____Elapsed time: 29.041s, Critical Path: 0.20s
Step #2: ERROR: Build failed. Not running target.

How do I access a private Github repository within a cloud builder?

I have two private repositories on Github, A and B. Both are Golang projects. Project A depends on repository B. I have linked both repositories to Google Source Code and consented with permissions.

Now in Container builder, when I try to build for project A, it is not able to access repository B. In the build logs, I see:

Step #1: �[0;33m[WARN]  �[mUnable to checkout github.com/avi/api

Step #1: �[0;31m[ERROR] �[mUpdate failed for github.com/avi/api: Unable to get repository

The above happens when I try to run glide install:

steps:
- name: 'gcr.io/cloud-builders/glide'
  args: ['install', '.']

Later I thought maybe I could clone the repo first and make glide use the local repo, so I tried:

steps:
- name: 'gcr.io/cloud-builders/git'
  args: ['clone', '[email protected]:avinassh/api.git']

But it failed saying:

Step #0: Already have image (with digest): gcr.io/cloud-builders/git
Starting Step #0
Step #0: Cloning into 'ssh_clone'...
Step #0: Host key verification failed.
Step #0: fatal: Could not read from remote repository.
Step #0: 
Step #0: Please make sure you have the correct access rights
Step #0: and the repository exists.
Finished Step #0

and when I tried HTTPS instead of SSH, I got the following error:

Step #0: Cloning into 'api'...
Step #0: fatal: could not read Username for 'https://github.com': No such device or address

Permission error when using gcloud container images describe

I am trying to inspect an image and am getting an error:

$ gcloud container images describe gcr.io/my-project/my-image:latest

ERROR: (gcloud.container.images.describe) You do not have permission to access project [my-project] (or it may not exist): permission "containeranalysis.occurrences.list" denied for project "my-project": Service 'containeranalysis.googleapis.com' is not enabled for consumer 'project_number:12345'.

All other gcloud commands work. I am using an owner account, or so I think.

Fetch repositories with submodules

It is unclear how repositories with submodules can be built.

This is what the build currently does:

FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
... <fetches from the single specified repository> ...

It is not clear whether the cloud-builders/git container can somehow be used to fetch all of the repositories. Is the workspace cleared between build steps? What if they are executed in parallel?
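
If the submodules are publicly accessible, one approach (a sketch, not official guidance) is to initialize them in an explicit first step; /workspace persists across build steps, so later steps see the checked-out submodules:

```yaml
steps:
- name: 'gcr.io/cloud-builders/git'
  args: ['submodule', 'update', '--init', '--recursive']
# subsequent steps operate on the same /workspace, submodules included
```

Private submodules additionally need credentials, as discussed in the private-repository issue above.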

Is it possible to use this gcloud docker image for `gcloud app deploy` ?

I tried it and I get "Request had insufficient authentication scopes." It seems that when requesting scopes for the project's builder service account, we request the following scopes:

{"aliases":["default"],"email":"[email protected]","scopes":"https://www.googleapis.com/auth/logging.write\nhttps://www.googleapis.com/auth/projecthosting\nhttps://www.googleapis.com/auth/pubsub\nhttps://www.googleapis.com/auth/devstorage.read_write"}

However in order to deploy we need:

scope="https://www.googleapis.com/auth/appengine.admin https://www.googleapis.com/auth/cloud-platform.read-only https://www.googleapis.com/auth/cloud-platform"

Any suggestions on how I could get around this?

How to pass environment variables or output between build steps, especially standard ones?

Not quite sure if this is the correct repo to post this.

But when calling any of these or custom build steps, how can I pass either STDOUT or an environment variable to the next build step, or to the images array? We calculate the version tag for a build within a bash script, but I am not sure how to get it into, for example, the images property of the build file.

For other variables I guess I could do "printenv > somewhere.sh" and then execute ". somewhere.sh" in the next step. If the step is one of the standard ones here ("cp", "npm" or "docker"), things become more complicated.

Although it would be possible to write a new entrypoint that pulls in the variables and then runs the original entrypoint script, most entrypoints in these builders are referenced by full path rather than resolved through PATH, so this seems fragile.
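
Because /workspace persists between steps, one workaround (a sketch; the tag computation is a placeholder) is to write the computed value to a file and read it back via an entrypoint override in the standard builder:

```yaml
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'echo "1.0.$(date +%s)" > _TAG']   # placeholder version calculation
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker build -t "gcr.io/$PROJECT_ID/app:$(cat _TAG)" .']
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker push "gcr.io/$PROJECT_ID/app:$(cat _TAG)"']
```

The top-level images array cannot read such files, so the image is pushed explicitly in a step instead.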

Wrong default log directory

I am using a service account to perform builds. The command used is pretty standard:

gcloud container builds submit  --substitutions "_TAG=$TAG" --config cloudbuild.yaml .

The build was created but I had errors when the log was being read:

Creating temporary tarball archive of 663 file(s) totalling 4.3 MiB before compression.
Uploading tarball of [.] to [gs://xxxx_cloudbuild/source/xxx.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/xxx/builds/xxx].
Logs are available at [https://console.cloud.google.com/gcr/builds/xxx?project=xxx].
ERROR: (gcloud.container.builds.submit) HTTPError 403: <?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>Caller does not have storage.objects.get access to object xxx.cloudbuild-logs.googleusercontent.com/log-xxx.txt.</Details></Error>

I tried adding extra permissions to the SA but it didn't work.
Finally, after some time, I decided to manually add the log dir option

--gcs-log-dir "gs://<my_project_id>_cloudbuild/logs"

And finally the build worked.

From the help text:

--gcs-log-dir=GCS_LOG_DIR
        Directory in Google Cloud Storage to hold build logs. If the bucket
        does not exist, it will be created. If not set, gs://<project
        id>_cloudbuild/logs is used.

Apparently this is not correct.

My gcloud env:

Installed Components:
  core: [2017.06.09]
  pubsub-emulator: [2017.03.24]
  gcloud: []
  beta: [2017.03.24]
  gsutil: [4.26]
  bq: [2.0.24]
  alpha: [2017.03.24]

using a package manager you need to go get, e.g. glide

I've ended up writing the following, and it took me a few hours to figure out.

steps:

- name: 'gcr.io/cloud-builders/go'
  env: ['PROJECT_ROOT=github.com/errordeveloper/kubegen']
  args: ['get', 'github.com/Masterminds/glide']

- name: 'gcr.io/cloud-builders/golang-project'
  env: ['PROJECT_ROOT=github.com/errordeveloper/kubegen']
  args: ['github.com/Masterminds/glide', '--base-image=gcr.io/cloud-builders/golang-project', '--tag=builder-with-glide']

- name: 'builder-with-glide'
  env: ['PROJECT_ROOT=github.com/errordeveloper/kubegen']
  args: ['-c', 'source /builder/prepare_workspace.inc && prepare_workspace && cd ./gopath/src/github.com/errordeveloper/kubegen && glide up --strip-vendor']
  entrypoint: '/bin/sh'

- name: 'gcr.io/cloud-builders/golang-project'
  env: ['PROJECT_ROOT=github.com/errordeveloper/kubegen']
  args: ['github.com/errordeveloper/kubegen/cmd/kubegen', '--base-image=scratch', '--tag=gcr.io/$PROJECT_ID/kubegen']

images: ['gcr.io/$PROJECT_ID/kubegen']

I'm not entirely happy about how it looks/works right now, but maybe we could discuss some improvements and upstream this somehow, or at least document it?

Fetch npm package from private github or google repo.

npm allows you to fetch node_modules directly from a github private repo.

"dependencies": {
    "express": "4.14.0",
    "privatepackage": "git+https://github.com/myaccount/privatepackage.git"
  }

Is there a way in the builder to access github, or, if I mirror the repo at google:

    "privatepackage": "https://source.developers.google.com/p/$PROJECT_ID/r/privatepackage"

How would I pass credentials?
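
One workaround (a sketch; `_GITHUB_TOKEN` is a hypothetical substitution holding a personal access token, and passing secrets through substitutions has its own security trade-offs) is to rewrite GitHub URLs to include the token before npm install runs:

```yaml
steps:
- name: 'gcr.io/cloud-builders/npm'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # make git-over-HTTPS authenticated for the git-based npm dependency
    git config --global url."https://${_GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"
    npm install
```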

go: how to pull alpine-based image?

I see a Dockerfile.alpine for go however I can't see the image name documented anywhere so I cannot pull it. Is it available?

(I discovered that it's tagged at :alpine but let's ignore that for the moment)

Access GCP Services within Container

I get this error when running tests accessing any GCP services that require auth within a container. The Builder Service Account has Edit permissions to BigQuery & GCS.

Should this be possible?

Here's a lengthy stack trace for one of the errors:

Step #2: __________________________ ERROR at setup of test_get __________________________
Step #2: 
Step #2: project = '[--]', bucket_name = '[--]'
Step #2: blob_name_expanded = 'tests/test_650575/blob_415687', data = 'abcdef'
Step #2: 
Step #2: @pytest.fixture()
Step #2: def existing_blob_name(project, bucket_name, blob_name_expanded, data):
Step #2: > blob = gcs.get_client(project).get_bucket(bucket_name).blob(blob_name_expanded)
Step #2: 
Step #2: /src/sixty/tests/test_hooks/test_gcs.py:43: 
Step #2: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/storage/client.py:173: in get_bucket
Step #2: bucket.reload(client=self)
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/storage/_helpers.py:99: in reload
Step #2: _target_object=self)
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/_http.py:299: in api_request
Step #2: headers=headers, target_object=_target_object)
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/_http.py:193: in _make_request
Step #2: return self._do_request(method, url, headers, data, target_object)
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/_http.py:223: in _do_request
Step #2: body=data)
Step #2: /usr/local/lib/python2.7/dist-packages/google_auth_httplib2.py:187: in request
Step #2: self._request, method, uri, request_headers)
Step #2: /usr/local/lib/python2.7/dist-packages/google/auth/credentials.py:121: in before_request
Step #2: self.refresh(request)
Step #2: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
Step #2: 
Step #2: self = <google.auth.compute_engine.credentials.Credentials object at 0x7f1098b6ba90>
Step #2: request = <google_auth_httplib2.Request object at 0x7f1098b6b410>
Step #2: 
Step #2: def refresh(self, request):
Step #2: """Refresh the access token and scopes.
Step #2: 
Step #2: Args:
Step #2: request (google.auth.transport.Request): The object used to make
Step #2: HTTP requests.
Step #2: 
Step #2: Raises:
Step #2: google.auth.exceptions.RefreshError: If the Compute Engine metadata
Step #2: service can't be reached if if the instance has not
Step #2: credentials.
Step #2: """
Step #2: try:
Step #2: self._retrieve_info(request)
Step #2: self.token, self.expiry = _metadata.get_service_account_token(
Step #2: request,
Step #2: service_account=self._service_account_email)
Step #2: except exceptions.TransportError as exc:
Step #2: > raise exceptions.RefreshError(exc)
Step #2: E RefreshError: ('Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Status: 404 Response:\n<!DOCTYPE html>\n<html lang=en>\n <meta charset=utf-8>\n <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">\n <title>Error 404 (Not Found)!!1</title>\n <style>\n *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}\n </style>\n <a href=//www.google.com/><span id=logo aria-label=Google></span></a>\n <p><b>404.</b> <ins>That\xe2\x80\x99s an error.</ins>\n <p>The requested URL <code>/computeMetadata/v1/instance/service-accounts/default/?recursive=true</code> was not found on this server. <ins>That\xe2\x80\x99s all we know.</ins>\n', <google_auth_httplib2._Response object at 0x7f1098b6b9d0>)
Step #2: 
Step #2: /usr/local/lib/python2.7/dist-packages/google/auth/compute_engine/credentials.py:93: RefreshError

How do I run kubectl after image push?

It would seem that a CI/CD pipeline would require the ability to deploy an image into the gke cluster after it was built/tested/pushed.

Without this, Builder seems incomplete.
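
The kubectl builder in this repo covers this case: it wraps kubectl in a script that fetches cluster credentials first, configured via environment variables. A sketch (the deployment, zone, and cluster names are placeholders; the build's service account also needs a role such as Kubernetes Engine Developer on the cluster):

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my-app']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/my-app', 'my-app=gcr.io/$PROJECT_ID/my-app']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-f'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```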

go: selecting the version of Go to build with?

It would be great if there were some way to specify the Go release used to build a container. As far as I can tell, this is not controllable. This is not a critical issue for us at the moment, but at some point we will want to control when we upgrade between major releases. I don't have a good idea for how this should work; maybe a GO_VERSION variable would work, although you would then need to ship all the versions in your base image. Alternatively, maybe you need to replicate all the tags from the upstream golang Docker image?

gcloud: crcmod is not fast with gsutil rsync

When I use the gsutil rsync command in the gcr.io/cloud-builders/gcloud image, I get this warning:

Step #2: WARNING: You have requested checksumming but your crcmod installation isn't
Step #2: using the module's C extension, so checksumming will run very slowly. For help
Step #2: installing the extension, please see "gsutil help crcmod".

This is discussed in detail at https://cloud.google.com/storage/docs/gsutil/addlhelp/CRC32CandInstallingcrcmod

Impact

When I try to upload my static blog contents (150 MB) to a GCS bucket using gsutil -m rsync, it takes 60 seconds in Google Cloud Container Builder as opposed to 5 seconds on my laptop with a decent Internet connection.
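
Until the image ships a compiled crcmod, one workaround (a sketch, assuming the gcloud image is Debian-based with apt available; package names and the bucket are placeholders) is to build the C extension in the same step before syncing:

```yaml
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # compile the crcmod C extension so gsutil checksumming runs fast
    apt-get update -qq && apt-get install -y -qq gcc python3-dev python3-pip
    pip3 install -U --no-cache-dir crcmod
    gsutil -m rsync -r ./public gs://my-static-site-bucket
```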

Best place to file build trigger + source repo bugs?

I have set up build triggers from my github repos.

Some pushes simply fail to show up.

I have to make another dummy push on github for the sync into the source code repository to happen and trigger the build.

While you fix the bug, a "Sync now" button on a repo would help me keep my workflow going.

Fetch deeper history in repo?

We need to pull a deeper history for our repos (because they use an automatic versioning scheme that requires knowing how far HEAD is from the prior tag)

This has generally worked:

- name: gcr.io/cloud-builders/git
  args: [fetch, --depth=100]

But with one repo I get this error:

BUILD
Step #0: Already have image (with digest): gcr.io/cloud-builders/git
Starting Step #0
Step #0: fatal: missing blob object 'a702af496a55a81c491fe2eef993939a26c0d8a6'
Step #0: error: https://source.developers.google.com/p/$project/r/$repo did not send all necessary objects
Step #0: 
Finished Step #0

Any thoughts on the cause?

Builds fail with unreachable commit ID

I have a trigger set up to run a build when a push occurs to a Google source repository. There seems to be a race condition where sometimes the builds fail with:

error loading template: could not fetch file from source: generic::not_found: unreachable commit IDs: XXX

If I go and manually run the trigger, the build executes fine, so it appears to be trying to fetch the commit before the google source repository has fully synced?

Initially it only happened occasionally, but recently most pushes have failed to build.

gcloud: service account permissions

I'm trying to set up continuous deployment for an App Engine application (standard environment) using container builder. I have the following cloudbuild.yaml setup:

steps:
- name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', 'frontend/app.yaml']

As I understand it, the cloud builder uses the service account [PROJECT-ID]@cloudbuild.gserviceaccount.com, so I added the App Engine Admin role in the IAM section of the console. Still, it fails with the error:

ERROR: (gcloud.app.deploy) You do not have permission to access app [...] (or it may not exist): Request had insufficient authentication scopes.

Any idea what I'm missing?

Thanks!

export env variables from step to step.

I would like to have a single trigger select different clusters based upon the branch name.
The code below works, but it looks like a kludge. Is there a more elegant way to pass environment variables from step to step?

steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  id: 'set-cluster'
  waitFor: ['-']
  args:
  - '-c'
  - |
    if [ "$BRANCH_NAME" = "develop" ]
    then
    echo "us-central1-f" > ZONE.txt
    echo "cluster-1" > CLUSTER.txt
    else
    echo "us-west1-a" > ZONE.txt
    echo "cluster-2" > CLUSTER.txt
    fi
    cat ZONE.txt
    cat CLUSTER.txt

- name: 'gcr.io/cloud-builders/kubectl'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    export CLOUDSDK_COMPUTE_ZONE=`cat ZONE.txt`
    export CLOUDSDK_CONTAINER_CLUSTER=`cat CLUSTER.txt`
    /builder/kubectl.bash get ns
  waitFor: ['set-cluster']

Consider merging aspnetcorebuild image here

Just discovered a build image for 'gcr.io/gcp-runtimes/aspnetcorebuild-1.0:latest' exists. (from here).

Since it's not described in this repo and not listed under gcr.io/cloud-builders I think this introduces a discoverability issue. By not having that image here, we're introducing 2 problems/questions for a user:

  • why is aspnetcorebuild not under cloud-builders too?
  • what else is under "gcr.io/gcp-runtimes" (google search doesn't return anything)

My recommendation: create this image here under cloud-builders bucket as well and have it just do FROM .....

Error when trying to run builds in parallel in the go image, workspace file exists

Hey guys, I'm not sure if I'm posting this to the wrong project.

I have an issue running builds in parallel in the go build image. It tries to shadow-link the workspace directory, but since two steps run in parallel, the directory already exists.

cloudbuilder.yaml

steps:
  - name: 'gcr.io/cloud-builders/go'
    args: ['get', 'github.com/tools/godep']
    env: ['PROJECT_ROOT=authproxy']
  - name: 'gcr.io/cloud-builders/go'
    entrypoint: 'gopath/bin/godep'
    args: ['restore']
    env: ['PROJECT_ROOT=authproxy']
    id: 'go-restore'
  - name: 'gcr.io/cloud-builders/go'
    env: ['PROJECT_ROOT=authproxy']
    args: ['tool', 'vet', '.']
    waitFor: ['go-restore']
  - name: 'gcr.io/cloud-builders/go'
    env: ['PROJECT_ROOT=authproxy']
    args: ['tool', 'vet', '-shadowstrict', '.']
    waitFor: ['go-restore']

logs

starting build "3ffb769c-04d5-418e-9304-251075dbcf12"

FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/dragon-creative/r/github-mogthesprog-authproxy
* branch 1692cc4909e860aeed3e494928358639682bf60b -> FETCH_HEAD
HEAD is now at 1692cc4 wip
BUILD
Step #0: Already have image (with digest): gcr.io/cloud-builders/go
Starting Step #0
Step #0: Creating shadow workspace and symlinking source into "./gopath/src/authproxy".
Step #0: Documentation at https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/go/README.md
Step #0: Running: go get github.com/tools/godep
Finished Step #0
Step #1 - "go-restore": Already have image (with digest): gcr.io/cloud-builders/go
Starting Step #1 - "go-restore"
Step #1 - "go-restore": godep: [WARNING]: godep should only be used inside a valid go package directory and
Step #1 - "go-restore": godep: [WARNING]: may not function correctly. You are probably outside of your $GOPATH.
Step #1 - "go-restore": godep: [WARNING]:	Current Directory: /workspace
Step #1 - "go-restore": godep: [WARNING]:	$GOPATH: 
Finished Step #1 - "go-restore"
Step #2: Already have image (with digest): gcr.io/cloud-builders/go
Starting Step #2
Step #3: Already have image (with digest): gcr.io/cloud-builders/go
Starting Step #3
Step #2: Creating shadow workspace and symlinking source into "./gopath/src/authproxy".
Step #2: Documentation at https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/go/README.md
Step #2: Running: go tool vet .
Step #3: Creating shadow workspace and symlinking source into "./gopath/src/authproxy".
Step #3: ln: ./gopath/src/authproxy/workspace: File exists
Finished Step #3
ERROR
ERROR: build step "gcr.io/cloud-builders/go@sha256:fd73433f7cb47df8712bab4ba95c5c4d98877fbab39a8155f333544d9c27652a" failed: exit status 1

It's likely something I've done wrong in the cloudbuilder yaml file, since I'm still pretty new to this.

Proposal: Provide contract "go.{ash,bash} will be in PATH"

I just found myself doing this:

steps:
- id: compile
  name: gcr.io/cloud-builders/go:alpine
  env: ["PROJECT_ROOT=myapp"]
  entrypoint: /bin/ash # we need this to do the $(git) substitution below
  args:
  - '-c'
  - >
    /builder/go.ash install -v \
      -ldflags="-X myapp/version.version=$(git describe --always --dirty)" \
        ./myapp/main

Note that I'm referring to the program as /bin/ash -c "/builder/go.ash ...". If somebody were to change that path, downstream users referring to go.ash by its full path would break.

Proposal: Add the directory containing go.ash/go.bash to PATH env var.

If folks agree, I can contribute a CL. @skelterjohn

Abstract builder?

Is there a reason we can't have a builder where we can supply the entrypoint at runtime?

For example, we need to use pip. I think making an image that does this would be fairly trivial (I might be wrong though): take an existing image and add ENTRYPOINT ["pip"] to the Dockerfile.

But then why not allow this as an argument in the cloudbuild.yaml, rather than having a specific list of builders with a full image for each?

go: document libc and ca-certificates.crt dependencies?

When applying the examples for the go builder to my project, I ran into two problems:

  1. By default, the binary depends on libc (some version of musl it seems?). This can be worked around by adding env: ["CGO_ENABLED=0"].
  2. If you make HTTPS connections, it needs /etc/ssl/certs/ca-certificates.crt for the trusted root certificates.

I think this is mostly a documentation oversight. I think it could be "solved" in any of the following ways:

  • Document these dependencies/limitations in go/README.md?
  • Recommend that people use the golang-project builder and not go. I think golang-project sets an alpine base image in the Dockerfile by default, which would avoid this.
  • Document the alpine base image that is used by the builder, so we could add it as the correct base instead of scratch?
  • Ship a minimal base image that includes the correct libc version and ca-certificates.crt in the correct location, so we could use that instead of alpine (probably overkill; this makes tiny binaries, but adds maintenance burden).
  • Add a working example that connects to https://www.google.com, since I think that runs into both these dependencies (for the resolver and to verify the HTTPS certificate).
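
For the first problem above, a minimal sketch of the CGO_ENABLED=0 workaround (PROJECT_ROOT is a placeholder project name):

```yaml
steps:
- name: 'gcr.io/cloud-builders/go'
  env: ['PROJECT_ROOT=myapp', 'CGO_ENABLED=0']  # static binary, no libc dependency
  args: ['install', './...']
```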

Where is the homevol mounted?

I've been attempting to share a file between a step that builds a docker image and another step that uses it to tag the image, similar to the git example in this repo.

I don't know where else to ask to find out what is going on. Based on the docs, I would think that writing out ./version in one step would mean ./version exists in the next steps, or that I could write to /builder/home/version and access it from later steps. But no matter what I've tried, the file does not exist.
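For anyone else hitting this: as I read the docs, /workspace is the volume that persists between steps, and each step starts with it as the working directory, so relative paths should refer to the same files across steps. A sketch of what I'd expect to work (the ubuntu image and version file are just for illustration):

```yaml
steps:
# Step 1 writes /workspace/version (the working directory is /workspace).
- name: 'ubuntu'
  args: ['bash', '-c', 'echo "1.2.3" > version']
# Step 2 reads the same file from the shared workspace.
- name: 'ubuntu'
  args: ['bash', '-c', 'cat version']
```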

Docker: support volume during build

docker run -v is supported, but docker build -v officially is not. However, RedHat ships a version of docker that supports -v in build; see the comment at moby/moby#14080 (comment). Could that feature be ported over, given that upstream Docker seems to be fighting against it?

I have a use case where I am building an image containing a binary. I build one image on top of another, both as part of the same cloudbuild.yaml setup. Both images depend on a private library of mine, and right now I have to bake the library into the first image so that it is available to the second. I would much rather use -v for both to avoid creating large images.

docker 1.13

Hi, are there currently any plans for using docker 1.13? --cache-from would make a big improvement to my build speeds.
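Once --cache-from is available, the usual pattern would look roughly like this (the app image name is illustrative; the pull is allowed to fail on the very first build, when no cached image exists yet):

```yaml
steps:
# Pull the previous image so its layers are available as a cache;
# ignore failure on the first build.
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/app:latest || exit 0']
# Build, reusing layers from the pulled image where possible.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:latest',
         '--cache-from', 'gcr.io/$PROJECT_ID/app:latest', '.']
```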

Troubles replicating local go install with cloud builder

For our go projects we are moving to locating all of our go src within a base go folder which in turn is what GOPATH is set to:

~/go

With this in mind for our github organization an example-project the path would look like:

~/go/src/github.com/vendasta/example-project

This example-project has a structure that looks like

./example-project
├── gke
└── exampleProject
    ├── package1
    ├── package2
    ├── package3
    ├── main.go
    ├── channel_handlers.go
    └── api_handlers.go

From within the terminal I can run go install ./exampleProject from the base project folder. I have so far been unable to replicate this go install command with the cloud builder.

gcloud container builds submit exampleProject --config cloudbuild.yaml

cloudbuild.yaml

steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['install', '.']
  env: ['PROJECT_ROOT=exampleProject']

I keep getting errors stating that the sub-packages cannot be found. Am I doing something wrong here?

Step #0: Creating shadow workspace and symlinking source into "./gopath/src/exampleProject".
Step #0: Documentation at https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/go/README.md
Step #0: Binaries built using 'go install' will go to "/workspace/gopath/bin".
Step #0: Running: go install .
Step #0: api_handlers.go:12:2: cannot find package "github.com/vendasta/example-project/exampleProject/package1" in any of:
Step #0:  /workspace/gopath/src/exampleProject/vendor/github.com/vendasta/example-project/exampleProject/package1 (vendor tree)
Step #0:  /usr/local/go/src/github.com/vendasta/example-project/exampleProject/package1 (from $GOROOT)
Step #0:  /workspace/gopath/src/github.com/vendasta/example-project/exampleProject/package1 (from $GOPATH)
Step #0: channel_handlers.go:6:2: cannot find package "github.com/vendasta/example-project/exampleProject/package2" in any of:
Step #0:  /workspace/gopath/src/exampleProject/vendor/github.com/vendasta/example-project/exampleProject/package2 (vendor tree)
Step #0:  /usr/local/go/src/github.com/vendasta/example-project/exampleProject/package2 (from $GOROOT)
Step #0:  /workspace/gopath/src/github.com/vendasta/example-project/exampleProject/package2 (from $GOPATH)
Step #0: api_handlers.go:13:2: cannot find package "github.com/vendasta/example-project/exampleProject/package3" in any of:
Step #0:  /workspace/gopath/src/exampleProject/vendor/github.com/vendasta/example-project/exampleProject/package3 (vendor tree)
Step #0:  /usr/local/go/src/github.com/vendasta/example-project/exampleProject/package3 (from $GOROOT)
Step #0:  /workspace/gopath/src/github.com/vendasta/example-project/exampleProject/package3 (from $GOPATH)
Finished Step #0
ERROR
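Judging from the "symlinking source into ./gopath/src/exampleProject" line, PROJECT_ROOT becomes the import path, so the packages end up at exampleProject/package1 instead of github.com/vendasta/example-project/exampleProject/package1, which is what the source imports. One likely fix (an untested sketch, assuming the source is submitted from the repository root so that PROJECT_ROOT can be set to the full import path):

```yaml
steps:
- name: 'gcr.io/cloud-builders/go'
  # Build the subdirectory package, rooted at the repo's import path.
  args: ['install', './exampleProject']
  env: ['PROJECT_ROOT=github.com/vendasta/example-project']
```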

Running golint or other go installed tools is clumsy

Hey all,

Would you be willing to allow overriding the command run at the end of the wrapper script for the go container, for example to run golint instead of go?

I would also be in support of a golint-specific image, since golint is a pretty popular tool as far as I know.

I have used workarounds such as building my own wrapper script for my own container, as well as using go run to run golint from source within your go container, but neither of those is ideal.
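One more workaround that may help in the meantime: the build step's entrypoint can be overridden in cloudbuild.yaml, though that bypasses the wrapper's GOPATH setup, so the workspace layout has to be arranged manually. A rough, untested sketch (the golint import path is the one I know of; it may change):

```yaml
steps:
- name: 'gcr.io/cloud-builders/go'
  # Skip the go wrapper entirely and run an arbitrary command.
  entrypoint: 'bash'
  args: ['-c', 'go get -u github.com/golang/lint/golint && golint ./...']
```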

Java builders based on alpine

Nearly all of our builders are based on ubuntu by default, unless some other basis is necessary, or unless we provide multiple builders based on ubuntu and some other distro (e.g., golang-project:ubuntu and golang-project:alpine).

Alpine generally produces smaller images, but it also means users have to use ash instead of bash and, in general, know more about the differences between Alpine and Ubuntu/Debian. Smaller builder images are nice, but ultimately less meaningful since worker VMs pre-cache our official builders before receiving builds.

Unless there's some reason java/mvn and java/gradle must be based on openjdk:8-jre-alpine I think we should consider basing them on openjdk:8-jre for consistency, and to continue the precedent set by existing official builders. This image is based on buildpack-deps:stretch-curl which is based on debian:stretch.

If they must be based on openjdk:8-jre-alpine we should document somewhere why we made that decision to help future builder developers understand it.

Bazel image with different Ubuntu version as base

Shall we also have gcr.io/cloud-builders/bazel:xenial, for example?

Our team is building a CI system on top of cloud builders, and plan to use bazel test for our Python3 code base built on top of Ubuntu:xenial.

We could do this ourselves and host the image in Container Registry, but I believe it makes more sense to have official support in this case.

variable substitution with unexpected tokenizations

hey folks,

if I use the following step in my cloudbuild.yaml, where I use '_' as the delimiter, the behavior is unexpected (and, I would argue, incorrect)

unexpected behavior

- name: 'gcr.io/cloud-builders/docker'
  dir: 'app'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app.web:$BRANCH_NAME_$COMMIT_SHA', '-t', 'gcr.io/$PROJECT_ID/app.web:$BRANCH_NAME_latest', '.']
  id:   'build'

when the build is triggered, it runs the following command (as shown by the CLI and gui under build details):

build -t gcr.io/my-project/app.web:598ccb22d52633474fa92a4290b627634749472e -t gcr.io/my-project/app.web:latest .

Notice that neither $BRANCH_NAME nor the _ delimiter appears?

correct behavior

if I use a - instead of an _ as a delimiter, I get the expected behavior:

- name: 'gcr.io/cloud-builders/docker'
  dir: 'app'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app.web:$BRANCH_NAME-$COMMIT_SHA', '-t', 'gcr.io/$PROJECT_ID/app.web:$BRANCH_NAME-latest', '.']
  id:   'build'

triggers the following build, correctly:

build -t gcr.io/my-project/app.web:test-branch-ef5776bd22bd72d6599a7f9ece6e78523c597553 -t gcr.io/my-project/app.web:test-branch-latest .

Summary

Tokenization of variable names seems to be too greedy. The expected behavior is that only the variable token itself is substituted, nothing beyond it.

let me know if you want additional information
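For what it's worth, wrapping the variable name in curly braces should disambiguate the token regardless of the delimiter that follows it, assuming the substitution engine supports bash-style ${VAR} syntax:

```yaml
- name: 'gcr.io/cloud-builders/docker'
  dir: 'app'
  # ${BRANCH_NAME} marks exactly where the variable name ends,
  # so the trailing _ is kept as a literal delimiter.
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app.web:${BRANCH_NAME}_$COMMIT_SHA',
         '-t', 'gcr.io/$PROJECT_ID/app.web:${BRANCH_NAME}_latest', '.']
  id:   'build'
```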

go & golang-project is confusing

I spent quite some time trying to understand how these two images differ and which one I should be using. This looks like a fairly confusing point, and I don't have an answer yet.

Is there a way to clarify this like:

NOTE: If you're looking for $X, use $IMAGE.

Create/Update Triggers with gcloud

I wasn't sure where to post this issue, but it would be nice to have a command line way to create triggers. I have a lot of microservices and it takes a lot of effort to add triggers via the UI.

Substitution $REPO_NAME is misleading

The $REPO_NAME substitution is misleading: you expect to get only the repository name, but instead get the more verbose {SOURCE}-{ORG}-{REPOSITORY} (e.g., github-my-org-hello-world-repo).

Maybe a $REPO_SHORTNAME could be added?
