googlecloudplatform / cloud-builders
Builder images and examples commonly used for Google Cloud Build
Home Page: https://cloud.google.com/cloud-build/
License: Apache License 2.0
I tried to deploy and I get "Request had insufficient authentication scopes." It seems that when requesting scopes for the project's builder service account, we request the following scopes:
{"aliases":["default"],"email":"[email protected]","scopes":"https://www.googleapis.com/auth/logging.write\nhttps://www.googleapis.com/auth/projecthosting\nhttps://www.googleapis.com/auth/pubsub\nhttps://www.googleapis.com/auth/devstorage.read_write"}
However in order to deploy we need:
scope="https://www.googleapis.com/auth/appengine.admin https://www.googleapis.com/auth/cloud-platform.read-only https://www.googleapis.com/auth/cloud-platform"
Any suggestions on how I could get around this?
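One way to see exactly which scopes the build is running with (a diagnostic sketch only; the token's scopes are fixed by the service, not by the build config) is to query the metadata server from inside a step:

```yaml
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      # Print the OAuth scopes granted to the build's service-account token.
      curl -s -H 'Metadata-Flavor: Google' \
        'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes'
```

If https://www.googleapis.com/auth/cloud-platform is not in the output, gcloud app deploy cannot work from inside the build regardless of which IAM roles you grant.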
The $REPO_NAME substitution is misleading: you expect to get only the repository name, but instead you get the more verbose {SOURCE}-{ORG}-{REPOSITORY} (e.g. github-my-org-hello-world-repo).
Maybe a $REPO_SHORTNAME could be added?
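Until something like $REPO_SHORTNAME exists, a possible workaround is to strip the prefix in a shell step; the github-my-org- prefix below is specific to this example and would differ per trigger:

```yaml
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      # $REPO_NAME is substituted by Cloud Build before bash runs;
      # strip the "{SOURCE}-{ORG}-" prefix to recover the bare repo name.
      full="$REPO_NAME"
      echo "short repo name: ${full#github-my-org-}"
```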
Compare https://docs.docker.com/engine/installation/linux/ubuntu/#install-using-the-repository with the Dockerfile https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/docker/Dockerfile
Mention me with @haf if it's urgent that I follow this thread.
I am trying to inspect an image and am getting an error:
$ gcloud container images describe gcr.io/my-project/my-image:latest
ERROR: (gcloud.container.images.describe) You do not have permission to access project [my-project] (or it may not exist): permission "containeranalysis.occurrences.list" denied for project "my-project": Service 'containeranalysis.googleapis.com' is not enabled for consumer 'project_number:12345'.
All other gcloud commands work. I am using an owner account, or so I think.
Just discovered that a build image 'gcr.io/gcp-runtimes/aspnetcorebuild-1.0:latest' exists (from here).
Since it's not described in this repo and not listed under gcr.io/cloud-builders, I think this introduces a discoverability issue. By not having that image here, we introduce two problems/questions for a user:
My recommendation: create this image here under the cloud-builders bucket as well and have it just do FROM ....
I would like to have a single trigger select different clusters based upon the branch name.
The code below works, but it looks like a kludge. Is there a more elegant way to pass environment variables from step to step?
steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
id: 'set-cluster'
waitFor: ['-']
args:
- '-c'
- |
if [ "$BRANCH_NAME" = "develop" ]
then
echo "us-central1-f" > ZONE.txt
echo "cluster-1" > CLUSTER.txt
else
echo "us-west1-a" > ZONE.txt
echo "cluster-2" > CLUSTER.txt
fi
cat ZONE.txt
cat CLUSTER.txt
- name: 'gcr.io/cloud-builders/kubectl'
entrypoint: 'bash'
args:
- '-c'
- |
export CLOUDSDK_COMPUTE_ZONE=`cat ZONE.txt`
export CLOUDSDK_CONTAINER_CLUSTER=`cat CLUSTER.txt`
/builder/kubectl.bash get ns
waitFor: ['set-cluster']
I noticed the discussion in #50 by @aslo and @skelterjohn
Does it make sense to point to the internal google cloud maven mirror described here:
https://www.infoq.com/news/2015/11/maven-central-at-google
I can work up a PR if someone thinks it's worthwhile. While it's nice to have a few 'predefined' dependencies, pointing to what should be a 'faster' Maven server seems to make sense in my mind.
I spent quite some time trying to understand how these two images are different and which one I should be using. This looks like a fairly confusing point, and I don't have an answer yet.
Is there a way to clarify this, like:
NOTE: If you're looking for $X, use $IMAGE.
I've been attempting to share a file written out during a step that builds a Docker image with another step that uses it to tag the image, similar to the git example in this repo.
I don't know where else to ask to find out what is going on. Based on the docs, I would think writing out ./version in one step would mean ./version exists in the next steps, or that I could write to /builder/home/version and access that from later steps. But no matter what I've tried, I find the file does not exist.
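For reference, the documented mechanism is that /workspace is a volume mounted into every step, so relative paths should persist between steps. A minimal sketch of the intended pattern (image names and the version value are placeholders):

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  # Write ./version (i.e. /workspace/version) while building the image.
  args: ['-c', 'echo "1.2.3" > version && docker build -t built-image .']
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  # The same /workspace is mounted here, so ./version should still exist.
  args: ['-c', 'docker tag built-image gcr.io/$PROJECT_ID/built-image:$(cat version)']
```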
I ended up writing the following, and it took me a few hours to figure out.
steps:
- name: 'gcr.io/cloud-builders/go'
env: ['PROJECT_ROOT=github.com/errordeveloper/kubegen']
args: ['get', 'github.com/Masterminds/glide']
- name: 'gcr.io/cloud-builders/golang-project'
env: ['PROJECT_ROOT=github.com/errordeveloper/kubegen']
args: ['github.com/Masterminds/glide', '--base-image=gcr.io/cloud-builders/golang-project', '--tag=builder-with-glide']
- name: 'builder-with-glide'
env: ['PROJECT_ROOT=github.com/errordeveloper/kubegen']
args: ['-c', 'source /builder/prepare_workspace.inc && prepare_workspace && cd ./gopath/src/github.com/errordeveloper/kubegen && glide up --strip-vendor']
entrypoint: '/bin/sh'
- name: 'gcr.io/cloud-builders/golang-project'
env: ['PROJECT_ROOT=github.com/errordeveloper/kubegen']
args: ['github.com/errordeveloper/kubegen/cmd/kubegen', '--base-image=scratch', '--tag=gcr.io/$PROJECT_ID/kubegen']
images: ['gcr.io/$PROJECT_ID/kubegen']
I'm not entirely happy with how it looks/works right now, but maybe we could discuss some improvements and upstream this somehow, or at least document it?
Hey, I'm not sure if I'm posting this to the wrong project, by the way.
I have an issue running builds in parallel in the go build image. It tries to shadow-link the workspace directory, but since two steps run in parallel, it seems that the directory already exists.
cloudbuilder.yaml
steps:
- name: 'gcr.io/cloud-builders/go'
args: ['get', 'github.com/tools/godep']
env: ['PROJECT_ROOT=authproxy']
- name: 'gcr.io/cloud-builders/go'
entrypoint: 'gopath/bin/godep'
args: ['restore']
env: ['PROJECT_ROOT=authproxy']
id: 'go-restore'
- name: 'gcr.io/cloud-builders/go'
env: ['PROJECT_ROOT=authproxy']
args: ['tool', 'vet', '.']
waitFor: ['go-restore']
- name: 'gcr.io/cloud-builders/go'
env: ['PROJECT_ROOT=authproxy']
args: ['tool', 'vet', '-shadowstrict', '.']
waitFor: ['go-restore']
logs
starting build "3ffb769c-04d5-418e-9304-251075dbcf12"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/dragon-creative/r/github-mogthesprog-authproxy
* branch 1692cc4909e860aeed3e494928358639682bf60b -> FETCH_HEAD
HEAD is now at 1692cc4 wip
BUILD
Step #0: Already have image (with digest): gcr.io/cloud-builders/go
Starting Step #0
Step #0: Creating shadow workspace and symlinking source into "./gopath/src/authproxy".
Step #0: Documentation at https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/go/README.md
Step #0: Running: go get github.com/tools/godep
Finished Step #0
Step #1 - "go-restore": Already have image (with digest): gcr.io/cloud-builders/go
Starting Step #1 - "go-restore"
Step #1 - "go-restore": godep: [WARNING]: godep should only be used inside a valid go package directory and
Step #1 - "go-restore": godep: [WARNING]: may not function correctly. You are probably outside of your $GOPATH.
Step #1 - "go-restore": godep: [WARNING]: Current Directory: /workspace
Step #1 - "go-restore": godep: [WARNING]: $GOPATH:
Finished Step #1 - "go-restore"
Step #2: Already have image (with digest): gcr.io/cloud-builders/go
Starting Step #2
Step #3: Already have image (with digest): gcr.io/cloud-builders/go
Starting Step #3
Step #2: Creating shadow workspace and symlinking source into "./gopath/src/authproxy".
Step #2: Documentation at https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/go/README.md
Step #2: Running: go tool vet .
Step #3: Creating shadow workspace and symlinking source into "./gopath/src/authproxy".
Step #3: ln: ./gopath/src/authproxy/workspace: File exists
Finished Step #3
ERROR
ERROR: build step "gcr.io/cloud-builders/go@sha256:fd73433f7cb47df8712bab4ba95c5c4d98877fbab39a8155f333544d9c27652a" failed: exit status 1
It's likely something I've done wrong in the cloudbuilder yaml file, since I'm still pretty new to this.
Hey, I really like the idea of having Container Builder available.
For some reason it doesn't work for us at the moment, though.
We have a cloudbuild.yaml like this:
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-project', '.']
env:
- 'RAILS_ENV=staging'
- 'SECRET_KEY_BASE=abc'
- 'ASSETS_PROVIDER=Google'
- 'GOOGLE_STORAGE_ACCESS_KEY=MY'
- 'GOOGLE_STORAGE_SECRET_KEY=KEY'
images:
- 'gcr.io/$PROJECT_ID/my-project'
timeout: '1200s'
But for some reason it's not using the env variables.
Did I make a mistake, or is this a beta bug?
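One likely explanation: env only sets variables in the build step's own environment; docker build does not forward that environment into the Dockerfile. If the values are needed at image-build time, the usual pattern is --build-arg plus matching ARG declarations in the Dockerfile. A sketch under that assumption:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args:
    - 'build'
    # Each --build-arg needs a corresponding "ARG RAILS_ENV" (etc.)
    # line in the Dockerfile to take effect.
    - '--build-arg=RAILS_ENV=staging'
    - '--build-arg=SECRET_KEY_BASE=abc'
    - '-t'
    - 'gcr.io/$PROJECT_ID/my-project'
    - '.'
```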
Is there a reason we can't have a builder where we can supply the entrypoint at runtime?
For example, we need to use pip. I think making an image that did this would be fairly trivial (I might be wrong though), namely taking an existing image and adding ENTRYPOINT ["pip"] to the Dockerfile.
But then why not allow this as an argument in the cloudbuild.yaml, rather than having a specific list of builders with a full image for each?
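For what it's worth, a build step can already override the image's entrypoint via the entrypoint field, which covers this use case without a dedicated pip builder. The image choice here is an assumption; any image that ships pip would do:

```yaml
steps:
- name: 'python:2.7'   # assumed: any public image that contains pip
  entrypoint: 'pip'
  args: ['install', '--user', '-r', 'requirements.txt']
```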
When applying the examples for the go builder to my project, I ran into two problems:
- env: ["CGO_ENABLED=0"] was needed to build a static binary.
- ./etc/ssl/certs/ca-certificates.crt was needed for the trusted root certificates.
I think this is mostly a documentation oversight. I think it could be "solved" in any of the following ways:
- Document this in go/README.md?
- Use the golang-project builder and not go. I think golang-project sets an alpine base image in the Dockerfile by default, which would avoid this.
- scratch? It needs the right libc version and ca-certificates.crt in the correct location, so we could use that instead of alpine (probably overkill; this makes tiny binaries, but adds maintenance burden).
For our go projects we are moving to locating all of our go src within a base go folder, which in turn is what GOPATH is set to:
~/go
With this in mind, for our GitHub organization and an example-project, the path would look like:
~/go/src/github.com/vendasta/example-project
This example-project has a structure that looks like
./example-project
├── gke
└── exampleProject
├── package1
├── package2
└── package3
├── main.go
├── channel_handlers.go
├── api_handlers.go
From within the terminal I can run go install ./exampleProject from the base project folder. I have as of yet been unable to replicate this go install command with the cloud builder.
gcloud container builds submit exampleProject --config cloudbuild.yaml
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/go'
args: ['install', '.']
env: ['PROJECT_ROOT=exampleProject']
I keep getting errors stating that the subpackages cannot be found. Am I doing something wrong here?
Step #0: Creating shadow workspace and symlinking source into "./gopath/src/exampleProject".
Step #0: Documentation at https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/go/README.md
Step #0: Binaries built using 'go install' will go to "/workspace/gopath/bin".
Step #0: Running: go install .
Step #0: api_handlers.go:12:2: cannot find package "github.com/vendasta/example-project/exampleProject/package1" in any of:
Step #0: /workspace/gopath/src/exampleProject/vendor/github.com/vendasta/example-project/exampleProject/package1 (vendor tree)
Step #0: /usr/local/go/src/github.com/vendasta/example-project/exampleProject/package1 (from $GOROOT)
Step #0: /workspace/gopath/src/github.com/vendasta/example-project/exampleProject/package1 (from $GOPATH)
Step #0: channel_handlers.go:6:2: cannot find package "github.com/vendasta/example-project/exampleProject/package2" in any of:
Step #0: /workspace/gopath/src/exampleProject/vendor/github.com/vendasta/example-project/exampleProject/package2 (vendor tree)
Step #0: /usr/local/go/src/github.com/vendasta/example-project/exampleProject/package2 (from $GOROOT)
Step #0: /workspace/gopath/src/github.com/vendasta/example-project/exampleProject/package2 (from $GOPATH)
Step #0: api_handlers.go:13:2: cannot find package "github.com/vendasta/example-project/exampleProject/package3" in any of:
Step #0: /workspace/gopath/src/exampleProject/vendor/github.com/vendasta/example-project/exampleProject/package3 (vendor tree)
Step #0: /usr/local/go/src/github.com/vendasta/example-project/exampleProject/package3 (from $GOROOT)
Step #0: /workspace/gopath/src/github.com/vendasta/example-project/exampleProject/package3 (from $GOPATH)
Finished Step #0
ERROR
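The errors show the imports resolving under github.com/vendasta/example-project/..., while the shadow workspace was created at gopath/src/exampleProject. Setting PROJECT_ROOT to the full import path should make the two line up; a sketch, assuming the submitted source contains the exampleProject directory at its root:

```yaml
steps:
- name: 'gcr.io/cloud-builders/go'
  # PROJECT_ROOT must match the import path used in the source files, so
  # the workspace is symlinked at
  # ./gopath/src/github.com/vendasta/example-project.
  env: ['PROJECT_ROOT=github.com/vendasta/example-project']
  args: ['install', './exampleProject']
```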
This library is required by gRPC. We'll save users a lot of time by just installing it ourselves.
Nearly all of our builders are based on ubuntu by default, unless some other basis is necessary, or unless we provide multiple builders based on ubuntu and some other distro (e.g., golang-project:ubuntu and golang-project:alpine).
Alpine generally produces smaller images, but it also means users have to use ash instead of bash, and in general know more about the differences between Alpine and Ubuntu/Debian. Smaller builder images are nice, but ultimately less meaningful, since worker VMs pre-cache our official builders before receiving builds.
Unless there's some reason java/mvn and java/gradle must be based on openjdk:8-jre-alpine, I think we should consider basing them on openjdk:8-jre for consistency, and to continue the precedent set by existing official builders. That image is based on buildpack-deps:stretch-curl, which is based on debian:stretch.
If they must be based on openjdk:8-jre-alpine, we should document somewhere why we made that decision, to help future builder developers understand it.
Hi, are there currently any plans for using docker 1.13? --cache-from would make a big improvement to my build speeds.
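Once a docker with --cache-from lands in the builder, the usual pattern would look something like this (the image name is a placeholder; the pull tolerates failure on the very first build):

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  # Seed the layer cache with the previous image; ignore a missing tag.
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/my-image:latest || exit 0']
- name: 'gcr.io/cloud-builders/docker'
  args:
    - 'build'
    - '--cache-from=gcr.io/$PROJECT_ID/my-image:latest'
    - '-t'
    - 'gcr.io/$PROJECT_ID/my-image:latest'
    - '.'
images: ['gcr.io/$PROJECT_ID/my-image:latest']
```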
It is unclear how repositories with submodules can be built.
This is what the build currently does:
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
... <fetches from the single specified repository> ...
It is not clear whether it is possible to use the cloud-builders/git container somehow to fetch all of the repositories. Is the workspace cleared between each build step? What if they are executed in parallel?
It would feel better if the builders were available in the European Union as well, so this is a suggestion to publish the images on eu.gcr.io as well as gcr.io.
Please mention @haf if it's urgent that I reply to this issue.
npm allows you to fetch node_modules directly from a github private repo.
"dependencies": {
"express": "4.14.0",
"privatepackage": "git+https://github.com/myaccount/privatepackage.git"
}
Is there a way in the builder to access either GitHub or, if I mirror the repo at Google, something like:
"privatepackage": "https://source.developers.google.com/p/$PROJECT_ID/r/privatepackage"
How would I pass credentials?
I'm not sure if this is the place for a feature request for GCCB, but here we go.
If you have a build step like:
- name: 'gcr.io/cloud-builders/git'
args:
- clone
- https://github.com/googleapis/googleapis
...
and you work on your local computer, you obviously have that second repo cloned in the same place; but if you submit the build job, it will always fail unless you remove the directory from your local dir.
I think an ignore-list file like .gitignore, which would exclude patterns from the uploaded tarball, would solve this.
Thanks
I have set up build triggers from my github repos.
Some pushes just fail to show up.
I have to make another fake push on github for the sync to trigger into source code repository and trigger the build.
While you fix the bug, a way to say "Sync now" on a repo would help me keep my workflow going.
hey folks,
if I use the following step in my cloudbuilder.yaml, where I use '_' as the delimiter, the behavior is unexpected (and, I would argue, incorrect):
- name: 'gcr.io/cloud-builders/docker'
dir: 'app'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/app.web:$BRANCH_NAME_$COMMIT_SHA', '-t', 'gcr.io/$PROJECT_ID/app.web:$BRANCH_NAME_latest', '.']
id: 'build'
when the build is triggered, it runs the following command (as shown by the CLI and gui under build details):
build -t gcr.io/my-project/app.web:598ccb22d52633474fa92a4290b627634749472e -t gcr.io/my-project/app.web:latest .
Notice how the $BRANCH_NAME does not exist, nor does the _ delimiter?
If I use a - instead of an _ as the delimiter, I get the expected behavior:
- name: 'gcr.io/cloud-builders/docker'
dir: 'app'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/app.web:$BRANCH_NAME-$COMMIT_SHA', '-t', 'gcr.io/$PROJECT_ID/app.web:$BRANCH_NAME-latest', '.']
id: 'build'
triggers the following build, correctly:
build -t gcr.io/my-project/app.test-branch-ef5776bd22bd72d6599a7f9ece6e78523c597553 -t gcr.io/my-project/app.web:test-branch-latest .
Tokenization of variables seems to be too greedy. The expected behavior is that it should substitute only the token, not anything else.
Let me know if you want additional information.
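A workaround that should sidestep the greedy tokenization is the braced substitution form, which makes the end of the variable name explicit:

```yaml
- name: 'gcr.io/cloud-builders/docker'
  dir: 'app'
  # ${BRANCH_NAME} is unambiguous even when immediately followed by '_'.
  args: ['build',
         '-t', 'gcr.io/$PROJECT_ID/app.web:${BRANCH_NAME}_${COMMIT_SHA}',
         '-t', 'gcr.io/$PROJECT_ID/app.web:${BRANCH_NAME}_latest',
         '.']
  id: 'build'
```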
I have two private repositories on GitHub, A and B. Both are Golang projects. Project A depends on repository B. I have linked both repositories to Google Source Code and consented to the permissions.
Now in Container builder, when I try to build for project A, it is not able to access repository B. In the build logs, I see:
Step #1: [WARN] Unable to checkout github.com/avi/api
Step #1: [ERROR] Update failed for github.com/avi/api: Unable to get repository
The above happens when I try to run glide install:
steps:
- name: 'gcr.io/cloud-builders/glide'
args: ['install', '.']
Later I thought maybe I could clone the repo first and make glide use the local repo, so I tried:
steps:
- name: 'gcr.io/cloud-builders/git'
args: ['clone', 'git@github.com:avinassh/api.git']
But it failed saying:
Step #0: Already have image (with digest): gcr.io/cloud-builders/git
Starting Step #0
Step #0: Cloning into 'ssh_clone'...
Step #0: Host key verification failed.
Step #0: fatal: Could not read from remote repository.
Step #0:
Step #0: Please make sure you have the correct access rights
Step #0: and the repository exists.
Finished Step #0
and when I tried HTTPS instead of SSH, I got the following error:
Step #0: Cloning into 'api'...
Step #0: fatal: could not read Username for 'https://github.com': No such device or address
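A sketch of one possible workaround: install an SSH key and a known_hosts entry on a named volume before cloning. How the key gets into the build securely (KMS, a private bucket, etc.) is deliberately left out here, and the key's source path is hypothetical:

```yaml
steps:
# Prepare SSH config on a volume shared with the next step.
- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      cp /path/to/decrypted/id_rsa /root/.ssh/id_rsa   # hypothetical key source
      chmod 600 /root/.ssh/id_rsa
      ssh-keyscan -t rsa github.com > /root/.ssh/known_hosts
  volumes:
    - name: 'ssh'
      path: /root/.ssh
- name: 'gcr.io/cloud-builders/git'
  args: ['clone', 'git@github.com:avinassh/api.git']
  volumes:
    - name: 'ssh'
      path: /root/.ssh
```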
I have a trigger set up to run a build when a push occurs to a Google Source Repository. There seems to be a race condition where sometimes the builds fail with:
error loading template: could not fetch file from source: generic::not_found: unreachable commit IDs: XXX
If I go and manually run the trigger then the build executes fine, so it appears to be trying to fetch the commit before google source repository has fully synced?
Initially it only happened occasionally, but recently most pushes have failed to build.
Would it be possible to run the builder locally for testing purposes?
When I run gcloud container builds submit --config=cloudbuild.yaml . with the bazel builder, it fails in Step #2 with this error (build label: 0.4.5):
Step #2: ____[22 / 32] GoCompile subdir/hello.a
Step #2: ERROR: /workspace/examples/subdir/BUILD:8:1: null failed: hello.a.GoCompileFile.params failed: error executing command
Step #2: (exec env - \
Step #2: GOARCH=amd64 \
Step #2: GOOS=linux \
Step #2: bazel-out/local-fastbuild/bin/subdir/bazel-out/local-fastbuild/bin/subdir/hello.a.GoCompileFile.params)
Step #2:
Step #2: Use --sandbox_debug to see verbose messages from the sandbox.
Step #2: open github.com/GoogleCloudPlatform/cloud-builders/bazel/examples/subdir/main.go: open github.com/GoogleCloudPlatform/cloud-builders/bazel/examples/subdir/main.go: permission denied
Step #2: Use --strategy=GoCompile=standalone to disable sandboxing for the failing actions.
Step #2: ____Building complete.
Step #2: Target //subdir:target failed to build
Step #2: ____Elapsed time: 29.041s, Critical Path: 0.20s
Step #2: ERROR: Build failed. Not running target.
With a cloudbuild.yaml file like this:
steps:
- name: 'gcr.io/cloud-builders/golang-project'
args:
- the/package/for/a/binary/abcdef
- --tag=us.gcr.io/$PROJECT_ID/abcdef:$REVISION_ID
env: ['GOPATH=./go']
images:
- 'us.gcr.io/$PROJECT_ID/abcdef:$REVISION_ID'
I get this when the build runs:
Step #0: Already have image (with digest): gcr.io/cloud-builders/golang-project
Starting Step #0
Step #0: Documentation at https://github.com/GoogleCloudPlatform/cloud-builders/blob/master/golang-project/README.md
Step #0: WORKSPACE must be set
Finished Step #0
ERROR
ERROR: build step "gcr.io/cloud-builders/golang-project@sha256:ad8294b30cd0b8d1ff35409c3f5e75314e6e3e67caea36fe08f9c04f2a8d6439" failed: exit status 1
On the history page on console.cloud.google.com/gcr/builds/<build-id>
it shows:
Directory /workspace/
Setting env: ['GOPATH=./go', 'WORKSPACE=/workspace/']
seems to fix the issue, but it is not mentioned anywhere in the documentation that I would be expected to set WORKSPACE.
I wasn't sure where to post this issue, but it would be nice to have a command-line way to create triggers. I have a lot of microservices, and it takes a lot of effort to add triggers via the UI.
docker run -v is supported, but officially docker build -v is not. However, RedHat seems to provide a version of docker which contains support for -v in build; see the comment at moby/moby#14080 (comment). Could that feature be ported over, since Docker seems to fight against it?
I have a use case where I am building an image with a binary. I have an image that I am building on top of another image, both built as part of the same cloudbuild.yaml setup. Both images depend on a private library of mine, and right now I have to put the library into the 1st image in order for it to be available in the 2nd. I would much rather use -v with both to avoid creating large images.
I just found myself doing this:
steps:
- id: compile
name: gcr.io/cloud-builders/go:alpine
env: ["PROJECT_ROOT=myapp"]
entrypoint: /bin/ash # we need this to do the $(git) substitution below
args:
- '-c'
- >
/builder/go.ash install -v \
-ldflags="-X myapp/version.version=$(git describe --always --dirty)" \
./myapp/main
Note that I'm invoking the program as /bin/ash -c "/builder/go.ash ...". If somebody were to change that path, downstream users referring to go.ash by its full path would break.
Proposal: add the directory containing go.ash / go.bash to the PATH env var.
If folks agree, I can contribute a CL. @skelterjohn
It would be great if there were some way to specify the Go release used to build a container. As far as I can tell, this is not controllable. This is not a critical issue for us at the moment, but at some point we will want to control when we upgrade between major releases. I don't have a good idea for how this should work: maybe a GO_VERSION variable would work, although you would then need to ship all the versions in your base image. Alternatively, maybe you need to replicate all the tags from the upstream golang Docker image?
Shall we also have gcr.io/cloud-builders/bazel:xenial, for example?
Our team is building a CI system on top of cloud builders, and we plan to use bazel test for our Python 3 code base built on top of Ubuntu:xenial.
Though we could do this on our own and host the image in the cloud registry, I believe it makes more sense to have official support in this case.
I get this error when running tests accessing any GCP services that require auth within a container. The Builder Service Account has Edit permissions to BigQuery & GCS.
Should this be possible?
Here's a lengthy stack trace for one of the errors:
Step #2: __________________________ ERROR at setup of test_get __________________________
Step #2:
Step #2: project = '[--]', bucket_name = '[--]'
Step #2: blob_name_expanded = 'tests/test_650575/blob_415687', data = 'abcdef'
Step #2:
Step #2: @pytest.fixture()
Step #2: def existing_blob_name(project, bucket_name, blob_name_expanded, data):
Step #2: > blob = gcs.get_client(project).get_bucket(bucket_name).blob(blob_name_expanded)
Step #2:
Step #2: /src/sixty/tests/test_hooks/test_gcs.py:43:
Step #2: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/storage/client.py:173: in get_bucket
Step #2: bucket.reload(client=self)
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/storage/_helpers.py:99: in reload
Step #2: _target_object=self)
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/_http.py:299: in api_request
Step #2: headers=headers, target_object=_target_object)
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/_http.py:193: in _make_request
Step #2: return self._do_request(method, url, headers, data, target_object)
Step #2: /usr/local/lib/python2.7/dist-packages/google/cloud/_http.py:223: in _do_request
Step #2: body=data)
Step #2: /usr/local/lib/python2.7/dist-packages/google_auth_httplib2.py:187: in request
Step #2: self._request, method, uri, request_headers)
Step #2: /usr/local/lib/python2.7/dist-packages/google/auth/credentials.py:121: in before_request
Step #2: self.refresh(request)
Step #2: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Step #2:
Step #2: self = <google.auth.compute_engine.credentials.Credentials object at 0x7f1098b6ba90>
Step #2: request = <google_auth_httplib2.Request object at 0x7f1098b6b410>
Step #2:
Step #2: def refresh(self, request):
Step #2: """Refresh the access token and scopes.
Step #2:
Step #2: Args:
Step #2: request (google.auth.transport.Request): The object used to make
Step #2: HTTP requests.
Step #2:
Step #2: Raises:
Step #2: google.auth.exceptions.RefreshError: If the Compute Engine metadata
Step #2: service can't be reached if if the instance has not
Step #2: credentials.
Step #2: """
Step #2: try:
Step #2: self._retrieve_info(request)
Step #2: self.token, self.expiry = _metadata.get_service_account_token(
Step #2: request,
Step #2: service_account=self._service_account_email)
Step #2: except exceptions.TransportError as exc:
Step #2: > raise exceptions.RefreshError(exc)
Step #2: E RefreshError: ('Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Status: 404 Response:\n<!DOCTYPE html>\n<html lang=en>\n <meta charset=utf-8>\n <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">\n <title>Error 404 (Not Found)!!1</title>\n <style>\n *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}\n </style>\n <a href=//www.google.com/><span id=logo aria-label=Google></span></a>\n <p><b>404.</b> <ins>That\xe2\x80\x99s an error.</ins>\n <p>The requested URL <code>/computeMetadata/v1/instance/service-accounts/default/?recursive=true</code> was not found on this server. <ins>That\xe2\x80\x99s all we know.</ins>\n', <google_auth_httplib2._Response object at 0x7f1098b6b9d0>)
Step #2:
Step #2: /usr/local/lib/python2.7/dist-packages/google/auth/compute_engine/credentials.py:93: RefreshError
Can anyone please help me? I can't figure out how to deploy a hello-world Node.js application on Google App Engine. This is the error I get when I try to deploy:
ERROR: (gcloud.app.deploy) You do not have permission to access project [mazouzialami] (or it may not exist): The caller does not have permission
It would be nice to have support for https://github.com/golang/dep, since it's getting a lot of traction lately as an alternative to Glide.
We need to pull a deeper history for our repos (because they use an automatic versioning scheme that requires knowing how far HEAD is from the prior tag).
This has generally worked:
- name: gcr.io/cloud-builders/git
args: [fetch, --depth=100]
But with one repo I get this error:
BUILD
Step #0: Already have image (with digest): gcr.io/cloud-builders/git
Starting Step #0
Step #0: fatal: missing blob object 'a702af496a55a81c491fe2eef993939a26c0d8a6'
Step #0: error: https://source.developers.google.com/p/$project/r/$repo did not send all necessary objects
Step #0:
Finished Step #0
Any thoughts on the cause?
I am using a service account to perform builds. The command used is pretty standard:
gcloud container builds submit --substitutions "_TAG=$TAG" --config cloudbuild.yaml .
The build was created but I had errors when the log was being read:
Creating temporary tarball archive of 663 file(s) totalling 4.3 MiB before compression.
Uploading tarball of [.] to [gs://xxxx_cloudbuild/source/xxx.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/xxx/builds/xxx].
Logs are available at [https://console.cloud.google.com/gcr/builds/xxx?project=xxx].
ERROR: (gcloud.container.builds.submit) HTTPError 403: <?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>Caller does not have storage.objects.get access to object xxx.cloudbuild-logs.googleusercontent.com/log-xxx.txt.</Details></Error>
I tried adding extra permissions to the SA but it didn't work.
Finally, after some time, I decided to manually add the log-dir option:
--gcs-log-dir "gs://<my_project_id>_cloudbuild/logs"
And finally the build worked.
From the help text:
--gcs-log-dir=GCS_LOG_DIR
Directory in Google Cloud Storage to hold build logs. If the bucket
does not exist, it will be created. If not set, gs://<project
id>_cloudbuild/logs is used.
Apparently this is not correct.
My gcloud env:
Installed Components:
core: [2017.06.09]
pubsub-emulator: [2017.03.24]
gcloud: []
beta: [2017.03.24]
gsutil: [4.26]
bq: [2.0.24]
alpha: [2017.03.24]
It would seem that a CI/CD pipeline requires the ability to deploy an image into the GKE cluster after it is built/tested/pushed.
Without this, Container Builder seems incomplete.
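In the meantime, the kubectl builder in this repo can serve as a deploy step; a sketch with placeholder cluster, zone, and deployment names:

```yaml
steps:
# ... build / test / push steps ...
- name: 'gcr.io/cloud-builders/kubectl'
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=us-central1-f'    # placeholder zone
    - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'  # placeholder cluster name
  args: ['set', 'image', 'deployment/my-app',
         'my-app=gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
```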
When I use the gsutil rsync command in the gcr.io/cloud-builders/gcloud image, I get this warning:
Step #2: WARNING: You have requested checksumming but your crcmod installation isn't
Step #2: using the module's C extension, so checksumming will run very slowly. For help
Step #2: installing the extension, please see "gsutil help crcmod".
This is discussed in detail at https://cloud.google.com/storage/docs/gsutil/addlhelp/CRC32CandInstallingcrcmod
When I try to upload my static blog contents (150 MB) to a GCS bucket using gsutil -m rsync, it takes 60 seconds in Google Cloud Container Builder, as opposed to 5 seconds on my laptop with a decent Internet connection.
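Until the image ships a compiled crcmod, one workaround is to build the C extension in the same step that runs the sync (a sketch; it assumes the gcloud image is Debian-based with apt and pip available, and the bucket/paths are placeholders):

```yaml
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      # Compile crcmod's C extension so gsutil checksumming runs at full
      # speed, then sync in the same step (installed packages do not
      # persist into later steps).
      apt-get update -qq && apt-get install -y -qq gcc python-dev python-pip
      pip install -U crcmod
      gsutil -m rsync -r ./public gs://my-static-site-bucket
```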
Hey all,
Would you be willing to allow overriding of the command run at the end of the wrapper script for the go container, for example to run golint instead of go?
I would also be in support of a golint-specific image, since golint is a pretty popular tool as far as I know.
I have used workarounds such as building my own wrapper script for my own container, as well as using go run and running golint from source within your go container, but neither of those is ideal.
I see a Dockerfile.alpine for go, however I can't see the image name documented anywhere, so I cannot pull it. Is it available?
(I discovered that it's tagged :alpine, but let's ignore that for the moment.)
Not quite sure if this is the correct repo to post this.
But when calling any of these or custom build steps, how can I pass either STDOUT or some environment variable to the next build step or to the images array? We are calculating the version tag for a build within a bash script, but I am not sure how to get it into, for example, the images property of the build file.
I guess for other variables I could do "printenv > somewhere.sh" and then execute ". somewhere.sh" in the next task. If the task is one of the standard ones here ("cp", "npm" or "docker"), then things become more complicated.
Although it would be possible to write a new entrypoint, pull in the variables, and then run the original entrypoint script, most of the entrypoints in these commands are resolved not through PATH but by full path, so things would become a bit fragile.
As the title says, the npm builder is not available under https://console.cloud.google.com/gcr/images/cloud-builders?project=cloud-builders as the README suggests.
I know this is still beta, but can we have an ETA on its availability?
It would be nice to have support for docker compose!
I'm trying to set up continuous deployment for an App Engine application (standard environment) using Container Builder. I have the following cloudbuild.yaml:
steps:
- name: gcr.io/cloud-builders/gcloud
args: ['app', 'deploy', 'frontend/app.yaml']
As I understand it, the cloud builder uses the service account [PROJECT-ID]@cloudbuild.gserviceaccount.com, so I added the App Engine Admin role in the IAM section of the console. Still, it fails with the error:
ERROR: (gcloud.app.deploy) You do not have permission to access app [...] (or it may not exist): Request had insufficient authentication scopes.
Any idea what I'm missing?
Thanks!