
cloud-build-local's Introduction

This project is archived and no longer supported, developed, or maintained.

The code remains available for historic purposes.

The README as of the archival date remains unchanged below for historic purposes.


Google Cloud Build Local Builder

Local Builder runs Google Cloud Build locally, allowing easier debugging, execution of builds on your own hardware, and integration into local build and test workflows. Please note that the Local Builder is not 100% feature-compatible with the hosted GCB service.

NOTE: The Cloud Build local builder is maintained at best effort as a local debugging tool for Cloud Build. It does not support 100% feature parity with the hosted Cloud Build service and should not be used for production workloads.


Prerequisites

  1. Ensure you have installed the Google Cloud SDK (gcloud) and Docker.

  2. If the build needs to access a private Google Container Registry, install and configure the Docker credential helper for Google Container Registry.

  3. Configure your project for the gcloud tool, where [PROJECT_ID] is your Cloud Platform project ID:

    gcloud config set project [PROJECT_ID]
    

Install using gcloud

  1. Install by running the following command:

    gcloud components install cloud-build-local
    

    After successful installation, you will have cloud-build-local in your PATH as part of the Google Cloud SDK binaries.

  2. To see all of the commands, run:

    $ cloud-build-local --help
    

    The Local Builder command is cloud-build-local.

Download the latest binaries

The latest binaries are available in a GCS bucket; download the latest binaries from GCS.

To run a build:

./cloud-build-local_{linux,darwin}_{386,amd64}-v<latest_tag> --dryrun=false --config=path/to/cloudbuild.yaml path/to/code

Developing and contributing to the Local Builder

See the contributing instructions.

Limitations

  • Only one build can be run at a time on a given host.
  • The tool works on the following platforms:
    • Linux
    • macOS

Support

Our documentation has a page on getting support. If you have general questions about Local Builder or Cloud Build, you can file issues here on GitHub.

cloud-build-local's People

Contributors

bendory, codrienne, danielpeach, emoryruscus, leeonlee, philmod, rafikk


cloud-build-local's Issues

--write-workspace does not work for directories outside of $PWD

# Works: creates $PWD/workspace
cloud-build-local --dryrun=false --write-workspace=. .
# Works: creates $PWD/subdir/workspace
cloud-build-local --dryrun=false --write-workspace=subdir .
# Runs fine but either no workspace directory is created in ".." or it is not populated.
# I've seen both behaviours.
cloud-build-local --dryrun=false --write-workspace=.. .

Expected: write-workspace should write to any given directory or print an error if not possible.
Observed: build completes but workspace is not created or not populated.

--substitutions is broken in gcloud v201

(I'm sorry if it's not the correct way to report bug for this tool.)

In the latest gcloud v201, the --substitutions flag no longer works for container-builder-local.
It displays errors like this:

2018/05/17 17:02:30 Error validating build: key in the template "_VERSION" is not matched in the substitution data;key in the template "_VERSION" is not matched in the substitution data;key in the template "_VERSION" is not matched in the substitution data

I don't have any issue with gcloud v200.

Feature request: run a single step with a -step flag

Debugging long scripts would be made more ergonomic with a -step flag that allowed you to specify the step number to run. I appreciate that the equivalent functionality doesn't exist in the hosted service, but, for the debugging use case, it would be great to inspect interim state to do a bit of manual testing of invariants. Thanks for your consideration!

$BUILD_ID is always empty

While buildConfig.ProjectId (aka PROJECT_ID) is populated with the expected value (https://github.com/GoogleCloudPlatform/container-builder-local/blob/master/localbuilder_main.go#L162), buildConfig.Id (aka BUILD_ID) is never populated and is left empty.

This unexpected behavior introduces breaking changes in the build pipeline and can cause the build process to fail.

BUILD_ID should be generated randomly for each build to emulate the expected behavior of the container builder.

Substitutions cannot contain commas

Currently, we are facing the following issue at our company: We need to define a base DN for our LDAP connection and this DN is different in DEV, STAGE, and PROD and therefore needs to be configurable.

The issue is that an LDAP DN has a format similar to DC=example,DC=com, and this format confuses the Cloud Builder, which tries to break the string down into additional substitutions, returning the error Error merging substitutions and validating build: substitution key "DC" does not respect format "^_[A-Z0-9_]+$" and is not an overridable built-in substitutions.

It seems that currently it is not possible to escape this string in any way. This scenario is also likely to appear in other situations where having a comma in the substitutions is necessary.

Our current workaround is to encode in base64 the DN, but again this is not a clean solution to the issue.

Bazel Builder: /builder/outputs/output: Permission denied

Hi,

By default gcr.io/cloud-builders/bazel is outputting the error:

Step GoogleCloudPlatform/cloud-builders#1: /usr/bin/bazel: line 43: /builder/outputs/output: Permission denied

Using this environment variable resolves the issue but it is not documented and I don't know if this env variable is officially supported.

env:
      - 'BUILDER_OUTPUT=/workspace/.bazel'

We also need to create the .bazel folder so Bazel can work in it.

Affected builder image

(e.g., gcr.io/cloud-builders/bazel)

Expected Behavior

  - name: 'gcr.io/cloud-builders/bazel'
    args: ['version']

Outputs the version

Actual Behavior

Outputs:

Step GoogleCloudPlatform/cloud-builders#1: /usr/bin/bazel: line 43: /builder/outputs/output: Permission denied

Steps to Reproduce the Problem

  1. Have a cloudbuild.yaml as:
steps:
  - name: 'gcr.io/cloud-builders/bazel'
    args: ['version']
  2. Run
cloud-build-local --dryrun=false .

[RFC] Injecting substitutions provided by the GCP Container Builder

Problem

Currently, there's a feature gap when trying to run your builds locally. A number of substitutions are provided automatically when using the real GCP Container Builder. As it stands, there's no way to provide them locally.

The cause of the issue is that the validation regex for user-defined substitutions enforces an underscore at the start of the variable name.

This means that once you're using the automatically provided substitutions, you probably can't build locally any more.

Proposals

These are some options I've thought of. There might be a better way though!

1. Relax the validation of substitution parameters to allow injection of the GCP automatic substitutions

As well as allowing substitution variables starting with _, we could allow the list of built-in variables.

This seems like a pretty low-effort way to solve the problem, and I can't see any obvious downsides. I think keeping the _ restriction in general is good, as removing it entirely would mislead people working locally.

2. Automatically set up the source provenance based on the current working directory

We could look for the .git folder in your current working directory and use it to determine the values of the substitutions the GCP builder would provide.

This could either be fully automatic or behind a flag. Fully automatic might be a little too magic and cause confusion for the sake of a minor convenience.

A disadvantage of this approach is that there are more VCSs than git, and we'd need to implement separate code for each one that's supported by GCP.

3. Add extra flags to set up the source provenance

Rather than inferring the source provenance values automatically, we could add flags for each of them.

This feels like a worse version of option 1. It has an advantage over option 2 though - no VCS-specific code.

Next steps

It'd be good to get some feedback and hopefully settle on an approach.

Depending on how deeply the changes run through the codebase, I'd be happy to help out with the implementation. If it's going to be quite involved, I might be more of a hindrance!

Getting a fix in for this would make it possible to simulate the GCP environment more closely. On the project I'm working on, it would make local builds possible again.

[bug] Error in dry runs when images specified in cloudbuild.yaml

If any images are built and listed to be pushed via the images array, the build fails with ERROR: failed to find one or more images after execution of build steps.

Minimal cloudbuild.yaml:

steps:
- id: "fake building the image"
  name: "gcr.io/cloud-builders/docker"
  args:
    - pull
    - ubuntu

- id: "tag the image"
  name: "gcr.io/cloud-builders/docker"
  args:
    - tag
    - ubuntu
    - eu.gcr.io/$PROJECT_ID/my_tag

images:
  - eu.gcr.io/$PROJECT_ID/my_tag

This succeeds if built with cloud-build-local --dryrun=false, but fails with --dryrun=true:

$ cloud-build-local --dryrun=true .
2019/09/03 11:10:40 RUNNER - [docker ps -a -q --filter name=step_[0-9]+|cloudbuild_|metadata]
2019/09/03 11:10:40 RUNNER - [docker network ls -q --filter name=cloudbuild]
2019/09/03 11:10:40 RUNNER - [docker volume ls -q --filter name=homevol|cloudbuild_]
2019/09/03 11:10:42 Build id = localbuild_70a4ab53-0375-47af-940a-0cad6ffbcfe0
2019/09/03 11:10:42 RUNNER - [docker volume create --name homevol]
2019/09/03 11:10:42 status changed to "BUILD"
BUILD
Starting Step #0 - "fake building the image"
2019/09/03 11:10:42 RUNNER - [docker inspect gcr.io/cloud-builders/docker]
Step #0 - "fake building the image": Already have image: gcr.io/cloud-builders/docker
2019/09/03 11:10:42 RUNNER - [docker run --rm --name step_0 --volume /var/run/docker.sock:/var/run/docker.sock --privileged --volume cloudbuild_vol_26f62f35-6fe4-44fc-bd8a-cc636a1b63a3:/workspace --workdir /workspace --volume homevol:/builder/home --env HOME=/builder/home --network cloudbuild --volume /tmp/step-0/:/builder/outputs --env BUILDER_OUTPUT=/builder/outputs gcr.io/cloud-builders/docker pull ubuntu]
Finished Step #0 - "fake building the image"
2019/09/03 11:10:42 Step Step #0 - "fake building the image" finished
Starting Step #1 - "tag the image"
2019/09/03 11:10:42 RUNNER - [docker inspect gcr.io/cloud-builders/docker]
Step #1 - "tag the image": Already have image: gcr.io/cloud-builders/docker
2019/09/03 11:10:42 RUNNER - [docker run --rm --name step_1 --volume /var/run/docker.sock:/var/run/docker.sock --privileged --volume cloudbuild_vol_26f62f35-6fe4-44fc-bd8a-cc636a1b63a3:/workspace --workdir /workspace --volume homevol:/builder/home --env HOME=/builder/home --network cloudbuild --volume /tmp/step-1/:/builder/outputs --env BUILDER_OUTPUT=/builder/outputs gcr.io/cloud-builders/docker tag ubuntu eu.gcr.io/<my_project_id>/my_tag]
Finished Step #1 - "tag the image"
2019/09/03 11:10:42 Step Step #1 - "tag the image" finished
2019/09/03 11:10:42 RUNNER - [docker images -q eu.gcr.io/<my_project_id>/my_tag]
2019/09/03 11:10:42 RUNNER - [docker rm -f step_0 step_1]
2019/09/03 11:10:42 status changed to "ERROR"
ERROR
ERROR: failed to find one or more images after execution of build steps: ["eu.gcr.io/<my_project_id>/my_tag"]
2019/09/03 11:10:42 RUNNER - [docker volume rm homevol]
2019/09/03 11:10:42 Build finished with ERROR status
exit 1

The same build, but without the lines

images:
  - eu.gcr.io/$PROJECT_ID/my_tag

does succeed, even for a dry run.

Different results with cloud-build-local than with gcloud builds submit: Surely wrong?

I presume that invoking cloud-build-local something like:
cloud-build-local --config deploy/cloudbuild.yaml --substitutions=BRANCH_NAME=master,SHORT_SHA=blahblah -dryrun=false -write-workspace=/tmp/workspace .

would be equivalent to a cloud build invoked like:
gcloud builds submit --config deploy/cloudbuild.yaml --substitutions=BRANCH_NAME=master,SHORT_SHA=blahblah

Is this assumption invalid? Exactly how are they meant to differ? The issue is that the exact same codebase and cloudbuild.yaml are producing different build output (one passes tests, the other fails). Consistently so.

Recommended way for container networking in cloud build?

I'm setting up an integration test on cloud build and am starting a set of containers that I'd like to be able to communicate. I used an approach similar to the one recommended here: https://stackoverflow.com/a/52400857/8115327

Basically I am using build steps executing docker-compose to create the containers and add them to the cloudbuild network. I was expecting to be able to address them via their container names but they are not finding each other.

What is the recommended way to have containers able to communicate to each other via container name?

Feature request: Release build artifacts to local directory.

As a user of the container-builder-local I would like to be able to "push" build artifacts (not just images) from the workspace into a local directory.
I want to specify with a regex the files that I want to get from the workspace into a specified local directory.
This will be executed at the end of the workflow and only in local mode. If I run the build on Google Cloud Container Builder, I want to store the files on Google Cloud Storage buckets.

Related to #33

Privileged mode is incompatible with user namespaces

Hi -- wanted to leave a comment that users with userns-remap enabled will have to add the flag --userns=host for docker to run properly with --privileged (discussed here: https://docs.docker.com/engine/security/userns-remap/#user-namespace-known-limitations)

For posterity:
in Build/build.go in func dockerRunArgs I added the following two lines (after "--privileged" on line 1195):

...
// Run in privileged mode per discussion in b/31267381.
"--privileged",
// Add userns=host in case dockerd has userns enabled.
"--userns=host",
)
...

this fixed my initial error

thank you!

Not able to provide substitution for global substitutions

I guess the whole idea of container-builder-local is that you are supposed to be able to test and iterate on your cloudbuild.yaml files before submitting and running them in the cloud.

Since the hosted GCP container builder service provides global substitutions (example COMMIT_SHA), this container-builder-local tool should allow you to set this, but currently the validator in https://github.com/GoogleCloudPlatform/container-builder-local/blob/master/validate/validate.go#L152 is stopping you.

I guess this may have been intended from Google's perspective to protect the runtime environment from external sources changing these globally provided substitutions, since user-submitted substitutions are supported if they start with _. This doesn't help us when testing locally and wanting to use a global substitution in our cloudbuild.yaml.

Suggested fix: the tool should allow setting these global substitutions, and the validation code should move into the proprietary GCE container builder.

As a work around I ended up with using https://gist.github.com/norrs/34f25e6dd02cf5f7805d925f2663ab46 for now.

Local BUILD_ID doesn't adhere to the dashes-only convention

The current implementation of creating a local BUILD_ID (introduced while fixing #45) uses the convention 'localbuild_' + uuid(), which breaks the convention of using only dashes in the BUILD_ID, making it impossible to use as part of a Cloud Run deployment like gcloud run deploy "something-$BUILD_ID"

Current behavior: in local builds BUILD_ID resolves to "localbuild_" + uuid()

Expected behavior: in local builds BUILD_ID resolves to "localbuild-" + uuid()

Running multiple builds at once errors on docker network creation

We run multiple parallel builds of images in our project simultaneously in Google Cloud Builder, which works just fine. However, when running locally with the container builder, the creation of a named docker network (cloudbuild) causes collisions when attempting to run multiple builds at the same time. The builder errors out when attempting to create the network when it already exists from another process.

Here's an example. This build is being orchestrated by calling make.

/Applications/Xcode.app/Contents/Developer/usr/bin/make -j 100 all
cd /Users/timfall/Code/mission-control/auth && \
		container-builder-local --push=false --dryrun=false \
			--config=/Users/timfall/Code/mission-control/cloud-builder/dev.yaml \
			--substitutions="_PROJECT=auth,_CONTEXT_HASH=$(md5 -r dockerfile/base.dockerfile | awk '{ print $1 }')" \
			.
cd /Users/timfall/Code/mission-control/management-api && \
		container-builder-local --push=false --dryrun=false \
			--config=/Users/timfall/Code/mission-control/cloud-builder/dev.yaml \
			--substitutions="_PROJECT=management-api,_CONTEXT_HASH=$(md5 -r dockerfile/base.dockerfile | awk '{ print $1 }')" \
			.
cd /Users/timfall/Code/mission-control/dashboard && \
		container-builder-local --push=false --dryrun=false \
			--config=/Users/timfall/Code/mission-control/cloud-builder/dev.yaml \
			--substitutions="_PROJECT=dashboard,_CONTEXT_HASH=$(md5 -r dockerfile/base.dockerfile | awk '{ print $1 }')" \
			.
cd /Users/timfall/Code/mission-control/dev-tools && \
		container-builder-local --push=false --dryrun=false \
			--config=/Users/timfall/Code/mission-control/cloud-builder/dev.yaml \
			--substitutions="_PROJECT=dev-tools,_CONTEXT_HASH=$(md5 -r dockerfile/base.dockerfile | awk '{ print $1 }')" \
			.
cd /Users/timfall/Code/mission-control/mirror && \
		container-builder-local --push=false --dryrun=false \
			--config=/Users/timfall/Code/mission-control/cloud-builder/dev.yaml \
			--substitutions="_PROJECT=mirror,_CONTEXT_HASH=$(md5 -r dockerfile/base.dockerfile | awk '{ print $1 }')" \
			.
2017/09/19 16:40:27 Warning: The server docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
2017/09/19 16:40:27 Warning: The client docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
2017/09/19 16:40:27 Warning: The server docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
2017/09/19 16:40:27 Warning: The server docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
2017/09/19 16:40:27 Warning: The server docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
2017/09/19 16:40:27 Warning: The client docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
2017/09/19 16:40:27 Warning: The client docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
2017/09/19 16:40:27 Warning: The client docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
2017/09/19 16:40:27 Warning: The server docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
2017/09/19 16:40:27 Warning: The client docker version installed (17.09.0-ce-rc2) is different from the one used in GCB (17.05-ce)
Error response from daemon: network with name cloudbuild already exists
2017/09/19 16:40:32 Failed to start spoofed metadata server: Error creating network: exit status 1
make[1]: *** [build-dev-tools] Error 1
make[1]: *** Waiting for unfinished jobs....
Error response from daemon: network with name cloudbuild already exists
Error response from daemon: network with name cloudbuild already exists
2017/09/19 16:40:32 Failed to start spoofed metadata server: Error creating network: exit status 1
make[1]: *** [build-auth] Error 1
2017/09/19 16:40:32 Failed to start spoofed metadata server: Error creating network: exit status 1
make[1]: *** [build-mirror] Error 1
Error response from daemon: network with name cloudbuild already exists
2017/09/19 16:40:33 Failed to start spoofed metadata server: Error creating network: exit status 1
make[1]: *** [build-dashboard] Error 1
2017/09/19 16:40:33 Started spoofed metadata server
2017/09/19 16:40:37 status changed to "BUILD"
^Cmake[1]: *** [build-management-api] Interrupt: 2
make: *** [update-projects] Interrupt: 2

I assume the creation of the docker network is to (somewhat) mimic what happens with specific networks on the cloud side. Perhaps a fix would be to use generated network names for each process? It could also block (or attach to) the cloudbuild network if it detects that it already exists?

Docker layer progress not displayed

cloud-build-local Version: v0.5.2

Cloud-build-local does not show the progress bar when Docker downloads layers, so on a first pull it can appear that cloud-build-local has frozen.

Here is the example output from cloud-build-local:

Step #2: Pulling image: gcr.io/cloud-builders/gcloud
Step #2: Using default tag: latest
Step #2: latest: Pulling from cloud-builders/gcloud
Step #2: 75f546e73d8b: Already exists
Step #2: 0f3bb76fc390: Already exists
Step #2: 3c2cba919283: Already exists
Step #2: b2196a5a20d1: Pulling fs layer
Step #2: dd8547ee0e52: Pulling fs layer
Step #2: 2b11efe6c1f1: Pulling fs layer
Step #2: 17b4726867d5: Pulling fs layer
Step #2: 17b4726867d5: Waiting
Step #2: dd8547ee0e52: Download complete
Step #2: 2b11efe6c1f1: Download complete

The process looks like it has hung here but actually it is downloading the layers in the background.

When pulling the same image using Docker, we get:

Using default tag: latest
latest: Pulling from cloud-builders/gcloud
75f546e73d8b: Already exists
0f3bb76fc390: Already exists
3c2cba919283: Already exists
b2196a5a20d1: Downloading [=========================================>         ]  440.2MB/535.7MB
dd8547ee0e52: Download complete
2b11efe6c1f1: Download complete
17b4726867d5: Downloading [===========================>                       ]  444.7MB/794.2MB

This is much more informative and tells the user that the process hasn't frozen. Could we adapt cloud-build-local so that it displays the download progress bars?

[bug] inline scripts in `steps[].args` fail validation if env var is not a substitution

Given the following cloudbuild.yaml:

steps:
- name: ubuntu
  env:
  - "FOO=bar"
  entrypoint: "bash"
  args:
  - "-c"
  - |-
    echo $FOO

When I run cloud-build-local I see the following error:

$ cloud-build-local --config=cloudbuild.yaml --dryrun=false --write-workspace=. .
...
2021/01/14 13:18:45 Error merging substitutions and validating build: Error validating build: key in the template "FOO" is not a valid built-in substitution

This error seems to result from not considering defined environment variables during the args substitution check. The current workaround is to move echo $FOO to a script file.

'gcloud auth list' inside a build step reports the wrong service account

It should be my user account, but it is the builder service account (which is not actually being used).

$ cat cb.yaml 
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['auth', 'list']
$ container-builder-local --config cb.yaml --dryrun=false .
2017/08/22 16:38:48 Warning: The server docker version installed (1.12.6) is different from the one used in GCB (17.05-ce)
2017/08/22 16:38:48 Warning: The client docker version installed (1.12.6) is different from the one used in GCB (17.05-ce)
370e1a589c8c317dfde376d78c211088068672ddfcf621b4ab7b50f7447d77d8
2017/08/22 16:38:50 [docker run -d -p=8082:80 --name=metadata gcr.io/cloud-builders/metadata]
2017/08/22 16:38:51 [docker network connect --alias=metadata --alias=metadata.google.internal --ip=169.254.169.254 cloudbuild metadata]
2017/08/22 16:38:51 Started spoofed metadata server
2017/08/22 16:38:52 status changed to "BUILD"
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
Credentialed Accounts:
 - <redacted>@cloudbuild.gserviceaccount.com ACTIVE
To set the active account, run:
    $ gcloud config set account `ACCOUNT`

2017/08/22 16:38:53 status changed to "DONE"
DONE

Error when using comma in substitution key value pair

Example command:
cloud-build-local --dryrun=true --substitutions _CLOUDSDK_COMPUTE_REGION=us-central1,_CLOUDSDK_CONTAINER_CLUSTER=cluster-2,_SKAFFOLD_PROFILES=cluster-2,staging,secrets-staging .

Error:

2020/07/20 10:23:36 RUNNER - [docker ps -a -q --filter name=step_[0-9]+|cloudbuild_|metadata]
2020/07/20 10:23:36 RUNNER - [docker network ls -q --filter name=cloudbuild]
2020/07/20 10:23:36 RUNNER - [docker volume ls -q --filter name=homevol|cloudbuild_]
2020/07/20 10:23:38 Error parsing substitutions flag: The substitution key value pair is not valid: staging

Notes:
This is how I run skaffold pipelines via Google Cloud Build triggers. In order for me to test those same pipelines locally I would need to be capable of including a comma in the value. So far various quoting and escaping I've tried hasn't helped.

Similar to #78

Feature request: direct source mounting

The current implementation creates a docker volume and copies the source code into it. The build process runs on top of that volume, so changes are not written back to the source code.

Would it be possible to add a switch that mounts the source code directly into the build step container, so changes made in a build step are written back to the source? I'd like to keep the build pipeline the same on the cloud builder and locally, and I also want to keep some generated code in the source tree (gitignored) for IDE syntax highlighting.

Thank you

write-workspace flag can cause performance problems

I recently ran into a performance issue that I think would be good to mention in the documentation somewhere. I had been working on a cloudbuild.yaml file for a couple of weeks as I slowly worked through all the hurdles of getting the build, test, and deployment steps working. During this time I was running the build locally using cloud-build-local. At some point I added the --write-workspace flag to save the resulting workspace to local disk for my inspection.

After many, many local runs I found that the start and end of the build runs were taking longer and longer. Eventually, it was taking 15+ minutes to start a local build. Once I realized it was getting slower each time the source of the problem was obvious. I was writing the post-build workspace to a subdirectory of the folder that I was passing in as the source for the workspace. So with each build the post-build-workspace folder was getting bigger since it would include a new copy of the full workspace which included the prior post-build-workspace folder, which included the prior post-build-workspace folder... I had created recursive workspace folders all the way down...

I think this may have also been causing the end of the build to take a long time as the post-build workspace was copied to disk. This would result in the next build indicating that the prior run had left around resources that needed to be cleaned up. I see some issues with others describing similar symptoms.

So I think it would be good to warn about this potential pitfall of using --write-workspace in your documentation. You might also consider logic that checks to see if the --write-workspace folder exists within the source folder, and either ignore that folder when copying the source or log a warning about the potential for a snowball effect with repeat local builds.

Error copying source to docker volume: exit status 1

I'm not able to run cloud-build-local on a project that runs without issues in Cloud Build.

If I do a dry run it runs successfully, but when turning dry run off I get the following error:

▶ cloud-build-local --config cloudbuild.yml  --dryrun=false .    
2019/09/06 09:56:24 Warning: there are left over step containers from a previous build, cleaning them.
2019/09/06 09:56:24 Warning: there are left over step volumes from a previous build, cleaning it.
2019/09/06 09:56:24 Warning: The server docker version installed (19.03.1) is different from the one used in GCB (18.09.0)
2019/09/06 09:56:24 Warning: The client docker version installed (19.03.1) is different from the one used in GCB (18.09.0)
2019/09/06 09:56:36 Error copying source to docker volume: exit status 1

I'm running it on a Mac. Is this due to too new a version of Docker?

Error when "=" is included in substitution string

Hi

When "=" is included in substitution string, error reported like below:

2018/08/27 17:43:33 Error parsing substitutions flag: The substitution key value pair is not valid:

I think keyValue := strings.Split(s, "=") is the cause.

  • common/common.go
	for _, s := range list {
		keyValue := strings.Split(s, "=") // here
		if len(keyValue) != 2 {
			return substitutionsMap, fmt.Errorf("The substitution key value pair is not valid: %s", s)
		}
		substitutionsMap[strings.TrimSpace(keyValue[0])] = strings.TrimSpace(keyValue[1])
	}

docker-compose: containers are left running

With a setup like the one below, docker ps or docker-compose ps shows that the containers are still running on my local machine. I think this happens on both failure and success.

$  cloud-build-local --config=cloudbuild.yaml --dryrun=false .
...
$ docker ps
$ docker-compose ps

cloudbuild.yaml:

steps:
- name: 'docker/compose:1.15.0'
  args: ['up', '-d','redis', 'mongo']

- name: 'alpine'
  args: ['sh', '-c', 'while ! nc -v -z localhost 27017; do echo -e "\033[92m  ---> waiting for mongo ... \033[0m"; sleep 1; done']

docker-compose.yml:

version: "3"
services:
  redis:
    image: redis:alpine
    container_name: redis
    hostname: redis
    # networks:
    #   - some_network
    ports:
      - 6379:6379

  mongo:
    image: mongo:3.6-jessie
    container_name: mongo
    hostname: mongo
    # networks:
    #   - some_network
    ports:
      - 27017:27017

# networks:
#   some_network:

Building without access to GCP credentials

Hello,

I would like to make a feature request to add the ability to build in an environment that does not have GCP access. In my project, we use two CI systems:

  • Github Action for development, because it has great integration with Github and is the most developer-friendly.
  • Cloud Build for production Docker image builds, because it ensures auditability and is easy to integrate with IAM.

We want to test the image build on CI as well, so I tried to use this project to run the Cloud Build pipeline on Github Action but unfortunely the GCP project permissions are needed. Our entire process of them does not need access to GCP, so it would be useful to be able to disable the metaserver so that the process could be done fully locally

For now, I have a workaround (described below), but a built-in option might help others.

To use this tool without the GCP project, I created the /scripts/airflow-docker-image/mock_gcloud/gcloud script.

#!/usr/bin/env bash

if [[ "$#" -eq 0 ]]; then
    echo "You must provide at least one argument."
    exit 1
fi

if [[ "$*" == "config config-helper --format=json" ]]; then
TOKEN_EXPIRE_DATE=$(date +'%Y-%m-%dT%H:%M:%SZ' -d "+1 hour")

cat <<EOF
{
  "configuration": {
    "active_configuration": "default",
    "properties": {
      "core": {
        "account": "[email protected]",
        "disable_usage_reporting": "False",
        "log_http": "true",
        "project": "project-id"
      }
    }
  },
  "credential": {
    "access_token": "ACCESS_TOKEN",
    "id_token": "ID_TOKEN",
    "token_expiry": "${TOKEN_EXPIRE_DATE}"
  }
}
EOF
echo
    exit 0
fi

if [[ "$*" == "config list --format value(core.project)" ]]; then
    echo "project-id"
    exit 0
fi

if [[ "$*" == "projects describe project-id --format value(projectNumber)" ]]; then
    echo "12345678"
    exit 0
fi

echo "Unsupported command:"
echo "$*"
exit 1

and then run the build process as follows:

export PATH="$PWD/scripts/airflow-docker-image/mock_gcloud:$PATH"
cloud-build-local --config=scripts/airflow-docker-image/cloudbuild.yaml --dryrun=false .

/workspace can not be re-mounted in inner docker calls

Working with cloud-build-local to try to debug a fairly complex, 8-step Cloud Build specification, I ran into strange behavior that's hard to describe. It involves reasoning about nested Docker containers invoking other containers, and re-mounting volumes that were mounted in the outer containers. Unfortunately, I can't share the file (which would probably make a great test case for you) with the following steps:

  • 'gcr.io/cloud-builders/gsutil'
  • 'gcr.io/cloud-builders/docker'
  • 'docker/compose:1.23.2'
  • 'docker/compose:1.23.2'
  • 'gcr.io/cloud-builders/gsutil'
  • 'docker/compose:1.23.2'
  • 'gcr.io/cloud-builders/docker'
  • 'docker/compose:1.23.2'

Instead, I spent some time trying to distill down the essential problem, and that resulted in this public gist. The README.md file is in the gist because it's used in the repro case, but it has the description. Essentially, it comes down to the following working correctly in cloud build, but not cloud-build-local (assuming you run both from the downloaded gist folder):

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', '-v', '/workspace:/project', 'ubuntu', 'stat', '/project/README.md']

Please let me know if there's any other information I can provide.

Error response from daemon: error while removing network: network cloudbuild id XXX has active endpoints

Hi,

I am running into an issue where, for some integration tests, I need to run docker-compose up as a step (which launches two databases: Postgres and Oracle).

Output

DONE
Error response from daemon: error while removing network: network cloudbuild id 20a939ac64d84fa538772d6f5af954ccb73f606f5a738b9187e19b9ce13e6ee7 has active endpoints

Any idea about how to stop the containers? I don't think it's quite possible from within docker-compose itself, but what about from Cloud Build itself? Thanks


test.integration.docker-compose.yml

version: "3.7"

services:
  postgres:
    image: postgres:9.6-alpine
    container_name: postgres
    environment:
      - POSTGRES_USER=**********
      - POSTGRES_PASSWORD=**********
      - POSTGRES_DB=**********
    network_mode: cloudbuild
  oracle:
    image: chameleon82/oracle-xe-10g
    container_name: oracle
    network_mode: cloudbuild

networks:
  default:
    external:
      name: cloudbuild

test.integration.yaml

steps:

  # Launch the databases
  - name: 'docker/compose'
    args: [
        '-f',
        'deployments/cloud-build/test.integration.docker-compose.yml',
        'up',
        '--build',
        '-d'
    ]
    id: 'databases-docker-compose'

  # Runs tests
  - name: golang:1.15
    entrypoint: '/bin/bash'
    volumes:
      - name: "usr"
        path: "/usr"
    env:
      - "XXX=YYY"
    args:
      - '-c'
      - '.........'

Thanks for any hints/help 👍

command not found: cloud-build-local Post-Installation

After running gcloud components install cloud-build-local the bin path isn't added to the shell's path var.

Upon some inspection, I did find the installed location (/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/cloud-build-local). However, running a local build with the GCP quickstart example looks to fail:

2020/03/25 16:09:17 RUNNER - [docker ps -a -q --filter name=step_[0-9]+|cloudbuild_|metadata]
2020/03/25 16:09:17 RUNNER - [docker network ls -q --filter name=cloudbuild]
2020/03/25 16:09:17 RUNNER - [docker volume ls -q --filter name=homevol|cloudbuild_]
2020/03/25 16:09:18 Build id = localbuild_620c483d-6580-4f56-88a1-1cca76369239
2020/03/25 16:09:18 RUNNER - [docker volume create --name homevol]
2020/03/25 16:09:18 status changed to "BUILD"
BUILD
2020/03/25 16:09:18 RUNNER - [docker inspect gcr.io/cloud-builders/docker]
: Already have image: gcr.io/cloud-builders/docker
2020/03/25 16:09:18 RUNNER - [docker run --rm --name step_0 --volume /var/run/docker.sock:/var/run/docker.sock --privileged --volume cloudbuild_vol_04b922ce-0746-4d6d-9a2a-4924dd191cb6:/workspace --workdir /workspace --volume homevol:/builder/home --env HOME=/builder/home --network cloudbuild --volume /tmp/step-0/:/builder/outputs --env BUILDER_OUTPUT=/builder/outputs gcr.io/cloud-builders/docker build -t gcr.io/luciditi/project .]
2020/03/25 16:09:18 Step  finished
2020/03/25 16:09:18 RUNNER - [docker images -q gcr.io/luciditi/project]
2020/03/25 16:09:18 RUNNER - [docker rm -f step_0]
2020/03/25 16:09:18 status changed to "ERROR"
ERROR
ERROR: failed to find one or more images after execution of build steps: ["gcr.io/luciditi/project"]
2020/03/25 16:09:18 RUNNER - [docker volume rm homevol]
2020/03/25 16:09:18 Build finished with ERROR status
  • OS: macOS 10.15.3
  • Docker Version: 2.2.0.4 (43472)

Mounting volumes appears to fail

I have run into an issue using cloud-build-local. I have also hit the same issue on hosted Cloud Build, but far less frequently. I can only consistently reproduce it with cloud-build-local.

I am unable to mount a volume for postgres to run init scripts.

I have a repository with steps to replicate here.

This will allow you to replicate the same issue locally:

  • Run cloud-build-local, which will fail
  • Run the build on GCP, which will most likely pass
  • Run the Docker containers locally, which will pass, proving this isn't an issue with the containers.

errors with successful build

Finished Step #1 - "gbuild2"
2020/07/07 12:53:18 Step Step #1 - "gbuild2" finished
2020/07/07 12:53:18 status changed to "DONE"
DONE
2020/07/07 12:53:18 Error updating docker credentials: failed to update docker credentials: signal: killed
2020/07/07 12:53:18 Failed to delete homevol: exit status 1

Error loading config file: unknown field "dynamic_substitutions" in cloudbuild.BuildOptions

cloud-build-local is not compatible with gcloud builds. The issue is dynamic_substitutions.

Here is the documentation about dynamic_substitutions:
https://cloud.google.com/cloud-build/docs/configuring-builds/use-bash-and-bindings-in-substitutions

To reproduce the issue:

cloudbuild.yaml

options:
  dynamic_substitutions: true
substitutions:
  _IMAGE_TAG: '${TAG_NAME:-latest}'
  _IMAGE: 'eu.gcr.io/${PROJECT_ID}/${REPO_NAME}:${_IMAGE_TAG}'
  _REVISION_SUFFIX_TAG: '${TAG_NAME//./-}'
  _REVISION_SUFFIX: '${_REVISION_SUFFIX_TAG:-${SHORT_SHA}}'
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - 'echo _IMAGE_TAG $_IMAGE_TAG, _IMAGE $_IMAGE, _REVISION_SUFFIX_TAG $_REVISION_SUFFIX_TAG, _REVISION_SUFFIX $_REVISION_SUFFIX'
      - 'echo TAG_NAME $TAG_NAME, SHORT_SHA $SHORT_SHA, REPO_NAME $REPO_NAME'

run build locally and experience the issue:

cloud-build-local --substitutions=REPO_NAME=foo,SHORT_SHA=1234,TAG_NAME=v1.2.3 --dryrun=false .
2020/07/18 18:45:36 Error loading config file: unknown field "dynamic_substitutions" in cloudbuild.BuildOptions

expected result:

gcloud builds submit --no-source --substitutions=REPO_NAME=foo,SHORT_SHA=1234,TAG_NAME=v1.2.3 --config=cloudbuild.yaml
_IMAGE_TAG v1.2.3, _IMAGE eu.gcr.io/staging-redjar/foo:v1.2.3, _REVISION_SUFFIX_TAG v1-2-3, _REVISION_SUFFIX v1-2-3
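As a possible local workaround (a sketch, not verified against this exact config), the bash-style expansions can be moved into a step script instead of dynamic_substitutions, using $$ to escape from Cloud Build substitution to shell expansion and step-level env to pass the built-in values through:

```yaml
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    env:
      - 'TAG_NAME=$TAG_NAME'    # plain substitution, accepted by cloud-build-local
    args:
      - '-c'
      - |
        # $$ escapes to a literal $ so bash performs the expansion.
        IMAGE_TAG="$${TAG_NAME:-latest}"
        echo "_IMAGE_TAG $$IMAGE_TAG"
```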

PS There is also a bug in gcloud builds:
- 'echo TAG_NAME $TAG_NAME, SHORT_SHA $SHORT_SHA, REPO_NAME $REPO_NAME'

this line has to be added even though it is useless, because otherwise you get:

ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: invalid build: key "REPO_NAME" in the substitution data is not matched in the template;key "SHORT_SHA" in the substitution data is not matched in the template;key "TAG_NAME" in the substitution data is not matched in the template

which is not true, because these variables are used in the substitutions, just not in the steps. Not sure if this needs to be fixed in cloud-build-local separately from gcloud builds or whether the report for gcloud builds is enough.

I created the issue here for gcloud builds:
https://issuetracker.google.com/issues/161588167

Ignored files are copied to cloudbuild volume (resulting in out of space errors)

Background:
When I run cloud-build-local my disk fills up, causing the build to fail, even though I should have sufficient space to perform the build.

Issue:
What happens is that cloud-build-local does not properly (?) honor .gcloudignore (or .dockerignore), and copies files and directories in the workdir into its build volume when they should have been ignored.

Steps to replicate:
Dockerfile:

FROM busybox

RUN sleep 1000

.gcloudignore + .dockerignore:

ignored_dir/

cloudbuild.yaml:

steps:
- name: 'gcr.io/cloud-builders/docker'
  id: 'build'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    set -o errexit -o pipefail -o xtrace

    docker build \
      -t local/cloud-build-volume-issue \
      -f Dockerfile \
      .

ignored_dir/textfile: an arbitrarily large file, 5.8 GB in my test
Run cloud-build-local --config=cloudbuild.yaml --dryrun=false .:

2018/10/29 14:50:31 Warning: The server docker version installed (18.06.1-ce) is different from the one used in GCB (17.12.0-ce)
2018/10/29 14:50:31 Warning: The client docker version installed (18.06.1-ce) is different from the one used in GCB (17.12.0-ce)
Using default tag: latest
latest: Pulling from cloud-builders/metadata
Digest: sha256:08d3404781d9d1114880485fcfe63687999d0817881de67a697128b8f79a4382
Status: Image is up to date for gcr.io/cloud-builders/metadata:latest
2018/10/29 14:50:44 Started spoofed metadata server
2018/10/29 14:50:44 Build id = localbuild_f613a933-fc86-4f08-b5e9-9c37cddcaed0
2018/10/29 14:50:45 status changed to "BUILD"
BUILD
: Already have image (with digest): gcr.io/cloud-builders/docker
: + docker build -t local/cloud-build-volume-issue -f Dockerfile .
: Sending build context to Docker daemon   5.12kB
: Step 1/2 : FROM busybox
:  ---> 59788edf1f3e
: Step 2/2 : RUN sleep 100
:  ---> Running in 4f0840cec186
: Removing intermediate container 4f0840cec186
:  ---> 19a17d9fc9fb
: Successfully built 19a17d9fc9fb
: Successfully tagged local/cloud-build-volume-issue:latest
2018/10/29 14:52:26 status changed to "DONE"
DONE

While the build is on the sleep step, I run du -h -d1 /var/lib/docker/volumes/, which shows:

5.8G    /var/lib/docker/volumes/cloudbuild_vol_f613a933-fc86-4f08-b5e9-9c37cddcaed0

And the contents of it du -h -d1 /var/lib/docker/volumes/cloudbuild_vol_f613a933-fc86-4f08-b5e9-9c37cddcaed0/_data/ignored_dir/:

5.8G    /var/lib/docker/volumes/cloudbuild_vol_f613a933-fc86-4f08-b5e9-9c37cddcaed0/_data/ignored_dir/

Expected behavior:
Files matched by .gcloudignore should not be copied into the cloud-build-local workdir volume, in the same way they are not uploaded as source when triggering a Cloud Build.

Versions:

Your current Cloud SDK version is: 211.0.0
Google Cloud Build Local Builder: 0.4.2 

Cloud Build Local errors when substitutions are not matched in the template

We are having an issue with something that works in Google Cloud Build but not with cloud-build-local. We define two substitution keys, _STATUSES and _SLACK_WEBHOOK_URL, that are not used in the template steps. We set these substitutions for use in a Cloud Function that subscribes to Cloud Build, to have finer-grained control over which Slack channel we send notifications to and for which build statuses we send them.

Here is the error we are getting:

2019/07/09 09:43:40 Error merging substitutions and validating build: Error validating build: key "_SLACK_WEBHOOK_URL" in the substitution data is not matched in the template;key "_STATUSES" in the substitution data is not matched in the template
make: *** [local-cloud-build] Error 1

Would it be possible to update the validation rules on cloud-build-local to more closely match the functionality in Cloud Build?

Variable interpolation in options:env: not working

Hi,

I'm trying to specify environment variables for the whole build instead of for each build step, like so:

steps:
- id: 'echo_env_vars'
  name: 'gcr.io/cloud-builders/git'
  entrypoint: '/bin/bash'
  args: ["-c", "env"]

- id: 'build_docker_image'
  name: 'gcr.io/cloud-builders/docker'
  waitFor: ['build_number']
  entrypoint: '/bin/bash'
  args: ["-c", "./make.bash all ${REPO_NAME}"]

options:
  env:
  - 'BUILD_ID=$BUILD_ID'
  - 'BRANCH_NAME=$BRANCH_NAME'
  - 'TAG_NAME=$TAG_NAME'
  - 'REPO_NAME=$REPO_NAME'
  - 'REVISION_ID=$REVISION_ID'
  - 'PROJECT_ID=$PROJECT_ID'
  - 'SHORT_SHA=$SHORT_SHA'

But variables are not interpolated, so in the 'echo_env_vars' step, my environment looks like this:

Already have image (with digest): gcr.io/cloud-builders/git
HOSTNAME=a3319a25f68f
BUILD_ID=$BUILD_ID
BRANCH_NAME=$BRANCH_NAME
SHORT_SHA=$SHORT_SHA
PATH=/builder/google-cloud-sdk/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/builder/google-cloud-sdk/bin/
PWD=/workspace
PROJECT_ID=$PROJECT_ID
REPO_NAME=$REPO_NAME
SHLVL=1
HOME=/builder/home
REVISION_ID=$REVISION_ID
DEBIAN_FRONTEND=noninteractive
BUILDER_OUTPUT=/builder/outputs
TAG_NAME=$TAG_NAME
_=/usr/bin/env

If I put the vars in for each build step, like so:

steps:
- id: 'echo_env_vars'
  name: 'gcr.io/cloud-builders/git'
  entrypoint: '/bin/bash'
  args: ["-c", "env"]
  env:
    - 'BUILD_ID=$BUILD_ID'
    - 'BRANCH_NAME=$BRANCH_NAME'
    - 'TAG_NAME=$TAG_NAME'
    - 'REPO_NAME=$REPO_NAME'
    - 'REVISION_ID=$REVISION_ID'
    - 'PROJECT_ID=$PROJECT_ID'
    - 'SHORT_SHA=$SHORT_SHA'

- id: 'build_docker_image'
  name: 'gcr.io/cloud-builders/docker'
  waitFor: ['build_number']
  entrypoint: '/bin/bash'
  args: ["-c", "./make.bash all ${REPO_NAME}"]
  env:
    - 'BUILD_ID=$BUILD_ID'
    - 'BRANCH_NAME=$BRANCH_NAME'
    - 'TAG_NAME=$TAG_NAME'
    - 'REPO_NAME=$REPO_NAME'
    - 'REVISION_ID=$REVISION_ID'
    - 'PROJECT_ID=$PROJECT_ID'
    - 'SHORT_SHA=$SHORT_SHA'

Then the variables are interpolated.

Already have image (with digest): gcr.io/cloud-builders/git
HOSTNAME=99ca06779b08
BRANCH_NAME=ryan/push-on-merge2
SHORT_SHA=341bb94
PATH=/builder/google-cloud-sdk/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/builder/google-cloud-sdk/bin/
PWD=/workspace
PROJECT_ID=ci
REPO_NAME=assetserver
BUILD_ID=bd63276b-2201-4525-be2c-33aa7f1a87e9
SHLVL=1
HOME=/builder/home
REVISION_ID=341bb944b371fc5602e4d961912939fd876f09f3
DEBIAN_FRONTEND=noninteractive
BUILDER_OUTPUT=/builder/outputs
TAG_NAME=
_=/usr/bin/env

Unfortunately, specifying the env variables multiple times in each build step is error-prone. I'd like a way to do it once and have it apply to all steps.
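One way to cut the repetition, assuming the YAML loader resolves anchors before schema validation (which I have not verified for cloud-build-local), is to anchor the env list on the first step and reference it from the others:

```yaml
steps:
- id: 'echo_env_vars'
  name: 'gcr.io/cloud-builders/git'
  entrypoint: '/bin/bash'
  args: ["-c", "env"]
  env: &common_env
    - 'BUILD_ID=$BUILD_ID'
    - 'REPO_NAME=$REPO_NAME'
    - 'SHORT_SHA=$SHORT_SHA'

- id: 'build_docker_image'
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: '/bin/bash'
  args: ["-c", "./make.bash all ${REPO_NAME}"]
  env: *common_env    # reuses the anchored list above
```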

[bug] cloud-build-local always pushes to GCR

Platform:
macOS Mojave: 10.14.5
cloud-build-local Version: v0.5.0
go version: go1.12.5 darwin/amd64

Issue:

When running cloud-build-local with both a custom config location and substitutions passed in, the resulting image always gets pushed up to GCR instead of staying local (no --push parameter needed...)

Final docker image push step in cloudbuild_production.yaml:

- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/xxxx:$COMMIT_SHA']
  timeout: 600s

Cloud-build-local command being run:

cloud-build-local --config=./cloudbuild_production.yaml --dryrun=false --substitutions COMMIT_SHA="local-build-do-no-deploy" .

Result:
Image pushed from local to GCR (e.g. xxxx:local-build-do-no-deploy in this example)

Expected:
As the cloud-build-local has no "--push" flag passed to it, no image should be pushed to GCR.

The official documentation (https://cloud.google.com/cloud-build/docs/build-debug-locally) suggests that the built image should only be pushed to GCR if the --push flag is used, which is not the case here.


Slack

How do I join the Slack channel?

Error building Docker container when nothing has changed

When running cloud-build-local twice against the same configuration, two errors pop up the second time around, at the very end of the run:

  • Error updating docker credentials: failed to update docker credentials: signal: killed
  • Failed to delete homevol: exit status 1

This only happens if there were no changes to the Dockerfile between runs. From what I can tell it does not impact the usage of cloud-build-local; it just seems to be a clean-up issue.

To reproduce the issue, create a cloudbuild.yaml with the following content:

steps:
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - --tag=foo:bar
  - .
images:
- foo:bar

As well as a Dockerfile with this content:

FROM ubuntu:latest as build-env
ENTRYPOINT /usr/bin/bash

Then run the following command twice (assuming foo:bar is not a known image):

cloud-build-local --dryrun=false .

In my case, this is the output I'm seeing:

$ cloud-build-local --dryrun=false .
2019/03/20 11:59:48 Warning: The server docker version installed (18.09.2) is different from the one used in GCB (18.09.0)
2019/03/20 11:59:48 Warning: The client docker version installed (18.09.2) is different from the one used in GCB (18.09.0)
Using default tag: latest
latest: Pulling from cloud-builders/metadata
Digest: sha256:6eb6787cfcbd4b0b5cc20fb02797e83adc46a42112215dda9cdae6afbe3a9023
Status: Image is up to date for gcr.io/cloud-builders/metadata:latest
2019/03/20 11:59:55 Started spoofed metadata server
2019/03/20 11:59:55 Build id = localbuild_7a3f1e50-7c8c-4ed8-baba-3063a357065c
2019/03/20 11:59:55 status changed to "BUILD"
BUILD
: Already have image (with digest): gcr.io/cloud-builders/docker
: Sending build context to Docker daemon  3.072kB
: Step 1/2 : FROM ubuntu:latest as build-env
:
:  ---> 94e814e2efa8
: Step 2/2 : ENTRYPOINT /usr/bin/bash
:
:  ---> Running in f0775bc706db
: Removing intermediate container f0775bc706db
:  ---> 0c4e4223c99f
: Successfully built 0c4e4223c99f
: Successfully tagged foo:bar
2019/03/20 11:59:57 Step  finished
2019/03/20 11:59:57 status changed to "DONE"
DONE

$ cloud-build-local --dryrun=false .
2019/03/20 12:00:29 Warning: The server docker version installed (18.09.2) is different from the one used in GCB (18.09.0)
2019/03/20 12:00:29 Warning: The client docker version installed (18.09.2) is different from the one used in GCB (18.09.0)
Using default tag: latest
latest: Pulling from cloud-builders/metadata
Digest: sha256:6eb6787cfcbd4b0b5cc20fb02797e83adc46a42112215dda9cdae6afbe3a9023
Status: Image is up to date for gcr.io/cloud-builders/metadata:latest
2019/03/20 12:00:36 Started spoofed metadata server
2019/03/20 12:00:36 Build id = localbuild_32d76dc1-4368-4e7f-97c2-9ba46256a7fa
2019/03/20 12:00:36 status changed to "BUILD"
BUILD
: Already have image (with digest): gcr.io/cloud-builders/docker
: Sending build context to Docker daemon  3.072kB
: Step 1/2 : FROM ubuntu:latest as build-env
:
:  ---> 94e814e2efa8
: Step 2/2 : ENTRYPOINT /usr/bin/bash
:
:  ---> Using cache
:  ---> 0c4e4223c99f
: Successfully built 0c4e4223c99f
: Successfully tagged foo:bar
2019/03/20 12:00:37 Step  finished
2019/03/20 12:00:38 status changed to "DONE"
DONE
2019/03/20 12:00:38 Error updating docker credentials: failed to update docker credentials: signal: killed
2019/03/20 12:00:38 Failed to delete homevol: exit status 1

Gets stuck during build

I'm testing this out, and right off the bat it gets into a bad state and never gets out of it. Here is my CLI:

➜  webapp (ruby-2.3.7) git:(master) ✗ cloud-build-local --config $BUILD_CONFIG --dryrun=false .
2018/08/29 13:30:08 Warning: there are left over step containers from a previous build, cleaning them.

➜  webapp (ruby-2.3.7) git:(master) ✗ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                          PORTS               NAMES
cbf5e3cf2088        busybox             "sh"                About a minute ago   Exited (0) About a minute ago                       cloudbuild_vol_d454aed9-e3dd-4f0b-86bb-c58a3fd6337d-helper
➜  webapp (ruby-2.3.7) git:(master) ✗  docker rm -f cbf5e3cf2088


As you can see, it tries to clear exited containers, but from the looks of it this one container cannot be removed.

Intermittent spoofed metadata server issues

I seem to be seeing an issue where sometimes the spoofed metadata server will not start. It seems as if this check is the cause. I am on the latest build of cloud-build-local: 313.0.0. As far as I can tell, the issue is that this check is somehow succeeding. Interestingly, when I run curl against this address and it does not immediately error out, that is exactly when the spoofed metadata server does not start. I am guessing that somehow I am able to reach this server while logged into GCP, and that this is causing my machine to think that it is running on GCE...
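If the check resembles the common "am I on GCE?" probe (an assumption; I have not reproduced the linked check here), it can be approximated locally with a short-timeout request to the well-known metadata address, which should fail fast when not on GCE:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// looksLikeGCE approximates the usual GCE-detection probe: a quick request
// to the well-known metadata address with the required header. Illustrative
// only; the real check in cloud-build-local may differ.
func looksLikeGCE() bool {
	client := &http.Client{Timeout: 500 * time.Millisecond}
	req, err := http.NewRequest("GET", "http://169.254.169.254/", nil)
	if err != nil {
		return false
	}
	req.Header.Set("Metadata-Flavor", "Google")
	resp, err := client.Do(req)
	if err != nil {
		return false // off GCE, the address is normally unreachable
	}
	defer resp.Body.Close()
	return resp.Header.Get("Metadata-Flavor") == "Google"
}

func main() {
	fmt.Println("looks like GCE:", looksLikeGCE())
}
```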

Error when using with minikube

Hi

Not sure if the error is with minikube or container-builder-local, but thought I'd post here first.

I'm seeing

2017/11/28 14:51:03 Error updating token in metadata server: Post http://localhost:8082/token: dial tcp [::1]:8082: getsockopt: connection refused

when running container-builder-local with the same Docker host as the minikube VM (On Mac, using xhyve driver).

More info

First I set up minikube on my Mac following the hello-node minikube tutorial.

I was able to set up minikube, and build the hello-node docker image following that tutorial using docker build. I was also able to build the same hello-node using container builder in the cloud no problem, and also got the container-builder-local working.

What I want to do is build the containers with the same Docker host as my minikube VM, like the tutorial says, so that I can build locally using container-builder-local and have the images available in minikube. Unfortunately I get an error which I'm hoping you can help me with.

The hello-node minikube tutorial says:

Because this tutorial uses Minikube, instead of pushing your Docker image to a registry, you can simply build the image using the same Docker host as the Minikube VM, so that the images are automatically present. To do so, make sure you are using the Minikube Docker daemon:
eval $(minikube docker-env) which does:

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.64.13:2376"
export DOCKER_CERT_PATH="/Users/mike/.minikube/certs"
export DOCKER_API_VERSION="1.23"
# Run this command to configure your shell:
# eval $(minikube docker-env)

So after I run eval $(minikube docker-env) I can use the normal docker build commands from the tutorial to successfully build the hello-node image using the same Docker host as the Minikube VM.

However... when I use the following cloudbuild.yaml

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/hello-node-image', '.' ]
images:
- 'gcr.io/$PROJECT_ID/hello-node-image'

I can get it to work no problem for the cloud version of container-builder AND my regular docker host (using eval $(minikube docker-env -u) to undo the switch to the minikube Docker host). But I get the following error when I run this line after switching to the minikube Docker host with eval $(minikube docker-env)

container-builder-local --config=cloudbuild.yaml --dryrun=false .

I get the following error:

2017/11/28 14:50:49 Warning: The server docker version installed (17.06.0-ce) is different from the one used in GCB (17.06.1-ce)
2017/11/28 14:50:49 Warning: The client docker version installed (17.09.0-ce) is different from the one used in GCB (17.06.1-ce)
Using default tag: latest
latest: Pulling from cloud-builders/metadata
Digest: sha256:3f52605df8532eca6aff1fa9d0cb035b07bfb68bedf0ffd5b919c84f41aa7685
Status: Image is up to date for gcr.io/cloud-builders/metadata:latest
2017/11/28 14:50:59 Started spoofed metadata server
2017/11/28 14:51:00 status changed to "BUILD"
BUILD
Pulling image: gcr.io/cloud-builders/docker
Unable to find image 'gcr.io/cloud-builders/docker:latest' locally
latest: Pulling from cloud-builders/docker
660c48dd555d: Already exists
4c7380416e78: Already exists
421e436b5f80: Already exists
e4ce6c3651b3: Already exists
be588e74bd34: Already exists
10d78c1521df: Pulling fs layer
3a9f6555ca9b: Pulling fs layer
2017/11/28 14:51:03 Error updating token in metadata server: Post http://localhost:8082/token: dial tcp [::1]:8082: getsockopt: connection refused
2017/11/28 14:51:05 Error updating token in metadata server: Post http://localhost:8082/token: dial tcp [::1]:8082: getsockopt: connection refused
2017/11/28 14:51:07 Error updating token in metadata server: Post http://localhost:8082/token: dial tcp [::1]:8082: getsockopt: connection refused
2017/11/28 14:51:10 Error updating token in metadata server: Post http://localhost:8082/token: dial tcp [::1]:8082: getsockopt: connection refused
2017/11/28 14:51:12 Error updating token in metadata server: Post http://localhost:8082/token: dial tcp [::1]:8082: getsockopt: connection refused
2017/11/28 14:51:14 Error updating token in metadata server: Post http://localhost:8082/token: dial tcp [::1]:8082: getsockopt: connection refused
2017/11/28 14:51:16 Error updating token in metadata server: Post http://localhost:8082/token: dial tcp [::1]:8082: getsockopt: connection refused

I've tried googling the error but didn't find anything. I've also made sure to run docker-credential-gcr configure-docker and gcloud auth login, but I get the same error. Would love to get this working so I can use the same container-builder and kubectl commands locally and in production. Thanks in advance for any help.
