
bake-action's Introduction


About

GitHub Action to use Docker Buildx Bake as a high-level build command.

Usage

Path context

By default, this action will use the local bake definition (source: .), so you need to use the actions/checkout action to check out the repository.

name: ci

on:
  push:
    branches:
      - 'master'

jobs:
  bake:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v4
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      -
        name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/bake-action@v4
        with:
          push: true

Git context

Git context can be provided using the source input. This means that you don't need to use the actions/checkout action to check out the repository, as BuildKit will do this directly.

name: ci

on:
  push:
    branches:
      - 'master'

jobs:
  bake:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      -
        name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/bake-action@v4
        with:
          source: "${{ github.server_url }}/${{ github.repository }}.git#${{ github.ref }}"
          push: true

Be careful because any file mutation in the steps that precede the build step will be ignored, including processing of the .dockerignore file, since the context is based on the Git reference. However, you can use the Path context alongside the actions/checkout action to remove this restriction.

Default Git context can also be provided using the Handlebars template expression {{defaultContext}}. Here we can use it to provide a subdirectory to the default Git context:

      -
        name: Build and push
        uses: docker/bake-action@v4
        with:
          source: "{{defaultContext}}:mysubdir"
          push: true

Building from the current repository automatically uses the GITHUB_TOKEN secret that GitHub creates for workflows, so you don't need to pass it manually. If you want to authenticate against another private repository for remote definitions, you can set the BUILDX_BAKE_GIT_AUTH_TOKEN environment variable.

Note

Supported since Buildx 0.14.0

      -
        name: Build and push
        uses: docker/bake-action@v4
        with:
          source: "${{ github.server_url }}/${{ github.repository }}.git#${{ github.ref }}"
          push: true
        env:
          BUILDX_BAKE_GIT_AUTH_TOKEN: ${{ secrets.MYTOKEN }}

Customizing

inputs

The following inputs can be used as step.with keys.

List type is a newline-delimited string:

set: target.args.mybuildarg=value
set: |
  target.args.mybuildarg=value
  foo*.args.mybuildarg=value

CSV type is a comma-delimited string:

targets: default,release
| Name | Type | Description |
|------|------|-------------|
| builder | String | Builder instance (see setup-buildx action) |
| source | String | Context to build from. Can be either local (.) or a remote bake definition |
| files | List/CSV | List of bake definition files |
| workdir | String | Working directory of execution |
| targets | List/CSV | List of bake targets (default target used if empty) |
| no-cache | Bool | Do not use cache when building the image (default false) |
| pull | Bool | Always attempt to pull a newer version of the image (default false) |
| load | Bool | Load is a shorthand for --set=*.output=type=docker (default false) |
| provenance | Bool/String | Provenance is a shorthand for --set=*.attest=type=provenance |
| push | Bool | Push is a shorthand for --set=*.output=type=registry (default false) |
| sbom | Bool/String | SBOM is a shorthand for --set=*.attest=type=sbom |
| set | List | List of target values to override (e.g. targetpattern.key=value) |
| github-token | String | API token used to authenticate to a Git repository for remote definitions (default ${{ github.token }}) |
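
For instance, a minimal sketch of a step combining several of these inputs (the bake file name and values here are illustrative):

      -
        name: Build and push
        uses: docker/bake-action@v4
        with:
          files: |
            ./docker-bake.hcl
          targets: default,release
          set: |
            *.args.mybuildarg=value
          push: true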

outputs

The following outputs are available

| Name | Type | Description |
|------|------|-------------|
| metadata | JSON | Build result metadata |
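
For instance, a later step could parse this output with fromJSON. A sketch, assuming a target named default and the containerimage.digest key typically present in bake metadata:

      -
        name: Build
        id: bake
        uses: docker/bake-action@v4
      -
        name: Show image digest
        run: |
          # 'default' and 'containerimage.digest' are assumptions; keys depend on your bake definition
          echo "${{ fromJSON(steps.bake.outputs.metadata)['default']['containerimage.digest'] }}"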

Subactions

list-targets

This subaction generates a list of Bake targets that can be used in a GitHub matrix, so you can distribute your builds across multiple runners.

# docker-bake.hcl
group "validate" {
  targets = ["lint", "doctoc"]
}

target "lint" {
  target = "lint"
}

target "doctoc" {
  target = "doctoc"
}

# GitHub Actions workflow
jobs:
  prepare:
    runs-on: ubuntu-latest
    outputs:
      targets: ${{ steps.generate.outputs.targets }}
    steps:
      -
        name: Checkout
        uses: actions/checkout@v4
      -
        name: List targets
        id: generate
        uses: docker/bake-action/subaction/list-targets@v4
        with:
          target: validate

  validate:
    runs-on: ubuntu-latest
    needs:
      - prepare
    strategy:
      fail-fast: false
      matrix:
        target: ${{ fromJson(needs.prepare.outputs.targets) }}
    steps:
      -
        name: Checkout
        uses: actions/checkout@v4
      -
        name: Validate
        uses: docker/bake-action@v4
        with:
          targets: ${{ matrix.target }}

inputs

| Name | Type | Description |
|------|------|-------------|
| workdir | String | Working directory to use (defaults to .) |
| files | List/CSV | List of bake definition files |
| target | String | The target to use within the bake file |

outputs

The following outputs are available

| Name | Type | Description |
|------|------|-------------|
| targets | List/CSV | List of extracted targets |

Contributing

Want to contribute? Awesome! You can find information about contributing to this project in CONTRIBUTING.md.

bake-action's People

Contributors

crazy-max, darthmaim, dependabot[bot], felipecrs, gforien, justincormack, nithos, tonistiigi, tuler


bake-action's Issues

cache-from and cache-to gha not fully working for me

Behaviour

I'm trying to build mailu with bake-action (https://github.com/leolivier/Mailu) and the gha cache, but everything is rebuilt each time and I see no cache usage. Maybe I missed something?

Steps to reproduce this issue

I have a CI.yml file which contains:

# needed for gha cache?
      - uses: crazy-max/ghaction-github-runtime@v2
      # depending on login above, will push to GHCR for testing or to Docker hub for releasing
      # Build only arm64 version for now
      - name: Build and push
        uses: docker/[email protected]
        with:
          files: tests/build.hcl
          push: 'true'
          set: |
            "*.args.VERSION=${{ env.VERSION_FILE }}"
            "*.args.pinned_version=${{ env.VERSION_FILE }}"
            "*.cache-from=type=gha"
            "*.cache-to=type=gha,mode=max"
            "*.platform=linux/amd64"

Expected behaviour

If I run the job a second time (a forced 2nd run just to test cache behavior), the cache is used in very few places and a lot of things are rebuilt; see https://github.com/leolivier/Mailu/runs/7336417398?check_suite_focus=true
(don't pay attention to the tests jobs, I haven't yet adapted them for buildx; only the 'bake mailu' job is meaningful here)

Full CI yml file is here: https://github.com/leolivier/Mailu/blob/master/.github/workflows/CI-multiarch.yml

Caching images for use in docker compose between jobs and runs.

Behaviour

This tool looks like exactly what I need, and it almost works, but I'm stuck on the following:
Only some of my images are cached between jobs and runs, but others are constantly rebuilt, and I cannot determine the reason for the difference. I must be doing something fundamentally incorrect, but I cannot find it.

Steps to reproduce this issue

Here is how I am currently attempting to build and save images:

jobs:
  build_neurosynth_compose:
   runs-on: ubuntu-latest
   defaults:
      run:
        working-directory: compose
   steps:
      -
        name: Checkout
        uses: actions/checkout@v3
        with:
          submodules: recursive
      -
        name: Configuration
        run: |
          cp .env.example .env
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/bake-action@master
        with:
          files: docker-compose.yml,docker-compose.dev.yml
          push: true
          load: false
          workdir: compose
          set: |
              neurosynth.tags=ghcr.io/${{ github.repository_owner }}/neurosynth_compose:${{ hashFiles('**/compose/neurosynth_compose/**') }}
              neurosynth.cache-from=type=registry,ref=ghcr.io/${{ github.repository_owner }}/neurosynth_compose:${{ hashFiles('**/compose/neurosynth_compose/**') }}
              neurosynth.cache-from=type=gha,scope=cached-stage
              neurosynth.cache-to=type=gha,scope=cached-stage,mode=max
              nginx.tags=ghcr.io/${{ github.repository_owner }}/synth_nginx:${{ hashFiles('**/compose/nginx/**') }}
              nginx.cache-from=type=registry,ref=ghcr.io/${{ github.repository_owner }}/synth_nginx:${{ hashFiles('**/compose/nginx/**') }}
              nginx.cache-from=type=gha,scope=cached-stage
              nginx.cache-to=type=gha,scope=cached-stage,mode=max
              synth_pgsql.tags=ghcr.io/${{ github.repository_owner }}/synth_pgsql:${{ hashFiles('**/compose/postgres/**') }}
              synth_pgsql.cache-from=type=registry,ref=ghcr.io/${{ github.repository_owner }}/synth_pgsql:${{ hashFiles('**/compose/postgres/**') }}
              synth_pgsql.cache-from=type=gha,scope=cached-stage
              synth_pgsql.cache-to=type=gha,scope=cached-stage,mode=max

And here is where I try to get the cache from the above job

neurosynth_compose_backend_tests:
    runs-on: ubuntu-latest
    needs: build_neurosynth_compose
    defaults:
      run:
        working-directory: compose
    steps:
    - 
      name: Checkout
      uses: actions/checkout@v3
      with:
        submodules: recursive
    -
      name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    -
      name: Configuration
      run: |
        cp .env.example .env
    -
      name: load images
      uses: docker/bake-action@master
      with:
        files: docker-compose.yml,docker-compose.dev.yml
        push: false
        load: true
        workdir: compose
        set: |
            neurosynth.cache-from=type=gha,scope=cached-stage
            nginx.cache-from=type=gha,scope=cached-stage
            synth_pgsql.cache-from=type=gha,scope=cached-stage
    -
      name: Spin up backend
      run: |
        docker network create nginx-proxy
        docker-compose pull
        docker-compose \
          -f docker-compose.yml \
          -f docker-compose.dev.yml \
          up -d --no-build
    -
      name: Create Test Database
      run: |
        until docker-compose exec -T \
        synth_pgsql pg_isready -U postgres; do sleep 1; done
        docker-compose exec -T \
        synth_pgsql \
        psql -U postgres -c "create database test_db"
    -
      name: Backend Tests
      env:
        AUTH0_CLIENT_ID: ${{ secrets.AUTH0_CLIENT_ID }}
        AUTH0_CLIENT_SECRET: ${{ secrets.AUTH0_CLIENT_SECRET }}
        AUTH0_BASE_URL: ${{ secrets.AUTH0_BASE_URL }}
        AUTH0_ACCESS_TOKEN_URL: ${{ secrets.AUTH0_ACCESS_TOKEN_URL }}
        AUTH0_AUTH_URL: ${{ secrets.AUTH0_AUTH_URL }}
      run: |
        docker-compose run \
          -e "APP_SETTINGS=neurosynth_compose.config.DockerTestConfig" \
          -e "AUTH0_CLIENT_ID=${AUTH0_CLIENT_ID}" \
          -e "AUTH0_CLIENT_SECRET=${AUTH0_CLIENT_SECRET}" \
          -e "AUTH0_BASE_URL=${AUTH0_BASE_URL}" \
          -e "AUTH0_ACCESS_TOKEN_URL=${AUTH0_ACCESS_TOKEN_URL}" \
          -e "AUTH0_AUTH_URL=${AUTH0_AUTH_URL}" \
          --rm -w /neurosynth \
          neurosynth \
          python -m pytest neurosynth_compose/tests

here is the docker-compose file I'm using

version: "2"
services:
  neurosynth:
    image: neurosynth_compose
    restart: always
    build: ./neurosynth_compose
    expose:
      - "8000"
    volumes:
      - ./postgres/migrations:/migrations
      - ./:/neurosynth
    command: /usr/local/bin/gunicorn -w 2 -b :8000 neurosynth_compose.core:app --log-level debug --timeout 120
    env_file:
      - .env
    container_name: neurosynth_compose

  nginx:
    image: synth_nginx
    restart: always
    build: ./nginx
    expose:
      - "80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    volumes_from:
      - neurosynth
    environment:
      - VIRTUAL_HOST=${V_HOST}
      - LETSENCRYPT_HOST=${V_HOST}

  synth_pgsql:
    image: synth_pgsql
    restart: always
    build: ./postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    expose:
      - '5432'
    env_file:
      - .env

volumes:
  postgres_data:

networks:
  default:
    external:
      name: nginx-proxy

additional dev configuration

version: "2"
services:
  nginx:
    ports:
      - "81:80"

  neurosynth:
    expose:
      - "8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 neurosynth_compose.core:app --log-level debug --timeout 300 --reload
    restart: "no"

Expected behaviour

I expect the gha cache written in the build job to be reused in the test job, and pushing images to ghcr.io to make subsequent runs pull from that cache during the build step.

Actual behaviour

Sometimes neurosynth is cached, other times it's nginx, but never synth_pgsql.

Configuration

name: Testing Workflow
on: [workflow_dispatch,push]

jobs:
  build_neurosynth_compose:
   runs-on: ubuntu-latest
   defaults:
      run:
        working-directory: compose
   steps:
      -
        name: Checkout
        uses: actions/checkout@v3
        with:
          submodules: recursive
      -
        name: Configuration
        run: |
          cp .env.example .env
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/bake-action@master
        with:
          files: docker-compose.yml,docker-compose.dev.yml
          push: true
          load: false
          workdir: compose
          set: |
              neurosynth.tags=ghcr.io/${{ github.repository_owner }}/neurosynth_compose:${{ hashFiles('**/compose/neurosynth_compose/**') }}
              neurosynth.cache-from=type=registry,ref=ghcr.io/${{ github.repository_owner }}/neurosynth_compose:${{ hashFiles('**/compose/neurosynth_compose/**') }}
              neurosynth.cache-from=type=gha,scope=cached-stage
              neurosynth.cache-to=type=gha,scope=cached-stage,mode=max
              nginx.tags=ghcr.io/${{ github.repository_owner }}/synth_nginx:${{ hashFiles('**/compose/nginx/**') }}
              nginx.cache-from=type=registry,ref=ghcr.io/${{ github.repository_owner }}/synth_nginx:${{ hashFiles('**/compose/nginx/**') }}
              nginx.cache-from=type=gha,scope=cached-stage
              nginx.cache-to=type=gha,scope=cached-stage,mode=max
              synth_pgsql.tags=ghcr.io/${{ github.repository_owner }}/synth_pgsql:${{ hashFiles('**/compose/postgres/**') }}
              synth_pgsql.cache-from=type=registry,ref=ghcr.io/${{ github.repository_owner }}/synth_pgsql:${{ hashFiles('**/compose/postgres/**') }}
              synth_pgsql.cache-from=type=gha,scope=cached-stage
              synth_pgsql.cache-to=type=gha,scope=cached-stage,mode=max

  build_neurostore:
   runs-on: ubuntu-latest
   defaults:
      run:
        working-directory: store
   steps:
      -
        name: Checkout
        uses: actions/checkout@v3
        with:
          submodules: recursive
      -
        name: Configuration
        run: |
          cp .env.example .env
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/bake-action@master
        with:
          files: docker-compose.yml,docker-compose.dev.yml
          push: true
          load: false
          workdir: store
          set: |
              neurostore.tags=ghcr.io/${{ github.repository_owner }}/neurostore:${{ hashFiles('**/store/neurostore/**') }}
              neurostore.cache-from=type=registry,ref=ghcr.io/${{ github.repository_owner }}/neurostore:${{ hashFiles('**/store/neurostore/**') }}
              neurostore.cache-from=type=gha,scope=cached-stage
              neurostore.cache-to=type=gha,scope=cached-stage,mode=max
              nginx.tags=ghcr.io/${{ github.repository_owner }}/store_nginx:${{ hashFiles('**/store/nginx/**') }}
              nginx.cache-from=type=registry,ref=ghcr.io/${{ github.repository_owner }}/store_nginx:${{ hashFiles('**/store/nginx/**') }}
              nginx.cache-from=type=gha,scope=cached-stage
              nginx.cache-to=type=gha,scope=cached-stage,mode=max
              store_pgsql.tags=ghcr.io/${{ github.repository_owner }}/store_pgsql:${{ hashFiles('**/store/postgres/**') }}
              store_pgsql.cache-from=type=registry,ref=ghcr.io/${{ github.repository_owner }}/store_pgsql:${{ hashFiles('**/store/postgres/**') }}
              store_pgsql.cache-from=type=gha,scope=cached-stage
              store_pgsql.cache-to=type=gha,scope=cached-stage,mode=max


  neurostore_backend_tests:
    runs-on: ubuntu-latest
    needs: build_neurostore
    defaults:
      run:
        working-directory: store
    steps:
      - 
        name: Checkout
        uses: actions/checkout@v3
        with:
          submodules: recursive
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Configuration
        run: |
          cp .env.example .env
      -
        name: load images
        uses: docker/bake-action@master
        with:
          files: docker-compose.yml,docker-compose.dev.yml
          push: false
          load: true
          workdir: store
          set: |
              neurostore.cache-from=type=gha,scope=cached-stage
              nginx.cache-from=type=gha,scope=cached-stage
              store_pgsql.cache-from=type=gha,scope=cached-stage
      - 
        name: spin up backend
        run: |
          docker network create nginx-proxy
          docker-compose pull
          docker-compose \
            -f docker-compose.yml \
            -f docker-compose.dev.yml \
            up -d --no-build
      - 
        name: Create Test Database
        run: |
          until docker-compose exec -T \
          store_pgsql pg_isready -U postgres; do sleep 1; done

          docker-compose exec -T \
          store_pgsql \
          psql -U postgres -c "create database test_db"
      -
        name: Backend Tests
        env:
          AUTH0_CLIENT_ID: ${{ secrets.AUTH0_CLIENT_ID }}
          AUTH0_CLIENT_SECRET: ${{ secrets.AUTH0_CLIENT_SECRET }}
          AUTH0_BASE_URL: ${{ secrets.AUTH0_BASE_URL }}
          AUTH0_ACCESS_TOKEN_URL: ${{ secrets.AUTH0_ACCESS_TOKEN_URL }}
          AUTH0_AUTH_URL: ${{ secrets.AUTH0_AUTH_URL }}
        run: |
          docker-compose run \
            -e "APP_SETTINGS=neurostore.config.DockerTestConfig" \
            -e "AUTH0_CLIENT_ID=${AUTH0_CLIENT_ID}" \
            -e "AUTH0_CLIENT_SECRET=${AUTH0_CLIENT_SECRET}" \
            -e "AUTH0_BASE_URL=${AUTH0_BASE_URL}" \
            -e "AUTH0_ACCESS_TOKEN_URL=${AUTH0_ACCESS_TOKEN_URL}" \
            -e "AUTH0_AUTH_URL=${AUTH0_AUTH_URL}" \
            --rm -w /neurostore \
            neurostore \
            python -m pytest neurostore/tests

  neurosynth_compose_backend_tests:
    runs-on: ubuntu-latest
    needs: build_neurosynth_compose
    defaults:
      run:
        working-directory: compose
    steps:
    - 
      name: Checkout
      uses: actions/checkout@v3
      with:
        submodules: recursive
    -
      name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    -
      name: Configuration
      run: |
        cp .env.example .env
    -
      name: load images
      uses: docker/bake-action@master
      with:
        files: docker-compose.yml,docker-compose.dev.yml
        push: false
        load: true
        workdir: compose
        set: |
            neurosynth.cache-from=type=gha,scope=cached-stage
            nginx.cache-from=type=gha,scope=cached-stage
            synth_pgsql.cache-from=type=gha,scope=cached-stage
    -
      name: Spin up backend
      run: |
        docker network create nginx-proxy
        docker-compose pull
        docker-compose \
          -f docker-compose.yml \
          -f docker-compose.dev.yml \
          up -d --no-build
    -
      name: Create Test Database
      run: |
        until docker-compose exec -T \
        synth_pgsql pg_isready -U postgres; do sleep 1; done

        docker-compose exec -T \
        synth_pgsql \
        psql -U postgres -c "create database test_db"
    -
      name: Backend Tests
      env:
        AUTH0_CLIENT_ID: ${{ secrets.AUTH0_CLIENT_ID }}
        AUTH0_CLIENT_SECRET: ${{ secrets.AUTH0_CLIENT_SECRET }}
        AUTH0_BASE_URL: ${{ secrets.AUTH0_BASE_URL }}
        AUTH0_ACCESS_TOKEN_URL: ${{ secrets.AUTH0_ACCESS_TOKEN_URL }}
        AUTH0_AUTH_URL: ${{ secrets.AUTH0_AUTH_URL }}
      run: |
        docker-compose run \
          -e "APP_SETTINGS=neurosynth_compose.config.DockerTestConfig" \
          -e "AUTH0_CLIENT_ID=${AUTH0_CLIENT_ID}" \
          -e "AUTH0_CLIENT_SECRET=${AUTH0_CLIENT_SECRET}" \
          -e "AUTH0_BASE_URL=${AUTH0_BASE_URL}" \
          -e "AUTH0_ACCESS_TOKEN_URL=${AUTH0_ACCESS_TOKEN_URL}" \
          -e "AUTH0_AUTH_URL=${AUTH0_AUTH_URL}" \
          --rm -w /neurosynth \
          neurosynth \
          python -m pytest neurosynth_compose/tests
    -
      name: Frontend Jest Unit Tests
      env:
        AUTH0_CLIENT_ID: ${{ secrets.AUTH0_CLIENT_ID }}
        AUTH0_CLIENT_SECRET: ${{ secrets.AUTH0_CLIENT_SECRET }}
        AUTH0_BASE_URL: ${{ secrets.AUTH0_BASE_URL }}
        AUTH0_ACCESS_TOKEN_URL: ${{ secrets.AUTH0_ACCESS_TOKEN_URL }}
        AUTH0_AUTH_URL: ${{ secrets.AUTH0_AUTH_URL }}
        REACT_APP_AUTH0_CLIENT_ID: ${{ secrets.REACT_APP_AUTH0_CLIENT_ID }}
        REACT_APP_AUTH0_DOMAIN: ${{ secrets.REACT_APP_AUTH0_DOMAIN }}
        REACT_APP_AUTH0_CLIENT_SECRET: ${{ secrets.REACT_APP_AUTH0_CLIENT_SECRET }}
      run: |
        cd neurosynth-frontend/ && \
        cp .env.example .env.dev && \
        docker-compose run \
          -e "APP_SETTINGS=neurosynth_compose.config.DockerTestConfig" \
          -e "AUTH0_CLIENT_ID=${AUTH0_CLIENT_ID}" \
          -e "AUTH0_CLIENT_SECRET=${AUTH0_CLIENT_SECRET}" \
          -e "AUTH0_BASE_URL=${AUTH0_BASE_URL}" \
          -e "AUTH0_ACCESS_TOKEN_URL=${AUTH0_ACCESS_TOKEN_URL}" \
          -e "AUTH0_AUTH_URL=${AUTH0_AUTH_URL}" \
          -e "REACT_APP_AUTH0_DOMAIN=${REACT_APP_AUTH0_DOMAIN}" \
          -e "REACT_APP_AUTH0_CLIENT_ID=${REACT_APP_AUTH0_CLIENT_ID}" \
          -e "REACT_APP_AUTH0_AUDIENCE=localhost" \
          -e "REACT_APP_AUTH0_CLIENT_SECRET=${REACT_APP_AUTH0_CLIENT_SECRET}" \
          -e "REACT_APP_ENV=DEV" \
          -e "REACT_APP_NEUROSTORE_API_DOMAIN=http://localhost/api" \
          -e "CI=true" \
          -e "REACT_APP_NEUROSYNTH_API_DOMAIN=http://localhost:81/api" \
          --rm -w /neurosynth/neurosynth-frontend \
          neurosynth \
          bash -c "cd /neurosynth/neurosynth-frontend && \
          npm install && npm run test"
    -
      name: Frontend Cypress E2E Tests
      uses: cypress-io/github-action@v4
      env:
        CYPRESS_auth0ClientId: ${{ secrets.REACT_APP_AUTH0_CLIENT_ID }}
        CYPRESS_auth0ClientSecret: ${{ secrets.REACT_APP_AUTH0_CLIENT_SECRET }}
        CYPRESS_auth0Domain: ${{ secrets.REACT_APP_AUTH0_DOMAIN }}
        CYPRESS_auth0Audience: localhost
        REACT_APP_AUTH0_AUDIENCE: localhost
        REACT_APP_AUTH0_CLIENT_ID: ${{ secrets.REACT_APP_AUTH0_CLIENT_ID }}
        REACT_APP_AUTH0_DOMAIN: ${{ secrets.REACT_APP_AUTH0_DOMAIN }}
        REACT_APP_AUTH0_CLIENT_SECRET: ${{ secrets.REACT_APP_AUTH0_CLIENT_SECRET }}
        REACT_APP_ENV: DEV
      with:
        build: npm run build:dev
        start: npm run start-ci:dev
        browser: chrome
        wait-on: http://localhost:3000
        working-directory: /home/runner/work/neurostore/neurostore/compose/neurosynth-frontend

  style_check:
    runs-on: ubuntu-latest
    steps:
    -
      name: Checkout
      uses: actions/checkout@v3
      with:
        submodules: recursive
    -
      name: run flake8
      run: |
        pip install flake8
        cd ./store
        flake8 ./neurostore
        cd ../compose
        flake8 ./neurosynth_compose

Logs

Download the log file of your build
and attach it to this issue.

logs_873.zip

Add bake metadata JSON output

Description

Making the JSON printed in the "Bake definition" portion of this Action's output available as a step output would facilitate the use of this information in Actions job summaries. Practically, I'm hoping to use this information to explain the context dependency tree present in a bake manifest to contributors so that they can understand how their change may impact other downstream images.
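
In the meantime, a rough sketch of a workaround that prints the resolved definition into the job summary from a plain run step (the bake file name is illustrative):

      -
        name: Bake definition to job summary
        run: |
          # --print resolves the definition without building anything
          echo '```json' >> $GITHUB_STEP_SUMMARY
          docker buildx bake -f docker-bake.hcl --print >> $GITHUB_STEP_SUMMARY
          echo '```' >> $GITHUB_STEP_SUMMARY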

Feature Request: Ability to specify working-directory

Behaviour

tl;dr I can get docker buildx bake to succeed only when I cd to the subdirectory where docker-compose.yml lives. Can I similarly invoke a change of working directory with bake-action?

By having the ability to change the working directory, one would have a suitable workaround to the following issue in buildx

docker/buildx#1028

Steps to reproduce this issue

Given the following project directory structure:

tests/docker-compose.yml
tests/dockerfiles/debian8/Dockerfile
tests/dockerfiles/debian9/Dockerfile
tests/dockerfiles/debian10/Dockerfile

And the following workflow:

name: CI
on:
  push:
  pull_request:

jobs:
  tests:

    runs-on: ubuntu-latest

    defaults:
      run:
        working-directory: tests

    steps:
      - uses: actions/checkout@v3

      - name: Build
        uses: docker/[email protected]
        with:
          files: tests/docker-compose.yml

And the following tests/docker-compose.yml file

version: "3"
services:
  debian8:
    build: ./dockerfiles/debian8
    cap_add: [ALL]
  debian9:
    build: ./dockerfiles/debian9
    cap_add: [ALL]
  debian10:
    build: ./dockerfiles/debian10
    cap_add: [ALL]

When a build definition file is specified in a subdirectory of the project such as above

Then resolution of Dockerfiles fails.

This is an issue with docker buildx bake itself, not bake-action:

docker buildx bake -f  tests/docker-compose.yml
[+] Building 0.0s (0/0)
error: unable to prepare context: path "dockerfiles/debian9" not found

However, with docker buildx bake there is at least a workaround: simply change the working directory to the one where the build definition file exists before invoking the build.

cd tests && docker buildx bake
[+] Building 1.4s (23/23) FINISHED
# (output omitted)

Expected behaviour

bake-action should allow one to control the current working directory from which docker buildx bake is invoked.
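
For reference, the inputs table earlier in this document lists a workdir key. A sketch of the requested behaviour, assuming workdir works as documented there:

      - name: Build
        uses: docker/bake-action@v4
        with:
          # run bake from the tests/ subdirectory, so relative build contexts resolve
          workdir: tests
          files: docker-compose.yml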

Configuration

Logs

Download the log file of your build
and attach it to this issue.

logs_28.zip

Q: Proper bake usage with groups

Hi, I am just wondering what the proper/best practice is for using this action with multiple targets through a group.
As an example, I have a repository that contains 3 subsystems controlled through a bake file that triggers them.

Example

group "prod" {
  targets = ["A", "B", "C"]
}

I want to trigger this group through the action, but am unsure as to the best approach. I have the following two options, but I'm unsure which way is best, or whether there is another option I have not considered.

NOTE: removed the surrounding boiler stuff such as credential logins, QEMU/BuildX setup etc.

  1. Simple way
     Use prod as the targets input to the action. This will trigger the build of all of them, however does this consider:
     a) multi-platform builds?
     b) proper caching using gha: since the group is used, do the targets share the cache or do they overwrite each other's?

Example

- name: Push Images using BuildX Bake
  uses: docker/bake-action@v2
  with:
    files: |
      ./docker-bake.hcl
      ${{ steps.meta.outputs.bake-file }}
    targets: prod
    push: true
    set: |
      *.cache-from=type=gha,scope=build-prod
      *.cache-to=type=gha,scope=build-prod,mode=max
  2. Manual separation using a matrix strategy
     Currently I take the group input, use jq to extract the targets within it, and generate a matrix strategy so that the targets run in parallel with guaranteed separate caches. This makes things a bit more complicated and a little harder to maintain (for instance, I already had an issue with the bake output changing on me and needing to update the jq extraction).
targets:
    name: Generate targets list from provided bake file
    runs-on: ubuntu-22.04
    outputs:
      matrix: ${{ steps.targets.outputs.matrix }}
    steps:
      # 1.1 - checkout the files
      - name: Checkout
        uses: actions/checkout@v3

      # 1.2 - Generate a matrix output of all the targets for the specified group
      - name: Create matrix
        id: targets
        run: |
          docker buildx bake ${{ inputs.group }} -f ${{ inputs.file }} --print
          TARGETS=$(docker buildx bake ${{ inputs.group }} -f ${{ inputs.file }} --print | jq -cr ".group.${{ inputs.group }}.targets")
          echo "matrix=$TARGETS" >> $GITHUB_OUTPUT

      # 1.3 (optional) - output the generated target list for verification
      - name: Show matrix
        run: |
          echo ${{ steps.targets.outputs.matrix }}

Then using that to build the matrix

# this job depends on the 'targets' job
    needs:
      - targets

    # 2.0 - Build a matrix strategy from the retrieved target list
    strategy:
      fail-fast: true
      matrix:
        target: ${{ fromJson(needs.targets.outputs.matrix) }}

And finally building the images of those targets

- name: Push Images using BuildX Bake
  uses: docker/bake-action@v2
  with:
    files: |
      ./${{ inputs.file }}
      ${{ steps.meta.outputs.bake-file }}
    targets: ${{ matrix.target }}
    push: true
    set: |
      *.cache-from=type=gha,scope=build-${{ matrix.target }}
      *.cache-to=type=gha,scope=build-${{ matrix.target }},mode=max

Closed connection while exporting cache

Behaviour

I got the following error when I ran bake-action with the gha cache. If I change cache-from and cache-to to the registry backend, it works without any issue. In the bake file, I only build and push a single image with cache mode set to max.

Error: buildx bake failed with: ERROR: failed to solve: error writing layer blob: Patch
"https://artifactcache.actions.githubusercontent.com/uTgMzyvCdNIjCtee7zrndnvlIB7VknrPTzIh4DDuGWZFAwVhOj/_apis/artifactcache/caches/202": 
read tcp 172.17.0.2:44358->52.219.169.187:443: use of closed network connection

Steps to reproduce this issue

I think this may be an intermittent issue and might be related to docker/buildx#367

Configuration

docker version: 20.10.23
Buildx version: v0.10.4
Bake action version: 3.0.1

Multi-level image builds - only the base image is cached

Behaviour

Not sure if I am doing it wrong, but I can't get specific images to be cached (if I use docker buildx bake locally, it caches everything properly); using this action, it only caches my base image.

Steps to reproduce this issue

For example I tried:

      - name: Login to docker registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Set up Docker Buildx
        uses: docker/[email protected]
      - name: Build and Push Projects
        uses: docker/[email protected]
        env:
          PROJECT_TAG: cmt-${{ github.sha }}
        with:
          # targets: "${{ needs.project_names.outputs.names_comma }}"
          targets: base,specific_image1,specific_image2
          push: true
          set: |
            base.cache-from=type=gha,scope=build-base
            base.cache-to=type=gha,scope=build-base,mode=max
            specific_image1.cache-from=type=gha,scope=build-specific_image1
            specific_image1.cache-to=type=gha,scope=build-specific_image1,mode=max
            specific_image2.cache-from=type=gha,scope=build-specific_image2
            specific_image2.cache-to=type=gha,scope=build-specific_image2,mode=max

I also tried specifying cache generically like:

      *.cache-from=type=gha
      *.cache-to=type=gha,mode=max

Expected behaviour

I would expect all images built to be cached, not just the base one.

Actual behaviour

The base image is cached, but the specific image is always rebuilt even if nothing has changed.

Configuration

My bake file looks like this:

variable "BASE_TAG" {
  default = "16.0"
}
variable "PROJECT_TAG" {
  default = "16.0"
}

target "base" {
  dockerfile = "src/base/Dockerfile"
  contexts = {
    base-src = "src/base"
  }
  tags = ["ghcr.io/myorg/base:${BASE_TAG}"]
}

target "_all" {
  contexts = {
    base = "target:base",
    extra-src = "src/extra"
  }
}

target "specific_image_1" {
  inherits = ["_all"]
  contexts = {
    project-src = "src/projects/specific_image_1"
  }
  dockerfile = "${PROJECT_DOCKERFILE}"
  tags = ["ghcr.io/myorg/specific_image_1:${PROJECT_TAG}"]
}

Logs

In the logs, the part that is cached is actually coming from the base image, not the specific one:

...
#24 [specific_image_1 stage-0  7/16] RUN python3 -m venv /opt/odoo/venv     && pip3 install --no-cache-dir pip-tools==6.13.0 wheel     && pip-compile-install /opt/odoo/requirements
#24 CACHED

#25 [specific_image_1 stage-0 15/16] COPY --from=monodoo-src --chown=odoo:odoo ./entrypoint.py entrypoint.py
#25 CACHED

#26 [specific_image_1 stage-0  4/16] COPY --from=monodoo-src --chown=odoo:odoo ./odoo/requirements.txt /opt/odoo/requirements/odoo-requirements.in
#26 CACHED

#27 [specific_image_1 stage-0 10/16] RUN pip3 install --no-cache-dir -e /opt/odoo/odoo     && pip3 install --no-cache-dir "/opt/odoo/bins/anthem-0.13.1.dev33+gcf73513-py2.py3-none-any.whl"
#27 CACHED

#28 [specific_image_1 stage-0 12/16] RUN pip3 install --no-cache-dir -e /opt/odoo/songs
#28 CACHED

#29 [specific_image_1 stage-0  8/16] COPY --from=monodoo-src --chown=odoo:odoo ./odoo /opt/odoo/odoo
#29 CACHED

#30 [specific_image_1 stage-0 14/16] COPY --chown=odoo:odoo ./pytest.ini /opt/odoo/pytest.ini
#30 CACHED

#31 [specific_image_1 stage-0  5/16] COPY --from=monodoo-src --chown=odoo:odoo ./requirements.txt /opt/odoo/requirements/requirements.in
#31 CACHED

#32 [specific_image_1 stage-0 16/16] COPY --from=monodoo-src --chown=odoo:odoo ./addons /opt/odoo/projects/monodoo
#32 sha256:287b999fa4f23b153628e0306124c5821df8499eed930d33db2fb0631cfcd35c 0B / 216B 0.2s
#32 ...
...

#35 [base] exporting to GitHub Actions Cache
#35 preparing build cache for export 2.6s done
#35 DONE 2.6s

#32 [base stage-0 16/16] COPY --from=base-src --chown=odoo:odoo ./addons /opt/odoo/projects/base
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 52.43MB / 314.72MB 6.0s
#32 extracting sha256:cf92e523b49ea3d1fae59f5f082437a5f96c244fda6697995920142ff31d59cf 1.7s done
#32 extracting sha256:89c0313aa29e2dd962db1c104533845c3b8a4417e3f9498ed0b277a5c8d96901 0.0s done
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 40.89MB / 140.09MB 5.4s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 49.28MB / 156.02MB 5.7s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 49.28MB / 140.09MB 6.2s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 57.67MB / 156.02MB 6.5s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 70.25MB / 314.72MB 7.7s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 56.62MB / 140.09MB 6.9s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 66.06MB / 156.02MB 7.4s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 65.01MB / 140.09MB 7.7s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 88.08MB / 314.72MB 9.3s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 73.40MB / 140.09MB 8.4s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 80.74MB / 140.09MB 9.2s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 89.13MB / 140.09MB 9.9s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 105.91MB / 314.72MB 11.0s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 74.45MB / 156.02MB 10.4s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 97.52MB / 140.09MB 10.7s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 82.84MB / 156.02MB 11.4s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 104.86MB / 140.09MB 11.4s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 122.68MB / 314.72MB 12.5s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 112.20MB / 140.09MB 12.0s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 92.27MB / 156.02MB 12.5s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 119.54MB / 140.09MB 12.8s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 140.51MB / 314.72MB 14.1s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 100.66MB / 156.02MB 13.5s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 127.93MB / 140.09MB 13.5s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 136.31MB / 140.09MB 14.3s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 109.05MB / 156.02MB 14.6s
#32 sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 140.09MB / 140.09MB 14.7s done
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 158.33MB / 314.72MB 15.8s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 117.44MB / 156.02MB 15.5s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 175.11MB / 314.72MB 17.3s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 125.83MB / 156.02MB 16.5s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 134.22MB / 156.02MB 17.4s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 192.94MB / 314.72MB 18.9s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 142.61MB / 156.02MB 18.5s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 150.99MB / 156.02MB 19.5s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 210.76MB / 314.72MB 20.6s
#32 sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 156.02MB / 156.02MB 20.0s done
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 228.59MB / 314.72MB 22.4s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 246.42MB / 314.72MB 24.5s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 264.24MB / 314.72MB 26.6s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 281.02MB / 314.72MB 28.5s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 297.80MB / 314.72MB 30.5s
#32 sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 314.72MB / 314.72MB 32.4s done
#32 extracting sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438
#32 extracting sha256:16fd52525449f870f1ec2d0096b352bae611ff3876a7e733950c4628813f3438 11.5s done
#32 extracting sha256:bcdb739d5aef016392e512fa997b8d9d306e64497bbaaa60c3b98a2f778884c7 0.0s done
#32 DONE 44.6s

#32 [specific_image_1 stage-0 16/16] COPY --from=base-src --chown=odoo:odoo ./addons /opt/odoo/projects/base
#32 extracting sha256:161804423ce2990c2dbff86ee156e9a25c4b230934ff56f121be1506ff494762 0.0s done
#32 extracting sha256:c9285c82326f171fa758940fa64cc967cd87dac0d945976540eb67e911f66b10 0.0s done
#32 extracting sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710
#32 extracting sha256:834c5693942525652d5952da9f7578d39838d6f94c63c52e729328f5615d9710 6.6s done
#32 extracting sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf
#32 extracting sha256:f9b1fff5a0fb46796fbac4bda595f1590813ae6302b13dd025fc40eac739afbf 17.3s done
#32 extracting sha256:3ae774462f6508dbc08f158ce375fa336d7d572b7bab20043ddabb7ce540b373 0.0s done
#32 extracting sha256:f906c05f9d6e04001861baa255081d8a36ff0e569f6d2b6799d16bf90b8ee58f 0.0s done
#32 extracting sha256:903c40a7931a03fd4d19722115d49b0e0080e91fdc280beb6056360452051f44 0.0s done
#32 DONE 68.6s

#32 [specific_image_1 stage-0 16/16] COPY --from=base-src --chown=odoo:odoo ./addons /opt/odoo/projects/base
#32 extracting sha256:fba3414c2be77074d36dff6a0128e4027a39049f2d1130795cd46e9e206066ea 0.0s done
#32 extracting sha256:154d71d26556a7378017cb7151f6f4644d27c9e4a27d8f25d5688f6ee62dbc69 0.0s done
#32 extracting sha256:32c43fb07e4e86e64922daf97eb8939eba6f39b47d0db3501b80ff8c6a6ce21e 0.0s done
#32 extracting sha256:287b999fa4f23b153628e0306124c5821df8499eed930d33db2fb0631cfcd35c 0.0s done
#32 extracting sha256:9c83cbfacf8c82b77a547c595566a6a9ea060ab0e635825e9fef63a117a38e61
#32 extracting sha256:9c83cbfacf8c82b77a547c595566a6a9ea060ab0e635825e9fef63a117a38e61 0.1s done
#32 DONE 68.8s

#36 [specific_image_1 stage-0 1/9] COPY --from=project-src --chown=odoo:odoo ./requirements.txt /opt/odoo/requirements/custom-requirements.in
#36 DONE 0.2s

#37 [specific_image_1 stage-0 2/9] COPY --from=extra-src --chown=odoo:odoo ./connector/requirements.txt /opt/odoo/requirements/connector-requirements.in
#37 DONE 0.0s

#38 [specific_image_1 stage-0 3/9] RUN pip-compile-install /opt/odoo/requirements
#0 0.145 + pip-compile-install /opt/odoo/requirements
#38 43.66 #
#38 43.66 # This file is autogenerated by pip-compile with Python 3.10
#38 43.66 # by the following command:
#38 43.66 #
#38 43.66 #    pip-compile --output-file=/opt/odoo/requirements/requirements.txt --resolver=backtracking /opt/odoo/requirements/connector-requirements.in /opt/odoo/requirements/custom-requirements.in /opt/odoo/requirements/odoo-requirements.in /opt/odoo/requirements/requirements.in
#38 43.66 #

What's interesting is that the last COPY of the base image is not cached (not sure why):

#30 [specific_image_1 stage-0 14/16] COPY --chown=odoo:odoo ./pytest.ini /opt/odoo/pytest.ini
#30 CACHED

#31 [specific_image_1 stage-0  5/16] COPY --from=monodoo-src --chown=odoo:odoo ./requirements.txt /opt/odoo/requirements/requirements.in
#31 CACHED

#32 [specific_image_1 stage-0 16/16] COPY --from=monodoo-src --chown=odoo:odoo ./addons /opt/odoo/projects/monodoo
#32 sha256:287b999fa4f23b153628e0306124c5821df8499eed930d33db2fb0631cfcd35c 0B / 216B 0.2s
#32 ...

It just copies source code, but maybe it's not supposed to be cached?

feature: perform push iff all targets succeed

Not sure if this is currently possible, but it would be nice to have bake-action only do a push if all targets have been successfully built.

My typical use case is to provide a group to use for the build, and from that group extract the individual targets within the definition. From this I build a matrix that is then used to parallelize the targets in the actual bake-action with their own individual cache locations.

Eg.
Matrix output is

[foo,bar]

Following that a bake action is triggered using the following

 # 2.6 - Build and push Docker Images
- name: Build Images using BuildX Bake
  uses: docker/bake-action@v2
  with:
    files: ./${{ inputs.file }}
    targets: ${{ matrix.target }}
    push: ${{ inputs.push }}
    set: |
      *.cache-from=type=gha,scope=build-${{ matrix.target }}
      *.cache-to=type=gha,scope=build-${{ matrix.target }},mode=max

But since they are independent steps now, it can happen that one (foo) is fast, succeeds, and is pushed to the registry, while the second (bar) takes longer and eventually fails. The registry then contains inconsistent release versions of the images, which is a problem.

Is there a way around this, or is the approach overall flawed?

Caching questions

Is it currently possible to use actions/cache with docker/bake-action somehow? I notice there is a no-cache option; however, it doesn't seem to do much between runs.

Ideally I'd like to take advantage of docker layer caching between runs to speed up our deploys. Thank you!
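
For what it's worth, a sketch of wiring up layer caching between runs through the set input with the gha cache backend (mirroring the set examples elsewhere in this document), rather than actions/cache:

      -
        name: Build
        uses: docker/bake-action@v4
        with:
          set: |
            *.cache-from=type=gha
            *.cache-to=type=gha,mode=max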

How to access the bake-action cached image in subsequent jobs?

Hi
I read #81, which explains how to access bake-action's cached images in subsequent steps, but I want to do the same in subsequent jobs, not steps.
Would it be possible?
I tried the approach above but it does not seem to work.
So currently, my only possibility is to push to ghcr.io for testing purposes.

Feat: registry option

Hi guys,

Really like this action; the only thing I found missing is the option to point to another registry, like GitHub's container registry.
Is there a way to set the registry with the set option, or will this be a new feature?
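
One possible sketch, assuming a bake target named mytarget: log in to the desired registry first, then override the target's tags via set (the target and image names are illustrative):

      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/bake-action@v4
        with:
          push: true
          set: |
            mytarget.tags=ghcr.io/${{ github.repository_owner }}/myimage:latest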

Thanks in advance,

Koen

Cache not being hit when using TAG workflows

Behaviour

When using a workflow that triggers on a tag, the cache is not hit when the workflow is triggered with a follow-on version tag.
Removing the tag and deploying again does hit the cache correctly, however.

This is similar, if not identical, to the issue currently found in the build_pull_action here.

Note this is a multi-platform build as well, in case that adds extra complexity.

Expected behaviour

Cache should be hit and used appropriately on all tags.

Actual behaviour

Cache is not hit on subsequent tags and invocation of the workflow.

Configuration

Sample Workflow

name: Docker CI Bake

on:
  push:
    tags:
      - 'v*.*.*'
      - 'v*.*.*-*'

env:
  REGISTRY: ghcr.io
  BAKE_FILE: docker-bake-gh.hcl

jobs:
  targets:
    runs-on: ubuntu-22.04
    outputs:
      matrix: ${{ steps.targets.outputs.matrix }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      # Generate a matrix output of all the default targets
      - name: Create matrix
        id: targets
        run: |
          echo ::set-output name=matrix::$(docker buildx bake -f ${{ env.BAKE_FILE }} --print | jq -cr '.group.default.targets')

      - name: Show matrix
        run: |
          echo ${{ steps.targets.outputs.matrix }}

  build-push:
    name: Build and push Docker image to GitHub Container registry
    if: ${{ github.ref_type == 'tag' }}
    runs-on: ubuntu-22.04
    permissions:
      packages: write
      contents: read
    needs:
      - targets

    strategy:
      fail-fast: true
      matrix:
        target: ${{ fromJson(needs.targets.outputs.matrix) }}

    steps:
      # Checkout the repository
      - name: Checkout the repository
        uses: actions/checkout@v3

      # Login against the docker registry
      - name: Login to registry ${{ env.REGISTRY }}
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: set env variables for bakefile
        run: |
          echo "VERSION=$( echo ${{ github.ref_name }} | sed 's/^.//' )" >>${GITHUB_ENV}
          echo "DOCKER_ORG=${{ env.REGISTRY }}" >> ${GITHUB_ENV}
          echo "DOCKER_PREFIX=${{ github.repository_owner }}" >> ${GITHUB_ENV}

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      # Build and push Docker Images
      - name: Build Images using BuildX Bake
        uses: docker/bake-action@v2
        with:
          files: ./${{ env.BAKE_FILE }}
          targets: ${{ matrix.target }}
          push: true
          set: |
            *.cache-from=type=gha,scope=build-${{ matrix.target }}
            *.cache-to=type=gha,scope=build-${{ matrix.target }},mode=max

Null value for `tags` when using with `docker/metadata-action` bake file output

Contributing guidelines

I've found a bug, and:

  • The documentation does not mention anything about my problem
  • There are no open or closed issues that are related to my problem

Description

Hello! I'm using this action together with the docker/metadata-action bake file output. I'm using the tags from the docker-metadata-action target to generate the image name.

Expected behaviour

It should build normally, using the tags from the meta action.

Actual behaviour

The action errors out:

Error: #1 [internal] load local bake definitions
#1 reading ./docker-bake.hcl 320B / 320B done
#1 reading /home/runner/work/_temp/docker-actions-toolkit-ZPTkvi/docker-metadata-action-bake-tags.json 247B / 247B done
#1 reading /home/runner/work/_temp/docker-actions-toolkit-ZPTkvi/docker-metadata-action-bake-labels.json 643B / 643B done
#1 reading /home/runner/work/_temp/docker-actions-toolkit-ZPTkvi/docker-metadata-action-bake-annotations.json 696B / 696B done
#1 DONE 0.0s
./docker-bake.hcl:14
--------------------
  12 |     target "test" {
  13 |       inherits = ["_common"]
  14 | >>>   tags = generate_tags("docker.io", target.docker-metadata-action.tags)
  15 |     }
  16 |     
--------------------
ERROR: ./docker-bake.hcl:14,37-71: Invalid function argument; Invalid value for "tags" parameter: argument must not be null., and 1 other diagnostic(s)

Repository URL

https://github.com/tk-nguyen/docker-bake-demo (master branch is after when I moved the bake file with the tags last)

Workflow run URL

https://github.com/tk-nguyen/docker-bake-demo/actions/runs/8098999772/job/22133738961

YAML workflow

on:
  push:
    tags:
      - v*.*.*

name: Docker bake test

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v5
        with:
          tags: |
            type=semver,pattern=v{{version}}
            type=semver,pattern=v{{major}}
            type=semver,pattern=v{{major}}.{{minor}}

      - name: Build and push
        uses: docker/bake-action@v4
        with:
          files: |
            ./docker-bake.hcl
            ${{ steps.meta.outputs.bake-file }}
            ${{ steps.meta.outputs.bake-file-annotations }}
          targets: test

Additional info

When I put the bake file with the tags last (either ${{ steps.meta.outputs.bake-file }} or ${{ steps.meta.outputs.bake-file-tags }}), it works perfectly, as seen here: https://github.com/tk-nguyen/docker-bake-demo/actions/runs/8099076031/job/22133989604

Building on multiple native nodes

This is a great GitHub Action. It makes it super easy to get all our required Docker images built and published in CI.

Is it possible to support building across multiple native nodes? When building our images for both linux/arm64 and linux/amd64 using QEMU, the build takes 60 minutes in CI. By splitting the build across two separate native nodes, the build time is reduced to 15 minutes.

The only problem with this approach is that both builds push their own manifest separately instead of a combined one at the end. So the latest pushed image only knows of a single platform.

Additional Metadata Output specifying images generated

In my bake files I have a small function that is used for tagging. For example, it may add a prefix or a suffix to the image name based on some ENV variable that was set. Because of this, I don't have a good way to determine the actual tags that were used and pushed, which I could then reference in following jobs. I could make a workaround that uses jq with the --print option to extract them (sketched after the list below), but it would be best if they were part of the metadata output of the workflow.

Actual:
Meta file currently outputs:

  • attributes and labels for the image
  • platform used
  • sources used
  • target name used

Desired:

  • additionally output the fully qualified image name that was generated, i.e. ghcr.io/foo/my_image:123, or at the very least the tag and version (i.e. my_image:123)
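
A sketch of that jq workaround, assuming a target named mytarget whose resolved tags we want to read back:

      -
        name: Extract tags from the resolved bake definition
        run: |
          # --print resolves the definition (including tag functions) without building
          docker buildx bake mytarget --print | jq -r '.target.mytarget.tags[]'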

Too-large metadata causes an "Argument list too long" error

Behaviour

When I tried to parse the info from build metadata using:

            - name: Set output variables
              id: bake_metadata
              env:
                  BAKE_METADATA: ${{ steps.bake.outputs.metadata }}
              run: |
                  targets=$(jq -c 'keys' <<< "${BAKE_METADATA}")
                  echo "::set-output name=targets::${targets}"
                  images=$(jq -c '. as $base |[to_entries[] |{"key": (.key|ascii_upcase|sub("-"; "_"; "g") + "_IMAGE"), "value": [(.value."image.name"|split(",")[0]),.value."containerimage.digest"]|join("@")}] |from_entries' <<< "${BAKE_METADATA}")
                  echo "::set-output name=images::${images}"

I got the error "Argument list too long".

Is there any way to work around this by writing the metadata to a file rather than an environment variable?
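
One possible sketch of such a workaround, writing the metadata to a file before parsing it (the step id and paths are illustrative):

            - name: Set output variables
              id: bake_metadata
              run: |
                  # write the metadata into the script itself via a quoted heredoc,
                  # avoiding the oversized environment variable entirely
                  cat > /tmp/bake-metadata.json << 'EOF'
                  ${{ steps.bake.outputs.metadata }}
                  EOF
                  targets=$(jq -c 'keys' /tmp/bake-metadata.json)
                  echo "targets=${targets}" >> $GITHUB_OUTPUT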

Logs

The log archive is in:
logs_1206.zip

How to access the bake-action cached image in subsequent steps?

Thanks a lot team for your work on this repo.

I want to use bake-action to build and cache, and then run docker compose run web in the following step using this cached image; however, I can't seem to access the cached, built image in the second step, as it is rebuilt every time.

If this is something bake-action supports, I'd be happy to do any work I can do on documenting it.

See rdmolony/cache-docker-compose-images for an illustration of my experiment

I suspect this isn't supported, as hooking into a runner via tmate and running docker image ls shows that the cached image isn't accessible:

node             14-alpine         47afee183159   2 weeks ago    119MB
node             16-alpine         97c7a05048e1   2 weeks ago    112MB
ubuntu           20.04             20fffa419e3a   2 weeks ago    72.8MB
ubuntu           18.04             ad080923604a   2 weeks ago    63.1MB
node             16                b59df4e04d61   2 weeks ago    907MB
node             14                d0c8d2556876   3 weeks ago    946MB
buildpack-deps   stretch           37d7e352c300   3 weeks ago    835MB
buildpack-deps   buster            78ad4d0bd058   3 weeks ago    804MB
buildpack-deps   bullseye          679938ea7aec   3 weeks ago    834MB
debian           9                 daa15d2587f5   3 weeks ago    101MB
debian           10                354ff99d6bff   3 weeks ago    114MB
debian           11                4eacea30377a   3 weeks ago    124MB
moby/buildkit    buildx-stable-1   a2c9241854f2   6 weeks ago    142MB
moby/buildkit    latest            a2c9241854f2   6 weeks ago    142MB
node             12                6c8de432fc7f   2 months ago   918MB
node             12-alpine         bb6d28039b8c   2 months ago   91MB
alpine           3.12              24c8ece58a1a   2 months ago   5.58MB
alpine           3.13              20e452a0a81a   2 months ago   5.61MB
alpine           3.14              e04c818066af   2 months ago   5.59MB
ubuntu           16.04             b6f507652425   9 months ago   135MB
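
For what it's worth, one thing to check (a sketch, assuming a single-platform build and that the bake target tags match the image names in docker-compose.yml) is the load input, which exports the build result into the runner's local Docker daemon so later steps can see it:

      -
        name: Build and load
        uses: docker/bake-action@v4
        with:
          files: docker-compose.yml
          load: true
      -
        name: Run web
        run: docker compose run web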

Everything after # gets eaten in the `set` input, with no way to escape it since v3

Behaviour

Steps to reproduce this issue

Set the context like this:

- name: Build
  uses: docker/bake-action@v3
  with:
    set: |
      base.context=https://github.com/${{ github.repository }}.git#${{ github.ref }}

Expected behaviour

docker bake should be called with --set base.context=https://github.com/foo/bar#ref

or have a way to escape it. I tried putting quotes around it, or backslashes, but neither works.

Actual behaviour

docker bake is called with --set base.context=https://github.com/foo/bar (without the #ref)
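
Until that's fixed, one workaround that sidesteps the action's list-input parsing entirely (a sketch; the variable and target names are illustrative) is to declare the context as a bake variable and pass the value through the step's env, since environment values are not stripped at the # character:

# docker-bake.hcl
variable "BASE_CONTEXT" {
  default = "."
}

target "base" {
  context = BASE_CONTEXT
}

- name: Build
  uses: docker/bake-action@v3
  env:
    BASE_CONTEXT: ${{ github.server_url }}/${{ github.repository }}.git#${{ github.ref }}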

Configuration

v3 breaks build for *.output=type=docker,dest

Behaviour

After updating to docker/bake-action@v3 from docker/bake-action@v2, my build breaks, because I am exporting the image and the docker exporter does not support provenance metadata. Setting provenance: false fixes the build.

This should either be marked clearly as a breaking change in the v3 release, or the default should be changed to provenance: false.
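
In the meantime, the fix can be applied in the workflow (a sketch, assuming the provenance input the action exposes in recent versions):

    - name: Build the Docker image
      uses: docker/bake-action@v3
      with:
        provenance: false
        set: |
          *.output=type=docker,dest=/tmp/image.tar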

Steps to reproduce this issue

  1. Use the default docker/bake-action@v3 and set *.output=type=docker,dest=/tmp/image.tar

Expected behaviour

Build works without error and docker image gets exported

Actual behaviour

Error: buildx bake failed with: ERROR: docker exporter does not currently support exporting manifest lists

Configuration

name: Docker

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
  merge_group:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}

jobs:
  image:
    name: Build image ${{ matrix.target }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target: [ web, worker, legacy-importer, database-migration ]
    steps:
    - uses: actions/checkout@v3
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    - name: Build the Docker image
      uses: docker/[email protected]
      with:
        files: |
          ./docker-compose.yml
          ./docker-compose.importer.yml
        targets: ${{ matrix.target }}
        set: |
          *.output=type=docker,dest=/tmp/image-${{ matrix.target }}.tar
          *.cache-from=type=gha,scope=build-${{ matrix.target }}
          *.cache-to=type=gha,scope=build-${{ matrix.target }},mode=max
    - uses: actions/upload-artifact@v3
      with:
        name: docker-image-${{ matrix.target }}
        path: /tmp/image-${{ matrix.target }}.tar

Logs

logs_377.zip

Can't set OCI media type to off

Contributing guidelines

I've found a bug, and:

  • The documentation does not mention anything about my problem
  • There are no open or closed issues that are related to my problem

Description

See docker/setup-buildx-action#187 (comment) for details

Expected behaviour

See docker/setup-buildx-action#187 (comment) and the attached screenshot (IMG_7379)

Actual behaviour

See image

Repository URL

No response

Workflow run URL

No response

YAML workflow

https://github.com/dotabod/backend/blob/abb6c688de712ac8d94c30da13ee67c1590c79de/docker-compose.yml

Workflow logs

No response

BuildKit logs

No response

Additional info

No response

Action does not respect the Builder mirror configuration

Behaviour

I am running the bake-action in a GitHub pipeline in an isolated company network. Local Dockerfiles that extend public Docker images from Docker Hub thus need to go through a mirror (Artifactory) set up in the same network. It is possible to configure the Builder with such mirror information during its setup step. This works well with the "pure" build-push-action (on single Dockerfiles), which honors the mirror and pulls base layers from it instead of registry-1.docker.io. But as soon as one switches to using the bake action on multiple Dockerfiles, the mirror is not used anymore.

Steps to reproduce this issue

  1. Define a GitHub pipeline with a setup step for the Builder that includes an inline mirror configuration
  2. In a later step, run the Bake action on the same Builder
  3. Add one or more Dockerfiles extending public images
  4. Observe the pipeline failure

Expected behaviour

I expect that images built by Bake on the customized Builder would honor the mirror configured for Dockerhub and thus pull base images not from registry-1.docker.io but from the mirror site.

Actual behaviour

Buildx (according to the output) relentlessly tries to contact Docker Hub directly:

Error: buildx bake failed with: ERROR: failed to solve: DeadlineExceeded: DeadlineExceeded: DeadlineExceeded: centos:8.3.2011: failed to do request: Head "https://registry-1.docker.io/v2/library/centos/manifests/8.3.2011": dial tcp 34.194.164.123:443: i/o timeout

Configuration

    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        id: setup-builder
        with:
          config-inline: |
            [registry."docker.io"]
              mirrors = ["dockerhub-remote.artifactory.company.int"]
      -
        name: Build and push
        uses: docker/bake-action@v2
        with:
          builder: ${{ steps.setup-builder.outputs.name }}
          workdir: ${{ matrix.image }}
          push: true

Logs

githubactions-run-log.txt
githubactions-cleanup-log.txt

Error: Blocks are not allowed here

Behaviour

Steps to reproduce this issue

  1. Check out the enhancement/gh-actions feature branch of the k3d repository to get the following action part: https://github.com/k3d-io/k3d/blob/enhancement/gh-actions/.github/workflows/release.yaml#L143-L154
  2. Push some commit to trigger the workflow (or adjust to workflow_dispatch)
  3. Wait for it to get to the docker-bake-action step

Expected behaviour

The bake action step takes the provided docker-bake.hcl and the generated bake JSON files and uses them to build the images.
It works locally with the copied JSON output, using e.g. docker buildx bake -f ./docker-bake.hcl -f .local/tests/bake.1.json --print release

Actual behaviour

Bake definition
  /usr/bin/docker buildx bake --file ./docker-bake.hcl --file /tmp/docker-metadata-action-GGKyIw/docker-metadata-action-bake.json --file /tmp/docker-metadata-action-fZ5d0s/docker-metadata-action-bake.json --file /tmp/docker-metadata-action-toPOSl/docker-metadata-action-bake.json --file /tmp/docker-metadata-action-Cqo0nf/docker-metadata-action-bake.json --metadata-file /tmp/docker-build-push-Rpx0C2/metadata-file binary dind proxy tools --print
  ./docker-bake.hcl:2
  --------------------
     1 |     // filled by GitHub Actions
     2 | >>> target "docker-metadata-k3d" {}
     3 |     target "docker-metadata-k3d-dind" {}
     4 |     target "docker-metadata-k3d-proxy" {}
  --------------------
  error: ./docker-bake.hcl:2,1-7: Unexpected "target" block; Blocks are not allowed here., and 3 other diagnostic(s)
  Error: The process '/usr/bin/docker' failed with exit code 1

The same thing happened when I still had the release group before the target definitions.

Configuration

Here's the docker-bake.hcl: https://github.com/k3d-io/k3d/blob/enhancement/gh-actions/docker-bake.hcl

Workflow Configuration File
name: Test & Release

on: push

env:
  IMAGE_REGISTRY: ghcr.io
  IMAGE_BASE_REPO: k3d-io
  IMAGE_PLATFORMS: linux/amd64,linux/arm64,linux/arm/v7
  GO_VERSION: "1.17.x"
  DOCKER_VERSION: "20.10"
    
jobs:
  test-suite:
    name: Full Test Suite
    runs-on: ubuntu-20.04
    steps:
      #... skipped irrelevant steps ...

  release:
    name: Build & Release
    # Only run on tags
    runs-on: ubuntu-20.04
    steps:
      # Setup
      - uses: actions/checkout@v2
      #... skipped irrelevant steps ...
      # Container Image Setup
      - name: Setup Docker
        uses: docker-practice/actions-setup-docker@master
        with:
          docker_version: "${{ env.DOCKER_VERSION }}"
      - name: Log in to the Container registry
        uses: docker/login-action@v1
        with:
          registry: ${{ env.IMAGE_REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      # Gather Docker Metadata
      - name: Docker Metadata k3d-binary
        id: meta-k3d-binary
        env:
          IMAGE_ID: k3d
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.IMAGE_REGISTRY }}/${{ env.IMAGE_BASE_REPO }}/${{ env.IMAGE_ID }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          bake-target: docker-metadata-${{ env.IMAGE_ID }}
          tags: |
            type=semver,pattern={{major}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{version}}
            type=ref,event=branch
            type=ref,event=pr
            type=sha
      - name: Docker Metadata k3d-dind
        id: meta-k3d-dind
        env:
          IMAGE_ID: k3d
          IMAGE_SUFFIX: "-dind"
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.IMAGE_REGISTRY }}/${{ env.IMAGE_BASE_REPO }}/${{ env.IMAGE_ID }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          bake-target: docker-metadata-${{ env.IMAGE_ID }}${{ env.IMAGE_SUFFIX }}
          tags: |
            type=semver,pattern={{major}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{version}}
            type=ref,event=branch
            type=ref,event=pr
            type=sha
          flavor: |
            suffix=${{ env.IMAGE_SUFFIX }}
      - name: Docker Metadata k3d-proxy
        id: meta-k3d-proxy
        env:
          IMAGE_ID: k3d-proxy
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.IMAGE_REGISTRY }}/${{ env.IMAGE_BASE_REPO }}/${{ env.IMAGE_ID }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          bake-target: docker-metadata-${{ env.IMAGE_ID }}
          tags: |
            type=semver,pattern={{major}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{version}}
            type=ref,event=branch
            type=ref,event=pr
            type=sha
      - name: Docker Metadata k3d-tools
        id: meta-k3d-tools
        env:
          IMAGE_ID: k3d-tools
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.IMAGE_REGISTRY }}/${{ env.IMAGE_BASE_REPO }}/${{ env.IMAGE_ID }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          bake-target: docker-metadata-${{ env.IMAGE_ID }}
          tags: |
            type=semver,pattern={{major}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{version}}
            type=ref,event=branch
            type=ref,event=pr
            type=sha
      # Build and Push container images
      - name: Build Images
        uses: docker/[email protected]
        with:
          files: |
            ./docker-bake.hcl
            ${{ steps.meta-k3d-binary.outputs.bake-file }}
            ${{ steps.meta-k3d-dind.outputs.bake-file }}
            ${{ steps.meta-k3d-proxy.outputs.bake-file }}
            ${{ steps.meta-k3d-tools.outputs.bake-file }}
          targets: binary,dind,proxy,tools
          push: false
      #... skipped irrelevant steps ...

Logs

logs_200.zip

Builds not using local cache

Behaviour

Docker Compose is not using the cache for docker compose run commands. I'm probably making a silly mistake somewhere, but I can't quite figure out where or how.

Steps to reproduce this issue

I have a fairly simple workflow that I'm trying to get to use local cache for Docker compose

Expected behaviour

I was hoping this workflow would utilize local cache to build compose containers.

Actual behaviour

The Build stack, Load stack, and Compose up steps do appear to use the cached layers; however, Run static checks rebuilds the whole stack again.

name: ci

on:
  push:
    branches:
      - master
  pull_request:
  # cron schedule for nightly builds
  schedule:
     - cron: '59 23 * * *'
  # allow manual triggering 
  workflow_dispatch:

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build stack
        uses: docker/[email protected]
        with:
          files: docker-compose.yml
          load: false
          push: false
          set: |
              *.cache-from=type=gha,scope=cached-stage
              *.cache-to=type=gha,scope=cached-stage,mode=max
      -
        name: Load stack
        uses: docker/[email protected]
        with:
          files: docker-compose.yml
          load: true
          push: false
          set: |
              *.cache-from=type=local,src=/tmp/.buildx-cache
      -
        name: Compose up
        run: |
          docker-compose -f docker-compose.yml up -d --no-build
      -
        name: Run static checks
        run: |
          docker-compose -f docker-compose.yml run --rm python_base black --check --diff .
      -
        name: Compose down
        run: |
          docker-compose -f docker-compose.yml down

Logs

Screenshot 2023-01-31 at 7 49 35 PM
Here is another example where the Build process was quick, but Load stack rebuilt the python image, and once again the static checks built the container too.
Screenshot 2023-01-31 at 8 15 33 PM
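
A likely culprit (an observation, not a confirmed fix): the Load stack step asks for *.cache-from=type=local,src=/tmp/.buildx-cache, but nothing in the workflow ever writes a cache to that path; the Build stack step exported to the gha cache backend instead, so the loader can't find the layers. A sketch of a consistent setup, reusing the same gha scope in both steps:

      -
        name: Build stack
        uses: docker/bake-action@v2
        with:
          files: docker-compose.yml
          push: false
          set: |
              *.cache-from=type=gha,scope=cached-stage
              *.cache-to=type=gha,scope=cached-stage,mode=max
      -
        name: Load stack
        uses: docker/bake-action@v2
        with:
          files: docker-compose.yml
          load: true
          push: false
          set: |
              *.cache-from=type=gha,scope=cached-stage

Even then, docker-compose run may rebuild if the loaded images don't carry the exact names the Compose file expects, so it's worth checking docker image ls between the steps.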
