uraimo / run-on-arch-action
648 stars · 13 watchers · 144 forks · 291 KB

A GitHub Action that executes jobs/commands on non-x86 CPU architectures (ARMv6, ARMv7, aarch64, s390x, ppc64le, riscv64) via QEMU

License: BSD 3-Clause "New" or "Revised" License

JavaScript 48.72% Shell 51.28%
github-actions github-workflow actions continuous-integration aarch64 s390x armv6 armv7 ppc64le riscv64

run-on-arch-action's People

Contributors

benalexau, d4n, dependabot[bot], dims, elijahr, fingolfin, fniephaus, gamer191, gdams, gerschtli, joschi, julianoes, leoniloris, longhronshen, lukaswoodtli, pentamassiv, pllim, sgallagher, sithlord48, uraimo, vrince

run-on-arch-action's Issues

Supporting $GITHUB_ENV besides "::set-output ..."

At the moment, I have to do this to get output out of the container:

echo "::set-output name=${var_name}::${var_value}"

Would it be possible to support $GITHUB_ENV somehow?

echo "${var_name}=${var_value}" >> $GITHUB_ENV

Without $GITHUB_ENV, I have to manually repack variables for follow-up scripts like this:

   - name: Export variables from "some-id" docker step
     run: |
       echo "FIRST=${{ steps.some-id.outputs.FIRST}}" >> $GITHUB_ENV
       echo "SECOND=${{ steps.some-id.outputs.SECOND}}" >> $GITHUB_ENV

So that I can then call scripts like this:

   - name: Follow-up scripts
     run: echo "Hello $FIRST World $SECOND !"

Dynamic Dockerfiles?

IIUC, the recommended approach to using this action with custom base images is to fork or embed and add new Dockerfiles.

Assuming one didn't want additional customization in the Dockerfile beyond an arbitrary base image (which is always the case for the builtins and the custom images I was using), it seems like it'd be pretty straightforward to generate the Dockerfile on the fly from a template to make any base image work with the stock action. eg, add a new optional base_image arg that would be templated into FROM- without the arg, the current behavior against the Dockerfiles in the action, and with it, using a dynamic Dockerfile. This change would obviate the need to fork and customize the action merely for different base images.

Thoughts?
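For what it's worth, here is a sketch of how the proposed input might look from the caller's side (base_image is hypothetical here, not an existing input of the action):

- uses: uraimo/run-on-arch-action@v2
  with:
    arch: aarch64
    # Hypothetical input, templated into the generated Dockerfile's FROM line
    base_image: arm64v8/alpine:3.16
    run: uname -a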

Need alpine_edge; Dockerfile.aarch64.alpine_edge does not exist

In my build.yaml I am using

      uses: uraimo/run-on-arch-action@v2
      with:
        arch: ${{ matrix.qemu_arch }}
        distro: alpine_edge

because I need fuse3, which is only in the edge branch of Alpine Linux (I was told that fuse2 is no longer maintained and should not be used anymore).

Getting

build (aarch64, aarch64)
run-on-arch: /home/runner/work/_actions/uraimo/run-on-arch-action/v2/Dockerfiles/Dockerfile.aarch64.alpine_edge does not exist.
build (armv7, armhf)
run-on-arch: /home/runner/work/_actions/uraimo/run-on-arch-action/v2/Dockerfiles/Dockerfile.armv7.alpine_edge does not exist.

Can alpine_edge be enabled please?

Using `act` to test an action locally fails

I would like to test a new workflow using this action on my local machine using nektos/act, but even with the simple example listed in the README, it fails with this output:

| WARNING: The requested image's platform (linux/arm) does not match the detected host platform (linux/amd64) and no specific platform was requested
| docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "/actions/run-on-arch-action@<version>/src/run-on-arch-commands.sh": stat /actions/run-on-arch-action@<version>/src/run-on-arch-commands.sh: no such file or directory: unknown.
[Linux-ARM/Build on ubuntu-18.04 armv7]   ❗  ::error::The process '/actions/run-on-arch-action@<version>/src/run-on-arch.sh' failed with exit code 127
[Linux-ARM/Build on ubuntu-18.04 armv7]   ❌  Failure - Run commands
DEBU[0023] exit with `FAILURE`: 1                       
DEBU[0023] exit with `FAILURE`: 1                       
Error: exit with `FAILURE`: 1

I realize that this might not be the intended way to run this action, but I would really like to get it working together with act.

I also tried to use the latest release 2.0.9 and I tried using different arch/distro combinations. Any help debugging the problem would be greatly appreciated.

Nice Project but README is wrong

Hi!

super cool project, thanks!

But there is no Debian support for aarch64 (arm64), even though the README mentions it.

Is it planned?

fedora_latest armv7 tries to allocate 464 MB of memory

Using fedora_latest armv7 crashes with:

  Fedora 36 - armhfp                              3.9 MB/s |  76 MB     00:19    
  Fedora 36 openh264 (From Cisco) - armhfp        791  B/s | 2.5 kB     00:03    
  Fedora Modular 36 - armhfp                      1.4 MB/s | 2.3 MB     00:01    
  Fedora 36 - armhfp - Updates                    3.7 MB/s |  23 MB     00:06    
  Out of memory allocating 486539264 bytes!
  /root/run-on-arch-install.sh: line 4:    13 Aborted                 (core dumped) dnf install -y dotnet-sdk-6.0 wget
  The command '/bin/sh -c chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh' returned a non-zero code: 134
  Error: The process '/home/runner/work/_actions/uraimo/run-on-arch-action/v2/src/run-on-arch.sh' failed with exit code 134

fedora_latest s390x works fine.

Running the 'docker' command from run-on-arch

I'd like to test my multi-architecture Docker image, so I've tried the run-on-arch action together with the docker command. Something like this:

name: Docker Test
on:
  workflow_dispatch
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Code checkout
      uses: actions/checkout@v2
    
    - uses: uraimo/run-on-arch-action@v2.0.9
      name: Run aratiny
      id: docker-cmd
      env:
        DOCKER_USER: ${{secrets.DOCKER_USER}}
        DOCKER_PASSWORD: ${{secrets.DOCKER_PASSWORD}}
      with:
        arch: aarch64
        distro: ubuntu20.04
        run: |
          cd docker
          ./docker-run.sh # this just sets up a few options and runs "docker run".

However, I get the error 'docker command not found':

Status: Downloaded newer image for arm64v8/ubuntu:20.04
 ---> 0a1fc7bf1e73
Step 2/3 : COPY ./run-on-arch-install.sh /root/run-on-arch-install.sh
 ---> 24140215138a
Step 3/3 : RUN chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh
Warning: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested
 ---> Running in 85e80b14444c
Removing intermediate container 85e80b14444c
 ---> 833ef66bca6f
Successfully built 833ef66bca6f
Successfully tagged run-on-arch-rothamsted-knetminer-docker-test-aarch64-ubuntu20-04:latest
WARNING: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested


export MAVEN_ARGS=" --no-transfer-progress --batch-mode --no-snapshot-updates -Pdocker"
++ docker run -it -p 8080:8080 --env MAVEN_ARGS knetminer/knetminer:latest aratiny
./docker-run.sh: line 144: docker: command not found
Error: The process '/home/runner/work/_actions/uraimo/run-on-arch-action/v2.0.9/src/run-on-arch.sh' failed with exit code 127

Details here.

Usually the docker command is available on the GH Actions host without any special setup; here it seems it isn't.

Moreover, as I asked elsewhere, I'm not sure whether I should prepare multi-arch images for all the parent images that my image uses, or whether doing it just for the final image is enough.
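If what you actually need is to drive the host's Docker daemon, one hedged workaround is to install the Docker CLI in the image and mount the host's socket; note that containers started this way run on the host's amd64 daemon, not inside the emulated environment (untested sketch):

- uses: uraimo/run-on-arch-action@v2
  with:
    arch: aarch64
    distro: ubuntu20.04
    # Expose the host Docker daemon inside the emulated container
    dockerRunArgs: |
      --volume /var/run/docker.sock:/var/run/docker.sock
    install: |
      apt-get update -q -y
      apt-get install -q -y docker.io
    run: |
      docker info  # talks to the host's daemon via the mounted socket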

Support for whole jobs

Hey, I just wondered if there could be a way to emulate aarch64 for all steps like

jobs:
  build:
    container: name-of/container-image

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

I would love to run actions on this container provided by this repo!

Support install scripts located in checked-out repository

During the action's install: phase, one cannot access the checked-out files. For modularity reasons, I manage preparation scripts for various Linux environments in the repository, so I tried not to clutter the .yml workflow spec with extra "apt install ..." stuff.

First, I noticed that the current working directory does not point to the checkout:

install: ./scripts/ci/actions_prepare_linux_arm.sh

/root/run-on-arch-install.sh: line 7: ./scripts/ci/actions_prepare_linux_arm.sh: No such file or directory

After realizing that I seem to be in the root path, I tried to provide the full path to the install script:

install: ./home/runner/work/opensmalltalk-vm/opensmalltalk-vm/scripts/ci/actions_prepare_linux_arm.sh

/root/run-on-arch-install.sh: line 4: ./home/runner/work/opensmalltalk-vm/opensmalltalk-vm/scripts/ci/actions_prepare_linux_arm.sh: No such file or directory

This issue might be related to issue #39. This might be a "nice-to-have" feature. 😄
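For context, install: runs while the image is being built with docker build, before the checked-out workspace is mounted; the workspace only becomes available to the run: phase (the /github/workspace path seen in another issue suggests this). A minimal sketch of moving the script invocation there:

- uses: uraimo/run-on-arch-action@v2
  with:
    arch: armv7
    distro: buster
    run: |
      # The checkout is mounted for the run phase, unlike during install:
      ./scripts/ci/actions_prepare_linux_arm.sh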

Accepting `distro` notation variations, e.g. `ubuntu-20.04`

In this action, we can specify labels like ubuntu20.04 for the distro option. On the other hand, the GitHub-hosted runners use labels with a hyphen, like ubuntu-20.04.
https://docs.github.com/en/free-pro-team@latest/actions/reference/specifications-for-github-hosted-runners#supported-runners-and-hardware-resources

This leads to verbose matrix options. Currently only Ubuntu overlaps between the Docker images here and the GitHub-hosted runner labels, though.
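Until the action accepts both notations, a mapping trick along these lines keeps the matrix short (a sketch; only the two Ubuntu labels are mapped):

strategy:
  matrix:
    os: [ubuntu-18.04, ubuntu-20.04]
steps:
  - uses: uraimo/run-on-arch-action@v2
    with:
      arch: aarch64
      # Translate the hyphenated runner label into this action's distro label
      distro: "${{ fromJSON('{\"ubuntu-18.04\":\"ubuntu18.04\",\"ubuntu-20.04\":\"ubuntu20.04\"}')[matrix.os] }}"
      run: uname -a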

Is `githubToken` safe?

This action prints Your password will be stored unencrypted in /home/runner/.docker/config.json.

Failed to run command on aarch64

Hi,

I'm trying to run a command on the aarch64 architecture but got the error below, and I'm not quite sure what it means; maybe some missing configuration? The issue happens on line 84 (the mvn command).

Build failed = https://github.com/rodrigorodrigues/spring-native-crud-mongodb/runs/2117047728?check_suite_focus=true

Github Actions file = https://github.com/rodrigorodrigues/spring-native-crud-mongodb/blob/master/.github/workflows/spring-native.yml#L65-L84

Error

qemu-aarch64-static: Could not open '/lib/ld-linux-aarch64.so.1': No such file or directory
Error: Process completed with exit code 255.

Thanks

Building Images from Dockerfiles

When building an image from a Dockerfile, it fails:

> [2/3] COPY ./run-on-arch-install.sh /root/run-on-arch-install.sh: failed to compute cache key: "/run-on-arch-install.sh" not found: not found

There is no such file in the repo.
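That file is generated by the action itself at runtime (from the install: input) and copied next to the Dockerfile before building, which is why it is absent from the repo. A hedged sketch for building one of the Dockerfiles by hand:

run: |
  # Stand-in for the script the action normally generates from install:
  touch Dockerfiles/run-on-arch-install.sh
  docker build -f Dockerfiles/Dockerfile.aarch64.ubuntu20.04 Dockerfiles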

cache conflict?

Hi, it seems the name of the container generated after the install steps, for caching, is ghcr.io/<github_user>/<github_repo>/run-on-arch-<github_user>-<github_repo>-<workflow_name>-<arch>-<distro>.

I am trying to build different images for the same arch/distro; for instance, I am using a different version of the JDK inside each image.

My understanding is that all 3 jobs will try to push/pull the same container, which will probably generate a conflict.

Can you confirm whether that's the case, and if so, is there something we can do about it?

feature: Support more arch?

Hi there, is there a plan to support more architectures, like ppc64, loongarch64, mips, mipsle, mips64, mips64le, riscv64 and so on?

Add support for subsequent actions

The way this is written really limits the value of this action to only building artifacts, or so it seems. Is there any way to just prepare the build environment for the next action?

Value too large for defined data type; class=Os (2) (but only when using a build.rs)

I know this might be just a Rust-related thing, but I'll ask here as well, since I first saw the partial solution to this problem in issue #9.

  • Problem:

I see the error Value too large for defined data type; class=Os (2) when trying to build a Rust program IF AND ONLY IF I have a build.rs file (which is built and run before the program itself). When I don't have this file, the problem disappears (the file does nothing functionally; it is literally empty).

  • Reproduction:

I created this repository https://github.com/leoniloris/test-actions/actions with only two commits (and two runs on the Actions tab), so one can see that when I remove the build.rs file, the build goes through.

There is a single .yml file for building it, and it uses the workaround depicted in #9 (comment).

Do you guys have any idea of what might be the problem?


Linked issue: rust-lang/cargo#9545
Since I don't know whether it is a cargo issue or a GitHub Actions issue, I'll post in both (for now; I'll close the unrelated one once we discover what's happening).

Cannot run suid programs

When I try to run sudo with a non-root user with this action, it tells me:

sudo: effective uid is not 0, is /usr/sbin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?

Invalidate cached container image

Hey, as stated in the README, the commands in install are only executed once when githubToken is provided. Is there a way to rerun the install commands to update the cached container image? Like a separate cronjob to keep the image up-to-date asynchronously?
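The action exposes no input for this as far as I can tell, so one hedged option is a scheduled workflow combined with deleting the cached package from ghcr.io beforehand, which forces the next run to rebuild the image and re-execute install::

on:
  schedule:
    # Re-run weekly; with the ghcr.io package deleted first, the build
    # starts from scratch instead of reusing the cached image
    - cron: '0 3 * * 1'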

CI env not accessible in aarch64 container

This happens when running the aarch64 architecture in my case; I'm not sure if others are affected.

Part of the CMake code checks for the CI env variable:
if("$ENV{CI}" STREQUAL "true")

which doesn't seem to detect it, because the code inside the if does not get executed.

PS: CI is set to true by default on the virtual environments.
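Assuming the action does not forward the host environment automatically, passing CI through its env input should make the CMake check pass (a sketch with made-up arch/distro values):

- uses: uraimo/run-on-arch-action@v2
  with:
    arch: aarch64
    distro: ubuntu20.04
    # Forward CI explicitly into the container
    env: |
      CI: true
    run: |
      cmake -S . -B build  # "$ENV{CI}" now evaluates to "true"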

Cargo build error: .git//refs: Value too large for defined data type; class=Os (2)

Since this action uses QEMU, it seems to trigger a known issue with 32-bit QEMU machines (in my case armv6).

To configure your current shell run source $HOME/.cargo/env
  Installing monolith v2.2.2 (/github/workspace/monolith)
    Updating crates.io index
warning: spurious network error (2 tries remaining): could not read directory '/home/runner/.cargo/registry/index/github.com-1ecc6299db9ec823/.git//refs': Value too large for defined data type; class=Os (2)
warning: spurious network error (1 tries remaining): could not read directory '/home/runner/.cargo/registry/index/github.com-1ecc6299db9ec823/.git//refs': Value too large for defined data type; class=Os (2)
error: failed to fetch `https://github.com/rust-lang/crates.io-index`

Caused by:
  could not read directory '/home/runner/.cargo/registry/index/github.com-1ecc6299db9ec823/.git//refs': Value too large for defined data type; class=Os (2)

More info:
rust-lang/cargo#7451 -> https://lkml.org/lkml/2018/12/28/461

This is a very useful action; thank you for creating and maintaining it.
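A commonly cited workaround for this libgit2 symptom is to make cargo shell out to the git CLI for fetches, sidestepping the readdir path that breaks under 32-bit QEMU (a sketch; whether it covers every case here is untested):

- uses: uraimo/run-on-arch-action@v2
  with:
    arch: armv6
    distro: buster
    env: |
      CARGO_NET_GIT_FETCH_WITH_CLI: "true"
    run: |
      cargo build --release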

release number in README is not the most recent one

Hi,

first and foremost, thanks for this wonderful tool. It really helps me catch problems early in my project.

A short piece of feedback to improve the user experience: I copy-pasted one of the examples from the README.md file and simply added different architectures, as the "Supported Platform" section of the same file suggests. The problem is that the example uses the v2.0.5 label, which does not have all the images that the "Supported Platform" section lists.

I think this can be avoided in multiple ways.

  • The easiest would be to use the latest v2.1.1 in the examples.
  • The more elegant would be to create a floating v2 label. (This is what the official actions/checkout@v2 does too.) This would make your action usable as uraimo/run-on-arch-action@v2.

What do you think?

aarch64 debian error ##[error]The process '/home/runner/work/_actions/uraimo/run-on-arch-action/v1.0.7/src/run-on-arch.sh' failed with exit code 125 ##[error]Node run failed with exit code 1

buster aarch64 error.
This is the workflow file:

on: 
  watch:
    types: [started]

jobs:
  armv7_job:
    runs-on: ubuntu-18.04
    name: Build on ARMv7 
    steps:
      - uses: actions/checkout@v2
      - uses: uraimo/run-on-arch-action@v1.0.7
        id: runcmd
        with:
          architecture: aarch64
          distribution: buster
          additionalArgs: <additional args for architecture specific docker, optional>
          run: |
            uname -a
            echo ::set-output name=uname::$(uname -a)
      - name: Get the output
        run: |
            echo "The uname output was ${{ steps.runcmd.outputs.uname }}"

[image]
This is the job log: https://github.com/yjcn/testarch/actions/runs/38046045
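For reference, the inputs in that workflow are the v1 names, and the literal <additional args ...> placeholder is passed straight through to docker, which by itself is likely enough to make the step fail. A cleaned-up sketch against the v2 input names:

- uses: uraimo/run-on-arch-action@v2
  id: runcmd
  with:
    arch: aarch64
    distro: buster
    # additionalArgs removed: pass only real flags here, never the placeholder
    run: |
      uname -a
      echo ::set-output name=uname::$(uname -a)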

env is not passed to the install block

Adding a single line to the install block makes it fail with the error "artifact_name: parameter not set".

on: [push, pull_request]

jobs:
  build_job:
    # The host should always be linux
    runs-on: ubuntu-18.04
    name: Build on ${{ matrix.distro }} ${{ matrix.arch }}

    # Run steps on a matrix of 3 arch/distro combinations
    strategy:
      matrix:
        include:
          - arch: aarch64
            distro: ubuntu18.04
          - arch: ppc64le
            distro: alpine_latest
          - arch: s390x
            distro: fedora_latest

    steps:
      - uses: actions/checkout@v2
      - uses: uraimo/run-on-arch-action@v2
        name: Build artifact
        id: build
        with:
          arch: ${{ matrix.arch }}
          distro: ${{ matrix.distro }}

          # Not required, but speeds up builds
          githubToken: ${{ github.token }}

          # Create an artifacts directory
          setup: |
            mkdir -p "${PWD}/artifacts"

          # Mount the artifacts directory as /artifacts in the container
          dockerRunArgs: |
            --volume "${PWD}/artifacts:/artifacts"

          # Pass some environment variables to the container
          env: | # YAML, but pipe character is necessary
            artifact_name: git-${{ matrix.distro }}_${{ matrix.arch }}

          # The shell to run commands with in the container
          shell: /bin/sh

          # Install some dependencies in the container. This speeds up builds if
          # you are also using githubToken. Any dependencies installed here will
          # be part of the container image that gets cached, so subsequent
          # builds don't have to re-install them. The image layer is cached
          # publicly in your project's package repository, so it is vital that
          # no secrets are present in the container state or logs.
          install: |
            case "${{ matrix.distro }}" in
              ubuntu*|jessie|stretch|buster)
                apt-get update -q -y
                apt-get install -q -y git
                echo "${artifact_name}"
                ;;
              fedora*)
                dnf -y update
                dnf -y install git which
                ;;
              alpine*)
                apk update
                apk add git
                ;;
            esac

          # Produce a binary artifact and place it in the mounted volume
          run: |
            cp $(which git) "/artifacts/${artifact_name}"
            echo "Produced artifact at /artifacts/${artifact_name}"

      - name: Show the artifact
        # Items placed in /artifacts in the container will be in
        # ${PWD}/artifacts on the host.
        run: |
          ls -al "${PWD}/artifacts"

The line that I added is echo "${artifact_name}".
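For context, env: values are injected when the container runs, while install: executes earlier, during docker build, so ${artifact_name} is simply unset there. If the value is needed at install time, one hedged alternative is to inline the workflow expression, which GitHub expands before the action ever runs:

install: |
  # ${{ ... }} is expanded by GitHub before the container is built,
  # so this works even though env: has not been applied yet
  echo "git-${{ matrix.distro }}_${{ matrix.arch }}"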

Node.js memory limit

Hi @uraimo,
first of all, thanks for this excellent action. I use it quite a lot.

Unfortunately, I have an issue: I run Node.js inside the containers for my build and it always crashes with JavaScript heap out of memory, although I have set the env variable NODE_OPTIONS: "--max_old_space_size=8048".

I created a debug workflow:

name: Debug
on:
  push:

env:
  NODE_OPTIONS: "--max_old_space_size=8048"

jobs:
  build_qemu:
    name: Debug ${{ matrix.arch }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        arch: [armhf]
    steps:
    - uses: actions/checkout@master
    - name: Set Swap Space
      uses: pierotofy/set-swap-space@master
      with:
        swap-size-gb: 10
    - name: Build
      uses: uraimo/run-on-arch-action@v2
      id: build
      with:
        arch: "${{ fromJSON('{\"armhf\": \"armv7\"}')[matrix.arch] }}"
        distro: ubuntu18.04
        env: |
          NODE_OPTIONS: --max_old_space_size=8048
        setup: |
          mkdir -p "${PWD}/artifacts"
        dockerRunArgs: |
          --volume "${PWD}/artifacts:/artifacts"
        install: |
          apt-get update && apt-get install -y curl
          curl -fsSL https://deb.nodesource.com/setup_16.x | bash -
          apt-get install -y nodejs
          npm install --global yarn
        githubToken: ${{ github.token }}
        run: |
          node -e "console.log(require('v8').getHeapStatistics())"
          node -e "console.log(process.env)"

As you can see, I even set more swap space and the env variable twice.

The output is:

{
  total_heap_size: 3960832,
  total_heap_size_executable: 524288,
  total_physical_size: 3154768,
  total_available_size: 4166532276,
  used_heap_size: 2419216,
  heap_size_limit: 4169138176,
  malloced_memory: 131164,
  peak_malloced_memory: 320576,
  does_zap_garbage: 0,
  number_of_native_contexts: 1,
  number_of_detached_contexts: 0
}
{
  _: '/usr/bin/node',
  GITHUB_WORKSPACE: '/home/runner/work/redacted/redacted',
  PATH: '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
  GITHUB_WORKFLOW: 'Debug',
  GITHUB_RUN_NUMBER: '6',
  GITHUB_EVENT_NAME: 'push',
  GITHUB_REPOSITORY: 'redacted/redacted',
  SHLVL: '1',
  GITHUB_JOB: 'build_qemu',
  TERM: 'xterm',
  GITHUB_BASE_REF: '',
  RUNNER_OS: 'Linux',
  GITHUB_GRAPHQL_URL: 'https://api.github.com/graphql',
  GITHUB_EVENT_PATH: '/home/runner/work/_temp/_github_workflow/event.json',
  GITHUB_SERVER_URL: 'https://github.com/',
  GITHUB_RUN_ID: 'redacted',
  GITHUB_SHA: 'redacted',
  GITHUB_REF: 'refs/heads/main',
  RUNNER_WORKSPACE: '/home/runner/work/redacted',
  DEBIAN_FRONTEND: 'noninteractive',
  GITHUB_ENV: '/home/runner/work/_temp/_runner_file_commands/set_env_redacted',
  RUNNER_TEMP: '/home/runner/work/_temp',
  HOME: '/root',
  PWD: '/home/runner/work/redacted/redacted',
  GITHUB_ACTION: 'build',
  GITHUB_ACTOR: 'hrueger',
  GITHUB_HEAD_REF: '',
  CI: 'true',
  GITHUB_ACTIONS: 'true',
  RUNNER_TOOL_CACHE: '/opt/hostedtoolcache',
  GITHUB_API_URL: 'https://api.github.com/',
  NODE_OPTIONS: '--max_old_space_size=8048',
  HOSTNAME: 'd6d6e1f2994f'
}

It seems the memory available inside the container/QEMU is capped at about 4 GB; the env variable itself is clearly passed through correctly.

Do you have any idea?
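One plausible explanation: armv7 is a 32-bit target, so the emulated user space cannot address more than about 4 GB no matter how much RAM or swap the host has; the heap_size_limit of ~4.17e9 bytes above sits right at that ceiling. If that is the cause, the only fix within armv7 is to keep the V8 heap below the cap (a sketch):

env: |
  # Stay comfortably under the ~4 GB 32-bit address-space ceiling
  NODE_OPTIONS: --max_old_space_size=3072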

Consistent naming

We currently have the following Ubuntu machines: ubuntu16.04, ubuntu18.04, ubuntu20.04. But in GitHub Actions these machines are named ubuntu-16.04, ubuntu-18.04, ubuntu-20.04.

It would be awesome to have the same names, to allow easy substitutions via a matrix.
Or alternatively, default to the host distro (instead of ubuntu18.04).

Support for other OSes (operating systems)

Hi

Would it be possible to support other OSes, such as Windows and macOS?
Since it's ultimately running in QEMU, it should be theoretically possible, right?
I'm not sure how install could work, since interacting with, say, Windows can be a bit difficult, but this is theoretically possible, right?

For macOS it's probably easier to set up, but I'm not sure about licensing. GitHub Actions already has runners for these OSes, so some loophole might be found.

Trying to generate NixOS with run-on-arch-action alpine_latest leads to an error

I would like to build a NixOS image with GitHub Actions for my Raspberry Pi. When I use the code that I got from here: https://github.com/Robertof/nixos-docker-sd-image-builder everything is fine, and after about 25 min it ends successfully (code 0): https://github.com/Chris2011/nixos-docker-image-builder/actions/runs/2952094775/jobs/4718626365#step:4:2234

The script checks the architecture; if it is x86, it uses QEMU to emulate. This is exactly what happens. But I guessed it would be much faster with a native ARM image, so I found your repo and updated my workflow file: https://github.com/Chris2011/nixos-docker-image-builder/blob/dev/.github/workflows/build-nixos-arm.yml

Now, when I run the action with your alpine_latest aarch64 image, it does not run QEMU because it is native aarch64, which is what I want. The problem is that after ~8 min it is still green and ends "successfully" (code 1), but with this error: https://github.com/Chris2011/nixos-docker-image-builder/actions/runs/3039913885/jobs/4895382860#step:3:1580

So my feeling is that something misbehaves when using your image for the build.

x86 special-case support

Currently run-on-arch does not special-case x86-64, i.e. running natively without QEMU, so we need extra conditional statements if we want to build for both non-native architectures and native x86-64.
Could we make a change so that run-on-arch is bypassed and the steps run natively when the arch is either empty or "x86-64"?
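Until the action special-cases native builds, a conditional pair of steps is the usual shape of the workaround (a sketch; make test stands in for the real build command):

strategy:
  matrix:
    arch: [x86-64, aarch64]
steps:
  - uses: actions/checkout@v2
  - name: Native build
    if: matrix.arch == 'x86-64'
    run: make test
  - name: Emulated build
    if: matrix.arch != 'x86-64'
    uses: uraimo/run-on-arch-action@v2
    with:
      arch: ${{ matrix.arch }}
      distro: ubuntu20.04
      run: make test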

Continue in another job?

I am using your action in my workflow to compile the Linux kernel. This takes longer than the 6h job limit in GitHub Actions. Is it somehow possible to use the same Docker container in the next job and continue the compilation?
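The action does not hand its container to a later job, but the general Docker pattern for such a hand-off is to snapshot an image and move it through an artifact (a sketch; image and artifact names are made up, and the intermediate build state would have to live inside the image):

- name: Save the build image for the next job
  run: docker save my-kernel-build:checkpoint -o checkpoint.tar
- uses: actions/upload-artifact@v3
  with:
    name: kernel-checkpoint
    path: checkpoint.tar
# ...and in the next job: download the artifact, then `docker load -i checkpoint.tar`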

The advanced example produces a warning

The advanced example produces the following warning in the build step:

Warning: The requested image's platform (linux/arm64) does not match the detected host platform (linux/amd64) and no specific platform was requested.

Is this intended behaviour?

Volume is not mounted

Hello.

Could anyone guide me on how to mount the current directory into the aarch64 container?

I'm trying to do that with

dockerRunarg: |
  --volume "${PWD}:/app"

but the volume is not mounted

Run uraimo/run-on-arch-action@v2
Configuring Docker for multi-architecture support
/home/runner/work/_actions/uraimo/run-on-arch-action/v2/src/run-on-arch.sh /home/runner/work/_actions/uraimo/run-on-arch-action/v2/Dockerfiles/Dockerfile.aarch64.ubuntu20.04 run-on-arch-jokaorgua-for-files-ci-aarch64-ubuntu20-04 --volume ${PWD}:/app
Build container
  GitHub token provided, caching to ghcr.io/jokaorgua/for_files/run-on-arch-jokaorgua-for-files-ci-aarch64-ubuntu20-04
  WARNING! Your password will be stored unencrypted in /home/runner/.docker/config.json.
  Configure a credential helper to remove this warning. See
  https://docs.docker.com/engine/reference/commandline/login/#credentials-store
  
  Login Succeeded
  Error response from daemon: manifest unknown
  Sending build context to Docker daemon  61.44kB
  
  Step 1/3 : FROM arm64v8/ubuntu:20.04
  20.04: Pulling from arm64v8/ubuntu
  d4ba87bb7858: Pulling fs layer
  d4ba87bb7858: Verifying Checksum
  d4ba87bb7858: Download complete
  d4ba87bb7858: Pull complete
  Digest: sha256:ca83774d06420ceb4682ef73bd9cbbfc38a97a27e061b578547a6761206658b9
  Status: Downloaded newer image for arm64v8/ubuntu:20.04
   ---> db1bc6aa58da
  Step 2/3 : COPY ./run-on-arch-install.sh /root/run-on-arch-install.sh
   ---> 95ef0319085f
  Step 3/3 : RUN chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh
  Warning: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
   ---> Running in 55c58bc0ca7a
  /root/run-on-arch-install.sh: line 4: cd: /app: No such file or directory

Here is my workflow

# This is a basic workflow to help you get started with Actions

name: CI

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  test:
    # This workflow contains a single job called "build"
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        include:
          - arch: aarch64
            distro: ubuntu20.04
            build-targets: "deb"
            os: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        name: Checkout branch
      - run: echo ${PWD}
      - uses: uraimo/run-on-arch-action@v2
        name: setup aarch container
        with:
          arch: ${{ matrix.arch }}
          distro: ${{ matrix.distro }}
          # Not required, but speeds up builds
          githubToken: ${{ github.token }}
          shell: /bin/bash
          # Mount the artifacts directory as /artifacts in the container
          dockerRunArgs: |
            --volume "${PWD}:/app"
          install: |
            cd /app
            ls -la
            apt update
            apt install -y curl ruby-dev build-essential
            gem i fpm -f
            curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
            source ~/.nvm/nvm.sh
            nvm use v16.13.2
            nvm installnvm
          run: |
            uname -a
            node --version
            nvm --version
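The log explains it: cd /app happens in install:, which runs during docker build, where the --volume flags from dockerRunArgs are not yet in effect; volumes are only mounted for the run: phase. A sketch of splitting the work accordingly:

install: |
  # Build time: no volumes mounted yet, so stay out of /app
  apt update && apt install -y curl ruby-dev build-essential
run: |
  # Run time: --volume "${PWD}:/app" is now in effect
  cd /app
  ls -la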

cache-from: type=gha | Docker's official build-push-action now supports the GitHub Cache API, saving caches directly to the GitHub Actions cache

Can we switch to Docker's official build-push-action? It now supports the GitHub Cache API, where caches are saved directly to the GitHub Actions cache.
https://github.com/docker/build-push-action/blob/master/docs/advanced/cache.md#cache-backend-api

https://dev.to/dtinth/caching-docker-builds-in-github-actions-which-approach-is-the-fastest-a-research-18ei
Warning: This article has not been updated since its publication in April 2020. The approaches outlined here are probably out-of-date. Here are more updated takes on this issue:

    2021-07-29 Docker's official build-push-action now supports GitHub Cache API where caches are saved to GitHub Actions cache directly, skipping the local filesystem-based cache.

    2021-03-21 Andy Barnov, Kirill Kuznetsov. “Build images on GitHub Actions with Docker layer caching”, Evil Martians.
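For reference, the requested usage would look roughly like this on the build-push-action side (cache-from and cache-to are documented inputs of that action; how it would compose with this action's QEMU setup is the open question):

- uses: docker/build-push-action@v3
  with:
    platforms: linux/arm64
    cache-from: type=gha
    cache-to: type=gha,mode=max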

Document the fact that emulation is being used.

It might be really obvious to people who know how this works, but for someone just looking to run something on a non-x86 arch, it might not be. I had to dig quite a bit to confirm that emulation via QEMU was being used. This matters because it has big implications for performance.

This could be as simple as changing the description to:

A GitHub Action that executes commands on non-x86 CPU architecture (armv6, armv7, aarch64, s390x, ppc64le) using emulation.

getting wrong architecture during build

I'm running Docker in Docker to build for armv7l, but I keep getting:

ERRO[0000] failure getting variant error="getCPUInfo for pattern: Cpu architecture: not found"

and the end result is an amd64 Docker image, built from within the armv7l/debian:buster setup in my workflow... how can this be?
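One hedged guess: the inner build is failing to detect the CPU variant (the getCPUInfo error) and falling back to the daemon's default platform. Pinning the platform explicitly may help (a sketch; requires BuildKit, and myimage is a stand-in):

run: |
  # Pin the target platform instead of relying on variant auto-detection
  docker build --platform linux/arm/v7 -t myimage .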

Please support non-emulated amd64

Thanks for developing run-on-arch-action. When doing multi-arch builds using a matrix, being able to put amd64 and i386 in it would help keep the GitHub Action simple. In that case run-on-arch could run a native container.

CPack fails while creating a temporary directory

Hi there. I am using your action in one of the projects I participate in.

I am having an issue executing cpack:

Run cpack
CPack: Create package using TGZ
CPack: Install projects
CPack Error: Problem creating temporary directory: /home/runner/work/name/name/build/_CPack_Packages/Linux/TGZ/name-arm/
CPack Error: Error when generating package: name
Error: Process completed with exit code 1.

PS: We have several jobs with no issues; it fails only with that one.

Adding RISC-V

Have you considered adding RISC-V support?

Thanks,
