tonistiigi / binfmt
Cross-platform emulator collection distributed with Docker images.
License: MIT License
Is it possible to use multiple cores or pass a flag to QEMU? When running on arm-v7 (Armbian 23.8), QEMU only utilizes one core and stays pinned at 100%.
❯ docker run --platform linux/mips64le golang go version
standard_init_linux.go:228: exec user process caused: exec format error
❯ docker run --platform linux/amd64 golang go version
Unable to find image 'golang:latest' locally
latest: Pulling from library/golang
Digest: sha256:301609ebecc0ec4cd3174294220a4d9c92aab9015b3a2958297d7663aac627a1
Status: Downloaded newer image for golang:latest
go version go1.17.6 linux/amd64
❯ docker run --platform linux/arm64 golang go version
Unable to find image 'golang:latest' locally
latest: Pulling from library/golang
Digest: sha256:301609ebecc0ec4cd3174294220a4d9c92aab9015b3a2958297d7663aac627a1
Status: Downloaded newer image for golang:latest
go version go1.17.6 linux/arm64
❯ docker version
Client:
Cloud integration: v1.0.22
Version: 20.10.12
API version: 1.41
Go version: go1.16.12
Git commit: e91ed57
Built: Mon Dec 13 11:46:56 2021
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.12
API version: 1.41 (minimum version 1.12)
Go version: go1.16.12
Git commit: 459d0df
Built: Mon Dec 13 11:43:56 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.12
GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0
❯ docker buildx version
github.com/docker/buildx v0.7.1 05846896d149da05f3d6fd1e7770da187b52a247
❯ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
desktop-linux docker
desktop-linux desktop-linux running linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
default * docker
default default running linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
exec /bin/uname: no such file or directory
docker run --rm riscv64/alpine:20221110 uname -a
WARNING: The requested image's platform (linux/riscv64) does not match the detected host platform (linux/amd64) and no specific platform was requested
Linux a72f961e7dc3 5.4.0-132-generic #148~18.04.1-Ubuntu SMP Mon Oct 24 20:41:14 UTC 2022 riscv64 Linux
docker run --rm s390x/alpine uname -a
WARNING: The requested image's platform (linux/s390x) does not match the detected host platform (linux/amd64) and no specific platform was requested
exec /bin/uname: no such file or directory
System: Ubuntu 18.04
docker -v
Docker version 20.10.21, build baeda1f
uname -r
5.4.0-132-generic
docker run --privileged --rm tonistiigi/binfmt --version
binfmt/a161c41 qemu/v7.0.0 go/1.18.5
ls -al /proc/sys/fs/binfmt_misc/
total 0
drwxr-xr-x 2 root root 0 Nov 17 15:25 .
dr-xr-xr-x 1 root root 0 Nov 17 15:25 ..
-rw-r--r-- 1 root root 0 Nov 30 08:54 python2.7
-rw-r--r-- 1 root root 0 Nov 21 18:06 python3.6
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-aarch64
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-alpha
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-arm
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-armeb
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-cris
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-m68k
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-microblaze
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-mips
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-mips64
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-mips64el
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-mipsel
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-ppc
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-ppc64
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-ppc64abi32
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-ppc64le
-rw-r--r-- 1 root root 0 Dec 1 09:53 qemu-riscv64
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-s390x
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-sh4
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-sh4eb
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-sparc
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-sparc32plus
-rw-r--r-- 1 root root 0 Nov 21 18:06 qemu-sparc64
--w------- 1 root root 0 Nov 30 08:54 register
-rw-r--r-- 1 root root 0 Nov 30 08:54 status
I noticed on Docker Hub that arm64 images have either linux/arm64 or linux/arm64/v8. Is there currently any real difference between the two, or is one just the more explicit form of the other and a means of future-proofing the platform specification for when we get a newer arm64 variant than v8?
Could you share what license is used for the image?
Although compiled binaries are being provided in releases at https://github.com/tonistiigi/binfmt/releases,
I'm still opening this issue to request an OS-specific package.
If possible, could we release OS-specific packages of "binfmt" with all QEMU patches, which can be installed or uninstalled through OS-specific package managers instead of from a container?
While automating or building a container image, it is not possible to run "docker run --privileged --rm tonistiigi/binfmt --install all" in the image build process.
Discussed at containerd/nerdctl#1321
An alternative, to this, was proposed at containerd/nerdctl#1322 but using upstream packages.
From there, this need arose because "the upstream QEMU lacks patches".
So, this can be fixed by:
or,
» docker run -it --rm --platform=linux/amd64 debian chrt -p 1
Unable to find image 'debian:latest' locally
latest: Pulling from library/debian
Digest: sha256:e8c184b56a94db0947a9d51ec68f42ef5584442f20547fa3bd8cbd00203b2e7a
Status: Downloaded newer image for debian:latest
chrt: failed to get pid 1's policy: Function not implemented
This does not happen on the qemu-v5.2.0 tag.
Native musl does not seem to have this issue:
docker run -it --rm alpine ash -c "apk add util-linux-misc && chrt -p 1"
pid 1's current scheduling policy: SCHED_OTHER
pid 1's current scheduling priority: 0
https://github.com/qemu/qemu/blame/master/linux-user/syscall.c#L10564
Related commands:
linux1@t1:~$ sudo docker run --privileged --rm tonistiigi/binfmt --install all
installing: ppc64le OK
installing: mips64le OK
installing: mips64 OK
installing: arm64 OK
installing: arm OK
installing: 386 OK
installing: amd64 OK
installing: riscv64 OK
{
"supported": [
"linux/s390x"
],
"emulators": [
"qemu-aarch64",
"qemu-arm",
"qemu-i386",
"qemu-mips64",
"qemu-mips64el",
"qemu-ppc64le",
"qemu-riscv64",
"qemu-x86_64"
]
}
First, install all platforms; then run the container for linux/amd64:
linux1@t1:~$ sudo docker run --privileged --platform=linux/amd64 \
--restart=always -itd \
--name warp_one \
--sysctl net.ipv6.conf.all.disable_ipv6=0 \
--cap-add net_admin \
-p 14888:9091 \
-v /lib/modules:/lib/modules \
ubuntu:focal
However, it won't start as it should; it just keeps restarting. The same commands work when using the ARM platform, but this time it failed to run on s390x.
linux1@t1:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0951d5e0af79 ubuntu:focal "/bin/bash" About a minute ago Restarting (132) 42 seconds ago warp_one
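Exit status 132 in the ps output above is worth decoding: statuses above 128 mean the process was killed by a signal (status minus 128). Here that is signal 4, SIGILL (illegal instruction), which usually suggests the emulated binary hit an instruction the emulation does not support. A small sketch of the decoding:

```shell
#!/bin/sh
# Decode a Docker exit status above 128 into the signal that killed the
# process. 132 from the "Restarting (132)" status above is 128 + 4, i.e.
# signal 4 (SIGILL, illegal instruction).
status=132
sig=$((status - 128))
echo "killed by signal $sig ($(kill -l $sig))"
```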
I have a few containers on my Raspberry Pi that run amd64 emulators:
docker run -d --restart always --platform linux/amd64 example/example
Whenever I reboot I have to reinstall binfmt with:
docker run --privileged --rm tonistiigi/binfmt --install all
Otherwise I get exec /bin/sh: exec format error inside my containers.
That's really annoying. Is there a way to avoid this?
It would be nice if this could automatically run every time Docker is started. Maybe add a shell script and have it "sleep inf"? Perhaps a special env variable to enable this mode?
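One workaround (a sketch under assumptions, not a feature of this repo; the unit name and docker binary path are my own choices): a systemd oneshot service that re-runs the installer after the Docker daemon comes up, since the binfmt_misc registrations live in the kernel and are lost on every reboot.

```ini
# /etc/systemd/system/binfmt-install.service  (hypothetical unit name)
[Unit]
Description=Re-register binfmt_misc handlers after boot
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker run --privileged --rm tonistiigi/binfmt --install all

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl enable binfmt-install.service; after that the handlers should be re-registered on every boot.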
I've discovered an issue while trying to perform builds via buildkit, on a linux/amd64 host architecture with a linux/arm64 target (although this also happens for other emulated architectures). This happens when relying on the buildkit binfmt emulation, without host-level support installed into the kernel. binfmt output for the host, showing no emulation support for arm64 and others:
{
"supported": [
"linux/amd64",
"linux/386"
],
"emulators": null
}
When run in this configuration with buildkit v0.10.4, which includes binfmt v6.2.0-24, buildkit itself is configured to run commands using its own variant of the qemu binaries. Previously, I believe the way to support this was to install host-level emulation support; however, with the capabilities now in buildkit itself, my understanding is that should no longer be necessary.
RUN commands in the Dockerfile work via emulation in many cases with the buildkit-provided qemu, which modifies the RUN command to inject the buildkit qemu binary into the call. For some (many?) cases where the program being run itself tries to run something else in the PATH, this fails.
Minimal Dockerfile examples that fail when run through emulation:
FROM alpine:3.16
RUN /usr/bin/env sh -c 'echo Hello World'
FROM debian:bullseye
RUN /usr/bin/env sh -c 'echo Hello World'
Note: this doesn't just affect env, but also other commands that execute files within the PATH, such as xargs.
From debugging the process, I can see that within the buildkit qemu emulation, the execve syscall modifies the arguments to inject .buildkit_qemu_emulator into the call, ensuring that the invocation is run through the emulator. This change modifies the behaviour such that it breaks execution of binaries within the PATH, e.g. via an execvp syscall.
Typically the execvp call iterates over each element of the PATH; each time the target file doesn't exist within a path element, it gets an ENOENT and continues the loop until either 1) the file is found (and executed), or 2) the entire PATH is searched and the file isn't found. When executed via .buildkit_qemu_emulator, however, the qemu binary itself is always found, but internally the command being called (e.g. the absolute path /usr/local/sbin/sh) isn't found. .buildkit_qemu_emulator fails, indicating Error while loading /usr/local/sbin/sh: No such file or directory, but the error returned through the execve call is the child process (.buildkit_qemu_emulator) failing with an error. This bubbles back to the execvp call, matches its unhandled error case, and aborts the entire process.
Workaround: by modifying the PATH so that the first element contains the file to be executed, the process succeeds.
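The search semantics described above can be sketched with a plain shell loop that mimics what execvp does (an illustration only, not buildkit's actual code):

```shell
#!/bin/sh
# Mimic execvp's PATH search: each PATH element is tried in order; a missing
# file behaves like ENOENT and the search moves on to the next element.
# Injecting the emulator wrapper breaks this, because the wrapper binary
# itself resolves on the very first attempt, so the "keep searching on
# ENOENT" step never happens.
target=sh
found=""
oldifs=$IFS; IFS=:
for dir in $PATH; do
  if [ -x "$dir/$target" ]; then
    echo "found: $dir/$target"
    found="$dir/$target"
    break
  else
    echo "ENOENT: $dir/$target"
  fi
done
IFS=$oldifs
[ -n "$found" ] || echo "not found anywhere in PATH"
```

This also shows why the workaround helps: with the target in the first PATH element, the search succeeds before the failure mode can trigger.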
The relevant portion of the build for the alpine example, run with strace enabled for qemu:
#5 [2/2] RUN /usr/bin/env sh -c 'echo Hello World'
#5 0.302 1 set_tid_address(365117932080,1,16,16,0,365117246396) = 1
#5 0.304 1 brk(NULL) = 0x00000055000eb000
#5 0.304 1 brk(0x00000055000ed000) = 0x00000055000ed000
#5 0.304 1 mmap(0x00000055000eb000,4096,PROT_NONE,MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED,-1,0) = 0x00000055000eb000
#5 0.305 1 mprotect(0x00000055000e6000,16384,PROT_READ) = 0
#5 0.307 1 getuid() = 0
#5 0.308 1 mmap(NULL,4096,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANONYMOUS,-1,0) = 0x0000005502b9a000
#5 0.308 1 mmap(NULL,4096,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANONYMOUS,-1,0) = 0x0000005502b9b000
#5 0.309 1 getpid() = 1
#5 0.309 1 mmap(NULL,8192,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANONYMOUS,-1,0) = 0x0000005502b9c000
#5 0.309 1 rt_sigprocmask(SIG_UNBLOCK,0x0000005502aeb820,NULL) = 0
#5 0.309 1 rt_sigaction(SIGCHLD,0x0000005502aeb800,NULL) = 0
#5 0.310 1 getppid() = 0
#5 0.310 1 uname(0x5502aeb9f0) = 0
#5 0.311 1 getcwd(0x5502aeab30,4096) = 2
#5 0.312 1 rt_sigaction(SIGINT,NULL,0x0000005502aeba30) = 0
#5 0.312 1 rt_sigaction(SIGINT,0x0000005502aeba10,NULL) = 0
#5 0.312 1 rt_sigaction(SIGQUIT,NULL,0x0000005502aeba30) = 0
#5 0.312 1 rt_sigaction(SIGQUIT,0x0000005502aeba10,NULL) = 0
#5 0.312 1 rt_sigaction(SIGTERM,NULL,0x0000005502aeba30) = 0
#5 0.327 Error while loading /usr/local/sbin/sh: No such file or directory
#5 ERROR: process "/dev/.buildkit_qemu_emulator -strace /bin/sh -c /usr/bin/env sh -c 'echo Hello World'" did not complete successfully: exit code: 1
The issue also impacts script shebang execution (which is how I initially encountered this), but I'm unclear on how much of that process differs inside the emulation from the example cases here.
#!/usr/bin/env sh
echo "Hello World!"
^ Also fails when executed via a RUN instruction.
Hi @tonistiigi and other Maintainers.
Thank you in advance for making binfmt; it's very useful for building multi-architecture images. 😄
I want to ask something. I tried running docker run --privileged --rm tonistiigi/binfmt --install all in my virtual machine, because I have an image with a different architecture and want to run it as a container anyway.
But then I realized the resource usage was higher (and never went down) compared to running the image on the same architecture. Is there any explanation for this?
I have a query about the depth of support binfmt has, primarily around running the actual services inside the emulated containers.
I am able to spin up amd64 containers easily enough on an aarch64 ARM platform, and while the containers do run and I can drop into a shell no problem, the services of some don't seem to start (definitely not all do this). It looks like it may be related to a timeout while starting the server itself inside the container; a lack of resources? The host I'm running on still seems to have plenty of CPU and memory overhead.
Can I ask whether it's normal for emulated containers to run but their services not to? Are there still caveats and aspects of binfmt that don't quite emulate the full architecture, whereby some components just don't or can't reliably run? Or, if the container itself runs, should the whole service inside realistically be running too? Thanks!
Hello,
I have some x86 apps that are emulated on an RPi.
I would like to set up a docker compose file so that binfmt is also triggered during the init phase of my system.
How do you handle '--install all'? Is it the default if omitted?
The "privileged" key seems quite straightforward according to https://docs.docker.com/compose/compose-file/compose-file-v3/
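For what it's worth, a minimal compose sketch (the service name is my choice; judging by the usage output quoted elsewhere on this page, --install takes an explicit architecture list such as all and is not implied when omitted):

```yaml
# docker-compose.yml sketch; the "binfmt" service name is hypothetical
services:
  binfmt:
    image: tonistiigi/binfmt
    privileged: true
    command: --install all   # must be passed explicitly; not a default
    restart: "no"            # the container exits once handlers are registered
```

Other services that depend on emulation could then use depends_on to order their startup after this one.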
Hi maintainer,
Thank you for the great work!
We use your Docker image (tonistiigi/binfmt) to build manylinux2014_aarch64 Python wheels. Recently we ran into an issue while using docker/setup-qemu-action@v1 from GitHub Actions, which uses the latest tonistiigi/binfmt image.
Previous images (<=qemu-v6.0.0-12) worked pretty well, but we hit a crash starting from qemu-v6.1.0 when using jupyter-related commands. For instance,
jupyter nbconvert --to notebook --inplace --execute xxx.ipynb
2021-09-28T15:17:33.3669336Z [NbConvertApp] Converting notebook onnx/examples/Protobufs.ipynb to notebook
2021-09-28T15:17:37.0294397Z Operation not permitted (src/thread.cpp:122)
2021-09-28T15:17:37.4379426Z /usr/bin/bash: line 9: 2586 Aborted (core dumped) jupyter nbconvert --to notebook --inplace --execute onnx/examples/Protobufs.ipynb
2021-09-28T15:17:38.2560671Z ##[error]Process completed with exit code 134.
pytest xxx.ipynb
2021-09-28T15:19:22.4754205Z Fatal Python error: Aborted
2021-09-28T15:19:22.4754611Z
2021-09-28T15:19:22.4765058Z Current thread 0x00000055012e9410 (most recent call first):
2021-09-28T15:19:22.4883802Z File "/ws/.env/lib/python3.6/site-packages/zmq/sugar/socket.py", line 83 in __init__
2021-09-28T15:19:22.4884993Z File "/ws/.env/lib/python3.6/site-packages/zmq/sugar/context.py", line 264 in socket
2021-09-28T15:19:22.4886297Z File "/ws/.env/lib/python3.6/site-packages/jupyter_client/connect.py", line 617 in _create_connected_socket
2021-09-28T15:19:22.4887687Z File "/ws/.env/lib/python3.6/site-packages/jupyter_client/manager.py", line 268 in _connect_control_socket
2021-09-28T15:19:22.4889289Z File "/ws/.env/lib/python3.6/site-packages/jupyter_client/manager.py", line 314 in _async_post_start_kernel
2021-09-28T15:19:22.4890498Z File "/opt/python/cp36-cp36m/lib/python3.6/asyncio/tasks.py", line 180 in _step
2021-09-28T15:19:22.4891592Z File "/ws/.env/lib/python3.6/site-packages/nest_asyncio.py", line 169 in step
2021-09-28T15:19:22.4892722Z File "/opt/python/cp36-cp36m/lib/python3.6/asyncio/events.py", line 145 in _run
2021-09-28T15:19:22.4893744Z File "/ws/.env/lib/python3.6/site-packages/nest_asyncio.py", line 100 in _run_once
2021-09-28T15:19:22.4894815Z File "/ws/.env/lib/python3.6/site-packages/nest_asyncio.py", line 64 in run_until_complete
2021-09-28T15:19:22.4895930Z File "/ws/.env/lib/python3.6/site-packages/jupyter_client/utils.py", line 23 in wrapped
2021-09-28T15:19:22.4897106Z File "/ws/.env/lib/python3.6/site-packages/jupyter_client/manager.py", line 337 in _async_start_kernel
2021-09-28T15:19:22.4898380Z File "/opt/python/cp36-cp36m/lib/python3.6/asyncio/tasks.py", line 180 in _step
2021-09-28T15:19:22.4899462Z File "/ws/.env/lib/python3.6/site-packages/nest_asyncio.py", line 169 in step
2021-09-28T15:19:22.4900543Z File "/opt/python/cp36-cp36m/lib/python3.6/asyncio/events.py", line 145 in _run
2021-09-28T15:19:22.4901638Z File "/ws/.env/lib/python3.6/site-packages/nest_asyncio.py", line 100 in _run_once
2021-09-28T15:19:22.4902812Z File "/ws/.env/lib/python3.6/site-packages/nest_asyncio.py", line 64 in run_until_complete
2021-09-28T15:19:22.4903993Z File "/ws/.env/lib/python3.6/site-packages/jupyter_client/utils.py", line 23 in wrapped
2021-09-28T15:19:22.4905186Z File "/ws/.env/lib/python3.6/site-packages/nbval/kernel.py", line 53 in start_new_kernel
2021-09-28T15:19:22.4906310Z File "/ws/.env/lib/python3.6/site-packages/nbval/kernel.py", line 88 in __init__
2021-09-28T15:19:22.4907419Z File "/ws/.env/lib/python3.6/site-packages/nbval/plugin.py", line 237 in setup
2021-09-28T15:19:22.4908560Z File "/ws/.env/lib/python3.6/site-packages/_pytest/runner.py", line 449 in prepare
2021-09-28T15:19:22.4909901Z File "/ws/.env/lib/python3.6/site-packages/_pytest/runner.py", line 150 in pytest_runtest_setup
2021-09-28T15:19:22.4911191Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_callers.py", line 39 in _multicall
2021-09-28T15:19:22.4912355Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_manager.py", line 80 in _hookexec
2021-09-28T15:19:22.4913496Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_hooks.py", line 265 in __call__
2021-09-28T15:19:22.4914618Z File "/ws/.env/lib/python3.6/site-packages/_pytest/runner.py", line 255 in <lambda>
2021-09-28T15:19:22.4915988Z File "/ws/.env/lib/python3.6/site-packages/_pytest/runner.py", line 311 in from_call
2021-09-28T15:19:22.4917196Z File "/ws/.env/lib/python3.6/site-packages/_pytest/runner.py", line 255 in call_runtest_hook
2021-09-28T15:19:22.4918407Z File "/ws/.env/lib/python3.6/site-packages/_pytest/runner.py", line 215 in call_and_report
2021-09-28T15:19:22.4919633Z File "/ws/.env/lib/python3.6/site-packages/_pytest/runner.py", line 120 in runtestprotocol
2021-09-28T15:19:22.4920928Z File "/ws/.env/lib/python3.6/site-packages/_pytest/runner.py", line 109 in pytest_runtest_protocol
2021-09-28T15:19:22.4922140Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_callers.py", line 39 in _multicall
2021-09-28T15:19:22.4923314Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_manager.py", line 80 in _hookexec
2021-09-28T15:19:22.4924455Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_hooks.py", line 265 in __call__
2021-09-28T15:19:22.4925609Z File "/ws/.env/lib/python3.6/site-packages/_pytest/main.py", line 348 in pytest_runtestloop
2021-09-28T15:19:22.4926814Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_callers.py", line 39 in _multicall
2021-09-28T15:19:22.4928000Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_manager.py", line 80 in _hookexec
2021-09-28T15:19:22.4988190Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_hooks.py", line 265 in __call__
2021-09-28T15:19:22.5042380Z File "/ws/.env/lib/python3.6/site-packages/_pytest/main.py", line 323 in _main
2021-09-28T15:19:22.5043564Z File "/ws/.env/lib/python3.6/site-packages/_pytest/main.py", line 269 in wrap_session
2021-09-28T15:19:22.5044762Z File "/ws/.env/lib/python3.6/site-packages/_pytest/main.py", line 316 in pytest_cmdline_main
2021-09-28T15:19:22.5045961Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_callers.py", line 39 in _multicall
2021-09-28T15:19:22.5047119Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_manager.py", line 80 in _hookexec
2021-09-28T15:19:22.5048255Z File "/ws/.env/lib/python3.6/site-packages/pluggy/_hooks.py", line 265 in __call__
2021-09-28T15:19:22.5049362Z File "/ws/.env/lib/python3.6/site-packages/_pytest/config/__init__.py", line 163 in main
2021-09-28T15:19:22.5050530Z File "/ws/.env/lib/python3.6/site-packages/_pytest/config/__init__.py", line 185 in console_main
2021-09-28T15:19:22.5051334Z File "/ws/.env/bin/pytest", line 8 in <module>
2021-09-28T15:19:22.8454387Z /usr/bin/bash: line 5: 60 Aborted (core dumped) pytest xxx.ipynb
I have tried different version combinations of jupyterlab, jupyter-related tools and pytest, but none of them helped. I guess some system dependencies might be missing in the latest environment, but I have no clue from the crash log...
Do you know what causes this crash in recent updates? How can I resolve it? Thank you for your help!
P.S. I also filed an issue in jupyterlab about it to understand the crash: jupyterlab/jupyterlab#11186
Hello,
This is coming from ibmruntimes/Semeru-Runtimes#11 - I think I've tracked the issue down to the emulators installed by binfmt 6.1.0.
To reproduce the issue, use this Dockerfile:
FROM --platform=linux/arm64 centos:latest@sha256:65a4aad1156d8a0679537cb78519a17eb7142e05a968b26a5361153006224fdc
RUN yum install -y tar gzip wget alsa-lib dejavu-sans-fonts fontconfig freetype libX11 libXext libXi libXrender libXtst
RUN wget https://openj9-artifactory.osuosl.org/artifactory/ci-openj9/Build_JDK8_aarch64_linux_Personal/33/OpenJ9-JDK8-aarch64_linux-20211206-231001.tar.gz && \
tar -C /opt -xvf OpenJ9-JDK8-aarch64_linux-20211206-231001.tar.gz
RUN /opt/j2sdk-image/bin/java -version
and run the following on an x86 machine:
docker run --privileged --rm tonistiigi/binfmt:qemu-v6.1.0 --uninstall "qemu-*"
docker run --privileged --rm tonistiigi/binfmt:qemu-v6.1.0 --install all
docker build --rm --no-cache --platform=linux/arm64 -f Dockerfile .
The last step in the build file should fail with a java error.
If you change the above steps to use qemu-6.0.0, the build succeeds.
If it helps, we traced the issue to an ENOSYS error when the JVM calls pthread_create.
Thanks!
I am trying to build an arm64 image on an amd64 host. I have installed QEMU, but running the command docker run --rm --privileged tonistiigi/binfmt:latest --install arm64 gives:
installing: arm64 cannot write to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: invalid argument
{
"supported": [
"linux/amd64",
"linux/386"
],
"emulators": [
"qemu-aarch64",
"qemu-aarch64_be",
"qemu-alpha",
"qemu-arm",
"qemu-armeb",
"qemu-hppa",
"qemu-m68k",
"qemu-microblaze",
"qemu-microblazeel",
"qemu-mips",
"qemu-mips64",
"qemu-mips64el",
"qemu-mipsel",
"qemu-mipsn32",
"qemu-mipsn32el",
"qemu-or1k",
"qemu-ppc",
"qemu-ppc64",
"qemu-ppc64le",
"qemu-riscv32",
"qemu-riscv64",
"qemu-s390x",
"qemu-sh4",
"qemu-sh4eb",
"qemu-sparc",
"qemu-sparc32plus",
"qemu-sparc64",
"qemu-xtensa",
"qemu-xtensaeb"
]
}
Docker version : 20.10.7
OS: Ubuntu 16.04.5
Any help will be appreciated. :)
After git cloning this repo, I ran docker buildx bake and got this error:
user@user-PC:~/my_develop/my_go/binfmt$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
default * docker
default default running linux/mips64le, linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
user@user-PC:~/my_develop/my_go/binfmt$ docker buildx bake
[+] Building 15.9s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 4.75kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 88B 0.0s
=> ERROR resolve image config for docker.io/docker/dockerfile:1 15.8s
------
> resolve image config for docker.io/docker/dockerfile:1:
------
error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to solve with frontend gateway.v0: no match for platform in manifest sha256:42399d4635eddd7a9b8a24be879d2f9a930d0ed040a61324cfdf59ef1357b3b2: not found
Add support for loong64 builds.
To be able to build binaries for loong64.
Link: golang/go#46229
The latest tag hasn't been updated since Aug 12, 2022:
https://hub.docker.com/layers/tonistiigi/binfmt/latest/images/sha256-66ac1b854f9ce567503783aeee464ff1569076bf3613d122a5174890cbea34f9?context=explore
When we run the commands for updating gem libraries (e.g. gem update or gem pristine), the process for x86 completes quickly (153.3s). On the other hand, executing the same process on arm is extremely slow (8606.2s) compared to x86.
We would like to resolve this slowness on arm. Does anyone have any ideas?
When I try to use our image artifacts in our environment, it takes a few minutes to update gem libraries with x86. On the other hand, it takes more than 3 hours to do the same process with arm. We don't see this problem when we build our image artifacts on an ARM node directly, without tonistiigi/binfmt.
So we would like to resolve this.
I can reproduce this easily in my development environment with minimal steps.
$ docker -v
Docker version 20.10.25, build b82b9f3
$ docker run --privileged --rm tonistiigi/binfmt --version
binfmt/a161c41 qemu/v7.0.0 go/1.18.5
$ uname -a
Linux xxx.xxx.xxx.xxx 5.10.209-175.812.amzn2int.x86_64 #1 SMP Tue Jan 30 21:29:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
A Dockerfile like this:
FROM public.ecr.aws/amazonlinux/amazonlinux:2023
RUN dnf install -y ruby-devel gcc make libyaml-devel
RUN gem list --local | awk '{print $1}' | xargs gem update -V
# Installing emulators
$ docker run --privileged --rm tonistiigi/binfmt --install amd64,arm64
Unable to find image 'tonistiigi/binfmt:latest' locally
latest: Pulling from tonistiigi/binfmt
8d4d64c318a5: Pull complete
e9c608ddc3cb: Pull complete
Digest: sha256:66e11bea77a5ea9d6f0fe79b57cd2b189b5d15b93a2bdb925be22949232e4e55
Status: Downloaded newer image for tonistiigi/binfmt:latest
installing: amd64 cannot register "/usr/bin/qemu-x86_64" to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: no such file or directory
installing: arm64 OK
{
"supported": [
"linux/amd64",
"linux/arm64",
"linux/386"
],
"emulators": [
"kshcomp",
"qemu-aarch64"
]
}
# Create builder instance for multi-platform
$ docker buildx create --use --name builder-multiarch
builder-multiarch
# Check the list of builder instances
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
builder-multiarch * docker-container
builder-multiarch0 unix:///var/run/docker.sock inactive
default docker
default default running v0.8+unknown linux/amd64, linux/386, linux/arm64
$ docker buildx build --no-cache --platform=linux/amd64,linux/arm64 -t qemu-gem-update ./
[+] Building 9004.4s (10/10) FINISHED docker-container:builder-multiarch
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 204B 0.0s
=> [linux/arm64 internal] load metadata for public.ecr.aws/amazonlinux/amazonlinux:2023 3.0s
=> [linux/amd64 internal] load metadata for public.ecr.aws/amazonlinux/amazonlinux:2023 3.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [linux/arm64 1/3] FROM public.ecr.aws/amazonlinux/amazonlinux:2023@sha256:38701a173dc0dea352df1bb934c3269053cf4137a9325 20.9s
=> => resolve public.ecr.aws/amazonlinux/amazonlinux:2023@sha256:38701a173dc0dea352df1bb934c3269053cf4137a9325e68ebef971a3d 0.0s
=> => sha256:f534013dbea5ef16c757c5298f993b98988a6e0833221735408a89b0a475dd63 51.30MB / 51.30MB 6.0s
=> => extracting sha256:f534013dbea5ef16c757c5298f993b98988a6e0833221735408a89b0a475dd63 14.9s
=> [linux/amd64 1/3] FROM public.ecr.aws/amazonlinux/amazonlinux:2023@sha256:38701a173dc0dea352df1bb934c3269053cf4137a9325 15.5s
=> => resolve public.ecr.aws/amazonlinux/amazonlinux:2023@sha256:38701a173dc0dea352df1bb934c3269053cf4137a9325e68ebef971a3d 0.0s
=> => sha256:8784573bb84d178812057375084b2df4e8a0ffb22734f522709063f9581c296f 52.21MB / 52.21MB 5.4s
=> => extracting sha256:8784573bb84d178812057375084b2df4e8a0ffb22734f522709063f9581c296f 10.0s
=> [linux/amd64 2/3] RUN dnf install -y ruby-devel gcc make libyaml-devel 53.2s
=> [linux/arm64 2/3] RUN dnf install -y ruby-devel gcc make libyaml-devel 374.0s
=> [linux/amd64 3/3] RUN gem list --local | awk '{print $1}' | xargs gem update -V 153.3s
=> [linux/arm64 3/3] RUN gem list --local | awk '{print $1}' | xargs gem update -V 8606.2s
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
We also see this problem with gem pristine; the point is running RUN gem pristine --all -V for both linux/amd64 and linux/arm64.
FROM public.ecr.aws/amazonlinux/amazonlinux:2023
RUN dnf install -y ruby-devel gcc make libyaml-devel
RUN gem pristine --all -V
=> [linux/arm64 2/3] RUN dnf install -y ruby-devel gcc make libyaml-devel 294.1s
=> [linux/amd64 2/3] RUN dnf install -y ruby-devel gcc make libyaml-devel 12.9s
=> [linux/amd64 3/3] RUN gem pristine --all -V 23.8s
=> [linux/arm64 3/3] RUN gem pristine --all -V 1839.2s
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
We would like to resolve this slowness on arm.
user@debian:~$ docker run --privileged --rm tonistiigi/binfmt --version
Unable to find image 'tonistiigi/binfmt:latest' locally
latest: Pulling from tonistiigi/binfmt
9e0174275344: Pull complete
f163282b5573: Pull complete
Digest: sha256:f52cfb6019e8c8d12b13093fd99c2979ab5631c99f7d8b46d10f899d0d56d6ab
Status: Downloaded newer image for tonistiigi/binfmt:latest
flag provided but not defined: -version
Usage of /usr/bin/binfmt:
-install string
architectures to install
-mount string
binfmt_misc mount point (default "/proc/sys/fs/binfmt_misc")
-uninstall string
architectures to uninstall
user@debian:~$
I got a problem when running --platform linux/amd64 builds on an IBM Power (ppc64le) host. ARM64 doesn't seem to have this problem when emulated.
dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
dnf install docker-ce docker-ce-cli -y
wget https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-ppc64le -O ~/.docker/cli-plugins/docker-buildx
chmod +x ~/.docker/cli-plugins/docker-buildx
docker run --rm --privileged tonistiigi/binfmt:latest --install all
docker buildx create --name mybuilder --use --bootstrap
$ uname -m
ppc64le
$ docker --version
Docker version v20.10.21, build baeda1f
$ docker buildx version
github.com/docker/buildx v0.9.1 ed00243a0ce2a0aee75311b06e32d33b44729689
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
mybuilder * docker-container
mybuilder0 unix:///var/run/docker.sock running v0.10.5 linux/ppc64le, linux/amd64, linux/amd64/v2, linux/arm64, linux/riscv64, linux/s390x, linux/mips64le, linux/mips64
default docker
default default running v20.10.21 linux/ppc64le, linux/amd64, linux/arm64, linux/riscv64, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
FROM almalinux:8.6
RUN dnf update -y && dnf install vim -y
When I build for ppc64le and arm64 using these commands it works:
docker buildx build --load --platform linux/ppc64le -t test-alma:ppc64le -f Dockerfile.alma .
docker buildx build --load --platform linux/arm64 -t test-alma:arm64 -f Dockerfile.alma .
However when I want to build for x86/amd64 using this command:
docker buildx build --load --platform linux/amd64 -t test-alma:amd64 -f Dockerfile.alma .
I'm getting this error:
[...]
> [2/2] RUN dnf update -y && dnf install vim -y:
#0 2.191 Traceback (most recent call last):
#0 2.191 File "/usr/lib64/python3.6/site-packages/libdnf/error.py", line 14, in swig_import_helper
#0 2.191 return importlib.import_module(mname)
#0 2.191 File "/usr/lib64/python3.6/importlib/__init__.py", line 126, in import_module
#0 2.191 return _bootstrap._gcd_import(name[level:], package, level)
#0 2.191 File "<frozen importlib._bootstrap>", line 994, in _gcd_import
#0 2.191 File "<frozen importlib._bootstrap>", line 971, in _find_and_load
#0 2.191 File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
#0 2.191 File "<frozen importlib._bootstrap>", line 658, in _load_unlocked
#0 2.191 File "<frozen importlib._bootstrap>", line 571, in module_from_spec
#0 2.191 File "<frozen importlib._bootstrap_external>", line 922, in create_module
#0 2.191 File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
#0 2.191 ImportError: libgmp.so.10: ELF load command alignment not page-aligned
#0 2.191
#0 2.191 During handling of the above exception, another exception occurred:
#0 2.191
#0 2.191 Traceback (most recent call last):
#0 2.191 File "/usr/bin/dnf", line 57, in <module>
#0 2.191 from dnf.cli import main
#0 2.191 File "/usr/lib/python3.6/site-packages/dnf/__init__.py", line 30, in <module>
#0 2.191 import dnf.base
#0 2.191 File "/usr/lib/python3.6/site-packages/dnf/base.py", line 29, in <module>
#0 2.191 import libdnf.transaction
#0 2.191 File "/usr/lib64/python3.6/site-packages/libdnf/__init__.py", line 8, in <module>
#0 2.191 from . import error
#0 2.191 File "/usr/lib64/python3.6/site-packages/libdnf/error.py", line 17, in <module>
#0 2.191 _error = swig_import_helper()
#0 2.191 File "/usr/lib64/python3.6/site-packages/libdnf/error.py", line 16, in swig_import_helper
#0 2.191 return importlib.import_module('_error')
#0 2.191 File "/usr/lib64/python3.6/importlib/__init__.py", line 126, in import_module
#0 2.191 return _bootstrap._gcd_import(name[level:], package, level)
#0 2.191 ModuleNotFoundError: No module named '_error'
------
Dockerfile.alma:2
--------------------
1 | FROM almalinux:8.6
2 | >>> RUN dnf update -y && dnf install vim -y
3 |
4 |
--------------------
ERROR: failed to solve: process "/bin/sh -c dnf update -y && dnf install vim -y" did not complete successfully: exit code: 1
Running a simple container with --platform linux/amd64
seems to work:
$ docker run -ti --platform linux/amd64 almalinux:8.6 uname -m
x86_64
Any idea how to fix this issue? Is this related to binfmt or to AlmaLinux?
Thanks!
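A hedged diagnostic for the error above (this is the likely culprit, not something the log proves): ppc64le kernels commonly run with 64K pages, while the amd64 libgmp.so.10 is linked with 4K segment alignment, which qemu-x86_64 can refuse to map with exactly this "ELF load command alignment not page-aligned" message. Checking the host page size narrows it down:

```shell
# Print the host kernel's page size; 65536 is typical on ppc64le
# distributions, 4096 on x86_64. A 64K-page host is the suspected
# trigger for the alignment error, not a confirmed root cause.
getconf PAGESIZE
```

If the host reports 65536, a kernel built with 4K pages (or a QEMU build that tolerates sub-page ELF alignment) would be the things to try.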
docker run --privileged --rm tonistiigi/binfmt --install arm64
Unable to find image 'tonistiigi/binfmt:latest' locally
latest: Pulling from tonistiigi/binfmt
6dda554f4baf: Pull complete
2b0720d7a501: Pull complete
Digest: sha256:66e11bea77a5ea9d6f0fe79b57cd2b189b5d15b93a2bdb925be22949232e4e55
Status: Downloaded newer image for tonistiigi/binfmt:latest
installing: arm64 cannot register "/usr/bin/qemu-aarch64" to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: no such file or directory
{
"supported": [
"linux/arm64",
"linux/amd64",
"linux/riscv64",
"linux/ppc64le",
"linux/s390x",
"linux/386",
"linux/mips64le",
"linux/mips64",
"linux/arm/v7",
"linux/arm/v6"
],
"emulators": [
"qemu-arm",
"qemu-i386",
"qemu-mips64",
"qemu-mips64el",
"qemu-ppc64le",
"qemu-riscv64",
"qemu-s390x",
"qemu-x86_64"
]
}
Hey, I'm interested in using docker buildx with multi-platform support. I have everything set up except for QEMU, as I cannot use docker run
within my k8s pod: docker.sock is no longer available and cannot be mounted from the host/nodes.
How could I get this to work?
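Since the binfmt image writes to /proc/sys/fs/binfmt_misc itself, one common pattern is to run it as a privileged workload instead of via docker run. A sketch (the manifest names and the pause image tag are assumptions, not something from this project):

```shell
# Write a DaemonSet manifest that runs the binfmt installer once per
# node as a privileged init container; no docker.sock is needed,
# because the binary registers the handlers in the node's kernel.
cat > binfmt-daemonset.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: binfmt-install
spec:
  selector:
    matchLabels:
      app: binfmt-install
  template:
    metadata:
      labels:
        app: binfmt-install
    spec:
      initContainers:
        - name: binfmt
          image: tonistiigi/binfmt
          args: ["--install", "all"]
          securityContext:
            privileged: true
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
EOF
# Then: kubectl apply -f binfmt-daemonset.yaml
```

The registrations are kernel-global, so once the DaemonSet has run on a node, every pod on that node can exec foreign-arch binaries.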
I'm finding that I need to run:
docker run --privileged --rm tonistiigi/binfmt --install all
after every machine reboot.
Am I holding something wrong here?
Or are there instructions that make the install "permanent"?
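One way to make the registrations survive reboots is a one-shot systemd unit that re-runs the installer after Docker starts. A sketch (the unit name and paths are assumptions):

```shell
# Create a one-shot unit that re-registers the QEMU handlers on boot.
cat > binfmt-install.service <<'EOF'
[Unit]
Description=Register QEMU binfmt_misc handlers via tonistiigi/binfmt
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker run --privileged --rm tonistiigi/binfmt --install all

[Install]
WantedBy=multi-user.target
EOF

# Install and enable it (requires root):
#   sudo cp binfmt-install.service /etc/systemd/system/
#   sudo systemctl enable binfmt-install.service
```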
Quite a few times, GitHub Actions stops because Docker Hub returns an HTTP 429 (too many requests) error, as here: https://github.com/clemlesne/azure-pipelines-agent/actions/runs/7384635563/job/20087929649.
This could be solved by publishing the containers to GitHub Packages, allowing lower latency and no HTTP 429 errors. The packages could be published to both Docker Hub and GitHub Packages.
Hello,
Using this project to run a container built for linux/amd64 on an M1 MacBook Pro, every time I restart Docker Desktop emulation stops working correctly and I get errors (here is one, but it is specific to my container's stack...):
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
The container doesn’t change. The JVM inside is always the same.
So I tried different things, and it happens to work by uninstalling first and then installing again (every time I restart Docker Desktop):
docker run --privileged --rm tonistiigi/binfmt --uninstall \*
docker run --privileged --rm tonistiigi/binfmt --install all
docker run myImage...
Note: when it doesn't work properly, the container responds faster, as if there were no emulation. Yet with no emulation at all (if I uninstall and then run my image), it wouldn't even start.
Can't we install once and use it for good?
RUN apt-get update && apt-get install -y wget git unzip zip
fails with the following error message:
#12 66.77 addgroup: `/usr/sbin/groupadd -g 101 ssh' exited from signal 134. Exiting.
#12 66.78 dpkg: error processing package openssh-client (--configure):
#12 66.78 installed openssh-client package post-installation script subprocess returned error exit status 1
Previously I ran: docker run --privileged --rm tonistiigi/binfmt --install amd64
I am trying to create an amd64 image on an aarch64 CPU machine.
Uninstall command not working
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-*
zsh: no matches found: qemu-*
Installed version
docker run --privileged --rm tonistiigi/binfmt --version
binfmt/a161c41 qemu/v7.0.0 go/1.18.5
docker run --privileged --rm tonistiigi/binfmt --uninstall *
uninstalling: Dockerfile not found
{
"supported": [
"linux/arm64",
"linux/riscv64",
"linux/ppc64le",
"linux/s390x",
"linux/386",
"linux/mips64le",
"linux/mips64",
"linux/arm/v7",
"linux/arm/v6"
],
"emulators": [
"qemu-arm",
"qemu-i386",
"qemu-mips64",
"qemu-mips64el",
"qemu-ppc64le",
"qemu-riscv64",
"qemu-s390x",
"qemu-x86_64",
"rosetta"
]
}
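Both failures above are shell expansion, not binfmt: zsh aborts on the unmatched qemu-* glob, and the bare * was expanded to the files in the working directory, which is why binfmt reported "uninstalling: Dockerfile not found". A minimal demonstration without Docker:

```shell
# The shell expands an unquoted glob before the program ever sees it.
rm -rf /tmp/binfmt-glob-demo
mkdir -p /tmp/binfmt-glob-demo && cd /tmp/binfmt-glob-demo
touch Dockerfile
echo *       # prints: Dockerfile  (glob expanded by the shell)
echo '*'     # prints: *           (quoted, passed through literally)

# So quote the pattern when uninstalling:
#   docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
```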
I am trying to build an image with qemu v7.0.0-rc3 to avoid the "ppc64le: fatal" error, but the build fails. Any solution?
#0 17.56 + cp -a community/qemu/0001-hw-arm-virt-Add-a-control-for-the-the-highmem-PCIe-M.patch community/qemu/0002-hw-arm-virt-Add-a-control-for-the-the-highmem-redist.patch community/qemu/0003-hw-arm-virt-Honor-highmem-setting-when-computing-the.patch community/qemu/0004-hw-arm-virt-Use-the-PA-range-to-compute-the-memory-m.patch community/qemu/0005-hw-arm-virt-Disable-highmem-devices-that-don-t-fit-i.patch community/qemu/0006-hw-arm-virt-Drop-superfluous-checks-against-highmem.patch community/qemu/0006-linux-user-signal.c-define-__SIGRTMIN-MAX-for-non-GN.patch community/qemu/CVE-2021-20255.patch community/qemu/MAP_SYNC-fix.patch community/qemu/fix-ppc.patch community/qemu/fix-sockios-header.patch community/qemu/guest-agent-shutdown.patch community/qemu/mips-softfloat.patch community/qemu/musl-initialise-msghdr.patch community/qemu/xattr_size_max.patch ../patches/alpine-patches/
#0 17.56 + cd -
#0 17.56 + rm -rf aports
#0 17.56 /src
#0 21.07 + '[' -n ]
#0 21.07 + cd qemu
#0 21.07 + echo cpu-max,alpine-patches,zero-init-msghdr,sched
#0 21.07 + tr , '\n'
#0 21.07 apply ../patches/cpu-max/0001-default-to-cpu-max-on-x86-and-arm.patch
#0 21.07 + echo 'apply ../patches/cpu-max/0001-default-to-cpu-max-on-x86-and-arm.patch'
#0 21.07 + patch -p1
#0 21.08 patching file linux-user/aarch64/target_elf.h
#0 21.08 patching file linux-user/arm/target_elf.h
#0 21.08 patching file linux-user/i386/target_elf.h
#0 21.08 patching file linux-user/x86_64/target_elf.h
#0 21.08 + echo 'apply ../patches/alpine-patches/0001-hw-arm-virt-Add-a-control-for-the-the-highmem-PCIe-M.patch'
#0 21.08 + patch -p1
#0 21.08 apply ../patches/alpine-patches/0001-hw-arm-virt-Add-a-control-for-the-the-highmem-PCIe-M.patch
#0 21.08 patching file hw/arm/virt-acpi-build.c
#0 21.08 Reversed (or previously applied) patch detected! Assume -R? [n]
#0 21.08 Apply anyway? [n]
#0 21.08 Skipping patch.
#0 21.08 3 out of 3 hunks ignored -- saving rejects to file hw/arm/virt-acpi-build.c.rej
#0 21.09 patching file hw/arm/virt.c
#0 21.09 Reversed (or previously applied) patch detected! Assume -R? [n]
#0 21.09 Apply anyway? [n]
#0 21.09 Skipping patch.
#0 21.09 4 out of 4 hunks ignored -- saving rejects to file hw/arm/virt.c.rej
#0 21.09 patching file include/hw/arm/virt.h
#0 21.09 Hunk #1 FAILED at 143.
#0 21.09 1 out of 1 hunk FAILED -- saving rejects to file include/hw/arm/virt.h.rej
------
error: failed to solve: executor failed running [/bin/sh -c set -ex
if [ "${QEMU_PATCHES_ALL#*alpine-patches}" != "${QEMU_PATCHES_ALL}" ]; then
ver="$(cat qemu/VERSION)"
for l in $(cat patches/aports.config); do
if [ "$(printf "$ver\n$l" | sort -V | head -n 1)" != "$ver" ]; then
commit=$(echo $l | cut -d, -f2)
rmlist=$(echo $l | cut -d, -f3)
break
fi
done
mkdir -p aports && cd aports && git init
git fetch --depth 1 https://github.com/alpinelinux/aports.git "$commit"
git checkout FETCH_HEAD
mkdir -p ../patches/alpine-patches
for f in $(echo $rmlist | tr ";" "\n"); do
rm community/qemu/*${f}*.patch || true
done
cp -a community/qemu/*.patch ../patches/alpine-patches/
cd - && rm -rf aports
fi
if [ -n "${QEMU_PRESERVE_ARGV0}" ]; then
QEMU_PATCHES_ALL="${QEMU_PATCHES_ALL},preserve-argv0"
fi
cd qemu
for p in $(echo $QEMU_PATCHES_ALL | tr ',' '\n'); do
for f in ../patches/$p/*.patch; do echo "apply $f"; patch -p1 < $f; done
done
scripts/git-submodule.sh update ui/keycodemapdb tests/fp/berkeley-testfloat-3 tests/fp/berkeley-softfloat-3 dtc slirp
]: exit code: 1
[m0nius@centos-3 ~]$ sudo docker run --privileged --rm tonistiigi/binfmt --install all
Unable to find image 'tonistiigi/binfmt:latest' locally
latest: Pulling from tonistiigi/binfmt
8d4d64c318a5: Pull complete
e9c608ddc3cb: Pull complete
Digest: sha256:66e11bea77a5ea9d6f0fe79b57cd2b189b5d15b93a2bdb925be22949232e4e55
Status: Downloaded newer image for tonistiigi/binfmt:latest
error: no such device
cannot mount binfmt_misc filesystem at /proc/sys/fs/binfmt_misc
main.run
/src/cmd/binfmt/main.go:183
main.main
/src/cmd/binfmt/main.go:170
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1571
I tried to run this on an old Raspberry Pi:
docker run --privileged --rm tonistiigi/binfmt --install all
latest: Pulling from tonistiigi/binfmt
docker: no matching manifest for linux/arm/v6 in the manifest list entries.
See 'docker run --help'.
Is there any chance of providing a docker image for armv6?
Originally posted by jonathan-albrecht-ibm August 25, 2021
I tried building binfmt against the newly released qemu v6.1.0 but hit a compile error. @crazy-max it's the same error as seen in the recent release GitHub action https://github.com/tonistiigi/binfmt/actions/runs/1164593346
I just wanted to note that I tried building without applying the alpine aport qemu patches (Line 20 in 99c76af).
I don't know if any of the other patches are needed for other reasons though.
Hope that helps.
This problem only occurs on the arm64 platform; everything works fine under amd64.
Log output: https://github.com/pexcn/docker-images/actions/runs/3391825550/jobs/5637357880
Related issue: #109 (comment)
To reproduce, on an arm64 host (Apple Silicon) I ran:
sudo docker run --privileged --rm tonistiigi/binfmt:qemu-v6.1.0 --install all
# {
# "supported": [
# "linux/arm64",
# "linux/amd64",
# "linux/riscv64",
# "linux/ppc64le",
# "linux/s390x",
# "linux/386",
# "linux/mips64le",
# "linux/mips64",
# "linux/arm/v7",
# "linux/arm/v6"
# ],
# "emulators": [
# "qemu-arm",
# "qemu-i386",
# "qemu-mips64",
# "qemu-mips64el",
# "qemu-ppc64le",
# "qemu-riscv64",
# "qemu-s390x",
# "qemu-x86_64"
# ]
#}
This is:
docker run --privileged --rm tonistiigi/binfmt --version
# binfmt/1aa2eba qemu/v6.2.0 go/1.17.8
These fail:
docker run --platform linux/amd64 ubuntu apt-get update
# Assertion failed: p_rcu_reader->depth != 0 (/qemu/include/qemu/rcu.h: rcu_read_unlock: 101)
Same thing for linux/s390x, but all the other emulators enabled above work fine (insofar as the platform is supported by ubuntu).
I'm guessing this affects more than just apt-get update; that's just the first thing I needed.
Some downstream (of ubuntu 22.x) images, such as r-base, are also affected (already on their default ENTRYPOINT/CMD).
A bunch of other official images also seemed to be unaffected on cursory inspection, including alpine, debian and fedora.
The problem also does not occur on an amd64 host (even for linux/s390x).
So, TL;DR, it seems you need all of these for the problem to arise:
- an arm64 host
- a linux/amd64 or linux/s390x image architecture
- a recent ubuntu image (apt-get key list also trips it, but not with a coredump -- 18.04 works without a hitch)
I'm running this on a vanilla Asahi Linux Apple M1 machine.
(I don't have access to another arm64 host, so I couldn't cross-validate whether the issue is somehow related to Apple Silicon.)
Hello,
after installing the emulator on my arm64 host, I can run amd64 docker containers, but iptables does not work.
for example:
➜ ~ uname -m
aarch64
➜ ~ docker run -it --cap-add NET_ADMIN --privileged --platform linux/amd64 alpine
/ # uname -m
x86_64
(install iptables...)
/ # iptables -L
iptables v1.8.8 (legacy): can't initialize iptables table 'filter': iptables who? (do you need to insmod?) Perhaps iptables or your kernel needs to be upgraded.
iptables works as expected on the host and in an arm64 container. How can I fix this? Thanks!!
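A hedged guess, not a confirmed fix: user-mode QEMU only translates the x86_64 binary; every syscall still hits the arm64 host kernel, so the netfilter table modules have to be loaded on the host, not inside the container. The "iptables who? (do you need to insmod?)" message usually means the filter table module is missing:

```shell
# Run on the arm64 HOST: check whether the filter table module is
# loaded (iptable_filter is the standard module name; it may also be
# built into the kernel, in which case it won't appear here).
grep -qw iptable_filter /proc/modules && echo loaded || echo missing

# If missing, load it on the host, then retry inside the container:
#   sudo modprobe iptable_filter
```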
It is recommended that the binfmt flags be empty by default, and that additional flags be enabled through command-line parameters.
Using ARM64 Ubuntu 20.04,
Playing around with this image and using it to run steamcmd seems to work only sometimes, which is extremely weird.
So:
I run the command:
docker run --platform linux/amd64 --env=CPU_MHZ=2000 --entrypoint /bin/sh -it steamcmd/steamcmd:latest
steamcmd
What is "funny" is that I am running the EXACT same command, in the same terminal, that I ran before.
The error is:
Bail out! ERROR:../target/i386/tcg/translate.c:8578:i386_tr_init_disas_context: assertion failed: (IOPL(dc) == iopl)
/root/.steam/steamcmd/steamcmd.sh: line 39: 18 Aborted (core dumped) $DEBUGGER "$STEAMEXE" "$@"
Can I get some help on this?
With this commit, QEMU has removed the --disable-blobs option and some other configuration parameters from its configure script. The binfmt package still uses this parameter to configure QEMU, so the binfmt build against QEMU master fails with the error:
ERROR: unknown option --disable-blobs
Kindly check and remove the configuration parameters that are no longer supported by QEMU.
EDIT: fixed with the following steps:
When I run
docker run --privileged --rm tonistiigi/binfmt:latest --install amd64
The output is:
installing: amd64 OK
{
"supported": [
"linux/arm64",
"linux/amd64"
],
"emulators": [
"qemu-x86_64"
]
}
However, the registration is not persisted, because when I run the following command afterwards:
docker run --privileged --rm tonistiigi/binfmt:latest
the output does not include linux/amd64 or the qemu emulator.
{
"supported": [
"linux/arm64"
],
"emulators": null
}
And so I am not able to run a linux/amd64 image on the host.
Versions:
binfmt/a161c41 qemu/v7.0.0 go/1.18.5
Docker version 25.0.3, build 4debf411d1
Linux 6.7.9-1-aarch64-ARCH #1 SMP PREEMPT_DYNAMIC Tue Mar 12 11:20:09 MDT 2024 aarch64 GNU/Linux
The host is running on Apple M3 in an Arch linux based VM with docker installed with pacman.
Hey there 👋
So, let me start by saying I'm out of my depth here; still, it might be interesting for you to have the information.
I've been building a Docker image with QEMU to target many platforms (using docker/setup-qemu-action@v3). Long story short, I encountered a bug with chromium on the armhf architecture at the build stage, and filed an issue in the Debian bug tracker.
While investigating, the maintainer found some "anomalies" regarding QEMU/binfmt (see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1055765#40). Also:
As far as qemu, I'd suggest filing a bug on that binfmt project to not
use passthrough mode; I'm not sure if it should be something
decided/set at build time, or if there's a way to get commandline
arguments to qemu when executing binaries using the kernel's binfmt
interface. Either way, that seems like something to discuss with the
project author.
I won't be able to debate anything on this subject (unless you're ready to guide me), but maybe you'll find something useful.
Hi,
I'm trying to install qemu in a docker.io/library/docker:19.03.14 container (Linux reynwz-containerize-pod 3.10.0-1160.62.1.el7.x86_64 #1 SMP Wed Mar 23 09:04:02 UTC 2022 x86_64 Linux) by running:
docker run --rm --privileged tonistiigi/binfmt:latest --install all
but I get the following errors:
Status: Downloaded newer image for tonistiigi/binfmt:latest
installing: arm64 cannot register "/usr/bin/qemu-aarch64" to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: invalid argument
installing: s390x cannot register "/usr/bin/qemu-s390x" to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: invalid argument
installing: ppc64le cannot register "/usr/bin/qemu-ppc64le" to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: invalid argument
installing: mips64le cannot register "/usr/bin/qemu-mips64el" to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: invalid argument
installing: mips64 cannot register "/usr/bin/qemu-mips64" to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: invalid argument
installing: arm cannot register "/usr/bin/qemu-arm" to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: invalid argument
installing: riscv64 cannot register "/usr/bin/qemu-riscv64" to /proc/sys/fs/binfmt_misc/register: write /proc/sys/fs/binfmt_misc/register: invalid argument
{
"supported": [
"linux/amd64",
"linux/386"
],
"emulators": null
}
Any hints why it fails to register? What can I do to fix this issue?
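One likely cause, inferred from the kernel string above rather than proven by the log: tonistiigi/binfmt registers its handlers with the binfmt_misc F (fix-binary) flag so emulation keeps working inside containers, and that flag only exists since Linux 4.8. A 3.10 kernel (CentOS 7) rejects the registration string with EINVAL, which matches the "invalid argument" errors. A quick check:

```shell
# Compare the running kernel against the 4.8 threshold for the
# binfmt_misc F (fix-binary) flag.
k=$(uname -r | cut -d. -f1-2)      # e.g. "3.10"
major=${k%%.*}; minor=${k##*.}
if [ "$major" -lt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -lt 8 ]; }; then
  echo "kernel $k: too old for fix-binary (F) registrations"
else
  echo "kernel $k: fix-binary (F) supported"
fi
```

If the kernel is pre-4.8, upgrading the host kernel is the fix; the multiarch/qemu-user-static failure below has the same cause, since -p yes also requests fix-binary behaviour.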
My environment is:
+ docker version
Client: Docker Engine - Community
Version: 19.03.14
API version: 1.40
Go version: go1.13.15
Git commit: 5eb3275
Built: Tue Dec 1 19:14:24 2020
OS/Arch: linux/amd64
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 19.03.15
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 99e3ed8
Built: Sat Jan 30 03:18:13 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.3.9
GitCommit: ea765aba0d05254012b0b9e595e995c09186427f
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
I also tried running
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
but with similar results:
Status: Downloaded newer image for multiarch/qemu-user-static:latest
Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha sh: write error: Invalid argument
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm sh: write error: Invalid argument
Setting /usr/bin/qemu-armeb-static as binfmt interpreter for armeb sh: write error: Invalid argument
Setting /usr/bin/qemu-sparc-static as binfmt interpreter for sparc sh: write error: Invalid argument
Setting /usr/bin/qemu-sparc32plus-static as binfmt interpreter for sparc32plus sh: write error: Invalid argument
Setting /usr/bin/qemu-sparc64-static as binfmt interpreter for sparc64 sh: write error: Invalid argument
Setting /usr/bin/qemu-ppc-static as binfmt interpreter for ppc sh: write error: Invalid argument
Setting /usr/bin/qemu-ppc64-static as binfmt interpreter for ppc64 sh: write error: Invalid argument
Setting /usr/bin/qemu-ppc64le-static as binfmt interpreter for ppc64le sh: write error: Invalid argument
Setting /usr/bin/qemu-m68k-static as binfmt interpreter for m68k sh: write error: Invalid argument
Setting /usr/bin/qemu-mips-static as binfmt interpreter for mips sh: write error: Invalid argument
Setting /usr/bin/qemu-mipsel-static as binfmt interpreter for mipsel sh: write error: Invalid argument
Setting /usr/bin/qemu-mipsn32-static as binfmt interpreter for mipsn32 sh: write error: Invalid argument
Setting /usr/bin/qemu-mipsn32el-static as binfmt interpreter for mipsn32el sh: write error: Invalid argument
Setting /usr/bin/qemu-mips64-static as binfmt interpreter for mips64 sh: write error: Invalid argument
Setting /usr/bin/qemu-mips64el-static as binfmt interpreter for mips64el sh: write error: Invalid argument
Setting /usr/bin/qemu-sh4-static as binfmt interpreter for sh4 sh: write error: Invalid argument
Setting /usr/bin/qemu-sh4eb-static as binfmt interpreter for sh4eb sh: write error: Invalid argument
Setting /usr/bin/qemu-s390x-static as binfmt interpreter for s390x sh: write error: Invalid argument
Setting /usr/bin/qemu-aarch64-static as binfmt interpreter for aarch64 sh: write error: Invalid argument
Setting /usr/bin/qemu-aarch64_be-static as binfmt interpreter for aarch64_be sh: write error: Invalid argument
Setting /usr/bin/qemu-hppa-static as binfmt interpreter for hppa sh: write error: Invalid argument
Setting /usr/bin/qemu-riscv32-static as binfmt interpreter for riscv32 sh: write error: Invalid argument
Setting /usr/bin/qemu-riscv64-static as binfmt interpreter for riscv64 sh: write error: Invalid argument
Setting /usr/bin/qemu-xtensa-static as binfmt interpreter for xtensa sh: write error: Invalid argument
Setting /usr/bin/qemu-xtensaeb-static as binfmt interpreter for xtensaeb sh: write error: Invalid argument
Setting /usr/bin/qemu-microblaze-static as binfmt interpreter for microblaze sh: write error: Invalid argument
Setting /usr/bin/qemu-microblazeel-static as binfmt interpreter for microblazeel sh: write error: Invalid argument
Setting /usr/bin/qemu-or1k-static as binfmt interpreter for or1k sh: write error: Invalid argument
Setting /usr/bin/qemu-hexagon-static as binfmt interpreter for hexagon sh: write error: Invalid argument
After I have built arm64 images with binfmt, can I test those images in K8s on amd64 machines?
This may be an incredibly naive question, but is there a way this container could be modified to work in rootless mode?
The main issue seems to be the mount at Line 182 in eed5db1.
The reason I am interested in this is that I am using act to develop my GitHub Actions. I would like to run it rootless, and some of my actions call the docker/setup-qemu-action GitHub Action, which in turn uses tonistiigi/binfmt
at https://github.com/docker/setup-qemu-action/blob/10348241d3ea2d30357b172897afc31824ea2e2e/src/main.ts#L30.
Dear tonistiigi,
This emulator solved a big problem for me, thanks a lot!
But I have to run the command manually after every system restart. How can I enable the emulator automatically at system boot?
I tried to put the command into /etc/rc.d/rc.local, but it didn't work.
Thanks for your help~
When building a Docker Fedora ARMv7 image, builds have been failing for the last 2 months with the following error:
#19 263.9 Out of memory allocating 536870912 bytes!
This appears to be linked to the release of the new qemu-v7.0.0-28 image.
If I force the prior qemu version, qemu-v6.2.0-26, there is no issue and the images are built correctly and in a timely fashion.
- uses: docker/setup-qemu-action@v2
with:
image: tonistiigi/binfmt:qemu-v6.2.0-26
platforms: all
This issue only impacts Fedora ARM v7 builds, other platforms such as Debian | Ubuntu are not impacted.
Build an image for Fedora ARM v7 within a 'matrix'.
- uses: docker/setup-qemu-action@v2
with:
image: tonistiigi/binfmt:latest
platforms: all
Builds for Fedora ARMv7 should build without issue
Fedora ARMv7 builds fail with Out of memory allocating XXX bytes
name: Build Docker Images
on:
push:
branches: [ master ]
tags: [ 'v*' ]
pull_request:
branches:
- master
types: [closed]
env:
DOCKER_HUB_SLUG: driveone/onedrive
jobs:
build:
if: (!(github.event.action == 'closed' && github.event.pull_request.merged != true))
runs-on: ubuntu-latest
strategy:
matrix:
flavor: [ fedora, debian, alpine ]
include:
- flavor: fedora
dockerfile: ./contrib/docker/Dockerfile
platforms: linux/amd64,linux/arm64
- flavor: debian
dockerfile: ./contrib/docker/Dockerfile-debian
platforms: linux/amd64,linux/arm64,linux/arm/v7
- flavor: alpine
dockerfile: ./contrib/docker/Dockerfile-alpine
platforms: linux/amd64,linux/arm64
steps:
- name: Check out code from GitHub
uses: actions/checkout@v3
with:
submodules: recursive
fetch-depth: 0
- name: Docker meta
id: docker_meta
uses: marcelcoding/ghaction-docker-meta@v2
with:
tag-edge: true
images: |
${{ env.DOCKER_HUB_SLUG }}
tag-semver: |
{{version}}
{{major}}.{{minor}}
flavor: ${{ matrix.flavor }}
main-flavor: ${{ matrix.flavor == 'fedora' }}
- uses: docker/setup-qemu-action@v2
if: matrix.platforms != 'linux/amd64'
- uses: docker/setup-buildx-action@v2
- name: Cache Docker layers
uses: actions/cache@v3
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ matrix.flavor }}-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-${{ matrix.flavor }}
- name: Login to Docker Hub
uses: docker/login-action@v2
if: github.event_name != 'pull_request'
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Build and Push to Docker
uses: docker/build-push-action@v3
with:
context: .
file: ${{ matrix.dockerfile }}
platforms: ${{ matrix.platforms }}
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.docker_meta.outputs.tags }}
labels: ${{ steps.docker_meta.outputs.labels }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
Failed build
logs_1109.zip
Working build with downgraded qemu
docker run --privileged --rm tonistiigi/binfmt --version
binfmt/a161c41 qemu/v7.0.0 go/1.18.5
root@orangepizero:~# docker run --privileged --rm tonistiigi/binfmt --install all
installing: arm64 OK
installing: ppc64le OK
installing: riscv64 OK
installing: 386 OK
installing: mips64 OK
installing: amd64 OK
installing: s390x OK
installing: mips64le OK
{
"supported": [
"linux/arm/v7",
"linux/amd64",
"linux/arm64",
"linux/riscv64",
"linux/ppc64le",
"linux/s390x",
"linux/386",
"linux/arm/v6"
],
"emulators": [
"qemu-aarch64",
"qemu-i386",
"qemu-mips64",
"qemu-mips64el",
"qemu-ppc64le",
"qemu-riscv64",
"qemu-s390x",
"qemu-x86_64"
]
}
root@orangepizero:~# docker run --rm arm64v8/alpine uname -a
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/arm/v7) and no specific platform was requested