docker-image's People

Contributors

abrodkin, aescolar, carlescufi, cfriedt, cguenthertuchemnitz, dimka-rs, doanac, fkokosinski, galak, ithinuel, johngrey-dev, jrjang, kartben, knthm, marc-hb, mrxinwang, nashif, nguyenmthien, piotrzierhoffer, stephanosio, tejlmand, tsonono, vanwinkeljan


docker-image's Issues

Dockerfile.ci: AArch64 LLVM installation is broken

AArch64 LLVM installation is currently broken due to the broken packages in the LLVM APT repository:

#9 7.775 Some packages could not be installed. This may mean that you have
#9 7.775 requested an impossible situation or if you are using the unstable
#9 7.775 distribution that some required packages have not yet been created
#9 7.775 or been moved out of Incoming.
#9 7.775 The following information may help to resolve the situation:
#9 7.775 
#9 7.775 The following packages have unmet dependencies:
#9 7.895  clang-16 : Depends: libclang-common-16-dev (= 1:16.0.3~++20230420083054+12f17d196eff-1~exp1~20230420083153.78) but 1:16.0.3~++20230420053043+464bda7750a3-1~exp1~20230420173052.80 is to be installed
#9 7.895  clangd-16 : Depends: libclang-common-16-dev (= 1:16.0.3~++20230420083054+12f17d196eff-1~exp1~20230420083153.78) but 1:16.0.3~++20230420053043+464bda7750a3-1~exp1~20230420173052.80 is to be installed
#9 7.896  libclang-16-dev : Depends: libclang-common-16-dev (= 1:16.0.3~++20230420083054+12f17d196eff-1~exp1~20230420083153.78) but 1:16.0.3~++20230420053043+464bda7750a3-1~exp1~20230420173052.80 is to be installed
#9 7.896  libclang-common-16-dev : Depends: libllvm16 (< 1:16.0.3~++20230420053043+464bda7750a3-1~exp1~20230420173052.80.1~) but 1:16.0.3~++20230420083054+12f17d196eff-1~exp1~20230420083153.78 is to be installed
#9 7.903 E: Unable to correct problems, you have held broken packages.
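
Not part of the original report, but one common workaround while the LLVM APT repository contains mixed snapshot uploads is to pin every clang/LLVM package to the same version string, so apt cannot resolve an inconsistent combination. A sketch, using the version seen in the log above (the pinned version must of course exist in the repository at build time):

```shell
# Sketch: pin all clang-16 packages to one snapshot version so apt cannot
# mix two inconsistent uploads. The VERSION value is taken from the log
# above and is illustrative only.
VERSION="1:16.0.3~++20230420083054+12f17d196eff-1~exp1~20230420083153.78"
apt-get install -y \
    clang-16="${VERSION}" \
    clangd-16="${VERSION}" \
    libclang-16-dev="${VERSION}" \
    libclang-common-16-dev="${VERSION}"
```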

Support multiple host architectures

The docker-image currently supports the x86-64 host architecture only -- expand this to at least x86-64 and AArch64.

Note that Dockerfile will need some changes to support this (change any explicit references to x86_64 to $HOSTTYPE for example).
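
As a rough illustration of the suggested change, explicit x86_64 references in download URLs could be derived from the build platform instead of being hardcoded. A sketch using Docker BuildKit's TARGETARCH (bash's $HOSTTYPE, as suggested above, works similarly inside RUN steps); the exact asset-name layout must match the real sdk-ng release being targeted:

```dockerfile
# Sketch: map the Docker build platform to the architecture token used in
# the SDK asset names, instead of hardcoding x86_64. The asset name layout
# below is an assumption and must match the targeted sdk-ng release.
ARG TARGETARCH
RUN case "${TARGETARCH}" in \
      arm64) ARCH=aarch64 ;; \
      *)     ARCH=x86_64 ;; \
    esac && \
    wget -q "https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v${ZSDK_VERSION}/zephyr-sdk-${ZSDK_VERSION}-linux-${ARCH}-setup.run"
```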

Issue creating a container in Docker for Windows

Hello, I created the image following your instructions:

docker build -t zephyr:v1 .

It builds OK, but when I attempt to run it, I get this error:

C:\>docker run -ti zephyr:v1
standard_init_linux.go:207: exec user process caused "no such file or directory"

If I use your pre-built image, I have no issues:

docker run -ti docker.io/zephyrprojectrtos/zephyr-build:latest

Any hint?
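
Not part of the original thread, but a frequent cause of `exec user process caused "no such file or directory"` when the image is built on Windows is CRLF line endings in shell scripts copied into the image (git's core.autocrlf can introduce them on checkout). A quick way to check and fix, assuming GNU sed and that `entrypoint.sh` (a hypothetical name here) is the script in question:

```shell
# Detect CRLF line endings in a script (prints the file name if any CR is found)
grep -l $'\r' entrypoint.sh || true

# Strip the carriage returns in place (GNU sed)
sed -i 's/\r$//' entrypoint.sh
```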

Migrate to GitHub Container Registry (GHCR)

Preface

The CI automation currently uploads the Docker images to DockerHub (docker.io).

DockerHub sets an unreasonably low rate limit of 200 image pulls per 6 hours, which can easily be exceeded when the registry is used as part of a CI infrastructure.

Error response from daemon: toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

The GitHub Container Registry (GHCR) effectively provides the same functionality as DockerHub, but with better GitHub integration (e.g. the associated container image is shown as a "package" on the repository main page, and more granular access permissions can be set using GitHub credentials). Most importantly, it does not have an explicit rate limit.

TODO

Modify the CI workflow so that it uploads the Docker images to both DockerHub and GHCR. Users may choose either, and the CI infrastructure utilising this image can be updated to use GHCR to work around the DockerHub rate limit problem.
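
A sketch of what the dual-registry publishing step might look like in a GitHub Actions workflow. The action versions, secret names, and image tags below are illustrative assumptions, not the repository's actual workflow:

```yaml
# Illustrative workflow fragment: log in to both registries, then push the
# image under both names. Secret and tag names are assumptions.
- name: Login to DockerHub
  uses: docker/login-action@v3
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Login to GHCR
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

- name: Build and push
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: |
      zephyrprojectrtos/zephyr-build:latest
      ghcr.io/zephyrproject-rtos/zephyr-build:latest
```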

Broken Link in Docker Script?

Hi,

I am a newcomer to Zephyr, but in trying to get the latest SDK via the Docker Image I have run into some issues.

In the Dockerfile, we call wget with this URL: https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v${ZSDK_VERSION}/zephyr-sdk-${ZSDK_VERSION}-x86_64-linux-setup.run

So - I am trying to build something that requires 0.13.0.

With the Docker script as it currently is in this repo, this URL is called, which pulls down the 1.1 GB Zephyr SDK file:

ARG ZSDK_VERSION=0.12.4
https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.12.4/zephyr-sdk-0.12.4-x86_64-linux-setup.run

But when you replace 0.12.4 with 0.13.0, it fails:
ARG ZSDK_VERSION=0.13.0
https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.13.0/zephyr-sdk-0.13.0-x86_64-linux-setup.run

It looks like, in making the latest v0.13 release, the maintainers swapped the OS/architecture nomenclature around, breaking this Docker script.

Can someone confirm this issue for me?

I saw another user having a similar issue over here: #73.
But I feel this was distinct enough to warrant opening a fresh issue.

(When I hardcode the path with the flipped OS/arch order, the Dockerfile works fine for 0.13.0.)
Thanks,
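
For reference, the reported change amounts to the OS and architecture components trading places in the asset name. A hedged sketch of pointing the Dockerfile at the new layout (the exact cut-over version is an assumption based on this report):

```dockerfile
# Sketch: asset name pattern per this report.
# Pre-0.13: zephyr-sdk-<ver>-x86_64-linux-setup.run
# 0.13+:    zephyr-sdk-<ver>-linux-x86_64-setup.run
ARG ZSDK_VERSION=0.13.0
RUN wget -q "https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v${ZSDK_VERSION}/zephyr-sdk-${ZSDK_VERSION}-linux-x86_64-setup.run"
```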

Black screen when logging in with VNC

When I start the image and login with VNC I can see only a black screen.
I'm starting the container with this command:

docker run -ti -p 5900:5900 -v D:\GitLab\smart-binary-sensing\o200.gp-zephyr:/workdir docker.io/zephyrprojectrtos/zephyr-build:latest

This is the output I get:

Openbox-Message: Unable to find a valid menu file "/var/lib/openbox/debian-menu.xml"
user@64d871c7a4d0:/workdir$
The VNC desktop is:      64d871c7a4d0:0
PORT=5900

I use Docker Desktop v2.2.0.4 Engine 19.03.8

Missing Python.h when building the image

When building the image from scratch with docker build -t zephyr-docker-image . the build fails with:

  Running setup.py bdist_wheel for psutil: started
  Running setup.py bdist_wheel for psutil: finished with status 'error'
  Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-5ip0cauh/psutil/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmp96w0agpopip-wheel- --python-tag cp36:
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.6
  creating build/lib.linux-x86_64-3.6/psutil
  copying psutil/__init__.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psosx.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psbsd.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_pswindows.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_pslinux.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_compat.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psposix.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_pssunos.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_psaix.py -> build/lib.linux-x86_64-3.6/psutil
  copying psutil/_common.py -> build/lib.linux-x86_64-3.6/psutil
  creating build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_system.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/__init__.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/__main__.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_memory_leaks.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_process.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/runner.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-3.6/psutil/tests
  copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-3.6/psutil/tests
  running build_ext
  building 'psutil._psutil_linux' extension
  creating build/temp.linux-x86_64-3.6
  creating build/temp.linux-x86_64-3.6/psutil
  x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=570 -DPSUTIL_LINUX=1 -I/usr/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
  psutil/_psutil_common.c:9:10: fatal error: Python.h: No such file or directory
   #include <Python.h>
            ^~~~~~~~~~
  compilation terminated.
  error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
  
  ----------------------------------------
  Failed building wheel for psutil
  Running setup.py clean for psutil
  Running setup.py bdist_wheel for future: started
  Running setup.py bdist_wheel for future: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/8b/99/a0/81daf51dcd359a9377b110a8a886b3895921802d2fc1b2397e
Successfully built PyYAML sphinx-tabs cbor junit2html configobj docopt intervaltree prettytable pyusb future
Failed to build psutil
Installing collected packages: pyelftools, PyYAML, pyparsing, six, packaging, intelhex, configobj, docopt, python-dateutil, pykwalify, colorama, west, MarkupSafe, jinja2, lxml, gcovr, coverage, more-itertools, zipp, importlib-metadata, pluggy, py, wcwidth, attrs, pytest, sphinxcontrib-jsmath, pytz, babel, snowballstemmer, alabaster, sphinxcontrib-serializinghtml, certifi, urllib3, chardet, idna, requests, docutils, sphinxcontrib-applehelp, imagesize, sphinxcontrib-devhelp, sphinxcontrib-qthelp, sphinxcontrib-htmlhelp, Pygments, sphinx, breathe, sphinx-rtd-theme, sphinx-tabs, sphinxcontrib-svg2pdfconverter, pyserial, pycparser, cffi, milksnake, appdirs, cmsis-pack-manager, psutil, future, pylink-square, sortedcontainers, intervaltree, prettytable, pyusb, pyocd, tabulate, cbor, anytree, arrow, Click, sh, gitlint, junit2html, Pillow
  Running setup.py install for psutil: started
    Running setup.py install for psutil: finished with status 'error'
    Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-5ip0cauh/psutil/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-72t01jjh-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.6
    creating build/lib.linux-x86_64-3.6/psutil
    copying psutil/__init__.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psosx.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psbsd.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_pswindows.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_pslinux.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_compat.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psposix.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_pssunos.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_psaix.py -> build/lib.linux-x86_64-3.6/psutil
    copying psutil/_common.py -> build/lib.linux-x86_64-3.6/psutil
    creating build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_system.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/__init__.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/__main__.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_memory_leaks.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_process.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/runner.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-3.6/psutil/tests
    copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-3.6/psutil/tests
    running build_ext
    building 'psutil._psutil_linux' extension
    creating build/temp.linux-x86_64-3.6
    creating build/temp.linux-x86_64-3.6/psutil
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=570 -DPSUTIL_LINUX=1 -I/usr/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
    psutil/_psutil_common.c:9:10: fatal error: Python.h: No such file or directory
     #include <Python.h>
              ^~~~~~~~~~
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
    
    ----------------------------------------
Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-5ip0cauh/psutil/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-72t01jjh-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-5ip0cauh/psutil/
The command '/bin/sh -c wget -q https://raw.githubusercontent.com/zephyrproject-rtos/zephyr/master/scripts/requirements.txt && 	wget -q https://raw.githubusercontent.com/zephyrproject-rtos/zephyr/master/scripts/requirements-base.txt && 	wget -q https://raw.githubusercontent.com/zephyrproject-rtos/zephyr/master/scripts/requirements-build-test.txt && 	wget -q https://raw.githubusercontent.com/zephyrproject-rtos/zephyr/master/scripts/requirements-doc.txt && 	wget -q https://raw.githubusercontent.com/zephyrproject-rtos/zephyr/master/scripts/requirements-run-test.txt && 	wget -q https://raw.githubusercontent.com/zephyrproject-rtos/zephyr/master/scripts/requirements-extras.txt && 	pip3 install wheel &&	pip3 install -r requirements.txt && 	pip3 install west &&	pip3 install sh' returned a non-zero code: 1

It seems we're somehow missing an install for Python.h here.
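
The psutil wheel build compiles a C extension against the CPython headers, so the usual fix is installing the Python development package before pip runs. A sketch for a Debian/Ubuntu-based image:

```dockerfile
# Python.h ships in python3-dev on Debian/Ubuntu; install it (plus a
# compiler toolchain) before pip builds C extensions such as psutil.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-dev build-essential && \
    rm -rf /var/lib/apt/lists/*
```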

"west" Error on Build

When I launch the prebuilt image and mount a fresh clone of the Zephyr repo into /workdir, I get an error when I run "west build". It says I have an invalid command choice, and from reading around, this usually occurs when a west workspace hasn't been set up. There is a west.yml file, and the ZEPHYR_BASE env variable is set to /workdir, but when I run "west update" I get an error saying no west workspace is found in the /workdir directory.

user@0bcb1e85f710:/workdir$ west build -p auto -b qemu_x86 samples/hello_world/
usage: west [-h] [-z ZEPHYR_BASE] [-v] [-V] <command> ...
west: error: argument <command>: invalid choice: 'build' (choose from 'init', 'update', 'list', 'manifest', 'diff', 'status', 'forall', 'help', 'config', 'topdir', 'selfupdate')

user@0bcb1e85f710:/workdir$ west update
FATAL ERROR: no west workspace found from "/workdir"; "west update" requires one.
Things to try:
  - Change directory to somewhere inside a west workspace and retry.
  - Set ZEPHYR_BASE to a zephyr repository path in a west workspace.
  - Run "west init" to set up a workspace here.
  - Run "west init -h" for additional information.

Why is ZEPHYR_BASE hard coded?

In

ENV ZEPHYR_BASE=/workdir/zephyr

ZEPHYR_BASE is hard-coded, which prevents the use of my .west/config file:

[zephyr]
base = deps/zephyr

Without the hard-coded ZEPHYR_BASE, west automatically finds zephyr.base from the config file and doesn't require this variable. Is there a reason for it? I have zephyr under the deps folder, so the hard-coded path does not exist.

I would like to extend the already-built image ghcr.io/zephyrproject-rtos/zephyr-build:v0.26.4 for my own usage.
But it is not possible in Docker to unset an env variable (https://stackoverflow.com/questions/55789409/how-to-unset-env-in-dockerfile).
Otherwise I have to call unset ZEPHYR_BASE before every command.
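
Since Docker indeed has no way to unset an ENV in a child Dockerfile, one workaround (a sketch, assuming a wrapper entrypoint is acceptable for the use case) is to strip the variable from the environment before any command runs:

```dockerfile
# In a child Dockerfile, wrap the entrypoint so ZEPHYR_BASE is removed
# before any command executes: env -u drops it from the environment.
FROM ghcr.io/zephyrproject-rtos/zephyr-build:v0.26.4
ENTRYPOINT ["env", "-u", "ZEPHYR_BASE"]
CMD ["bash"]
```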

Issues running under WSL

Hi,

I've tried using Docker on Windows via WSL and have issues when attempting a build. Have you tried running under WSL?

--Hernan

user@09d73f4f2737:/workdir/samples/hello_world/build$ ls
CMakeCache.txt CMakeFiles
user@09d73f4f2737:/workdir/samples/hello_world/build$ cmake -DBOARD=qemu_x86 ..
CMake Error: The current CMakeCache.txt directory /workdir/samples/hello_world/CMakeCache.txt is different than the directory /workdir/zephyrprj/zephyr/samples/hello_world where CMakeCache.txt was created. This may result in binaries being created in the wrong place. If you are not sure, reedit the CMakeCache.txt
user@09d73f4f2737:/workdir/samples/hello_world/build$ ls

Image from docker hub does not contain ENV variables for building zephyr

Downloaded image from docker hub:
https://hub.docker.com/r/zephyrprojectrtos/ci

to test our CI pipeline, and the hosted image does not contain the following env variables

# Set the locale
ENV ZEPHYR_TOOLCHAIN_VARIANT=zephyr
ENV ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-${ZSDK_VERSION}
ENV ZEPHYR_BASE=/workdir
ENV GNUARMEMB_TOOLCHAIN_PATH=/opt/toolchains/${GCC_ARM_NAME}

However, the ENV variables are there when I build the image myself using the Dockerfile.
Is this the expected behaviour?

west: unknown command "build"

Hi,

I followed the README instructions to download and set up the Zephyr development environment, but I'm getting an error when trying to build a sample project using west:

Environment

  • OS: Debian GNU/Linux 12 (bookworm)
  • Docker: version 26.1.2, build 211e74b

How to reproduce?

# Pull the pre-built zephyr developer docker image
bayrem@debian:~$ docker run -ti -v $HOME/Work/zephyrproject:/workdir ghcr.io/zephyrproject-rtos/zephyr-build:latest

# Take ownership of the workdir/ directory
user@1df0da87e5b8:~$ sudo chown -R user:user /workdir/

user@1df0da87e5b8:~$ cd /workdir/

# Clone the zephyr repo
user@1df0da87e5b8:/workdir$ git clone --recursive https://github.com/zephyrproject-rtos/zephyr.git

# Build a sample application
user@1df0da87e5b8:/workdir$ cd zephyr/

user@1df0da87e5b8:/workdir/zephyr$ west build -b qemu_x86 samples/hello_world

usage: west [-h] [-z ZEPHYR_BASE] [-v] [-V] <command> ...
west: unknown command "build"; do you need to run this inside a workspace?
user@1df0da87e5b8:/workdir$ 
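
For context (not from the original report): "west build" is an extension command that only exists inside a west workspace, and a plain git clone does not create one. A sketch of initializing a workspace around the existing clone, assuming the directory layout shown above:

```shell
# Turn the existing clone into a west workspace, then fetch the modules
# that samples may need. Run from /workdir, which contains the zephyr/ clone.
cd /workdir
west init -l zephyr     # -l registers the existing local repository
west update             # fetch the module repositories listed in west.yml
cd zephyr
west build -b qemu_x86 samples/hello_world
```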

`g++-multilib` missing in current zephyr-build image

In the current zephyrprojectrtos/zephyr-build:latest Docker image, the g++-multilib package seems to be missing, which leads to build errors for some 32-bit platforms when the CI runs on a 64-bit machine. See the log below:

ERROR   - -- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
Including boilerplate (Zephyr base): /home/user/zephyr/subsys/testsuite/unittest.cmake
CMake Deprecation Warning at /home/user/zephyr/subsys/testsuite/unittest.cmake:4 (cmake_policy):
  The OLD behavior for policy CMP0000 will be removed from a future version
  of CMake.
  The cmake-policies(7) manual explains that the OLD behaviors of all
  policies are deprecated and that a policy should be set to OLD only under
  specific short-term circumstances.  Projects should be ported to the NEW
  behavior and not rely on setting a policy to OLD.
Call Stack (most recent call first):
  /home/user/zephyr/share/zephyr-package/cmake/ZephyrConfig.cmake:24 (include)
  /home/user/zephyr/share/zephyr-package/cmake/ZephyrConfig.cmake:35 (include_boilerplate)
  /home/user/zephyr/share/zephyrunittest-package/cmake/ZephyrUnittestConfig.cmake:4 (include)
  CMakeLists.txt:5 (find_package)

-- The ASM compiler identification is GNU
-- Found assembler: /usr/bin/cc
-- Configuring done
-- Generating done
CMake Warning:
  Manually-specified variables were not used by the project:
    BOARD

-- Build files have been written to: /home/user/zephyr/sanity-out/unit_testing/tests/unit/util/utilities.dec
[1/7] Generating include/generated/kobj-types-enum.h, include/generated/otype-to-str.h
[2/7] Building C object CMakeFiles/testbinary.dir/home/user/zephyr/lib/os/dec.c.o
[3/7] Building C object CMakeFiles/testbinary.dir/home/user/zephyr/subsys/testsuite/ztest/src/ztest.c.o
[4/7] Building C object CMakeFiles/testbinary.dir/home/user/zephyr/subsys/testsuite/ztest/src/ztest_mock.c.o
[5/7] Building C object CMakeFiles/testbinary.dir/main.c.o
[6/7] Building CXX object CMakeFiles/testbinary.dir/maincxx.cxx.o
[7/7] Linking CXX executable testbinary
FAILED: testbinary 
: && /usr/bin/c++ -m32  CMakeFiles/testbinary.dir/main.c.o CMakeFiles/testbinary.dir/maincxx.cxx.o CMakeFiles/testbinary.dir/home/user/zephyr/lib/os/dec.c.o CMakeFiles/testbinary.dir/home/user/zephyr/subsys/testsuite/ztest/src/ztest.c.o CMakeFiles/testbinary.dir/home/user/zephyr/subsys/testsuite/ztest/src/ztest_mock.c.o -o testbinary  -Wl,--fatal-warnings && :
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/7/libstdc++.so when searching for -lstdc++
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/7/libstdc++.a when searching for -lstdc++
/usr/bin/ld: cannot find -lstdc++
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
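
If the diagnosis is right, the fix would be adding the 32-bit C/C++ support packages to the image. A sketch for an Ubuntu-based image:

```dockerfile
# gcc-multilib/g++-multilib provide the 32-bit (-m32) libgcc and libstdc++
# needed to link 32-bit test binaries such as the one failing above.
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc-multilib g++-multilib && \
    rm -rf /var/lib/apt/lists/*
```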

RFC: Improvements for better performance, caching & dev experience

I'd like to propose several improvements to the Dockerfile in order to make targeted images that are smaller & faster, leading to a better developer experience for maintainers and developers alike. I'm far from a Docker expert but have made my share of builder-type images, so hopefully we can get some extra input.

The biggest improvement would be breaking up into multistage builds. I'd suggest the following stages:

  • Base
  • CI
  • Development
  • Test
  • Docs

I'm roughly mapping these to the different requirements.txt files and use cases. Not sure if a "compliance" stage should be created as well. I'm not fluent enough in all the dependencies, but we can be more efficient about what's downloaded, installed, and cached. For example, Base would only install the minimal packages (e.g. install Python but not Renode), pip install requirements-base.txt, and so on.

Other ideas for further improvement:

  • COPY the pip directory from Base to save download time (pip install seems to take a while)
  • Do "proper" caching with Docker's BuildKit
  • Be smarter when it comes to caching layers (we'll see some benefits just from moving to multistage)
  • Move to a smaller base image like Alpine
  • Create our own "base image" using a Scratch container
  • Having the "child" stages like Development only include what we need (Scratch is useful for that too)
  • Releasing child stages as images on Dockerhub. These could be autogenerated based on Zephyr release or Release+SDK combo
  • Integrate zephyrproject-rtos/zephyr#36324 as ARGS to further reduce downloads
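
A minimal sketch of the proposed stage layout (the base image, package choices, and requirements-file mapping below are placeholders to illustrate the structure, not a concrete proposal):

```dockerfile
# Illustrative multistage layout: each stage builds on Base and installs
# only what its use case needs.
FROM ubuntu:22.04 AS base
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip
COPY requirements-base.txt .
RUN pip3 install -r requirements-base.txt

FROM base AS ci
COPY requirements-build-test.txt .
RUN pip3 install -r requirements-build-test.txt

FROM base AS docs
COPY requirements-doc.txt .
RUN pip3 install -r requirements-doc.txt

FROM ci AS development
# developer extras (VNC, editors, ...) would go here
```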

Docker image not correctly set up

Hi. This is my first time trying out Zephyr. I'd love to start hacking, but I can't get it to work... I prefer running the development environment in a container: then everything is (or should be) correctly set up, and I don't need to install all the tooling on my computer.

I start a new docker container:
docker run -ti -v $HOME/Work/zephyrproject:/workdir docker.io/zephyrprojectrtos/zephyr-build:latest

Using the CLI, I default to the workdir directory. I run:

west init
cd zephyr
west update
west build -p auto -b esp32 samples/basic/blinky

and I get:

-- west build: generating a build system
Including boilerplate (Zephyr base): /workdir/zephyr/cmake/app/boilerplate.cmake
-- Application: /workdir/zephyr/samples/basic/blinky
-- Zephyr version: 2.6.99 (/workdir/zephyr), build: zephyr-v2.6.0-2270-g976c5fee289f
-- Found Python3: /usr/bin/python3.8 (found suitable exact version "3.8.10") found components: Interpreter
-- Found west (found suitable version "0.11.0", minimum required is "0.7.1")
-- Board: esp32
-- Cache files will be written to: /home/user/.cache/zephyr
CMake Error at /workdir/zephyr/cmake/verify-toolchain.cmake:70 (find_package):
Could not find a configuration file for package "Zephyr-sdk" that is
compatible with requested version "0.13".
The following configuration files were considered but not accepted:
/opt/toolchains/zephyr-sdk-0.12.4/cmake/Zephyr-sdkConfig.cmake, version: 0.12.4
Call Stack (most recent call first):
/workdir/zephyr/cmake/app/boilerplate.cmake:548 (include)
/workdir/zephyr/share/zephyr-package/cmake/ZephyrConfig.cmake:24 (include)
/workdir/zephyr/share/zephyr-package/cmake/ZephyrConfig.cmake:35 (include_boilerplate)
CMakeLists.txt:4 (find_package)
-- Configuring incomplete, errors occurred!
FATAL ERROR: command exited with status 1: /usr/local/bin/cmake -DWEST_PYTHON=/usr/bin/python3 -B/workdir/zephyr/build -S/workdir/zephyr/samples/basic/blinky -GNinja -DBOARD=esp32

Am I doing something wrong, is the image not correctly set up, or is the documentation incorrect/poor?
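
For what it's worth (not from the original thread): the error says the checked-out Zephyr tree requires SDK 0.13 while the image ships 0.12.4, so either checking out an older Zephyr tag matching the image, or rebuilding the image with a newer SDK, should resolve it. A sketch of the latter, using the Dockerfile's ZSDK_VERSION build argument:

```shell
# Rebuild the image with the SDK version the Zephyr tree expects.
# The image tag here is illustrative.
docker build --build-arg ZSDK_VERSION=0.13.0 -t zephyr-build:sdk-0.13 .
```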

RFC: minimal build container for zephyr

Glad to see there's a containerized environment to support building zephyr.

I have a more narrowly focused use case that I'd like supported, and am looking for feedback:

Background:

I'd like a container that I can use for building the sources both as part of my development flow, as well as through CI testing.

Issue with current implementation:

The container image supports VNC, which is helpful for some, but excessive for my use case. Aside from the added size, it seems to have necessitated creating users, etc.

These users don't necessarily map to whoever will ultimately use the container to build and then utilize the built artifacts to test/run.

I'd like to be able to execute a build as follows:
docker run -it --user "$(id -u):$(id -g)" -v /home/eric/src/zephyrproject:/workdir --workdir /workdir/zephyr bash -c "west build with my args"

This allows me to obtain artifacts that are owned by the calling user.

Query

Does it make sense to offer a build environment that doesn't include VNC and container-build-time defined users?

Why does the ci-image use USER root while the devel-image uses USER user?

First of all: Thanks a lot for providing these images. They save a ton of time!

I am using the ci-image in an Azure DevOps pipeline while using the devel-image locally on my host. I tried to set up the pipeline using exactly the same bash scripts to set up the west workspace, but the build failed with an error that the zephyr-sdk could not be found (I have since fixed that).

I think this is caused by the Zephyr SDK being installed as 'user':
...

# Run the Zephyr SDK setup script as 'user' in order to ensure that the
# Zephyr-sdk CMake package is located in the package registry under the
# user's home directory.
USER user

RUN sudo -E -- bash -c ' \
	/opt/toolchains/zephyr-sdk-${ZSDK_VERSION}/setup.sh -c && \
	chown -R user:user /home/user/.cmake \
	'
...

while the ci-image is then run as root
...
USER root

# Set the locale
ENV ZEPHYR_TOOLCHAIN_VARIANT=zephyr
ENV PKG_CONFIG_PATH=/usr/lib/i386-linux-gnu/pkgconfig
ENV OVMF_FD_PATH=/usr/share/ovmf/OVMF.fd
...

I wonder what the reason is for changing back to root?

Problem running "docker build"

Hello guys,

I ran into a problem when trying to run the following command, using the Dockerfile from release v0.13:

docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) -t zephyr_doc:v2.3.0 .

The build could not complete and stops here:

...
Mono precompiling /usr/lib/mono/4.5/System.Collections.Immutable.dll for amd64 (trying with LLVM, this might take a few minutes)...
Mono precompiling /usr/lib/mono/4.5/System.Reflection.Metadata.dll for amd64 (trying with LLVM, this might take a few minutes)...

Setting up mono-complete (6.12.0.90-0xamarin1+ubuntu1804b1) ...
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.36.11-2) ...

--2020-09-09 06:45:09--  http://security.ubuntu.com/ubuntu/pool/main/d/device-tree-compiler/device-tree-compiler_1.4.7-1_amd64.deb
Resolving security.ubuntu.com (security.ubuntu.com)... 91.189.91.39, 91.189.88.152, 91.189.91.38, ...
Connecting to security.ubuntu.com (security.ubuntu.com)|91.189.91.39|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-09-09 06:45:10 ERROR 404: Not Found.

Followed by:

The command '/bin/sh -c dpkg --add-architecture i386 ...
...
... returned a non-zero code: 8

Please assist. Thanks.
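
Not from the original thread, but the 404 typically means the pinned device-tree-compiler .deb was superseded in the Ubuntu pool (pool URLs disappear when a package is updated). Installing it through apt instead of a hardcoded URL avoids the breakage. A sketch:

```dockerfile
# Install device-tree-compiler from the package index rather than a
# pinned pool URL, so point releases do not 404 the build.
RUN apt-get update && \
    apt-get install -y --no-install-recommends device-tree-compiler && \
    rm -rf /var/lib/apt/lists/*
```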

Build & flash inside docker container

Hello guys,

I have an idea about building and flashing firmware inside a docker container, using a JLinkRemoteServer running outside the container.
I read the jlink.py file in the Zephyr codebase and found that the west flash/debug commands do not support remote flashing yet, so I have to modify that file manually, modify the Dockerfile, install J-Link, and build the image myself.
I would like to ask whether you have any better idea for flashing remotely from inside a docker container.

Thanks.

Add ONBUILD to ARGs to allow usage in child builds

This will allow people to extend the current base image and use the same arguments.

For example, if I do the following:

FROM zephyrprojectrtos/ci:latest
RUN apt install -y clang-format-$LLVM_VERSION

This does not work, as the $LLVM_VERSION ARG is not passed to the child Dockerfile. This can be implemented with the ONBUILD keyword.

Proposed change:

ONBUILD ARG ZSDK_VERSION=0.14.2
ONBUILD ARG DOXYGEN_VERSION=1.9.4
ONBUILD ARG CMAKE_VERSION=3.20.5
ONBUILD ARG RENODE_VERSION=1.13.0
ONBUILD ARG LLVM_VERSION=12
ONBUILD ARG BSIM_VERSION=v1.0.3
ONBUILD ARG WGET_ARGS="-q --show-progress --progress=bar:force:noscroll --no-check-certificate"

arm-none-eabi-gdb fails lacking libncurses.so.5

Trying to debug a target from the container, arm-none-eabi-gdb fails with:

`arm-none-eabi-gdb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory'
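
A likely fix, assuming an Ubuntu-based image where the legacy ncurses compatibility packages are still in the archive (they were dropped in newer releases, so this may not apply to 22.04 and later), is to install the version-5 libraries the prebuilt gdb was linked against:

```shell
# libncurses5/libncursesw5 provide the libncurses.so.5 the prebuilt gdb expects
sudo apt-get update && sudo apt-get install -y libncurses5 libncursesw5
```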

Sample does not build

I am trying to use the docker build in my local environment; however, I get a strange message about a recursive inclusion of Kconfig.zephyr.

Parsing /workdir/zephyr/Kconfig
/workdir/zephyr/scripts/kconfig/kconfig.py: Kconfig:8: recursive 'source' of 'Kconfig.zephyr' detected. Check that environment variables are set correctly.   
Include path:
/workdir/zephyr/Kconfig:8
Kconfig.zephyr:23
modules/Kconfig:6
samples/hello_world/build/Kconfig/Kconfig.modules:2
Kconfig:8
CMake Error at /workdir/zephyr/cmake/kconfig.cmake:265 (message):
  command failed with return code: 1
Call Stack (most recent call first):
  /workdir/zephyr/cmake/app/boilerplate.cmake:536 (include)
  /workdir/zephyr/share/zephyr-package/cmake/ZephyrConfig.cmake:24 (include)
  /workdir/zephyr/share/zephyr-package/cmake/ZephyrConfig.cmake:35 (include_boilerplate)
  CMakeLists.txt:5 (find_package)


-- Configuring incomplete, errors occurred!

Any thoughts? I am running on a Windows host and trying to build Zephyr v2.5.0.
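
Judging from the include path, a stale build/ directory left inside samples/hello_world is being picked up as a module Kconfig (samples/hello_world/build/Kconfig/Kconfig.modules), which re-sources the top-level Kconfig. Deleting it and forcing a pristine build is a reasonable first step (sketch; the board name is a placeholder):

```shell
rm -rf samples/hello_world/build
west build -p always -b qemu_x86 samples/hello_world
```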

docker-compose with developer image fails

Using this docker-compose.yml file:

version: "2"
services:
    zephyr:
        image: docker.io/zephyrprojectrtos/zephyr-build:latest
        container_name: zephyr
        volumes:
            - /myworkspace:/workdir
        ports:
            - "5900:5900"

I get the following error starting the container:

zephyr | Openbox-Message: Unable to find a valid menu file "/var/lib/openbox/debian-menu.xml"
zephyr |
zephyr | ERROR: openbox-xdg-autostart requires PyXDG to be installed
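
A possible fix, if you are extending the image yourself, is to add the missing PyXDG dependency. This is only a sketch; the Ubuntu package name (python3-xdg) and the image's non-root user name ("user") are assumptions:

```dockerfile
# Hypothetical extension adding PyXDG so openbox-xdg-autostart can run
FROM docker.io/zephyrprojectrtos/zephyr-build:latest
USER root
RUN apt-get update && apt-get install -y python3-xdg && rm -rf /var/lib/apt/lists/*
USER user
```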

Pyocd fails due to missing dependencies in Docker

When I run the Docker image (v0.24.7) following the instructions, I get an error when running the pyocd command:

Traceback (most recent call last):
  File "/usr/local/bin/pyocd", line 5, in <module>
    from pyocd.__main__ import main
  File "/usr/local/lib/python3.8/dist-packages/pyocd/__init__.py", line 21, in <module>
    from . import gdbserver
  File "/usr/local/lib/python3.8/dist-packages/pyocd/gdbserver/__init__.py", line 17, in <module>
    from .gdbserver import GDBServer
  File "/usr/local/lib/python3.8/dist-packages/pyocd/gdbserver/gdbserver.py", line 41, in <module>
    from ..rtos import RTOS
  File "/usr/local/lib/python3.8/dist-packages/pyocd/rtos/__init__.py", line 29, in <module>
    load_plugin_classes_of_type('pyocd.rtos', RTOS, ThreadProvider)
  File "/usr/local/lib/python3.8/dist-packages/pyocd/core/plugin.py", line 97, in load_plugin_classes_of_type
    plugin = entry_point.load()()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2444, in load
    self.require(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2467, in require
    items = working_set.resolve(reqs, env, installer, extras=self.extras)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 792, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (six 1.14.0 (/usr/lib/python3/dist-packages), Requirement.parse('six<2.0,>=1.15.0'))
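
A workaround sketch: upgrade six to a version that satisfies pyocd's requirement. Note this replaces the Ubuntu-packaged six in place, which could in principle affect other system packages, so treat it as a quick fix rather than a proper solution:

```shell
pip3 install --upgrade 'six>=1.15,<2.0'
```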

clang-16 is useless and breaks the sparse build

Hello,

The latest Docker images switched to installing clang-16; however, this clang is not actually used by the system, since Ubuntu's clang-15 is installed by default as a dependency of mono, so clang-16 is probably nothing but a waste of space.

Also, if the default clang-15 is replaced by clang-16 via the update-alternatives script, the sparse-llvm build breaks, as no one has updated sparse for years. Part of the broken sparse-llvm build log is below.

So I suggest either switching back to using Ubuntu's clang-15, or fixing sparse-llvm and forcing clang-16 as the only clang on the system.
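
If the second option is taken, making clang-16 the system default could be sketched with update-alternatives as below (the binary paths are assumptions; sparse would still need patching for the LLVM C API functions removed in recent releases):

```shell
update-alternatives --install /usr/bin/clang clang /usr/bin/clang-16 100
update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-16 100
```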

Broken sparse-llvm:

......
  CC      version.o
sparse-llvm.c: In function ‘get_sym_value’:
sparse-llvm.c:305:34: warning: implicit declaration of function ‘LLVMConstGEP’; did you mean ‘LLVMConstGEP2’? [-Wimplicit-function-declaration]
  305 |                         result = LLVMConstGEP(data, indices, ARRAY_SIZE(indices));
      |                                  ^~~~~~~~~~~~
      |                                  LLVMConstGEP2
sparse-llvm.c:305:32: warning: assignment to ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
  305 |                         result = LLVMConstGEP(data, indices, ARRAY_SIZE(indices));
      |                                ^
sparse-llvm.c: In function ‘calc_gep’:
sparse-llvm.c:488:16: warning: implicit declaration of function ‘LLVMBuildInBoundsGEP’; did you mean ‘LLVMBuildInBoundsGEP2’? [-Wimplicit-function-declaration]
  488 |         addr = LLVMBuildInBoundsGEP(builder, base, &off, 1, name);
      |                ^~~~~~~~~~~~~~~~~~~~
      |                LLVMBuildInBoundsGEP2
sparse-llvm.c:488:14: warning: assignment to ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
  488 |         addr = LLVMBuildInBoundsGEP(builder, base, &off, 1, name);
      |              ^
sparse-llvm.c: In function ‘output_op_load’:
sparse-llvm.c:714:18: warning: implicit declaration of function ‘LLVMBuildLoad’; did you mean ‘LLVMBuildLoad2’? [-Wimplicit-function-declaration]
  714 |         target = LLVMBuildLoad(fn->builder, addr, name);
      |                  ^~~~~~~~~~~~~
      |                  LLVMBuildLoad2
sparse-llvm.c:714:16: warning: assignment to ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
  714 |         target = LLVMBuildLoad(fn->builder, addr, name);
      |                ^
sparse-llvm.c: In function ‘output_op_call’:
sparse-llvm.c:822:18: warning: implicit declaration of function ‘LLVMBuildCall’; did you mean ‘LLVMBuildCall2’? [-Wimplicit-function-declaration]
  822 |         target = LLVMBuildCall(fn->builder, func, args, n_arg, name);
      |                  ^~~~~~~~~~~~~
      |                  LLVMBuildCall2
sparse-llvm.c:822:16: warning: assignment to ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
  822 |         target = LLVMBuildCall(fn->builder, func, args, n_arg, name);
      |                ^
  AR      libsparse.a
  LD      compile
  LD      ctags
  LD      example
  LD      graph
  LD      obfuscate
  LD      sparse
  LD      test-dissect
  LD      test-lexing
  LD      test-linearize
  LD      test-parsing
  LD      test-show-type
  LD      test-unssa
  LD      c2xml
  LD      sparse-llvm
/usr/bin/ld: sparse-llvm.o: in function `get_sym_value':
.../sparse/sparse/sparse-llvm.c:305: undefined reference to `LLVMConstGEP'
/usr/bin/ld: sparse-llvm.o: in function `calc_gep':
.../sparse/sparse/sparse-llvm.c:488: undefined reference to `LLVMBuildInBoundsGEP'
/usr/bin/ld: sparse-llvm.o: in function `output_op_load':
.../sparse/sparse/sparse-llvm.c:714: undefined reference to `LLVMBuildLoad'
/usr/bin/ld: sparse-llvm.o: in function `output_op_call':
.../sparse/sparse/sparse-llvm.c:822: undefined reference to `LLVMBuildCall'
collect2: error: ld returned 1 exit status
make: *** [Makefile:250: sparse-llvm] Error 1

Image does not build for Nordic nRF52

Hi,
I am trying to build the blinky sample for the Nordic nRF52833 inside Docker. I successfully built and ran the Docker image. Then I ran west init and west update.

While it works fine for QEMU, I get the following error when running
west build -b nrf52833dk_nrf52833 samples/hello_world

Parsing /home/zephyrproject/zephyr/Kconfig
Loaded configuration '/home/zephyrproject/zephyr/boards/arm/nrf52833dk_nrf52833/nrf52833dk_nrf52833_defconfig'
Merged configuration '/home/zephyrproject/zephyr/samples/hello_world/prj.conf'
warning: HAS_NORDIC_DRIVERS (defined at modules/hal_nordic/Kconfig:7) has direct dependencies 0 with value n, but is currently being y-selected by the following symbols: - SOC_SERIES_NRF52X (defined at soc/arm/nordic_nrf/nrf52/Kconfig.series:6), with value y, direct dependencies (value: y), and select condition (value: y)
warning: HAS_NRFX (defined at modules/hal_nordic/nrfx/Kconfig:4) has direct dependencies 0 with value n, but is currently being y-selected by the following symbols: - SOC_SERIES_NRF52X (defined at soc/arm/nordic_nrf/nrf52/Kconfig.series:6), with value y, direct dependencies (value: y), and select condition (value: y)
warning: NRFX_CLOCK (defined at modules/hal_nordic/nrfx/Kconfig:16) has direct dependencies HAS_NRFX && 0 with value n, but is currently being y-selected by the following symbols: - CLOCK_CONTROL_NRF (defined at drivers/clock_control/Kconfig.nrf:13), with value y, direct dependencies DT_HAS_NORDIC_NRF_CLOCK_ENABLED && CLOCK_CONTROL (value: y), and select condition !CLOCK_CONTROL_NRF_FORCE_ALT && DT_HAS_NORDIC_NRF_CLOCK_ENABLED && CLOCK_CONTROL (value: y)
warning: NRFX_GPIOTE0 (defined at modules/hal_nordic/nrfx/Kconfig:68) has direct dependencies HAS_NRFX && 0 with value n, but is currently being y-selected by the following symbols: - GPIO_NRFX (defined at drivers/gpio/Kconfig.nrfx:4), with value y, direct dependencies DT_HAS_NORDIC_NRF_GPIO_ENABLED && GPIO (value: y), and select condition HAS_HW_NRF_GPIOTE0 && DT_HAS_NORDIC_NRF_GPIO_ENABLED && GPIO (value: y)
warning: NRFX_PPI (defined at modules/hal_nordic/nrfx/Kconfig:155) has direct dependencies HAS_NRFX && 0 with value n, but is currently being y-selected by the following symbols: - UART_0_ENHANCED_POLL_OUT (defined at drivers/serial/Kconfig.nrfx_uart_instance:21), with value y, direct dependencies !SOC_SERIES_NRF54LX && HAS_HW_NRF_UARTE0 && (HAS_HW_NRF_PPI || HAS_HW_NRF_DPPIC) && (HAS_HW_NRF_UART0 || HAS_HW_NRF_UARTE0) && UART_NRFX && SERIAL (value: y), and select condition HAS_HW_NRF_PPI && !SOC_SERIES_NRF54LX && HAS_HW_NRF_UARTE0 && (HAS_HW_NRF_PPI || HAS_HW_NRF_DPPIC) && (HAS_HW_NRF_UART0 || HAS_HW_NRF_UARTE0) && UART_NRFX && SERIAL (value: y)
warning: NRFX_CLOCK_LFXO_TWO_STAGE_ENABLED (defined at modules/hal_nordic/nrfx/Kconfig:20) has direct dependencies NRFX_CLOCK && HAS_NRFX && 0 with value n, but is currently being y-selected by the following symbols: - CLOCK_CONTROL_NRF_K32SRC_XTAL (defined at drivers/clock_control/Kconfig.nrf:36), with value y, direct dependencies (value: y), and select condition !SOC_SERIES_BSIM_NRFXX && !CLOCK_CONTROL_NRF_FORCE_ALT && (value: y)
error: Aborting due to Kconfig warnings

Is this docker image not made for all controllers or am I missing something?
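
The warned-about symbols all live in the hal_nordic module, so warnings like these usually mean the west-managed modules are out of sync with the zephyr tree rather than a limitation of the image. A reasonable first step (sketch) is to re-sync the modules and force a pristine build:

```shell
west update                     # re-sync hal_nordic and the other modules
west build -p always -b nrf52833dk_nrf52833 samples/hello_world
```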

bug: Dockerfile.ci: broken clang-15 installation in v0.26.0

Hello @KloudJack ,

After the migration to Ubuntu 22.04, the vanilla clang installation has been broken due to a conflict with Ubuntu's llvm-15:i386 (installed as a dependency of 'libsdl2-dev:i386'). Actually, the bug is masked by a second "false positive installation" error: an incorrect version of the repository being used for downloading causes vanilla clang from 'apt.llvm.org' to be completely ignored.

The second bug is in line 55:
echo "deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy main" | tee /etc/apt/sources.list.d/llvm-official.list && \

while it should be (according to https://apt.llvm.org/):
echo "deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-${LLVM_VERSION} main" | tee /etc/apt/sources.list.d/llvm-official.list && \

With the above modification, the correct clang version 15.0.7~++20230131104537+8dfdcc7b7bf6-1~exp1~20230131104626.110 started to install, and the Docker build process immediately failed due to a version conflict.

RFC: pip install west and PEP 668

I recently ran into a problem with Debian 12 (bookworm) on a private Docker environment for Zephyr and decided to see how this repo solves it. Short answer: it doesn't.

https://pythonspeed.com/articles/externally-managed-environment-pep-668/

Right now the Dockerfiles are using ubuntu-22.04 which isn't affected by PEP 668. However, the next Ubuntu LTS almost certainly will be. Solutions are fairly stark:

  • Wait for Ubuntu to package west and the other Zephyr tools so it can be installed with apt rather than pip
  • Run all Zephyr tools inside a Python virtual environment
  • Invoke the --break-system-packages flag on the assumption that everything will probably(?) be ok
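
The second option can be sketched as follows; the venv path is arbitrary, and installing west afterwards needs no --break-system-packages because pip then only touches the venv:

```shell
# Create an isolated environment for the Zephyr Python tooling
python3 -m venv "$HOME/.venv-zephyr"
. "$HOME/.venv-zephyr/bin/activate"
pip --version          # pip now resolves from inside the venv
# pip install west     # would install west into the venv, not the system
```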

bug: Dockerfile.ci: Unpredictable vanilla clang version installed (was broken clang-15 installation in v0.26.0)

Hello,

Unfortunately, while the last release fixed the clang-15 problem, it introduced a new regression: the installed clang is now the vanilla 'latest', not the version defined by the LLVM_VERSION variable. This is due to the following missed URL modification in the Dockerfile:

- echo "deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy main" | tee /etc/apt/sources.list.d/llvm-official.list && \
+ echo "deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-${LLVM_VERSION} main" | tee /etc/apt/sources.list.d/llvm-official.list && \

llvm-toolchain-jammy is a link to the 'latest stable', not a predefined version, so version 17 (the latest stable for now) will be installed.

How to use this docker image?

I have a project, which I can build like this:

$ cmake -B build -GNinja -DBOARD=nucleo_f429zi .
$ ninja -C build

Which docker image should I use to build this project, and how should I use it?

I tried this, but it did not work:

$ docker run --rm  --volume c:\git\my_project:/project zephyrprojectrtos/ci /bin/bash /project/build.sh
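
A sketch of an invocation that runs the build inline (paths and tag are placeholders; on Windows cmd the volume flag would be -v c:\git\my_project:/project). Note that a plain CMake project may also need a west workspace or ZEPHYR_BASE set inside the container, which this sketch does not cover:

```shell
docker run --rm -v "$PWD":/project -w /project zephyrprojectrtos/ci:latest \
    bash -c "cmake -B build -GNinja -DBOARD=nucleo_f429zi . && ninja -C build"
```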

Make Developer Image conform to VSCode devcontainers

The IDE primarily used to develop Zephyr-based applications seems to be VSCode. It would be great if you could provide the "Developer Image" Docker image in a form conforming to the VSCode devcontainer format. VSCode devcontainers can be added to projects easily, with reasonable preconfiguration of extensions, etc.

There is a Dockerfile template and a image template which can be used as boilerplate to get started. They are usually based on an alpine image or debian image. Dockerfile ARGs can be customized with a .devcontainer/devcontainer.json.

Reasonable VSCode extensions to include could be:

Inspiration/references:
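
To make the request concrete, here is a minimal .devcontainer/devcontainer.json sketch layered on top of the existing CI image; every field and extension ID is illustrative, not an agreed configuration:

```json
{
  "name": "zephyr",
  "image": "docker.io/zephyrprojectrtos/ci:latest",
  "customizations": {
    "vscode": {
      "extensions": ["ms-vscode.cpptools", "ms-vscode.cmake-tools"]
    }
  }
}
```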

Dependency errors for sphinx, pyocd, junit2html, cmsis-pack-manager

Hello, I tried to build an image for https://github.com/zephyrproject-rtos/docker-image, SHA is d761a83.

The following non-fatal errors occurred during the installation process:

...
ERROR: sphinx 4.5.0 has requirement docutils<0.18,>=0.14, but you'll have docutils 0.18.1 which is incompatible.
ERROR: sphinx-rtd-theme 1.0.0 has requirement docutils<0.18, but you'll have docutils 0.18.1 which is incompatible.
ERROR: sphinx-tabs 3.3.1 has requirement docutils~=0.17.0, but you'll have docutils 0.18.1 which is incompatible.
ERROR: pyocd 0.33.1 has requirement six<2.0,>=1.15.0, but you'll have six 1.14.0 which is incompatible.
ERROR: junit2html 30.1.3 has requirement jinja2>=3.0, but you'll have jinja2 2.10.1 which is incompatible.
...
ERROR: sphinx-tabs 3.3.1 has requirement docutils~=0.17.0, but you'll have docutils 0.16 which is incompatible.
ERROR: pyocd 0.33.1 has requirement pyyaml<7.0,>=6.0, but you'll have pyyaml 5.4.1 which is incompatible.
ERROR: pyocd 0.33.1 has requirement six<2.0,>=1.15.0, but you'll have six 1.14.0 which is incompatible.
ERROR: cmsis-pack-manager 0.4.0 has requirement pyyaml<7.0,>=6.0, but you'll have pyyaml 5.4.1 which is incompatible.

Although the image is built successfully in the end, these conflicts prevent a later build of the LaTeX PDF documentation: that process fails.

Could you tell me what the reason is and how to solve it? Perhaps there are workarounds?

Thanks in advance.
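
Until the requirements files are reconciled upstream, one possible workaround is to pin the conflicting packages to versions satisfying all of the constraints quoted above before building the docs. This is only a sketch derived from those error messages:

```shell
pip3 install 'docutils~=0.17.0' 'six>=1.15,<2.0' 'jinja2>=3.0' 'pyyaml>=6.0,<7.0'
```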
