podman-gitlab-runner

Using Podman to power your Gitlab CI pipeline

⚠️ NOTE ⚠️: New deployments should avoid using code from this repository. Instead, use the official Podman support: https://docs.gitlab.com/runner/executors/docker.html#use-podman-to-run-docker-commands. Existing deployments should consider migrating if possible.

  1. Installation and Setup
    1. Set up rootless Podman for the gitlab-runner user
    2. Installing the gitlab-runner
    3. Setting up a Runner Instance
  2. Tweaking the Installation
    1. Private Registries
  3. Using Podman in Podman containers
  4. License
  5. Links

Installation and Setup

The install instructions are for a Fedora 31+ installation; most of them should transfer to other distributions. gitlab-runner version 12.6 or higher is required, because we rely on the image tag being exposed from the .gitlab-ci.yml file.

Set up rootless Podman for the gitlab-runner user

Make sure you have added entries in /etc/subuid and /etc/subgid for the gitlab-runner user. Enable lingering for the gitlab-runner user with sudo loginctl enable-linger gitlab-runner. Run sudo -iu gitlab-runner podman system migrate to set correct cgroups behavior and silence a warning during job execution.
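For example, a minimal setup could look like this (the subordinate ID range is only an illustration; pick one that does not overlap existing entries):

# Illustrative subordinate UID/GID range for the gitlab-runner user
echo "gitlab-runner:200000:65536" | sudo tee -a /etc/subuid /etc/subgid

# Keep the user's services running without an active login session
sudo loginctl enable-linger gitlab-runner

# Set correct cgroups behavior and silence a warning during job execution
sudo -iu gitlab-runner podman system migrate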

Installing the gitlab-runner

First, install the gitlab-runner using the instructions listed on the website. You can silence the SELinux warnings by labelling the binary with the proper bin_t type:

sudo chcon -t bin_t /usr/bin/gitlab-runner

Ensure that the gitlab-runner service runs with the appropriate permissions. Since we are using Podman in a rootless setup, the service can run with user privileges instead of root permissions. Add a systemd drop-in (/etc/systemd/system/gitlab-runner.service.d/rootless.conf):

[Service]
User=gitlab-runner
Group=gitlab-runner
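
After adding the drop-in, reload systemd and restart the service for the change to take effect:

sudo systemctl daemon-reload
sudo systemctl restart gitlab-runner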

Setting up a Runner Instance

As the gitlab-runner user, change into the home directory (/home/gitlab-runner) and clone this repository.

git clone https://github.com/jonasbb/podman-gitlab-runner

Then follow the instructions to set up a new runner instance:

sudo -u gitlab-runner gitlab-runner register \
    --url https://my.gitlab.instance/ \
    --registration-token $GITLAB_REGISTRATION_TOKEN \
    --name "Podman Runner" \
    --executor custom \
    --builds-dir /home/user \
    --cache-dir /home/user/cache \
    --custom-prepare-exec "/home/gitlab-runner/podman-gitlab-runner/prepare.sh" \
    --custom-run-exec "/home/gitlab-runner/podman-gitlab-runner/run.sh" \
    --custom-cleanup-exec "/home/gitlab-runner/podman-gitlab-runner/cleanup.sh"

Tweaking the Installation

Currently, the scripts do not provide much customization. However, you can adapt the functions start_container and install_dependencies to specify how Podman should spawn the containers and how to install the dependencies.

Some behaviour can be tweaked by setting the correct environment variables. Rename the custom_base.template.sh file to custom_base.sh to make use of the customization. The following variables are supported right now:

  • PODMAN_RUN_ARGS: Customize how Podman spawns the containers (see the example below).
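
A minimal custom_base.sh could look like the following sketch. PODMAN_RUN_ARGS is a bash array of extra arguments for podman run; the two flags shown are only illustrations:

#!/usr/bin/env bash
# custom_base.sh -- picked up by the runner scripts if present.
# Extra arguments for `podman run`; the values below are examples only.
PODMAN_RUN_ARGS=(
    --memory 4g        # limit the container memory
    --timezone local   # use the host timezone inside the container
)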

Private Registries

Podman supports access to private registries. You can set the DOCKER_AUTH_CONFIG variable under Settings → CI / CD and provide the credentials for accessing the private registry. Details on the required format of the variable can be found under using statically defined credentials in the Gitlab documentation.
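
For illustration, the variable holds a Docker auth configuration in JSON form; registry.example.com:5000 and the base64-encoded credentials (my_username:my_password) are placeholders:

{
    "auths": {
        "registry.example.com:5000": {
            "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
        }
    }
}

The auth value is the base64 encoding of username:password, e.g. printf "my_username:my_password" | base64.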

Additionally, there are multiple ways to authenticate against Gitlab registries. The script uses a configured deploy token (via $CI_DEPLOY_PASSWORD) to log in. Alternatively, the CI job also provides access to the registry for the duration of a single job. The script uses the variables $CI_JOB_TOKEN and $CI_REGISTRY_PASSWORD, if available, to log into the registry.

The four methods are tried in order until one succeeds (a sketch follows the list):

  1. DOCKER_AUTH_CONFIG
  2. CI_DEPLOY_PASSWORD
  3. CI_JOB_TOKEN
  4. CI_REGISTRY_PASSWORD
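
A hypothetical sketch of this cascade (login_via_docker_auth_config and registry_login are made-up helper names; the real scripts read the CUSTOM_ENV_* variables provided by the custom executor and may differ in detail):

# Try each login method in order; stop at the first success.
registry_login() {
    podman login --username "$1" --password "$2" "$CUSTOM_ENV_CI_REGISTRY"
}

login_via_docker_auth_config \
    || registry_login "$CUSTOM_ENV_CI_DEPLOY_USER" "$CUSTOM_ENV_CI_DEPLOY_PASSWORD" \
    || registry_login "$CUSTOM_ENV_CI_JOB_USER" "$CUSTOM_ENV_CI_JOB_TOKEN" \
    || registry_login "$CUSTOM_ENV_CI_REGISTRY_USER" "$CUSTOM_ENV_CI_REGISTRY_PASSWORD"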

More details about the different authentication variants can be found in the official documentation: https://docs.gitlab.com/ee/user/packages/container_registry/index.html#authenticate-by-using-gitlab-cicd

Using Podman in Podman containers

Executing Podman inside the job containers is useful for testing containers or building new images in CI. By default the nesting fails, since access to the overlayfs is not possible.

Red Hat has a guide on how to run Podman inside of Podman containers, in both rootful and rootless scenarios: https://www.redhat.com/sysadmin/podman-inside-container
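
For example, following that guide, the quay.io/podman/stable image can run a nested rootless container (a sketch based on the linked article; the flags may need adjusting for your setup):

# Rootless Podman nested inside a Podman container, per the Red Hat guide
podman run --rm --user podman quay.io/podman/stable \
    podman run --rm registry.access.redhat.com/ubi8-minimal echo hello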

License

Licensed under the MIT license.

Links

podman-gitlab-runner's People

Contributors

andreadistefano, duck-rh, gardar, jonasbb, mulbc, nicki-krizek, quanterium, runfalk, runiq, tpmkranz


podman-gitlab-runner's Issues

Allow overriding command

Quack,

Thanks for your nice work. I'm using a custom runner to run tests that require an init system. It would be handy if we could override the command in custom_base.sh. I could prepare a PR if you're OK with this proposal.

Regards.
\_o<

Podman pulling image on every build despite edits to source

I'm trying to contribute a pull if-not-present policy for this runner similar to this docker runner option, but I can't stop podman from pulling.

In prepare.sh

#### Attempting to change this to enable the same as a "pull if-not-present" policy
    podman image ls   # shows that the image is available
#    podman pull --authfile "$CACHE_DIR"/_authfile_"$CONTAINER_ID" "$IMAGE"
#    rm "$CACHE_DIR"/_authfile_"$CONTAINER_ID"
    # Added --pull=missing below, although missing is the default
    podman run \
        --detach \
        --name "$CONTAINER_ID" \
        --volume "$CACHE_DIR:/home/user/cache":Z \
        --pull=missing \
        "${PODMAN_RUN_ARGS[@]}" \
        "$IMAGE" \
        sleep 999999999

In cleanup.sh:

# Try to remove all old containers, images, networks, and volumes
## this is killing the cache, need some way to maintain a cache. For now prune manually.
#podman system prune --force --volumes

But when I run another job, podman reaches out and pulls the container again. What am I missing? Is this a feasible contribution?

local cache not restored

It looks like the cache isn't working as expected with the gitlab-runner custom executor and podman.

With the following .gitlab-ci.yml (which works with a plain docker executor), job B fails because the cat command cannot find the hello.txt file that should have been restored from the cache.

default:
  image: debian:buster
  tags:
    - podman

stages:
  - build
  - test

job A:
  stage: build
  script:
    - mkdir -p vendor/
    - echo "build" > vendor/hello.txt
  cache:
    key: build-cache
    paths:
      - vendor/

job B:
  stage: test
  script:
    - cat vendor/hello.txt
  cache:
    key: build-cache
    paths:
      - vendor/
    policy: pull

Here is (part of) my config.toml

[[runners]]
  name = "pleiades-ci podman runner"
  url = "https://[masked]"
  token = "[masked]"
  executor = "custom"
  builds_dir = "/builds"
  cache_dir = "/cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.custom]

    prepare_exec = "/home/me/bin2/prepare.sh"
    prepare_exec_timeout = 1800

    run_exec = "/home/me/bin2/run.sh"

    cleanup_exec = "/home/me/bin2/cleanup.sh"
    cleanup_exec_timeout = 300

    graceful_kill_timeout = 60
    force_kill_timeout = 180

Here is (part of) the output of the failing job:

Setting up git-lfs (2.7.1-1+deb10u1) ...
Preparing environment 00:00
Running on 7a72d7ad609c...
Getting source from Git repository 00:01
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/DEC/pleiades/demo-ci-cd/.git/
Created fresh repository.
Checking out 5c308f92 as test-cache-podman...
Skipping Git submodules setup
Restoring cache 00:00
Checking cache for build-cache...
Runtime platform                                    arch=amd64 os=linux pid=3635 revision=58ba2b95 version=14.2.0
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted. 
Successfully extracted cache
Executing "step_script" stage of the job script 00:00
WARNING: Starting with version 14.0 the 'build_script' stage will be replaced with 'step_script': https://gitlab.com/gitlab-org/gitlab-runner/-/issues/26426
$ cat vendor/hello.txt
cat: vendor/hello.txt: No such file or directory
Cleaning up file based variables 00:01
ERROR: Job failed: exit status 1

Am I missing something?

Improve install_command test

install_command() {
    # Run test if this binary exists
    PACKAGE=$1
    TEST_BINARY=$PACKAGE
    podman exec --user root:root "$CONTAINER_ID" /bin/bash -c 'if ! type '"$TEST_BINARY"' >/dev/null 2>&1; then

I think the install_command test could be improved, as the packages listed under the dependencies don't all contain binaries with the same name as the package.
The ca-certificates package, for example, does not contain any binary called ca-certificates (at least not on Arch Linux/Debian/Red Hat), which causes the ca-certificates package to be installed in every job.
This can account for a significant amount of time, especially on yum-based distros that have a lot of repos enabled.

Possible solutions:

  • Adding a second argument to install_command for a file to check (or perhaps a command to run).
  • Check if a package is installed instead of relying on the existence of a certain file/binary. This would need to be distro-specific, but it would be possible to use dpkg/rpm/etc. instead of apt/yum/dnf (using rpm is significantly faster than relying on yum/dnf). See the sketch after this list.
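
A sketch of the second option, checking the distro's package database and falling back to the binary test (package_installed is a hypothetical helper, not part of the current scripts):

package_installed() {
    local pkg=$1
    # Ask dpkg/rpm whether the package is installed; fall back to `type`.
    podman exec --user root:root "$CONTAINER_ID" /bin/bash -c '
        if type dpkg >/dev/null 2>&1; then
            dpkg -s '"$pkg"' >/dev/null 2>&1
        elif type rpm >/dev/null 2>&1; then
            rpm -q '"$pkg"' >/dev/null 2>&1
        else
            type '"$pkg"' >/dev/null 2>&1
        fi'
}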

Only works with CUSTOM_ENV_CI_REGISTRY set

Preparing the "custom" executor
Using Custom executor...
Running in runner-152-project-2683-concurrent-0-66548
Login to  with CI_REGISTRY_USER
Error: authenticating creds for "": error pinging docker registry : Get "https:///v2/": http: no Host in request URL
ERROR: Preparation failed: exit status 2

You can see that a podman login is tried with an empty registry address. I don't know why CUSTOM_ENV_CI_REGISTRY_USER and CUSTOM_ENV_CI_REGISTRY_PASSWORD are set while CUSTOM_ENV_CI_REGISTRY is not, but that's what my university's GitLab instance is dealing me.

A pull request is on the way.

Improve login to Gitlab Registry

Besides the currently supported DOCKER_AUTH_CONFIG there are multiple other ways to authenticate against the Gitlab registry. The ways are listed here.

The script should try each login method until one works. The order should be:

  1. DOCKER_AUTH_CONFIG This is a very specific configuration, not only for the Gitlab Registry but for all registries, thus it should have the highest priority.
  2. CI_DEPLOY_USER Deploy tokens do not exist by default; if they are created manually, they should have a high priority.
  3. CI_JOB_USER Provided automatically, thus low priority.
  4. CI_REGISTRY_USER Provided automatically, an outdated variant of CI_JOB_USER, thus lowest priority.

https://docs.gitlab.com/ee/user/packages/container_registry/index.html#authenticate-by-using-gitlab-cicd

Change exit code

Hi, I am using this executor, but it seems to be impossible to change the behaviour of the exit code. The problem I am facing is that when I execute some .sh script on the runner which should exit with status code 22, the final status code is always 1 anyway.

In this case I am not able to use allow_failure in my .gitlab-ci.yml file.

  allow_failure:
    exit_codes:
      - 22

I found that there is a variable "$BUILD_FAILURE_EXIT_CODE" in the run.sh script, but where is this coming from?

Is there any possibility to change this behaviour?

Thank you very much.
