
gardenlinux's Introduction


Garden Linux

Garden Linux is a Debian GNU/Linux derivative that aims to provide small, auditable Linux images for most cloud providers (e.g. AWS, Azure, GCP) and for bare-metal machines. Garden Linux is the best Linux for Gardener nodes. Its highly customizable feature set makes it easy to tailor images to your needs.

Features

  • Easy to use build system
  • Repeatable and auditable builds
  • Small footprint
  • Purely systemd based (network, fstab etc.)
  • Initramfs is dracut generated
  • Running latest LTS Kernel
  • MIT license
  • Security
    • Fully immutable image(s) (optional)
    • OpenSSL 3.0 (default)
    • CIS Framework (optional)
  • Testing
    • Unit tests (run against the created image)
    • Integration tests (run on all supported platforms)
    • License violation checks
    • Outdated software version checks
  • Supporting major platforms out-of-the-box
    • Major cloud providers: AWS, Azure, Google Cloud, Alibaba Cloud
    • Major virtualizers: VMware, OpenStack, KVM
    • Bare-metal systems

Build

The build system utilises the gardenlinux/builder to create customized Linux distributions. The gardenlinux/gardenlinux repository is maintained by the Garden Linux team and showcases the specialized "features" that are also available to other projects.

Tip

For further information about the build process, and how to set it up on your machine, refer to the Build Image documentation page.

To initiate a build, use the command:

./build ${platform}-${feature}_${modifier}

Where:

  • ${platform} denotes the desired platform (e.g., kvm, metal, aws).
  • ${feature} represents a specific feature from the features/ folder.
  • ${modifier} is an optional modifier from the features/ folder, prefixed with an underscore "_".

You can combine multiple platforms, features, and modifiers as needed.

Example:

./build kvm-python_dev
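As an illustration of how these target names compose (the helper below is hypothetical and not part of the actual build tooling):

```python
def build_target(platform, features=(), modifiers=()):
    # Hypothetical helper, for illustration only: the target name is the
    # platform, each feature joined with "-", each modifier prefixed with "_".
    parts = [platform]
    parts += ["-" + feature for feature in features]
    parts += ["_" + modifier for modifier in modifiers]
    return "".join(parts)

print(build_target("kvm", features=["python"], modifiers=["dev"]))  # kvm-python_dev
```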

The build script fetches the required builder container and manages all internal build steps. By default, it uses rootless podman, but you can switch to another container engine with the --container-engine flag.

Test

To run unit tests for a specific target, use the command ./test ${target}. Further documentation about tests is located in tests/README.md.

Download Releases

Product                          Release Frequency   Download
LTS cloud and baremetal images   Quarterly           Download
LTS base container images        Quarterly           Download
LTS bare python container        Quarterly           Download
LTS bare libc container          Quarterly           Download
LTS bare nodejs container        Quarterly           Download

Note: For each artifact, there also exists a nightly version, which is built daily but is not considered LTS.

The LTS cloud and baremetal images provided by Garden Linux are compatible with various cloud platforms, including Alibaba Cloud, AWS, Microsoft Azure and GCP.

Nvidia Driver Support

An installer can be found in the gardenlinux/gardenlinux-nvidia-installer repository.

Documentation

Please refer to docs/README.md.

Contributing

Contributions to the Garden Linux open source projects are welcome. More information is available in CONTRIBUTING.md and in our docs/.

Community

If you need further assistance, have any issues or just want to get in touch with other Garden Linux users feel free to join our public chat room on Gitter.

Link: https://gitter.im/gardenlinux/community

gardenlinux's People

Contributors

5kt, adaevitncar, akendo, andreasburger, berendt, ccwienk, chrinorse, dependabot[bot], dguendisch, dkistner, dnan0s, fwilhe, gchbg, gehoern, gyptazy, jensh007, jia-jerry, liorokman, maltej, marwinski, mbanck-ntap, mrbatschner, msohn, nanory, nkraetzschmar, ubi2go, vincinator, vpnachev, walditm, zobelhelas


gardenlinux's Issues

chost feature must get a new name

We have to rename the chost feature to something different.
It collides with the SUSE CHOST name and it is not self-descriptive.
I suggest renaming it to containerd.

Set a strict global umask value

What would you like to be added:
A umask value of 077, or at minimum 027.

Why is this needed:
The permissive value of 022 should not be used if possible.
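For illustration (not the project's actual configuration), the effect of a 027 umask on a newly created file:

```shell
umask 027
umask                              # prints 0027
touch /tmp/umask-demo.$$
stat -c '%a' /tmp/umask-demo.$$    # prints 640 (0666 masked by 027)
rm -f /tmp/umask-demo.$$
```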

Add common static OS user

What would you like to be added:
For debugging and development purposes it is sometimes necessary to ssh into the nodes. Currently, there is no common user available across all cloud images. For example, on AWS there is the admin user, but on GCP it is not available.
It would be very helpful if a common user, e.g. gardenlinux, were available by default and could be used to ssh into the nodes.

Why is this needed:
To ease development and debugging.

Running Golang binaries fails in GardenLinux based Docker containers for Golang >= 1.14

What happened:

While running ArgoCD on a GardenLinux cluster, the container eventually crashes with the following error message:

runtime: mlock of signal stack failed: 12
runtime: increase the mlock limit (ulimit -l) or
runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
fatal error: mlock failed

What you expected to happen:

The container should not crash.

How to reproduce it (as minimally and precisely as possible):

Run ArgoCD of the later versions (compiled with Golang 1.14.1) on a GardenLinux based Kubernetes cluster.

Anything else we need to know:

After a little digging, the root cause of this issue is the following Golang bug: golang/go#37436 coupled with a bad default setting for the Docker runtime engine and coupled with the way that Debian-packaged kernels report their version.

  1. Linux kernel version 5.2.something introduced a memory corruption bug that got fixed in version 5.4.2. See https://github.com/golang/go/wiki/LinuxKernelSignalVectorBug for details.
  2. Golang 1.14 introduced a feature that got bit by the kernel bug above, assuming the kernel itself was compiled with GCC 9.x (which is the case for Debian Bullseye).
  3. Since versions above 5.4.2 are safe, and versions below are not, Golang introduced a workaround for non-patched kernel versions, where the stack memory is mlock(2)-ed into memory. That workaround is automatically enabled if the kernel is detected to be a vulnerable version.
  4. Debian (and Ubuntu) keep the reported kernel version stable, due to multiple software packages depending on a stable version. The base version reported by uname -r is 5.4.0-4-cloud-amd64, but if you ran uname -v you would get the real version (#1 SMP Debian 5.4.19-1 (2020-02-13) in this case) which is already patched.
  5. Golang looks at the wrong field in the uname response, and misidentifies the kernel version as 5.4.0, which is still buggy. Golang then enables the mlock workaround, which fails because the Docker daemon launches all containers with a ulimit of 64KB for max locked memory, instead of the system-wide default which is 64MB.
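The mismatch can be seen by comparing the two uname fields directly (the example values in the comments are taken from the report above and will differ on other systems):

```shell
uname -r   # kernel release, e.g. 5.4.0-4-cloud-amd64 (the field Go parses)
uname -v   # kernel version, e.g. #1 SMP Debian 5.4.19-1 (2020-02-13), the real patch level
```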

The fix is to modify the way Docker is configured on the host by adding a "default-ulimits" section to the /etc/docker/daemon.json file with the correct value (64 MiB instead of 64 KiB).

See item (2) in golang/go#37436 (comment) for an example of what to put inside the daemon.json file, but instead of setting the locked memory limit to unlimited, I suggest keeping it at the system default, which is 64 MiB instead of 64 KiB.
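A sketch of what such a /etc/docker/daemon.json could look like (the 67108864 value is simply 64 MiB expressed in bytes, per the system default mentioned above; merge this with any existing settings rather than overwriting the file):

```json
{
  "default-ulimits": {
    "memlock": {
      "Name": "memlock",
      "Soft": 67108864,
      "Hard": 67108864
    }
  }
}
```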

GPU Support

What would you like to be added:

Currently it is unclear how to enable GPU support on a Garden Linux node.
Please provide a clear guide on how GPUs can be used on Garden Linux nodes.
Possible solutions we would see are:

Why is this needed:

Since CoreOS is deprecated and Gardener does not support Ubuntu in the SAP live landscape, it will in the near future not be possible (or not straightforward) to use GPUs in Gardener clusters in the SAP live landscape.
As a result, once CoreOS support is dropped, all workloads that rely on GPU resources can no longer be served on those Gardener clusters.

Add ipvs support

What would you like to be added:

Load the ipvs kernel modules ip_vs, ip_vs_rr, and nf_conntrack by default.

Why is this needed:

Gardener clusters with ipvs
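On a systemd-based image this could be done with a modules-load.d fragment (a sketch; the file name is illustrative):

```
# /etc/modules-load.d/ipvs.conf -- read by systemd-modules-load at boot
ip_vs
ip_vs_rr
nf_conntrack
```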

It takes about 20 mins for Gardenlinux to be booted on Alicloud

What happened:
It takes about 20 mins for Gardenlinux to be booted on Alicloud
What you expected to happen:
It takes less than 1 min for Gardenlinux to be booted on Alicloud
How to reproduce it (as minimally and precisely as possible):
Use master branch
Anything else we need to know:
Alicloud found it is stuck in the step "initialize debian". They suggested we follow the steps in this doc to build a customized image. I'm not sure our virtio setup complies with what Alicloud suggested.
Environment: Alicloud

Unit Tests

Writing unit tests

  • Unit tests are tests that can be performed on the final image without really instantiating it!
  • debian-cloud-images list https://salsa.debian.org/cloud-team/debian-cloud-images
  • hardening guide lines (as attached) and NO virus scanner ! NO BMC*! NO PRISMA!
  • no ssh keys
  • tmp, proc, sys empty
  • dev populated with "important" device nodes
  • machine-id is a file of size 0, and /var/lib/dbus/machine-id does not exist
  • no passwords
  • size constraints
  • no libdb
  • license scan -> all compatible to SAP!
  • protecode
  • hardening tests (run tiger in chroot, chkrootkit in chroot, rkhunter in chroot)
  • deborphan -an -> no libraries that are not explicitly named (apt-mark showmanual) and are only in by accident
  • debsums -> check integrity of all files on disk
  • device file exist, especially console, fd, full, null, ptmx, pts, random, shm, stderr, stdin, stdout, tty, urandom, zero, ...
  • make sure that autologin is not enabled for productive builds (make sure there is no autologin.conf file under /etc/systemd)
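As a sketch of what one of these checks could look like against an unpacked rootfs (a hypothetical helper, not the project's actual test code):

```python
import os

def check_machine_id(rootfs):
    # machine-id check from the list above: /etc/machine-id must exist as an
    # empty (0-byte) file, and /var/lib/dbus/machine-id must not exist at all.
    machine_id = os.path.join(rootfs, "etc/machine-id")
    dbus_machine_id = os.path.join(rootfs, "var/lib/dbus/machine-id")
    return (os.path.isfile(machine_id)
            and os.path.getsize(machine_id) == 0
            and not os.path.exists(dbus_machine_id))
```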

Use OS GardenLinux extension to provison OS on alicloud

What would you like to be added:
In Alicloud, CCM needs the boot arg provider-id for the kubelet. It is maintained in gardener-extension-provider-alicloud, which is based on the assumption that the env var PROVIDER_ID is provided by the system. gardener-extension-os-coreos-alicloud is responsible for providing the correct value of the env var PROVIDER_ID.

I would suggest providing the env var PROVIDER_ID built in at the image level.
Why is this needed:
We will not need a special extension as gardener-extension-os-coreos-alicloud. We will use a common gardener-extension-os-gardenlinux for Alicloud.

Build hangs during tests

What happened:

Build hangs and eventually fails at

testing systemd services
OK - all services that should be enabled are enabled
OK - all services that should be disabled are disabled
passed

executing tiger tests
OK - tiger didn't detect any issues
passed

checking for an empty /tmp
passed

What you expected to happen:

Build successful

How to reproduce it (as minimally and precisely as possible):

make aws-dev

Anything else we need to know:

Environment:

VM with Debian 11

image is not booting in libvirt kvm environment

We built the kvm image today and tested it in our cloud environment, but the image stalls during boot.

It stalls first at: [sda] Attached SCSI disk

If you type on the keyboard, the next line appears: random: crng init done

This could be related to your machine type: machine='pc-i440fx-4.2'

output.libvirt.xml.txt

vnc-output

[GCP] LoadBalancer Health Checks are failing

What happened:
For k8s services of type LoadBalancer GCP creates health checks for the backing nodes.
With Garden Linux version 11.29.1 these health checks are failing

Instance shoot--foo--bar-cpu-worker-z1-123-456 is unhealthy for <public-ip>.

What you expected to happen:
Health checks to not fail.

How to reproduce it (as minimally and precisely as possible):
Create a shoot cluster with Garden Linux and a service of type Load Balancer.
Check in the GCP console that the health checks are failing for this Load Balancer.

Anything else we need to know:
Although the health checks are failing, some services are still working, but others are not accessible.

Environment:

Kernel with CONFIG_IKHEADERS for BPF tools on Kubernetes

What would you like to be added:

I would like the Garden Linux kernel to be compiled with CONFIG_IKHEADERS.

Why is this needed:

BCC tools need to have access to kernel headers. It can be done either by installing linux-headers packages or by having a kernel compiled with CONFIG_IKHEADERS so that enough information can be retrieved via /sys/kernel/kheaders.tar.xz.

BCC tools are used in Inspektor Gadget, a collection of tools for developers of Kubernetes applications. I would like Inspektor Gadget to support Gardener clusters when it uses Garden Linux.

/cc @mauriciovasquezbernal @domdom82 @vasu1124 @gehoern @MalteJ
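Whether a running kernel was built with this option can be checked like this (illustrative; the config file location varies by distribution):

```shell
# Look the option up in the shipped kernel config, if readable:
grep CONFIG_IKHEADERS "/boot/config-$(uname -r)" 2>/dev/null \
  || echo "kernel config not readable here"
# With CONFIG_IKHEADERS=m, the archive appears after loading the module:
#   modprobe kheaders && ls -l /sys/kernel/kheaders.tar.xz
```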

remove the naming from debuerreotype

Some names in the bin folder start with the prefix "debuerreotype", since the project https://github.com/debuerreotype/debuerreotype is our parent. We have changed so much in the recent past that it is time to get rid of the hard-to-spell name and brand everything as garden, while also making sure we credit the debuerreotype folks for the initial project so they get all their credit.

replace growpart with systemd repart

A basic feature of a cloud image is: it grows to the disk size it is located on.

Currently this is done via an updated (corrected and more versatile) growpart script in the initramfs. I think the more future-proof and even more versatile approach would be to use repart from systemd.

The problem here is that repart is not yet part of the Debian systemd (as of version 245).
All initial tests have to be done with a recompiled systemd for now.
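A sketch of a repart.d drop-in that would grow the root partition to fill the remaining disk space (the file name and contents are illustrative and assume a systemd build that ships repart, i.e. newer than the Debian 245 packages mentioned above):

```ini
# /etc/repart.d/50-root.conf
[Partition]
Type=root
```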

Can't log on with admin on Alicloud

What happened:
Can't log on with admin on Alicloud. Cloud-init failed.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Latest master branch.
Anything else we need to know:
I used to build an image with alicloud branch. It worked. Below is the comparison of the cloud-init log.
Success one:

2020-05-18 17:57:35,896 - handlers.py[DEBUG]: finish: init-network/check-cache: SUCCESS: no cache found
2020-05-18 17:57:35,896 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance
2020-05-18 17:57:35,897 - stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.debian.Distro'>
2020-05-18 17:57:35,897 - __init__.py[DEBUG]: Looking for data source in: ['AliYun', 'None'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM', 'NETWORK']
2020-05-18 17:57:35,899 - __init__.py[DEBUG]: Searching for network data source in: ['DataSourceAliYun', 'DataSourceNone']
2020-05-18 17:57:35,900 - handlers.py[DEBUG]: start: init-network/search-AliYun: searching for network data from DataSourceAliYun
2020-05-18 17:57:35,900 - __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceAliYun.DataSourceAliYun'>
2020-05-18 17:57:35,900 - __init__.py[DEBUG]: Update datasource metadata and network config due to events: New instance first boot
2020-05-18 17:57:35,900 - util.py[DEBUG]: Running command ['systemd-detect-virt', '--quiet', '--container'] with allowed return codes [0] (shell=False, capture=True)
2020-05-18 17:57:35,905 - util.py[DEBUG]: Running command ['running-in-container'] with allowed return codes [0] (shell=False, capture=True)
2020-05-18 17:57:35,907 - util.py[DEBUG]: Running command ['lxc-is-container'] with allowed return codes [0] (shell=False, capture=True)
2020-05-18 17:57:35,908 - util.py[DEBUG]: Reading from /proc/1/environ (quiet=False)
2020-05-18 17:57:35,908 - util.py[DEBUG]: Read 153 bytes from /proc/1/environ
2020-05-18 17:57:35,908 - util.py[DEBUG]: Reading from /proc/self/status (quiet=False)
2020-05-18 17:57:35,908 - util.py[DEBUG]: Read 1027 bytes from /proc/self/status
2020-05-18 17:57:35,908 - util.py[DEBUG]: querying dmi data /sys/class/dmi/id/product_name
2020-05-18 17:57:35,908 - util.py[DEBUG]: Reading from /sys/class/dmi/id/product_name (quiet=False)

Failed one:

2020-05-18 06:36:44,286 - handlers.py[DEBUG]: finish: init-network/check-cache: SUCCESS: no cache found
2020-05-18 06:36:44,286 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance
2020-05-18 06:36:44,288 - stages.py[DEBUG]: Using distro class <class 'cloudinit.distros.debian.Distro'>
2020-05-18 06:36:44,288 - __init__.py[DEBUG]: Looking for data source in: ['AliYun', 'None'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM', 'NETWORK']
2020-05-18 06:36:44,290 - __init__.py[DEBUG]: Searching for network data source in: ['DataSourceAliYun', 'DataSourceNone']
2020-05-18 06:36:44,290 - handlers.py[DEBUG]: start: init-network/search-AliYun: searching for network data from DataSourceAliYun
2020-05-18 06:36:44,290 - __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceAliYun.DataSourceAliYun'>
2020-05-18 06:36:44,290 - handlers.py[DEBUG]: finish: init-network/search-AliYun: FAIL: no network data found from DataSourceAliYun
2020-05-18 06:36:44,290 - util.py[WARNING]: Getting data from <class 'cloudinit.sources.DataSourceAliYun.DataSourceAliYun'> failed
2020-05-18 06:36:44,294 - util.py[DEBUG]: Getting data from <class 'cloudinit.sources.DataSourceAliYun.DataSourceAliYun'> failed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 756, in find_source
    s = cls(sys_cfg, distro, paths)
  File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceEc2.py", line 78, in __init__
    super(DataSourceEc2, self).__init__(sys_cfg, distro, paths)
  File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 208, in __init__
    self.ds_cfg = util.get_cfg_by_path(
  File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 733, in get_cfg_by_path
    cur = cur[tok]
TypeError: string indices must be integers
2020-05-18 06:36:44,295 - handlers.py[DEBUG]: start: init-network/search-None: searching for network data from DataSourceNone
2020-05-18 06:36:44,295 - __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceNone.DataSourceNone'>
2020-05-18 06:36:44,295 - __init__.py[DEBUG]: Update datasource metadata and network config due to events: New instance first boot

Environment:
Alicloud

Remove Berkeley DB from Operating System Images

Berkeley DB comes with a problematic open source license and must be removed before the image is deployed.

There are apparently several places where it is being used (more detail needed):
pam
apt

Ensure Current Version of Containerd

Current stable CoreOS and Flatcar releases contain an old version of containerd which will be, or already has been, deprecated by Kubernetes.

These are some details which have been submitted:

In discussions with the Kubernetes sig-node team I have understood that the current Dockershim support (the legacy Docker interface that Kubernetes has for using the Docker Engine API) is being deprecated. At some point in time it will be removed and CRI will be required for stable Kubernetes installations. This is in line with the fact that Docker Engine is basically a wrapper on top of Containerd. For this reason, I urge you to start with Containerd when you pivot from CoreOS as the default HostOS for Gardener. The merged PR should make it very easy for you to do this – assuming you choose a distribution with a supported version of Containerd (i.e. >= 1.3).

Pipeline Buildout

Set session timeout for the shell

What would you like to be added:
A secure practice is to set a session timeout for the shell.

Why is this needed:
Additional level of security

Release Notes handling

Motivation

Users may be interested in relevant changes between gardenlinux versions. The usual way to transport this kind of information is through release notes.

Implementation Hints

  • define a contract allowing us to collect and publish release notes between the last and the current (to-be-released) version of gardenlinux, preferably in an automated fashion
  • establish some conventions (e.g. mark breaking changes, features, fixes, .. in a consistent manner)

make vmware produces numerous errors

What happened:
make vmware

Tests 25 done. 6 failed

### final fsck, just to be sure
fsck.fat 4.1 (2017-01-24)
/dev/loop0p2: 1 files, 0/8167 clusters
USR: 10756/65536 files (0.1% non-contiguous), 176633/262144 blocks
[QUOTA WARNING] Usage inconsistent for ID 0:actual (173379584, 5143) != expected (173371392, 5143)
ROOT: Update quota info for quota type 0.
[QUOTA WARNING] Usage inconsistent for ID 0:actual (173268992, 5133) != expected (173260800, 5133)
ROOT: Update quota info for quota type 1.
[QUOTA WARNING] Usage inconsistent for ID 0:actual (173387776, 5146) != expected (173379584, 5146)
ROOT: Update quota info for quota type 2.
ROOT: 5156/64384 files (0.1% non-contiguous), 51106/257531 blocks
Errors detected, retrying the fsck.
[QUOTA WARNING] Usage inconsistent for ID 0:actual (173367296, 5143) != expected (173379584, 5143)
ROOT: Update quota info for quota type 0.
[QUOTA WARNING] Usage inconsistent for ID 0:actual (173256704, 5133) != expected (173268992, 5133)
ROOT: Update quota info for quota type 1.
[QUOTA WARNING] Usage inconsistent for ID 0:actual (173375488, 5146) != expected (173387776, 5146)
ROOT: Update quota info for quota type 2.
ROOT: 5156/64384 files (0.1% non-contiguous), 51106/257531 blocks
Errors detected, retrying the fsck.
ROOT: 5156/64384 files (0.1% non-contiguous), 51106/257531 blocks
+ qemu-img convert -o subformat=streamOptimized -o adapter_type=lsilogic -f raw -O vmdk output/20200427/amd64/bullseye/rootfs.raw output/20200427/amd64/bullseye/rootfs.vmdk
+ make-ova --vmdk output/20200427/amd64/bullseye/rootfs.vmdk --template /opt/debuerreotype/templates/gardenlinux.ovf.template
output/20200427/amd64/bullseye/rootfs.ova
#### tests
checking service accounts for shell
all service accounts have no shells
passed
- checking for server keys
  keys found!
- checking minimum settings
- scanning user accounts
failed
checking for autologin.conf files
There are no autologin.conf files under rootfs/etc/systemd directory
passed
checking for blacklisted packeges
OK - there are no blacklisted packages on the filesystem
passed
testing for needed capabilities
OK - all capabilities as expected
passed
testing the integrity of the files from installed packages
debconf: delaying package configuration, since apt-utils is not installed
OK - verifying if all installed packages provide md5sums
OK - verifying if there are any changed files
passed
testing /dev contents
OK - all /dev devices match
passed
checking home permissions
correct home permissions
passed
checking the machine-id
OK - machine-id is as expected
passed
check memory protection
configurations are correct
passed
checking minimum number of users with UID 0
There is only one user with UID 0
passed
testing for unneeded/orphaned packages
debconf: delaying package configuration, since apt-utils is not installed
OK - verifying if any extra unneeded packages are installed
passed
checking for empty/shadowed passwords
passed
checking for an empty /proc
OK - /proc is empty
passed
environment variables must be rest
the environment variables are reset
passed
executing rkhunter tests
debconf: delaying package configuration, since apt-utils is not installed
Warning: Checking for prerequisites               [ Warning ]
         No output from the 'lsattr' command - all file immutable-bit checks will be skipped.
Warning: The SSH and rkhunter configuration options should be the same:
         SSH configuration option 'Protocol': 2
         Rkhunter configuration option 'ALLOW_SSH_PROT_V1': 2
failed
checking root permissions
correct root permissions
passed
check memory protection
configurations are correct
passed
executing ssh config tests
/run/sshd must be owned by root and not group or world-writable.
FATAL - can't get the ssh config!
failed
testing for suid files
FAIL - suid files are present that are not whitelisted!
       suid files: /sbin/mount.nfs
failed
checking for an empty /sys
passed
executing tiger tests
debconf: delaying package configuration, since apt-utils is not installed
OK - tiger didn't detect any issues
failed
checking for an empty /tmp
passed
grep: /home/dev/.bashrc: No such file or directory
check for world-writable strings in path
the directories in the path are fine
passed
this should fail
failed
Tests 25 done. 6 failed

The VMDK gets created, however it is not able to boot

What you expected to happen:
I am assuming only one error to be reported

How to reproduce it (as minimally and precisely as possible):
make vmware
5.6.12-1-MANJARO
Docker version 19.03.8-ce, build afacb8b7f0
GNU Make 4.3

Anything else we need to know:

gardenOS

Environment:

Overlayfs

Currently we are pushing liveboot forward as of use in PXE boot. Liveboot brings some cool overlayfs features with it.

Try to test and document the following scenarios:

  • overlay the whole / and persist to ram or a separate drive

and

  • overlay /etc/systemd/system and /etc/kubernetes and persist to ram
  • overlay /opt and /var/lib/docker and persist to disk

Replace version of coreutils with newer one

Debian coreutils has several known vulnerabilities for which a new version is available (just not in Debian). Provide a package in the SAP Debian repository with a newer version that fixes the vulnerabilities.

introduce an "unsupported" flag

There should be constant monitoring of the state of Garden Linux and its primary use case: running Kubernetes with Gardener.
So if there are states that are problematic (e.g. a loaded nfsd module), we need to flag the whole Garden Linux state as "unsupported", so people know they have to change the way they use Garden Linux.

This is needed to raise the general security level of the whole setup.

What do we need to monitor (please extend):

  • unwanted kernel modules
  • outdated image use

switch to dracut

dracut is a very minimal yet powerful initramfs generator. It replaces the bigger initramfs we use at the moment. There are multiple advantages:

  • smaller size of initramfs -> faster boot
  • better control over contents of initramfs
  • liveboot supports TLS and hostnames out of the box and is not limited to IP addresses as the busybox setup is by default
  • there is out of the box support for dracut by ignition

Please make sure on implementing this:

  • liveboot works as in fetch a squashfs disk from url with dns, tls and own cert
  • make sure growpart works, or directly go for #102
  • check for the support of overlayfs in the new setup as in #104

Integration Tests

  • tests that run in a booted image (all cloud providers provided)
  • test for growroot
  • networking
  • metadata connection
  • cloud provider tools (awscli, gcloud, waagent, ali) operational testing
  • ping test to 8.8.8.8, heise.de (in ipv4 ipv6)
  • traceroute test protocol
  • network time set systemctl status systemd-timesyncd

Allow non-root users to use dmesg

What would you like to be added:

Remove the restriction on dmesg that makes it only usable for root.

Why is this needed:

  1. Allowing non-root users to use dmesg makes it easier to identify various kernel-level errors, the most common of which (in Kubernetes containers) are situations in which the kernel OOM-killer causes processes to be killed.
  2. Allows for easier diagnostics in case of driver and hardware issues.
  3. Provides compatibility with CoreOS, which didn't restrict dmesg from non-root users.

Without this PR, identifying kernel-related issues would require pushing a privileged container to Kubernetes to run dmesg. If the issue is related to stability then it might not be possible to do after the fact, while with the PR any container that is still running on the host could be used to diagnose the issue.
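The restriction is controlled by a sysctl, so the change could be sketched as a drop-in like this (the file name is illustrative):

```
# /etc/sysctl.d/10-dmesg.conf
# 0 = unprivileged users may read the kernel ring buffer via dmesg(1)
kernel.dmesg_restrict = 0
```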

SAP Owned Debian Repo

  • standard debian mirror
  • our files (e.g. new packages for libdb - fewer builds) on top
  • recreated debian index files with our files integrated
  • in fact: standard mirror blended with our files :-D
  • include full debian set stable, testing, unstable
  • and probably other sources (e.g. frr, docker)

branch refactoring

We need a minimum of two branches in GitHub.

  • main branch (please rename master to this)
    (only after review can be pushed here)
  • integrity branch.
    (only points to git commits of the main branch that become integrity tested. Integrity tests cost money since they create VMs on cloud providers and therefore commits to be integrity tested should be carefully selected)

Please also add a 27 branch that points to the release 27.0, and also tag the release 27.1 there.

Please protect the branches adequately.

Releases will be documented in RELEASE.md

Test for systemd core dumps

We believe that systemd core dumps have not been spotted. We need to test this. The problem mainly happens under "load", e.g. running a kubelet under Gardener.

How to spot it?

  • "segfault" in journalctl -b
  • running systemctl times out!
  • unusually long cluster creation

When to test?

  • create a garden cluster
  • test for the criteria when it is up
  • test some time later again

Integrate this into the Gardener tests.

REPO_LAYOUT.md Feedback / Discussion

I like the proposal but have some comments and suggestions:

Versioning Scheme

What are we doing with failed / broken builds? Are we only assigning build numbers to "release" builds and tagging the other ones as something like "dev", e.g. 10.2-dev, and only marking them as 10.2 upon release?

Apart from that this looks quite ok. BTW, April 1st is day 1, not 0 as you wrote further down below.

Release Channels

I don't quite understand the difference between greatest and stable (no, it is not self-explanatory). I would have used them as synonyms. How about stable and experimental?

Artifact / file names

We probably need multiple levels: we don't just have dev and prod but also ghost (with docker) and chost (with containerd).

Example

I like the CoreOS notation where they store the AMI number for the various hyperscaler regions in txt files attributed with the region name.

update the kernel to latest LTS

We run Debian testing, and a bare installation would already run a kernel like 5.6 or 5.7, but none of those kernels are LTS kernels, so the rate of ABI change is rather high. Debian stable currently sticks to 4.19, which is too old for our feature-set and security requirements.
To have some stability, the aim is an LTS kernel, and therefore we need to maintain it ourselves (at the moment in hack/packages).
Make sure we are using the current kernel.org image for the latest LTS kernel; at the moment that is 5.4.46 (the current situation is that we are pinned to the latest 5.4 available in Debian = 5.4.19).

reduce minimal image even further

Currently Garden Linux is based on
debootstrap --variant minbase
which installs basically the smallest Debian available (by selecting only packages declared essential).

Other distributions like Alpine base their minimal base image on busybox, so - in the minimal base image - we should remove the GNU utils and switch to busybox. The following essential packages also need to be rethought:

  • pam
  • tzdata
  • perl
  • ncurses-base
  • e2fsprogs

What should remain, minimally, is:

  • busybox
  • apt

Of course, one of the big advantages of Debian is the standard use of the GNU utilities; that feature remains available in the higher-level images.

Hint: you will need the apt parameter --allow-remove-essential to handle this. It probably needs to go into the default configuration so that apt does not keep trying to restore its dependencies on essential packages (APT::Get::allow-remove-essential).
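Making the flag the default could look like the following drop-in; the file name is an assumption, the configuration item is the one apt documents for --allow-remove-essential:

```text
# /etc/apt/apt.conf.d/50-minbase (file name is illustrative)
# Allow apt to remove packages marked Essential without the
# interactive "Yes, do as I say!" safeguard.
APT::Get::allow-remove-essential "true";
```

With this in place, `apt-get remove` of an essential package no longer needs the flag on every invocation.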

Transfer of GCP ssh keys does not work anymore

What happened:

No SSH login is possible.

What you expected to happen:

SSH login should be possible.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Environment:

Make it possible to use Project Quotas on GardenLinux

What would you like to be added:

Kubernetes allows defining limits on the amount of disk space used by ephemeral storage. Kubelet implements this feature in two ways:

  1. A periodic task that scans directories
  2. Using a kernel feature called Project Quotas where the kernel keeps track of the disk usage.

Full details are here: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage

In order to make the second method available, the filesystem needs the project quota feature turned on. This must be done while the filesystem is unmounted, so it cannot be done after Garden Linux has already booted.
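For ext4, a sketch of what enabling this at image-build time could look like (device, mount point, and file layout are placeholders; the feature is set with mkfs.ext4 -O quota,project at creation time, or tune2fs -O quota,project on an unmounted filesystem):

```text
# /etc/fstab sketch: mount the kubelet filesystem with project quotas
# active; "prjquota" only works if the filesystem was created (or
# retrofitted while unmounted) with the project feature enabled.
/dev/sdb1  /var/lib/kubelet  ext4  defaults,prjquota  0  2
```

Baking this into the image satisfies the "before first boot" constraint described above.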

Why is this needed:

Directory scanning is not a very accurate way to account for ephemeral filesystem usage. It is racy, and it fails to account for files that have been deleted from the filesystem but are still held open by a process (i.e. the directory entry is gone, but the inode is still allocated).

Adding this feature makes it possible for projects using Garden Linux to opt in to the second method supported by the kubelet. A project that wants this must still opt in by enabling the relevant feature gate.

Disable core dumps

What would you like to be added:
Set the following options in /etc/security/limits.conf:

  • soft core 0
  • hard core 0
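As limits.conf entries, the two options above need a domain field; a sketch using "*" to apply the limit to all users:

```text
# /etc/security/limits.conf — disable core dumps system-wide
*  soft  core  0
*  hard  core  0
```

Setting both the soft and hard limit to 0 prevents processes from raising the limit again themselves.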

Why is this needed:
In the event of a crash, the system writes a core dump to disk. These core dumps can contain sensitive information that can help an attacker prepare further attacks.

Set Search Domain in resolv.conf File

What would you like to be added:
Search domains from the cloud providers are not set in the /etc/resolv.conf file:

$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 169.254.169.254

I have seen the search domains missing on AWS and GCP, although they are actually provided by the DHCP server.

Why is this needed:
If you try to ping or curl another node in the cluster by node name, the request fails with a resolution error:

$ ping shoot--foo--bar-cpu-worker-z1-7c7b44b54c-5zfr5
ping: shoot--foo--bar-cpu-worker-z1-7c7b44b54c-5zfr5: Name or service not known

But if I manually add the search domains:

$ echo search c.<project-name>.internal google.internal >> /etc/resolv.conf

then hostname resolution works:

$ ping shoot--foo--bar-cpu-worker-z1-7c7b44b54c-5zfr5
PING shoot--foo--bar-cpu-worker-z1-7c7b44b54c-5zfr5.c.<project-name>.internal (10.222.0.2) 56(84) bytes of data.
64 bytes from shoot--foo--bar-cpu-worker-z1-7c7b44b54c-5zfr5.c.<project-name>.internal (10.222.0.2): icmp_seq=1 ttl=64 time=1.38 ms
64 bytes from shoot--foo--bar-cpu-worker-z1-7c7b44b54c-5zfr5.c.<project-name>.internal (10.222.0.2): icmp_seq=2 ttl=64 time=0.311 ms
64 bytes from shoot--foo--bar-cpu-worker-z1-7c7b44b54c-5zfr5.c.<project-name>.internal (10.222.0.2): icmp_seq=3 ttl=64 time=0.250 ms
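Since the domains are already in the DHCP lease, one way to pick them up is to tell systemd-networkd to use them; a sketch (drop-in file name and interface match pattern are assumptions):

```ini
# /etc/systemd/network/99-dhcp.network (illustrative)
[Match]
Name=en*

[Network]
DHCP=yes

[DHCPv4]
# Accept the search domains offered by the DHCP server, so
# systemd-resolved adds them to /etc/resolv.conf.
UseDomains=yes
```

With UseDomains=yes, systemd-resolved lists the lease's search domains in the generated resolv.conf instead of dropping them.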

systemd cores on GCP

We experience very slow cluster startups on GCP. The reason is that extracting Docker layers is very slow. The root cause appears to be a systemd core dump around the time the kubelet starts. This does not happen on AWS with a very similar software setup.

systemd restarts after the crash, but it appears to be in an inconsistent state; e.g. commands like the following fail:

root@shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-hg27v:/# systemctl
Failed to list units: Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms)

The systemd version is:

Apr 09 14:05:40 localhost systemd[1]: systemd 244.3-1 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)

The issue appears to be easily reproducible (two nodes fail with similar symptoms).

Next steps:

  • try with the latest systemd version 245.4-2 available for Debian (unstable)
  • debug

These are the crash logs. A core dump file has been secured but does not reveal additional information.

Apr 09 15:11:54 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz dbus-daemon[392]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.>
Apr 09 15:11:54 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd[1]: Starting Hostname Service...
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz dbus-daemon[392]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd[1]: Started kubelet daemon.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=kubelet comm="systemd" exe="/lib/systemd/systemd" hostname=? >
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2853]: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet-monitor.service → /etc/systemd/system/kubelet-monitor.s>
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd[1]: Reloading.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd[1]: /lib/systemd/system/logrotate.service:19: Unknown key name 'ProtectClock' in section 'Service', ignoring.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd[1]: /lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/doc>
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=kubelet-monitor comm="systemd" exe="/lib/systemd/systemd" hos>
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd[1]: Started Kubelet-monitor daemon.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz health-monitor[2876]: Start kubernetes health monitoring for kubelet
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz health-monitor[2876]: Wait for 2 minutes for kubelet to be functional
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd[1]: Reloading.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kernel: systemd[1]: segfault at 50 ip 0000562234075990 sp 00007ffe7fd0b060 error 4 in systemd[562234014000+b5000]
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kernel: Code: a0 48 8b 75 98 c7 45 a4 00 00 00 00 48 8b 94 c7 90 05 00 00 48 89 45 80 48 89 f0 48 39 d6 74 17 66 2e 0f 1f 84 00 00 00 00 00 <48> 8b 40 50 8>
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz audit[2880]: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 subj==unconfined pid=2880 comm="systemd" exe="/lib/systemd/systemd" sig=11 res=1
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.550184    2852 flags.go:33] FLAG: --address="0.0.0.0"
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.551265    2852 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.551413    2852 flags.go:33] FLAG: --alsolog
[...]
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.557250    2852 flags.go:33] FLAG: --version="false"
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.557284    2852 flags.go:33] FLAG: --vmodule=""
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.557292    2852 flags.go:33] FLAG: --volume-plugin-dir="/var/lib/kubelet/volumeplugins"
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.557299    2852 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.557362    2852 feature_gate.go:216] feature gates: &{map[]}
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.571891    2852 feature_gate.go:216] feature gates: &{map[]}
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:11:55.571982    2852 feature_gate.go:216] feature gates: &{map[]}
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd-coredump[2881]: Due to PID 1 having crashed coredump collection will now be turned off.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd[1]: Caught <SEGV>, dumped core as pid 2880.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd[1]: Freezing execution.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]: Failed to reload daemon: Connection reset by peer
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]: The unit files have no installation config (WantedBy=, RequiredBy=, Also=,
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]: Alias= settings in the [Install] section, and DefaultInstance= for template
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]: units). This means they are not meant to be enabled using systemctl.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]:
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]: Possible reasons for having this kind of units are:
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]: • A unit may be statically enabled by being symlinked from another unit's
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]:   .wants/ or .requires/ directory.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]: • A unit's purpose may be to act as a helper for some other unit which has
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]:   a requirement dependency on it.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]: • A unit may be started when needed via activation (socket, path, timer,
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]:   D-Bus, udev, scripted systemctl call, ...).
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]: • In case of template units, the unit is meant to be enabled with some
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2877]:   instance name specified.
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz systemd-coredump[2881]: Process 2880 (systemd) of user 0 dumped core.

                                                                                                      Stack trace of thread 2880:
                                                                                                      #0  0x00007f7a57e5ea47 kill (libc.so.6 + 0x3ba47)
                                                                                                      #1  0x00005622340c87df n/a (systemd + 0xe57df)
                                                                                                      #2  0x00007f7a57e5e7e0 n/a (libc.so.6 + 0x3b7e0)
                                                                                                      #3  0x0000562234075990 n/a (systemd + 0x92990)
                                                                                                      #4  0x0000562234075e17 n/a (systemd + 0x92e17)
                                                                                                      #5  0x000056223408cd3c n/a (systemd + 0xa9d3c)
                                                                                                      #6  0x00005622340c5f9b n/a (systemd + 0xe2f9b)
                                                                                                      #7  0x000056223401a728 n/a (systemd + 0x37728)
                                                                                                      #8  0x00007f7a57e49e0b __libc_start_main (libc.so.6 + 0x26e0b)
                                                                                                      #9  0x000056223401b59a n/a (systemd + 0x3859a)
Apr 09 15:11:55 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz download-cloud-config.sh[2883]: Failed to enable unit: Connection reset by peer
Apr 09 15:12:21 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz dbus-daemon[392]: [system] Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms)
Apr 09 15:12:21 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:12:21.962954    2852 mount_linux.go:170] Cannot run systemd-run, assuming non-systemd OS
Apr 09 15:12:21 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:12:21.963076    2852 server.go:425] Version: v1.15.11
Apr 09 15:12:21 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:12:21.963173    2852 feature_gate.go:216] feature gates: &{map[]}
Apr 09 15:12:21 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:12:21.963288    2852 feature_gate.go:216] feature gates: &{map[]}
Apr 09 15:12:21 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: W0409 15:12:21.963458    2852 plugins.go:118] WARNING: gce built-in cloud provider is now deprecated. The GCE provider is deprecated and wil>
Apr 09 15:12:21 shoot--core--gardenlinux-gcp-worker-i6pqb-z1-86bf99d88b-t6grz kubelet[2852]: I0409 15:12:21.970282    2852 gce.go:868] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:"", scopes:[>

Add Garden Linux to /etc/os-release

What would you like to be added:
Garden Linux currently does not identify itself as Garden Linux:

# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux bullseye/sid"
NAME="Debian GNU/Linux"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Why is this needed:
This is why it always appears as Debian in any monitoring.
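An illustrative /etc/os-release for Garden Linux (exact field values are to be decided; ID_LIKE=debian keeps Debian-aware tooling working):

```ini
PRETTY_NAME="Garden Linux"
NAME="Garden Linux"
ID=gardenlinux
ID_LIKE=debian
HOME_URL="https://gardenlinux.io"
SUPPORT_URL="https://github.com/gardenlinux/gardenlinux"
BUG_REPORT_URL="https://github.com/gardenlinux/gardenlinux/issues"
```

Monitoring and inventory tools that read ID/NAME would then report Garden Linux, while scripts matching ID_LIKE still treat it as Debian-based.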

Gardenlinux OS Extension

  • gardener-extension-os-gardenlinux
  • sync with Vladimir on how to make Garden Linux the base Linux for Gardener

make PXE workable

  1. make bin/start-vm boot with an iPXE file that pulls images from the internet (e.g. CoreOS; see the first snippet below)
  2. change hack/nginx to provide an ipxe folder containing the needed files
  3. make a bin/start-vm example that boots all iPXE files with a sample Ignition file provided by us (see the second snippet below)
  4. try the same with a Garden Linux image (a fully manual process is fine)
  5. implement the image creation in feature/_pxe

ipxe with coreos sample

#!ipxe
kernel http://alpha.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz initrd=coreos_production_pxe_image.cpio.gz coreos.first_boot=1 coreos.config.url=https://example.com/pxe-config.ign
initrd http://alpha.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
boot

sample ignition

{
  "ignition": { "config": {}, "timeouts": {}, "version": "2.1.0" },
  "networkd": {},
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq......."
        ]
      }
    ]
  },
  "storage": {},
  "systemd": {}
}
