rustysd's Introduction

rustysd

Rustysd is a service manager that tries to replicate systemd behaviour for a subset of the configuration possibilities. It focuses on the core functionality of a service manager and does not require being PID 1 (aka the init process).

Will this replace systemd?

TLDR: No, rustysd is not a dedicated replacement. It is an opportunity for the niches where systemd could not get its foot in the door to profit (more easily) from the ecosystem around systemd.

Very likely not. There are a lot of reasons, but most importantly: systemd works and provides features that rustysd will likely never provide.

This project might be what's needed to show that the core systemd functionality is not very hard to replicate, and that bringing the advantages of a systemd-like service manager to many other platforms is feasible without having to port all of systemd. There are some (a lot of?) platforms that Rust does not (yet) fully support, so their maintainers will understandably reject rustysd as their main service manager. But having rustysd as an example might help other efforts in more portable languages.

Rustysd also opens up usage of systemd services outside of systemd-based Linux distros, for example on Alpine Linux (commonly used in Docker containers and small VMs) and on FreeBSD.

General info

For now this project exists mainly out of interest in how far I could get and what would be needed for a somewhat working system. It is very much a proof of concept / work in progress. For the love of god, do not use this for anything that is important.

It does look somewhat promising; most of the needed features are there. A lot of tests are still missing, and more care needs to be taken so that rustysd itself never panics.

Short intro to systemd / rustysd

Systemd/rustysd operate on so-called "units". These are smallish, separate entities in the system, like a single service. These units can be handled independently but can specify relations to other units that they "need". The unit of service-abc can say "I need the unit of service-xyz to be started before I do".
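
Expressed in unit-file settings, that relation would look like this (a hypothetical minimal example using the standard Requires=/After= options):

[Unit]
Description=service-abc
Requires=service-xyz.service
After=service-xyz.service

[Service]
ExecStart=/usr/bin/service-abc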

The second thing systemd/rustysd bring to the table is socket activation. Services that specify sockets do not need to be started immediately, but rather when there is activity on their socket(s). This enables faster startup times by starting services lazily when they are needed.
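
The handover follows the convention systemd clients already implement: the manager passes the listening sockets as file descriptors 3, 4, 5, ... and describes them via the LISTEN_PID/LISTEN_FDS environment variables. A minimal sketch of the receiving side (not rustysd code, just an illustration):

use std::os::unix::io::{FromRawFd, RawFd};
use std::os::unix::net::UnixListener;

// By convention, fds passed by the service manager start at index 3.
const SD_LISTEN_FDS_START: RawFd = 3;

fn inherited_listeners() -> Vec<UnixListener> {
    // LISTEN_PID guards against acting on fds that were meant for another process.
    let pid_matches = std::env::var("LISTEN_PID")
        .map(|p| p == std::process::id().to_string())
        .unwrap_or(false);
    let num_fds: i32 = std::env::var("LISTEN_FDS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(0);
    if !pid_matches {
        return Vec::new();
    }
    (0..num_fds)
        .map(|i| unsafe { UnixListener::from_raw_fd(SD_LISTEN_FDS_START + i) })
        .collect()
}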

Additionally, systemd provides a lot more unit types besides services and sockets, which rustysd does not (and, for most of them, likely never will) support.

Scope of rustysd

What is explicitly in scope of this project

  1. Startup sorted by dependencies (parallel if possible for unrelated units)
  2. Startup synchronization via *.target units
  3. Socket activation of services

What is explicitly out of scope (for now, this project is still very young):

  1. Timers (Cron should do fine for 99% of use cases)
  2. Mounts (It is actually useful to have these as units but I don't think the gains outweigh the added complexity)
  3. Device (Same argument as for Mount)
  4. Path activation (Might get included.)
  5. Scopes (Nope. If you start processes outside of rustysd you need to manage them yourself. Maybe a second instance of rustysd? ;))
  6. Slices (this might be added as it is fairly important if you are not running inside of a container)

About slices

I don't think it is viable for a cross-platform project to support slices. In general I think it would be more sensible to put that responsibility on other tools.

I imagine something along the lines of Docker's runc, but not specialized to the container environment. Let's call the imaginary tool 'restrict'; the usage I imagine would be along the lines of:

restrict -cmd "/my/binary arg1 arg2 arg3" -mem_max=5G -io_max=10G/s

This would set up the process with the given restrictions and then exec into the given cmd. With this kind of tool there are a few benefits:

  1. Rustysd doesn't have to concern itself with how a platform does resource restriction; there can be separate tools for each platform (where possible)
  2. Clear separation of concerns. Rustysd manages service lifetimes. It does not (or only for relatively trivial stuff) manage the runtime environment for those services.
  3. The tool can be useful in other contexts as well

For linux there are some existing utilities from the docker/container/oci-spec space:

  • runc, the OCI implementation from the Docker folks
  • For people who want completely static builds, the alternative OCI implementation crun seems great
  • It SHOULD be possible to do something similar for BSD jails

Goals

Since this project is very young and wasn't started with any particular goal in mind, I am open to any and all ideas. Here are some that have been brought up that seem sensible. None of this is definitive though.

  1. Provide a PID 1 for containers/jails, so unaltered systemd-dependent services can be run in a container/jail
  2. Provide full init capabilities so this can be used for OSes like Redox OS or Debian/kFreeBSD
  3. Be platform agnostic as long as it's unix (I develop on linux but I'll try to test it on FreeBSD whenever I add new platform-specific stuff)

About platform independence

Here is a list of features rustysd currently assumes the platform to provide. The list also contains suggestions about which features could be cut and what the consequences would be (e.g. see the filedescriptor point). Everything here is written in unix terms, but it should not be too much work to write a compatibility shim if equivalents exist on the target platform. It's not too many features that must exist for a port to work in a usable way (and they are mostly basic OS functionality anyways).

  1. forking
  2. getting the current process id
  3. file descriptors that can be passed to child processes when forking
    • Maybe we don't have to have this. We could just make sockets and socket activation an optional feature for unixy platforms
    • Then forking would be optional too; just having the ability to launch new executables in a new process would suffice
  4. (Un-)Mark file descriptors for closing on exec()'ing if forking with passed fds is supported
  5. Select()'ing on filedescriptors (not just for socket activation but for listening on stdout/err of child processes)
  6. Creating a pipe/eventfd/... for interrupting the selects (also a way to activate/reset those, write(/read() for pipes for example)
  7. dup2()'ing filedescriptors to provide fds at fd indices 3,4,5,... (see the sketch after this list)
  8. Creating process-groups
  9. signals from the platform when a child exits / gets terminated in any way
  10. waitpid()'ing for the exited children
  11. sending (kill/terminating) signals to whole process groups (as long as we care about cleanup after killing, maybe the platform handles this in another smart way?)
  12. setting env variables (currently handled with libc because the rust std contains locks which currently break on forking)
  13. setting the current process as a subprocess reaper (might not be that important, other platforms might handle reparenting of orphaned processes differently than unix)
  14. changing the user id to drop privileges
  15. an implementation of getpwnam_r and getgrnam_r. These can be swapped for getpwnam/getgrnam if needed
    • They could also be ignored, restricting the values of User, Group, and SupplementaryGroups to numerical values
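
As a rough illustration of points 1, 3 and 7, here is a minimal sketch (using the libc crate directly; this is not rustysd's actual code) of forking and providing inherited sockets at fd indices 3, 4, 5, ... before exec'ing:

use std::ffi::CString;

// The CStrings are prepared before fork() because allocating afterwards
// would not be async-signal-safe.
fn spawn_with_fds(exe: &str, argv: &[&str], socket_fds: &[i32]) {
    let c_exe = CString::new(exe).unwrap();
    let c_args: Vec<CString> = argv.iter().map(|a| CString::new(*a).unwrap()).collect();
    unsafe {
        if libc::fork() == 0 {
            // Child: move the sockets to the fd indices systemd clients expect.
            // dup2() clears FD_CLOEXEC on the new descriptor, so these fds
            // survive the exec below.
            for (i, fd) in socket_fds.iter().enumerate() {
                libc::dup2(*fd, 3 + i as i32);
            }
            let mut arg_ptrs: Vec<*const libc::c_char> =
                c_args.iter().map(|a| a.as_ptr()).collect();
            arg_ptrs.push(std::ptr::null());
            libc::execv(c_exe.as_ptr(), arg_ptrs.as_ptr());
            libc::_exit(1); // only reached if execv failed
        }
        // Parent: record the child pid here and waitpid() on SIGCHLD later.
    }
}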

About platform dependent stuff

There are some parts that are platform dependent. Those are all optional and behind feature flags.

Cgroups

Rustysd can employ cgroups for better control over which processes belong to which service. Resource-limiting is still out of scope for rustysd. Cgroups are only used to make the features rustysd provides anyways more reliable.
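
As a minimal sketch of that tracking (assuming the cgroup v2 filesystem interface mounted at /sys/fs/cgroup; the rustysd subdirectory name is made up), the manager puts each forked child into a per-service cgroup and enumerates cgroup.procs when the service has to be killed:

use std::fs;
use std::io::Write;

// Hypothetical per-service cgroup path, not rustysd's actual layout.
fn cgroup_dir(service: &str) -> String {
    format!("/sys/fs/cgroup/rustysd/{}", service)
}

fn add_to_cgroup(service: &str, pid: u32) -> std::io::Result<()> {
    let dir = cgroup_dir(service);
    fs::create_dir_all(&dir)?;
    // Writing a pid to cgroup.procs moves that process into the cgroup.
    fs::OpenOptions::new()
        .write(true)
        .open(format!("{}/cgroup.procs", dir))?
        .write_all(pid.to_string().as_bytes())
}

fn pids_in_cgroup(service: &str) -> std::io::Result<Vec<u32>> {
    // cgroup.procs lists every pid in the cgroup, including processes that
    // double-forked or left their process group.
    let procs = fs::read_to_string(format!("{}/cgroup.procs", cgroup_dir(service)))?;
    Ok(procs.lines().filter_map(|l| l.trim().parse().ok()).collect())
}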

On other systems, issues might arise if a service forks off processes that move into another process group. If these are not cleanly killed by the stop/poststop commands, they will be orphaned and survive. This is (if I understand correctly) the way other service managers handle this too.

Why no jails

One possibility would be to use BSD jails, but that seems somewhat hacky, since rustysd would have to chain-load the actual service command with a jail command. Rustysd could check whether a service was started in a jail and then kill that jail, but that could lead to other problems if the jail is meant to be long-lived. In short, I see no clean way to employ jails for process management.

What works

This section should be somewhat up to date with which parts are (partly?) implemented and (partly?) tested. If you find that anything does not actually work, please file an issue!

For an in-depth comparison of systemd and rustysd, see the feature-comparison.md file. It is generated by tools/gen_feature_comparison.py (shoutout to wmanley who wrote the initial script!). It is currently somewhat pessimistic; I will work on improving the comparison for the features rustysd actually does support (see below for a list of supported features).

General features

Of rustysd itself

  • Parsing of service files (a subset of the settings are recognized)
  • Parsing of socket files (a subset of the settings are recognized)
  • Ordering of services according to the before/after relations
  • Killing services that require services that have died
  • Matching services and sockets either by name or dynamically by parsing the appropriate settings in the .service/.socket files
  • Passing filedescriptors to the daemons as systemd clients expect them (names and all that good stuff)
  • Pretty much all parts of the sd_notify API (see the client-side sketch after this list)
  • Waiting for the READY=1 notification for services of type notify
  • Waiting for services of type dbus
  • Waiting for multiple dependencies
  • Target units to synchronize the startup
  • Send SIGKILL to whole processgroup when killing a service
  • Socket activation (the non-inetd style). So your startup will be very fast and services only spin up if the socket is actually activated
  • Pruning the set of loaded units to only the needed ones to reach the target unit
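
For illustration, the client side of the sd_notify API mentioned above boils down to sending datagrams like READY=1 to the unix socket named in the NOTIFY_SOCKET environment variable. A minimal sketch of such a client (not rustysd code; abstract-namespace socket paths are ignored here):

use std::os::unix::net::UnixDatagram;

fn notify_ready() -> std::io::Result<()> {
    // The service manager sets NOTIFY_SOCKET for services of Type=notify.
    let path = std::env::var("NOTIFY_SOCKET").map_err(|_| {
        std::io::Error::new(std::io::ErrorKind::NotFound, "NOTIFY_SOCKET not set")
    })?;
    let sock = UnixDatagram::unbound()?;
    sock.send_to(b"READY=1", path)?;
    Ok(())
}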

With the control interface (see doc/ControlInterface.md for a detailed list of commands):

  • Adding new units while running
  • Restarting units
  • Stopping units
  • Shutting down rustysd

Optional build features

There are some features behind flags because they are either platform dependent or not necessarily needed for most use cases.

  • dbus_support: Activate support for services of type dbus (not needed for many services and probably a dumb idea in a container anyways)
  • linux_eventfd: Use eventfds instead of pipes to interrupt select() calls (because they only exist on linux)
  • cgroups: Optional support to use cgroups to more reliably kill processes of services on linux

Docker

Running in a docker container as PID1 works. The image that is built by the scripts in the dockerfiles directory results in a ~2MB image that contains

  • Rustysd (stripped binary built with musl to be completely static) -> ~1.6MB
  • The testservice and testserviceclient (stripped binaries built with musl to be completely static) -> ~300KB / ~280KB
  • The unit files in test_units

See for yourself

Running ./build_all.sh && cargo run --bin rustysd will build the test services and run rustysd which will start them. Currently there are two services, one that gets passed some sockets and one that uses them to send some text over those sockets.

There are some scripts to run this in a docker container. Have a look at the scripts in the dockerfiles directory.

What does not work

Just some stuff I know does not work but would be cool to have. I tried to categorize the items by how much work they seem to be, but otherwise they are in no particular order.

Requiring bigger changes or seem complicated:

  • Unit templates
  • Optional journald logging. (Maybe that's not actually something that is wanted)
    1. Positive: Better compatibility
    2. Negative: Weird dependency between rustysd and a service managed by rustysd (could be less of a pain point if rustysd itself handled logging in a journald way)
  • Socket activation in inetd style
  • The whole dbus shenanigans (besides waiting on dbus services, which is implemented)
  • Service type forking is missing
    • I would argue that this is an unnecessary type anyways. This would be better handled by using something like supervisord's pidproxy
    • This is kinda true for dbus services too, which could easily be wrapped into something similar and just behave like normal 'notify' services
  • The rest of the sd_notify API (with storing filedescriptors and such)

Requiring small changes / additions transparent to the other modules:

  • Change user to drop privileges
  • Patching unit definitions with dropin files
  • Socket options like MaxConnections=/KeepAlive=
  • Killing services with a configurable signal. Currently it's always SIGKILL after the ExecStop commands have been run
  • More socket types
    1. Netlink is missing for example
    2. Abstract namespace for unix sockets (but that's linux-specific anyways, and the rust stdlib doesn't support it)
  • Service type idle is missing (not even sure if it's a good idea to support this)
  • A systemctl equivalent to control/query rustysd (there is a small jsonrpc2 API but that might change again)
    • Disabling of units is missing
    • A better UI than pretty-printed json is missing
  • Many of the missing features in feature-comparison.md are relatively simple issues
  • Support the different allowed prefixes for executables in ExecStart (and ExecStartPre/ExecStartPost)

Unclear how much work it is:

  • Get all the meta-targets and default dependencies right
    • Individually these are probably small parts, but as a whole task it seems like a lot

What could be done better

Some places where I chose an approach along the way and there might be better/other choices:

  1. Use mio instead of nix::select to get events from the stdout/stderr/notification-sockets
    1. Pro: uses more modern/efficient APIs (epoll/kqueue)
    2. Con: Probably less portable to more exotic unices (like redox)

How does it work

Rustysd has two binaries: the main service manager 'rustysd' and the control client 'rsdctl'.

The client is just a dumb utility to pack cli arguments into jsonrpc2 format and send them to rustysd. This can be used to restart units, add new units, or show the status of units.
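
For example (taken verbatim from the issue reports further down), the invocation

rsdctl /notifications/control.socket restart docker.service

sends {"jsonrpc":"2.0","method":"restart","params":"docker.service"} to the control socket.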

Generally rustysd has two phases:

  1. Bring up all units with as much concurrency as possible, and as lazily (with socket activation) as possible
  2. Wait for events from the services or the control sockets, and react to these
    1. Data from either stdout/err or the notification sockets
    2. Signals from the kernel

Community

There has been a request for a place to talk about this project, so I opened a gitter community for it. Feel free to come over and have a chat.

rustysd's People

Contributors

adnelson, atul9, dependabot[bot], edoriandark, jabedude, killingspark, leshow, mggmuggins, wmanley


rustysd's Issues

Add description/topics

The repository lacks a description and topics, which would allow users to discover this repo.
Can you please add them?

Stop commands might get ignored if the unit is currently restarting

The restart routine currently naively uses deactivate + activate to restart a unit. That leaves a small time window where the unit has the status 'Stopped(StoppedFinal)'. If a stop command that tries to deactivate that unit looks at it in this time frame, it will ignore the unit, even though it will be started again immediately after.

This is a basic example of what could happen (T1 is reactivating the unit, T2 is trying to just deactivate it):

status change              | T1         | T2
running -> stoppedfinal    | deactivate |
none, is already stopped   |            | deactivate
stoppedfinal -> running    | activate   |

In general the unit should not be restarted if this kind of overlap happens; it is very likely that the intended effect was to deactivate the unit. This situation is most likely the result of a service exiting unexpectedly and restarting while a control command was issued to stop it (or it was stopped because another unit stopped and was not restarted). In any case I can think of, it is either incorrect or unexpected for the unit to still be running.

There are a few ways to fix this:

  1. Move the status-changing logic out of Unit::(de)activate and make the status change a wrapper around Unit::(de)activate, something like Unit::(de/re)activate_with_status. Here an additional top-level Restarting status would be sufficient to communicate to other threads what this unit is currently doing.
  2. Make deactivate take an additional argument that determines the state of the unit after it has finished deactivating. This would mean there is an additional status like Stopped(Restarting).

I currently prefer the first solution but I will think more about this.

command source unclear

With accept_control_connections, commands are accepted.
But I do not understand where they are coming from.
Can you help?

Testing rustysd inside of a docker container?

Hi @KillingSpark ,
have you built a minimal docker image to run rustysd with a simple example unit as docker container PID 1 (#9)?

I would like to test it as a docker container and maybe build a small system based on rustysd. I use the build tool linuxkit to build an OS image based on docker containers. So a minimal image with rustysd as entrypoint would be great as a starting point.

Regards

socket instead of tcp port and default socket / port?

Hi @KillingSpark ,
you removed the rustysd socket?
For local usage a socket would be fine, or is it overhead for rustysd to implement a socket file?

And shouldn't rsdctl / rustysd have a working default socket (/run/rustysd/rustysd.sock) / port (127.0.0.1:4444)? A default port shouldn't collide with ports used by well-known services like proxies (3128, 8080), httpd (80, 443), mysql (3306), ...

The socket / tcp port should then be optional.

rustysd options like custom unitfiles directory?

I moved rustysd to /usr/sbin/ and would like to move the unit files to /etc/unitfiles, for example.
Is there documentation about possibly existing options? I haven't found any in the rustysd.rs source file.

ExecStart with commands like /bin/sh -c '<CMD>'

Is it possible to get this command running as a rustysd service?
It's a complex shell command which should(?) work with systemd.

ExecStart=/bin/sh -c '/bin/grep -h MODALIAS /sys/bus/*/devices/*/uevent | /usr/bin/cut -d= -f2 | /usr/bin/xargs /sbin/modprobe -abq 2> /dev/null'

It fails with an error because the ' and " characters in the command are ignored (not interpreted).

[modules.service][STDERR] EXECV: "/bin/sh" ["sh", "-c", "\'/bin/grep", "-h", "MODALIAS", "/sys/bus/*/devices/*/uevent", "|", "/usr/bin/cut", "-d=", "-f2", "|", "/usr/bin/xargs", "/sbin/modprobe", "-abq", "2>", "/dev/null\'"]
[modules.service][STDERR] -h: line 1: syntax error: unterminated quoted string
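
The underlying problem is that the ExecStart line is split on whitespace without honoring quotes. A rough sketch of quote-aware splitting (systemd's actual rules, with escapes and specifiers, are more involved than this):

fn split_command(line: &str) -> Vec<String> {
    let mut args = Vec::new();
    let mut current = String::new();
    let mut quote: Option<char> = None;
    for c in line.chars() {
        match quote {
            Some(q) if c == q => quote = None, // closing quote
            Some(_) => current.push(c),        // inside quotes, keep everything
            None if c == '\'' || c == '"' => quote = Some(c),
            None if c.is_whitespace() => {
                if !current.is_empty() {
                    args.push(std::mem::take(&mut current));
                }
            }
            None => current.push(c),
        }
    }
    if !current.is_empty() {
        args.push(current);
    }
    args
}

With this, the example above would be split into ["/bin/sh", "-c", "/bin/grep -h MODALIAS ... 2> /dev/null"], which is what /bin/sh expects.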

".service" should be optional to start a service?

I added a rust package manager to my fun linux OS and have problems reloading (adding) and starting a new unit file.

The new unit file is added successfully:

Write cmd: {"jsonrpc":"2.0","method":"reload"}
[2020-03-08][20:33:32][rustysd::control::control][TRACE] Execute command: LoadAllNew
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 12: "./unitfiles/default.target"
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 13: "./unitfiles/docker.service"
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 14: "./unitfiles/init.target"
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 15: "./unitfiles/mdevd.service"
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 16: "./unitfiles/network.target"
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 17: "./unitfiles/onboot.target"
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 18: "./unitfiles/rngd.service"
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 19: "./unitfiles/services.target"
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 20: "./unitfiles/sshd.service"
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] ID 21: "./unitfiles/udhcpc.service"
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 12 references ids: [19]
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 19 references ids: [12, 13, 15, 16, 18, 20, 21]
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 13 references ids: [19]
Wait for response
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 15 references ids: [19]
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 16 references ids: [17, 19]
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 17 references ids: [14, 16]
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 14 references ids: [17]
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 18 references ids: [19]
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 20 references ids: [19]
[2020-03-08][20:33:32][rustysd::units::dependency_resolving][TRACE] Id 21 references ids: [19]
[2020-03-08][20:33:32][rustysd::units::loading][TRACE] Finished pruning units
[2020-03-08][20:33:32][rustysd::units::insert_new][TRACE] Check all names exist
[2020-03-08][20:33:32][rustysd::units::insert_new][TRACE] Add new unit: docker.service
Got response
{
  "jsonrpc": "2.0",
  "result": [
    {
      "Added": [
        "docker.service"
      ],
      "Ignored": [
        "sshd.service",
        "udhcpc.service",
        "rngd.service",
        "network.target",
        "onboot.target",
        "mdevd.service",
        "services.target",
        "default.target",
        "init.target"
      ]
    }
  ]
}

(Re-)start without .service fails

/ # rsdctl /notifications/control.socket restart docker
Write cmd: {"jsonrpc":"2.0","method":"restart","params":"docker"}
[2020-03-08][20:38:22][rustysd::control::control][TRACE] Execute command: Restart("docker")
[2020-03-08][20:38:22][rustysd::control::control][TRACE] Find unit for name: docker
Wait for response
Got response
{
  "error": {
    "code": -32000,
    "message": "No unit found with name: docker"
  },
  "jsonrpc": "2.0"
}

Works fine with ".service" added

/ # rsdctl /notifications/control.socket restart docker.service
Write cmd: {"jsonrpc":"2.0","method":"restart","params":"docker.service"}
[2020-03-08][20:36:30][rustysd::control::control][TRACE] Execute command: Restart("docker.service")
[2020-03-08][20:36:30][rustysd::control::control][TRACE] Find unit for name: docker.service
[2020-03-08][20:36:30][rustysd::units::activate][TRACE] Activate id: 13
[2020-03-08][20:36:30][rustysd::units::activate][TRACE] Lock unit: 13
[2020-03-08][20:36:30][rustysd::units::activate][TRACE] Locked unit: 13
[2020-03-08][20:36:30][rustysd::units::activate][TRACE] Lock status for: docker.service
[2020-03-08][20:36:30][rustysd::units::activate][TRACE] Locked status for: docker.service
[2020-03-08][20:36:30][rustysd::services::services][TRACE] Start service docker.service
[2020-03-08][20:36:30][rustysd::services::fork_parent][TRACE] [FORK_PARENT] Service: docker.service forked with pid: 1215
[2020-03-08][20:36:30][rustysd::services::fork_parent][TRACE] [FORK_PARENT] service docker.service doesnt notify
Wait for response
Got response
{
  "jsonrpc": "2.0",
  "result": []
}

[Offtopic] Busybox like applet structure / routing

Hi @KillingSpark,
I know the question is off topic for rustysd, but I'm trying to get started with rust and haven't found examples / documentation on how to build a busybox-like binary...

I found out how to build a rust binary with optional features

#[cfg(feature = "shutdown")]
mod shutdown;

#[cfg(feature = "monitor")]
mod monitor;

I haven't understood the directory structure yet, but I got it to work: executing the function poweroff from the shutdown mod inside of the main.rs file.

But...
I don't like to hardcode subcommand calls to "applet functions" (rust modules):

poweroff => mod shutdown, execute function reboot
status => mod monitor, execute function status

Is there a way to dynamically map a subcommand to a module (or sub-crate or whatever else would make sense here)?

I searched for such an application structure without success. Coreutils (https://github.com/uutils/coreutils) has such a structure of modules, but how is the dispatching / routing of subcommands to applets / modules done?

Maybe you know an easy example / link which could help me understand / build such an app base... or just close this issue because it's off topic.

Regards
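
(A minimal way to do that dispatch, as a hypothetical sketch rather than anything from rustysd: keep a table from subcommand name to function pointer and look the name up at startup. A real busybox-style binary would additionally inspect argv[0] so it can be invoked via symlinks.)

use std::collections::HashMap;

fn poweroff() { println!("poweroff stub"); }
fn status() { println!("status stub"); }

fn main() {
    // Map subcommand names to applet entry points instead of hardcoding a match.
    let mut applets: HashMap<&str, fn()> = HashMap::new();
    applets.insert("poweroff", poweroff);
    applets.insert("status", status);

    let sub = std::env::args().nth(1).unwrap_or_default();
    match applets.get(sub.as_str()) {
        Some(applet) => applet(),
        None => eprintln!("unknown applet: {}", sub),
    }
}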

Logging

Rustysd needs persistent logging. This needs some questions answered:

  1. Should rustysd's logs go somewhere other than the stdout/stderr output of services?
  2. Should each service have its own log, or should it all be in one file?
  3. Can/Should rustysd just reuse systemd-logind?

Update: I meant systemd-journald of course!

Missing control commands

This is a list of control commands that I think should be supported in the future. Some of these are straightforward and only miss code in the control interface parts. Others need more code / changes to the code that manipulates the unit set.

Start/stop differences

Starting and stopping of units currently behaves differently with respect to how unsatisfied dependencies are handled.

  1. Starting units: Currently the user has to start units one by one themselves
  2. Stopping units: Currently stopping a unit will always stop all units that depend on it.

Each of these should get an extra command variant to make the semantics clearer:

  1. stop
  2. stop-all
  3. start
  4. start-all

Start newly loaded units

After reloading units, there should be a way to start all units that have never been started:

  1. start-all-new

General queries about possible consequences of actions

Additionally, there should be more queries available:

  1. To start unit A, which other units need to be started
  2. To stop unit A, which other units need to be stopped
  3. To remove unit A, which other units need to be removed
  4. To remove unit A, which other units need to be stopped

Start containers runc / crun

Hi, I have a problem with unit files that start a container. I think it is because the runc / crun process exits after the container process is started... so rustysd thinks the service is dead?

[2020-02-08][12:12:57][rustysd::units::loading][TRACE] "./unitfiles/default.target", 1
[2020-02-08][12:12:57][rustysd::units::loading][TRACE] "./unitfiles/docker.service", 2
[2020-02-08][12:12:57][rustysd::units::unit_parsing::service_unit][TRACE] UID: Uid(0)
[2020-02-08][12:12:57][rustysd::units::loading][TRACE] "./unitfiles/init.target", 3
[2020-02-08][12:12:57][rustysd::units::loading][TRACE] "./unitfiles/mdevd.service", 4
[2020-02-08][12:12:57][rustysd::units::unit_parsing::service_unit][TRACE] UID: Uid(0)
[2020-02-08][12:12:57][rustysd::units::loading][TRACE] "./unitfiles/network.target", 5
[2020-02-08][12:12:57][rustysd::units::loading][TRACE] "./unitfiles/onboot.target", 6
[2020-02-08][12:12:57][rustysd::units::loading][TRACE] "./unitfiles/rngd.service", 7
[2020-02-08][12:12:57][rustysd::units::unit_parsing::service_unit][TRACE] UID: Uid(0)
[2020-02-08][12:12:57][rustysd::units::loading][TRACE] "./unitfiles/services.target", 8
[2020-02-08][12:12:57][rustysd][TRACE] Finished loading units
[2020-02-08][12:12:57][rustysd::units::dependency_resolving][TRACE] Id 1 references ids: [8]
[2020-02-08][12:12:57][rustysd::units::dependency_resolving][TRACE] Id 8 references ids: [1, 2, 4, 5, 7]
[2020-02-08][12:12:57][rustysd::units::dependency_resolving][TRACE] Id 2 references ids: [8]
[2020-02-08][12:12:57][rustysd::units::dependency_resolving][TRACE] Id 4 references ids: [8]
[2020-02-08][12:12:57][rustysd::units::dependency_resolving][TRACE] Id 5 references ids: [6, 8]
[2020-02-08][12:12:57][rustysd::units::dependency_resolving][TRACE] Id 6 references ids: [3, 5]
[2020-02-08][12:12:57][rustysd::units::dependency_resolving][TRACE] Id 3 references ids: [6]
[2020-02-08][12:12:57][rustysd::units::dependency_resolving][TRACE] Id 7 references ids: [8]
[2020-02-08][12:12:57][rustysd][TRACE] Finished pruning units
[2020-02-08][12:12:57][rustysd][TRACE] Unit dependencies passed sanity checks
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Root unit: mdevd.service
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Root unit: rngd.service
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Root unit: init.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Root unit: docker.service
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 4
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 4
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 4
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock status for: mdevd.service
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked status for: mdevd.service
[2020-02-08][12:12:57][rustysd::services::services][TRACE] Start service mdevd.service
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::services::fork_parent][TRACE] [FORK_PARENT] Service: mdevd.service forked with pid: 730
[2020-02-08][12:12:57][rustysd::services::fork_parent][TRACE] [FORK_PARENT] service mdevd.service doesnt notify
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 7
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 7
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 7
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock status for: rngd.service
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked status for: rngd.service
[2020-02-08][12:12:57][rustysd::services::services][TRACE] Start service rngd.service
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 3
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 3
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 3
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock status for: init.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked status for: init.target
[2020-02-08][12:12:57][rustysd::units::units][TRACE] Reached target init.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 6
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 6
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 6
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock status for: onboot.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked status for: onboot.target
[2020-02-08][12:12:57][rustysd::units::units][TRACE] Reached target onboot.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 5
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 5
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 5
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock status for: network.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked status for: network.target
[2020-02-08][12:12:57][rustysd::units::units][TRACE] Reached target network.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 8[    9.707290] random: rustysd: uninitialized urandom read (16 bytes read)
[    9.709125] random: rustysd: uninitialized urandom read (16 bytes read)

[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Unit: services.target ignores activation. Not all dependencies have been started (still waiting for: [2, 7])
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 2
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 2
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 2
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock status for: docker.service
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked status for: docker.service
[2020-02-08][12:12:57][rustysd::services::services][TRACE] Start service docker.service
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Unit: services.target ignores activation. Not all dependencies have been started (still waiting for: [2, 7])
[2020-02-08][12:12:57][rustysd::socket_activation][TRACE] Interrupted socketactivation select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::socket_activation][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::services::fork_parent][TRACE] [FORK_PARENT] Service: rngd.service forked with pid: 731
[2020-02-08][12:12:57][rustysd::services::fork_parent][TRACE] [FORK_PARENT] service rngd.service doesnt notify
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Unit: services.target ignores activation. Not all dependencies have been started (still waiting for: [2])
[2020-02-08][12:12:57][rustysd::socket_activation][TRACE] Interrupted socketactivation select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::socket_activation][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] notify eventfd
[2020-02-08][12:12:57][rustysd::services::fork_parent][TRACE] [FORK_PARENT] Service: docker.service forked with pid: 732
[2020-02-08][12:12:57][rustysd::services::fork_parent][TRACE] [FORK_PARENT] service docker.service doesnt notify
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 8
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock status for: services.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked status for: services.target
[2020-02-08][12:12:57][rustysd::units::units][TRACE] Reached target services.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Activate id: 1
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock unit: 1
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked unit: 1
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Lock status for: default.target
[2020-02-08][12:12:57][rustysd::units::activate][TRACE] Locked status for: default.target
[2020-02-08][12:12:57][rustysd::units::units][TRACE] Reached target default.target
[2020-02-08][12:12:57][rustysd::socket_activation][TRACE] Interrupted socketactivation select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::socket_activation][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Interrupted notification select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Interrupted notification select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Interrupted notification select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Interrupted stderr select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Reset eventfd value
[rngd.service][STDERR] EXECV: "/usr/bin/crun" ["crun", "run", "--no-pivot", "--bundle", "/containers/services/rngd/", "rngd"]
[mdevd.service][STDERR] EXECV: "/usr/bin/crun" ["crun", "run", "--no-pivot", "--bundle", "/containers/services/mdevd/", "mdevd"]
[docker.service][STDERR] EXECV: "/usr/bin/crun" ["crun", "run", "--no-pivot", "--bundle", "/containers/services/docker/", "docker"]
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Interrupted stderr select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Interrupted stderr select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Interrupted stdout select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Interrupted stdout select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Reset eventfd value
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Interrupted stdout select because the eventfd fired
[2020-02-08][12:12:57][rustysd::platform::eventfd::pipe_eventfd][TRACE] reset pipe eventfd
[2020-02-08][12:12:57][rustysd::notification_handler][TRACE] Reset eventfd value
[rngd.service][STDERR] bind socket to `/run/crun/rngd/notify`: Address already in use
[2020-02-08][12:12:57][rustysd::signal_handler][TRACE] No more state changes to poll
[docker.service][STDERR] bind socket to `/run/crun/docker/notify`: Address already in use
[2020-02-08][12:12:57][rustysd::signal_handler][TRACE] No more state changes to poll
[mdevd.service][STDERR] bind socket to `/run/crun/mdevd/notify`: Address already in use
[2020-02-08][12:12:57][rustysd::signal_handler][TRACE] No more state changes to poll
[2020-02-08][12:12:57][rustysd::signal_handler][TRACE] No more state changes to poll
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Exit handler with pid: 733
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] All processes spawned by rustysd have a pid entry. This did not: 733. Probably a rerooted orphan that got killed.
[rngd.service][STDERR] sync socket closed
[2020-02-08][12:12:57][rustysd::signal_handler][TRACE] No more state changes to poll
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Exit handler with pid: 731
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Check if we want to restart the unit
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Service with id: 7, name: rngd.service pid: 731 exited with: Exit(1)
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Recursively killing all services requiring service rngd.service
[2020-02-08][12:12:57][rustysd::units::units][TRACE] Deactivate unit: rngd.service
[2020-02-08][12:12:57][rustysd::services::services][ERROR] Error killing process group for service rngd.service: ESRCH: No such process
[2020-02-08][12:12:57][rustysd::services::services][TRACE] Success killing process os specificly for service rngd.service
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Exit handler with pid: 734
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] All processes spawned by rustysd have a pid entry. This did not: 734. Probably a rerooted orphan that got killed.
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Exit handler with pid: 732
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Check if we want to restart the unit
[mdevd.service][STDERR] sync socket closed
[docker.service][STDERR] sync socket closed
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Service with id: 2, name: docker.service pid: 732 exited with: Exit(1)
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Recursively killing all services requiring service docker.service
[2020-02-08][12:12:57][rustysd::units::units][TRACE] Deactivate unit: docker.service
[2020-02-08][12:12:57][rustysd::services::services][TRACE] Success killing process group for service docker.service
[2020-02-08][12:12:57][rustysd::services::services][TRACE] Success killing process os specificly for service docker.service
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Exit handler with pid: 730
[2020-02-08][12:12:57][rustysd::signal_handler][TRACE] No more state changes to poll
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Check if we want to restart the unit
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Service with id: 4, name: mdevd.service pid: 730 exited with: Exit(1)
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Recursively killing all services requiring service mdevd.service
[2020-02-08][12:12:57][rustysd::units::units][TRACE] Deactivate unit: mdevd.service
[2020-02-08][12:12:57][rustysd::services::services][ERROR] Error killing process group for service mdevd.service: ESRCH: No such process
[2020-02-08][12:12:57][rustysd::services::services][TRACE] Success killing process os specificly for service mdevd.service
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Exit handler with pid: 735
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] All processes spawned by rustysd have a pid entry. This did not: 735. Probably a rerooted orphan that got killed.
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Exit handler with pid: 736
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] All processes spawned by rustysd have a pid entry. This did not: 736. Probably a rerooted orphan that got killed.
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Exit handler with pid: 742
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] All processes spawned by rustysd have a pid entry. This did not: 742. Probably a rerooted orphan that got killed.
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] Exit handler with pid: 739
[2020-02-08][12:12:57][rustysd::services::service_exit_handler][TRACE] All processes spawned by rustysd have a pid entry. This did not: 739. Probably a rerooted orphan that got killed.

Example unitfile:

[Unit]
Description=Start services rngd

[Service]
ExecStart=/usr/bin/crun run --no-pivot --bundle /containers/services/rngd/ rngd
#ExecStop=/usr/bin/crun kill rngd
#ExecStopPost=/usr/bin/crun delete rngd; /bin/rm -rf /run/crun/docker

[Install]
WantedBy=services.target

services.target

[Unit]
Description= Startup system services

[Install]
WantedBy=default.target

Reloading of units

Rustysd needs a control interface command that triggers a complete reload of all units, which integrates changed / new units into the running system.

Adding and activating new units should be done similarly to the initial startup, by walking the dependency graph but ignoring unchanged units. Changed units should be killed and restarted with the new configuration.

Currently there is only support for adding new units one by one, which is fine for manually enabling services but not for automatically adding a bunch of services with dependencies between them.

Removing units

There should be a call to the control interface like this: {"method": "disable", "params": ["test.service", "test.socket"]}. This should also remove all units that reference these units.

After that, a simple {"method": "reload"} should be enough to reload the (possibly changed) unit files. Then the units need to be restarted. For a first iteration manual reactivation should be fine; a command like {"method": "activate-all-new"} would be nice to have though.

Redesign/Rewrite to move from the explorative code to a well designed system

Currently rustysd is still pretty spaghetti, since I just started writing code to see what components are needed and wanted to play around with this project. Now that pretty much all concepts work(tm), the existing bugs are likely caused by the bad (read: missing) design.

I will open a new branch that will contain a rewrite, and update this issue accordingly. Before I write any code in that branch, I will need to write a design concept.

Things that need to be particularly looked out for while designing:

  1. Separate the 'static' info about units like names/config from the 'runtime' info like status, pid, open fds, ... (see the sketch after this list)
  2. Locking. Mutexes are something I first started using when I started rustysd, so I used them all over the place and thereby enabled deadlocks. This should be possible to work around by defining rules about the order in which things have to be locked.
  3. Updating the set of units needs to happen 'atomically', so we need to be able to lock the whole runtime info
  4. Make finding units and their inter-dependencies easier. It is currently very annoying and verbose to find all units that need each other either by name or by implicit dependency. This probably means rustysd should keep track of the name dependencies when adding/removing units.
  5. From the beginning, keep in mind that the set of units is neither static nor guaranteed to keep containing units that existed once. This is one of the biggest issues in the current codebase.
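
As a sketch of point 1 (type names made up for illustration), the split could look like this:

// Immutable per-unit info, parsed once from the unit file.
struct UnitConfig {
    name: String,
    exec_start: Vec<String>,
    after: Vec<String>,
    requires: Vec<String>,
}

// Mutable runtime info, locked and mutated independently of the config.
struct UnitRuntime {
    status: UnitStatus,
    pid: Option<u32>,
    open_fds: Vec<i32>,
}

enum UnitStatus {
    NeverRan,
    Starting,
    Started,
    Stopped,
}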

Things that I will probably keep conceptually:

  1. The RuntimeInfo struct worked pretty well
  2. The FdStore is nice
  3. The whole config-parsing stuff is OK. It needs to output different types though, since the organization of the info into the different sections is less than ideal

Shutdown should honor dependencies

Shutdown currently kills all services in no particular order. It should use deactivate_recursively() starting at the units that do not depend on anything.

The exit handler should not do reactivation if the status is 'Starting' (or any other operation is currently running)

The exit handler should not do reactivation if the status is 'Starting'. Instead, this should be handled by the thread that is currently trying to start the service. The exit event should be communicated via the PidTable in the RuntimeInfo.

Currently this leads to misleading timeout logs for services of type notify (and dbus) if the executable exits immediately.

When fixing this, special care should be taken that socket activation still works properly, as it resets the status from Started(WaitingOnSocket) to Starting. In that case the exit handler should still apply the correct restarting policy.

So the best plan is probably:

  1. Split Status into two fields: current_status, Option
  2. Let the exit handler only apply reactivation to units with a status of Started (either waiting on a socket or running properly).
    • If the service was previously stopped or never ran, the start was a one-time event and should not cause any restarts; the handler should just report the appropriate error.
    • It should only mark the entry in the PidTable as Exited and return.
  3. The service start routine that waits for notifications / dbus name-grabs needs to regularly check whether the pid it started is marked as exited in the PidTable, and fail with an appropriate error if so.

podman systemd service compatibility and Type=forking

Hi @KillingSpark
current rustysd won't be compatible with podman-generated systemd services because of Type=forking and maybe other options like PIDFile= or KillMode=. Maybe it would be ok to just ignore unsupported options...

Examples:
https://www.mankier.com/1/podman-generate-systemd

Maybe it's not needed to support the podman-generated systemd services, because they aren't really a good solution at the moment (you manually create the container and generate a start / stop service...
so if the container is removed, the service won't work or regenerate, I think).
But maybe the run / create command could be integrated into podman generate systemd in the future...

Why isn't Type=forking supported? No need to implement it, or is it too complex?

macos support (osx)

My dev environment is macos, so expanding to add macos support seems like a solid starting point. I tinkered a bit today and didn't get it resolved yet, but it may be an easy transition. Will post here with results/findings.

Also, it might be good to decide exactly which platforms are being supported in the current version. We can include these in CI tests and executable builds, and strictly enforce them with the #[cfg]s. I'm not sure if targeting all of linux would work; there might be subtleties in the support of the nix crate, or variations in some of the supported tooling. So maybe we try:

  • ubuntu
  • red hat
  • debian
  • freebsd
  • openbsd
  • netbsd
  • dragonfly
  • macos
  • windows server 2019
  • windows 10

I would probably save windows support for last, mainly because I haven't used it in a long time, and I'm not sure if there's a way to spin up containers with Windows without paying a license fee - will check that later.

Restarting of services and socket activation

The socket activation seems to get out of whack when units are restarted over the control interface. It works fine if the service dies on its own. It seems that the notification for the socket activation is not done properly.

Weaken dependencies if only 'wanted' not 'required'

Units currently wait for all units they have an 'After' relation to. These have to be in the state 'Started' for the unit to start itself. This is not correct: if the dependency is only 'Wanted', not 'Required', the dependency is allowed to be in any state; only 'NeverRan' is disallowed in that case.

create an actual UI for rsdctl

Rsdctl just pretty-prints the JSON returned from rustysd. To make this an actually useful tool, it needs to get better at presenting the retrieved info/results/errors.

Refactor the way service executables are started

Currently there are two ways processes for services are started.

  1. Normal std::process::Command for helpers
  2. Custom stuff for the main executable

I am certain that even with the Command extensions there is stuff systemd does that needs more specialized handling. Especially setting env vars is not async-signal-safe, which matters because it has to be called after fork but before exec. It is probably fine on most platforms, since rustysd does not mess with env vars anywhere else before forking, but I like to be pedantic about this kind of stuff.

The solution here is to chain-load another executable that does all the setup that does not necessarily have to happen in rustysd, and then in turn execs the actual service/helper process.

rustysd (just fd setup?) -> rsdexec (most setup) -> service

This reduces the complexity of the codepath in the main rustysd binary and provides the setup logic as a (hopefully) useful tool to others. It would increase compatibility too: systemd spawns helpers with the same preparations as it does the main executable.

See branch redesing_fork_exec for progress on this.
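
A minimal sketch of the chain-loading idea (assuming the rsdexec name from the diagram above; this is not the code in that branch): rustysd keeps only the async-signal-safe fd setup in pre_exec and leaves the rest to the helper, which then execs the real command line.

use std::os::unix::process::CommandExt;
use std::process::{Child, Command};

fn start_service(exec_start: &[&str]) -> std::io::Result<Child> {
    // rsdexec receives the real command line, performs the non-trivial setup
    // (env vars, uid/gid, tty, ...) and then execs into it.
    let mut cmd = Command::new("rsdexec"); // hypothetical helper binary
    cmd.args(exec_start);
    unsafe {
        cmd.pre_exec(|| {
            // Only async-signal-safe calls belong here, e.g. dup2()'ing the
            // passed sockets onto fds 3, 4, 5, ...
            Ok(())
        });
    }
    cmd.spawn()
}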

Support interactive tty for example to spawn a foreground shell?

Hi, it looks like ExecStart'ing a shell via /bin/sh isn't supported yet? I think some currently unsupported options would be needed to do that? Some of the following options?

#StandardInput=tty
#StandardOutput=tty
#TTYPath=/dev/tty8
#TTYReset=yes
#TTYVHangup=yes

With issue #15 I'm trying to start a minimal rustysd environment with an interactive shell at the end of the boot process.

Is that possible with rustysd?

Container based OS: busybox + rustysd + crun + gpm build with linuxkit

Build initrd+kernel with linuxkit.
Init is based on busybox init (prepares the host, generates unit files without dependencies for now...) and rustysd (starts the linuxkit services).

All dockerfiles and the linuxkit yml are included here:
https://github.com/pwFoo/DenglerOS

/ # crun list
NAME   PID       STATUS   BUNDLE PATH                            
rngd   693       running  /containers/services/rngd              
docker 694       running  /containers/services/docker            
udhcpc 695       running  /containers/services/udhcpc            
mdevd  696       running  /containers/services/mdevd             
/ # rsdctl /notifications/control.socket status
Write cmd: {"jsonrpc":"2.0","method":"status"}
[2020-02-22][16:53:17][rustysd::control::control][TRACE] Execute command: Status(None)
Wait for response
Got response
{
  "jsonrpc": "2.0",
  "result": [
    {
      "Name": "init.target",
      "Status": "Started"
    },
    {
      "Name": "rngd.service",
      "Restarted": "0",
      "Sockets": [],
      "Status": "Started",
      "UpSince": "65.858849801s"
    },
    {
      "Name": "onboot.target",
      "Status": "Started"
    },
    {
      "Name": "network.target",
      "Status": "Started"
    },
    {
      "Name": "docker.service",
      "Restarted": "0",
      "Sockets": [],
      "Status": "Started",
      "UpSince": "65.783275025s"
    },
    {
      "Name": "mdevd.service",
      "Restarted": "0",
      "Sockets": [],
      "Status": "Started",
      "UpSince": "65.73633628s"
    },
    {
      "Name": "default.target",
      "Status": "Started"
    },
    {
      "Name": "udhcpc.service",
      "Restarted": "0",
      "Sockets": [],
      "Status": "Started",
      "UpSince": "65.75102832s"
    },
    {
      "Name": "services.target",
      "Status": "Started"
    }
  ]
}

@KillingSpark @cdbattags @justincormack

#13 #15

rustybox and supervising process riffol

I searched for interesting rust-based projects and found two. I don't plan to use them, but they may be interesting projects to look at for features or source...

The last commit was 11 months ago. I haven't tested it or looked into its features / source. Just linking it here to compare and maybe get some inspiration for rustysd...
https://github.com/riboseinc/riffol

Would it be possible to replace rustybox init with rustysd? Build rustysd as an applet?
https://github.com/samuela/rustybox

Maybe I should start to learn rust 😅

*BSD support

hopefully pedantic *BSD fans won't kill me now

Perhaps low priority; this is more for fun than for use in production, I guess? Depends on how serious this project gets.

My friend came up with a crazy idea: simply replace a Debian installation's Linux kernel with FreeBSD's and keep most things working (the old init scripts seem to be rotting). Having BSD support would make this idea actually doable.

Redox Support

Hi, I'm a contributor to RedoxOS and have been thinking about working on init for the past year or so as real life sorta took over. I dove into getting rustysd running as a normal application on Redox today, and have read through all your notes on the subject.

So far I've been working my way up the dependency tree to try and get to actually compiling rustysd. I talked to AdminXVII (who ported nix) and I'm just pulling his branch for the time being. I also had to [replace] time; there's a redox fork which supports 0.1 until chrono updates its dependency. Still running into issues with signal-hook, due to a couple of missing symbols in libc.

My understanding is that there is some magic that connects libc to relibc during compilation so that relibc is the backend of libc, so you can make libc calls in your program without needing to depend on relibc directly. Given that, I think it may not be too difficult to get things to compile initially, and then it's a bunch of debugging. For that, redoxer should be helpful.

Essentially I opened this issue so that we can get some dialog going and hopefully see rustysd running on Redox soon.

thoughts on moving to prime-time

@KillingSpark hey there! Saw your post on Reddit.

I'm looking for an open source, systems-type side project to work on this year. I was considering a port of PM2 (Node.js) to rust, for several reasons: pm2 has a good UX and a large user base, but javascript is slow, garbage-collected (with memory-leak risk), and insecure.

But ultimately PM2 just leverages whatever daemon manager is available on the OS. And while rewriting that layer with speed and memory safety has benefits, I wonder whether it makes sense to cut out the middle layer and make a user-friendly, systemd-type service manager.

That's when I encountered your project. I think the premise is great! But I'll be honest, my experience with a lot of the underlying principles of a modern service manager is limited. I am reading a lot of opinions that systemd has far too many features and too much code, and as I dug deeper, a lot of the features do seem irrelevant to a basic service manager. But I think I'd need to learn a lot more to fully assess an appropriate design.

That said, it would be great to have your advice on a few points:

  1. Given your current progress, are you thinking that you want to expand this into a battle-hardened production system for service management? E.g. something that works on modern hardware supported by rustc, and on some dominant linux distributions? Or are you still viewing it as a toy project? (I don't mean that in an offensive way)

  2. Based on what you learned, what % complete do you think this is in comparison with a bare-bones service manager that could drop-in for basic usage of systemd?

  3. Why no cgroups? Admittedly, I don't fully understand the implications of this choice, nor the alternatives. But you seemed to get a good amount of backlash about that feature from the community.

  4. Any other major design decisions or forks-in-the-road you faced, or see ahead?

Great name for the project - share your thoughts and ideas.

Seriously: think about the name of this init-like service. More fantasy/imagination won't hurt.

It could be infinit or rockit or something else that can be easily pronounced and easily remembered.
There is the tini project, and yes, that is a good name. Anyway, what I want to say is that the name of a project should not be nailed tightly to the language it is written in. I understand that go, pardon, rust is the safest and coolest lang out there, but this fact should not limit us in naming projects.

N.B.
This is not a demand and I'm not insisting on a rename now or ever, but I'd like to know that this idea is taken into consideration as a piece of advice.
