
systemd-rhel's People

Contributors

ahkok crrodriguez davidstrauss davidz25 dbuch falconindy filbranden grawity gregkh haraldh holtmann hreinecke jengelh kaisforza kaysievers keszybz lnykryn mbiebl mfwitten michaelolbrich michich msekletar pfl phomes poettering rfc1036 ronnychevalier teg zonque zzam


systemd-rhel's Issues

systemd stops reading and processing the D-Bus calls that runc issues for cgroup operations

systemd version the issue has been seen with

systemd 219
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN

Used distribution

CentOS 7.4

Expected behaviour you didn't see

systemd should not stop processing the D-Bus method calls that runc issues.

Unexpected behaviour you saw

systemd stopped processing D-Bus method calls.

Any D-Bus method call sent to org.freedesktop.systemd1 got no response; for example, the command below would wait forever:

dbus-send --system --dest=org.freedesktop.systemd1 --type=method_call --print-reply /org/freedesktop/systemd1 org.freedesktop.DBus.Introspectable.Introspect

There were also many systemd errors in /var/log/messages:
Jan 4 11:56:31 host-k8s-node001 systemd: Failed to propagate agent release message: Operation not supported

busctl tree reported "Failed to introspect object / of service org.freedesktop.systemd1: Connection timed out".

Steps to reproduce the problem
I can't reproduce it reliably; it seems that many concurrent runc D-Bus cgroup operations trigger it. The corresponding runc issue is opencontainers/runc#1959. A hypothetical stress sketch is given below.
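
As a purely hypothetical stress sketch (not a confirmed reproducer), a comparable load of concurrent transient-unit requests against PID 1 can be generated with systemd-run, which issues StartTransientUnit calls to org.freedesktop.systemd1 over D-Bus much as runc does; the unit names are made up for illustration:

# Hypothetical load generator: start many throwaway transient units in parallel.
for i in $(seq 1 200); do
        systemd-run --unit=dbus-stress-$i.service /bin/true &
done
wait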

systemd always invokes rmdir() on the pids cgroup

There is a patch in systemd-219 (in the sources fetched with rhpkg), 0475-core-add-support-for-the-pids-cgroup-controller.patch, which adds CGROUP_PIDS to CGroupControllerMask (along with some other changes) but does not update _CGROUP_CONTROLLER_MASK_ALL from 31 to 63. Because of this mistake, the check at line 1628 fails for pids, so cg_trim() is always invoked for the pids hierarchy.

Here is the relevant code:

1605         "cpu\0"
1606         "cpuacct\0"
1607         "blkio\0"
1608         "memory\0"
1609         "devices\0"
1610         "pids\0";
1611  
1612 int cg_create_everywhere(CGroupControllerMask supported, CGroupControllerMask mask, const char *path) {
1613         CGroupControllerMask bit = 1;
1614         const char *n;
1615         int r;
1616                 
1617         /* This one will create a cgroup in our private tree, but also
1618          * duplicate it in the trees specified in mask, and remove it
1619          * in all others */
1620  
1621         /* First create the cgroup in our own hierarchy. */
1622         r = cg_create(SYSTEMD_CGROUP_CONTROLLER, path);
1623         if (r < 0)
1624                 return r;
1625                         
1626         /* Then, do the same in the other hierarchies */
1627         NULSTR_FOREACH(n, mask_names) {
1628                 if (mask & bit)
1629                         cg_create(n, path);
1630                 else if (supported & bit)
1631                         cg_trim(n, path, true);
1632 
1633                 bit <<= 1;
1634         }
1635 
1636         return 0;
1637 }

src/shared/cgroup-util.h
32 typedef enum CGroupControllerMask {
 33         CGROUP_CPU = 1,
 34         CGROUP_CPUACCT = 2,
 35         CGROUP_BLKIO = 4,
 36         CGROUP_MEMORY = 8,
 37         CGROUP_DEVICE = 16,
 38         CGROUP_PIDS = 32,
 39         _CGROUP_CONTROLLER_MASK_ALL = 31
 40 } CGroupControllerMask;

At line 39, the value should be 63.
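
To make the off-by-one concrete: CGROUP_PIDS has the bit value 32, which is not contained in a mask of 31, so wherever the code ANDs with _CGROUP_CONTROLLER_MASK_ALL the pids bit is dropped. A quick shell arithmetic sketch of the bit math, using the constants from the header above:

echo $(( 31 & 32 ))    # prints 0  -- the pids bit is lost with the current mask (31 = 1|2|4|8|16)
echo $(( 63 & 32 ))    # prints 32 -- the pids bit is preserved with the corrected mask (63)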

Please also take a look at openshift/origin#16246 (comment).

systemd-networkd isn't enabled with 219-9

As the subject says: systemd-networkd is not enabled in the 219-9 build.

For now, I have to add the following lines to the script that generates a rootfs:

+# FIXME this was disabled in 219-9
+ln -s /usr/lib/systemd/system/systemd-networkd.service ${INSTALLROOT}/etc/systemd/system/multi-user.target.wants/systemd-networkd.service
+ln -s /usr/lib/systemd/system/systemd-networkd.socket  ${INSTALLROOT}/etc/systemd/system/sockets.target.wants/systemd-networkd.socket
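
A less fragile alternative, assuming the shipped unit files carry their usual [Install] sections, is to let systemctl create the symlinks inside the rootfs instead of hard-coding the paths:

# Offline enable inside the rootfs being generated; systemctl derives the
# multi-user.target.wants/ and sockets.target.wants/ links from the [Install] sections.
systemctl --root=${INSTALLROOT} enable systemd-networkd.service systemd-networkd.socket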

systemd got a SIGSEGV and froze in mount_load() for cgroup-cpuset.mount

Submission type

Bug report

systemd version the issue has been seen with

systemd 219

Used distribution

CentOS 7.4

In case of bug report: Expected behaviour you didn't see

systemd works fine

In case of bug report: Unexpected behaviour you saw

systemd froze

In case of bug report: Steps to reproduce the problem

Cannot reproduce

dmesg:
[2016-01-16 00:00:37][ 76.135644] systemd[1]: segfault at 8150 ip 000055bc8a50163a sp 00007ffd80213240 error 4 in systemd[55bc8a41b000+144000]
[2016-01-16 00:00:37][ 76.136312] cgroup:cpu is already mounted or /cgroup/cpu busy
[2016-01-16 00:00:37][ 76.137858] cgroup:cpuacct is already mounted or /cgroup/cpuacct busy
[2016-01-16 00:00:37][ 76.141226] cgroup:memory is already mounted or /cgroup/memory busy
[2016-01-16 00:00:37]Caught <SEGV>, dumped core as pid 27069.
[2016-01-16 00:00:37]Freezing execution.

Program terminated with signal 11, Segmentation fault.
#0 0x00007f8cab3e16e7 in kill () from /lib64/libc.so.6
(gdb) bt
#0 0x00007f8cab3e16e7 in kill () from /lib64/libc.so.6
#1 0x000055bc8a4c4643 in crash.2992 (sig=11) at src/core/main.c:168
#2 <signal handler called>
#3 base_bucket_scan.33968 (h=0x55bc8bfbcca0, idx=16, key=0x55bc8c03ba40) at src/shared/hashmap.c:1218
#4 0x000055bc8a4e8d88 in set_put (s=0x55bc8bfbcca0, key=0x55bc8c03ba40) at src/shared/hashmap.c:1266
#5 0x000055bc8a4fd1f3 in unit_require_mounts_for (u=0x55bc8c03ba40, path=<optimized out>) at src/core/unit.c:3642
#6 0x000055bc8a453b88 in mount_add_mount_links (m=0x55bc8c03ba40) at src/core/mount.c:254
#7 mount_add_extras.61496 (m=m@entry=0x55bc8c03ba40) at src/core/mount.c:501
#8 0x000055bc8a453f08 in mount_load.61514 (u=0x55bc8c03ba40) at src/core/mount.c:547
#9 0x000055bc8a45072f in unit_load (u=0x55bc8c03ba40) at src/core/unit.c:1205
#10 0x000055bc8a450cbe in manager_dispatch_load_queue (m=0x55bc8bf5e940) at src/core/manager.c:1393
#11 0x000055bc8a45b557 in mount_dispatch_io.61307 (source=<optimized out>, fd=<optimized out>, revents=<optimized out>, userdata=0x55bc8bf5e940) at src/core/mount.c:1725
#12 0x000055bc8a4b7240 in source_dispatch (s=s@entry=0x55bc8bf67260) at src/libsystemd/sd-event/sd-event.c:2115
#13 0x000055bc8a4b969a in sd_event_dispatch (e=0x55bc8bf5ee20) at src/libsystemd/sd-event/sd-event.c:2472
#14 0x000055bc8a4946df in sd_event_run (timeout=18446744073709551615, e=0x55bc8bf5ee20) at src/libsystemd/sd-event/sd-event.c:2501
#15 manager_loop (m=0x55bc8bf5e940) at src/core/manager.c:2211
#16 0x000055bc8a43dd30 in main (argc=6, argv=0x7ffd80214258) at src/core/main.c:1791
(gdb) p *(Unit *)0x55bc8c03ba40
$2 = {manager = 0x55bc8bf5e940, type = UNIT_MOUNT, load_state = UNIT_LOADED, merged_into = 0x0, id = 0x55bc8bff3de0 "cgroup-cpuset.mount", instance = 0x0,
names = 0x55bc8bfe6770, dependencies = {0x0 <repeats 12 times>, 0x55bc8bfe6800, 0x0, 0x55bc8bfe67d0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x55bc8bfe67a0, 0x0},
requires_mounts_for = 0x55bc8bf6c480, description = 0x55bc8bf74020 "/cgroup/cpuset", documentation = 0x0, fragment_path = 0x0,
source_path = 0x55bc8c05b860 "/proc/self/mountinfo", dropin_paths = 0x55bc8c03b3c0, fragment_mtime = 0, source_mtime = 0, dropin_mtime = 1452873637031013, job = 0x0,
nop_job = 0x0, job_timeout = 0, job_timeout_action = 0, job_timeout_reboot_arg = 0x0, refs = 0x0, conditions = 0x0, asserts = 0x0, condition_timestamp = {realtime = 0,
monotonic = 0}, assert_timestamp = {realtime = 0, monotonic = 0}, inactive_exit_timestamp = {realtime = 0, monotonic = 0}, active_enter_timestamp = {realtime = 0,
monotonic = 0}, active_exit_timestamp = {realtime = 0, monotonic = 0}, inactive_enter_timestamp = {realtime = 0, monotonic = 0}, slice = {unit = 0x0, refs_next = 0x0,
refs_prev = 0x0}, units_by_type_next = 0x55bc8c02d6a0, units_by_type_prev = 0x0, has_requires_mounts_for_next = 0x0, has_requires_mounts_for_prev = 0x0,
load_queue_next = 0x0, load_queue_prev = 0x0, dbus_queue_next = 0x55bc8c050ac0, dbus_queue_prev = 0x0, cleanup_queue_next = 0x0, cleanup_queue_prev = 0x0,
gc_queue_next = 0x0, gc_queue_prev = 0x0, cgroup_queue_next = 0x0, cgroup_queue_prev = 0x0, pids = 0x0, sigchldgen = 0, gc_marker = 0, deserialized_job = -1,
load_error = 0, unit_file_state = _UNIT_FILE_STATE_INVALID, unit_file_preset = -1, cgroup_path = 0x0, cgroup_realized_mask = 0, cgroup_subtree_mask = 0,
cgroup_members_mask = 0, on_failure_job_mode = 1, stop_when_unneeded = false, default_dependencies = true, refuse_manual_start = false, refuse_manual_stop = false,
allow_isolate = false, ignore_on_isolate = true, ignore_on_snapshot = false, condition_result = false, assert_result = false, transient = false, in_load_queue = false,
in_dbus_queue = true, in_cleanup_queue = false, in_gc_queue = false, in_cgroup_queue = false, sent_dbus_new_signal = false, no_gc = false, in_audit = false,
cgroup_realized = false, cgroup_members_mask_valid = false, cgroup_subtree_mask_valid = false}
(gdb) p *(Mount *)0x55bc8c03ba40
$3 = {meta = {manager = 0x55bc8bf5e940, type = UNIT_MOUNT, load_state = UNIT_LOADED, merged_into = 0x0, id = 0x55bc8bff3de0 "cgroup-cpuset.mount", instance = 0x0,
names = 0x55bc8bfe6770, dependencies = {0x0 <repeats 12 times>, 0x55bc8bfe6800, 0x0, 0x55bc8bfe67d0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x55bc8bfe67a0, 0x0},
requires_mounts_for = 0x55bc8bf6c480, description = 0x55bc8bf74020 "/cgroup/cpuset", documentation = 0x0, fragment_path = 0x0,
source_path = 0x55bc8c05b860 "/proc/self/mountinfo", dropin_paths = 0x55bc8c03b3c0, fragment_mtime = 0, source_mtime = 0, dropin_mtime = 1452873637031013, job = 0x0,
nop_job = 0x0, job_timeout = 0, job_timeout_action = 0, job_timeout_reboot_arg = 0x0, refs = 0x0, conditions = 0x0, asserts = 0x0, condition_timestamp = {realtime = 0,
monotonic = 0}, assert_timestamp = {realtime = 0, monotonic = 0}, inactive_exit_timestamp = {realtime = 0, monotonic = 0}, active_enter_timestamp = {realtime = 0,
monotonic = 0}, active_exit_timestamp = {realtime = 0, monotonic = 0}, inactive_enter_timestamp = {realtime = 0, monotonic = 0}, slice = {unit = 0x0,
refs_next = 0x0, refs_prev = 0x0}, units_by_type_next = 0x55bc8c02d6a0, units_by_type_prev = 0x0, has_requires_mounts_for_next = 0x0,
has_requires_mounts_for_prev = 0x0, load_queue_next = 0x0, load_queue_prev = 0x0, dbus_queue_next = 0x55bc8c050ac0, dbus_queue_prev = 0x0, cleanup_queue_next = 0x0,
cleanup_queue_prev = 0x0, gc_queue_next = 0x0, gc_queue_prev = 0x0, cgroup_queue_next = 0x0, cgroup_queue_prev = 0x0, pids = 0x0, sigchldgen = 0, gc_marker = 0,
deserialized_job = -1, load_error = 0, unit_file_state = _UNIT_FILE_STATE_INVALID, unit_file_preset = -1, cgroup_path = 0x0, cgroup_realized_mask = 0,
cgroup_subtree_mask = 0, cgroup_members_mask = 0, on_failure_job_mode = 1, stop_when_unneeded = false, default_dependencies = true, refuse_manual_start = false,
refuse_manual_stop = false, allow_isolate = false, ignore_on_isolate = true, ignore_on_snapshot = false, condition_result = false, assert_result = false,
transient = false, in_load_queue = false, in_dbus_queue = true, in_cleanup_queue = false, in_gc_queue = false, in_cgroup_queue = false, sent_dbus_new_signal = false,
no_gc = false, in_audit = false, cgroup_realized = false, cgroup_members_mask_valid = false, cgroup_subtree_mask_valid = false},
where = 0x55bc8bf73d50 "/cgroup/cpuset", parameters_proc_self_mountinfo = {what = 0x55bc8bfcd900 "cgroup:cpuset", options = 0x55bc8bf6f580 "rw,relatime,cpuset",
fstype = 0x55bc8bf6f1f0 "cgroup"}, parameters_fragment = {what = 0x0, options = 0x0, fstype = 0x0}, from_proc_self_mountinfo = true, from_fragment = false,
is_mounted = true, just_mounted = true, just_changed = true, sloppy_options = false, result = MOUNT_SUCCESS, reload_result = MOUNT_SUCCESS, directory_mode = 493,
timeout_usec = 90000000, exec_command = {{path = 0x0, argv = 0x0, exec_status = {start_timestamp = {realtime = 0, monotonic = 0}, exit_timestamp = {realtime = 0,
monotonic = 0}, pid = 0, code = 0, status = 0}, command_next = 0x0, command_prev = 0x0, ignore = false}, {path = 0x0, argv = 0x0, exec_status = {
start_timestamp = {realtime = 0, monotonic = 0}, exit_timestamp = {realtime = 0, monotonic = 0}, pid = 0, code = 0, status = 0}, command_next = 0x0,
command_prev = 0x0, ignore = false}, {path = 0x0, argv = 0x0, exec_status = {start_timestamp = {realtime = 0, monotonic = 0}, exit_timestamp = {realtime = 0,
monotonic = 0}, pid = 0, code = 0, status = 0}, command_next = 0x0, command_prev = 0x0, ignore = false}}, exec_context = {environment = 0x0,
environment_files = 0x0, rlimit = {0x0 <repeats 16 times>}, working_directory = 0x0, root_directory = 0x0, working_directory_missing_ok = false, umask = 18,
oom_score_adjust = 0, nice = 0, ioprio = 16384, cpu_sched_policy = 0, cpu_sched_priority = 0, cpuset = 0x0, cpuset_ncpus = 0, std_input = EXEC_INPUT_NULL,
std_output = 4, std_error = 4, timer_slack_nsec = 18446744073709551615, tty_path = 0x0, tty_reset = false, tty_vhangup = false, tty_vt_disallocate = false,
ignore_sigpipe = true, user = 0x0, group = 0x0, supplementary_groups = 0x0, pam_name = 0x0, utmp_id = 0x0, selinux_context_ignore = false, selinux_context = 0x0,
apparmor_profile_ignore = false, apparmor_profile = 0x0, smack_process_label_ignore = false, smack_process_label = 0x0, read_write_dirs = 0x0, read_only_dirs = 0x0,
inaccessible_dirs = 0x0, mount_flags = 0, capability_bounding_set_drop = 0, capabilities = 0x0, secure_bits = 0, syslog_priority = 30, syslog_identifier = 0x0,
syslog_level_prefix = true, cpu_sched_reset_on_fork = false, non_blocking = false, private_tmp = false, private_network = false, private_devices = false,
protect_system = PROTECT_SYSTEM_NO, protect_home = PROTECT_HOME_NO, no_new_privileges = false, same_pgrp = true, personality = 4294967295, syscall_filter = 0x0,
syscall_archs = 0x0, syscall_errno = 0, syscall_whitelist = false, address_families = 0x0, address_families_whitelist = false, runtime_directory = 0x0,
runtime_directory_mode = 493, oom_score_adjust_set = false, nice_set = false, ioprio_set = false, cpu_sched_set = false, no_new_privileges_set = false,
bus_endpoint = 0x0}, kill_context = {kill_mode = KILL_CONTROL_GROUP, kill_signal = 15, send_sigkill = true, send_sighup = false}, cgroup_context = {
cpu_accounting = false, blockio_accounting = false, memory_accounting = false, cpu_shares = 18446744073709551615, startup_cpu_shares = 18446744073709551615,
cpu_quota_per_sec_usec = 18446744073709551615, blockio_weight = 18446744073709551615, startup_blockio_weight = 18446744073709551615, blockio_device_weights = 0x0,
blockio_device_bandwidths = 0x0, memory_limit = 18446744073709551615, device_policy = CGROUP_AUTO, device_allow = 0x0, delegate = false}, exec_runtime = 0x0,
state = MOUNT_DEAD, deserialized_state = MOUNT_DEAD, control_command = 0x0, control_command_id = _MOUNT_EXEC_COMMAND_INVALID, control_pid = 0, timer_event_source = 0x0,
n_retry_umount = 0}

cgtop: headers not shown when run from a script

Hi,

systemd-cgtop (219, release 42, CentOS 7.4) does not show the headers when I use it inside a script (I am trying to do something like Monit: show the status of my own processes, etc.).

How can I make it show these headers?

Example:

systemd-cgtop -n 1

Path                                                                                                                  Tasks   %CPU   Memory  Input/s Output/s
...

But if I use a redirection or grep (to show only my units):

systemd-cgtop -n 1 | egrep "Path|abrtd"
/system.slice/abrtd.service                       1      -        -        -        -

Then the headers disappear.
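
If the header is suppressed because systemd-cgtop detects that standard output is not a terminal, one possible workaround (a sketch, not a cgtop feature) is to run it under a pseudo-terminal with script(1) so the tool still believes it is writing to a TTY:

# script allocates a pty for systemd-cgtop, so terminal-dependent output
# (including the header line) is produced even though the result is piped into grep.
script -q -c "systemd-cgtop -n 1" /dev/null | egrep "Path|abrtd"

Note that this also lets terminal escape sequences through, so the output may need extra filtering.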

Regards
Cesar Jorge

Add spec-file to the repository

Please consider adding the spec file to the repository as well. That way we'll have everything required to build a package in one repo. This not only simplifies the build process with modern services like @buildbot but also makes the entire process more transparent.

Memory hotplug: how about respecting the auto_online_blocks value

How about respecting "auto_online_blocks"?
Unconditionally onlining newly plugged memory blocks may confuse the user.


@@ -9,6 +9,7 @@ ACTION!="add", GOTO="memory_hotplug_end"
PROGRAM="/bin/uname -p", RESULT=="s390*", GOTO="memory_hotplug_end"
PROGRAM="/bin/uname -p", RESULT=="ppc64*", GOTO="memory_hotplug_end"

+ATTRS{auto_online_blocks}=="offline", GOTO="memory_hotplug_end"
ENV{.state}="online"
PROGRAM="/bin/systemd-detect-virt", RESULT=="none", ENV{.state}="online_movable"
ATTR{state}=="offline", ATTR{state}="$env{.state}"
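
For reference, the policy the proposed rule would respect lives in sysfs; on kernels new enough to expose the attribute, it can be inspected and changed like this (a sketch, assuming the standard memory-hotplug sysfs layout):

# Show the kernel's auto-onlining policy for newly added memory blocks;
# typical values are "offline" and "online" (newer kernels accept more specific policies).
cat /sys/devices/system/memory/auto_online_blocks
# Let udev (or the administrator) decide instead of the kernel:
echo offline > /sys/devices/system/memory/auto_online_blocks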
