
procfs's Introduction

procfs


This crate is an interface to the proc pseudo-filesystem on Linux, which is normally mounted as /proc. Long-term, this crate aims to be fairly feature complete, but at the moment not all files are exposed. See the docs for info on what's supported, or view the support.md file in the code repository.

Examples

There are several examples in the docs and in the examples folder of the code repository.

Here's a small example that prints out all processes that are running on the same tty as the calling process. This is very similar to what "ps" does in its default mode:

fn main() {
    let me = procfs::process::Process::myself().unwrap();
    let me_stat = me.stat().unwrap();
    let tps = procfs::ticks_per_second().unwrap();

    println!("{: >5} {: <8} {: >8} {}", "PID", "TTY", "TIME", "CMD");

    let tty = format!("pty/{}", me_stat.tty_nr().1);
    for prc in procfs::process::all_processes().unwrap() {
        let prc = prc.unwrap();
        let stat = prc.stat().unwrap();
        if stat.tty_nr == me_stat.tty_nr {
            // total_time is in seconds
            let total_time =
                (stat.utime + stat.stime) as f32 / (tps as f32);
            println!(
                "{: >5} {: <8} {: >8} {}",
                stat.pid, tty, total_time, stat.comm
            );
        }
    }
}

Here's another example that shows how to get the current memory usage of the current process:

use procfs::process::Process;

fn main() {
    let me = Process::myself().unwrap();
    let me_stat = me.stat().unwrap();
    println!("PID: {}", me.pid);

    let page_size = procfs::page_size();
    println!("Memory page size: {}", page_size);

    println!("== Data from /proc/self/stat:");
    println!("Total virtual memory used: {} bytes", me_stat.vsize);
    println!(
        "Total resident set: {} pages ({} bytes)",
        me_stat.rss,
        me_stat.rss * page_size
    );
}

There are a few ways to get this data, so also check out the longer self_memory example for more details.
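For comparison, here's a hedged sketch that reads /proc/self/statm instead (this assumes the Process::statm() accessor and its size/resident fields, which report page counts; check the docs for your version):

use procfs::process::Process;

fn main() -> procfs::ProcResult<()> {
    let me = Process::myself()?;
    let statm = me.statm()?;
    // statm reports sizes in pages, so convert to bytes with the page size
    let page_size = procfs::page_size();

    println!("== Data from /proc/self/statm:");
    println!("Total virtual memory used: {} bytes", statm.size * page_size);
    println!("Total resident set: {} bytes", statm.resident * page_size);
    Ok(())
}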

Cargo features

The following cargo features are available:

  • chrono -- Default. Optional. This feature enables a few methods that return values as DateTime objects.
  • flate2 -- Default. Optional. This feature enables parsing gzip compressed /proc/config.gz file via the procfs::kernel_config method.
  • backtrace -- Optional. This feature lets you get a stack trace whenever an InternalError is raised.
  • serde1 -- Optional. This feature allows most structs to be serialized and deserialized using serde 1.0. Note, this feature requires a version of Rust newer than 1.48.0 (which is the MSRV for procfs). The exact version required is not specified here, since serde does not have an MSRV policy.
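For instance, the chrono feature gates the DateTime-returning helpers. A small sketch, assuming the boot_time()/boot_time_secs() pair described in the docs:

fn main() -> procfs::ProcResult<()> {
    // Always available: boot time as seconds since the epoch
    println!("boot time (secs): {}", procfs::boot_time_secs()?);
    // Only with the `chrono` feature (enabled by default): a chrono DateTime
    println!("boot time: {}", procfs::boot_time()?);
    Ok(())
}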

Minimum Rust Version

This crate is only tested against the latest stable rustc compiler, but may work with older compilers. See msrv.md for more details.

License

The procfs library is licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.

For additional copyright information regarding documentation, please also see the COPYRIGHT.txt file.

Contribution

Contributions are welcome, especially in the areas of documentation and testing on older kernels.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

procfs's People

Contributors

afranchuk, arilou, atul9, benesch, bobrik, dalance, edigaryev, eliad-wiz, eliageretto, eminence, erichdongubler, flier, futpib, h33p, idanski, kernelerr, ludo-c, macisamuele, nukesor, realkc, saruman9, sunfishcode, taborkelly, tatref, trtt, tvannahl, wfly1998, zmjackson, zpp0, zz85


procfs's Issues

Improve the ProcError

Hello

I've noticed that ProcError doesn't implement std::error::Error, nor does it implement Display. This is annoying, because it can't be directly used with common error handling strategies (e.g. it can't be used as Box<dyn Error + Send + Sync> or with failure).

Furthermore, the NotFound variant doesn't say much about what was not found, which makes tracking the origin or the cause of the error a bit harder :-(.

Is it possible to improve the error type?
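For illustration, here is a hedged sketch of the kind of impls being requested, on a simplified stand-in enum (not the crate's actual definition):

use std::fmt;

#[derive(Debug)]
enum ProcError {
    PermissionDenied(Option<std::path::PathBuf>),
    NotFound(Option<std::path::PathBuf>),
    Io(std::io::Error),
}

impl fmt::Display for ProcError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ProcError::PermissionDenied(p) => write!(f, "permission denied: {:?}", p),
            ProcError::NotFound(p) => write!(f, "file not found: {:?}", p),
            ProcError::Io(e) => write!(f, "io error: {}", e),
        }
    }
}

// With Display + Debug in place, the error can be boxed as dyn Error + Send + Sync
impl std::error::Error for ProcError {}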

Thank you

Fails to compile on macos

I'm developing some code on my MacBook that will eventually be deployed and run on Linux. It looks like some imports fail at src/process.rs:7:14, which causes some other errors.

Decouple /stat reading from the Process constructor

Reading procfs is a rather slow operation, and when reading it in bulk you sometimes only need a specific file like /proc/<pid>/maps, which is currently obtained through Process.maps().

Is there a reason why the Process structure constructor reads and parses /proc/<pid>/stat by default? Also, recently it got coupled even more, see bfd2c86.

It would be nice to have a bare Process structure and then query the related resources on an on-demand basis.
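For illustration, a sketch of the on-demand style being proposed (method names are assumptions about how such an API could look, not necessarily the current one):

use procfs::process::Process;

fn main() -> procfs::ProcResult<()> {
    // Construct the handle cheaply, without implying a /proc/<pid>/stat read...
    let prc = Process::new(1)?;
    // ...then read only the files that are actually needed, on demand.
    println!("cmdline: {:?}", prc.cmdline()?);
    println!("comm:    {}", prc.stat()?.comm);
    Ok(())
}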

Is 0.4.7 supposed to be yanked from `crates.io`?

Trying to access the commit for 0.4.7 after a fresh clone results in:

$ git checkout 151a9bcfb9896082d83fbe72c3333f17002713e9                                                                                        
fatal: reference is not a tree: 151a9bcfb9896082d83fbe72c3333f17002713e9

$ git fetch origin 151a9bcfb9896082d83fbe72c3333f17002713e9
error: Server does not allow request for unadvertised object 151a9bcfb9896082d83fbe72c3333f17002713e9

Since there's no CHANGELOG.md to speak of, and there are no notes on the 0.4.7 release that's on crates.io, I don't want to guess which commit in the currently available history I could develop against with PRs like #30. Is there a reason that this history isn't available right now? Was some history rewriting done, perhaps?

Panic: Internal Unwrap Error: No /proc/ directory: Too many open files (os error 24))

Below is what was output to my terminal:

thread 'display_handler' panicked at 'called `Result::unwrap()` on an `Err` value: InternalError(bug at /home/chaz/.cargo/registry/src/github.com-1ecc6299db9ec823/procfs-0.7.7/src/process.rs:2066 (please report this procfs bug)
                                                                      
Internal Unwrap Error: No /proc/ directory: Too many open files (os error 24))', src/libcore/result.rs:1165:5
    stack backtrace:
        0:     0x55bd538b9174 - <unknown>
        1:     0x55bd538db12c - core::fmt::write::h4931d127ae5abcb3
        2:     0x55bd538b53c7 - <unknown>
        3:     0x55bd538bb6ce - <unknown>
        4:     0x55bd538bb3d1 - <unknown>
        5:     0x55bd538bbdc5 - std::panicking::rust_panic_with_hook::hfe88d534155928a4
        6:     0x55bd538bb962 - <unknown>
        7:     0x55bd538bb856 - rust_begin_unwind
        8:     0x55bd538d7aca - core::panicking::panic_fmt::h5e35aad6e23afea1
        9:     0x55bd538d7bc7 - core::result::unwrap_failed::hfa40973353787c4a
        10:     0x55bd537343c9 - <unknown>
        11:     0x55bd5371662d - <unknown>
        12:     0x55bd5374aef5 - <unknown>
        13:     0x55bd538bf91a - __rust_maybe_catch_panic
        14:     0x55bd5372e4f2 - <unknown>
        15:     0x55bd538af83f - <unknown>
        16:     0x55bd538bec80 - <unknown>
        17:     0x7f767a37be65 - <unknown>
        18:     0x7f7679e8e88d - clone
        19:                0x0 - <unknown>

(edited by @eminence to improve formatting)

Surprisingly large compile time

In a decently large project with 255 dependencies, this crate ranks 6th by compile time in a debug build:

crate                      build time
protobuf v2.18.1           5.2s
syn v1.0.58                5.2s
regex-syntax v0.6.18       4.0s
nix v0.18.0                3.5s
object v0.20.0             3.2s
clap v2.33.3               3.2s
procfs v0.9.1              3.0s
gimli v0.22.0              3.0s
serde_derive v1.0.116      2.8s
trust-dns-proto v0.20.1    2.6s

It looks like approximately 40% of this crate's build time was introduced by the strategy employed in #48. As far as I can understand, the idea there was to nest macros so that the value produced by line!() is relevant to the error. The problem is that this strategy causes this crate to balloon after macro expansion. If I run cargo llvm-lines | head in this crate, I get this:

  Lines          Copies       Function name
  -----          ------       -------------
  250761 (100%)  5134 (100%)  (TOTAL)
   10987 (4.4%)     1 (0.0%)  procfs::process::status::Status::from_reader
    7804 (3.1%)     1 (0.0%)  procfs::process::mount::NFSEventCounter::from_str
    5989 (2.4%)     1 (0.0%)  procfs::process::stat::Stat::from_reader
    3895 (1.6%)     1 (0.0%)  procfs::diskstats::DiskStat::from_line
    3875 (1.5%)    25 (0.5%)  alloc::raw_vec::RawVec<T,A>::grow_amortized
    3698 (1.5%)    76 (1.5%)  <core::result::Result<T,E> as core::ops::try_trait::Try>::branch
    3018 (1.2%)     1 (0.0%)  procfs::process::limit::Limits::from_reader

which is quite unusual, and points squarely at the large amount of code that (for example) from_str! expands into.

I'm also not convinced this strategy even produces useful errors, because its error messages are rendered useless by helper functions, such as from_iter and split_into_num.

Are you interested in a PR that doesn't introduce the panics back, but which tries to do something about the compile time regression from #48, even if it removes some of the file+line information?

Supporting Android platform?

I ran the demo code on the Android platform:

fn main() {
    let me = procfs::process::Process::myself().unwrap();
    println!("PID: {}", me.pid);
}

and the error occurs:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: PermissionDenied(Some("/proc/sys/kernel/osrelease"))', /Users/weishu/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/procfs-0.10.1/src/lib.rs:303:34
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Normal apps on Android have no permission to read /proc/sys/kernel/osrelease (it is forbidden by SELinux). How can I fix this?

Turn procfs into a panic-free library

In the current design of the procfs crate, panics are possible, as assertions and unwraps are common in the code. This was done to help uncover bugs in the library.

As the library has matured a bit, all of the assertions and unwraps should be replaced with Result types. Most functions already return a ProcResult, so this itself isn't a big change to the public API of procfs.

See also #39

Improving is_alive()

Hi Andrew,

First of all - thanks for the great library, we use it in production for a larger Rust-based system supervisor project.

That said, I have two issues with is_alive():

  • Is a defunct (as told by the Z flag in /proc/$pid/stat) process "alive"? Probably not :-)
  • We have a system where we've hit issues with pid reuse, while the cmdline and uid are the same. is_alive() seems to agree that they are same process, but in reality they aren't.

What do you think about a PR that would:

  • Check whether the process is defunct before returning true in is_alive(); and
  • Cache starttime from /proc/$pid/stat and compare the current value to determine if they are actually the same process.

Note that using starttime is still not bullet-proof, but for my use case it's a better check.
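A rough sketch of the kind of check being proposed, with field names following /proc/<pid>/stat (an illustration, not the crate's implementation):

use procfs::process::Process;

// Treat a process as "the same, and alive" only if its stat can still be
// read, it is not a zombie, and its starttime matches the value cached when
// it was first observed.
fn still_alive(pid: i32, cached_starttime: u64) -> bool {
    match Process::new(pid).and_then(|p| p.stat()) {
        Ok(stat) => stat.state != 'Z' && stat.starttime == cached_starttime,
        Err(_) => false,
    }
}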

Thanks!

Lev

Consider only making fields optional if they were added after Linux 2.6

The docs say that this library intends to support Linux versions greater than 2.6. Would it make sense to only make fields optional if they were added after 2.6? Anything before 2.6 is pretty old, and from my experience with rust-psutil, there's a lack of documentation and a lot of missing features from kernels older than 2.6.

edit: This would be a breaking change, and would require some changes to struct parsing too, but would also make things a lot simpler.

build.rs erroneous target OS check during cross-compilation

My use case is cross-compiling this crate for linux on a macOS host machine. This check in the crate's build.rs erroneously uses the cargo cfg attribute which is based on the host system in a build script, instead of the target system.

// Filters are extracted from `libc` filters
#[cfg(not(any(target_os = "android", target_os = "linux", target_os = "l4re",)))]
compile_error!("Building procfs on an for a unsupported platform. Currently only linux and android are supported")

Could this condition instead check the CARGO_CFG_TARGET_OS environment variable which should correctly report the target system OS during a build script?
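A minimal sketch of what such a check could look like (an illustration of the suggestion, not the crate's actual build.rs):

// build.rs: check the *target* OS via the environment variable Cargo sets
// for build scripts, instead of #[cfg], which reflects the host.
fn main() {
    let target_os = std::env::var("CARGO_CFG_TARGET_OS").unwrap_or_default();
    if !matches!(target_os.as_str(), "linux" | "android" | "l4re") {
        panic!("procfs currently only supports Linux, Android and L4Re targets");
    }
}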

procfs::process::all_processes() can blow "open files" ulimit on a system with a large number of processes

Looks like the changes in pull request #171 make the procfs::process::Process struct hold an open file descriptor for the corresponding /proc/<pid> directory, and stats are then lazily loaded with methods.

Unfortunately, keeping a lot of open file descriptors around can hit the "max open files" ulimit; it is 1024 on most Linux installs by default. This happened to me today while playing around with nushell, when the ps command did not work, since it had hit the open files limit.

The code in question is this snippet, where procfs::process::all_processes() is called, and then more Process instances are created from there, which really explodes the number of open file descriptors.

Unfortunately I think the change in #171 - i.e. wrapping an open file descriptor in a struct and holding it open until such time as it is needed - is a bad idea. Why not just hold the name of the /proc/<pid> directory as a String? Convention dictates that a file is opened, data is read, and the file is closed in rapid succession. This is effectively how it is being done by the methods in impl Process, except for the open file descriptor for the directory!

To reproduce the issue, I offer this butchered version of examples/ps.rs:

#![allow(clippy::print_literal)]

use procfs::process::Process;

extern crate procfs;

fn main() {
    let mestat = procfs::process::Process::myself().unwrap().stat().unwrap();
    let tps = procfs::ticks_per_second().unwrap();

    println!("{: >10} {: <8} {: >8} {}", "PID", "TTY", "TIME", "CMD");

    let tty = format!("pty/{}", mestat.tty_nr().1);
    let mut persist = Vec::new();
    match procfs::process::all_processes() {
        Ok(all_procs) => {
            for p in all_procs {
                match p {
                    Ok(prc) => {
                        if let Ok(stat) = prc.stat() {
                            // total_time is in seconds
                            let total_time = (stat.utime + stat.stime) as f32 / (tps as f32);
                            println!("{: >10} {: <8} {: >8} {}", stat.pid, tty, total_time, stat.comm);
                        }
                        let newproc = match Process::new(prc.pid()) {
                            Ok(p) => p,
                            Err(e) => panic!("{}",e) // cannot create new Process!
                        };
                        // don't let processes go out of scope, to trigger "Too many open files"
                        persist.push(newproc);
                    },
                    Err(e) => {
                        println!("{}", e);
                    }
                }
            }
        },
        Err(e) => println!("{}",e)
    }
}   

Run this with a suitably low ulimit, eg.

ulimit -n 256
cargo run --example bad-ps.rs

Note: I'm creating new Process instances and pushing them into a Vec to trigger a panic for illustrative purposes. If I push the original prc objects into the Vec, the ProcessesIter iterator simply finishes early. Either way, the intent is to show that if the Processes returned by all_processes() are not dropped, the open files limit will be reached.

our lazy statics are a panic hazard

We have several lazy statics that use .unwrap(), thus they can panic. This is poor API design and should be fixed. See #136 for an example of this.

lazy_static! {
    /// The number of clock ticks per second.
    ///
    /// This is calculated from `sysconf(_SC_CLK_TCK)`.
    static ref TICKS_PER_SECOND: i64 = {
        ticks_per_second().unwrap()
    };
    /// The version of the currently running kernel.
    ///
    /// This is a lazily constructed static.  You can also get this information via
    /// [KernelVersion::new()].
    static ref KERNEL: KernelVersion = {
        KernelVersion::current().unwrap()
    };
    /// Memory page size, in bytes.
    ///
    /// This is calculated from `sysconf(_SC_PAGESIZE)`.
    static ref PAGESIZE: i64 = {
        page_size().unwrap()
    };
}

Panic when iterating through FDs

I am getting a panic when I try to extract the fds from a particular process. The code I am running is:

if let Result::Ok(fds) = process.fd() {
   for fd in fds {
      if let FDTarget::Socket(inode) = fd.target {
         map.insert(inode, process.pid());
       }
  }
}

The panic I am getting is:

thread 'main' panicked at 'attempt to subtract with overflow', /home/alisle/.cargo/registry/src/github.com-1ecc6299db9ec823/procfs-0.4.1/src/process.rs:984:63
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
             at libstd/sys_common/backtrace.rs:71
             at libstd/sys_common/backtrace.rs:59
   2: std::panicking::default_hook::{{closure}}
             at libstd/panicking.rs:211
   3: std::panicking::default_hook
             at libstd/panicking.rs:227
   4: std::panicking::rust_panic_with_hook
             at libstd/panicking.rs:476
   5: std::panicking::continue_panic_fmt
             at libstd/panicking.rs:390
   6: rust_begin_unwind
             at libstd/panicking.rs:325
   7: core::panicking::panic_fmt
             at libcore/panicking.rs:77
   8: core::panicking::panic
             at libcore/panicking.rs:52
   9: <procfs::process::FDTarget as core::str::FromStr>::from_str
             at /home/alisle/.cargo/registry/src/github.com-1ecc6299db9ec823/procfs-0.4.1/src/process.rs:984
  10: procfs::process::Process::fd
             at /home/alisle/.cargo/registry/src/github.com-1ecc6299db9ec823/procfs-0.4.1/src/process.rs:1369
  11: zerotrust_track::proc::Proc::update
             at src/proc/mod.rs:43
  12: zerotrust_track::proc::Proc::new
             at src/proc/mod.rs:32
  13: zerotrust_track::parser::Parser::new

The process that it is hanging up on has the following FDs:

root@sleepy:/proc/1149/fd# ls -al
total 0
dr-x------ 2 root root  0 Jan 16 09:43 .
dr-xr-xr-x 9 root root  0 Jan 16 09:43 ..
lr-x------ 1 root root 64 Jan 16 09:43 0 -> /dev/null
lrwx------ 1 root root 64 Jan 16 09:43 1 -> 'socket:[22188]'
lr-x------ 1 root root 64 Jan 16 09:43 10 -> /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness_hw_changed
lr-x------ 1 root root 64 Jan 16 09:43 11 -> /dev/input/event0
lrwx------ 1 root root 64 Jan 16 09:43 12 -> 'socket:[31154]'
lrwx------ 1 root root 64 Jan 16 09:43 2 -> 'socket:[22188]'
lrwx------ 1 root root 64 Jan 16 09:43 3 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jan 16 09:43 4 -> 'anon_inode:[eventfd]'
lrwx------ 1 root root 64 Jan 16 09:43 5 -> 'socket:[28263]'
lrwx------ 1 root root 64 Jan 16 09:43 6 -> 'anon_inode:[eventfd]'
l-wx------ 1 root root 64 Jan 16 09:43 7 -> /run/systemd/inhibit/4.ref
lrwx------ 1 root root 64 Jan 16 09:43 8 -> 'socket:[22198]'
lrwx------ 1 root root 64 Jan 16 09:43 9 -> /sys/devices/platform/dell-laptop/leds/dell::kbd_backlight/brightness
root@sleepy:/proc/1149/fd# 

I am running ubuntu 18.04 on a Dell XPS 15, the process which is causing the issues is /usr/lib/upower/upowerd

I am using version 0.4.1 of the crate.

Future breaking API changes

This issue is collecting some ideas for breaking API changes that we might want to make in the future

  • Convert some fields from Option<T> to T if they were added in very old kernels (#68)
  • Remove stat and pid fields from the Process struct to support cases when you don't need this information (#60)
  • Remove some deprecated methods (#52)
  • Returning ticks in CpuTime (#76)

Not usable within containers

Really nice library. However, one thing I miss is that it cannot work inside containers, because it would be looking at the global stats. Am I misunderstanding the API here? As far as I understand, the container-level data is under the /sys/fs/cgroup/* namespace.

I'd like to see that part of this library as well. Any thoughts or recommendations on the API design are most welcome and I am willing to contribute to the crate starting with memory and CPU stats.

please remove expects!

This crate is great and I'd really like to use it, but there's one issue, and that's the expects. IMHO a library shouldn't act on global program state (like terminating the program) based on an unexpected condition; it should only do that if the state could lead to undefined behavior, which can occur in things like libc. Here we are expecting procfs to follow a certain standard in terms of layout, which should be pretty reliable, but the worst case should cause an API to return an error, not a panic. I don't mind helping with this effort, but I wanted to get your thoughts.

more consistent function naming style

There are some naming inconsistencies that have been bothering me for a while, and I'd like to solicit input from anyone who has thoughts. There is a mix of free functions that return structures, and constructor methods on structs. Some examples:

  • There's a free function process::all_processes() -> ProcResult<Vec<Process>> to get all processes, but a method Process::new(pid) -> ProcResult<Process> to get a single process
  • There's a free function cpuinfo() -> ProcResult<CpuInfo> to get CPU Info, but when getting memory info, you use a constructor method Meminfo::new() -> ProcResult<MemInfo>

My personal inclination is to reduce the number of free functions (except when necessary, because there's no struct to attach to; boot_time_secs() -> ProcResult<u64> is a good example of this), but I'm curious to know if anyone else has any thoughts on this.

KernelStats doesn't handle CPU hotplug (discontiguous CPUs)

I have a system where CPUs have been hot unplugged:

cpu  2074740 293711 234994 2597681757 257 22230 26383 0 0 0
cpu0 14996 1543 5186 5195036 1 63 1986 0 0 0
cpu4 10653 1590 913 5205455 0 57 116 0 0 0
cpu8 22749 1621 923 5193341 0 69 101 0 0 0
cpu12 24716 1511 1236 5191184 1 67 90 0 0 0

It looks like we assume CPUs are contiguous, and don't have a way to retrieve the CPU IDs.
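For illustration, the IDs could be recovered from the per-CPU line prefixes themselves; a sketch using only the standard library:

use std::fs;

// Collect the CPU IDs listed in /proc/stat (e.g. "cpu0", "cpu4", "cpu8"),
// without assuming they are contiguous. The aggregate "cpu" line is skipped
// because it has no numeric suffix.
fn cpu_ids() -> std::io::Result<Vec<u32>> {
    let stat = fs::read_to_string("/proc/stat")?;
    Ok(stat
        .lines()
        .filter_map(|line| line.split_whitespace().next())
        .filter_map(|name| name.strip_prefix("cpu").and_then(|id| id.parse().ok()))
        .collect())
}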

u32 conversion error on /proc/stat::procs_blocked

I have a program which periodically reads system usage stats from procfs and am very occasionally hitting the following.

[ WARN] report: Failed to update 1s usages (Internal error: bug at /home/htejun/.cargo/registry/src/github.com-1ecc6299db9ec823/procfs-0.7.9/src/lib.rs:901 (please report this procfs bug)

On 0.7.9, this is the line which reads procs_blocked from /proc/stat. The failure is transient and I have no idea how this could happen.

Example in README doesn't compile

Replace:

-    let me = procfs::Process::myself().unwrap();
+    let me = procfs::process::Process::myself().unwrap();

And

-    for prc in procfs::all_processes() {
+    for prc in procfs::process::all_processes().unwrap() {

Panic: Failed to unwrap line (Failed to read line)

Steps to reproduce on my system:

$ cargo install procs
$ procs

thread 'main' panicked at 'Failed to unwrap line (Failed to read line). Please report this as a procfs bug.', /home/danilo/.cargo/registry/src/github.com-1ecc6299db9ec823/procfs-0.4.7/src/process.rs:910:24
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
   1: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:70
   2: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:58
             at src/libstd/panicking.rs:200
   3: std::panicking::default_hook
             at src/libstd/panicking.rs:215
   4: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:478
   5: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:385
   6: std::panicking::begin_panic_fmt
             at src/libstd/panicking.rs:340
   7: procfs::process::Process::io
   8: procs::run_default
   9: procs::run
  10: procs::main
  11: std::rt::lang_start::{{closure}}
  12: std::panicking::try::do_call
             at src/libstd/rt.rs:49
             at src/libstd/panicking.rs:297
  13: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:87
  14: std::rt::lang_start_internal
             at src/libstd/panicking.rs:276
             at src/libstd/panic.rs:388
             at src/libstd/rt.rs:48
  15: main
  16: __libc_start_main
  17: _start

procs still uses version 0.4.7, but I didn't see an obvious fix in your git commit log.

CC @dalance.

Permission denied for write-only files.

procfs::sys::vm::drop_caches(procfs::sys::vm::DropCache::All) fails for me with "Permission denied" (even when running as root, as required). I think the reason is that the file cannot be read (sudo cat /proc/sys/vm/drop_caches shows the same error), only written to (echo 3 | sudo tee /proc/sys/vm/drop_caches works).
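For reference, the echo-style workaround expressed in Rust, opening the file write-only (a sketch; requires root):

use std::io::Write;

fn drop_all_caches() -> std::io::Result<()> {
    // Open write-only: the kernel only permits writes to this file
    let mut f = std::fs::OpenOptions::new()
        .write(true)
        .open("/proc/sys/vm/drop_caches")?;
    f.write_all(b"3")?;
    Ok(())
}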

examples/mountinfo.rs panic on fs_type: "bpf", should this example skip bpf type in loop?

Discussed in #155

Originally posted by pymongo November 9, 2021

[examples/mountinfo.rs:6] &mount = MountInfo {
    mnt_id: 33,
    pid: 23,
    majmin: "0:29",
    root: "/",
    mount_point: "/sys/fs/bpf",
    mount_options: {
        "noexec": None,
        "rw": None,
        "relatime": None,
        "nodev": None,
        "nosuid": None,
    },
    opt_fields: [
        Shared(
            11,
        ),
    ],
    fs_type: "bpf",
    mount_source: None,
    super_options: {
        "rw": None,
        "mode": Some(
            "700",
        ),
    },
}
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', examples/mountinfo.rs:15:32

Zombie processes are considered alive

Hi!

It looks like the behavior of Process::is_alive has changed.
I'm not 100% sure if this was a deliberate change or a regression. As it wasn't documented in the Changelog, I assumed that this is indeed a bug.

Previously, Zombie processes weren't detected as alive. This however changed in v0.13.

A minimal reproducible example can be found over here:
https://github.com/Nukesor/Debug/tree/procfs-zombie-process

git clone git@github.com:Nukesor/Debug
cd Debug
git switch procfs-zombie-process
cargo run

The example spawns a sleep 2 and checks the is_alive function and the current process' state:

Process 402069 is alive with state Running
Process 402069 is alive with state Sleeping
Process 402069 is alive with state Sleeping
Process 402069 is alive with state Sleeping
Process 402069 is alive with state Zombie
Process 402069 is alive with state Zombie

From what I gathered, Zombie processes shouldn't be considered alive.

This change in behavior was introduced in v0.13.

v0.12 worked as expected:

Process 403784 is alive with state Running
Process 403784 is alive with state Sleeping
Process 403784 is alive with state Sleeping
Process 403784 is alive with state Sleeping
Process 403784 is dead with state Zombie
Process 403784 is dead with state Zombie

Add an optional prefix to /proc for gathering data?

Hi,

First thanks a lot for this library that we use actively in https://github.com/hubblo-org/scaphandre/. This is really great work and allows us to save a lot of time.

I'd like to propose a feature that would be definitely useful for us but I'm not sure if it would fit in the direction you want to take with procfs. So this is more a question about whether you'd accept considering this feature or not. If so, I'd gladly propose a PR accordingly.

To allow our project to run with Docker while being as secure as possible, we'd like to enable mounting /proc (and /sys/class/powercap, but this is specific to scaphandre) as read-only volumes. However, this causes AppArmor in the container to crash, as it needs to touch /proc/self at some point (I didn't test yet with SELinux, for example). This behavior is discussed here if you are interested in the details.

To fix that, we would like to pass an optional prefix (/myprefix for example) for scaphandre to gather data from /myprefix/proc and /myprefix/sys/class/powercap instead of /proc and /sys/class/powercap.
So the question is: would you consider adding an optional prefix to procfs, so that it could get data from a prefixed path in such a context?

Thanks a lot for your time.

Child processes

Hi!

First of all, thanks for providing this library! I'm currently looking for a nice solution to get all child processes of a specific process. After looking at psutil and procfs, I found that procfs is a lot cleaner and definitely better documented! Your library certainly deserves more attention :)

I would like to add the feature for Process::children() and it would be awesome to get a short introduction or a few tips on how to approach this.

Another thing I'm thinking about is whether it should only be Process::children_ids(), or also another function Process::children() which actually returns a vector of Process.
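One possible starting point, sketched with only the standard library (this reads the children file directly, which requires a kernel built with CONFIG_PROC_CHILDREN; it is an illustration of the approach, not an existing procfs API):

use std::fs;

// Read the child PIDs of a process from /proc/<pid>/task/<pid>/children,
// which lists them space-separated.
fn children_ids(pid: i32) -> std::io::Result<Vec<i32>> {
    let path = format!("/proc/{}/task/{}/children", pid, pid);
    let data = fs::read_to_string(path)?;
    Ok(data.split_whitespace().filter_map(|p| p.parse().ok()).collect())
}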

Is ProcResult really necessary?

After playing around with procfs (great start btw!) I began asking myself if the type ProcResult is really necessary the way it is.

The alternative I would think of is to use the regular Result type. That way everyone has functions like expect(), is_err(), unwrap_or(), etc. natively at hand.

I would find a definition like the following more ergonomic:

pub enum ProcError {
  PermissionDenied,
  NotFound,
}

type Result<T> = std::result::Result<T, ProcError>;

// example
fn f() -> Result<Process> {
  Err(ProcError::NotFound)
}

This is similar to the way Result is used by std::io.

Panic when reading process comm values with invalid UTF-8 characters

comm values can contain pretty much any characters, which breaks /proc/[pid]/stat parsing:

$ echo -e "\xFF" > /proc/$$/comm
$ cargo run --example dump -- $$
    Finished dev [unoptimized + debuginfo] target(s) in 0.01s
     Running `target/debug/examples/dump 15815`
Info for pid=15815
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Incomplete(Some("/proc/15815/stat"))', src/libcore/result.rs:1084:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

Consider refactoring the project into smaller files

First off, I just want to say that this project looks great and I think we over at rust-psutil will probably switch to using this instead of doing our own custom procfs stuff.

One thing I think would be nice is if some of the larger files were refactored into smaller and more modular files, just to make things easier and more manageable to work with. For example, process.rs is over 3,000 lines.

Fix /proc/cpuinfo parsing on a Raspberry Pi

On my Raspberry Pi, there's an extra block of data at the end of /proc/cpuinfo that confuses the current parsing code:

processor       : 0
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 89.60
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 1
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 89.60
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 2
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 89.60
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 3
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 89.60
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

Hardware        : BCM2835
Revision        : a020d3
Serial          : 00000000ef594812
Model           : Raspberry Pi 3 Model B Plus Rev 1.3

Failed to parse /proc/locks

The following /proc/locks file triggers an InternalError

1: POSIX  ADVISORY  WRITE 723 00:14:16845 0 EOF
2: FLOCK  ADVISORY  WRITE 652 00:14:16763 0 EOF
3: FLOCK  ADVISORY  WRITE 1594 fd:00:396528 0 EOF
4: FLOCK  ADVISORY  WRITE 1594 fd:00:396527 0 EOF
5: FLOCK  ADVISORY  WRITE 2851 fd:00:529372 0 EOF
6: POSIX  ADVISORY  WRITE 1280 00:14:16200 0 0
6: -> POSIX  ADVISORY  WRITE 1281 00:14:16200 0 0
6: -> POSIX  ADVISORY  WRITE 1279 00:14:16200 0 0
6: -> POSIX  ADVISORY  WRITE 1282 00:14:16200 0 0
6: -> POSIX  ADVISORY  WRITE 1283 00:14:16200 0 0
7: OFDLCK ADVISORY  READ  -1 00:06:1028 0 EOF
8: FLOCK  ADVISORY  WRITE 6471 fd:00:529426 0 EOF
9: FLOCK  ADVISORY  WRITE 6471 fd:00:529424 0 EOF
10: FLOCK  ADVISORY  WRITE 6471 fd:00:529420 0 EOF
11: FLOCK  ADVISORY  WRITE 6471 fd:00:529418 0 EOF
12: POSIX  ADVISORY  WRITE 1279 00:14:23553 0 EOF
13: FLOCK  ADVISORY  WRITE 6471 fd:00:393838 0 EOF
14: POSIX  ADVISORY  WRITE 655 00:14:16146 0 EOF

Running 4.19.0-6-amd64

subprocess based question

So I'm working on tracing a process via ptrace, and am aiming to capture when a child process execs a new process and then get the path to that executable via procfs. I've done the first part, but I've just started looking into the procfs part.

It appears I should be able to walk the process tree and from that find my spawned process, but I'm wondering how easy it is to get the path to the executable that has been executed for a given subprocess.
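If it helps, /proc/<pid>/exe is a symlink to the running executable, which the crate exposes via Process::exe() (a short sketch, assuming that accessor):

use procfs::process::Process;

fn exe_path(pid: i32) -> procfs::ProcResult<std::path::PathBuf> {
    // Resolves the /proc/<pid>/exe symlink to the executable's path
    Process::new(pid)?.exe()
}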

Does not work when running as 32-bit process on 64-bit machines

When running procfs::process::all_processes(), parsing fails in stat.rs because fields overflow usize. Here's an example error:

.cargo/registry/src/github.com-1ecc6299db9ec823/procfs-0.9.1/src/process/stat.rs:306 (please report this procfs bug)

Changing start_data to be a u64 makes the problem go away for the field, but then it fails to parse the next one:

.cargo/registry/src/github.com-1ecc6299db9ec823/procfs-0.9.1/src/process/stat.rs:307 (please report this procfs bug)

While this is not a very common configuration nowadays, a user could still run into it by downloading the wrong build of a program. I don't know what your view is on this, but it could be solved in at least a couple of ways:

  1. Just replace all parseable occurrences of usize with u64 (there are 60 instances of usize in total, 26 excluding the diskstats module).
  2. Do 1, but selectively. Ignore certain fields that would never be over 4G.
  3. Create a typedef that can switch between usize and u64 via a feature. Given the way Rust compiles crates, it could prove rather fragile when two crates expect different features, but flexible at the same time.

If this change is not worth it I'm still interested whether diskstats should be using usize, because nowadays disks are in terabytes of size, and after long runtime the stats could overflow usize on pure 32-bit machines. However, I'm not sure how Linux does it, and I could be wrong, I'm just assuming the numbers are not limited to native word size on 32-bit machines.

Either way, if it interests you I would be willing to contribute this change, just let me know which direction to go for.

Process struct should carry an fd and use openat

Under Linux, procfs can suffer from TOCTTOU issues caused by pid reuse. This is evident if you take the self_memory example and have it run on an external process. I'll show some output from strace to illustrate the problem:

openat(AT_FDCWD, "/proc/1234/statm", O_RDONLY|O_CLOEXEC) = 3
... snip ...
openat(AT_FDCWD, "/proc/1234/status", O_RDONLY|O_CLOEXEC) = 3

In between calls to openat, our process could be pre-empted, and then pid 1234 could potentially be killed and reused by another process before our process resumes. This would result in the call to Process::status() returning wrong information, when really it should return an error because the original process 1234 it was referencing is dead. To fix the problem, you'd want the syscalls to look more like this:

openat(AT_FDCWD, "/proc/1234", O_RDONLY|O_CLOEXEC) = 3
openat(3, "statm", O_RDONLY|O_CLOEXEC) = 4
openat(3, "status", O_RDONLY|O_CLOEXEC) = 4

The easiest way to do this would be to replace every place where file paths are joined with an intentional call to openat. The net effect of this would be that Process would no longer store root; it would store a handle to the directory in procfs. This should also be done when reading from entries in /proc/[pid]/task/[tid]. Another change that should probably happen along with this is that process::all_processes() should return an Iterator instead of a Vec; that way it will prevent a huge number of file descriptors from being opened at once. This is an API break, so I'm proposing it here before submitting any pull request.
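To make the proposal concrete, here is a sketch of the relative-open pattern using the libc crate (an illustration of the idea, not the crate's implementation):

use std::ffi::CString;
use std::os::unix::io::RawFd;

// Given a directory fd for /proc/<pid>, open a file inside it with openat(2),
// so a pid that gets recycled after the directory fd was taken cannot be
// confused with the original process.
fn open_proc_file(proc_dirfd: RawFd, name: &str) -> std::io::Result<RawFd> {
    let name = CString::new(name)
        .map_err(|_| std::io::Error::new(std::io::ErrorKind::InvalidInput, "interior NUL"))?;
    let fd = unsafe { libc::openat(proc_dirfd, name.as_ptr(), libc::O_RDONLY | libc::O_CLOEXEC) };
    if fd < 0 {
        Err(std::io::Error::last_os_error())
    } else {
        Ok(fd)
    }
}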

Make it possible to get cpuinfo from another file.

I think it would be a good idea if cpuinfo could be collected from a source other than the hardcoded /proc/cpuinfo.

For example, if I want to parse a file I got from another system, I could use this to process the information. It would also help with integration testing, because I would be able to set a cpuinfo other than the one on the currently running system for testing.

I made a PR for this already:

#116

Thanks in advance for considering this.

Returning ticks in CpuTime

Hello,
The time returned in procfs::CpuTime should be a number of ticks as u64, as it is for processes in procfs::process::Stat. It's better to be able to get the values as returned by the kernel.

Originally posted by @lparcq in #69 (comment)
