async's Introduction

slog-async - Asynchronous drain for slog-rs

For more information, help, or to report issues, see slog-rs.

Note: Unlike other logging solutions, slog-rs does not have a hardcoded async logging implementation. This crate is just a reasonable reference implementation. It should have good performance and work well in most use cases. See the documentation and implementation for more details.

It's relatively easy to implement custom slog-rs async logging. Feel free to use this one as a starting point.
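
For a sense of what that involves, here is a minimal sketch of a hand-rolled async drain built only on std::sync::mpsc. It is illustrative rather than production-ready: the ChannelDrain name, the channel size, and the println! stand-in for real I/O are assumptions, and for simplicity it forwards only the formatted message, dropping the structured key-value pairs.

use slog::{Drain, OwnedKVList, Record};
use std::sync::mpsc;
use std::thread;

struct ChannelDrain {
    tx: mpsc::SyncSender<String>,
}

impl ChannelDrain {
    fn new() -> ChannelDrain {
        let (tx, rx) = mpsc::sync_channel::<String>(1024);
        // The worker thread owns the receiver and performs the actual I/O.
        thread::spawn(move || {
            for line in rx {
                println!("{line}");
            }
        });
        ChannelDrain { tx }
    }
}

impl Drain for ChannelDrain {
    type Ok = ();
    type Err = slog::Never;

    fn log(&self, record: &Record, _kv: &OwnedKVList) -> Result<(), Self::Err> {
        // Format on the caller's thread; silently drop the record if the
        // channel is full or disconnected.
        let _ = self.tx.try_send(format!("{}", record.msg()));
        Ok(())
    }
}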

async's Issues

Performance regression between 2.3.0 and 2.5.0 - crossbeam-channel?

I've observed a performance regression in an application after upgrading slog-async from 2.3.0 to 2.5.0. I suspect it's due to the switch from the std mpsc channel to crossbeam-channel.

The application is performing ~500 operations per second, each usually taking less than 0.5ms (99th percentile). After upgrading to slog-async 2.5.0 the average duration didn't change, but there are clearly more outliers. The 99th percentile is now 2-3ms.

I've reproduced the issue on 4 cloud VMs, 2x slog-async 2.3.0 and 2x slog-async 2.5.0, otherwise identical.

I have two theories:

  • A: Adding an item to the crossbeam-channel sender is sometimes (rarely) slow
  • B: Using crossbeam-channel saturates CPU resources more heavily than mpsc for some reason (maybe also on the receiver side?)

Notes about the application setup:

  • Logging approx. 30 messages per second, some messages with rather large payloads.
  • Overflow strategy "DropAndReport"
  • Channel size 100'000
  • Rust 1.46.0

Do you have any thoughts on what could be causing this?
I'm mainly opening this ticket in case somebody else is experiencing similar issues. Not expecting any action, feel free to close the ticket again :)

question: How to flush an async logger?

How can I manually flush the logger created by log = create_logger_cli(false)? (I have a password prompt on stdout, and the whole log should be flushed first so that the prompt ends up in the right place.)
Basically, I'd like a log.flush() that blocks until all records have been processed by the worker thread. Is there a better method?

pub fn create_logger_cli(timestamp: bool) -> Arc<Logger> {
    let decorator = slog_term::TermDecorator::new().build();

    let mut d1 = slog_term::FullFormat::new(decorator);

    if !timestamp {
        // `no_out` is a caller-provided timestamp function that prints nothing.
        d1 = d1.use_custom_timestamp(no_out);
    }

    let drain = d1.build().fuse();

    let drain = slog_async::Async::new(drain)
        .chan_size(5_000_000)
        .build()
        .fuse();

    Arc::new(slog::Logger::root(drain, o!()))
}
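
For what it's worth, the closest thing the crate offers is build_with_guard(): dropping the returned AsyncGuard blocks until the worker thread has drained the queue, so it acts as a shutdown-time flush rather than a repeatable log.flush(). A hedged sketch (the prompt placement is the assumption from the question above):

use slog::Drain;

fn main() {
    let decorator = slog_term::TermDecorator::new().build();
    let drain = slog_term::FullFormat::new(decorator).build().fuse();
    let (drain, guard) = slog_async::Async::new(drain).build_with_guard();
    let log = slog::Logger::root(drain.fuse(), slog::o!());

    slog::info!(log, "queued asynchronously");

    // Dropping the guard blocks until all pending records have been written,
    // so the password prompt below starts after the log output.
    drop(guard);

    // ...print the password prompt here...
}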

A future-based drain for the asynchronous future

With futures and async/await now part of the Rust standard library proper, perhaps slog-async should grow support for a Future to act as the other end of a Drain? This would allow one to spawn all of the logging onto an executor, which will inevitably exist in any Rust project of 2020.

This is what code would end up looking like:

let (drain, future) = slog_async::Something::new();
runtime.spawn(future); // don’t have to think about this again, and runs on the same runtime as everything else.
let logger = slog::Logger::root(drain, ...);

Implementation-wise, the Drain would likely become the sender end of some async-friendly channel implementation, while the future would be implemented along these lines:

async {
    loop { 
        let message = async_channel.pop().await;
        some_fd.poll_write(..., message).await;
    }
}

Sadly, this likely cannot be done in a composable manner with other already existing drains, as those are synchronous.
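
A rough sketch of the proposed shape, assuming tokio for the channel and executor; FutureDrain and future_drain are hypothetical names, not an existing API, and only the formatted message is forwarded:

use slog::{Drain, OwnedKVList, Record};
use tokio::sync::mpsc;

// Hypothetical: the sender half of an async channel acting as the Drain.
struct FutureDrain {
    tx: mpsc::UnboundedSender<String>,
}

impl Drain for FutureDrain {
    type Ok = ();
    type Err = slog::Never;

    fn log(&self, record: &Record, _kv: &OwnedKVList) -> Result<(), Self::Err> {
        // Format on the caller's side; the actual I/O happens in the future.
        let _ = self.tx.send(format!("{}", record.msg()));
        Ok(())
    }
}

// Hypothetical constructor returning the drain plus the future to spawn,
// matching the `let (drain, future) = ...` usage sketched above.
fn future_drain() -> (FutureDrain, impl std::future::Future<Output = ()>) {
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();
    let future = async move {
        while let Some(line) = rx.recv().await {
            // Replace with real async I/O, e.g. an AsyncWrite sink.
            println!("{line}");
        }
    };
    (FutureDrain { tx }, future)
}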

Replace crossbeam with std::sync::mpsc

We're exploring options in the logging space. I was curious how amenable you folks would be to using std::sync::mpsc::sync_channel in place of crossbeam?

slog-async: even with increased chan_size, "logger dropped messages due to channel overflow"

Hi,

I'm facing this "logger dropped messages due to channel overflow" message when stress-testing a server.

I've looked at how many records were dropped and increased the chan_size accordingly, but I still see these drops, the biggest being a hundred or so. I kept increasing the size further and further, but each time some records are still dropped :(

What am I doing wrong?
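
For reference, a sketch of the builder knobs involved, assuming slog-async 2.x; the chan_size value is illustrative, and OverflowStrategy::Block avoids drops entirely at the cost of back-pressure on the logging threads, which may or may not be acceptable under stress:

use slog::Drain;

fn async_drain<D>(inner: D) -> impl Drain<Ok = (), Err = slog::Never>
where
    D: Drain<Ok = (), Err = slog::Never> + Send + 'static,
{
    slog_async::Async::new(inner)
        .chan_size(100_000)
        // Block the logging call instead of dropping records when the channel is full.
        .overflow_strategy(slog_async::OverflowStrategy::Block)
        .build()
        .fuse()
}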

Release new minor version

Hello, any idea when a new minor version could be released? The recent commits include version bumps for dependencies such as thread_local. I need to reduce binary size on a project I work on that depends on slog-async, and a new release would work perfectly for us.

Dropping AsyncCore in the logger thread will cause a panic

AsyncCore::drop (and AsyncGuard::drop) needs to join the logger thread. When join is called inside the logger thread itself, it will unconditionally panic.

The Drain::log() function is executed in the logger thread. Therefore, if we drop the AsyncCore inside log(), or if log() panics and we have a custom panic hook and we drop the AsyncCore in the hook, it will cause the EDEADLK crash.

Demo:

use slog::{info, o, Discard, Drain, Logger};
use slog_async::Async;
use std::panic::{set_hook, take_hook};
use std::sync::Mutex;
use std::time::Duration;

struct ErrDrain;

impl Drain for ErrDrain {
    type Ok = ();
    type Err = ();

    fn log(&self, _: &slog::Record, _: &slog::OwnedKVList) -> Result<(), ()> {
        Err(())
    }
}

lazy_static::lazy_static! {
    static ref GLOBAL_LOGGER: Mutex<Logger> = {
        let drain = Async::new(ErrDrain.fuse()).thread_name("slogger".to_string()).build().fuse();
        Mutex::new(Logger::root(drain, o!()))
    };
}

fn main() {
    let hook = take_hook();
    set_hook(Box::new(move |pi| {
        hook(pi);
        *GLOBAL_LOGGER.lock().unwrap() = Logger::root(Discard, o!());  // <--- this
    }));

    info!(GLOBAL_LOGGER.lock().unwrap(), "should panic");
    std::thread::sleep(Duration::from_secs(1));
}

The marked line will panic on Linux and macOS.

Solution for using in DLL

I had a problem:
slog_async was freezing execution of my Windows DLL because it got stuck waiting for the worker thread to finish with join() (even though the thread exits successfully). This was happening while I was waiting for the AsyncGuard to drop.

My solution was to add

                    // Poll until the worker thread reports it has finished,
                    // instead of blocking in join().
                    while !join.is_finished() {
                        std::thread::sleep(std::time::Duration::from_millis(50));
                    }

above this line in async/lib.rs, line 393 at commit c2040ac:

join.join().map_err(|_| {

Not a beautiful solution, so I decided not to make a pull request, but it's good enough for me. Maybe it will help someone.

I'm using build_with_guard(), and before the DLL is unloaded I manually drop the async_guard. With this fix, drop(async_guard) finishes successfully and doesn't get stuck.

Update:
It's still unstable (sometimes it still freezes). I had to remove

                    join.join().map_err(|err| {
                        println!("prev err: {:#?}", err);
                        io::Error::new(
                            io::ErrorKind::BrokenPipe,
                            "Logging thread worker join error",
                        )
                    })?;

at async/lib.rs, line 393 at commit c2040ac:

join.join().map_err(|_| {

completely to make it work (basically replacing it with the above-mentioned while loop).

panic with message '`ToSendSerializer` can't fail: Other'

Reproducer:

[dependencies]
erased-serde = "0.3"
serde = "1.0"
slog = { version = "2.7", features = ["nested-values"] }
slog-async = "2.8"
slog-term = "2.9"

use serde::Serialize;
use slog::{info, o, Drain, SerdeValue, Value};

struct MyType;

impl Serialize for MyType {
    fn serialize<S>(&self, _serializer: S) -> Result<S::Ok, S::Error>
    where
        S: serde::Serializer,
    {
        todo!()
    }
}

impl Value for MyType {
    fn serialize(
        &self,
        _record: &slog::Record,
        key: slog::Key,
        serializer: &mut dyn slog::Serializer,
    ) -> slog::Result {
        serializer.emit_serde(key, self)
    }
}

impl SerdeValue for MyType {
    fn as_serde(&self) -> &dyn erased_serde::Serialize {
        todo!()
    }

    fn to_sendable(&self) -> Box<dyn SerdeValue + Send + 'static> {
        todo!()
    }
}

fn main() {
    let decorator = slog_term::TermDecorator::new().build();
    let drain = slog_term::CompactFormat::new(decorator).build().fuse();
    let drain = slog_async::Async::new(drain).build().fuse();

    let log = slog::Logger::root(drain, o!());

    info!(log, "boom"; "value" => MyType);
}

I would expect this to panic at one of the three todo!()s, but it panics inside slog-async:

thread 'main' panicked at '`ToSendSerializer` can't fail: Other', /home/john/.cargo/registry/src/index.crates.io-6f17d22bba15001f/slog-async-2.8.0/lib.rs:496:14
stack backtrace:
   0: rust_begin_unwind
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:593:5
   1: core::panicking::panic_fmt
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panicking.rs:67:14
   2: core::result::unwrap_failed
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/result.rs:1651:5
   3: core::result::Result<T,E>::expect
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/result.rs:1033:23
   4: slog_async::AsyncRecord::from
             at /home/john/.cargo/registry/src/index.crates.io-6f17d22bba15001f/slog-async-2.8.0/lib.rs:493:9
   5: <slog_async::AsyncCore as slog::Drain>::log
             at /home/john/.cargo/registry/src/index.crates.io-6f17d22bba15001f/slog-async-2.8.0/lib.rs:475:19
   6: <slog_async::Async as slog::Drain>::log
             at /home/john/.cargo/registry/src/index.crates.io-6f17d22bba15001f/slog-async-2.8.0/lib.rs:788:15
   7: <slog::Fuse<D> as slog::Drain>::log
             at /home/john/.cargo/registry/src/index.crates.io-6f17d22bba15001f/slog-2.7.0/src/lib.rs:1942:17
   8: <alloc::sync::Arc<D> as slog::Drain>::log
             at /home/john/.cargo/registry/src/index.crates.io-6f17d22bba15001f/slog-2.7.0/src/lib.rs:1719:9
   9: slog::Logger<D>::log
             at /home/john/.cargo/registry/src/index.crates.io-6f17d22bba15001f/slog-2.7.0/src/lib.rs:1233:17
  10: slog_async_panic_cantfail::main
             at ./src/main.rs:43:5
  11: core::ops::function::FnOnce::call_once
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:250:5

If I understand correctly, the root cause is that MyType implements SerdeValue but does not implement serialize_fallback(). The default implementation returns an error, and the logger I've set up (at least with the list of Cargo features specified above) does not support nested values.

However, I don't think slog-async should panic here, and certainly not with a can't fail message - that's quite confusing. :) I'm not entirely sure of the call stack through all the dynamic trait objects (and the backtrace above doesn't seem to help much), but I think the serialize call being unwrapped is failing before it even gets to the infallible emit_* methods on ToSendSerializer?
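
If that diagnosis is right, one workaround on the user side is to give the type a serialize_fallback(), so serializers without nested-value support receive a plain value instead of the default Err. A hedged sketch replacing the SerdeValue impl from the reproducer (the "MyType" fallback string is arbitrary):

impl SerdeValue for MyType {
    // Called by serializers that cannot handle nested serde values; emit a
    // plain string instead of the default error.
    fn serialize_fallback(
        &self,
        key: slog::Key,
        serializer: &mut dyn slog::Serializer,
    ) -> slog::Result {
        serializer.emit_str(key, "MyType")
    }

    fn as_serde(&self) -> &dyn erased_serde::Serialize {
        self
    }

    fn to_sendable(&self) -> Box<dyn SerdeValue + Send + 'static> {
        Box::new(MyType)
    }
}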

missing tag for v2.5.0

I see tags for v2.4.0 and v2.6.0, but no tag for v2.5.0.

I suspect it should be placed on 337b9e5 but it would be good for the authors of the project to place the tag explicitly.
