hs-yet-another-logger's Introduction

A logging framework written with flexibility and performance in mind.

Quick Start

import System.Logger

main :: IO ()
main = withConsoleLogger Info $ do
    logg Info "moin"
    withLabel ("function", "f") f
    logg Warn "tschüss"
  where
    f = withLevel Debug $ do
        logg Debug "debug f"

Overview

The logging system consists of four main parts:

  1. The logging front-end comprises the types and functions used to produce log messages in application code. This includes the LogLevel type, the LogPolicy type, the LogLabel and LogScope types, the LogFunction type, and the MonadLog type class.

  2. The abstract LoggerCtx is the context through which the LogFunction delivers log messages to the logger back-end.

  3. The formatter is a function for serializing log messages.

  4. The logger back-end is a callback that the Logger invokes on each log message. The back-end applies the formatting function and delivers the log message to some sink.

The framework allows you to combine these components in a modular way. The front-end types, the Logger, and the back-end callback are represented by types or type classes. The formatter exists only as a concept in the implementation of back-ends. These types and concepts together form the abstract logger interface that is defined in the module System.Logger.Types.

The package also provides a concrete Logger that implements these components in the modules System.Logger.Logger and System.Logger.Backend.Handle.

Logger Implementation

Writing a log message in a service application should introduce only minimal latency overhead in the thread where the log message is written. Processing should be done asynchronously as much as possible. This framework addresses this by doing all serialization and IO in an asynchronous logger back-end callback.

When a log message is produced it is associated with a logger context. The logger context includes

  • a log-level threshold,
  • a scope, which is a list of key-value labels which are used to tag log messages with additional information, and
  • a policy that specifies how to deal with a situation where the log message pipeline is congested.

A log message can be any Haskell type with Show, Typeable, and NFData instances. Ideally the logged value is computed anyway in the program, so that constructing and forcing it does not introduce any additional overhead.
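As a hedged sketch (the ReqLog type and its fields are made up for illustration, not part of the package), all three required instances can simply be derived for an application-defined message type:

```haskell
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}

import Control.DeepSeq (NFData)
import Data.Typeable (Typeable)
import GHC.Generics (Generic)

-- Hypothetical application-level message type; deriving Show, Typeable,
-- and NFData (via Generic) means logging it adds no boilerplate.
data ReqLog = ReqLog
    { reqPath   :: String
    , reqStatus :: Int
    } deriving (Show, Typeable, Generic, NFData)
```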

When a log message is produced it is tagged with a time stamp. This introduces overhead, and there is room for optimization here. A log message also has a log-level. If the log-level threshold that is in effect at the time a message is written isn't met, no message is produced.

The logger has an internal log message queue. Further benchmarking should be done to choose the queue implementation that is best suited for this purpose.

The logger asynchronously reads log messages from the queue and calls the back-end callback for each message. Right now the code includes only a single back-end, namely for writing to a handle, but we are going to add more back-ends soon. Due to the modular design, it is possible to combine different back-ends into a single back-end so that messages are processed by more than a single back-end and delivered to more than a single sink.
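A minimal sketch of that reader loop, assuming the queue is a plain Chan and the back-end is a bare callback (both are simplifications for illustration, not the package's actual API):

```haskell
import Control.Concurrent (Chan, ThreadId, forkIO, readChan)
import Control.Monad (forever)

-- Hypothetical sketch: a dedicated thread drains the queue and hands each
-- message to the back-end callback, keeping IO off the producers' threads.
runQueueReader :: Chan msg -> (msg -> IO ()) -> IO ThreadId
runQueueReader queue backend = forkIO . forever $ readChan queue >>= backend
```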

A back-end includes a formatting function. This is where, beside IO, most processing happens.

Delaying the serialization to the very end of the processing pipeline has the following advantages:

  1. serialization is done asynchronously,
  2. serialization is done only for messages that are actually delivered and it is done only for those parts of the message that are relevant for the respective back-end, and
  3. it is easy to deploy different serialization methods.

For instance, when logging to the console, one usually wants a line-wise UNIX-tool friendly format. For a cloud service one may chose an efficient binary serialization with a back-end that stores messages in a remote database. There may be circumstances where the data of all or some messages is just aggregated for statistical analysis before the messages are discarded. The modular design, which decouples generation and serialization of log messages, allows one to accommodate these different scenarios by just using different back-ends, possibly parameterized by the formatting function.
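Treating a back-end abstractly as a callback (the Backend alias below is a stand-in for the package's LoggerBackend type, simplified for illustration), combining several back-ends into one is just sequencing:

```haskell
-- Simplified stand-in for the package's back-end type: a plain callback.
type Backend msg = msg -> IO ()

-- Deliver every message to each of the given back-ends in turn.
combineBackends :: [Backend msg] -> Backend msg
combineBackends backends msg = mapM_ ($ msg) backends
```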

hs-yet-another-logger's Issues

Generalize type of logMsg

Currently the definition of logMsg is overly restrictive:

logMsg :: Lens' (LogMessage a) a
logMsg = lens _logMsg $ \a b -> a { _logMsg = b }

It should be relaxed to allow changing the type of a:

logMsg :: Lens (LogMessage a) (LogMessage b) a b
logMsg = lens _logMsg $ \a b -> a { _logMsg = b }

Consider removal of inner `ReaderT`

The ReaderT embedded within LoggerCtxT makes it impossible to use that transformer in a monad stack that already contains a ReaderT. If it were removed, LoggerCtxT could be freely embedded.
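The clash is easiest to see with mtl (the App stack below is illustrative, not taken from the package): MonadReader's functional dependency admits only one readable environment per stack, so the ReaderT hidden inside LoggerCtxT would compete with the application's own.

```haskell
import Control.Monad.Reader

-- Two stacked ReaderT layers: `ask` resolves only to the outer String
-- environment; mtl provides no pass-through instance for a second ReaderT,
-- so a `MonadReader Int App` instance does not exist.
type App = ReaderT String (ReaderT Int IO)

appEnv :: App String
appEnv = ask

runApp :: App a -> String -> Int -> IO a
runApp m s n = runReaderT (runReaderT m s) n
```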

implement clean shutdown

On shutdown the queue should be closed, and there should be an option to flush all pending messages.
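A minimal sketch of such a shutdown protocol, using a Nothing sentinel on a plain Chan (illustrative only; the package's queue type may differ):

```haskell
import Control.Concurrent (Chan, readChan, writeChan)

-- Closing the queue is modeled as writing a `Nothing` sentinel.
closeQueue :: Chan (Maybe msg) -> IO ()
closeQueue queue = writeChan queue Nothing

-- The reader flushes every message still pending before it exits.
drainQueue :: Chan (Maybe msg) -> (msg -> IO ()) -> IO ()
drainQueue queue deliver = do
    next <- readChan queue
    case next of
        Nothing  -> return ()                           -- queue closed
        Just msg -> deliver msg >> drainQueue queue deliver
```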

Version of withHandleBackend that is generic in the log message type

withHandleBackend has the following type.

withHandleBackend
    :: (MonadIO m, MonadBaseControl IO m)
    => HandleBackendConfig
    -> (LoggerBackend T.Text -> m α)
    -> m α

It would be nice to provide a more general version with type:

withHandleBackend_
    :: (MonadIO m, MonadBaseControl IO m)
    => HandleBackendConfig
    -> (msg -> T.Text)
    -> (LoggerBackend msg -> m α)
    -> m α

Support log file rotation

  • Passively support log file rotation by an external tool.

    This probably only works on POSIX systems. A strategy is to regularly close and reopen the handle.

  • Support active log file rotation.
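For the passive variant, the close-and-reopen step could look roughly like this (reopenHandle is a hypothetical helper, not part of the package):

```haskell
import Control.Concurrent.MVar (MVar, modifyMVar_)
import System.IO (Handle, IOMode (..), hClose, openFile)

-- Close and reopen the log file so that an external rotation tool that has
-- renamed the old file gets a fresh one; call this periodically.
reopenHandle :: FilePath -> MVar Handle -> IO ()
reopenHandle path var = modifyMVar_ var $ \h -> do
    hClose h
    openFile path AppendMode
```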

yet-another-logger-0.2.3.1 test suite failure

Citing from http://hydra.cryp.to/build/903657/log/raw. The test suite reports that all tests have succeeded, and then fails:

Running 1 test suites...
Test suite tests: RUNNING...
trivial backend
  no backend
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 1}:                OK (105.24s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 1}:             OK (106.71s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 1}:         OK (106.05s)
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 1}:                OK (105.30s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 1}:             OK (107.02s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 1}:         OK (106.52s)
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 1, backendDelay = 5, exitDelay = Just 1}:                 OK (105.17s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 1, backendDelay = 5, exitDelay = Just 1}:              OK (107.61s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 1, backendDelay = 5, exitDelay = Just 1}:          OK (113.44s)
  no backend
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 100}:              OK (111.89s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 100}:           OK (113.27s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 100}:       OK (113.46s)
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 100}:              OK (111.49s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 100}:           OK (112.12s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 100}:       OK (112.75s)
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 1, backendDelay = 5, exitDelay = Just 100}:               OK (110.50s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 1, backendDelay = 5, exitDelay = Just 100}:            OK (121.49s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 1, backendDelay = 5, exitDelay = Just 100}:        OK (120.81s)
  no backend
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 100000000}:        OK (121.95s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 100000000}:     OK (202.85s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 100000000}: OK (204.47s)
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 100000000}:        OK (121.73s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 100000000}:     OK (201.75s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 100000000}: OK (202.86s)
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 1, backendDelay = 5, exitDelay = Just 100000000}:         OK (73.73s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 1, backendDelay = 5, exitDelay = Just 100000000}:      OK (82.59s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 1, backendDelay = 5, exitDelay = Just 100000000}:  OK (83.08s)
  buggy backend 11 10
    TestParams {queueSize = 10, threadsN = 10, messageN = 100, messageSize = 1000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 10}:                OK (0.97s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 10}:            OK (0.94s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 10000, frontendDelay = 25, backendDelay = 1, exitDelay = Just 10}:        OK (0.68s)
  buggy recover backend 2 10
    TestParams {queueSize = 10, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 100, backendDelay = 1, exitDelay = Just 10}:              OK (70.09s)
    TestParams {queueSize = 1000, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 10}:             OK (8.21s)
    TestParams {queueSize = 10000000, threadsN = 100, messageN = 100, messageSize = 1000, frontendDelay = 10, backendDelay = 1, exitDelay = Just 10}:         OK (7.64s)
  buggy recover backend 8 10
    TestParams {queueSize = 1000, threadsN = 10, messageN = 100, messageSize = 100, frontendDelay = 100, backendDelay = 1, exitDelay = Just 10}:              OK (0.57s)
  buggy no recover backend 10 15
    TestParams {queueSize = 1000, threadsN = 10, messageN = 100, messageSize = 100, frontendDelay = 100, backendDelay = 1, exitDelay = Just 1000}:            OK (0.14s)

All 35 tests passed (423.41s)
Test suite tests: FAIL

use a more efficient queue implementation

Standard functional queue implementations (such as TQueue and its derivatives) with amortized constant read complexity are problematic in conjunction with STM. These queues are based on two lists, a read list and a write list. When the read list becomes empty, the reader swaps the two lists and reverses the new read list, which is the former write list. Even though the cost of this operation is amortized over all read operations, the worst-case complexity of an individual read varies between O(1) and O(n) in the queue size, while write operations are always O(1).
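The two-list structure and its uneven costs can be sketched as a pure queue (a simplification of what TQueue does inside STM):

```haskell
-- A two-list queue: writes prepend to the write list in O(1); a read that
-- finds the read list empty must reverse the write list, which is O(n).
data Queue a = Queue [a] [a]

emptyQueue :: Queue a
emptyQueue = Queue [] []

push :: a -> Queue a -> Queue a
push x (Queue r w) = Queue r (x : w)            -- always O(1)

pop :: Queue a -> Maybe (a, Queue a)
pop (Queue (x : r) w) = Just (x, Queue r w)     -- O(1) fast path
pop (Queue [] w) = case reverse w of            -- O(n) swap and reverse
    []    -> Nothing
    x : r -> Just (x, Queue r [])
```

Inside an STM transaction, that O(n) reversal is exactly the work that is thrown away and retried whenever a writer commits first.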

The concurrency model of STM doesn't guarantee fair scheduling. Instead, concurrent threads race for shared resources; the first to commit a transaction wins, while the other threads roll back their transactions and retry. For the queues considered here this means that if writes are frequent and the write list grows beyond a certain critical size, the reader may never succeed in committing a read operation that involves a swap (and thus a reversal) of the list. Instead the reader retries either forever or, in the case of a TBQueue, until the queue is full, at which point the writer retries.

Even in cases where the reader manages to recover, a single read operation can still take several orders of magnitude longer to finish than normal, or than a write operation, with much increased CPU load due to continued retries of the expensive list reversal.

In tests we observed read operations that took over 20s at 80% CPU load, compared to under 1ms and under 5% CPU load for "normal" reads. We were also able to trigger scenarios where the read never returned while CPU load stayed above 80%.
