
lgr's Introduction

lgr

CRAN status Lifecycle: maturing

lgr is a logging package for R built on the back of R6 classes. It is designed to be flexible, performant and extensible. The package vignette contains a comprehensive description of the features of lgr (some of them unique among R logging packages) along with many code examples.

Users that have not worked with R6 classes before will find configuring Loggers a bit strange and verbose, but care was taken to keep the syntax for common logging tasks and interactive usage simple and concise. Users with experience with shiny, plumber, Python logging, or Apache Log4j will feel at home. Users that are proficient with R6 classes will also find it easy to extend and customize lgr, for example with their own Loggers or Appenders.

Features

  • Hierarchical loggers like in log4j and python logging. This is useful if you want to be able to configure logging on a per-package basis.
  • An arbitrary number of Appenders for each Logger. A single Logger can write to the console, a logfile, a database, etc.
  • Support for structured logging. As opposed to many other logging packages for R, a log event is not just a message with a timestamp, but an object that can contain arbitrary data fields. This is useful for producing machine-readable logs.
  • Vectorized logging (so lgr$fatal(capture.output(iris)) works)
  • Lightning fast in-memory logs for interactive use.
  • Appenders that write logs to a wide range of destinations:
    • databases (buffered or directly)
    • email or pushbullet
    • plaintext files (with a powerful formatting syntax)
    • JSON files with arbitrary data fields
    • Rotating files that are reset and backed up after they reach a certain file size or age
    • memory buffers
    • (colored) console output
  • Optional support to use glue instead of sprintf() for composing log messages.

Usage

To log an event with lgr we call lgr$<logging function>(). Unnamed arguments to the logging function are passed to sprintf(). For a way to create Loggers that use glue() instead, please refer to the vignette.

lgr$fatal("A critical error")
#> FATAL [20:38:17.998] A critical error
lgr$error("A less severe error")
#> ERROR [20:38:18.057] A less severe error
lgr$warn("A potentially bad situation")
#> WARN  [20:38:18.072] A potentially bad situation
lgr$info("iris has %s rows", nrow(iris))
#> INFO  [20:38:18.074] iris has 150 rows

# the following log levels are hidden by default
lgr$debug("A debug message")
lgr$trace("A finer grained debug message")
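The glue-based alternative mentioned above can be sketched as follows (a hedged sketch: get_logger_glue() is lgr's constructor for glue-style Loggers, and "demo" is an illustrative logger name):

```r
# Loggers from get_logger_glue() compose messages with glue() instead of
# sprintf(); expressions in braces are interpolated from the calling scope.
lg <- lgr::get_logger_glue("demo")
n  <- nrow(iris)
lg$info("iris has {n} rows")
```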

A Logger can have several Appenders. For example, we can add a JSON appender to log to a file with little effort.

tf <- tempfile()
lgr$add_appender(AppenderFile$new(tf, layout = LayoutJson$new()))
lgr$info("cars has %s rows", nrow(cars))
#> INFO  [20:38:18.173] cars has 50 rows
cat(readLines(tf))
#> {"level":400,"timestamp":"2023-03-04 20:38:18","logger":"root","caller":"eval","msg":"cars has 50 rows"}

By passing a named argument to info(), warn(), and co. you can log not only text but arbitrary R objects. Not all appenders support structured logging perfectly, but JSON does. This way you can create logfiles that are machine- as well as (somewhat) human-readable.

lgr$info("loading cars", rows = nrow(cars), cols = ncol(cars))
#> INFO  [20:38:18.263] loading cars {rows: `50`, cols: `2`}
cat(readLines(tf), sep = "\n")
#> {"level":400,"timestamp":"2023-03-04 20:38:18","logger":"root","caller":"eval","msg":"cars has 50 rows"}
#> {"level":400,"timestamp":"2023-03-04 20:38:18","logger":"root","caller":"eval","msg":"loading cars","rows":50,"cols":2}

For more examples please see the package vignette and the documentation.

See lgr in action

lgr is used to govern console output in my shiny-based CSV editor shed:

# install.packages("remotes")
remotes::install_github("s-fleck/shed")
library(shed)

# log only output from the "shed" logger to a file
logfile <- tempfile()
lgr::get_logger("shed")$add_appender(AppenderFile$new(logfile))
lgr::threshold("all")

# edit away and watch the rstudio console!
lgr$info("starting shed")
shed(iris)  
lgr$info("this will not end up in the log file")

readLines(logfile)

# cleanup
file.remove(logfile)

Development status

lgr in general is stable and safe for use, but the following features are still experimental:

  • Database appenders which are available from the separate package lgrExtra.
  • yaml/json config files for loggers (do not yet support all planned features)
  • The documentation in general. I’m still hoping for more R6-specific features in roxygen2 before I invest more time in object documentation.

Dependencies

R6: The R6 class system provides the framework on which lgr is built and is the only package lgr will ever depend on. If you are a package developer and want to add logging to your package, this is the only transitive dependency you have to worry about; configuring the loggers should be left to the user of your package.
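A minimal sketch of the package-developer pattern (hedged: this mirrors the approach described in lgr's vignette; "mypackage" and the variable name lg are illustrative):

```r
# In a package's R/zzz.R: fetch (or create) a logger named after the package
# at load time and cache it in the package namespace. Users can later
# reconfigure it via lgr::get_logger("mypackage").
.onLoad <- function(libname, pkgname) {
  assign("lg", lgr::get_logger(pkgname), envir = parent.env(environment()))
}
```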

Optional dependencies

lgr comes with a long list of optional dependencies that make a wide range of appenders possible. You only need the dependencies for the Appenders you actually want to use. Care was taken to choose packages that are slim, stable, have minimal dependencies, and are well maintained:

Extra appenders (in the main package):

  • jsonlite for JSON logging via LayoutJson. JSON is a popular plaintext based file format that is easy to read for humans and machines alike.

  • rotor for log rotation via AppenderFileRotating and co.

  • data.table for fast in-memory logging with AppenderDt, and also by all database / DBI Appenders.

  • glue for a more flexible formatting syntax via LoggerGlue and LayoutGlue.

Extra appenders via lgrExtra:

  • DBI for logging to databases. In theory all DBI-compliant database packages should work. If you are using lgr with a database backend, please report your (positive and negative) experiences, as database support is still somewhat experimental.

  • gmailr or

  • sendmailR for email notifications.

  • RPushbullet for push notifications.

  • Rsyslog for logging to syslog on POSIX-compatible systems.

  • elastic for logging to ElasticSearch

Other extra features:

  • yaml for configuring loggers via YAML files
  • crayon for colored console output.
  • whoami for guessing the user name from various sources. You can also set the user name manually if you want to use it for logging.
  • desc for the package development convenience function use_logger()
  • cli for printing the tree structure of registered loggers with logger_tree()

Other Suggests (future, future.apply) do not provide extra functionality but had to be included for some of the automated unit tests run by lgr.

Installation

You can install lgr from CRAN:

install.packages("lgr")

Or you can install the current development version directly from GitHub:

#install.packages("remotes")
remotes::install_github("s-fleck/lgr")

Outlook

The long term goal is to support (nearly) all features of the python logging module. If you have experience with python logging or Log4j and are missing features/appenders that you’d like to see, please feel free to post a feature request on the issue tracker.

Acknowledgement

lgr's People

Contributors

atheriel, gadenbuie, mllg, mmuurr, s-fleck


lgr's Issues

The %k and %K formats don't work

In format.LogEvent there are these lines

"%k" = colorize_levels(substr(lvls, 1, 1), colors),
"%K" = colorize_levels(substr(toupper(lvls), 1, 1), colors),

where you send one character to colorize_levels(). However, this function checks that the input is either a number or one of the known level strings, such as info or error. If we only send i there, it errors out with:

'trimws(tolower(x))' must either the numeric or character representation of one of the following log levels: fatal (100), error (200), warn (300), info (400), debug (500), trace (600)

Reproducible example:

foo <- lgr::get_logger("foo")
layout <- lgr::LayoutFormat$new(fmt = "%K %m", colors = getOption("lgr.colors", list()))
appenders <- list(console = lgr::AppenderConsole$new(threshold = "all", layout = layout))
foo$set_appenders(appenders)
foo$info("hello")


INFO  [21:29:14.203] hello 
Warning message:
[2021-10-12 21:29:14.204] foo <AppenderConsole> ~ error in `foo$info("hello")`: 'trimws(tolower(x))' must either the numeric or character representation of one of the following log levels: fatal (100), error (200), warn (300), info (400), debug (500), trace (600) 
> 

I guess it is important that the LayoutFormat is initialized with the colors argument.

Print the name of the logger in the log

I'm quite fond of using "complicated" logger hierarchies for tracing complicated algorithms, where I can enable/disable parts of the code selectively (by rising/lowering the log levels of various sub-loggers).

However sometimes I get lost in where the messages are coming from. It would be nice if I could log the logger name in the output (mostly using the format layout). I'm used to this from log4j and miss it a little.

Since I ship all my logs to elastic it would also enable me to filter only logs related to a specific part of the application in a very simple way (granted, adding some meta-data directly as asked in #40 would maybe serve this purpose better). But as a low-effort solution it would be sufficient.
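For what it's worth, LayoutFormat already exposes the logger name via the %g placeholder (a sketch; the logger name and the sample output line are illustrative, not verbatim):

```r
# Console appender whose format string includes the logger name (%g)
lg <- lgr::get_logger("app/backend")
lg$set_appenders(list(console = lgr::AppenderConsole$new(
  layout = lgr::LayoutFormat$new(fmt = "%L [%t] [%g] %m")
)))
lg$set_propagate(FALSE)  # avoid duplicate output through the root logger
lg$info("hello")
# e.g. INFO  [21:29:14.203] [app/backend] hello
```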

How can I print level character string instead of integer in file?

I am using a rotating appender like so:

lg <- get_logger("test")$set_propagate(FALSE)$set_appenders(list(
  rotating = AppenderFileRotatingTime$new(
    file = "./logs/gen_evtrip.log",
    age = "1 day",
    layout = LayoutJson$new()
  )
))

When I use the following to print log to a file:
lg$log(level = "info", "This is an important message about %s going wrong", "->something<-", "IP" = get_ip())

I get:

{"level":400,"timestamp":"2019-11-23 18:10:36","logger":"(unnamed logger)","caller":"(shell)","msg":"This is an important message about ->something<- going wrong","IP":"71.212.154.194"}

I want to print the string "info" instead of 400. Also, somehow (due to some experiments with config probably), the "logger" is now "(unnamed logger)" when it was "test" before.
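As a post-processing workaround (a sketch; lgr exports label_levels() for mapping numeric levels back to their names, though this does not change what the appender writes):

```r
# Convert the integer levels from a parsed JSON log back to their labels,
# e.g. 400 -> "info", 300 -> "warn"
lvls <- lgr::label_levels(c(400, 300))
```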

LoggerTree

Implement functions to visualise logger tree. Might just become its own package though

logger_tree merges different loggers with same "basename"

Let me illustrate with an example:

> lgr::get_logger("app/backend/api")
> lgr::get_logger("app/frontend/api")
> x <- lgr::logger_tree()
> x
root [info] -> 1 appender                                           
└─app                                               
  ├─backend                                         
  │ └─api                                           
  └─frontend                                        
    └─api                                           
> x %>% as_tibble
   parent      children   configured threshold threshold_inherited propagate n_appenders
   <chr>       <I<list>>  <lgl>          <int> <lgl>               <lgl>           <int>
 1 root        <chr [2]>  TRUE             400 FALSE               TRUE                1
 2 app         <chr [2]>  FALSE            400 TRUE                TRUE                0
 3 backend     <chr [1]>  FALSE            400 TRUE                TRUE                0
 4 api         <NULL>     FALSE            400 TRUE                TRUE                0
 5 frontend    <chr [1]>  FALSE            400 TRUE                TRUE                0

Now x only contains one row with an api logger even though there are two of them.

I'm building an interactive Shiny application for debugging of our R software and I want to have a dynamic control there with all the loggers and give the user the ability to change log-levels of various components (the logs are rendered in the shiny app). The logger_tree function looked like what I'd like, but it doesn't return complete data.

So far I've figured I can use ls(envir = lgr:::loggers) but this uses "private" access to the package which I'd rather avoid if possible.

I'd fix the function myself but I'm not sure what the expected solution is. Should we disambiguate similarly named loggers by the "shortest unique prefix" for example? Maybe inverting the relation, storing a child and its (single) parent in another column. This way the key would be the combination of (child, parent). Right now for two leaf loggers with the same name the children column is NULL and there's no way to know which one is which.

Maybe adding another function for introspection into existing loggers if you care about backward compatibility on this one.

Thanks!

Redefine how Loggers are configured

  • How should logger_config objects look exactly?
  • What are the parameters/defaults of lg$config()?
  • Do we need an exported logger_config constructor?

Log Rotation

It would be nice to have log rotation in yog, but maybe that would even be a topic for a separate package?

How can I create a custom log format for JSON files like say one for logstash?

I saw the custom format for logstash, like the one produced by the Node.js logger Winston (https://github.com/winstonjs/logform#logstash).

It would look something like this:

{"@message":"analysis status updated to processing ","@timestamp":"2019-11-23T21:39:41.853Z","@fields":{"level":"info","IP":"128.95.204.113"}}

This isn't, of course, too critical, as logstash can parse the JSON generated by lgr. I wanted to understand how I can completely transform the log format. I am able to add custom fields right now.

Remove AppenderDt from the root logger?

I really like AppenderDt and how easy it is to retrieve the logged objects for further inspections. However, I wonder if it is the best idea to have AppenderDt in the root logger. I am a bit concerned here about the memory consumption, especially for longer R sessions or when dealing with large R objects.

Wouldn't it make more sense to remove it from the root logger, and only activate it for debugging?

suggestion: allow optional `tz` parameter for `timestamp_fmt` in Layout(s)

Sometimes it's helpful to have logs across disparate machines output timestamps with the same TZ offset.

Currently, the best way to do this is:

  • Not do it at all, but set the timestamp_fmt string to something like %FT%T%z, a proper ISO8601 with TZ offset (%z).
  • Change the default TZ of the R session (either from within R or prior to launching the R process from the shell; the latter approach is somewhat error-prone and dependent on how different OSes and Linux distros manage timezones).

But(!), base::format.POSIXct includes a tz parameter; here's the signature:

format.POSIXct <-  function(x, format = "", tz = "", usetz = FALSE, ...)

... and lgr's default Layouts simply delegate to that function.

I wonder if it'd be possible to pass on the tz argument, or perhaps even more generally simply 'pass the dots' and allow timestamp_tz setters to specify a dots (...) arg that is captured, stored, then spliced into the eventual format.POSIXct dispatch.

Only a suggestion :-) ... and thanks much for all the work you've put into lgr, it's quite nice!
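To illustrate the point, base R's format() for POSIXct already accepts tz, so the layouts would only need to forward it (plain base-R sketch, independent of lgr):

```r
# The same instant rendered with two different timezone offsets
ts <- as.POSIXct("2023-03-04 20:38:18", tz = "UTC")
utc    <- format(ts, format = "%FT%T%z", tz = "UTC")            # "2023-03-04T20:38:18+0000"
vienna <- format(ts, format = "%FT%T%z", tz = "Europe/Vienna")  # same instant, +0100 offset
```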

How to add filters, and other logger objects from YAML / JSON config?

Hello, thanks for lgr.

I am trying to add a filter to a logger config file and I can't get it to work. Could you help me?

JSON example:

{
"Logger": {
	"threshold": "all",
	"appenders": {
		"AppenderFile": {
			"threshold": "all",
			"file": "special.log",
			"filters": {"my_filter": "function(event) grepl( pattern = 'special', event$custom, ignore.case = TRUE )"}
     }
   }
 }
}

And I get: Error: `f` is not a function with the single argument `event` or an EventFilter, but `function(event) grepl( patte...` See ?is_filter.

YAML example:

Logger:
  threshold: all
  appenders:
    AppenderFile:
      threshold: all
      file: "special.log"
      filters: "function(event) grepl( pattern = 'special', event$custom, ignore.case = TRUE )"

Result: Error: 'filters' must be a list Filters or a single Filter (see ?is_filter)

Next I tried this YAML:

Logger:
  threshold: all
  appenders:
    AppenderFile:
      threshold: all
      file: "special.log"
      filter: 
        fi: grepl('bird', event$msg)

Result: Error: `f` is not a function with the single argument `event` or an EventFilter, but `grepl('bird', event$msg)`. See ?is_filter.

And a second question: how can I set the name of an Appender in the config file?
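For comparison, the same filter attached programmatically does work, since lgr accepts plain functions of event as filters (a sketch; the YAML half of the question remains open):

```r
# Attach a filter function directly to a Logger: events only pass
# if the predicate returns TRUE.
lg <- lgr::get_logger("special")
lg$add_filter(function(event) {
  # only let events through whose message contains "special"
  grepl("special", event$msg, ignore.case = TRUE)
})
lg$info("a special message")  # passes the filter
lg$info("something boring")   # suppressed
```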

Integration with progress bar packages

Is there any possibility of integration with progress bar packages such as progressr or others? It would be nice to have a global control of all the output and the log levels provide this functionality very clearly.

However I'm not sure how it would work in practice or if it is even feasible.

Dependency on isFALSE

Hi there,

package installation currently seems to depend on R >= 3.5 because isFALSE() is used somewhere, and it was introduced with 3.5. Can you remove the dependency, as discussed e.g. here, too?

Filter not injecting data to text file logs

I am using a logger that logs to the console and a text file, but the filter that I add to the logger is only writing the messages to the console appender and not writing the data to the text file.

Reproducible example:

library(lgr)
packageVersion("lgr")
# [1] ‘0.4.3’

# Get root logger
(lg <- get_logger())
# <LoggerRoot> [info] root
#
# appenders:
#   console: <AppenderConsole> [all] -> console

# Function to generate some logs
panic <- function() {
  lg$info("Writing info message to log")
  lg$warn("A big fat warning for you")
  lg$error("Something bad happened!")
}

# Test
panic()
# INFO  [18:36:53.476] Writing info message to log 
# WARN  [18:36:53.507] A big fat warning for you 
# ERROR [18:36:53.515] Something bad happened! 

# Inject message into logs
lg$add_filter(FilterInject$new(hello = "world", foo = "bar"), name = "inject")
panic()
# INFO  [18:37:23.928] Writing info message to log {foo: `bar`, hello: `world`}
# WARN  [18:37:23.932] A big fat warning for you {foo: `bar`, hello: `world`}
# ERROR [18:37:23.940] Something bad happened! {foo: `bar`, hello: `world`}

# Add a file appender to log to a file
tf <- tempfile()
lg$add_appender(AppenderFile$new(tf), name = "txtfile")
panic()
# INFO  [18:37:23.928] Writing info message to log {foo: `bar`, hello: `world`}
# WARN  [18:37:23.932] A big fat warning for you {foo: `bar`, hello: `world`}
# ERROR [18:37:23.940] Something bad happened! {foo: `bar`, hello: `world`}

# But no data injected to the text file!
readLines(tf)
# [1] "INFO  [2022-03-11 18:37:47.150] Writing info message to log"
# [2] "WARN  [2022-03-11 18:37:47.154] A big fat warning for you"  
# [3] "ERROR [2022-03-11 18:37:47.157] Something bad happened!" 

Bug (?): data is injected to the console logs but not injected to the log entries in the text file

Expected behavior: The log entries in both the console and text file contain the injected data

logger threshold per logger

Sorry, maybe I'm missing something again, but I think it would be beneficial to be able to specify per-logger thresholds. Right now it seems the root logger's threshold limits the thresholds of individual loggers.

Reproduce:

# remotes::install_github('dselivanov/rsparse')
library(rsparse)
logger = lgr::get_logger('rsparse')
x = matrix(rnorm(100 * 100))
res = soft_svd(x, 10)

Works (reduce verbosity):

logger$set_threshold('warn')
res = soft_svd(x, 10)

Doesn't work (doesn't increase verbosity):

logger$set_threshold('trace')
res = soft_svd(x, 10)

Add Filters

Loggers and Appenders should have filters like in log4j and python logging.

Syslog appender

Hey, I'm the author of the rsyslog package. I'm wondering if you're interested in including a syslog appender. If so, I'd be happy to write it.

What is the recommended/best practice way to access loggers

I have a project with two kinds of R files: "scripts" (which start from the CLI and run the analysis) and "definitions" where I only define functions.

I would like for each function to log in an appropriate logger, for example train_model would log to model/train logger and forecast_model would log to model/forecast and so on.

These methods run thousands of times and I'm a bit curious about the performance. Would it be appropriate to call get_logger at the beginning of the method, storing the logger reference to a local variable, and using that throughout the function?

Or should I have a global object in the environment instead? This seems to be a bit inferior in that name clashes etc. would need to be handled globally ("no lg object for entire program"). Since the definition files are loaded in unspecified order, I feel like the global definitions could easily shadow each other and create a mess (as far as I know there is no such thing as "file local" variable scope in R).

Maybe this is a case of premature optimization, but having no experience with the python logging module which is the inspiration, I'm unsure about the best practices.

It would be also good to have some discussion about this in the readme, e.g. "how to use this package efficiently in a mid/large size project".

Thanks!
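On the performance question raised above: get_logger() returns the same cached R6 instance for a given name on every call, so looking it up inside a function is cheap (a sketch; the logger names are illustrative):

```r
# Fetching the logger inside the function avoids global state; the lookup
# hits lgr's internal registry and returns the same Logger object each time.
train_model <- function(data) {
  lg <- lgr::get_logger("model/train")
  lg$info("training on %s rows", nrow(data))
  # ... fit the model ...
}
```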

glue .transformer support (particularly for length-0 and NULL values)

Since {glue} is vectorized, the default behavior when an interpolated value is length-0 (which includes NULL) is to return character(0):

str(glue::glue("foo is {NULL}"))
# 'glue' chr(0)

This causes problems with the msg disappearing for LoggerGlue log events if the glue template includes a value that evaluates to length-0 or NULL.

With {glue}, one can use a transformer to convert these annoyances into something useful:

null0_transformer <- function(text, envir) {
  out <- glue::identity_transformer(text, envir)
  if (is.null(out)) {
    "NULL"
  } else if (length(out) == 0) {
    sprintf("%s(0)", mode(out))
  } else {
    out
  }
}

glue::glue("foo is {NULL}", .transformer = null0_transformer)
# foo is NULL

How about a .transformer optional arg (with glue::identity_transformer as the default) for the logging methods in a LoggerGlue, forwarding on those values to glue::glue()?

Perhaps the same for the .na optional arg in glue::glue()?

(I'm also happy making these changes in a PR if you think it's a good idea but are otherwise busy.)

Proper Filters Implementation

Add a Filter R6 class, and some ctors for commonly useful filters like FilterInject (~with_log_value). Keep the option to just use functions as filters though.

Log to database with custom fields

Hello! Thanks for lgr.

How can I write logs to a DB with custom fields?

Example from vignette to JSON:

# The default console appender displays custom fields as pseudo-json after the message
lgr$info("Styria has", poultry = c("capons", "turkeys"))
#> INFO  [09:49:56.806] Styria has {poultry: [capons, turkeys]}

# JSON can store most R objects quite naturally 
read_json_lines(tf)
#>   level           timestamp logger caller                msg         poultry
#> 1   400 2020-10-20 09:49:56   root   eval We lived in Styria            NULL
#> 2   400 2020-10-20 09:49:56   root   eval         Styria has capons, turkeys
read_json_lines(tf)$poultry[[2]]  # works because poultry is a list column
#> [1] "capons"  "turkeys"

But I can't write custom fields to the DB:

library(lgr)
library(lgrExtra)
library(RSQLite)


lg <- get_logger("db_logger")

lg$add_appender(
  name = "db",
  lgrExtra::AppenderDbi$new(
    conn = DBI::dbConnect(RSQLite::SQLite(), "log.db"),
    table = "log",
    buffer_size = 0
  )
)

And I have created the 'log' table with columns: level INTEGER, timestamp TEXT, logger TEXT, caller TEXT, msg TEXT.

One Logger per R6 object: is there any practical limit to number of Loggers?

While reviewing a few recently-closed issues, I came across #41. This dovetailed nicely with an idea I've had for more helpful logging in Shiny apps where much of the application logic exists within R6 objects. I had considered extending Logger to help identify which R6 class and instance (the former via class(obj)[1], the latter via rlang::env_label(obj)) from which the log msg originated in some cases. This quickly got messy (having to walk up the call stack via parent.frame() and such) and I abandoned the idea, but #41 has given me a simpler new idea.

I'm now considering creating a separate Logger instance for each R6 object instance. The Logger could be constructed via the object's constructor like so:

public = list(
  initialize = function(...) {
    private$lgr <- lgr::get_logger(sprintf("%s/%s", class(self)[1], rlang::env_label(self)))
  }
)

Now, if using a LayoutFormat, a format string like "%L [%t] [%g] [%c] %m %f" will include the classname and instance identifier (via the %g element) from which the message originated.

Still some thought to be put into this (e.g. needing to deal with cloning correctly/carefully (unless setting cloneable=FALSE)), but the looming largest question I have has to do with any performance impacts of having lots of Logger objects around in memory. I'm guessing hundreds (or thousands) of Logger instances wasn't part of the initial design strategy of {lgr}, so I'm curious about your initial thoughts around this idea, @s-fleck.

Configure multiple loggers with yaml

I think that configuring only a single logger with yaml is not very usable. Ideally, I'd be able to configure the entire hierarchy with a single yaml file, including the logger names.

I came up with this simple function to do that, maybe we can find a way to integrate it into the package.

Basically, it uses the same format as the current yaml configuration except the additional children keyword which contains the nested loggers. The Logger keyword is omitted and instead the logger name is used directly.

configure_loggers <- function(config, parent_name = NULL) {
    for (i in seq_along(config)) {
        name <- paste(c(parent_name, names(config[i])), collapse = "/")
        value <- config[i][[1]]
        children <- value$children

        value$children <- NULL
        lg_config <- lgr::as_logger_config(list(Logger = value))
        lgr::get_logger(name)$config(lg_config)

        configure_loggers(children, name)
    }
}

Example config:

## loggers.yaml
mypackage:
  threshold: info
  children:
    db:
      threshold: warn
    model:
      threshold: info
      children:
        user:
          threshold: debug

Usage:

configure_loggers(yaml::read_yaml('loggers.yaml'))

Add default extra fields to messages

I have a json appender where I would like to automatically add some meta-data without having to specify them in the log function every time.

For example, I have a loop which operates on some entities, I'd like to automatically add the entity ID to each log message emitted in the loop.

The motivation is removal of extra clutter in the logging code, where often I'd have 4-5 extra arguments which don't change inside the entire function/loop body.

In code (non-working):

appender$set_default_meta(a = "b", id = 33)

...

lg$info("Hello")
lg$warn("Warning")

Both logs would include the a and id extra keys without me having to specify them repeatedly.

Supply preset configs

Better default config for the root logger, and enable the user to be able to load configs from options() and/or environment variables.

lgr should also include some presets with documentation, fe:

  • minimal
  • memory

TESTS: Drop test with 'transparent' futures

Hi. As part of taking the future package to the next level, I'm narrowing down what futures may do. This is mostly backward compatible, but one thing that needs to go is the support for future(..., local = FALSE). Most of this has already been done, cf. HenrikBengtsson/future#382. However, it is still used internally by transparent futures, which I'll keep supporting for a while, but they should only be used for debugging and troubleshooting purposes; they should never be used in production. The reason for this is that local = FALSE introduces behavior that is an outlier in the future framework. (This will be clarified further in the next release of future.)

I spotted that lgr uses 'transparent' futures in one of its package tests:

for (strategy in c(
"sequential",
"transparent",
"multicore"
# "multiprocess",
# "multisession",
# "cluster"
)){

Could you please drop "transparent" from your checks? It is so similar to "sequential", which should be sufficient for your tests anyway.

While at it, you could also drop that # "multiprocess" line, because that is already formally deprecated.

Buffered DBI appender powered by AppenderDt

AppenderDt could easily power a buffered database Appender if it had an on_exit and an on_rotate method. The only problem right now is that R6 does not support multiple inheritance, and thus there would be a lot of code repetition.

lgrmsg (or similarly-named) S3 method?

I routinely use custom vectors (often created with the help of vctrs) or custom classes (backed by R6, lists, etc.) and like to pass such objects to lgr via the pattern:

lgr::lgr$info("my msg", x = obj)

For non-atomic vectors, lgr's default behavior of describing the object (to save space) is nice, but as part of creating the custom classes, I've usually implemented format(), too. In cases where I want to use that method, I'll do something like:

lgr::lgr$info("my msg", x = format(obj))

... but this is a bit cumbersome to write, and is non-standard as not all objects should be formatted (so some lgr statements have format, some don't).

Two ideas on which I'm curious about your thoughts:

  1. lgr layouts that look for format implementations and use them when present (which I think is a bit error-prone, as it's not obvious what to do when a format method would be dispatched via S3-inheritance), or
  2. A lgrmsg S3 method (which would work well for R6 classes) that, when available, works with the existing standard lgr layouts and opts for that string representation in the lgr statements (and when not present, lgr simply uses the existing logic).

I realize one could build their own Layout implementation to do something like this, but I also think it would be a nice addition to the existing Layout(s) for anyone working with custom classes (and would encourage package authors who generate special classes to opt for lgr as their primary logging system :-).

Less fragile way to specify logging module names

Currently loggers need to inherit from an already existing logger object, which means you have to initialize them in the correct order. This is somewhat inconvenient and handled much more nicely in python logging.

JSON logging

Hi Stefan. Thanks for the package. I just wanted to provide some feedback.

  • I really like that lgr doesn't hide R6
  • I found that usage of some appenders and layouts/formatters little bit confusing - will clarify below.

As for me there should be clear separation between log destinations and log formatters (and it seems you generally share same idea). And from this perspective AppenderJson looks a little bit weird - it mixes both file sink and JSON formatting.

Sometimes I would like to have structured logging in order to be able to have msg in a log not as a plain string, but an easy to parse object. So it would be nice to have ability to specify on how to "translate" R's message into logging record. For example with JSON layout we can serialize R objects into json objects (and have fully machine-readable logs):

Current behaviour:

library(lgr)
lgr$appenders$console$set_layout(layout = LayoutJson$new())
lgr$info(list(a = 'b'))
# currently produces
# {"level":400,"timestamp":"2019-03-12 15:44:40","logger":"root","caller":"(shell)","msg":"b"}

Would like to have:

# {"level":400,"timestamp":"2019-03-12 15:44:40","logger":"root","caller":"(shell)","msg":{"a":"b"}}
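Until something like that exists, custom event fields already get close: lgr stores named arguments (other than the format arguments) as separate fields on the event, and LayoutJson serializes them alongside msg. A sketch, with the output abbreviated:

```r
library(lgr)
lgr$appenders$console$set_layout(LayoutJson$new())

# The named argument becomes a top-level field in the JSON record:
lgr$info("payload", a = "b")
# ... "msg":"payload","a":"b"}
```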

should default Layout(s) recognize conditions?

One common pattern of mine (and likely others?) is:

tryCatch({
  ## do something
}, error = function(e) {
  lgr$error(conditionMessage(e))
  ## perhaps some error-handling
})

Would it be worth it for the common default Layout(s) to recognize objects inheriting from condition and simply extract the conditionMessage, so that lgr$error(conditionMessage(e)) can be simplified to lgr$error(e)?

The answer might be "no" (which is perfectly acceptable), but I thought it might be useful to suggest this for discussion anyhow.
I also realize I can just extend the various Layout R6 classes and implement this, but again, worth asking :-)
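For reference, the suggested unwrapping can be sketched in a few lines of plain R as a user-side shim (here `log_fun` stands in for something like `lgr$error`; the helper name is made up for illustration):

```r
# Unwrap conditions before handing them to a logging function, so that
# log_condition_aware(e, lgr$error) behaves like lgr$error(conditionMessage(e)).
log_condition_aware <- function(x, log_fun, ...) {
  if (inherits(x, "condition")) {
    log_fun(conditionMessage(x), ...)
  } else {
    log_fun(x, ...)
  }
}

e <- simpleError("boom")
log_condition_aware(e, message)  # logs/prints "boom" rather than the condition object
```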

format timestamp in LayoutJson

Hi @s-fleck, thank you for creating this great logging package!

Is it possible to format the timestamp in LayoutJson? For example, if I want to include milliseconds.
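Base R can render sub-second timestamps with the `%OSn` directive, and recent lgr versions accept a `timestamp_fmt` argument in `LayoutJson$new()` (this is an assumption about the installed version; check `?LayoutJson`). A sketch:

```r
# Base R: %OS3 prints seconds with 3 decimal places, e.g. "15:44:40.123"
format(Sys.time(), "%Y-%m-%d %H:%M:%OS3")

# Assuming a lgr version whose LayoutJson supports timestamp_fmt:
library(lgr)
lgr$appenders$console$set_layout(
  LayoutJson$new(timestamp_fmt = "%Y-%m-%d %H:%M:%OS3")
)
```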

Question: Custom format for package logger

Your package looks really promising; I'm currently testing whether we should prefer lgr over logger for https://github.com/mlr-org/mlr3.

Creating a package logger is nicely explained in your vignette. However, I'm unsure how to proceed if I want to have a custom format for log messages (background: we are automatically deploying our docs as HTML via pkgdown after each commit, and I don't want time stamps to be included in all the examples and vignettes).

I came up with the following lines in my .onLoad.

  logger = lgr::Logger$new(name = "mlr3", appenders = list(console = lgr::AppenderConsole$new()), propagate = FALSE)
  logger$appenders$console$set_layout(lgr::LayoutFormat$new(fmt = "[%L] %m"))
  assign(...)

This already works reasonably well (except that %L inserts a trailing whitespace). However, I'm unsure how a user would now configure this logger. Without propagating to the root logger, I would have to export the logger object from my package, right?

Is there a way to propagate to the root logger with a different format?
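For what it's worth, exporting the object should not be necessary: given a named package logger like the one created in the .onLoad above, a user can retrieve and reconfigure it by name. A sketch, assuming a recent lgr version with `get_logger()` and a console appender named "console" on the package logger:

```r
library(lgr)

# Look up the package logger by name instead of an exported object:
lg <- get_logger("mlr3")
lg$set_threshold("debug")

# Reconfigure its layout, e.g. to add timestamps back in:
lg$appenders$console$set_layout(
  LayoutFormat$new(fmt = "%t [%L] %m")
)
```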
