
logr's Introduction

A minimal logging API for Go


logr offers an(other) opinion on how Go programs and libraries can do logging without becoming coupled to a particular logging implementation. This is not an implementation of logging - it is an API. In fact it is two APIs with two different sets of users.

The Logger type is intended for application and library authors. It provides a relatively small API which can be used everywhere you want to emit logs. It defers the actual act of writing logs (to files, to stdout, or whatever) to the LogSink interface.

The LogSink interface is intended for logging library implementers. It is a pure interface which can be implemented by logging frameworks to provide the actual logging functionality.

This decoupling allows application and library developers to write code in terms of logr.Logger (which has very low dependency fan-out) while the implementation of logging is managed "up stack" (e.g. in or near main().) Application developers can then switch out implementations as necessary.

Many people assert that libraries should not be logging, and as such efforts like this are pointless. Those people are welcome to convince the authors of the tens-of-thousands of libraries that DO write logs that they are all wrong. In the meantime, logr takes a more practical approach.

Typical usage

Somewhere, early in an application's life, it will make a decision about which logging library (implementation) it actually wants to use. Something like:

    func main() {
        // ... other setup code ...

        // Create the "root" logger.  We have chosen the "logimpl" implementation,
        // which takes some initial parameters and returns a logr.Logger.
        logger := logimpl.New(param1, param2)

        // ... other setup code ...
    }

Most apps will call into other libraries, create structures to govern the flow, etc. The logr.Logger object can be passed to these other libraries, stored in structs, or even used as a package-global variable, if needed. For example:

    app := createTheAppObject(logger)
    app.Run()

Outside of this early setup, no other packages need to know about the choice of implementation. They write logs in terms of the logr.Logger that they received:

    type appObject struct {
        // ... other fields ...
        logger logr.Logger
        // ... other fields ...
    }

    func (app *appObject) Run() {
        app.logger.Info("starting up", "timestamp", time.Now())

        // ... app code ...
    }

Background

If the Go standard library had defined an interface for logging, this project probably would not be needed. Alas, here we are.

When the Go developers started developing such an interface with slog, they adopted some of the logr design but also left out some parts and changed others:

| Feature | logr | slog |
|---|---|---|
| High-level API | Logger (passed by value) | Logger (passed by pointer) |
| Low-level API | LogSink | Handler |
| Stack unwinding | done by LogSink | done by Logger |
| Skipping helper functions | WithCallDepth, WithCallStackHelper | not supported by Logger |
| Generating a value for logging on demand | Marshaler | LogValuer |
| Log levels | >= 0, higher meaning "less important" | positive and negative, with 0 for "info" and higher meaning "more important" |
| Error log entries | always logged, don't have a verbosity level | normal log entries with level >= LevelError |
| Passing logger via context | NewContext, FromContext | no API |
| Adding a name to a logger | WithName | no API |
| Modify verbosity of log entries in a call chain | V | no API |
| Grouping of key/value pairs | not supported | WithGroup, GroupValue |
| Pass context for extracting additional values | no API | API variants like InfoCtx |

The high-level slog API is explicitly meant to be one of many different APIs that can be layered on top of a shared slog.Handler. logr is one such alternative API, with interoperability provided by some conversion functions.

Inspiration

Before you consider this package, please read this blog post by the inimitable Dave Cheney. We really appreciate what he has to say, and it largely aligns with our own experiences.

Differences from Dave's ideas

The main differences are:

  1. Dave basically proposes doing away with the notion of a logging API in favor of fmt.Printf(). We disagree, especially when you consider things like output locations, timestamps, file and line decorations, and structured logging. This package restricts the logging API to just 2 types of logs: info and error.

Info logs are things you want to tell the user which are not errors. Error logs are, well, errors. If your code receives an error from a subordinate function call and is logging that error and not returning it, use error logs.

  2. Verbosity-levels on info logs. This gives developers a chance to indicate arbitrary grades of importance for info logs, without assigning names with semantic meaning such as "warning", "trace", and "debug." Superficially this may feel very similar, but the primary difference is the lack of semantics. Because verbosity is a numerical value, it's safe to assume that an app running with higher verbosity means more (and less important) logs will be generated.

Implementations (non-exhaustive)

There are implementations for the following logging libraries:

  • a function (can bridge to non-structured libraries): funcr
  • a testing.T (for use in Go tests, with JSON-like output): testr
  • github.com/google/glog: glogr
  • k8s.io/klog (for Kubernetes): klogr
  • a testing.T (with klog-like text output): ktesting
  • go.uber.org/zap: zapr
  • log (the Go standard library logger): stdr
  • github.com/sirupsen/logrus: logrusr
  • github.com/wojas/genericr: genericr (makes it easy to implement your own backend)
  • logfmt (Heroku style logging): logfmtr
  • github.com/rs/zerolog: zerologr
  • github.com/go-kit/log: gokitlogr (also compatible with github.com/go-kit/kit/log since v0.12.0)
  • bytes.Buffer (writing to a buffer): bufrlogr (useful for ensuring values were logged, like during testing)

slog interoperability

Interoperability goes both ways, using the logr.Logger API with a slog.Handler and using the slog.Logger API with a logr.LogSink. FromSlogHandler and ToSlogHandler convert between a logr.Logger and a slog.Handler. As usual, slog.New can be used to wrap such a slog.Handler in the high-level slog API.

Using a logr.LogSink as backend for slog

Ideally, a logr sink implementation should support both logr and slog by implementing both the normal logr interface(s) and SlogSink. Because of a conflict in the parameters of the common Enabled method, it is not possible to implement both slog.Handler and logr.LogSink in the same type.

If both are supported, log calls can go from the high-level APIs to the backend without the need to convert parameters. FromSlogHandler and ToSlogHandler can convert back and forth without adding additional wrappers, with one exception: when Logger.V was used to adjust the verbosity for a slog.Handler, then ToSlogHandler has to use a wrapper which adjusts the verbosity for future log calls.

Such an implementation should also support values that implement specific interfaces from both packages for logging (logr.Marshaler, slog.LogValuer, slog.GroupValue). logr does not convert those.

Not supporting slog has several drawbacks:

  • Recording source code locations works correctly if the handler gets called through slog.Logger, but may be wrong in other cases. That's because a logr.LogSink does its own stack unwinding instead of using the program counter provided by the high-level API.
  • slog levels <= 0 can be mapped to logr levels by negating the level without a loss of information. But all slog levels > 0 (e.g. slog.LevelWarn as used by slog.Logger.Warn) must be mapped to 0 before calling the sink because logr does not support "more important than info" levels.
  • The slog group concept is supported by prefixing each key in a key/value pair with the group names, separated by a dot. For structured output like JSON it would be better to group the key/value pairs inside an object.
  • Special slog values and interfaces don't work as expected.
  • The overhead is likely to be higher.

These drawbacks are severe enough that applications using a mixture of slog and logr should switch to a different backend.

Using a slog.Handler as backend for logr

Using a plain slog.Handler without support for logr works better than the other direction:

  • All logr verbosity levels can be mapped 1:1 to their corresponding slog level by negating them.
  • Stack unwinding is done by the SlogSink and the resulting program counter is passed to the slog.Handler.
  • Names added via Logger.WithName are gathered and recorded in an additional attribute with logger as key and the names separated by slash as value.
  • Logger.Error is turned into a log record with slog.LevelError as level and an additional attribute with err as key, if an error was provided.

The main drawback is that logr.Marshaler will not be supported. Types should ideally support both logr.Marshaler and slog.LogValuer. If compatibility with logr implementations without slog support is not important, then slog.LogValuer is sufficient.

Context support for slog

Storing a logger in a context.Context is not supported by slog. NewContextWithSlogLogger and FromContextAsSlogLogger can be used to fill this gap. They store and retrieve a slog.Logger pointer under the same context key that is also used by NewContext and FromContext for a logr.Logger value.

When NewContextWithSlogLogger is followed by FromContext, the latter will automatically convert the slog.Logger to a logr.Logger. FromContextAsSlogLogger does the same for the other direction.

With this approach, binaries which use either slog or logr are as efficient as possible with no unnecessary allocations. This is also why the API stores a slog.Logger pointer: when storing a slog.Handler, creating a slog.Logger on retrieval would need to allocate one.

The downside is that switching back and forth needs more allocations. Because logr is the API that is already in use by different packages, in particular Kubernetes, the recommendation is to use the logr.Logger API in code which uses contextual logging.

An alternative to adding values to a logger and storing that logger in the context is to store the values in the context and to configure a logging backend to extract those values when emitting log entries. This only works when log calls are passed the context, which is not supported by the logr API.

With the slog API, it is possible, but not required. https://github.com/veqryn/slog-context is a package for slog which provides additional support code for this approach. It also contains wrappers for the context functions in logr, so developers who prefer to not use the logr APIs directly can use those instead and the resulting code will still be interoperable with logr.

FAQ

Conceptual

Why structured logging?

  • Structured logs are more easily queryable: Since you've got key-value pairs, it's much easier to query your structured logs for particular values by filtering on the contents of a particular key -- think searching request logs for error codes, Kubernetes reconcilers for the name and namespace of the reconciled object, etc.

  • Structured logging makes it easier to have cross-referenceable logs: Similarly to searchability, if you maintain conventions around your keys, it becomes easy to gather all log lines related to a particular concept.

  • Structured logs allow better dimensions of filtering: if you have structure to your logs, you've got more precise control over how much information is logged -- you might choose in a particular configuration to log certain keys but not others, only log lines where a certain key matches a certain value, etc., instead of just having v-levels and names to key off of.

  • Structured logs better represent structured data: sometimes, the data that you want to log is inherently structured (think tuple-like objects.) Structured logs allow you to preserve that structure when outputting.

Why V-levels?

V-levels give operators an easy way to control the chattiness of log operations. V-levels provide a way for a given package to distinguish the relative importance or verbosity of a given log message. Then, if a particular logger or package is logging too many messages, the user of the package can simply change the v-levels for that library.

Why not named levels, like Info/Warning/Error?

Read Dave Cheney's post. Then read Differences from Dave's ideas.

Why not allow format strings, too?

Format strings negate many of the benefits of structured logs:

  • They're not easily searchable without resorting to fuzzy searching, regular expressions, etc.

  • They don't store structured data well, since contents are flattened into a string.

  • They're not cross-referenceable.

  • They don't compress easily, since the message is not constant.

(Unless you turn positional parameters into key-value pairs with numerical keys, at which point you've gotten key-value logging with meaningless keys.)

Practical

Why key-value pairs, and not a map?

Key-value pairs are much easier to optimize, especially around allocations. Zap (a structured logger that inspired logr's interface) has performance measurements that show this quite nicely.

While the interface ends up being a little less obvious, you get potentially better performance, plus avoid making users type map[string]string{} every time they want to log.

What if my V-levels differ between libraries?

That's fine. Control your V-levels on a per-logger basis, and use the WithName method to pass different loggers to different libraries.

Generally, you should take care to ensure that you have relatively consistent V-levels within a given logger, however, as this makes deciding on what verbosity of logs to request easier.

But I really want to use a format string!

That's not actually a question. Assuming your question is "how do I convert my mental model of logging with format strings to logging with constant messages":

  1. Figure out what the error actually is, as you'd write in a TL;DR style, and use that as a message.

  2. For every place you'd write a format specifier, look to the word before it, and add that as a key value pair.

For instance, consider the following examples (all taken from spots in the Kubernetes codebase):

  • klog.V(4).Infof("Client is returning errors: code %v, error %v", responseCode, err) becomes logger.Error(err, "client returned an error", "code", responseCode)

  • klog.V(4).Infof("Got a Retry-After %ds response for attempt %d to %v", seconds, retries, url) becomes logger.V(4).Info("got a retry-after response when requesting url", "attempt", retries, "after seconds", seconds, "url", url)

If you really must use a format string, use it in a key's value, and call fmt.Sprintf yourself. For instance: log.Printf("unable to reflect over type %T", obj) becomes logger.Info("unable to reflect over type", "type", fmt.Sprintf("%T", obj)). In general, though, the cases where this is necessary should be few and far between.

How do I choose my V-levels?

This is basically the only hard constraint: increase V-levels to denote more verbose or more debug-y logs.

Otherwise, you can start out with 0 as "you always want to see this", 1 as "common logging that you might possibly want to turn off", and 10 as "I would like to performance-test your log collection stack."

Then gradually choose levels in between as you need them, working your way down from 10 (for debug and trace style logs) and up from 1 (for chattier info-type logs). For reference, slog pre-defines -4 for debug logs (corresponds to 4 in logr), which matches what is recommended for Kubernetes.

How do I choose my keys?

Keys are fairly flexible, and can hold more or less any string value. For best compatibility with implementations and consistency with existing code in other projects, there are a few conventions you should consider.

  • Make your keys human-readable.
  • Constant keys are generally a good idea.
  • Be consistent across your codebase.
  • Keys should naturally match parts of the message string.
  • Use lower case for simple keys and lowerCamelCase for more complex ones. Kubernetes is one example of a project that has adopted that convention.

While key names are mostly unrestricted (and spaces are acceptable), it's generally a good idea to stick to printable ascii characters, or at least match the general character set of your log lines.

Why should keys be constant values?

The point of structured logging is to make later log processing easier. Your keys are, effectively, the schema of each log message. If you use different keys across instances of the same log line, you will make your structured logs much harder to use. Sprintf() is for values, not for keys!

Why is this not a pure interface?

The Logger type is implemented as a struct in order to allow the Go compiler to optimize things like high-V Info logs that are not triggered. Not all of these optimizations are implemented yet, but this structure was suggested as a way to ensure they can be implemented. All of the real work is behind the LogSink interface.


logr's Issues

Information about the log verbosity can be lost

When using V-levels with glog or klog, one common pattern is to avoid evaluating logger invocation arguments when they might not be needed. So instead of:

log.V(5).Info("Some message", "key", someComplexComputation())

something like this is used:

if log.V(5) {
  log.Info("Some message", "key", someComplexComputation())
}

In logr this translates to:

if log.V(5).Enabled() {
  log.Info("Some message", "key", someComplexComputation())
}

This unfortunately leads to a situation where the information about the log verbosity might be lost to the underlying logger implementation.
This can be problematic for implementations which have a fixed list of logging levels containing more than just error and info and would like to map different v-levels to the underlying logging levels.

This can be fixed by repeating V() function invocation:

if log.V(5).Enabled() {
  log.V(5).Info("Some message", "key", someComplexComputation())
}

but as this is not enforced by the API, this approach is error-prone.

Some alternatives worth considering:

  • Do not embed logr.InfoLogger in logr.Logger. This way, the following code will not compile:
if log.V(5).Enabled() {
  log.Info("Some message", "key", someComplexComputation())
}
  • Remove V() method and logr.InfoLogger interface completely and pass verbosity as argument to Info method:
if log.InfoEnabled(5) {
  log.Info(5, "Some message", "key", someComplexComputation())
}
  • Use closure with Enabled():
log.V(5).IfEnabled(func(log logr.InfoLogger) {
 log.Info("Some message", "key", someComplexComputation())
})

Ability to retrieve underlying logger

I'm currently using https://github.com/cloudevents/sdk-go which allows you to set the zap logger for it to use internally.

I'm using zapr in my application and would like to access the underlying logger so that I can configure cloudevents. The part of my application which is using the cloudevents SDK only has access to the logr instance.

I could configure the cloudevents SDK in the main package (where the logger is configured), however it seems a bit of a hack as only a portion of my application is using cloudevents.

Maybe adding a function to the Logger interface, something along the lines of:

type Logger interface {
    ...
    Underlying() interface{}
}

I'm aware this would break the interfaces of each wrapper, so would require updating those packages. This could be mitigated temporarily with a new interface and once all wrapper libraries are updated, merge back into the single interface? Like:

type LoggerWithUnderlying interface {
    logr.Logger
    Underlying() interface {}
}

What are your thoughts? Happy to contribute a PR

funcr: handle output file truncation

Not strictly funcr, but could be done there as a ready-made function. Something like:

package main

import (
	"fmt"
	"io"
	"os"
	"time"
)

func fileSeek(f *os.File, pos int64, whence int) (int64, error) {
	return f.Seek(pos, whence)
}

func nullSeek(*os.File, int64, int) (int64, error) {
	return 0, nil
}

func fileSize(f *os.File) (int64, error) {
	st, err := f.Stat()
	if err != nil {
		return 0, err
	}
	return st.Size(), nil
}

func nullSize(*os.File) (int64, error) {
	return 0, nil
}

func main() {
	seek := nullSeek
	size := nullSize

	_, err := os.Stdout.Seek(0, io.SeekCurrent)
	if err == nil {
		seek = fileSeek
		size = fileSize
	}

	for i := 0; i < 10000; i++ {
		pos, err := seek(os.Stdout, 0, io.SeekCurrent)
		if err != nil {
			fmt.Fprintln(os.Stderr, "ERR", err)
			os.Exit(1)
		}
		siz, err := size(os.Stdout)
		if err != nil {
			fmt.Fprintln(os.Stderr, "ERR", err)
			os.Exit(1)
		}
		fmt.Fprintln(os.Stderr, "POS", pos, "SIZ", siz)
		if siz < pos {
			fmt.Fprintln(os.Stderr, "TRUNCATE")
			_, err := seek(os.Stdout, siz, io.SeekStart)
			if err != nil {
				fmt.Fprintln(os.Stderr, "ERR", err)
				os.Exit(1)
			}
		}

		fmt.Println(i)
		time.Sleep(100 * time.Millisecond)
	}
}

logr cannot log struct fields which are not exposed

Hello,

In the example below I am using logr in combination with zap.

type nameValue struct {
   name string
   value int
}

// create an instance of this and try to log it
func main() {
    logger := logr.New(zap.New(zap.UseFlagOptions(&zap.Options{})).GetSink())
    n := nameValue {
        name: "bingo",
        value: 10,
    }
    logger.V(2).Info("testing logging for nameValue", "nameValue", n)
}

This will print the following to the stdout:

testing logging for nameValue "nameValue": "{}"

The assumption made is that if you do not export fields in a struct then they should also not be printed in logs. In general I agree, but it seems too harsh a decision: you can enable debug logs and expect more detailed output which also prints the unexported fields of the struct (in other words, they are not filtered out unless explicitly coded for via MarshalLog).

I even wrote a String method (Stringer interface), which works when logging a single instance of the struct, but when you have a slice of structs with unexported fields it again prints {} (an empty pair of curly braces). The only way to circumvent this behavior is therefore to use fmt.Sprintf directly and create a log message, which defeats the purpose of having a logging framework in the first place.

Also, when the native fmt package can easily print structs via %v and its variants, why should a logging framework be more restrictive than what Go natively provides? And exporting fields just so that they can be logged is not a valid reason to export them when no other package requires it.

I would like to know if there are still better ways to achieve this and your reasoning for the current behavior.

Best Regards,
Madhav

Release 0.1.0

It'd be nice to have a pre-1.0 release so that it's a bit easier to specify versions in tools/libraries that consume it (e.g. controller-runtime). Otherwise, they have to rely on pinning to specific git hashes, and that gets messy when pulling in multiple libraries that have dependencies on logr (e.g. library X depends on logr directly, plus zapr, which also depends on logr).

log file

I was a little confused about the persistence of the logs. Is there a default file path for the log records?
If I do not set a file path for the log instance, will the log only print to standard out and not be recorded to a file on the host running the program?

handling of unexported fields

It is a conscious decision that funcr doesn't call String. But that raises the question of how callers should log types whose relevant fields are not exported. Here's an example:

diff --git a/funcr/funcr_test.go b/funcr/funcr_test.go
index 735cbd6..f9050a7 100644
--- a/funcr/funcr_test.go
+++ b/funcr/funcr_test.go
@@ -191,6 +191,17 @@ func (c *capture) Func(prefix, args string) {
 	c.log = prefix + " " + args
 }
 
+
+type kmeta struct {
+	namespace, name string
+}
+
+func (k kmeta) String() string {
+	return k.namespace + "/" + k.name
+}
+
+var _ fmt.Stringer = kmeta{}
+
 func TestInfo(t *testing.T) {
 	testCases := []struct {
 		name   string
@@ -200,6 +211,13 @@ func TestInfo(t *testing.T) {
 		name:   "just msg",
 		args:   makeKV(),
 		expect: ` "level"=0 "msg"="msg"`,
+	}, {
+		name: "stringer",
+		args: makeKV("meta", kmeta{namespace: "foo", name: "bar"}),
+		// funcr ignores String, so there is nothing to log for the
+		// internal type (unexported field not visible through
+		// reflection).
+		expect: ` "level"=0 "msg"="msg" "meta"={}`,
 	}, {
 		name:   "primitives",
 		args:   makeKV("int", 1, "str", "ABC", "bool", true),

Consider run time performance

If we want to push this to 1.0 (#38), we really need to make some intentional decisions about performance. The API as it stands was designed largely without considering performance, and (surprise!) it shows.

glogr_benchmark_test.go:

package bench

import (
	"flag"
	"os"
	"testing"

	"github.com/go-logr/glogr"
	"github.com/go-logr/logr"
	"github.com/golang/glog"
)

func init() {
	flag.Set("v", "1")
	flag.Set("logtostderr", "true")
	os.Stderr, _ = os.OpenFile("/dev/null", os.O_WRONLY, 0)
}

func BenchmarkGlogInfoOneArg(b *testing.B) {
	for i := 0; i < b.N; i++ {
		glog.Infof("this is a %s", "string")
	}
}

func BenchmarkGlogrInfoOneArg(b *testing.B) {
	var log logr.Logger = glogr.New()

	for i := 0; i < b.N; i++ {
		log.Info("this is", "a", "string")
	}
}

func BenchmarkGlogInfoSeveralArgs(b *testing.B) {
	for i := 0; i < b.N; i++ {
		glog.Infof("multi: bool %t, string %s, int %d, float %f, struct %v",
			true, "str", 42, 3.14, struct{ X, Y int }{93, 76})
	}
}

func BenchmarkGlogrInfoSeveralArgs(b *testing.B) {
	var log logr.Logger = glogr.New()

	for i := 0; i < b.N; i++ {
		log.Info("multi",
			"bool", true, "string", "str", "int", 42,
			"float", 3.14, "struct", struct{ X, Y int }{93, 76})
	}
}

func BenchmarkGlogInfoV0(b *testing.B) {
	for i := 0; i < b.N; i++ {
		glog.V(0).Infof("multi: bool %t, string %s, int %d, float %f, struct %v",
			true, "str", 42, 3.14, struct{ X, Y int }{93, 76})
	}
}

func BenchmarkGlogrInfoV0(b *testing.B) {
	var log logr.Logger = glogr.New()

	for i := 0; i < b.N; i++ {
		log.V(0).Info("multi",
			"bool", true, "string", "str", "int", 42,
			"float", 3.14, "struct", struct{ X, Y int }{93, 76})
	}
}

func BenchmarkGlogInfoV9(b *testing.B) {
	for i := 0; i < b.N; i++ {
		glog.V(9).Infof("multi: bool %t, string %s, int %d, float %f, struct %v",
			true, "str", 42, 3.14, struct{ X, Y int }{93, 76})
	}
}

func BenchmarkGlogrInfoV9(b *testing.B) {
	var log logr.Logger = glogr.New()

	for i := 0; i < b.N; i++ {
		log.V(9).Info("multi",
			"bool", true, "string", "str", "int", 42,
			"float", 3.14, "struct", struct{ X, Y int }{93, 76})
	}
}

Running this:

goos: linux
goarch: amd64
pkg: github.com/go-logr/glogr/bench
cpu: Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz
BenchmarkGlogInfoOneArg-6         	  713682	      1675 ns/op
BenchmarkGlogrInfoOneArg-6        	  251154	      4761 ns/op
BenchmarkGlogInfoSeveralArgs-6    	  527288	      2285 ns/op
BenchmarkGlogrInfoSeveralArgs-6   	  167934	      7114 ns/op
BenchmarkGlogInfoV0-6             	  529798	      2286 ns/op
BenchmarkGlogrInfoV0-6            	  164413	      7293 ns/op
BenchmarkGlogInfoV9-6             	46018132	        25.70 ns/op
BenchmarkGlogrInfoV9-6            	 8412883	       142.2 ns/op

So it's notably slower. All of the variadic args escape to the heap, including the string keys (which regular glog does not suffer). But that doesn't account for enough of the difference.

V() calls that are not taken still evaluate and pack all of their variadic args.

Some of this is attributable to glogr being very dumb and wasteful (clone() on every call) but it's not clear how much. Before we call it 1.0 we need to do some homework.

Add InfoLogger type alias for v0.1.0 compatibility

v0.2.0 removed the InfoLogger interface in favor of returning a Logger from V(). This simplification is a nice improvement, but unfortunately it is a breaking change that makes it impossible to satisfy both the v0.1.0 and the v0.2.0 interfaces.

I am currently working on a Kubernetes operator using kubebuilder. When I switched to my own logr implementation (genericr), I found that Kubernetes depends on the v0.1.0 interface, while my backend implements the new v0.2.0 interface. I hit the following errors:

# sigs.k8s.io/controller-runtime/pkg/log
../../go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/null.go:48:32: undefined: logr.InfoLogger
# k8s.io/klog/v2
../../go/pkg/mod/k8s.io/klog/[email protected]/klog.go:690:45: undefined: logr.InfoLogger
../../go/pkg/mod/k8s.io/klog/[email protected]/klog.go:702:43: undefined: logr.InfoLogger
../../go/pkg/mod/k8s.io/klog/[email protected]/klog.go:706:48: undefined: logr.InfoLogger
../../go/pkg/mod/k8s.io/klog/[email protected]/klog.go:721:44: undefined: logr.InfoLogger
../../go/pkg/mod/k8s.io/klog/[email protected]/klog.go:739:55: undefined: logr.InfoLogger
../../go/pkg/mod/k8s.io/klog/[email protected]/klog.go:857:43: undefined: logr.InfoLogger
../../go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1210:10: undefined: logr.InfoLogger

A quick look at klog suggests that it can be fixed by declaring an internal InfoLogger interface without breaking anything, but I still need to verify this.

The custom null logger in the controller-runtime is a bigger problem. It is not possible to write an implementation of logr.Logger that works both with 0.1.0 and with 0.2.0. This is not a problem in a project where you control all the dependencies, but it is if you depend on third party code that may have their own implementation (e.g., for testing) of this interface.

Unfortunately this is a transition that is hard to manage for third party libraries, because the whole ecosystem would have to upgrade to v0.2.0 of the interface at the same time.

The problem can easily be solved by adding the following type alias to logr:

// InfoLogger provides compatibility with code that relies on the v0.1.0 interface
// Deprecated: use Logger instead. This will be removed in a future release.
type InfoLogger = Logger

This ensures that v0.1.0 implementations continue to work with a new logr v0.2.1. Once v0.2.1+ becomes the norm in the ecosystem, the type alias can be dropped in a future release.

Reconcile NullLogger and discardLogger

They are functionally identical.

  1. EOL NullLogger in favor of logr.Discard()

  2. Expose discardLogger type and make NullLogger embed it

  3. Expose discardLogger type and make NullLogger a type alias

Anyone have feelings?

Can funcr be moved to an independent repo?

Thanks for the nice work. The funcr package uses strconv.FormatComplex, an API that was only added in Go 1.16, so anyone who wants to use this log interface must upgrade their Go version. That seems unnecessary for a minimal log interface.

build breaking after latest commit on 28/29th May

We have a custom logging library built using go-logr. After the new commit to the go-logr repo on Friday, our build is breaking. Can't we define only strings in the logging path? Can you share a detailed changelog to help us understand the new changes?

# k8s.io/klog
../../../../../go/src/k8s.io/klog/klog.go:705:10: invalid operation: logr != nil (mismatched types logr.Logger and nil)
../../../../../go/src/k8s.io/klog/klog.go:724:10: invalid operation: logr != nil (mismatched types logr.Logger and nil)
../../../../../go/src/k8s.io/klog/klog.go:742:10: invalid operation: logr != nil (mismatched types logr.Logger and nil)
../../../../../go/src/k8s.io/klog/klog.go:763:10: invalid operation: logr != nil (mismatched types logr.Logger and nil)
../../../../../go/src/k8s.io/klog/klog.go:782:11: invalid operation: loggr != nil (mismatched types logr.Logger and nil)
../../../../../go/src/k8s.io/klog/klog.go:783:3: undefined: logr.WithCallDepth
../../../../../go/src/k8s.io/klog/klog.go:794:11: invalid operation: loggr != nil (mismatched types logr.Logger and nil)
../../../../../go/src/k8s.io/klog/klog.go:795:3: undefined: logr.WithCallDepth
../../../../../go/src/k8s.io/klog/klog.go:915:9: invalid operation: log != nil (mismatched types logr.Logger and nil)
../../../../../go/src/k8s.io/klog/klog.go:919:4: undefined: logr.WithCallDepth
../../../../../go/src/k8s.io/klog/klog.go:919:4: too many errors
make: *** [build] Error 2

"go get github.com/go-logr/logr/funcr" got error

github.com/go-logr/logr/funcr

../../go/src/github.com/go-logr/logr/funcr/funcr.go:398:16: undefined: strconv.FormatComplex
../../go/src/github.com/go-logr/logr/funcr/funcr.go:400:16: undefined: strconv.FormatComplex
../../go/src/github.com/go-logr/logr/funcr/funcr.go:443:16: undefined: strconv.FormatComplex
../../go/src/github.com/go-logr/logr/funcr/funcr.go:445:16: undefined: strconv.FormatComplex

Please include a proper copyright notice in the source.

Please include this boilerplate with the appropriate {substitutions} in your source according to the guidance given in Appendix A of the Apache 2.0 license. We cannot use 'logr' (and therefore the structured logging features of 'klog') in our product otherwise.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

Generic way to add options without breaking changes

This was originally a comment on PR #31 that implements the idea proposed in #30.

There will always be a demand for more options to the Logger interface, but we have to balance the added value of these new options with the impact on the ecosystem of breaking changes. Constant breaking changes to the interface make this project less attractive. Ideally we will be able to define a v1.0 that will remain stable for years to come, without too many breaking changes getting there.

I would like to find a generic way to add arbitrary options and then just break compatibility once to add this new interface.

For example, we could add this method to the interface:

WithOption(option Option) Logger

with Option being some interface. This could be nothing more than an interface with some marker method that never actually gets called:

type Option interface {
    LogrOption()
}

Then we could define a new CallDepth option, which you would use as follows:

log = log.WithOption(logr.CallDepth(2))

Implementations would be required to ignore unknown options. Applications would be able to define their own custom options that get passed around.

We would always be able to add new 'official' options to the logr package without breaking the interface.

If we also add something similar to convert the logger into something else, we may never have to break the Logger interface again:

Convert(Conversion) interface{}

Usage example that would solve #11:

underlying := log.Convert(logr.Underlying).(*logrus.Logger)

call stack skipping

When wrapping a logr instance with some additional helper functions, it may be necessary to inform the wrapped instance that it has to skip additional levels in the call stack to find the actual source code location for the log message.

JSON LogSink loosely-coupled to Google Cloud Logging

Thanks for logr.

I've been using logr across the components of an app, deployed to different Google Cloud Platform (GCP) services (GKE, Cloud Functions, Cloud Run), and all use Cloud Logging.

To benefit most from Cloud Logging, I want to use structured logging but:

  1. Cloud Logging requires that log entries be JSON formatted
  2. Cloud Logging requires that logs use GCP-specific keys (e.g. logging.googleapis.com/labels, logging.googleapis.com/sourceLocation)

For (1), I've switched from stdr to zerologr to emit JSON-formatted logs.

This requires taking a dependency on rs/zerolog when all I really need is the standard library's encoding/json.

Unfortunately, zerologr entries omit the caller (file, line) struct that's almost a match for Cloud Logging's SourceLocation.

Is there a way to retain this?

For (2), I'm using the GCP-specific keys when writing logs but I'd prefer to not be so tightly bound.

Should I consider writing a JSON LogSink?

Can I chain LogSinks, or would I then also need to write a LogSink that combines JSON formatting with the GCP-specific keys?

Thanks!

A related but different stupid question, why do LogSinks bind Info and Err to a single writer? For container logging, I'd prefer to write Info to os.Stdout but Err to os.Stderr. As it is, I can either have one or the other, or I need to have duplicate Loggers so that I can stdout.Info and stderr.Err (and never stdout.Err or stderr.Info) which seems redundant.

Need for `Logger.UpdateValues()`

We use a bunch of web middlewares, if a web middleware needs to attach contextual information into the logger, we can achieve it as below:

...
logger, _ := logr.FromContext(ctx)
newLogger := logger.WithValues("foo", foo, "bar", bar)
ctx = NewContext(ctx, newLogger)
...

Since there are so many middlewares attaching contextual information into the logger, it will end up making a long chain of Contexts (Refer to the source), and only the last Context that carries a logger would be used.

If there is a method for logger (e.g. UpdateValues) that appends new values into the existing logger without spawning a new logger, the long chain of Contexts could be avoided:

...
logger, _ := logr.FromContext(ctx)
logger.UpdateValues("foo", foo, "bar", bar)
...

Is InfoLogger valuable?

If I pass you an InfoLogger, you can never log an error or use WithValues(). Is that a valuable restriction? Or should V() just return a full Logger, so that V(3).V(2).V(1) would be the same as V(6)?

Context support

One common pattern for logging is to pass the current logging context to called functions and types using a Go context. This decouples much of the program from a specific implementation of the logr interface. This is similar to adding a logger argument to every function but is more general since it allows non logr-aware packages to pass contexts through to logr-aware packages.

A suggested API (these are top level functions):

// FromContext returns a logr.Logger constructed from the context or returns a null logger if no
// logger details are found in the context.
func FromContext(ctx context.Context) logr.Logger

// NewContext returns a new context that embeds the logger.
func NewContext(ctx context.Context, l logr.Logger) context.Context

This can already be implemented in a separate package (I created logctxr as an example) but I thought it would be worth considering as a core addition to the logr package.

Updated 2020-10-20 to remove NewLogger function based on discussion

lint: comment check

We were wondering recently why a mismatch between comment and struct member (LogTimestamps vs LogTimestamp) was not reported.

The reason is that golangci-lint by default suppresses several issues, including the one about comments. This can be disabled, which then leads to:

golangci-lint run -E revive --exclude-use-default=false
funcr/example/main.go:26:6: exported: exported type E should have comment or be unexported (revive)
type E struct {
     ^
funcr/example/main.go:34:1: exported: exported function Helper should have comment or be unexported (revive)
func Helper(log logr.Logger, msg string) {
^
logr.go:327:1: exported: comment on exported method Logger.Helper should be of the form "Helper ..." (revive)
// WithCallStackHelper returns a new Logger instance that skips the
^
funcr/funcr.go:83:2: exported: exported const None should have comment (or a comment on this block) or be unexported (revive)
	None MessageClass = iota
	^
funcr/funcr.go:330:1: exported: comment on exported method Formatter.Init should be of the form "Init ..." (revive)
// Note that this receiver is a pointer, so depth can be saved.
^
funcr/funcr.go:335:1: exported: exported method Formatter.Enabled should have comment or be unexported (revive)
func (f Formatter) Enabled(level int) bool {
^
funcr/funcr.go:355:1: exported: comment on exported method Formatter.FormatError should be of the form "FormatError ..." (revive)
// FormatInfo flattens an Error log message into strings.
^
funcr/funcr.go:386:1: exported: exported method Formatter.AddValues should have comment or be unexported (revive)
func (f *Formatter) AddValues(kvList []interface{}) {
^
funcr/funcr.go:392:1: exported: exported method Formatter.AddCallDepth should have comment or be unexported (revive)
func (f *Formatter) AddCallDepth(depth int) {

I don't mind being stricter, i.e. adding --exclude-use-default=false and fixing these findings.

Structured logging idea

@thockin and I had an offline chat about structured logging (some related to kubernetes/enhancements#1367) and one idea that came up was that some sort of tuple type might speed up logging performance. One concern was that tuples might be annoying to express in go, if you have to include a type annotation with each instantiation. But it turns out you don't need this, as long as the slice is annotated, so you can do something like this:

package main

import (
	"fmt"
)

type S struct {
	k string
	v interface{}
}

func Log(m string, args ...S) {
	fmt.Println(m, args)
}

func main() {
	type N struct {
		n string
	}
	a := N{"foo"}
	b := N{"bar"}
	Log("test", []S{{"foo", a}, {"bar", b}}...)
}

You can also invoke Log like Log("", S{"foo", a}, S{"bar", b}), but with an import prefix on the annotation that can get verbose. Either way, it would be up to the user how to invoke it.

Tim asked me to file an issue with these notes, so here it is :-).

/assign @thockin

Enforce WithValues to use key/value type

keysAndValues are used at Error, Info and WithValues functions

WithValues(keysAndValues ...interface{}) Logger

Implementers will need to check for tuples and deal with scenarios where an odd number of parameters is passed. That sounds hard to enforce at compile time.

A key value type like

type KV map[string]interface{}

would solve this problem, with the downside of enforcing keys to be strings, which I guess is something we can take for granted.

Would it be possible to rename pkg testing?

The use case of "github.com/go-logr/logr/testing" is for testing, and in tests an import of the stdlib's "testing" package is guaranteed. Hence, any use of "github.com/go-logr/logr/testing" requires aliasing the import by definition. Would it be possible to give it a smarter name instead? Or, since a rename would break backwards compatibility, would it make sense to create a testr package in the same spirit as zapr, stdr, etc.?

funcr: non-string key handling

As discussed in https://github.com/go-logr/logr/pull/103/files#r725865981, the scheme of replacing invalid keys with <non-string-key-%d> (unique for each key/value pair in the call) was changed to <non-string-key> (the same key for everything), because it could no longer be guaranteed that the number was correct and truly unique when both WithValues and the function call parameters have the issue.

It would be useful to bring that back because if a log consumer only stores unique keys, then information is lost.

Accept `context` when logging.

Is it possible to extend the API to accept context when logging? The reason is that we want to extract information from the context like opentelemetry trace ids (trace_id/span_id) or other information that we propagate in the app using the context.

All examples are based on main() only

Hello, the README says:

Ideally only main() knows what logging implementation is being used.

yet the examples, even of the various implementations, contain just the main() function.

Can you please provide a basic example how to use this across multiple packages in a single application, for beginners like me, explaining how to make the app/packages logging implementation-agnostic?

Thanks!

Reserved keys spec is too loose.

Spec says:

// While users are generally free to use key names of their choice, it's generally best to avoid
// using the following keys, as they're frequently used by implementations:
//
// - `"error"`: the underlying error value in the `Error` method.
// - `"stacktrace"`: the stack trace associated with a particular log line or error
//                   (often from the `Error` message).
// - `"caller"`: the calling information (file/line) of a particular log line.
// - `"msg"`: the log message.
// - `"level"`: the log level.
// - `"ts"`: the timestamp for a log line.
//
// Implementations are encouraged to make use of these keys to represent the above
// concepts, when necessary (for example, in a pure-JSON output form, it would be
// necessary to represent at least message and timestamp as ordinary named values).

Maybe we should just reserve all keys starting with "_" or all keys starting with "logr." or even just the key named "logr" (which can be a struct) to segregate things provided by the impl from things provided by the user?

@munnerz
@DirectXMan12

MarshalLog() does not marshal slices or maps

I don't know if this is appropriate here or it should be an issue in each of the adapters.

When you have a type that implements the Marshaler interface logr will take the expected action and Marshal the type before sending it to the sink. If you have a Marshaler in a slice, map, or object logr will not marshal this type before sending it.

This means that different adapters (zapr, logrusr, etc.) are marshaling the object at different levels. For example, stdr will recursively marshal and give the expected behavior, but I would have to implement zap's/logrus's Marshaler interface to have the same effect. I created a gist, and a playground that demonstrates this.

What I expect to happen is that the object eventually sent to the sink would be marshaled uniformly across all adapters. So from my example:
stdr would output:

2009/11/10 23:00:00 "level"=0 "msg"="slice" "y"=[{"Value":2},{"Value":3}]

And zap would output:

{"level":"info","msg":"slice","y":[{"Value":2},{"Value":3}]}

Error: Invalid operation: logr != nil. Despite not having changed versions

I have a Go application which depends indirectly on logr and which has been working consistently for months, but it has suddenly started showing compilation errors. I haven't made any changes to the project; it compiled without errors as recently as last Friday (28th May) but started showing errors on Tuesday (1st June):

k8s.io/klog/klog.go:705:10: invalid operation: logr != nil (mismatched types logr.Logger and nil);
k8s.io/klog/klog.go:919:4: undefined: logr.WithCallDepth

I see a new tag marked "BREAKING CHANGE" was released for this project (https://github.com/go-logr/logr/releases/tag/v1.0.0-rc1) at the time this occurred but can't work out how it is related because the dependencies in my project explicitly rely on v0.1.0 and v0.2.0 of logr.

My go.mod file specifies this: k8s.io/apimachinery v0.21.1, which has given rise in the go.sum file to the following dependencies (among others):

github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0 h1:QvGt2nLcHH0WK9orKa+ppBPAxREcH364nPUedEpK0TY=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8=
k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.4.0 h1:7+X0fUguPyrKEC4WjH8iGDg3laWgMo5tMnRTIGTTxGQ=
k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/apimachinery v0.20.0 h1:jjzbTJRXk0unNS71L7h3lxGDH/2HPxMPaQY+MjECKL8=
k8s.io/apimachinery v0.20.0/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.20.2 h1:hFx6Sbt1oG0n6DZ+g4bFt5f6BoMkOjKWsQFu077M3Vg=
k8s.io/apimachinery v0.20.2/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=

I'd appreciate any insight into the cause of the issue and how to fix it.

Add NullLogger

Following discussion in #25, it is probably a good idea to add a null.Logger to logr that ignores all messages.

This can be used as a default logger by components that optionally allow you to pass in a logger. If none is passed in, it can simply use the null logger for all its logging.

The implementation would live under a "null" subpackage and could look like this (needs doc comments):

package null

import "github.com/go-logr/logr"

type Logger struct{}

func (n Logger) Enabled() bool {
	return false
}

func (n Logger) Info(msg string, keysAndValues ...interface{}) {
}

func (n Logger) Error(err error, msg string, keysAndValues ...interface{}) {
}

func (n Logger) V(level int) logr.Logger {
	return n
}

func (n Logger) WithValues(keysAndValues ...interface{}) logr.Logger {
	return n
}

func (n Logger) WithName(name string) logr.Logger {
	return n
}

Usage:

import "github.com/go-logr/logr/null"

func foo() {
	l := null.Logger{}
}

funcr: handle fmt.Stringer that panics on nil

I ran into this with klog in Kubernetes (see kubernetes/klog#278). While I haven't tested it, funcr is probably also affected, at least here but potentially also elsewhere:

value = v.String()

The problem is that log calls might pass a nil pointer for a struct which implements fmt.Stringer, but then segfaults when called. Here's an example:
https://github.com/kubernetes/kubernetes/blob/a1e8a5bf39d48719dfbcf49ea09223ee04840502/pkg/kubelet/kubelet.go#L2334-L2335

I think it is reasonable to expect that a logging implementation is resilient against such problems. That means that funcr must either check for nil beforehand or deal with the panic. I'm undecided. Perhaps fmt.Sprintf("%q", <nil stringer>) can serve as reference.

extending a Logger

In #31 we discussed an approach for adding optional features to a logger and settled on optional interfaces, like this one:

https://github.com/go-logr/zapr/blob/97f3a0a6a3f0212bdc9326c2c17054e0a9cd2e9f/zapr.go#L164-L174

Users would then do something like:

func helper(log logr.Logger, msg string) {
	log.WithCallDepth(2).Info(msg)

	if zaplogger, ok := log.(zapr.Underlier); ok {
		log.Info("have zap logger", "zapr", zaplogger)
	}
}

That no longer works because logr.Logger is a struct.

View logs with verbosity level set

I have set the verbosity level as follows, as I essentially would like these to be Debug-level logs:
l.Log.V(10).Info("req reconciled", "req", req.NamespacedName)

How do I ensure these logs get printed when I do want to view them at this verbosity level?
Currently I only see the logs printed via l.Log.Info, which uses the default verbosity level of 0.

funcr: strict JSON output mode?

It seems useful to make it do strict JSON output (probably as an option).

E.g.

    log := funcr.New(
        func(pfx, args string) { fmt.Printf("{%q:%q, %s}\n", "logger", pfx, args) },                                                                                    
        funcr.Options{
            LogCaller:    funcr.All,
            LogTimestamp: true,
            OutputFormat: funcr.JSON,
        })

Prefix is still weird, so it requires the caller to do some formatting. Maybe in strict mode, the prefix is always blank, and args comes in a single JSON object. We can pick a key for the prefix ("logger" or "name"?). Do we need output to be {"foo":"bar"} or just "foo":"bar" ?

[funcr.NewJSON] Naive RenderValuesHook incorrectly triggered recursive labels

This isn't an issue with go-logr but it may be worth identifying as a potential trap for the unwary; I will update my blog post.

My naive implementation of RenderValuesHook was:

renderLabels := func(kvList []interface{}) []interface{} {
  // loggingLabels becomes the only label key
  return []interface{}{
    loggingLabels,
    funcr.PseudoStruct(kvList),
  }
}

This worked fine if WithValues is only called once per Logger. If WithValues is called again, the existing labels are nested within the new set, i.e.:

{
  "jsonPayload": {
    "logging.googleapis.com/labels": {
      "logging.googleapis.com/labels": { <--- INCORRECT
        "foo": "X"
      },
      "bar": "Y"
    }
  }
  "labels": {
    "instanceId": "00bf4bf0..."
  }
}

NOTE: In my case, Google Cloud Logging no longer recognized the borked logging.googleapis.com/labels as special values and did not merge them with the labels field.

Using NewStdoutLogger, the issue does not occur:

	l := NewStdoutLogger()
	l.Info("info")

	l = l.WithValues("foo", "X")
	l.Info("info")

	l = l.WithValues("bar", "Y")
	l.Info("info")

Outputs:

"level"=0 "msg"="info"
"level"=0 "msg"="info" "foo"="X"
"level"=0 "msg"="info" "foo"="X" "bar"="Y"

So, it's an issue with my implementation.

I revised the implementation (improvements welcome):

renderLabels := func(kvList []interface{}) []interface{} {
  // If loggingLabels is present, it will be the first label key
  k, _ := kvList[0].(string)
  if k == loggingLabels && len(kvList) > 2 {
    // The label values should be a PseudoStruct
    v, _ := kvList[1].(funcr.PseudoStruct)
    // Append the additional ([2:]) label keys/values to the existing
    // loggingLabels; loggingLabels becomes the only label key
    return []interface{}{
      loggingLabels,
      append(v, funcr.PseudoStruct(kvList[2:])...),
    }
  }

  // loggingLabels becomes the only label key
  return []interface{}{
    loggingLabels,
    funcr.PseudoStruct(kvList),
  }
}

And, replacing NewStdoutLogger with NewStructuredLogger, I get:

{"level":0,"msg":"info"}
{"level":0,"msg":"info","logging.googleapis.com/labels":{"foo":"X"}}
{"level":0,"msg":"info","logging.googleapis.com/labels":{"foo":"X","bar":"Y"}}

Logger.Error: nil err allowed?

In Kubernetes, sometimes klog.Errorf is called without having an error. During the structured logging migration, that then gets translated to klog.ErrorS(nil, ...). I wonder whether the same nil error would or should be allowed for a logr.Logger. If yes, then we should document it.

Currently it is undefined behavior, so one could argue that explicitly allowing it is not an API change and just documents how loggers should have been implemented all along anyway (tolerate user errors if it was considered as one, don't crash, etc.).

funcr doesn't handle recursive pointers well

type T struct {    
    P *T    
}    

func example(log logr.Logger) {    
    t := T{}    
    t.P = &t    
    log.Info("recurse", "t", t)     
}

yields a (predictable) stack overflow.

We probably should add a max-depth option or track a stack of pointers across pretty calls and not print pointers already in the stack. Or both.

change log entry details depending on log level

This might be another one of my ideas that aren't useful enough in practice, but let me share it anyway...

In a few places in Kubernetes, Enabled() is used to choose between two different log calls, where the difference is the amount of information that gets logged. In other words, at higher log levels one not only gets more log entries, but also more detailed ones.

This can be done at the call site with the existing API, but it becomes less readable:

if logV := logger.V(5); logV.Enabled() {
    logV.Info("hello", "v1", 1, "v3", 3, "v5", 5)
} else if logV := logger.V(3); logV.Enabled() {
    logV.Info("hello", "v1", 1, "v3", 3)
} else {
    logger.V(1).Info("hello", "v1", 1)
}

One option would be to create a []interface{} slice with the parameters. The klog logcheck tool will complain about that, though, because it cannot statically verify that the key/value pairs are well-formed:

kv := []interface{}{"v1", 1}
if logger.V(3).Enabled() {
    kv = append(kv, "v3", 3)
    if logger.V(5).Enabled() {
        kv = append(kv, "v5", 5)
    }
}
logger.V(1).Info("hello", kv...)

This could remain a single call if we had a way to express verbosity of each key/value pair, for example like this:

log.V(1).Info("hello", "v1", 1, logr.KeyV(3, "v3"), 3, logr.KeyV(5, "v5"), 5)
func KeyV(level int, key string) keyV { return keyV{level: level, key: key} }

type keyV struct {
    level int
    key   string
}

func (k keyV) String() string { return k.key }
func (k keyV) Level() int     { return k.level }

type LeveledStringer interface {
    fmt.Stringer
    Level() int
}

Loggers which support this could check for this interface and skip key/value pairs that are above the threshold. Those that don't can still construct a string key, without having to be modified.

Comparability of NullLogger

Today, I found the comparability of NullLogger useful to assert that a particular Logger instance was making it through a few layers of injection. A bit like the following two snippets:

https://play.golang.org/p/4V5uhqJor6J

ctx := context.Background()
double := struct{ logr.Logger }{testing.NullLogger{}}
assert.Equal(t, double, logr.FromContext(logr.NewContext(ctx, double)))

Do you think this is a property that can be documented and maintained here in logr? If so, this test might suffice:

package testing

import (
	"reflect"
	"testing"

	"github.com/go-logr/logr"
)

func TestNullComparable(t *testing.T) {
	var a logr.Logger = NullLogger{}
	if !reflect.TypeOf(a).Comparable() {
		t.Fatal("null logger should be comparable")
	}

	var b logr.Logger = NullLogger{}
	if a != b {
		t.Fatal("two null loggers should be equal")
	}
}
