
fluent-logger-golang's Introduction

fluent-logger-golang


A structured event logger for Fluentd (Golang)

How to install

go get github.com/fluent/fluent-logger-golang/fluent

Usage

Install the package with go get and use import to include it in your project.

import "github.com/fluent/fluent-logger-golang/fluent"

Example

package main

import (
  "github.com/fluent/fluent-logger-golang/fluent"
  "fmt"
  //"time"
)

func main() {
  logger, err := fluent.New(fluent.Config{})
  if err != nil {
    fmt.Println(err)
  }
  defer logger.Close()
  tag := "myapp.access"
  var data = map[string]string{
    "foo":  "bar",
    "hoge": "hoge",
  }
  err = logger.Post(tag, data)
  // err = logger.PostWithTime(tag, time.Now(), data)
  if err != nil {
    panic(err)
  }
}

data must be a value such as map[string]string, map[string]interface{}, a struct, or a msgp.Marshaler. For struct values, the logger reads the msg or codec tag of each field.
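
For example, a struct whose fields carry msg tags could be posted like this (a minimal sketch; the type, field names, and tag values here are illustrative):

type AccessLog struct {
  Path   string `msg:"path"`
  Status int    `msg:"status"`
}

// err = logger.Post("myapp.access", AccessLog{Path: "/", Status: 200})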

Setting config values

f, err := fluent.New(fluent.Config{FluentPort: 80, FluentHost: "example.com"})

FluentNetwork

Specify the network protocol. The supported values are:

  • "tcp" (use FluentHost and FluentPort)
  • "tls" (useFluentHost and FluentPort)
  • "unix" (use FluentSocketPath)

The default is "tcp".

FluentHost

Specify a hostname or IP address as a string for the destination of the "tcp" protocol. The default is "127.0.0.1".

FluentPort

Specify the TCP port of the destination. The default is 24224.

FluentSocketPath

Specify the unix socket path when FluentNetwork is "unix".
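
For example, a logger pointed at a local unix socket might be configured like this (a sketch; the socket path is illustrative):

f, err := fluent.New(fluent.Config{
  FluentNetwork:    "unix",
  FluentSocketPath: "/var/run/fluent/fluent.sock",
})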

Timeout

Sets the time.Duration timeout for connecting to the destination. The default is 3 seconds.

WriteTimeout

Sets the time.Duration timeout for the Write call inside logger.Post. The default is the zero value, so Write will not time out.

BufferLimit

Sets the number of events buffered in memory. Records will be stored in memory up to this number. If the buffer is full, calls to record logs will fail. The default is 8192.

RetryWait

Sets the initial wait before the first retry, in milliseconds. The actual wait before retry N is r * 1.5^(N-1) (r: this value, N: the number of retries). The default is 500.

MaxRetry

Sets the maximum number of retries. If the number of retries becomes larger than this value, the write/send operation fails. The default is 13.

MaxRetryWait

The maximum duration of wait between retries, in milliseconds. If the calculated retry wait is larger than this value, the actual retry wait will be this value. The default is 60,000 (60 seconds).
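
Putting RetryWait, MaxRetry, and MaxRetryWait together, the wait before the N-th retry follows the formula above; a small illustrative helper (not the library's internal code, and it assumes the math package is imported):

// retryWait returns the wait in milliseconds before retry n (1-based),
// following the documented formula r * 1.5^(n-1), capped at maxWaitMs.
func retryWait(retryWaitMs float64, n int, maxWaitMs float64) float64 {
  wait := retryWaitMs * math.Pow(1.5, float64(n-1))
  if wait > maxWaitMs {
    wait = maxWaitMs
  }
  return wait
}

With the defaults (RetryWait 500, MaxRetry 13, MaxRetryWait 60000), the waits are roughly 500, 750, 1125, ... milliseconds, capped at 60 seconds.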

TagPrefix

Sets the prefix string of the tag. The prefix is joined to the tag with a dot (.), producing ppp.tag (ppp: the value of this parameter, tag: the tag string specified in a call). The default is blank.
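
For example (a minimal sketch based on the description above; the prefix and tag are illustrative):

f, err := fluent.New(fluent.Config{TagPrefix: "myapp"})
// f.Post("access", data) sends the record under the tag "myapp.access"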

Async

Enable asynchronous I/O (connect and write) for sending events to Fluentd. The default is false.

ForceStopAsyncSend

When Async is enabled, immediately discard the event queue on Close() and return, instead of trying MaxRetry times for each event in the queue before returning. The default is false.

AsyncResultCallback

When Async is enabled, if this callback is provided, it will be called on every write to Fluentd. The callback function takes two arguments: a []byte of the message that was to be sent and an error. If the error is not nil, the delivery of the message was unsuccessful.
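
A minimal sketch of wiring up the callback, assuming the signature described above (a function taking a []byte and an error) and that the log package is imported:

logger, err := fluent.New(fluent.Config{
  Async: true,
  AsyncResultCallback: func(data []byte, err error) {
    if err != nil {
      log.Printf("fluent delivery failed (%d bytes): %v", len(data), err)
    }
  },
})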

AsyncReconnectInterval

When Async is enabled, this option defines the interval (in milliseconds) at which the connection to the fluentd-address is re-established. This option is useful if the address may resolve to one or more IP addresses, e.g. a Consul service address.

SubSecondPrecision

Enable time encoding as EventTime, which contains sub-second precision values. The messages encoded with this option can be received only by Fluentd v0.14 or later. The default is false.

MarshalAsJson

Enable JSON marshaling to send messages in JSON format instead of the default MessagePack. This is supported by Fluentd's in_forward plugin. The default is false.

RequestAck

Sets whether to request acknowledgment from Fluentd to increase the reliability of the connection. The default is false.

TlsInsecureSkipVerify

Skip verifying the server certificate. Useful for development and testing. The default is false.
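
A sketch combining the TLS-related options above (the hostname and port are illustrative):

f, err := fluent.New(fluent.Config{
  FluentNetwork:         "tls",
  FluentHost:            "logs.example.com",
  FluentPort:            24224,
  TlsInsecureSkipVerify: true, // only for development and testing
})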

FAQ

Does this logger support the features of Fluentd Forward Protocol v1?

"the features" includes heartbeat messages (for TCP keepalive), TLS transport and shared key authentication.

This logger doesn't support those features. Patches are welcome!

Is it allowed to call Fluent.Post() after connection close?

Before v1.8.0, the Fluent logger silently reopened connections whenever Fluent.Post() was called.

logger, _ := fluent.New(fluent.Config{})
logger.Post(tag, data)
logger.Close()
logger.Post(tag, data)  /* reopen connection */

However, this behavior was confusing, in particular when multiple goroutines were involved. Starting with v1.8.0, the logger no longer accepts Fluent.Post() after Fluent.Close(), and instead returns a "Logger already closed" error.
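
In practice, code targeting v1.8.0 or later should treat a post after close as an error (a sketch based on the behavior described above):

logger, _ := fluent.New(fluent.Config{})
logger.Close()
if err := logger.Post(tag, data); err != nil {
  // since v1.8.0 this returns a "Logger already closed" error
}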

Tests


go test

fluent-logger-golang's People

Contributors

akerouanton, alexlry, bnyu, choplin, cosmo0920, dearoneesama, enm10k, fujimotos, fujiwara, ganmacs, hotchpotch, ivan-valkov, jamesjj, johanavril, kakakakakku, kaxap, masahide, mattn, methane, nagesh4193, najeira, nokute78, repeatedly, rykov, szamuboy, t-k, tagomoris, tulequ, y-matsuwitter, yoheimuta


fluent-logger-golang's Issues

Feature: periodically reconnect to fluentd-address

In cases where FluentHost can resolve to one of many IPs (e.g. a Consul service address), periodic reconnection is a desirable feature. In a Consul example, a member can be marked unhealthy and leave the service pool, or new members can be added to the service pool. While ForceStopAsyncSend is welcome in that it prevents Docker containers from hanging, it doesn't prevent log loss, and a single application with a log spike can still overwhelm a fluentd worker. If periodic reconnection were in place, FluentHost could resolve to a new address and logging could continue.

it doesn't auto-reconnect when the connection is lost

From the code, if the connection to the fluentd server is lost (for example, the fluentd server is restarted), it seems the connection won't auto-reconnect, and none of the Post methods return the send error, so the program never has a chance to detect that the connection was lost. This may cause logs to be lost. Please correct me if I am wrong.

func (f *Fluent) PostRawData(data []byte) {
    f.mu.Lock()
    f.pending = append(f.pending, data...)
    f.mu.Unlock()
    if err := f.send(); err != nil {
        f.close() // only closes the connection; the error is not returned, so how does the program learn about it?
        if len(f.pending) > f.Config.BufferLimit {
            f.flushBuffer()
        }
    } else {
        f.flushBuffer()
    }
}

Are idle timeouts/ keepalives being observed?

Hi folks,

I'm running FluentBit behind an AWS Network Load Balancer (running in TCP mode) in conjunction with the fluentd Docker log driver and am seeing occasional dropping of logs. On investigation this seems to occur only when there are sudden large increases in traffic from previously low (or nil) levels. When this happens I observe that the NLB initially returns reset packets to the clients (in this case the fluentd log driver I believe) before beginning to receive new connections.

As per the AWS NLB docs I'm aware that NLBs have a hardcoded idle timeout value of 350 seconds. The idle timeout value can be reset through the sending of keepalive packets.

This made me wonder whether the fluentd log driver either sends keepalives or checks for reset packets on its connections. I don't know Go, so I may have missed it when looking at the code here, but I couldn't see anything to suggest either of these is in place.

Most grateful for your thoughts.

Cheers,

Edd

Environment:
Docker: 19.03.2, build 6a30dfc
FluentBit: Have tried both 1.2.2 and 1.3.0, both exhibit the same behaviour.

Change in semantics of config.BufferLimit with addition of RequestAck in 1.4

I think in #63 the config.BufferLimit meaning was changed. I'm not sure if this was intentional or not. Instead of allocating a bytes array it allocates a channel buffer of pointers. Callers who didn't realize this would be allocating 8x the memory on startup.

The specific change is here where pending was changed from []byte to chan *msgToSend.

Afaict this change wasn't documented anywhere but if I missed it let me know. For context see: moby/moby#41488.

Option to skip loggertag and timestamp

Presently, the logger tag and timestamp get marshalled along with the log message. Is there any option to skip the tag and timestamp in the marshalled log?

RequestAck usage

Hi Team

What is the usage of the RequestAck flag?

Is there any relation between Async and RequestAck?

We noticed that when Async is false and RequestAck is true, connections do not close.

Thanks
Jagadeesh

tls

Would there be interest in merging an update to add options for TLS connections?

I'd be happy to work on a PR

How to forward output plugin records

I have written a Go output plugin for fluentbit. The records the plugin receives are in msgpack format. The plugin forwards the unprocessed records to an endpoint. The endpoint will use fluent-logger-golang to post the records to another fluentbit instance.

It appears I would have to decode the records, unmarshal them, then post them. That obviously isn't very efficient. The ideal method wouldn't involve any re-encoding or marshaling. E.g.,

func (f *Fluent) PostRawRecords(tag string, records []byte)

where records is the same byte blob as passed to the output plugin.

Is something like that possible? If it's possible, but not implemented, would you accept a PR?

Passing struct pointers to `Post` method

I noticed that the Post method only supports passing struct values, not pointers; for example, only this is valid:

ptr := &struct{}{}
fluent.Post("tag", *ptr)

and passes this check

msg := reflect.ValueOf(message)
msgtype := msg.Type()
if msgtype.Kind() == reflect.Struct {

Passing a pointer generates an error instead. Can we support passing a struct pointer (using .Elem().Field() to get its fields), and if not, what are the possible concerns?

sending raw json?

I want to send a complex struct to td-agent. fluent-logger-golang works fine, but it only sees exported fields (variables starting with an upper-case letter). I need the field names in the resulting JSON to be all lower case and in a format I am used to. Is it possible to post raw JSON?

Simplified example:

type Timinginfo struct {
    Dns      int `json:"dns"`
    Connect  int `json:"connect"`
}

Posting this, my log shows

{"Dns": 0, "Connect": 0}

What I would like to see is

{"dns": 0, "connect": 0}

When marshaling to JSON, the encoder converts field names to my liking. Is there anything similar in fluent?
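
(Per the README section above, the logger reads msg or codec struct tags; a sketch of tagging the struct that way, assuming the msgpack encoder honors those tag values:)

type Timinginfo struct {
	Dns     int `msg:"dns"`
	Connect int `msg:"connect"`
}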

Driver still retries on unknown network error

The retry code does not check whether the error returned from connect() is an "unknown network" error, which cannot be fixed by retrying:

err := f.connect()
if err != nil {
    f.muconn.Unlock()
    waitTime := f.Config.RetryWait * e(defaultReconnectWaitIncreRate, float64(i-1))
    if waitTime > f.Config.MaxRetryWait {
        waitTime = f.Config.MaxRetryWait
    }
    time.Sleep(time.Duration(waitTime) * time.Millisecond)
    continue
}

api inconsistency

f.TagPrefix is respected by the f.Post(..) and f.PostWithTime(..) methods but not by f.EncodeAndPostData(..) and f.EncodeData(..).
It should be consistent across all methods.
It is very easy to fix but requires an API breakage.

Calls to Write() after calling Close() in sync mode reopen the connection

Currently, in sync mode, the code reopens the connection when Write() is called after Close(). This could happen either because:

  1. The client's code calls these functions serially from a single goroutine;
  2. Or two goroutines call these functions concurrently and the connection mutex leads to both goroutines being serialized in that specific order.

In the first case, if Write() is serially called after Close(), it might seem legitimate to have to call Close() once again, but in the second case, it doesn't seem right to have to call Close() multiple times to be really sure the connection is closed.

This case doesn't happen in async mode because f.Close() takes care of setting f.chanClosed = true, and f.appendBuffer() checks whether f.chanClosed is true and returns an error if that's the case.

func (f *Fluent) Close() (err error) {
	if f.Config.Async {
		f.pendingMutex.Lock()
		if f.chanClosed {
			f.pendingMutex.Unlock()
			return nil
		}
		f.chanClosed = true

func (f *Fluent) appendBuffer(msg *msgToSend) error {
	f.pendingMutex.RLock()
	defer f.pendingMutex.RUnlock()
	if f.chanClosed {
		return fmt.Errorf("fluent#appendBuffer: Logger already closed")
	}

I suggest changing the code for sync mode so it doesn't accept any new messages once f.Close() has been called, and documenting that change (as well as the thread-safety improvement done in #82).

I've written the following test to confirm my findings (although I'm not sure how to test the inverse behavior). I ran it against the current master branch and on pre-#82 code to make sure this behavior wasn't introduced by that change. Both tests trigger the error.

func TestSyncWriteAfterCloseFails(t *testing.T) {
	d := newTestDialer()

	go func() {
		var f *Fluent
		var err error
		if f, err = newWithDialer(Config{
			Async: false,
		}, d); err != nil {
			t.Errorf("Unexpected error: %v", err)
		}

		_ = f.PostWithTime("tag_name", time.Unix(1482493046, 0), map[string]string{"foo": "bar"})

		if err := f.Close(); err != nil {
			t.Errorf("Unexpected error: %v", err)
		}

		_ = f.PostWithTime("tag_name", time.Unix(1482493050, 0), map[string]string{"fluentd": "is awesome"})
	}()

	conn := d.waitForNextDialing(true, false)
	conn.waitForNextWrite(true, "")

	conn = d.waitForNextDialing(true, false)
	conn.waitForNextWrite(true, "")

        // isOpen is a field added to Conn specifically for this test
	if conn.isOpen == true {
		t.Error("Connection has been reopened.") // This is currently triggered
	}
}

Release v1.0.1?

I'd like to add support for async connections in docker.

It is particularly useful there, as the fluent server could be in a container, and docker can't start the container until it's running... :)

Could you release a new version to that effect?

Can't convert … not supported at Record

I experienced this while using logrus_fluent (which uses this library).

I have a type alias defined as

type Mode = string

When I try to log a message with .WithFields(logrus.Fields{}) and use this type, I get the following error:

Failed to fire hook: fluent#EncodeAndPostData: can't convert 
'map[string]interface {}{"":"Starting process", "args":[]interface {}{"10000"}, "cmd":"sleep", "level":"info", "mode":"launcher"}' 
to msgpack:msgp: type "config.Mode" not supported at Record

(note: I've added some newlines to the message for readability)

The workaround in this case is to just convert the Mode value back to string:

log.WithFields(logrus.Fields{
	"mode": string(config.Mode),
})

Fail to reconnect after Fluentd destination node recovered

This issue was reported originally as moby/moby#27374.

I added some debug messages, and got logs:

DEBU[1740] [logger/fluentd] Writing a log line with tag docker.ping-1 and timestamp 2016-10-17 08:18:48.016553696 +0000 UTC: map[container_id:d199ca5e2556de10b9401a5a87e091760378822bcd052930396432aa7e823624 container_name:/ping-1 source:stdout log:yay] 
DEBU[1740] [fluent-logger-golang] Appending data into buffer 
DEBU[1740] [fluent-logger-golang] postRawData called    
DEBU[1740] [fluent-logger-golang] Entering send() to send log data 
DEBU[1740] [fluent-logger-golang] No valid connections to send data 
DEBU[1740] [fluent-logger-golang] Reconnecting          
ERRO[1740] Failed to log msg "yay" for logger fluentd: fluent#send: can't send logs, client is reconnecting

Unknown writing error: MaxRetry shows zero

I am using fluent-logger-golang v1.5.0.

It fails when calling the Post method. The snippet below shows the error handling:

logger := getLogger(enabledLog)
error := logger.Post(tag, data)
if error != nil {
    fmt.Println(error.Error())
}
logger.Close()

The error message is: fluent#write: failed to reconnect, max retry: 0

It is strange, because max retry has a default value greater than 0:

defaultMaxRetry               = 13

To make sure, I pass the parameter to the constructor:

		logger, err := fluent.New(fluent.Config{FluentPort: logServerPort, FluentHost: logServer, MaxRetry: 13})
		_ = logger
		if err != nil {
			die("GOLOG: ERROR: It was not possible to open connection to server fluentd.")
		}
  • The snippet right above is part of the getLogger method mentioned previously.

OK, I don't know the cause of the write error. The connection to Fluentd is supposedly established, because it passes the if check. I added some tests:

A TCP test:

conn, err := net.Dial("tcp", FLUENTD_HOST+":"+FLUENTD_PORT)
if err != nil {
    die("Could not connect to Fluentd server: " + err.Error())
}
defer conn.Close()

It passed the test.

A write test:

if _, err := conn.Write([]byte("Fluentd write test\n")); err != nil {
    die("Could not write to Fluentd server: " + err.Error())
}

It passed the test.

What could be producing this error? I suppose it could be an ErrUnknownNetwork, but the error message does not show this.

Fluentd shows that the program opens the connection but closes it soon after. I don't understand what is happening. I believe a more expressive error report is needed.

Panic if TLS fluentd server cannot be connected

Running the code below panics if the fluentd server cannot be connected to.

package main

import (
    "github.com/fluent/fluent-logger-golang/fluent"
)

func main() {
    f, _ := fluent.New(fluent.Config{FluentNetwork: "tls", FluentPort: 11111, FluentHost: "127.0.0.1", Async: false})
    f.Post("tag", map[string]string{"log": "msg"})
    f.Post("tag", map[string]string{"log": "msg"})
}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x59b539]

goroutine 1 [running]:
crypto/tls.(*Conn).SetWriteDeadline(0xc00010baa8?, {0x5f3dd6?, 0x2c?, 0x0?})
        /usr/local/go/src/crypto/tls/conn.go:153 +0x19
github.com/fluent/fluent-logger-golang/fluent.(*Fluent).write.func3(0xc000122000, 0xc0000267b0)
        /home/oliver/go/pkg/mod/github.com/fluent/[email protected]/fluent/fluent.go:606 +0x10e
github.com/fluent/fluent-logger-golang/fluent.(*Fluent).write(0xc000122000, {0x6a8898?, 0xc0000180a8?}, 0xc0000267b0)
        /home/oliver/go/pkg/mod/github.com/fluent/[email protected]/fluent/fluent.go:611 +0xd7
github.com/fluent/fluent-logger-golang/fluent.(*Fluent).writeWithRetry(0xc000122000, {0x6a8898, 0xc0000180a8}, 0xc000026750?)
        /home/oliver/go/pkg/mod/github.com/fluent/[email protected]/fluent/fluent.go:558 +0x5c
github.com/fluent/fluent-logger-golang/fluent.(*Fluent).postRawData(0xc000122000?, 0x644ca5?)
        /home/oliver/go/pkg/mod/github.com/fluent/[email protected]/fluent/fluent.go:300 +0x5b
github.com/fluent/fluent-logger-golang/fluent.(*Fluent).EncodeAndPostData(0x614720?, {0x644ca5?, 0x644c81?}, {0x3?, 0xc0000167f0?, 0x7e7da0?}, {0x614720, 0xc000026750})
        /home/oliver/go/pkg/mod/github.com/fluent/[email protected]/fluent/fluent.go:283 +0xb5
github.com/fluent/fluent-logger-golang/fluent.(*Fluent).PostWithTime(0xc000122000, {0x644ca5?, 0x16365a5d5533203e?}, {0x638f20?, 0x1?, 0x7e7da0?}, {0x614840?, 0xc000026720})
        /home/oliver/go/pkg/mod/github.com/fluent/[email protected]/fluent/fluent.go:274 +0x4ee
github.com/fluent/fluent-logger-golang/fluent.(*Fluent).Post(0x614840?, {0x644ca5, 0x3}, {0x614840, 0xc000026720})
        /home/oliver/go/pkg/mod/github.com/fluent/[email protected]/fluent/fluent.go:226 +0x59
main.main()
        /home/oliver/go/projects/test/main.go:9 +0xdc

In Fluent.connect(), the line below assigns f.conn a value of concrete type *tls.Conn rather than a plain net.Conn interface value. Therefore, f.conn == nil returns false even when the underlying *tls.Conn is nil.

f.conn, err = tls.DialWithDialer(
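
This is the typed-nil interface behavior in Go; a minimal, self-contained illustration (the type name here is made up for the sketch):

package main

import "fmt"

type fakeConn struct{} // stands in for *tls.Conn

func main() {
	var c *fakeConn           // nil pointer
	var iface interface{} = c // interface now holds (type=*fakeConn, value=nil)
	fmt.Println(c == nil)     // true
	fmt.Println(iface == nil) // false: the interface itself is non-nil
}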

Please use main sentence in LICENSE file

Like a
https://github.com/Microsoft/TypeScript/blob/master/LICENSE.txt
or
https://github.com/google/ggrc-core/blob/develop/LICENSE

You wrote it like below:

Copyright (c) 2013 Tatsuo Kaniwa

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

This template is intended for a source-code header or a README.

Data dropped without error report in Async mode

I see a lot of this in my logs:

[2020-03-26T13:23:11Z] Unable to send logs to fluentd, reconnecting...
[2020-03-26T13:23:11Z] Unable to send logs to fluentd, reconnecting...
[2020-03-26T13:23:11Z] Unable to send logs to fluentd, reconnecting...
[2020-03-26T13:23:11Z] Unable to send logs to fluentd, reconnecting...
[2020-03-26T13:23:11Z] Unable to send logs to fluentd, reconnecting...
[2020-03-26T13:23:11Z] Unable to send logs to fluentd, reconnecting...

No records are sent and no errors are returned.

I think it would be better to leave the data in the pending queue until reconnection is achieved.

Release v1.8.0

This ticket tracks the release of a new version of fluent-logger-golang.

  • I'm planning to make a new release as v1.8.0.
  • Mostly because #105 changed the behaviour of Close() (so not fully compatible with v1.7.0).

Release steps

  • Add documentation for FluentNetwork=tls
  • Add documentation for TlsInsecureSkipVerify={true,false}
  • Document the new API behavior introduced by #105
  • Add the release information to CHANGELOG.md
  • Add the v1.8.0 tag to the master HEAD.
  • Push the tag to fluent-logger-golang

can't convert to msgpack

I've found an incompatible issue between v0.5.1 and v0.6.0.

$ go test -bench Benchmark_PostWithStruct
fluent#Post: can't convert to msgpack: {john smith} msgp: type "struct { Name string \"codec:\\\"name\\\"\" }" not supported
...

Until v0.5.1, ugorji/go/codec could accept values of any type (interface{}).
Now fluent.Entry.Record is defined as interface{}, but msgp can't convert an arbitrary struct to msgpack.

The failing test case is below.

func Test_EncodeData(t *testing.T) {
    s := struct {
        Name string `codec:"name"`
    }{
        "john smith",
    }
    var messages = []interface{}{
        1,
        "String",
        map[string]interface{}{
            "str":  "bar",
            "int":  1,
            "list": []int{1, 2, 3},
        },
        s,
    }
    for _, m := range messages {
        msg := &Message{Tag: "test", Time: time.Now().Unix(), Record: m}

        data, err := msg.MarshalMsg(nil)
        if err != nil {
            t.Error(err)
        }
        if data == nil {
            t.Error("MarshalMsg failed", msg)
        }
    }
}

Panic when MaxRetry exceeded

If the fluentd server goes away, panic() is called in a goroutine, which causes the entire application to crash. I would like to recover from the panic, but since the panic() is in a separate goroutine created by the client library, I can't see how to do that.

Please provide a way to recover from the library panicking.

Disordered Ack messages will be missed

Currently, the write() function waits for the ack of the message it just sent, and ack messages are read from the connection in the order the messages were written.
If the series of ack messages arrives out of order (which can happen), the arrived acks will be missed and the original messages will be re-sent to the server. This is not a critical problem (because sends are retried), but it may cause significantly heavier traffic in an at-least-once configuration.

A possible solution could be (this is not the only way, of course; see the sketch after this list):

  1. write() sets the msg.ack value in a Set (with locking)
  2. write() lets a goroutine read response messages from the connection
  3. that goroutine reads a message from any of the connections passed and removes the matching AckResp.Ack from the Set (with locking)
  4. write() waits until its ack value has been removed from the Set (or until a timeout)
  5. if it timed out, write() retries sending the message
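
A minimal sketch of the bookkeeping in steps 1-4, with illustrative types rather than the library's actual ones (assumes the sync package is imported):

type ackSet struct {
	mu  sync.Mutex
	ids map[string]chan struct{}
}

// expect registers an outstanding ack ID and returns a channel that is
// closed when the ack arrives.
func (s *ackSet) expect(id string) <-chan struct{} {
	s.mu.Lock()
	defer s.mu.Unlock()
	ch := make(chan struct{})
	s.ids[id] = ch
	return ch
}

// ack is called by the reader goroutine for whatever ack arrives next,
// regardless of order.
func (s *ackSet) ack(id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if ch, ok := s.ids[id]; ok {
		close(ch)
		delete(s.ids, id)
	}
}

A writer would then select on the returned channel or a timeout and re-send the message if the timeout fires (step 5).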

Buffer lock can block client (i.e, dockerd)

Hello,

We have experienced a couple of blocked web server containers in our production infrastructure caused by fluentd being overloaded.

I suspect this is caused by send attempting to flush the entire buffer while keeping hold of the buffer lock.
If the fluentd server is slow for any reason, the send method will slowly loop until the buffer is empty.
In the meantime, any attempt to log will block at the appendBuffer level because send still holds the lock.

While looking for known issues/prs I found #33 and the implementing PR from @alexlry #52 which would remove the lock altogether.

It would be great if that PR could be reviewed and merged so that docker containers would no longer block while trying to write to stdout but rather fail with full buffers.

I have also built a Gist detailing how to reproduce this issue with docker-compose and tc: https://gist.github.com/stefano-pogliani/463af63c040d1bd3740fb46255d2768a

Add asynchronous requests to client

As a user who is instrumenting a service with fluent logging, I am concerned about blocking requests affecting my service in different scenarios:

  1. The fluentd server going down or degrading.
  2. Fluent requests slowing down the requests being processed by my service. I'd rather stop logging than degrade the performance of my service.

As a result I ended up incorporating asynchronous logging through a worker goroutine reading from a small memory buffer on top of your Go client. If the buffer gets full for whatever reason (logging slowing down, fluent going down ...), events will get dropped but the service won't be affected.

Admittedly, if my service goes down, the events inside the buffer will be lost, but I am perfectly fine with that.

I believe others would benefit from this scheme, and it should probably ship by default with the client.

My code is pretty ad-hoc at this point but I may consider cleaning it up and upstreaming it at some point.

Add WithContext methods to Fluent struct

Currently the only way to have control over the maximum duration of Post and related methods is by configuring Timeout, WriteTimeout, RetryWait, MaxRetry and MaxRetryWait, which together define an upper bound on the total duration. However, there is no way to define an overall timeout for the action.

A lot of networking libraries use the context deadline as an overall timeout for their operations, so it would be nice if that were also added to this library.

Work with fluent-bit?

I'm having a problem getting this to work with fluent-bit. Is it expected to?

I have verified that fluent-bit is running and have tried using both the forward and tcp input plugins, but neither works. I keep getting errors like:

fluent#write: failed to reconnect, max retry 0

If I use nc and write some simple JSON to the ip/port that fluent-bit is listening on (the tcp input), it is logged by fluent-bit.

Add a new option TLSCertPath

Background

#107 added basic TLS support to the Go logger. However, we don't yet provide any
configurable options regarding certificates.

This turns out to be a hurdle for developers who want to set up an encrypted
transmission secured by a self-signed cert.

Solution

Add a new option TLSCertPath that takes a list of path strings. Load each path
as a certificate and make use of them to connect to Fluentd.
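
A sketch of how such an option might be honored, using only standard library calls (crypto/x509 and os); TLSCertPath itself does not exist yet, so the function below is purely illustrative:

// loadCertPool appends each PEM file to a root CA pool.
func loadCertPool(paths []string) (*x509.CertPool, error) {
	pool, err := x509.SystemCertPool()
	if err != nil {
		pool = x509.NewCertPool()
	}
	for _, path := range paths {
		pem, err := os.ReadFile(path)
		if err != nil {
			return nil, err
		}
		pool.AppendCertsFromPEM(pem)
	}
	return pool, nil
}

The resulting pool would then be placed in a tls.Config{RootCAs: pool} when dialing Fluentd.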

Pending buffer could be written twice

In Fluent.send, f.pending is written as follows:

        f.mu.Lock()
        _, err = f.conn.Write(f.pending)
        f.mu.Unlock()

Let's say there're two goroutines A and B, and a problem can happen with the execution pattern below.

  1. A acquires lock
  2. B is blocked by the same lock
  3. A writes f.pending and returns from Fluent.send
  4. B writes f.pending before A flushes f.pending in Fluent.PostRawData

Because f.pending isn't flushed at the time B writes it and still holds the same data that A just wrote, the same data can be written twice. To avoid this problem, f.pending must be flushed right after it's written, without unlocking the mutex in between (i.e. in an atomic manner).
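
A minimal sketch of that atomic write-and-flush, with illustrative types rather than the library's actual ones (assumes the net and sync packages are imported):

type bufferedSender struct {
	mu      sync.Mutex
	pending []byte
	conn    net.Conn
}

// send writes and flushes pending inside one critical section, so a second
// goroutine can never re-send bytes that have already been written.
func (s *bufferedSender) send() error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if len(s.pending) == 0 {
		return nil
	}
	if _, err := s.conn.Write(s.pending); err != nil {
		return err
	}
	s.pending = s.pending[:0]
	return nil
}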

tagomoris will step down from the maintainer

The maintenance of this repository is now far from my current job, and on the other hand, there are many requests/demands for updating/maintaining this library (mainly for the docker logging driver).

I will not be able to work on maintaining this library as a high-priority task in the future, so I think it's time for me to step down from the maintainer.

The problem is, how we can choose the next maintainer.
Do you have any idea, guys? Or are there any volunteers?
Cc: @fluent/admins

(Even without any replies, I'll stop working on this at the end of this year - I'll be too busy to respond to issues/pull-requests on this repo.)

Post Method is not Throwing error when the machine is not reachable

logger, err := fluent.New(fluent.Config{FluentPort: config.Aggregatorport, FluentHost: config.Aggregatorip})

At this point, I get a logger instance.
Then I read the file content which I want to send to my log aggregator.
Something like this:
for value := range result {
    error := logger.Post(tag, value)
    if error != nil {
        // Assume the machine is not reachable, so write back to the file.
    }
}

The network connection breaks while I am inside the for loop, but that error instance is still nil.
What is the Post method really doing here?

Sent payloads not valid JSON, wrapped in function call

When I send a message it shows up in cloudwatch as a function call for some reason.

f, _ := fluent.New(fluent.Config{FluentPort: port, FluentHost: "localhost"})
f.Post("tag_name_here", msg) // msg is EMF payload
f.Close()
function-wrapped

I am trying to use fluent bit to post EMF payloads to a specific cloudwatch log group so I can use them for metrics.

My fluentbit configuration, if it helps:

[SERVICE]
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020
    Flush        1
    Grace        30
    Log_Level    info
    storage.path /var/log/flb-storage/
    storage.max_chunks_up  32

[INPUT]
    Name          tcp
    Alias         TCP-ServiceMetrics
    Tag           ServiceMetrics
    Listen        0.0.0.0
    Port          5172
    Format        none
    Chunk_Size    32
    Buffer_Size   256
    storage.type  filesystem

[OUTPUT]
    Name                cloudwatch_logs
    Alias               CloudWatch-ServiceMetrics
    Match               ServiceMetrics
    region              ${LOG_REGION}
    log_group_name      SampleApp-${STAGE}-ServiceMetrics
    log_stream_prefix   ServiceMetrics-${HOSTNAME}
    log_key             log
    log_format          json/emf
    auto_create_group   false
    retry_limit         3
    endpoint            ${CLOUDWATCH_ENDPOINT}

Send JSON encoded payloads without re-marshalling them?

I would like to use this library as the basis for multiple logging libraries. However, the only way I was able to figure out was to create an io.WriteCloser that wraps this library and calls .Post(tag, data) with the actual log line.

type FluentWriter struct {
	Fluent *fluent.Fluent
	Tag    string
}

func (w *FluentWriter) Close() error {
	return w.Fluent.Close()
}

func (f *FluentWriter) Write(p []byte) (n int, err error) {
	// this is wasteful since the logger just spent time to json.Encode this very payload.
	msg := map[string]interface{}{}
	json.Unmarshal(p, &msg)
	err = f.Fluent.Post(f.Tag, msg)
	return len(p), err
}

The issue is that Post() and EncodeAndPostData() both expect a non-encoded map/struct/interface, which isn't the []byte that an io.Writer provides.

f, err := fluent.New(fluent.Config{});
logSink := &logger.FluentWriter{Fluent: f, Tag: "ApplicationLogs"}
logger := zerolog.New(logSink).With().Timestamp().Logger()

Is there either 1) a better way to wrap this library so I can avoid this encode/decode/encode process, or 2) a way to send a raw []byte payload via msgpack without fluent-logger-golang needing to mess with it?

As-is, there really aren't many performance savings in going with zerolog or zap over something like slog or logrus, because there will be a ton of extra memory allocations from running the JSON encoder/decoder over the payloads.

Support sending data as json

It would be nice if this library supported sending data as JSON instead of msgpack. If you're using this as a catch-all error logger, a lot of things do not support being marshaled as msgpack, while practically everything works with the JSON marshaller.

Docker daemon dies when fluent driver can not send messages to saturated fluent server

We have 25 very chatty nodes in a dockerised Elasticsearch cluster.

We use the fluentd log driver for the Elasticsearch containers.

So saturation/slowness on the fluent server (which stores data on a standalone Elasticsearch node in Logstash format for viewing with Kibana) led to panics and the death of random Elasticsearch containers.

From /var/log/syslog:

Mar  4 01:41:02 es-100-pem-300-data-1 docker[21750]: panic: runtime error: invalid memory address or nil pointer dereference
Mar  4 01:41:02 es-100-pem-300-data-1 docker[21750]: [signal 0xb code=0x1 addr=0x0 pc=0x70808c]
Mar  4 01:41:02 es-100-pem-300-data-1 docker[21750]: goroutine 125885 [running]:
Mar  4 01:41:02 es-100-pem-300-data-1 docker[21750]: net.(*TCPConn).Write(0x0, 0xc8235a4000, 0x11857, 0x30000, 0x0, 0x0, 0x0)
Mar  4 01:41:02 es-100-pem-300-data-1 docker[21750]: <autogenerated>:75 +0x2c
Mar  4 01:41:02 es-100-pem-300-data-1 docker[21750]: github.com/fluent/fluent-logger-golang/fluent.(*Fluent).send(0xc820eecb00, 0x0, 0x0)
Mar  4 01:41:02 es-100-pem-300-data-1 docker[21750]: /usr/src/docker/vendor/src/github.com/fluent/fluent-logger-golang/fluent/fluent.go:242 +0x1d6
Mar  4 01:41:02 es-100-pem-300-data-1 docker[21750]: github.com/fluent/fluent-logger-golang/fluent.(*Fluent).PostRawData(0xc820eecb00, 0xc8230d6000, 0x118, 0x146)
Mar  4 01:41:02 es-100-pem-300-data-1 docker[21750]: /usr/src/docker/vendor/src/github.com/fluent/fluent-logger-golang/fluent/fluent.go:154 +0x171

From fluent forwarder log:

2016-03-04 00:54:49 +0000 [warn]: emit transaction failed: error_class=Fluent::BufferQueueLimitError error="queue size exceeds limit" tag="docker.49c656b5827d"
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/buffer.rb:198:in `block in emit'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/2.2.0/monitor.rb:211:in `mon_synchronize'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/buffer.rb:187:in `emit'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/output.rb:448:in `emit'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/event_router.rb:88:in `emit_stream'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/event_router.rb:79:in `emit'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/plugin/in_forward.rb:186:in `on_message'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/plugin/in_forward.rb:276:in `call'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/plugin/in_forward.rb:276:in `block in on_read_msgpack'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/plugin/in_forward.rb:275:in `feed_each'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/plugin/in_forward.rb:275:in `on_read_msgpack'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/plugin/in_forward.rb:261:in `call'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/plugin/in_forward.rb:261:in `on_read'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/cool.io-1.4.3/lib/cool.io/io.rb:123:in `on_readable'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/cool.io-1.4.3/lib/cool.io/io.rb:186:in `on_readable'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/cool.io-1.4.3/lib/cool.io/loop.rb:88:in `run_once'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/cool.io-1.4.3/lib/cool.io/loop.rb:88:in `run'
  2016-03-04 00:54:49 +0000 [warn]: /usr/lib/ruby/gems/2.2.0/gems/fluentd-0.12.20/lib/fluent/plugin/in_forward.rb:98:in `run'
2016-03-04 00:54:49 +0000 [error]: forward error error=#<Fluent::BufferQueueLimitError: queue size exceeds limit> error_class=Fluent::BufferQueueLimitError
  2016-03-04 00:54:49 +0000 [error]: suppressed same stacktrace

Let me know if you need the full stack trace or more details. We are really eager to fix this.

Because even if I start a bigger fluentd server and a standalone Elasticsearch node, I want to make sure
we will not start losing data nodes due to panics during log spikes.

Related docker ticket: moby/moby#20960

[v1.2.1] logger.Post blocks forever when the fluentd server has a certain trouble

I encountered a problem where apps using fluent-logger-golang failed to process requests due to timeouts, while fluentd didn't output any error log. See details below.

It is certain that server errors are the root cause of this problem. However, I would like to take some measures on the client side for the future.

How about making it possible to set a timeout on logger.Post? If that's reasonable, I can create a PR later.

REPRODUCTION STEPS:

Installed version

td-agent.rb
source {
  id 'in-forward'
  type :forward
  port 24224
}

##
# in app
[
  "service",
].each do |app|
  match("hogehoge.#{app}.*") {
    id "forest-hogehoge#{app}"
    type :forest
    subtype :copy
    template {
      id "copy-hogehoge#{app}"
      ##
      # BigQuery
      store {
        id "out-bigquery-hogehoge#{app}"
        log_level 'warn'
        type :bigquery
        method :insert
        auth_method :private_key
        email 'hogehoge'
        private_key_path '/etc/td-agent/hogehoge-547f3ca2f35c.p12'
        project 'turnkey-conduit-708'
        dataset :hogehoge
        table "#{app}"
        fetch_schema true
        flush_interval '3s'
        try_flush_interval 0.05
        queued_chunk_flush_interval 0.01
        buffer_type :file
        buffer_chunk_limit '512k'
        buffer_queue_limit 1000
        buffer_chunk_records_limit 300
        retry_limit 10
        max_retry_wait '1s'
        num_threads 10
        buffer_path '/hogehoge/tmp/td-agent/bq-buffer/${tag}'
      }
    }
  }
end
main.go
package main

import (
        "fmt"
        "log"

        "github.com/fluent/fluent-logger-golang/fluent"
)

func main() {
        logger, err := fluent.New(fluent.Config{FluentPort: 24224, FluentHost: "127.0.0.1"})
        if err != nil {
                fmt.Println(err)
        }
        defer logger.Close()
        tag := "hogehoge.service.test"
        var data = map[string]string{
                "foo":  "bar",
                "hoge": "hoge"}
        for i := 0; i < 1000000; i++ {
                e := logger.Post(tag, data)
                if e != nil {
                        log.Println("Error while posting log: ", e)
                } else if (i % 10000) == 0 {
                        log.Println("Success to post log: ", i)
                }
        }
}
go run main.go
$ go run main.go
2017/01/26 18:14:40 Success to post log:  0
2017/01/26 18:14:40 Success to post log:  10000
2017/01/26 18:14:40 Success to post log:  20000
2017/01/26 18:14:40 Success to post log:  30000
2017/01/26 18:14:40 Success to post log:  40000
2017/01/26 18:14:40 Success to post log:  50000
2017/01/26 18:14:40 Success to post log:  60000
2017/01/26 18:14:40 Success to post log:  70000
2017/01/26 18:14:40 Success to post log:  80000
2017/01/26 18:14:40 Success to post log:  90000
2017/01/26 18:14:40 Success to post log:  100000
2017/01/26 18:14:40 Success to post log:  110000
# It kept blocking even after 10 minutes or more.

ONE SOLUTION:

When I changed it to set a timeout of 10 seconds, the logger started to reconnect regularly. As a result, the long blocking behavior no longer occurs and the logger reports the problem.

fluent/fluent.go
diff --git a/fluent/fluent.go b/fluent/fluent.go
index 655f623..30db90b 100644
--- a/fluent/fluent.go
+++ b/fluent/fluent.go
@@ -4,7 +4,6 @@ import (
 	"encoding/json"
 	"errors"
 	"fmt"
-	"io"
 	"math"
 	"net"
 	"reflect"
@@ -46,7 +45,7 @@ type Fluent struct {
 	pending []byte
 
 	muconn       sync.Mutex
-	conn         io.WriteCloser
+	conn         net.Conn
 	reconnecting bool
 }
 
@@ -297,6 +296,7 @@ func (f *Fluent) send() error {
 
 	var err error
 	if len(f.pending) > 0 {
+		f.conn.SetWriteDeadline(time.Now().Add(10 * time.Second))
 		_, err = f.conn.Write(f.pending)
 		if err != nil {
 			f.conn.Close()
go run main.go
[18:16:09 mtburn@ip-10-0-7-194 dev /(T_T)\]$ go run main.go
2017/01/26 18:16:20 Success to post log:  0
2017/01/26 18:16:20 Success to post log:  10000
2017/01/26 18:16:20 Success to post log:  20000
2017/01/26 18:16:20 Success to post log:  30000
2017/01/26 18:16:20 Success to post log:  40000
2017/01/26 18:16:20 Success to post log:  50000
2017/01/26 18:16:20 Success to post log:  60000
2017/01/26 18:16:20 Success to post log:  70000
2017/01/26 18:16:20 Success to post log:  80000
2017/01/26 18:16:20 Success to post log:  90000
2017/01/26 18:16:20 Success to post log:  100000
2017/01/26 18:16:20 Success to post log:  110000
2017/01/26 18:16:30 Error while posting log:  write tcp 127.0.0.1:51946->127.0.0.1:24224: i/o timeout
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:30 Success to post log:  120000
2017/01/26 18:16:30 Success to post log:  130000
2017/01/26 18:16:30 Success to post log:  140000
2017/01/26 18:16:30 Success to post log:  150000
2017/01/26 18:16:30 Success to post log:  160000
2017/01/26 18:16:30 Success to post log:  170000
2017/01/26 18:16:30 Success to post log:  180000
2017/01/26 18:16:30 Success to post log:  190000
2017/01/26 18:16:31 Success to post log:  200000
2017/01/26 18:16:31 Success to post log:  210000
2017/01/26 18:16:31 Success to post log:  220000
2017/01/26 18:16:31 Success to post log:  230000
2017/01/26 18:16:41 Error while posting log:  write tcp 127.0.0.1:51950->127.0.0.1:24224: i/o timeout
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:41 Success to post log:  240000
2017/01/26 18:16:41 Success to post log:  250000
2017/01/26 18:16:41 Success to post log:  260000
2017/01/26 18:16:41 Success to post log:  270000
2017/01/26 18:16:41 Success to post log:  280000
2017/01/26 18:16:41 Success to post log:  290000
2017/01/26 18:16:41 Success to post log:  300000
2017/01/26 18:16:41 Success to post log:  310000
2017/01/26 18:16:41 Success to post log:  320000
2017/01/26 18:16:41 Success to post log:  330000
2017/01/26 18:16:41 Success to post log:  340000
2017/01/26 18:16:41 Success to post log:  350000
2017/01/26 18:16:51 Error while posting log:  write tcp 127.0.0.1:51954->127.0.0.1:24224: i/o timeout
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Error while posting log:  fluent#send: can't send logs, client is reconnecting
2017/01/26 18:16:51 Success to post log:  360000
2017/01/26 18:16:51 Success to post log:  370000
...

What should be the source configuration to use fluent-logger-golang?

My source configuration looks like below.


<source>
  @type tcp
  tag test.log
  <parse>
    @type regexp
    expression /^(?<field1>\d+):(?<field2>\w+)$/
  </parse>
  port 24224
  bind 0.0.0.0 
  delimiter "\n"
</source>

When I run the following program

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/fluent/fluent-logger-golang/fluent"
)

func main() {
	logger, err := fluent.New(fluent.Config{FluentPort: 24224, FluentHost: "192.168.1.12"})
	if err != nil {
		fmt.Println(err)
	}
	defer logger.Close()
	tag := "test.log"
	var data = map[string]string{
		"foo":  "bar",
		"hoge": "hoge"}
	for i := 0; i < 100; i++ {
		e := logger.Post(tag, data)
		if e != nil {
			log.Println("Error while posting log: ", e)
		} else {
			log.Println("Success to post log")
		}
		time.Sleep(1000 * time.Millisecond)
	}
}

I get the following error:

dial tcp 192.168.1.12:24224: i/o timeout
2020/07/19 18:43:03 Error while posting log:  fluent#write: failed to reconnect, max retry: 13
2020/07/19 18:45:49 Error while posting log:  fluent#write: failed to reconnect, max retry: 13
2020/07/19 18:48:36 Error while posting log:  fluent#write: failed to reconnect, max retry: 13
2020/07/19 18:51:23 Error while posting log:  fluent#write: failed to reconnect, max retry: 13
2020/07/19 18:54:09 Error while posting log:  fluent#write: failed to reconnect, max retry: 13

I am running fluentd inside a container on Ubuntu.

How about to move this repository under fluent organization?

Hi,

This module is now depended on by a lot of software written in Go, but there seem to be few maintainers relative to its importance.

So, I propose moving this repository under the fluent organization. I think I can use some of my time to maintain this library. In fact, I now want to add some methods for Docker's fluentd logging driver.

@t-k How do you think about this proposal?

Compatibility with evalphobia/logrus_fluent

Just to preface this, I am aware that this isn't entirely your issue, but it still seemed like something you should know.

Your repo is a dependency of evalphobia/logrus_fluent, but recent changes on their end break compatibility with your most recent official stable release in favor of being compatible with more recent commits. So far, not bad; however, it does mean that when one runs 'dep ensure' on code that uses their repo, it grabs the stable version of your repo, which breaks the code. If you manually install the more up-to-date version of your repo, both your code and their code work fine.

It would be nice to have a new official release of your code so that dep ensure would work.
