
fluent-logger for Node.js

A fluent-logger implementation for Node.js, inspired by fluent-logger-python.


Install

$ npm install fluent-logger

Prerequisites

The Fluentd daemon must be listening on a TCP port.

A simple configuration looks like this:

<source>
  @type forward
  port 24224
</source>

<match **.*>
  @type stdout
</match>

Usage

Send an event record to Fluentd

Singleton style

var logger = require('fluent-logger');
// The 2nd argument can be omitted. The default option values are shown below.
logger.configure('tag_prefix', {
   host: 'localhost',
   port: 24224,
   timeout: 3.0,
   reconnectInterval: 600000 // 10 minutes
});

// send an event record with 'tag.label'
logger.emit('label', {record: 'this is a log'});

Instance style

var logger = require('fluent-logger').createFluentSender('tag_prefix', {
   host: 'localhost',
   port: 24224,
   timeout: 3.0,
   reconnectInterval: 600000 // 10 minutes
});

The emit method has the following signature:

.emit([label string], <record object>, [timestamp number/date], [callback function])

Only the record argument is required. If a label is given, it is appended to the configured tag.
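As a rough sketch of how those optional arguments could be told apart by type, here is a hypothetical helper (it is illustrative only, not part of the library's API):

```javascript
// Hypothetical helper mirroring the documented signature:
// .emit([label string], <record object>, [timestamp number/date], [callback function])
// Only the record is required; the other arguments are detected by type.
function normalizeEmitArgs(args) {
  var label = null, record, timestamp = null, callback = null;
  var i = 0;
  if (typeof args[i] === 'string') label = args[i++];
  record = args[i++]; // the only required argument
  if (typeof args[i] === 'number' || args[i] instanceof Date) timestamp = args[i++];
  if (typeof args[i] === 'function') callback = args[i];
  return { label: label, record: record, timestamp: timestamp, callback: callback };
}
```

For example, `normalizeEmitArgs(['label', { msg: 'hi' }, 1489547207])` yields a label, the record, and a numeric timestamp, while `normalizeEmitArgs([{ msg: 'hi' }])` yields only a record.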

Disable automatic reconnect

Both the singleton and instance styles can disable automatic reconnection, allowing the user to handle reconnection manually:

logger.configure('tag_prefix', {
   host: 'localhost',
   port: 24224,
   timeout: 3.0,
   enableReconnect: false // defaults to true
});

Shared key authentication

Logger configuration:

var logger = require('fluent-logger').createFluentSender('dummy', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0,
  reconnectInterval: 600000, // 10 minutes
  security: {
    clientHostname: "client.localdomain",
    sharedKey: "secure_communication_is_awesome"
  }
});
logger.emit('debug', { message: 'This is a message' });

Server configuration:

<source>
  @type forward
  port 24224
  <security>
    self_hostname input.testing.local
    shared_key secure_communication_is_awesome
  </security>
</source>

<match dummy.*>
  @type stdout
</match>

See also Fluentd examples.

TLS/SSL encryption

Logger configuration:

var logger = require('fluent-logger').createFluentSender('dummy', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0,
  reconnectInterval: 600000, // 10 minutes
  security: {
    clientHostname: "client.localdomain",
    sharedKey: "secure_communication_is_awesome"
  },
  tls: true,
  tlsOptions: {
    ca: fs.readFileSync('/path/to/ca_cert.pem')
  }
});
logger.emit('debug', { message: 'This is a message' });

Server configuration:

<source>
  @type forward
  port 24224
  <transport tls>
    ca_cert_path /path/to/ca_cert.pem
    ca_private_key_path /path/to/ca_key.pem
    ca_private_key_passphrase very_secret_passphrase
  </transport>
  <security>
    self_hostname input.testing.local
    shared_key secure_communication_is_awesome
  </security>
</source>

<match dummy.*>
  @type stdout
</match>

FYI: You can generate certificates using the fluent-ca-generate command since Fluentd 1.1.0.

See also How to enable TLS/SSL encryption.

Mutual TLS Authentication

Logger configuration:

var logger = require('fluent-logger').createFluentSender('dummy', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0,
  reconnectInterval: 600000, // 10 minutes
  security: {
    clientHostname: "client.localdomain",
    sharedKey: "secure_communication_is_awesome"
  },
  tls: true,
  tlsOptions: {
    ca: fs.readFileSync('/path/to/ca_cert.pem'),
    cert: fs.readFileSync('/path/to/client-cert.pem'),
    key: fs.readFileSync('/path/to/client-key.pem'),
    passphrase: 'very-secret'
  }
});
logger.emit('debug', { message: 'This is a message' });

Server configuration:

<source>
  @type forward
  port 24224
  <transport tls>
    ca_path /path/to/ca-cert.pem
    cert_path /path/to/server-cert.pem
    private_key_path /path/to/server-key.pem
    private_key_passphrase very_secret_passphrase
    client_cert_auth true
  </transport>
  <security>
    self_hostname input.testing.local
    shared_key secure_communication_is_awesome
  </security>
</source>

<match dummy.*>
  @type stdout
</match>

EventTime support

You can also pass an EventTime as the timestamp:

var FluentLogger = require('fluent-logger');
var EventTime = FluentLogger.EventTime;
var logger = FluentLogger.createFluentSender('tag_prefix', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0,
  reconnectInterval: 600000 // 10 minutes
});
var eventTime = new EventTime(1489547207, 745003500); // 2017-03-15 12:06:47 +0900
logger.emit('tag', { message: 'This is a message' }, eventTime);
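EventTime carries a seconds/nanoseconds pair, preserving sub-second precision that a plain Unix timestamp loses. A plain-JavaScript sketch of that split (this helper is illustrative, not part of the library):

```javascript
// Split a Date into the (seconds, nanoseconds) pair that EventTime represents.
function toEventTimePair(date) {
  var ms = date.getTime();
  return {
    epoch: Math.floor(ms / 1000), // whole seconds since the Unix epoch
    nano: (ms % 1000) * 1e6       // remaining milliseconds expressed as nanoseconds
  };
}
```

Note that a Date only has millisecond resolution, so the nanosecond field derived this way is always a multiple of 1e6; constructing an EventTime directly, as in the example above, allows full nanosecond precision.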

Events

var logger = require('fluent-logger').createFluentSender('tag_prefix', {
   host: 'localhost',
   port: 24224,
   timeout: 3.0,
   reconnectInterval: 600000 // 10 minutes
});
logger.on('error', (error) => {
  console.log(error);
});
logger.on('connect', () => {
  console.log('connected!');
});

Logging Library Support

log4js

Use log4js-fluent-appender.

winston

Before using winston support, you must install winston in your application:

var winston = require('winston');
var config = {
  host: 'localhost',
  port: 24224,
  timeout: 3.0,
  requireAckResponse: true // wait for an ack from Fluentd to confirm delivery
};
var FluentTransport = require('fluent-logger').support.winstonTransport();
var fluent = new FluentTransport('mytag', config);
var logger = winston.createLogger({
  transports: [fluent, new (winston.transports.Console)()]
});

logger.on('flush', () => {
  console.log("flush");
})

logger.on('finish', () => {
  console.log("finish");
  fluent.sender.end("end", {}, () => {})
});

logger.log('info', 'this log record is sent to fluent daemon');
logger.info('this log record is sent to fluent daemon');
logger.info('end of log message');
logger.end();

NOTE If you use winston@2, you can use [email protected] or earlier. If you use winston@3, you can use [email protected] or later.

stream

Several libraries use streams as output:

'use strict';
const Console = require('console').Console;
var sender = require('fluent-logger').createFluentSender('tag_prefix', {
   host: 'localhost',
   port: 24224,
   timeout: 3.0,
   reconnectInterval: 600000 // 10 minutes
});
var logger = new Console(sender.toStream('stdout'), sender.toStream('stderr'));
logger.log('this log record is sent to fluent daemon');
setTimeout(()=> sender.end(), 5000);

Options

tag_prefix

The tag prefix string. You can specify null when you use FluentSender directly; in this case, you must specify a label when you call emit.
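To illustrate how the final tag is assembled from tag_prefix and label (this is assumed behavior inferred from the description above; the helper itself is hypothetical):

```javascript
// With a tag_prefix, emit('label', ...) produces 'tag_prefix.label'.
// With a null tag_prefix, the label alone becomes the tag.
function buildTag(tagPrefix, label) {
  if (tagPrefix && label) return tagPrefix + '.' + label;
  return tagPrefix || label;
}
```

So a sender created with a null prefix can emit to any full tag, e.g. `buildTag(null, 'app.access')` gives `'app.access'`.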

host

The hostname. Default value = 'localhost'.

See socket.connect

port

The port to connect to. Default value = 24224.

See socket.connect

path

The path to your Unix Domain Socket. If you set path then fluent-logger ignores host and port.

See socket.connect
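A small sketch of the documented precedence (the connect-argument shape mirrors Node's socket.connect; the option names come from this section, the helper is hypothetical):

```javascript
// If `path` is set, host and port are ignored and the Unix domain socket is used.
function connectArgs(options) {
  return options.path
    ? { path: options.path }
    : { host: options.host || 'localhost', port: options.port || 24224 };
}
```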

timeout

Sets the socket to time out after timeout milliseconds of inactivity on the socket.

See socket.setTimeout

reconnectInterval

Sets the reconnect interval in milliseconds. If an error occurs, the logger reconnects after this interval.

requireAckResponse

Changes the protocol to at-least-once delivery. The logger waits for an ack from the destination.

ackResponseTimeout

This option is used when requireAckResponse is true. The default is 190. This default value is based on the popular tcp_syn_retries setting.

eventMode

Sets the event mode. This logger supports Message, PackedForward and CompressedPackedForward. Default is Message.

NOTE: We will change the default to PackedForward and drop Message in the next major release.

flushInterval

Sets the flush interval in milliseconds. This option has no effect in Message mode. The logger stores emitted events in a buffer and flushes them on each interval. Default is 100.
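The buffering described above can be sketched as a queue that a timer drains every flushInterval milliseconds (illustrative only; in the real logger the batch is packed with msgpack before sending):

```javascript
// Events accumulate in a buffer; flush() hands the whole batch to a sender
// function. In the real logger a timer invokes the flush step every flushInterval ms.
function makeBufferedEmitter(flushFn) {
  var buffer = [];
  return {
    emit: function (event) { buffer.push(event); }, // buffered, not sent immediately
    flush: function () {
      if (buffer.length > 0) flushFn(buffer.splice(0)); // empty the buffer as one batch
    }
  };
}
```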

messageQueueSizeLimit

Maximum number of messages that can be in the queue at the same time. If a new message arrives when the queue is full, the oldest message is removed before the new one is added. This option only has an effect in Message mode. No limit by default.
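The drop-oldest behavior can be sketched as follows (illustrative helper, not the library's internal code; a falsy limit means unbounded, matching the "no limit by default"):

```javascript
// Enqueue with a size limit: when the queue is full, discard the oldest entry first.
function enqueueWithLimit(queue, message, limit) {
  if (limit && queue.length >= limit) {
    queue.shift(); // remove the oldest message to make room
  }
  queue.push(message);
  return queue;
}
```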

security.clientHostname

Sets the hostname of this logger. This value is used for hostname-based authentication.

security.sharedKey

Shared key between client and server.

security.username

Sets the username for user-based authentication. Default value is an empty string.

security.password

Sets the password for user-based authentication. Default value is an empty string.

sendQueueSizeLimit

Queue size limit in bytes. This option has no effect in Message mode. Default is 8 MiB.

tls

Enables TLS for the socket.

tlsOptions

Options to pass to tls.connect when tls is true.

For more details, see the Node.js tls.connect documentation.

internalLogger

Sets the internal logger object for FluentLogger. console is used by default. The logger must provide info and error methods.
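Any object providing info and error methods can be used. A minimal sketch that captures internal messages instead of printing them (the variable names here are illustrative):

```javascript
// Collect fluent-logger's internal messages rather than writing to the console.
var captured = [];
var capturingLogger = {
  info: function (msg) { captured.push('INFO: ' + msg); },
  error: function (msg) { captured.push('ERROR: ' + msg); }
};
// It would be passed as an option, e.g.:
// createFluentSender('tag_prefix', { internalLogger: capturingLogger });
capturingLogger.info('connected');
capturingLogger.error('connection refused');
```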

Examples

Winston Integration

An example of integrating with Winston can be found at ./example/winston.

You will need Docker Compose to run it. After navigating to ./example/winston, run docker-compose up and then node index.js. You should see an "it works" message in the Docker logs, output via Fluentd.

License

Apache License, Version 2.0.

About NodeJS versions

This package is compatible with Node.js versions >= 6.


fluent-logger-node's Issues

When used as a log4js appender, the application dies when fluentd is dead

Minimal example to reproduce

var log4js = require('log4js');

log4js.addAppender(require('fluent-logger').support.log4jsAppender('app.test', {
  host: '/tmp/fluentd.sock',
  timeout: 3.0
}));

var logger = log4js.getLogger('foo');

setInterval(function() {
  var latency = Math.round(Math.random() * 100);
  logger.info('this log record is sent to fluent daemon');
}, 5000);

When we stop fluentd we get the following error, and the application stops:

user@social:~/test/log# node ./test.js
[2015-09-18 18:42:31.166] [INFO] foo - this log record is sent to fluent daemon
events.js:141
      throw er; // Unhandled 'error' event
      ^

Error: connect ECONNREFUSED /tmp/fluentd.sock
    at Object.exports._errnoException (util.js:837:11)
    at exports._exceptionWithHostPort (util.js:860:20)
    at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1060:14)

Support PackedForward mode to improve performance

Move from #73

@slang800 said:

It depends a lot on how you're doing packed forward mode. If you're trying to send messages as soon as possible then you're not going to be building up a backlog until you get well over 250 msg/sec (even if you limit to 1 outstanding ack at any given time), so messages will be sent individually up until that point. You could wait until you have a backlog of n messages before you pack them all together, but then if the process crashes up to n - 1 messages would be lost... And those messages would be from right before the crash, so they're probably important. You could pack and send a backlog of messages on an interval, but that creates the same type of problem n seconds of data could be lost in a crash.

Logger can't be configured using a JSON file

The logger does not seem to be configurable using a JSON file:

{
  "appenders": [
    {
        "type": "fluent-logger",
        "host": "localhost",
        "port": 24224,
        "timeout": 3.0,
        "reconnectInterval": 600000
    }
  ]
}

The module does not expose an "appender" function as mentioned in the docs. That function is inside "support", and the options are not bound correctly.

From log4js:

module.exports.appenders[appender] = appenderModule.appender.bind(appenderModule);

Unhandled exception if fluentd instance dies in the middle of queue flushing

If the fluentd endpoint dies in the middle of message queue flushing, unhandled exceptions start being raised while the library tries to reconnect to fluentd. Here's the error:

TypeError: Cannot read property 'write' of null
    at FluentSender._doWrite (/var/app/current/node_modules/fluent-logger/lib/sender.js:360:15)
    at FluentSender._doFlushSendQueue (/var/app/current/node_modules/fluent-logger/lib/sender.js:334:10)
    at process.nextTick (/var/app/current/node_modules/fluent-logger/lib/sender.js:390:14)
    at _combinedTickCallback (internal/process/next_tick.js:73:7)
    at process._tickDomainCallback (internal/process/next_tick.js:128:9)

Add support for Unix-domain sockets

I am using an environment where our fluentd server is exposed via Unix-domain socket rather than a TCP socket. I looked at issue #30 which was closed after a work-around was given. I'd like for this support to be first class rather than forking and patching myself.

Is there interest in this? I can submit a PR if so. FWIW, the suggested fix in #30 isn't right for general usage, because it changes the default value of the port field; a separate unixSocket field or something similar seems like it would be a clearer option.

"Error: This socket is closed" occurs

logger = require('fluent-logger')
logger.configure("test", {host: "localhost", port: 24224});
logger.emit("error", {hoge: parseInt(null)});
logger.emit("error", {hoge: "fuga"}); //Error: This socket is closed.

When data that evaluates to NaN is written, fluentd outputs

forward error: 'NaN' is an invalid number

and fluent-logger closes the socket in its on('error') handler, so the next attempt to write a log raises the exception

Error: This socket is closed

Being unable to log NaN at all is painful, so please allow NaN values to be written as well.

I also think a mechanism is needed so that a closed socket is not reused.

Define log levels

Is it possible to define log levels like in winston, to log only the specified levels?
And if yes, how could it be implemented?

Thanx and regards,
Andreas

How to add timestamp to message data in winston + fluent transport

Is there a way to automatically include a timestamp in the fluentTransport as part of the message? I can see in the code that the packet sent to fluentd includes a timestamp created in the FluentSender, but there is no way to include that, or an application-generated timestamp, in the message data.

I guess the issue is that the fluentTransport does not support custom formatting of the message as it is done in other winston transports.

Thanks!

Module not found: Error: Can't resolve 'winston-transport'

I'm getting this error when I try to set up the logger. Currently on version 2.8.0:

ERROR in ./node_modules/fluent-logger/lib/winston.js
Module not found: Error: Can't resolve 'winston-transport' in '/path/to/code/webpack/doc_page/node_modules/fluent-logger/lib'
 @ ./node_modules/fluent-logger/lib/winston.js 8:18-46
 @ ./node_modules/fluent-logger/lib/index.js
 @ ./src/shared/utils/fluent_monkey_patch.js
 @ ./src/index.server.jsx

self._socket.write TypeError: Cannot read property 'write' of null

I am trying to build a winston transport with fluent-logger, but for some reason I receive this error:

TypeError: Cannot read property 'write' of null
  at FluentSender._flushSendQueue (/project/node_modules/winston-fluentd/node_modules/fluent-logger/lib/sender.js:145:17)
  at /project/node_modules/winston-fluentd/node_modules/fluent-logger/lib/sender.js:48:10
  at Socket. (/project/node_modules/winston-fluentd/node_modules/fluent-logger/lib/sender.js:118:9)
  at Socket.g (events.js:260:16)
  at emitNone (events.js:72:20)
  at Socket.emit (events.js:166:7)
  at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1044:10)

Currently I am trying to use it in a Sails.js application.
node: 4.4.0

Here is my module code where I call fluent-logger:

  var logger = require('fluent-logger')
  logger.configure(this.tag, {
    host: this.host,
    port: this.port,
    timeout: 3.0,
    reconnectInterval: 600000 // 10 minutes
  });
  logger.emit(this.label, {message: msg});

Any idea what is causing the issue? What can I try?

EventEmitter memory leak?

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
at EventEmitter.addListener (events.js:175:15)
at FluentSender.forEach.FluentSender.(anonymous function) as on
at FluentSender.log (/Users/kuno/Hub/devo-ps/api.devo.ps/node_modules/devops-log/lib/log.js:25:16)
at FluentSender.notify (/Users/kuno/Hub/devo-ps/api.devo.ps/node_modules/devops-log/lib/log.js:47:20)
at module.exports (/Users/kuno/Hub/devo-ps/api.devo.ps/node_modules/devops-log/utils/errorMiddleware.js:39:20)
at process.startup.processNextTick.process._tickCallback (node.js:245:9)

logger.end unsupported in winston transport

problem statements

  • logger.end doesn't work when using winston
  • logger.end isn't documented

discussion

logger.end perhaps shouldn't be a thing™ (i.e. don't mangle winston's API); however, there's gotta be a way to tear down a process and not have the logger unintentionally keep it alive. Any ideas on that front?

I did this as a hack:

// logger.js
var FluentTransport = require('fluent-logger').support.winstonTransport()
var fluentTransport = new FluentTransport('loud-n-rad', config)
var logger = new (winston.Logger)({
  transports: [
    fluentTransport,
    new (winston.transports.Console)()
  ]
})
logger.end = function () {
  fluentTransport.sender.end()
}
module.exports = logger

Why does it not work?

This works:

var express = require('express');
var app = express.createServer(express.logger());

var logger = require('fluent-logger');
logger.configure('fluentd.test', {host: '192.168.33.11', port:24224});

app.get('/', function(request, response) {
  logger.emit('follow', {from: 'userA', to: 'userB'});
  response.send('Hello World!');
});
var port = process.env.PORT || 3000;
app.listen(port, function() {
  console.log("Listening on " + port);
});  

This does not:

var express = require('express');
var app = express(express.logger());
var log4js = require('log4js');
var fluentd =  require('fluent-logger');

log4js.addAppender(fluentd.support.log4jsAppender('fluentd.test', {
   host: '192.168.33.11',
   port: 24224,
   timeout: 3.0
}));

var logger = log4js.getLogger('foo');
logger.info('this log record is sent to fluent daemon'); 

It returns a network error:

Error: connect ECONNREFUSED
    at errnoException (net.js:770:11)
    at Object.afterConnect [as oncomplete] (net.js:761:19)

Install failed

I installed Python 2.7.3 and VS C++ 2010 Express on Windows 7, then installed fluent-logger with "npm install fluent-logger". But the cmd showed:

npm http GET https://registry.npmjs.org/fluent-logger
npm http 304 https://registry.npmjs.org/fluent-logger
npm http GET https://registry.npmjs.org/msgpack
npm http 304 https://registry.npmjs.org/msgpack

[email protected] install C:\Users\Gianmario\Downloads\myspace\node_modules\fluent
-logger\node_modules\msgpack
node-gyp rebuild

npm ERR! [email protected] install: node-gyp rebuild
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is most likely a problem with the msgpack package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get their info via:
npm ERR! npm owner ls msgpack
npm ERR! There is likely additional logging output above.

npm ERR! System Windows_NT 6.1.7600
npm ERR! command "C:\Program Files\nodejs\node.exe" "C:\Program Files\nod
ejs\node_modules\npm\bin\npm-cli.js" "install" "fluent-logger"
npm ERR! cwd C:\Users\Gianmario\Downloads\myspace
npm ERR! node -v v0.10.4
npm ERR! npm -v 1.2.18
npm ERR! syscall spawn
npm ERR! code ELIFECYCLE
npm ERR! errno ENOENT
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! C:\Users\Gianmario\Downloads\myspace\npm-debug.log
npm ERR! not ok code 0

I don't know how to fix it.

Queue is not flushed at reconnection?

If fluentd is started from a stopped state, the queue is not flushed.

(fluent.js)
logger.configure(...);
logger.emit('xxx', data, null, function() {
  console.log('SUCCESS'); <- not output...
});
$ node fluent.js
# stop fluentd
Fluentd error { [Error: connect ECONNREFUSED xxx:24224]
  code: 'ECONNREFUSED',
  errno: 'ECONNREFUSED',
  syscall: 'connect',
  address: 'xxx',
  port: 24224 }
Fluentd will reconnect after 3 seconds
Fluentd is reconnecting...
Fluentd error { [Error: connect ECONNREFUSED xxx:24224]
  code: 'ECONNREFUSED',
  errno: 'ECONNREFUSED',
  syscall: 'connect',
  address: 'xxx',
  port: 24224 }
Fluentd will reconnect after 3 seconds
Fluentd is reconnecting...
# start fluentd ('SUCCESS' is not output)
Fluentd reconnection finished!!

I want the queue to be flushed at reconnection.

e.g. ...
naomichi-y@730c282

Do you have any idea?

Support log4js 2.2.0

fluent-logger-node works with log4js 1.1.1, but doesn't work with log4js 2.2.0, because log4js 2.2.0 added and removed APIs.

program not gracefully terminating after logger.emit()

Hi, I am using fluent-logger for the first time.
After logger.emit(), control doesn't return to my Node.js program.

Is it something that I am missing?

My code:

var logger = require('fluent-logger')
// The 2nd argument can be omitted. Here is a default value for options. 
logger.configure('mongo', {
   host: 'localhost',
   port: 24224,
   timeout: 3.0,
   reconnectInterval: 180000 // 3 minutes 
});
 
// send an event record with 'tag.label' 
logger.emit('info', 
    {
        type: 'info',
        message: {name: 'testprogram', description: 'hello world 111', timestamp: (new Date()).toString()},
        notif: false,
        caller: 'mylibrary/test.js'
    });

My fluent.conf file:

<source>
  @type forward
  port 24224
</source>
<match mongo.**>
  @type mongo
  host 192.168.8.152
  port 27017
  database TransactionData
  collection logs
  include_time_key true
  time_key time

  # flush
  flush_interval 10s
</match>


I can see my logs going to MongoDB using logger.emit(), but my test program doesn't terminate when I use the logger.emit() statement.

Thanks
Farrukh

`TypeError: Cannot call method 'write' of null` if fluentd is unavailable

The emit() function of sender.js tries to connect to fluentd, emits an "error" if it failed, but it also calls _flushSendQueue() even if the connection failed.

After sending data and receiving the 'error' event multiple times, _flushSendQueue() raises an error, because self._socket is null.

/var/app/current/node_modules/fluent-logger/lib/sender.js:128
    self._socket.write(new Buffer(item.packet), function(){
                 ^
TypeError: Cannot call method 'write' of null
    at FluentSender._flushSendQueue (/var/app/current/node_modules/fluent-logger/lib/sender.js:128:18)
    at FluentSender._flushSendQueue (/var/app/current/node_modules/fluent-logger/lib/sender.js:132:12)
    at process.startup.processNextTick.process._tickCallback (node.js:244:9)

I made a dirty patch in _flushSendQueue() by checking if the socket is null, and in this case dropping the queued data.

  if( item === undefined ){
    // nothing written;
  }else{
    self._sendQueueTail--;
    self._sendQueue.shift();
+    if( self._socket === null ){
+      console.log('Fluent items were dropped because fluentd is unavailable.');
+      return;
+    }
    self._socket.write(new Buffer(item.packet), function(){
      item.callback && item.callback();
    });

The python version seems to handle this by keeping a buffer with a maximum size bufmax. So it tries sending the data multiple times before dropping the data without warnings.

The node version could perhaps send a new event (or re-use 'error') to warn the developer.

Layout is ignored

When I configure an appender using a custom Layout, it is not used to format the message.
You should add a dependency on log4js in order to access the layout function.

If you can't fix it quickly, I will try to fix it in a PR.

Ack response handling

Move from #73 (about ack response)

What should we do about ack response handling?

I've investigated:

  • Fluentd v0.12 out_forward blocks

  • Fluentd v0.14 out_forward non-blocking (create another thread to wait ack response)

  • Fluency (yet another fluent-logger implementation for Java) non-blocking

  • fluent-logger-perl blocks

  • fluent-logger-ruby unsupported

  • fluent-logger-{java,go,ocaml,d,python} unsupported

  • fluent-logger-node v2.4.1

    • wait for ack and block writing new data
    • but don't block for emitting new data (store emitted data into internal queue)

cc/ @slang800 @mururu

Too many open sockets?

We just recently upgraded to 2.4.2 as of yesterday and immediately noticed severe network issues in our cluster. To give a little background on our deployment: we are running a Kubernetes cluster where fluentd is deployed as a DaemonSet. This allows any app in the cluster to connect and push data locally without having to cross the network.

I just quickly went through #77, which was just merged in and published to npm. If I'm not understanding the code correctly, let me know, but it looks like a new socket is being created on every single flush. If we are doing 500-1000 emits to the logger per second, this is a very high number of sockets being created, which affects both the app and any other applications running on the same VM.

socket.write(new Buffer(item.packet), () => {
  if (this.requireAckResponse) {
    socket.once('data', (data) => {
      timeoutId && clearTimeout(timeoutId);
      var response = msgpack.decode(data, { codec: codec });
      if (response.ack !== item.options.chunk) {
        var error = new FluentLoggerError.ResponseError(
          'ack in response and chunk id in sent data are different',
          { ack: response.ack, chunk: item.options.chunk });
        this._handleEvent('error', error, item.callback);
      }
      item.callback && item.callback();
      socket && socket.destroy();
      socket = null;
      process.nextTick(() => {
        this._doFlushSendQueue(socket);
      });
    });
    timeoutId = setTimeout(() => {
      var error = new FluentLoggerError.ResponseTimeout('ack response timeout');
      this._handleEvent('error', error, item.callback);
    }, this.ackResponseTimeout);
  } else {
    item.callback && item.callback();
    socket && socket.destroy();
    socket = null;
    process.nextTick(() => {
      this._doFlushSendQueue(socket);
    });
  }
});
// TODO: how should we recover if dequeued items are not sent.
}

AWS Lambda timing out due to open socket.

We are currently using your library in some of our AWS Lambda functions.

The way we use it is as a Stream in our bunyan logger.
Even though everything seems to be working fine, and the logs reach our backend log aggregator servers, the Lambdas keep timing out 100% of the time.

setting

context.callbackWaitsForEmptyEventLoop = false

seems to help to get rid of the timeouts, however the logs stop being sent to the backend.

We haven't nailed down the issue 100% but it seems very likely that the timeouts are happening because of your library leaving connections/sockets open.

Can someone please advise?

'ack in response and chunk id in sent data are different' if logs are submitted too quickly

Using this test code:

var fluent = require('./lib');
var ref = '0.0.0.0:24224'.split(':'), host = ref[0], port = ref[1];
var logger = fluent.createFluentSender('tester', {
  host: host,
  port: port,
  timeout: 3.0,
  reconnectInterval: 600000,
  requireAckResponse: true
});

var logNumber = 0;

var sendMsg = function() {
  var data, time;
  time = Math.round(Date.now() / 1000);
  data = {
    logNumber: logNumber
  };
  logNumber += 1;
  return logger.emit('test', data, time);
};

setInterval(sendMsg, 10);

I get these errors:

Fluentd error { Error
    at Timeout.<anonymous> (/home/slang/proj/forks/fluent-logger-node/lib/sender.js:204:27)
    at ontimeout (timers.js:488:11)
    at tryOnTimeout (timers.js:323:5)
    at Timer.listOnTimeout (timers.js:283:5)
  name: 'ResponseError',
  message: 'ack in response and chunk id in sent data are different',
  options: 
   { ack: 'o5zRrA/EeHL3HUoHpB0Zhg==',
     chunk: 'rBKxRgOoDOMd/ZdzdJqOuA==' } }
Fluentd will reconnect after 600 seconds
Fluentd error { Error
    at Timeout.<anonymous> (/home/slang/proj/forks/fluent-logger-node/lib/sender.js:204:27)
    at ontimeout (timers.js:488:11)
    at tryOnTimeout (timers.js:323:5)
    at Timer.listOnTimeout (timers.js:283:5)
  name: 'ResponseError',
  message: 'ack in response and chunk id in sent data are different',
  options: 
   { ack: 'wFehfIh/mnUp87vkZqC/xw==',
     chunk: 'tgjLYMj3Mvk6vO8I9UBo0Q==' } }
Fluentd will reconnect after 600 seconds
Fluentd error { Error
    at Timeout.<anonymous> (/home/slang/proj/forks/fluent-logger-node/lib/sender.js:204:27)
    at ontimeout (timers.js:488:11)
    at tryOnTimeout (timers.js:323:5)
    at Timer.listOnTimeout (timers.js:283:5)
  name: 'ResponseError',
  message: 'ack in response and chunk id in sent data are different',
  options: 
   { ack: 'wi2TRtMCGKnd3+qtLpERtw==',
     chunk: '73bSwxgEXO67MkIYQtCpqQ==' } }
Fluentd will reconnect after 600 seconds
Fluentd error { Error
    at Timeout.<anonymous> (/home/slang/proj/forks/fluent-logger-node/lib/sender.js:204:27)
    at ontimeout (timers.js:488:11)
    at tryOnTimeout (timers.js:323:5)
    at Timer.listOnTimeout (timers.js:283:5)
  name: 'ResponseError',
  message: 'ack in response and chunk id in sent data are different',
  options: 
   { ack: 'b7Qm2aMz4zTLLdHYOcEc/Q==',
     chunk: 'lzvnEts0MbpuOFfFCBHesQ==' } }

MessagePack::MalformedFormatError

When I emit the following object, I get a MessagePack::MalformedFormatError in Fluentd:

{ timestamp: 2017-02-19T12:38:27.487Z,
  isUp: true,
  isResponsive: true,
  time: 490,
  monitorName: 'origin' }

The full error is this:

forward error error=#<MessagePack::MalformedFormatError: invalid byte> error_class=MessagePack::MalformedFormatError

When I convert all the values above to strings, the error disappears, but I'm not sure that's the correct way to resolve this. Any advice?

`_status` not reset after FluentLoggerError.MissingTag

On FluentLoggerError.MissingTag errors the reconnect handler (https://github.com/fluent/fluent-logger-node/blob/master/lib/sender.js#L453) does not set _status back to established, and does not re-emit the connect event.

Background: I've implemented a bounded event queue that queues if the fluent-logger is not connected, and attempts to flush when the logger reconnects. In cases of event data errors callers cannot be accurately notified of when the logger reconnects either through looking at _status or depending on the connect event.

Support Node.js v4 plan

I'm investigating that we should support which versions of Node.js.

This library depends on msgpack-node, so we can support the same versions that msgpack-node supports.

msgpack-node supports Node.js v0.12 or later. I've tested building msgpack-node (1.0.2 and HEAD) on the following versions:

Version      1.0.2  HEAD
v0.10.40     NG     NG
v0.12.7      OK     OK
iojs-v1.8.4  OK     OK
iojs-v2.5.0  OK     OK
iojs-v3.3.1  OK     OK
v4.0.0       OK     OK
v4.1.2       OK     OK
v4.2.1       OK     OK

msgpack-node v0.2.7 supports Node.js v0.10.x.

The Node.js LTS schedule says that the current LTS versions are v0.12.x and v4.2.x; v0.10.x has been in maintenance mode since 2015-10-01.

Therefore I think we can support Node.js versions as follows:

Branch name  Supported version  Comment
master       v0.12 or later     Supported actively. Uses the latest version of msgpack-node.
v0.10        v0.10.x            Bug fixes only. Uses an old version of msgpack-node.
nodejs-4     Drop               This branch does not support msgpack-node.

How about this plan? @edsiper @repeatedly

MsgPack pin versions

The current release of msgpack (1.0.0) has install errors on many platforms.
fluent-logger-node depends on msgpack, but the dependency version is currently not pinned, so npm tries to install the newest msgpack when installing this package and fails.
A stable version of the dependency should be pinned:

     "dependencies": {
         "msgpack": "0.2.6"
     },

npm install fluent-logger doesn't work

I'm on Windows 8 64-bit.
Python 3+ isn't supported, so I switched to Python 2.7.8.
This error came out:

SyntaxError: unqualified exec is not allowed in function 'namedtuple' it contains a nested function with free variables
gyp ERR! configure error
gyp ERR! stack Error: gyp failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (D:\software\node\node_modules\npm\node_modules\node-gyp\lib\configure.js:340:16)
gyp ERR! stack at ChildProcess.EventEmitter.emit (events.js:98:17)
gyp ERR! stack at Process.ChildProcess._handle.onexit (child_process.js:807:12)
gyp ERR! System Windows_NT 6.2.9200
gyp ERR! command "node" "D:\software\node\node_modules\npm\node_modules\node-gyp\bin\node-gyp.js" "rebuild"
...

Fluentd sender doesn't send logs when end() is called.

I am creating a Fluentd sender using the createFluentSender() function.
Below is my code:

var fluentdLogger = require('fluent-logger');
var winston = require('winston');

var fluentdSender = fluentdLogger.createFluentSender(lambdaName, fluentdConfig);

var logger = new (winston.Logger)({
  transports: [new (winston.transports.Console)()]
});

exports.setRequestId = function (reqId) {
  requestId = reqId;
};

logger.on('logging', (transport, level, message, meta) => {
  var data = { message: message, requestId: "2108191222", environment: environment, level: level };
  fluentdSender.emit('', data);
  if (meta.end) {
    fluentdSender.end();
  }
});

It does not log anything to the backend. If I remove fluentdSender.end(), logging works properly, but the process hangs, and in the case of Lambda a function timeout occurs.
Using setTimeout() cannot be the solution in my case, since we have multiple Lambdas, which would eventually increase cost for us.

I cannot use the winston transport support because I need to manipulate the message before emitting it.
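One sketch of a workaround (hedged; not confirmed as the library's intended pattern): since emit accepts an optional callback, call end() only from that callback, so the socket is not closed while the record is still queued. This assumes the callback fires after the record has been written. The `logAndClose` helper and the stub sender are hypothetical:

```javascript
// Defer end() until emit's callback confirms the record was handled,
// instead of calling end() immediately after emit().
function logAndClose(sender, data) {
  sender.emit('', data, () => sender.end());
}

// Stub sender (hypothetical) demonstrating the ordering guarantee:
// end() runs only after the emit callback has fired.
const calls = [];
const stub = {
  emit(label, record, cb) { calls.push('emit'); cb(); },
  end() { calls.push('end'); }
};

logAndClose(stub, { message: 'final record' });
console.log(calls.join(',')); // emit,end
```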

Queue Message until Reconnection Complete

I am using fluent-logger-node in a docker swarm environment which is to say the cluster of services operates within their own private network and I have one node service communicating to fluentd container within this internal network.

The problem I face in all socket situations (e.g. database connections) is that sockets seem to close after a period of time, which only becomes evident to the service after a failed connection attempt. I assume it's a Docker Swarm problem, but that's beside the point. The error output when this socket close occurs is ECONNRESET, which I have learned means the other side of the TCP conversation closed abruptly.

In the case of the pooled DB connections I have access to the socket object and enable the keepAlive setting on all pools, but even if I forked this repo and exposed the socket, I don't want to have to use this technique. It would seem more elegant to queue emitted messages until the connection is re-established.

I've looked at the .on('error') event, which I assume triggers on a failed connection, but I don't think it would be sensible to bind an error handler to every message I emit, as I don't seem to have the capacity to turn those handlers off in a success callback.

Also, does the callback argument of the emit method receive an error in the event of a failure? Or does the callback just fire regardless of message success and provide no value?

I was also thinking of setting the timeout to something short so that the socket gets refreshed regularly before the Docker Swarm network shuts it, but I had the value at 3.0 initially and my emits were still hit with ECONNRESET.

Any advice?

in order to handle 'error' events

Hi. I have a concern regarding the title.

For example, when I run my app with fluent-logger but without fluentd,
the process is stopped because of an unhandled 'error' event, like the following:

events.js:71
        throw arguments[1]; // Unhandled 'error' event
                       ^
Error: connect ECONNREFUSED
    at errnoException (net.js:770:11)
    at Object.afterConnect [as oncomplete] (net.js:761:19)

It seems that sender.js emits an error event, but there is no logic to handle it.
I think it would be better if fluent-logger provided an interface for us to define how to deal with this event, as follows:

logger.onError(function(err) {
  // some logic
});

It could be done by adding the following logic to fluent-logger/lib/index.js:

module.exports = {
  ...
  onError: function (callback) {
    sender.on('error', callback);
  }
}

Please let me know what you think about this point.

Thanks.

Switch to using node.js core library `Agent` for socket pooling

Reading through the source code, it looks like a new socket is made on every invocation. Both memory and CPU performance could be improved by switching to Node's core Agent class, which by default includes socket pooling and reuses existing sockets instead of making a new socket every time.

Can you confirm socket pooling isn't currently being used?
