fluent / fluent-logger-node
A structured logger for Fluentd (Node.js)
License: Apache License 2.0
When I emit the following object, I get a MessagePack::MalformedFormatError in Fluentd:
{ timestamp: 2017-02-19T12:38:27.487Z,
isUp: true,
isResponsive: true,
time: 490,
monitorName: 'origin' }
The full error is this:
forward error error=#<MessagePack::MalformedFormatError: invalid byte> error_class=MessagePack::MalformedFormatError
When I convert all the values above to strings, the error disappears, but I'm not sure that's the correct way to resolve this. Any advice?
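Plain Date objects are likely the culprit here: the msgpack encoder may not serialize them directly, which can produce bytes Fluentd rejects. Converting them to strings yourself is a reasonable fix; a minimal sketch (the helper name is an illustration, not part of the library):

```javascript
// Hypothetical helper: convert only Date values to ISO strings before
// emitting, leaving booleans and numbers alone.
function serializeDates(record) {
  const out = {};
  for (const key of Object.keys(record)) {
    const value = record[key];
    out[key] = value instanceof Date ? value.toISOString() : value;
  }
  return out;
}

const record = serializeDates({
  timestamp: new Date('2017-02-19T12:38:27.487Z'),
  isUp: true,
  time: 490,
  monitorName: 'origin'
});
console.log(record.timestamp);
```

The sanitized record can then be passed to `logger.emit()` as usual.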
I'm getting this error when I try to set up the logger. Currently on version 2.8.0.
ERROR in ./node_modules/fluent-logger/lib/winston.js
Module not found: Error: Can't resolve 'winston-transport' in '/path/to/code/webpack/doc_page/node_modules/fluent-logger/lib'
@ ./node_modules/fluent-logger/lib/winston.js 8:18-46
@ ./node_modules/fluent-logger/lib/index.js
@ ./src/shared/utils/fluent_monkey_patch.js
@ ./src/index.server.jsx
Is there any API for closing loggers?
If `requireAckResponse` is used, the 'ack response timeout' error will be thrown after `ackResponseTimeout` has elapsed, even if an ack is sent. There doesn't seem to be anything to clear the `setTimeout` used to throw that error.
Minimal example to reproduce
var log4js = require("log4js");
log4js.addAppender(require('fluent-logger').support.log4jsAppender('app.test', {
  host: '/tmp/fluentd.sock',
  timeout: 3.0
}));
var logger = log4js.getLogger('foo');
setInterval(function() {
  var latency = Math.round(Math.random() * 100); // unused here, kept from the repro
  logger.info('this log record is sent to fluent daemon');
}, 5000);
When we stop fluentd, we get the following error and the application stops:
user@social:~/test/log# node ./test.js
[2015-09-18 18:42:31.166] [INFO] foo - this log record is sent to fluent daemon
events.js:141
throw er; // Unhandled 'error' event
^
Error: connect ECONNREFUSED /tmp/fluentd.sock
at Object.exports._errnoException (util.js:837:11)
at exports._exceptionWithHostPort (util.js:860:20)
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1060:14)
If the fluentd endpoint dies in the middle of flushing the message queue, unhandled exceptions are raised while the library tries to reconnect to fluentd. Here's the error:
TypeError: Cannot read property 'write' of null
at FluentSender._doWrite (/var/app/current/node_modules/fluent-logger/lib/sender.js:360:15)
at FluentSender._doFlushSendQueue (/var/app/current/node_modules/fluent-logger/lib/sender.js:334:10)
at process.nextTick (/var/app/current/node_modules/fluent-logger/lib/sender.js:390:14)
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickDomainCallback (internal/process/next_tick.js:128:9)
Reading through the source code, it looks like a new socket is made on every invocation. Both memory and CPU time performance can be improved by switching to Node's core Agent class, which by default includes socket pooling and will reuse existing sockets instead of making a new socket every time.
Can you confirm socket pooling isn't currently being used?
https://travis-ci.org/fluent/fluent-logger-node/builds/5737213
This is because server.close() does not take a callback in 0.6.
When I configure an appender using a custom Layout, it's not used to format the message.
You should add a dependency on log4js in order to access the layout function.
If you can't fix it quickly, I will try to fix it in a PR.
On FluentLoggerError.MissingTag errors, the reconnect handler (https://github.com/fluent/fluent-logger-node/blob/master/lib/sender.js#L453) does not set `_status` back to `established`, and does not re-emit the `connect` event.
Background: I've implemented a bounded event queue that queues if the fluent-logger is not connected, and attempts to flush when the logger reconnects. In cases of event data errors, callers cannot be accurately notified of when the logger reconnects, either by looking at `_status` or by depending on the `connect` event.
I am trying to build a winston transport with fluent-logger, but for some reason I receive this error:
TypeError: Cannot read property 'write' of null
at FluentSender._flushSendQueue (/project/node_modules/winston-fluentd/node_modules/fluent-logger/lib/sender.js:145:17)
at /project/node_modules/winston-fluentd/node_modules/fluent-logger/lib/sender.js:48:10
at Socket.<anonymous> (/project/node_modules/winston-fluentd/node_modules/fluent-logger/lib/sender.js:118:9)
at Socket.g (events.js:260:16)
at emitNone (events.js:72:20)
at Socket.emit (events.js:166:7)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1044:10)
Currently I am trying to use it in a Sails.js application.
node: 4.4.0
Here is my module code where I call fluent-logger:
var logger = require('fluent-logger');
logger.configure(this.tag, {
  host: this.host,
  port: this.port,
  timeout: 3.0,
  reconnectInterval: 600000 // 10 minutes
});
logger.emit(this.label, {message: msg});
Any idea what is causing the issue? What can I try?
The current release of msgpack (1.0.0) has install errors on many platforms.
fluent-logger-node depends on msgpack, but the dependency versions are currently not pinned, so npm tries to install the newest msgpack when installing this package and fails.
Stable versions of dependent packages should be pinned:
"dependencies": {
"msgpack": "0.2.6"
},
- `logger.end` doesn't work when using winston
- `logger.end` isn't documented
- `logger.end` perhaps shouldn't be a thing™ (i.e. don't mangle winston's API)
However, there's gotta be a way to tear down a process and not have the logger unintentionally keep it alive. Any ideas on that front?
I did this as a hack:
// logger.js
var FluentTransport = require('fluent-logger').support.winstonTransport();
var fluentTransport = new FluentTransport('loud-n-rad', config);
var logger = new (winston.Logger)({
  transports: [
    fluentTransport,
    new (winston.transports.Console)()
  ]
});
logger.end = function () {
  fluentTransport.sender.end();
};
module.exports = logger;
I am using an environment where our fluentd server is exposed via Unix-domain socket rather than a TCP socket. I looked at issue #30 which was closed after a work-around was given. I'd like for this support to be first class rather than forking and patching myself.
Is there interest in this? I can submit a PR if so. FWIW, the suggested fix in #30 isn't right for general usage, because it changes the default value of the `port` field; a separate `unixSocket` field or something similar seems like it would be a clearer option.
Is there a way to automatically include a timestamp in the `fluentTransport` as part of the message? I can see in the code that the packet to fluentd is sent with a timestamp created in the `FluentSender`, but there is no way to include that, or an application-generated timestamp, in the data of the message.
I guess the issue is that the `fluentTransport` does not support custom formatting of the message as is done in other winston transports.
Thanks!
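Until the transport supports custom formatting, one workaround is to attach the timestamp yourself before the record reaches the transport. A minimal sketch (the helper name is an illustration, not library API):

```javascript
// Hypothetical pre-processing step: attach an application-generated
// timestamp to each record before handing it to the transport.
// An explicit timestamp already present in the record wins.
function withTimestamp(record) {
  return Object.assign({ timestamp: new Date().toISOString() }, record);
}

// e.g. logger.info(withTimestamp({ message: 'hello' }));
```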
The `emit()` function of `sender.js` tries to connect to fluentd and emits an 'error' if it fails, but it also calls `_flushSendQueue()` even if the connection failed.
After sending data and receiving the 'error' event multiple times, `_flushSendQueue()` raises an error because `self._socket` is null.
/var/app/current/node_modules/fluent-logger/lib/sender.js:128
self._socket.write(new Buffer(item.packet), function(){
^
TypeError: Cannot call method 'write' of null
at FluentSender._flushSendQueue (/var/app/current/node_modules/fluent-logger/lib/sender.js:128:18)
at FluentSender._flushSendQueue (/var/app/current/node_modules/fluent-logger/lib/sender.js:132:12)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
I made a dirty patch in `_flushSendQueue()` that checks whether the socket is null and, in that case, drops the queued data:
if( item === undefined ){
// nothing written;
}else{
self._sendQueueTail--;
self._sendQueue.shift();
+ if( self._socket === null ){
+ console.log('Fluent items were dropped because fluentd is unavailable.');
+ return;
+ }
self._socket.write(new Buffer(item.packet), function(){
item.callback && item.callback();
});
The Python version seems to handle this by keeping a buffer with a maximum size, `bufmax`. It tries sending the data multiple times before dropping it without warnings.
The Node version could perhaps emit a new event (or re-use 'error') to warn the developer.
This works:
var express = require('express');
var app = express.createServer(express.logger());
var logger = require('fluent-logger');
logger.configure('fluentd.test', {host: '192.168.33.11', port: 24224});
app.get('/', function(request, response) {
  logger.emit('follow', {from: 'userA', to: 'userB'});
  response.send('Hello World!');
});
var port = process.env.PORT || 3000;
app.listen(port, function() {
  console.log("Listening on " + port);
});
This does not:
var express = require('express');
var app = express(express.logger());
var log4js = require('log4js');
var fluentd = require('fluent-logger');
log4js.addAppender(fluentd.support.log4jsAppender('fluentd.test', {
  host: '192.168.33.11',
  port: 24224,
  timeout: 3.0
}));
var logger = log4js.getLogger('foo');
logger.info('this log record is sent to fluent daemon');
It returns a network error:
Error: connect ECONNREFUSED
at errnoException (net.js:770:11)
at Object.afterConnect [as oncomplete] (net.js:761:19)
This is very useful for local fluentd and remote aggregators.
Can you support EventTime in fluent-logger-node?
Move from #73 (about ack response)
What should we do about ack response handling?
I've investigated:
- Fluentd v0.12 out_forward: blocks
- Fluentd v0.14 out_forward: non-blocking (creates another thread to wait for the ack response)
- Fluency (yet another fluent-logger implementation, for Java): non-blocking
- fluent-logger-perl: blocks
- fluent-logger-ruby: unsupported
- fluent-logger-{java,go,ocaml,d,python}: unsupported
- fluent-logger-node: v2.4.1
I'm investigating which versions of Node.js we should support.
This library depends on msgpack-node, so we can support the same versions that are supported by msgpack-node.
msgpack-node supports Node.js v0.12 or later. I've tested building msgpack-node (1.0.2 and HEAD) on the following versions:
Version | 1.0.2 | HEAD |
---|---|---|
v0.10.40 | NG | NG |
v0.12.7 | OK | OK |
iojs-v1.8.4 | OK | OK |
iojs-v2.5.0 | OK | OK |
iojs-v3.3.1 | OK | OK |
v4.0.0 | OK | OK |
v4.1.2 | OK | OK |
v4.2.1 | OK | OK |
msgpack-node v0.2.7 supports Node.js v0.10.x.
LTS says that the current LTS versions are v0.12.x and v4.2.x. v0.10.x has been in maintenance mode since 2015-10-01.
Therefore I think we can support Node.js versions as follows:
Branch name | Supported version | Comment |
---|---|---|
master | v0.12 or later | Support actively. Using latest version of msgpack-node. |
v0.10 | v0.10.x | Bug fix only. Using old version of msgpack-node. |
nodejs-4 | Drop | This branch does not support msgpack-node. |
How about this plan? @edsiper @repeatedly
I installed Python 2.7.3 and VS C++ 2010 Express on Windows 7, then installed fluent-logger with "npm install fluent-logger". But the cmd showed:
npm http GET https://registry.npmjs.org/fluent-logger
npm http 304 https://registry.npmjs.org/fluent-logger
npm http GET https://registry.npmjs.org/msgpack
npm http 304 https://registry.npmjs.org/msgpack
[email protected] install C:\Users\Gianmario\Downloads\myspace\node_modules\fluent
-logger\node_modules\msgpack
node-gyp rebuild
npm ERR! [email protected] install: node-gyp rebuild
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is most likely a problem with the msgpack package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get their info via:
npm ERR! npm owner ls msgpack
npm ERR! There is likely additional logging output above.
npm ERR! System Windows_NT 6.1.7600
npm ERR! command "C:\Program Files\nodejs\node.exe" "C:\Program Files\nod
ejs\node_modules\npm\bin\npm-cli.js" "install" "fluent-logger"
npm ERR! cwd C:\Users\Gianmario\Downloads\myspace
npm ERR! node -v v0.10.4
npm ERR! npm -v 1.2.18
npm ERR! syscall spawn
npm ERR! code ELIFECYCLE
npm ERR! errno ENOENT
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! C:\Users\Gianmario\Downloads\myspace\npm-debug.log
npm ERR! not ok code 0
I don't know how to figure it out.
var logger = require('fluent-logger');
logger.configure("test", {host: "localhost", port: 24224});
logger.emit("error", {hoge: parseInt(null)}); // parseInt(null) is NaN
logger.emit("error", {hoge: "fuga"}); // Error: This socket is closed.
When data containing NaN is written, fluentd outputs:
forward error: 'NaN' is an invalid number
and fluent-logger closes the socket in its on('error') handler, so trying to write to the log again throws:
Error: This socket is closed
It's painful that writing NaN to the log fails, so I'd like NaN writes to be allowed as well.
Also, I think a mechanism is needed to handle the fact that a closed socket cannot be reused.
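Until the library tolerates NaN, one workaround is to sanitize records before emitting. A minimal sketch (the helper name is an illustration, not library API) that replaces NaN with null, which can be encoded safely:

```javascript
// Hypothetical guard: replace NaN values (which Fluentd rejects as an
// invalid number) with null before emitting.
function sanitizeNaN(record) {
  const out = {};
  for (const key of Object.keys(record)) {
    const value = record[key];
    out[key] = (typeof value === 'number' && Number.isNaN(value)) ? null : value;
  }
  return out;
}

// e.g. logger.emit("error", sanitizeNaN({hoge: parseInt(null)}));
```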
The recent build error is from the 0.6 engine (https://travis-ci.org/fluent/fluent-logger-node/jobs/16815111) during npm install, and CI with 0.8 is fine (https://travis-ci.org/fluent/fluent-logger-node/jobs/16815112). It is a good chance to remove 0.6 and add 0.10 to CI.
Sorry for my misunderstanding.
We are currently using your library in some of our AWS Lambda functions.
The way we use it is as a Stream in our bunyan logger.
Even though everything seems to be working fine, and the logs reach our backend log aggregator servers, the Lambdas keep timing out 100% of the time.
Setting context.callbackWaitsForEmptyEventLoop = false seems to help get rid of the timeouts; however, the logs then stop being sent to the backend.
We haven't nailed down the issue 100% but it seems very likely that the timeouts are happening because of your library leaving connections/sockets open.
Can someone please advise?
We just recently upgraded to 2.4.2 as of yesterday and immediately noticed severe network issues in our cluster. To give a little background on our deployment: we are running a Kubernetes cluster where fluentd is deployed as a DaemonSet. This allows any app in the cluster to connect and push data locally without having to jump the network.
I just quickly went through #77, which was just merged and published to npm. If I'm not understanding the code correctly let me know, but it looks like a new socket is being created on every single flush. If we are doing 500~1000 emits to the logger per second, this is a super high number of sockets being created, which affects both the app and any other applications running on the same VM.
fluent-logger-node/lib/sender.js
Lines 178 to 209 in 8da5132
I believe this is caused by `npm install msgpack` failing.
I cannot deploy my application anymore, even though it now works on Node 4.0:
[email protected]: wanted: {"node":">=4.0.0"} (current: {"node":"0.10.38","npm":"1.4.28"})
...
npm ERR! Failed at the [email protected] install script.
npm ERR! This is most likely a problem with the msgpack package,
Can this be resolved?
The logger seems not to be configurable using a JSON file:
{
  "appenders": [
    {
      "type": "fluent-logger",
      "host": "localhost",
      "port": 24224,
      "timeout": 3.0,
      "reconnectInterval": 600000
    }
  ]
}
The module does not expose an "appender" function like mentioned in the docs. This function is inside "support", and options are not bound correctly.
From log4js:
module.exports.appenders[appender] = appenderModule.appender.bind(appenderModule);
I am creating a fluentd sender using the createFluentSender() function.
Below is my code:
var winston = require('winston');
var fluentdLogger = require('fluent-logger');

// lambdaName, fluentdConfig, environment and requestId are defined elsewhere
var fluentdSender = fluentdLogger.createFluentSender(lambdaName, fluentdConfig);
var logger = new (winston.Logger)({
  transports: [new (winston.transports.Console)()]
});
exports.setRequestId = function(reqId) {
  requestId = reqId;
};
logger.on('logging', (transport, level, message, meta) => {
  var data = {"message": message, "requestId": "2108191222", "environment": environment, "level": level};
  fluentdSender.emit('', data);
  if (meta.end) {
    fluentdSender.end();
  }
});
It is not logging anything to the backend. If I remove fluentdSender.end(), it logs properly, but then the process hangs, and in the case of Lambda a function timeout occurs.
Using setTimeout() cannot be the solution in my case, since we have multiple Lambdas, which would eventually increase cost for us.
I cannot use winston support transport because I need to manipulate message before emit.
I would like to do some processing when the socket connection is established, and some processing when the connection is disconnected.
Please publish the changes for Node.js 4 support to npm.
Is it possible to define log levels like in winston, to log only the specified levels?
And if yes, how could it be implemented?
Thanx and regards,
Andreas
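Level filtering is not a built-in fluent-logger feature as far as I know, but it can be layered on top. An illustrative sketch (the names and the level table are assumptions, not library API):

```javascript
// Illustrative winston-style severity table (lower number = more severe).
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

// Wrap an emit function so only records at or above `minLevel` severity
// are forwarded.
function makeLeveledEmit(emit, minLevel) {
  return function (level, record) {
    if (LEVELS[level] <= LEVELS[minLevel]) {
      emit(level, record);
    }
  };
}

// e.g. wrap the real logger: makeLeveledEmit(logger.emit.bind(logger), 'warn')
```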
I'm on Windows 8, 64-bit.
Python 3+ isn't supported, so I switched to Python 2.7.8.
This error came out:
SyntaxError: unqualified exec is not allowed in function 'namedtuple' it contains a nested function with free variables
gyp ERR! configure error
gyp ERR! stack Error: gyp
failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (D:\software\node\node_modules\npm\node_modules\node-gyp\lib\configure.js:340:16)
gyp ERR! stack at ChildProcess.EventEmitter.emit (events.js:98:17)
gyp ERR! stack at Process.ChildProcess._handle.onexit (child_process.js:807:12)
gyp ERR! System Windows_NT 6.2.9200
gyp ERR! command "node" "D:\software\node\node_modules\npm\node_modules\node-gyp\bin\node-gyp.js" "rebuild"
...
I'm looking at the memory profile using this module, and I noticed that `this._sendQueueSize` never gets reset to 0 after send operations.
fluent-logger-node/lib/sender.js
Line 42 in a2fcc64
Is this intentional? Shouldn't it decrement or reset to 0 under certain conditions?
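The bookkeeping the question implies can be sketched like this, with illustrative names (not the library's actual internals): the size counter should shrink whenever an item leaves the queue.

```javascript
// Illustrative queue with a byte/char counter, analogous to
// _sendQueueSize: the counter grows on push and shrinks on dequeue.
class SendQueue {
  constructor() {
    this.items = [];
    this.size = 0; // total length of queued packets
  }
  push(packet) {
    this.items.push(packet);
    this.size += packet.length;
  }
  shift() {
    const packet = this.items.shift();
    if (packet !== undefined) {
      this.size -= packet.length; // decrement on dequeue
    }
    return packet;
  }
}
```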
Hi. I have a concern regarding the title.
For example, when I run my app with fluent-logger without fluentd, the process stops because of an "unhandled error event" like the following:
events.js:71
throw arguments[1]; // Unhandled 'error' event
^
Error: connect ECONNREFUSED
at errnoException (net.js:770:11)
at Object.afterConnect [as oncomplete] (net.js:761:19)
It seems that sender.js emits an error event, but there is no logic to handle it.
I think it would be better if fluent-logger could provide an interface for defining how to handle this event, as follows:
logger.onError(function(err) {
  // some logic
});
It could be done by adding the following logic to fluent-logger/lib/index.js:
module.exports = {
  // ...
  onError: function (callback) {
    sender.on('error', callback);
  }
};
Please let me know what you think about this point.
Thanks.
Hi, I am using fluent-logger for the first time.
After logger.emit(), control doesn't return to my Node.js program. Is there something I am missing?
My code:
var logger = require('fluent-logger');
// The 2nd argument can be omitted. Here is a default value for options.
logger.configure('mongo', {
  host: 'localhost',
  port: 24224,
  timeout: 3.0,
  reconnectInterval: 180000 // 3 minutes
});
// send an event record with 'tag.label'
logger.emit('info', {
  type: 'info',
  message: {name: 'testprogram', description: 'hello world 111', timestamp: (new Date()).toString()},
  notif: false,
  caller: 'mylibrary/test.js'
});
My fluent.conf file:
<source>
  @type forward
  port 24224
</source>
<match mongo.**>
  @type mongo
  host 192.168.8.152
  port 27017
  database TransactionData
  collection logs
  include_time_key true
  time_key time
  # flush
  flush_interval 10s
</match>
I can see my logs going to MongoDB using logger.emit(), but my test program doesn't terminate when I use the logger.emit() statement.
Thanks
Farrukh
Should I use http://docs.fluentd.org/articles/in_tcp or http://docs.fluentd.org/articles/in_forward?
This should be clear from README.md, like https://github.com/fluent/fluent-logger-python#configuration
Move from #73
@slang800 said:
It depends a lot on how you're doing packed forward mode. If you're trying to send messages as soon as possible, then you're not going to build up a backlog until you get well over 250 msg/sec (even if you limit to 1 outstanding ack at any given time), so messages will be sent individually up until that point. You could wait until you have a backlog of n messages before you pack them all together, but then if the process crashes, up to n - 1 messages would be lost... and those messages would be from right before the crash, so they're probably important. You could pack and send a backlog of messages on an interval, but that creates the same type of problem: n seconds of data could be lost in a crash.
Here is your comment at https://github.com/fluent/fluent-logger-node/blob/master/lib/sender.js#L153
I wonder whether there is a solution here, or any ideas?
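The interval/backlog trade-off described above can be sketched as a simple size-triggered batcher (names are illustrative; anything still sitting in the batch when the process dies is lost, which is exactly the trade-off being discussed):

```javascript
// Illustrative size-triggered batcher: records accumulate until
// maxBatch, then the whole batch is flushed at once.
class Batcher {
  constructor(flush, maxBatch) {
    this.flush = flush;
    this.maxBatch = maxBatch;
    this.batch = [];
  }
  add(record) {
    this.batch.push(record);
    if (this.batch.length >= this.maxBatch) {
      this.drain();
    }
  }
  drain() {
    if (this.batch.length > 0) {
      this.flush(this.batch); // hand the packed batch to the sender
      this.batch = [];
    }
  }
}
```

An interval-based variant would simply call `drain()` from a timer, accepting up to one interval's worth of loss on a crash.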
fluent-logger-node works with log4js 1.1.1 but doesn't work with log4js 2.2.0, because log4js 2.2.0 added/removed APIs.
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
at EventEmitter.addListener (events.js:175:15)
at FluentSender.forEach.FluentSender.(anonymous function) [as on]
at FluentSender.log (/Users/kuno/Hub/devo-ps/api.devo.ps/node_modules/devops-log/lib/log.js:25:16)
at FluentSender.notify (/Users/kuno/Hub/devo-ps/api.devo.ps/node_modules/devops-log/lib/log.js:47:20)
at module.exports (/Users/kuno/Hub/devo-ps/api.devo.ps/node_modules/devops-log/utils/errorMiddleware.js:39:20)
at process.startup.processNextTick.process._tickCallback (node.js:245:9)
msgpack seems to be unmaintained.
Using this test code:
var fluent = require('./lib');
var ref = '0.0.0.0:24224'.split(':'), host = ref[0], port = ref[1];
var logger = fluent.createFluentSender('tester', {
  host: host,
  port: port,
  timeout: 3.0,
  reconnectInterval: 600000,
  requireAckResponse: true
});
var logNumber = 0;
var sendMsg = function() {
  var data, time;
  time = Math.round(Date.now() / 1000);
  data = {
    logNumber: logNumber
  };
  logNumber += 1;
  return logger.emit('test', data, time);
};
setInterval(sendMsg, 10);
I get these errors:
Fluentd error { Error
at Timeout.<anonymous> (/home/slang/proj/forks/fluent-logger-node/lib/sender.js:204:27)
at ontimeout (timers.js:488:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:283:5)
name: 'ResponseError',
message: 'ack in response and chunk id in sent data are different',
options:
{ ack: 'o5zRrA/EeHL3HUoHpB0Zhg==',
chunk: 'rBKxRgOoDOMd/ZdzdJqOuA==' } }
Fluentd will reconnect after 600 seconds
Fluentd error { Error
at Timeout.<anonymous> (/home/slang/proj/forks/fluent-logger-node/lib/sender.js:204:27)
at ontimeout (timers.js:488:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:283:5)
name: 'ResponseError',
message: 'ack in response and chunk id in sent data are different',
options:
{ ack: 'wFehfIh/mnUp87vkZqC/xw==',
chunk: 'tgjLYMj3Mvk6vO8I9UBo0Q==' } }
Fluentd will reconnect after 600 seconds
Fluentd error { Error
at Timeout.<anonymous> (/home/slang/proj/forks/fluent-logger-node/lib/sender.js:204:27)
at ontimeout (timers.js:488:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:283:5)
name: 'ResponseError',
message: 'ack in response and chunk id in sent data are different',
options:
{ ack: 'wi2TRtMCGKnd3+qtLpERtw==',
chunk: '73bSwxgEXO67MkIYQtCpqQ==' } }
Fluentd will reconnect after 600 seconds
Fluentd error { Error
at Timeout.<anonymous> (/home/slang/proj/forks/fluent-logger-node/lib/sender.js:204:27)
at ontimeout (timers.js:488:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:283:5)
name: 'ResponseError',
message: 'ack in response and chunk id in sent data are different',
options:
{ ack: 'b7Qm2aMz4zTLLdHYOcEc/Q==',
chunk: 'lzvnEts0MbpuOFfFCBHesQ==' } }
Your library supports only connecting via host:port, but I need to connect via a Unix socket.
I am using fluent-logger-node in a docker swarm environment which is to say the cluster of services operates within their own private network and I have one node service communicating to fluentd container within this internal network.
The problem I face in all socket situations (e.g. database connections) is that sockets seem to close after a period of time, which only becomes evident to the service after a failed connection attempt. I assume it's a Docker Swarm problem, but that's beside the point. The error output to me when this socket close occurs is an ECONNRESET error, which I have learned means the other side of the TCP conversation abruptly closed.
In the case of the pooled DB connections I have access to the socket object and enable the keepAlive setting on all pools, but even if I forked this repo and exposed the socket, I don't want to have to use this technique. Rather, it would seem more elegant to queue emitted messages until a reconnection is established.
I've looked at the `.on` error event, which I would assume triggers in the event of a failed connection, but I don't think it would be sensible to bind an error event to every message I emit, as I don't seem to have the capacity to turn those events off in a success callback.
Also, does the callback arg in the `emit` method pass an error message in the event of an error? Or does the callback just fire regardless of message success and provide no value?
I was also thinking of setting the timeout to something short and frequent so that the socket gets refreshed regularly before the Docker Swarm network shuts the socket, but I had the value at 3.0 initially and my emits would still be hit with an ECONNRESET.
Any advice?
If Fluentd is started again from a stopped state, the queue is not flushed.
(fluent.js)
logger.configure(...);
logger.emit('xxx', data, null, function() {
  console.log('SUCCESS'); // <- not output...
});
$ node fluent.js
# stop fluentd
Fluentd error { [Error: connect ECONNREFUSED xxx:24224]
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect',
address: 'xxx',
port: 24224 }
Fluentd will reconnect after 3 seconds
Fluentd is reconnecting...
Fluentd error { [Error: connect ECONNREFUSED xxx:24224]
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect',
address: 'xxx',
port: 24224 }
Fluentd will reconnect after 3 seconds
Fluentd is reconnecting...
# start fluentd ('SUCCESS' is not output)
Fluentd reconnection finished!!
I want the queue to be flushed at reconnection time.
e.g. ...
naomichi-y@730c282
Do you have any idea?
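The desired behaviour can be sketched with a stand-in sender object (not the library's real internals): after a successful reconnect, drain the pending queue in order.

```javascript
// Illustrative drain step to run on the reconnect event: `sender` is a
// stand-in object with a pending queue and a write method, not the
// library's actual FluentSender.
function flushOnReconnect(sender) {
  while (sender.connected && sender.queue.length > 0) {
    sender.write(sender.queue.shift()); // send oldest records first
  }
}
```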