moscajs / mosca
MQTT broker as a module
Home Page: mosca.io
I'm consistently seeing this when I run the tests locally:
✖ 2 of 214 tests failed:
actual expected
1 | {
2 | "hello": {
3 | "qos": 1
4 | }
5 | }{}
make: Entering directory `/opt/programs/node/lib/node_modules/mosca/node_modules/zmq/build'
  CXX(target) Release/obj.target/zmq/binding.o
../binding.cc:28:17: warning: zmq.h: No such file or directory
  CXX(target) Release/obj.target/leveldb/deps/leveldb/leveldb-1.11.0/table/block.o
../binding.cc: In function ‘const char* zmq::ErrorMessage()’:
../binding.cc:156: error: ‘zmq_errno’ was not declared in this scope
../binding.cc:156: error: ‘zmq_strerror’ was not declared in this scope
../binding.cc: In constructor ‘zmq::Context::Context(int)’:
../binding.cc:211: error: ‘zmq_init’ was not declared in this scope
../binding.cc: In member function ‘void zmq::Context::Close()’:
../binding.cc:224: error: ‘zmq_term’ was not declared in this scope
../binding.cc: In member function ‘bool zmq::Socket::IsReady()’:
../binding.cc:296: error: ‘zmq_pollitem_t’ was not declared in this scope
../binding.cc:296: error: expected ‘;’ before ‘items’
../binding.cc:297: error: ‘items’ was not declared in this scope
../binding.cc:298: error: ‘ZMQ_POLLIN’ was not declared in this scope
../binding.cc:299: error: ‘zmq_poll’ was not declared in this scope
../binding.cc: In constructor ‘zmq::Socket::Socket(zmq::Context*, int)’:
../binding.cc:332: error: ‘zmq_socket’ was not declared in this scope
../binding.cc:344: error: ‘ZMQ_FD’ was not declared in this scope
../binding.cc:344: error: ‘zmq_getsockopt’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::GetSockOpt(int) [with T = char*]’:
../binding.cc:397: error: ‘zmq_getsockopt’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::SetSockOpt(int, v8::Handle<v8::Value>) [with T = char*]’:
../binding.cc:410: error: ‘zmq_setsockopt’ was not declared in this scope
../binding.cc: In static member function ‘static v8::Handle<v8::Value> zmq::Socket::GetSockOpt(const v8::Arguments&)’:
../binding.cc:438: error: ‘zmq_strerror’ was not declared in this scope
../binding.cc: In static member function ‘static v8::Handle<v8::Value> zmq::Socket::SetSockOpt(const v8::Arguments&)’:
../binding.cc:465: error: ‘zmq_strerror’ was not declared in this scope
../binding.cc: In static member function ‘static void zmq::Socket::UV_BindAsync(uv_work_t*)’:
../binding.cc:520: error: ‘zmq_bind’ was not declared in this scope
../binding.cc:521: error: ‘zmq_errno’ was not declared in this scope
../binding.cc: In static member function ‘static void zmq::Socket::UV_BindAsyncAfter(uv_work_t*)’:
../binding.cc:529: error: ‘zmq_strerror’ was not declared in this scope
../binding.cc: In static member function ‘static v8::Handle<v8::Value> zmq::Socket::BindSync(const v8::Arguments&)’:
../binding.cc:560: error: ‘zmq_bind’ was not declared in this scope
../binding.cc: In static member function ‘static v8::Handle<v8::Value> zmq::Socket::Connect(const v8::Arguments&)’:
../binding.cc:583: error: ‘zmq_connect’ was not declared in this scope
../binding.cc: At global scope:
../binding.cc:639: error: expected type-specifier before ‘zmq_msg_t’
../binding.cc:672: error: expected type-specifier before ‘zmq_msg_t’
../binding.cc:677: error: ‘zmq_msg_t’ does not name a type
../binding.cc: In member function ‘v8::Local<v8::Value> zmq::Socket::IncomingMessage::GetBuffer()’:
../binding.cc:646: error: ‘zmq_msg_data’ was not declared in this scope
../binding.cc:646: error: ‘zmq_msg_size’ was not declared in this scope
../binding.cc: In constructor ‘zmq::Socket::IncomingMessage::MessageReference::MessageReference()’:
../binding.cc:663: error: ‘msg_’ was not declared in this scope
../binding.cc:663: error: ‘zmq_msg_init’ was not declared in this scope
../binding.cc: In destructor ‘zmq::Socket::IncomingMessage::MessageReference::~MessageReference()’:
../binding.cc:668: error: ‘msg_’ was not declared in this scope
../binding.cc:668: error: ‘zmq_msg_close’ was not declared in this scope
../binding.cc: In static member function ‘static v8::Handle<v8::Value> zmq::Socket::Recv(const v8::Arguments&)’:
../binding.cc:706: error: ‘zmq_recvmsg’ was not declared in this scope
../binding.cc: At global scope:
../binding.cc:734: error: expected type-specifier before ‘zmq_msg_t’
../binding.cc:775: error: ‘zmq_msg_t’ does not name a type
../binding.cc: In constructor ‘zmq::Socket::OutgoingMessage::OutgoingMessage(v8::Handle<v8::Object>)’:
../binding.cc:722: error: ‘msg_’ was not declared in this scope
../binding.cc:723: error: ‘zmq_msg_init_data’ was not declared in this scope
../binding.cc: In destructor ‘zmq::Socket::OutgoingMessage::~OutgoingMessage()’:
../binding.cc:730: error: ‘msg_’ was not declared in this scope
../binding.cc:730: error: ‘zmq_msg_close’ was not declared in this scope
../binding.cc: In static member function ‘static v8::Handle<v8::Value> zmq::Socket::Send(const v8::Arguments&)’:
../binding.cc:809: error: ‘zmq_msg_t’ was not declared in this scope
../binding.cc:809: error: expected ‘;’ before ‘msg’
../binding.cc:812: error: ‘msg’ was not declared in this scope
../binding.cc:812: error: ‘zmq_msg_init_size’ was not declared in this scope
../binding.cc:816: error: ‘zmq_msg_data’ was not declared in this scope
../binding.cc:824: error: ‘zmq_sendmsg’ was not declared in this scope
../binding.cc: In member function ‘void zmq::Socket::Close()’:
../binding.cc:836: error: ‘zmq_close’ was not declared in this scope
../binding.cc: In function ‘v8::Handle<v8::Value> zmq::ZmqVersion(const v8::Arguments&)’:
../binding.cc:875: error: ‘zmq_version’ was not declared in this scope
../binding.cc: In function ‘void zmq::Initialize(v8::Handle<v8::Object>)’:
../binding.cc:943: error: ‘ZMQ_VERSION_MAJOR’ was not declared in this scope
../binding.cc:943: error: ‘ZMQ_VERSION_MINOR’ was not declared in this scope
../binding.cc:944: error: ‘ZMQ_PUB’ was not declared in this scope
../binding.cc:945: error: ‘ZMQ_SUB’ was not declared in this scope
../binding.cc:950: error: ‘ZMQ_REQ’ was not declared in this scope
../binding.cc:951: error: ‘ZMQ_XREQ’ was not declared in this scope
../binding.cc:952: error: ‘ZMQ_REP’ was not declared in this scope
../binding.cc:953: error: ‘ZMQ_XREP’ was not declared in this scope
../binding.cc:956: error: ‘ZMQ_PUSH’ was not declared in this scope
../binding.cc:957: error: ‘ZMQ_PULL’ was not declared in this scope
../binding.cc:958: error: ‘ZMQ_PAIR’ was not declared in this scope
../binding.cc:960: error: ‘ZMQ_POLLIN’ was not declared in this scope
../binding.cc:961: error: ‘ZMQ_POLLOUT’ was not declared in this scope
../binding.cc:962: error: ‘ZMQ_POLLERR’ was not declared in this scope
../binding.cc:964: error: ‘ZMQ_SNDMORE’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::GetSockOpt(int) [with T = int]’:
../binding.cc:427: instantiated from here
../binding.cc:377: error: ‘zmq_getsockopt’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::GetSockOpt(int) [with T = unsigned int]’:
../binding.cc:429: instantiated from here
../binding.cc:377: error: ‘zmq_getsockopt’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::GetSockOpt(int) [with T = long int]’:
../binding.cc:431: instantiated from here
../binding.cc:377: error: ‘zmq_getsockopt’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::GetSockOpt(int) [with T = long unsigned int]’:
../binding.cc:433: instantiated from here
../binding.cc:377: error: ‘zmq_getsockopt’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::SetSockOpt(int, v8::Handle<v8::Value>) [with T = int]’:
../binding.cc:454: instantiated from here
../binding.cc:388: error: ‘zmq_setsockopt’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::SetSockOpt(int, v8::Handle<v8::Value>) [with T = unsigned int]’:
../binding.cc:456: instantiated from here
../binding.cc:388: error: ‘zmq_setsockopt’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::SetSockOpt(int, v8::Handle<v8::Value>) [with T = long int]’:
../binding.cc:458: instantiated from here
../binding.cc:388: error: ‘zmq_setsockopt’ was not declared in this scope
../binding.cc: In member function ‘v8::Handle<v8::Value> zmq::Socket::SetSockOpt(int, v8::Handle<v8::Value>) [with T = long unsigned int]’:
../binding.cc:460: instantiated from here
../binding.cc:388: error: ‘zmq_setsockopt’ was not declared in this scope
make: *** [Release/obj.target/zmq/binding.o] Error 1
make: Leaving directory `/opt/programs/node/lib/node_modules/mosca/node_modules/zmq/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/opt/programs/node/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:267:23)
gyp ERR! stack at ChildProcess.EventEmitter.emit (events.js:98:17)
gyp ERR! stack at Process.ChildProcess._handle.onexit (child_process.js:789:12)
gyp ERR! System Linux 2.6.32-5-amd64
gyp ERR! command "node" "/opt/programs/node/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /opt/programs/node/lib/node_modules/mosca/node_modules/zmq
gyp ERR! node -v v0.10.12
gyp ERR! node-gyp -v v0.10.0
gyp ERR! not ok
npm WARN optional dep failed, continuing [email protected]
in project description, readme, package.json etc :-)
$ mosquitto_sub -p 4883 -t "/#" -v
/s/t/hello/world "\"dd\""
/s/t/hello/world "\"dd\""
/s/hello/world "dd"
Thanks to @teomurgi.
It is very important that persistent storage is added to Mosca, to support QoS 1 & 2 and retained messages.
For single-instance storage the best candidate seems to be LevelDB, but we also need support for multiple databases.
We might choose https://npmjs.org/package/ueberDB or https://github.com/1602/jugglingdb.
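As a starting point, a minimal in-memory store sketches the interface any persistence backend (LevelDB, Mongo, Redis) would have to implement. All names here are assumptions for illustration, not mosca's actual API:

```javascript
// Hypothetical store interface sketch: retained messages are keyed by
// topic; a real backend would also persist offline QoS 1/2 queues.
function MemoryStore() {
  this.retained = {};
}

// Replace any previously retained message for this topic.
MemoryStore.prototype.storeRetained = function(packet, cb) {
  this.retained[packet.topic] = packet;
  if (cb) cb(null);
};

// Return the retained messages (zero or one) for an exact topic.
MemoryStore.prototype.lookupRetained = function(topic, cb) {
  cb(null, this.retained[topic] ? [this.retained[topic]] : []);
};
```

A LevelDB- or Mongo-backed implementation would expose the same two methods, which is what would make multi-db support possible.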
$ mosca -c mosca.conf -p 1883 --non-secure -v | bunyan
/usr/local/share/npm/lib/node_modules/mosca/lib/cli.js:96
opts.logger.level = 30;
^
TypeError: Cannot set property 'level' of undefined
at /usr/local/share/npm/lib/node_modules/mosca/lib/cli.js:96:25
at cli (/usr/local/share/npm/lib/node_modules/mosca/lib/cli.js:232:12)
at Object.<anonymous> (/usr/local/share/npm/lib/node_modules/mosca/bin/mosca:3:22)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
at startup (node.js:119:16)
at node.js:901:3
$
node -v: v0.10.9
mosca -V: 0.12.0
Latest versions. This seems to have been caused by recent changes to cli.js? I see work being done around there. The error only occurs when passing -c mosca.conf.
Thanks.
See the spec http://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html#msg-id
The message identifier is local to a connection, it is not globally unique.
@davedoesdev could you please fix this?
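Since the spec only requires uniqueness per connection, a per-connection 16-bit counter is sufficient. A minimal sketch (names assumed, not mosca's implementation):

```javascript
// Per-connection MQTT message id allocator. The spec makes ids local to a
// connection, so a simple counter that wraps within 1..65535 is enough
// (id 0 is not a valid message identifier).
function MessageIdProvider() {
  this.nextId = 0;
}

MessageIdProvider.prototype.allocate = function() {
  this.nextId = (this.nextId % 65535) + 1; // wraps 65535 -> 1, skipping 0
  return this.nextId;
};
```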
This support is composed of:
The authentication support is twofold:
Authorization support is manyfold:
Hi,
We are currently using mosca as an MQTT->AMQP bridge and are loving the custom authentication routines. The issue we have is as follows:
A client connects with username 1234 and then a little while later another client connects with the same username. We want to accept the latter connection and then forcefully disconnect the first. We are currently calling client.close() inside the authenticate function but this does not remove the internal binding inside ascoltatori so we get a whole lot of the following messages:
WARN: mosca/5290 on hostname: tryint to send a packet to a disconnected client
Is there a proper way that we should be doing this? I don't know enough about the internals of ascoltatori and the stores and it seems mosca doesn't handle the binding part anyway.
Any help is greatly appreciated!
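For reference, the bookkeeping described above can be sketched as follows. All names are assumptions; the open question in this issue is precisely that `close()` alone does not remove the ascoltatori bindings:

```javascript
// Single-session-per-username sketch: remember the live client for each
// username, and force-disconnect the old session when a new connection
// authenticates with the same name.
var liveByUsername = {};

function authenticate(client, username, password, callback) {
  var previous = liveByUsername[username];
  if (previous && previous !== client) {
    previous.close(); // NOTE: does not clean up subscription bindings
  }
  liveByUsername[username] = client;
  callback(null, true); // accept the new connection
}
```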
https://github.com/mcollina/mosca/blob/master/lib/server.js#L330
Isn't some information lost here? e.g. if I'm using '' in the topic itself, that will be lost and the backend won't be able to tell where a wildcard was used and where a '' was used.
Mosca will occasionally crash with the following output:
/usr/local/lib/node_modules/mosca/node_modules/mows/node_modules/ws/lib/WebSocket.js:187
else throw new Error('not opened');
^
Error: not opened
at WebSocket.send (/usr/local/lib/node_modules/mosca/node_modules/mows/node_modules/ws/lib/WebSocket.js:187:16)
at WebsocketStream._write (/usr/local/lib/node_modules/mosca/node_modules/mows/node_modules/websocket-stream/index.js:80:15)
at WebsocketStream.write (/usr/local/lib/node_modules/mosca/node_modules/mows/node_modules/websocket-stream/index.js:73:10)
at Connection.eval [as connack] (eval at <anonymous> (/usr/local/lib/node_modules/mosca/node_modules/mqtt/lib/connection.js:58:29), <anonymous>:2:126)
at Client.completeConnection (/usr/local/lib/node_modules/mosca/lib/client.js:277:14)
at Client.close (/usr/local/lib/node_modules/mosca/lib/client.js:477:14)
at /usr/local/lib/node_modules/mosca/lib/client.js:286:36
at EventEmitter.Server.authenticate (/usr/local/lib/node_modules/mosca/lib/server.js:207:3)
at Client.handleConnect (/usr/local/lib/node_modules/mosca/lib/client.js:245:15)
at Connection.<anonymous> (/usr/local/lib/node_modules/mosca/lib/client.js:44:10)
Thrown when Mosca attempts to send data through a previously closed web-socket.
Adding a try{}catch{} in lib/client.js @ Line 274 will keep Mosca running:
var completeConnection = function() {
  try {
    logger.info("client connected");
    that.setUpTimer();
    client.connack({
      returnCode: 0
    });
    that.server.restoreClient(that);
    that.server.emit("clientConnected", that);
  } catch (e) {
    logger.warn("completeConnection error", e);
    // Additional cleanup required?
  }
};
Supporting node v0.8 is becoming very hard due to spurious test failures on travis. If there is no good objection I'll drop support in the next release.
We should log at the info level something along the lines of:
We received X messages and forwarded Y messages in the last Z seconds
If I create two Mosca instances one on top of the other, "pattern matching" subscriptions are not forwarded.
As reported by @davedoesdev in #31, the spec for the client identifiers states:
The first UTF-encoded string. The Client Identifier (Client ID) is between 1 and 23 characters long, and uniquely identifies the client to the server. It must be unique across all clients connecting to a single server, and is the key in handling Message IDs messages with QoS levels 1 and 2. If the Client ID contains more than 23 characters, the server responds to the CONNECT message with a CONNACK return code 2: Identifier Rejected.
Moreover, it's common implementation that if the same clientId connects twice to the same server, the oldest one is disconnected.
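The length rule quoted above is simple to check at CONNECT time. A hedged sketch (assumed helper, not mosca code):

```javascript
// Validate a client id against the MQTT v3.1 rule quoted above:
// 1..23 characters, otherwise CONNACK return code 2 (Identifier Rejected).
function validateClientId(clientId) {
  if (typeof clientId !== 'string' ||
      clientId.length < 1 ||
      clientId.length > 23) {
    return 2; // CONNACK return code 2: Identifier Rejected
  }
  return 0; // CONNACK return code 0: Connection Accepted
}
```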
I'm evaluating a scenario where Mosca needs to be restarted, thereby causing a temporary connection failure between the broker and the client.
I'm simulating this by starting the Broker from the command line:
mosca -v | bunyan
then killing the process (CTRL-C) and restarting it.
Meanwhile, I have a simple NodeJS client which publishes every few seconds:
var mqtt = require('mqtt');

var c = 0;

var handleInterval = function() {
  var topic = '/ping/' + c;
  c++;
  client.publish(topic, '');
  console.log('Published', topic);
};

var handleConnect = function() {
  console.log('Connected to mqtt broker with ClientId:', client.options.clientId);
  client.subscribe('/ping/*');
};

var handleMessage = function(topic, message, packet) {
  console.log('Received Message, topic:', topic, 'message length:', message.length, 'message:', message);
};

console.log('Connecting to broker...');
var client = mqtt.createClient(1883, 'localhost');
client.on('connect', handleConnect);
client.on('message', handleMessage);
var timer = setInterval(handleInterval, 3000);
On restarting Mosca, it will regularly fail and throw the following error:
[2013-11-06T07:55:15.074Z] INFO: mosca/2680 on VIP-PC-LOE: server started (port=1883)
[2013-11-06T07:55:15.326Z] INFO: mosca/2680 on VIP-PC-LOE: client connected (client=Patonga)
[2013-11-06T07:55:15.330Z] INFO: mosca/2680 on VIP-PC-LOE: subscribed to topic(client=Patonga, topic=/ping/*, qos=0)
[2013-11-06T07:55:15.336Z] INFO: mosca/2680 on VIP-PC-LOE: unsubscribed (client=Patonga, topic=/ping/*)
[2013-11-06T07:55:15.337Z] INFO: mosca/2680 on VIP-PC-LOE: closed (client=Patonga)
Error: This socket has been ended by the other party
at Socket.writeAfterFIN [as write] (net.js:276:12)
at MqttServerClient.eval [as connack] (eval at <anonymous> (c:\nodeProjects\nodist\bin\node_modules\mosca\node_modules\mqtt\lib\connection.js:58:29), <anonymous>:2:126)
at Client.completeConnection (c:\nodeProjects\nodist\bin\node_modules\mosca\lib\client.js:277:14)
at Socket.cleanup (c:\nodeProjects\nodist\bin\node_modules\mosca\lib\client.js:507:7)
at Socket.EventEmitter.emit (events.js:117:20)
at _stream_readable.js:920:16
at process._tickCallback (node.js:415:13)
The problem appears to stem from the MQTT lib during a connack event, though I'm not yet familiar enough with the relationship between the two libs to propose a solution.
Currently mosca re-uses the same description as ascoltatori in its package.json file; this should be updated.
Subscribing to both 'a/*' and 'a/b' forwards matching messages twice, but MQTT dictates it should not.
Ref: http://mqtt.org/wiki/doku.php/overlapping_topics
Maybe it is easier to just do #23.
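The behavior the spec implies can be sketched as a small deduplication step (an assumed helper, not mosca code): when several of one client's subscriptions match a publish, deliver the message once, at the highest granted QoS among the matches.

```javascript
// Resolve overlapping subscriptions into a single delivery.
// matchingSubs: the client's subscriptions that match one publish,
// e.g. [{ filter: 'a/*', qos: 0 }, { filter: 'a/b', qos: 1 }].
function resolveDelivery(matchingSubs) {
  if (matchingSubs.length === 0) return null; // nothing matched
  return {
    deliveries: 1, // one copy, regardless of how many filters matched
    qos: Math.max.apply(null, matchingSubs.map(function(s) { return s.qos; }))
  };
}
```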
Mosca will occasionally enter a continuous client connect / disconnect cycle when a connection is made via web-sockets using the MOWS client package.
Environment:
Node v0.10.21
Win8 x64
Mosca v0.13.3
[2013-11-07T09:23:59.013Z] INFO: mosca/9736 on VIP-PC-Leo: closed (client=Browser1383816223769)
[2013-11-07T09:23:59.013Z] INFO: mosca/9736 on VIP-PC-Leo: client connected (client=Browser1383816223769)
[2013-11-07T09:24:00.035Z] INFO: mosca/9736 on VIP-PC-Leo: closed (client=Browser1383816223769)
[2013-11-07T09:24:00.035Z] INFO: mosca/9736 on VIP-PC-Leo: client connected (client=Browser1383816223769)
[2013-11-07T09:24:01.035Z] INFO: mosca/9736 on VIP-PC-Leo: closed (client=Browser1383816223769)
[2013-11-07T09:24:01.035Z] INFO: mosca/9736 on VIP-PC-Leo: client connected (client=Browser1383816223769)
[2013-11-07T09:24:02.036Z] INFO: mosca/9736 on VIP-PC-Leo: closed (client=Browser1383816223769)
[2013-11-07T09:24:02.036Z] INFO: mosca/9736 on VIP-PC-Leo: client connected (client=Browser1383816223769)
To recreate this issue, start mosca:
mosca -v --port 1883 --http-port 1884 | bunyan
Open the following HTML file (see below) in your browser (mows.js also required).
Stop and restart Mosca while keeping the browser open. Mosca will eventually enter a client closed / client connected loop for that client.
Increasing 'reconnectPeriod' to a value of ~2000 appears to stop this from occurring.
I believe there is some sort of race condition between the MOWS client requesting a reconnect and Mosca at mosca/lib/client.js, line 298:
if (that.id in that.server.clients) {
  that.server.clients[that.id].close(completeConnection.bind(that));
} else {
  completeConnection();
}
Test HTML file:
<!DOCTYPE html>
<html lang="en">
<head>
<script type="text/javascript" src="mows.js"></script>
<script type="text/javascript">
  var port = 1884;
  var host = 'localhost';
  var options = {
    clientId: 'Browser' + new Date().getTime(),
    reconnectPeriod: 1000
  };

  var handleConnect = function() {
    console.log('Connected');
    client.subscribe('/heartbeat/*');
  };

  var handleClose = function() {
    console.log('Closed, reconnecting in ' + options.reconnectPeriod);
  };

  var handleMessage = function(topic, message) {
    console.log('Received:', topic, 'Message:', message);
  };

  console.log('Connecting...');
  var client = mows.createClient(port, host, options);
  client.on('connect', handleConnect);
  client.on('message', handleMessage);
  client.on('close', handleClose);

  var heartbeatCount = 0;
  var heartbeatTimer = setInterval(function() {
    heartbeatCount++;
    var topic = '/heartbeat/' + heartbeatCount;
    client.publish(topic, '');
    console.log('Published', topic);
  }, 1000);
</script>
</head>
<body>
</body>
</html>
hi. i'm one of the curators for #thethingsystem and i want to write a simple HOWTO for people who want to:
this is what i'm thinking for the config file:
module.exports =
{ backend :
{ type : 'memory'
}
, secure :
{ keyPath : 'conf/mosca.key'
, certPath : 'conf/mosca.crt'
, port : 8883
, nonSecure : false
}
, credentials : 'conf/credentials.json'
};
the idea here is that persistence lasts only as long as mosca is running, and all connections are MQTT over TLS on TCP port 8883 - the key pair is generated like this:
var keygen = require('../node_modules/x509-keygen').x509_keygen;
keygen({ subject : '/CN=mosca'
, keyfile : 'mosca.key'
, certfile : 'mosca.crt'
, destroy : false
}, function(err, results) {/* jshint unused: false */
if (err) return console.log('keypair generation error: ' + err.message);
console.log('keypair generated.');
});
do my assumptions match the config file above?
many thanks!
I believe it is better to provide a persistent storage first.
Issue: Client is disconnected if pingreq is not sent in keepalive period, even though other publish/subscribe messages were sent in that period.
Description:
We have a mobile client that tries to be efficient by resetting its own keepalive timer if a publish/subscribe message is sent within the keepalive period.
But if Mosca doesn't hear an actual pingreq in the keepalive period, it will disconnect the client.
I propose adding a call to "setUpTimer" in the publish and subscribe callbacks in Client.js.
If you agree I can submit a pull request.
Thanks,
Samir
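The proposal above can be sketched as a small keepalive tracker that is "touched" on any inbound control packet, not only PINGREQ. The names and wiring here are assumptions, not mosca's actual code:

```javascript
// Keepalive tracker: any call to touch() postpones the disconnect deadline.
// MQTT allows a grace period of 1.5x the negotiated keepalive interval.
function KeepaliveTracker(keepaliveSeconds, onTimeout) {
  this.grace = keepaliveSeconds * 1000 * 1.5;
  this.onTimeout = onTimeout;
  this.timer = null;
}

// Call on CONNECT, PINGREQ, PUBLISH, SUBSCRIBE, ... per the proposal.
KeepaliveTracker.prototype.touch = function() {
  if (this.timer) clearTimeout(this.timer);
  this.timer = setTimeout(this.onTimeout, this.grace);
};
```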
After npm install mosca -g
I run mosca -v
and get:
module.js:340
throw err;
^
Error: Cannot find module '/usr/local/share/npm/bin/mosca.js'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Function.Module.runMain (module.js:497:10)
at startup (node.js:119:16)
at node.js:901:3
What gives?
Thanks!
Mosquitto implements it, so why shouldn't we?
Maybe we can write an MQTT bunyan transport and release it as a separate library.
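A minimal sketch of the idea, assuming bunyan's raw-stream interface (an object with a `write(record)` method) and a generic MQTT client with a `publish(topic, payload)` method; the class and topic names are made up for illustration:

```javascript
// A bunyan "raw" stream that republishes log records on an MQTT topic.
function MqttLogStream(mqttClient, topic) {
  this.client = mqttClient;
  this.topic = topic;
}

// bunyan hands raw streams the record object; serialize and publish it.
MqttLogStream.prototype.write = function(record) {
  this.client.publish(this.topic, JSON.stringify(record));
};
```

It would then be plugged in as `{ type: 'raw', stream: new MqttLogStream(client, 'logs/mosca') }` in the logger's streams array.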
Hi,
I'm currently testing Mosca and having a problem getting it to play nice with MongoDB.
On publishing to a subscribed client, the broker fails with the following error:
/usr/local/bin/node Server/app.js
Server ready
/home/leo/test/Server/node_modules/mosca/node_modules/mqtt/lib/generate.js:178
length += Buffer.byteLength(payload);
^
Published { cmd: 'publish',
retain: false,
qos: 0,
dup: false,
length: 28,
topic: 'heartbeat',
payload: <Buffer 74 69 6d 65 31 33 38 33 32 31 37 33 38 32 39 33 32> }
TypeError: Argument must be a string
at Object.module.exports.publish (/home/leo/test/Server/node_modules/mosca/node_modules/mqtt/lib/generate.js:178:22)
at MqttServerClient.eval [as publish] (eval at <anonymous> (/home/leo/test/Server/node_modules/mosca/node_modules/mqtt/lib/connection.js:58:29), <anonymous>:2:26)
at Client.actualSend (/home/leo/test/Server/node_modules/mosca/lib/client.js:142:21)
at Client.forward (/home/leo/test/Server/node_modules/mosca/lib/client.js:198:10)
at Array.handler [as 0] (/home/leo/test/Server/node_modules/mosca/lib/client.js:347:10)
at EventEmitter.TrieAscoltatore.publish (/home/leo/test/Server/node_modules/mosca/node_modules/ascoltatori/lib/trie_ascoltatore.js:52:11)
at EventEmitter.newPublish (/home/leo/test/Server/node_modules/mosca/node_modules/ascoltatori/lib/abstract_ascoltatore.js:122:20)
at /home/leo/test/Server/node_modules/mosca/node_modules/ascoltatori/lib/mongo_ascoltatore.js:202:29
at process._tickCallback (node.js:415:13)
Process finished with exit code 8
Any pointers on where I might be going astray? To confirm, this error only occurs when configuring a MongoDB backend. I'm testing with Node v0.10.21 in both Windows x64 and Ubuntu - results are consistent.
Below are the server.js and client.js files I am using for testing.
server.js:
var mosca = require('mosca');

var serverConfig = {
  mosca: {
    port: 1883,
    backend: {
      type: 'mongo',
      uri: 'mongodb://localhost:27017/',
      db: 'mqtt',
      pubsubCollection: 'ascoltatori',
      mongo: {}
    }
  }
};

var server = new mosca.Server(serverConfig.mosca);

server.on('ready', function() {
  console.log('Server ready');
});

server.on('published', function(packet, client) {
  console.log('Published', packet);
});
client.js:
var mqtt = require('mqtt');
var client = mqtt.createClient(1883, 'localhost');

var handleConnect = function() {
  console.log('Connected to mqtt broker with ClientId:', client.options.clientId);
  client.on('message', handleMessage);
  client.subscribe('heartbeat');
  client.publish('heartbeat', 'time' + new Date().getTime());
};

var handleMessage = function(topic, message, packet) {
  console.log('Received Message');
  console.log('topic:', topic, 'message', message);
};

client.on('connect', handleConnect);
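The TypeError above comes from `Buffer.byteLength(payload)` receiving something that is neither a string nor a Buffer. One possible defensive workaround (an assumption, not a confirmed fix - the Mongo backend may hand back a BSON Binary whose raw bytes sit on a `.buffer` property) is to normalize the payload before the packet is generated:

```javascript
// Coerce whatever the backend returns into a Buffer so that the mqtt
// packet generator never sees a non-string, non-Buffer payload.
function normalizePayload(payload) {
  if (Buffer.isBuffer(payload)) return payload;
  if (payload && Buffer.isBuffer(payload.buffer)) {
    return payload.buffer; // e.g. a BSON Binary wrapper (assumption)
  }
  return Buffer.from(String(payload)); // last resort: stringify
}
```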
It seems that Mosca is not working on Windows.
As I do not do development on Windows, if someone needs this, they should fix it and submit a pull request.
For every dead subscriber a new message is delivered
From the spec:
A leading "/" creates a distinct topic. For example, /finance is different from finance. /finance matches "+/+" and "/+", but not "+".
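The rule quoted above can be demonstrated with a minimal topic matcher (an assumed helper for illustration, not mosca's matching code): splitting on '/' makes the leading slash produce an empty first level, which '+' matches but a bare 'finance' does not.

```javascript
// Minimal single-level-wildcard matcher: '+' matches exactly one topic
// level, including the empty level created by a leading '/'.
function topicMatches(filter, topic) {
  var f = filter.split('/');
  var t = topic.split('/');
  if (f.length !== t.length) return false; // '+' never spans levels
  return f.every(function(part, i) {
    return part === '+' || part === t[i];
  });
}
```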