totodore / socketioxide
A socket.io server implementation in Rust that integrates with the Tower ecosystem and the Tokio stack.
Home Page: https://docs.rs/socketioxide
License: MIT License
I got a panic on my online server:
A panic occurred at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/socketioxide-0.3.0/src/socket.rs:362: called `Result::unwrap()` on an `Err` value: Closed(..)
Hope this message can help to determine the error.
Currently the Error enum is a big mess and needs to be split into smaller chunks.
I hope that this will make it much easier to manage errors in engineioxide as well.
It is needed for PR #41 (Disconnect handler) to return the TransportError reasons to the user.
@sleeyax it will also be much easier for your v3 implementation to have better error enums
Hi,
Is there any method to join a room like in Node.js socket.io? In Node, we can do something like socket.join("room name").
Or do I need to create a struct with an Arc and add sockets to it?
I am a noob in Rust, so I am comparing the Node.js socket.io examples with this repo's examples to get an idea.
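If I'm reading the current API right, socketioxide exposes room joining through its adapter much like Node does (worth checking the docs.rs page for the exact signatures). For intuition about the Arc-based alternative mentioned above, here is a std-only sketch of roughly what a minimal shared room registry looks like; all names are hypothetical:

```rust
use std::collections::{HashMap, HashSet};
use std::sync::RwLock;

// Hypothetical room registry: maps room names to socket ids.
// socketioxide's local adapter keeps a similar map internally,
// so in practice you'd reach for the library API before rolling your own.
#[derive(Default)]
struct Rooms(RwLock<HashMap<String, HashSet<u64>>>);

impl Rooms {
    fn join(&self, room: &str, sid: u64) {
        self.0
            .write()
            .unwrap()
            .entry(room.to_string())
            .or_default()
            .insert(sid);
    }

    fn members(&self, room: &str) -> Vec<u64> {
        self.0
            .read()
            .unwrap()
            .get(room)
            .map(|s| s.iter().copied().collect())
            .unwrap_or_default()
    }
}
```

Broadcasting to a room then amounts to iterating `members(..)` and emitting on each socket, which is exactly the bookkeeping the library's adapter already does for you.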
I noticed that there is a println! that gets called every time a socket connects, which really gets in the way of logging if something like the tracing crate is used. So my question is whether this is needed, or if those println! calls can be removed.
Is it possible to add a yjs backend? This would allow us to build collaborative editors using socketioxide.
Resources:
https://github.com/ivan-topp/y-socket.io
https://github.com/yjs/y-websocket
In the example, an nsHandler is returned from this code:
let ns = Namespace::builder().add("/", handlers::handler).build();
This is not a Namespace type, and I can't get the sockets inside the "/" path. I see a method called get_sockets implemented on the Namespace type, and I was hoping to use it.
Is there a way to do this? Possibly by holding the namespaces in global scope and accessing them from different http routes when needed?
Created this issue as a way to keep track of this feature.
According to the Engine.IO protocol docs, changes from V3 to V4 are fairly small in scope.
How do I use cookies for auth instead of the query string?
I'm currently using salvo, and I use a cookie for auth containing a JWT token that I can't access from JS because it is an httponly cookie.
Currently the socket id in the engine.io crate is generated as a snowflake ID.
However, that means socket ids are predictable, whereas for security reasons they should be fully random (like in the Node socket.io server, with a 20-char base64 ID).
Base64 ids should have a fixed size and be represented as an array of bytes to avoid a heap allocation (unlike String).
A type alias would be good to unify the SID type around the codebase.
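A sketch of what such a fixed-size sid could look like (names and sizes are illustrative, not the crate's actual API): 15 bytes, a multiple of 3 as in the reference JS generator, encode to exactly 20 url-safe base64 characters. A real implementation would fill the bytes from a CSPRNG (e.g. the rand crate); here they are left to the caller:

```rust
// url-safe base64 alphabet ('-' and '_' instead of '+' and '/').
const ALPHABET: &[u8; 64] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

// Fixed-size, stack-allocated id: no heap allocation for the id itself.
#[derive(Clone, Copy)]
struct Sid([u8; 15]);

impl Sid {
    // 15 bytes -> 20 base64 chars, no padding needed since 15 % 3 == 0.
    fn to_base64(self) -> String {
        let mut out = String::with_capacity(20);
        for chunk in self.0.chunks(3) {
            // Pack 3 bytes into a 24-bit group, then emit four 6-bit indices.
            let n = (u32::from(chunk[0]) << 16)
                | (u32::from(chunk[1]) << 8)
                | u32::from(chunk[2]);
            for shift in [18u32, 12, 6, 0] {
                out.push(ALPHABET[(n >> shift) as usize & 63] as char);
            }
        }
        out
    }
}
```

Encoding to a string only happens on demand; the `Sid` itself is `Copy` and 15 bytes, which is the property the issue asks for.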
Is your feature request related to a problem? Please describe.
Unable to use socketio with a rocket server. How to use socketio with a rocket server?
Describe the solution you'd like
Provide example code showing how to use socketio with a rocket server.
Describe alternatives you've considered
Unable to find any alternative.
Currently the socket id that references a socket.io socket (one for each namespace) is the same as the engine socket id.
The implementation should match the socket.io server in Node, which creates a new socket id for each new socket connected to a namespace (only for the socket.io v5 protocol):
Greetings,
Thank you for your wonderful contribution to the open-source and Rust communities. I wanted to discuss this unsolicited email I received today:
Hi Tong,
Would you star this repository? https://github.com/Totodore/socketioxide
It is an open source socket.io implementation in Rust. Your star will help the project gain a lot of momentum.
Thanks for your time!
The sender is one of the contributors of this repository.
I was wondering if your team was aware of these emails.
Thank you.
When I run cargo run in a project that depends on socketioxide, I receive the following error:
Compiling socketioxide v0.2.0
error[E0277]: `EngineIo<Client<A>>` doesn't implement `std::fmt::Debug`
--> /home/user/.cargo/registry/src/github.com-1ecc6299db9ec823/socketioxide-0.2.0/src/client.rs:23:5
|
19 | #[derive(Debug)]
| ----- in this derive macro expansion
...
23 | engine: Weak<EngineIo<Self>>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `EngineIo<Client<A>>` cannot be formatted using `{:?}` because it doesn't implement `std::fmt::Debug`
|
= help: the trait `std::fmt::Debug` is not implemented for `EngineIo<Client<A>>`
= help: the trait `std::fmt::Debug` is implemented for `std::sync::Weak<T>`
= note: this error originates in the derive macro `Debug` (in Nightly builds, run with -Z macro-backtrace for more info)
For more information about this error, try `rustc --explain E0277`.
error: could not compile `socketioxide` due to previous error
The issue still occurs when adding the latest main branch as a dependency: socketioxide = { git = "https://github.com/Totodore/socketioxide.git", branch = "main" }.
My toolchain:
$ rustup show
# ...
active toolchain
----------------
stable-x86_64-unknown-linux-gnu (default)
rustc 1.66.0 (69f9c33d7 2022-12-12)
Make a version of the Adapter trait async. This will allow remote adapter implementations (e.g. a Redis adapter, a MongoDB adapter).
Expose the TransportType of a given socket.
Hyper is currently in its "polish period" for v1.0.
After its definitive v1.0 release, all http frameworks will switch to it. The salvo framework has already moved to hyper 1.0 (currently rc4).
Hyper tracking issue :
hyperium/hyper#3088
Axum tracking issue :
tokio-rs/axum#1325
Salvo tracking issue for socketioxide integration : salvo-rs/salvo#185
If anyone wants to contribute, feel free to create a draft PR with the hyper migration
surrealdb is written in Rust and can be embedded like sqlite (with a rocksdb backend or a memory backend), or can be run in HA with tikv/foundationdb. You can use surrealdb::Surreal<surrealdb::engine::any>
to support any backend in surrealdb. It has a concept of live queries via websockets - https://www.youtube.com/watch?v=zEaQBiNbkoU.
-- Subscribe to all matching document changes
LIVE SELECT * FROM document
WHERE
account = $auth.account
OR public = true
;
-- Subscribe to all changes to a single record
LIVE SELECT * FROM post:c569rth77ad48tc6s3ig;
-- Stop receiving change notifications
KILL "1986cc4e-340a-467d-9290-de81583267a2";
Would be great to have a surrealdb backend.
It would be an improvement to have automatic code formatting so code is always formatted nicely before it reaches the repository. We could do this by enabling format-on-save at the IDE/text editor level, or by creating a git pre-commit hook to format staged files. Usually rustfmt is fine, but I suppose we could look into stricter formatting tools/standards; I'll leave that decision up to you.
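For reference, a minimal pre-commit hook along those lines might look like this (a config sketch assuming rustfmt is installed via rustup; save as .git/hooks/pre-commit and mark it executable):

```shell
#!/bin/sh
# .git/hooks/pre-commit -- refuse the commit if the tree is not rustfmt-clean.
if ! cargo fmt --all -- --check; then
    echo "rustfmt check failed; run 'cargo fmt' and re-stage your changes." >&2
    exit 1
fi
```

A stricter variant would format only the staged files, but `cargo fmt --check` on the whole workspace is the simplest correct starting point.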
Currently the main docs.rs page lacks some documentation. It needs more examples and documentation about:
Currently integer data is encoded into string packets with the to_string() method, which allocates and is slow. Using the itoa crate would avoid both the allocations and the slowness.
Related to #124 (comment)
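For intuition, what itoa does is essentially format the digits backwards into a stack buffer instead of allocating a String per call; a std-only sketch of the idea:

```rust
// Std-only sketch of the itoa technique: write digits right-to-left into a
// caller-provided stack buffer and return the filled slice as a &str.
// 20 bytes is enough for any u64 (u64::MAX has 20 digits).
fn write_u64(mut n: u64, buf: &mut [u8; 20]) -> &str {
    let mut i = buf.len();
    loop {
        i -= 1;
        buf[i] = b'0' + (n % 10) as u8;
        n /= 10;
        if n == 0 {
            break;
        }
    }
    // Only ASCII digits were written, so this is always valid UTF-8.
    std::str::from_utf8(&buf[i..]).unwrap()
}
```

The returned &str borrows the buffer, so the hot encoding path performs zero heap allocations, which is exactly the win the itoa crate provides (with more optimized digit-pair tricks).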
Is there a way to allow cross-origin resource sharing, for example if the backend (Rust) is on a different port than the frontend (TS)? As of now I couldn't find anything in the documentation that lets me enable CORS, or hook into the tower implementation and add it myself in my program.
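CORS ultimately boils down to a few response headers; a std-only sketch of what a preflight response for a cross-port frontend needs (in a real app a tower middleware such as tower-http's CorsLayer would normally add these for you):

```rust
use std::collections::HashMap;

// Sketch of the headers a CORS preflight (OPTIONS) response must carry so a
// browser on another origin/port may talk to the socket.io endpoint.
fn cors_headers(allowed_origin: &str) -> HashMap<&'static str, String> {
    let mut h = HashMap::new();
    // Must echo (or allowlist) the exact origin when credentials are used;
    // "*" is not allowed together with credentials.
    h.insert("Access-Control-Allow-Origin", allowed_origin.to_string());
    h.insert("Access-Control-Allow-Methods", "GET, POST, OPTIONS".to_string());
    h.insert("Access-Control-Allow-Headers", "Content-Type".to_string());
    // Needed if the socket.io client sends cookies/credentials.
    h.insert("Access-Control-Allow-Credentials", "true".to_string());
    h
}
```

Since socketioxide plugs into tower, layering a CORS middleware in front of the socket.io service is the idiomatic place to produce these headers.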
The async_trait crate is not used; it should be removed.
Make callbacks take a Result<impl Serialize, serde_json::Error> rather than an impl Serialize type. It would allow integrating conversion errors directly into the user API.
I launched the tests from here, and two of them failed:
1) ignores HTTP requests with same sid after upgrade
2) ignores WebSocket connection with same sid after upgrade
Protocol says: "A client MUST NOT open more than one WebSocket connection per session. Should it happen, the server MUST close the WebSocket connection." Unfortunately this is not really clear
The real behavior of canonical TS implementation:
const client = this.clients[id];
if (!client) {
  debug("upgrade attempt for closed client");
  res.close();
} else if (client.upgrading) {
  debug("transport has already been trying to upgrade");
  res.close();
} else if (client.upgraded) {
  debug("transport had already been upgraded");
  res.close();
}
It means they don't try to upgrade the connection transport to ws at all in these cases.
The on_ws_req_init function should be split in two: the upgrading logic (should we upgrade or not) and the communication part (recv/send messages).
Current behavior: the on_ws_req function runs on_ws_req_init as a tokio task, and it always (except for query problems and the like) tries to send a 101 Switching Protocols response.
Suggested behavior: call a function that checks whether the upgrade is acceptable, then spawn the tokio task handling messages.
Currently we are using the Base64Id crate to generate fixed-size random ids.
However, ids may collide with each other because there is no time or sequential parameter, contrary to the JS implementation:
/**
* Generates a base64 id
*
* (Original version from socket.io <http://socket.io>)
*/
Base64Id.prototype.generateId = function () {
var rand = Buffer.alloc(15); // multiple of 3 for base64
if (!rand.writeInt32BE) {
return Math.abs(Math.random() * Math.random() * Date.now() | 0).toString()
+ Math.abs(Math.random() * Math.random() * Date.now() | 0).toString();
}
this.sequenceNumber = (this.sequenceNumber + 1) | 0;
rand.writeInt32BE(this.sequenceNumber, 11);
if (crypto.randomBytes) {
this.getRandomBytes(12).copy(rand);
} else {
// not secure for node 0.4
[0, 4, 8].forEach(function(i) {
rand.writeInt32BE(Math.random() * Math.pow(2, 32) | 0, i);
});
}
return rand.toString('base64').replace(/\//g, '_').replace(/\+/g, '-');
};
A sequential int32 is incremented at each generation and written at the end of the buffer (bytes 11 to 14). Then 12 random bytes are written over bytes 0 to 11, which overwrites the first byte of the sequence number.
I don't know why the sequential number is cropped by one byte.
Currently there is no global state management for socketioxide, so it is quite frustrating to manage state. Most of the time you need to make a global OnceCell or something like that, and then get the state from the Cell each time it is required.
Because of the current architecture with the nested closures, it is not possible to adopt the "Axum" way of managing state, which is to have a single typed field on handlers and then a State extractor to extract it. That approach has many advantages: State types are checked at compile time and it is quite efficient.
The remaining way of achieving state management in socketioxide is to have either one dynamic OnceCell or a TypeMap stored in the Client, plus an extractor that extracts the corresponding data. In case of failure it would not call the handler, and a tracing error would be logged.
Things to do are:
- A TypeMap, but only with a creation fn (no mut ref) (behind a feature flag)
- A client module for the state struct (behind a feature flag)
- A State extractor that would extract a given state from the TypeMap
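A minimal std-only sketch (all names hypothetical) of the TypeMap idea described above:

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// Hypothetical TypeMap: one value per Rust type, filled at creation time
// and read-only afterwards (matching the "no mut ref" constraint above).
#[derive(Default)]
struct TypeMap(HashMap<TypeId, Box<dyn Any + Send + Sync>>);

impl TypeMap {
    fn insert<T: Send + Sync + 'static>(&mut self, value: T) {
        self.0.insert(TypeId::of::<T>(), Box::new(value));
    }

    // A State<T> extractor would call this and skip the handler on None,
    // logging a tracing error instead.
    fn get<T: Send + Sync + 'static>(&self) -> Option<&T> {
        self.0.get(&TypeId::of::<T>()).and_then(|b| b.downcast_ref::<T>())
    }
}
```

Keyed by `TypeId`, the lookup is O(1) and fully type-checked at the call site, which is what makes the Axum-style `State<T>` extractor ergonomic.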
error:
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: PoisonError { .. }', socketioxide/src/ns.rs:104:14
How to reproduce:
Run the socketio server (I used main): cargo run --release --bin socketioxide-e2e
Run the js clients: client.zip
For me it reproduces even with 100 clients.
Also, I locally tried to replace the RwLock with a DashMap and the errors disappear.
Firefox is stupid and defaults to text/xml, filling the console with multiple XML Parsing Error messages.
So I'm taking a break from the 5th of July until the 24th and won't be able to answer any issues or do any code review.
But even without me, don't hesitate to contribute by making PRs and issues! I'll try to address everything when I return.
Most of the time, events and namespaces are registered statically with user inputs. Switching to a Cow<T> could take advantage of this and reduce RAM usage.
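A small sketch of the idea: with Cow<'static, str>, event names given as string literals stay borrowed (zero allocation), while runtime-built names are still accepted:

```rust
use std::borrow::Cow;

// Hypothetical parameter type for event/namespace names: static literals
// are stored borrowed, dynamically built Strings are stored owned.
fn event_name(name: impl Into<Cow<'static, str>>) -> Cow<'static, str> {
    name.into()
}
```

Since most callers pass `"event"`-style literals, the common path never touches the heap, which is where the RAM/allocation savings come from.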
I saw a similar feature (middlewares) in the Node.js socket.io server. Also, where are the reserved events, such as "connection", "disconnect", etc.?
It would be a lot more user-friendly to give a &Socket to the user rather than an Arc<Socket>, because then the user cannot clone the Arc and hold a ref that would lead to a memory leak if they don't drop it when the socket disconnects.
It should be the same for socket references returned by the io struct calls.
I have a scenario with a long-running task, such as extracting zip files, that happens in the background; I would like to show progress, or send a message to the client in realtime when it is done.
Add server socket.io examples, e.g. a chat app.
Hyper 1.0 is now stable. Major frameworks are migrating to it.
For the moment v0.14 will still remain the default, but the hyper-1 feature deps need to be updated to stable.
Also, because http is now 1.0.0, extensions are cloneable, so it will be possible to clone extensions when using the ws transport.
Salvo tracking issue : salvo-rs/salvo#494
Axum release PR : tokio-rs/axum#2354
Created this issue as a way to keep track of this feature.
Currently only v5 - which uses EngineIO v4 under the hood - is implemented. Adding support for v4 - which is based on EngineIO v3 - should be possible now that #22 has been resolved.
Differences between v5 and v4 are also fairly small in scope.
Tracking issue for:
Create an empty cargo bin repo.
Add this dependency.
[dependencies]
socketioxide = { version = "0.7.0", features = ["hyper-v1", "tracing"] }
tokio = { version = "1.34.0", features = ["rt-multi-thread", "macros"] }
Get the following error.
/Users/prabirshrestha/code/x$ cargo build
Compiling engineioxide v0.7.0
error[E0433]: failed to resolve: could not find `rt` in `hyper_util`
--> /Users/prabirshrestha/.cargo/registry/src/index.crates.io-6f17d22bba15001f/engineioxide-0.7.0/src/transport/ws.rs:88:38
|
88 | .map(hyper_util::rt::TokioIo::new),
| ^^ could not find `rt` in `hyper_util`
For more information about this error, try `rustc --explain E0433`.
Currently emit errors are pretty bad: if the socket.io session is closed, or if the internal channel buffer is full, only a tracing error log is emitted to alert the user (and only if the user enabled the tracing feature).
One of the best ways to handle this would be to use the Permit API of the tokio::mpsc channel. With it, it would be possible to ask for a new Permit before encoding the json data/packet, to be sure that the packet can be pushed into the buffer. If that is not the case, the emit fn would return an error carrying the provided data.
An alternative volatile_emit fn could be added to send messages directly without getting a Permit before encoding.
There are two remaining questions:
- The performance cost of asking for new Permits on each emit call.
- How to get Permits from N sockets at the same time when broadcasting? The packet is encoded only once and then cloned many times to be sent. Maybe it would be better to just clone the data before sending anything, so it is possible to send it back to the user. Broadcasts go through Operators, and Operators are not always used to broadcast: they can be used to simply add a timeout or a binary payload when emitting. So the emit signature needs to be the same from the Operators fns even when not broadcasting.
TODO: if closed, update lib doc and remove issue ref from doc
Poem has a feature to support tower layer and services. Therefore it should be possible to add socketioxide to any poem application.
If no additional development is needed, we could just add an example with poem.
The easiest examples to make would be to take the examples from the official socket.io repo and re-implement them in Rust with socketioxide. Examples that can be ported are:
Hello,
I'm not experienced with Rust, so I must admit my defeat - I don't know how to asynchronously generate messages from my own code logic.
All the examples are focused on relaying messages between clients, but in my case I additionally want to create a bridge between socket.io and various other protocols (MQTT, flatbuf over serial, etc.).
A simple POC that would unblock me would be sending a message to a room on a timer created from the main context.
I'm happy to provide a simple example if you help me understand how to approach that.
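Not speaking for the maintainers, but the usual pattern is to clone an emit handle into a background task driven by a timer; here is a std-only sketch with a channel Sender standing in for the io handle (assumption: socketioxide's io handle is cheaply clonable, so each task can own one):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Background "bridge": pushes progress messages on a timer. In real code the
// Sender would be a cloned io handle and the send would be something like
// io.to("room1").emit("progress", &i) (illustrative call, check the docs).
fn spawn_bridge(tx: mpsc::Sender<String>) {
    thread::spawn(move || {
        for i in 0..3 {
            tx.send(format!("progress {i}")).unwrap();
            thread::sleep(Duration::from_millis(5));
        }
    });
}
```

With tokio the same shape uses `tokio::spawn` and `tokio::time::interval`; the key point is that the emit handle is owned by the background task, independent of any client-triggered handler.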
I am building an axum server and want to use your socketio library. Is it safe to use this in production?
In Node, we can use io.set('transports', ['websocket']) to restrict the server to the websocket transport. How do I do this in socketioxide? Thanks!
Operator methods (e.g. emit, sockets, disconnect) which are expected to work the same as the equivalent SocketIo methods for all sockets do not work properly when selecting a namespace but not a room.
Steps to reproduce the behavior:
In the background_task function, replace the global operation on io:
let cnt = io.sockets().unwrap().len();
with a namespace selection:
let cnt = io.of("/").unwrap().sockets().unwrap().len();
and observe that cnt is 1 (e.g. info!(cnt)).
The example here is superfluous, but the issue is relevant when using a namespace other than the main one.
Selecting a namespace with SocketIo::of
should make subsequent operations act on all sockets.
The issue is resolved when using the Operators::broadcast
operator, e.g.
let cnt = io.of("/").unwrap().broadcast().sockets().unwrap().len();
but this seems counterproductive, especially since the doc comments describe the general operations as aliases:
Alias for io.of("/").unwrap().emit(event, data) (SocketIo::emit)
First time working with Rust. Using current latest main: eadd59f.
I found that this library only works when the route is "/", and not when the route is nested under "/xxx". Any way to solve this? Thanks!
Currently the max_payload option is only applied when decoding incoming requests (with the polling transport).
It must also be applied when encoding payloads. This may be done by using a peekable stream on the tokio receiver: it lets us compute the future size of the payload when adding a packet, without polling the packet out of the channel.
If there is not enough remaining space, we must stop the encoding process; otherwise we can continue to poll packets and add them to the payload.
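The peek-before-poll idea can be sketched with a std Peekable iterator standing in for the tokio receiver (function name is illustrative):

```rust
// Build one polling payload without exceeding max_payload: peek at the next
// packet's size, and only take it out of the queue when it still fits.
fn encode_payload<I>(packets: &mut std::iter::Peekable<I>, max_payload: usize) -> String
where
    I: Iterator<Item = String>,
{
    let mut out = String::new();
    while let Some(next) = packets.peek() {
        // engine.io v4 joins packets with the 0x1e record separator.
        let sep = if out.is_empty() { 0 } else { 1 };
        if out.len() + sep + next.len() > max_payload {
            // Would overflow: stop here, the packet stays queued for the
            // next payload instead of being lost.
            break;
        }
        if sep == 1 {
            out.push('\u{1e}');
        }
        out.push_str(&packets.next().unwrap());
    }
    out
}
```

Because `peek()` never removes the item, a packet that doesn't fit is simply left in the channel, which is exactly the property a peekable tokio stream would give the real encoder.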
I noticed a change in your commit message style over the past 15 days. Initially, you were using conventional commits with each message starting with a capital letter. However, lately you've switched to writing regular commit messages in lowercase. When I started working on my pull request, I followed your initial style with capital letters, so it might look a bit inconsistent once the pull request is merged. Just wanted to bring it to your attention. I suggest documenting the definitive commit convention (could be just a link to the conventional commits website) somewhere in README.md. And if you want to go all in, I'm sure there are ways to enforce commit conventions too.