mas-bandwidth / netcode
A protocol for secure client/server connections over UDP.
License: BSD 3-Clause "New" or "Revised" License
Hello,
Do you have knowledge of known implementations for clients using javascript/node?
This project looks really interesting. A colleague and I are interested in something that can send and receive UDP in the browser from either a JS or C# backend, for game development.
I will have a look over the code and concepts to see if we can make use of / fork this and add our work to help the project.
I could be missing something, but right now there doesn't seem to be a way to get the address/port of a connected client from a server instance.
Are there any concerns about adding netcode_address_t to the API, and adding a method like netcode_address_t netcode_server_client_address( int client_index ) that returns the appropriate entry from netcode_server_t::client_address?
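For discussion, here is a rough sketch of what the proposed accessor might look like. The struct layouts below are minimal stand-ins, not netcode's actual definitions, and the function name comes from the proposal above, not the current API:

```c
#include <assert.h>
#include <string.h>

#define NETCODE_MAX_CLIENTS 256

/* Minimal stand-ins for the real netcode structs (assumption: the actual
   types have more fields; this only illustrates the proposed accessor). */
struct netcode_address_t
{
    unsigned char type;        /* 0 = none, 1 = IPv4, 2 = IPv6 */
    unsigned char data[16];
    unsigned short port;
};

struct netcode_server_t
{
    int max_clients;
    struct netcode_address_t client_address[NETCODE_MAX_CLIENTS];
};

/* Proposed accessor shape: return the remote address for a client slot,
   or a zeroed (type = none) address for an out-of-range index. */
struct netcode_address_t netcode_server_client_address( struct netcode_server_t * server, int client_index )
{
    struct netcode_address_t none;
    memset( &none, 0, sizeof( none ) );
    if ( client_index < 0 || client_index >= server->max_clients )
        return none;
    return server->client_address[client_index];
}
```

Returning the struct by value (rather than a pointer into server state) keeps callers from holding a reference that goes stale when the client disconnects.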
Hi, we've got a mobile game with a custom implementation of netcode.io already live. (Almost) everything works fine, except when switching between the carrier network and a wifi network: the server will not recognize one or the other, depending on which one the client used to connect at the start.
Is there anything handling this in the protocol that I missed? If not, what would be the best way to handle that case so that the player can continue their game almost seamlessly?
Hello! First and foremost, thanks for your work on this. I've been trying to improve my C/C++ and network programming skills and netcode.io makes for an excellent project to learn from. That being said, I've been using clang++ 4.0 [-std=c++11 -Werror] on Linux as my primary dev environment, and when I include netcode.h and attempt to initialize it, a series of errors like the following emerge:
ISO C++11 does not allow conversion from string literal to 'char *'
My fix was just to change 'char *' to 'const char *' on arguments and return values (among others) that were causing problems. Does this feel like an appropriate way to handle this issue?
Thanks again. netcode.io rocks.
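To make the fix concrete, here is a hypothetical sketch of the kind of change described above (the function name is illustrative, not netcode's actual API): accept and return string literals through const char *, which is well-formed in both C and C++11, instead of char *, which C++11 rejects for literals:

```c
#include <assert.h>
#include <string.h>

/* Before (errors under clang++ -std=c++11 -Werror when given a literal):
   char * describe_error( char * fallback ); */

/* After: const-qualified in and out, so string literals are accepted. */
static const char * describe_error( int error_code, const char * fallback )
{
    if ( error_code == 0 )
        return "ok";    /* returning a literal through const char * is fine */
    return fallback;
}
```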
👋
This is probably a case of me using netcode for the wrong situation but we'll see...
I'm developing a game that ideally would have a matchmaker service, but for redundancy (or.. if I run out of time) I'd quite like players to be able to host a game, and enter an IP address to direct connect to that server. This was working fine for testing on localhost.. but not across a network...
The problem I've hit is that the server needs to know its own external IP address when validating the connect token.. is there a way to avoid that? Would it be dangerous to disable that check in the direct connection case? (Ignoring the fact that I've probably blown any security out of the water anyway by shipping the private key in the client side but I'd use a separate key for direct connecting...)
Say a valid, but malicious, client records their own list of valid dedicated server IP addresses. They then want to DoS another player, or steal another player's session. They sniff the other player's packets and get access to the opaque connect token. They then make an attempt to connect to a dedicated server by using their own personal IP address collection, and send the token to each server to try and connect before the sniffed client.
This causes two problems:
What is the idea for netcode regarding this strategy?
I'm digging into Go's codebase and found a bug in Buffer.GetBytes.
Here's a playpen to see it in action:
https://play.golang.org/p/3-1uLvXD83
I'll send a PR with a fix for this and the corresponding test.
There is no lib compiled as /MD, perhaps add it in addition to the /MT one?
Is this bloatware libsodium library really necessary? Perhaps strip it down to just the parts that are actually used? It appears that netcode only calls 5 functions from this library.
Or add an option to disable this requirement.
FYI, there is one initiative for bringing UDP (and much more) to the browser: QUIC (Quick UDP Internet Connections).
Here is its home page: https://quicwg.github.io/
Found it via https://www.simpleservers.co.uk/2017/07/litespeed-web-server-supports-quic
Hi there,
I've been reading your netcode spec and I got some questions about the matchers: they're using a nonce (incremented for every token generated) and a private key (known by the game server).
As far as I know, a nonce and a key should be used together only once to prevent security breaches.
So what about matcher restarts (after a system/application crash)? Is it a big deal that its nonce restarts at 0 (which means it has already been used)? Should the key be regenerated?
What about multiple matchers (for load balancing/failover/etc.)? Should each matcher get its own private key? How should the game server handle this?
Thank you
While working on my implementation based on the spec at https://github.com/networkprotocol/netcode.io/blob/master/STANDARD.md, I hit a bit of a snag: while the spec says that in the private connect token, for each server address, a byte identifies the address type and is either 0 or 1, I was getting a 1 byte despite the buffer clearly containing an IPv4 address (first four bytes were 127, 0, 0, 1, and the next two bytes together formed the port I had bound my server to). I'm not generating my own token yet, I'm using the server example that comes with the Netcode.IO browser project over at the Redpoint Games github, and that one just calls into native code, so this isn't an issue with an invalid connect token as far as I can tell.
So after digging through the C code I noticed that it defined the following:
#define NETCODE_ADDRESS_NONE 0
#define NETCODE_ADDRESS_IPV4 1
#define NETCODE_ADDRESS_IPV6 2
And, later, it just writes the same values into the buffer. Which would make sense, as it means a value of 1 actually means IPv4, not IPv6, as hinted at by the 127, 0, 0, 1 bytes. If this is true and I haven't missed something (which is entirely possible, it's late and I'm tired), I'd suggest the docs be modified to indicate that valid values are actually 1 and 2, not 0 and 1.
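A small sketch of reading the address-type byte the way the C implementation writes it, using the constants quoted above (the helper function is illustrative, not part of netcode's API):

```c
#include <assert.h>
#include <stdint.h>

/* Address type values as defined in netcode.c: 1 = IPv4 and 2 = IPv6,
   not 0 and 1 as the spec text suggests. */
#define NETCODE_ADDRESS_NONE 0
#define NETCODE_ADDRESS_IPV4 1
#define NETCODE_ADDRESS_IPV6 2

/* Returns the number of bytes that follow the type byte in the token
   (4 address bytes + 2 port bytes for IPv4, 16 + 2 for IPv6), or -1
   for an invalid type. */
static int address_payload_bytes( uint8_t address_type )
{
    switch ( address_type )
    {
        case NETCODE_ADDRESS_IPV4: return 4 + 2;
        case NETCODE_ADDRESS_IPV6: return 16 + 2;
        default: return -1;
    }
}
```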
Hi, Glenn.
I recently found this excellent library by reading an article on your site.
I have a question regarding the compilation for x32 bit.
Can I compile a project as x32 bit .lib and use it stably? Or is it just x64 bit library? (Windows)
If so, what would I need to change in the code (netcode.io) to avoid running into problems?
Another question:
If you use x64 on the server and x32 on the client, are there any consequences?
I hope this is useful for you; perhaps you could join hands and get browsers to implement UDP sinks. There's a new effort trying to achieve a similar result, I guess: https://socketify.net/. Description:
What? A cross-platform, cross-browser extension for desktop browsers that injects simple & easy-to-use UdpPeer, TcpServer and TcpClient sockets API into page window, available in plain JavaScript.
There's an error in the example for sequence number encoding in the spec.
0x000003E8 requires only 2 bytes: 0xE8 and 0x03.
Also, I'm a bit confused about the byte order of sequence numbers in the packet. The pseudo code indicates that it should be little-endian, is this correct?
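To illustrate the little-endian reading of the spec's pseudocode, here is a minimal sketch of variable-length sequence encoding, least significant byte first, so that 0x000003E8 comes out as 0xE8 then 0x03 (my reading of the spec, not an authoritative implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Write only the significant bytes of a sequence number, least
   significant byte first. Returns the number of bytes written
   (at least 1, at most 8). */
static int write_sequence_bytes( uint64_t sequence, uint8_t * out )
{
    int num_bytes = 0;
    do
    {
        out[num_bytes++] = (uint8_t) ( sequence & 0xFF );
        sequence >>= 8;
    } while ( sequence != 0 );
    return num_bytes;
}
```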
Hi there, I was studying the connect token design and have a couple questions. I am just trying to learn about design intentions and use-cases.
// Proposed new design for connect token. The entire token, minus the REST SECTION,
// serves dual purpose: provide the client with necessary information, and also act in
// its entirety as a connection request packet.
// -- BEGIN PUBLIC SECTION --
// -- BEGIN REST SECTION --
[version info] (13 bytes) // "NETCODE 1.02" ASCII with null terminator.
[protocol id] (uint64) // 64 bit value unique to this particular game/application
[client to server key] (32 bytes)
[server to client key] (32 bytes)
// -- END REST SECTION --
[zero byte] // Packet type connection request
[version info] (13 bytes) // "NETCODE 1.02" ASCII with null terminator.
[protocol id] (uint64) // 64 bit value unique to this particular game/application
[create timestamp] (uint64) // 64 bit unix timestamp when this connect token was created
[expire timestamp] (uint64) // 64 bit unix timestamp when this connect token expires
[timeout seconds] (uint32) // timeout in seconds. negative values disable timeout (dev only)
[num server addresses] (uint32) // in [1,32]
<for each server address>
{
[address type] (uint8) // value of 1 = IPv4 address, 2 = IPv6 address.
<if IPV4 address>
{
// for a given IPv4 address: a.b.c.d:port
[a] (uint8)
[b] (uint8)
[c] (uint8)
[d] (uint8)
[port] (uint16)
}
<else IPv6 address>
{
// for a given IPv6 address: [a:b:c:d:e:f:g:h]:port
[a] (uint16)
[b] (uint16)
[c] (uint16)
[d] (uint16)
[e] (uint16)
[f] (uint16)
[g] (uint16)
[h] (uint16)
[port] (uint16)
}
}
[connect token nonce] (24 bytes)
<zero pad to 744 bytes>
// -- END PUBLIC SECTION --
// -- BEGIN SECRET SECTION --
[client id] (uint64) // globally unique identifier for an authenticated client
[client to server key] (32 bytes)
[server to client key] (32 bytes)
[user data] (256 bytes) // user defined data specific to this protocol id
// -- END SECRET SECTION --
[hmac bytes] (16 bytes)
The connection request packet is defined as the connect token minus the REST SECTION. The PUBLIC SECTION of the connection request packet can be used as the additional data in AEAD. This way the connection request packet is entirely protected from tampering, and the client id/userdata/keys are still encrypted. I imagine there is no security risk, since unencrypted information is publicly knowable to authenticated clients anyways.
This design might have some benefits:
Does this consolidation/simplification make any sense? I'm probably missing some obvious problems, and am asking to better understand the netcode design.
P.S.
I moved the SECRET SECTION to the end of the packet. This way the SECRET SECTION resides at a known byte-offset, and the server can quickly validate connect tokens, just as before, without the need to parse the server address list.
I noticed in a few areas that you are using 64 bit fields where they don't seem necessary. For example, 32 bits would allow 4,294,967,295 user ids, which seems like well over enough. Same for protocol id. Timestamps don't need to be 64 bit unless timeouts need millisecond precision since Jan 01 1970; I'm pretty sure you can get down to the millisecond within the same month using 32 bits or less. Not to criticize, just wondering if any specific reasons exist.
I'm getting linker errors with MinGW for multiple definitions of inet_pton() and inet_ntop(), because netcode.io exports those since #66. netcode.io should not export those.
A quick fix would be to declare the functions introduced in #66 static to stop them from leaking into the linker.
The more correct way, as pointed out in #65, would be to check for Windows feature level Vista or higher, and throw an error if not (#if _WIN32_WINNT < 0x0600), to ensure the functions are declared.
From the implementation:
int netcode_replay_protection_packet_already_received( struct netcode_replay_protection_t * replay_protection, uint64_t sequence )
{
netcode_assert( replay_protection );
if ( sequence & ( 1ULL << 63 ) )
return 0;
...
}
Could someone explain what this first if statement is doing? It looks to me like if the most significant bit of the sequence number is set, we say the packet has not been received already. Why would this be desired?
I'm probably misunderstanding something here but it's also not mentioned in the standard as far as I can tell.
Thanks
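For reference, 1ULL << 63 masks the most significant bit of the 64-bit sequence, so the early-out fires only for sequence numbers at or above 2^63, never for small or odd values. A tiny check illustrates this (my reading of the code, not an authoritative answer to the design question):

```c
#include <assert.h>
#include <stdint.h>

/* 1ULL << 63 selects the most significant bit of a 64-bit value, so
   this predicate is true only for sequence numbers >= 2^63. */
static int high_bit_set( uint64_t sequence )
{
    return ( sequence & ( 1ULL << 63 ) ) != 0;
}
```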
I'm doing some tests with netcode, specifically with Go's server implementation and I'm struggling a little bit.
The C implementation exposes a very nice API to send/receive packets to a single client.
The Go implementation only exposes Server.SendPayloads (source), which sends something to all connected clients.
Am I missing something?
/cc @wirepair
I have been working to port reliable.io to Rust to eventually implement a yojimbo clone in pure rust, which also included updating netcode.io. The current linked fork is rather old, and required some fixing for the latest version of netcode.
My fork is available @ https://github.com/jaynus/netcode.io if you'd like to include another pure rust implementation. This is strictly a fork and fix of vvanders version, and I plan on expanding the implementation and supporting it - as he has dropped support for his (RE: vvanders#1)
After working through some mobile networking trouble, we found that some cell providers work better with IPv6 while some work better with IPv4.
In testing, we found that IPv6 connect tokens only work properly when the Yojimbo client is initialized with an IPv6 address to bind to.
// Will not work with an IPv6 address as the only server address in the connect token.
Client* client = new Client(GetDefaultAllocator(), Address("0.0.0.0"), config, adapter, time);
vs:
// Works with an IPv6 address in the connect token.
Client* client = new Client(GetDefaultAllocator(), Address("0:0:0:0:0:0:0:0"), config, adapter, time);
We then tracked this down to netcode_socket_create in netcode.c: the address the client binds to must match the protocol (IPv4 vs IPv6) of the addresses contained in the connect token. While the address used for sendto is handled as IPv6 correctly, it fails if the socket wasn't created for an IPv6 client.
At the moment, we're unsure whether this is a bug, or by design. Should netcode/yojimbo be able to handle connect tokens with both IPv4 and IPv6? Is it possible we're doing something wrong here? Is this a bug with one of these two libraries?
If this isn't a supported feature, would you have any suggestions for this? If it is, and seems to be working as expected, is it possible we're not binding the client address correctly?
Hi, Glenn
I'm a little confused by the macros; there are a lot of them and they all have similar names.
I have a few questions; I hope you can give me a detailed answer.
Why is #define NETCODE_SERVER_MAX_RECEIVE_PACKETS 64 * NETCODE_MAX_CLIENTS? Does that mean a single client can't send more than 64 packets, or is that how many packets the server can accept per update?
Is this correct: NETCODE_MAX_PACKET_BYTES = NETCODE_MAX_PAYLOAD_BYTES + 100?
It's hard to follow because the macros aren't commented.
Can you explain what each of these macros does, and how they are interconnected?
Honestly, it would be easier if I contacted you directly so that you could help me set up netcode for my project.
I hope you have a couple of minutes for me.
As per HTTPS (TLS), have the client and server agree on which cipher suite to use rather than hardcoding it as ChaCha20/Poly1305.
This would allow platform encryption libraries to be used (e.g. native Windows/macOS, and the chosen library on Linux), which are updated with the natural flow of OS security patches, rather than having a game lib dependency which relies on the developer to release patched versions (though libsodium could still be used if preferred).
It also allows new ciphers to be added and old ciphers to be deprecated, or prioritized by OS security updates, rather than requiring game updates.
Additionally, the server enforces that only one client with a given IP address may be connected at any time
Not everyone has an IP to themselves. Carrier grade NATs are a thing. In particular you have whole countries behind just a couple of IPs.
A question:
*packet_bytes = packet->payload_bytes;
netcode_assert( *packet_bytes >= 0 );
netcode_assert( *packet_bytes <= NETCODE_MAX_PACKET_BYTES );
Since packet_bytes is actually packet->payload_bytes, shouldn't the latter netcode_assert be comparing against NETCODE_MAX_PAYLOAD_BYTES?
Found in the following locations:
https://github.com/networkprotocol/netcode.io/blob/master/netcode.c#L3359
https://github.com/networkprotocol/netcode.io/blob/master/netcode.c#L4858
I'm in the process of implementing a non-gaming related library for AEAD-secured DTO transfer between micro-services. I took a look at the netcode.io 1.02 standard.md and I can see the sense in virtually all of it, but there is one aspect I don't yet understand.
The connect token establishes 2 keys:
[client to server key] (32 bytes)
[server to client key] (32 bytes)
This has the effect of making the data transfer unidirectional for a given key.
What benefit does this have?
In my current understanding, if the client and server shared the same key [per unique server-client pair] (which has been established out-of-band over a secure side channel) to allow bidirectional comms with the same key, this wouldn't degrade the security but I suspect there is something I don't yet understand.
Thank you!
Steps to reproduce:
server ignored connection request. server address not in connect token whitelist
I found this log in netcode here:
https://github.com/networkprotocol/netcode.io/blob/master/netcode.c#L4245
If I add a log to netcode_address_equal that uses netcode_address_to_string to print the addresses, I get something like this:
netcode_address_equal: '1.2.3.4' '1.2.3.4:49315'
Essentially, the connect token sent to the client includes the port assigned to the server by the system, however, server->address still references the netcode_address without the port assignment.
I'm happy to make a patch, but I don't understand how the connect token is getting an address with the valid port, but server->address doesn't include it.
Max
Just curious, I came here from the blog article. Is it dropped?
Here's a diff to support IETF 96-bit nonces. I would have done a PR, but my branch is not stable yet and I didn't want to push a bunch of changes that would make it hard to track. It's a very basic diff: just increasing nonce sizes, padding the first 4 bytes with a uint32(0), and changing the libsodium calls to use the IETF version. I ran all the tests and they passed, ran the server, and had my Go client communicate with no problems.
For your review:
netcode_ietf.diff.txt
All of the documentation referenced in the readme is geared towards protocol implementors and people looking for details on how it works internally.
Are there any docs or guides on how to use the netcode.io interface?
I've started on a wrapper of netcode for Rust located at https://github.com/vvanders/netcode-rust (until we can figure out best way to merge).
Right now only client functions are hooked up and nothing is tested but it's a bit of a start. Currently it bootstraps netcode via "gcc" crate which shells out to msvc pretty cleanly on windows. Still need to sort out linux.
I'll update this issue once I've got something more stable, this is mostly a placeholder to discuss ongoing integration and related items.
While porting the connect token to Go I noticed that I was unable to get the connect token to verify due to a failure in comparing the ipv6 addresses. Attached is a screen shot of the netcode_address_equal while debugging the connect token verification. You'll notice that it appears the final uint16 of an ipv6 address of "::1" looks like it is in big endian.
I could only get client to server communications working from Go to C by forcing ipv6 to be BigEndian encoded: code
I'll admit it's been a while since I've done any endianness encoding so my assumptions may be incorrect.
Hi There,
Really good idea for a protocol, but I'm somewhat concerned by your apparent usage of AEAD in the protocol.
1/ I can see a static private_key variable apparently included in both the server & client source; if it really is a private key, then sharing it like this means it's no longer private.
2/ When the private_key variable is passed to crypto_aead_chacha20poly1305_encrypt as the key parameter, that function is, I believe, expecting a symmetric key and not an asymmetric private key.
We were looking into using this library but found that it has no support for cross compilers. There are too many differences between MinGW GCC and MSVC for it to compile.
Upon inspection, it looks like inet_pton causes issues: for the symbol to be declared, it requires the Windows Vista feature-level macros to be set. Some of the old C99 compatibility macros from inttypes.h cause issues as well under MinGW builds.
Has this network library actually been used in production yet?
I noticed that netcode.io does have a protocol id, wondering if it also does crc based integrity checking, or is this expected to be built at a higher level in the application?
See where replay protection is called in netcode_read_packet here: https://github.com/networkprotocol/netcode.io/blob/master/netcode.c#L1874-L1883
int netcode_replay_protection_packet_already_received( struct netcode_replay_protection_t * replay_protection, uint64_t sequence )
{
netcode_assert( replay_protection );
if ( sequence + NETCODE_REPLAY_PROTECTION_BUFFER_SIZE <= replay_protection->most_recent_sequence )
return 1;
if ( sequence > replay_protection->most_recent_sequence )
replay_protection->most_recent_sequence = sequence;
int index = (int) ( sequence % NETCODE_REPLAY_PROTECTION_BUFFER_SIZE );
if ( replay_protection->received_packet[index] == 0xFFFFFFFFFFFFFFFFLL )
{
replay_protection->received_packet[index] = sequence;
return 0;
}
if ( replay_protection->received_packet[index] >= sequence )
return 1;
replay_protection->received_packet[index] = sequence;
return 0;
}
Replay protection happens before decryption, directly after recvfrom.
All an attacker would have to do to DoS another client is send the client a packet with a large sequence number. The connection will be halted, as the replay protection will cull all valid sequence numbers, since they are "too small" to fit into the buffer due to this check:
if ( sequence + NETCODE_REPLAY_PROTECTION_BUFFER_SIZE <= replay_protection->most_recent_sequence )
This is also a problem for the server, and is not safe against IP spoofing -- anyone can spoof a valid client's IP address and send a large sequence number, especially in an innocuous keepalive packet.
Solution: Separate the replay protection into two stages. One stage checks for duplicates before decryption (as an optimization), and the other stage tracks the maximum sequence number after decryption succeeds (thus validating the sequence number, since a valid client will presumably never DoS themselves by sending an artificially large sequence, as that makes no sense).
Here's a test case confirming the problem: https://gist.github.com/RandyGaul/cabfd7441bb89f5c0a38e6f5fb152f60
Here's a pull-request fixing the problem: #85
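A minimal sketch of the two-stage idea described above (illustrative names and sizes, not the actual patch in #85): the pre-decryption stage only rejects duplicates and never advances the window, so a spoofed large sequence cannot stall the connection; the window advances only after the packet authenticates.

```c
#include <assert.h>
#include <stdint.h>

#define REPLAY_BUFFER_SIZE 256

struct replay_protection_t
{
    uint64_t most_recent_sequence;
    uint64_t received_packet[REPLAY_BUFFER_SIZE];   /* init to UINT64_MAX = empty */
};

/* Stage 1 (before decryption): cheap duplicate check only.
   Never modifies state, so spoofed packets have no effect. */
static int replay_is_stale( struct replay_protection_t * rp, uint64_t sequence )
{
    if ( sequence + REPLAY_BUFFER_SIZE <= rp->most_recent_sequence )
        return 1;
    int index = (int) ( sequence % REPLAY_BUFFER_SIZE );
    return rp->received_packet[index] != UINT64_MAX && rp->received_packet[index] >= sequence;
}

/* Stage 2 (after decryption succeeds): the sequence is authenticated,
   so it is now safe to advance the window. */
static void replay_mark_received( struct replay_protection_t * rp, uint64_t sequence )
{
    if ( sequence > rp->most_recent_sequence )
        rp->most_recent_sequence = sequence;
    rp->received_packet[sequence % REPLAY_BUFFER_SIZE] = sequence;
}
```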
Why use a single file (netcode.c) instead of splitting it?
At least move the test functions into a separate one, so someone using this library doesn't have to deal with unnecessary code.
Hi, playing with a Node.js implementation. My understanding is that the connect token has two fields: the client_to_server and server_to_client keys. These are symmetric keys shared between the client, the web server, and the backend servers, and they are protected by both HTTPS and encryption with the private key.
My question: that means the servers should be creating and storing these symmetric keys per client, to be referenced later, right?
I had a couple of questions about the protocol:
I like the sounds of the protocol, anyway! I'm interested in doing a full Rust implementation (I saw someone else has already jumped on that too :) ), so I'll see if I can bash away at it when I get some time.
Cheers,
Jack
Based on the current spec, the user_data field in the private connect token should be user specified data (this means I assume the API should be able to receive user_data from an outside source). However, in netcode_generate_connect_token line 5081, a private connect token is generated with random bits filling the user_data field.
This seems like something that would be easy to fix with a user_data parameter in netcode_generate_connect_token. (I don't know C enough to be sure.)
Hi,
If I understand the protocol correctly, there is no connection between the web backend and the game server, so the client has to pass some data to the game server in the encrypted part of the connect token.
The game server could instead be initialized with the information from the web backend (the keys, the players allowed to connect, and an associated connection ticket), and with the ticket it can reject all other connection requests.
So we could replace the encrypted part with a connection ticket that is just random string data, and look up player information by the corresponding connection ticket.
What is the reason the protocol does not consider a simple web request from the game server to the web backend to get such initialization data?
Thanks.
@gafferongames Hi Glenn! I'm currently working on an app for testing reliable UDP networking solutions. I successfully integrated netcode.io with it (using C# bindings and a reliability layer on top of it) and it works absolutely great! But unfortunately, it's not allowed to have more than 256 simultaneous connections. So my question is: if I want to bypass this limitation, is it fine to change the NETCODE_MAX_CLIENTS preprocessor directive, or is that a bad idea?
Native implementation of netcode.io in C#. Question: what language specification can we target? I think there are enough language features to warrant supporting C# 7 at this point. Were you planning on supporting .NET Core, .NET Standard, or the full .NET Framework (or targeting all of them)? That is, .NET Core 1.1, .NET Standard 1.5 (eventually 2?), and .NET 4.6.2.
I'd like to store the state of my game server once all players have left. However, netcode calls connect_disconnect_callback_function() before server->num_connected_clients--, which means that the number of connected clients is 1 when the last disconnect callback fires.
I thought about moving the callback to the bottom, but it seems like a good amount of metadata about the connection is reset that may be useful to those subscribing to the callback (yojimbo, for example).
Would it make sense to move the server->num_connected_clients--; line to before connect_disconnect_callback_function() in netcode_server_disconnect_client_internal?
Max
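A sketch of the proposed ordering, with illustrative structs and names rather than the actual netcode_server_disconnect_client_internal body: decrement the count before firing the callback, so the callback observes 0 when the last client leaves, while per-client state is still reset afterwards so subscribers can inspect it.

```c
#include <assert.h>

struct server_t
{
    int num_connected_clients;
    void (*connect_disconnect_callback)( struct server_t * server, int client_index, int connected );
};

/* Records the client count the callback observed, for demonstration. */
static int observed_count = -1;

static void record_callback( struct server_t * server, int client_index, int connected )
{
    (void) client_index;
    (void) connected;
    observed_count = server->num_connected_clients;
}

static void server_disconnect_client( struct server_t * server, int client_index )
{
    server->num_connected_clients--;        /* moved before the callback */
    if ( server->connect_disconnect_callback )
        server->connect_disconnect_callback( server, client_index, 0 );
    /* ... per-client connection state is reset here, after the callback,
       so subscribers can still inspect it ... */
}
```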
Haven't started work but here's a placeholder.
Elixir is a dynamic, functional language designed for building scalable and maintainable applications. Elixir leverages the Erlang VM
I wish to make this a library for elixir too.
According to the crypto spec, the IVs cannot be repeated. There's a variant that allows random IVs, but it's a different algorithm.
What is the recommended way to avoid repeating them?
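One common approach, sketched here as an assumption rather than anything netcode prescribes: combine a random per-process prefix (drawn once from a CSPRNG at startup) with a monotonically increasing counter. The prefix guards against counter reuse across restarts; the counter guarantees uniqueness within a run.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch: build a 24-byte (XSalsa20-style) nonce from a 16-byte random
   prefix plus an 8-byte little-endian counter. (Assumed scheme, not the
   netcode standard.) */
struct nonce_state_t
{
    uint8_t prefix[16];   /* fill once from a CSPRNG at startup */
    uint64_t counter;
};

static void next_nonce( struct nonce_state_t * state, uint8_t out[24] )
{
    memcpy( out, state->prefix, 16 );
    uint64_t c = state->counter++;
    for ( int i = 0; i < 8; i++ )
        out[16 + i] = (uint8_t) ( c >> ( 8 * i ) );   /* little-endian counter */
}
```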