projectharmonia / bevy_replicon
Server-authoritative networking crate for the Bevy game engine.
Home Page: https://crates.io/crates/bevy_replicon
License: Apache License 2.0
We use `Events::iter_current_update_events` to collect all component removals. But according to the docs this will miss removals that happen after this system runs. We need to either document it or add a system in the `Last` schedule to collect missed removals (or a system in `Last` that runs in debug and asserts there are no removals).
Many games have a predefined client list. In that case you can assign client ids as indices into the client list and access clients on the server in O(1) instead of O(n) or O(log n).
This is relevant for client visibility, because to set the visibility of an entity on a client you need to access that client in the client list. Setting visibility can be considered the hot path, since you may need to update visibility frequently for many entities.
Options:
Note that this would require changes to how connection status is tracked.
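The indexing idea above could be sketched like this (all types here are hypothetical illustrations, not replicon API):

```rust
use std::collections::HashSet;

// Hypothetical client entry; `connected` stands in for whatever connection
// status tracking replaces the current approach.
struct ClientInfo {
    connected: bool,
    visible_entities: HashSet<u64>, // entity ids visible to this client
}

struct ClientList {
    clients: Vec<ClientInfo>, // client id == index, so lookup is O(1)
}

impl ClientList {
    fn set_visibility(&mut self, client_id: usize, entity: u64, visible: bool) {
        // Direct indexing instead of a map lookup on the hot path.
        let client = &mut self.clients[client_id];
        if visible {
            client.visible_entities.insert(entity);
        } else {
            client.visible_entities.remove(&entity);
        }
    }
}
```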
I'm currently trying to get this to work with a couple of custom renet transports I wrote for running a client inside wasm.
After some debugging, I found out that the client systems run only with `.run_if(client_connected())`, where the `client_connected()` run condition comes from renet. That run condition only returns true if the built-in renet client transport is connected, not any custom ones.
This is to summarise a design discussion from a discord chat today, relating to migrating from my custom renet netcode to replicon.
In my game I send spawn messages from server to client on the frame they happen, but only send regular state updates (position, velocity, etc.) at a slower rate (10 Hz). My clients predict everything and reconcile accordingly when updates arrive.
Currently this isn't possible with replicon. If I set the server tick rate to 10 Hz, my spawns will also be broadcast at 10 Hz. No good for bullets, which need to appear on clients ASAP.
This proposal also moves towards replicon deciding how many packets to send per tick, by deciding what goes into each individual packet (~1200bytes or whatever). Sending one large buffer and letting renet fragment it is fine for some games, but loss of one fragment renders the entire update useless. Problem is worse for games with a lot of state data requiring multiple packets. They might be better served by replicon crafting multiple single packets each with replication data for a subset of entities, that can be applied individually.
Described in this gaffer article under the "Priority Accumulator" heading, it could work like this at a high level:
- Add a `PriorityAccumulator(f32)` component.
- Each packet contains all required component data for a subset of entities, so would need per-entity acks.
See also: #16
On top of this, it should be possible to have a higher-level API like "always send spawns every tick, but otherwise I'm happy with 10 Hz state updates". But with a reasonable bandwidth budget, and allowances for bursting higher due to lots of spawns, there probably wouldn't be any need to fix the rate of updates, as long as freshly spawned entities started with a very high accumulator.
If the number of entities is small enough that it fits in a packet, or the game isn't compatible with partial state updates, you could skip sorting by accumulator and just include everything in one large buffer and let renet split it (if needed) like it does at the moment.
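A minimal sketch of the priority-accumulator selection described above (the names, packet budget, and reset-on-send behavior are assumptions for illustration, not replicon code):

```rust
const PACKET_BUDGET: usize = 1200; // approximate single-packet payload size

struct Replicated {
    entity: u64,
    priority: f32,          // accumulator growth per tick
    accumulator: f32,
    serialized_size: usize, // bytes this entity's data would occupy
}

// Each tick: grow every accumulator by its priority, sort descending, and
// pack entities into the packet until the budget is exhausted. Entities that
// make it into the packet have their accumulator reset to zero.
fn pick_for_packet(entities: &mut [Replicated]) -> Vec<u64> {
    for e in entities.iter_mut() {
        e.accumulator += e.priority;
    }
    entities.sort_by(|a, b| b.accumulator.total_cmp(&a.accumulator));
    let mut used = 0;
    let mut picked = Vec::new();
    for e in entities.iter_mut() {
        if used + e.serialized_size <= PACKET_BUDGET {
            used += e.serialized_size;
            picked.push(e.entity);
            e.accumulator = 0.0; // reset once sent
        }
    }
    picked
}
```

A freshly spawned entity would simply start with a very high accumulator so it wins the sort on its first tick.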
Currently we have two buffers for each client (one per channel). But it would be much more efficient to write everything into a single buffer and store per-client data positions. This way data will be serialized only once no matter how many clients we have.
In some games (mainly open-world games), it is useful to prioritize updating of some entities over others in order to manage bandwidth constraints.
Here is a sketch of how to integrate priority with bevy_replicon.
Instead of true/false for client visibility, set a priority between 0 and 1.0. A priority below 0.01 means 'don't replicate', and a priority over 0.99 means 'always replicate'.
Add a global throttling resource that records a range of replication frequencies per client: [min frequency, max frequency]. This range can be manually adjusted by users (we need to expose as many stats as possible about client acks/latency and bandwidth usage). We or someone can write a `bevy_replicon_throttle` crate that provides automatic frequency adjustments (hypothetically; I don't plan to write it).
In replication, always spawn and insert new components for entities that are visible (priority >= 0.01). Only update entities at a frequency computed from their priority mapped onto the [min freq, max freq] range.
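The priority-to-frequency mapping could look roughly like this (a sketch of the proposal, reusing the 0.01/0.99 thresholds from above; the function name is hypothetical):

```rust
// Map a per-entity priority in [0.0, 1.0] onto a per-client replication
// frequency range. Below 0.01 means "don't replicate"; above 0.99 means
// "always replicate" (i.e. at the maximum rate).
fn update_frequency(priority: f32, min_freq: f32, max_freq: f32) -> Option<f32> {
    if priority < 0.01 {
        None // invisible: never update
    } else if priority > 0.99 {
        Some(max_freq) // always replicate at the maximum rate
    } else {
        // Linear interpolation across the client's configured range.
        Some(min_freq + priority * (max_freq - min_freq))
    }
}
```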
Open question: how to map replication frequencies onto replicon TickPolicy?
-- nevermind, I misread
Hi,
I'm using your wonderful plugin in my game and so far almost everything was perfect.
However, today I noticed that my `on_client_event` condition fires my system twice (two consecutive frames), and on the second run the system gets an empty event reader.
The only two possible reasons I can think of for this strange behavior are either:
client_event.iter().next().unwrap()
to read an event. So the question is: can I expect `ClientEvent` to be empty after being read, at the beginning of the next frame? And am I right that network events are cleared in exactly the same way as bevy events?
On this line, events are drained before being mapped and sent. I believe this is done for efficiency's sake (to avoid having to clone the events before mapping) but it is inconsistent with the behaviour of the other default systems for network events. Non-mapped client events are not drained (only read), and neither are server events. This may lead to confusing behaviour.
In my case, it meant that my system intended to run on the client and read events of a type added with `App::add_mapped_client_event` never found any, as they had been drained to send to the server.
I would like to make the behaviour consistent across the default network event sending systems, with my preference being that events are not drained in any of them. The current functionality of the sending system for mapped events may be more efficient, but could perhaps be provided under a different function (e.g. `App::add_mapped_client_event_drained`).
I am willing to create a pull request implementing these changes myself, just thought I would make this issue first in case maintainers disagree with this direction.
We can do this by storing registered events in a sorted list of typeids, then assigning channel ids based on index.
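A sketch of that idea (hypothetical types; note this assumes both peers run the same build, since `TypeId` values are not stable across compilations):

```rust
use std::any::TypeId;

// Keep registered event TypeIds in a sorted list; a channel id is then just
// the index of the event's TypeId in that list, independent of registration
// order.
#[derive(Default)]
struct EventChannels {
    type_ids: Vec<TypeId>, // kept sorted
}

impl EventChannels {
    fn register<T: 'static>(&mut self) {
        let id = TypeId::of::<T>();
        if let Err(pos) = self.type_ids.binary_search(&id) {
            self.type_ids.insert(pos, id);
        }
    }

    fn channel_id<T: 'static>(&self) -> Option<usize> {
        self.type_ids.binary_search(&TypeId::of::<T>()).ok()
    }
}
```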
Using clumsy to simulate a throttled network condition, I get a panic with this error when entities are despawned
thread 'main' panicked at 'server should send valid entities to despawn: EntityNotFound(24v0)',
bevy_replicon-0.2.1\src\client.rs:148:22
I think this should likely be an error message instead of a panic as this seems to be a recoverable situation in most cases. If the entity doesn't exist on the client anymore, then the mission is already accomplished. Having it log an error still makes sense though as it could be a sign that you are running your despawning system on the client when it should only be run on the server and it shouldn't occur under normal network conditions.
Download and run clumsy with these settings:
git clone https://github.com/paul-hansen/bevy-jam-3.git
cd bevy-jam-3
git checkout ef33179
cargo run --features "bevy_editor_pls" -- --listen 127.0.0.1
cargo run --features "bevy_editor_pls" -- --connect 127.0.0.1
Press and hold spacebar to fire bullets, eventually the client will panic.
It handled other network conditions really well!
It's possible that this is a sign that something else is going on that could be fixed; it does seem a bit odd that throttling would end up with a despawn request being received twice.
Feel free to close this as won't fix if you think this is correct behavior, it's not a big deal, just wanted to report it in case it could help make this lib more robust.
error[E0283]: type annotations needed
--> /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_replicon-0.11.0/src/client.rs:57:29
|
57 | let end_pos = message.len().try_into().unwrap();
| ^^^^^^^
...
85 | if cursor.position() == end_pos {
| -- type must be known at this point
|
= note: multiple `impl`s satisfying `u64: PartialEq<_>` found in the following crates: `core`, `serde_json`:
- impl PartialEq for u64;
- impl PartialEq<serde_json::value::Value> for u64;
help: consider giving `end_pos` an explicit type
|
57 | let end_pos: /* Type */ = message.len().try_into().unwrap();
| ++++++++++++
Note that serde_json is included with default bevy features (bevy_gltf)
Would be good to integrate https://crates.io/crates/zstd directly for replication, probably under a default feature flag, but overall I think the cpu/bandwidth tradeoff here is worthwhile for most games.
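A hypothetical manifest setup for gating the dependency behind a default feature might look like (crate version shown is an assumption):

```toml
# Sketch: compression as a default-on, opt-out feature.
[features]
default = ["zstd_compression"]
zstd_compression = ["dep:zstd"]

[dependencies]
zstd = { version = "0.13", optional = true }
```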
This crate looks awesome and exactly what I need, only some of the targets for my project are tiny microcontrollers so I'm being pretty conservative regarding dependencies.
Would you be open to using `bevy_ecs`, `bevy_app`, `bevy_reflect`, etc. instead of the kitchen-sink `bevy` crate? This should also speed up compile times and make `rust-analyzer` quicker.
As part of migrating my custom netcode to replicon, I'm trying to sketch out how to handle client-predicted entities.
Here's how I imagine it working with replicon:
- A `Res<ClientPredictedEntitiesMap>`, mapping `{(server_entity, client_id) -> client_entity}`.
- In `collect_changes`, if the entity being written to the buffer has a predicted client entity, write an `Option<Entity>` to the `ReplicationBuffer` after the `entity_data`.
- On the client, read the `Option<predicted_client_entity>` and do:
`let mut entity = entity_map.get_by_server_or_spawn(world, entity, predicted_client_entity);`
That would allow clients to predict spawns and reconcile predicted entities with the server version.
Looking for a sanity check before I write more code.. sound sensible?
Clients deserialize world diffs into a pile of allocated bits (vectors and map nodes).
Use a span or similar byte magic to access serialized world diffs directly.
Instead of registering multiple systems, we can use a single system by using custom functions (similar to what we do for components).
This will make the public API simpler and more efficient.
Bevy did a similar optimization recently: bevyengine/bevy#12936, and we can benefit from the provided `EventRegistry`.
renet is great, but the author is not very active and its examples depend on other third-party crates. Because of this, the crate can lag behind a Bevy version. That's not a problem on its own, but since we have a small ecosystem, a release delayed by renet delays other crates that depend on us.
1. Release `bevy_replicon` without `bevy_replicon_renet` when needed. This is what we do now, but users won't be able to run examples from the repo and I have to temporarily remove the crate from CI.
2. Depend on `renet` directly. I tried this approach in #299. It works, but since we can't enable the `bevy` feature on `renet`, we created wrappers with the same names and `Deref(Mut)`. I think it's a bit hacky. Users will have to change their imports from something like `renet::RenetServer` to `wrappers::RenetServer`. I also have to copy-paste all available constructors since they aren't affected by `Deref(Mut)`.
3. Move `bevy_replicon_renet` into a separate repository. I will continue to maintain it as before, but this way I will be free to update `bevy_replicon` independently. But this also means that we won't have examples in this repo. Here is what we can do about it:
   3.1. Replace `bevy_replicon_renet` with `bevy_replicon_renet2`. But I will need to ask @UkoeHB to temporarily remove webtransport (to be able to draft a new release, blocked by `h3`) and remove all other examples from the repo that have dependencies on third-party crates.

I think that 3(.1) will be the best solution.
Instead of specifying each component to replicate, requiring `Replication` to be inserted, and using `not_replicate_if` for exclusion, we could specify bundles and their markers for replication. For example:
app.replicate::<CityBundle>();
#[derive(Bundle)]
struct CityBundle {
transform: Transform,
city: City,
...
}
// Could be a macro?
impl ReplicationMarker for CityBundle {
fn replication_marker(world: &World) -> TypeId {
TypeId::of::<City>()
}
}
So instead of iterating over all entities with `Replication`, we could iterate over all entities with special markers and replicate only components from the list.
This will require more checks on archetypes:
https://github.com/lifescapegame/bevy_replicon/blob/ecb264477e353bda6550603b5fdd5e50c5d53b7b/src/replication_core.rs#L130
But it will completely remove this check:
https://github.com/lifescapegame/bevy_replicon/blob/ecb264477e353bda6550603b5fdd5e50c5d53b7b/src/server.rs#L180
I would expect this approach to work faster and be more user-friendly.
The server event `sending_system` and the client event `receiving_system` are both configured in the set `OnUpdate(ServerState::Hosting)`. This means a server response to a client event received by the server in tick A cannot be sent out until tick B (the same is true for a client responding to a server event). The default sending/receiving systems also have private visibility, meaning they can't be configured to run at other times.
Possible alternative:
- Put the systems in public sets `ServerSet::ReceiveEvent` and `ServerSet::SendEvent`.
- Use `.run_if(in_state(ServerState::Hosting))` for server endpoints, and `.run_if(in_state(ClientState::Connected))` for client endpoints.

What do you think? This way the server sets can be configured.
Currently we send one message per diff and rely on renet fragmentation. If the message is large, renet divides it into several parts.
But if packet loss is present, the loss of even one part results in the loss of the entire diff.
To avoid this we should send diffs per entity. To achieve this we need to track the last acknowledged tick per entity.
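Per-entity ack tracking could be sketched as (hypothetical names, plain ids instead of Bevy entities):

```rust
use std::collections::HashMap;

// Each entity's diff goes into its own message, so acks must be recorded per
// entity rather than per whole-world diff.
#[derive(Default)]
struct EntityAcks {
    last_acked: HashMap<u64, u32>, // entity id -> last acknowledged tick
}

impl EntityAcks {
    fn ack(&mut self, entity: u64, tick: u32) {
        let entry = self.last_acked.entry(entity).or_insert(0);
        *entry = (*entry).max(tick); // ignore out-of-order (older) acks
    }

    // Changes after this tick must be included in the entity's next message.
    fn change_limit(&self, entity: u64) -> u32 {
        self.last_acked.get(&entity).copied().unwrap_or(0)
    }
}
```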
Removing the `RenetClient` and/or `RenetServer` resources after registering client and/or server events causes a panic.
Example of the panic when removing RenetServer:
thread 'Compute Task Pool (9)' panicked at 'Resource requested by bevy_replicon::network_event::client_event::receiving_system<leafwing_input_manager::action_state::ActionDiff<stellar_squeezebox::player::PlayerAction, stellar_squeezebox::network::NetworkOwner>> does not exist: renet::server::RenetServer', C:\Users\Paul\.cargo\registry\src\github.com-1ecc6299db9ec823\bevy_ecs-0.10.1\src\system\system_param.rs:555:17
I expected this to not panic and instead drop any resources needed to close the connection. This would allow setting up a new connection, for example when the user wants to stop playing one game they joined and join another, or host their own.
git clone https://github.com/paul-hansen/bevy-jam-3.git
git checkout cb451e27
cargo run -- --listen 127.0.0.1
cargo run -- --connect 127.0.0.1
On either the client or the server press escape and "y" to accept. It will panic here trying to run the event systems saying the client or server resource does not exist.
I have a working patched version of bevy_replicon. If you want to test with that version, do the same but checkout 4217a6d5 instead. (Our game's main branch is using this patch too.) Instead of a panic you should see a dialog that allows you to join or create a new game.
There is a category of games that have limited conditions where the server world will be modified. For those games, it is feasible to manually track the last tick where something in the world was modified (or an event sent to clients). Once the last world-change-tick has been acked, it is a waste of CPU time to scan the ECS world for individual changes.
It would be nice if we could manually define/inject the last world-change-tick to the server replication loop, either on a global or per-client basis. We can then short-circuit replication if a client has acked that tick. Moreover, even if a client has not acked a tick, if we have cached replication buffers for the tick range (last acked, last world-change], then we don't need to scan the world (just send the buffers again).
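The short-circuit and buffer reuse could be sketched as follows (all names are hypothetical; ticks are plain `u32`s for illustration):

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

struct ClientState {
    last_acked_tick: u32,
}

// If the client has acked the last tick at which the world changed, there is
// nothing new to replicate and the change-scan can be skipped entirely.
fn needs_replication(client: &ClientState, last_world_change_tick: u32) -> bool {
    client.last_acked_tick < last_world_change_tick
}

// Cached replication buffers keyed by the tick they were produced for.
struct ReplicationCache {
    buffers: BTreeMap<u32, Vec<u8>>,
}

impl ReplicationCache {
    // Buffers in the half-open range (last_acked, last_change] can be resent
    // verbatim without rescanning the world.
    fn resendable(&self, last_acked: u32, last_change: u32) -> Vec<&Vec<u8>> {
        self.buffers
            .range((Bound::Excluded(last_acked), Bound::Included(last_change)))
            .map(|(_, buf)| buf)
            .collect()
    }
}
```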
Since we moved despawns and removals to the middle of init messages, array header trimming is less effective. We should add a 1-byte header to the start of init messages that contains bit flags indicating which sections are present in the message. This will shave several bytes off most messages.
Despawns are more likely to happen, and with #15 they will happen even more often.
Changing the order will reduce message size if there are no removals.
While this may seem like a simple replacement, some games rely on ticks to track time. For example, you can define buff duration in ticks instead of ms.
If we switch to `u16`, then we will need to provide an API that allows mapping the received tick to the simulation tick.
Replicating a transform seems to stop randomly in the new 0.2 version.
We decided to try this out for the Bevy Jam 3; you can see the bug demonstrated in our early prototype: https://github.com/paul-hansen/bevy-jam-3
git clone https://github.com/paul-hansen/bevy-jam-3
cd bevy-jam-3
git checkout 1ed281e
cargo run -- server
cargo run -- client
Select the server window and use the WASD keys to move the bevy icon. It works for a bit, but then randomly stops replicating the position changes; no errors are logged and it never logs that the client disconnected.
Simply changing the bevy_replicon version back to 0.1.0 like we did in paul-hansen/bevy-jam-3@8a679e2 makes it work fine again.
Thanks for making this, the API looks really similar to what I've been thinking about making. Really excited to see where it goes!
Some networked games try to compress diffs between updates by networking only the value difference at a component level. Replicon doesn't currently support that.
Here is a first-draft idea for how to support it:
If we pass the change-limit tick + current tick into serializers, then the serializer can have a local that tracks historical state and performs a diff internally. The deserializer would also need to track historical state and apply diffs to the correct older value. We might be able to provide a pre-baked value-diff compressor for types that implement certain traits.
Note that supporting this would require changes to the shared copy buffer, since different clients could get different component serializations.
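A minimal sketch of the historical-state tracking for value diffs (illustrative only, using a bare `f32` instead of a real component; names are hypothetical):

```rust
use std::collections::HashMap;

// Both the serializer and deserializer keep baseline values keyed by tick,
// so a delta can be applied against the correct older value.
#[derive(Default)]
struct DiffState {
    baselines: HashMap<u32, f32>, // tick -> value known at that tick
}

impl DiffState {
    // Serialize: emit the delta against the change-limit tick's baseline and
    // remember the current value under the current tick.
    fn serialize(&mut self, baseline_tick: u32, current_tick: u32, value: f32) -> f32 {
        let baseline = *self.baselines.get(&baseline_tick).unwrap_or(&0.0);
        self.baselines.insert(current_tick, value);
        value - baseline
    }

    // Deserialize: apply the delta to the matching baseline and record the
    // reconstructed value for future diffs.
    fn deserialize(&mut self, baseline_tick: u32, current_tick: u32, delta: f32) -> f32 {
        let baseline = *self.baselines.get(&baseline_tick).unwrap_or(&0.0);
        let value = baseline + delta;
        self.baselines.insert(current_tick, value);
        value
    }
}
```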
It would be quite convenient to support Bevy's query filters in `replicate_group`, via a second generic parameter, as in queries. These rules would be applied when filtering archetypes.
World diff collection involves:
The first problem can be mitigated with per-entity change detection, so you don't need to iterate over components in entities that haven't changed in a while.
Here are three ideas to mitigate the second problem (as discussed on Discord).
In conclusion, (1) seems like a safe bet.
If a client is not sending acks but they are still connected, then the server should gradually throttle replication updates. Otherwise a malicious client can abuse replication to waste server resources.
An easy method would be 'double the tick gap between updates every X updates' (with X as config) and reset when a non-stale ack shows up. So for X = 3 and an ack in tick 0, you'd get updates on ticks: 1, 2, 3, 5, 7, 9, 13, 17, 21.
EDIT: The throttling needs to take into account client reconnects, since their ack tick will reset to zero. It may also be worthwhile to throttle clients who reconnect too frequently.
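The doubling schedule can be computed like this (a sketch; `x` is the config value from above):

```rust
// Starting from the last acked tick, emit updates with a gap that doubles
// after every `x` updates, until `count` update ticks have been produced.
fn throttled_update_ticks(last_ack: u32, x: u32, count: usize) -> Vec<u32> {
    let mut ticks = Vec::with_capacity(count);
    let mut tick = last_ack;
    let mut gap = 1;
    let mut sent_at_gap = 0;
    while ticks.len() < count {
        tick += gap;
        ticks.push(tick);
        sent_at_gap += 1;
        if sent_at_gap == x {
            gap *= 2; // double the gap every x updates
            sent_at_gap = 0;
        }
    }
    ticks
}
```

A fresh (non-stale) ack would reset `gap` back to 1.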
Otherwise clients can spam log files when you don't want them to.
The current replicon code does not clean up despawns/component removals on a client after the client reconnects, for despawns/removals that happened while the client was disconnected.
The client needs an additional `reconnect_system` which despawns all entities with `Replication` before the first call to `diff_receiving_system` after a client reconnects.
Sometimes you replicate only part of the world. Commonly used for things like fog of war or card games.
To solve this, @koe from the Bevy Discord server suggested implementing rooms.
Assign each entity a room ID (inspired by naia's rooms). Each client is a member of an arbitrary number of rooms (e.g. using a hashset of ids). By default entities are in room "0", which means global (or maybe they have no room IDs). All other room ids are user-defined. Replicon just needs a resource to track client room membership, and some updates to the replication logic to only replicate an entity for a given client if they are in the same room (and also to "despawn" an entity if it stops being visible to a client).
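A sketch of the room membership tracking (a hypothetical resource, written without Bevy for brevity; plain integer ids stand in for clients, rooms, and entities):

```rust
use std::collections::{HashMap, HashSet};

// Room 0 is the global room: entities in it are visible to every client.
const GLOBAL_ROOM: u32 = 0;

#[derive(Default)]
struct RoomMembership {
    rooms: HashMap<u64, HashSet<u32>>, // client id -> room ids
}

impl RoomMembership {
    fn join(&mut self, client: u64, room: u32) {
        self.rooms.entry(client).or_default().insert(room);
    }

    // Replicate an entity to a client only if the entity is global or the
    // client is a member of the entity's room.
    fn is_visible(&self, client: u64, entity_room: u32) -> bool {
        entity_room == GLOBAL_ROOM
            || self
                .rooms
                .get(&client)
                .map_or(false, |rooms| rooms.contains(&entity_room))
    }
}
```

When a client leaves a room, the replication logic would send "despawn" for entities that are no longer visible to it.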
I believe there is an error in the documentation for the variants of `VisibilityPolicy`. I have checked the source code to confirm this (to the best of my ability). The documentation describing the behaviour of `VisibilityPolicy::Blacklist` and `VisibilityPolicy::Whitelist` appears to be the wrong way around.
Because of how bevy clears event queues, it currently happens that server events can fail to be received by clients using `FixedUpdate`.
You might write to the `EventWriter`, and then a few frames tick past before `FixedUpdate` runs again, by which time the event is long gone. Double buffering keeps it for 2 ticks only.
On your client, if you want to receive the server events in a `FixedUpdate` system, you might not see them all (even though replicon receives them and writes them to the `EventWriter`).
Here (server_event.rs) is where the standard `add_event` is used, which is what the clients consume via a system using normal `EventReader`s.
To make this work for `FixedUpdate` clients, that would need to be a manually created `Events` resource too, like the ones used for the server-side events (the ones with `ToClients<>` wrappers).
But then the question is: where do we clear the event queues? We don't really know if the client is using `FixedUpdate`.
I could change it to a manually created/cleared `Events`, and put a system to clear it in `Last`. That would work the same for people who aren't using `FixedUpdate`.
Then we'd need a setting in the replicon client plugin, `using_fixed_update = true`, which doesn't add the event-clearing system but makes the consumer add it themselves to `FixedUpdate`.
How does that sound?
I'm running at 60 FPS but the default `MaxTickRate` is only 30. This seems to cause server events to be dropped. I think the problem is that the events are only around for one `Update` cycle but the server system that requests them only runs half of the time.
I tried removing the `in_set` condition on the `sending_system` and that fixed it, I think because the system now runs on every `Update` cycle.
`MapNetworkEntities` and `Mapper`.
The following test will sometimes cause a panic:
#[test]
fn despawn_replication_hierarchy() {
let mut server_app = App::new();
let mut client_app = App::new();
for app in [&mut server_app, &mut client_app] {
app.add_plugins((
MinimalPlugins,
ReplicationPlugins.set(ServerPlugin::new(TickPolicy::Manual)),
))
.replicate::<TableComponent>();
}
common::connect(&mut server_app, &mut client_app);
let server_entity = server_app.world.spawn((Replication, TableComponent)).id();
server_app.add_systems(
Update,
move |mut commands: Commands, mut has_run: Local<bool>| {
if *has_run {
return;
}
*has_run = true;
// Should be inserted in `Update` to avoid sync in `PreUpdate`.
commands.entity(server_entity).with_children(|parent| {
parent.spawn((Replication, ParentSync::default()));
});
},
);
server_app.update();
client_app.update();
let client_entities = client_app
.world
.query_filtered::<Entity, With<Replication>>()
.iter(&client_app.world)
.count();
let client_entities_with_parent = client_app
.world
.query_filtered::<Entity, (With<Replication>, With<Parent>)>()
.iter(&client_app.world)
.count();
assert_eq!(client_entities, 2);
assert_eq!(client_entities_with_parent, 1);
despawn_with_children_recursive(&mut server_app.world, server_entity);
server_app.update();
client_app.update();
let entity_map = client_app.world.resource::<NetworkEntityMap>();
assert!(entity_map.to_client().is_empty());
assert!(entity_map.to_server().is_empty());
}
This is the panic backtrace:
thread 'despawn_replication_hierarchy' panicked at 'Entity 1v0 does not exist', src/replicon_core.rs:394:31
stack backtrace:
0: rust_begin_unwind
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:593:5
1: core::panicking::panic_fmt
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panicking.rs:67:14
2: bevy_ecs::world::World::entity_mut::panic_no_entity
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:285:13
3: bevy_ecs::world::World::entity_mut
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:290:21
4: bevy_replicon::replicon_core::WorldDiff::deserialize_to_world::{{closure}}::{{closure}}
at ./src/replicon_core.rs:394:25
5: bevy_ecs::world::World::resource_scope
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1344:22
6: bevy_replicon::replicon_core::WorldDiff::deserialize_to_world::{{closure}}
at ./src/replicon_core.rs:363:13
7: bevy_ecs::world::World::resource_scope
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1344:22
8: bevy_replicon::replicon_core::WorldDiff::deserialize_to_world
at ./src/replicon_core.rs:362:9
9: bevy_replicon::client::ClientPlugin::diff_receiving_system::{{closure}}
at ./src/client.rs:51:17
10: bevy_ecs::world::World::resource_scope
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1344:22
11: bevy_replicon::client::ClientPlugin::diff_receiving_system
at ./src/client.rs:49:9
12: core::ops::function::FnMut::call_mut
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:166:5
13: core::ops::function::impls::<impl core::ops::function::FnMut<A> for &mut F>::call_mut
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:294:13
14: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn() -> Out>>::run::call_inner
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:203:21
15: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn() -> Out>>::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:206:17
16: <bevy_ecs::system::exclusive_function_system::ExclusiveFunctionSystem<Marker,F> as bevy_ecs::system::system::System>::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:103:19
17: bevy_ecs::schedule::executor::multi_threaded::MultiThreadedExecutor::spawn_exclusive_system_task::{{closure}}::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/multi_threaded.rs:592:21
18: core::ops::function::FnOnce::call_once
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:250:5
19: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:271:9
20: std::panicking::try::do_call
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
21: __rust_try
22: std::panicking::try
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
23: std::panic::catch_unwind
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
24: bevy_ecs::schedule::executor::multi_threaded::MultiThreadedExecutor::spawn_exclusive_system_task::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/multi_threaded.rs:591:27
25: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::future::future::Future>::poll
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:296:9
26: <futures_lite::future::CatchUnwind<F> as core::future::future::Future>::poll::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:626:42
27: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:271:9
28: std::panicking::try::do_call
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
29: __rust_try
30: std::panicking::try
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
31: std::panic::catch_unwind
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
32: <futures_lite::future::CatchUnwind<F> as core::future::future::Future>::poll
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:626:9
33: async_executor::Executor::spawn::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-executor-1.5.1/src/lib.rs:145:20
34: async_task::raw::RawTask<F,T,S,M>::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-task-4.4.0/src/raw.rs:563:17
35: async_task::runnable::Runnable<M>::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-task-4.4.0/src/runnable.rs:782:18
36: async_executor::Executor::tick::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-executor-1.5.1/src/lib.rs:210:9
37: bevy_tasks::thread_executor::ThreadExecutorTicker::tick::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/thread_executor.rs:105:39
38: bevy_tasks::task_pool::TaskPool::execute_scope::{{closure}}::{{closure}}::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:503:45
39: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::future::future::Future>::poll
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:296:9
40: <futures_lite::future::CatchUnwind<F> as core::future::future::Future>::poll::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:626:42
41: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:271:9
42: std::panicking::try::do_call
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
43: __rust_try
44: std::panicking::try
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
45: std::panic::catch_unwind
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
46: <futures_lite::future::CatchUnwind<F> as core::future::future::Future>::poll
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:626:9
47: bevy_tasks::task_pool::TaskPool::execute_scope::{{closure}}::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:506:77
48: <futures_lite::future::Or<F1,F2> as core::future::future::Future>::poll
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:526:33
49: bevy_tasks::task_pool::TaskPool::execute_scope::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:509:41
50: bevy_tasks::task_pool::TaskPool::scope_with_executor_inner::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:420:85
51: futures_lite::future::block_on::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:89:27
52: std::thread::local::LocalKey<T>::try_with
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/thread/local.rs:270:16
53: std::thread::local::LocalKey<T>::with
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/thread/local.rs:246:9
54: futures_lite::future::block_on
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:79:5
55: bevy_tasks::task_pool::TaskPool::scope_with_executor_inner
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:374:13
56: bevy_tasks::task_pool::TaskPool::scope_with_executor::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:318:17
57: std::thread::local::LocalKey<T>::try_with
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/thread/local.rs:270:16
58: std::thread::local::LocalKey<T>::with
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/thread/local.rs:246:9
59: bevy_tasks::task_pool::TaskPool::scope_with_executor
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:307:9
60: <bevy_ecs::schedule::executor::multi_threaded::MultiThreadedExecutor as bevy_ecs::schedule::executor::SystemExecutor>::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/multi_threaded.rs:190:9
61: bevy_ecs::schedule::schedule::Schedule::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/schedule.rs:235:9
62: bevy_ecs::world::World::try_run_schedule::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1851:55
63: bevy_ecs::world::World::try_schedule_scope
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1782:21
64: bevy_ecs::world::World::try_run_schedule
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1851:9
65: bevy_app::main_schedule::Main::run_main::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_app-0.11.2/src/main_schedule.rs:146:25
66: bevy_ecs::world::World::resource_scope
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1344:22
67: bevy_app::main_schedule::Main::run_main
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_app-0.11.2/src/main_schedule.rs:144:9
68: core::ops::function::FnMut::call_mut
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:166:5
69: core::ops::function::impls::<impl core::ops::function::FnMut<A> for &mut F>::call_mut
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:294:13
70: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn(F0) -> Out>>::run::call_inner
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:203:21
71: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn(F0) -> Out>>::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:206:17
72: <bevy_ecs::system::exclusive_function_system::ExclusiveFunctionSystem<Marker,F> as bevy_ecs::system::system::System>::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:103:19
73: <bevy_ecs::schedule::executor::single_threaded::SingleThreadedExecutor as bevy_ecs::schedule::executor::SystemExecutor>::run::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/single_threaded.rs:98:21
74: core::ops::function::FnOnce::call_once
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:250:5
75: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:271:9
76: std::panicking::try::do_call
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
77: __rust_try
78: std::panicking::try
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
79: std::panic::catch_unwind
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
80: <bevy_ecs::schedule::executor::single_threaded::SingleThreadedExecutor as bevy_ecs::schedule::executor::SystemExecutor>::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/single_threaded.rs:97:27
81: bevy_ecs::schedule::schedule::Schedule::run
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/schedule.rs:235:9
82: bevy_ecs::world::World::run_schedule::{{closure}}
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1865:51
83: bevy_ecs::world::World::try_schedule_scope
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1782:21
84: bevy_ecs::world::World::schedule_scope
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1836:9
85: bevy_ecs::world::World::run_schedule
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1865:9
86: bevy_app::app::App::update
at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_app-0.11.2/src/app.rs:244:13
87: replication::despawn_replication_hierarchy
at ./tests/replication.rs:288:5
88: replication::despawn_replication_hierarchy::{{closure}}
at ./tests/replication.rs:237:36
89: core::ops::function::FnOnce::call_once
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:250:5
90: core::ops::function::FnOnce::call_once
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:250:5
The methods add_server_event_with() and add_client_event_with() depend on both ServerState and ClientState. This means that if you disable the server or client plugins, the program will crash on update with a 'resource not found' error for the missing state.
The doc test demoing plugin disabling doesn't actually try to update the app. Here is an updated test that fails:
# use bevy::prelude::*;
# use bevy_replicon::prelude::*;
# let mut app = App::new();
app.add_plugins(MinimalPlugins).add_plugins(
    ReplicationPlugins
        .build()
        .disable::<ClientPlugin>()
        .set(ServerPlugin { tick_policy: TickPolicy::MaxTickRate(60) }),
);
# #[derive(Debug)]
# struct X;
# app.add_server_event_with::<X, _, _>(|| {}, || {});
# app.update();
Hello! Awesome project, it looks very much like what I'd like to use (or implement myself), except that I need a different networking layer for my game.
I figured I could simply use the replication_core, but that also depends on renet. It would be great if this project were built around a trait that anyone could implement, as a bring-your-own networking layer.
In addition, the project is currently structured as a single crate. I suspect I will have issues building it for wasm on the web due to the dependencies on Send/Sync and other things that don't work in wasm, so even if I don't use those parts, I suspect it won't build.
Would you consider restructuring the code in such a way that:
The rationale for (2) in addition to (1) is that in the browser environment, the data exchange may not even happen over the network in the usual sense: it might use message posting or some other unusual mechanism (e.g. serializing the exchanges via HTTP requests and SSE responses). In these situations, it would be hard to implement a trait that looks like a networking API, but it would be possible to drive the core logic with data obtained "somehow".
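To make the request concrete, a bring-your-own-transport design could be sketched as a small trait that the replication core drives. This is a minimal sketch under stated assumptions: the names TransportBackend, ClientId, and LoopbackBackend are hypothetical illustrations, not part of bevy_replicon's actual API.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a client identifier type.
pub type ClientId = u64;

/// Hypothetical sketch: a transport-agnostic backend trait the replication
/// core could drive instead of depending on renet directly.
pub trait TransportBackend {
    /// Queue a payload for a client on a given channel.
    fn send(&mut self, client: ClientId, channel: u8, payload: Vec<u8>);
    /// Drain all payloads received from a client on a channel.
    fn receive(&mut self, client: ClientId, channel: u8) -> Vec<Vec<u8>>;
    /// Clients currently considered connected.
    fn connected_clients(&self) -> Vec<ClientId>;
}

/// An in-process loopback backend, e.g. for tests or message-posting
/// environments where no real network exists.
#[derive(Default)]
pub struct LoopbackBackend {
    queues: HashMap<(ClientId, u8), Vec<Vec<u8>>>,
}

impl TransportBackend for LoopbackBackend {
    fn send(&mut self, client: ClientId, channel: u8, payload: Vec<u8>) {
        self.queues.entry((client, channel)).or_default().push(payload);
    }

    fn receive(&mut self, client: ClientId, channel: u8) -> Vec<Vec<u8>> {
        self.queues.remove(&(client, channel)).unwrap_or_default()
    }

    fn connected_clients(&self) -> Vec<ClientId> {
        self.queues.keys().map(|(client, _)| *client).collect()
    }
}

fn main() {
    let mut backend = LoopbackBackend::default();
    backend.send(0, 1, b"spawn entity".to_vec());
    println!("{}", backend.receive(0, 1).len()); // 1
}
```

With such a trait, the core crate would only ever call send/receive on the backend resource, and a wasm build could supply a backend that forwards payloads via postMessage or HTTP without touching any socket APIs.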
I am using the editor to see live the changes from the custom component.
If I change the enabled boolean from the server, it will replicate to the client.
But when I change it on the client, it does not replicate back to the server.
Is bidirectional replication supposed to work? I assumed it should; otherwise I'm not sure how this plugin is meant to be used.
Here is a sample with the cargo deps; the custom component is Compo. I ran both server and client, first with cargo run server and then cargo run client.
[dependencies]
bevy = {version = "0.10.1", features = ["dynamic_linking"]}
bevy_editor_pls = "0.4.0"
bevy_replicon = "0.3.0"
clap = { version = "4.2.3", features = ["derive"] }
use bevy::prelude::*;
use bevy_editor_pls::EditorPlugin;
use bevy_replicon::{
    prelude::*,
    renet::{ClientAuthentication, RenetConnectionConfig, ServerAuthentication, ServerConfig},
};
use clap::Parser;
use std::{
    net::{Ipv4Addr, SocketAddr, UdpSocket},
    time::SystemTime,
};

const PORT: u16 = 1234;
const PROTOCOL_ID: u64 = 1;
const MAX_CLIENTS: usize = 1;

fn main() {
    App::new()
        .init_resource::<Cli>()
        .add_plugins(DefaultPlugins)
        .add_plugin(EditorPlugin::default())
        .add_plugins(ReplicationPlugins)
        .replicate::<Compo>()
        .add_startup_system(setup)
        .run();
}

#[derive(Debug, Parser, PartialEq, Resource)]
#[command(author, version, about, long_about = None)]
pub enum Cli {
    Server,
    Client,
}

impl Default for Cli {
    fn default() -> Self {
        Self::parse()
    }
}

#[derive(Component, Default, Reflect)]
#[reflect(Component)]
pub struct Compo {
    pub enabled: bool,
}

pub fn setup(mut commands: Commands, cli: Res<Cli>, network_channels: Res<NetworkChannels>) {
    let send_channels_config = network_channels.server_channels();
    let receive_channels_config = network_channels.client_channels();
    let connection_config = RenetConnectionConfig {
        send_channels_config,
        receive_channels_config,
        ..Default::default()
    };
    let current_time = SystemTime::now()
        .duration_since(SystemTime::UNIX_EPOCH)
        .unwrap();
    let server_addr = SocketAddr::new(Ipv4Addr::LOCALHOST.into(), PORT);
    match *cli {
        Cli::Server => {
            let socket = UdpSocket::bind(server_addr).unwrap();
            let server_config = ServerConfig::new(
                MAX_CLIENTS,
                PROTOCOL_ID,
                server_addr,
                ServerAuthentication::Unsecure,
            );
            let server =
                RenetServer::new(current_time, server_config, connection_config, socket).unwrap();
            commands.insert_resource(server);
            commands.spawn((Compo { enabled: true }, Replication));
        }
        Cli::Client => {
            let client_id = current_time.as_millis() as u64;
            let socket =
                UdpSocket::bind((Ipv4Addr::LOCALHOST, 0)).expect("localhost should be bindable");
            let authentication = ClientAuthentication::Unsecure {
                client_id,
                protocol_id: PROTOCOL_ID,
                server_addr,
                user_data: None,
            };
            let client =
                RenetClient::new(current_time, socket, connection_config, authentication).unwrap();
            commands.insert_resource(client);
        }
    }
}
I'd like the ability to not only be able to define which entities are visible to specific clients (possible with the bevy_replicon::server::VisibilityPolicy) but also be able to define what components are visible for each client.
An example: A client can see the stats of his own troops but not the ones of his enemies.
@Shatur and I had a discussion on the Bevy Discord in the bevy_replicon ecosystem-crates channel.
Here is a short summary of what Shatur wrote:
The API was originally suggested by @NiseVoid. The idea is to have component access levels via bitmasks, like in a physics engine. Users define their meaning. Some examples:
To achieve this, we assign a mask to each component, like "send this component only to the owner and party members", and assign masks to client entities.
To achieve this we can turn the hashset for the whitelist policy into a hashmap, and for the blacklist policy add an additional hashmap. Both hashmaps will map an entity to its mask for a client. We also need to store the last processed value in order to detect changes, so the map will be entity -> (mask, mask).
Should this API be per component or per component group (i.e. per replication rule)?
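The bitmask scheme above can be sketched in plain Rust. This is a minimal sketch, not bevy_replicon code: the bit names, the ClientVisibility type, and the use of u64 for Entity are all illustrative assumptions.

```rust
use std::collections::HashMap;

// Hypothetical, user-defined mask bits (meanings are up to the game).
const OWNER: u64 = 1 << 0;
const PARTY: u64 = 1 << 1;
const ENEMY: u64 = 1 << 2;

// Stand-in for bevy_ecs::entity::Entity to keep the sketch std-only.
type Entity = u64;

/// Per-client visibility state, mirroring the entity -> (mask, mask)
/// scheme described above: (current mask, last processed mask).
#[derive(Default)]
struct ClientVisibility {
    masks: HashMap<Entity, (u64, u64)>,
}

impl ClientVisibility {
    fn set_mask(&mut self, entity: Entity, mask: u64) {
        self.masks.entry(entity).or_insert((0, 0)).0 = mask;
    }

    /// A component tagged with `component_mask` is sent to this client
    /// for `entity` if any bit overlaps with the entity's client mask.
    fn is_visible(&self, entity: Entity, component_mask: u64) -> bool {
        self.masks
            .get(&entity)
            .map_or(false, |(mask, _)| mask & component_mask != 0)
    }

    /// Entities whose mask changed since the last processing pass;
    /// this is what the stored second mask is for.
    fn drain_changes(&mut self) -> Vec<Entity> {
        let mut changed = Vec::new();
        for (&entity, (mask, last)) in &mut self.masks {
            if mask != last {
                *last = *mask;
                changed.push(entity);
            }
        }
        changed
    }
}
```

For the troop-stats example: the stats component would carry mask OWNER | PARTY, and an enemy client's entry for that troop entity would only have the ENEMY bit set, so the overlap check fails and the stats are never sent to that client.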
Currently the despawn tracker calls .retain() on a hashmap of cached replicated entities every tick to identify despawns. This is far from ideal, especially since iterating a hashmap is slow.
Instead, we can add a component to replicated entities that contains a Sender<Entity>, and implement a custom Drop on that component that sends the entity's Entity to a receiver polled once per tick. This way we don't need to track a despawn map, so the overall memory footprint should be similar and the amortized cost should be much lower.
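The Drop-plus-channel idea can be sketched with std::sync::mpsc. This is a minimal illustration, not the crate's implementation: DespawnNotifier and collect_despawns are hypothetical names, and Entity is a std-only stand-in for bevy_ecs::entity::Entity.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// Stand-in for bevy_ecs::entity::Entity to keep the sketch std-only.
type Entity = u64;

/// Hypothetical component attached to every replicated entity. When the
/// entity is despawned, its components are dropped, and this Drop impl
/// reports the entity id through the channel.
struct DespawnNotifier {
    entity: Entity,
    sender: Sender<Entity>,
}

impl Drop for DespawnNotifier {
    fn drop(&mut self) {
        // The receiver may already be gone during shutdown; ignore errors.
        let _ = self.sender.send(self.entity);
    }
}

/// Polled once per tick on the server to collect despawned entities,
/// replacing the per-tick .retain() scan over a hashmap.
fn collect_despawns(receiver: &Receiver<Entity>) -> Vec<Entity> {
    receiver.try_iter().collect()
}

fn main() {
    let (sender, receiver) = channel();
    let notifier = DespawnNotifier { entity: 7, sender };
    drop(notifier); // simulates the entity being despawned
    println!("{:?}", collect_despawns(&receiver)); // [7]
}
```

One caveat to design around: Drop also fires if the component alone is removed (or moved during archetype changes, depending on ECS internals), not only on entity despawn, so a real implementation would need to distinguish those cases.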
Add documentation summarizing the replication protocol, with performance and synchronization characteristics.
Server and client event logic is separated into "systems for a client event" and "systems for a server event". The big events refactor #262 didn't change this, but I think it makes sense to revisit this aspect as well.
Many users have separate client and server builds, and they expect to disable ClientEventsPlugin on the server and ServerEventsPlugin on the client. But this won't work, because disabling a plugin disables those events entirely.
We need to move all server systems into ServerEventsPlugin and all client systems into ClientEventsPlugin, and the events registry should be initialized by the core plugin.
Thanks so much for this awesome crate!
I have adapted the simple_box example to use runtime server / client initialisation and am encountering a really strange bug: consistently, one 'part' of my program seems to run at 'the wrong time', so that the server side of bevy_replicon doesn't actually replicate any entities to clients, while the other 'part' of my program seems to run 'at the correct time', so that the bevy_replicon server works just fine. I will be more specific as I strip away more layers.
I have been frustrated with this for a few days now, and was wondering what errors I might not know about when I initialise my NetcodeServerTransport and RenetServer resources in the Update or OnEnter(T) schedule versus the Startup schedule. The examples use the Startup schedule because they decide whether to run as server / client / solo by reading CLI args, but I envisage my game being able to freely switch between server, client, and solo modes at runtime.
How can I debug bevy_replicon? Is it designed for this use case of dynamically switching and connecting to different servers? (Also, I am getting a segfault immediately after running on the nightly compiler, so I'm using the stable toolchain.)
I just upgraded to bevy_replicon = "0.16".
This is an awesome crate, and believe it or not it is actually blocking me from upgrading to bevy 0.12 :(
I'll try to do a simple upgrade, anything I should know about @Shatur?
The manual fragmentation PR #116 changes how component updates are acked. The server tracks pending acks, and removes them when an ack appears. This is currently a memory leak since the client isn't guaranteed (or expected) to ack all component updates, which are sent over an unreliable channel.
We should add a cleanup protocol that automatically discards stale pending acks after a period of time (e.g. 2x the server's client timeout).
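A cleanup pass along these lines could be sketched as follows. This is a std-only illustration under stated assumptions: the PendingAcks type, the Tick alias, and the concrete timeout are hypothetical, not the crate's internals.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical stand-in for the server's update tick counter.
type Tick = u32;

/// Hypothetical tracker for component updates awaiting a client ack.
struct PendingAcks {
    /// Update tick -> time the update was sent.
    sent_at: HashMap<Tick, Instant>,
    /// Discard pending acks older than this, e.g. 2x the client timeout.
    max_age: Duration,
}

impl PendingAcks {
    fn new(max_age: Duration) -> Self {
        Self { sent_at: HashMap::new(), max_age }
    }

    /// Record an update sent over the unreliable channel.
    fn register(&mut self, tick: Tick, now: Instant) {
        self.sent_at.insert(tick, now);
    }

    /// Called when the client acks a tick; removes the pending entry.
    fn acknowledge(&mut self, tick: Tick) {
        self.sent_at.remove(&tick);
    }

    /// Periodic cleanup: forget updates the client will never ack,
    /// bounding memory instead of leaking un-acked entries forever.
    fn discard_stale(&mut self, now: Instant) {
        self.sent_at
            .retain(|_, sent| now.duration_since(*sent) < self.max_age);
    }

    fn len(&self) -> usize {
        self.sent_at.len()
    }
}
```

Keying the cleanup on elapsed time rather than tick distance keeps it independent of the tick rate, which matters here because TickPolicy allows variable rates.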