
bevy_networking_turbulence's Introduction

The future is Under Construction

bevy_networking_turbulence's People

Contributors

billyb2, itytophile, jcapucho, mgeist, mvlabat, phuocthanhdo, rj, smokku

bevy_networking_turbulence's Issues

(Win10) Wasm server crashes on startup

First, small fix for readme:
cargo run --example simple --no-default-features --features use-webrtc -- --server
(missing --no-default-features)

This is the error:

PS C:\Projects-temp\bevy_networking_turbulence> cargo run --example simple --no-default-features --features use-webrtc -- --server
Finished dev [unoptimized + debuginfo] target(s) in 0.26s
Running `target\debug\examples\simple.exe --server`
error: process didn't exit successfully: `target\debug\examples\simple.exe --server` (exit code: 0xc0000135, STATUS_DLL_NOT_FOUND)

STATUS_DLL_NOT_FOUND
Most definitely a Windows-specific error!
It is triggered when calling net.listen. I will dig and see if I can trace it further.

Client / Server running inside same instance

Hello,

Firstly, thank you for making this library - it's been great fun to play around with.

For development speed I'd like to have the Server and Client logic running inside the same Bevy instance. Similar to your balls example, the client would send events through the reliable channel at 60fps and the server would broadcast state via an unreliable channel on state update.

It'd be good to be able to separate the client-side logic and the server-side simulation as soon as possible, as the prototype will be P2P.

Having the NetworkResource listen() and connect() on the same address / port has been predictably problematic with broadcast messages not being received. Can you think of any quick work-arounds?
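One workaround is to give each endpoint its own port even inside one process: listen() on one port and connect() to it from a second socket. The sketch below illustrates the constraint with plain std UDP sockets, not with the plugin's NetworkResource API:

```rust
use std::net::UdpSocket;

// Two UDP endpoints in one process must bind *distinct* ports;
// listen() and connect() on the same address/port collide.
fn ping_pong() -> std::io::Result<String> {
    let server = UdpSocket::bind("127.0.0.1:0")?; // OS picks a free port
    let client = UdpSocket::bind("127.0.0.1:0")?; // a second, distinct port
    client.send_to(b"ping", server.local_addr()?)?;
    let mut buf = [0u8; 16];
    let (n, from) = server.recv_from(&mut buf)?;
    assert_eq!(&buf[..n], b"ping");
    server.send_to(b"pong", from)?;
    let (n, _) = client.recv_from(&mut buf)?;
    Ok(String::from_utf8_lossy(&buf[..n]).into_owned())
}

fn main() {
    println!("{}", ping_pong().unwrap());
}
```

The same idea should carry over to the plugin: have the server half listen() on port A and the client half connect() to 127.0.0.1:A, rather than sharing one address.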

Removing a connection causes a panic sometimes

I can't always reproduce it, it's probably some kind of a race condition.

thread 'IO Task Pool (0)' panicked at 'called `Option::unwrap()` on a `None` value', /Users/mvlabat/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_networking_turbulence-0.3.3/src/transport.rs:209:55
stack backtrace:
   0: _rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::panicking::panic
   3: core::option::Option<T>::unwrap
             at /rustc/59216858a323978a97593cba22b5ed84350a3783/library/core/src/option.rs:722:21
   4: <bevy_networking_turbulence::transport::ServerConnection as bevy_networking_turbulence::transport::Connection>::build_channels::{{closure}}
             at /Users/mvlabat/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_networking_turbulence-0.3.3/src/transport.rs:209:30
   5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/59216858a323978a97593cba22b5ed84350a3783/library/core/src/future/mod.rs:80:19
   6: async_executor::Executor::spawn::{{closure}}
             at /Users/mvlabat/.cargo/registry/src/github.com-1ecc6299db9ec823/async-executor-1.4.1/src/lib.rs:144:13
   7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/59216858a323978a97593cba22b5ed84350a3783/library/core/src/future/mod.rs:80:19
   8: async_task::raw::RawTask<F,T,S>::run
             at /Users/mvlabat/.cargo/registry/src/github.com-1ecc6299db9ec823/async-task-4.0.3/src/raw.rs:489:20
   9: async_executor::Executor::run::{{closure}}::{{closure}}
             at /Users/mvlabat/.cargo/registry/src/github.com-1ecc6299db9ec823/async-executor-1.4.1/src/lib.rs:235:21
  10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/59216858a323978a97593cba22b5ed84350a3783/library/core/src/future/mod.rs:80:19
  11: <futures_lite::future::Or<F1,F2> as core::future::future::Future>::poll
             at /Users/mvlabat/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-lite-1.12.0/src/future.rs:529:33
  12: async_executor::Executor::run::{{closure}}
             at /Users/mvlabat/.cargo/registry/src/github.com-1ecc6299db9ec823/async-executor-1.4.1/src/lib.rs:242:9
  13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/59216858a323978a97593cba22b5ed84350a3783/library/core/src/future/mod.rs:80:19
  14: futures_lite::future::block_on::{{closure}}
             at /Users/mvlabat/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-lite-1.12.0/src/future.rs:89:27
  15: std::thread::local::LocalKey<T>::try_with
             at /rustc/59216858a323978a97593cba22b5ed84350a3783/library/std/src/thread/local.rs:399:16
  16: std::thread::local::LocalKey<T>::with
             at /rustc/59216858a323978a97593cba22b5ed84350a3783/library/std/src/thread/local.rs:375:9
  17: futures_lite::future::block_on
             at /Users/mvlabat/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-lite-1.12.0/src/future.rs:79:5
  18: bevy_tasks::task_pool::TaskPool::new_internal::{{closure}}::{{closure}}
             at /Users/mvlabat/.cargo/git/checkouts/_empty-a49bea87237abba3/5a3e28a/crates/bevy_tasks/src/task_pool.rs:139:25

Can't launch two servers on the same machine (WebRTC)

This is probably because naia-server-socket always starts the RTC on the same port, and this is not exposed through bevy_networking_turbulence. Planning on addressing it myself.

thread 'Compute Task Pool (6)' panicked at 'could not start RTC server: Os { code: 48, kind: AddrInUse, message: "Address already in use" }'

This is from naia-server-socket-0.4.2/src/impls/webrtc/server_socket.rs:153:14

Can't connect with server listening on `0.0.0.0`, but `127.0.0.1` works (WebRTC)

Hey, I've held off opening an issue here for a few days now but I'm really stumped.

I've had success connecting client to server using WebRTC, however I'm only able to do so when the server listens on localhost:

let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 8081);
net.listen(addr, None, None);

Like this, I'm able to connect just fine from my browser and things work as I'd expect. But I want my server to be available publicly, so localhost is not an option for me.

If I use the unspecified address (0.0.0.0) in the server's listen parameter...

let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), 8081);
net.listen(addr, None, None);

...WebRTC is no longer able to connect, and my client logs fill up with message send failures. On the server side, interestingly enough, I am indeed able to see the client's WebRTC session request coming in, but no WebRTC messages come through, and the session deterministically times out each time.

I'm happy to provide more information, but I don't really know where to start on figuring this out. Is listening on 0.0.0.0 supposed to just work, or am I doing something funny in my code?
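One observation that may help: 0.0.0.0 can only ever work as a bind address; any address handed to the browser during WebRTC session negotiation has to be a concrete, routable IP. A std-only sketch of that distinction (203.0.113.7 is a documentation-range placeholder, not anything taken from the plugin):

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// 0.0.0.0 is a valid *bind* address, but it is never a routable
// *destination*: a browser told to dial 0.0.0.0 has nowhere to go.
fn is_advertisable(addr: &SocketAddr) -> bool {
    !addr.ip().is_unspecified() && !addr.ip().is_loopback()
}

fn main() {
    let bind = SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), 8081);
    // Placeholder public IP from the TEST-NET-3 documentation range.
    let public = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(203, 0, 113, 7)), 8081);
    assert!(!is_advertisable(&bind));
    assert!(is_advertisable(&public));
    println!("bind on {}, advertise {}", bind, public);
}
```

The two None parameters in the listen() calls above look like the natural place to pass such an advertised public address while still binding on 0.0.0.0, though I haven't verified the exact signature.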

Bevy 0.5 support

The plugin no longer works with the latest Bevy release.
Is this awesome networking plugin going to support 0.5 in the near future?

Wasm build fails

Excited to try this out!
However, the wasm build fails:

PS C:\Projects-temp\bevy_networking_turbulence> cargo build --example simple --target wasm32-unknown-unknown
   Compiling slice-deque v0.3.0
   Compiling libloading v0.6.5
   Compiling num-iter v0.1.42
   Compiling bitflags v1.2.1
   Compiling byteorder v1.3.4
   Compiling maybe-uninit v2.0.0
   Compiling anyhow v1.0.34
   Compiling crc32fast v1.2.1
   Compiling proc-macro-nested v0.1.6
   Compiling rustc_version v0.2.3
   Compiling unicode-normalization v0.1.16
error[E0433]: failed to resolve: use of undeclared type or module `imp`
  --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\libloading-0.6.5\src\lib.rs:66:20
   |
66 | pub struct Library(imp::Library);
   |                    ^^^ use of undeclared type or module `imp`

error[E0433]: failed to resolve: use of undeclared type or module `imp`
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\libloading-0.6.5\src\lib.rs:131:9
    |
131 |         imp::Library::new(filename).map(From::from)
    |         ^^^ use of undeclared type or module `imp`

error[E0433]: failed to resolve: use of undeclared type or module `imp`
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\libloading-0.6.5\src\lib.rs:213:11
    |
213 | impl From<imp::Library> for Library {
    |           ^^^ use of undeclared type or module `imp`

error[E0433]: failed to resolve: use of undeclared type or module `imp`
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\libloading-0.6.5\src\lib.rs:214:18
    |
214 |     fn from(lib: imp::Library) -> Library {
    |                  ^^^ use of undeclared type or module `imp`

error[E0433]: failed to resolve: use of undeclared type or module `imp`
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\libloading-0.6.5\src\lib.rs:219:24
    |
219 | impl From<Library> for imp::Library {
    |                        ^^^ use of undeclared type or module `imp`

error[E0433]: failed to resolve: use of undeclared type or module `imp`
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\libloading-0.6.5\src\lib.rs:220:30
    |
220 |     fn from(lib: Library) -> imp::Library {
    |                              ^^^ use of undeclared type or module `imp`

error[E0433]: failed to resolve: use of undeclared type or module `imp`
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\libloading-0.6.5\src\lib.rs:241:12
    |
241 |     inner: imp::Symbol<T>,
    |            ^^^ use of undeclared type or module `imp`

error[E0433]: failed to resolve: use of undeclared type or module `imp`
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\libloading-0.6.5\src\lib.rs:264:37
    |
264 |     pub unsafe fn into_raw(self) -> imp::Symbol<T> {
    |                                     ^^^ use of undeclared type or module `imp`

error[E0433]: failed to resolve: use of undeclared type or module `imp`
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\libloading-0.6.5\src\lib.rs:289:36
    |
289 |     pub unsafe fn from_raw<L>(sym: imp::Symbol<T>, _: &'lib L) -> Symbol<'lib, T> {
    |                                    ^^^ use of undeclared type or module `imp`

error: aborting due to 9 previous errors

For more information about this error, try `rustc --explain E0433`.
error[E0425]: cannot find function `allocation_granularity` in this scope
 --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\slice-deque-0.3.0\src\mirrored\buffer.rs:7:14
  |
7 |     let ag = allocation_granularity();
  |              ^^^^^^^^^^^^^^^^^^^^^^ not found in this scope

error[E0425]: cannot find function `allocation_granularity` in this scope
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\slice-deque-0.3.0\src\mirrored\buffer.rs:112:15
    |
112 |             * allocation_granularity();
    |               ^^^^^^^^^^^^^^^^^^^^^^ not found in this scope

error[E0425]: cannot find function `allocation_granularity` in this scope
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\slice-deque-0.3.0\src\mirrored\buffer.rs:135:41
    |
135 |         assert!(mem::align_of::<T>() <= allocation_granularity());
    |                                         ^^^^^^^^^^^^^^^^^^^^^^ not found in this scope

error[E0425]: cannot find function `allocation_granularity` in this scope
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\slice-deque-0.3.0\src\mirrored\buffer.rs:148:36
    |
148 |         debug_assert!(alloc_size % allocation_granularity() == 0);
    |                                    ^^^^^^^^^^^^^^^^^^^^^^ not found in this scope

error[E0425]: cannot find function `allocate_mirrored` in this scope
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\slice-deque-0.3.0\src\mirrored\buffer.rs:151:19
    |
151 |         let ptr = allocate_mirrored(alloc_size)?;
    |                   ^^^^^^^^^^^^^^^^^ not found in this scope

error[E0425]: cannot find function `deallocate_mirrored` in this scope
   --> C:\Users\xinux\.cargo\registry\src\github.com-1ecc6299db9ec823\slice-deque-0.3.0\src\mirrored\buffer.rs:172:18
    |
172 |         unsafe { deallocate_mirrored(first_half_ptr, buffer_size_in_bytes) };
    |                  ^^^^^^^^^^^^^^^^^^^ not found in this scope

error: could not compile `libloading`.

To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: aborting due to 6 previous errors

For more information about this error, try `rustc --explain E0425`.
error: build failed

I'm on Windows 10 1809

naia-socket 0.6

What do you think of upgrading to the latest version of naia-client-socket and naia-server-socket? Currently, there's a bug with naia-client-socket that doesn't allow it to work in Firefox without either manual patching or using the latest version.

README example doesn't work

README example doesn't work (002a7ae)

Maybe this is due to using wasm-bindgen directly, rather than wasm-pack?

simple.js:323 panicked at 'could not initialize thread_rng: getrandom: this target is not supported', /Users/johnmayer/.cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.7.3/src/rngs/thread.rs:65:17

Stack:

Error
    at imports.wbg.__wbg_new_59cb74e423758ede (http://127.0.0.1:4000/target/simple.js:329:19)
    at console_error_panic_hook::Error::new::h494e4956a227c339 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[14203]:0x2d7278)
    at console_error_panic_hook::hook_impl::h8f2c406d1778ae80 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[1406]:0x12e4d2)
    at console_error_panic_hook::hook::h4aa82df0ae9adfe2 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[15742]:0x2ecdfd)
    at core::ops::function::Fn::call::ha7ab20e537201524 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[13176]:0x2c7110)
    at std::panicking::rust_panic_with_hook::h123718ba3bf480af (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[2994]:0x1a4468)
    at std::panicking::begin_panic_handler::{{closure}}::hf393a82e58397bd2 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[5444]:0x2132b1)
    at std::sys_common::backtrace::__rust_end_short_backtrace::h2ff2cfc953878925 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[16947]:0x2fc2fe)
    at rust_begin_unwind (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[15260]:0x2e66e5)
    at std::panicking::begin_panic_fmt::h96153d9dc0e882ed (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[16763]:0x2f9fc5)

simple_bg.wasm:0x30ddad Uncaught (in promise) RuntimeError: unreachable
    at __rust_start_panic (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[19154]:0x30ddad)
    at rust_panic (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[17584]:0x302e8a)
    at std::panicking::rust_panic_with_hook::h123718ba3bf480af (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[2994]:0x1a448f)
    at std::panicking::begin_panic_handler::{{closure}}::hf393a82e58397bd2 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[5444]:0x2132b1)
    at std::sys_common::backtrace::__rust_end_short_backtrace::h2ff2cfc953878925 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[16947]:0x2fc2fe)
    at rust_begin_unwind (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[15260]:0x2e66e5)
    at std::panicking::begin_panic_fmt::h96153d9dc0e882ed (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[16763]:0x2f9fc5)
    at rand::rngs::thread::THREAD_RNG_KEY::__init::{{closure}}::h8ae4330291bfd8f8 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[3734]:0x1cb502)
    at core::result::Result<T,E>::unwrap_or_else::h633a94d6b0dafd1c (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[748]:0xe029f)
    at rand::rngs::thread::THREAD_RNG_KEY::__init::h5b5b278c06a41cb3 (http://127.0.0.1:4000/target/simple_bg.wasm:wasm-function[1507]:0x137f8e)

Heartbeats aren't received when using WebRTC

It seems to me like heartbeats don't work when default-features = false and the feature "use-webrtc" is enabled.

If I toggle WebRTC off, I see the heartbeats arrive on both client and server; when I toggle WebRTC back on for both client and server, neither side receives them.

I have added prints in my local fork to verify what is happening: the net.send(handle, Packet::new()).unwrap(); line that sends the heartbeats is reached. However, the receiving routine in Ok(packet) => { ... packet read here... } in receive_packets is never triggered on the opposite end. This holds on both client and server.

Could this be due to the heartbeats not being turbulence messages but pure UDP (as per the heartbeats readme file)?

Panic on sending heartbeat after build channels

The problem

If channels are set up, it is no longer possible to send regular packets. Since the send method uses the sender, which has been moved out when building channels, it panics trying to unwrap None.
Due to this bug we cannot use the built-in heartbeat feature along with channels.

Steps to reproduce

  1. Create bevy project
  2. Add NetworkingPlugin with auto_heartbeat_ms: Some(1000)
  3. Setup message channels
  4. Run server
  5. Connect client to server

Possible solution

Create separate senders for packets and messages.

Backtrace

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_networking_turbulence-0.3.3/src/transport.rs:282:14
stack backtrace:
   0: rust_begin_unwind
             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:493:5
   1: core::panicking::panic_fmt
             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/panicking.rs:92:14
   2: core::panicking::panic
             at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/panicking.rs:50:5
   3: core::option::Option<T>::unwrap
             at /home/vigdail/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/option.rs:386:21
   4: <bevy_networking_turbulence::transport::ClientConnection as bevy_networking_turbulence::transport::Connection>::send
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_networking_turbulence-0.3.3/src/transport.rs:280:9
   5: bevy_networking_turbulence::NetworkResource::send
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_networking_turbulence-0.3.3/src/lib.rs:379:33
   6: bevy_networking_turbulence::heartbeats_and_timeouts
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_networking_turbulence-0.3.3/src/lib.rs:473:9
   7: core::ops::function::Fn::call
             at /home/vigdail/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:70:5
   8: core::ops::function::impls::<impl core::ops::function::FnMut<A> for &F>::call_mut
             at /home/vigdail/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:247:13
   9: <Func as bevy_ecs::system::into_system::SystemParamFunction<(),Out,(F0,F1),()>>::run
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_ecs-0.5.0/src/system/into_system.rs:207:21
  10: <bevy_ecs::system::into_system::FunctionSystem<In,Out,Param,Marker,F> as bevy_ecs::system::system::System>::run_unsafe
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_ecs-0.5.0/src/system/into_system.rs:147:19
  11: bevy_ecs::schedule::executor_parallel::ParallelExecutor::prepare_systems::{{closure}}
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_ecs-0.5.0/src/schedule/executor_parallel.rs:200:30
  12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /home/vigdail/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80:19
  13: async_executor::Executor::spawn::{{closure}}
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/async-executor-1.4.1/src/lib.rs:144:13
  14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /home/vigdail/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80:19
  15: async_task::raw::RawTask<F,T,S>::run
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/async-task-4.0.3/src/raw.rs:489:20
  16: async_executor::Executor::try_tick
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/async-executor-1.4.1/src/lib.rs:181:17
  17: bevy_tasks::task_pool::TaskPool::scope::{{closure}}
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_tasks-0.5.0/src/task_pool.rs:222:21
  18: std::thread::local::LocalKey<T>::try_with
             at /home/vigdail/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/local.rs:272:16
  19: std::thread::local::LocalKey<T>::with
             at /home/vigdail/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/local.rs:248:9
  20: bevy_tasks::task_pool::TaskPool::scope
             at /home/vigdail/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_tasks-0.5.0/src/task_pool.rs:169:9
...

Branch to keep up-to-date with bevy repo?

The Bevy main repo has upgraded wasm-bindgen, and renamed the AppBuilder to just App. Is there a possibility of keeping a branch up-to-date with bevy repo until an official release is made? I've worked around this by pulling the repo and making the changes locally, but it would be preferable to use the git repo.

An additional branch would mean fixes can be applied to the main branch (for the "official" bevy release) without breaking any existing dependencies.

How does this compare to using Laminar?

I'm quite new to games networking so I have trouble comparing similar projects without some guidance.

How would this project compare to: Bevy Networking with Laminar?

These are the features Laminar provides:

  • Fragmentation
  • Unreliable packets
  • Unreliable sequenced packets
  • Reliable unordered packets
  • Reliable ordered packets
  • Reliable sequenced packets
  • Rtt estimations
  • Protocol version monitoring
  • Basic connection management
  • Heartbeat
  • Basic DoS mitigation
  • High Timing control
  • Protocol Versioning
  • Well-tested by integration and unit tests
  • Can be used by multiple threads (Sender, Receiver)

I see that this project uses a stripped-down version of the WebRTC protocol; do some of the features above come for free because of that?

Ability to manually close a connection

It would be great to add a possibility to forcibly close a connection with a peer.

While closing connections seems to be implied/covered by #6, this issue is intended to have a narrower scope.

Whereas #6 implies automatic connection tracking, this simple feature would allow working around the limitations of missing connection tracking by letting users take care of it on the application level.
Also, this feature will be useful even if automatic connection tracking arrives. For instance, forcible closing of a connection would mean emptying channel buffers, so that previous reliable messages wouldn't be resent.

Forcibly closing a connection shouldn't imply that the other peer can't send a message and thereby open a connection again. I just expect the "closing" peer to stop trying to deliver old reliable messages.

I've never designed lower-level networking libraries, but I hope this feature request makes sense. Would be happy to hear what you think about it.

Thanks!

NetworkEvent::Connected is created immediately when connecting

When trying to connect to a server using NetworkResource::connect, a Connected event is created. This happens immediately (apart from a one-frame delay), even if there isn't actually a server listening.
I find this misleading and think this event should not be sent when connecting as a client.

What is the intended way to check connectivity?

Channels example client doesn't build with use-webrtc feature

Trying to reconstruct the build commands for the channel example on web, but ran into issues.

Server works fine:

env RUST_LOG=debug cargo run --example channels --no-default-features --features use-webrtc,bevy/default -- --server

But client does not:

cargo build --example channels --target wasm32-unknown-unknown --no-default-features --features use-webrtc,bevy/default

The client fails to build with a compile issue in wgpu: lots of "unresolved import", "failed to resolve", and "cannot find type" errors in web_sys.

Another crate feature missing?

Missing connection tracking

If you quit a client application and then open a new one on the same port, it can't reconnect to the server until you restart the server. 😬

Call flush once per tick rather than after every send_message

Currently turbulence's channel flush() is called within each call to NetworkResource::send_message or broadcast_message. This will immediately send a packet containing the message.

If I understand correctly, one of the features of turbulence messages is that, given they must be less than the size of a packet, it coalesces them such that multiple messages are sent per packet.

By flushing on every send, we are only ever sending one message per packet.

In the case of bevy, surely it would be better to flush() each channel once at the end of a tick, so that all the small messages that were sent by systems during that tick can be coalesced into fewer packets.

Perhaps exposing a flush_all_messages.system() that the user can run in an appropriate stage?
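The coalescing win can be sketched independently of turbulence's actual API; coalesce below is a hypothetical helper for illustration, not a plugin or turbulence function:

```rust
// Coalescing sketch: pack as many small messages as fit into each packet,
// flushing once per tick instead of once per message.
// (Messages larger than packet_capacity are out of scope for this sketch.)
fn coalesce(messages: &[&[u8]], packet_capacity: usize) -> Vec<Vec<u8>> {
    let mut packets: Vec<Vec<u8>> = Vec::new();
    let mut current: Vec<u8> = Vec::new();
    for msg in messages {
        // Start a new packet when the next message would overflow this one.
        if current.len() + msg.len() > packet_capacity && !current.is_empty() {
            packets.push(std::mem::take(&mut current));
        }
        current.extend_from_slice(msg);
    }
    if !current.is_empty() {
        packets.push(current);
    }
    packets
}

fn main() {
    let msgs: Vec<&[u8]> = vec![b"a".as_slice(); 10];
    // Per-message flushing would emit 10 packets; per-tick flushing emits 1.
    println!("{}", coalesce(&msgs, 64).len());
}
```

A flush-at-end-of-tick system would get exactly this effect: ten one-byte messages sent by systems during a tick leave as one packet instead of ten.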

Thanks for a great library btw :)

Support running servers on wasm

Hi :)

I was looking at this project and https://github.com/ErnWong/dango-tribute and it seems like @ErnWong hacked together a version that lets the server be run in the browser (in its own iframe, I think).

I think most of it is in this commit: ErnWong/dango-tribute@f1f6942

Maybe it could be added as a feature flag instead of being #[cfg(not(target_arch = "wasm32"))]?

The reasons I'm interested in this are:

  1. Reduce latency if the host is close to the other players, for instance if they're on the same local network.
  2. Lower cost, I just have to host a signaling/matchmaking server. At least as long as I'm not worried about cheating.

Packet length limit overflows native socket

When communicating a lot of data per frame with a client using a native UDP socket, the following error gets logged:

bevy_networking_turbulence: Receive Error: IoError(Wrapped(Os { code: 10040, kind: Uncategorized, message: "A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself." }))

I think this is caused by the max combined packet size being set to 32768 here:

MuxPacketPool::new(BufferPacketPool::new(SimpleBufferPool(MAX_PACKET_LEN)));

While the receiving naia socket has a fixed buffer size of 1472 (which is close to the minimum reassembly buffer size for IPv6):
https://github.com/naia-rs/naia-socket/blob/6dce8f628aee63997716f6317d9bebc4ddb18375/client/src/impls/native/packet_receiver.rs#L27

I assume this could be fixed by either exposing the option to set the max size, or setting it to the largest size supported by all built-in sockets (which I think is the mentioned 1472).
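For reference, the arithmetic behind the 1472 figure, sketched with the standard IPv4 and UDP header sizes (the constants are textbook values, not read from either crate):

```rust
// Why 1472: a typical 1500-byte Ethernet MTU minus the IPv4 (20 B) and
// UDP (8 B) headers. The plugin's 32768-byte maximum exceeds this, so a
// full-size packet overflows naia's fixed 1472-byte receive buffer;
// Windows surfaces that as error 10040 (WSAEMSGSIZE).
fn safe_udp_payload(mtu: usize) -> usize {
    const IPV4_HEADER: usize = 20;
    const UDP_HEADER: usize = 8;
    mtu - IPV4_HEADER - UDP_HEADER
}

fn main() {
    let max_packet_len = 32 * 1024; // MAX_PACKET_LEN as set in the plugin
    println!("safe: {}, plugin max: {}", safe_udp_payload(1500), max_packet_len);
    assert!(max_packet_len > safe_udp_payload(1500));
}
```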

WebRTC connections do not work on latest Chrome

Hi everyone,
When upgrading from bevy_networking_turbulence v0.3.0 to v0.3.1, I found that I was completely unable to connect from my web build to my native build, not even with the simple connect function. Using v0.3.0, everything works perfectly, however.

I tried connecting to 127.0.0.1 and to my local network IP address, with no success. I tested from the Brave browser (Chromium) to a native Linux build.

Graphical wasm example (Enhancement)

With bevy and wasm support, you can't avoid longingly looking at https://github.com/mrk-its/bevy_webgl2.

The balls example should be possible to get running in the browser with this plugin.
This requires

  • git version of bevy
  • working channels

With some fiddling, I have gotten it to build (webrtc server, wasm client):
https://github.com/ostwilkens/bevy_networking_turbulence/tree/channel_wasm_example

Server seems to work well, but the client hangs on load, freezing my chrome tab. I'm pretty sure this has to do with the channels, as it doesn't hang when the networking is commented out.

NetworkEvent::Error reports TurbulenceChannelError(IncomingTrySendError::Error(ChannelReceiverDropped))

When queuing roughly 33 or more messages to be sent to the same handle in the same Bevy frame, NetworkEvent::Error and the plugin itself report the following errors:

ERROR bevy_networking_turbulence: Channel Incoming Error: channel receiver has been dropped

NetworkEvent::Error: TurbulenceChannelError(IncomingTrySendError::Error(ChannelReceiverDropped))

After this occurs the connection appears to be essentially dead.

An attempt to circumvent this issue by combining the network message data into a single message passed to NetworkResource has also been in vain.
This makes me believe that this error is not caused by the number of messages, rather by the amount of information passed around in the connection. Once the server passes a certain amount of data to the client this issue consistently occurs.

What do I do to not run into this?

Reproduction project:
bevy_turbulence_error.zip

Use LogPlugin instead of simple_logger in examples

Hi. I can't run the simple example on Arch Linux because of a panic from simple_logger:

thread 'main' panicked at 'Could not determine the UTC offset on this system. Possible causes are that the time crate does not implement "local_offset_at" on your system, or that you are running in a multi-threaded environment and the time crate is returning "None" from "local_offset_at" to avoid unsafe behaviour.

Whatever the cause of this panic, we can replace simple_logger with the LogPlugin from Bevy and the example works like a charm (on wasm too):

.add_plugin(LogPlugin)

If you agree, I can make a pull request to fix the examples.
