
asynchronix's Issues

Add helper function to get chrono::DateTime from MonotonicTime

What do you think about adding a helper API to retrieve a chrono datetime from a MonotonicTime?
chrono::DateTime is an extremely convenient time format to work with, similar to the Python datetime API. If you think this might be a worthwhile addition, I can open a PR. I have already added something similar in the spacepackets crate for a unix timestamp wrapper object.

It would probably make sense to hide this behind a chrono feature gate similar to how it was done for serde.
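
For illustration, a rough sketch of what such a helper could look like, assuming MonotonicTime exposes second and nanosecond accessors (as_secs and subsec_nanos are assumed names here) and glossing over the TAI-to-UTC leap-second offset that a real implementation would have to account for; imports are omitted as in the other snippets on this page:

// Sketch only: the accessor names are assumptions and the TAI-to-UTC offset is ignored.
fn to_chrono_datetime(t: MonotonicTime) -> Option<chrono::DateTime<chrono::Utc>> {
    chrono::DateTime::from_timestamp(t.as_secs(), t.subsec_nanos())
}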

Recursive self-scheduling

I'd like to self-schedule an input from within that same input. Something like this:

#[derive(Default)]
struct Fetcher {}

impl Fetcher {
    fn blackbox(&mut self) -> bool { return rand::thread_rng().gen_bool(0.5); }

    async fn on_fetch(&mut self, _: (), scheduler: &Scheduler<Self>) {
        let b = self.blackbox();
        println!("fetching!\n{}", if b {"success"} else {"failure, retrying in 1s"});
        if b { scheduler.schedule_in(Duration::from_secs(5), Self::on_fetch, ()); }
        else { scheduler.schedule_in(Duration::from_secs(1), Self::on_fetch, ()); }
    }
}

impl Model for Fetcher {}

Rust doesn't allow direct recursion in async functions, so this gives a compiler error. Generally, one workaround is to return a BoxFuture (instead of an opaque Future).

// --snip--
    fn on_fetch<'a>(&'a mut self, _: (), scheduler: &'a Scheduler<Self>) -> BoxFuture<'a, ()> {
        async move {
            let b = self.blackbox();
            println!("fetching!\n{}", if b {"success"} else {"failure, retrying in 1s"});
            if b { scheduler.schedule_in(Duration::from_secs(5), Self::on_fetch, ()); } 
            else { scheduler.schedule_in(Duration::from_secs(1), Self::on_fetch, ()); }
        }.boxed()
    }
// --snip--

This messes up the code quite a bit, so I was looking for a nicer way to do it. The only alternative I found is to create a dedicated loopback output:

#[derive(Default)]
struct Fetcher {
    loopback: Output<bool>,
}

impl Fetcher {
    // --snip--
    async fn on_fetch(&mut self, _: (), scheduler: &Scheduler<Self>) {
        let b = self.blackbox();
        self.loopback.send(b).await;
    }
    async fn on_loopback(&mut self, b: bool, scheduler: &Scheduler<Self>) {
        println!("fetching!\n{}", if b {"success"} else {"failure, retrying in 1s"});
        if b { scheduler.schedule_in(Duration::from_secs(5), Self::on_fetch, ()); }
        else { scheduler.schedule_in(Duration::from_secs(1), Self::on_fetch, ()); }
    }
}
// --snip--

But this is not much better: it requires whoever assembles the simulation to connect the loopback output correctly, and it assumes that this output is not connected to anything else.

Is there a better way to do this with asynchronix?

Delayed outputs?

What is the best way to implement sending messages to an output with a delay? The docs suggest a self-scheduling mechanism, e.g.:

scheduler.schedule_event(Duration::from_secs(1), Self::push_msg, msg);

with a corresponding handler

async fn push_msg(&mut self, msg: Msg) {
    self.output.send(msg).await;
}

but I feel this might get slightly tedious in complex models. Ideally, I would like to simply write

self.output.send_in(Duration::from_secs(1), msg);

Any suggestions how I should do this?

Cancellation of event scheduled for the current time is cumbersome

The current implementation makes it possible to easily cancel a scheduled event using its SchedulerKey, provided that the event is scheduled for a future time. When the event is scheduled for the current time but has not been processed yet, however, the user must resort to cumbersome workarounds to discard the event when it is processed (see the epoch-based workaround in the current espresso machine example).

An implementation that allows the event to be cancelled at any time before it is processed would bring a sizable quality-of-life improvement.

Lack of support for real-time execution

There is currently no support for real-time execution, which is necessary in a number of scenarios such as Hardware-in-the-Loop simulation.

Ideally, custom clocks should also be supported to allow synchronization from time sources other than the system, or for more exotic needs such as clocks with scaled real-time.

Cycle detection

Hello, thank you for developing this very interesting project! I am currently trying to evaluate whether it can be used for a set of simulators I am designing. These simulators could eventually run many interconnected constituent models developed by a community of engineers, so one concern I have is preventing subtle model-wiring issues like accidental cycles.

In particular, say there are three models, A, B, and C. What if an engineer accidentally wires them up so that, all within the same timestamp, A sends a message to B, then B to C, and then C to A? I assume this would simply spin indefinitely, right? The obvious way to avoid this is to delay one of the messages, but I worry this could be a common "gotcha" in complex simulations, and ideally I'd like a way to error out early and alert the engineer to the problem. Are there any elegant ways to approach this?

Thanks so much!

Missing functionality: scheduling periodic events

It is possible today to schedule periodic events by having a method re-schedule itself at a later time.

Because periodic scheduling is a very common pattern in simulation, however, it would be good to be able to do this with less boilerplate, by issuing a single scheduling request that takes the period as an argument.
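
For reference, a minimal sketch of the current self-rescheduling boilerplate, assuming a synchronous input function (which sidesteps the async-recursion issue reported above) and the schedule_event signature used elsewhere on this page; imports are omitted as in the other snippets:

#[derive(Default)]
struct Heartbeat {}

impl Heartbeat {
    fn on_tick(&mut self, _: (), scheduler: &Scheduler<Self>) {
        println!("tick");
        // Re-schedule this same input one period from now. A single periodic
        // scheduling request taking the period as an argument would remove
        // this per-tick boilerplate.
        scheduler
            .schedule_event(Duration::from_secs(1), Self::on_tick, ())
            .unwrap();
    }
}

impl Model for Heartbeat {}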

Implement InputFn for a higher amount of input arguments

I had an input function like this:

pub async fn switch_device(&mut self, switch: PcduSwitch, target_state: SwitchStateBinary) {
   ...
}

and had to rewrite it as:

pub async fn switch_device(&mut self, switch_and_target: (PcduSwitch, SwitchStateBinary)) {
   ...
}

Did I do something wrong, or is the number of additional arguments constrained to 1?
It would be nice if I could instead use the initial version. This would probably be possible by implementing InputFn for tuples of up to a certain number of arguments. I saw a similar thing done inside the bevy game engine to allow magically passing functions with a higher number of arguments to the engine (a sketch of the pattern follows the links below):

  1. Implementation macro for all functions which can be system functions (analogous to the input functions, see this specific line): https://github.com/bevyengine/bevy/blob/c75d14586999dc1ef1ff6099adbc1f0abdb46edf/crates/bevy_ecs/src/system/function_system.rs#L639
  2. Helper macro that invokes the implementation macro for each tuple arity: https://github.com/bevyengine/bevy/blob/c75d14586999dc1ef1ff6099adbc1f0abdb46edf/crates/bevy_utils/macros/src/lib.rs#L109
  3. Macro call that implements this for 0 to 16 arguments, if I understand correctly: https://github.com/bevyengine/bevy/blob/c75d14586999dc1ef1ff6099adbc1f0abdb46edf/crates/bevy_ecs/src/system/function_system.rs#L697
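
For illustration, here is a minimal, self-contained sketch of that tuple-macro pattern. The Handler trait and all names below are hypothetical stand-ins rather than asynchronix's actual InputFn machinery; the point is only to show how a declarative macro can implement one trait for handler functions of every supported arity:

trait Handler<Args> {
    fn call(&mut self, args: Args);
}

macro_rules! impl_handler {
    ($($arg:ident),*) => {
        impl<F, $($arg),*> Handler<($($arg,)*)> for F
        where
            F: FnMut($($arg),*),
        {
            #[allow(non_snake_case)]
            fn call(&mut self, ($($arg,)*): ($($arg,)*)) {
                self($($arg),*)
            }
        }
    };
}

impl_handler!(A);        // one argument
impl_handler!(A, B);     // two arguments
impl_handler!(A, B, C);  // ...and so on up to the chosen maximum arity

fn main() {
    let mut switch = |name: &str, on: bool| println!("{name} -> {on}");
    // The arguments travel as a single tuple, but the handler keeps its
    // natural two-argument signature.
    Handler::call(&mut switch, ("pcdu-switch-3", true));
}

Applied to asynchronix, the same trick would let InputFn accept methods with several arguments while the arguments are packed behind a single generic tuple parameter.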

WASI support

Are there any plans to provide wasm32-wasi target support, presumably with a single-threaded simulation fallback? This would increase portability (for developers) and security (for clients) for non-performance-critical testing purposes.

Currently, the espresso machine example compiles successfully with

cargo build --package asynchronix --target wasm32-wasi --example espresso_machine

but running the generated espresso_machine.wasm on a WASM VM fails at the point where the executor tries to spawn threads. As far as I know, all uses of std APIs that WASI does not support (such as threading) would need to be made target-dependent through conditional compilation.
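
As a purely illustrative sketch of that kind of fallback (this is not asynchronix code, just the general cfg pattern), thread spawning could be gated on the target family, with a sequential fallback for wasm/WASI:

#[cfg(not(target_family = "wasm"))]
fn run_tasks(tasks: Vec<Box<dyn FnOnce() + Send>>) {
    // Native targets: run each task on its own OS thread.
    let handles: Vec<_> = tasks
        .into_iter()
        .map(|task| std::thread::spawn(task))
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}

#[cfg(target_family = "wasm")]
fn run_tasks(tasks: Vec<Box<dyn FnOnce() + Send>>) {
    // WASI without the threads proposal cannot spawn OS threads, so fall back
    // to running everything sequentially on the current thread.
    for task in tasks {
        task();
    }
}

fn main() {
    let tasks: Vec<Box<dyn FnOnce() + Send>> = vec![
        Box::new(|| println!("task 1")),
        Box::new(|| println!("task 2")),
    ];
    run_tasks(tasks);
}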

I am not sure how much asynchronix depends on multi-threading and unsafe code, and hence how hard it would be to introduce fallbacks for WASI.

How to cleanly abort a simulation?

I wonder whether it'd be possible to modify InputFn to allow the following code to compile:

impl DelayedMultiplier {
    pub fn input(&mut self, value: f64, scheduler: &Scheduler<Self>) -> anyhow::Result<()> {
        scheduler
            .schedule_event(Duration::from_secs(1), Self::send, 2.0 * value)?;
        Ok(())
    }
    async fn send(&mut self, value: f64) {
        self.output.send(value).await;
    }
}

Or maybe I'm just misunderstanding the API?

Type erased address type

Is there some way to get type-erased variants of the address type and work with those when sending or scheduling events?

I have a struct like this

// The simulation controller processes requests and drives the simulation.
pub struct SimController {
    pub sys_clock: SystemClock,
    pub request_receiver: mpsc::Receiver<SimRequest>,
    pub simulation: Simulation,
    pub mgm_addr: Address<MagnetometerModel>,
    pub pcdu_addr: Address<PcduModel>,
}

where I need to keep an address handle for each distinct model type in order to send or schedule events to these devices. If I scale the simulation up to a large number of devices, the number of fields in this struct, and the effort of keeping the SimController construction up to date, might get unwieldy. A type-erased variant would allow something like a HashMap<ModelId, Address> to be used instead. But there are probably technical limitations due to the way generics are used here?
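
For what it's worth, here is a self-contained sketch of one possible workaround based on type erasure through Any; Address below is only a stand-in for the real address type, and the concrete model type still has to be known at the call site in order to downcast before sending or scheduling anything:

use std::any::Any;
use std::collections::HashMap;
use std::marker::PhantomData;

// Stand-ins for the real types, for illustration only.
struct Address<M>(PhantomData<M>);
struct MagnetometerModel;
struct PcduModel;

// Type-erased wrapper around Address<M>.
struct ErasedAddress(Box<dyn Any>);

impl ErasedAddress {
    fn new<M: 'static>(addr: Address<M>) -> Self {
        ErasedAddress(Box::new(addr))
    }
    // Recover the typed address; the caller must know the model type.
    fn downcast<M: 'static>(&self) -> Option<&Address<M>> {
        self.0.downcast_ref::<Address<M>>()
    }
}

fn main() {
    let mut addresses: HashMap<&'static str, ErasedAddress> = HashMap::new();
    addresses.insert("mgm", ErasedAddress::new(Address::<MagnetometerModel>(PhantomData)));
    addresses.insert("pcdu", ErasedAddress::new(Address::<PcduModel>(PhantomData)));

    let mgm_addr = addresses
        .get("mgm")
        .and_then(|entry| entry.downcast::<MagnetometerModel>());
    assert!(mgm_addr.is_some());
}

The obvious downside is that the type information only moves from the struct definition to the call site, so the coupling is not removed entirely.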

Add optional serde feature

I think optional serde support for types like MonotonicTime would be useful. There are probably other types that could benefit from this as well.
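
For reference, the usual way such an optional feature is wired up is a cfg_attr-gated derive combined with an optional serde dependency; the snippet below is an illustrative sketch rather than the actual asynchronix code, and the fields shown are assumptions:

// Enabled only when the crate is built with --features serde, assuming
// Cargo.toml declares serde = { version = "1", optional = true, features = ["derive"] }.
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub struct MonotonicTime {
    secs: i64,  // illustrative fields only
    nanos: u32,
}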

Providing an example with network integration and multithreading

Hi!

I am not sure where to best ask this question, I hope this is the right place :)
Thanks for providing this excellent framework. I am working on a mini-simulator for an example satellite on-board software which can be run on a host system. The goal is to provide an environment which more closely resembles a real satellite system directly on a host computer.

My goal is still to have the OBSW application and the simulator as two distinct applications which communicate through a UDP interface. So far, I have developed a basic model containing a few example devices. The simulator is driven in a dedicated thread which calls simu.step continuously, while the UDP server handling is done in a separate thread. One problem I now have: how do I model deferred reply handling in a system like this? There will probably be some devices where I need to drive the simulator, wait for a certain time, and then send some output back via the UDP server.

I already figured out that I probably have to separate the request/reply handling from the UDP server completely by using messaging, which would probably be a good idea from an architectural point of view anyway. Considering that the reply handling still has to be as fast as possible to simulate the devices faithfully, I was thinking of the following solution to leverage the asynchronix features (a rough sketch follows the list):

  1. A request handler which explicitly handles all requests sent from the UDP server and which is scheduled separately, so that incoming UDP requests are handled as fast as possible.
  2. This handler then drives the simulation model (or simply asks for a reply, depending on the request) by sending the respective events.
  3. The models themselves send replies directly to a dedicated UDP TM handler. Some models might send the reply immediately, others can schedule it for a later time, depending on the requirements.
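
A rough, self-contained sketch of that thread-and-channel layout (all types and names below are illustrative and not taken from the sat-rs repository); the simulation loop is decoupled from the UDP server through plain mpsc channels, with the actual simulation stepping reduced to a placeholder comment:

use std::sync::mpsc;
use std::thread;

struct SimRequest(String); // placeholder request payload
struct SimReply(String);   // placeholder reply payload

fn main() {
    let (request_tx, request_rx) = mpsc::channel::<SimRequest>();
    let (reply_tx, reply_rx) = mpsc::channel::<SimReply>();

    // Simulation thread: consume requests, drive the simulation, emit replies.
    // A real implementation would map each request to model events and call
    // simu.step() here instead of answering directly.
    let sim_thread = thread::spawn(move || {
        while let Ok(request) = request_rx.recv() {
            reply_tx
                .send(SimReply(format!("handled {}", request.0)))
                .unwrap();
        }
    });

    // UDP server stand-in: forward an incoming request, then hang up.
    request_tx.send(SimRequest("switch_device".into())).unwrap();
    drop(request_tx);

    sim_thread.join().unwrap();
    for reply in reply_rx {
        println!("reply to send back over UDP: {}", reply.0);
    }
}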

What do you think about the general approach? I think an example application showcasing some sort of network integration and multi-threading might be useful in general. If this approach works well and you think this is a good idea, I could also try to provide an example application.

Repository where this is developed: https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/lets-get-this-minsim-started/satrs-minisim/src/main.rs

Ensuring determinism with periodics

I'm wondering about determinism in the following simplified situation: say we have two models, A and B, each with an execute method that we schedule to run periodically with the same period and no phase offset. Thus, in each cycle, A::execute and B::execute will run at the same simulated time. Say further that A's execute method immediately sends an event to B which modifies B's state, affecting what B's execute method will do. As a result, without a way of specifying which periodic executes first in each cycle, there appears to be a possible race condition where the outcome of the simulation isn't fully determined or predictable.

Am I understanding this right? Are there ordering guarantees for periodics? Is there a way to ensure determinism, other than something hacky like adding tiny phase offsets? Would appreciate any context that can be provided. Thanks so much!

New Clock abstraction is not Sendable

I am not sure whether this is intended, but the newly introduced Clock abstraction makes the Simulation struct unsendable.
This is because the Clock trait does not have a Send bound, and the Clock trait object field does not include a Send bound either.
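
A minimal stand-alone reproduction of the general problem (the trait and struct below are illustrative stand-ins, not the actual asynchronix types):

#![allow(dead_code)]

trait Clock {
    fn now_millis(&self) -> u64;
}

struct Simulation {
    // Without a Send bound on the trait object (or a Send supertrait on
    // Clock), this field makes Simulation itself !Send.
    clock: Box<dyn Clock>,
}

fn assert_send<T: Send>() {}

fn main() {
    // Fails to compile as long as the field is Box<dyn Clock>:
    // assert_send::<Simulation>();
    //
    // Changing the field to Box<dyn Clock + Send>, or adding Send as a
    // supertrait of Clock, makes the assertion pass.
}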
