Asynchronix

Asynchronix is a developer-friendly, highly optimized discrete-event simulation framework written in Rust. It is meant to scale from small, simple simulations to very large simulation benches with complex time-driven state machines.

Overview

Asynchronix is a simulator that leverages asynchronous programming to transparently and efficiently auto-parallelize simulations by means of a custom multi-threaded executor.

It promotes a component-oriented architecture that is familiar to system engineers and closely resembles flow-based programming: a model is essentially an isolated entity with a fixed set of typed inputs and outputs, communicating with other models through message passing via connections defined during bench assembly.

Although the main impetus for its development was the need for simulators able to handle large cyberphysical systems, Asynchronix is a general-purpose discrete-event simulator expected to be suitable for a wide range of simulation activities. It draws from experience on spacecraft real-time simulators but differs from existing tools in the space industry in a number of respects, including:

  1. performance: by taking advantage of Rust's excellent support for multithreading and asynchronous programming, simulation models can run efficiently in parallel, with all required synchronization handled transparently by the simulator;
  2. developer-friendliness: an ergonomic API and Rust's support for algebraic types make it ideal for the "cyber" part in cyberphysical, i.e. for modelling digital devices, even those with very complex state machines;
  3. open-source: last but not least, Asynchronix is distributed under the very permissive MIT and Apache 2 licenses, with the explicit intent to foster an ecosystem where models can be easily exchanged without reliance on proprietary APIs.

Documentation

The API documentation is relatively exhaustive and includes a practical overview which should provide all necessary information to get started.

More fleshed-out examples can also be found in the dedicated examples directory.

Usage

Add this to your Cargo.toml:

[dependencies]
asynchronix = "0.2.2"

Example

// A system made of 2 identical models.
// Each model is a 2× multiplier with an output delayed by 1s.
//
//              ┌──────────────┐      ┌──────────────┐
//              │              │      │              │
// Input ●─────▶│ multiplier 1 ├─────▶│ multiplier 2 ├─────▶ Output
//              │              │      │              │
//              └──────────────┘      └──────────────┘
use asynchronix::model::{Model, Output};
use asynchronix::simulation::{Mailbox, SimInit};
use asynchronix::time::{MonotonicTime, Scheduler};
use std::time::Duration;

// A model that doubles its input and forwards it with a 1s delay.
#[derive(Default)]
pub struct DelayedMultiplier {
    pub output: Output<f64>,
}
impl DelayedMultiplier {
    pub fn input(&mut self, value: f64, scheduler: &Scheduler<Self>) {
        scheduler
            .schedule_event(Duration::from_secs(1), Self::send, 2.0 * value)
            .unwrap();
    }
    async fn send(&mut self, value: f64) {
        self.output.send(value).await;
    }
}
impl Model for DelayedMultiplier {}

// Instantiate models and their mailboxes.
let mut multiplier1 = DelayedMultiplier::default();
let mut multiplier2 = DelayedMultiplier::default();
let multiplier1_mbox = Mailbox::new();
let multiplier2_mbox = Mailbox::new();

// Connect the output of `multiplier1` to the input of `multiplier2`.
multiplier1
    .output
    .connect(DelayedMultiplier::input, &multiplier2_mbox);

// Keep handles to the main input and output.
let mut output_slot = multiplier2.output.connect_slot().0;
let input_address = multiplier1_mbox.address();

// Instantiate the simulator.
let t0 = MonotonicTime::EPOCH; // arbitrary start time
let mut simu = SimInit::new()
    .add_model(multiplier1, multiplier1_mbox)
    .add_model(multiplier2, multiplier2_mbox)
    .init(t0);

// Send a value to the first multiplier.
simu.send_event(DelayedMultiplier::input, 3.5, &input_address);

// Advance time to the next event.
simu.step();
assert_eq!(simu.time(), t0 + Duration::from_secs(1));
assert_eq!(output_slot.take(), None);

// Advance time to the next event.
simu.step();
assert_eq!(simu.time(), t0 + Duration::from_secs(2));
assert_eq!(output_slot.take(), Some(14.0));

Implementation notes

Under the hood, Asynchronix is based on an asynchronous implementation of the actor model, where each simulation model is an actor. The messages actually exchanged between models are async closures that capture the event's or request's value and take the model as their &mut self argument. The mailbox associated with a model, to which these closures are forwarded, is the receiving end of an async, bounded MPSC channel.

Computations proceed at discrete times. When executed, models can request the scheduler to send an event (or rather, a closure capturing such an event) at a certain simulation time. Whenever all computations for the current time complete, the scheduler selects the nearest future time at which one or several events are scheduled (the next event increment), thus triggering another set of computations.
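The next-event time advancement described above can be illustrated with a minimal, self-contained sketch (a toy model, not Asynchronix's actual internals): events sit in a priority queue keyed on their scheduled time, and each step jumps the clock to the nearest pending time.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// A toy event queue illustrating next-event time advancement.
struct EventQueue {
    time: u64,                              // current simulation time
    queue: BinaryHeap<Reverse<(u64, u64)>>, // (scheduled time, event id), min-heap
    next_id: u64,
}

impl EventQueue {
    fn new() -> Self {
        Self { time: 0, queue: BinaryHeap::new(), next_id: 0 }
    }

    fn schedule(&mut self, delay: u64) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.queue.push(Reverse((self.time + delay, id)));
        id
    }

    // Advance to the nearest scheduled time and return all event ids due then.
    fn step(&mut self) -> Vec<u64> {
        let mut batch = Vec::new();
        if let Some(&Reverse((t, _))) = self.queue.peek() {
            self.time = t;
            while let Some(&Reverse((due, id))) = self.queue.peek() {
                if due != t {
                    break;
                }
                self.queue.pop();
                batch.push(id);
            }
        }
        batch
    }
}

fn main() {
    let mut q = EventQueue::new();
    q.schedule(5); // id 0
    q.schedule(1); // id 1
    q.schedule(5); // id 2
    assert_eq!(q.step(), vec![1]); // time jumps to 1
    assert_eq!(q.step(), vec![0, 2]); // both events due at time 5 fire together
    assert_eq!(q.time, 5);
}
```

In the real simulator, each "event" is an async closure delivered to a model's mailbox rather than a bare id, and a batch due at the same time may execute in parallel.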

This computational process makes it difficult to use general-purpose asynchronous runtimes such as Tokio, because the end of a set of computations is technically a deadlock: the computation completes when all models have nothing left to do and are blocked on an empty mailbox. Also, instead of managing a conventional reactor, the runtime manages a priority queue containing the posted events. For these reasons, Asynchronix relies on a fully custom runtime.

Even though the runtime was largely influenced by Tokio, it features additional optimizations that make it faster than any other multi-threaded Rust executor on the typically message-passing-heavy workloads seen in discrete-event simulation (see benchmark). Asynchronix also improves over the state of the art with a very fast custom MPSC channel, whose performance has been demonstrated through Tachyonix, a general-purpose offshoot of this channel.

License

This software is licensed under the Apache License, Version 2.0 or the MIT license, at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Contributors

jauhien, robamu, sbarral

Issues

How to cleanly abort a simulation?

I wonder whether it'd be possible to modify InputFn to allow the following code to compile:

impl DelayedMultiplier {
    pub fn input(&mut self, value: f64, scheduler: &Scheduler<Self>) -> anyhow::Result<()> {
        scheduler
            .schedule_event(Duration::from_secs(1), Self::send, 2.0 * value)?;
        Ok(())
    }
    async fn send(&mut self, value: f64) {
        self.output.send(value).await;
    }
}

Or maybe I'm just misunderstanding the API?

recursive self-scheduling

I'd like to self-schedule an input from within that same input. Something like this:

#[derive(Default)]
struct Fetcher {}

impl Fetcher {
    fn blackbox(&mut self) -> bool { return rand::thread_rng().gen_bool(0.5); }

    async fn on_fetch(&mut self, _: (), scheduler: &Scheduler<Self>) {
        let b = self.blackbox();
        println!("fetching!\n{}", if b {"success"} else {"failure, retrying in 1s"});
        if b { scheduler.schedule_in(Duration::from_secs(5), Self::on_fetch, ()); }
        else { scheduler.schedule_in(Duration::from_secs(1), Self::on_fetch, ()); }
    }
}

impl Model for Fetcher {}

Rust doesn't allow direct recursion in async functions, so this gives a compiler error. Generally, one workaround is to return a BoxFuture (instead of a plain Future).

// --snip--
    fn on_fetch<'a>(&'a mut self, _: (), scheduler: &'a Scheduler<Self>) -> BoxFuture<()> {
        async move {
            let b = self.blackbox();
            println!("fetching!\n{}", if b {"success"} else {"failure, retrying in 1s"});
            if b { scheduler.schedule_in(Duration::from_secs(5), Self::on_fetch, ()); } 
            else { scheduler.schedule_in(Duration::from_secs(1), Self::on_fetch, ()); }
        }.boxed()
    }
// --snip--

This messes up the code quite a bit, so I was looking for a nicer way to do this. The only alternative I found is to create a dedicated loopback output:

#[derive(Default)]
struct Fetcher {
    loopback: Output<bool>,
}

impl Fetcher {
    // --snip--
    async fn on_fetch(&mut self, _: (), scheduler: &Scheduler<Self>) {
        let b = self.blackbox();
        self.loopback.send(b).await;
    }
    async fn on_loopback(&mut self, b: bool, scheduler: &Scheduler<Self>) {
        println!("fetching!\n{}", if b {"success"} else {"failure, retrying in 1s"});
        if b { scheduler.schedule_in(Duration::from_secs(5), Self::on_fetch, ()); }
        else { scheduler.schedule_in(Duration::from_secs(1), Self::on_fetch, ()); }
    }
}
// --snip--

But this is not much better: it requires the bench to connect the loopback output correctly and assumes that the output is not connected to anything else.

Is there a better way to do this with asynchronix?

Type erased address type

Is there some way to get type erased variants of the address type and work with those when sending or scheduling events?

I have a struct like this

// The simulation controller processes requests and drives the simulation.
pub struct SimController {
    pub sys_clock: SystemClock,
    pub request_receiver: mpsc::Receiver<SimRequest>,
    pub simulation: Simulation,
    pub mgm_addr: Address<MagnetometerModel>,
    pub pcdu_addr: Address<PcduModel>,
}

where I need to keep an address handle for each distinct model type in order to send or schedule events to these devices. If I scale my simulation up to a large number of devices, the number of fields in this struct, and the effort of keeping the SimController construction up to date, might get unwieldy. A type-erased variant would allow something like HashMap<ModelId, Address> to be used. But there are probably technical limitations due to the way generics are used here?
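One workaround along these lines (a hypothetical sketch: the `Address` and `ModelId` types below mirror the struct above, not the actual asynchronix API) is to erase the address type behind `Box<dyn Any>` and downcast back to the concrete type at the call site:

```rust
use std::any::Any;
use std::collections::HashMap;
use std::marker::PhantomData;

// Stand-in for a typed address handle (hypothetical, for illustration only).
struct Address<M> {
    id: u32,
    _marker: PhantomData<M>,
}

struct MagnetometerModel;
struct PcduModel;

type ModelId = &'static str;

fn main() {
    // Store type-erased addresses in a single map instead of one field per model.
    let mut addrs: HashMap<ModelId, Box<dyn Any>> = HashMap::new();
    addrs.insert("mgm", Box::new(Address::<MagnetometerModel> { id: 1, _marker: PhantomData }));
    addrs.insert("pcdu", Box::new(Address::<PcduModel> { id: 2, _marker: PhantomData }));

    // Recover the typed address before sending or scheduling an event.
    let mgm = addrs["mgm"]
        .downcast_ref::<Address<MagnetometerModel>>()
        .expect("wrong model type for this id");
    assert_eq!(mgm.id, 1);

    // Downcasting to the wrong type simply yields `None`.
    assert!(addrs["mgm"].downcast_ref::<Address<PcduModel>>().is_none());
}
```

The catch is that the caller must still name the concrete model type at the downcast, so type errors move from compile time to run time.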

Missing functionality: scheduling periodic events

It is possible today to schedule periodic events by having a method re-schedule itself at a later time.

Because periodic scheduling is a very common pattern in simulation, however, it would be good to have the possibility to do this with less boilerplate by issuing only one scheduling request with the period as argument.

Delayed outputs?

What is the best way to implement sending messages to an output with a delay? The docs suggest a self-scheduling mechanism, e.g.:

scheduler.schedule_event(Duration::from_secs(1), Self::push_msg, msg);

with a corresponding handler

async fn push_msg(&mut self, msg: Msg) {
    self.output.send(msg).await;
}

but I feel this might get slightly tedious in complex models. Ideally, I would like to simply write

self.output.send_in(Duration::from_secs(1), msg);

Any suggestions how I should do this?

Lack of support for real-time execution

There is currently no support for real-time execution, which is necessary in a number of scenarios such as Hardware-in-the-Loop simulation.

Ideally, custom clocks should also be supported to allow synchronization from time sources other than the system, or for more exotic needs such as clocks with scaled real-time.

Cancellation of event scheduled for the current time is cumbersome

The current implementation makes it possible to easily cancel a scheduled event using its SchedulerKey, provided that this event is scheduled for a future time. When the event is scheduled for the current time but was not processed yet, however, the user must resort to cumbersome workarounds to discard the event when it is processed (see the epoch-based workaround in the current espresso machine example).

An implementation that allows the event to be cancelled at any time before it is processed would bring a sizable quality-of-life improvement.
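The epoch-based workaround mentioned above boils down to the following self-contained sketch (an illustration of the pattern, not the espresso machine example's actual code): each scheduled event captures the model's current epoch, and bumping the epoch before the event fires effectively cancels it, even when it is scheduled for the current time.

```rust
// Epoch-based event cancellation: stale events are discarded on arrival.
struct EspressoModel {
    epoch: u64, // bumped to invalidate previously scheduled events
    fired: bool,
}

impl EspressoModel {
    // On scheduling, the current epoch is captured in the event's payload.
    fn schedule(&self) -> u64 {
        self.epoch
    }

    // Cancelling means invalidating everything scheduled so far.
    fn cancel(&mut self) {
        self.epoch += 1;
    }

    // The event handler compares the captured epoch against the current one.
    fn on_event(&mut self, captured_epoch: u64) {
        if captured_epoch != self.epoch {
            return; // stale event: discard silently
        }
        self.fired = true;
    }
}

fn main() {
    let mut m = EspressoModel { epoch: 0, fired: false };

    let token = m.schedule();
    m.cancel(); // cancel before the event is processed
    m.on_event(token); // the stale event is discarded
    assert!(!m.fired);

    let token = m.schedule();
    m.on_event(token); // a non-cancelled event goes through
    assert!(m.fired);
}
```

The boilerplate cost is visible even in this toy version: every cancellable handler needs the epoch check, which is what a first-class cancellation API would remove.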

WASI support

Are there any plans to provide wasm32-wasi target support, presumably with single threaded simulation fallback? This would increase portability (for a developer) and security (for a client) for non-performance critical testing purposes.

Currently, the espresso machine example compiles successfully with

cargo build --package asynchronix --target wasm32-wasi --example espresso_machine

but running the generated espresso_machine.wasm on a WASM VM fails when the executor tries to spawn threads. As far as I know, all uses of std APIs not supported by WASI (such as threading) would need to be made target-dependent with macros.

I am not sure how much asynchronix depends on multi-threading and unsafe code, and hence how hard it would be to introduce fallbacks for WASI.

Add helper function to get chrono::DateTime from MonotonicTime

What do you think about adding a helper API to retrieve a chrono::DateTime from a MonotonicTime?
The chrono::DateTime type is extremely convenient to work with, similar to the Python datetime API. If you think this might be a worthwhile addition, I can open a PR. I have already added something similar in the spacepackets crate for a unix timestamp wrapper object.

It would probably make sense to hide this behind a chrono feature gate similar to how it was done for serde.
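For reference, the arithmetic such a helper would perform can be sketched without chrono (chrono implements the equivalent internally): split seconds since 1970-01-01 into a calendar date via the well-known civil-from-days algorithm, plus a time of day. This is an illustration only; a real helper would also have to account for MonotonicTime being TAI-based, i.e. apply the TAI-UTC offset.

```rust
// Convert whole days since 1970-01-01 into (year, month, day)
// using Howard Hinnant's "civil_from_days" algorithm.
fn civil_from_days(days: i64) -> (i64, u32, u32) {
    let z = days + 719_468;
    let era = if z >= 0 { z } else { z - 146_096 } / 146_097;
    let doe = z - era * 146_097; // day of era, [0, 146096]
    let yoe = (doe - doe / 1_460 + doe / 36_524 - doe / 146_096) / 365;
    let doy = doe - (365 * yoe + yoe / 4 - yoe / 100); // day of year, [0, 365]
    let mp = (5 * doy + 2) / 153; // month index with March = 0
    let d = (doy - (153 * mp + 2) / 5 + 1) as u32;
    let m = (if mp < 10 { mp + 3 } else { mp - 9 }) as u32;
    let y = yoe + era * 400 + if m <= 2 { 1 } else { 0 };
    (y, m, d)
}

// Split a second count since the epoch into calendar date and time of day.
fn datetime_parts(secs_since_epoch: i64) -> (i64, u32, u32, u32, u32, u32) {
    let days = secs_since_epoch.div_euclid(86_400);
    let rem = secs_since_epoch.rem_euclid(86_400) as u32;
    let (y, m, d) = civil_from_days(days);
    (y, m, d, rem / 3_600, (rem % 3_600) / 60, rem % 60)
}

fn main() {
    assert_eq!(datetime_parts(0), (1970, 1, 1, 0, 0, 0));
    // 1_000_000_000 s after the epoch is 2001-09-09T01:46:40.
    assert_eq!(datetime_parts(1_000_000_000), (2001, 9, 9, 1, 46, 40));
}
```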

Add optional serde feature

I think optional serde support for types like the MonotonicTime would be useful. There are probably other types which could profit from this as well.

New Clock abstraction is not Sendable

I am not sure whether this is intended, but the newly introduced Clock abstraction makes the Simulation struct unsendable.
This is because neither the Clock trait nor the Clock trait object field carries a Send bound.
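A minimal, self-contained demonstration of the mechanism (stand-in names, not the actual asynchronix types): a boxed trait object is only Send if the trait object type itself carries the Send bound.

```rust
// A stand-in for the Clock trait, with no Send supertrait.
trait Clock {}

#[allow(dead_code)]
struct Simulation {
    // Not Send: `dyn Clock` may hide a non-Send concrete type.
    clock: Box<dyn Clock>,
}

#[allow(dead_code)]
struct SendableSimulation {
    // Send, because the trait object type promises Send.
    clock: Box<dyn Clock + Send>,
}

// Compile-time probe: only instantiable for Send types.
fn assert_send<T: Send>() {}

fn main() {
    assert_send::<SendableSimulation>();
    // assert_send::<Simulation>(); // error: `dyn Clock` cannot be sent between threads
}
```

The usual fixes are either `trait Clock: Send {}` or storing `Box<dyn Clock + Send>` in the field.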

Providing an example with network integration and multithreading

Hi!

I am not sure where to best ask this question, I hope this is the right place :)
Thanks for providing this excellent framework. I am working on a mini-simulator for an example satellite on-board software which can be run on a host system. The goal is to provide an environment which more closely resembles a real satellite system directly on a host computer.

My goal is still to have the OBSW application and the simulator as two distinct applications which communicate through a UDP interface. So far, I have developed a basic model containing a few example devices. The simulator is driven in a dedicated thread which calls simu.step in a loop, while the UDP server handling is done in a separate thread. One problem I now have: how do I model deferred reply handling in a system like this? There will probably be some devices where I need to drive the simulator, wait for a certain time, and then send some output back via the UDP server.

I already figured out that I probably have to separate the request/reply handling from the UDP server completely by using messaging, which would also be a good idea from an architectural point of view. Considering that the reply handling still has to be as fast as possible to simulate the devices faithfully, I was thinking of the following solution to leverage the asynchronix features:

  1. Provide a reply handler which explicitly handles all requests sent from the UDP server and which is scheduled separately, so that incoming UDP requests are handled as fast as possible.
  2. This reply handler then drives the simulator model (or simply asks for a reply, depending on the request) by sending the respective events.
  3. The models themselves send replies directly to a dedicated UDP TM handler. Some models might send the reply ASAP; others can schedule the reply in the future, depending on the requirements.

What do you think about the general approach? I think an example application showcasing some sort of network integration and multi-threading might be useful in general. If this approach works well and you think this is a good idea, I could also try to provide an example application.

Repository where this is developed: https://egit.irs.uni-stuttgart.de/rust/sat-rs/src/branch/lets-get-this-minsim-started/satrs-minisim/src/main.rs

Implement InputFn for a higher amount of input arguments

I had an input function like this:

pub async fn switch_device(&mut self, switch: PcduSwitch, target_state: SwitchStateBinary) {
   ...
}

and had to rewrite it as:

pub async fn switch_device(&mut self, switch_and_target: (PcduSwitch, SwitchStateBinary)) {
   ...
}

Did I do something wrong, or is the number of additional arguments limited to one?
It would be nice if I could instead use the initial version. This would probably be possible by implementing InputFn for a higher number of arguments. I saw a similar thing done in the bevy game engine to allow magically passing functions with a higher number of arguments to the engine:

  1. Implementation macro for all functions which can be system functions (analogous to the input functions; see this specific line): https://github.com/bevyengine/bevy/blob/c75d14586999dc1ef1ff6099adbc1f0abdb46edf/crates/bevy_ecs/src/system/function_system.rs#L639
  2. https://github.com/bevyengine/bevy/blob/c75d14586999dc1ef1ff6099adbc1f0abdb46edf/crates/bevy_utils/macros/src/lib.rs#L109 : Helper macro to call this macro for tuple variants.
  3. https://github.com/bevyengine/bevy/blob/c75d14586999dc1ef1ff6099adbc1f0abdb46edf/crates/bevy_ecs/src/system/function_system.rs#L697 : Macro call to implement this for 0 to 16 arguments, if I understand correctly.
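The bevy trick referenced above boils down to a macro that implements a handler trait once per function arity. A self-contained sketch of the pattern (all names here — `Handler`, `dispatch`, the `Pcdu` model — are hypothetical illustrations, not the actual InputFn machinery):

```rust
// A trait abstracting over input functions of any arity, keyed on a tuple
// of argument types.
trait Handler<M, Args> {
    fn call(self, model: &mut M, args: Args);
}

// Implement Handler for functions taking `&mut M` plus N extra arguments.
macro_rules! impl_handler {
    ($($arg:ident),*) => {
        impl<M, F, $($arg),*> Handler<M, ($($arg,)*)> for F
        where
            F: FnOnce(&mut M, $($arg),*),
        {
            #[allow(non_snake_case)]
            fn call(self, model: &mut M, ($($arg,)*): ($($arg,)*)) {
                self(model, $($arg),*)
            }
        }
    };
}

// One invocation per supported arity (bevy generates these up to 16).
impl_handler!();
impl_handler!(A);
impl_handler!(A, B);
impl_handler!(A, B, C);

// Generic caller: accepts any function whose arity has an impl above.
fn dispatch<M, Args, F: Handler<M, Args>>(f: F, model: &mut M, args: Args) {
    f.call(model, args)
}

struct Pcdu {
    last: (u32, bool),
}

// A two-argument input function, no manual tupling required.
fn switch_device(model: &mut Pcdu, switch: u32, on: bool) {
    model.last = (switch, on);
}

fn main() {
    let mut pcdu = Pcdu { last: (0, false) };
    dispatch(switch_device, &mut pcdu, (3, true));
    assert_eq!(pcdu.last, (3, true));
}
```

Whether this composes with InputFn's other bounds (scheduler argument, async methods) is a separate question, but the arity problem itself is solvable this way.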
