
bevy_app_compute's Introduction

Bevy App Compute

License: MIT/Apache 2.0

Dispatch and run compute shaders in Bevy, from the App World.

Getting Started

Add the following to your Cargo.toml:

[dependencies]
bevy_app_compute = "0.10.3"

Usage

Setup

Declare your shaders as structs implementing ComputeShader. The shader() function should point to your shader source code. You also need to derive TypeUuid and assign a unique UUID:

#[derive(TypeUuid)]
#[uuid = "2545ae14-a9bc-4f03-9ea4-4eb43d1075a7"]
struct SimpleShader;

impl ComputeShader for SimpleShader {
    fn shader() -> ShaderRef {
        "shaders/simple.wgsl".into()
    }
}

Next, create a struct implementing ComputeWorker to define the bindings and the logic of your worker:

#[derive(Resource)]
struct SimpleComputeWorker;

impl ComputeWorker for SimpleComputeWorker {
    fn build(world: &mut World) -> AppComputeWorker<Self> {
        let worker = AppComputeWorkerBuilder::new(world)
            // Add a uniform variable
            .add_uniform("uni", &5.)

            // Add a staging buffer, it will be available from
            // both CPU and GPU land.
            .add_staging("values", &[1., 2., 3., 4.])

            // Create a compute pass from your compute shader
            // and define used variables
            .add_pass::<SimpleShader>([4, 1, 1], &["uni", "values"])
            .build();

        worker
    }
}

Don't forget to add a shader file to your assets/ folder:

@group(0) @binding(0)
var<uniform> uni: f32;

@group(0) @binding(1)
var<storage, read_write> my_storage: array<f32>;

@compute @workgroup_size(1)
fn main(@builtin(global_invocation_id) invocation_id: vec3<u32>) {
    my_storage[invocation_id.x] = my_storage[invocation_id.x] + uni;
}
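The pass in the worker above dispatches [4, 1, 1] workgroups, and the shader declares @workgroup_size(1), so there are 4 invocations in total, one per element of the "values" buffer. As a quick sanity check on dispatch sizes, the arithmetic can be sketched in plain Rust (`total_invocations` is a hypothetical helper, not part of the bevy_app_compute API):

```rust
/// Total invocations per dispatch = product over the three axes of
/// (workgroup count × workgroup size). Hypothetical helper, not crate API.
fn total_invocations(workgroups: [u32; 3], workgroup_size: [u32; 3]) -> u32 {
    workgroups
        .iter()
        .zip(workgroup_size.iter())
        .map(|(w, s)| w * s)
        .product()
}

fn main() {
    // [4, 1, 1] workgroups with @workgroup_size(1) → 4 invocations,
    // exactly one per element of the 4-element "values" buffer.
    assert_eq!(total_invocations([4, 1, 1], [1, 1, 1]), 4);
}
```

If the dispatch covers fewer invocations than the array has elements, the remaining elements are simply never touched by the shader.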

Add the AppComputePlugin plugin to your app, as well as one AppComputeWorkerPlugin per struct implementing ComputeWorker:

use bevy::prelude::*;
use bevy_app_compute::{AppComputePlugin, AppComputeWorkerPlugin};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(AppComputePlugin)
        .add_plugins(AppComputeWorkerPlugin::<SimpleComputeWorker>::default())
        .run();
}

Your compute worker will now run every frame, during the PostUpdate stage. To read/write from it, use the AppComputeWorker<T> resource!

fn my_system(
    mut compute_worker: ResMut<AppComputeWorker<SimpleComputeWorker>>
) {
    if !compute_worker.available() {
        return;
    }

    let result: Vec<f32> = compute_worker.read_vec("values");

    compute_worker.write_slice("values", &[2., 3., 4., 5.]);

    println!("got {:?}", result);
}
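On the CPU, the shader's effect per dispatch is simply adding uni to each element, so the first readback of "values" should be [6.0, 7.0, 8.0, 9.0]. A standalone CPU sketch of the WGSL body (not crate code):

```rust
// CPU equivalent of simple.wgsl: invocation x adds `uni` to element x.
fn run_pass_on_cpu(values: &mut [f32], uni: f32) {
    for v in values.iter_mut() {
        *v += uni;
    }
}

fn main() {
    let mut values = vec![1.0_f32, 2.0, 3.0, 4.0];
    run_pass_on_cpu(&mut values, 5.0);
    assert_eq!(values, [6.0, 7.0, 8.0, 9.0]);
}
```

Keep in mind the worker runs every frame here, so the shader keeps adding uni to whatever is in the buffer unless you overwrite it.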

(see simple.rs)

Multiple passes

You can have multiple passes without having to copy data back to the CPU in between:

let worker = AppComputeWorkerBuilder::new(world)
    .add_uniform("value", &3.)
    .add_storage("input", &[1., 2., 3., 4.])
    .add_staging("output", &[0f32; 4])
    // add each item + `value` from `input` to `output`
    .add_pass::<FirstPassShader>([4, 1, 1], &["value", "input", "output"]) 
    // multiply each element of `output` by itself
    .add_pass::<SecondPassShader>([4, 1, 1], &["output"]) 
    .build();

    // the `output` buffer will contain [16.0, 25.0, 36.0, 49.0]
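To verify that expected output, the two passes can be replayed on the CPU (a standalone sketch mirroring what the hypothetical FirstPassShader and SecondPassShader bodies would do):

```rust
fn main() {
    let value = 3.0_f32;
    let input = [1.0_f32, 2.0, 3.0, 4.0];
    let mut output = [0.0_f32; 4];

    // First pass: output[i] = input[i] + value
    for i in 0..4 {
        output[i] = input[i] + value;
    }
    // Second pass: output[i] = output[i] * output[i]
    for i in 0..4 {
        output[i] *= output[i];
    }

    assert_eq!(output, [16.0, 25.0, 36.0, 49.0]);
}
```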

(see multi_pass.rs)

One-shot computes

You can configure your worker to execute only when requested:

let worker = AppComputeWorkerBuilder::new(world)
    .add_uniform("uni", &5.)
    .add_staging("values", &[1., 2., 3., 4.])
    .add_pass::<SimpleShader>([4, 1, 1], &["uni", "values"])

    // This `one_shot()` function will configure your worker accordingly
    .one_shot()
    .build();

Then, you can call execute() on your worker when you are ready to execute it:

// Execute it only when the left mouse button is pressed.
fn on_click_compute(
    buttons: Res<Input<MouseButton>>,
    mut compute_worker: ResMut<AppComputeWorker<SimpleComputeWorker>>
) {
    if !buttons.just_pressed(MouseButton::Left) { return; }

    compute_worker.execute();
} 

It will run at the end of the current frame, and you'll be able to read the data in the next frame.

(see one_shot.rs)

Examples

See examples

Features being worked upon

  • Ability to read/write between compute passes.
  • More options in the API, such as choosing BufferUsages or buffer sizes.
  • Optimization. Right now the code is a complete mess.
  • Tests. This badly needs tests.

Bevy version mapping

Bevy   bevy_app_compute
main   main
0.10   0.10.3
0.12   0.10.5

bevy_app_compute's People

Contributors

engodev, kjolnyr


bevy_app_compute's Issues

Better Control Over Running Passes

So I think I got a working solution for #2, but in order to demonstrate it I decided to try porting over the Game of Life example from Bevy, and ran into another problem. That example actually has two passes: an init pass and an update pass. The init pass needs to run first, and only once. This library supports running once, but only for the whole worker, not just one pass. It's not hard to port the Game of Life init pass onto the CPU, but I think this still exposed a need.

In my own project, I also have reason to want to run some passes a set number of times.

So I'm not sure exactly what the API and internal structure should look like, but clearly there's value in having more fine-grained control over how things run.

getting `In Queue::write_buffer Copy of 0..32 would end up overrunning the bounds of the Destination buffer of size 16` when trying to write to buffer

This is my shader code:

@group(0) @binding(0) var<storage,read> firstArray: array<f32>;
@group(0) @binding(1) var<storage,read> secondArray: array<f32>;
@group(0) @binding(2) var<storage,read_write> resultArray: array<f32>;

@compute @workgroup_size(1, 1, 1)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
    let index: u32 = global_id.x;

    resultArray[index] = firstArray[index] + secondArray[index];
}

and the code in Bevy:

fn calc_gpu_things(mut compute_worker: ResMut<AppComputeWorker<SimpleComputeWorker>>) {
    if !compute_worker.ready() {
        return;
    };

    compute_worker.write_slice("firstArray", &[2.0, 3.0, 5.0, 6.0]);
    //compute_worker.write("secondArray", &[2.0, 3.0, 4.0, 5.0]);
    let result: [f32; 4] = compute_worker.read("resultArray");

    println!("got {:?}", result);
}

#[derive(TypeUuid)]
#[uuid = "2545ae14-a9bc-4f03-9ea4-4eb43d1075a7"]
struct SimpleShader;

impl ComputeShader for SimpleShader {
    fn shader() -> ShaderRef {
        "compute1.wgsl".into()
    }
}

#[derive(Resource)]
struct SimpleComputeWorker;

impl ComputeWorker for SimpleComputeWorker {
    fn build(world: &mut World) -> AppComputeWorker<Self> {
        let worker = AppComputeWorkerBuilder::new(world)
            // Add a staging buffer, it will be available from
            // both CPU and GPU land.
            .add_staging("firstArray", &[2.0, 3.0, 4.0, 5.0])
            .add_staging("secondArray", &[2.0, 3.0, 4.0, 5.0])
            .add_staging("resultArray", &[0.0, 0.0, 0.0, 0.0])
            // Create a compute pass from your compute shader
            // and define used variables
            .add_pass::<SimpleShader>([4, 1, 1], &["firstArray", "secondArray", "resultArray"])
            .build();

        worker
    }
}

and the panic I get at runtime:

thread 'Compute Task Pool (4)' panicked at C:\Users\bramb\.cargo\registry\src\index.crates.io-6f17d22bba15001f\wgpu-0.17.2\src\backend\direct.rs:3056:5:
wgpu error: Validation Error

Caused by:
    In Queue::write_buffer
    Copy of 0..32 would end up overrunning the bounds of the Destination buffer of size 16
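One plausible cause (an assumption, not confirmed in the thread): untyped float literals default to f64 in Rust, so a slice like &[2.0, 3.0, 5.0, 6.0] can occupy 32 bytes, while a buffer created from 16 bytes of f32 data holds only 16. The byte counts line up with the error message:

```rust
use std::mem::size_of_val;

fn main() {
    // Untyped float literals default to f64 (8 bytes each) in Rust.
    let untyped = [2.0, 3.0, 5.0, 6.0];   // inferred as [f64; 4]
    let typed = [2.0_f32, 3.0, 5.0, 6.0]; // [f32; 4]

    assert_eq!(size_of_val(&untyped), 32); // the "Copy of 0..32" in the error
    assert_eq!(size_of_val(&typed), 16);   // a 16-byte destination buffer
}
```

Annotating the literals as f32 (e.g. `&[2.0_f32, 3.0, 5.0, 6.0]`) keeps the write the same size as the buffer.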

use existing bevy Buffers

Hi,

I need to keep the data on the GPU, essentially to compute a uniform buffer that can be shared across workers and used in shaders. Is there a way to do that here?

Best regards

Game of life example

Hi :)
I'm wondering if you could write a simple game of life example in order to show how to integrate a compute shader, built using this plugin, with bevy's traditional textures.

My goal is to write a compute shader which modifies an entity's texture while also computing some values. A "Game of life" example would pretty much solve the "reading and writing a texture" part, which would be amazing.

I'm aware that this request exists just because I'm not able to do it myself :...)

Pass data to fragment shader or call fragment shader

Hi and thank you for this library, seems like an amazing addition to the ecosystem!
Is there a way to make the calculated data available to a fragment shader to display the results?

For my specific purposes I would need to avoid copying to the CPU as the computed data is too much to copy every frame.

Too busy to maintain. Looking for maintainers

I've been so busy since I started this project that I can't find the time to work on it regularly.

This crate is quite useful for niche scenarios so I believe it should carry on living without me.

If you are willing to maintain it, please contact me.

Error reflecting bind group 0: Invalid group index 0

I get this error and I have no idea where it could be coming from:

thread 'Compute Task Pool (7)' panicked at C:\Users\bramb.cargo\registry\src\index.crates.io-6f17d22bba15001f\wgpu-0.17.2\src\backend\direct.rs:1729:13:
Error reflecting bind group 0: Invalid group index 0

any info / help would be awesome :)
