djeedai / bevy_hanabi
🎆 Hanabi — a GPU particle system plugin for the Bevy game engine.
License: Apache License 2.0
I'm not sure if I'm missing something obvious, but it seems that if I attach a ParticleEffectBundle to an entity, let it run for a while, and then commands.entity(entity).despawn(), the particle system just keeps emitting new particles at the last location of the entity.
When running some examples, like the spawn example, I'm getting the following error:
2022-12-25T15:02:28.965779Z ERROR wgpu::backend::direct: Handling wgpu errors as fatal by default
thread 'Compute Task Pool (0)' panicked at 'wgpu error: Validation Error
Caused by:
In Device::create_texture
note: label = `view_depth_texture`
Dimension X value 2560 exceeds the limit of 2048
Looks like the problem is in this code:
let mut options = WgpuSettings::default();
let limits = WgpuLimits::downlevel_defaults();
options.constrained_limits = Some(limits);
Examples should ideally work out of the box for everyone, so we might want to leave this (already optional) section commented out, or somehow detect the appropriate limits.
As of commit ca6d4a0, the spawn example panics in the wgpu backend on a 2019 Intel MacBook Pro with Radeon 5500M graphics.
The following error is printed by the panic message:
wgpu error: Validation Error
Caused by:
In a ComputePass
note: encoder = `<CommandBuffer-(0, 1, Metal)>`
In a set_bind_group command
note: bind group = `particles_spawner_bind_group`
dynamic binding at index 0: offset 576 does not respect device's requested `min_storage_buffer_offset_alignment` limit 256
This panic further results in a failed assertion in the Metal layer (because the command encoder is prematurely released) which triggers an Apple crash report dialogue.
I haven't yet had the chance to check how the example behaves on other Apple hardware or OS versions.
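The alignment panic above comes from a dynamic storage-buffer offset (576) that isn't a multiple of the device's min_storage_buffer_offset_alignment (256). A minimal sketch of the usual fix, rounding each per-effect stride up to the alignment — the function name is illustrative, not from the crate:

```rust
/// Round `value` up to the next multiple of `alignment`.
/// `alignment` must be a power of two, which wgpu guarantees for
/// `min_storage_buffer_offset_alignment`.
fn align_up(value: u32, alignment: u32) -> u32 {
    debug_assert!(alignment.is_power_of_two());
    (value + alignment - 1) & !(alignment - 1)
}

fn main() {
    // With a 256-byte alignment, the offending offset 576 rounds up to 768,
    // which the device accepts.
    assert_eq!(align_up(576, 256), 768);
    assert_eq!(align_up(256, 256), 256);
}
```

Each per-effect block then starts on an aligned boundary, at the cost of some padding between blocks.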
After updating to the latest main branch, I get the following error when creating a spawner:
thread 'Compute Task Pool (5)' panicked at 'wgpu error: Validation Error
Caused by:
In Device::create_bind_group
note: label = `hanabi:spawner_bind_group`
bound buffer range 0..592 does not fit in buffer of size 584
note: buffer = `<Buffer-(64, 421, Vulkan)>`
Because of the change detection of Mut, and the fact that ParticleEffect::set_property() takes &mut self, the effect instance gets marked as changed, so all shaders get invalidated and rebuilt, which defeats the point of using properties to dynamically change values while keeping the same shader.
Effect batching, the process of handling multiple compatible* effect instances with a single compute or render shader pass, is currently broken because it doesn't account for the variability of GpuSpawnerParams, which is per-effect data and cannot be batched. Since particles do not "remember" which effect they're part of, this means we effectively cannot batch them.
One possible fix would be to leverage the "particle index", passed in the thread ID of compute shaders and the instance index of the render shader, to encode both the index of the particle into the particle buffer and the index of the effect in the batch it's from. This would allow each particle to index an array of GpuSpawnerParams in the various shaders, to consume the proper data for its effect. For example, with a 32-bit index, it's reasonable to assume only 24 bits (16 million particles; 512 MB buffer @ 32 B/particle) are needed for the particle itself, leaving 8 bits to batch together up to 256 compatible effect instances.
*compatible = having the same GPU layout and shaders.
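The 24+8-bit split suggested above can be sketched in plain Rust; the names are illustrative, not part of the crate:

```rust
/// Number of bits reserved for the particle index (up to 16M particles).
const PARTICLE_BITS: u32 = 24;
const PARTICLE_MASK: u32 = (1 << PARTICLE_BITS) - 1;

/// Pack a particle index and an effect-in-batch index into a single u32,
/// as the compute/render shaders would receive it.
fn pack_index(particle: u32, effect: u32) -> u32 {
    debug_assert!(particle <= PARTICLE_MASK);
    debug_assert!(effect < (1 << (32 - PARTICLE_BITS)));
    (effect << PARTICLE_BITS) | particle
}

/// Recover (particle index, effect index) from a packed value.
fn unpack_index(packed: u32) -> (u32, u32) {
    (packed & PARTICLE_MASK, packed >> PARTICLE_BITS)
}

fn main() {
    let packed = pack_index(12_345, 7);
    assert_eq!(unpack_index(packed), (12_345, 7));
}
```

On the shader side the same two bit operations would decode the thread ID / instance index before fetching the per-effect entry from the GpuSpawnerParams array.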
Spawning more particles than an effect's capacity causes a panic:
thread 'main' panicked at 'wgpu error: Validation Error
Caused by:
In a ComputePass
note: encoder = `<CommandBuffer-(0, 1935, Vulkan)>`
In a set_bind_group command
note: bind group = `hanabi:vfx_particles_bind_group_update0`
dynamic binding at index 0 with offset 32768 would overrun the buffer (limit: 0)
I believe this should be handled gracefully by clearing old particles out of the buffer.
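One simple way to handle this gracefully (the report above suggests evicting old particles instead, which is another option) is to clamp the frame's spawn count on the CPU side to the remaining capacity, so the GPU offset never overruns the buffer. A hedged sketch with illustrative names:

```rust
/// Clamp a frame's requested spawn count to the effect's remaining capacity.
fn clamp_spawn_count(requested: u32, alive: u32, capacity: u32) -> u32 {
    requested.min(capacity.saturating_sub(alive))
}

fn main() {
    // 30 of 64 slots in use: only 34 of the 100 requested particles fit.
    assert_eq!(clamp_spawn_count(100, 30, 64), 34);
    // Buffer already full: spawn nothing rather than panic.
    assert_eq!(clamp_spawn_count(10, 70, 64), 0);
}
```

Clamping silently drops the excess spawns; evicting the oldest particles would instead keep the effect visually "fresh" at the cost of shortening their lifetime.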
When creating a cone, the particle distribution is not uniform. I think the problem comes from alpha_h being set to pow(rand(), 1.0/3.0). This is not correct, and performs even worse than setting it directly to rand(), which would at least behave correctly on a cylinder (top_radius == bottom_radius) and look somewhat uniform on any cone without a sharp point.
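For reference, here is a CPU-side Rust sketch of uniform sampling inside a truncated cone via rejection from its bounding box, which is trivially uniform regardless of the radii — unlike remapping rand() through a fixed power. The helper names and the tiny LCG standing in for the shader's rand() are illustrative, not the crate's code:

```rust
/// Tiny deterministic LCG producing values in [0, 1),
/// standing in for the shader's rand().
fn next_f32(state: &mut u64) -> f32 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    ((*state >> 40) as f32) / (1u64 << 24) as f32
}

/// Uniformly sample a point inside a truncated cone aligned on +Y,
/// with radius `r_bottom` at y=0 and radius `r_top` at y=height.
fn sample_cone(state: &mut u64, height: f32, r_bottom: f32, r_top: f32) -> [f32; 3] {
    let r_max = r_bottom.max(r_top);
    loop {
        // Candidate uniform in the bounding box of the cone.
        let x = (next_f32(state) * 2.0 - 1.0) * r_max;
        let z = (next_f32(state) * 2.0 - 1.0) * r_max;
        let y = next_f32(state) * height;
        // Radius of the cone's cross-section at height y.
        let r = r_bottom + (r_top - r_bottom) * (y / height);
        if x * x + z * z <= r * r {
            return [x, y, z];
        }
    }
}

fn main() {
    let mut state = 42u64;
    for _ in 0..1000 {
        let [x, y, z] = sample_cone(&mut state, 2.0, 1.0, 0.2);
        let r = 1.0 + (0.2 - 1.0) * (y / 2.0);
        assert!((0.0..=2.0).contains(&y));
        assert!(x * x + z * z <= r * r + 1e-6);
    }
}
```

A closed-form inverse-CDF version is possible too (the height CDF is a cubic in y), but rejection is easier to get right and cheap for non-degenerate cones.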
I'm currently inserting a ParticleEffectBundle on every bullet in my game to create a trail of particles. When I despawn the bullets on collision, I occasionally get the following panic:
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src/render/mod.rs:1963:30
It seems like it might be a race condition regarding the bind groups.
Reproducible on my fork in the spawner-removal branch, in the remove.rs example: https://github.com/auderer/bevy_hanabi/blob/spawner-removal/examples/remove.rs
In my example, if you hold down the left mouse button to spam spawners, they will eventually panic while being removed.
Currently we can spawn particles with a radial velocity and give them acceleration in an arbitrary direction. However, for some graphical effects, especially 3D ones, these options are not enough.
I suggest adding:
The user should be able to combine any of these on the same effect.
Sadly, since I'm not great at 3D space math, I can't add these myself. However I can provide you some example effects that would use these modifiers:
Flame pillar: You spawn particles on the surface of a circle facing Y. Then combine Y+ velocity with tangential velocity.
Whirlwind: You spawn particles at the surface of a circle facing Y. Then combine negative radial velocity with tangential acceleration.
Gathering ball: You spawn particles on the surface of a sphere. Then combine negative radial velocity with the right amount of damping so they stop near the center of the sphere.
Explosion+implosion: You spawn particles in just about any shape. Then combine positive radial velocity with negative radial acceleration.
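To make the suggested "tangential velocity" concrete, here is a small hypothetical sketch of the math (no such modifier exists in the crate; the names are illustrative): for an axis along +Y, the tangent at a particle's position is perpendicular to its radial direction in the XZ plane.

```rust
/// A velocity of magnitude `speed`, tangent to the circle around the +Y
/// axis passing through `pos` (zero on the axis itself).
fn tangent_velocity(pos: [f32; 3], speed: f32) -> [f32; 3] {
    let (x, z) = (pos[0], pos[2]);
    let len = (x * x + z * z).sqrt();
    if len < 1e-6 {
        return [0.0, 0.0, 0.0];
    }
    // Cross product of the radial direction (x, 0, z)/len with the +Y axis.
    [-z / len * speed, 0.0, x / len * speed]
}

fn main() {
    let v = tangent_velocity([2.0, 0.0, 0.0], 3.0);
    // The result is orthogonal to the radial direction, with the requested speed.
    assert!(v[0].abs() < 1e-6 && v[1] == 0.0);
    assert!((v[2] - 3.0).abs() < 1e-6);
}
```

Swapping the sign of the result flips the swirl direction; the whirlwind and flame-pillar recipes above are this vector combined with a radial or axial component.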
Hello. I want to make a trail effect during the flight of a projectile or rocket. After the projectile hits, I remove the emitter to stop the spawning of particles, but the particles it spawned disappear with it. This looks bad; it would be better to let the existing particles finish on their own. How can I do that?
When using PositionCone3d, the particles are generated at a location that seems to be 2x (in each dimension) the actual translation of the entity.
When going back to the InitPositionCircleModifier the particles are generated around the correct center.
Version: main (specifically 8cbfa36)
The particle lifetime is driven by two phases: initialization of newly spawned particles, and per-frame update of alive particles.
Currently on main those two phases are merged into a single compute shader pass. This has the advantage of allowing a dead particle to be recycled on-the-fly into a newly spawned one, without the need for any intermediate storage ("dead list"). But this also has several limitations:
All particles need to be updated each frame, even if they're dead, since the CPU doesn't know how many particles are alive at a given time and cannot dispatch the exact number of workgroups; instead it has to dispatch conservatively for the entire effect capacity. This makes usage error-prone, as over-allocating degrades performance. See e.g. most examples allocating 32768 particles but using only a handful of them; internally, all 32768 particles are still updated each frame.
The initialization code can be arbitrarily large/complex, using rand() (ALU) to add randomness to attribute initialization, and sampling textures for e.g. color gradients or modulation, while the update code is generally simpler. On the other hand, the init code only needs to run on newly spawned particles, which are generally one to many orders of magnitude fewer than the number of alive particles to update. Mixing both in the same compute shader produces variable workloads which work against parallelism.
Splitting those two phases allows tighter dispatching for the init phase, and a more consistent workload for the update one. It also increases the chances of batching multiple effect instances together in the update phase even if they have different initialization code (and vice versa, though that is probably rarer). Work has started on the vfx_init branch to explore that design.
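A rough sketch of the dispatch arithmetic the split enables; the workgroup size of 64 is an assumption for illustration, not necessarily the crate's actual constant:

```rust
// Assumed compute workgroup size (illustrative).
const WORKGROUP_SIZE: u32 = 64;

/// Number of workgroups needed to cover `item_count` items (ceiling division).
fn workgroups(item_count: u32) -> u32 {
    (item_count + WORKGROUP_SIZE - 1) / WORKGROUP_SIZE
}

fn main() {
    // Merged pass: always dispatch over the full 32768-particle capacity...
    assert_eq!(workgroups(32_768), 512);
    // ...while a dedicated init pass only covers this frame's spawns.
    assert_eq!(workgroups(50), 1);
}
```

With a separate init pass, a frame spawning 50 particles dispatches 1 workgroup instead of 512, and the update pass runs a uniform per-thread workload.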
I added a ParticleEffectBundle as a child of my Player entity; however, when despawn_recursive() is called, the particles don't despawn and stay there. Also, the particles won't spawn a second time if the player respawns.
Maybe I got wrong how to spawn and despawn a particle bundle; I couldn't find any way of deleting/despawning particles in the examples.
The current particle attribute layout is hard-coded into the GpuParticle type:
Lines 1117 to 1129 in 0f7494d
It contains the position and velocity of the particle, its age, and its maximum lifetime after which the particle dies.
Although this is the most common set of attributes, some effects require more per-particle attributes, like the particle size (per-particle size variation/randomness), or its color, among many examples. To unlock building such effects, the particle layout should instead be dynamically determined by the set of modifiers which define the effect. This allows enabling a wide range of new effects while keeping the per-particle data as small as possible.
The idea is to define an Attribute type representing a single attribute of a particle, with a name (e.g. "position") and a value (e.g. vec3<f32>), and compose the minimal set of attributes needed per effect into a particle layout which determines how the per-particle data is encoded in the GPU buffer of particles.
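A minimal sketch of what such a dynamic layout could look like; the types and sizes are illustrative, not the crate's eventual API, and GPU alignment rules are ignored for brevity:

```rust
/// A single named particle attribute and the byte size of its value type
/// (e.g. 12 for a vec3<f32>, 4 for an f32).
#[derive(Debug, Clone, PartialEq)]
struct Attribute {
    name: &'static str,
    size: usize,
}

/// The minimal set of attributes an effect's modifiers require,
/// determining how per-particle data is encoded in the GPU buffer.
struct ParticleLayout {
    attributes: Vec<Attribute>,
}

impl ParticleLayout {
    /// Total per-particle byte size (alignment/padding ignored).
    fn size(&self) -> usize {
        self.attributes.iter().map(|a| a.size).sum()
    }
}

fn main() {
    // The current hard-coded layout expressed in this scheme:
    let layout = ParticleLayout {
        attributes: vec![
            Attribute { name: "position", size: 12 },
            Attribute { name: "velocity", size: 12 },
            Attribute { name: "age", size: 4 },
            Attribute { name: "lifetime", size: 4 },
        ],
    };
    assert_eq!(layout.size(), 32);
}
```

An effect needing per-particle color or size would simply append those attributes, and effects without them would keep the smaller 32-byte footprint.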
Instead of rendering quads, would it be possible to have particles rendered as points? As in, PrimitiveTopology::PointList.
This would always render each particle as exactly one pixel in size. I bet this would have performance gains, but particles would also no longer scale based on distance to camera, which matches the style I'm going for in my game. Combined with HDR and bloom, you can still get good-looking particles.
(As a bonus, maybe even render as lines, using velocity? Just a thought)
Particles currently have an initial velocity determined by the InitModifiers on an EffectAsset and the GlobalTransform of the emitter entity. Certain classes of visual effects also need to account for the velocity of a moving emitter, and add this value to the initial velocity of all its spawned particles.
To maintain open-ended interoperability with physics engines, the API for this feature could expose an initial velocity field on the ParticleEffect component, and make the API consumer responsible for copying the appropriate value from any physics components.
thread 'main' panicked at 'wgpu error: Validation Error
Caused by:
In a ComputePass
note: encoder = `<CommandBuffer-(0, 1, Vulkan)>`
In a set_bind_group command
note: bind group = `hanabi:spawner_bind_group`
dynamic binding at index 0: offset 592 does not respect device's requested `min_storage_buffer_offset_alignment` limit 256
When spawning a new ParticleEffect, all particles disappear and existing effects will appear as if they had just been created. This effect can be seen in the example of #106. This seems to happen regardless of them using the same effect handle or a different effect.
I'm trying to implement a bullet trail effect, and for that the particles are supposed to shoot out the back and stick around for just a fraction of a second.
However, occasionally the particles don't disappear, and instead stick around, creating long streaks of particles.
Current repro is this: https://github.com/OleStrohm/basic_game
(F/Left click to shoot)
The particle creation is at the end of bullet.rs.
Not sure if this qualifies as an issue, but I'm not sure how to get the particle system to work the way I'd like it to, and I'd really appreciate some help if you can spare the time :)
I'm using Bevy 0.8.0 and Hanabi 0.3.0 to simulate space dust in a 3D game, but the particles always render facing down the X-axis, and I'm not sure how to go about correcting them. I assumed the particles would always face the camera, but that doesn't appear to be the case.
When viewing the particle effect cluster from along the X-axis, it looks perfect:
When viewing from the Z-axis, the particles are barely visible, because they're perpendicular to the viewing angle:
I initially just tried to create two perpendicular particle effects to circumvent the issue, but the rotation component of the transform is also ignored by the renderer, which makes sense in hindsight, but I had to try :)
The code that I'm using to create the clouds is here
My camera definition is here; I'm parenting it to a "Ship" entity, which I suspect might be relevant.
Anyway, thanks for creating this plugin, even with this issue, the particle system is amazing and helped me get over a mental block!
It's not unusual to want particles with infinite lifetimes and some other criteria for their destruction.
Suggestions:
Using bevy_hanabi fails in WASM because VERTEX_WRITABLE_STORAGE is not supported (even though the wgpu docs say it is supported on all platforms).
Is it planned to make this crate WASM-compatible?
Seeing as Bevy 0.10 is about to be released, I thought it's time to put this in the room.
I've got an app that is spawning/despawning several particle emitters as part of an asteroid mining thing. New particle systems work better, in that systems spawned after another has been despawned now work; however, there seems to be a crash caused by an unwrap() on a None value:
thread 'Compute Task Pool (2)' panicked at 'assertion failed: (left == right)
left: 11,
right: 0: Broken table invariant: buffer=0 row=11', /Users/nope/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_hanabi-0.5.2/src/render/mod.rs:1375:13
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /Users/nope/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_tasks-0.9.1/src/task_pool.rs:273:45
Dep:
bevy_hanabi = { version = "0.5.2", default-features = false, features = [ "2d" ] }
Hello,
Is it possible to use bevy_hanabi with a 2D camera, like OrthographicCameraBundle::new_2d()? If so, how can I use it?
Hi, I want to draw a star field, but with given positions; I think that's not possible right now. Any clue here?
If MSAA samples are set to 1, Hanabi will crash.
This line needs to be updated to use the configured MSAA sample count: https://github.com/djeedai/bevy_hanabi/blob/main/src/render/mod.rs#L545
Hello,
I'm unsure of the feasibility of this idea, but I thought it might be interesting to be able to use bevy_hanabi to render text particles. The main use case that comes to mind is highlighting damage on hits to enemies/players, as in many games. I'll keep searching around for alternatives, but I haven't seen anything thus far, and didn't see any previous related issues posted. Let me know if there are any additional thoughts on this idea. Thanks for your time!
If one loads the HanabiPlugin and has a Camera, but no ParticleEffect entity spawned, the plugin panics:
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /home/boris/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_hanabi-0.5.0/src/render/mod.rs:2742:21
Can be reproduced with the 2d.rs example by just spawning the camera in the setup function (just keep the first 6 lines).
bevy: 0.9
bevy_hanabi: 0.5
OS: MacOS
After despawning ParticleEffect entities, newly spawned ParticleEffect entities don't produce particles.
See the following code for a reproduction: theon@132ce0b
The reproduction code spawns a new ParticleEffect entity every second. Once there are MAX_EFFECTS entities, it despawns the oldest entity before spawning the next.
Setting MAX_EFFECTS in that example seems to determine when the issue starts.
Example where MAX_EFFECTS=5 and the 7th spawned entity onwards doesn't produce particles:
https://user-images.githubusercontent.com/759170/209003169-085f376e-8465-41cd-91a8-1649c318d691.mp4
Note: There is no issue when commenting out the line with despawn_recursive().
According to StarArawn, who wrote bevy_ecs_tilemap, this plugin doesn't correctly set the z value for particle rendering: https://github.com/djeedai/bevy_hanabi/blob/main/src/render/mod.rs#L1606
This results in particles randomly appearing below or in front of sprites or tiles from bevy_ecs_tilemap.
I first mention this bug here:
StarArawn/bevy_ecs_tilemap#188
When repeatedly creating and removing lots of emitters, they cause an increasing amount of FPS lag, even when none are in the world. It seems like something is not being fully cleaned up on removal.
Per https://github.com/djeedai/bevy_hanabi/blob/main/Cargo.toml#L25, bytemuck is pinned at "=1.12.3".
This introduces quite a few dependency clashes in my projects, since bytemuck is a very popular crate. I see that 47bff12 introduced the pin, but I don't understand why it's needed.
I ran the instancing example mentioned in the PR with both the pinned dependency and bytemuck at 1.13.0, and found them to behave identically. However, I did not run all examples to verify.
Maybe the reason bytemuck was pinned no longer applies and we can update the dependency? If not: can I somehow help?
I'd like for there to be some other things you can do with spawners, such as:
SpawnMode::Once (this would allow bursts to be controlled by game logic)
I was going to implement them, but I'm not sure if this would interfere with any planned spawner cleanup system.
Unlike some of the other customizable shader code blocks, it looks like vertex_modifiers isn't fully configurable. It might be preferable to make this block more configurable, to enable things like per-vertex color randomization or drawing from noise.
Hi,
I want to destroy particles manually instead of by lifetime. How can I do that?
The compute jobs for simulating the effects run once per view, as they're driven by the 2D or 3D render graph. This is apparent in the multicam example when taking a RenderDoc snapshot, for example. They should instead run once per frame, before the actual rendering, and only if there's any effect active in any view.
The stock Bevy render pipeline allows cameras with the RenderLayers component to ignore entities without a matching layer mask (encoded with the same RenderLayers component). This allows multiple cameras to be used to construct a wide range of visual effects.
Particles currently render on all cameras, ignoring the RenderLayers component. This can result in conflicts between effects that use particles and effects that use multiple cameras. To support this Bevy feature, the particle rendering pipeline must detect the RenderLayers component on camera entities and allow layer-specific emitters or effect assets.
This is required to create streaks as used in effects with sparkles or other fast-moving particles. However this is currently impossible to implement due to the rigid nature of the rendering shader, which has not been converted to attributes and properties yet (see #129).
Currently the number of point sources (attractors or repulsors) on the ForceFieldModifier is fixed at 16. This not only limits the number of sources, but also forces a large constant-size GPU data structure holding all 16 possible sources even if some of them are unused. The source array should be refactored to be dynamically sized, such that the GPU resources only consume a size equal to the number of actually active point sources.
Currently, particle spawn rate is decoupled from render framerate in a way that works for static emitters. For moving emitters, each particle's initial position is still framerate-dependent. An emitter that moves fast enough will leave visible gaps between the batches of particles it spawns on each frame. If the render pipeline stalls, these gaps can grow much larger.
Reducing this framerate dependence requires providing information about an emitter's trajectory to the particles it spawns, so that a batch of particles can be distributed amongst interpolated points along this trajectory. For the time being, it should suffice to linearly interpolate between the previous and current GlobalTransforms of the emitter entity. The same interpolation method can be applied to other parameters as well.
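A minimal sketch of the proposed interpolation, assuming the emitter's previous-frame position is kept around (the helper names are illustrative, not the crate's API):

```rust
/// Distribute one frame's batch of `count` spawn positions along the
/// segment travelled by the emitter since the previous frame.
fn spawn_positions(prev: [f32; 3], curr: [f32; 3], count: u32) -> Vec<[f32; 3]> {
    (0..count)
        .map(|i| {
            // Offset by half a step so the batch straddles the whole segment
            // instead of clustering at either endpoint.
            let t = (i as f32 + 0.5) / count as f32;
            [
                prev[0] + (curr[0] - prev[0]) * t,
                prev[1] + (curr[1] - prev[1]) * t,
                prev[2] + (curr[2] - prev[2]) * t,
            ]
        })
        .collect()
}

fn main() {
    // Emitter moved 4 units along X this frame: 4 particles spread evenly
    // along the path instead of all appearing at the current position.
    let p = spawn_positions([0.0; 3], [4.0, 0.0, 0.0], 4);
    assert_eq!(p[0], [0.5, 0.0, 0.0]);
    assert_eq!(p[3], [3.5, 0.0, 0.0]);
}
```

The same per-particle parameter t could interpolate rotation, spawn direction, or any other emitter parameter between the two frames.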
It's common for particles to be animated using a spritesheet, allowing you to create effects like in this video.
I'm planning on working on this during the next week or two, so any feedback would be appreciated.
I'm going to update the RenderLayout struct to look like this:

#[derive(Debug, Default, Clone, PartialEq)]
pub struct RenderLayout {
    pub particle_texture: Option<ParticleTexture>,
    ...
}

pub enum ParticleTexture {
    Image(Handle<Image>),
    TextureAtlas {
        texture: Handle<TextureAtlas>,
        /// The index of the texture in the texture atlas.
        lifetime_texture_index_gradient: Gradient<f32>,
        /// How to interpolate between two textures in the texture atlas.
        texture_interpolation: TextureInterpolation,
    },
}

pub enum TextureInterpolation {
    Nearest,
    LinearBlend,
}
In addition to the PARTICLE_TEXTURE shader key, I'll add a new PARTICLE_TEXTURE_ATLAS shader key, which will enable sampling a subset of the UVs following the configuration in the ParticleTexture enum.
Currently the extraction step extracts the GlobalTransform of the emitter entity.
Line 737 in 6d33bc4
However only the translation part is actually uploaded to GPU.
Line 1158 in 6d33bc4
This was not an issue until now since only rotation-invariant emitters are implemented (circle, sphere) but becomes critical for "directional" effects like spawning particles through a cone in a specific direction.
Hello,
my application panics at
thread 'TaskPool (10)' panicked at 'called `Option::unwrap()` on a `None` value', /home/marcel/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_hanabi-0.1.1/src/render/mod.rs:1173:62
I can't pinpoint or reproduce what is causing the panic at the moment. I suspect it is some race between calling prepare_effects and queue_effects.
Could you please help me debug the issue?
Instead of a custom gradient, systems should accept some kind of generalized curve. This is also a bevy issue, in that some bevy modules will want to be able to be parameterized over generalized curves and there needs to be agreement across crates about what curves to use.
Why is this useful?
Hey there!
Super excited to use this with Bevy 0.8!
Any plans to support it?
main branch
MacOS (M1)
2022-11-22T21:19:43.648196Z ERROR wgpu::backend::direct: Handling wgpu errors as fatal by default
thread 'main' panicked at 'wgpu error: Validation Error
Caused by:
In a RenderPass
note: encoder = `<CommandBuffer-(0, 1, Metal)>`
In a set_pipeline command
note: render pipeline = `hanabi:pipeline_render`
Render pipeline targets are incompatible with render pass
Incompatible color attachment: the renderpass expected [Some(Rgba16Float)] but was given [Some(Rgba8UnormSrgb)]
', /Users/robparrett/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.14.0/src/backend/direct.rs:2403:5
Modify an example like so:

let mut camera = Camera3dBundle {
    camera: Camera {
        hdr: true,
        ..default()
    },
    ..default()
};
cargo run --example gradient --features="bevy/bevy_winit bevy/bevy_pbr bevy/png 3d"
cargo run --example 2d --features="bevy/bevy_winit bevy/bevy_sprite 2d"
Hello! First of all, thank you for this useful plugin.
Right now I am working on a toy visualization of planetary accretion and decided to use a particle system to show space dust. So particles need to be spawned once and live till the end of the simulation, but the lifetime seems to be hardcoded here:
bevy_hanabi/src/render/particles_update.wgsl
Line 118 in c861037
Do you have any plans to implement lifetime settings for the particle system? Or maybe I am just unaware of how to set it properly...
I found a way to overcome this by forking the repo and implementing a ParticleLifetimeModifier, but I'm not sure if this is the proper way to achieve the desired outcome.
d78c885
Hello,
when running 2D or 3D examples (0e1df4d), bevy warns about a performance problem:
2022-04-11T18:17:01.846362Z WARN wgpu_hal::vulkan::instance: PERFORMANCE [UNASSIGNED-CoreValidation-Shader-OutputNotConsumed (0x609a13b)]
Validation Performance Warning: [ UNASSIGNED-CoreValidation-Shader-OutputNotConsumed ] Object 0: handle = 0x984b920000000104, type = VK_OBJECT_TYPE_SHADER_MODULE; | MessageID = 0x609a13b | Vertex attribute at location 1 not consumed by vertex shader
2022-04-11T18:17:01.846425Z WARN wgpu_hal::vulkan::instance: objects: (type: SHADER_MODULE, hndl: 0x984b920000000104, name: ?)