
bevy_hanabi's People

Contributors

arewerage, atornity, bendzae, djeedai, eliotbo, flmng0, hayashi-stl, janhohenheim, jannik4, katsutoshii, leonidgrr, marshauf, mathiaspius, nisevoid, olestrohm, pcwalton, piturnah, seldom-se, sludgephd, soulghost, werner291, yrns


bevy_hanabi's Issues

Some examples won't run on M1 Macs

When running some examples, like the spawn example, I'm getting the following error:

2022-12-25T15:02:28.965779Z ERROR wgpu::backend::direct: Handling wgpu errors as fatal by default    
thread 'Compute Task Pool (0)' panicked at 'wgpu error: Validation Error

Caused by:
    In Device::create_texture
      note: label = `view_depth_texture`
    Dimension X value 2560 exceeds the limit of 2048

Looks like the problem is in this code:

    let mut options = WgpuSettings::default();
    let limits = WgpuLimits::downlevel_defaults();
    options.constrained_limits = Some(limits);

Examples should ideally work out of the box for everyone, so we might want to leave this (already optional) section commented out, or somehow detect the appropriate limits.
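
A minimal sketch of one way to do that (assumption only: gating the constrained limits behind an environment variable; this is not how the example is written today):

use bevy::render::settings::{WgpuLimits, WgpuSettings};

// Only constrain the limits when explicitly requested, so the example runs
// out of the box on displays wider than 2048 px.
let mut options = WgpuSettings::default();
if std::env::var_os("HANABI_DOWNLEVEL_LIMITS").is_some() {
    options.constrained_limits = Some(WgpuLimits::downlevel_defaults());
}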

Spawn example crashes on Intel Mac

As of commit ca6d4a0, the spawn example panics in the wgpu backend on a 2019 Intel MacBook Pro with Radeon 5500M graphics.

The following error is printed by the panic message:

wgpu error: Validation Error

Caused by:
    In a ComputePass
      note: encoder = `<CommandBuffer-(0, 1, Metal)>`
    In a set_bind_group command
      note: bind group = `particles_spawner_bind_group`
    dynamic binding at index 0: offset 576 does not respect device's requested `min_storage_buffer_offset_alignment` limit 256

This panic further results in a failed assertion in the Metal layer (because the command encoder is prematurely released) which triggers an Apple crash report dialogue.

I haven't yet had the chance to check how the example behaves on other Apple hardware or OS versions.

bound buffer range 0..592 does not fit in buffer of size 584

After updating to latest main branch, I get the following error when creating a spawner:

thread 'Compute Task Pool (5)' panicked at 'wgpu error: Validation Error

Caused by:
    In Device::create_bind_group
      note: label = `hanabi:spawner_bind_group`
    bound buffer range 0..592 does not fit in buffer of size 584
      note: buffer = `<Buffer-(64, 421, Vulkan)>`

Effect batching

Effect batching, the process of handling multiple compatible* effect instances with a single compute or render shader pass, is currently broken because it doesn't account for the variability in GpuSpawnerParams, which is per-effect data and cannot be batched. Since particles do not "remember" which effect they're part of, we effectively cannot batch them.

One possible fix would be to leverage the "particle index", passed in the thread ID of compute shaders and the instance index of the render shader, to encode both the index of the particle into the particle buffer and the index of the effect in the batch it's from. This would allow each particle to index an array of GpuSpawnerParams in the various shaders, to consume the proper data for their effect. For example, with a 32-bit index, it's reasonable to assume only 24 bits (16 million particles; 512 MB buffer @ 32 B/particle) are needed for the particle itself, leaving 8 bits to batch together up to 256 compatible effect instances.

*compatible = having the same GPU layout and shaders.
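
As a rough illustration of the proposed encoding, here is a sketch of the packing/unpacking on the CPU side (the 24/8 split and the function names are assumptions for illustration, not the crate's actual implementation):

/// Pack a particle index (lower 24 bits) and the index of the effect within
/// its batch (upper 8 bits) into a single 32-bit value.
fn pack_index(particle_index: u32, effect_in_batch: u32) -> u32 {
    debug_assert!(particle_index < (1 << 24));
    debug_assert!(effect_in_batch < (1 << 8));
    (effect_in_batch << 24) | particle_index
}

/// Recover both indices; the compute and render shaders would do the same to
/// look up the GpuSpawnerParams entry for the particle's effect.
fn unpack_index(packed: u32) -> (u32, u32) {
    (packed & 0x00FF_FFFF, packed >> 24)
}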

Handle capacity overflow gracefully

Spawning more particles than an effect's capacity causes a panic:

thread 'main' panicked at 'wgpu error: Validation Error

Caused by:
    In a ComputePass
      note: encoder = `<CommandBuffer-(0, 1935, Vulkan)>`
    In a set_bind_group command
      note: bind group = `hanabi:vfx_particles_bind_group_update0`
    dynamic binding at index 0 with offset 32768 would overrun the buffer (limit: 0)

I believe this should be handled gracefully by clearing old particles out of the buffer.

Cone distribution not uniform in all cases

When creating a cone, particle distribution is not uniform. I think the problem comes from alpha_h being set to pow(rand(), 1.0/3.0). This is not correct and performs even worse than setting it directly to rand(), which would at least behave correctly on a cylinder (top_radius == bottom_radius) and look somewhat uniform on any cone without a sharp point.
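
For reference, here is a sketch of inverse-CDF sampling that gives a uniform volume distribution along the axis of a truncated cone (my own derivation, not code from the crate): since the cross-section area grows with r(h)², the height must be drawn by interpolating the cubes of the two radii and taking a cube root, which reduces to pow(rand(), 1/3) only when the cone has a sharp point at h = 0.

/// Sample a height fraction in [0, 1] along a truncated cone so that particles
/// are uniformly distributed by volume. `bottom_radius` is the radius at h = 0
/// and `top_radius` the radius at h = 1; `u` is a uniform random value in [0, 1].
fn sample_height_fraction(bottom_radius: f32, top_radius: f32, u: f32) -> f32 {
    if (top_radius - bottom_radius).abs() < 1e-6 {
        // Cylinder: every slice has the same area, so the height is uniform.
        return u;
    }
    // Interpolate the cube of the radius, then invert r(h) = lerp(r0, r1, h).
    let r3 = bottom_radius.powi(3) + u * (top_radius.powi(3) - bottom_radius.powi(3));
    (r3.cbrt() - bottom_radius) / (top_radius - bottom_radius)
}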

Panic when spawners are removed quickly

I'm currently inserting a ParticleEffectBundle on every bullet in my game to create a trail of particles. When I despawn the bullets on collision, I occasionally get the following panic:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src/render/mod.rs:1963:30

It seems like it might be a race condition regarding the bind groups.

Reproducible on my fork in the spawner-removal branch, in the remove.rs example: https://github.com/auderer/bevy_hanabi/blob/spawner-removal/examples/remove.rs

In my example, if you hold down the left mouse button to spam spawners, they will eventually panic while being removed.

Feature request: More velocity & acceleration modifiers

Currently we can spawn particles with a radial velocity and give them acceleration in an arbitrary direction. However, for some graphical effects, especially 3D ones, these options are not enough.

I suggest adding:

  • Arbitrary initial velocity (for which there seems to be a stale PR draft)
  • Tangential velocity, which gives a spin around the specified axis through the origin
  • Radial acceleration, similar to the current spawning behavior but as acceleration (it might be possible to abuse force field modifiers for this, but I'm not sure if that is the desired solution)
  • Tangential acceleration, same as the velocity but as acceleration
  • Damping, reduces the speed of particles at the specified rate. Different from negative acceleration in that it can fully stop particles.

The user should be able to combine any of these on the same effect.

Sadly, since I'm not great at 3D space math, I can't add these myself. However, I can provide some example effects that would use these modifiers (a small vector-math sketch follows the examples):

  • Flame pillar: spawn particles on the surface of a circle facing Y, then combine Y+ velocity with tangential velocity.
  • Whirlwind: spawn particles on the surface of a circle facing Y, then combine negative radial velocity with tangential acceleration.
  • Gathering ball: spawn particles on the surface of a sphere, then combine negative radial velocity with the right amount of damping so they stop near the center of the sphere.
  • Explosion+implosion: spawn particles in just about any shape, then combine positive radial velocity with negative radial acceleration.
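
For what it's worth, here is the kind of vector math the tangential velocity could use (a sketch only, using glam types; nothing here is an existing modifier):

use bevy::math::Vec3;

/// Velocity tangent to the circle around `axis` passing through the particle,
/// i.e. the "spin around the specified axis" described above.
fn tangential_velocity(position: Vec3, origin: Vec3, axis: Vec3, speed: f32) -> Vec3 {
    let radial = position - origin;
    axis.normalize().cross(radial).normalize_or_zero() * speed
}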

disable the spawner or remove it without removing the spawned particles

Hello. I want to make a trail effect for a projectile or rocket in flight. After the projectile hits, I remove the emitter to stop it spawning particles, but the particles it already spawned disappear with it. This looks bad; it would be better to let the existing particles fade out on their own. How can I do that?

PositionCone3dModifier generates particles at wrong location

When using the PositionCone3dModifier, the particles are generated at a location that seems to be 2x (in each dimension) the actual translation of the entity.

When going back to the InitPositionCircleModifier the particles are generated around the correct center.

Version: main (specifically 8cbfa36)

Split init/update phases

The particle lifetime is driven by two phases:

  • init, which spawns the particle by making it active and initializing its attributes;
  • update, which updates active particles (Euler motion integration, aging, etc.).

Currently on main those two phases are merged into a single compute shader pass. This has the advantage of allowing a dead particle to be recycled on the fly into a newly spawned one, without the need for any intermediate storage ("dead list"). But it also has several limitations:

  1. All particles need to be updated each frame, even if they're dead, since the CPU doesn't know how many particles are alive at a given time and cannot dispatch only the necessary number of workgroups; instead it has to dispatch conservatively for the entire effect capacity. This makes it easy to over-allocate and degrade performance; see e.g. most examples, which allocate 32768 particles but use only a handful of them. Internally, all 32768 particles are still updated each frame.

  2. The init code can be arbitrarily large/complex, using rand() (ALU-heavy) to add randomness to attribute initialization, and sampling textures for e.g. color gradients or modulation, whereas the update code is generally simpler. On the other hand, the init code only needs to run on newly spawned particles, which are generally one to many orders of magnitude fewer than the alive particles to update. Mixing both in the same compute shader produces variable workloads which work against parallelism.

Splitting those two phases allows tighter dispatching for the init phase and a more consistent workload for the update one. It also increases the chances of batching multiple effect instances together in the update phase even if they have different init code (and vice versa, though that is probably rarer). Work has started on the vfx_init branch to explore that design.
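
As a back-of-the-envelope illustration of the dispatch tightening (assuming a 64-thread workgroup; the actual workgroup size used by the crate's shaders may differ):

const WORKGROUP_SIZE: u32 = 64;

/// With split phases, the init pass only needs enough workgroups to cover the
/// particles actually spawned this frame, instead of the full effect capacity.
fn init_workgroup_count(spawn_count: u32) -> u32 {
    (spawn_count + WORKGROUP_SIZE - 1) / WORKGROUP_SIZE
}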

ParticleEffectBundle doesn't despawn

I added a ParticleEffectBundle as a child of my Player entity; however, when despawn_recursive() is called, the particles don't despawn and stay there. Also, the particles won't spawn a second time if the player respawns.
Maybe I misunderstood how to spawn and despawn the particle bundle; I didn't find any way of deleting / despawning particles in the examples.

Customizable particle attribute layout

The current particle attribute layout is hard-coded into the GpuParticle type:

bevy_hanabi/src/render/mod.rs

Lines 1117 to 1129 in 0f7494d

/// GPU representation of a single particle stored in a GPU buffer.
#[repr(C)]
#[derive(Debug, Copy, Clone, Pod, Zeroable, ShaderType)]
struct GpuParticle {
    /// Particle position in effect space (local or world).
    pub position: [f32; 3],
    /// Current particle age in [0:`lifetime`].
    pub age: f32,
    /// Particle velocity in effect space (local or world).
    pub velocity: [f32; 3],
    /// Total particle lifetime.
    pub lifetime: f32,
}

It contains the position and velocity of the particle, its age, and its maximum lifetime after which the particle dies.

Although this is the most common set of attributes, some effects require more per-particle attributes, like the particle size (per-particle size variation/randomness), or its color, among many examples. To unlock building such effects, the particle layout should instead be dynamically determined by the set of modifiers which define the effect. This allows enabling a wide range of new effects while keeping the per-particle data as small as possible.

The idea is to define an Attribute type representing a single attribute of a particle, with a name (e.g. "position") and a value (e.g. vec3<f32>), and compose the minimal set of attributes needed per effect into a particle layout which determines how the per-particle data is encoded in the GPU buffer of particles.
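
A rough sketch of what those types could look like (hypothetical names and shapes, for illustration only):

/// A single named per-particle attribute, e.g. "position" stored as a vec3<f32>.
#[derive(Debug, Clone, PartialEq)]
pub struct Attribute {
    pub name: &'static str,
    /// Size of the attribute in bytes inside the GPU particle buffer.
    pub size: usize,
}

/// The ordered set of attributes an effect needs, which determines how the
/// per-particle data is encoded in the GPU buffer of particles.
#[derive(Debug, Default, Clone, PartialEq)]
pub struct ParticleLayout {
    pub attributes: Vec<Attribute>,
}

impl ParticleLayout {
    /// Total size in bytes of one particle (alignment/padding ignored for brevity).
    pub fn particle_size(&self) -> usize {
        self.attributes.iter().map(|a| a.size).sum()
    }
}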

Rendering particles as points

Instead of rendering quads, would it be possible to have particles rendered as points? As in, PrimitiveTopology::PointList

This would always render each particle as exactly one pixel in size. I bet this would have performance gains, but also particles would no longer scale based on distance to camera, which matches the style I'm going for in my game. Combined with HDR and bloom you can still get good looking particles.

(As a bonus, maybe even render as lines, using velocity? Just a thought)

Inherit initial velocity from moving emitters

Particles currently have an initial velocity determined by the InitModifiers on an EffectAsset and the GlobalTransform of the emitter entity. Certain classes of visual effects also need to account for the velocity of a moving emitter, and add this value to the initial velocity of all its spawned particles.

To maintain open-ended interoperability with physics engines, the API for this feature could expose an initial velocity field on the ParticleEffect component, and make the API consumer responsible for copying the appropriate value from any physics components.
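
A sketch of the consumer side (everything here is hypothetical: the proposal is a field on ParticleEffect, which this sketch stands in for with an EmitterVelocity component, and Velocity stands in for whatever physics component is in use):

use bevy::prelude::*;

/// Hypothetical physics component exposing the emitter's current velocity.
#[derive(Component)]
struct Velocity(Vec3);

/// Hypothetical component the effect plugin would read when initializing
/// particle velocities; no such field exists in bevy_hanabi today.
#[derive(Component, Default)]
struct EmitterVelocity(Vec3);

/// The API consumer copies the value from whatever physics engine they use.
fn inherit_emitter_velocity(mut query: Query<(&Velocity, &mut EmitterVelocity)>) {
    for (velocity, mut emitter_velocity) in query.iter_mut() {
        emitter_velocity.0 = velocity.0;
    }
}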

Regression in examples\spawn.rs

thread 'main' panicked at 'wgpu error: Validation Error

Caused by:
    In a ComputePass
      note: encoder = `<CommandBuffer-(0, 1, Vulkan)>`
    In a set_bind_group command
      note: bind group = `hanabi:spawner_bind_group`
    dynamic binding at index 0: offset 592 does not respect device's requested `min_storage_buffer_offset_alignment` limit 256

All particles reset when a new effect is spawned

When spawning a new ParticleEffect, all particles disappear and existing effects will appear as if they had just been created. This effect can be seen in the example of #106. This seems to happen regardless of them using the same effect handle or a different effect.

Particles are occasionally living too long

I'm trying to implement a bullet trail effect, and for that the particles are supposed to shoot out the back and stick around for just a fraction of a second.

However, occasionally the particles don't disappear, and instead stick around, creating long streaks of particles.

Current repro is this: https://github.com/OleStrohm/basic_game
(F/Left click to shoot)
The particle creation is at the end of bullet.rs.

Particles always render facing along the X-axis

Not sure if this qualifies as an issue, but I can't figure out how to get the particle system to work the way I'd like, and I'd really appreciate some help if you can spare the time :)

I'm using Bevy 0.8.0 and Hanabi 0.3.0 to simulate space dust in a 3D game, but the particles always render facing down the X-axis, and I'm not sure how to go about correcting them. I assumed the particles would always face the camera, but that doesn't appear to be the case.

When viewing the particle effect cluster from along the X-axis, it looks perfect:
Screenshot from 2022-08-11 10-46-46

When viewing from the Z-axis, the particles are barely visible, because they're perpendicular to the viewing angle:
Screenshot from 2022-08-11 10-49-06

I initially just tried to create two perpendicular particle effects to circumvent the issue, but the rotation component of the transform is also ignored by the renderer, which makes sense in hindsight, but I had to try :)

The code that I'm using to create the clouds is here
My camera definition is defined here. I'm parenting it to a "Ship" entity, which I suspect might be relevant.

Anyway, thanks for creating this plugin, even with this issue, the particle system is amazing and helped me get over a mental block!

Infinite lifetime particles.

It's not unusual to want particles with infinite lifetimes and some other criteria for their destruction.

Suggestions:

  • separate the destruction criteria and provide configurable DestructionModifiers
  • example destruction criteria: particle goes offscreen, particle enters volume, particle exits volume
  • remove the requirement that all particles have a bounded lifetime
  • adjust age/gradient to loop
  • init modifier that is able to set initial particle age (ex: to random value)

WASM compatibility

Using bevy_hanabi fails in WASM because VERTEX_WRITABLE_STORAGE is not supported (even though the wgpu documentation says it is supported on all platforms).

Is it planned to make this crate WASM-compatible?

Update to Bevy 0.10

Seeing as Bevy 0.10 is about to be released, I thought it's time to put this in the room 🙂

Hanabi 0.5.2 Causes Panic on Unwrap

I've got an app that is spawning/despawning several particle emitters as part of an asteroid mining thing. New particle systems work better, in that systems spawned after another has been despawned now work; however, there seems to be a crash caused by a failing unwrap():

thread 'Compute Task Pool (2)' panicked at 'assertion failed: (left == right)
left: 11,
right: 0: Broken table invariant: buffer=0 row=11', /Users/nope/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_hanabi-0.5.2/src/render/mod.rs:1375:13
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
thread 'main' panicked at 'called Option::unwrap() on a None value', /Users/nope/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_tasks-0.9.1/src/task_pool.rs:273:45

Dep:
bevy_hanabi = { version = "0.5.2", default-features = false, features = [ "2d" ] }

2D camera support

Hello,
is it possible to use bevy_hanabi with a 2D camera, like OrthographicCameraBundle::new_2d()? If so, how can I use it?

Text Based Particles

Hello,

I'm unsure of the feasibility of this idea, but I thought it might be interesting to be able to use bevy_hanabi to render text particles. The main use case that comes to mind would be highlighting damage on hits to enemies/players, as in many games. I'll keep searching for alternatives, but I haven't seen anything thus far, and didn't see any previous related issues posted. Let me know if there are any additional thoughts on this idea; thanks for your time!

Panic when just loading the plugin with a camera and no effects

If one loads the HanabiPlugin and has a Camera, but no ParticleEffect entity spawned, the plugin panics:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /home/boris/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_hanabi-0.5.0/src/render/mod.rs:2742:21

Can be reproduced with the 2d.rs example by just spawning the camera in the setup function (just keep the first 6 lines).

After despawning, new ParticleEffects don't produce particles

Environment

bevy: 0.9
bevy_hanabi: 0.5
OS: MacOS

Issue

After despawning ParticleEffect entities, newly spawned ParticleEffect entities don't produce particles

Reproducing

See the following code for a reproduction: theon@132ce0b

The reproduction code spawns a new ParticleEffect entity every second. Once there are MAX_EFFECTS entities it despawns the oldest entity before spawning the next.

Setting MAX_EFFECTS in that example seems to determine when the issue starts:

  • MAX_EFFECTS=1, 3rd spawned particle effect doesn't create particles
  • MAX_EFFECTS=2, 4th spawned particle effect doesn't create particles
  • MAX_EFFECTS=10, 12th spawned particle effect doesn't create particles

Example where MAX_EFFECTS=5 and the 7th spawned entity onwards doesn't produce particles:
https://user-images.githubusercontent.com/759170/209003169-085f376e-8465-41cd-91a8-1649c318d691.mp4

Note: There is no issue when commenting out the line with despawn_recursive()

Pinned bytemuck

Per https://github.com/djeedai/bevy_hanabi/blob/main/Cargo.toml#L25, bytemuck is pinned at "=1.12.3".
This introduces quite a few dependency clashes in my projects, since bytemuck is a very popular crate. I see that 47bff12 introduced the pin, but I don't understand why it's needed.
I ran the instancing example mentioned in the PR with both the pinned dependency and bytemuck at 1.13.0, and found them to behave identically. However, I did not run all examples to verify.
Maybe the reason bytemuck was pinned no longer applies and we can update the dependency? If not: can I somehow help?

Spawner Enhancements

I'd like for there to be some other things you can do with spawners, such as:

  • activating/deactivating them
  • resetting (for SpawnMode::Once, this would allow bursts to be controlled by game logic)
  • disappearing after a certain number of seconds.

I was going to implement them, but I'm not sure if this would interfere with any planned spawner cleanup system.

Generalize vertex_modifiers?

Unlike some of the other customizable shader code blocks, it looks like vertex_modifiers isn't fully configurable. It might be preferable to make this block more configurable, to enable things like per-vertex color randomization or drawing from noise.

Simulation compute jobs run once per view instead of once per frame

The compute jobs for simulating the effects run once per view, as they're driven by the 2D or 3D render graph. This is apparent when taking a RenderDoc capture of the multicam example. They should instead run once per frame before the actual rendering, and only if there's any effect active in any view.

Support RenderLayers and multi-camera rendering

The stock Bevy render pipeline allows cameras with the RenderLayers component to ignore entities without a matching layer mask (encoded with the same RenderLayers component). This allows multiple cameras to be used to construct a wide range of visual effects.

Particles currently render on all cameras, ignoring the RenderLayers component. This can result in conflicts between effects that use particles and effects that use multiple cameras. To support this Bevy feature, the particle rendering pipeline must detect the RenderLayers component on camera entities and allow layer-specific emitters or effect assets.
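
The desired usage might look something like this (a sketch of the requested behavior only; today the RenderLayers component on the effect entity would simply be ignored):

use bevy::prelude::*;
use bevy::render::view::RenderLayers;
use bevy_hanabi::prelude::*;

/// Hypothetical resource holding a prepared effect asset handle.
#[derive(Resource)]
struct MyEffect(Handle<EffectAsset>);

fn setup(mut commands: Commands, effect: Res<MyEffect>) {
    // Camera that should only see entities on layer 1.
    commands.spawn((Camera3dBundle::default(), RenderLayers::layer(1)));

    // Effect tagged with the same layer, so only that camera renders it.
    commands.spawn((
        ParticleEffectBundle::new(effect.0.clone()),
        RenderLayers::layer(1),
    ));
}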

Add modifier to orient particles along their velocity

This is required to create streaks as used in effects with sparkles or other fast-moving particles. However this is currently impossible to implement due to the rigid nature of the rendering shader, which has not been converted to attributes and properties yet (see #129).

Make the number of point sources of the `ForceFieldModifier` dynamic

Currently the number of point sources (attractors or repulsors) on the ForceFieldModifier is fixed at 16. This not only limits the number of sources, but also forces a large constant-size GPU data structure holding all 16 possible sources even if some of them are unused. The source array should be refactored to be dynamically sized, such that the GPU resources only consume space for the point sources actually in use.

Sub-frame emitters with interpolated trajectories

Currently, particle spawn rate is decoupled from render framerate in a way that works for static emitters. For moving emitters, each particle's initial position is still framerate-dependent. An emitter that moves fast enough will leave visible gaps between the batches of particles it spawns on each frame. If the render pipeline stalls, these gaps can grow much larger.

Reducing this framerate dependence requires providing information about an emitter's trajectory to the particles it spawns, so that a batch of particles can be distributed amongst interpolated points along this trajectory. For the time being, it should suffice to linearly interpolate between the previous and current GlobalTransforms of the emitter entity. The same interpolation method can be applied to other parameters as well.
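
A sketch of the interpolation itself (plain translation lerp for illustration; rotation and other parameters would follow the same pattern):

use bevy::math::Vec3;

/// Distribute `count` spawn positions along the emitter's path over the last
/// frame instead of clustering them all at the current position.
fn interpolated_spawn_positions(prev: Vec3, curr: Vec3, count: u32) -> Vec<Vec3> {
    (0..count)
        .map(|i| prev.lerp(curr, (i as f32 + 0.5) / count as f32))
        .collect()
}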

Allow sampling texture from sprite sheet

It's common for particles to be animated using a spritesheet, allowing you to create effects like in this video.

I'm planning on working on this during the next week or two, so any feedback would be appreciated.

Current Plan

I'm going to update the RenderLayout struct to look like this:

#[derive(Debug, Default, Clone, PartialEq)]
pub struct RenderLayout {
    pub particle_texture: Option<ParticleTexture>,
    ...
}

pub enum ParticleTexture {
    Image(Handle<Image>),
    TextureAtlas {
        texture: Handle<TextureAtlas>,
        /// Gradient giving the texture index in the atlas as a function of the particle's lifetime.
        lifetime_texture_index_gradient: Gradient<f32>,
        /// How to interpolate between two textures in the texture atlas.
        texture_interpolation: TextureInterpolation,
    }
}

pub enum TextureInterpolation {
    Nearest,
    LinearBlend
}

In addition to the PARTICLE_TEXTURE shader key, I'll add a new PARTICLE_TEXTURE_ATLAS shader key, which will enable sampling a subset of the uvs following the configuration in the ParticleTexture enum.

Account for orientation of spawner entity

Currently the extraction step extracts the GlobalTransform of the emitter entity.

transform: transform.compute_matrix(),

However only the translation part is actually uploaded to GPU.

origin: extracted_effect.transform.col(3).truncate(),

This was not an issue until now, since only rotation-invariant emitters are implemented (circle, sphere), but it becomes critical for "directional" effects like spawning particles through a cone in a specific direction.
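
A sketch of the kind of change this implies (assumed code, not the crate's actual extraction/preparation functions): upload the whole matrix and use it to transform the emitter-local spawn position, instead of only adding the translation.

use bevy::math::{Mat4, Vec3};

/// With only the translation uploaded, a cone pointing along the emitter's
/// local +Y keeps spawning along world +Y no matter how the entity is rotated.
/// Applying the full matrix respects the emitter's orientation (and scale).
fn emitter_local_to_world(transform: Mat4, local_spawn_pos: Vec3) -> Vec3 {
    transform.transform_point3(local_spawn_pos)
}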

Panic when referencing a buffer with no value

Hello,

my application panics at
thread 'TaskPool (10)' panicked at 'called `Option::unwrap()` on a `None` value', /home/marcel/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_hanabi-0.1.1/src/render/mod.rs:1173:62
I can't pinpoint or reproduce what is causing the panic at the moment. I suspect it is some race between calling prepare_effects and queue_effects.
Could you please help me debug the issue?

Generalize curve support.

Instead of a custom gradient, systems should accept some kind of generalized curve. This is also a Bevy issue, in that some Bevy modules will want to be parameterized over generalized curves, and there needs to be agreement across crates about which curve types to use (a strawman sketch follows the list below).

Why is this useful?

  1. writing a bunch of your own curve types will mean fewer supported types
  2. also, it's annoying to do!
  3. if we're going to write curves, we should all share the results
  4. you'll often want more curves than just linear interpolation between keyframes, e.g. non-linear ramping over a color space
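
As a strawman for that discussion, the shared abstraction could be as small as this (hypothetical trait, not an existing Bevy or bevy_hanabi API):

/// A value that can be sampled over a normalized parameter, e.g. the particle
/// age divided by its lifetime.
pub trait Curve<T> {
    fn sample(&self, t: f32) -> T;
}

/// A keyframe gradient would be just one implementation; splines, easing
/// functions, or non-linear color-space ramps could be others.
pub struct Linear<T> {
    pub start: T,
    pub end: T,
}

impl Curve<f32> for Linear<f32> {
    fn sample(&self, t: f32) -> f32 {
        self.start + (self.end - self.start) * t.clamp(0.0, 1.0)
    }
}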

Bevy 0.8 Support

Hey there

Super excited to use with bevy 0.8!

Any plans to support?

Panic when HDR is enabled

Version tested

main branch
MacOS (M1)

What happened

2022-11-22T21:19:43.648196Z ERROR wgpu::backend::direct: Handling wgpu errors as fatal by default    
thread 'main' panicked at 'wgpu error: Validation Error

Caused by:
    In a RenderPass
      note: encoder = `<CommandBuffer-(0, 1, Metal)>`
    In a set_pipeline command
      note: render pipeline = `hanabi:pipeline_render`
    Render pipeline targets are incompatible with render pass
    Incompatible color attachment: the renderpass expected [Some(Rgba16Float)] but was given [Some(Rgba8UnormSrgb)]

', /Users/robparrett/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.14.0/src/backend/direct.rs:2403:5

Repro

Modify example like so:

let mut camera = Camera3dBundle {
    camera: Camera {
        hdr: true,
        ..default()
    },
    ..default()
};
cargo run --example gradient --features="bevy/bevy_winit bevy/bevy_pbr bevy/png 3d"
cargo run --example 2d --features="bevy/bevy_winit bevy/bevy_sprite 2d"

Particles lifetime

Hello! First of all - thank you for this useful plugin.

Right now I am working on a toy visualization of planetary accretion and decided to use a particle system to show space dust. Particles need to be spawned once and live until the end of the simulation, but the lifetime seems to be hardcoded here:

fn init_lifetime() -> f32 {

Do you have any plans to implement lifetime settings for the particle system? Or maybe I am just unaware of how to set it properly...
I found a way to overcome this by forking the repo and implementing a ParticleLifetimeModifier, but I'm not sure if this is the proper way to achieve the desired outcome.
d78c885

UNASSIGNED-CoreValidation-Shader-OutputNotConsumed warning

Hello,

when running 2D or 3D examples (0e1df4d), bevy warns about a performance problem:

2022-04-11T18:17:01.846362Z  WARN wgpu_hal::vulkan::instance: PERFORMANCE [UNASSIGNED-CoreValidation-Shader-OutputNotConsumed (0x609a13b)]
        Validation Performance Warning: [ UNASSIGNED-CoreValidation-Shader-OutputNotConsumed ] Object 0: handle = 0x984b920000000104, type = VK_OBJECT_TYPE_SHADER_MODULE; | MessageID = 0x609a13b | Vertex attribute at location 1 not consumed by vertex shader    
2022-04-11T18:17:01.846425Z  WARN wgpu_hal::vulkan::instance:   objects: (type: SHADER_MODULE, hndl: 0x984b920000000104, name: ?) 
