
webgpufundamentals's Issues

How to better render to multiple canvases?

https://webgpufundamentals.org/webgpu/lessons/webgpu-from-webgl.html

But, because of that, unlike WebGL, you can use one WebGPU device to render to multiple canvases. 🎉🤩

Hi @greggman, while reading the article WebGPU from WebGL, this part caught my attention. I tried to adopt a similar idea and reuse the same GPUDevice to render to multiple canvases, but ultimately failed. I created a proxy class named WebGPURenderer that performs the rendering whenever requestAnimationFrame() triggers it. I want WebGPURenderer to reuse the same GPUDevice and serve multiple canvases. However, every time it serves another canvas, I need to configure that canvas's GPUCanvasContext again.
For example, with two canvases on the screen, each round of rendering requires configuring the GPUCanvasContext for the first canvas and then configuring it once more for the second canvas. I found that this mechanism costs a lot.
From my reading of your words, I think you mean that each GPUCanvasContext can be given the same GPUDevice via the device attribute when calling configure(). So my question is: how can I better render to multiple canvases?

this.#mWGPUContext.configure({
  device: this.#mDevice,
  format: this.#mFormat,
  alphaMode: 'premultiplied',
});
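
For comparison, a pattern that avoids reconfiguring per frame is to configure each GPUCanvasContext once at setup and only call getCurrentTexture() each frame. A minimal sketch, assuming `device` already exists and `canvases` is a hypothetical array of canvas elements:

const format = navigator.gpu.getPreferredCanvasFormat();

// One-time setup: configure each canvas context with the shared device.
const contexts = canvases.map((canvas) => {
  const context = canvas.getContext('webgpu');
  context.configure({ device, format, alphaMode: 'premultiplied' });
  return context;
});

function render() {
  for (const context of contexts) {
    // No configure() here; only getCurrentTexture() per canvas per frame.
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginRenderPass({
      colorAttachments: [{
        view: context.getCurrentTexture().createView(),
        clearValue: [0.3, 0.3, 0.3, 1],
        loadOp: 'clear',
        storeOp: 'store',
      }],
    });
    // per-canvas drawing would go here: pass.setPipeline(...), pass.draw(...)
    pass.end();
    device.queue.submit([encoder.finish()]);
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);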

Vertex buffer with index buffer winding issue

Hi,
I am working through the tutorial using Rust and wgpu.
While working on the vertex-buffer-with-index-buffer example, I ran into a little issue: nothing showed up.
It turns out the winding of the triangle vertices generated by create_circle_vertices is clockwise (CW), whereas the render pipeline's primitive front-face winding rule uses counter-clockwise (CCW).

A simple fix was to change the primitive front-face winding rule from CCW to CW, but this is not mentioned in the tutorial.
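
For reference, in the JS version the equivalent change would look roughly like this. This is only a sketch of the relevant pipeline state, with `module` and `presentationFormat` assumed from earlier setup; note that winding only has a visible effect when culling is enabled:

const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module, entryPoint: 'vs' },
  fragment: { module, entryPoint: 'fs', targets: [{ format: presentationFormat }] },
  primitive: {
    topology: 'triangle-list',
    frontFace: 'cw',   // default is 'ccw'
    cullMode: 'back',  // winding only matters when culling is enabled
  },
});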

Not sure if it is only an issue with Rust and wgpu, though (it shouldn't be, right?).

Thanks for the awesome tutorial!

(Uniforms) Example of rendering multiple times to the same texture does not work

Even though it is slow, I tried to actually get it working. From Vulkan I know there are use cases where one wants to render multiple times to the same texture. But even with more experimentation I didn't get it working. The example as shown produces an error that the texture we are rendering to has already been destroyed. I did find a Stack Overflow post saying the texture gets destroyed when it is submitted, but within a frame the getCurrentTexture() method should return a new instance pointing to the same image.

Since I know Vulkan a bit, I tried to create a second renderPassDescriptor with loadOp = "load" so that the next submit would not overwrite the previously rendered triangle. Then I see that when I resize the window, a triangle at a previous location is sometimes visible... but the end result is that one triangle is drawn on the right side and nothing else...

Code from tutorial:
// BAD! Slow!
for (let x = -1; x < 1; x += 0.1) {
  uniformValues.set([x, 0], kOffsetOffset);
  device.queue.writeBuffer(uniformBuffer, 0, uniformValues);

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass(renderPassDescriptor);
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.draw(3);
  pass.end();

  // Finish encoding and submit the commands
  const commandBuffer = encoder.finish();
  device.queue.submit([commandBuffer]);
}

And here is my last test:

async function Render()
{
    const aspect = canvas.width / canvas.height;
    uniformValues.set([0.5 / aspect, 0.5], scaleOffset);

    // BAD! Slow version: we record and submit a command buffer for each triangle we draw.
    let isFirstPass = true;
    for (let x = -1; x < 1; x += 0.1)
    {
        uniformValues.set([x, 0], offsetOffset);
        device.queue.writeBuffer(uniformBuffer, 0, uniformValues);

        const encoder = device.createCommandEncoder();

        if (isFirstPass)
        {
            renderPassDescriptor.colorAttachments[0].view = context.getCurrentTexture().createView();

            const pass = encoder.beginRenderPass(renderPassDescriptor);
            pass.setPipeline(pipeline);
            pass.setBindGroup(0, bindGroup);
            pass.draw(3);
            pass.end();
        }
        else
        {
            // loadOp: 'load' so this pass does not clear the previous triangles
            renderPassDescriptorNoClear.colorAttachments[0].view = context.getCurrentTexture().createView();

            const pass = encoder.beginRenderPass(renderPassDescriptorNoClear);
            pass.setPipeline(pipeline);
            pass.setBindGroup(0, bindGroup);
            pass.draw(3);
            pass.end();
        }

        device.queue.submit([ encoder.finish() ]);
        await device.queue.onSubmittedWorkDone();
        isFirstPass = false;
    }
}
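
A hedged guess at what goes wrong above: awaiting onSubmittedWorkDone() yields back to the browser, which presents and expires the canvas texture, so the next getCurrentTexture() belongs to a different frame. A sketch of a variant that stays within one task, assuming renderPassDescriptorNoClear uses loadOp: 'load':

async function Render()
{
    // Grab the canvas texture once; it stays valid until the task yields.
    const view = context.getCurrentTexture().createView();
    let isFirstPass = true;
    for (let x = -1; x < 1; x += 0.1)
    {
        uniformValues.set([x, 0], offsetOffset);
        device.queue.writeBuffer(uniformBuffer, 0, uniformValues);

        const desc = isFirstPass ? renderPassDescriptor : renderPassDescriptorNoClear;
        desc.colorAttachments[0].view = view;

        const encoder = device.createCommandEncoder();
        const pass = encoder.beginRenderPass(desc);
        pass.setPipeline(pipeline);
        pass.setBindGroup(0, bindGroup);
        pass.draw(3);
        pass.end();
        device.queue.submit([encoder.finish()]); // no await between submits

        isFirstPass = false;
    }
}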

Maybe another language in addition to JS

Hi,

I'm wondering if it's possible to expand this tutorial with examples in another language (Rust, for example) so it's not centered only on JS.

Thanks

Possible issue or misunderstanding with timing performance

It could be that I'm just misunderstanding something, but it seems to me that the helper timing class made here reports numbers that are way too small.

When I run this example with the maximum number of objects, it reports 0.8 ms CPU and 0.3 ms GPU, which sounds like it should easily hit a stable 144 fps. However, it runs at a somewhat stable 120 fps. If I use Chrome's performance profiling, it claims over 7 ms of GPU time even though the stats claim only ~0.3 ms.

Have I misunderstood what it's supposed to measure, or is there some kind of issue here? Clearly the numbers reported by Chrome's profiler match my FPS while the in-app ones do not.

Here's an example screenshot to make clear what I'm talking about:

Strange order of this tutorial

Sorry, I am a newbie.

Intuitively, we should first talk about vertex buffers and index buffers, then storage buffers, and finally uniform buffers. Vertex buffers are the simplest, followed by the "globally available" storage buffers, and finally the special case of storage buffers: uniform buffers.

The current order is very strange. After reading about those buffers I was very confused, and I even went to look up why we need vertex buffers when we have uniform buffers. Why is it ordered this way?

Files not found when I'm trying to write an ES article

Hello, I tried to start the Spanish translation of webgpu-fundamentals.md, but the build throws an error about missing files.

I'm getting the following message:

---[ webgpu\lessons\es\webgpu-fundamentals.md ]---
   link:[/webgpu/resourceditor.html] not found in English file
   link:[resourcwebgpu-draw-diagram.svg] not found in English file
   link:[resourcwebgpu-command-buffer.svg] not found in English file
   link:[resourcclipspace.svg] not found in English file
   link:[resourcwebgpu-simple-triangle-diagram.svg] not found in English file
   link:[resourcwebgpu-simple-compute-diagram.svg] not found in English file
   link:[/webgpu/lessons/resourcwgsl-offset-computer.html] not found in English file
   link:[/webgpu/lessons/resourcprettify.js] not found in English file
   link:[/webgpu/lessons/resourclesson.js] not found in English file
   link:[/webgpu/lessons/resourcwebgpufundamentals-icon.png] not found in English file
   link:[/webgpu/lessons/resourclesson.css] not found in English file
Fatal error: 12 errors

Just to make sure it's not an issue with the article's links, I wrote a new file with just a couple of lines, and there's still something wrong.


The article is in /lessons/es with the same name as the original. Did I miss something?

Could you share some best practices for multiple render targets?


Hi @greggman, while reading this paragraph in the article (https://webgpufundamentals.org/webgpu/lessons/webgpu-fundamentals.html), I started contemplating using multiple render targets for my scenario. I am working on 2D video rendering and have encountered situations where multiple canvases are created on the same display. I would greatly appreciate it if you could share your experience or any insights on this topic.
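
Not an authoritative answer, but since the title mentions multiple render targets, here is a minimal sketch of one render pass writing to two textures at once (formats and names are hypothetical). Whether MRT helps with multiple canvases is a separate question, since each canvas still has its own swap-chain texture:

const module = device.createShaderModule({
  code: `
    @vertex fn vs(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
      let pos = array<vec2f, 3>(vec2f(0.0, 0.5), vec2f(-0.5, -0.5), vec2f(0.5, -0.5));
      return vec4f(pos[i], 0.0, 1.0);
    }
    struct Out {
      @location(0) color0: vec4f,
      @location(1) color1: vec4f,
    };
    @fragment fn fs() -> Out {
      return Out(vec4f(1, 0, 0, 1), vec4f(0, 0, 1, 1));
    }
  `,
});
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module, entryPoint: 'vs' },
  fragment: {
    module,
    entryPoint: 'fs',
    // One entry per render target; the render pass must then supply
    // matching colorAttachments in the same order.
    targets: [{ format: 'rgba8unorm' }, { format: 'rgba8unorm' }],
  },
});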

compiling...

webgpufundamentals.org
Browser: Falkon
OS: Ubuntu

The site is stuck at "compiling..." and never shows any content. Would it be possible to host a pre-built version?

Thanks!

Question about wireframes and multiple index buffers

I'm working on a model preview program and two problems are bothering me right now, so forgive me for asking here; I couldn't find a direct solution on other sites.

One of the problems: my model uses custom indexing to combine vertices into triangles. All vertices are divided into polygon groups, and each polygon group has its own array of vertex coordinates and its own index array.
But I found that WebGPU's GPURenderPassEncoder.setIndexBuffer function can only set one index buffer at a time. My current thinking is to build multiple GPUCommandEncoders and render each polygon group separately, but I haven't tried it yet. I'm wondering if there is a simpler solution, and if not, whether my solution is feasible.
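
For what it's worth, a sketch of a simpler route: a single command encoder and a single render pass can switch buffers between draw calls, so separate encoders per polygon group shouldn't be necessary. Here `groups` is a hypothetical array of { vertexBuffer, indexBuffer, indexCount }, and 'uint32' indices are assumed:

const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass(renderPassDescriptor);
pass.setPipeline(pipeline);
for (const g of groups) {
  // Re-bind the buffers for each polygon group within the same pass.
  pass.setVertexBuffer(0, g.vertexBuffer);
  pass.setIndexBuffer(g.indexBuffer, 'uint32');
  pass.drawIndexed(g.indexCount);
}
pass.end();
device.queue.submit([encoder.finish()]);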

The other problem: with topology: 'triangle-list' selected, I can't seem to render only the wireframe of the model. I tried topology: 'line-list', which does display a wireframe, but in that mode vertices are connected in pairs, which seems to imply that my model's vertices will not form triangle edges in the correct order. Is there an easy way to make my model show only a wireframe in 'triangle-list' mode? And if I can only do this with 'line-list', do I need to change my vertex index array?
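
For the wireframe question, a common workaround (not from the tutorial) is to keep the vertex data as-is and derive a second index buffer that turns each triangle into its three edges for 'line-list'. Shared edges get drawn twice, which is usually acceptable for a preview. A sketch:

// Expand triangle-list indices (a, b, c) into line-list edges
// (a,b), (b,c), (c,a).
function triangleIndicesToLineIndices(triIndices) {
  const lines = [];
  for (let i = 0; i < triIndices.length; i += 3) {
    const [a, b, c] = [triIndices[i], triIndices[i + 1], triIndices[i + 2]];
    lines.push(a, b, b, c, c, a);
  }
  return new Uint32Array(lines);
}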

English is not my native language and I might have missed where this is answered; an answer, or a pointer to where I can find one, would both help.

Looking forward to your replies!

trouble with write_texture data

I have been trying to work around the lack of HTMLVideoElement interaction by getting video-frame data into a canvas element and then into C++ webgpu.h write_texture via Emscripten FS files. Using regular getImageData calls gives me an oddly shaped image over in C++. Maybe I don't understand the exact purpose or equation for bytesPerRow and rowsPerImage? My frames come out all chopped up into consecutively smaller chunks, as if the data sizes don't line up.
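
For reference, a minimal JS sketch of how those layout fields relate (the webgpu.h struct members have the same names); `texture` is assumed to be an rgba8unorm texture created elsewhere. Note that writeTexture itself does not require 256-byte-aligned rows; that restriction applies to buffer-to-texture copies:

const width = 320, height = 240;                  // hypothetical frame size
const data = new Uint8Array(width * height * 4);  // RGBA8, tightly packed
device.queue.writeTexture(
  { texture },                                    // destination
  data,                                           // source bytes
  // For tightly packed data, each row is width * 4 bytes; rowsPerImage
  // only matters for copies with more than one array layer / depth slice.
  { offset: 0, bytesPerRow: width * 4, rowsPerImage: height },
  { width, height },
);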

Anyone want to clarify?
ty...

The introductory tutorial contains broken example code.

The website introduces WebGPU with a lesson named after the site. Towards the end of that document, there's a section on resizing the canvas. The example code sets up a resize observer like this:

const observer = new ResizeObserver(entries => {
    for (const entry of entries) {
        const canvas = entry.target;
        const width = entry.contentBoxSize[0].inlineSize;
        const height = entry.contentBoxSize[0].blockSize;
        canvas.width = Math.max(1, Math.min(width, device.limits.maxTextureDimension2D));
        canvas.height = Math.max(1, Math.min(height, device.limits.maxTextureDimension2D));
        render();
    }
});

Using entry.contentBoxSize[0] gives incorrect results: the edges of the resulting triangle are jagged, and the canvas is the wrong size.

If you resize the canvas element (by changing the size of the DevTools window) and then refresh the page (with the DevTools window still open), the canvas changes size, when logically its size should be a function of the viewport size. Using entry.devicePixelContentBoxSize[0] fixes both issues.


You can simplify the observer a little too. Mine looks like this:

const observer = new ResizeObserver(function(entries) {

    const box = entries[0].devicePixelContentBoxSize[0];
    const maxSize = device.limits.maxTextureDimension2D;

    canvas.width = Math.max(1, Math.min(box.inlineSize, maxSize));
    canvas.height = Math.max(1, Math.min(box.blockSize, maxSize));

    render();
});

P.S. Thanks for all the awesome content. WebGL Fundamentals taught me to write my first shaders, so I was really pleased to see there's a sister-site for WebGPU.

Switch to using HTMLImageElement

When WebGPU shipped in May 2023 it didn't support loading images from HTMLImageElement. The spec was updated to allow this, and Chrome shipped support in v118, so... update the examples to use HTMLImageElement.

Using ImageBitmap is supposedly "better", at least in Chrome: the idea is that creating an ImageBitmap loads the image into the GPU so that when you actually go to upload it into WebGPU (or WebGL), it's ready to use. Further, you can pass in options like premultiplyAlpha and colorSpaceConversion.

Conversely, an HTMLImageElement's image might not be in a format that can be used immediately, which forces the browser to re-decode the image, making it slow. For example, the image might be stored with premultiplied alpha, which is lossy; if you request premultiplyAlpha: false, the browser is required to re-decode the image so it is not lossy.

In any case, HTMLImageElement is (probably) more common, so the examples should probably be switched to use it. Or maybe just bring it up.
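
A minimal sketch of the HTMLImageElement path, assuming Chrome 118+ and a same-origin or CORS-enabled URL (all names here are hypothetical):

async function loadImageToTexture(device, url) {
  const img = new Image();
  img.crossOrigin = 'anonymous';
  img.src = url;
  await img.decode();  // ensure the image is loaded and decoded
  const texture = device.createTexture({
    size: [img.naturalWidth, img.naturalHeight],
    format: 'rgba8unorm',
    // copyExternalImageToTexture requires COPY_DST and RENDER_ATTACHMENT.
    usage: GPUTextureUsage.TEXTURE_BINDING |
           GPUTextureUsage.COPY_DST |
           GPUTextureUsage.RENDER_ATTACHMENT,
  });
  device.queue.copyExternalImageToTexture(
    { source: img },
    { texture },
    [img.naturalWidth, img.naturalHeight],
  );
  return texture;
}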

min buffer sizes

Bring up minimum buffer sizes.

Example

struct Uniforms {
  lightDirection: vec3<f32>,
};

@group(0) @binding(0) var<uniform> uniforms1: Uniforms;
@group(0) @binding(1) var<uniform> uniforms2: vec3<f32>;

WebGPU requires the buffer bound at each binding to be at least SizeOf(the bound type). According to the spec, SizeOf(vec3<f32>) is 12, whereas SizeOf(Uniforms) is 16, since a struct's size is rounded up to its 16-byte alignment.
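
On the JS side, that means buffers sized to match the bindings above would look like this sketch (names hypothetical); sizing the first buffer below 16 or the second below 12 fails validation:

const uniformBuffer1 = device.createBuffer({
  size: 16, // SizeOf(Uniforms): vec3<f32> member, rounded up to 16-byte alignment
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
const uniformBuffer2 = device.createBuffer({
  size: 12, // SizeOf(vec3<f32>)
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});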

`let` not supported in Firefox Nightly

In current Firefox Nightlies, none of the vertex shader examples work on the intro webgpu-fundamentals.html page because it looks like `let` is not supported yet (it is used to define the pos array).

I know it's not an issue with webgpufundamentals, and there is a lot of other stuff that's also not supported, but perhaps you'd consider using `var` in these simple initial examples (or adding a note about it) until it's fixed in Firefox, because it's disheartening to see no red triangle when you're excited to check out WebGPU!

Video texture example not working

I got stuck on the video texture part for quite some time. I thought it was a problem with my code, but looking at the site, it doesn't work there either.

Tested on Chrome 113.0.5672.93 on Windows, but I got the same issue on a MacBook and two other machines, so I'm not sure it is a driver issue.

Uncaught (in promise) DOMException: Failed to execute 'copyExternalImageToTexture' on 'GPUQueue': Copy rect is out of bounds of external image

Funnily enough, you can get it to work by subtracting one from the video size, for some reason:

function getSourceSize(source) {
  return [
    source.videoWidth - 1 || source.width,
    source.videoHeight - 1 || source.height,
  ];
}

So could it be a Google Chrome issue? I was unable to run WebGPU on Firefox, so no luck there...

Issue with navigation in translated articles

Issue: navigation from one translated article to another already-translated article doesn't work.

How to reproduce:

Let's use the Inter-stage Variables article as an example. For the English version of the article, the href of the "previous article" link is exactly what we expect.

Now let's change the language to Ukrainian (Українська). The href becomes ../webgpu-fundamentals.html, which navigates to the English version of the Fundamentals article. I expect it to be webgpu-fundamentals.html so we can go to the already-translated version.

@greggman, I am not sure if this is actually a bug or something I am doing wrong. I tried to figure it out but lost myself in the Gruntfile. In any case, I'll be happy to help with this issue.
