
wgpu-py

A Python implementation of WebGPU - the next generation GPU API. 🚀

Introduction

The purpose of wgpu-py is to provide Python with a powerful and reliable GPU API.

It serves as a basis to build a broad range of applications and libraries related to visualization and GPU compute. We use it in pygfx to create a modern Pythonic render engine.

To get an idea of what this API looks like, have a look at triangle.py and the other examples.

Status

  • Until WebGPU settles as a standard, its specification may change, and with it our API probably will too. Check the changelog when you upgrade!
  • Coverage of the WebGPU spec is complete enough to build e.g. pygfx.
  • Test coverage of the API is close to 100%.
  • Support for Windows, Linux (x86 and aarch64), and macOS (Intel and M1).

What is WebGPU / wgpu?

WGPU is the future of GPU graphics: the successor to OpenGL.

WebGPU is a JavaScript API with a well-defined spec, the successor to WebGL. The somewhat broader term "wgpu" is used to refer to "desktop" implementations of WebGPU in various languages.

OpenGL is old and showing its cracks. New APIs like Vulkan, Metal, and DX12 provide a modern way to control the GPU, but these are too low-level for general use. WebGPU follows the same concepts, but with a simpler (higher-level) API. With wgpu-py we bring WebGPU to Python.

Technically speaking, wgpu-py is a wrapper for wgpu-native, exposing its functionality with a Pythonic API closely resembling the WebGPU spec.

Installation

pip install wgpu glfw

Linux users should make sure to use pip >= 20.3. That should do the trick on most systems. See getting started for details.

Usage

Also see the online documentation and the examples.

The full API is accessible via the main namespace:

import wgpu

To render to the screen you can use a variety of GUI toolkits:

# The auto backend selects either the glfw, qt or jupyter backend
from wgpu.gui.auto import WgpuCanvas, run, call_later

# Visualizations can be embedded as a widget in a Qt application.
# Import PySide6, PyQt6, PySide2 or PyQt5 before running the line below.
# The code will detect and use the library that is imported.
from wgpu.gui.qt import WgpuCanvas

# Visualizations can be embedded as a widget in a wx application.
from wgpu.gui.wx import WgpuCanvas

Some functions in the original wgpu-native API are async. In the Python API, the default functions are all sync (blocking), making things easy for general use. Async versions of these functions are also available, so wgpu can work well with asyncio or Trio.
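As an illustration of this sync/async pairing (a hypothetical sketch, not wgpu-py's actual implementation; the function names below are stand-ins), a sync variant can simply drive its async counterpart to completion:

```python
import asyncio

# Hypothetical sketch of the sync/async pairing described above;
# this is NOT wgpu-py's real code.
async def request_adapter_async():
    await asyncio.sleep(0)  # stand-in for a real async call into the driver
    return "adapter"

def request_adapter():
    # The sync (blocking) variant just runs the coroutine to completion.
    return asyncio.run(request_adapter_async())
```

In an asyncio or Trio application you would await the async variant instead of calling the blocking one.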

License

This code is distributed under the 2-clause BSD license.

Projects using wgpu-py

  • pygfx - A python render engine running on wgpu.
  • shadertoy - Shadertoy implementation using wgpu-py.
  • tinygrad - deep learning framework
  • fastplotlib - A fast plotting library
  • xdsl - A Python Compiler Design Toolkit (optional wgpu interpreter)

Developers

  • Clone the repo.
  • Install devtools using pip install -r dev-requirements.txt (you can replace pip with pipenv to install to a virtualenv).
  • Install wgpu-py in editable mode by running pip install -e ., this will also install runtime dependencies as needed.
  • Run python download-wgpu-native.py to download the upstream wgpu-native binaries.
    • Or alternatively point the WGPU_LIB_PATH environment variable to a custom build.
  • Use black . to apply autoformatting.
  • Use flake8 . to check for flake errors.
  • Use pytest . to run the tests.
  • Use pip wheel --no-deps . to build a wheel.

Updating to a later version of WebGPU or wgpu-native

To update to upstream changes, we use a combination of automatic code generation and manual updating. See the codegen utility for more information.

Testing

The test suite is divided into multiple parts:

  • pytest -v tests runs the core unit tests.
  • pytest -v examples tests the examples.
  • pytest -v wgpu/__pyinstaller tests if wgpu is properly supported by pyinstaller.
  • pytest -v codegen lints the generated binding code.

There are two types of tests for examples included:

Type 1: Checking if examples can run

When running the test suite, pytest will run every example in a subprocess, to see if it can run and exit cleanly. You can opt out of this mechanism by including the comment # run_example = false in the module.
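A marker scan like this might look as follows (a hypothetical helper, not the actual test-suite code):

```python
def should_run_example(source: str) -> bool:
    # Look for the opt-out marker anywhere in the module source.
    # Whitespace inside the comment is ignored, so "#run_example=false"
    # and "# run_example = false" are treated the same.
    for line in source.splitlines():
        if line.strip().replace(" ", "") == "#run_example=false":
            return False
    return True
```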

Type 2: Checking if examples output an image

You can also (independently) opt-in to output testing for examples, by including the comment # test_example = true in the module. Output testing means the test suite will attempt to import the canvas instance global from your example, and call it to see if an image is produced.

To support this type of testing, ensure the following requirements are met:

  • The WgpuCanvas class is imported from the wgpu.gui.auto module.
  • The canvas instance is exposed as a global in the module.
  • A rendering callback has been registered with canvas.request_draw(fn).

Reference screenshots are stored in the examples/screenshots folder, the test suite will compare the rendered image with the reference.

Note: this step will be skipped when not running on CI. Since images will have subtle differences depending on the system on which they are rendered, that would make the tests unreliable.

For every test that fails on screenshot verification, diffs will be generated for the rgb and alpha channels and made available in the examples/screenshots/diffs folder. On CI, the examples/screenshots folder will be published as a build artifact so you can download and inspect the differences.

If you want to update the reference screenshot for a given example, you can grab those from the build artifacts as well and commit them to your branch.
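The per-channel diffing described above can be sketched in plain Python (hypothetical; the real suite works on image files and applies tolerance thresholds):

```python
def channel_diffs(img_a, img_b):
    """Per-pixel absolute differences for the rgb and alpha channels.

    Images are nested lists of (r, g, b, a) tuples. A hypothetical sketch
    of the comparison described above, not the actual test-suite code.
    """
    rgb_diff, alpha_diff = [], []
    for row_a, row_b in zip(img_a, img_b):
        rgb_row, a_row = [], []
        for (r1, g1, b1, a1), (r2, g2, b2, a2) in zip(row_a, row_b):
            rgb_row.append((abs(r1 - r2), abs(g1 - g2), abs(b1 - b2)))
            a_row.append(abs(a1 - a2))
        rgb_diff.append(rgb_row)
        alpha_diff.append(a_row)
    return rgb_diff, alpha_diff
```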

wgpu-py's People

Contributors

almarklein, artyif, berendkleinhaneveld, cansik, claydugo, correct-syntax, dskkato, firefoxmetzger, hmaarrfk, korijn, kushalkolar, panxinmiao, pieper, tlambert03, unprex, vipitis, wpmed92


wgpu-py's Issues

Better error messages / how to turn on validation

I'm wondering how (or if) I can get better backtraces / validation / logging out of wgpu-py.

I have made a mistake in my code (clearly!) and am looking at a backtrace that looks like:

thread '<unnamed>' panicked at 'assertion failed: `(left == right)`
  left: `Ok(())`,
 right: `Err(ERROR_DEVICE_LOST)`', <::std::macros::panic macros>:5:6
stack backtrace:
   0:     0x7ffc57c92e19 - wgpu_swap_chain_present
   1:     0x7ffc57ca51bb - rust_eh_personality
   2:     0x7ffc57c90e44 - wgpu_swap_chain_present
   3:     0x7ffc57c9542c - wgpu_swap_chain_present
   4:     0x7ffc57c9507c - wgpu_swap_chain_present
   5:     0x7ffc57c95b5f - wgpu_swap_chain_present
   6:     0x7ffc57c956e5 - wgpu_swap_chain_present
   7:     0x7ffc57c9565c - wgpu_swap_chain_present
   8:     0x7ffc575dc4bc - wgpu_compute_pass_destroy
   9:     0x7ffc576d2882 - wgpu_compute_pass_destroy
  10:     0x7ffc578d88cb - wgpu_queue_submit
 ... up to 43

No line numbers, nothing useful printed before it.

I have RUST_BACKTRACE=full, RUST_LOG=trace and VK_INSTANCE_LAYERS=VK_LAYER_LUNARG_standard_validation and WGPU_LIB_PATH= a .dll downloaded by download-wgpu-native.py -os windows --arch 64 --build debug . What am I missing?

Consider removing ctypes as part of the public API

I think we only use ctypes stuff for mapped buffers. We could instead follow the approach of wgpu-rs, and use something like create_buffer_with_data. We can then accept both ctypes and numpy arrays, and more, in a generic way. The downside is that we deviate from WebGPU a bit. I think it might be worth it.
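The "accept both ctypes and numpy arrays, and more" idea boils down to the buffer protocol; a minimal sketch (the function name is hypothetical):

```python
import ctypes

def as_memoryview(data):
    # Generic acceptance of ctypes arrays, bytes, numpy arrays, etc.
    # Anything supporting the buffer protocol works; this sketches the
    # "accept both in a generic way" idea from the issue above.
    return memoryview(data).cast("B")  # flat byte view, source-agnostic

# Works for a ctypes array without any numpy dependency:
buf = (ctypes.c_float * 4)(1.0, 2.0, 3.0, 4.0)
assert as_memoryview(buf).nbytes == 16
```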

Thread panicked when creating mapped buffer

Hi,
Instead of the dummy cube texture I tried to use a real image (1024x1024 RGBA) and it crashes when trying to create the mapped buffer:

self.console.log("Creating Texture")
texture_buffer_object = self.device.create_texture(
    size=texture_size,
    usage=wgpu.TextureUsage.COPY_DST | wgpu.TextureUsage.SAMPLED,
    dimension=wgpu.TextureDimension.d2,
    format=wgpu.TextureFormat.r8uint,
    mip_level_count=1,
    sample_count=1,
)

self.console.log("Creating Texture View")
self.console.log("-- Texture size: {} texture_size: {}".format(texture_data.nbytes, texture_size))

texture_view = texture_buffer_object.create_view()
self.console.log("-- Creating mapped buffer")
tmp_buffer = self.device.create_buffer_mapped(
    size=texture_data.nbytes, 
    usage=wgpu.BufferUsage.COPY_SRC
)    

Trace:

           Creating Texture                                                                                                                                                     WGPUHelper.py:154
           Creating Texture View                                                                                                                                                WGPUHelper.py:164
           -- Texture size: 4194304 texture_size: (4096, 1024, 1)                                                                                                               WGPUHelper.py:167
           -- Creating mapped buffer                                                                                                                                            WGPUHelper.py:170
thread '<unnamed>' panicked at 'assertion failed: size <= self.linear_size', <::std::macros::panic macros>:2:4
stack backtrace:
   0:     0x7f8edb15df44 - backtrace::backtrace::libunwind::trace::h90669f559fb267f0
                               at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1:     0x7f8edb15df44 - backtrace::backtrace::trace_unsynchronized::hffde4e353d8f2f9a
                               at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2:     0x7f8edb15df44 - std::sys_common::backtrace::_print_fmt::heaf44068b7eaaa6a
                               at src/libstd/sys_common/backtrace.rs:77
   3:     0x7f8edb15df44 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h88671019cf081de2
                               at src/libstd/sys_common/backtrace.rs:59
   4:     0x7f8edb178edc - core::fmt::write::h4e6a29ee6319c9fd
                               at src/libcore/fmt/mod.rs:1052
   5:     0x7f8edb15c8d7 - std::io::Write::write_fmt::hf06b1c86d898d7d6
                               at src/libstd/io/mod.rs:1426
   6:     0x7f8edb15fb65 - std::sys_common::backtrace::_print::h404ff5f2b50cae09
                               at src/libstd/sys_common/backtrace.rs:62
   7:     0x7f8edb15fb65 - std::sys_common::backtrace::print::hcc4377f1f882322e
                               at src/libstd/sys_common/backtrace.rs:49
   8:     0x7f8edb15fb65 - std::panicking::default_hook::{{closure}}::hc172eff6f35b7f39
                               at src/libstd/panicking.rs:204
   9:     0x7f8edb15f851 - std::panicking::default_hook::h7a68887d113f8029
                               at src/libstd/panicking.rs:224
  10:     0x7f8edb16014a - std::panicking::rust_panic_with_hook::hb7ad5693188bdb00
                               at src/libstd/panicking.rs:472
  11:     0x7f8edb10f30e - std::panicking::begin_panic::h5ff0047d973e6d39
  12:     0x7f8edb101636 - <gfx_memory::allocator::linear::LinearAllocator<B> as gfx_memory::allocator::Allocator<B>>::alloc::hbde26216c4096237
  13:     0x7f8edb0d12d0 - gfx_memory::heaps::memory_type::MemoryType<B>::alloc::hfc4604eab2128fce
  14:     0x7f8edb100d20 - gfx_memory::heaps::Heaps<B>::allocate::ha7ea60b7a1f69861
  15:     0x7f8edb0e7627 - wgpu_core::device::Device<B>::create_buffer::hb40f09283e8f5fd9
  16:     0x7f8edb0bb0aa - wgpu_core::device::<impl wgpu_core::hub::Global<G>>::device_create_buffer_mapped::h2ec40bca1b2f7f4e
  17:     0x7f8edb0d5ddf - wgpu_device_create_buffer_mapped
  18:     0x7f8edb612dec - ffi_call_unix64
  19:     0x7f8edb611f55 - ffi_call
  20:     0x7f8edb834d96 - cdata_call
                               at c/_cffi_backend.c:3148
  21:           0x5c9f63 - _PyObject_FastCallKeywords
  22:           0x535a11 - <unknown>
  23:           0x53c5a1 - _PyEval_EvalFrameDefault
  24:           0x536f27 - _PyEval_EvalCodeWithName
  25:           0x5c9468 - _PyFunction_FastCallKeywords
  26:           0x535880 - <unknown>
  27:           0x5394e1 - _PyEval_EvalFrameDefault
  28:           0x5365e7 - _PyEval_EvalCodeWithName
  29:           0x5c9468 - _PyFunction_FastCallKeywords
  30:           0x535880 - <unknown>
  31:           0x5394e1 - _PyEval_EvalFrameDefault
  32:           0x5365e7 - _PyEval_EvalCodeWithName
  33:           0x5c9468 - _PyFunction_FastCallKeywords
  34:           0x535880 - <unknown>
  35:           0x538713 - _PyEval_EvalFrameDefault
  36:           0x5c916b - _PyFunction_FastCallKeywords
  37:           0x535880 - <unknown>
  38:           0x538713 - _PyEval_EvalFrameDefault
  39:           0x5365e7 - _PyEval_EvalCodeWithName
  40:           0x64cbb3 - PyEval_EvalCode
  41:           0x6402a3 - <unknown>
  42:           0x640357 - PyRun_FileExFlags
  43:           0x64110a - PyRun_SimpleFileExFlags
  44:           0x678eff - <unknown>
  45:           0x6791ee - _Py_UnixMain
  46:     0x7f8ee2c311e3 - __libc_start_main
  47:           0x5cf93e - _start
  48:                0x0 - <unknown>
fatal runtime error: failed to initiate panic, error 5
Aborted

Looks like it is due to the data size (4194304 bytes), but I did not find the assertion size <= self.linear_size in the Rust codebase (not sure which repo to look in).
This one? https://github.com/gfx-rs/gfx-extras/blob/master/gfx-memory/src/allocator/linear.rs#L217

I tried with a smaller image (512x512x4), it did not crash.

Thanks!

Revive struct check lite

The check_struct at runtime was removed because the codegen validates the structs. However, it also checked for unknown fields, thereby catching typos. We should bring that part back.
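A "check_struct lite" that only flags unknown fields could look like this (a sketch, not the removed implementation):

```python
def check_struct(struct_name, d, known_fields):
    # Only flag unknown keys (i.e. typos); full validation of the known
    # fields is left to the codegen, per the issue above.
    unknown = set(d) - set(known_fields)
    if unknown:
        raise ValueError(
            f"Unknown fields for {struct_name}: {sorted(unknown)}"
        )
```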

Don't force users to set up an event loop

I was just thinking about this; can wgpu be decoupled from asyncio, allowing users to stick to the native event loops of the various GUI toolkits, if they so desire?

It seems like it would have a few advantages;

  • Examples would have less boilerplate and there would be less concepts newcomers need to deal with in order to get started.
  • It might also make it more feasible for existing GUI software projects to start using WGPU since they may already have complicated event loop configurations going on.

Taking Qt as an example, we can have three examples where one just uses the Qt event loop, one uses the naive integration (which we have currently) and another one that shows more advanced event loop integration.

Thoughts?

Automatically resize / recreate depth buffer

The with swap_chain as current_texture_view: trick causes swap_chain to recreate its native swap chain if the canvas it was produced from has been resized since the last __enter__. This makes resizing the canvas magically work --- so well you have to go looking for the resizing code.

All is well. But then you want a depth texture view associated with your canvas, which needs to be resized at the same time, or you get a panic. Any additional sugar to be poured on this? An argument should_resize to draw_frame? Should we have a swap_chain.size_has_changed()? Or something else that's already there that I'm missing?
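One possible flavor of sugar, sketched below as a hypothetical helper (not part of wgpu-py's API): cache the depth texture keyed by size, and recreate it lazily whenever the requested size differs from the cached one.

```python
class DepthTextureCache:
    """Hypothetical sketch of the recreate-on-resize idea discussed above.

    The depth texture is lazily recreated whenever the requested size
    differs from the cached one, mirroring what the swap chain does.
    """

    def __init__(self, create_texture):
        self._create_texture = create_texture  # e.g. wraps device.create_texture
        self._size = None
        self._texture = None

    def get(self, size):
        if size != self._size:
            self._size = size
            self._texture = self._create_texture(size)
        return self._texture
```

Each frame you would call `cache.get(canvas_size)` and always receive a texture that matches the current canvas.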

PyInstaller compatibility

To support freezing apps with wgpu-py, we need to take a couple of things into account:

  • PyInstaller can't see dynamic imports, so static imports of specific backends are required; otherwise nested dependencies are not included in the frozen app. See #18 (comment) for the original discussion.
  • Our package data (the wgpu/resources folder) won't be included automatically either, unless we submit a hook to the pyinstaller repo or users write their own, containing a call to collect_data_files('wgpu.resources'). See https://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/hook-certifi.py for an example.

Progress:

  • Implement the new hook system in wgpu-py
  • Test it in CI using the PR branch of PyInstaller
  • Make a release of wgpu-py so PyInstaller can use it in their tests.
  • PyInstaller branch merged (pyinstaller/pyinstaller#4582)
  • PyInstaller 4 release
  • Update wgpu-py dev requirements to use pyinstaller>=4 instead of the branch
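For reference, a minimal hook file along the lines of the certifi example might look like this (a sketch; the broad exception guard makes it a harmless no-op when PyInstaller or wgpu is absent, so it stays runnable outside a build):

```python
# Sketch of a PyInstaller hook ("hook-wgpu.py") that collects the
# wgpu/resources package data, following the certifi hook linked above.
try:
    from PyInstaller.utils.hooks import collect_data_files
    datas = collect_data_files("wgpu.resources")
except Exception:
    # PyInstaller (or wgpu) is not installed in this environment;
    # fall back to an empty list so the sketch remains importable.
    datas = []
```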

Drop Tk support?

  • On Linux: seems impossible to make it work.
  • On macOS: Ditto.
  • On Windows: works, but painting occurs after the actual paint event, so there is flicker.
  • No high-res support.

The only advantage is that Tk is always there (unless you have a Mac). But glfw does a great job of providing a lean GUI that is very easy to install (binaries have also shipped for Linux for a few weeks now).

Is it worth keeping the backend around only for a shitty solution on Windows?

request_draw callable argument

From https://github.com/almarklein/visvis2/blob/master/examples/geometry_cube.py:

    # Request new frame
    canvas.request_draw()


if __name__ == "__main__":
    canvas.draw_frame = animate
    app.exec_()

Wouldn't it be nice if that last line were canvas.request_draw(animate) instead of canvas.draw_frame = animate? Just a thought I had while reading the code. The example would then become:

    # Request new frame
    canvas.request_draw(animate)


if __name__ == "__main__":
    canvas.request_draw(animate)
    app.exec_()

Feels elegant, because there is less API for the user to remember?

Renew Pypi token

I revoked the current PyPI token because there was some suspicious activity. As an alternative to letting CI push to PyPI, we can also create a script to run locally that downloads the release builds from GitHub and then pushes them to PyPI. Safer, but one extra step.

Details about the activity

There were two PRs opened that were also closed soon thereafter, and the PRs seem to be fully deleted too, just like the user who created them. The email notification looked like this:
(screenshot of the email notification omitted)

This looks like an attempt to steal secret data, and the only thing that we seem to expose to our actions is a PyPI token.

Improved support for Linux (Xorg+tk, Wayland)

It seems like running wgpu on a VM is not possible. Or just hard? Please let us know if you know how to :D

In #27 we implemented support for all platforms, but we have not yet been able to verify this by actually running an example.

Checks:

  • Xorg tk
  • Xorg qt
  • Xorg glfw
  • Wayland tk
  • Wayland qt
  • Wayland glfw (kinda: it works, but also crashes)

Support for making traces

wgpu-native recently added support to record traces of the API calls and write them to file. One can then inspect the files, or send them as part of an issue to allow others to replay them.

RecursionError on wayland (sway)

With a fresh clone and virtualenv I followed the instructions for running from a checkout and then tried to run the triangle example. I got this error:

$ ./venv/bin/python examples/triangle_glfw.py 2>&1
Traceback (most recent call last):
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 52, in _reraise
    raise exception.with_traceback(traceback)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 595, in callback_wrapper
    return func(*args, **kwargs)
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 95, in _on_pixelratio_change
    self._set_logical_size()
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 136, in _set_logical_size
    glfw.set_window_size(
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 1272, in set_window_size
    _glfw.glfwSetWindowSize(window, width, height)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 616, in errcheck
    _reraise(exc[1], exc[2])
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 52, in _reraise
    raise exception.with_traceback(traceback)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 595, in callback_wrapper
    return func(*args, **kwargs)
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 95, in _on_pixelratio_change
    self._set_logical_size()
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 136, in _set_logical_size
    glfw.set_window_size(
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 1272, in set_window_size
    _glfw.glfwSetWindowSize(window, width, height)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 616, in errcheck
    _reraise(exc[1], exc[2])
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 52, in _reraise
    raise exception.with_traceback(traceback)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 595, in callback_wrapper
    return func(*args, **kwargs)
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 95, in _on_pixelratio_change
    self._set_logical_size()
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 136, in _set_logical_size
    glfw.set_window_size(
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 1272, in set_window_size
    _glfw.glfwSetWindowSize(window, width, height)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 616, in errcheck
    _reraise(exc[1], exc[2])
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 52, in _reraise
    raise exception.with_traceback(traceback)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 595, in callback_wrapper
...
    raise exception.with_traceback(traceback)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 595, in callback_wrapper
    return func(*args, **kwargs)
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 95, in _on_pixelratio_change
    self._set_logical_size()
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 136, in _set_logical_size
    glfw.set_window_size(
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 1272, in set_window_size
    _glfw.glfwSetWindowSize(window, width, height)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 616, in errcheck
    _reraise(exc[1], exc[2])
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 52, in _reraise
    raise exception.with_traceback(traceback)
  File "/home/dev/repos/wgpu-py/venv/lib/python3.8/site-packages/glfw/__init__.py", line 595, in callback_wrapper
    return func(*args, **kwargs)
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 99, in _on_size_change
    self._logical_size = self._get_logical_size()
  File "/home/dev/repos/wgpu-py/wgpu/gui/glfw.py", line 116, in _get_logical_size
    psize = glfw.get_framebuffer_size(self._window)
RecursionError: maximum recursion depth exceeded
  • Ubuntu 20.04
  • sway 1.4-dffc184a
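The underlying problem in the traceback above is a feedback loop: setting the window size triggers another resize callback, which sets the size again, recursing until the interpreter gives up. A generic guard against such re-entrant callbacks can be sketched as follows (hypothetical; not wgpu-py's actual fix):

```python
import functools

def non_reentrant(func):
    """Skip a method call when it is already on the stack for this object.

    A hypothetical sketch of a guard that breaks resize feedback loops
    like the one in the traceback above.
    """
    flag = "_in_" + func.__name__

    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if getattr(self, flag, False):
            return  # already inside this callback; break the cycle
        setattr(self, flag, True)
        try:
            return func(self, *args, **kwargs)
        finally:
            setattr(self, flag, False)
    return wrapper
```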

Refactor code generation and update process

Issue to track a series of tasks/PRs to make the code generation easier and better defined, so that upgrading to newer versions of wgpu-native and the WebGPU spec becomes easier.

In particular, we want to:

  • Untangle how we use the IDL (WebGPU spec) and the header file. Instead, use the IDL to generate the public API, and the header file to help write low-level calls (and maybe validate them). Only "match" them for generating enum mappings.
  • Don't parse the header file ourselves, but get the required info from cffi.
  • Implement the codegen in smaller pieces that are easier to understand and test.
  • Better document the update-process.

Tasks:

  • #135 implement code-patching for the API (IDL -> base API -> backends). Notes:
  • #143 Refactor rs backend into multiple files.
  • #143 Generate enums and flags from IDL.
  • #144 Improve the generated type annotations.
  • #145 Header parsing (via cffi) and implement a new workflow for updating to changes in the header file.
  • Delete or revive help() function for using during maintenance.
  • #149 Delete or revive codegen report.
  • #152 Consider putting configure_swap_chain on the canvas.
  • #153 We define properties that WebGPU does not define. Either remove or @apidiff.add them.
  • #154 Update to latest WebGPU
  • Update to latest wgpu-native

Textured rotated cube example?

Hi,

I'm discovering wgpu and it looks very nice, but I'm far from my comfort level (OpenGL)... Could you add a little example that displays a textured rotated cube?

I successfully integrated wgpu and headless pygame (for other stuff: sound effects, keyboard/joystick events, 2D surface manipulation), but now I'd like to replace my OpenGL code that maps a surface onto a quad with wgpu.

I found some JavaScript examples, but they are not (yet) straightforward to port.

Thanks !

Mac M1 support


Overview (edited by AK):

  • wgpu-native must work with macos_arm64: gfx-rs/wgpu-native#114
  • wgpu-native must build binaries for macos_arm64 in a release: gfx-rs/wgpu-native#139
  • CFFI must work on M1: #190
    • Convert cffi callback usage to new-style #197
    • Resolve problems with cffi wheels for M1 #196
  • Make our get_surface_id_from_canvas work for M1. #195
  • We must update our CI to build wheels for macos_arm64: #194

I was getting this error:

  • Installing wgpu (0.3.0): Failed

  EnvCommandError

  Command ['/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/bin/pip', 'install', '--no-deps', 'file:///Users/anentropic/Library/Caches/pypoetry/artifacts/aa/f3/b2/8caef6980405490715f8040d2349bddb320b3962d15c24f0525b66d749/wgpu-0.3.0.tar.gz'] errored with the following return code 1, and output:
  Processing /Users/anentropic/Library/Caches/pypoetry/artifacts/aa/f3/b2/8caef6980405490715f8040d2349bddb320b3962d15c24f0525b66d749/wgpu-0.3.0.tar.gz
      ERROR: Command errored out with exit status 1:
       command: /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-req-build-99e_f9gh/setup.py'"'"'; __file__='"'"'/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-req-build-99e_f9gh/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-pip-egg-info-23tylv6i
           cwd: /private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-req-build-99e_f9gh/
      Complete output (5 lines):
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
        File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-req-build-99e_f9gh/setup.py", line 6, in <module>
          from wheel.pep425tags import get_platform
      ModuleNotFoundError: No module named 'wheel.pep425tags'

https://nomodulenamed.com/m/wheel.pep425tags says:

This is probably because you don't have package wheel installed.

But:

Python 3.9.2 (default, Mar 18 2021, 20:48:06)
[Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from wheel.pep425tags import get_platform
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'wheel.pep425tags'
>>> import wheel
>>> wheel.__version__
'0.36.2'
>>> exit()

https://wheel.readthedocs.io/en/stable/news.html

0.35.0 (2020-08-13)
Switched to the packaging library for computing wheel tags

...sounded possibly relevant. The version before that is 0.34.2, which was also mentioned on the nomodulenamed page.

And yes, this fixed it:

% pip install wheel==0.34.2
Collecting wheel==0.34.2
  Downloading wheel-0.34.2-py2.py3-none-any.whl (26 kB)
Installing collected packages: wheel
  Attempting uninstall: wheel
    Found existing installation: wheel 0.36.2
    Uninstalling wheel-0.36.2:
      Successfully uninstalled wheel-0.36.2
Successfully installed wheel-0.34.2
% python
Python 3.9.2 (default, Mar 18 2021, 20:48:06)
[Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import wheel
>>> wheel.__version__
'0.34.2'
>>> from wheel.pep425tags import get_platform
>>> exit()

and poetry install then succeeded.

I'm not totally sure where the problem originates, whether you need to pin wheel==0.34.2 in your setup.py or some other part of the build machinery is to blame.

(Pinning wheel==0.34.2 in my pyproject.toml didn't help, because Poetry didn't know about the dependency relationship and tries to install wgpu first... so for now it's fixed by manually installing old wheel version in my virtualenv)

Just posting this in case it helps someone else.

How to actually run wgpu on CI?

It would be very nice if we could run tests with wgpu on CI, also in pygfx, to better validate our code. Actually, it would be a bit disappointing if we can't.

Considerations

GPU vs integrated vs software

I briefly looked into GPU instances (e.g. at Scaleway), but these are aimed at really high-performance work and are therefore really expensive. Strictly speaking we don't need a GPU, as long as Vulkan/Metal has something to run on (Intel graphics or software), but I'm not sure how that would work in a VM.

Possible approaches (generic)

Vulkan on CI

One way for this to work is for Vulkan/Metal to be available on the CI machines, which does not appear to be the case by default (see #45), as could have been expected. There are some posts suggesting that other projects set up their own local runners, e.g. via GitLab Runner.

DirectX

DX11/DX12 is available on GH Actions, but it is prone to crashing (for our examples).

WGPU

In theory there could be a software WGPU implementation that does not rely on hardware.

Relevant links

last edit: June 2021

Support for macOS

In #27 we implemented support for all platforms, but we have not yet been able to verify this by actually running an example on macOS. Need at least High Sierra 10.13 to have Metal.

Checks:

  • OS X tk
  • OS X qt
  • OS X glfw

Moving forward

Note that currently, CI depends on the progress branch of python-shader. Although the two projects do not depend on each other, their CIs do :) The plan to move forward:

  • Merge #71
  • Perhaps a bit more work on wgpu-py
    • Improve api coverage #74
    • Implement indirect drawing #75
    • Use one API for defining shader inputs #76
  • Release wgpu-py
  • Merge pygfx/pyshader#22
  • Perhaps a bit more work on python-shader
    • Allow just one API for defining shader inputs pygfx/pyshader#26
    • More consistent exceptions
    • More examples and validate them on CI
    • Docs in readme
  • Release python-shader
  • Remove wgpu-py's pip install from git
  • More work on wgpu-py
    • Update to latest wgpu-bin and webgpu #77
    • Canvas update method #63 #78
    • Re-evaluate #25
    • Docs #13 #79 #80
    • Update readme #80
    • Update wgpu-bin to wgpu-native v0.5.1
    • Update wgpu-py to v0.5.1 #81
    • Rework request_adapter and swap chain w.r.t. surface id. #83
    • Have another quick look at #73
    • Implement debug markers #84
  • Yet more work on wgpu-py :)
    • Fixes for linux #85
    • Untangle the API from canvas gui details #87
    • Fix the grayscale texture issue #89
    • Implement optional/defaults where IDL has them #91
  • Also some fixes in python-shader
  • New release of python-shader
  • New release of wgpu-py
  • Finally get to work on visvis2 / pygfx 🤣

Query version of wgpu-native?

We know the version of our own lib, but if someone uses their own build of wgpu-native, it would be nice to be able to validate that the versions match.

Support for grayscale textures

Currently, textures of format r8unorm and r8uint work, but r16sint, r32sint and r32float fail. I've not yet been able to find out why.

Tutorial docs

I just noticed this exists, and it's pretty helpful!

We could reference it in our README maybe?

Prevent passing invalid keys in dict arguments

Some functions have arguments that are dicts (and some even contain sub-dicts). At the moment, we just take values from the dict, but we do not check whether the dict contains invalid values. This means that typos can cause the wrapper to use the value of the intended key instead, and thus lead to odd behavior.
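A minimal sketch of such a check (the helper name and the example call are hypothetical, not part of the wgpu-py API):

```python
def check_struct_keys(struct, allowed, *, name="descriptor"):
    """Raise on dict keys that are not part of the expected struct."""
    invalid = set(struct) - set(allowed)
    if invalid:
        raise ValueError(
            f"Invalid keys in {name}: {sorted(invalid)}; "
            f"expected a subset of {sorted(allowed)}"
        )


# A typo like "offzet" now fails loudly instead of silently using a default:
entry = {"binding": 2, "resource": None, "offzet": 0}
try:
    check_struct_keys(entry, {"binding", "resource", "offset", "size"})
except ValueError as err:
    print(err)
```

The wrapper would call such a check on every dict (and sub-dict) argument before reading values from it.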

Fix triple quote issue on azure pipelines

It's unclear why but there are additional quotes at the end of the TAG variable... sometimes? It's causing the condition to fail even when it should succeed. :/

Evaluating: and(succeeded(), eq(variables['Build.SourceBranchName'], variables['TAG']))
Expanded: and(True, eq('merge', 'v0.1.2-2-g166d188'''))
                                                  ^^

Canvas method to request a new draw

Having a (cross-GUI-toolkit) method to request a new draw would be nice. When using Qt, canvas.update() does the trick, but a more general solution would be preferable, plus we can add some niceties:

  • calling window.requestAnimationFrame schedules a draw call indirectly (non-blocking)
  • animation is paused when the canvas is not visible (scheduled calls are not executed until canvas becomes visible again)
  • refresh rate of the GUI environment is matched if possible
  • callback is passed the time delta since the last call
  • you have to request another draw at the end of your animation function; this avoids scheduling too many calls if your drawing happens to be slower than the time budget allows for
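As an illustration only, the coalescing and time-delta logic could look roughly like this (DrawScheduler and schedule_call are made-up names; the per-toolkit "call soon" hook would be something like QTimer.singleShot for Qt):

```python
import time


class DrawScheduler:
    """Sketch of a backend-agnostic draw scheduler (all names hypothetical)."""

    def __init__(self, schedule_call, draw_callback):
        self._schedule_call = schedule_call  # backend-specific "call soon" hook
        self._draw_callback = draw_callback  # receives time delta since last draw
        self._pending = False
        self._last_time = None

    def request_draw(self):
        # Coalesce: at most one draw is ever pending, so a draw function that
        # is slower than the frame budget cannot pile up scheduled calls.
        if not self._pending:
            self._pending = True
            self._schedule_call(self._perform_draw)

    def _perform_draw(self):
        self._pending = False
        now = time.perf_counter()
        delta = 0.0 if self._last_time is None else now - self._last_time
        self._last_time = now
        # The callback must call request_draw() again to keep animating.
        self._draw_callback(delta)
```

Pausing while the canvas is hidden and matching the display refresh rate would live in the backend-specific schedule_call implementation.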

Compute Shader Crash on Linux and macOS

Hi, so I created a very simple shader for my program using pyshader, and attempted to run it in wgpu. The codebase I'm using can be explored here.

This the simple shader I created:

import wgpu
import math
import wgpu.backends.rs  # Select backend
from wgpu.utils import compute_with_buffers  # Convenience function
from pyshader import python2shader, ivec3, i32, Array, vec3, f32

@python2shader
def compute_shader(
    index: ("input", "GlobalInvocationId", ivec3),
    image: ("buffer", 0, Array(vec3)),
    output: ("buffer", 1, Array(vec3)),
    lut: ("buffer", 2, Array(vec3)),
    size: ("buffer", 3, i32),
):
    i = index.x
    coords = ivec3(math.floor(image[i] * f32(size - 1)))
    output[i] = clamp(lut[coords.r * size ** 2 + coords.g * size + coords.b], 0., 1.)

With macOS 11.0.1, the shader runs fine on the FIRST run, but attempting to run the shader again results in the following crash. I've tried a hack which creates a new subprocess on every run (thus making it a first run every time), which seems to work fine.

yoonsik@macbook-air-f3 pycubelut % RUST_BACKTRACE=full python3 pycubelut.py /Users/yoonsik/Pictures/favluts/ /Users/yoonsik/Pictures/walksandusky/favs/P1040484.jpg -v
INFO: Starting pool with max 34 tasks in queue
INFO: Processing image: /Users/yoonsik/Pictures/walksandusky/favs/P1040484.jpg
/Users/yoonsik/Pictures/favluts/GSG_LUT_Sample_Cine OfficialSelection.cube
INFO: Processing image: /Users/yoonsik/Pictures/walksandusky/favs/P1040484.jpg
/Users/yoonsik/Pictures/favluts/Faded 47.CUBE
thread '<unnamed>' panicked at 'assertion failed: `(left == right)`
  left: `Ok(false)`,
 right: `Ok(true)`: GPU got stuck :(', /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4/src/libstd/macros.rs:16:9
stack backtrace:
   0:        0x110fdaf1f - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h83d53b696ac99295
   1:        0x110ffb1de - core::fmt::write::hf81c429634e1f3ed
   2:        0x110fd95e7 - std::io::Write::write_fmt::had2a3b01a2c037b5
   3:        0x110fdc95a - std::panicking::default_hook::{{closure}}::ha991e4eca34b4afa
   4:        0x110fdc69c - std::panicking::default_hook::h722aa3f5c1c31788
   5:        0x110fdcf28 - std::panicking::rust_panic_with_hook::h2cd47f71d6d55501
   6:        0x110fdcaf2 - rust_begin_unwind
   7:        0x11101324b - std::panicking::begin_panic_fmt::h769fb8929973777e
   8:        0x110d63755 - wgpu_core::device::life::LifetimeTracker<B>::triage_submissions::h63b6a70055507986
   9:        0x110d51cc5 - wgpu_core::device::Device<B>::maintain::hdafaf4bb769a895f
  10:        0x110d8f768 - wgpu_core::device::<impl wgpu_core::hub::Global<G>>::device_poll::h9207a6f24a526c81
  11:     0x7fff2d95b8e5 - ffi_call_unix64
  12:     0x7fff2d95b22a - ffi_call_int
  13:        0x110889cd1 - cdata_call
  14:        0x10ac4c037 - _PyObject_MakeTpCall
  15:        0x10acf40c7 - call_function
  16:        0x10acf1224 - _PyEval_EvalFrameDefault
  17:        0x10acf4bdb - _PyEval_EvalCode
  18:        0x10ac4c66c - _PyFunction_Vectorcall
  19:        0x10acf4093 - call_function
  20:        0x10acf1208 - _PyEval_EvalFrameDefault
  21:        0x10acf4bdb - _PyEval_EvalCode
  22:        0x10ac4c66c - _PyFunction_Vectorcall
  23:        0x10acf4093 - call_function
  24:        0x10acf1208 - _PyEval_EvalFrameDefault
  25:        0x10acf4bdb - _PyEval_EvalCode
  26:        0x10ac4c66c - _PyFunction_Vectorcall
  27:        0x10acf4093 - call_function
  28:        0x10acf1224 - _PyEval_EvalFrameDefault
  29:        0x10ac4c6dc - function_code_fastcall
  30:        0x10acf4093 - call_function
  31:        0x10acf1208 - _PyEval_EvalFrameDefault
  32:        0x10ac4c6dc - function_code_fastcall
  33:        0x10acf1554 - _PyEval_EvalFrameDefault
  34:        0x10ac4c6dc - function_code_fastcall
  35:        0x10acf4093 - call_function
  36:        0x10acf12d0 - _PyEval_EvalFrameDefault
  37:        0x10acf4bdb - _PyEval_EvalCode
  38:        0x10acea60d - PyEval_EvalCode
  39:        0x10ad25675 - run_eval_code_obj
  40:        0x10ad24a6d - run_mod
  41:        0x10ad23931 - PyRun_FileExFlags
  42:        0x10ad22f21 - PyRun_SimpleFileExFlags
  43:        0x10ad3ae3d - Py_RunMain
  44:        0x10ad3b176 - pymain_main
  45:        0x10ad3b1c4 - Py_BytesMain
fatal runtime error: failed to initiate panic, error 5
zsh: abort      RUST_BACKTRACE=full python3 pycubelut.py /Users/yoonsik/Pictures/favluts/  -v

I've tried running the exact same thing on Ubuntu Linux 18.04 with an Nvidia 2070 Super (driver version 450.80.02), but it causes a segfault.

yoonsik@ubuntu:~/git/pycubelut$ gdb --args python3 pycubelut.py ~/favluts/ ~/foo/P1040484.jpg 
(gdb) start
Temporary breakpoint 1 at 0x4b0ce0
Starting program: /usr/bin/python3 pycubelut.py /home/yoonsik/favluts/ /home/yoonsik/foo/P1040484.jpg
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
c
Temporary breakpoint 1, 0x00000000004b0ce0 in main ()
(gdb) continue
Continuing.
[New Thread 0x7ffff4101700 (LWP 1375)]
[New Thread 0x7ffff3900700 (LWP 1376)]
[New Thread 0x7fffe30ff700 (LWP 1377)]
/home/yoonsik/favluts/FGCineBright.cube

Thread 1 "python3" received signal SIGSEGV, Segmentation fault.
0x00007fffb9f76b54 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
(gdb) bt
#0  0x00007fffb9f76b54 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#1  0x00007fffb9f0aa39 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#2  0x00007fffb9f15ae3 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#3  0x00007fffb9ddd1cb in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#4  0x00007fffb9dc6514 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#5  0x00007fffb9dd5d71 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#6  0x00007fffb9dd78c7 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#7  0x00007fffb9dd7f3a in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#8  0x00007fffb9dd841d in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#9  0x00007fffb9e1db66 in _nv008nvvm () from /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.450.80.02
#10 0x00007fffbd1ff407 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.450.80.02
#11 0x00007fffbd1905cf in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.450.80.02
#12 0x00007fffbd190b8d in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.450.80.02
#13 0x00007fffbd20bc58 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.450.80.02
#14 0x00007fffbd20ca12 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.450.80.02
#15 0x00007fffbd199e28 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.450.80.02
#16 0x00007fffbd1a7f38 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.450.80.02
#17 0x00007fffbd1b9739 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.450.80.02
#18 0x00007fffbd1c0dc0 in ?? () from /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.450.80.02
#19 0x00007fffd170eb4c in gfx_backend_vulkan::device::<impl gfx_hal::device::Device<gfx_backend_vulkan::Backend> for gfx_backend_vulkan::Device>::create_compute_pipeline ()
   from /home/yoonsik/.local/lib/python3.6/site-packages/wgpu/resources/libwgpu_native.so
#20 0x00007fffd1682770 in wgpu_core::device::<impl wgpu_core::hub::Global<G>>::device_create_compute_pipeline ()
   from /home/yoonsik/.local/lib/python3.6/site-packages/wgpu/resources/libwgpu_native.so
#21 0x00007fffd1a95dec in ffi_call_unix64 () from /home/yoonsik/.local/lib/python3.6/site-packages/cffi.libs/libffi-806b1a9d.so.6.0.4
#22 0x00007fffd1a94f55 in ffi_call () from /home/yoonsik/.local/lib/python3.6/site-packages/cffi.libs/libffi-806b1a9d.so.6.0.4
#23 0x00007fffd1cb7db6 in cdata_call (cd=0x7fffd6a8b710, args=<optimized out>, kwds=<optimized out>) at c/_cffi_backend.c:3182
#24 0x00000000005a9dac in _PyObject_FastCallKeywords ()
#25 0x000000000050a433 in ?? ()
#26 0x000000000050beb4 in _PyEval_EvalFrameDefault ()
#27 0x0000000000507be4 in ?? ()
#28 0x0000000000509900 in ?? ()
#29 0x000000000050a2fd in ?? ()
#30 0x000000000050cc96 in _PyEval_EvalFrameDefault ()
#31 0x0000000000507be4 in ?? ()
#32 0x0000000000509900 in ?? ()
#33 0x000000000050a2fd in ?? ()
#34 0x000000000050beb4 in _PyEval_EvalFrameDefault ()
#35 0x00000000005095c8 in ?? ()
#36 0x000000000050a2fd in ?? ()
#37 0x000000000050beb4 in _PyEval_EvalFrameDefault ()
#38 0x0000000000507be4 in ?? ()
#39 0x0000000000588c8b in ?? ()
#40 0x000000000059fd0e in PyObject_Call ()
#41 0x000000000050d256 in _PyEval_EvalFrameDefault ()
#42 0x00000000005095c8 in ?? ()
#43 0x000000000050a2fd in ?? ()
#44 0x000000000050beb4 in _PyEval_EvalFrameDefault ()
#45 0x0000000000507be4 in ?? ()
#46 0x000000000050ad03 in PyEval_EvalCode ()
#47 0x0000000000634e72 in ?? ()
#48 0x0000000000634f27 in PyRun_FileExFlags ()
#49 0x00000000006386df in PyRun_SimpleFileExFlags ()
#50 0x0000000000639281 in Py_Main ()
#51 0x00000000004b0dc0 in main ()
(gdb) 

This is the first time I've ever tried GPU programming, and I would really appreciate some help. Is my shader programming broken?

Generate (API) docs

Sphinx autodoc etc. upload to rtd. Should not be too hard. Mostly for reference docs for now.

Update API for optional/nullable values

In IDL, some values that are optional do not have a default value. We should probably express these in Python as having a default value of None.

This is partly the cause of #73
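The pattern could look like this (the function and field names are illustrative, not actual wgpu-py signatures): values that the IDL marks optional without a default become None-defaulted keyword arguments, and are only forwarded when actually given.

```python
def build_descriptor(*, color_formats, depth_stencil_format=None, sample_count=1):
    """Illustrative only: sample_count has an IDL default; depth_stencil_format
    is optional *without* a default, so it maps to None in Python."""
    descriptor = {"colorFormats": list(color_formats), "sampleCount": sample_count}
    if depth_stencil_format is not None:
        # Only include the field when the caller actually provided it
        descriptor["depthStencilFormat"] = depth_stencil_format
    return descriptor
```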

Separate WgpuCanvas into toplevel and subwidget classes

A bit nitpicking, but ...

Right now we've implemented the GUI's to do:

from wgpu.gui.whichever import WgpuCanvas

This makes the examples nicely consistent. Also, that class can be instantiated to work as a toplevel widget or a subwidget. I guess we did it this way because it's trivial with Qt. For wx I had to use some __new__ to make it work (#141). For other GUI backends it may similarly not be easy.

I propose something like:

from wgpu.gui.whichever import WgpuCanvas
from wgpu.gui.whichever_support_subwidgets import WgpuCanvasWidget

Linux Wayland support

Updated 01-03-2024

Current status

Up to 01-03-2024:

  • Qt did not work on Wayland.
  • glfw did, kind of, but with an unresponsive and undecorated window.

After #470:

  • All GUI backends work, but only by forcing them to use X11 (XWayland), so it's more of a workaround.
  • Expecting glfw to work properly with a newer release.

Introduction

Since Ubuntu 21.04, Wayland is the default display server. This means that this issue potentially affects a lot of users.

The XDG_SESSION_TYPE env variable is either x11 or wayland. This variable is used by many applications to select the window system. What's important for us is that glfw and qt (and wx?) use this variable too.

Fortunately, there is XWayland, a compatibility layer that allows applications to "talk X" while still running on Wayland. XWayland is installed by default on Ubuntu too.

This means we can tell glfw and qt to just use X, even when on Wayland. There are env vars for that.
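For example (a sketch; the variable names are the ones mentioned elsewhere in this issue, and this must run before the toolkits are imported):

```python
import os


def force_x11_backends():
    """Force GUI toolkits onto X11 (XWayland) when running under Wayland.

    Must be called before glfw / Qt / wx are imported. GDK_BACKEND applies
    to GTK-based wx builds.
    """
    if os.environ.get("XDG_SESSION_TYPE", "").lower() == "wayland":
        os.environ.setdefault("PYGLFW_LIBRARY_VARIANT", "x11")  # pyglfw
        os.environ.setdefault("QT_QPA_PLATFORM", "xcb")         # Qt 5/6
        os.environ.setdefault("GDK_BACKEND", "x11")             # GTK / wx
```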

How does it affect us exactly?

To obtain a surface id for the canvas to render to, we call wgpuInstanceCreateSurface(). The descriptor argument for that function is platform-specific: there are different structs for Windows, Metal, X11, Wayland, and Xcb.

On Windows and macOS, that struct can be composed with just the "window id". Glfw, Qt and Wx have a method to obtain it, so Windows and macOS are relatively easy.

On Linux, apart from having to deal with multiple window platforms, we also need an additional value: the display id. This is basically a pointer to a display context: the thing an app creates to start doing things with X11/Wayland, a bit similar to a device in wgpu. This is where the hard part is.

Support for glfw

GLFW has improved its Wayland support over the past years/months, but it seems not quite 100% yet. pyglfw on Linux includes one lib for X11 and one for Wayland, and selects one based on XDG_SESSION_TYPE and a few other variables.

The latest glfw (3.4, released 23-02-2024) ought to have better support for Wayland. And includes that support in a single binary lib. Unfortunately, there are some build problems, so pyglfw cannot ship with these binaries yet. It ships glfw 3.3.9 instead.

When I apt install libglfw3 it installs version 3.3.6 (on Ubuntu 22.04). I don't feel like compiling glfw from source right now, so I have not tested the new glfw 3.4.

This snippet can be used to create a glfw window. Without the PYGLFW_LIBRARY_VARIANT this produces an unresponsive window without decorations on Wayland (with glfw 3.3.9).

import os

# os.environ["PYGLFW_LIBRARY_VARIANT"] = "x11"  # force using XWayland

import glfw

glfw.init()

glfw.window_hint(glfw.CLIENT_API, glfw.NO_API)
glfw.window_hint(glfw.RESIZABLE, True)
        
w = glfw.create_window(800, 800, "Test!", None, None)

while not glfw.window_should_close(w):
    glfw.poll_events()

glfw.terminate()

Also see gfx-rs/wgpu#4775 and gfx-rs/wgpu-native#377.

The solution for now (March 2024) is to force glfw to use X11 (i.e. XWayland on Wayland).

Support for Qt

The problem with Qt is that we cannot obtain the display id that Qt uses internally. There is QGuiApplication.nativeInterface, but ... it's not implemented in PySide or PyQt. See e.g. https://wiki.qt.io/Qt_for_Python_Missing_Bindings. Though maybe it's available soon?

Another thing I tried was to make Qt's WgpuCanvas a QWindow instead of a QWidget. This class has a surfaceHandle() method ... except it raises an exception saying the method is private.

Instead of using the actual display id that Qt uses, we can also create our own "display object" and use that. That works fine for X11. Unfortunately, this does not work for Wayland.

Then there is QT_QPA_PLATFORM, which can be set to (amongst other things) "wayland-egl" or "xcb".

The solution for now (March 2024) is to set QT_QPA_PLATFORM to xcb to tell Qt to use X11 (i.e. XWayland on Wayland).

Support for wx

We can force wx to use X11 by setting GDK_BACKEND to "x11", but I haven't tested this because I cannot install wxPython with apt or pip.

AttributeError: memoryview has no attribute "contiguous"

backends/rs.py:598, macOS Catalina, PyPy

This happens when you try to create a buffer, even in the cube_glfw and compute examples. On PyPy, memoryview objects don't seem to have the contiguous attribute, so I commented it out and they both run fine, as well as my code.
I also had to comment it out on lines 1712, 1094, and 1742.

BaseCanvas._draw_frame_and_present swallowing errors

Currently the except clause in BaseCanvas._draw_frame_and_present swallows errors and writes them to stderr.

I think users will want to bring their own implementation for logging, and I also think some users will prefer to see errors bubble up to the top of the stack (where they may be logged or handled centrally for example) rather than be silently swallowed.

In short, the post-mortem debugging implementation probably has its use in development, but maybe it should be behind a flag?
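One possible shape for such a flag (a sketch, not the actual BaseCanvas implementation; the class and flag names are hypothetical):

```python
import logging

logger = logging.getLogger("wgpu")


class CanvasSketch:
    """Sketch: errors bubble up by default; swallowing is opt-in for development."""

    swallow_draw_errors = False  # hypothetical flag

    def draw_frame(self):  # overridden by the user
        raise NotImplementedError()

    def _draw_frame_and_present(self):
        try:
            self.draw_frame()
            # ... present the swapchain here ...
        except Exception:
            if not self.swallow_draw_errors:
                raise  # let the error bubble up to be handled centrally
            logger.exception("error in draw_frame")
```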

How to update multiple uniform variables?

Hi,

I've made some good progress on my pygame-glfw-wgpu integration and I'm starting to understand the quite chatty WebGPU dialect. Nevertheless, the bind groups and bind group layouts still puzzle me.

I defined 4 uniform variables in the vertex shader: projection, view, model, transform.

SAMPLER_BINDING = 0, 0
TEXTURE_BINDING = 0, 1
UNIFORM_BINDING = 0, 2 #not used

@python2shader
def tex_vertex_shader(
    in_pos: ("input", 0, vec4),
    in_texcoord: ("input", 1, vec2),
    out_pos: ("output", "Position", vec4),
    v_texcoord: ("output", 0, vec2),
    projection: ("uniform", (0, 2), mat4),
    view: ("uniform", (0, 3), mat4),
    model: ("uniform", (0, 4), mat4),
    transform: ("uniform", (0, 5), mat4),
):
    ndc = projection * view * model * transform * in_pos
    out_pos = vec4(ndc.xy, 0, 1)  # noqa - shader output
    v_texcoord = in_texcoord  # noqa - shader output

I initialized them using:

uniform_type = Struct(transform=mat4, projection=mat4, view=mat4, model=mat4)
uniform_data = np.asarray(shadertype_as_ctype(uniform_type)())

I updated them in my update loop:

rot_x         = pyrr.matrix44.create_from_x_rotation(-0.3*time.time(), dtype=np.float32)
rot_y         = pyrr.matrix44.create_from_y_rotation(time.time(), dtype=np.float32)

view          = pyrr.matrix44.create_from_translation(pyrr.Vector3([0.0, 0.0, -10.0]))
projection    = pyrr.matrix44.create_perspective_projection_matrix(1200.0, aspect_ratio, 0.1, 100.0)
model         = pyrr.matrix44.create_from_translation(pyrr.Vector3([0.0, 0.0, 0.0]))

uniform_data["transform"] = (rot_x @ rot_y).flat
uniform_data["view"] = view.flat
uniform_data["projection"] = projection.flat
uniform_data["model"] = model.flat      

# instead of:
# uniform_data["transform"] = (rot_x @ rot_y @ view @ projection @ model).flat

But I did not understand yet how to initialize the pipeline:

...
uniform_buffer_object = device.create_buffer(
    size=uniform_data.nbytes,
    usage=wgpu.BufferUsage.UNIFORM | wgpu.BufferUsage.COPY_DST,
)

bind_groups_entries[0].append(
    {
        "binding": 2, #?
        "resource": {
            "buffer": uniform_buffer_object,
            "offset": 0, #?
            "size": uniform_buffer_object.size,
        },
    }
)
bind_groups_layout_entries[0].append(
    {
        "binding": 2, #?
        "visibility": wgpu.ShaderStage.VERTEX | wgpu.ShaderStage.FRAGMENT,
        "type": wgpu.BindingType.uniform_buffer,
    }
)

and update them:

uniform_nbytes = uniform_data.nbytes
tmp_buffer = device.create_buffer_mapped(
    size=uniform_nbytes, usage=wgpu.BufferUsage.COPY_SRC
)
ctypes.memmove(
    ctypes.addressof(tmp_buffer.mapping), uniform_data.ctypes.data, uniform_nbytes
)
tmp_buffer.unmap()

Could you tell me how to complete the pipeline initialization?

Thanks !

Making buffer mapping part of the public API?

Also see:

Intro

We're deviating from the WebGPU API with respect to how buffer data is read and written, because it's hard to reproduce that API in Python in a way that does not make it very easy for the user to access released memory and thus cause a segfault.

The WebGPU API

The WebGPU API for synchronizing data between a GPUBuffer and the CPU makes use of "mapping". The API works as follows:

  • You request the buffer to map its data (mapAsync()), using a specific range.
  • Then you obtain an ArrayBuffer using getMappedRange(), again using a subrange (within the mapped range).
  • You copy to/from that array-buffer (via a typed array view).
  • You unmap the buffer.

This API offers appealing advantages:

  • Allows reading and writing data in a buffer.
  • Allows doing that with subranges.
  • Allows doing that with multiple subranges in one go, i.e. without mapping/unmapping multiple times.
  • Without unnecessary data copies.

The problem

It is challenging to find a Pythonic API to replicate this behavior. I think that any solution we implement should make it impossible for the user to access the memory after it is unmapped. (I mean impossible, unless the user deliberately uses ffi or something to do so.)

However, this appears to be oddly hard to do with the way buffers, arrays and views interact in Python.

As an example, one can invalidate a memoryview object by calling its release() method. But if another memoryview or numpy array has been mapped to the same memory, those continue to work.

What we have now

The solution so far has been to implement a much simpler API using map_read() and map_write(data), without the option to read/write a subrange of the buffer. So basically only the first bullet point.

What we need

At the very least the first two bullet points. I very much hope to also include the third. If we can't avoid data copies (the fourth bullet point), that's unfortunate, but not problematic, as Python is not super-duper-fast anyway.

Some options ...

Stick to read_data and write_data

What we have now, but can include args to specify a range.
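For instance (a sketch only; the GPU-side storage is simulated with a bytearray, and these signatures are a proposal, not the current API):

```python
class BufferSketch:
    """Toy stand-in for a GPU buffer; real storage would live in wgpu-native."""

    def __init__(self, nbytes):
        self._data = bytearray(nbytes)

    def write_data(self, data, offset=0):
        # Write a chunk of bytes at the given offset, with bounds checking
        data = bytes(data)
        if offset < 0 or offset + len(data) > len(self._data):
            raise ValueError("write out of range")
        self._data[offset:offset + len(data)] = data

    def read_data(self, offset=0, size=None):
        # Read `size` bytes starting at `offset`; default reads to the end
        if size is None:
            size = len(self._data) - offset
        if offset < 0 or offset + size > len(self._data):
            raise ValueError("read out of range")
        return bytes(self._data[offset:offset + size])
```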

Chunked writing / reading

I think this would cover most use-cases, without the need to expose the mapping stuff:

def write_chunks(sequence, offset=0, size=0):
    # With sequence an iterable (or even a generator) providing tuples (offset, data)
    ...

Mapping, but via a custom class so that we can restrict access

import ctypes


class BufferMapping:

    def __init__(self, mem):
        self._never_touch_this_mem = mem  # a memoryview
        self._ismapped = True  # The buffer will set this to False when it's unmapped

    def cast(self, format, shape=None):
        if not self._ismapped:
            raise RuntimeError("Cannot use a buffer mapping after it's unmapped.")
        self._never_touch_this_mem = self._never_touch_this_mem.cast(format, shape)
        return self

    def __getitem__(self, index):
        if not self._ismapped:
            raise RuntimeError("Cannot use a buffer mapping after it's unmapped.")
        res = self._never_touch_this_mem.__getitem__(index)
        if isinstance(res, memoryview):
            raise IndexError("Cannot get a sub-view")  # or also wrap in a BufferMapping?
        return res

    def __setitem__(self, index, value):
        if not self._ismapped:
            raise RuntimeError("Cannot use a buffer mapping after it's unmapped.")
        self._never_touch_this_mem.__setitem__(index, value)

    def to_memoryview(self):
        # Make a copy
        new_obj = (ctypes.c_uint8 * self._never_touch_this_mem.nbytes)()
        new_mem = memoryview(new_obj)
        new_mem[:] = self._never_touch_this_mem
        return new_mem

The thing is ... when would you use this? To map the data and then set data elements one by one? That would be slow because of the overhead that we introduce. In batches then? Well, in that case you could just call write_data(subdata, offset) a few times ...

The use-cases where a mapped API has an advantage (in Python) seem flaky, and the API is much more complex. Therefore we don't currently expose this API in wgpu-py.

However ... I could miss a use-case. And I could miss a possibly elegant solution.

wgpu-py does not work on Windows 32 bit

Because we only have a 64-bit version of wgpu-native. The unfortunate thing is that if you go to python.org on Windows, the default installer you'll be prompted with is for 32-bit Python :( ... in 2020 :'(

Options:

  • Detect win32 and provide a reasonable error message.
  • There seems to be consensus to make python.org offer both versions, but someone needs to implement that. Maybe help?
  • Somehow build a 32-bit version of wgpu-native (if that's even possible). This looks similar to #36.
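The first option is easy to sketch (function name is hypothetical; the check would run at import time):

```python
import struct
import sys


def check_platform():
    """Fail early with a clear message on 32-bit Windows (sketch of option 1)."""
    if sys.platform.startswith("win") and struct.calcsize("P") * 8 == 32:
        raise RuntimeError(
            "wgpu-py requires 64-bit Python on Windows, because only a "
            "64-bit wgpu-native binary is shipped."
        )
```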

Triangle example: white window that closes after 1 second (Windows 10)

Running the triangle example with GLFW on Windows 10 (up to date, i.e. running the latest Fall update) gives me a white window that exits shortly afterwards.
Using the tkinter canvas exits immediately without even showing a window.
I'm running a Surface Book 2 which has an integrated Intel GPU as well as a dedicated Nvidia GeForce 1050. My graphics drivers are up to date, both Intel and Nvidia. I can compile and run OpenGL and Vulkan applications just fine, so hardware support should not be the issue. A rough guess would be that wgpu-py fails to select the adapter because two are available, but I can't say for sure as I don't know the internals of wgpu.
