Comments (58)

marcdownie commented on June 3, 2024

I can confirm that, with the latest wgpu-py and wgpu-native built from source, everything is working on M1, with a couple of caveats:

  1. wgpu-native asks for a version of bindgen that just plain doesn't work on M1. The latest bindgen works fine.
  2. The get_surface_id_from_canvas helper in rs_helpers doesn't seem to be able to recognize the GLFWWindow. I replaced this hacking-around-libobjc-with-ctypes code with my own hacking-around-libobjc-with-ctypes code and everything now works. I haven't figured out the differences yet.

marcdownie commented on June 3, 2024

Pull request for getting wgpu-native to build on M1 here: gfx-rs/wgpu-native#114

My angry hacks to get get_surface_id_from_canvas working are harder to build a pull request from, not least of all because they depend on some random gist I found (https://gist.github.com/tlinnet/746a18788dd51f0827fb4840b9a8631c) which, at the very least, doesn't have a license.

I think the difference here comes down to how the methods (like contentView()) are called. The existing get_surface_id_from_canvas uses a raw objc_msgSend, while I'm using a method returned from class_getInstanceMethod(objc_class, someSelector) and calling that.

Specifically, I have:

cv = ObjCInstance(window).contentView()
cv.setWantsLayer(True)
metal_layer = ObjCClass("CAMetalLayer").layer()
cv.setLayer(metal_layer)

To replace the existing:

content_view = objc.objc_msgSend(window, content_view_sel)
...
objc.objc_msgSend(content_view, set_wants_layer_sel, True)
ca_metal_layer_class = objc.objc_getClass(b"CAMetalLayer")
metal_layer = objc.objc_msgSend(ca_metal_layer_class, layer_sel)
objc.objc_msgSend(content_view, set_layer_sel, ctypes.c_void_p(metal_layer))

My code works where get_surface_id_from_canvas fails because on M1 objc.objc_msgSend(window, responds_to_sel_sel, ctypes.c_void_p(content_view_sel)) isn't True when it clearly should be. objc.objc_msgSend is, famously, coupled directly to the ABI.
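
At the ctypes level, the method-object route described above boils down to something like the sketch below (an illustrative reconstruction rather than the gist's actual code; the (id, SEL) -> id prototype for contentView is an assumption, and `window` is the NSWindow pointer the helper already gets from GLFW):

import ctypes

objc = ctypes.cdll.LoadLibrary("/usr/lib/libobjc.dylib")
objc.sel_registerName.restype = ctypes.c_void_p
objc.sel_registerName.argtypes = [ctypes.c_char_p]
objc.object_getClass.restype = ctypes.c_void_p
objc.object_getClass.argtypes = [ctypes.c_void_p]
objc.class_getInstanceMethod.restype = ctypes.c_void_p
objc.class_getInstanceMethod.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
objc.method_getImplementation.restype = ctypes.c_void_p
objc.method_getImplementation.argtypes = [ctypes.c_void_p]

content_view_sel = objc.sel_registerName(b"contentView")
window_class = objc.object_getClass(window)  # `window` is the NSWindow pointer from GLFW
method = objc.class_getInstanceMethod(window_class, content_view_sel)
imp = objc.method_getImplementation(method)

# Cast the IMP to a fixed-arity (id, SEL) -> id prototype and call it, so the
# arm64 calling convention sees an ordinary C call rather than a variadic one.
ContentViewProto = ctypes.CFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p, ctypes.c_void_p)
content_view = ContentViewProto(imp)(window, content_view_sel)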

Meanwhile, I'm trying to inline everything from that gist so that I might actually have code you'd want in your repository, but it might take a few days for me to get to it.

almarklein commented on June 3, 2024

Step 1 is done. Once there is a release of wgpu-native, we can do what's needed in wgpu-py. @Korijn if you feel like starting with that, you could use the unofficial release on my fork.

Korijn commented on June 3, 2024

The MemoryError is the only remaining issue at this point, and so far only @marcdownie has reported it with a conda environment... @berendkleinhaneveld has been able to run multiple examples on M1 now with all the changes. I'm inclined to say we should close this and #190, and release some new wheels to pypi! 🚀

almarklein commented on June 3, 2024

There it is!

(screenshot)

anentropic commented on June 3, 2024

OK, I built wgpu.h and libwgpu_native.dylib again from the v0.5.2 tag of wgpu-native.

Different error this time:

Traceback (most recent call last):
  File "/Users/paul/Documents/Dev/Personal/python-hexblade/experiments/pygfx_hexes.py", line 6, in <module>
    import wgpu.backends.rs  # noqa: F401, Select Rust backend
  File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 117, in <module>
    ffi.cdef(_get_wgpu_h())
  File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/api.py", line 112, in cdef
    self._cdef(csource, override=override, packed=packed, pack=pack)
  File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/api.py", line 126, in _cdef
    self._parser.parse(csource, override=override, **options)
  File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 389, in parse
    self._internal_parse(csource)
  File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 396, in _internal_parse
    self._process_macros(macros)
  File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 479, in _process_macros
    raise CDefError(
cffi.CDefError: only supports one of the following syntax:
  #define WGPUBufferUsage_MAP_READ ...     (literally dot-dot-dot)
  #define WGPUBufferUsage_MAP_READ NUMBER  (with NUMBER an integer constant, decimal/hex/octal)
got:
  #define WGPUBufferUsage_MAP_READ (uint32_t)1

almarklein commented on June 3, 2024

There are no CI runners available

See also actions/runner-images#2187

Different error this time

The only thing I can think of is that there is a mismatch between the header-file and the compiled lib.

Otherwise, with a bit of luck things will be better once we've moved to a newer version of wgpu-native ...

SuperSimon81 commented on June 3, 2024

Sorry for the wall of text, but I might have found a piece of the puzzle. Reading up on architectural differences between x86 and arm64 (M1) here: https://developer.apple.com/documentation/apple-silicon/addressing-architectural-differences-in-your-macos-code

The x86_64 and arm64 architectures have different calling conventions for variadic functions: functions with a variable number of parameters. On x86_64, the compiler treats fixed and variadic parameters the same, placing parameters in registers first and only using the stack when no more registers are available. On arm64, the compiler always places variadic parameters on the stack, regardless of whether registers are available. If you implement a function with fixed parameters, but redeclare it with variadic parameters, the mismatch causes unexpected behavior at runtime.

A bit further down on that page, it discusses the objc_msgSend function that get_surface_id_from_canvas calls:

A function like objc_msgSend calls a method of an object, passing the parameters you supply to that method. Because objc_msgSend must support calls to any method, it accepts a variable list of parameters instead of fixed parameters. This usage of variable parameters changes how objc_msgSend calls your function, effectively redeclaring your method as a variadic function.

My guess is that the parameters for objc_msgSend are passed in registers on x86_64 and on the stack on arm64. The article goes on to explain how to solve it. I will give that a go tomorrow.
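
For reference, the straightforward way to apply that advice from ctypes is to give objc_msgSend an explicit fixed prototype per call site instead of calling it variadically. A minimal sketch, assuming `window` and the selectors are already set up as in the existing helper:

import ctypes

objc = ctypes.cdll.LoadLibrary("/usr/lib/libobjc.dylib")

# Declare a fixed (id, SEL) -> id signature for this particular send, so that
# on arm64 the arguments are passed in registers like a normal C call.
objc.objc_msgSend.restype = ctypes.c_void_p
objc.objc_msgSend.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
content_view = objc.objc_msgSend(window, content_view_sel)

# A send that takes an extra BOOL argument gets re-declared before the call:
objc.objc_msgSend.argtypes = [ctypes.c_void_p, ctypes.c_void_p, ctypes.c_bool]
objc.objc_msgSend(content_view, set_wants_layer_sel, True)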

almarklein commented on June 3, 2024

I prefer to keep this open until @marcdownie (and others?) can confirm it also works for them.

We can still do a new release in the meantime with what we have now - would also make it easier for others to test it.

marcdownie commented on June 3, 2024

The MemoryError is the only remaining issue at this point, and so far only @marcdownie has reported it with a conda environment...

To be clear (because this is quite the issue!): I was only not getting the MemoryError in my base conda; I've never made it past the MemoryError outside of a conda environment. And, this morning, after uninstalling various bits of my system, even conda is now throwing the MemoryError.

marcdownie commented on June 3, 2024

Results! I walked away from my python superfund site onto a fresh machine. Either the prerelease cffi, or the lack of detritus left over from x64 brew, is in fact the missing ingredient.

WGPU_BACKEND_TYPE=Metal PYGLFW_LIBRARY=/opt/homebrew/Cellar/glfw/3.3.4/lib/libglfw.3.3.dylib python3.9 examples/triangle_glfw.py

Gives me a triangle!

berendkleinhaneveld commented on June 3, 2024

Awesome! Too bad it's still a mystery as to what exactly causes the MemoryError, but at least you get to experience the triangle now!

marcdownie commented on June 3, 2024

I think the MemoryError is picking up the wrong ffi or trampoline lib, the one without Apple's secret write+execute trampoline sauce. The real mystery to me is how conda managed to move ahead of pip here, which led me to see a working triangle last week. I'd still feel better if I can verify that this all works in a fresh venv, but for now, I'm back to feeling ok about wgpu-py on M1. So a LGTM for this and #190 from me.

anentropic commented on June 3, 2024

Following that:

  File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 108, in _get_wgpu_lib_path
    raise RuntimeError(f"Could not find WGPU library in {embedded_path}")
RuntimeError: Could not find WGPU library in /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib

anentropic commented on June 3, 2024

Bypassing Poetry does not seem to help

% pip uninstall wgpu
Found existing installation: wgpu 0.3.0
Uninstalling wgpu-0.3.0:
  Would remove:
    /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/LICENSE
    /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu-0.3.0.dist-info/*
    /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/*
Proceed (y/n)? y
  Successfully uninstalled wgpu-0.3.0

% pip install wgpu
Collecting wgpu
  Downloading wgpu-0.3.0.tar.gz (65 kB)
     |████████████████████████████████| 65 kB 2.0 MB/s
Requirement already satisfied: cffi>=1.10 in /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages (from wgpu) (1.14.5)
Requirement already satisfied: pycparser in /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages (from cffi>=1.10->wgpu) (2.20)
Building wheels for collected packages: wgpu
  Building wheel for wgpu (setup.py) ... done
  Created wheel for wgpu: filename=wgpu-0.3.0-py3-none-macosx_11_2_arm64.whl size=72787 sha256=ef24b3c659660431881b441788b1205d7b90eec89f534f95a2f04ff468a0c040
  Stored in directory: /Users/anentropic/Library/Caches/pip/wheels/65/61/c8/553073b0633ba01220ede3798da3293ff8b054a5445ab2d218
Successfully built wgpu
Installing collected packages: wgpu
Successfully installed wgpu-0.3.0

% python experiments/pygfx_hexes.py
Traceback (most recent call last):
  File "/Users/anentropic/Documents/Dev/Personal/python-hexblade/experiments/pygfx_hexes.py", line 6, in <module>
    import wgpu.backends.rs  # noqa: F401, Select Rust backend
  File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 119, in <module>
    _lib = ffi.dlopen(_get_wgpu_lib_path())
  File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 108, in _get_wgpu_lib_path
    raise RuntimeError(f"Could not find WGPU library in {embedded_path}")
RuntimeError: Could not find WGPU library in /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib

% ls /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources
__init__.py	__pycache__	webgpu.idl	wgpu.h

Korijn commented on June 3, 2024

Looks like you installed the source distribution there, which indeed does not contain prebuilt binaries:

Downloading wgpu-0.3.0.tar.gz (65 kB) (not a .whl file)

I'll investigate this further later on. The Python packaging peeps have been making breaking changes to pip and wheel lately, so we may have to adjust our setup.py file.

Thanks for your report.

anentropic commented on June 3, 2024

Perhaps it's because I'm on an M1 mac...

I thought I'd try installing more manually. The first step is python download-wgpu-native.py

Which downloads https://github.com/gfx-rs/wgpu-native/releases/download/v0.7.0/wgpu-macos-64-release.zip

But then I get:

no suitable image found.  Did find:
	/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib: mach-o, but wrong architecture
	/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib: mach-o, but wrong architecture.  Additionally, ctypes.util.find_library() did not manage to locate a library called '/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib'

I then followed the instructions to build gfx-rs/wgpu-native from source:
https://github.com/gfx-rs/wgpu/wiki/Getting-Started#getting-started

I copied the wgpu.h and libwgpu_native.dylib that I built into my virtualenv site-packages/wgpu/resources/ dir

But now I get:

Traceback (most recent call last):
  File "/Users/anentropic/Documents/Dev/Personal/python-hexblade/experiments/pygfx_hexes.py", line 6, in <module>
    import wgpu.backends.rs  # noqa: F401, Select Rust backend
  File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 117, in <module>
    ffi.cdef(_get_wgpu_h())
  File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/api.py", line 112, in cdef
    self._cdef(csource, override=override, packed=packed, pack=pack)
  File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/api.py", line 126, in _cdef
    self._parser.parse(csource, override=override, **options)
  File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 389, in parse
    self._internal_parse(csource)
  File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 396, in _internal_parse
    self._process_macros(macros)
  File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 479, in _process_macros
    raise CDefError(
cffi.CDefError: only supports one of the following syntax:
  #define WGPUFeatures_DEPTH_CLAMPING ...     (literally dot-dot-dot)
  #define WGPUFeatures_DEPTH_CLAMPING NUMBER  (with NUMBER an integer constant, decimal/hex/octal)
got:
  #define WGPUFeatures_DEPTH_CLAMPING (uint64_t)1

anentropic commented on June 3, 2024

My mistake, I just downloaded the latest gfx-rs/wgpu-native but I should have grabbed 0.5.2... I'll try that.

Korijn commented on June 3, 2024

Well, there are multiple issues, but your trying this out on an M1 is definitely the biggest one :) It's a new build target and we have not looked into supporting it just yet. There are no CI runners available, and I don't know anyone (except for you, of course!) who owns one, so it will be tricky indeed. If you managed to compile the right version of wgpu-native (0.5.2) yourself, though, it should all just work.

anentropic commented on June 3, 2024

the last comment I posted was with 0.5.2 wgpu-native

wgpu-native itself works ok on the Rust side... I can run the make run-example-triangle test and a GLFW window opens up and draws a triangle

Korijn commented on June 3, 2024

Did you also update the header file? The last error you posted is actually a failure by cffi to parse/load the wgpu.h header file. It seems to indicate a syntax error:

cffi.CDefError: only supports one of the following syntax:
  #define WGPUBufferUsage_MAP_READ ...     (literally dot-dot-dot)
  #define WGPUBufferUsage_MAP_READ NUMBER  (with NUMBER an integer constant, decimal/hex/octal)
got:
  #define WGPUBufferUsage_MAP_READ (uint32_t)1

Here's what that last line looks like in my wgpu.h file:

$ cat wgpu.h | grep WGPUBufferUsage_MAP_READ
#define WGPUBufferUsage_MAP_READ 1
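
(If rebuilding against a matching header isn't an option, one possible workaround is to sanitize the macro definitions before handing them to cffi. The sketch below is illustrative only, with a made-up helper name, and is not necessarily what wgpu-py does:)

import re

def sanitize_wgpu_header(source: str) -> str:
    # ffi.cdef() only accepts plain integer constants in #define lines, so drop
    # casts such as "(uint32_t)" / "(uint64_t)" that newer wgpu.h versions emit.
    return re.sub(r"^(#define\s+\w+\s+)\((?:u?int\d+_t)\)(\w+)", r"\1\2", source, flags=re.M)

# hypothetical use in rs.py:
# ffi.cdef(sanitize_wgpu_header(_get_wgpu_h()))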

anentropic commented on June 3, 2024

🤔 I was sure I had, but I will double-check that.

anentropic commented on June 3, 2024

my grep returns:

#define WGPUBufferUsage_MAP_READ (uint32_t)1

I have double-checked and this is what is built by wgpu-native 0.5.2 on my machine

anentropic commented on June 3, 2024

I'm a bit out of my depth here, but let me know if I can help test or check anything else

Korijn commented on June 3, 2024

Would you mind listing the specific versions and code adjustments you've used? It might be just what we need to propose the appropriate changes upstream.

marcdownie commented on June 3, 2024

brew install llvm yields a 12.0.0_1 llvm, which lets you set LLVM_CONFIG_PATH=/opt/homebrew/Cellar/llvm/12.0.0_1/bin/llvm-config, and then bindgen='0.58.1' inside wgpu-native's Cargo.toml gets its cargo build to work.

almarklein commented on June 3, 2024

@marcdownie would you be interested in submitting a PR to https://github.com/gfx-rs/wgpu-native for the required changes to get it compiling? Might also be the easiest way to discuss the get_surface_id_from_canvas.

almarklein commented on June 3, 2024

Thanks for the effort! Would be great to get that code in, so others with an M1 can benefit as well :)

SuperSimon81 commented on June 3, 2024

What is happening on this front? I'm a Mac M1 user and would like to use wgpu-py. Any progress?

Korijn commented on June 3, 2024

Looks like the mentioned gfx-rs/wgpu-native#114 has been merged, so I imagine that it's at least possible to build for M1 now, but it doesn't look like there are prebuilt binaries yet: https://github.com/gfx-rs/wgpu-native/releases/tag/v0.9.2.2

So I guess the next step would be adjusting CI over there to provide those. Then we can easily add support here as a next step.

almarklein commented on June 3, 2024

I just created gfx-rs/wgpu-native#138 to track step 1.

SuperSimon81 commented on June 3, 2024

What would need to be done in wgpu-py to add support? Just curious and eager to contribute

Korijn commented on June 3, 2024

What would need to be done in wgpu-py to add support? Just curious and eager to contribute

  • Run the script as documented at the end of the readme to update the binary pointer to Almar's custom build that he mentioned in his comment
  • Update CI to build the package for Mac M1. I suppose that's maybe the non-trivial part; you probably need to fiddle with the GitHub Actions YAML and possibly setup.py.

The challenge here is that there aren't any M1 machines available on CI yet. So you need to implement a mechanism to force a new job on CI to (1) download the arm64 binaries and (2) build a wheel with the arm64 ABI tag, even though the CI job is running on a regular macos machine.

You may need to adjust this little hack here:

self.plat_name = get_platform(None) # force a platform tag

It mostly comes down to massaging setuptools and bdist_wheel into doing what you need them to do.

Hope this helps.
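
A rough sketch of that massaging (illustration only; the environment variable and class names are made up, and this is not wgpu-py's actual setup.py):

import os
from setuptools import setup
from wheel.bdist_wheel import bdist_wheel

class bdist_wheel_forced_plat(bdist_wheel):
    def finalize_options(self):
        super().finalize_options()
        forced = os.environ.get("FORCED_WHEEL_PLAT")  # e.g. "macosx_11_0_arm64" (hypothetical variable)
        if forced:
            self.plat_name = forced    # tag the wheel for the target platform
            self.root_is_pure = False  # the package ships a platform-specific .dylib

setup(
    name="wgpu",
    cmdclass={"bdist_wheel": bdist_wheel_forced_plat},
    # package data would include resources/libwgpu_native.dylib, etc.
)

The CI job would then run the download script first to fetch the arm64 binaries, and build the wheel with the forced platform tag on a regular macOS runner.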

SuperSimon81 commented on June 3, 2024

I think CI and GitHub YAML are a bit out of my depth. I updated the pointer to Almar's custom build. I know the pointer is right because I got an error saying it could not find the lib when using the wrong path.

triangle_glfw.py gives:

Expected wgpu-native version (0, 9, 2, 2) but got (0, 0, 0)
Traceback (most recent call last):
  File "/Users/simon/Documents/Coding/wgpu-py/examples/triangle_glfw.py", line 20, in <module>
    main(canvas)
  File "/Users/simon/Documents/Coding/wgpu-py/examples/triangle.py", line 55, in main
    adapter = wgpu.request_adapter(canvas=canvas, power_preference="high-performance")
  File "/Users/simon/Documents/Coding/wgpu-py/wgpu/backends/rs.py", line 215, in request_adapter
    surface_id = get_surface_id_from_canvas(canvas)
  File "/Users/simon/Documents/Coding/wgpu-py/wgpu/backends/rs_helpers.py", line 107, in get_surface_id_from_canvas
    raise RuntimeError("Received unidentified objective-c object.")
RuntimeError: Received unidentified objective-c object.

I think the first line is just a warning. A few comments up, @marcdownie discussed his hacks to get get_surface_id_from_canvas working. That too is a bit out of my depth, I'm afraid, but it looks like the solution is in the gist he referenced.

almarklein commented on June 3, 2024

Expected wgpu-native version (0, 9, 2, 2) but got (0, 0, 0)

Yes, this is just a warning. Is this with the binary from the release of my fork? If so, it looks like the version is not baked-in correctly ... not something that affects anything directly, but we'd need to fix that eventually.

File "/Users/simon/Documents/Coding/wgpu-py/wgpu/backends/rs_helpers.py", line 107, in get_surface_id_from_canvas
raise RuntimeError("Received unidentified objective-c object.")
RuntimeError: Received unidentified objective-c object.

Why would this part be different for M1?

SuperSimon81 commented on June 3, 2024

Is this with the binary from the release of my fork?

Yes I compiled the binary from your fork.

Why would this part be different for M1?

I'm not sure. In this comment, @marcdownie discusses how get_surface_id_from_canvas fails on his M1: #136 (comment).

Looks like he solved the issue using code from the gist but had some issues making it into a pull request.

almarklein commented on June 3, 2024

Is this with the binary from the release of my fork?

Yes I compiled the binary from your fork.

Ah, ok then it makes sense. I meant whether you downloaded it from the unofficial release of my fork :)

IΒ΄m not sure. In this comment @marcdownie discusses that "get_surface_id_from_canvas" fails on his M1 #136 (comment).

Right, I forgot how long this thread is :)

I understand that code like that feels overwhelming. I felt the same. I dug around in many (sometimes obscure) parts of the internet to piece everything together. But this is something that only someone with an M1 can do :)

Korijn commented on June 3, 2024

I think CI and GitHub YAML are a bit out of my depth.

I can handle that, and updating the pointer. I'll put it on a branch and open a draft PR.

SuperSimon81 commented on June 3, 2024

Solved it by using the code in the gist mentioned by @marcdownie. The examples run. But as I don't understand how the code in the gist actually works, I can't really make a pull request out of it. Licensing is also an issue.

marcdownie commented on June 3, 2024

I've had my eye on the easily pip-installable https://github.com/beeware/rubicon-objc as an (almost) drop-in replacement for that random gist we've both now used with success, but I haven't quite gotten around to it. How do we feel about adding a dependency?

marcdownie commented on June 3, 2024

Given a little:

from rubicon.objc.api import ObjCInstance, ObjCClass

Then I can successfully patch our get_surface_id_from_canvas(canvas) Darwin if clause with:

        cv = ObjCInstance(window).contentView
        cv.setWantsLayer(True)
        metal_layer = ObjCClass("CAMetalLayer").layer()
        cv.setLayer(metal_layer)

        struct = ffi.new("WGPUSurfaceDescriptorFromMetalLayer *")
        struct.layer = ffi.cast("void *", metal_layer.ptr.value)
        struct.chain.sType = lib.WGPUSType_SurfaceDescriptorFromMetalLayer

I only have access to M1 Macs, but this at the very least seems like much less code. Let me know if it's worth a pull request.

almarklein commented on June 3, 2024

How do we feel about adding a dependency?

I was hoping we could solve this without adding a dependency. However, since rubicon-objc is pure Python and has no dependencies by itself (AFAIK), it's a relatively "safe" dependency (we would not depend on their maintainers to push new packages for new Python versions, etc).

Let me know if it's worth a pull request.

It is :)


I still think it'd be interesting to obtain the surface id without a dependency. But this is something that can be picked up later. What I'd try (if I had an M1 Mac) would be to trace the code path through that gist (or through rubicon-objc) and check what (ctypes) API calls are performed. I suspect you'd end up with very similar code to what we have, and we can then see what we were missing. That said, I may be underestimating this :)

almarklein commented on June 3, 2024

I have added a list of things to fix at the top of this issue.

Another issue is the installation of cffi, which is where @anentropic got stuck. @marcdownie, @SuperSimon81, how have you tackled that?

marcdownie commented on June 3, 2024

The short answer is that the cffi installed via conda install cffi installs the correct _cffi_backend.cpython-39-darwin.so (while pip install cffi, and thus python3 setup.py develop, gets you an x86_64 one).

There aren't any aarch64 releases of wgpu-native yet, are there? Specifically: there's nothing I can put in my pull request for download-wgpu-native.py to make it download the correct binaries?

marcdownie commented on June 3, 2024

Actually, we're not there yet; we have problems with the latest cffi pulled in from conda:

Traceback (most recent call last):
  File "/Users/marc/temp/wgpu_pull_req/wgpu-py/examples/triangle_glfw.py", line 11, in <module>
    import wgpu.backends.rs  # noqa: F401, Select Rust backend
  File "/Users/marc/temp/wgpu_pull_req/wgpu-py/wgpu/backends/rs.py", line 44, in <module>
    from .rs_ffi import ffi, lib, check_expected_version
  File "/Users/marc/temp/wgpu_pull_req/wgpu-py/wgpu/backends/rs_ffi.py", line 108, in <module>
    def _logger_callback(level, c_msg):
  File "/Users/marc/miniforge3/envs/wgpu_work/lib/python3.9/site-packages/cffi/api.py", line 396, in callback_decorator_wrap
    return self._backend.callback(cdecl, python_callable,
MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks

SuperSimon81 commented on June 3, 2024

Another issue is the installation of cffi, this is where @anentropic got stuck on. @marcdownie, @SuperSimon81 how have you tackled that?

I used conda like @marcdownie.

Korijn commented on June 3, 2024

It's fine if that works in anaconda land but we also need to provide a solution here in pypi/pip country.

almarklein commented on June 3, 2024

There aren't any aarch64 releases of wgpu-native yet, are there? Specifically: there's nothing I can put in my pull request for download-wgpu-native.py to make it download the correct binaries?

Yes there are! And I just merged #185, which makes the necessary updates here. So if you pull the latest main and run the download script, you should be up and running :)


MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this.

Also posting the error that @berendkleinhaneveld got, which is yet something different than @anentropic reported. This has been reported earlier but the proposed workaround does not seem to work:

E ImportError: dlopen(/Users/cg/Library/Caches/pypoetry/virtualenvs/wgpu-5bSf_T1V-py3.9/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure

marcdownie commented on June 3, 2024

  1. That MemoryError is spurious, but good to have searchable on GitHub. The root cause seems to be which macOS entitlements my conda Python ends up having versus the other, less isolated Pythons floating around my system (including, confusingly, the conda base Python / Python.Framework).
  2. The _ffi_prep_closure error was solved for me by conda building cffi from source or, equivalently, pip install cffi --no-binary :all:

I think we're a) slowly getting there and b) really going to need that CI integration to feel great about this.

marcdownie commented on June 3, 2024

I'm now less convinced that we can work around the MemoryError: Cannot allocate write+execute memory for ffi.callback() error in the long (or even immediate) term. With a completely fresh-from-brew python3.9 environment, I can't get around this error from cffi with wgpu-py. This is known to cffi:

https://cffi.readthedocs.io/en/latest/using.html#callbacks-old-style

and is a deliberate consequence of Apple's arm64-binaries-must-be-signed policy. The only mystery right now is why my base conda env works with cffi at all.

almarklein commented on June 3, 2024

Or we should (try to) make use of the new-style callback mechanism described there.
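
(For reference, the new-style mechanism means compiling a small module in cffi's API mode and declaring the callback as extern "Python", so the C entry point is compiled ahead of time and no writable+executable page is needed at runtime. A minimal sketch with made-up module and callback names; note it would mean moving away from the pure dlopen/ABI-mode setup:)

# build-time script (cffi API mode, out-of-line)
from cffi import FFI

ffibuilder = FFI()
ffibuilder.cdef("""
    extern "Python" void py_log_callback(int level, const char *msg);
""")
ffibuilder.set_source("_wgpu_cb_example", "")  # hypothetical module name
ffibuilder.compile()

At runtime the Python side would then supply the implementation with @ffi.def_extern() on a function named py_log_callback and pass lib.py_log_callback to the C library, which is the refactor the cffi docs describe.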

Korijn commented on June 3, 2024

I'm now less convinced that we can work around the MemoryError: Cannot allocate write+execute memory for ffi.callback() error in the long (or even immediate) term. With a completely fresh-from-brew python3.9 environment, I can't get around this error from cffi with wgpu-py. This is known to cffi:

https://cffi.readthedocs.io/en/latest/using.html#callbacks-old-style

and is a deliberate consequence of Apple's arm64-binaries-must-be-signed policy. The only mystery right now is why my base conda env works with cffi at all.

From that page:

To fix the issue once and for all on the affected platforms, you need to refactor the involved code so that it no longer uses ffi.callback().

Looks like we are using it in two places... do you think it is possible to get rid of those usages, @almarklein?

Korijn commented on June 3, 2024

Also posting the error that @berendkleinhaneveld got, which is yet something different than @anentropic reported. This has been reported earlier but the proposed workaround does not seem to work:

E ImportError: dlopen(/Users/cg/Library/Caches/pypoetry/virtualenvs/wgpu-5bSf_T1V-py3.9/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure

This is (as described in the linked bug report) a linking error... so the installed cffi is attempting to load a dynamic library that doesn't exist. This is a problem caused by CFFI which they would need to solve in their build pipeline... you could try to run otool -L <binary> on _cffi_backend.cpython-39-darwin.so to see what it's trying to load.

almarklein commented on June 3, 2024

Let's move the discussion about cffi to #190

do you think it is possible to get rid of those usages @almarklein ?

Worth a try!

Korijn commented on June 3, 2024

Reference #194, which adds macOS arm64 wheels to CI.

(screenshot)

Korijn commented on June 3, 2024

Now that #195 has also been merged, we only have problems with cffi remaining.

Has anyone tried the latest cffi pre-release yet?

Korijn commented on June 3, 2024

I prefer to keep this open until @marcdownie (and others?) can confirm it also works for them.

We can still do a new release in the meantime with what we have now - would also make it easier for others to test it.

Let's do that!

marcdownie commented on June 3, 2024

I'm back (sorry about the pause): I'm up to date with main, out of conda, using the pre-release pip cffi, no longer have a parallel x64 brew installation, and ... I'm still hitting the cffi MemoryError.
