
Comments (2)

simonw commented on September 21, 2024

I may be misunderstanding Kompute - it looks like it might not support Apple Silicon at all?

In that case I'm confused by the docs here: https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.device which say:

device (str | None, default: None) --

The processing unit on which the GPT4All model will run. It can be set to:

  • "cpu": Model will run on the central processing unit.
  • "gpu": Use Metal on ARM64 macOS, otherwise the same as "kompute".
  • "kompute": Use the best GPU provided by the Kompute backend.
  • "cuda": Use the best GPU provided by the CUDA backend.
  • "amd", "nvidia": Use the best GPU provided by the Kompute backend from this vendor.
  • A specific device name from the list returned by GPT4All.list_gpus().

Default is Metal on ARM64 macOS, "cpu" otherwise.

But when I do this:

model = GPT4All("Phi-3-mini-4k-instruct.Q4_0.gguf", device='gpu')

I get this:

availableGPUDevices: built without Kompute
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/simon/.local/share/virtualenvs/llm-gpt4all-N30dYrxH/lib/python3.11/site-packages/gpt4all/gpt4all.py", line 208, in __init__
    self.model.init_gpu(device)
  File "/Users/simon/.local/share/virtualenvs/llm-gpt4all-N30dYrxH/lib/python3.11/site-packages/gpt4all/_pyllmodel.py", line 272, in init_gpu
    all_gpus = self.list_gpus()
               ^^^^^^^^^^^^^^^^
  File "/Users/simon/.local/share/virtualenvs/llm-gpt4all-N30dYrxH/lib/python3.11/site-packages/gpt4all/_pyllmodel.py", line 260, in list_gpus
    raise ValueError("Unable to retrieve available GPU devices")
ValueError: Unable to retrieve available GPU devices


cebtenzzre commented on September 21, 2024

Firstly, that documentation is for a version of the python bindings that hasn't been released yet. As of the current release, the only way to use the GPU on Apple Silicon is to not pass the device argument. There is no way to force use of the CPU.
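That default-selection rule can be sketched with a small stdlib-only helper. This is just an illustration of the behavior described above, not part of the gpt4all API (`default_device` is a hypothetical name):

```python
import platform

def default_device() -> str:
    """Hypothetical helper mirroring the documented default:
    the Metal backend on ARM64 macOS, the CPU everywhere else."""
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "metal"
    return "cpu"

print(default_device())
```

So on the current release, constructing the model with no `device` argument at all — `GPT4All("Phi-3-mini-4k-instruct.Q4_0.gguf")` — is what picks up Metal on an M-series Mac.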

Secondly, the Metal backend is much more complete (and efficient) than a Vulkan-based backend on top of MoltenVK would be. So we've never built GPT4All with Kompute support on Apple Silicon.

Even on the latest main branch, GPT4All.list_gpus() is not implemented for the Metal backend. But I'm not aware of any devices the llama.cpp Metal backend supports that can have more than one GPU.

