
Comments (12)

sorasoras avatar sorasoras commented on September 24, 2024

No: I can't install the desktop driver because it is not a GPU card. I had installed the datacenter driver, and have now installed your proposed new version, with success, then also the newest Vulkan runtime (Vulkan is now 1.3.275, though it is not documented as necessary for GPT4All). I still see no card in the dropdown (auto - GPU). Here is your new vulkaninfo --summary > d:\sum3.txt (just the same): sum3.txt. Now no more warnings appear in PowerShell. I now get between 2.5 tokens per second (Mixtral Instruct Q6) and 4 (Mistral 1.2 Q4).

You have to switch the driver from TCC mode to WDDM mode to be able to use graphics APIs.
I remember there is a guide somewhere.

from gpt4all.

cebtenzzre avatar cebtenzzre commented on September 24, 2024

What is the output of vulkaninfo --summary? You may need to install the Runtime or SDK from here: https://vulkan.lunarg.com/sdk/home
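If the command works but you want to check programmatically which devices the Vulkan loader reports, a small parser over a saved summary file can help. This is a sketch, not part of GPT4All: the `deviceName = ...` line format matches typical `vulkaninfo --summary` output, and the sample text below is made up for illustration.

```python
# Hypothetical helper: extract device names from a saved
# `vulkaninfo --summary` dump. If no NVIDIA entry shows up here,
# GPT4All's Vulkan backend cannot see the card either.

def vulkan_device_names(summary_text: str) -> list[str]:
    names = []
    for line in summary_text.splitlines():
        line = line.strip()
        if line.startswith("deviceName"):
            # lines look like: deviceName = NVIDIA Tesla P4
            _, _, value = line.partition("=")
            names.append(value.strip())
    return names

sample = """\
Devices:
========
GPU0:
    apiVersion    = 1.3.277
    deviceName    = NVIDIA Tesla P4
GPU1:
    apiVersion    = 1.3.277
    deviceName    = Intel(R) UHD Graphics 630
"""
print(vulkan_device_names(sample))
# prints: ['NVIDIA Tesla P4', 'Intel(R) UHD Graphics 630']
```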

from gpt4all.

gtbu avatar gtbu commented on September 24, 2024

As I said in another issue, I have of course installed the SDK runtimes (and published a link there)! I think the problem is that GPT4All uses either a CPU or a GPU but not both (why not, if the RAM is small? Though it is of course difficult to program such code), and not 'either of them AND a CUDA card' without a GPU.

from gpt4all.

cebtenzzre avatar cebtenzzre commented on September 24, 2024

I need the output of vulkaninfo --summary to have a better idea of what GPT4All sees when it tries to find your GPU. I just mentioned the Vulkan SDK in case that command is not found for some reason.

Btw, CUDA is irrelevant here, as GPT4All does not use it; it uses Vulkan as a compute backend. Tesla cards are GPUs and can do graphics, they just don't have any video outputs. But GPT4All does not care whether the card has any video outputs - I use my Tesla P40 with GPT4All and it works just fine.

from gpt4all.

gtbu avatar gtbu commented on September 24, 2024

I append the output of vulkaninfo --summary > d:\sum2.txt
sum2.txt
-- with the warnings from PowerShell appended. GPT4All is not on

from gpt4all.

cebtenzzre avatar cebtenzzre commented on September 24, 2024

So, this is not a GPT4All issue. Your installed NVIDIA driver is not providing the Vulkan API for your Tesla P4. One of these might do it:
https://www.nvidia.com/download/driverResults.aspx/222668/en-us/
https://www.nvidia.com/download/driverResults.aspx/221875/en-us/

I'm using Linux, and the standard nvidia proprietary driver supports Vulkan on my Tesla P40.

from gpt4all.

gtbu avatar gtbu commented on September 24, 2024

No: I can't install the desktop driver because it is not a GPU card. I had installed the datacenter driver, and have now installed your proposed new version, with success, then also the newest Vulkan runtime (Vulkan is now 1.3.275, though it is not documented as necessary for GPT4All).
I still see no card in the dropdown (auto - GPU). Here is your new vulkaninfo --summary > d:\sum3.txt (just the same):
sum3.txt
Now no more warnings appear in PowerShell. I now get between 2.5 tokens per second (Mixtral Instruct Q6) and 4 (Mistral 1.2 Q4).

from gpt4all.

cebtenzzre avatar cebtenzzre commented on September 24, 2024

So, there's your answer (thanks sorasoras) - you can only use the Tesla P4 with GPT4All on Windows if you have a GRID license, and use nvidia-smi to switch to WDDM mode: https://docs.nvidia.com/nsight-visual-studio-edition/reference/index.html#setting-tcc-mode-for-tesla-products
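For reference, the mode switch itself is done with nvidia-smi from an elevated Command Prompt. The `-dm` flag and the 0 = WDDM / 1 = TCC values come from NVIDIA's nvidia-smi documentation; a reboot is required afterwards, and on newer drivers the switch is refused on Tesla cards without a GRID license. Shown as a sketch, since it needs the actual hardware:

```
:: run from an elevated Command Prompt
nvidia-smi -L              :: list GPUs and their indices
nvidia-smi -g 0 -dm 0      :: set GPU 0 to WDDM (-dm: 0 = WDDM, 1 = TCC)
:: reboot for the change to take effect
```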

This is a limitation of Tesla devices on Windows. Unless GPT4All adds support for llama.cpp's CUDA backend, or you downgrade to an older driver that doesn't require a GRID license, there is no way around this.

from gpt4all.

gtbu avatar gtbu commented on September 24, 2024

Tesla CUDA devices like the P4 improve gaming speed on Windows without a special GRID license.
(I can't install the desktop driver, because it is for an NVIDIA GPU card (and I have only the processor's integrated GPU), and the installer can't find an NVIDIA card.)
Why do we need a vGPU? LM Studio doesn't need one.
Can you please give me the name (and version) of such an older driver that doesn't require a GRID license?
I found something at

from gpt4all.

cebtenzzre avatar cebtenzzre commented on September 24, 2024

For the last time: the Tesla P4 is a fully featured GPU (minus the physical display connectors), not just a "CUDA card"; NVIDIA just treats Tesla GPUs differently from e.g. GeForce and Quadro on Windows. You should not need a vGPU, only WDDM mode, so that you can use Vulkan for compute. GPT4All does not use CUDA at all at the moment.

I found this guide, maybe it helps.

from gpt4all.

gtbu avatar gtbu commented on September 24, 2024

Yes - but the desktop driver doesn't recognize the P4 and similar cards as a GPU. The P4 and the others are not GRID cards, which require a license.
The guide above says: enable Above 4G memory (my BIOS doesn't have that option; possibly it is the default - the Windows Device Manager sees the P4 and the CPU's integrated GPU in parallel);
disable CSM: if not disabled, the NVIDIA graphics card selection will give an error message (not on my computer). It wants a second GeForce graphics card (... no free second long slot, as on most boards).
I will go on with the registry approach (making a registry backup first with ERUNT).
If it doesn't help: why does LM Studio work with this setup? The CUDA datacenter driver is installed. It only needs some code which switches to WDDM mode.

**** To finalize this discussion: with GRID driver 4.72.39 (versions 500+ do not work) the NVIDIA WMI driver is installed and the card appears in the dropdown. If I choose the P4, I get an 'out of VRAM' error (as expected). It seems that GPT4All uses either the CPU or the on-chip GPU or the P4 card or a graphics card, and not all of them (as LM Studio does - which is therefore faster). This should be the next point of development (from my point of view). If LM Studio develops a LocalDocs extension, then you will have a problem.

from gpt4all.

cebtenzzre avatar cebtenzzre commented on September 24, 2024

It seems that GPT4All uses either the CPU or the on-chip GPU or the P4 card or a graphics card, and not all of them

I believe that LM Studio uses the llama.cpp CUDA backend. The fastest way to run llama.cpp is always to use a single GPU - the fastest one available. And your integrated Intel GPU certainly isn't supported by the CUDA backend, so LM Studio can't use it. Since the CPU is universally slower than the GPU for LLMs, you should only split computation between CPU and GPU if you would otherwise run out of VRAM - you can do this in GPT4All by adjusting the per-model "GPU layers" setting.
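As a back-of-the-envelope illustration of how that "GPU layers" setting interacts with VRAM (all sizes below are made-up assumptions, not measurements of any real model):

```python
# Sketch: estimate how many transformer layers fit in VRAM.
# The per-layer size and overhead are illustrative assumptions.

def max_offloadable_layers(vram_gib: float, layer_gib: float,
                           overhead_gib: float = 1.0) -> int:
    """Layers that fit after reserving `overhead_gib` for KV cache etc."""
    usable = vram_gib - overhead_gib
    if usable <= 0:
        return 0
    return int(usable // layer_gib)

# Tesla P4 has 8 GiB of VRAM; assume 0.5 GiB per layer for a
# hypothetical 32-layer quantized model.
fit = max_offloadable_layers(vram_gib=8, layer_gib=0.5)
print(min(fit, 32))   # value to try for GPT4All's "GPU layers" option
# prints: 14
```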

The main reason that LM Studio would be faster than GPT4All when fully offloading is that the kernels in the llama.cpp CUDA backend are better optimized than the kernels in the Nomic Vulkan backend. This is something we intend to work on, but there are higher priorities at the moment.

from gpt4all.
