
Comments (15)

marella commented on May 29, 2024

Can you please try reinstalling ctransformers with CUDA enabled:

pip uninstall ctransformers --yes
CT_CUBLAS=1 pip install ctransformers --no-binary ctransformers

from chatdocs.

TheFinality commented on May 29, 2024

Now I think it only uses the GPU for a few seconds, then stops. Why is that? After that it just hovers around 1-2% usage.
[screenshot: Task Manager GPU usage]


LazyCat420 commented on May 29, 2024

Running into the same problem as well. I followed the instructions above and got it to run, but it uses the GPU for only a second, then the CPU usage climbs to 40%. I'm running a 3090 Ti.

Tried setting it with set CT_CUBLAS=1, but it still didn't seem to work.

Here is the yml

llm: ctransformers

ctransformers:
  model: F:\AI_Scripts\chatdocs\model\wizard-mega-13B.ggmlv3.q8_0.bin
  model_file: wizard-mega-13B.ggmlv3.q8_0.bin
  model_type: llama
  config:
    context_length: 1024
    gpu_layers: 100

embeddings:
  model_kwargs:
    device: cuda


marella commented on May 29, 2024

Did you notice any performance drop if you don't set gpu_layers?

Recently llama.cpp added full GPU acceleration (ggerganov/llama.cpp#1827), which was added to ctransformers 0.2.9 today. Can you please try updating ctransformers to 0.2.9 and see if there is any improvement in speed:

CT_CUBLAS=1 pip install 'ctransformers>=0.2.9' --no-binary ctransformers

Note: If you are on Windows, you should run set CT_CUBLAS=1 (Command Prompt) or $env:CT_CUBLAS=1 (PowerShell) before running pip install.

Also try setting threads: 1 in your config:

ctransformers:
  config:
    threads: 1


LazyCat420 commented on May 29, 2024

I reinstalled ctransformers 0.2.9, but it only worked when I removed the quotes. I tried fixing it in both conda and venv and ran into the same issues. I'm running CUDA toolkit 11.8 with driver version 12.2. I set threads to 1 and removed gpu_layers, and now it's basically doing nothing: CPU is at 3% and GPU is at 1%.

Here is how I installed it in venv and conda:

set CT_CUBLAS=1
pip install ctransformers>=0.2.9 --no-binary ctransformers

When I check pip list, it's there as ctransformers 0.2.9.

This is the yml:

ctransformers:
  model: F:\AI_Scripts\chatdocs\model\wizard-mega-13B.ggmlv3.q8_0.bin
  model_file: wizard-mega-13B.ggmlv3.q8_0.bin
  model_type: llama
  config:
    context_length: 1024
    threads: 1


marella commented on May 29, 2024

You should set gpu_layers also. I just wanted to see if there is any performance difference when you don't set gpu_layers. So you can add it back.

ctransformers:
  config:
    gpu_layers: 100
    threads: 1

By setting both gpu_layers and threads: 1, it should utilize the GPU more.


TheFinality commented on May 29, 2024

Did this, and same as @LazyCat420: it barely uses the CPU now, and it doesn't use the GPU either.


nilvaes commented on May 29, 2024

> Now I think it only uses the GPU for a few seconds, then stops. Why is that? After that it just hovers around 1-2% usage.

Here in the GPU tab you can see that it's using your GPU VRAM (dedicated GPU memory usage). I think that's the important part. If you use only the CPU, the GPU VRAM line will stay the same, but if you use the GPU, that line will go higher, as in the screenshot.

In short, it's working, if that's what's expected.
When I used only my CPU I got answers in 70 seconds, but with the GPU I got answers in 35-37 seconds. (It might still be slow; I have low specs.)


TheFinality commented on May 29, 2024

But shouldn't the utilization also go up? In that picture it is still at 1%.


LazyCat420 commented on May 29, 2024

I got the GPU to work with GPTQ; I'd suggest trying that if you haven't yet. It was using 12GB of VRAM and 95% of the card the whole time. I just followed the instructions to install chatdocs with GPTQ and it worked. The only issue I ran into was that I had to reinstall protobuf 3.2 in order to download the model.

This was the yml

gptq:
  model: TheBloke/Nous-Hermes-13B-GPTQ
  model_file: nous-hermes-13b-GPTQ-4bit-128g.no-act.order.safetensors
  device: 0

llm: gptq


nilvaes commented on May 29, 2024

Hey @TheFinality,
So I wanted to be sure and downloaded MSI Afterburner (software that lets you check the usage and temperature of your CPU, GPU, and much more).

In this first picture I haven't started the prompt, so the numbers in the bottom-right corner are low (red is GPU usage; yellow is CPU1 usage; green is CPU1 temperature).
[screenshot: MSI Afterburner overlay, idle]

After entering the prompt, it has started processing:
[screenshot: MSI Afterburner overlay, during processing]

I don't know why, but Task Manager doesn't show what percentage of the GPU is used when I use chatdocs; when I checked with MSI Afterburner, I saw that the red number (GPU usage) stayed high while processing, at 73-60-53, etc.

If you're as curious as me, you can also try the software (it's free from MSI) or any other app you want and check for yourself. And/or you can try what @LazyCat420 did.

I'm just explaining what I've learned and experienced as I tested. Hope it helps!


marella commented on May 29, 2024

Thanks @nilvaes for the explanation.

I suggest simply looking at the response generation speed instead of the GPU usage numbers.

Try out both the CPU and GPU configs and see which gives better performance for your system.

CPU config:

ctransformers:
  config:
    gpu_layers: 0
    threads: 4 # set it to the number of physical cores your CPU has

GPU config:

ctransformers:
  config:
    gpu_layers: 100
    threads: 1

You can also try other models like GPTQ as LazyCat420 mentioned and pick the one that works best for your system.
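Comparing the two configs can be made a little more systematic than eyeballing usage graphs. Here is a minimal timing sketch; `generate` is a stand-in for whatever callable wraps the model in your setup (the helper name and structure are illustrative, not part of chatdocs or ctransformers):

```python
import time

def time_generation(generate, prompt, runs=3):
    """Average wall-clock seconds per call over `runs` calls."""
    start = time.perf_counter()
    for _ in range(runs):
        generate(prompt)
    return (time.perf_counter() - start) / runs
```

Run it once with the model loaded under the CPU config and once under the GPU config, with the same prompt, and keep whichever config gives the lower average.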


Ananderz commented on May 29, 2024

I finally figured out how to run GGML using the GPU.

I had the same issue as all of you where GPU would be at 0-1% use.

I am on Windows 10:

What I did was the following:

  1. Installed visual studio community 2022 and added both C++ packages and python packages.
  2. pip uninstall torch torchvision
  3. pip cache purge
  4. Went to the pytorch website and found my install command based on operating system and added CUDA version 11.8
  5. pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 (this will be based on your operating system, get this from the pytorch website)
  6. pip uninstall ctransformers --yes
  7. Because I am on Windows, the command listed in the documentation didn't work, so I had to do it this way:
  8. set CT_CUBLAS=1
  9. pip install ctransformers --no-binary ctransformers
  10. Add the following to the config of ctransformers:
    config:
      context_length: 2048
      gpu_layers: 100
      threads: 1
      max_new_tokens: 512
      temperature: 0.1
  11. Add the following under embeddings:
    embeddings:
      model_kwargs:
        device: cuda
  12. Run the setup for chatdocs again
  13. chatdocs download
  14. chatdocs ui
  15. It's live with GPU GGML support for both the LLM and Embeddings

Note: you can remove both the max_new_tokens and temperature settings from the config. It now works with GGML, and GPU usage and memory max out!
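The PyTorch part of the steps above (2-5) can be sanity-checked by asking torch itself whether it sees the GPU. A minimal sketch; the helper name is made up, but the torch.cuda calls are the standard ones:

```python
def cuda_status():
    """Report whether PyTorch was installed with a working CUDA build."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if not torch.cuda.is_available():
        return "torch is installed, but CUDA is not available (CPU-only build or driver issue)"
    return f"CUDA OK: torch {torch.__version__}, device {torch.cuda.get_device_name(0)}"

print(cuda_status())
```

If this reports a CPU-only build, the torch reinstall from the cu118 index (step 5) did not take effect.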

Hope this helps


csocha commented on May 29, 2024

Ugh. I followed these steps but no matter what I do I get this error. The file is there. I did get the GPU working well with oobabooga, but not with this install. It also couldn't find pydantic, which it could once I copied it over to the \chatdocs folder. Something very weird is going on and I'm not sure what to do. If I run without CUDA, it works fine, just slowly.

FileNotFoundError: Could not find module
'C:\Users\curti\AppData\Local\Programs\Python\Python310\Lib\site-packages\ctransformers\lib\cuda\ctransformers.dll' (or one of
its dependencies). Try using the full path with constructor syntax


marella commented on May 29, 2024

Please run the following command and post the output:

pip show ctransformers nvidia-cuda-runtime-cu12 nvidia-cublas-cu12

Make sure you have installed the CUDA libraries using:

pip install ctransformers[cuda]
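The missing-DLL error above usually means the CUDA variant of the native library was never built or installed. A small stdlib-only sketch to list whatever native files a package actually shipped; the `lib/` layout it assumes comes from the error message in this thread, not from a documented API:

```python
import importlib.util
import pathlib

def native_libs(package):
    """List files under <package>/lib, or [] if the package or directory is missing."""
    spec = importlib.util.find_spec(package)
    if spec is None or not spec.submodule_search_locations:
        return []
    lib_dir = pathlib.Path(next(iter(spec.submodule_search_locations))) / "lib"
    if not lib_dir.is_dir():
        return []
    return sorted(str(p) for p in lib_dir.rglob("*") if p.is_file())

print(native_libs("ctransformers") or "no ctransformers lib/ files found")
```

If no file under lib\cuda\ shows up, the wheel installed was the CPU-only build, and reinstalling with the CUDA flags is the fix to try.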

