Comments (8)
I also have this question
I met this error too. How did you solve it? Thanks!
I gave up; AutoAWQ is more convenient: https://github.com/casper-hansen/AutoAWQ
from llm-awq.
I copied the .so file to the current directory and tried again:
awesome!
https://stackoverflow.com/questions/65710713/importerror-libc10-so-cannot-open-shared-object-file-no-such-file-or-director
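For reference, the `libc10.so` error in that Stack Overflow thread usually comes up because the library ships inside the installed torch package rather than in a system location. Instead of copying the `.so` around, one can point the loader at torch's bundled `lib` directory; here is a minimal sketch (the helper name `torch_lib_dir` is hypothetical, for illustration only):

```python
import importlib.util
import os

def torch_lib_dir():
    """Return the directory that bundles libc10.so, or None if torch is
    not installed. (Hypothetical helper, for illustration only.)"""
    spec = importlib.util.find_spec("torch")
    if spec is None or spec.origin is None:
        return None
    # PyTorch ships libc10.so under <site-packages>/torch/lib
    return os.path.join(os.path.dirname(spec.origin), "lib")

print(torch_lib_dir())
```

Adding that directory to `LD_LIBRARY_PATH` before launching lets the dynamic loader resolve `libc10.so` without copying the file next to the extension.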
I also get this error when I try to run the CLI after installing AWQ per the instructions:
(env-fastchat-awq)Russells-MBP:llm-awq $ python3 -m fastchat.serve.cli --model-path models/git-vicuna-7b-awq/ --awq-wbits 4 --awq-groupsize 128
Loading AWQ quantized model...
Error: Failed to import tinychat. No module named 'awq_inference_engine'
Please double check if you have successfully installed AWQ
See https://github.com/lm-sys/FastChat/blob/main/docs/awq.md
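That message just means Python cannot find the compiled extension module that the `awq/kernels` build step (`python setup.py install`) is supposed to install. A quick sanity check (a sketch, not FastChat's actual code) is to probe for the module the error names:

```python
import importlib.util

# llm-awq's kernel build installs a compiled extension module named
# awq_inference_engine; FastChat's tinychat import fails when it is absent.
def awq_kernels_installed() -> bool:
    return importlib.util.find_spec("awq_inference_engine") is not None

print(awq_kernels_installed())
```

If this prints `False`, the kernel build never completed in the environment FastChat is running in.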
Does FastChat only work if one has an Nvidia GPU? I do not have one. My understanding is that CUDA is Nvidia-only, which is why I assumed the following error occurred for me while installing:
(env-fastchat-awq)Russells-MBP:llm-awq $ cd awq/kernels/
(env-fastchat-awq)Russells-MBP:kernels $ python setup.py install
Traceback (most recent call last):
  File "/Volumes/ExtremePro/fastchat-awq/FastChat/repositories/llm-awq/awq/kernels/setup.py", line 35, in <module>
    CUDAExtension(
  File "/Volumes/ExtremePro/fastchat-awq/env-fastchat-awq/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1076, in CUDAExtension
    library_dirs += library_paths(cuda=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/ExtremePro/fastchat-awq/env-fastchat-awq/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1203, in library_paths
    if (not os.path.exists(_join_cuda_home(lib_dir)) and
    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/ExtremePro/fastchat-awq/env-fastchat-awq/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 2416, in _join_cuda_home
    raise OSError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
(env-fastchat-awq)Russells-MBP:kernels $
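For context, torch's `CUDAExtension` build helper locates the toolkit roughly like this (a simplified sketch of the fallback chain in `torch.utils.cpp_extension`, not its exact code). On a Mac with no Nvidia GPU, every step comes up empty, which is why the build raises the `OSError` above; the AWQ CUDA kernels cannot be built without an Nvidia CUDA toolkit.

```python
import os
import shutil

def find_cuda_home():
    """Simplified sketch of the lookup torch's cpp_extension performs."""
    # 1. An explicit CUDA_HOME (or CUDA_PATH) environment variable wins.
    home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
    if home:
        return home
    # 2. Otherwise derive the toolkit root from an nvcc found on PATH.
    nvcc = shutil.which("nvcc")
    if nvcc:
        return os.path.dirname(os.path.dirname(nvcc))
    # 3. Fall back to the conventional Linux install location.
    if os.path.exists("/usr/local/cuda"):
        return "/usr/local/cuda"
    return None  # nothing found: the build raises the OSError above

print(find_cuda_home())
```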
Where did you get the .so file you copied?

OK, thanks!