Comments (8)
Hi @BlackSamorez, I ran your new kernels on a single A6000 and saw the same 30% speedup for 1x16, but I also got a speedup of around 40% for the 2x8. The 2x8 was already pretty blazing compared to the 1x16. Based on my timings I'm setting the breakpoint between the gemv path and the dequant+matmul path at (the pretty tiny) batch size 6 for the 2x8, and the same for the 1x16. My numbers are probably different from yours because I'm doing the scaling on the output; the 2x8 numbers are almost 70% worse when you scale the weights instead of the outputs. Here's a set of spreadsheets; check out the 2x8 and 1x16 pages. A red cell means that configuration is slower than the gemv kernel: https://docs.google.com/spreadsheets/d/1PYt90XEzUjTXJe_qVF4e3N2xnc951huamj_hPXdh_ZU/edit?usp=sharing
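Concretely, the dispatch looks something like this minimal PyTorch sketch. All names here (`dequantize_1x16`, `linear_aqlm`, the `gemv_kernel` handle) are illustrative stand-ins, not the actual vLLM or AQLM API:

```python
import torch

CROSSOVER_BATCH = 6  # batch size where dequant+matmul starts beating gemv, per the timings above

def dequantize_1x16(codes, codebook):
    # Toy stand-in for the fast dequant kernel: each 16-bit code selects one
    # 8-element codebook row, and the rows are stitched back into fp16 weights.
    # codes: (out_features, in_features // 8) int, codebook: (2**16, 8) fp16
    return codebook[codes].reshape(codes.shape[0], -1)

def linear_aqlm(x, codes, codebook, scales, gemv_kernel):
    # scales: (out_features, 1), applied to the *output* rather than the
    # weights, which is what makes the small-batch numbers above so much better
    if x.shape[0] <= CROSSOVER_BATCH:
        # tiny batches: a fused dequant+gemv kernel (hypothetical handle here)
        return gemv_kernel(x, codes, codebook) * scales.t()
    # larger batches: dequantize the whole matrix once, then a dense matmul
    weight = dequantize_1x16(codes, codebook)
    return (x @ weight.t()) * scales.t()
```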
from aqlm.
Hi @jaemzfleming!
First of all, great work! The amount of effort put into the vLLM integration is enormous, and so are its practical implications!
Secondly, I have also been working on faster dequantization kernels. I've got a 1x16 dequant implemented based on @efrantar's gemv kernels; it's in #55. I've also added your fast 1x16 dequant kernel there for comparison.
I also ran a small test, and I think mine is ~30% faster than the one in vLLM in some setups.
I'm very sorry for not bringing it up to you earlier. Hopefully we can figure out which one is better, and the amount of time spent on the redundant one won't have been too large.
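For reference, a kernel comparison like this is usually timed with CUDA events. A minimal harness along these lines (an illustrative sketch, not the exact script used) would be:

```python
import torch

def time_kernel(fn, *args, warmup=10, iters=100):
    # Warm up to exclude one-time compilation/caching costs, then measure
    # with CUDA events so the queued GPU work is timed, not just the launches.
    for _ in range(warmup):
        fn(*args)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn(*args)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # mean milliseconds per call
```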
from aqlm.
The 2x8 dequant is exactly the same speed in vLLM and in the implementation based on Elias's matvec code.
from aqlm.
Nice! I had hoped there could be further speed improvements. My back-of-the-envelope bandwidth calculations showed that my 1x16 was about half as fast as theoretically possible, which is still good, but I thought there might be some room left for improvement. I'll port the kernels from #55 back into my PR and see if our timings agree.
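For anyone who wants to reproduce the estimate: a dequant kernel is memory-bound, so its floor is bytes moved divided by peak DRAM bandwidth. A sketch with illustrative numbers (the layer shape is made up; 768 GB/s is the A6000's published peak memory bandwidth):

```python
# Back-of-the-envelope lower bound for a 1x16 dequant kernel (illustrative numbers).
out_features, in_features = 4096, 4096
codes_bytes = out_features * (in_features // 8) * 2   # one 16-bit code per group of 8 weights
weights_bytes = out_features * in_features * 2        # fp16 weights written back out
peak_bandwidth = 768e9                                # A6000 peak memory bandwidth, bytes/s
floor_us = (codes_bytes + weights_bytes) / peak_bandwidth * 1e6
print(f"theoretical floor: {floor_us:.0f} us")        # "half as fast" means ~2x this was achieved
```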
With so much interest and activity there's bound to be some overlap of effort; that's not a problem but a good thing, for sure.
from aqlm.
Hi @jaemzfleming, I've also incorporated post-matmul scaling, and it is indeed faster.
I've updated the tables and plots in #55, and I also see that 6 is around the optimal batch size at which to switch to matmul.
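To illustrate why moving the scaling after the matmul helps (illustrative PyTorch, not the kernel code itself): the scales are per output channel, so they factor out of the matmul, and scaling can touch the small output tensor instead of the full dequantized weight matrix:

```python
import torch

x = torch.randn(6, 4096, dtype=torch.float16, device="cuda")       # batch of 6
w = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")    # dequantized weights
scales = torch.rand(4096, 1, dtype=torch.float16, device="cuda")   # per-output-channel

# Scaling the weights: touches all out_features * in_features elements.
y_pre = x @ (w * scales).t()

# Scaling the output: mathematically identical, but only touches
# batch * out_features elements, hence faster at small batch sizes.
y_post = (x @ w.t()) * scales.t()

assert torch.allclose(y_pre, y_post, rtol=1e-2, atol=1e-2)  # equal up to fp16 rounding
```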
from aqlm.
Awesome! Nice to get the same numbers :) I'm now busy updating my plot in the vLLM PR with the new kernels...
from aqlm.
@BlackSamorez I'm also super happy to hear that it speeds up training as well. I had thought it would, but hadn't tried it. I know there were a few complaints/concerns about training times; hopefully that's a moot issue now.
from aqlm.
For efficient training, the only new kernels needed are transposed matmul kernels. Using a fast dequant kernel and then transposing is already much faster than what we had before, and implementing completely separate transposed matmul/dequant kernels would be very time-consuming.
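A sketch of what that looks like in practice (illustrative PyTorch; `dequantize` is a toy stand-in for the fast dequant kernel, and the real AQLM training code differs): the backward pass needs the transposed product grad_out @ W, which can reuse the fast dequant plus a regular dense matmul instead of a dedicated transposed kernel:

```python
import torch

def dequantize(codes, codebook):
    # Toy stand-in for the fast dequant kernel (see the sketches above).
    return codebook[codes].reshape(codes.shape[0], -1)

class AQLMLinear(torch.autograd.Function):
    # Minimal sketch: the quantized weights stay frozen here, so only the
    # gradient with respect to the activations is needed.
    @staticmethod
    def forward(ctx, x, codes, codebook, scales):
        ctx.save_for_backward(codes, codebook, scales)
        weight = dequantize(codes, codebook)
        return (x @ weight.t()) * scales.t()

    @staticmethod
    def backward(ctx, grad_out):
        codes, codebook, scales = ctx.saved_tensors
        # Re-dequantize (cheap with the fast kernel) and let a dense matmul
        # realize the transposed product, rather than maintaining a separate
        # transposed dequant/matmul kernel.
        weight = dequantize(codes, codebook)
        grad_x = (grad_out * scales.t()) @ weight
        return grad_x, None, None, None
```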
from aqlm.
Related Issues (20)
- Question: How to choose a dataset for quantizing with AQLM a model like Mistral 7b-Instruct v0.2 HOT 9
- aqlm/inference_kernels/cuda_kernel.cu compilation errors HOT 10
- DBRX Support HOT 6
- Any lightweight OpenAI API compatible HTTP server that can be used to serve AQLM models yet? HOT 1
- AQLM support for Cohere Command-R + HOT 3
- Request: AQLM quantization of the new Mixtral 8x22B HOT 4
- Fine-tune colab example doesn't work HOT 2
- Issues while attempting LLaMA-3 Quantization HOT 22
- Request: Phi-3-mini-128k-instruct Support HOT 2
- Query on Evaluation Support for C4 Validation HOT 5
- KV Cache Quantization HOT 4
- Minor race condition in CPU 2x8 inference code HOT 3
- Finetuning ISTA-DASLab/Mistral-7B-Instruct-v0.2-AQLM-2Bit-2x8: RuntimeError: CUDA error: invalid argument HOT 2
- Actual bitrate of models on github? HOT 5
- Request for the Llama-2-13B with AQLM (2x8 scheme) HOT 3
- How to run perplexity eval on HF hub models? HOT 3
- When loading Llama, AutoConfig raises an error HOT 2
- Request for Nvidia's RAG Implementation of Llama-3-70B "ChatQA 1.5" HOT 8
- Can you please share the *end-to-end* quantization script+config (including data used) for each model you've already quantized? (particularly llama-3 and miqu - i.e. 70B models) HOT 5
- FV tuning based on GPTQ HOT 5