Comments (7)
I've merged #39 and released aqlm==1.1.0, where I got rid of the need to use aqlm.optimize_for_training(). Everything is determined automatically from here on.
from aqlm.
Sounds great! We will instruct users to use the latest AQLM in our training framework.
Hi @hiyouga !
It's, sadly, not properly documented yet, but you should do:

```python
import aqlm
from transformers import AutoModelForCausalLM

with aqlm.optimize_for_training():
    model = AutoModelForCausalLM.from_pretrained(...)
```
The thing is, there are a few ways to compute a forward pass: some of them work better for a very small number of tokens (e.g. generation), and some are optimized for large batch sizes (e.g. training). We're hoping to be able to determine which kernels to use dynamically in later versions of aqlm, but, for now, please use that wrapper explicitly. Also, keep in mind that a model loaded under that wrapper will be very slow on generation. We're working on making it a more pleasant experience!
@BlackSamorez Indeed! It's very important to me; I will try to fine-tune the model again following your advice. Thanks for pointing it out!
A bit more context: these are the speeds for a typical layer on an RTX 3090 GPU. We have a kernel for a single-token pass (generation), which is slightly faster than fp16, and we have a kernel that introduces a large but constant overhead over fp16, meaning it's asymptotically as fast as fp16.
| num_tokens (batch_size × seq_len) | with optimize_for_training, ms/pass | without optimize_for_training, ms/pass | fp16 baseline, ms/pass |
|---|---|---|---|
| 1 | 4.71 | 0.18 | 0.14 |
| 4 | 4.69 | 0.53 | 0.14 |
| 16 | 4.70 | 1.91 | 0.14 |
| 64 | 4.72 | 7.43 | 0.16 |
| 256 | 5.02 | too slow | 0.46 |
| 1024 | 6.14 | too slow | 1.57 |
| 4096 | 10.04 | too slow | 5.54 |
| 16384 | 25.68 | too slow | 21.15 |
As of now, we don't have a good enough kernel for anything between 4 and 4000 tokens processed in a pass. We're hoping to implement them someday.
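For reference, per-layer timings like the ones above can be collected with a simple harness along these lines (a sketch with a stand-in workload; `fake_layer_forward` is hypothetical, and on GPU you would also need `torch.cuda.synchronize()` around the timed region):

```python
import time

def ms_per_pass(fn, *args, warmup=3, iters=20):
    """Average wall-clock milliseconds per call, after a few warmup runs."""
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - start) * 1000.0 / iters

# Stand-in for a layer forward pass; swap in the real quantized layer
# (or an fp16 nn.Linear baseline) to reproduce a table like the one above.
def fake_layer_forward(num_tokens):
    return sum(i * i for i in range(num_tokens * 64))

for n in (1, 4, 16, 64):
    print(f"{n:>5} tokens: {ms_per_pass(fake_layer_forward, n):.3f} ms/pass")
```

The warmup runs matter on GPU, since the first calls pay for kernel compilation and cache effects that would otherwise skew the average.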
I see. Generation is much faster than training; it might also be related to the gradient-checkpointing technique used in training.
This issue is stale because it has been open for 30 days with no activity.