Comments (9)
@remiconnesson I just wanted to mention that, it appears, somebody already did a quantization of Mistral-7B-Instruct-v0.2: https://huggingface.co/alpindale/Mistral-7B-Instruct-v0.2-AQLM-2Bit-1x16. I couldn't find any evaluation results for it, but it might be what you're looking for.
> Hope this helps, if you have further questions please don't hesitate to ask.

This is very helpful, thank you! No more questions for now (I think the budget and how many GPUs / how much time are needed can be derived from other discussions in the repo :))
> @remiconnesson I just wanted to mention that, it appears, somebody already did a quantization of Mistral-7B-Instruct-v0.2: https://huggingface.co/alpindale/Mistral-7B-Instruct-v0.2-AQLM-2Bit-1x16. I couldn't find any evaluation results for it, but it might be what you're looking for.

Also, it is not clear whether they performed global fine-tuning at the end, after quantization, or not.
> @remiconnesson I just wanted to mention that, it appears, somebody already did a quantization of Mistral-7B-Instruct-v0.2: https://huggingface.co/alpindale/Mistral-7B-Instruct-v0.2-AQLM-2Bit-1x16. I couldn't find any evaluation results for it, but it might be what you're looking for.
>
> Thanks! How could I evaluate the quality of the quantization myself? Should I use the same dataset as you used in the AQLM paper?
Yes. For PPL, it is recommended to use slices of the WikiText2 and C4 datasets. Please see this link for the code to load the data and calculate PPL after quantization.
For zero-shot evaluations, we used LM Eval Harness; specifically, we used a spring 2023 commit (to fix the version), available at this location. Instructions on how to use it can be found here. There, you should provide the path to the quantized model (before HF format conversion).
P.S. With some modifications, newer versions of LM Eval Harness can also be used to calculate zero-shot/few-shot evals.
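For reference, here is a minimal sketch of the kind of WikiText2 perplexity check described above, using standard transformers/datasets APIs. It assumes the quantized checkpoint has already been converted to HF format (with aqlm installed); the model path, window size, and stride are placeholders, not values from this thread.

```python
# Hedged sketch: sliding-window perplexity on the WikiText2 test split.
# The model path and window/stride sizes are illustrative placeholders.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/quantized-model-hf"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
model.eval()

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

max_len = stride = 4096  # roughly match the model_seqlen used at quantization time
nlls, n_tokens = [], 0
for begin in range(0, encodings.input_ids.size(1) - 1, stride):
    end = min(begin + max_len, encodings.input_ids.size(1))
    input_ids = encodings.input_ids[:, begin:end].to(model.device)
    with torch.no_grad():
        # labels == input_ids -> transformers computes the shifted causal-LM loss,
        # averaged over this window
        loss = model(input_ids, labels=input_ids).loss
    nlls.append(loss * (end - begin))
    n_tokens += end - begin

print("WikiText2 PPL:", torch.exp(torch.stack(nlls).sum() / n_tokens).item())
```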
> @remiconnesson I just wanted to mention that, it appears, somebody already did a quantization of Mistral-7B-Instruct-v0.2: https://huggingface.co/alpindale/Mistral-7B-Instruct-v0.2-AQLM-2Bit-1x16. I couldn't find any evaluation results for it, but it might be what you're looking for.
>
> Thanks! How could I evaluate the quality of the quantization myself? Should I use the same dataset as you used in the AQLM paper?
>
> Yes. For PPL, it is recommended to use slices of the WikiText2 and C4 datasets. Please see this link for the code to load the data and calculate PPL after quantization.
>
> For zero-shot evaluations, we used LM Eval Harness; specifically, we used a spring 2023 commit (to fix the version), available at this location. Instructions on how to use it can be found here. There, you should provide the path to the quantized model (before HF format conversion). P.S. With some modifications, newer versions of LM Eval Harness can also be used to calculate zero-shot/few-shot evals.

Thank you, this is very useful! I'm going to try this out :)
Hello!
It is recommended to calibrate on the same data that the model was trained/fine-tuned on. In the case of Mistral Instruct v2, if I'm not mistaken, this information is omitted. Because of that, for Instruct-type models we used https://huggingface.co/datasets/mosaicml/dolly_hhrlhf. But you may find something better.
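For illustration only, a minimal sketch of assembling instruction-style calibration text from that dataset (this is not the loader main.py uses; the column names "prompt"/"response" and the sample counts are assumptions, not values confirmed in this thread):

```python
# Hedged sketch: build calibration samples from mosaicml/dolly_hhrlhf.
# Column names ("prompt", "response") and nsamples/seqlen are illustrative assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
data = load_dataset("mosaicml/dolly_hhrlhf", split="train").shuffle(seed=0)

nsamples, seqlen = 1024, 4096  # see the sample-count / context-length advice below
calib = []
for row in data:
    text = row["prompt"] + row["response"]              # assumed column names
    ids = tokenizer(text, return_tensors="pt").input_ids
    calib.append(ids[:, :seqlen])
    if len(calib) >= nsamples:
        break
```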
As for creating a good quantized model with AQLM, here are some recommendations:
- More data often leads to better results. It is recommended to use 1k+ samples with a 4k+ context length for Mistral/Mixtral (we used 1k samples with an 8k context length for Mistral v0.1, and 2k samples with a 4k context length for Llama 2). Beyond 1k samples the performance boost is negligible; please see the ablation study on this in the AQLM paper.
- After calibration, to enhance results, consider doing global fine-tuning for a few epochs (one epoch is often sufficient). See the fine-tuning code for more details: https://github.com/Vahe1994/AQLM/blob/main/finetune.py
- Decide what is more important to you, quality or speed. For quality, use 1x16 with group size 8; for speed, use 2x8 with group size 8 (for an average 2-bit setup). For AQLM-quantized Llama-2-7b, the 1x16 configuration with a group size of 8 gives a WikiText2 perplexity (PPL) of 5.92, while the 2x8 configuration with the same group size gives a WikiText2 PPL of 6.69. For more details on inference speed for both 1x16 and 2x8 configurations, see https://github.com/Vahe1994/AQLM?tab=readme-ov-file#inference-kernels.
- Don't forget to convert the final quantized model to HF format before inference: https://github.com/Vahe1994/AQLM/blob/main/convert_to_hf.py (a loading sketch follows the example below).
- Tweaking parameters like the learning rate (lr), relative_mse_tolerance, and finetune_lr may lead to slightly better results.
- To check the quality of the quantized model, look not only at perplexity but also at LM Eval Harness results.
Example:
For the Mistral-v1 model, we used this set of parameters to calibrate the model:
```bash
python main.py \
  $MODEL_PATH \
  $DATASET_PATH \
  --nsamples=1024 \
  --val_size=128 \
  --model_seqlen=8192 \
  --num_codebooks=1 \
  --nbits_per_codebook=16 \
  --in_group_size=8 \
  --out_group_size=1 \
  --relative_mse_tolerance=0.01 \
  --finetune_lr=1e-4 \
  --finetune_adam_beta1=0.90 \
  --finetune_adam_beta2=0.999 \
  --finetune_keep_best \
  --finetune_batch_size=8 \
  --finetune_max_epochs=10 \
  --finetune_early_stop=3 \
  --local_batch_size=1 \
  --offload_activations \
  --save $DATA_PATH \
  --wandb
```
This gave around 5.78 PPL on WikiText2.
We then performed global fine-tuning on the quantized model with the script below:
```bash
python finetune.py \
  --base_model $MODEL_PATH \
  --quant_model $INPUT_PATH \
  --dataset $DATASET_PATH \
  --model_seqlen=8192 \
  --eval_datasets wikitext2 \
  --nsamples=1024 \
  --val_size=128 \
  --lr=1e-5 \
  --adam_beta1=0.90 \
  --adam_beta2=0.999 \
  --epochs=1 \
  --early_stop=3 \
  --batch_size=8 \
  --microbatch_size=1 \
  --temperature=1.0 \
  --save $DATA_PATH \
  --gradient_checkpointing \
  --amp \
  --wandb
```
After one epoch of global fine-tuning, we got 5.40 PPL on WikiText2.
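Once the fine-tuned checkpoint has been converted with convert_to_hf.py, it can be loaded straight through transformers with the aqlm extra installed. A minimal sketch; the model path is a placeholder and the prompt is only an example, neither comes from this thread:

```python
# Hedged sketch: loading an AQLM-quantized model after HF-format conversion.
# "path/to/converted-model" is a placeholder; requires `pip install aqlm[gpu]`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/converted-model"  # output directory of convert_to_hf.py (placeholder)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Mistral-Instruct-style prompt, used here purely as an example
prompt = "[INST] Summarize AQLM in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```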
Hope this helps. If you have further questions, please don't hesitate to ask.
> @remiconnesson I just wanted to mention that, it appears, somebody already did a quantization of Mistral-7B-Instruct-v0.2: https://huggingface.co/alpindale/Mistral-7B-Instruct-v0.2-AQLM-2Bit-1x16. I couldn't find any evaluation results for it, but it might be what you're looking for.

Thanks! How could I evaluate the quality of the quantization myself? Should I use the same dataset as you used in the AQLM paper?