Comments (5)
Hello!
Thank you for your interest in the work.
In main.py there is code for calculating perplexity (PPL) on the C4 validation dataset:
main.py, line 896 at commit 0e57afb
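For reference, here is a minimal sketch of this kind of GPTQ-style C4 perplexity evaluation, assuming a Hugging Face model and the usual single C4 validation shard. It is illustrative only, not the exact main.py code:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # illustrative; main.py takes the model as an argument
seqlen = 8192

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Tokenize a slice of the C4 validation shard into one long stream,
# then score non-overlapping seqlen-sized windows.
valdata = load_dataset(
    "allenai/c4",
    data_files={"validation": "en/c4-validation.00000-of-00008.json.gz"},
    split="validation",
)
enc = tokenizer("\n\n".join(valdata["text"][:1100]), return_tensors="pt")

nlls = []
n_chunks = enc.input_ids.shape[1] // seqlen
for i in range(n_chunks):
    ids = enc.input_ids[:, i * seqlen : (i + 1) * seqlen].to(model.device)
    with torch.no_grad():
        # labels=ids makes transformers shift the targets and return the mean NLL
        loss = model(ids, labels=ids).loss
    nlls.append(loss.float() * seqlen)

ppl = torch.exp(torch.stack(nlls).sum() / (n_chunks * seqlen))
print(f"C4 ppl: {ppl.item():.3f}")
```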
Hello!
Sorry for the late answer. We calculate PPL on a split of C4; the code comes from GPTQ.
Regarding the different PPL score on C4: I'm not sure why you are getting 6.94 PPL. I reran the PPL calculation and got 5.646. It is a little off (probably something changed slightly in the model, data, or dependencies), but not by that much.
Can you please check whether you are using Mistral-7B-v0.1 or the instruct version?
Can you please try to run the code from the main branch with this command:
```shell
CUDA_VISIBLE_DEVICES=0 OMP_NUM_THREADS=16 MKL_NUM_THREADS=16 python main.py "mistralai/Mistral-7B-v0.1" "pajama" --max_epochs=15 --nsamples=256 --num_codebooks=1 --nbits_per_codebook=15 --in_group_size=8 --scale_nbits=0 --model_seqlen=8192 --offload_activations --no_quant
```
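A quick way to rule out a checkpoint mix-up before rerunning (a sketch; the local path is hypothetical, and the chat-template check is only a heuristic):

```python
from transformers import AutoConfig, AutoTokenizer

path = "pretrained/Mistral-7B"  # hypothetical local path; use whatever you pass to main.py

cfg = AutoConfig.from_pretrained(path)
tok = AutoTokenizer.from_pretrained(path)

print(cfg._name_or_path)   # where the config was actually resolved from
print(cfg.architectures)   # e.g. ['MistralForCausalLM']
# Heuristic: instruct checkpoints usually ship a chat template, base ones usually don't.
print(getattr(tok, "chat_template", None))
```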
Thanks for your timely reply.
After running main.py in no_quant mode, I noticed that only 256 samples were drawn to calculate PPL on the C4 validation dataset, and I got a final PPL of 6.63.
As I want to reproduce the baseline result of 6.63 on C4 in the first row of your Tables 1 & 2, I wonder whether this result was also obtained with 256 samples or with the whole validation dataset. If the whole validation dataset is needed, how should I modify the sampling code in the get_c4() function of datautils.py?
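One way to answer that last question: a minimal sketch of a get_c4()-style loader that tokenizes the whole validation shard and cuts it into non-overlapping windows, instead of randomly sampling a fixed number of sequences (the function name and details are illustrative, not the exact datautils.py code):

```python
from datasets import load_dataset

def get_c4_val_full(tokenizer, seqlen):
    """Tokenize the whole C4 validation shard and cut it into non-overlapping
    seqlen-sized chunks, instead of randomly sampling 256 sequences."""
    valdata = load_dataset(
        "allenai/c4",
        # one of eight validation files; loop over all shards for the full split
        data_files={"validation": "en/c4-validation.00000-of-00008.json.gz"},
        split="validation",
    )
    enc = tokenizer("\n\n".join(valdata["text"]), return_tensors="pt")
    n_chunks = enc.input_ids.shape[1] // seqlen
    return [enc.input_ids[:, i * seqlen : (i + 1) * seqlen] for i in range(n_chunks)]
```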
Dear authors,
By using the no_quant mode to evaluate the Llama-2-7b-hf and Mistral-7B models, I got the following results:
- Llama-2-7b-hf (model_seqlen=4096, no_quant): ppl=6.63, which is consistent with the results in the paper.
- Mistral-7B (model_seqlen=8192, no_quant): ppl=6.94. It is weird that Mistral-7B performs worse than Llama-2-7b-hf, and this is inconsistent with the result of 5.71 reported in your Table 17. I present the command below; can you help me figure out the difference?
```shell
export CUDA_VISIBLE_DEVICES=4,5,6,7  # optional: select GPUs
export MODEL_PATH=pretrained/Mistral-7B  # or Llama-2-7b-hf
export DATASET_PATH=dataset/c4
export SAVE_PATH=None

# note: use --model_seqlen=4096 for Llama-2
python main.py $MODEL_PATH $DATASET_PATH \
    --model_seqlen=8192 \
    --nsamples=1024 \
    --val_size=128 \
    --num_codebooks=1 \
    --nbits_per_codebook=16 \
    --in_group_size=8 \
    --relative_mse_tolerance=0.01 \
    --finetune_batch_size=32 \
    --finetune_max_epochs=0 \
    --finetune_early_stop=3 \
    --finetune_keep_best \
    --local_batch_size=1 \
    --offload_activations \
    --no_quant \
    --save $SAVE_PATH
```
Thanks for your advice. I obtained the correct PPL score after fixing the version of Mistral-7B-v0.1 I was using.
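For anyone hitting the same mismatch: a minimal sketch of pinning the exact base (non-instruct) checkpoint so the weights cannot silently change between runs. The revision value is whatever hub commit you want to freeze; "main" is just the default:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Replace "main" with a specific commit hash from the model's hub page
# to freeze the weights for good.
name = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(name, revision="main", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(name, revision="main")
```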