Comments (8)
In my opinion, using perplexity (ppl) as a metric might not be optimal due to its sensitivity to outliers, and a high perplexity score doesn't necessarily indicate poor model performance.
After thoroughly analyzing results from various weight-only algorithms, assessing perplexity on Wikitext-2, PTB-new, and C4-new with the GPTQ code and Wikitext ppl with lm-eval, we found that many algorithms, and even unquantized floating-point models (though rarely), run into this issue. We hypothesize that it stems from how perplexity is calculated (https://huggingface.co/docs/transformers/en/perplexity): even a single token assigned a very low probability can significantly inflate the score.
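To make that concrete, here is a minimal sketch (not from the original thread) of how one low-probability token skews the metric; perplexity is the exponential of the mean negative log-likelihood over all tokens:

```python
import math

# ppl = exp(-(1/N) * sum_i log p_i)
# (see https://huggingface.co/docs/transformers/en/perplexity)
def perplexity(token_probs):
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

probs = [0.5] * 100          # 100 tokens, each predicted with p = 0.5
print(perplexity(probs))     # 2.0

probs[0] = 1e-30             # a single extreme outlier token
print(perplexity(probs))     # ~3.96: one token nearly doubles the score
```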
Pass at least 128 examples of sufficient length to quantize()
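For reference, a minimal sketch of passing calibration examples to auto_gptq's quantize(), following the library's README pattern; the model id and calibration texts below are placeholders:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "facebook/opt-125m"  # placeholder model

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)

# At least 128 calibration examples of sufficient length (placeholders here).
calibration_texts = ["replace with real, reasonably long text"] * 128
examples = [tokenizer(text) for text in calibration_texts]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
model.quantize(examples)
model.save_quantized("opt-125m-4bit-gptq")
```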
#657 may be related.
I changed the calibration dataset to WikiText-2, and the ppl is now 6.62, whereas with C4 as the calibration dataset it was 17.5. The ppl now basically meets expectations. I didn't realize before that the calibration dataset had such a significant impact on quantization results. Do we need to quantize based on the actual dataset every time we use GPTQ?
@lyx-111111 Wikitext2 is just a generic dataset and isn't guaranteed to work best for all models. For best results, use a dataset the model was pretrained on. Unfortunately, the pretraining data is often a private/proprietary secret, so many people use wikitext2 as an alternative. So yes, you need to use or find the closest dataset for each model to get the best calibration result.
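As an illustration, a hedged sketch of sampling wikitext2 calibration data with the datasets library; the length filter and sample count are arbitrary choices, and you would swap in a dataset closer to the model's pretraining corpus when possible:

```python
import random
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")  # placeholder

# Keep only reasonably long rows, then sample 128 of them for calibration.
data = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
texts = [t for t in data["text"] if len(t) > 512]
random.seed(0)
examples = [tokenizer(t) for t in random.sample(texts, 128)]
# `examples` can then be passed to model.quantize(examples) as above.
```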
> I changed the calibration dataset to WikiText-2, and the ppl is now 6.62, whereas with C4 as the calibration dataset it was 17.5. The ppl now basically meets expectations. I didn't realize before that the calibration dataset had such a significant impact on quantization results. Do we need to quantize based on the actual dataset every time we use GPTQ?
Hi, have you evaluated the models on accuracy benchmarks like MMLU? I suspect there won't be significant differences. Using the same dataset for both calibration and evaluation could lead to overfitting, resulting in poor generalization on other tasks. We have already observed this issue with an 'SOTA' paper.
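If it helps, a sketch of running MMLU through lm-evaluation-harness's Python API; the exact function and argument names vary across harness versions, so treat this as an assumption rather than a recipe:

```python
import lm_eval  # lm-evaluation-harness (v0.4-style API assumed)

# Evaluate a (quantized) checkpoint on MMLU instead of relying on the
# calibration dataset's own perplexity.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/quantized-model",
    tasks=["mmlu"],
    batch_size=8,
)
print(results["results"]["mmlu"])  # aggregate MMLU accuracy
```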
> Using the same dataset for both calibration and evaluation could lead to overfitting, resulting in poor generalization on other tasks. We have already observed this issue with an 'SOTA' paper.

@wenhuach21 Do you have a link to the SOTA paper? I really want to check it out. Also, would a larger nsamples help with the overfitting during quantization?
> > Using the same dataset for both calibration and evaluation could lead to overfitting, resulting in poor generalization on other tasks. We have already observed this issue with an 'SOTA' paper.
>
> @wenhuach21 Do you have a link to the SOTA paper? I really want to check it out. Also, would a larger nsamples help with the overfitting during quantization?
I apologize for any confusion. What I meant is that some papers use WikiText for calibration and report only WikiText perplexity, which may look satisfactory but doesn't generalize well. Based on our data on Llama-2 at W4G128, increasing the sample size (128 -> 512) can sometimes improve average accuracy by an absolute 0.1-0.2, but it comes with roughly 1x additional tuning cost for GPTQ. We also find that increasing the sample size occasionally helps a lot in other scenarios.
In this context, what I want to emphasize is that perplexity is highly sensitive to outliers. A large perplexity value may be caused by a single low-probability token and doesn't necessarily indicate significant issues in real-world scenarios. We've observed this phenomenon across multiple quantization algorithms: perplexity appears high on certain datasets even though accuracy remains consistently good.
Related Issues (20)
- [PR Ready for Review] [FEATURE] Extend Support for Phi-3
- [FEATURE] Backport vllm expanded Marlin kernel to autogptq.
- [DEPRECATION] Discussion on Fused attention and QiGEN
- Llama-3 8B Instruct quantized to 8 Bit spits out gibberish in transformers `model.generate()` but works fine in vLLM?
- [BUG]safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
- [Question] Differences in quantization logic compared to AWQ
- [FEATURE] ADD SUPPORT DeepSeek-V2
- [BUG] ARM installation error
- [BUG] ROCm installation and building broken
- Target modules [] not found in the base model. Please check the target modules and try again.
- [BUG] Cannot install from source
- [BUG] Following the quant_with_alpaca.py example but keep getting "You shouldn't move a model that is dispatched using accelerate hooks." and the model is never saved.
- [FEATURE] Models that support MOE do GPTQ
- [FEATURE] Add marlin24 support
- How to select between different kernels?
- Question about data shape difference between quantization and forward
- [FEATURE] Added code support to 5,6,7 bits quantization can you please add me as contributor I will create a new pull request
- [BUG] Quantitative model Yi-1.5-9b-16K does not produce text output.
- How to install auto-gptq in GCC 8.5.0 environment?
- How to get a dequantized model?