
Comments (6)

qwopqwop200 commented on July 20, 2024

here's the performance:

Method (LLaMA-7B)         BPW    Wikitext2 PPL
FP16                      16     5.68
RTN                       4.00   6.29
RTN                       3.00   25.54
GPTQ-128g                 4.15   5.85
GPTQ-128g                 3.15   6.61
AWQ-128g                  4.15   5.81
AWQ-128g                  3.15   6.46
AWQ-32g                   3.60   6.10
SpQR-3b-16g-3b-32g-0.4%   3.63   5.73

AWQ is more hardware-efficient and simpler to implement than SpQR, but its compression ratio seems to be worse than SpQR's.
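
For readers checking the BPW column: these numbers follow from the storage overhead of group quantization, where each group of weights shares one scale (and, for asymmetric schemes, a zero-point). A minimal sketch of that arithmetic, assuming an fp16 scale and a 4-bit zero-point per group (the actual packing formats may differ slightly):

```python
# Hypothetical helper: effective bits per weight under group quantization,
# assuming each group stores one fp16 scale and one 4-bit zero-point.
def bits_per_weight(wbits: int, group_size: int,
                    scale_bits: int = 16, zero_bits: int = 4) -> float:
    return wbits + (scale_bits + zero_bits) / group_size

print(bits_per_weight(4, 128))  # 4.156 -- close to the 4.15 BPW of the 4-bit 128g rows
print(bits_per_weight(3, 128))  # 3.156 -- close to the 3.15 BPW of the 3-bit 128g rows
print(bits_per_weight(3, 32))   # 3.625 -- near AWQ-32g's 3.60 (packing details differ)
```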


Sakits commented on July 20, 2024

Hi @loklok-infi,

Thanks for your interest in our work!
I think there are two potential reasons for the difference in the results:
(i) We used lm-eval-harness for evaluation, while GPTQ used their own evaluation implementation (please refer to their repo). There might be some differences in the experiment settings between them.
(ii) Regarding the 3/4-bit results, the numbers in GPTQ's paper are based on per-channel quantization without group quantization, whereas our results use group quantization with a group size of 128 (see the sketch below).
Hope this answers your question :)
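
To make point (ii) concrete, here is a minimal RTN-style sketch (not the actual llm-awq or GPTQ code) contrasting per-channel and group quantization: per-channel uses one scale per output channel, while a group size of 128 gives every contiguous run of 128 weights its own scale, which tracks local weight ranges much more closely at 3/4 bits:

```python
import torch

def rtn_quantize(w: torch.Tensor, n_bits: int, group_size: int = -1) -> torch.Tensor:
    """Round-to-nearest asymmetric quantize-dequantize sketch.
    group_size=-1 -> per-channel (one scale per output row);
    group_size=g  -> one scale per contiguous group of g weights."""
    out_ch, in_ch = w.shape
    g = in_ch if group_size == -1 else group_size
    wg = w.reshape(out_ch, in_ch // g, g)
    wmax = wg.amax(dim=-1, keepdim=True)
    wmin = wg.amin(dim=-1, keepdim=True)
    scale = (wmax - wmin).clamp(min=1e-5) / (2 ** n_bits - 1)
    zero = torch.round(-wmin / scale)
    q = torch.clamp(torch.round(wg / scale) + zero, 0, 2 ** n_bits - 1)
    return ((q - zero) * scale).reshape(out_ch, in_ch)

w = torch.randn(1024, 1024)
for gs in (-1, 128):  # per-channel vs. 128-group
    mse = (rtn_quantize(w, n_bits=3, group_size=gs) - w).pow(2).mean()
    print(f"group_size={gs}: reconstruction MSE {mse:.2e}")
```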


loklok-infi commented on July 20, 2024

> Thanks for your interest in our work! I think there are two potential reasons for the difference in the results: […] Hope this answers your question :)

Thank you for answering! Actually, what confuses me more is that the FP16 results also differ by ~10%, but as you said, I guess that comes from the difference between lm-eval-harness and GPTQ's own evaluation implementation.

I guess this is a problem for the whole community today; a similar discrepancy seems to have occurred between Hugging Face's LLM leaderboard and LLaMA's official results. Hopefully we will soon have a single standard benchmark implementation for LLMs.
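
For anyone trying to reconcile such numbers: Wikitext2 perplexity is sensitive to tokenization and windowing choices, and GPTQ-style evaluations typically score non-overlapping 2048-token chunks of the concatenated test set, while other harnesses may use different context lengths or sliding windows. A minimal sketch of the non-overlapping-chunk convention using Hugging Face transformers/datasets (the checkpoint name is illustrative):

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"  # illustrative checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Concatenate the test split, then score non-overlapping 2048-token windows.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids
seqlen, nlls = 2048, []
for i in range(ids.shape[1] // seqlen):
    chunk = ids[:, i * seqlen:(i + 1) * seqlen].to(model.device)
    with torch.no_grad():
        # labels=chunk -> HF returns mean next-token cross-entropy over the window
        nlls.append(model(chunk, labels=chunk).loss.float() * seqlen)
ppl = torch.exp(torch.stack(nlls).sum() / (len(nlls) * seqlen))
print(f"wikitext2 perplexity: {ppl.item():.2f}")
```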


loklok-infi commented on July 20, 2024

> here's the performance: […]

Thank you! It's very helpful 👍


JianbangZ commented on July 20, 2024

> here's the performance: […]

Hey, do you have a repo to benchmark all these quant methods?


Sravanth-k27 commented on July 20, 2024

> here's the performance: […]

> Hey, do you have a repo to benchmark all these quant methods?

Yes, code for this benchmark would be appreciated.

