
Comments (6)

Vahe1994 commented on July 22, 2024

Hi!
There are 2 different factors contributing to the mismatch.

  1. I believe the difference between your results and those in the paper is mainly due to differences in hyperparameters. The results reported in the paper were achieved with the following settings (assembled into a full command after this list): --nsamples=1024 --num_codebooks=1 --nbits_per_codebook=16 --in_group_size=8 --relative_mse_tolerance=0.01 --finetune_lr=1e-5 --finetune_adam_beta1=0.90 --finetune_adam_beta2=0.95 --finetune_keep_best --finetune_relative_mse_tolerance=0.001 --finetune_batch_size=32 --local_batch_size=4 --save save_path --wandb. Additionally, results may vary slightly from run to run due to randomness. For more details, please refer to Table 8 in the paper's appendix.

  2. The result of 5.92 from the README/HF was achieved through full fine-tuning on top of the obtained quantization; please see Appendix A and #30. The code for fine-tuning can be found in #50.
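
For convenience, here is the flag list from item 1 assembled into a runnable command. This is a sketch: it assumes main.py takes the same positional $MODEL_PATH and $DATASET_PATH arguments as the script shown later in this thread.

python main.py \
    $MODEL_PATH \
    $DATASET_PATH \
    --nsamples=1024 \
    --num_codebooks=1 \
    --nbits_per_codebook=16 \
    --in_group_size=8 \
    --relative_mse_tolerance=0.01 \
    --finetune_lr=1e-5 \
    --finetune_adam_beta1=0.90 \
    --finetune_adam_beta2=0.95 \
    --finetune_keep_best \
    --finetune_relative_mse_tolerance=0.001 \
    --finetune_batch_size=32 \
    --local_batch_size=4 \
    --save save_path \
    --wandb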

Hope this helps. If you have any additional questions, please feel free to ask.


deciding commented on July 22, 2024

Very clear, thx so much. I will try to reproduce it.


Godofnothing commented on July 22, 2024

@deciding The current Llama-2-7b checkpoint with wikitext2 ppl=5.91 was obtained as follows.

Quantization with blockwise finetuning yields 6.22 ppl. Compared to the version in the main branch, it adds early stopping on a validation set. The run (via main.py) used the following hyperparameters:

python main.py \
    $MODEL_PATH \
    $DATASET_PATH \
    --nsamples=2048 \
    --val_size=256 \
    --model_seqlen=4096 \
    --num_codebooks=1 \
    --nbits_per_codebook=16 \
    --in_group_size=8 \
    --out_group_size=1 \
    --relative_mse_tolerance=0.01 \
    --finetune_lr=1e-4 \
    --finetune_adam_beta1=0.90 \
    --finetune_adam_beta2=0.999 \
    --finetune_keep_best \
    --finetune_batch_size=8 \
    --finetune_max_epochs=20 \
    --finetune_early_stop=3 \
    --local_batch_size=4 \
    --offload_activations
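
As a rough sanity check (my arithmetic, not from the thread): these settings sit in the ~2-bit regime, since each group of in_group_size × out_group_size weights is encoded by num_codebooks codes of nbits_per_codebook bits each, ignoring the (small, amortized) codebook and scale overhead.

num_codebooks = 1
nbits_per_codebook = 16
in_group_size, out_group_size = 8, 1
# bits spent on codes per weight, ignoring codebook storage
bits_per_weight = num_codebooks * nbits_per_codebook / (in_group_size * out_group_size)
print(bits_per_weight)  # 2.0 -> the ~2-bit-per-weight setting discussed in the paper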

The final model was obtained via end-to-end finetuning (script finetune.py) from the model above with the following hyperparameters:

python finetune.py \
  --base_model $MODEL_PATH \
  --quant_model $INPUT_PATH \
  --dataset $DATASET_PATH \
  --nsamples=1024 \
  --val_size=256 \
  --lr=1e-5 \
  --adam_beta1=0.90 \
  --adam_beta2=0.999 \
  --epochs=5 \
  --early_stop=3 \
  --batch_size=8 \
  --microbatch_size=4 \
  \
  --temperature=1.0 \
  \
  --save $DATA_PATH \
  \
  --gradient_checkpointing
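
For intuition on the --temperature flag: end-to-end finetuning trains the quantized model to match the original fp16 model's predictions on the calibration data. A minimal sketch of a temperature-scaled distillation loss, assuming this is the kind of objective involved (the function name and signature are illustrative, not taken from finetune.py):

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # KL(teacher || student) between temperature-softened token distributions;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures
    T = temperature
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)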


deciding commented on July 22, 2024

@Godofnothing Really appreciate the tuning details! Also, may I know how many A100 GPU hours this finetuning recipe requires?


Godofnothing commented on July 22, 2024

@deciding I do not remember the exact numbers; I think the first part took about 1 day on 2 A100s and the second one about 6 hours on a single A100.
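
Tallying those figures (my arithmetic, approximate):

quantization_gpu_hours = 2 * 24  # ~1 day on 2 A100s for blockwise finetuning
e2e_gpu_hours = 1 * 6            # ~6 hours on a single A100 for end-to-end finetuning
print(quantization_gpu_hours + e2e_gpu_hours)  # ~54 A100 GPU-hours total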


deciding commented on July 22, 2024

@Godofnothing Cool. Thx a lot for the information 👍

