Comments (10)

Liangtaiwan commented on July 30, 2024

@maksym-del, @sebastianruder
If you use scripts/train_qa.sh and scripts/predict_qa.sh, you should remove the --do_lower_case argument yourself.
After removing the argument, I get results almost identical to the numbers reported in the paper.

Lines 53 and 63:

CUDA_VISIBLE_DEVICES=$GPU python third_party/run_squad.py \
--model_type ${MODEL_TYPE} \
--model_name_or_path ${MODEL} \
--do_lower_case \
--do_train \
--do_eval \
--train_file ${TRAIN_FILE} \
--predict_file ${PREDICT_FILE} \
--per_gpu_train_batch_size 4 \
--learning_rate ${LR} \
--num_train_epochs ${NUM_EPOCHS} \
--max_seq_length $MAXL \
--doc_stride 128 \
--save_steps -1 \
--overwrite_output_dir \
--gradient_accumulation_steps 4 \
--warmup_steps 500 \
--output_dir ${MODEL_PATH} \
--weight_decay 0.0001 \
--threads 8 \
--train_lang en \
--eval_lang en

CUDA_VISIBLE_DEVICES=${CUDA} python third_party/run_squad.py \
--model_type ${MODEL_TYPE} \
--model_name_or_path ${MODEL_PATH} \
--do_eval \
--do_lower_case \
--eval_lang ${lang} \
--predict_file "${TEST_FILE}" \
--output_dir "${PRED_DIR}" &> /dev/null

sebastianruder commented on July 30, 2024

Hi Max,
Thanks for your interest. For training BERT models on the QA tasks, we actually used the original BERT codebase as that was faster with Google infrastructure (see Appendix B in the paper). I'll check that the same results can be obtained with Transformers and will get back to you.

MaksymDel commented on July 30, 2024

Thanks, Sebastian!

It would be interesting to see which differences in hyperparameters caused such a gap. I can immediately see several choices hardcoded in Google's codebase that differ from what is passed in the Transformers version:

  1. linear learning rate decay in transformers vs polynomial lr decay in google's script
  2. weight_decay=0.0001 in transformers vs weight_decay_rate=0.01 in google's script
  3. adam epsilon=1e-8 in transformers vs 1e-6 in google's script

So unless you manually changed these values in Google's script, these are some of the notable differences.
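
If someone wants to try matching the latter two from the Transformers side, run_squad.py already exposes --weight_decay and --adam_epsilon flags, so the training call from scripts/train_qa.sh could be adjusted as in the sketch below (not what the paper used; the learning-rate schedule is hardcoded to linear in run_squad.py and would need a code change to become polynomial; --do_lower_case is dropped as discussed above):

# Sketch: scripts/train_qa.sh invocation with weight decay and Adam epsilon set to the
# values hardcoded in Google's BERT script. All other arguments are unchanged.
CUDA_VISIBLE_DEVICES=$GPU python third_party/run_squad.py \
  --model_type ${MODEL_TYPE} \
  --model_name_or_path ${MODEL} \
  --do_train \
  --do_eval \
  --train_file ${TRAIN_FILE} \
  --predict_file ${PREDICT_FILE} \
  --per_gpu_train_batch_size 4 \
  --learning_rate ${LR} \
  --num_train_epochs ${NUM_EPOCHS} \
  --max_seq_length $MAXL \
  --doc_stride 128 \
  --save_steps -1 \
  --overwrite_output_dir \
  --gradient_accumulation_steps 4 \
  --warmup_steps 500 \
  --output_dir ${MODEL_PATH} \
  --weight_decay 0.01 \
  --adam_epsilon 1e-6 \
  --threads 8 \
  --train_lang en \
  --eval_lang en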

Meanwhile, I can also confirm that the issue only affects mBERT: for XLM-R I got the following average numbers, 76.7 / 61.0, which is on par with the 76.6 / 60.8 reported in the paper.

sebastianruder commented on July 30, 2024

Thanks for the note, Max. Yes, these settings probably explain the difference in performance.
Yes, for XLM-R we went with the implementation (and the default hyper-parameters) in Transformers, so this should work out-of-the-box as expected.

Liangtaiwan commented on July 30, 2024

Here are the results I got on XQuAD

XQuAD
  en {"exact_match": 72.18487394957984, "f1": 84.05491660467752}
  es {"exact_match": 56.63865546218487, "f1": 75.50683844229154}
  de {"exact_match": 58.23529411764706, "f1": 73.97330302393942}
  el {"exact_match": 47.73109243697479, "f1": 64.71526367876008}
  ru {"exact_match": 54.285714285714285, "f1": 70.85210687094488}
  tr {"exact_match": 39.15966386554622, "f1": 54.04959679389641}
  ar {"exact_match": 47.39495798319328, "f1": 63.42460795613208}
  vi {"exact_match": 50.33613445378151, "f1": 69.39497841433942}
  th {"exact_match": 32.94117647058823, "f1": 42.04649738683358}
  zh {"exact_match": 48.99159663865546, "f1": 58.25216753368008}
  hi {"exact_match": 44.95798319327731, "f1": 58.764676794694026}

hit-computer commented on July 30, 2024

@Liangtaiwan Hi, I can only find test data in the download/xquad folder, and it has no labels. How did you get the above results on XQuAD? Thanks :)

Liangtaiwan commented on July 30, 2024

@hit-computer You can find the labeled data here: https://github.com/deepmind/xquad
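
For example, predictions can then be scored with the official SQuAD v1.1 evaluation script (a sketch; evaluate-v1.1.py has to be obtained separately, and predictions_es.json stands for whichever prediction file run_squad.py wrote for that language):

# Get the gold XQuAD files (SQuAD v1.1 format, with answers).
git clone https://github.com/deepmind/xquad.git

# Score one language; this prints a dict like {"exact_match": ..., "f1": ...}.
python evaluate-v1.1.py xquad/xquad.es.json predictions_es.json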

hit-computer commented on July 30, 2024

@Liangtaiwan Thank you very much!

sebastianruder commented on July 30, 2024

Hi @hit-computer, I've answered in the corresponding issue. Please don't post in unrelated issues; instead, tag people in your own issue.

melvinjosej commented on July 30, 2024

Closing this issue. Please re-open if needed.
