Comments (10)
@maksym-del, @sebastianruder
If you use scripts/train_qa.sh and scripts/predict_qa.sh, you need to remove the --do_lower_case argument yourself. After removing it, I get results almost identical to the performance reported in the paper.
The argument appears on line 53 and line 63 (snippets: lines 50 to 71 and lines 59 to 66 at commit 5d7e462).
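Why removing the flag matters: mBERT is a cased model, so lowercasing the input breaks the match against its vocabulary. A minimal, self-contained sketch of the mismatch (the vocabulary and `lookup` helper here are hypothetical toys, not the real mBERT tokenizer):

```python
# Toy cased vocabulary: lowercased tokens fall back to [UNK],
# which is what happens to cased entries when --do_lower_case is kept.
vocab = {"[UNK]": 0, "Berlin": 1, "Paris": 2, "liegt": 3, "an": 4}

def lookup(tokens, do_lower_case=False):
    """Map tokens to ids, optionally lowercasing first (as --do_lower_case would)."""
    if do_lower_case:
        tokens = [t.lower() for t in tokens]
    return [vocab.get(t, vocab["[UNK]"]) for t in tokens]

tokens = ["Berlin", "liegt", "an"]
print(lookup(tokens))                      # cased lookup: [1, 3, 4]
print(lookup(tokens, do_lower_case=True))  # "berlin" is unknown: [0, 3, 4]
```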
from xtreme.
Hi Max,
Thanks for your interest. For training BERT models on the QA tasks, we actually used the original BERT codebase as that was faster with Google infrastructure (see Appendix B in the paper). I'll check that the same results can be obtained with Transformers and will get back to you.
Thanks, Sebastian!
Interesting to see whether differences in hyperparameters caused such a gap. I can immediately see several choices hardcoded in Google's codebase that differ from what is passed in the Transformers version:

- learning rate schedule: linear decay in Transformers vs. polynomial decay in Google's script
- weight decay: weight_decay=0.0001 in Transformers vs. weight_decay_rate=0.01 in Google's script
- Adam epsilon: 1e-8 in Transformers vs. 1e-6 in Google's script

So unless you manually changed these values in Google's script, these are some of the notable differences.
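The schedule difference can be sketched as pure functions (warmup omitted; the function names and the `power`/`end_lr` defaults below are illustrative, not taken from either codebase):

```python
def linear_decay(step, total_steps, base_lr):
    """Linear decay to zero, the schedule used in the Transformers script."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

def polynomial_decay(step, total_steps, base_lr, end_lr=0.0, power=0.5):
    """Polynomial decay toward end_lr, the family used in Google's script.

    With power=1.0 and end_lr=0.0 this reduces to linear decay, so the
    schedules only differ when power != 1.0.
    """
    frac = min(step, total_steps) / total_steps
    return (base_lr - end_lr) * (1.0 - frac) ** power + end_lr
```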
Meanwhile, I can additionally confirm that the issue is related only to mBERT: for XLM-R I got the following average numbers, 76.7 / 61.0, which is on par with the 76.6 / 60.8 reported in the paper.
Thanks for the note, Max. Yes, these are some of the settings that should probably explain the difference in performance.
Yes, for XLM-R we went with the implementation (and the default hyper-parameters) in Transformers, so this should work out-of-the-box as expected.
Here are the results I got on XQuAD:

| lang | exact_match | F1 |
|------|-------------|-------|
| en | 72.18 | 84.05 |
| es | 56.64 | 75.51 |
| de | 58.24 | 73.97 |
| el | 47.73 | 64.72 |
| ru | 54.29 | 70.85 |
| tr | 39.16 | 54.05 |
| ar | 47.39 | 63.42 |
| vi | 50.34 | 69.39 |
| th | 32.94 | 42.05 |
| zh | 48.99 | 58.25 |
| hi | 44.96 | 58.76 |
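For reference, the exact_match / F1 numbers above follow the SQuAD evaluation convention. A simplified sketch of the two metrics (whitespace tokenization only, omitting SQuAD's article/punctuation normalization):

```python
from collections import Counter

def exact_match(prediction, gold):
    """1.0 if the prediction matches the gold answer exactly, else 0.0."""
    return float(prediction.strip() == gold.strip())

def f1_score(prediction, gold):
    """Token-level F1 between prediction and gold answer."""
    pred_toks, gold_toks = prediction.split(), gold.split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(f1_score("in Berlin", "Berlin"))  # partial overlap: 2/3 ≈ 0.667
```

The reported per-language scores are then 100× the mean of these values over all questions.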
@Liangtaiwan Hi, I can only find test data in the download/xquad folder, and it has no labels. How did you get the above results on XQuAD? Thanks :)
@hit-computer You can find the labels here: https://github.com/deepmind/xquad
@Liangtaiwan Thank you very much!
Hi @hit-computer, I've answered in the corresponding issue. Please don't post in other unrelated issues but instead tag people in your issue.
Closing this issue. Please re-open if needed.