
Comments (7)

nicola-decao commented on September 28, 2024

Training seems correct. Are you using constrained search during the evaluation?

nicola-decao commented on September 28, 2024

Also, with BLINK you do not need to use convert_kilt_to_fairseq.

ma787639046 commented on September 28, 2024

Thanks for your quick response.

  1. I use constrained search during evaluation. The trie is the downloaded kilt_titles_trie_dict.pkl, and I run evaluate_kilt_dataset.py with beam=10, max_len_a=384, max_len_b=15 (see the evaluation sketch after this list).

  2. I get the BLINK training set in JSON Lines format from blink-train-kilt.jsonl. This file appears to be structured the same way as the other KILT datasets, so I simply concatenate blink-train-kilt.jsonl with the 8 other KILT training jsonl files mentioned in the paper into a single file and shuffle it with Python's random.shuffle() (see the data-preparation sketch after this list).
    I also concatenate all 11 KILT dev jsonl files into one file as the development set.
    Then I use the scripts convert_kilt_to_fairseq.py and preprocess_fairseq.sh to process the files above.
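
For reference, the constrained evaluation in step 1 boils down to something like the sketch below, written with the GENRE README-style API rather than the full evaluate_kilt_dataset.py script (the checkpoint path and query are placeholders; the script additionally sets max_len_a=384 and max_len_b=15):

```python
import pickle

from genre.fairseq_model import GENRE
from genre.trie import Trie

# Load the prefix trie over Wikipedia page titles (the kilt_titles_trie_dict.pkl above).
with open("kilt_titles_trie_dict.pkl", "rb") as f:
    trie = Trie.load_from_dict(pickle.load(f))

# Placeholder path: point this at the fine-tuned fairseq checkpoint directory.
model = GENRE.from_pretrained("models/fairseq_wikipage_retrieval").eval()

# Constrained beam search: the trie restricts decoding to valid Wikipedia page titles.
predictions = model.sample(
    ["Example KILT query goes here."],  # placeholder input
    beam=10,
    prefix_allowed_tokens_fn=lambda batch_id, sent: trie.get(sent.tolist()),
)
print(predictions)
```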
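
The concatenation and shuffling in step 2 is essentially the data-preparation sketch below (the kilt_train_files / kilt_dev_files lists and the output file names are placeholders for the actual dumps):

```python
import random

def concat_jsonl(paths, out_path, shuffle=False):
    """Concatenate JSON Lines files and optionally shuffle the combined lines."""
    lines = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            lines.extend(f)
    if shuffle:
        random.shuffle(lines)
    with open(out_path, "w", encoding="utf-8") as f:
        f.writelines(lines)

kilt_train_files = []  # placeholder: the 8 KILT train .jsonl dumps mentioned in the paper
kilt_dev_files = []    # placeholder: the 11 KILT dev .jsonl dumps

concat_jsonl(["blink-train-kilt.jsonl"] + kilt_train_files, "train.jsonl", shuffle=True)
concat_jsonl(kilt_dev_files, "dev.jsonl")
```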

Am I doing these steps the right way?

Thanks again!

ma787639046 commented on September 28, 2024

@nicola-decao

nicola-decao commented on September 28, 2024

Yes, you are doing it correctly then. I am not sure what is going wrong. Are you sure you are training with the same batch size and number of steps as reported in the paper?

ma787639046 commented on September 28, 2024

Yes. I reran the whole fine-tuning process on 8 V100 GPUs with torch 1.6.0 + CUDA 10.1. I directly used the training script train.sh with max-tokens set to 1024 per GPU, update-freq to 128, and max-update to 200000, which should be the same hyperparameters reported in appendix A.3 (the effective batch size this implies is worked out below). I get the following results.

| model_name | FEV | AY2 | WnWi | WnCw | T-REx | zsRE | NQ | HoPo | TQA | ELI5 | WoW | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| genre_fairseq_wikipage_retrieval (provided) | 0.84681 | 0.92747 | 0.87691 | 0.7053 | 0.7968 | 0.94844 | 0.64258 | 0.51821 | 0.71114 | 0.1347 | 0.5632 | 0.69742 |
| My reproduced model | 0.84203 | 0.92559 | 0.88516 | 0.71048 | 0.7288 | 0.86198 | 0.60416 | 0.40625 | 0.69938 | 0.13603 | 0.58481 | 0.67133 |

The scores on T-REx, zsRE, NQ, HoPo, and TQA are still lower than expected.
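
For completeness, the effective batch size these settings imply, assuming the usual fairseq scaling of max-tokens × update-freq × number of GPUs, is worked out below; if the original run used more GPUs with the same update-freq, its effective batch would have been proportionally larger.

```python
# Effective batch size, assuming fairseq scales as max-tokens x update-freq x num-GPUs.
max_tokens_per_gpu = 1024  # max-tokens per GPU, as above
update_freq = 128          # gradient accumulation steps
num_gpus = 8               # V100s used in this rerun

tokens_per_update = max_tokens_per_gpu * update_freq * num_gpus
print(f"{tokens_per_update:,} tokens per update")  # 1,048,576 on 8 GPUs
```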

nicola-decao commented on September 28, 2024

That is weird, but I do not know how to help. I do not work at Facebook/Meta anymore, so I cannot re-run the experiments or check the original code that was launched. Note: I ran on more GPUs.
