
learning_to_retrieve_reasoning_paths's Issues

Training data construction for reader verifier

Hello and thanks again!!!
I'm trying to reproduce the reader (SQuAD 2.0-like) part. If I'm not wrong, the reader is also a path re-ranker that helps pick the best path containing the answer and supporting sentences. I have two questions about this: (1) How are the negative paths (is_impossible=True) constructed? By TF-IDF or by the upstream retriever? (2) What if a negative path contains some of the supporting sentences, or even the answer (e.g. for comparison questions)? Is it still labeled is_impossible=True?
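(Purely to make question (1) concrete, here is a hedged sketch of what I imagine a negative-path training instance looks like in SQuAD 2.0 style; all values are placeholders, and the repo may construct these differently.)

# Hedged sketch only (the repo may build this differently): a negative reasoning
# path presumably becomes a SQuAD 2.0-style unanswerable entry, with the
# concatenated path paragraphs as the context and no gold answer span.
negative_example = {
    "context": "<paragraph 1 of the negative path> <paragraph 2 of the negative path>",
    "qas": [{
        "id": "<question id>",
        "question": "<question text>",
        "is_impossible": True,   # the reader/verifier should reject this path
        "answers": [],
    }],
}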

Evaluation input for retriever

Hi,

Thanks for sharing the code. I'm wondering which file I should use if I want to get the output of the graph retriever on hotpot_dev_fullwiki_v1.json. It looks like the code only takes input in SQuAD 2.0 format. Please let me know where I can download the processed hotpot_dev_fullwiki data.
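(For reference, hotpot_dev_fullwiki_v1.json is a list of records with _id, question, answer, and context, where context is a list of [title, sentence_list] pairs. If plain SQuAD 2.0-style input is really what the code expects, a rough, hypothetical conversion sketch, which is not the repo's official preprocessing, might look like this:)

import json

# Hypothetical conversion sketch (NOT the repo's official preprocessing):
# turn each HotpotQA full-wiki record into SQuAD 2.0-style paragraphs/qas.
with open("hotpot_dev_fullwiki_v1.json") as f:
    hotpot = json.load(f)

squad = {"version": "v2.0", "data": []}
for record in hotpot:
    paragraphs = []
    answer = record.get("answer", "")
    for title, sentences in record["context"]:
        text = "".join(sentences)
        start = text.find(answer) if answer else -1
        qa = {
            "id": record["_id"],
            "question": record["question"],
            "is_impossible": start == -1,
            "answers": [] if start == -1 else [{"text": answer, "answer_start": start}],
        }
        paragraphs.append({"context": text, "qas": [qa]})
    squad["data"].append({"title": record["_id"], "paragraphs": paragraphs})

with open("hotpot_dev_fullwiki_squad_style.json", "w") as f:
    json.dump(squad, f)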

Thanks

`database is locked` during evaluation

Hi, I am trying to run eval_main.py on the NQ data with this command:

python eval_main.py \
--eval_file_path nq.jsonl \
--graph_retriever_path models/nq_models/graph_retriever/pytorch_model.bin \
--reader_path models/nq_models/reader \
--tfidf_path models/nq_models/tfidf_retriever/wiki_20181220_nq_hyper_linked-tfidf-ngram=2-hash=16777216-tokenizer=simple.npz \
--db_path models/nq_models/wiki_db/wiki_20181220_nq_hyper_linked.db \
--bert_model_sequential_sentence_selector bert-base-uncased --do_lower_case --tfidf_limit 20 --eval_batch_size 4 --pruning_by_links \
--beam_graph_retriever 4 --max_para_num 100

And I got this error:

  File "eval_main.py", line 57, in <module>
    main()
  File "eval_main.py", line 24, in main
    tfidf_retrieval_output, selector_output, reader_output = odqa.eval()
  File "/home/bill/learning_to_retrieve_reasoning_paths/eval_odqa.py", line 303, in eval
    tfidf_retrieval_output = self.retrieve(eval_questions)
  File "/home/bill/learning_to_retrieve_reasoning_paths/eval_odqa.py", line 237, in retrieve
    eval_q["id"], eval_q["question"], self.args)
  File "/home/bill/learning_to_retrieve_reasoning_paths/pipeline/tfidf_retriever.py", line 126, in get_abstract_tfidf
    context = self.load_abstract_para_text(doc_names)
  File "/home/bill/learning_to_retrieve_reasoning_paths/pipeline/tfidf_retriever.py", line 45, in load_abstract_para_text
    para_title_text_pairs = load_para_collections_from_tfidf_id_intro_only(doc_name, self.db)
  File "/home/bill/learning_to_retrieve_reasoning_paths/retriever/utils.py", line 213, in load_para_collections_from_tfidf_id_intro_only
    if db.get_doc_text(tfidf_id) is None:
  File "/home/bill/learning_to_retrieve_reasoning_paths/retriever/doc_db.py", line 42, in get_doc_text
    (doc_id,)
sqlite3.OperationalError: database is locked
Question:   0%|               
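(For context, sqlite3 raises `database is locked` when another connection still holds a lock on the DB file, e.g. a second eval process or a stale lock on a network filesystem. A hedged workaround sketch, assuming the wiki DB is only read at eval time, is to open it read-only via a URI and with a longer timeout:)

import sqlite3

# Hypothetical workaround, not the repo's code: open the wiki DB read-only and
# wait up to 30 s for any transient lock instead of failing immediately.
conn = sqlite3.connect(
    "file:models/nq_models/wiki_db/wiki_20181220_nq_hyper_linked.db?mode=ro",
    uri=True,
    timeout=30.0,
    check_same_thread=False,
)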

Thanks!

What does the TF-IDF retriever output data mean?

Thanks for the good work. Just to be sure I understand the paper and implementation correctly,

  1. Does the graph retriever model draw paragraphs from any source other than the TF-IDF retrieval output during training and inference?
  2. Going by the TF-IDF output format below:
{
  "question": "Were Scott Derrickson and Ed Wood of the same nationality?",
  "q_id": "5ab3b0bf5542992ade7c6e39",
  "context":
      {"Scott Derrickson_0": "Scott Derrickson (born July 16, 1966) is an American director,....",
       "Ed Wood_0": "...", ....},
  "all_linked_para_title_dic":
      {"Scott Derrickson_0": ["Los Angeles_0", "California_0", "Horror film_0", ...]},
  "all_linked_paras_dic":
      {"Los Angeles_0": "Los Angeles, officially the City of Los Angeles and often known by its initials L.A., is ...", ...},
  "short_gold": [],
  "redundant_gold": [],
  "all_redundant_gold": []
}

Am I correct to say that
C_1 = context
C_2 = any of the **all_linked_para_title_dic** entries, as extracted by the graph-based retriever?

Am I also correct to say that this data format only works for questions that can be answered in at most 2 hops?

Sorry if my questions are too basic

An error when training the graph_retriever on HotpotQA

Thanks for your great work!
When I run run_graph_retriever.py (in the graph_retriever folder) to train the graph-based recurrent retriever on the HotpotQA training data (the files in hotpotqa_new_selector_train_data_db_2017_10_12_fix.zip), I get the following error:

Traceback (most recent call last):
  File "run_graph_retriever.py", line 546, in <module>
    main()
  File "run_graph_retriever.py", line 264, in main
    train_examples = processor.get_train_examples(graph_retriever_config)
  File "/DATA/sunzhanchen/learning_to_retrieve_reasoning_paths-master/graph_retriever/utils.py", line 200, in get_train_examples
    examples += self._create_examples(file_name, graph_retriever_config, "train")
  File "/DATA/sunzhanchen/learning_to_retrieve_reasoning_paths-master/graph_retriever/utils.py", line 429, in _create_examples
    assert t in context
AssertionError

Is there a problem with the training dataset, or is there something else I'm missing?
Thank you again!
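(One hedged way to debug this, assuming the selector train files roughly follow the format shown in the TF-IDF output example above, with a "context" dict and gold-title lists, is to list the titles that the failing `assert t in context` would trip over; the file name below is a placeholder.)

import json

# Hypothetical debugging aid, not the repo's code: report gold titles that are
# absent from "context", which is what the failing assertion appears to check.
with open("selector_train_file.json") as f:  # placeholder file name
    examples = json.load(f)

for ex in examples:
    context = ex.get("context", {})
    for gold_path in (ex.get("short_gold", []), ex.get("redundant_gold", [])):
        missing = [t for t in gold_path if t not in context]
        if missing:
            print(ex.get("q_id"), "missing titles:", missing)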

What do output_masks do?

Hi @AkariAsai,

Thanks for the great repo!

I'm trying to adapt your model to a new dataset. I see that there is an output_masks variable in the convert_examples_to_features function in graph_retriever/utils.py; may I check what exactly the output_masks do? How should I set the masking for positive and negative reasoning paths?

Also, may I ask how you set the gold labels for each RNN step, and for the negative paths?
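(To frame the question: in generic terms, such a mask is usually used to blank out per-step candidate scores so that padded or otherwise invalid paragraphs can never be selected. An illustrative PyTorch sketch of that general pattern, not necessarily this repo's exact semantics:)

import torch

# Illustrative only, not necessarily the repo's exact semantics:
# logits      -- [max_steps, max_para_num] scores for every candidate paragraph at each step
# output_mask -- same shape, 1 where a candidate may be selected at that step, 0 otherwise
logits = torch.randn(4, 10)
output_mask = torch.ones(4, 10)
output_mask[:, 7:] = 0  # e.g. padding slots beyond the real candidate paragraphs

masked_logits = logits.masked_fill(output_mask == 0, float("-inf"))
probs = torch.softmax(masked_logits, dim=-1)  # masked candidates get probability 0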

Thanks!

Negative document construction for the graph retriever of HotpotQA fullwiki

Hello AkariAsai, thank you for the great work! After going through the graph retriever code, I found that the principle of negative document construction for the graph retriever seems to be: TF-IDF documents first, then the hyperlinked negatives. My question is: hyperlinked negative docs are added by appending the documents in all_linked_paras_dic, but the keys of all_linked_paras_dic are all TF-IDF-retrieved titles, so the most important part, the hyperlinked negative docs along the gold path, may not be included for training?

Why are some document titles missing?

Thank you for the amazing repo.

I am curious why some titles are missing from the TF-IDF index. During evaluation we get multiple warnings like these:

Oranjegekte_0 is missing
James Gunn_0 is missing
..

I assume this means that some document titles are not found in the database. Is that normal? Could you explain?

Thanks!

A problem with the total training steps of the reader

Hi, I really appreciate your contribution.

I have a question about the training steps setting here:

https://github.com/AkariAsai/learning_to_retrieve_reasoning_paths/blob/master/reader/run_reader_confidence.py#L209-L210

num_train_optimization_steps = int(
            len(train_examples) / args.train_batch_size / args.gradient_accumulation_steps) * args.num_train_epochs

I think len(train_examples) should be replaced with len(train_features), since the total length of the dataset is the number of processed features, which is much larger than the number of initial examples.
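The change I am proposing would look like this (just my suggestion, not a merged fix):

# count processed features, not raw examples
num_train_optimization_steps = int(
            len(train_features) / args.train_batch_size / args.gradient_accumulation_steps) * args.num_train_epochs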

Preprocessing of HotpotQA

Hi,

Thank you for your work and sharing your code!

I have some questions about the file you provide, "HotpotQA reader train data". Could you please point to or share the preprocessing code that produces the "answer_start" values, since the original HotpotQA training data doesn't have them? Also, in that file, are all yes/no questions regarded as "is_impossible=True"?
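(For what it's worth, answer character offsets for SQuAD-style data are typically recovered by searching for the answer string in the paragraph. A minimal sketch with placeholder variables, assuming the answer appears verbatim, and not necessarily the authors' preprocessing:)

# Hypothetical sketch, not necessarily the authors' preprocessing code.
paragraph_text = "Scott Derrickson (born July 16, 1966) is an American director."
answer_text = "American"

answer_start = paragraph_text.find(answer_text)
if answer_start == -1:
    # e.g. yes/no questions, or answers that are not a verbatim span
    is_impossible, answers = True, []
else:
    is_impossible, answers = False, [{"text": answer_text, "answer_start": answer_start}]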

Small typo in the paper

In Figure 2, the third input to the recurrent network in the lower part should be "H" instead of "D"; is that a typo?

How to train and evaluate the models in HotpotQA distractor setting?

Hi, thanks for your great work!

I'm currently trying to reproduce your results in the HotpotQA distractor setting, but I am facing some technical difficulties.
I apologize in advance if these are dumb questions, but it would be very helpful if you could answer them:

  1. 'hotpot_train_order_sensitive.json' file
    The README in the graph_retriever folder specifies that 'hotpot_train_order_sensitive.json' is used for training in the HotpotQA distractor setting, but I can't find this file in the train_data folder you released. Is there any way I can download this particular file, or is there a way to create a file in this format from the original HotpotQA training set?

  2. Sentence selector
    I read in your paper that the graph retriever in the HotpotQA distractor setting is different from the full-wiki one, while both settings share the same reader model. I'm curious whether the sentence selector model is separate (like the graph retriever) or shared (like the reader) across the distractor/full-wiki settings. Also, if the sentence selector for the distractor setting is different from the full-wiki one, how can I get its training data? (It seems that the training data you released contains only one pair of dev/train files for the sentence selector.)

  3. Preprocessing of the HotpotQA distractor dataset
    It seems that, in order to run your model (for evaluation), the user needs a preprocessed dataset.
    I see that the preprocessed HotpotQA full-wiki data is available, but I am not sure I have access to the HotpotQA distractor dataset. Is there any way for me to get the preprocessed distractor data (by downloading it, or by preprocessing it myself)?

  4. Evaluation in the distractor setting
    It seems that the evaluation code for the QA/SP tasks basically assumes the open-domain scenario.
    How can I evaluate the model in the closed setting of the distractor task, as you did in your paper?

Thanks for reading, and for your attention :) @hassyGo @AkariAsai

demo.py arg error about NQ

Hi Akari,
Thanks for the great repo. I got an error when trying to use the demo with the Natural Questions model. I followed the tip to adapt the previous example for running eval on NQ to demo.py.

I am running

python demo.py \
--graph_retriever_path models/nq/selector/pytorch_model.bin \
--reader_path models/nq/reader/ \
--tfidf_path models/nq_models/tfidf_retriever/wiki_20181220_nq_hyper_linked-tfidf-ngram=2-hash=16777216-tokenizer=simple.npz \
--db_path models/nq_models/wiki_db/wiki_20181220_nq_hyper_linked.db \
--bert_model bert-base-uncased --do_lower_case --tfidf_limit 20 --eval_batch_size 4 --pruning_by_links \
--beam_graph_retriever 8 --max_para_num 2000 --use_full_article 

And got the errors:

  • demo.py: error: ambiguous option: --bert_model could match --bert_model_graph_retriever, --bert_model_sequential_sentence_selector

  • demo.py: error: unrecognized arguments: --use_full_article (I assumed it was for the bert_model_sequential_sentence_selector)

  • FileNotFoundError: [Errno 2] No such file or directory: 'models/nq/selector/pytorch_model.bin' (after I removed --use_full_article). I think this should be models/nq_models/graph_retriever/pytorch_model.bin?

  • --reader_path models/nq/reader/ also seems incorrect? I think this should be models/nq_models/reader.
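Based on those error messages and on the paths used in the eval_main.py example above, a corrected invocation might look roughly like this (unverified; the BERT model flags are spelled out because --bert_model is ambiguous):

python demo.py \
--graph_retriever_path models/nq_models/graph_retriever/pytorch_model.bin \
--reader_path models/nq_models/reader \
--tfidf_path models/nq_models/tfidf_retriever/wiki_20181220_nq_hyper_linked-tfidf-ngram=2-hash=16777216-tokenizer=simple.npz \
--db_path models/nq_models/wiki_db/wiki_20181220_nq_hyper_linked.db \
--bert_model_graph_retriever bert-base-uncased --bert_model_sequential_sentence_selector bert-base-uncased \
--do_lower_case --tfidf_limit 20 --eval_batch_size 4 --pruning_by_links \
--beam_graph_retriever 8 --max_para_num 2000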

question about wikipedia data

Hi, thanks for sharing the code. Great work!

I have a quick question: Where can I find your preprocessed Wikipedia paragraphs and Wikipedia graph?

How to evaluate the pretrained graph retriever model?

I downloaded the pretrained models. I want to evaluate the graph retriever on HotpotQA. Should I just pass 'models/hotpot_models/graph_retriever' as the output_dir? And can I use the pretrained model to test on the HotpotQA distractor setting, or do I need to train a new model for it?

How to evaluate the supporting facts in the HotPotQA experiment?

Hello, this work is amazing, but I am curious about HotpotQA's supporting facts experiment.

If the reasoning is performed over hyperlinked external Wikipedia data, how is the HotpotQA supporting-facts accuracy (i.e. "which sentences the answer comes from") calculated?

Thank you for your reply!

hotpot model zip file corrupted?

Hello, I haven't been able to unzip the hotpot models zip file, and I've tried a few different methods. It seems it's corrupt in some way? Has anybody else had a problem unzipping it?
The SQuAD models don't have this problem.

The hyperparameters for training the bert-base reader?

Hi, thanks for your contribution. Would you mind sharing the hyperparameters for training the BERT-base reader?
It seems that the set of hyperparameters with a mini-batch size of 128 mentioned in the paper is for BERT-wwm-large, and I can't reproduce the results using the command provided in the reader directory; I obtained an EM of 50.53 and an F1 of 63.17.

Thank you very much!

What is the problem?

After I ran the provided script file, the result returned was as follows: {'em': 0.2, 'f1': 0.2976111111111111, 'prec': 0.30498268398268397, 'recall': 0.3068333333333333, 'sp_em': 0.02, 'sp_f1': 0.10866666666666668, 'sp_prec': 0.11333333333333334, 'sp_recall': 0.1075, 'joint_em': 0.01, 'joint_f1': 0.057762626262626265, 'joint_prec': 0.0596111111111111, 'joint_recall': 0.06208333333333333}

Fine-tuning on own documents?

Hi - what would be the recommended approach for fine-tuning (not a full retrain of) the model on one's own documents?

Thank you

Minor fix in demo.py

Hi,

Thank you for your amazing work. While running demo.py, I encountered a simple bug.
Line 44:

tfidf_retrieval_output += self.tfidf_retriever.get_abstract_tfidf('DEMO_{}'.format(i), question, self.args.tfidf_limit)

should be

tfidf_retrieval_output += self.tfidf_retriever.get_abstract_tfidf('DEMO_{}'.format(i), question, self.args)

as get_abstract_tfidf(...) expects args as the last argument.

Some details regarding generating NQ trainset for the reader model

Hi @AkariAsai. Thank you for this great work.

I'd like to understand more clearly how the NQ training set for the reader model is generated.
In your comment, you said that you removed all the table and list elements from NQ's original preprocessed HTML data:
#9 (comment)

I'm curious how you handled the case where a list element contains the answer and a paragraph contains that list (like the following example)?
https://github.com/google-research-datasets/natural-questions/blob/master/toy_example.md

e.g. <p>Google was founded in 1998 By:<ul><li>Larry</li><li>Sergey</li></ul></p>
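(For reference, stripping list and table elements from HTML like this is typically done with an HTML parser. A hedged sketch using BeautifulSoup, not necessarily the authors' exact procedure, which would drop the nested list together with any answer it contains:)

from bs4 import BeautifulSoup

# Hypothetical sketch, not the authors' exact preprocessing: remove <table>,
# <ul>, <ol> and <li> elements before extracting paragraph text, so a list
# nested inside a <p> disappears along with any answer it contained.
def strip_lists_and_tables(html):
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(["table", "ul", "ol", "li"]):
        tag.decompose()
    return soup.get_text(" ", strip=True)

print(strip_lists_and_tables(
    "<p>Google was founded in 1998 By:<ul><li>Larry</li><li>Sergey</li></ul></p>"))
# -> "Google was founded in 1998 By:"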
