dialogentailment's People

Contributors

korymath, nouhadziri

dialogentailment's Issues

How to test with other metrics?

Hi. I have already trained my BERT model and tested my responses with the entailment model.

But I don't know how to test with the other metrics, i.e., Semantic Similarity, Word-level metrics, and Consistency by textual entailment.

Can you share a script as an example?
Thanks

Errors when training BERT

Hi @korymath @ehsk ,

Thanks for your great work. I followed your instructions to train BERT and got the following error:

Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
Namespace(bert_model='bert-base-uncased', cache_dir='', do_eval=True, do_lower_case=True, do_train=True, eval_batch_size=8, eval_dataset='/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/convai_nli_valid_both_revised_no_cands_ctx2_v1.3.jsonl', gradient_accumulation_steps=1, learning_rate=5e-05, local_rank=-1, max_seq_length=128, model='bert-base-uncased', no_cuda=False, num_train_epochs=3.0, output_dir='/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/output', seed=42, train_batch_size=32, train_dataset='/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/convai_nli_train_both_revised_no_cands_ctx2_v1.3.jsonl', warmup_proportion=0.1)
08/02/2022 08:42:33 - INFO - dialogentail.huggingface.finetune_bert -   device: cpu n_gpu: 0
08/02/2022 08:42:34 - INFO - pytorch_pretrained_bert.file_utils -   https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache, downloading to /tmp/tmpvr7fii78
100% 231508/231508 [00:00<00:00, 907140.45B/s]
08/02/2022 08:42:35 - INFO - pytorch_pretrained_bert.file_utils -   copying /tmp/tmpvr7fii78 to cache at /root/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
08/02/2022 08:42:35 - INFO - pytorch_pretrained_bert.file_utils -   creating metadata file for /root/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
08/02/2022 08:42:35 - INFO - pytorch_pretrained_bert.file_utils -   removing temp file /tmp/tmpvr7fii78
08/02/2022 08:42:35 - INFO - pytorch_pretrained_bert.tokenization -   loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /root/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/__main__.py", line 93, in <module>
    main()
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/__main__.py", line 89, in main
    bert_run(args)
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/finetune_bert.py", line 353, in run
    train_examples = processor.get_train_examples(args.train_dataset)
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/finetune_bert.py", line 111, in get_train_examples
    self._read_tsv(data_file), "train")
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/finetune_bert.py", line 129, in _create_examples
    text_a = line[8]
IndexError: list index out of range

I pointed the train and valid paths at convai_nli_train_both_revised_no_cands_ctx2_v1.3.jsonl and convai_nli_valid_both_revised_no_cands_ctx2_v1.3.jsonl. Do you have any idea what went wrong?

Thanks!
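For what it's worth, the traceback shows the file being read via `_read_tsv`, and the `IndexError` at `text_a = line[8]` suggests the processor expects MNLI-style TSV (premise in column 8, hypothesis in column 9, label in the last column), while the supplied files are JSONL, so each parsed row ends up with too few fields. A minimal converter sketch, assuming (hypothetically) that each JSON line carries `premise`, `hypothesis`, and `label` keys — inspect one line of the actual file to confirm the field names:

```python
import json

def jsonl_to_mnli_tsv(jsonl_lines):
    """Convert JSONL records to MNLI-style TSV rows: the processor reads
    text_a from column 8, text_b from column 9, and the label from the
    last column. Field names here are assumptions, not confirmed."""
    rows = []
    for i, raw in enumerate(jsonl_lines):
        rec = json.loads(raw)
        # Columns 0-7 are unused by the processor; pad with placeholders.
        row = [str(i)] + ["-"] * 7 + [rec["premise"], rec["hypothesis"], rec["label"]]
        rows.append("\t".join(row))
    return rows
```

Writing the converted rows to a `.tsv` file and pointing `--train_dataset`/`--eval_dataset` at those files should at least get past the parsing step.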

Test script

Can you guide me on how to test after training? It is described a little vaguely.
Please share the test script you've used.
Thanks,

Where can I find the original dataset

Dear authors,
I am impressed by this work! I have a few questions:

  1. It seems the proposed dataset is built upon a previous dataset called ConvAI. Is this dataset also known as the PERSONA-CHAT dataset? In addition, where can I download this original dataset, i.e., the convai_file in convai_to_nli.py?
  2. As I observed from the proposed dataset, each dialogue has only one turn. I wonder if you have tried multi-turn dialogues? If so, how is the performance?

Thanks!

How to obtain the results of Table 2 in the paper?

ESIM+ELMo and BERT both output labels such as entailment, contradiction, or neutral. But in Reddit and OpenSubtitles, the labels are the majority vote of 4-scale human ratings (Excellent, good, poor, bad). How can accuracy be calculated in this situation?
Thank you very much!
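As a side note, one way such a comparison is sometimes made (an assumption here, not the paper's confirmed protocol) is to collapse both label sets onto a binary coherent-vs-incoherent scale before computing accuracy:

```python
# Hedged sketch: map 4-scale human ratings and 3-class NLI labels onto
# a shared binary scale, then score agreement. The particular grouping
# below is an assumption, not taken from the paper.
GOOD = {"Excellent", "good"}   # human ratings treated as positive
POSITIVE = {"entailment"}      # NLI labels treated as positive

def binary_accuracy(predictions, human_ratings):
    """predictions: NLI labels; human_ratings: 4-scale majority votes."""
    hits = sum(
        (p in POSITIVE) == (h in GOOD)
        for p, h in zip(predictions, human_ratings)
    )
    return hits / len(predictions)
```

Whether neutral should group with contradiction, and good with Excellent, is exactly the kind of detail the authors would need to confirm.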
