nouhadziri / dialogentailment
The implementation of the paper "Evaluating Coherence in Dialogue Systems using Entailment"
Home Page: https://arxiv.org/abs/1904.03371
License: MIT License
Hi. I have already trained my BERT model and tested my responses with the entailment model, but I don't know how to evaluate with the other metrics, i.e., semantic similarity, word-level metrics, and consistency by textual entailment.
Can you show an example script?
Thanks
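(Not the package's own API, just an illustration of the idea behind one of the metrics asked about.) Semantic similarity between a generated response and a reference is typically the cosine between sentence representations; a minimal sketch using bag-of-words count vectors as a stand-in for real sentence embeddings:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def semantic_similarity(response, reference):
    """Toy semantic-similarity metric: cosine over bag-of-words counts.

    A real evaluation would replace the raw counts with sentence
    embeddings (e.g. averaged word vectors or a sentence encoder).
    """
    return cosine_similarity(Counter(response.lower().split()),
                             Counter(reference.lower().split()))
```

This is only a sketch of the metric's shape; the actual scripts in the repository may compute it differently.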
Thanks for your great work. I followed your instructions to train BERT, but I got the following error:
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
Namespace(bert_model='bert-base-uncased', cache_dir='', do_eval=True, do_lower_case=True, do_train=True, eval_batch_size=8, eval_dataset='/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/convai_nli_valid_both_revised_no_cands_ctx2_v1.3.jsonl', gradient_accumulation_steps=1, learning_rate=5e-05, local_rank=-1, max_seq_length=128, model='bert-base-uncased', no_cuda=False, num_train_epochs=3.0, output_dir='/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/output', seed=42, train_batch_size=32, train_dataset='/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/convai_nli_train_both_revised_no_cands_ctx2_v1.3.jsonl', warmup_proportion=0.1)
08/02/2022 08:42:33 - INFO - dialogentail.huggingface.finetune_bert - device: cpu n_gpu: 0
08/02/2022 08:42:34 - INFO - pytorch_pretrained_bert.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache, downloading to /tmp/tmpvr7fii78
100% 231508/231508 [00:00<00:00, 907140.45B/s]
08/02/2022 08:42:35 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmpvr7fii78 to cache at /root/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
08/02/2022 08:42:35 - INFO - pytorch_pretrained_bert.file_utils - creating metadata file for /root/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
08/02/2022 08:42:35 - INFO - pytorch_pretrained_bert.file_utils - removing temp file /tmp/tmpvr7fii78
08/02/2022 08:42:35 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /root/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/__main__.py", line 93, in <module>
    main()
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/__main__.py", line 89, in main
    bert_run(args)
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/finetune_bert.py", line 353, in run
    train_examples = processor.get_train_examples(args.train_dataset)
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/finetune_bert.py", line 111, in get_train_examples
    self._read_tsv(data_file), "train")
  File "/content/gdrive/MyDrive/A/factoid_one_focus/CoQA/Answer-unaware/Evaluation/DialogEntailment/dialogentail/huggingface/finetune_bert.py", line 129, in _create_examples
    text_a = line[8]
IndexError: list index out of range
I pointed the train and valid paths to convai_nli_train_both_revised_no_cands_ctx2_v1.3.jsonl and convai_nli_valid_both_revised_no_cands_ctx2_v1.3.jsonl. Do you have any idea what happened?
Thanks!
Can you guide me on how to test after training? It's described a little vaguely.
Could you show a test script you've made?
Thanks,
Thanks for your great work; I'm impressed by it!
Could you please release the trained model for testing?
Thanks
Dear author,
I am impressed by the work! I have a few questions, though.
ESIM+ELMo and BERT both output labels such as entailment, contradiction, or neutral, but in Reddit and OpenSubtitles the labels come from a majority vote over 4-scale human ratings (excellent, good, poor, bad). How do you calculate accuracy in this situation?
Thank you very much!
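(Speculating on the same question.) One common way to compare 3-way NLI predictions against 4-scale human ratings is to collapse both onto a coarser scale before computing accuracy; the binary mapping below is purely an illustration and not necessarily the one used in the paper:

```python
# Hypothetical collapse of both label sets onto a binary
# coherent/incoherent scale; the paper's actual mapping may differ.
NLI_TO_BINARY = {
    "entailment": "coherent",
    "neutral": "incoherent",
    "contradiction": "incoherent",
}
RATING_TO_BINARY = {
    "excellent": "coherent",
    "good": "coherent",
    "poor": "incoherent",
    "bad": "incoherent",
}

def binary_accuracy(nli_preds, human_ratings):
    """Accuracy after collapsing NLI labels and 4-scale ratings to binary."""
    assert len(nli_preds) == len(human_ratings)
    hits = sum(NLI_TO_BINARY[p] == RATING_TO_BINARY[r]
               for p, r in zip(nli_preds, human_ratings))
    return hits / len(nli_preds)
```

Whether neutral should count as coherent or incoherent is a judgment call; it would be worth confirming the mapping with the authors.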
After installing the DialogEntailment package, how can I use it to score a certain response given the conversation history in Python? Can you give me a simple demo?
Thank you very much!