
qgforqa's People

Contributors

zhangshiyue


qgforqa's Issues

About BERT vocab

Hello,
I have some questions about the BERT-QG vocab. You only provide the preprocessed vocab file; the code that builds the vocab and the related embeddings is not in preprocess.py. Would you be willing to release this part of the code?
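For context, the missing step being asked about usually amounts to something like the following sketch: read the token list one per line, build a token-to-id map, and randomly initialize an embedding matrix. This is a guess at the shape of the repo's missing code, not its actual implementation; the file name, embedding dimension, and special tokens below are all assumptions.

```python
import numpy as np

def build_vocab_and_embeddings(vocab_path, embed_dim=300, seed=0):
    """Read one token per line, build a token->id map, and create a
    randomly initialized embedding matrix (one row per token)."""
    with open(vocab_path, encoding="utf-8") as f:
        tokens = [line.strip() for line in f if line.strip()]
    token2id = {tok: i for i, tok in enumerate(tokens)}
    rng = np.random.default_rng(seed)
    embeddings = rng.normal(scale=0.1, size=(len(tokens), embed_dim))
    return token2id, embeddings

# Demo with a tiny, made-up vocab file.
with open("tiny_vocab.txt", "w", encoding="utf-8") as f:
    f.write("[PAD]\n[UNK]\nthe\nquestion\n")

token2id, emb = build_vocab_and_embeddings("tiny_vocab.txt", embed_dim=8)
print(len(token2id), emb.shape)  # 4 (4, 8)
```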

About BERT-QPC and BERT-QA model

Hi, this work is really impressive! I would like to train your BERT-QG model on my own dataset. Could you share a link to your pre-trained BERT-QPC and BERT-QA models?

A quick question on the use of BERT Embeddings

I have a quick question about how BERT embeddings are used in this model. Based on this line, it seems that questions are only generated when there is a ground-truth question from which to take BERT's CLS token. Is that what is going on behind the scenes? I'm currently trying to implement a similar model in PyTorch, and I'm stuck on what to use as the first input to the decoder, which comes down to the highly contextualized nature of BERT embeddings.

Thanks,
Matthew.
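For what it's worth, one common way to seed an autoregressive decoder without any ground-truth token is to project the encoder's sentence-level vector into the decoder's initial hidden state and feed a learned start-of-sequence embedding as the first input. This is a generic sketch, not this repo's code; `cls_vec`, `W_init`, and the dimensions are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, embed_dim = 16, 8

# Hypothetical [CLS] vector from a BERT-like encoder (random stand-in here).
cls_vec = rng.normal(size=768)

# Project [CLS] into the decoder's initial hidden state...
W_init = rng.normal(scale=0.02, size=(768, hidden_dim))
h0 = np.tanh(cls_vec @ W_init)

# ...and use a learned start-of-sequence embedding as the first *input*,
# so generation never depends on a ground-truth question token.
sos_embedding = rng.normal(scale=0.02, size=embed_dim)

print(h0.shape, sos_embedding.shape)  # (16,) (8,)
```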

ELMo-QG baseline only got 13.9 BLEU-4

Hi, I followed your steps exactly and trained the ELMo-QG baseline model, but I only got a BLEU-4 score of 13.9. It seems that something went wrong somewhere.

QG with ELMo

How can I generate questions from my own text? What preprocessing is required?

About semi-supervised QA

Hi, thank you very much for publishing the code and data.
They are very helpful, since few QG research papers release their code or data.

Now I am trying to reproduce the semi-supervised QA result from your paper using the HarvestingQA dataset (https://github.com/xinyadu/harvestingQA/tree/master/dataset).

However, when I use your preprocess.py on the HarvestingQA dataset, I get many errors at:

https://github.com/ZhangShiyue/QGforQA/blob/master/QG/ELMo_QG/prepare.py#L54

In total, 1,189,314 examples in HarvestingQA trigger this error.
I used the train.json of HarvestingQA.
How did you handle this in your experiments?
I would appreciate it if you could share the preprocessing procedure used in your semi-supervised QA experiment.
Thank you very much.
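Without seeing the traceback, one common failure mode in SQuAD-style preprocessing is an answer span that cannot be aligned with its context; whether that is the cause here is a guess. A defensive workaround is to skip unalignable examples and count how many were dropped. The field names and data below are illustrative, not HarvestingQA's actual schema.

```python
def filter_alignable(examples):
    """Keep only examples whose answer text actually occurs at answer_start
    in the context; return (kept_examples, dropped_count)."""
    kept, dropped = [], 0
    for ex in examples:
        ctx, ans, start = ex["context"], ex["answer"], ex["answer_start"]
        if ctx[start:start + len(ans)] == ans:
            kept.append(ex)
        else:
            dropped += 1
    return kept, dropped

examples = [
    {"context": "Paris is the capital of France.", "answer": "Paris", "answer_start": 0},
    {"context": "Paris is the capital of France.", "answer": "Paris", "answer_start": 3},  # misaligned
]
kept, dropped = filter_alignable(examples)
print(len(kept), dropped)  # 1 1
```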

Whats Next

Hi @ZhangShiyue,
I have downloaded all the data sets needed for the project according to the README.md file. Can you tell me the next steps for training and testing, i.e., for generating questions and answers?

Thanks and Regards,
Manikantha Sekhar.

Script for computing Q-BLEU1 score?

Hi,

Thank you for sharing this awesome code! I noticed that you also report the Q-BLEU1 score in your paper. Would you kindly share the script for computing Q-BLEU1 as well? I had some trouble using the official repo to compute it, as detailed here.

Thank you!
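While waiting for the script: as I understand the Q-BLEU metric, it interpolates an answerability score with the underlying n-gram metric. The sketch below is not the official implementation; the `delta` weight, the fixed `answerability` input, and the crude clipped-unigram-precision stand-in for BLEU-1 are all placeholders.

```python
from collections import Counter

def unigram_precision(reference, hypothesis):
    """Clipped unigram precision -- a crude BLEU-1 stand-in (no brevity penalty)."""
    ref_counts, hyp_counts = Counter(reference), Counter(hypothesis)
    clipped = sum(min(c, ref_counts[w]) for w, c in hyp_counts.items())
    return clipped / max(len(hypothesis), 1)

def q_bleu1(reference, hypothesis, answerability, delta=0.7):
    """Interpolate answerability with the n-gram score (delta is a tunable weight)."""
    return delta * answerability + (1 - delta) * unigram_precision(reference, hypothesis)

ref = "what is the capital of france".split()
hyp = "what is the capital city of france".split()
score = q_bleu1(ref, hyp, answerability=0.8)
print(round(score, 3))  # 0.817
```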

PyTorch version

Hello, do you have a PyTorch version of the code to share?

Saving best model based on which metric?

Hi Shiyue,

Thanks for the nice job.
I see from your code that at each checkpoint you save the best model based on the BLEU score on the validation set. Is that right?
I just wanted to double-check which metric you use to select the best model: BLEU or your own reward scores?

Thanks
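For anyone checking the same thing: keep-best checkpointing typically reduces to a small comparison like the one below, regardless of whether the metric is BLEU or a reward. This is a generic sketch, not the repo's actual saving logic.

```python
class BestCheckpointTracker:
    """Remember the best validation score seen so far; save only on improvement."""

    def __init__(self):
        self.best_score = float("-inf")
        self.best_step = None

    def update(self, step, score):
        """Return True if this checkpoint beats the best so far."""
        if score > self.best_score:
            self.best_score, self.best_step = score, step
            return True  # the caller would save the model weights here
        return False

tracker = BestCheckpointTracker()
saves = [tracker.update(s, bleu) for s, bleu in [(1, 14.2), (2, 13.8), (3, 15.1)]]
print(saves, tracker.best_step)  # [True, False, True] 3
```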
