Comments (7)
Can you please share the exact command you used? A screenshot of the output would also be helpful.
from embedkgqa.
I encountered the same problem. I used the MetaQA_full 2-hop training command you released on GitHub, but the test-set accuracy was only 0.70. Could you provide the training command for MetaQA_full 3-hop?
from embedkgqa.
I ran into the same problem. With the ComplEx embeddings you provided, the best validation score I achieved on MetaQA_full is only 0.717879, even with the embeddings unfrozen.
from embedkgqa.
I ran into the same problem. I used the released training command on the 3-hop MetaQA data, but the model early-stopped at epoch 14 with an accuracy of about 0.141376, after about two days of training. When I set roberta_model.parameters.requires_grad = False, it reached 0.599131 accuracy at epoch 37 in much less time. The command was:
python RoBERTa/main.py --mode train --relation_dim 200 --hidden_dim 256 --gpu 3 --freeze 0 --batch_size 128 --validate_every 5 --hops 3 --lr 0.0005 --entdrop 0.1 --reldrop 0.2 --scoredrop 0.2 --decay 1.0 --model ComplEx --patience 10 --ls 0.0 --outfile 3hop
and I used qa_train_3hop.txt to train the model. Could you share the training log? I would like to know how long the best model took to train.
from embedkgqa.
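The speedup described above comes from freezing the question encoder so its weights are excluded from backpropagation. A minimal PyTorch sketch of that pattern, where TinyEncoder is a hypothetical stand-in for the actual RoBERTa model (in practice `requires_grad` must be set on each parameter object, not on the `parameters` method itself):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the RoBERTa question encoder; the real model
# comes from the `transformers` library, but the freezing pattern is the
# same for any nn.Module.
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(100, 16)
        self.proj = nn.Linear(16, 8)

    def forward(self, ids):
        return self.proj(self.embed(ids)).mean(dim=1)

roberta_model = TinyEncoder()

# Freeze every parameter of the encoder so the optimizer only updates
# the rest of the QA model.
for p in roberta_model.parameters():
    p.requires_grad = False

# Only pass trainable parameters to the optimizer.
trainable = [p for p in roberta_model.parameters() if p.requires_grad]
print(len(trainable))  # 0
```

With the encoder frozen, each training step skips the gradient computation for the RoBERTa weights, which is consistent with the shorter training time reported in the comment.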
Please use LSTM for MetaQA datasets
from embedkgqa.
Hello. I used an LSTM on the MetaQA 3-hop full dataset and trained with many hyperparameter settings, but the best result only reached 0.728. Here is the command that gave the best result:
python main_LSTM.py --mode train --relation_dim 200 --hidden_dim 256 --gpu 0 --freeze 0 --batch_size 1024 --validate_every 5 --hops 3 --lr 0.0005 --entdrop 0.1 --reldrop 0.2 --scoredrop 0.2 --decay 1.0 --model ComplEx --patience 12 --ls 0.0 --kg_type full
from embedkgqa.
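The --patience and --validate_every flags in the commands above implement patience-based early stopping: training halts once the validation score has not improved for a given number of consecutive evaluations. A minimal sketch of that logic (the function name and example scores are illustrative, not taken from the repository):

```python
def early_stop_eval(val_scores, patience):
    """Return the evaluation index at which training would stop, or None.

    Training stops after `patience` consecutive evaluations without
    improvement over the best score seen so far.
    """
    best = float("-inf")
    bad_evals = 0
    for i, score in enumerate(val_scores):
        if score > best:
            best = score
            bad_evals = 0
        else:
            bad_evals += 1
            if bad_evals >= patience:
                return i
    return None

# Illustrative validation accuracies, one per evaluation
# (with --validate_every 5, each entry would be 5 epochs apart).
scores = [0.10, 0.30, 0.50, 0.49, 0.48, 0.47]
print(early_stop_eval(scores, patience=3))  # 5
```

Under this scheme, "early stopped at epoch 14" means the validation score plateaued early, which is why loosening --patience or changing what is trainable (e.g. freezing the encoder) changes where training halts.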
Related Issues (20)
- relation in fbwq_full
- relation matching
- MetaQA relation match HOT 1
- About ComplEx score calculation HOT 1
- The MetaQA dataset HOT 2
- Question with "half KG" protocol HOT 2
- How to build your own pruning_train.txt
- Hello, I have some questions about the code, such as the meaning of "best_valid" HOT 1
- Do you use the folder "train_embeddings"?
- I have some question in the relation matching
- When I run RoBERTa/main.py, GPU utilization stays at 0 at the 'creating model' step and it takes several hours to create the model.
- Is it possible to share the pdf version or ppt version slides
- How to use eval? How can I use pre trained model for QA? HOT 1
- RoBERTa used for question embedding
- Pretrained models missing HOT 1
- How to set up fbwq_full?
- Missing files HOT 2
- About dataset
- pretrained_models.zip HOT 1
- No found pretrained_model