ansonb / recon
This is the code for the paper 'RECON: Relation Extraction using Knowledge Graph Context in a Graph Neural Network'.
License: MIT License
Hi, thanks for your paper and code. I downloaded the data and put it in ./data/ accordingly. However, I couldn't find the files below in your repo. I am running this on Colab.
The GAT folder in the main directory of this code doesn't contain the missing file either. Could you please upload any missing files?
Thanks
ERROR:root:No list found. [Errno 2] No such file or directory: '../resources/property_blacklist.txt'
ERROR:root:No list found. [Errno 2] No such file or directory: '../../resources/property_blacklist.txt'
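Until the missing resource is uploaded, one possible workaround (a sketch, assuming the code tolerates an empty blacklist, and that the relative path resolves against the script's working directory) is to create the file yourself:

```shell
# Assumption: an empty blacklist simply means no properties are filtered out.
# Adjust the relative path to match where the script looks for it.
mkdir -p resources
touch resources/property_blacklist.txt
```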
Traceback (most recent call last):
  File "train.py", line 410, in <module>
    train()
  File "train.py", line 102, in train
    W_ent2rel_all_rels = np.load(w_ent2rel_all_rels_file)
  File "/usr/local/lib/python3.7/dist-packages/numpy/lib/npyio.py", line 416, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: './models/GAT/WikipediaWikidataDistantSupervisionAnnotations/W_ent2rel.json.npy'
Firstly, thanks for sharing. I'm having some problems running the code, as follows:
The corresponding code in the project:
args.add_argument("-g1hop", "--get_1hop", type=bool, default=False)
args.add_argument("-g2hop", "--get_2hop", type=bool, default=False)
args.add_argument("-u2hop", "--use_2hop", type=bool, default=True)
args.add_argument("-u1hop", "--use_1hop", type=bool, default=True)

if(args.use_2hop):
    print("Opening node_neighbors pickle object")
    file = args.data + "/2hop.pickle"
    with open(file, 'rb') as handle:
        node_neighbors_2hop = pickle.load(handle)
    file = args.data + "/2hop_test.pickle"
    with open(file, 'rb') as handle:
        node_neighbors_2hop_test = pickle.load(handle)
    Corpus_.node_neighbors_2hop = node_neighbors_2hop
    Corpus_.node_neighbors_2hop_test = node_neighbors_2hop_test
I don't know how to generate the corresponding 2hop.pickle and 2hop_test.pickle files. How can they be created?
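For reference, here is a minimal sketch of how such a 2-hop neighbor map could be built and pickled. The exact structure RECON's Corpus expects is not documented, so the (head, relation, tail) triple format and the node-to-set layout here are assumptions, not the repo's actual format:

```python
import pickle
from collections import defaultdict

# Hypothetical sketch: build a {node: set of strictly-2-hop neighbors} map
# from a list of (head, relation, tail) triples, treating edges as undirected.
def build_2hop(triples):
    one_hop = defaultdict(set)
    for h, _, t in triples:
        one_hop[h].add(t)
        one_hop[t].add(h)
    two_hop = {}
    for node, nbrs in one_hop.items():
        reach = set()
        for n in nbrs:
            reach |= one_hop[n]
        reach.discard(node)           # exclude the node itself
        two_hop[node] = reach - nbrs  # keep only strictly 2-hop nodes
    return two_hop

triples = [("A", "r1", "B"), ("B", "r2", "C")]
neighbors = build_2hop(triples)  # A reaches C via B
with open("2hop.pickle", "wb") as handle:
    pickle.dump(neighbors, handle)
```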
Hi,
thank you for your valuable work!
I was wondering why the dataset_triples_train.json file contains 570,084 examples, while the paper indicates 455,771?
Best,
Andrei
Could you provide the macro_pr and micro_pr results on the Wikidata dataset, please?
Hello, when I run python train.py, it raises the error AttributeError: 'str' object has no attribute 'property2idx'. Looking at models.py, I found that RECON indeed doesn't have the attribute "property2idx". Is this a problem on my end? Can I point property_index to any folder?
When I run train.py, it raises the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'model_params.json'
Is it a problem with the scikit-learn version?
If so, which version should be installed?
I found that there are many duplicated triples in the 'train.txt', 'valid.txt' and 'test.txt' files you use.
Does this lead to data leakage?
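One way to quantify the concern is to intersect the splits directly. A small sketch (the tab-separated head/relation/tail format assumed here may differ from the repo's actual file layout, and the example uses hypothetical in-memory data rather than the real files):

```python
# Load a split file into a set of (head, relation, tail) tuples.
# Assumption: one tab-separated triple per line.
def load_triples(path):
    with open(path) as f:
        return {tuple(line.strip().split("\t")) for line in f if line.strip()}

# Demo with hypothetical data instead of the real train.txt / test.txt:
train = {("A", "r1", "B"), ("B", "r2", "C")}
test = {("A", "r1", "B"), ("C", "r3", "D")}
leaked = train & test          # triples shared between splits
print(len(leaked))             # -> 1
```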
Where are the files final_entity_embeddings.json, final_relation_embeddings.json and W_ent2rel.json.npy?
Hi,
I am currently looking into your great work. I downloaded the data and the code, but found that training the model takes too much time even with GPUs. So I am wondering whether a pre-trained model (RECON, RECON_EAC, etc.) could be provided? Many thanks.
Dennis
Hello, I reread your paper and would like to try deploying your model in my work, but I need to create a domain-specific dataset. What should I prepare for the dataset?
Hello, what is the difference between the Wikidata and NYT datasets? Why are different evaluation metrics used for them?
Hello, I would like to modify the NYT dataset, do you have a script to generate the files it uses?
Hello, when model = RECON, are the two files final_relation_embeddings.json and W_ent2rel.json.npy available? I didn't find them in your Wikidata data.
Hello, could you please release the code for the NYT evaluation metrics?
Hello, after several days of debugging, I have found many bugs in your code.
I'd like to share them with other people, and I hope you can fix them.
(1) Tons of file path errors, which I won't list here.
(2)models/models.py
Why do you use nn.Parameter to create 'init_gat_relation_embeddings' here? Moreover, why do you set 'requires_grad=True'?
(3)GAT_seq_space/main.py
'outfolder' is not defined.
I don't know why this was written; the proper code is
model_gat.load_state_dict(torch.load(
    os.path.join(args.output_folder, 'trained_{}.pth'.format(args.epochs_gat - 1))))
The same bug also exists on many other lines.
(4)train.py
Too many to list them all; one example is
torch.from_numpy(***.astype(int))
Thanks a lot if you can fix them all :)
And it would be even better if you could write clearly in README.md how to run your code 👍
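One more pitfall worth noting in the argparse flags quoted earlier: `type=bool` converts any non-empty string to True, so passing `--use_2hop False` on the command line still enables the flag. A common fix (a sketch, not the repo's own code) is a small string-to-bool converter:

```python
import argparse

# argparse's type=bool treats any non-empty string as truthy, so
# "--use_2hop False" would still yield True. A converter fixes this.
def str2bool(v):
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "1"):
        return True
    if v.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError("Boolean value expected.")

parser = argparse.ArgumentParser()
parser.add_argument("-u2hop", "--use_2hop", type=str2bool, default=True)
args = parser.parse_args(["--use_2hop", "False"])
print(args.use_2hop)  # -> False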
Hello, your work has benefited me a lot. I want to ask what the categories in Wikidata are (e.g. P106: occupation), not just the ID P106. Where can I find the specific categories?
Traceback (most recent call last):
  File "main.py", line 276, in <module>
    word_vocab, char_vocab, word_embed_matrix = build_vocab(args, entities_context_data, save_vocab, embedding_file)
  File "main.py", line 187, in build_vocab
    with open(save_path[0],'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: './vocab.json'
Please upload the missing file.
Thanks