
recon's People

Contributors

ansonb, kulsingh


recon's Issues

Files missing - 'W_ent2rel.json.npy'

Hi, thanks for your paper and code. I downloaded the data and put it in ./data/ accordingly. However, I couldn't find these files in your repo. I am running this on Colab.
The GAT folder is in the main directory of this code and doesn't contain the missing file. Could you please upload any missing files?
Thanks

ERROR:root:No list found. [Errno 2] No such file or directory: '../resources/property_blacklist.txt'
ERROR:root:No list found. [Errno 2] No such file or directory: '../../resources/property_blacklist.txt'
ERROR:root:No list found. [Errno 2] No such file or directory: '../resources/property_blacklist.txt'
ERROR:root:No list found. [Errno 2] No such file or directory: '../../resources/property_blacklist.txt'
Traceback (most recent call last):
  File "train.py", line 410, in <module>
    train()
  File "train.py", line 102, in train
    W_ent2rel_all_rels = np.load(w_ent2rel_all_rels_file)
  File "/usr/local/lib/python3.7/dist-packages/numpy/lib/npyio.py", line 416, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: './models/GAT/WikipediaWikidataDistantSupervisionAnnotations/W_ent2rel.json.npy'

use_2hop

First of all, thanks for sharing your code. I'm having some problems running it, as follows:

FileNotFoundError: [Errno 2] No such file or directory: '../data/WikipediaWikidataDistantSupervisionAnnotations.v1.0/2hop.pickle'

The corresponding code in the project:
args.add_argument("-g1hop", "--get_1hop", type=bool, default=False)
args.add_argument("-g2hop", "--get_2hop", type=bool, default=False)
args.add_argument("-u2hop", "--use_2hop", type=bool, default=True)
args.add_argument("-u1hop", "--use_1hop", type=bool, default=True)

if(args.use_2hop):
    print("Opening node_neighbors pickle object")
    file = args.data + "/2hop.pickle"
    with open(file, 'rb') as handle:
        node_neighbors_2hop = pickle.load(handle)
    file = args.data + "/2hop_test.pickle"
    with open(file, 'rb') as handle:
        node_neighbors_2hop_test = pickle.load(handle)
    Corpus_.node_neighbors_2hop = node_neighbors_2hop
    Corpus_.node_neighbors_2hop_test = node_neighbors_2hop_test

I don't know how to generate the corresponding file.
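For what it's worth, in the KBGAT-style pipelines this code appears to follow, 2hop.pickle is usually produced on a first run with the get_2hop flag enabled: the 2-hop neighbourhoods are computed from the knowledge graph and dumped to disk, and later runs load them as above. Below is only a sketch of that pattern; the get_further_neighbors call is an assumption borrowed from the KBGAT code, not verified against this repo. Also note that argparse flags declared with type=bool treat any non-empty string as True, so it is safer to change the defaults in the code than to pass --get_2hop False on the command line.

import pickle

# Assumed first-run pattern (borrowed from KBGAT-style code, not verified
# against this repo): compute the 2-hop neighbourhoods and dump them so that
# later runs can load 2hop.pickle as shown in the snippet above.
if args.get_2hop:
    node_neighbors_2hop = Corpus_.get_further_neighbors(nbd_size=2)  # name assumed
    with open(args.data + "/2hop.pickle", 'wb') as handle:
        pickle.dump(node_neighbors_2hop, handle, protocol=pickle.HIGHEST_PROTOCOL)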

NYT train split size

Hi,

thank you for your valuable work!

I was wondering why the dataset_triples_train.json file contains 570,084 examples, while the paper reports 455,771?

Best,
Andrei

P-R Curve

Could you please provide the macro_pr and micro_pr results on the Wikidata dataset?

AttributeError: 'str' object has no attribute 'property2idx'

Hello, when I run python train.py, it raises the error AttributeError: 'str' object has no attribute 'property2idx'. So I looked at models.py and found that RECON indeed has no attribute "property2idx". Is this a problem on my end? Can I point property_index to any folder?
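For context, this exception typically means a plain path string is being handed to code that expects an object (or dict) already carrying the property-to-index mapping, i.e. something.property2idx is evaluated while something is still a str. A purely hypothetical illustration of loading the mapping first (the file name and location below are assumptions, not taken from this repo):

import json

# Hypothetical illustration only: load the relation/property index from its
# JSON file and pass the loaded mapping onward, rather than the raw path
# string that triggers the AttributeError.
property_index_path = "./data/property2idx.json"  # assumed name/location

with open(property_index_path) as f:
    property2idx = json.load(f)  # e.g. {"P17": 0, "P131": 1, ...}

idx2property = {v: k for k, v in property2idx.items()}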

No such file "model_params.json"

When I run train.py, it fails with the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'model_params.json'

KG?

I found that there are many duplicated triplets in the 'train.txt', 'valid.txt' and 'test.txt' files you use.
Does this lead to data leakage?

Missing documents

Where are the files final_entity_embeddings.json, final_relation_embeddings.json and W_ent2rel.json.npy?

Do you have a pre-trained model ready for use?

Hi,

I am currently looking into your great work. I downloaded the data and the code, but found that training the model will take too much time even with GPUs. So I am wondering whether a pre-trained model (RECON, RECON_EAC, etc.) could be provided? Many thanks.

Dennis

What if I create my own dataset?

Hello, I reread your paper and I'd like to try deploying your model in my work, but I need to create a domain-specific dataset. What should I prepare for the dataset?
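In case it helps while you wait for an answer: a purely hypothetical sketch of what one sentence-level instance might need to contain, loosely modelled on the Wikidata distant-supervision format used in this line of work. Every field name here is an assumption; check the released data files for the exact schema.

# Illustrative only; field names are assumptions modelled on the released
# Wikidata distant-supervision data, not a confirmed schema.
example = {
    "tokens": ["Berlin", "is", "the", "capital", "of", "Germany", "."],
    "vertexSet": [                        # entities: KB ids + token spans
        {"kbID": "Q64",  "tokenpositions": [0]},
        {"kbID": "Q183", "tokenpositions": [5]},
    ],
    "edgeSet": [                          # relations between entity pairs
        {"kbID": "P1376", "left": [0], "right": [5]},
    ],
}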

datasets

Hello, what is the difference between the Wikidata and NYT datasets? Why do they use different evaluation metrics?

Create a new dataset

Hello, I would like to modify the NYT dataset. Do you have a script to generate the files it uses?

W_ent2rel.json

Hello, when model = RECON, where are the files final_relation_embeddings.json and W_ent2rel.json.npy available? I couldn't find these two files in your Wikidata data.

p@k?

Hello, could you please release the code for the NYT evaluation metrics?
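While waiting for the official script, P@k over the ranked predictions is simple to reproduce. Here is a generic sketch (not the authors' evaluation code), assuming you have one confidence score and a correctness label per non-NA prediction:

def precision_at_k(scored_predictions, k):
    # scored_predictions: list of (confidence, is_correct) pairs, one per
    # extracted prediction, excluding the NA relation.
    ranked = sorted(scored_predictions, key=lambda x: x[0], reverse=True)
    top_k = ranked[:k]
    return sum(1 for _, correct in top_k if correct) / max(len(top_k), 1)

# Example usage with dummy scores:
# preds = [(0.93, True), (0.88, False), (0.71, True)]
# print(precision_at_k(preds, 2))  # 0.5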

Bugs...

Hello, after several days of debugging, I have found many bugs in your code.
I'd like to share them with other people, and I hope you can fix them.
(1) Tons of file-path errors, which I won't list here.
(2) models/models.py
Why do you use nn.Parameter to create 'init_gat_relation_embeddings' here? Moreover, why do you set 'requires_grad=True'? (See the sketch at the end of this issue.)
(3) GAT_seq_space/main.py

  • Set '--get_1hop' and '--get_2hop' to True on your first run.

'outfolder' is not defined... ^^

I don't know why you wrote it this way. The proper code is:

model_gat.load_state_dict(torch.load(
        os.path.join(args.output_folder, 'trained_{}.pth'.format(args.epochs_gat - 1))))

The same bug also exists on many other lines...

(4) train.py
Too many... I'll just list some of them:

  • Lines 54 - 66: the code here is incomplete, resulting in a lot of 'not defined' errors.
  • A lot of inappropriate usage, like torch.from_numpy(***.astype(int)).
  • Did you really train this model with ONLY 10 epochs? Seriously?
  • The file paths written at the top are all about Wikidata, and then you load data using data='nyt'... ^^
    I know it's not a big deal, but it seems to show what state your code is in..... :(

(5) Tons of other bugs that I have fixed and forgotten.

Thanks a lot if you can fix them all :)
And it would be even better if you could write CLEARLY how to run your code in README.md 👍
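Regarding point (2) above: wrapping pretrained embeddings in nn.Parameter with requires_grad=True makes the optimizer update them, which is only what you want if they are meant to be fine-tuned. A generic PyTorch sketch of the two options (illustrative, not this repo's code):

import torch
import torch.nn as nn

class RelationEmbeddingsSketch(nn.Module):
    # Generic illustration, not this repo's code.
    def __init__(self, pretrained):  # pretrained: FloatTensor of shape [num_rels, dim]
        super().__init__()
        # Option A: trainable -- the optimizer will fine-tune these embeddings.
        self.rel_emb_trainable = nn.Parameter(pretrained.clone())
        # Option B: frozen -- stored as a buffer, saved/loaded with the model
        # but never updated by the optimizer.
        self.register_buffer("rel_emb_frozen", pretrained.clone())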

Files missing - vocab.json and char_v.json

Traceback (most recent call last):
File "main.py", line 276, in
word_vocab, char_vocab, word_embed_matrix = build_vocab(args, entities_context_data, save_vocab, embedding_file)
File "main.py", line 187, in build_vocab
with open(save_path[0],'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: './vocab.json'

Please upload the missing files.
thanks
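In case the project expects vocab.json and char_v.json to be generated locally rather than shipped, here is a generic sketch of building and saving word and character vocabularies from tokenized text. The function name, reserved indices, and file names are illustrative, not this repo's API.

import json
from collections import Counter

def build_and_save_vocab(token_lists, vocab_path="./vocab.json",
                         char_path="./char_v.json", min_count=1):
    # Generic sketch: count words/characters and save index mappings,
    # reserving indices 0/1 for padding and unknown tokens.
    word_counts, char_counts = Counter(), Counter()
    for tokens in token_lists:
        word_counts.update(tokens)
        for tok in tokens:
            char_counts.update(tok)

    word_vocab = {w: i + 2 for i, (w, c) in enumerate(word_counts.most_common())
                  if c >= min_count}
    char_vocab = {ch: i + 2 for i, (ch, _) in enumerate(char_counts.most_common())}

    with open(vocab_path, "w") as f:
        json.dump(word_vocab, f)
    with open(char_path, "w") as f:
        json.dump(char_vocab, f)
    return word_vocab, char_vocab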
