
cesi's Introduction

CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information

Conference Paper | Slides | Poster

Source code and dataset for The WebConf 2018 (WWW 2018) paper: CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information.

Overview of CESI. CESI first acquires side information for the noun phrases (NPs) and relation phrases in the Open KB triples. In the second step, it learns embeddings of these NPs and relation phrases while utilizing the side information obtained in the previous step. In the third step, CESI performs clustering over the learned embeddings to canonicalize the NP and relation phrases. Please refer to the paper for more details.

Dependencies

  • Compatible with both Python 2.7 and 3.x
  • Dependencies can be installed from requirements.txt (e.g. pip install -r requirements.txt)

Datasets

  • Datasets ReVerb45k, Base and Ambiguous are included with the repository.
  • The input to CESI is a KG given as a list of triples. Each triple is stored as a JSON object on a new line; an example entry is shown below, and a minimal loading sketch follows the field descriptions:
{
	"_id": 36952,
	"triple": [
		"Frederick",
		"had reached",
		"Alessandria"
	],
	"triple_norm": [
		"frederick",
		"have reach",
		"alessandria"
	],
	"true_link": {
		"subject": "/m/09w_9",
		"object": "/m/02bb_4"
	},
	"src_sentences": [
		"Frederick had reached Alessandria",
		"By late October, Frederick had reached Alessandria."
	],
	"entity_linking": {
		"subject": "Frederick,_Maryland",
		"object": "Alessandria"
	},
	"kbp_info": []
}
  • _id: unique id of each triple in the Knowledge Graph.
  • triple: the actual triple in the Knowledge Graph.
  • triple_norm: the normalized form of the triple (after lemmatization, lowercasing, ...).
  • true_link: the gold canonicalization of the subject and object. For relations, gold linking is not available.
  • src_sentences: the list of sentences from which the triple was extracted by Open IE algorithms.
  • entity_linking: the Entity Linking side information utilized by CESI.
  • kbp_info: the Knowledge Base Propagation side information used by CESI.
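
For reference, a minimal sketch of loading such a file (one JSON object per line, as described above; the path is only an example):

import json

# Minimal loading sketch: one JSON object per line.
# The path below is only an example; point it at the dataset you want to load.
triples = []
with open('data/reverb45k/reverb45k_test.txt') as f:
	for line in f:
		line = line.strip()
		if line:
			triples.append(json.loads(line))

# e.g. the normalized subject / relation / object of the first triple
subj, rel, obj = triples[0]['triple_norm']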

Usage:

Setup Environment:
  • After installing the Python dependencies, execute sh setup.sh to set up the required resources.
  • The Pattern library is required to run the code. Please install the version matching your Python interpreter (Python 2.x or Python 3.x).
Start PPDB server:
  • Running the PPDB server is essential for running the main code (a small connectivity check is sketched after these steps).
  • To start the server, execute: python ppdb/ppdb_server.py -port 9997 (let the server run in a separate terminal).
Run the main code:
  • python src/cesi_main.py -name reverb45_test_run
  • On executing the above command, all the output will be dumped in the output/reverb45_test_run directory.
  • -name is an arbitrary name assigned to the run.
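
Since a run depends on the PPDB server being reachable, it can help to confirm that something is actually listening on the expected port before starting. A minimal connectivity check, assuming the default localhost:9997 used above:

import socket

def ppdb_server_is_up(host='localhost', port=9997, timeout=2.0):
	"""Return True if something is listening on the PPDB server port.

	Connectivity check only (assumes the default port used above); it does
	not validate the server's responses.
	"""
	s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
	s.settimeout(timeout)
	try:
		s.connect((host, port))
		return True
	except (socket.error, socket.timeout):
		return False
	finally:
		s.close()

if __name__ == '__main__':
	print('PPDB server reachable:', ppdb_server_is_up())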

Citing:

Please cite the following paper if you use this code in your work.

@inproceedings{cesi2018,
	author = {Vashishth, Shikhar and Jain, Prince and Talukdar, Partha},
	title = {{CESI}: Canonicalizing Open Knowledge Bases Using Embeddings and Side Information},
	booktitle = {Proceedings of the 2018 World Wide Web Conference},
	series = {WWW '18},
	year = {2018},
	isbn = {978-1-4503-5639-8},
	location = {Lyon, France},
	pages = {1317--1327},
	numpages = {11},
	url = {https://doi.org/10.1145/3178876.3186030},
	doi = {10.1145/3178876.3186030},
	acmid = {3186030},
	publisher = {International World Wide Web Conferences Steering Committee},
	address = {Republic and Canton of Geneva, Switzerland},
	keywords = {canonicalization, knowledge graph embeddings, knowledge graphs, open knowledge bases},
}

cesi's People

Contributors

parthatalukdar, svjan5

cesi's Issues

ValueError: numpy.ufunc has the wrong size, try recompiling. Expected 192, got 216

Hi, the Python version I used is 3.6, and the other dependencies were installed following the requirements. However, it throws ValueError: numpy.ufunc has the wrong size, try recompiling. Expected 192, got 216, which suggests the installed numpy version is too old. But when I updated numpy and ran again, it threw FileNotFoundError: [Errno 2] No such file or directory: './output/reverb45_test_run/triples.txt', which is really strange.
Is there any way to solve this?
Thanks!

Using CESI with large dataset

Suppose we have a very large dataset with millions of OIE triples. In such a scenario, CESI runs out of memory. Which side-information procedures are the most memory-intensive and could be skipped in order to canonicalize a large dataset without hurting precision too much?

error in getPPDBclustersRaw

After running all installation steps, I run the script python src/cesi_main.py -name reverb45_test_run. Then I get the following error:

2020-10-02 11:45:51,494 - [INFO] - Running reverb45_test_run
2020-10-02 11:45:51,494 - [INFO] - Reading Triples
2020-10-02 11:45:51,494 - [INFO] -      Loading cached triples
2020-10-02 11:45:52,521 - [INFO] - Side Information Acquisition
Error! Status code :500
Traceback (most recent call last):
  File "src/cesi_main.py", line 234, in <module>
    cesi.get_sideInfo() # Side Information Acquisition
  File "src/cesi_main.py", line 89, in get_sideInfo
    self.side_info = SideInfo(self.p, self.triples_list, self.amb_mentions, self.amb_ent, self.isAcronym)
  File "/home/kgashteovski/cesi/cesi/src/sideInfo.py", line 24, in __init__
    self.fixTypos(amb_ent, amb_mentions, isAcronym)
  File "/home/kgashteovski/cesi/cesi/src/sideInfo.py", line 413, in fixTypos
    ent2ppdb = getPPDBclustersRaw(self.p.ppdb_url, sub_list)
  File "/home/kgashteovski/cesi/cesi/src/helper.py", line 118, in getPPDBclustersRaw
    if rep_list[i] == None: continue        # If no representative for phr then skip
TypeError: 'NoneType' object is not subscriptable

Any idea how to fix this quickly?
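
For reference, the log shows the PPDB server responded with status 500, after which rep_list inside getPPDBclustersRaw was None when the code tried to index it. A hedged sketch of a guard for that case (not the repository's actual code; the server-side 500 still needs to be investigated separately):

def safe_rep_list(rep_list, phrases):
	"""Fall back to 'no representative' entries when the PPDB lookup failed.

	Sketch only: rep_list is assumed to be None when the server call fails
	(as the HTTP 500 in the log above suggests) and a list otherwise.
	"""
	if rep_list is None:
		return [None] * len(phrases)
	return rep_list

Such a guard could be applied to rep_list inside getPPDBclustersRaw before the loop that indexes it.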

Training, Validation, and Test Splits?

I read your paper "CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information", and I see the following remark about the ReVerb45k dataset construction:

"Through these steps, we obtained 45K high-quality triples which we used for evaluation. We call this resulting dataset ReVerb45K."

I see the 45k triples are shown in the data\reverb45k\reverb45k_valid.txt + data\reverb45k\reverb45k_test.txt files in this github repo.

Can you explain how these two .txt files alone are used to create the automatically learned embeddings from the objective function in section 5, as well as compute the F1 scores in evaluation? Is there no training dataset separate from these two .txt files in your github repo?

Why is the cluster representative chosen using the ent2freq dict and NOT sub2freq?

I noticed that at this line the subject embeddings and relation embeddings are passed for clustering, and then the cluster representative is found using the (possibly wrong) ent2freq dictionary here. The subject embeddings dict contains 11878 subjects, whereas the ent2freq dict contains 23219 entities. The ent2freq dict maps from entity, not subject, to its frequency, i.e. there is a mismatch between entity ids and subject ids. Could you please clarify this? I am happy to elaborate on my concern if needed.
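
To make the concern concrete, picking a cluster representative by frequency only behaves correctly when the frequency dictionary is keyed by the same id space as the items being clustered. A generic sketch (not the repository's code) of the pattern in question:

def pick_representative(cluster_members, freq):
	"""Return the cluster member with the highest frequency.

	Generic sketch, not the repository's code: cluster_members and freq must
	share the same id space; otherwise the lookups silently pick up wrong or
	missing counts, which is the mismatch this issue is about.
	"""
	return max(cluster_members, key=lambda m: freq.get(m, 0))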

Regarding Source Text of Sentences of Reverb 45

Firstly, for every triple in ReVerb, we extracted the source text from Clueweb09 corpus from which the triple was generated. In this process, we rejected triples for which we could not find any source text

Can the source text document information please be shared?

Process custom data

Suppose I have a dataset (other than ReVerb45k) that I have already formatted as described in the README. How can I run CESI to learn canonicalizations from this other dataset?

Custom Dataset "entity_linking" and "true_link"

Hi,
I would like to run CESI on my own custom dataset. I understand the dataset needs to be in the JSON format described in the README. However, I would like to know how you extracted "entity_linking" and "true_link" for a given triple in the KB.

Error in src/skge/util.py

def getPairs(id_list, id2clust, mode = 'm2o'):
	pairs = set()
	map_clust = dict()

	for ele in id_list:
		if ele in id2clust: map_clust[ele] = id2clust[ele]

	Z = len(map_clust.keys())

	clusters = invertDic(map_clust, mode)

	for _, v in clusters.items():
		pairs.union(itertools.combinations(v, 2))

	return list(pairs), Z

In this code, itertools.combinations(v, 2) returns an itertools object, so it should be cast to a set; also, pairs.union does not update pairs in place, so that line should be changed to pairs = pairs.union(set(itertools.combinations(v, 2))). I am using Python 3.

If I rerun with this modification on the ReVerb45k dataset, the pairwise accuracy seems to drop. Could you please help?
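
For reference, a corrected version along the lines suggested above could look like the following sketch (invertDic is the repository's existing helper used by the original function and is assumed to be in scope):

import itertools

def getPairs(id_list, id2clust, mode='m2o'):
	# Keep only the ids that have a cluster assignment.
	map_clust = {ele: id2clust[ele] for ele in id_list if ele in id2clust}
	Z = len(map_clust)

	clusters = invertDic(map_clust, mode)   # repository helper, assumed in scope

	pairs = set()
	for _, v in clusters.items():
		# set.union returns a new set, so accumulate explicitly; update()
		# accepts the combinations iterator directly and is equivalent to the
		# pairs = pairs.union(set(...)) change suggested above.
		pairs.update(itertools.combinations(v, 2))

	return list(pairs), Z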
