
HNRE

Code and datasets for our paper "Hierarchical Relation Extraction with Coarse-to-Fine Grained Attention"

If you use the code, please cite the following paper:

 @inproceedings{han2018hierarchicalRE,
   title={Hierarchical Relation Extraction with Coarse-to-Fine Grained Attention},
   author={Han, Xu and Yu, Pengfei and Liu, Zhiyuan and Sun, Maosong and Li, Peng},
   booktitle={Proceedings of EMNLP},
   year={2018}
 }

Requirements

The model is implemented in TensorFlow. The package versions used are listed below.

  • tensorflow = 1.4.1
  • numpy = 1.13.3
  • scipy = 0.19.1

Initialization

First, unzip ./raw_data/data.zip so that all of its files are placed under ./raw_data. Once the raw text corpus is in ./raw_data, run:

python scripts/initial.py
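The extraction step above can also be scripted. A minimal sketch using Python's standard zipfile module, assuming data.zip simply contains the raw corpus files at its top level:

```python
import os
import zipfile

def extract_raw_data(zip_path, raw_dir="./raw_data"):
    """Unpack the corpus archive so scripts/initial.py finds its inputs."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(raw_dir)          # creates raw_dir if needed
    return sorted(os.listdir(raw_dir))  # list what was unpacked, for a quick check
```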

Train the model

For the hierarchical CNN model, run:

PYTHONPATH=. python scripts/train.py --model cnn_hier

For the hierarchical PCNN model, run:

PYTHONPATH=. python scripts/train.py --model pcnn_hier

Evaluate the model

Run the various evaluations by specifying --mode on the command line; see the paper for a detailed description of these evaluation methods.

PYTHONPATH=. python scripts/evaluate.py --mode [test method: pr, pone, ptwo, pall] --test_single --test_start_ckpt [ckpt number to be tested] --model [cnn_hier or pcnn_hier]

The logits are saved under ./outputs/logits/. To view the PR curve, run the following command, which displays the curve directly via show(); you can adjust the code in ./scripts/show_pr.py to save the figure as a PDF or another format instead:

python scripts/show_pr.py [path/to/generated .npy logits file from evaluation]
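The plotting script computes a precision-recall curve from the saved logits. The exact layout of the .npy file is defined in ./scripts/show_pr.py, but the underlying computation is the standard one: sort predicted facts by confidence and accumulate true positives. A generic sketch of that computation (not the repository's exact code):

```python
import numpy as np

def pr_curve(scores, labels):
    """Precision/recall at every confidence threshold.

    scores: 1-D array of prediction confidences.
    labels: 1-D 0/1 array, 1 where the predicted fact is correct.
    """
    order = np.argsort(-scores)            # highest confidence first
    hits = labels[order]
    tp = np.cumsum(hits)                   # true positives among the top-k predictions
    precision = tp / np.arange(1, len(hits) + 1)
    recall = tp / max(hits.sum(), 1)       # guard against an all-negative label set
    return precision, recall
```

Given such a curve, np.trapz(precision, recall) gives an approximate area under it, which is the kind of AUC summary reported in the tables below.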

Pretrained models

The pretrained models are saved at ./outputs/ckpt/. To evaluate them directly, run:

PYTHONPATH=. python scripts/evaluate.py --mode [test method: hit_k_100, hit_k_200, pr, pone, ptwo, pall] --test_single --test_start_ckpt 0 --model [cnn_hier or pcnn_hier]

PR curves can be generated in the same way as above.

The results of the released checkpoints

As this toolkit was reconstructed from the original code and the checkpoints were retrained with it, the results of the released checkpoints are comparable to the reported ones.

The Main Experiments

  • pr
    Model      Micro  Macro
    CNN+HATT   41.8   16.5
    PCNN+HATT  42.0   17.1

The Auxiliary Experiments

  • Hits@N (<100)
    micro      N=10  N=15  N=20
    CNN+HATT    5.6  33.3  50.0
    PCNN+HATT  33.3  50.0  61.1
    macro      N=10  N=15  N=20
    CNN+HATT    5.6  29.6  57.4
    PCNN+HATT  29.6  50.0  61.1
  • Hits@N (<200)
    micro      N=10  N=15  N=20
    CNN+HATT   41.4  58.6  69.0
    PCNN+HATT  55.2  69.0  75.9
    macro      N=10  N=15  N=20
    CNN+HATT   22.7  42.4  65.2
    PCNN+HATT  41.4  59.1  68.2
  • pone
    P@N        100   200   300   Mean
    CNN+HATT   84.0  76.5  70.7  77.1
    PCNN+HATT  82.0  75.0  71.0  76.0
  • ptwo
    P@N        100   200   300   Mean
    CNN+HATT   83.0  79.0  72.3  78.1
    PCNN+HATT  82.0  77.0  75.3  78.1
  • pall
    P@N        100   200   300   Mean
    CNN+HATT   82.0  80.0  74.7  78.9
    PCNN+HATT  81.0  80.5  76.0  79.2
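For reference on the Hits@N numbers above: Hits@K measures the fraction of test cases whose gold relation appears among the model's top-K scored relations (the paper restricts this to long-tail relations with fewer than 100/200 training instances). A minimal sketch of the metric itself, independent of this repository's data pipeline:

```python
import numpy as np

def hits_at_k(scores, gold, k):
    """Fraction of instances whose gold relation is in the top-k predictions.

    scores: (n_instances, n_relations) array of relation scores.
    gold:   length n_instances array of gold relation indices.
    """
    topk = np.argsort(-scores, axis=1)[:, :k]   # k highest-scoring relations per instance
    return float(np.mean([g in row for g, row in zip(gold, topk)]))
```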

Baseline models

  • +ATT, +ONE
  • +ADV
  • +SL

Some other baselines can be found in other baselines.


hnre's Issues

Question about Hits@K

I have already extracted a subset of the test dataset in which all relations have fewer than 100/200 training instances. However, my reproduced results (using the model published on GitHub) do not match the results in your paper. Could you give me some pointers?

paper

Hello, can you give me a link to the paper? Thanks a lot.

Evaluation Problem

I downloaded your source code and trained without any changes, but I cannot achieve the score from the paper; the AUC is just 0.2966.
I wonder whether any tricks are needed when training and evaluating with your source code.

error config

In the train script, line 18, the model save path is incorrectly set to /output/ckpt/; it should be /outputs/ckpt/.

The result about Top-N precision

I ran your pretrained models to get the results for pone, ptwo, and pall, but all of them show a large gap from the results given in your paper.
How can this be resolved?

dataset

The dataset released with your code has 570k sentences, but your paper states 520k. Please explain the reason; as far as I know, the two datasets give very different performance.

question about Hits@K

Hi, I am confused about the Hits@K metric.
The paper describes it as "accuracy (%) of Hits@K on relations with training instances fewer than 100/200."
Are long-tail relations defined as relations with fewer than 100/200 training bags, or fewer than 100/200 training sentences?

Thanks a lot.
Best.

About initial_vectors file

How do I generate the pretrained vectors (init_vec and init_vec_pcnn) in the initial_vectors directory for another dataset?
