
Term Extraction

A PyTorch implementation of the paper Feature-Less End-to-End Nested Term Extraction (NLPCC XAI 2019).

The code is based on span ranking and classification; it supports nested term extraction in an end-to-end manner and does not require any additional features.
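For a rough intuition of the span-based approach (a minimal sketch, not the repository's actual code; `enumerate_spans` is a hypothetical helper), all candidate spans up to a maximum length are enumerated, so nested and overlapping spans are scored side by side and nested terms can be recovered:

```python
# Illustrative sketch only: enumerate all candidate spans up to a maximum length
# (inclusive start/end indices, matching the "terms" annotations shown later).
def enumerate_spans(words, max_length=5):
    spans = []
    for start in range(len(words)):
        for end in range(start, min(start + max_length, len(words))):
            spans.append((start, end))
    return spans

print(enumerate_spans(["IL-2", "gene", "expression"], max_length=2))
# [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]
# (0, 0) is nested inside (0, 1); both are kept as candidates for the ranker.
```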

1. File Description:

./data: the directory containing the corpus.

./models: the directory containing the model Python files:

|--> `./charfeat`: the models that build character-level features. Copied from [Jie's code](https://github.com/jiesutd/NCRFpp/tree/master/model). It contains:

	    |--> `charbigru.py`: the bi-directional GRU model for char features.

	    |--> `charbilstm.py`: the bi-directional LSTM model for char features.

	    |--> `charcnn.py`: the CNN pooling model for char features.

|--> `./wordfeat`: the models that build word-level and sequence-level features. Copied from [Jie's code](https://github.com/jiesutd/NCRFpp/tree/master/model) and modified. It contains:

	    |--> `WordRep.py`: the model class that builds word-level features.

	    |--> `WordSeq.py`: the model class that builds sequential features from word-level features.

|--> `FCRanking.py`: the model file for the span-classification-based ranking model.

./saves: the directory where models, data, and test output are saved.

./utils: the directory containing utilities for loading data, building vocabularies, attention functions, etc.:

|--> `alphabet.py`: the tools to build the vocabulary. Copied from [Jie's code](https://github.com/jiesutd/NCRFpp/tree/master/model).

|--> `data.py`: the tools to load data and build the vocabulary. Copied from [Jie's code](https://github.com/jiesutd/NCRFpp/tree/master/model) and modified.

|--> `functions.py`: the Python file that includes attention, softmax, masked softmax, and other helper functions (a short masked-softmax sketch follows the file list below).

main.py: the Python file to train and test the model.

Key files: FCRanking.py, main.py
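For a rough idea of the helpers collected in `functions.py` (a minimal sketch assuming plain PyTorch; the function name and signature here are illustrative, not the file's actual API), a masked softmax normalizes attention scores while ignoring padded positions:

```python
import torch
import torch.nn.functional as F

def masked_softmax(scores, mask, dim=-1):
    # Positions where mask == 0 (padding) are pushed to -inf so they receive ~0 weight.
    scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=dim)

scores = torch.randn(2, 4)
mask = torch.tensor([[1, 1, 1, 0],
                     [1, 1, 0, 0]])
weights = masked_softmax(scores, mask)  # each row sums to 1 over the unmasked positions
```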

2. How to run:

2.1 Data Pre-processing (before running):

Please process the data into the jsonlines format below:

{"words": ["IL-2", "gene", "expression", "and", "NF-kappa", "B", "activation", "through", "CD28", "requires", "reactive", "oxygen", "production", "by", "5-lipoxygenase", "."], "tags": ["NN", "NN", "NN", "CC", "NN", "NN", "NN", "IN", "NN", "VBZ", "JJ", "NN", "NN", "IN", "NN", "."], "terms": [[0, 1, "G#DNA_domain_or_region"], [0, 2, "G#other_name"], [4, 5, "G#protein_molecule"], [4, 6, "G#other_name"], [8, 8, "G#protein_molecule"], [14, 14, "G#protein_molecule"]]}

There are three keys:

"words": the tokenized sentence.

"tags": the POS-tag, not a must, you can modified in the load file data.py

"terms": the golden term spans. In it, [0, 1, "G#DNA_domain_or_region"] for example, the first two int number is a must. the third string can use a placeholder like '@' instead if you don't want to do detailed labelling.

Here we use the GENIA corpus, which is shared in the ./data directory in jsonlines format.
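The snippet below is a minimal sketch of writing one pre-processed sentence in this jsonlines format (the output path data/train.jsonlines is only an example; the actual paths are configured in data.py):

```python
import json

example = {
    "words": ["IL-2", "gene", "expression", "."],
    "tags": ["NN", "NN", "NN", "."],               # optional, see the note on "tags" above
    "terms": [[0, 1, "G#DNA_domain_or_region"],    # [start, end, label]; label may be "@"
              [0, 2, "G#other_name"]],
}

# One JSON object per line.
with open("data/train.jsonlines", "w", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```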

2.2 How to run and parameters:

  1. train (you can change the parameters below; those in parentheses are optional):

    python main.py --status train (--early_stop 26 --dropout 0.5 --use_gpu False --gpuid 3 --max_lengths 5 --word_emb [YOUR WORD EMBEDDINGS DIR])
  2. test (note that the parameters must be exactly the same as those used for training, except --status):

    python main.py --status test (--early_stop 26 --dropout 0.5 --use_gpu False --gpuid 3 --max_lengths 5 --word_emb [YOUR WORD EMBEDDINGS DIR])

Please note that the data paths (pointing into the ./data directory) are hard-coded in data.py.


Some details: the best settings for the ranker are Dropout: 0.6 | lr: 0.005 | MaxLen: 6 | HD: 100 | POS: True | ELMO: True | Elmo_dim: 100
