A toolkit for segmenting Elementary Discourse Units (EDUs, i.e. clauses). This is a refactored implementation of the paper: Toward Fast and Accurate Neural Discourse Segmentation
This version runs under Python 3.7. The AllenNLP dependency is replaced by TensorFlow Hub.
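For reference, here is a minimal, self-contained sketch of how contextual ELMo embeddings can be obtained through TensorFlow Hub. This shows standard TF1-style usage of the public google/elmo/2 module, not this repo's actual embedding code:

import tensorflow as tf
import tensorflow_hub as hub

# Load the public ELMo module (TF1 graph mode; the weights are
# downloaded and cached on first use).
elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=False)

# Embed one pre-tokenized sentence; batched input must be padded to equal length.
tokens = [["Although", "it", "rained", ",", "we", "went", "out", "."]]
lengths = [len(sent) for sent in tokens]
embeddings = elmo(
    inputs={"tokens": tokens, "sequence_len": lengths},
    signature="tokens",
    as_dict=True,
)["elmo"]  # shape: [batch, max_len, 1024]

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # (1, 8, 1024)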
We cannot provide the complete RST-DT corpus due to LDC copyright restrictions, so we include only a few sample files in ./data/rst/ to test our code and illustrate the data structure.
If you want to train or evaluate our model on RST-DT, you need to download the corpus manually and place it in the same folder. Then run the following command to preprocess the data and build the vocabulary:
python run.py --prepare
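For orientation, the expected folder layout after downloading is sketched below. The TRAINING/ and preprocessed/ paths are taken from the example commands in this README; the TEST/ folder follows the standard RST-DT distribution:

data/rst/
├── TRAINING/        # wsj_*.out raw documents (standard RST-DT layout)
├── TEST/            # held-out documents from the official split
└── preprocessed/    # created by --prepare, e.g. preprocessed/test/*.preprocessed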
You can evaluate the performance of a model after downloading and preparing the RST-DT data as mentioned above:
python run.py --evaluate --test_files ../data/rst/preprocessed/test/*.preprocessed
You can use the following command to train the model from scratch:
python run.py --train
Hyper-parameters and other training settings can be modified in config.py.
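The option names below are purely hypothetical and are NOT taken from this repo's config.py; they only illustrate the kind of settings such a file typically holds, so consult the real file before changing anything:

# HYPOTHETICAL sketch -- illustrative names only, not the real config.py.
class Config:
    learning_rate = 0.001    # optimizer step size
    batch_size = 32          # examples per training batch
    hidden_size = 200        # BiLSTM hidden units per direction
    dropout_keep_prob = 0.9  # keep probability applied during training
    max_epochs = 20          # passes over the RST-DT training set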
You can segment raw-text files into EDUs:
python run.py --segment --input_files ../data/rst/TRAINING/wsj_110*.out --result_dir ../data/results/
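As an illustration of the expected behavior (an invented example, not actual toolkit output), an EDU segmenter splits a sentence into clause-like units:

Input:  Although the market rebounded quickly, many investors stayed on the sidelines.
EDUs:   [Although the market rebounded quickly,] [many investors stayed on the sidelines.]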
Please cite the following paper if you use this toolkit in your work:
@inproceedings{wang2018edu,
  title={Toward Fast and Accurate Neural Discourse Segmentation},
  author={Wang, Yizhong and Li, Sujian and Yang, Jingfeng},
  booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
  pages={962--967},
  year={2018}
}