A course project on summarization of arXiv articles.
Abstracts of journal articles are treated as reference summaries of their articles. Data come from the arxiv-summarization dataset on Hugging Face, which contains articles and abstracts taken from arXiv.
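The dataset can be pulled with the Hugging Face `datasets` library. A minimal sketch, assuming the dataset ID is `ccdv/arxiv-summarization` and that each record exposes `article` and `abstract` fields (both assumptions, not confirmed by this repository):

```python
from datasets import load_dataset

# Assumed dataset ID and field names; adjust to match the dataset actually used.
dataset = load_dataset("ccdv/arxiv-summarization")

sample = dataset["train"][0]
print(sample["article"][:300])   # full article body
print(sample["abstract"][:300])  # abstract, used as the reference summary
```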
The model is based on BERT.
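One common way to build a summarizer on top of BERT is to tie two BERT checkpoints together as an encoder-decoder ("BERT2BERT") with Hugging Face `transformers`. A minimal sketch, assuming `bert-base-uncased` for both halves (the exact setup in this project may differ):

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# The decoder needs explicit special-token ids before generation works.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
```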
Fine-tuned models are stored in the results/ directory. Note that Git LFS may be required to clone this repository.
Model performance under the current training parameters is rather poor. Full training on more capable hardware or via a cloud provider would likely give better results, with a training procedure similar to the one used here.
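Continuing from the sketches above, the fine-tuning loop could look roughly like the following `Seq2SeqTrainer` setup. The hyperparameters are placeholders for illustration, not the values used to produce the checkpoints in results/:

```python
from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

def preprocess(batch):
    # Truncation lengths are illustrative; long arXiv articles exceed BERT's input limit.
    model_inputs = tokenizer(batch["article"], max_length=512, truncation=True)
    labels = tokenizer(batch["abstract"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="results",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```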
- A. Cohan et al. 2018. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). Association for Computational Linguistics. doi: 10.18653/v1/N18-2097.
- Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423.