
LongDocFACTScore

This is the repository associated with the paper: LongDocFACTScore: Evaluating the Factuality of Long Document Abstractive Summarisation, presented at LREC-COLING 2024 in Turin, Italy.

Abstract:

Maintaining factual consistency is a critical issue in abstractive text summarisation; however, it cannot be assessed by traditional automatic metrics used for evaluating text summarisation, such as ROUGE scoring. Recent efforts have been devoted to developing improved metrics for measuring factual consistency using pre-trained language models, but these metrics have restrictive token limits, and are therefore not suitable for evaluating long document text summarisation. Moreover, there is limited research and resources available for evaluating whether existing automatic evaluation metrics are fit for purpose when applied in long document settings. In this work, we evaluate the efficacy of automatic metrics for assessing the factual consistency of long document text summarisation. We create a human-annotated data set for evaluating automatic factuality metrics, LongSciVerify, which contains fine-grained factual consistency annotations for long document summaries from the scientific domain. We also propose a new evaluation framework, LongDocFACTScore, which is suitable for evaluating long document summarisation. This framework allows metrics to be efficiently extended to any length document and outperforms existing state-of-the-art metrics in its ability to correlate with human measures of factuality when used to evaluate long document summarisation data sets.

Method:

LongDocFACTScore is a reference-free framework that can be applied to any reference-free metric for assessing factual consistency. In this repo, it is implemented with BARTScore. The method uses sentence embeddings to calculate the similarity between source document sentences and predicted summary sentences, and then applies the underlying metric to the highest-similarity text snippets. The per-sentence scores are then averaged to give a single score for each predicted summary.

In this work, LongDocFACTScore is implemented with BARTScore, and some code is adapted from the BARTScore repository.

![Overview of the LongDocFACTScore method](longdocfactscore.png)
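
As a rough illustration of this procedure, the sketch below scores a summary sentence by sentence against its most similar source snippets. It is not the package's actual implementation: the sentence encoder (`all-MiniLM-L6-v2`) and the stand-in `metric_fn` are illustrative assumptions, whereas the released code pairs this retrieval step with BARTScore.

```python
# Minimal sketch of the LongDocFACTScore idea, NOT the package's actual
# implementation. The encoder choice and the `metric_fn` stand-in are
# illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder

def ldfacts_sketch(src_sentences, summary_sentences, metric_fn, k=3):
    """Score each summary sentence against its k most similar source
    sentences, then average to get one score for the whole summary."""
    src_emb = embedder.encode(src_sentences)
    hyp_emb = embedder.encode(summary_sentences)
    # cosine similarities between every summary and source sentence
    src_emb = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    hyp_emb = hyp_emb / np.linalg.norm(hyp_emb, axis=1, keepdims=True)
    sims = hyp_emb @ src_emb.T
    per_sentence_scores = []
    for i, hyp_sent in enumerate(summary_sentences):
        top = sorted(np.argsort(sims[i])[-k:])  # k closest source sentences
        snippet = " ".join(src_sentences[j] for j in top)
        per_sentence_scores.append(metric_fn(snippet, hyp_sent))  # e.g. a BARTScore call
    return float(np.mean(per_sentence_scores))
```

Because each summary sentence is only ever compared against a short retrieved snippet, the underlying metric never has to encode more than a few sentences at a time, which is what allows the framework to be extended efficiently to documents of any length.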

Data sets (including LongSciVerify)

In our work, we curate the LongSciVerify data set, which consists of PubMed and arXiv papers with human annotations of factual consistency. More information about the data sets we use can be found here.

Usage of LongDocFACTScore

Install:

```
pip install longdocfactscore
```

or, for an editable install:

```
git clone https://github.com/jbshp/LongDocFACTScore.git
cd LongDocFACTScore
pip install -e .
```

To run on a piece of text:

```python
from longdocfactscore.ldfacts import LongDocFACTScore

predict_summary = "INSERT PREDICTED SUMMARY HERE"
src_doc = "INSERT SOURCE DOCUMENT HERE"

# instantiate the scorer (use device='cuda' if a GPU is available)
ldfacts_scorer = LongDocFACTScore(device='cpu')

scores = ldfacts_scorer.score_src_hyp_long([src_doc], [predict_summary])
```
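
Note that `score_src_hyp_long` takes parallel lists of source documents and predicted summaries, so a whole data set can be scored in one call, yielding one factuality score per document-summary pair (the average of its per-sentence scores, as described above).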

To run with some example data:

```
python run_example.py
```

Reproduce the evaluation from the LongDocFACTScore paper

Set up

  1. Run the following:

     ```
     pip install longdocfactscore
     cd evaluation_scripts
     git clone https://github.com/ThomasScialom/QuestEval.git
     git clone https://github.com/neulab/BARTScore.git
     git clone https://github.com/salesforce/factCC.git
     cp ./utils/factcc_run.py ./factCC/modeling/run.py
     pip install -r requirements.txt
     ```

  2. Download the trained factCC checkpoint from the factCC repo and copy it into the top level of this repo, in a folder called factcc-checkpoint.
  3. Run the evaluation scripts. The data set options are: pubmed_longdocfactscore and arxiv_longdocfactscore.

e.g.,

```
cd ..
python evaluation_scripts/run_evaluation.py --dataset pubmed_longdocfactscore
```
