The extent to which humans and LLMs are capable evaluators remains uncertain. This study investigates the behavior of crowd-sourced and expert annotators, as well as LLMs, when comparing outputs from different models. It curates a dataset of intentionally flawed machine-generated answers. The findings reveal a concerning bias in the evaluation process: answers with factual errors are rated more favorably than answers that are too short or contain grammatical errors.
Multi-Elo Rating System: evaluate machine-generated text independently across multiple dimensions rather than merging all evaluation aspects into a single score. This significantly enhances the quality of LLM-based evaluations but yields no significant improvement in crowd-sourced evaluations.
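To make the idea concrete, here is a minimal sketch of per-dimension Elo updates, not the paper's exact implementation; the dimension names, the K-factor of 32, and the base rating of 1000 are illustrative assumptions.

```python
from collections import defaultdict

def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Update both ratings after one comparison; score_a is 1.0, 0.5, or 0.0."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * (e_a - score_a)

def multi_elo(comparisons, dims, base: float = 1000.0):
    """
    comparisons: iterable of (model_a, model_b, {dim: score_a}) tuples,
    where score_a is 1.0 if A wins on that dimension, 0.5 for a tie, 0.0 if B wins.
    Returns {dim: {model: rating}} -- one independent Elo table per dimension.
    """
    ratings = {d: defaultdict(lambda: base) for d in dims}
    for a, b, verdicts in comparisons:
        for d, score_a in verdicts.items():
            ratings[d][a], ratings[d][b] = update_elo(ratings[d][a], ratings[d][b], score_a)
    return {d: dict(tbl) for d, tbl in ratings.items()}

# Example with assumed dimension names and two pairwise judgments.
dims = ["accuracy", "helpfulness", "language"]
comparisons = [
    ("model_A", "model_B", {"accuracy": 1.0, "helpfulness": 0.5, "language": 0.0}),
    ("model_A", "model_B", {"accuracy": 1.0, "helpfulness": 1.0, "language": 0.5}),
]
print(multi_elo(comparisons, dims))
```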
All the judges exhibit a bias toward longer texts; GPT-4 demonstrates the most bias and the expert annotators the least.
Both human and LLM judges rate the “Several Major Factual Errors” response as superior to the “Correct + Short” response.
Crowd-sourced annotators lack fact-checking ability, while expert and LLM judges can fact-check, albeit imperfectly. When an LLM judge fails to detect inaccuracies, it often favors flawed outputs over shorter or grammatically imperfect responses.
The order of answers affects the judges’ decisions.
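A common way to expose this position bias is to query the judge twice with the answer order swapped and keep only consistent verdicts. Below is a minimal sketch; `judge(question, first, second)` is a hypothetical callable returning "first", "second", or "tie".

```python
def order_robust_verdict(judge, question: str, answer_a: str, answer_b: str):
    """
    Query the judge with both answer orderings and keep the verdict only
    if it survives the order swap; otherwise flag it as inconsistent.
    """
    v1 = judge(question, answer_a, answer_b)   # A shown first
    v2 = judge(question, answer_b, answer_a)   # B shown first
    map1 = {"first": "A", "second": "B", "tie": "tie"}
    map2 = {"first": "B", "second": "A", "tie": "tie"}
    if map1[v1] == map2[v2]:
        return map1[v1]          # verdict is stable across orderings
    return "inconsistent"        # judge flipped with the order: position bias
```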
Stop Using Crowd-Sourced Annotators!
COBBLER pipeline: LLMs are biased text-quality evaluators; machine preferences are misaligned with human preferences; egocentric bias (evaluators prefer their own generations).
Automatic evaluation leaderboards (with LLMs as judges) have a number of limitations, including a preference for longer outputs or for outputs that are more similar to the evaluator's own generations.
COBBLER tests six different biases to benchmark evaluation quality and categorizes the model biases into two groups: implicit biases and induced biases.
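As an illustration of how such bias probes can be scored (not the COBBLER code itself), the sketch below tallies egocentric and first-position bias rates over a list of pairwise judgments; the field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    """One pairwise evaluation made by a judge model (illustrative fields)."""
    judge_model: str      # model acting as evaluator
    model_first: str      # model whose answer was shown first
    model_second: str     # model whose answer was shown second
    winner: str           # name of the preferred model, or "tie"

def egocentric_rate(judgments):
    """Fraction of self-involved comparisons where the judge picks its own answer."""
    own = [j for j in judgments if j.judge_model in (j.model_first, j.model_second)]
    if not own:
        return 0.0
    return sum(j.winner == j.judge_model for j in own) / len(own)

def first_position_rate(judgments):
    """Fraction of decided comparisons won by whichever answer was shown first."""
    decided = [j for j in judgments if j.winner != "tie"]
    if not decided:
        return 0.0
    return sum(j.winner == j.model_first for j in decided) / len(decided)
```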