Manual and Automatic Evaluations of Summaries
- Chin-Yew Lin,
- E.H. Hovy
The ACL-02 Workshop on Automatic Summarization | Association for Computational Linguistics
In this paper we discuss manual and automatic evaluations of summaries using data from the Document Understanding Conference 2001 (DUC-2001). We first show the instability of the manual evaluation. Specifically, the low inter-human agreement indicates that more reference summaries are needed. To investigate the feasibility of automated summary evaluation based on the recent BLEU method from machine translation, we use accumulative n-gram overlap scores between system and human summaries. The initial results show encouraging correlations with human judgments, measured by the Spearman rank-order correlation coefficient. However, relative rankings of systems must take this instability into account.
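The two quantities mentioned in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's exact implementation: `ngram_overlap` computes a recall-oriented accumulative n-gram overlap between a system summary and a reference summary (the choice of `max_n = 4` is an assumption), and `spearman_rho` computes the Spearman rank-order correlation between automatic scores and human judgments.

```python
from collections import Counter


def ngrams(tokens, n):
    """Return the multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def ngram_overlap(system, reference, max_n=4):
    """Accumulative n-gram overlap: clipped n-gram matches between a system
    summary and a reference summary, summed over n = 1..max_n and normalized
    by the total reference n-gram count (recall-oriented). Illustrative only."""
    matched, total = 0, 0
    for n in range(1, max_n + 1):
        sys_counts = ngrams(system, n)
        ref_counts = ngrams(reference, n)
        matched += sum(min(c, sys_counts[g]) for g, c in ref_counts.items())
        total += sum(ref_counts.values())
    return matched / total if total else 0.0


def spearman_rho(xs, ys):
    """Spearman rank-order correlation between two score lists (no tie correction)."""
    def rank(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0] * len(values)
        for pos, i in enumerate(order):
            ranks[i] = pos + 1
        return ranks

    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))


# Usage sketch: rank systems by automatic score and compare with human scores.
system_tokens = "the cat sat on the mat".split()
reference_tokens = "a cat was sitting on the mat".split()
print(ngram_overlap(system_tokens, reference_tokens))
print(spearman_rho([0.42, 0.35, 0.51], [3.0, 2.5, 4.0]))
```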