Enhancing Factual Consistency of Abstractive Summarization

  • Chenguang Zhu ,
  • William Hinthorn ,
  • Ruochen Xu ,
  • Qingkai Zeng ,
  • Xuedong Huang ,
  • Meng Jiang

North American Chapter of the Association for Computational Linguistics (NAACL) 2021

Automatic abstractive summaries often distort or fabricate facts in the source article. This inconsistency between the summary and the original text severely limits the applicability of abstractive summarization. We propose a fact-aware summarization model, FASum, which extracts factual relations from the article and integrates them into the summary generation process via graph attention. We then design a factual corrector model, FC, to automatically correct factual errors in summaries generated by existing systems. Empirical results show that FASum produces abstractive summaries with higher factual consistency than existing systems, and that FC improves the factual consistency of given summaries by modifying only a few keywords.
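
The abstract describes encoding extracted factual relations with graph attention so the summarizer can condition on them. The sketch below is not the authors' released code; it is a minimal, assumed illustration of a single-head graph-attention layer over fact-triple nodes (subject, relation, object). All names here (`FactGraphAttention`, `node_dim`, the toy adjacency) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactGraphAttention(nn.Module):
    """Single-head graph attention over nodes built from fact triples."""

    def __init__(self, node_dim: int):
        super().__init__()
        self.proj = nn.Linear(node_dim, node_dim, bias=False)
        # Scores a (source node, neighbor node) pair of projected features.
        self.attn = nn.Linear(2 * node_dim, 1, bias=False)

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (N, node_dim) node embeddings; adj: (N, N) 0/1 edge mask.
        h = self.proj(nodes)
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1),      # source node features
             h.unsqueeze(0).expand(n, n, -1)],     # neighbor node features
            dim=-1)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)   # (N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))  # keep real edges only
        weights = torch.softmax(scores, dim=-1)
        # Aggregated node states that a decoder could attend to during generation.
        return weights @ h


# Toy usage: three nodes from one triple (subject, relation, object),
# fully connected so each node attends to the others.
nodes = torch.randn(3, 64)
adj = torch.ones(3, 3)
fact_states = FactGraphAttention(64)(nodes, adj)
print(fact_states.shape)  # torch.Size([3, 64])
```

In the paper's setting these aggregated node states would be consumed by the summary decoder; how they are fused (e.g., cross-attention or concatenation) is not specified in this abstract, so the sketch stops at the graph-encoding step.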