Sequence-to-sequence neural networks have recently achieved great success in abstractive summarization, especially with the trend of fine-tuning large pre-trained language models on downstream datasets. These models are typically decoded with beam search to generate a single summary. However, the search space is very large, and due to exposure bias, such decoding is not optimal. In this paper, we show that it is possible to directly train a second-stage model that performs re-ranking on a set of summary candidates. Our mixture-of-experts SummaReranker learns to select a better candidate and systematically improves the performance of the base model. With a base PEGASUS, we improve ROUGE scores by 5.44% on CNN-DailyMail (47.16 ROUGE-1), 1.31% on XSum (48.12 ROUGE-1) and 9.34% on Reddit TIFU (29.83 ROUGE-1), reaching a new state-of-the-art.
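The second-stage re-ranking idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the learned mixture-of-experts scorer is replaced by a hypothetical stand-in that averages per-metric scores for each beam-search candidate, then picks the argmax.

```python
# Hypothetical sketch of second-stage re-ranking over summary candidates.
# The trained mixture-of-experts reranker is stood in for by a simple
# mean over per-metric scores (e.g., predicted ROUGE-1/2/L).
def rerank(candidates, expert_scores):
    """Return the candidate with the highest mean expert score.

    candidates: list of candidate summary strings
    expert_scores: one list of per-metric scores per candidate
    """
    def mean(xs):
        return sum(xs) / len(xs)

    best_idx = max(range(len(candidates)), key=lambda i: mean(expert_scores[i]))
    return candidates[best_idx]

# In practice, beam search on the base model (e.g., PEGASUS) would
# produce the candidate set; here it is hard-coded for illustration.
candidates = ["summary A", "summary B", "summary C"]
scores = [[0.41, 0.19, 0.38], [0.47, 0.22, 0.44], [0.35, 0.15, 0.30]]
print(rerank(candidates, scores))  # prints "summary B"
```

The key design choice is decoupling generation from selection: the base model proposes many candidates, and a separately trained scorer chooses among them, sidestepping the limitations of a single beam-search output.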
SummaReranker: A Multi-Task Mixture-of-Experts Re-Ranking Framework for Abstractive Summarization
Mathieu Ravaut, Shafiq Joty, and Nancy Chen. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022).