Sequence-to-sequence deep neural models fine-tuned for abstractive summarization can achieve strong performance on datasets with sufficient human annotations. Yet, it has been shown that they have not reached their full potential: a wide gap remains between the top beam search output and the oracle beam. Recently, re-ranking methods have been proposed to learn to select a better summary candidate. However, such methods are limited by the summary quality aspects captured by the first-stage candidates. To bypass this limitation, we propose a new paradigm in second-stage abstractive summarization called SummaFusion, which fuses several summary candidates to produce a novel abstractive second-stage summary. Our method works well on several summarization datasets, improving both the ROUGE scores and the qualitative properties of the fused summaries. It is especially effective when the candidates to fuse are of lower quality, such as in the few-shot setup, where we set a new state-of-the-art.
Towards Summary Candidates Fusion
Mathieu Ravaut, Shafiq Joty, and Nancy Chen. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP'22), 2022.
PDF Abstract BibTeX Slides
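The abstract above describes a two-stage pipeline: a fine-tuned seq2seq model first generates several summary candidates, and a second model then fuses them into a new abstractive summary. The sketch below illustrates that candidate-generation-plus-fusion idea only; it is not the authors' SummaFusion implementation. The model checkpoints, the `<cand>` separator, and the input format are illustrative assumptions, and the fusion model would need to be fine-tuned on such concatenated inputs.

```python
# Minimal sketch of second-stage candidate fusion (assumptions labeled below),
# not the SummaFusion architecture from the paper.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
first_stage = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
# Assumption: a seq2seq model fine-tuned to map (source + candidates) -> summary.
fusion_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")

document = "..."  # source document to summarize

# First stage: generate several diverse summary candidates with
# diverse beam search (8 beams split into 4 groups).
inputs = tokenizer(document, return_tensors="pt", truncation=True)
candidate_ids = first_stage.generate(
    **inputs,
    num_beams=8,
    num_beam_groups=4,
    diversity_penalty=1.0,
    num_return_sequences=4,
    max_length=128,
)
candidates = tokenizer.batch_decode(candidate_ids, skip_special_tokens=True)

# Second stage: concatenate the source and all candidates (separator
# token "<cand>" is an illustrative assumption) and generate one
# fused abstractive summary.
fusion_input = document + " <cand> " + " <cand> ".join(candidates)
fusion_inputs = tokenizer(fusion_input, return_tensors="pt", truncation=True)
fused_ids = fusion_model.generate(**fusion_inputs, num_beams=4, max_length=128)
print(tokenizer.decode(fused_ids[0], skip_special_tokens=True))
```

Because the fusion model conditions on all candidates at once, it can recombine content across them rather than merely selecting one, which is what distinguishes fusion from re-ranking in the abstract above.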