ISCA Archive Interspeech 2022

Negative Guided Abstractive Dialogue Summarization

Junpeng Liu, Yanyan Zou, Yuxuan Xi, Shengjie Li, Mian Ma, Zhuoye Ding, Bo Long

The goal of abstractive dialogue summarization is to generate a shorter form of a long conversation while retaining its most salient information, which plays an important role in speech processing. Unlike well-structured text, such as scientific articles and news, dialogues often comprise utterances from multiple interlocutors; the conversations are often informal, verbose, and repetitive, sprinkled with false starts, backchanneling, reconfirmations, hesitations, and speaker interruptions, which may introduce much noisy information and thus bring new challenges to summarizing dialogues. In this work, we extend the widely used sequence-to-sequence summarization framework with a negative guided mechanism, which allows models to explicitly perceive the unnecessary pieces (i.e., noise) of a dialogue and thus focus more on the salient information. Specifically, the negative guided mechanism has two main components: negative example construction and a negative guided loss. We explore two different ways of constructing the negative examples and further calculate the negative loss. Extensive experiments on benchmark datasets demonstrate that our method significantly outperforms the baselines on both semantic-matching and factual-consistency metrics. Human evaluations further confirm the performance gains.
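The abstract does not spell out the exact form of the negative guided loss, so the sketch below is only one plausible reading: a margin term that pushes a constructed negative example's likelihood below that of the reference summary, on top of a HuggingFace-style encoder-decoder whose forward pass returns the token-level NLL as .loss. The names model, dialogue_ids, summary_ids, negative_ids, and the margin formulation itself are illustrative assumptions, not the paper's definition.

import torch

def negative_guided_loss(model, dialogue_ids, summary_ids, negative_ids,
                         margin=1.0):
    # Positive branch: standard cross-entropy against the gold summary.
    pos_loss = model(input_ids=dialogue_ids, labels=summary_ids).loss
    # Negative branch: NLL of the constructed negative example
    # (e.g., a pseudo-summary built from noisy, non-salient utterances).
    neg_loss = model(input_ids=dialogue_ids, labels=negative_ids).loss
    # Margin ranking term: penalize the model unless the negative
    # example is at least `margin` nats less likely (i.e., has a
    # higher NLL) than the reference summary.
    penalty = torch.clamp(margin + pos_loss - neg_loss, min=0.0)
    return pos_loss + penalty

In this reading, the positive term keeps ordinary maximum-likelihood training intact, while the hinge only activates when the model assigns the noisy negative a likelihood too close to the reference's, which matches the abstract's goal of making the model explicitly perceive unnecessary pieces of the dialogue.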


doi: 10.21437/Interspeech.2022-10395

Cite as: Liu, J., Zou, Y., Xi, Y., Li, S., Ma, M., Ding, Z., Long, B. (2022) Negative Guided Abstractive Dialogue Summarization. Proc. Interspeech 2022, 3253-3257, doi: 10.21437/Interspeech.2022-10395

@inproceedings{liu22r_interspeech,
  author={Junpeng Liu and Yanyan Zou and Yuxuan Xi and Shengjie Li and Mian Ma and Zhuoye Ding and Bo Long},
  title={{Negative Guided Abstractive Dialogue Summarization}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={3253--3257},
  doi={10.21437/Interspeech.2022-10395}
}