
Topic-Features for Dialogue Summarization

  • Conference paper
  • First Online:
Natural Language Processing and Chinese Computing (NLPCC 2022)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 13551))


Abstract

Texts such as news reports and academic papers are written by a single author and are well structured. Dialogues, in contrast, involve two or more speakers exchanging information: the topic or intention may shift over the course of a dialogue, and key information is often scattered across the utterances of different speakers, which poses challenges for abstractive dialogue summarization. Traditional topic modeling approaches are difficult to apply because of heavy noise and the inherent characteristics of dialogue. To model the entire dialogue effectively and capture diverse topic information, this paper proposes a topic-feature approach based on a neural topic model, including word-level topic embeddings and a dialogue-level topic representation. Experimental results on SAMSum, the largest dialogue summarization corpus, show that the proposed approach significantly improves over competitive baselines. We also conduct experiments on datasets from other domains to verify the effectiveness and generality of the proposed approach.
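The abstract does not spell out the model's internals. As a rough, self-contained illustration of the kind of neural topic model such an approach typically builds on (a variational encoder mapping a bag-of-words vector to a latent topic mixture, plus a topic-word matrix), here is a numpy sketch. All dimensions, weights, and function names are hypothetical placeholders; a real implementation would learn these parameters with a variational objective and feed the resulting features into the summarizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: V vocabulary words, K topics, H hidden units.
V, K, H = 50, 5, 16

# Randomly initialized parameters (stand-ins for trained weights).
W_enc = rng.normal(0.0, 0.1, (V, H))   # encoder
W_mu = rng.normal(0.0, 0.1, (H, K))    # mean head
W_sig = rng.normal(0.0, 0.1, (H, K))   # log-std head
beta_logits = rng.normal(0.0, 0.1, (K, V))  # topic-word matrix

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def infer_topics(bow):
    """Map a bag-of-words count vector to a dialogue-level topic
    mixture theta via the Gaussian reparameterization trick."""
    h = np.tanh(bow @ W_enc)
    mu, log_sigma = h @ W_mu, h @ W_sig
    z = mu + np.exp(log_sigma) * rng.normal(size=K)
    return softmax(z)

# Toy dialogue represented as word counts over the vocabulary.
bow = np.zeros(V)
bow[[3, 7, 7, 20]] += 1.0

theta = infer_topics(bow)          # dialogue-level representation, sums to 1
beta = softmax(beta_logits)        # K x V topic-word distributions

def word_topic(word_id):
    """Word-level topic feature: p(topic | word, dialogue),
    proportional to theta[k] * beta[k, word]."""
    p = theta * beta[:, word_id]
    return p / p.sum()
```

Under this sketch, `theta` would serve as the dialogue-level feature and `word_topic(w)` as a word-level feature attached to each token's embedding before encoding.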



Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive feedback. This work was supported by the National Natural Science Foundation of China (Grant No. 61876120).

Author information

Corresponding author

Correspondence to Junhui Li.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, Z., Li, J. (2022). Topic-Features for Dialogue Summarization. In: Lu, W., Huang, S., Hong, Y., Zhou, X. (eds) Natural Language Processing and Chinese Computing. NLPCC 2022. Lecture Notes in Computer Science, vol 13551. Springer, Cham. https://doi.org/10.1007/978-3-031-17120-8_26

  • DOI: https://doi.org/10.1007/978-3-031-17120-8_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-17119-2

  • Online ISBN: 978-3-031-17120-8

  • eBook Packages: Computer Science, Computer Science (R0)
