Multi-granularity Contrastive Siamese Networks for Abstractive Text Summarization

  • Conference paper
Neural Information Processing (ICONIP 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1966)

Abstract

Abstractive text summarization is an important task in natural language generation that aims to compress input documents into concise, informative summaries. Sequence-to-sequence (Seq2Seq) models have achieved strong results on abstractive summarization in recent years, but they are often sensitive to noise in the training data and fragile in practical applications. To enhance the denoising ability of such models, we propose Multi-Granularity Contrastive Siamese Networks for abstractive text summarization. Specifically, we first apply word-level and sentence-level data augmentation to the input text, injecting noise at both granularities to generate augmented text pairs with diverse noise. We then jointly train the Seq2Seq model with a contrastive objective that uses a Siamese network to maximize the consistency between the representations of each augmented pair. Empirical experiments on the CNN/Daily Mail and XSum datasets validate the effectiveness of our model against many existing baselines.
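
To make the training scheme described in the abstract concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: word-level and sentence-level augmentation produce two noisy views of each document, a Siamese (weight-shared) encoder embeds both views, and an InfoNCE-style consistency term is added to the usual generation loss. The specific augmentation operations (token dropping, sentence shuffling), the loss weight lam, and the model interface (model.encode, model.seq2seq_loss) are all illustrative assumptions.

import random
import torch
import torch.nn.functional as F

def word_level_augment(tokens, drop_prob=0.1):
    # Word-level noise: randomly drop tokens (one simple choice of
    # word-level augmentation; the paper's exact operations may differ).
    kept = [t for t in tokens if random.random() > drop_prob]
    return kept or tokens  # never return an empty sequence

def sentence_level_augment(sentences, shuffle_prob=0.2):
    # Sentence-level noise: occasionally shuffle sentence order.
    sents = list(sentences)
    if random.random() < shuffle_prob:
        random.shuffle(sents)
    return sents

def augment(doc):
    # Compose both granularities: sentence-level noise first,
    # then word-level noise inside each sentence.
    sents = sentence_level_augment(doc.split(". "))
    noisy = [" ".join(word_level_augment(s.split())) for s in sents]
    return ". ".join(noisy)

def contrastive_loss(z1, z2, temperature=0.1):
    # InfoNCE-style objective: pull the two augmented views of the same
    # document together, push apart views of other documents in the batch.
    # z1, z2: (batch, hidden) pooled encoder representations from the
    # weight-shared (Siamese) encoder.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature  # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def training_step(model, batch, lam=0.5):
    # Joint objective: standard Seq2Seq cross-entropy plus the contrastive
    # consistency term. `model.encode` (pooled document representation) and
    # `model.seq2seq_loss` (generation loss) are hypothetical interfaces.
    view1 = [augment(doc) for doc in batch["docs"]]
    view2 = [augment(doc) for doc in batch["docs"]]
    z1 = model.encode(view1)
    z2 = model.encode(view2)
    gen_loss = model.seq2seq_loss(batch["docs"], batch["summaries"])
    return gen_loss + lam * contrastive_loss(z1, z2)

Because the two encoder passes share weights, the consistency term encourages representations that are invariant to the injected noise, which is the stated goal of the Siamese setup.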

Project supported by the National Natural Science Foundation of China (62176145).

Author information

Corresponding author

Correspondence to Hu Zhang.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zhang, H., Li, K., Zhang, G., Guan, Y., Li, R. (2024). Multi-granularity Contrastive Siamese Networks for Abstractive Text Summarization. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1966. Springer, Singapore. https://doi.org/10.1007/978-981-99-8148-9_16

  • DOI: https://doi.org/10.1007/978-981-99-8148-9_16

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8147-2

  • Online ISBN: 978-981-99-8148-9

  • eBook Packages: Computer Science, Computer Science (R0)
