A Comparative Study of Sentence Embeddings for Unsupervised Extractive Multi-document Summarization

  • Conference paper
  • In: Artificial Intelligence and Machine Learning (BNAIC/Benelearn 2022)

Abstract

Obtaining large-scale, high-quality training data for multi-document summarization (MDS) is time-consuming and resource-intensive; supervised models can therefore be applied only to a limited set of domains and languages. In this paper, we introduce unsupervised extractive methods for both generic and query-focused MDS that aim to produce a relevant summary from a collection of documents without labeled training data or domain knowledge. Specifically, we leverage transfer learning from recent sentence embedding models to encode the input documents into rich semantic representations. Moreover, we use a coreference resolution system to repair broken pronominal coreference expressions in the generated summaries, improving their cohesion and textual quality. We also provide a comparative analysis of several existing sentence embedding models in the context of unsupervised extractive MDS. Experiments on the standard DUC’2004-2007 datasets demonstrate that the proposed methods are competitive with previous unsupervised methods and even comparable to recent supervised deep learning-based methods. The empirical results further show that the SimCSE embedding model, based on contrastive learning, achieves substantial improvements over strong sentence embedding baselines. Finally, the added coreference resolution step is shown to bring a noticeable improvement to the unsupervised extractive MDS task.
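To make the pipeline concrete, below is a minimal sketch of a centroid-based extractive summarizer built on SimCSE sentence embeddings. It assumes the Hugging Face transformers library and the princeton-nlp/sup-simcse-bert-base-uncased checkpoint (one of several models the paper compares); it illustrates the general embed-score-select idea under those assumptions, not the authors' exact scoring or redundancy handling.

    import numpy as np
    import torch
    from transformers import AutoModel, AutoTokenizer

    # One of the public SimCSE checkpoints; treat this choice as illustrative.
    MODEL = "princeton-nlp/sup-simcse-bert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModel.from_pretrained(MODEL).eval()

    def embed(sentences):
        """Encode sentences; supervised SimCSE uses the [CLS]-based pooler output."""
        batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            vectors = model(**batch).pooler_output.numpy()
        # L2-normalize so dot products below are cosine similarities.
        return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

    def centroid_summary(sentences, budget=5):
        """Rank sentences by cosine similarity to the document-cluster centroid."""
        emb = embed(sentences)
        centroid = emb.mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        scores = emb @ centroid
        picked = sorted(np.argsort(scores)[::-1][:budget])  # restore document order
        return [sentences[i] for i in picked]

    # Optional cohesion step with neuralcoref (see note 1); requires spaCy 2.x:
    #   import spacy, neuralcoref
    #   nlp = spacy.load("en_core_web_sm"); neuralcoref.add_to_pipe(nlp)
    #   resolved = nlp(" ".join(summary))._.coref_resolved

A query-focused variant would score each sentence against the query embedding (or a combination of query and centroid) rather than the centroid alone.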


Notes

  1. https://github.com/huggingface/neuralcoref.

  2. Semantically meaningful means that similar sentences are close to each other in the vector space.

  3. https://spacy.io/.

  4. https://www.nltk.org/.

  5. https://github.com/huggingface/neuralcoref.

  6. https://duc.nist.gov/data.html.

  7. ROUGE-1.5.5 with parameters “-n 4 -m -l 100 -c 95 -r 1000 -f A -p 0.5 -t 0” for G-MDS and “-a -c 95 -m -n 2 -2 4 -u -p 0.5 -l 250” for QF-MDS; an invocation sketch follows these notes.

  8. https://pypi.org/project/trectools/.

  9. https://huggingface.co/.

  10. https://tfhub.dev/google.

  11. https://github.com/facebookresearch/InferSent, https://github.com/kawine/usif.
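
As referenced in note 7, the following sketch shows how those ROUGE-1.5.5 flags translate into an actual invocation, wrapped in Python for scripting. ROUGE-1.5.5 is a Perl script driven by an XML settings file; the local paths and the rouge_settings.xml file name below are hypothetical placeholders.

    import subprocess

    # Hypothetical local layout: ROUGE-1.5.5 ships as a Perl script plus a data directory.
    ROUGE_SCRIPT = "ROUGE-1.5.5/ROUGE-1.5.5.pl"
    ROUGE_DATA = "ROUGE-1.5.5/data"
    SETTINGS = "rouge_settings.xml"  # XML file pairing system and reference summaries

    # Flags from note 7: generic MDS (DUC'2004) vs. query-focused MDS (DUC'2005-2007).
    GMDS = "-n 4 -m -l 100 -c 95 -r 1000 -f A -p 0.5 -t 0".split()
    QFMDS = "-a -c 95 -m -n 2 -2 4 -u -p 0.5 -l 250".split()

    # "-a" evaluates all systems listed in SETTINGS; QFMDS already includes it,
    # so drop the extra "-a" when swapping QFMDS in for GMDS.
    result = subprocess.run(
        ["perl", ROUGE_SCRIPT, "-e", ROUGE_DATA, *GMDS, "-a", SETTINGS],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # per-system ROUGE recall with 95% confidence intervals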


Author information

Correspondence to Salima Lamsiyah.


Appendix A: Example of Our QFMDS-SimCSE Output Summary

Table 6. Example of a summary generated for cluster D374a from the DUC’2005 dataset using our QFMDS-SimCSE method.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Lamsiyah, S., Schommer, C. (2023). A Comparative Study of Sentence Embeddings for Unsupervised Extractive Multi-document Summarization. In: Calders, T., Vens, C., Lijffijt, J., Goethals, B. (eds) Artificial Intelligence and Machine Learning. BNAIC/Benelearn 2022. Communications in Computer and Information Science, vol 1805. Springer, Cham. https://doi.org/10.1007/978-3-031-39144-6_6


  • DOI: https://doi.org/10.1007/978-3-031-39144-6_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-39143-9

  • Online ISBN: 978-3-031-39144-6

  • eBook Packages: Computer Science (R0)
