An Abstractive Automatic Summarization Approach Based on a Text Comprehension Model of Cognitive Psychology

  • Conference paper

Computational Collective Intelligence (ICCCI 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14162)


Abstract

This paper proposes a new cognitive abstractive text summarization model. The proposed approach is a two-stage system. First, text segments are mapped to salient topics, and a saliency score is computed for each sentence. The summarization task is then formulated as a fuzzy logic problem: sentences that maximize coverage and fidelity are selected to form a pre-summary. Finally, the sentences in this first-stage output are rephrased using a T5 transformer. Experimental results show that the proposed approach outperforms three state-of-the-art summarization methods.
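The two-stage pipeline described above can be illustrated with a minimal sketch. Note the simplifications: the topic mapping here is plain frequent-word extraction and the coverage/fidelity membership functions are toy stand-ins, neither taken from the paper's actual formulation; the abstractive T5 rephrasing stage is indicated only as a comment.

```python
import re
from collections import Counter

def split_sentences(text):
    # Naive sentence splitter; the paper's segmentation is more elaborate.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def saliency_scores(sentences, n_topics=3):
    # Stand-in for topic mapping: treat the most frequent content words
    # as "salient topics" and score each sentence against them.
    words = Counter()
    for s in sentences:
        words.update(w.lower() for w in re.findall(r'[A-Za-z]{4,}', s))
    topics = {w for w, _ in words.most_common(n_topics)}
    scores = []
    for s in sentences:
        toks = {w.lower() for w in re.findall(r'[A-Za-z]{4,}', s)}
        coverage = len(toks & topics) / max(len(topics), 1)  # membership in [0, 1]
        fidelity = len(toks & topics) / max(len(toks), 1)
        scores.append(min(coverage, fidelity))  # fuzzy AND (min t-norm)
    return scores

def pre_summary(text, k=2):
    # Stage one: keep the k most salient sentences, in original order.
    sents = split_sentences(text)
    ranked = sorted(zip(saliency_scores(sents), range(len(sents))), reverse=True)
    keep = sorted(i for _, i in ranked[:k])
    return [sents[i] for i in keep]
    # Stage two (not shown): each selected sentence would be rephrased
    # abstractively, e.g. with a T5 model.
```

For example, `pre_summary("Cats chase mice. Cats sleep often. Dogs bark loudly. Cats and mice play.")` keeps the two sentences that best cover the dominant topic words.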

Supported by NSERC (Natural Sciences and Engineering Research Council of Canada).



Acknowledgements

The authors would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for funding this work.

Author information

Corresponding author

Correspondence to Alaidine Ben Ayed.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Ayed, A.B., Biskri, I., Meunier, J.-G. (2023). An Abstractive Automatic Summarization Approach Based on a Text Comprehension Model of Cognitive Psychology. In: Nguyen, N.T., et al. (eds.) Computational Collective Intelligence. ICCCI 2023. Lecture Notes in Computer Science, vol. 14162. Springer, Cham. https://doi.org/10.1007/978-3-031-41456-5_16

  • DOI: https://doi.org/10.1007/978-3-031-41456-5_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-41455-8

  • Online ISBN: 978-3-031-41456-5

  • eBook Packages: Computer Science (R0)
