Abstract
This paper proposes a new cognitively grounded abstractive text summarization model. The proposed approach is a two-stage system. In the first stage, text segments are mapped to salient topics and a saliency score is computed for every sentence; the summarization task is then formulated as a fuzzy logic problem, and the sentences that maximize coverage and fidelity are selected to form a pre-summary. In the second stage, the sentences of the pre-summary are rephrased using a T5 transformer. Experimental results show that the proposed approach outperforms three state-of-the-art summarization protocols.
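The extractive first stage described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: saliency is approximated here by a simple Luhn-style content-word frequency heuristic, the fuzzy "salient sentence" membership is a plain linear ramp, and the second-stage T5 rephrasing is omitted (it would typically be applied to each selected sentence via a pretrained sequence-to-sequence model). All function names and the threshold are hypothetical.

```python
import re
from collections import Counter

def sentence_saliency(sentences):
    """Score sentences by normalized content-word frequency (Luhn-style heuristic)."""
    words = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    freq = Counter(w for ws in words for w in ws)
    top = max(freq.values())
    # Mean normalized frequency of a sentence's words, in [0, 1].
    return [sum(freq[w] for w in ws) / (top * max(len(ws), 1)) for ws in words]

def fuzzy_pre_summary(sentences, threshold=0.5):
    """Keep sentences whose fuzzy membership in the 'salient' set exceeds the threshold."""
    scores = sentence_saliency(sentences)
    lo, hi = min(scores), max(scores)
    # Linear-ramp membership: 0 at the lowest score, 1 at the highest.
    member = [(s - lo) / (hi - lo) if hi > lo else 1.0 for s in scores]
    return [s for s, m in zip(sentences, member) if m >= threshold]

text = ["Fuzzy logic selects salient sentences.",
        "Salient sentences maximize coverage.",
        "Unrelated filler appears here."]
summary = fuzzy_pre_summary(text)
# summary keeps the two mutually reinforcing sentences and drops the filler.
```

A second abstractive stage would then paraphrase each sentence in `summary`, e.g. with a T5 checkpoint loaded through a library such as Hugging Face Transformers.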
Supported by NSERC (Natural Sciences and Engineering Research Council of Canada).
Acknowledgements
The authors would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for funding this work.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ayed, A.B., Biskri, I., Meunier, J.-G. (2023). An Abstractive Automatic Summarization Approach Based on a Text Comprehension Model of Cognitive Psychology. In: Nguyen, N.T., et al. (eds.) Computational Collective Intelligence. ICCCI 2023. Lecture Notes in Computer Science, vol. 14162. Springer, Cham. https://doi.org/10.1007/978-3-031-41456-5_16
Print ISBN: 978-3-031-41455-8
Online ISBN: 978-3-031-41456-5