Abstract
Reading comprehension is key to knowledge acquisition and to reinforcing memory for previously learned information. While reading, readers construct a mental representation of the text. This representation comprises the words in the text, the relations between them, and inferences linking them to concepts from prior knowledge. The Automated Model of Comprehension (AMoC) simulates the construction of readers’ mental representations of text by building syntactic and semantic relations between words, coupled with inferences of related concepts that rely on various automated semantic models. This paper introduces the second version of AMoC, which builds on the initial model with a revised Python processing pipeline that leverages state-of-the-art NLP models, additional heuristics for improved representations, and a new radiant graph visualization of the comprehension model.
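For illustration, the following is a minimal sketch (not the authors’ pipeline) of how such a comprehension graph could be assembled in Python: syntactic links are taken from a dependency parse, while semantic “inference” links are added between words with similar embeddings. The spaCy model name, the part-of-speech filter, and the similarity threshold are assumptions made for this example.

```python
# Minimal sketch of a comprehension graph: syntactic edges from a dependency
# parse plus semantic edges from word-vector similarity. Model name, POS
# filter, and threshold are illustrative assumptions, not AMoC's settings.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_md")  # medium English model, ships with word vectors

CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}

def build_comprehension_graph(text: str, sim_threshold: float = 0.6) -> nx.Graph:
    doc = nlp(text)
    graph = nx.Graph()

    # Syntactic layer: connect each content word to its syntactic head.
    content = [t for t in doc if t.pos_ in CONTENT_POS]
    for token in content:
        graph.add_node(token.lemma_)
        if token.head is not token and token.head.pos_ in CONTENT_POS:
            graph.add_edge(token.lemma_, token.head.lemma_,
                           kind="syntactic", dep=token.dep_)

    # Semantic layer: link word pairs whose embeddings are highly similar.
    for i, a in enumerate(content):
        for b in content[i + 1:]:
            if a.has_vector and b.has_vector and a.similarity(b) >= sim_threshold:
                graph.add_edge(a.lemma_, b.lemma_, kind="semantic")
    return graph

if __name__ == "__main__":
    g = build_comprehension_graph("A mental model links the words in a text to prior knowledge.")
    print(g.edges(data=True))
```

In AMoC itself, the semantic layer draws on several automated semantic models and additional heuristics; the single similarity threshold above merely stands in for those components.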
Acknowledgments
This research was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS – UEFISCDI, project number TE 70 PN-III-P1-1.1-TE-2019-2209, ATES – “Automated Text Evaluation and Simplification”, the Institute of Education Sciences (R305A180144 and R305A180261), and the Office of Naval Research (N00014-17-1-2300; N00014-20-1-2623). The opinions expressed are those of the authors and do not represent views of the IES or ONR.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Corlatescu, D.G., Dascalu, M., McNamara, D.S.: Automated Model of Comprehension V2.0. In: Roll, I., McNamara, D., Sosnovsky, S., Luckin, R., Dimitrova, V. (eds.) Artificial Intelligence in Education. AIED 2021. Lecture Notes in Computer Science, vol. 12749. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-78270-2_21
Print ISBN: 978-3-030-78269-6
Online ISBN: 978-3-030-78270-2
eBook Packages: Computer Science, Computer Science (R0)