
Using “Idealized Peers” for Automated Evaluation of Student Understanding in an Introductory Psychology Course

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11625)

Abstract

Teachers may wish to use open-ended learning activities and tests, but they are burdensome to assess compared to forced-choice instruments. At the same time, forced-choice assessments suffer from issues of guessing (when used as tests) and may not encourage valuable behaviors of construction and generation of understanding (when used as learning activities). Previous work demonstrates that automated scoring of constructed responses such as summaries and essays using latent semantic analysis (LSA) can successfully predict human scoring. The goal of this study was to test whether LSA can be used to generate predictive indices when students are learning from social science texts that describe theories and provide evidence for them. The corpus consisted of written responses generated while reading textbook excerpts about a psychological theory. Automated scoring indices based on response length, lexical diversity of the response, the LSA match of the response to the original text, and the LSA match to an idealized peer were all predictive of human scoring. In addition, student understanding (as measured by a posttest) was predicted uniquely by the LSA match to an idealized peer.
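
The index families named above can be illustrated with a short sketch. The example below is not the authors' pipeline: it assumes a generic LSA-style semantic space built with scikit-learn's TfidfVectorizer and TruncatedSVD (the study's actual LSA space, background corpus, and preprocessing are not specified in the abstract), and all function names, variable names, and the idealized-peer text are hypothetical placeholders.

```python
# Minimal sketch (assumed, not the authors' implementation) of the four index
# families named in the abstract: response length, lexical diversity, LSA match
# to the source text, and LSA match to an "idealized peer" response.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline


def surface_indices(response: str) -> dict:
    """Word count and type-token ratio of a single written response."""
    tokens = response.lower().split()
    return {
        "length": len(tokens),
        "lexical_diversity": len(set(tokens)) / len(tokens) if tokens else 0.0,
    }


def lsa_indices(response: str, source_text: str, idealized_peer: str,
                background_corpus: list, n_dims: int = 100) -> dict:
    """Cosine similarity of the response to the source text and to an
    idealized-peer exemplar, in a reduced (LSA-like) semantic space."""
    docs = background_corpus + [source_text, idealized_peer, response]
    # TF-IDF followed by truncated SVD approximates an LSA space over the corpus.
    space = make_pipeline(
        TfidfVectorizer(stop_words="english"),
        TruncatedSVD(n_components=min(n_dims, len(docs) - 1)),
    )
    vectors = space.fit_transform(docs)
    source_vec, peer_vec, response_vec = vectors[-3], vectors[-2], vectors[-1]
    return {
        "lsa_match_to_text": float(cosine_similarity([response_vec], [source_vec])[0, 0]),
        "lsa_match_to_peer": float(cosine_similarity([response_vec], [peer_vec])[0, 0]),
    }


# Example usage with placeholder texts:
# indices = {**surface_indices(student_response),
#            **lsa_indices(student_response, textbook_excerpt,
#                          instructor_exemplar, background_sentences)}
```

In this framing, the "idealized peer" score is simply the cosine between a student's response and an instructor-written exemplar response projected into the same reduced semantic space as the source text.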

Acknowledgements

This research was supported by grants from the Institute of Education Sciences (R305A160008) and the National Science Foundation (GRFP to the first author). The authors thank Grace Li for her support in scoring the student responses, and Thomas D. Griffin and Marta K. Mielicki for their contributions as part of the larger project from which these data were derived.

Author information

Corresponding author

Correspondence to Tricia A. Guerrero.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Guerrero, T.A., Wiley, J. (2019). Using “Idealized Peers” for Automated Evaluation of Student Understanding in an Introductory Psychology Course. In: Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R. (eds.) Artificial Intelligence in Education. AIED 2019. Lecture Notes in Computer Science, vol. 11625. Springer, Cham. https://doi.org/10.1007/978-3-030-23204-7_12

  • DOI: https://doi.org/10.1007/978-3-030-23204-7_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23203-0

  • Online ISBN: 978-3-030-23204-7

  • eBook Packages: Computer Science, Computer Science (R0)
