Predicting Learning in a Multi-component Serious Game

  • Original research
  • Published in: Technology, Knowledge and Learning

Abstract

The current study investigated predictors of shallow versus deep learning within a serious game known as Operation ARA. The game uses a range of pedagogical features, including multiple-choice tests, adaptive natural language tutorial conversations, case-based reasoning, and an e-text, to engage students. It teaches 11 topics in research methodology across three distinct modules that target factual information, application of reasoning to specific cases, and question generation. The goal of this investigation was to discover predictors of deep and shallow learning by blending Evidence-Centered Design (ECD) with educational data mining. In line with ECD, the well-established cognitive processes and behaviors of time-on-task, discrimination, generation, and scaffolding were selected because a large body of research supports their importance to learning. The study included 192 college students who participated in a pretest-interaction-posttest design, and these data were used to discover the best predictors of learning across the training experiences. Results revealed distinctly different patterns of predictors of deep versus shallow learning across the game's training environments. Specifically, greater interactivity was more important in environments contributing to shallow learning, whereas generation and discrimination were more important in environments supporting deeper learning. However, in some training environments the positive impact of generation may come at the cost of decreased discrimination.
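As a rough, hypothetical illustration of the kind of predictor analysis described above, the sketch below regresses synthetic posttest scores on pretest scores and stand-in process features (time-on-task, generation, discrimination) using ordinary least squares. It is not the authors' actual model: every variable name, distribution, and coefficient is an assumption made purely for the example.

```python
# Hypothetical sketch only: predicting posttest performance from pretest
# scores and in-game process features. Data and variable names are
# illustrative, not the study's actual measures or analysis.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 192  # sample size reported in the study

# Synthetic stand-ins for the measured variables
pretest = rng.uniform(0.2, 0.8, n)
time_on_task = rng.normal(30.0, 8.0, n)    # minutes per module (assumed)
generation = rng.poisson(5, n)             # e.g., answers/questions generated
discrimination = rng.uniform(0.0, 1.0, n)  # e.g., accuracy in case judgments
posttest = (0.5 * pretest + 0.01 * time_on_task + 0.02 * generation
            + 0.2 * discrimination + rng.normal(0.0, 0.05, n))

# Include pretest as a predictor to control for prior knowledge,
# then inspect which process features carry weight for posttest scores.
X = np.column_stack([pretest, time_on_task, generation, discrimination])
model = LinearRegression().fit(X, posttest)

for name, coef in zip(["pretest", "time_on_task", "generation",
                       "discrimination"], model.coef_):
    print(f"{name}: {coef:.3f}")
print(f"R^2 = {model.score(X, posttest):.3f}")
```

In the study itself, separate models would be fit for shallow versus deep learning outcomes in each training environment; the sketch shows only the general form such a predictor analysis could take.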

Acknowledgements

This research was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305B070349. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education. Additional funding was provided by Educational Testing Service and Pearson Education.

Author information

Corresponding author

Correspondence to Carol M. Forsyth.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Forsyth, C.M., Graesser, A. & Millis, K. Predicting Learning in a Multi-component Serious Game. Tech Know Learn 25, 251–277 (2020). https://doi.org/10.1007/s10758-019-09421-w
