Computers in Human Behavior

Volume 76, November 2017, Pages 641-655

Full length article
Using multi-channel data with multi-level modeling to assess in-game performance during gameplay with Crystal Island

An earlier version of this study was presented at the 13th International Conference on Intelligent Tutoring Systems (ITS 2016) in Zagreb, Croatia and published in A. Micarelli, J. Stamper, & K. Panourgia (Eds.), Proceedings of the 13th International Conference on Intelligent Tutoring Systems—Lecture Notes in Computer Science 9684 (pp. 240–246). The Netherlands: Springer.
https://doi.org/10.1016/j.chb.2017.01.038

Highlights

  • Used multi-level modeling to assess performance on in-game assessments with Crystal Island.

  • Better performance when reading fewer total books, but reading each book more frequently.

  • Better performance with low proportions of fixations on book content and book concept matrices.

  • Highest performance with fewer books and low proportions of fixations on books and matrices.

  • Implications for designing GBLEs that model efficient behavior leading to greater performance.

Abstract

Game-based learning environments (GBLEs) have been touted as the solution for failing educational outcomes. In this study, we address some of the major methodological issues in this literature by using multi-level modeling with data from eye movements and log files to examine the cognitive and metacognitive self-regulatory processes used by 50 college students as they read books and completed the associated in-game assessments (concept matrices) while playing the Crystal Island game-based learning environment. Results revealed that participants who read fewer books in total, but read each of them more frequently, and who had low proportions of fixations on books and concept matrices exhibited the strongest performance. Results stress the importance of assessing quality vs. quantity during gameplay: it is more beneficial to read books in depth (i.e., quality) than merely to read many books once (i.e., quantity). These findings have implications for designing adaptive GBLEs that scaffold participants based on their trace data, such that we can model efficient behaviors that lead to successful performance.

Introduction

Game-based learning environments (GBLEs) have been touted as a solution for failing educational outcomes across several domains. Learning with GBLEs can be particularly effective for learning because they are designed to foster engagement during learning (e.g., Sabourin et al., 2013, Sabourin and Lester, 2014). Additionally, many games require self-regulated learning, in addition to other learning processes, such as scientific reasoning (Millis et al., 2011). Scientific reasoning involves generating and testing hypotheses, and therefore students use self-regulatory processes to assist in generating and testing these hypotheses. For example, during gameplay with Crystal Island, students are required to gather clues, and create and test hypotheses, to solve a mystery. It can thus be beneficial to situate theories of SRL with scientific reasoning to investigate learning with GBLEs.

Despite the widespread enthusiasm, many critics have raised serious issues regarding the effectiveness of GBLEs for learning and problem solving (Mayer, 2015, Shute and Ventura, 2013). Unfortunately, the majority of published studies suffer from conceptual, theoretical, methodological, and analytical issues, undermining the value of GBLEs for improving learning, problem solving, and transfer of knowledge and skills across domains and age groups. Recent calls have been made to improve the quality of GBLE research by using theoretically-driven approaches and interdisciplinary methods and analytical techniques to capture the cognitive, affective, metacognitive, and motivational processes simultaneously during gameplay and to understand their roles and impact, rather than relying on the typical approach of pre- to post-test measures and self-reports of motivation and engagement (Mayer, 2014). In this study, we address some of these major issues by using eye movements and log files to examine the cognitive and metacognitive self-regulatory processes deployed by college students while playing Crystal Island, a GBLE that incorporates microbiology content and scientific reasoning to solve the mystery of what disease has spread through a fictional remote island.

Research on self-regulated learning (SRL) indicates that students are self-regulating when they adaptively respond to both internal (e.g., use cognitive strategies during scientific reasoning) and external conditions (e.g., navigate a game environment in search of evidence) as evidenced by accurate monitoring and effective regulation of their cognitive, affective, metacognitive, and motivational processes during learning, problem solving, and performance (Azevedo et al., 2011, Azevedo et al., 2015, Winne and Azevedo, 2014, Winne and Hadwin, 1998, Winne and Hadwin, 2008, Zimmerman and Schunk, 2011). Although research has shown that engaging in cognitive, affective, metacognitive, and motivational self-regulated learning processes can be beneficial for learning (Azevedo, 2009, Azevedo, 2014, Pintrich, 2000, Schunk and Greene, in press), research has also revealed that students do not typically deploy these processes effectively and efficiently during learning with advanced learning technologies such as intelligent tutoring systems, hypermedia, and multimedia (see Azevedo et al., 2011, 2015; Graesser, 2015; VanLehn, 2016). Recent work on GBLEs and self-regulated learning has been conducted by Lester and colleagues (e.g., Sabourin & Lester, 2014) to examine whether gameplay behaviors are predictive of learning, performance, engagement, and motivation using traditional statistics, data mining, and machine learning. The current study extends this work by converging eye movements and log files to examine the underlying cognitive and metacognitive processes used by college students to solve the mystery on Crystal Island.

Winne and Hadwin's (1998, 2008) Information Processing Theory (IPT) was used as the theoretical framework for the current study. IPT posits that learning occurs through a series of four cyclical phases, with information processing occurring within each phase. In the first phase, task definition, students must develop a task understanding that drives their planning, monitoring, and regulatory processes. In Crystal Island, students must understand the overall goal for the task, which is to solve the science mystery. In the second phase (goals and plans), students set goals for how they will accomplish the task (e.g., gather clues in each building) and plan how they will accomplish those goals (e.g., read books, complete embedded assessments). The third phase, strategy use, is when students enact the plans to accomplish the goals they set in the previous phase (e.g., when students actually read the books and complete the embedded assessments). Strategy use and metacognitive monitoring can be inferred by analyzing in-game behaviors collected through eye movements and log files. The fourth phase (adaptation) is not addressed in this study. It is important to note that these phases are not necessarily sequential: students can engage in multiple phases simultaneously, and in any order.

Information processing involves students engaging in cognitive, affective, metacognitive, and motivational processes to effectively self-regulate their learning (Azevedo et al., 2011, 2015), and these processes are related to monitoring and control. For example, students can monitor their use of strategies by making metacognitive judgments (e.g., is completing the in-game assessment an efficient strategy if the student does not understand the material?), and return to the goals and plans phase to dynamically monitor and control their use of strategies (e.g., re-reading a book as a cognitive learning strategy). Therefore, throughout all phases of learning, self-regulation implies that students engage in monitoring and control of self-regulated learning strategies.

Our use of Winne and Hadwin's (1998, 2008) model is advantageous because, even though it has yet to be fully empirically tested, it is the only model that assesses SRL as an event that unfolds over time (Winne & Azevedo, 2014). The temporality of SRL is especially important during learning with GBLEs because students are presented with complex material, and their use of SRL strategies can change depending on the context. For example, during learning of complex text within a hypermedia-learning environment, we operationalize judgments of learning (JOLs) as students' assessments of their own understanding of the text, followed by a content quiz (Greene & Azevedo, 2009). Each judgment has an associated valence, such that a high rating of understanding would be a JOL+, and a JOL- would be indicative of low understanding. Although this might seem specific to hypermedia-learning environments, it can be applied to learning with GBLEs as well. During gameplay with Crystal Island, one activity students can engage in is reading books, which are associated with embedded assessments called concept matrices (Rowe, Shores, Mott, & Lester, 2011). Each concept matrix contains questions that test students on their understanding of the material in the book. Completing one can therefore be seen as a JOL, because students can self-evaluate their understanding of the text and then complete the concept matrix to test whether they did understand it. Furthermore, we can investigate the valence of the JOL based on the correctness of the responses. Thus, as opposed to the valence being associated with the student's judgment of their understanding, the valence can be associated with how well the student performed on the assessment, such that low performance has a negative valence and high performance has a positive valence. Therefore, we can apply IPT, specifically metacognitive monitoring and control, to gameplay and scientific reasoning with GBLEs.


Game-based learning environments

The effectiveness of GBLEs across domains (e.g., math, computer science, biology, psychology) has come into question as several meta-analyses (Clark et al., 2016, Connolly et al., 2012, Girard et al., 2012, Mayer, 2014, Wouters et al., 2013) have yielded mixed results. More specifically, they have revealed that learning with GBLEs results in small to medium effect sizes for knowledge acquisition (d = 0.29, p < 0.01; Wouters et al., 2013), yet moderate to large effect sizes for

Current study: assessing and converging multi-channel data with Crystal Island

The current study extends previously published research on GBLEs and the work on Crystal Island by Lester and colleagues by using eye tracking and log files to assess college students' cognitive and metacognitive SRL processes during gameplay with Crystal Island. More specifically, we converged specific in-game behaviors with eye tracking to assess how well students performed on in-game embedded assessments during learning and gameplay with GBLEs. As such, the goal of the current study was

Participants

Fifty (N = 50) non-biology majors (56% female) from a large public university located in the southeast region of the US

Results

We used SAS software 9.4 (SAS Institute Inc., 2012) to run our analyses, with a restricted maximum likelihood (REML) estimation method and a variance components covariance structure. The first step in using MLM requires a fully unconditional model, which allows us to determine whether there is sufficient between- and within-subjects variance in our dependent variable, and to compute the intra-class correlation coefficient (i.e., the percentage of variance explained at the between- and within-subject levels),
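The study ran this first step in SAS; as a rough illustration only, the fully unconditional (intercept-only) model and the intra-class correlation can be sketched in Python with statsmodels' MixedLM. The simulated data, variable names, and parameter values below are hypothetical, not taken from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical repeated-measures data: submission attempts on several
# concept matrices nested within 50 students (simulated, not study data).
rng = np.random.default_rng(42)
n_students, n_matrices = 50, 8
student = np.repeat(np.arange(n_students), n_matrices)
between = rng.normal(0, 1.0, n_students)[student]  # between-subject effect
attempts = 2.0 + between + rng.normal(0, 0.5, student.size)
df = pd.DataFrame({"student": student, "attempts": attempts})

# Step 1: fully unconditional (intercept-only) model, REML estimation.
fit = smf.mixedlm("attempts ~ 1", df, groups=df["student"]).fit(reml=True)

# Intra-class correlation: share of total variance at the between-subject level.
var_between = float(fit.cov_re.iloc[0, 0])  # random-intercept variance
var_within = float(fit.scale)               # residual (within-subject) variance
icc = var_between / (var_between + var_within)
print(f"between = {var_between:.2f}, within = {var_within:.2f}, ICC = {icc:.2f}")
```

A non-trivial ICC justifies modeling the nesting of in-game assessments within students (the multi-level structure) rather than pooling all observations into a single-level regression.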

Discussion

Overall, results from the log-file data indicate that the number of books read and the frequency of reading each book were each uniquely, negatively related to the number of concept matrix submission attempts. In addition to the main effects of each variable, there was a significant interaction between the two, such that reading fewer books, but reading each book more frequently, was associated with fewer attempts, and

Acknowledgment

The research presented in this paper has been supported by funding from the Social Sciences and Humanities Research Council of Canada (SSHRC 895–2011–1006) awarded to the third and last authors. The authors would like to thank Robert Taylor and Andrew Smith for assisting with system development, and Megan Price for assisting with running participants.

References (81)

  • G. Schraw

    Situational interest in literary text

    Contemporary Educational Psychology

    (1997)
  • E.L. Snow et al.

    Does agency matter? Exploring the impact of controlled behaviors within a game-based environment

    Computers & Education

    (2015)
  • M.J. Tsai et al.

Visual behavior, flow and achievement in game-based learning

    Computers & Education

    (2016)
  • T. van Gog et al.

    Attention guidance during example study via the model’s eye movements

    Computers in Human Behavior

    (2009)
  • D.M. Adams et al.

    Narrative games for learning: Testing the discovery and narrative hypotheses

    Journal of Educational Psychology

    (2012)
  • R. Azevedo

    Theoretical, methodological, and analytical challenges in the research on metacognition and self-regulation: A commentary

    Metacognition & Learning

    (2009)
  • R. Azevedo

    Multimedia learning of metacognitive strategies

  • R. Azevedo

    Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues

    Educational Psychologist

    (2015)
  • R. Azevedo et al.

    Using trace data to examine the complex roles of cognitive, metacognitive, and emotional self-regulatory processes during learning with multi-agent systems

  • R. Azevedo et al.

    Use of hypermedia to convey and assess self-regulated learning

  • R. Azevedo et al.

    Technologies supporting self-regulated learning

  • Azevedo, R., Taub, M., & Mudrick, N. V. Using multi-channel trace data to infer and foster self-regulated learning...
  • D. Bondareva et al.

    Inferring learning from gaze data during interaction with an environment to support self-regulated learning

  • D.B. Clark et al.

    Digital games, design, and learning: A systematic review and meta-analysis

    Review of Educational Research

    (2016)
  • J.C. Cromley et al.

    Location information within extended hypermedia

    Educational Technology Research and Development

    (2009)
  • S.K. D’Mello

    Giving eyesight to the blind: Towards attention-aware AIED

    International Journal of Artificial Intelligence in Education

    (2016)
  • A.J. Elliot et al.

    On the measurement of achievement goals: Critique, illustration, and application

    Journal of Educational Psychology

    (2008)
  • M. Filsecker et al.

    Engagement as a volitional construct: A framework for evidence-based research on educational games

    Simulation & Gaming

    (2014)
  • C. Girard et al.

    Serious games as new educational tools: How effective are they? A meta-analysis of recent studies

    Journal of Computer Assisted Learning

    (2012)
  • A.C. Graesser

    Deeper learning with advances in discourse science and technology

    Policy Insights from Behavioral and Brain Sciences

    (2015)
  • E.Y. Ha et al.

    Goal recognition with Markov logic networks for player-adaptive games

  • J. Hyönä et al.

    Do adult readers know how they read? Evidence from eye movement patterns and verbal reports

    British Journal of Psychology

    (2006)
  • iMotions

    Attention tool (version 6.0)

    (2016)
  • N. Jaques et al.

    Predicting affect from gaze data during interaction with an intelligent tutoring system

  • S. Lee et al.

    Director agent intervention strategies for interactive narrative environments

  • S. Lee et al.

    A supervised learning framework for modeling director agent strategies in educational interactive narrative

    IEEE Transactions on Computational Intelligence and AI in Games

    (2014)
  • J. Lester et al.

    Serious games get smart: Intelligent game-based learning environments

    AI Magazine

    (2013)
  • J.C. Lester et al.

    Supporting self-regulated science learning in narrative-centered learning environments

  • R.E. Mayer

    Computer games for learning: An evidence-based approach

    (2014)
  • R.E. Mayer

    On the need for research evidence to guide the design of computer games for learning

    Educational Psychologist

    (2015)