
Computers & Education

Volume 126, November 2018, Pages 170-182

The influence of learners' perceptions of virtual humans on learning transfer

https://doi.org/10.1016/j.compedu.2018.07.005

Highlights

  • The revised Agent Persona Instrument fit well with the four-factor structure.

  • K-means clustering identified two clusters of response patterns on the API.

  • Cluster membership had a small influence on learning transfer scores.

Abstract

Virtual humans are often integrated into novel multimedia learning environments. However, little is known about learners' perceptions of the agents or the relationship between those perceptions and learning. In this study, the authors revise the Agent Persona Instrument, a measurement tool designed to examine how learners perceive pedagogical agents. The factor structure of the revised instrument was evaluated with confirmatory factor analysis. Next, k-means clustering was used to examine how participants' ratings on the instrument cluster into groups based on their perceptions of the virtual human. After describing the qualities of the clusters within the data, linear regression was used to examine the extent to which cluster membership influenced participants' scores on a transfer test of learning. The results indicated that cluster membership explained only a small amount of variance in transfer test scores. This study provides a revised instrument for measuring pedagogical agent persona. It also implements a novel method for investigating perceptions of pedagogical agents using k-means clustering, which identified two distinct groups of participants based on their perceptions of the agent. Finally, it presents empirical results indicating that learners' perceptions of the agent had a small influence on their learning outcome scores.

Introduction

How we learn with computers is constantly changing, and each year novel learning technologies are developed as researchers continue to push the boundaries of what is technically possible. Yet, no matter how novel a learning environment may be, it still requires instructional design decisions. For example, an instructional designer can choose to incorporate virtual humans within the learning environment to facilitate the learning process.

In this paper, we focus on virtual humans, how they are perceived by learners, and the influence of those perceptions on learning. By definition, virtual humans are on-screen, humanlike representations of the software that often facilitate either direct instruction (Johnson, Ozogul, Moreno, & Reisslein, 2013; Yung & Paas, 2015) or other pedagogical approaches such as vicarious learning (Twyford & Craig, 2017). The term ‘virtual human’ encompasses more specific software entities such as pedagogical agents (Heidig & Clarebout, 2011; Schroeder, Adesope, & Gilbert, 2013), conversational agents (Louwerse, Graesser, Lu, & Mitchell, 2005), and motivational agents (Van der Meij, 2013). For the purposes of this paper, we use the terms ‘virtual human’, ‘pedagogical agent’, and ‘agent’ interchangeably.

Researchers have investigated agent-enhanced learning systems across a number of different content domains, such as language learning (Choi & Clark, 2006), mathematics (Atkinson, 2002), earth science (Frechette & Moreno, 2010), physics (Craig, Gholson, Brittingham, Williams, & Shubeck, 2012), and physiology (Dunsworth & Atkinson, 2007). Researchers have also used virtual humans for facilitating learning with many different participant populations, including museum visitors (Lane et al., 2013), primary school students (Holmes, 2007; Hong, Chen, & Lan, 2014), post-secondary students (Kim, Baylor, & Shen, 2007; Veletsianos, 2010), and Amazon Mechanical Turk users (Twyford & Craig, 2017). Across these broad implementations, research has shown that virtual humans can often aid the learning process (Schroeder et al., 2013), or at least not significantly impede it (in some cases they have made no significant difference in learning outcomes; see Heidig & Clarebout, 2011). As such, they continue to garner the attention of instructional designers looking to add pedagogical supports to their learning environments.

A wide body of research investigates how people learn with or from agents, and much of this work focuses on how agents should be designed. Designing agents is a complex process, one which Domagk (2010) has broken down into detailed levels in her Pedagogical Agents – Levels of Design (PALD) model. The PALD model denotes that designers must consider three different levels of agent design: the detail level, the medium level, and the global level. This framework encompasses design decisions ranging from the choice of an agent's gender to broader questions, such as whether the agent should appear human at all (Domagk, 2010). Heidig and Clarebout (2011) added to this framework with the Pedagogical Agents – Conditions of Use (PACU) model, which describes how designers should consider the agent's design, the qualities of the learners using the system, the role the agent plays in the environment, the environment itself, and the content to be learned. Using these frameworks, researchers can characterize existing agent studies in order to better understand how to design an effective agent for a specific use case.

Researchers continue to explore agent design, often using social agency theory as their theoretical framework. Social agency theory suggests that agents can aid learning by creating the feeling of a social interaction between the learner and the agent (Mayer, Sobko, & Mautone, 2003), a theoretical perspective which implies that the learner's perception of the interaction is of the utmost importance during the learning process. While researchers are beginning to understand how agents can be designed to facilitate learning, Schroeder, Romine, and Craig (2017) noted that there is significantly less work investigating how agents are perceived and the impact of those perceptions on learning outcomes. Hence, key questions remaining in the literature relate to how people perceive virtual humans and to what extent these perceptions influence learning outcomes. Within these issues, one must also consider measurement: how are we measuring perceptions of pedagogical agents, and do these measures have validity evidence? This work is essential for deepening our understanding of social agency theory.

In this study, we aim to expand the understanding of how agents are perceived by learners. To do this, we examined the factor structure of a newly revised perception instrument designed specifically for use with agents. Next, we analyzed learners' perceptions of agents using cluster analysis, and finally we examined the extent to which learners' perceptions of the agent influenced their learning outcomes; a minimal sketch of this clustering-and-regression pipeline follows below.
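To make the analytic pipeline concrete, the sketch below shows one way the clustering and regression steps could be implemented. It is a minimal illustration in Python with scikit-learn, not the study's actual code: the variable names, the four-subscale shape of the ratings matrix, and all numeric values are assumptions for demonstration, and the data are synthetic stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 participants' mean ratings on the four
# API-R subscales (1-5 scale) and their transfer-test scores.
api_ratings = rng.uniform(1, 5, size=(200, 4))
transfer_scores = rng.normal(10, 3, size=200)

# Step 1: cluster response patterns on the instrument.
# The study identified two clusters, so k = 2 here.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
membership = kmeans.fit_predict(api_ratings)

# Step 2: regress transfer scores on cluster membership (dummy-coded)
# and inspect the variance explained (R^2).
X = membership.reshape(-1, 1)
reg = LinearRegression().fit(X, transfer_scores)
print(f"R^2 explained by cluster membership: {reg.score(X, transfer_scores):.3f}")
```

With a two-cluster solution, the regression on membership reduces to comparing the two groups' mean transfer scores; a small R-squared, as the study reports, means the group means differ little relative to the within-group variability.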

Section snippets

Theoretical framework

When focused on improving learning outcomes in agent-enhanced systems, researchers often use the cognitive theory of multimedia learning (CTML) as a theoretical framework. When investigating learners' perceptions of the agent or the visual design components of agents, researchers may reference social agency theory. In short, the CTML emphasizes how multimedia learning materials are processed by the brain given the limitations of working memory (Mayer, 2003, 2014a), while social agency

Methods

The data used in this study come from a pre-existing, unpublished study that had six independent groups. Since the work reported here focuses on the measurement instrument and on examining participant responses across groups, the rationale for the experimental conditions is not explained in detail. However, the conditions themselves are described below. Since this study used a variety of methodological approaches to answer the research questions, the methods are discussed as follows: first, a

RQ1: To what extent does the API-R conform to the previously established four-factor structure of the API?

Through running the three models specified in Section 3.3, we found that certain fit indices suggested that a model with simple structure and local independence of items fit satisfactorily. However, accounting for cross-loading and some local dependency improved model fit. CFI and RFI values above 0.9 and SRMR values below 0.08 indicated that Model 1 fit the data satisfactorily, but RMSEA and normed chi-square values were above the criteria for good fit (Table 2). We therefore added local
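For readers less familiar with these indices, the short sketch below shows how the fit statistics named above can be computed from the model and baseline (null-model) chi-square values that a CFA produces. This is a generic illustration, not the software or data used in the study; all input values are invented for demonstration. SRMR is omitted because it requires the residual correlation matrix rather than chi-square values alone.

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Compute common SEM fit indices from the fitted model's chi-square
    (chi2_m, df_m), the baseline model's chi-square (chi2_b, df_b),
    and the sample size n."""
    # Normed chi-square: values near or below ~3 are often taken as acceptable.
    normed_chi2 = chi2_m / df_m
    # RMSEA: penalizes misfit per degree of freedom (smaller is better).
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    # CFI: improvement over the baseline model (> 0.90 indicates good fit).
    den = max(chi2_m - df_m, chi2_b - df_b, 0)
    cfi = 1 - (max(chi2_m - df_m, 0) / den if den > 0 else 0)
    # RFI: ratio of the normed chi-squares (> 0.90 indicates good fit).
    rfi = 1 - (chi2_m / df_m) / (chi2_b / df_b)
    return {"normed_chi2": normed_chi2, "rmsea": rmsea, "cfi": cfi, "rfi": rfi}

# Invented example values, not the study's statistics.
print(fit_indices(chi2_m=240.0, df_m=100, chi2_b=2400.0, df_b=120, n=300))
```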

RQ1: To what extent does the API-R conform to the previously established four-factor structure of the API?

In previous work, authors have highlighted that few measurement instruments exist for pedagogical agent researchers, and there is a dearth of instruments with appropriate validity evidence (Clark & Choi, 2005; Schroeder & Gotch, 2015; Schroeder et al., 2017). Schroeder et al. (2017) examined the validity of the API using CFA and found moderate-to-good fit, reporting the following fit statistics: RMSEA = .064, CFI = .99, normed chi-square = 2.66. However, Schroeder et al.

Conclusions and limitations

In this study, we sought to examine three related research questions. First, we revised and re-examined the factor structure of one of the few consistently reliable instruments for measuring pedagogical agent persona that also has validity evidence. Next, we used k-means analysis to see how participants' responses clustered on the API-R, which allowed for the delineation of specific groups of participants who perceived the agent similarly. Finally, we used the clusters identified through the

Conflicts of interest

The authors declare that they have no conflict of interest.

Funding

Not applicable (no funding).

References (72)

  • R.K. Atkinson

    Optimizing learning from examples using animated pedagogical agents

    Journal of Educational Psychology

    (2002)
  • R. Azevedo

    Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues

    Educational Psychologist

    (2015)
  • R.S. Baker et al.

    Educational data mining and learning analytics

  • A. Bandura

    Perceived self-efficacy in cognitive development and functioning

    Educational Psychologist

    (1993)
  • A.L. Baylor et al.

    Simulating instructional roles through pedagogical agents

    International Journal of Artificial Intelligence in Education

    (2005)
  • P.M. Bentler

    Comparative fit indices in structural models

    Psychological Bulletin

    (1990)
  • S. Choi et al.

    Cognitive and affective benefits of an animated pedagogical agent for learning English as a second language

    Journal of Educational Computing Research

    (2006)
  • R.E. Clark et al.

    Five design principles for experiments on the effects of animated pedagogical agents

    Journal of Educational Computing Research

    (2005)
  • J. Cohen

    A coefficient of agreement for nominal scales

    Educational and Psychological Measurement

    (1960)
  • S. Domagk

    Do pedagogical agents facilitate learner motivation and learning outcomes? The role of the appeal of the agent's appearance and voice

    Journal of Media Psychology

    (2010)
  • J.C. Dunn

    Well-separated clusters and optimal fuzzy partitions

    Journal of Cybernetics

    (1974)
  • D.J. Follmer et al.

    The role of MTurk in education research: Advantages, issues, and future directions

    Educational Researcher

    (2017)
  • E.W. Forgy

    Cluster analysis of multivariate data: Efficiency versus interpretability of classifications

    Biometrics

    (1965)
  • C. Frechette et al.

    The roles of animated pedagogical agents' presence and nonverbal communication in multimedia learning environments

    Journal of Media Psychology

    (2010)
  • A. Gulz

    Benefits of virtual characters in computer based learning environments: Claims and evidence

    International Journal of Artificial Intelligence in Education

    (2004)
  • D.J. Hauser et al.

    Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants

    Behavior Research Methods

    (2016)
  • Z.W. Hong et al.

    A courseware to script animated pedagogical agents in instructional material for elementary students in English education

    Computer Assisted Language Learning

    (2014)
  • K. Höök et al.

    Evaluating users' experience of a character-enhanced information space

    AI Communications

    (2000)
  • L.T. Hu et al.

    Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives

    Structural Equation Modeling: A Multidisciplinary Journal

    (1999)
  • C.M. Hurvich et al.

    Regression and time series model selection in small samples

    Biometrika

    (1989)
  • J.L. Plass et al.

    Emotional design in multimedia learning: Effects of shape and color on affect and learning

    Learning and Instruction

    (2014)
  • J.L. Plass et al.

    Emotional design in digital media for learning

    Emotions, Technology, Design, and Learning

    (2016)
  • R. Rosenberg-Kima et al.

    Interface agents as social models for female students: The effects of agent visual presence and appearance on female students' attitudes and beliefs

    Computers in Human Behavior

    (2008)
  • N.L. Schroeder et al.

    Measuring pedagogical agent persona and the influence of agent persona on learning

    Computers & Education

    (2017)
  • H. Van der Meij

    Motivating agents in software tutorials

    Computers in Human Behavior

    (2013)
  • G. Veletsianos

    Contextually relevant pedagogical agents: Visual appearance, stereotypes, and first impressions and their impact on learning

    Computers & Education

    (2010)