1 Introduction

Adaptive Instructional Systems (AIS) are designed to accelerate knowledge and skill acquisition through mediated experiences that are managed by algorithms and processes grounded in learning science and cognitive psychology. The tenets of AISs are based on effective instructional methodologies that are captured in Artificial Intelligence (AI) modeling techniques for tracking skill acquisition and guiding system-level pedagogical decisions. What matters in the current training climate is quickly extending these methods to account for team and collective structures across an array of roles and missions linked to Army objectives. With a modernization strategy underway to update the Army’s simulation-based training technology base, influencing the learning science component of these maturing applications early on is critical. This involves establishing research-informed best practices for managing real-time coaching and adaptation across an array of team formations and team-of-teams structures.

From a pedagogical perspective, Vygotsky’s Zone of Proximal Development (ZPD) [1] provides a useful theoretical foundation that highlights the interplay between guidance and adaptation when managing an educational interaction (see Fig. 1), and its simplicity holds merit when considering team formations as well. As an example, when a learner’s or team’s ability does not match the complexity of a problem (e.g., errors are present, or there is a lack of understanding of what to do next), an initial approach would focus on feedback and coaching to alleviate that impasse. A system will monitor interaction and assess performance for the purpose of diagnosis. With a representation of the domain, feedback can target specific Knowledge, Skills, and Abilities (KSAs) for the purpose of correcting mental models and influencing procedural and process changes.

Fig. 1. The Zone of Proximal Development’s teaching strategy [1].

In the instance when feedback fails to improve the observed/modeled deficiency, adapting the pedagogical approach (e.g., engaging a worked example, modifying scenario complexity, restarting the task/scenario, etc.) provides a mechanism to maintain learner engagement, prevent frustration, and improve understanding. Alternatively, if the scenario complexity is below the ability of the interacting units, the system should have mechanisms to increase challenge for the purpose of maintaining desirable difficulties [2]. It is through these mechanisms that an instructor adapts training to better meet the needs of the interacting party in an effort to maximize learning outcomes (i.e., time to proficiency, retention of skill, transfer of skill, etc.).

The mechanisms by which to engage these instructional interventions are well researched in the AIS community [3, 4]; however, most of the literature examines performance at the individual level and within well-defined domain spaces. While informing pedagogical strategies requires robust modeling techniques to track learner performance and competency (i.e., learner/team modeling), there is also a requirement to establish instructional design paradigms that guide the creation and configuration of context-specific instructional injects that manage the experience for the purpose of optimizing outcomes. In this paper, we discuss the design implications of extending AIS functions into team and collective training environments, with a focus on system-level pedagogical supports. The impetus for this research is the U.S. Army’s modernization strategy called the Synthetic Training Environment (STE).

2 Learning Science and the Synthetic Training Environment

The STE is a large undertaking that aims to modernize the Army’s current capability sets across Live, Virtual and Constructive (LVC) simulation-based training methods to support collective training. The overarching objective is to leverage advancements across the industry and government technology base to provide today’s soldier with a modern training solution that incorporates immersive interactions with high-fidelity realism to train critical skill sets across the echelon structures within the Army’s operational force. The resulting STE will provide the Army with a mechanism to rapidly simulate numerous “bloodless” battles in an effort to optimize force structure performance through exposure to realistic battle drills and operational dynamics in a multi-domain battlespace [5]. There are multiple functional capabilities required to support the maturation of the STE. The one we focus on deals with the instructional components and learning science driving the design and application of the training.

2.1 Training Management Tools and Team-Based AIS

The STE subcomponent called Training Management Tools (TMT) comprises a set of technologies that assist in establishing relevant training content, managing the execution of that content, and organizing an After Action Review (AAR) following interaction with that content. To organize these functions, TMT supports activities in the: (1) planning, (2) preparation, (3) execution, and (4) assessment/review phases of an STE training event. While most legacy training systems in the Army require human Observer Controllers (OC) to manage assessment and guide coaching, one capability TMT is conceptualized to offer for STE is intelligent tutoring by way of AIS methods.

As a starting point for research, the AIS functions for the TMT baseline leverage the most mature components native to the U.S. Army’s Generalized Intelligent Framework for Tutoring (GIFT). GIFT was designed as a set of de facto best practices for authoring AIS content within a domain-independent architecture [6]. Initial GIFT development focused on individual domains, with successful applications designed for training: (1) care under fire procedures for combat medics [7], (2) land navigation fundamentals [8], (3) basic rifle marksmanship [9], and (4) enhanced situational awareness within the context of counter-insurgency [10]. While the initial focus was on establishing workflows in individualized domain spaces to support a more ubiquitous approach to authoring, teams and collective environments were not ignored. These included efforts examining the technical requirements at the architectural level associated with intelligent tutoring for collective teams [11, 12], and defining a theoretical construct by which to guide measurement design that will inform tutor decisions (i.e., defining the dimensions of teamwork and establishing behavioral markers by which to measure those dimensions [13]).

At the current state of the field, the authors offer two salient observations related to team-based tutoring research: (1) while there is a mature understanding of what makes an effective team and how to measure markers of performance [13], automating and generalizing those measures beyond simplistic go/no-go rule determinations will require robust research methods in a mature STE-endorsed environment, and (2) there is little understanding of what a team-focused intelligent tutor should objectively do during run-time at the pedagogical level. While there have been some initial thought papers looking at how the tenets of learning science [14] and sports psychology [15] can influence an initial set of pedagogical policies, further work is required to translate those recommendations into a schema that functions within the TMT baseline.

Before we discuss the instructional decision points and implications for intelligent tutoring in STE, it is important to define the activities and workflows that are currently in place to establish these capabilities. In the following sub-sections, we define how the organizing TMT activities (i.e., plan, prepare, execute, and assess) associate with AIS requirements. It is important to note that the following descriptions represent potential workflows for building AIS content in STE as informed by the GIFT architecture, and do not represent the other TMT functions that serve different aspects of STE exercise development.

Plan/Preparation.

The plan and preparation activities involve defining and configuring all the AIS components that are required for real-time measurement/assessment and pedagogical injects. An objective of the STE is to automate as much of this process as possible, but it is still important to document the various dependencies that will require configuration in one way or another prior to training execution. We separate plan and preparation components, as we believe there are varying levels of background and technical expertise required to support those workflows. As technology matures, the goal is to provide tools and methods that promote a sustainable AIS training environment maintained by people who know the domain and actually use the system, rather than by contractors and support personnel required for preparation purposes. To achieve this, not only do we need robust mechanisms to assess performance and infer competency, we need robust and intuitive authoring tools to support rapid configuration of these functions across roles, scenarios, and environments.

Planning Activities.

In the plan portion of AIS implementation, performing front-end analyses is critical. This involves: (1) representing a set of training objectives that a scenario will target, (2) storyboarding a scenario with events and triggers based on the prescribed tasks and conditions of a defined training objective, (3) deconstructing the training objectives into the KSAs that are required to meet performance standards, (4) establishing which KSAs associate with the sequenced events defined in the storyboard, (5) establishing criteria thresholds for task standards based on representative conditions, and (6) defining pedagogical strategy functions at both the task and KSA level for use in real-time to influence training.

Each of the activities listed above assists in building a requirements list for use during system preparation (i.e., configuration). The goal is to elicit the necessary information from Subject Matter Experts (SME) through structured task analyses that are designed to link the workflow above with schema structures in an established AIS framework. The front-end analyses are important as they specify conditions for use during preparation activities. Depending on the assessment techniques (e.g., decision trees, Bayesian nets, Markov Decision Processes, etc.), the training environment must be modeled across a set of states that a trainee and/or unit may experience based on their underlying actions. This involves documenting all related tasks/events and associated triggers by building references in the environment that can be used during AIS configuration (e.g., waypoints, paths, zones of interest, objects, etc.). Until AI methods target automated processes to support the six defined tasks above, there need to be dedicated authoring tools and generalized methods for producing this content across any STE-focused event, regardless of the terrain and echelon structure. This dependency will be discussed in further detail after the following sub-sections.
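To make these planning outputs concrete, the following is a minimal sketch of how the elicited information might be organized into schema structures. The representation, type names, and fields are our own illustration under the assumptions above, not the GIFT or TMT schema:

```python
from dataclasses import dataclass, field

@dataclass
class KSA:
    """A knowledge, skill, or ability decomposed from a training objective."""
    ksa_id: str
    description: str

@dataclass
class EnvironmentReference:
    """A contextual anchor built in the environment for AIS configuration."""
    ref_id: str
    ref_type: str                 # e.g., "waypoint", "path", "zone", "object"

@dataclass
class ScenarioEvent:
    """A storyboarded event, its trigger, and the KSAs it exercises."""
    event_id: str
    trigger: str                  # e.g., "unit enters zone Z1"
    ksa_ids: list[str]            # links the event to targeted KSAs
    criteria: dict[str, float]    # task-standard thresholds for the conditions

@dataclass
class TrainingObjective:
    """The top-level unit elicited from SMEs during front-end analysis."""
    objective_id: str
    description: str
    ksas: list[KSA] = field(default_factory=list)
    events: list[ScenarioEvent] = field(default_factory=list)
    references: list[EnvironmentReference] = field(default_factory=list)
```

Under this framing, each environment reference corresponds to an authored anchor that assessment logic can consume during the preparation phase.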

Preparation Activities.

Following the planning phase, preparation activities are initiated to configure the associated intelligent tutor modules. This involves establishing a gateway (i.e., defining dataflow specifications), configuring assessments around the set of tasks storyboarded during planning, and building instructional interventions that can be triggered based on the established assessments. A critical component to the success of this concept is a library of generalizable measures that can be referenced during this portion of AIS development. If a measure does not exist, it is up to the interacting party to create the new measure in source code and recompile. Through this mechanism, there exists a community approach to measurement, where techniques and condition classes can be shared and repurposed across numerous contexts.
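As an illustration of this community library concept, the sketch below shows one way a registry of reusable condition classes could be organized. The interface and registry names are hypothetical assumptions and do not reflect GIFT’s actual API:

```python
from abc import ABC, abstractmethod

class ConditionClass(ABC):
    """Base interface for a generalizable, shareable assessment measure."""

    @abstractmethod
    def evaluate(self, game_state: dict) -> str:
        """Map the current simulation state to an assessment level,
        e.g., 'below', 'at', or 'above' expectation."""

# shared library: authored scenarios reference measures by name
CONDITION_LIBRARY: dict[str, type[ConditionClass]] = {}

def register_condition(name: str):
    """Decorator that publishes a new measure to the community library
    (in practice, a new measure still requires a compile into the baseline)."""
    def wrap(cls: type[ConditionClass]) -> type[ConditionClass]:
        CONDITION_LIBRARY[name] = cls
        return cls
    return wrap
```

Once registered, a measure becomes referenceable by name during authoring, which is what enables sharing and repurposing across contexts.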

In an ideal situation, tools and methods are available to the same individuals interacting in the plan phase to support these AIS preparation activities. However, AIS scenario preparation can involve complex technical tasks that require a deep understanding of the underlying architecture and its inherent dependencies when authoring automated assessments. To reduce this complexity, prior research has investigated the impact of overlay authoring functions for map-oriented simulation environments on authoring AIS logic (see Fig. 2), with results showing significant reductions in both errors performed and time to author [16].

Fig. 2. AIS overlay authoring functions for establishing contextualized reference objects.

The current tool provides a mechanism to quickly build points of interest, areas of interest, and paths of interest that can be referenced as contextual anchors when configuring measures. The approach is extensible and applies across any map-based simulation, with current examples in live, virtual, and constructive environments. With contextual anchors, the author can easily populate measures that require references from the environment on which to base performance thresholds (e.g., designating a vulnerable area during a battle drill by using the ‘AvoidArea’ condition class with the specified zone established through the overlay tool). This stealth assessment can capture specific interaction patterns in the environment and provides temporal associations when violations are observed, either for real-time feedback or logged for AAR purposes.
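As a hedged sketch of how an ‘AvoidArea’-style measure might consume an overlay-authored zone, assuming a simple rectangular area and a stream of timestamped entity positions (the class and field names are illustrative only, not GIFT’s implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    """A rectangular area of interest authored with the overlay tool."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

@dataclass
class AvoidAreaCondition:
    """Flags a violation whenever a tracked entity enters the designated zone,
    logging the time and entity for real-time feedback or AAR replay."""
    zone: Zone
    violations: list[tuple[float, str]] = field(default_factory=list)

    def update(self, sim_time: float, entity_id: str, x: float, y: float) -> bool:
        if self.zone.contains(x, y):
            self.violations.append((sim_time, entity_id))  # temporal association
            return True
        return False

# usage: the zone comes from an overlay-authored reference object
cond = AvoidAreaCondition(zone=Zone(100.0, 200.0, 150.0, 260.0))
cond.update(sim_time=12.5, entity_id="rifleman_2", x=120.0, y=230.0)  # True, logged
```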

Once assessments are configured across a set of tasks and conditions, preparation activities are required to configure pedagogical tactics that can be executed when AIS conditions are met. This is the critical gap in team-based tutoring that is not well researched in the AIS community. Understanding how to measure an environment and infer performance is one thing; understanding how to use those measures to drive automated pedagogical decisions is another.

In an effort to maintain generalizability, it is important to represent these system actions in an abstract way that can translate across tasks, domains, and environments. In the current AIS TMT baseline leveraging the GIFT architecture, there are three pedagogical models that an author can reference. Each approach has dependencies on the assessment modeling technique, with the assessment outputs driving pedagogical logic. These three models include:

  • State Transition Model: bases pedagogical decisions on observed shifts in performance at a concept-by-concept level, with four supported actions (provide guidance, adapt scenario, ask question, and do nothing); a minimal sketch of this model follows the list

  • Trend and Competency Model: examines performance over time and applies algorithms to determine the focus of coaching and remediation based on task model priorities and trends. It applies the same supported actions with variation in the pedagogical reasoning.

  • ICAP-inspired (Interactive, Constructive, Active, Passive) Model (based on Chi’s [17] learning activity framework): formalized using Markov Decision Processes and incorporating a reinforcement learning backend [18]. Policies determine remediation interactivity based on demographics and observed patterns in performance to optimize reward functions.
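The sketch below illustrates the flavor of the State Transition Model with a hypothetical hand-authored policy over concept-level performance shifts. The states, mappings, and default action are illustrative assumptions, not the deployed GIFT logic:

```python
from enum import Enum

class Perf(Enum):
    BELOW = "below expectation"
    AT = "at expectation"
    ABOVE = "above expectation"

class Action(Enum):
    GUIDANCE = "provide guidance"
    ADAPT = "adapt scenario"
    QUESTION = "ask question"
    NOTHING = "do nothing"

# illustrative hand-authored policy over (previous, current) concept states
TRANSITION_POLICY: dict[tuple[Perf, Perf], Action] = {
    (Perf.AT, Perf.BELOW): Action.GUIDANCE,    # performance just slipped: coach
    (Perf.BELOW, Perf.BELOW): Action.ADAPT,    # coaching failed: adapt the scenario
    (Perf.BELOW, Perf.AT): Action.QUESTION,    # recovering: probe understanding
    (Perf.AT, Perf.ABOVE): Action.NOTHING,     # improving without intervention
}

def select_action(prev: Perf, curr: Perf) -> Action:
    """Return the pedagogical action for an observed concept-level shift."""
    return TRANSITION_POLICY.get((prev, curr), Action.NOTHING)
```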

Regardless of the model, content is needed; and the more adaptive the system, the more content is required. Automating the construction of feedback and remedial activities is another challenge research aims to address, but there has not been much success outside the application of “hint factories” [19]. As a starting point, there is a need to establish an initial team-focused pedagogical activity model that accounts for individual and team states in a generalizable form. It should highlight the audience and intended function of each activity, with the goal of establishing policies across the activities to determine pedagogical practice based on data-driven methods. We focus on the pedagogical activity model component of the problem space later in the paper.

Execution.

After the preparation activities are complete, an AIS-configured STE scenario is ready for execution. The assessments and pedagogical logic have been established around the planned storyboard conceived in the front-end analysis. The intended training audience is now ready to interact, with the underlying assessments in place to guide training when deficiencies in performance are identified. In this instance, procedures are required to initialize the appropriate AIS modules to support the Adaptive Tutoring Learning Effect Chain oriented for team and collective training structures (ATLEC, see Fig. 3) [20]. The ATLEC shows the process of using learner interaction data across a team structure to inform performance states at the individual and team level for guiding instructional decisions. Interfacing technology for observer controllers to provide injects into the ATLEC in real-time is planned, with their inputs linked to dedicated nodes within the task schema where automated measures are not supported and/or feasible. The critical component here is having the required instructional context in place before an adaptive intervention can be selected, with those anchors accounted for in the planning and preparation phases.

Fig. 3. ATLEC model for teams [20].

While the resulting AIS can operate in an automated closed-loop capacity, the platform is designed to support human-in-the-loop decisions at both the assessment and pedagogical levels. In this instance, an OC uses the AIS technology to track/insert assessments and to manipulate the environment with adaptive injects and feedback. The goal is to reduce the workload on the OCs and provide access to direct manipulation and guidance functions that can be automatically carried out by the system, along with recommendations based on the underlying pedagogical policies.
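A minimal sketch of one pass through this loop follows, with optional OC inputs overriding automation at both the assessment and pedagogical levels. The threshold logic and function names are illustrative assumptions rather than the ATLEC implementation:

```python
def assess_individuals(interaction_data: dict[str, list[float]]) -> dict[str, str]:
    """Infer a performance state per member from raw measure scores
    (placeholder threshold logic standing in for authored condition classes)."""
    return {member: ("at" if sum(scores) / len(scores) >= 0.7 else "below")
            for member, scores in interaction_data.items()}

def atlec_step(interaction_data: dict[str, list[float]],
               oc_states: dict[str, str] | None = None,
               oc_action: str | None = None) -> str:
    """One pass through the loop: interaction data -> individual/team states
    -> instructional decision, with OC inputs filling gaps where automated
    measures are not supported."""
    states = assess_individuals(interaction_data)
    if oc_states:                        # human-in-the-loop assessment injects
        states.update(oc_states)
    team_state = "below" if "below" in states.values() else "at"
    recommendation = "deliver coaching" if team_state == "below" else "continue"
    return oc_action or recommendation   # OC may accept or override the system

# one execution tick: two members' measure scores flow through the chain
decision = atlec_step({"rifleman_1": [0.9, 0.8], "rifleman_2": [0.4, 0.5]})
```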

Assessment/Review.

Assessment and review activities can be differentiated across two categories: (1) assessment and review for the training audience through AAR activities intelligently informed by observations collected during the execution portion, and (2) assessment and review of the system-level interventions performed during the execution phase and their resulting impact on measurable performance and objective outcomes, with reinforcement learning methods applied where feasible. The former applies AIS methods for facilitating a guided AAR with reference points and remediation materials to address deficiencies recognized in both the individual role and team context, while the latter institutes AI methods to support a self-optimizing system that modifies pedagogical policies based on evidence-based methods. While AAR is a critical component of team and collective training, the focus of this paper is AIS functions at the execution level.

3 Adaptive Pedagogy at the Team Level

With an understanding of the development phases that go into establishing AIS functions for teams, we spend the remainder of the paper discussing an initial pedagogical activity framework based around targeted feedback and adaptation strategies. The roles of feedback and adaptation are discussed within the context of AIS, followed by considerations related to the role of each pedagogical function and how they associate across individual, team-lead, and whole-team (i.e., global) structures.

3.1 Role of Feedback and Adaptation

The power of AISs lies in using sophisticated modeling techniques to understand an individual’s and/or team’s progress toward a task objective for the purpose of guiding that experience. While the measurement and assessment components of an AIS drive the underlying selection of a pedagogical strategy, the feedback and adaptive functions are the interfacing components a trainee will experience, making them a critical capability need that research should address.

In the context of instruction and training, feedback is credited as a fundamental principle of efficient knowledge transfer [21,22,23]. According to Narciss [24], feedback in the learning context is provided by an external source of information not directly perceivable during task execution, and is used as a means for comparing performance outcomes with desired end states. This facilitation is useful for multiple purposes. Feedback: (1) can often motivate higher levels of effort based on current performance compared to desired performance [25]; (2) reduces uncertainty about how well an individual is performing on a task [26]; and (3) is useful for correcting misconceptions and errors when inappropriate strategies are executed [27]. Understanding this interaction space in the context of teams is important. Teams can be composed of individuals from various backgrounds and with a wide array of personality traits. Managing motivation while providing coaching becomes critical, as one disengaged member of the team can impact the effectiveness of the training. These nuances should be accounted for when designing feedback policies.

In addition, the game-based training platform itself must have mechanisms for reacting to state and performance measures in real-time. These mechanisms are designed to impact scenario storylines and force objective reactions from those training. In-game adaptations should provide the ability to adjust difficulty levels based on the inferred performance of individuals and teams, adjust the pace and flow of guidance and coaching strategies, and deliver cues in the virtual environment that may act as a form of scenario-specific feedback. In the following subsections, we present feedback and adaptation taxonomies that can guide the development of a team-based pedagogical model represented in a domain-agnostic form.

Feedback Target Taxonomy.

Feedback should be structured around assessment characteristics and the target of coaching. There should be explicit representations in the domain model that inform the feedback target, but this needs to be accomplished while maintaining a flexible ontological schema. In the current TMT adaptive baseline, scenarios are represented as a series of tasks that have associated conditions and standards based on the context of the environment. Now that teams are represented in the domain model, tagging task concepts and their associated assessments with metadata can support differentiation of feedback at levels within the team structure. As a starting point, we propose an initial feedback taxonomy that will guide pedagogical model development (see Table 1).

Table 1. Team interaction feedback taxonomy for AIS (N: Novice, J: Journeyman, E: Expert; number in parentheses represents number of errors observed across task/concept structure)

The taxonomy establishes mechanisms at three levels of interaction (individual, team lead, and global), two levels of valence (positive and negative), and two levels of timing (real-time [RT] and AAR); the number inside parentheses represents the number of observed errors for an associated concept assessment. This approach supports a simplified representation that puts bounds on required content, thus reducing development time at the SME/instructor level. Before proceeding to system adaptations, there are a few dependencies to recognize with this approach. These include: (1) task-specific concept assessments can be tagged at the process or procedure level, (2) assessment mechanisms exist (either automated or human-informed) for natural language and communication processes, and (3) checkpoints are specified that enable a battle update briefing with assessment classes in place to manage performance states at the team task level.
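To illustrate how the taxonomy bounds required content, the sketch below enumerates the taxonomy’s cells as keys into an authored content store (3 levels × 2 valences × 2 timings = 12 slots per assessed concept). The representation and example message are our own illustration, not the TMT schema:

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

class Level(Enum):
    INDIVIDUAL = "individual"
    TEAM_LEAD = "team lead"
    GLOBAL = "global"

class Valence(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"

class Timing(Enum):
    RT = "real-time"
    AAR = "after action review"

@dataclass(frozen=True)
class FeedbackSlot:
    """One cell of the taxonomy; authored feedback content attaches to a slot."""
    level: Level
    valence: Valence
    timing: Timing

# the full content space per concept is bounded: 3 x 2 x 2 = 12 slots
ALL_SLOTS = [FeedbackSlot(l, v, t) for l, v, t in product(Level, Valence, Timing)]

# example authored entry for one slot (message content is illustrative)
CONTENT: dict[FeedbackSlot, str] = {
    FeedbackSlot(Level.INDIVIDUAL, Valence.NEGATIVE, Timing.RT):
        "Check your sector; the south approach is uncovered.",
}

def select_feedback(slot: FeedbackSlot) -> str | None:
    """Retrieve the authored message for an assessed slot, if one exists."""
    return CONTENT.get(slot)
```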

Adaptation Target Taxonomy.

When it comes to system-level adaptations, those building AIS components are limited to the adaptive functions their integrated environment supports. However, there are common functions, depending on the simulation engine, that enable direct manipulation of actors, objects, and scenario variables in real-time. These include adding/removing/relocating entities and non-player characters, teleporting interacting learners to any map location, adding building and environment features, adjusting time of day and weather, etc. While there are numerous ways to adapt a scenario in real-time, there needs to be a learning-science-informed workflow to assist in their configuration and execution.
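One way to keep such manipulations engine-agnostic is to express them as abstract commands that a gateway translates into the integrated environment’s native calls. The sketch below is an assumption-laden illustration, not an existing STE interface:

```python
from dataclasses import dataclass, field
from enum import Enum

class AdaptationType(Enum):
    """Manipulations commonly supported by simulation engines."""
    ADD_ENTITY = "add entity"
    REMOVE_ENTITY = "remove entity"
    RELOCATE_ENTITY = "relocate entity"
    TELEPORT_LEARNER = "teleport learner"
    ADD_STRUCTURE = "add building/environment feature"
    SET_TIME_OF_DAY = "set time of day"
    SET_WEATHER = "set weather"

@dataclass
class AdaptationCommand:
    """An engine-agnostic adaptation request; a gateway would translate this
    into the integrated environment's native scripting calls."""
    kind: AdaptationType
    target_id: str | None = None                 # affected entity/learner, if any
    params: dict = field(default_factory=dict)   # e.g., {"weather": "fog"}

# a pre-established "increase complexity" set the system can fire as one unit
INCREASE_COMPLEXITY = [
    AdaptationCommand(AdaptationType.SET_WEATHER, params={"weather": "fog"}),
    AdaptationCommand(AdaptationType.ADD_ENTITY, params={"opfor_squads": 1}),
]
```

Pre-authored command sets like the one above are what allow the taxonomy in the next paragraph to remain simple: the policy chooses a set, not individual engine calls.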

In an effort to maintain simplicity, the adaptation taxonomy has limited dimensions. There are two levels of complexity (increase and decrease) and two levels of progress check decisions (continue and restart). Increasing and decreasing complexity can be any combination of adaptations; what is important is having pre-established sets that the system can act on automatically and in real-time. Progress check decisions associate with either allowing a team to continue on in the mission or having the system restart the scenario due to observed critical errors. In this instance, SMEs will need to identify tasks that have designated performance criteria deemed critical at the task level. This enables SMEs to associate specific sub-tasks that require more focused training than others. Getting SME input to inform these decisions is critical.

Next, there needs to be an agent or set of policies that manages the interplay between feedback and adaptation. While feedback can be delivered at the individual level, system adaptations will impact the scenario and team tasking at large. As a starting point, individual role and team-lead performance on tasks should not directly lead to system adaptations as errors are performed. Feedback should be provided to correct errors based on the associations in Table 1, but the mission continues until an observed checkpoint is registered in the AIS. At that point, performance is aggregated across the interacting team structures, with the ability to populate adaptation-level criteria that determine whether complexity should be adjusted and whether the scenario continues. This provides discrete time-markers at which dynamic elements can be introduced into the interacting environment.
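A minimal sketch of such a checkpoint policy follows, assuming hypothetical SME-authored error thresholds and a flagged set of critical tasks (all names and values are illustrative):

```python
# hypothetical SME-authored thresholds applied when a checkpoint is registered
INCREASE_IF_TOTAL_AT_OR_BELOW = 2    # few errors: raise challenge [2]
RESTART_IF_CRITICAL_AT_OR_ABOVE = 3  # too many critical-task errors: restart

def checkpoint_decision(errors_by_task: dict[str, int],
                        critical_tasks: set[str]) -> tuple[str, str]:
    """Aggregate team errors at a checkpoint and return a
    (complexity, progress) decision pair per the adaptation taxonomy."""
    total = sum(errors_by_task.values())
    critical = sum(n for task, n in errors_by_task.items()
                   if task in critical_tasks)
    complexity = "increase" if total <= INCREASE_IF_TOTAL_AT_OR_BELOW else "decrease"
    progress = "restart" if critical >= RESTART_IF_CRITICAL_AT_OR_ABOVE else "continue"
    return complexity, progress

# e.g., three errors on a critical react-to-contact task forces a restart
print(checkpoint_decision({"react_to_contact": 3, "move_tactically": 1},
                          critical_tasks={"react_to_contact"}))
# -> ('decrease', 'restart')
```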

4 Conclusion

Team-based adaptive training is a desired capability in the future Army STE. To provide an effective solution to the soldier, research is required to determine what pedagogical interactions should be supported in these environments and what influence they have on the learning and team development process. In this paper we presented an initial taxonomy for both feedback and adaptation in the context of team AIS across individual roles, team leads, and global associations. The taxonomy provides an initial starting point for developing requirements for a pedagogical model based on policies surrounding the pedagogical activity dependencies. The leading dependency is a robust assessment capability to support the pedagogical reasoning described above, which is its own research vector, requiring measures of taskwork, teamwork, and communication. Next steps will involve adapting the GIFT task model structure to include the assessment metadata requirements and the feedback and adaptation schemas.