
1 Introduction

1.1 Tactical Combat Casualty Care

The life-saving care of a combat casualty begins moments after an injury is sustained. In these moments, the caregiver is tasked with maintaining tactical objectives while making critical care decisions that may determine whether a casualty lives. While the United States has achieved unprecedented survival rates for casualties arriving alive at combat hospitals, as high as 98%, evidence still suggests that 25% of battlefield deaths are potentially preventable. The majority of these deaths occur during the pre-hospital phase of care [1].

This phase of care presents a significant opportunity for the improvement of battlefield medicine and casualty care outcomes. Having providers engage in realistic Tactical Combat Casualty Care (TC3) scenarios can optimize the leadership, teamwork, tactical, and medical skills required to succeed in the challenging situations they may encounter [2]. A hurdle facing the community is creating realistic training scenarios that adequately challenge the cognitive and decision-making processes of trainees. Patients provide explicit and implicit information to caregivers, who make observations and collect evidence to determine the best course of care. During a training scenario with an ostensibly healthy simulated patient, it can be challenging to provide the trainee with the cues needed to trigger their clinical decision making.

Current training for TC3 often involves a patient being simulated by a mannequin or a human actor (i.e., a standardized patient). The mannequin can range in fidelity, from providing the basic human form for practice to providing advanced physiologic interactions to the learner. In the case of a human actor, military personnel participating in a training exercise often carry a “casualty card” that instructs the person, if nominated to simulate a casualty, on how to portray the specific wound named on the card. The card is also used to tell the trainee what wound to treat.

These simulated patients are often enhanced with moulage to simulate the individual wounds and bring another layer of realism to the scenario. Moulage may range from simple overlays that demonstrate some characteristics of the wound (e.g., rubber overlays with synthetic blood) to more complex sleeves that have representative blood dynamics (Fig. 1).

Fig. 1. Moulage simulating a wound being placed on a simulated patient.

While these techniques provide the basic information needed to support a training scenario, the simplicity of the presentation often requires the instructor to describe the wound, or to remind the trainee during an exercise about qualities of the wound that are not portrayed, including how the wound is responding to treatment. For example, an instructor may spray fake blood on the moulage to simulate arterial bleeding. This effort by the instructor compensates for the low-fidelity simulation and takes away from time that could be spent providing instruction. While relatively simple, even these simulations take time and effort to create, set up, and manage, before and during the training exercise.

AR as a Solution.

Augmented Reality (AR), especially given the recent boom in wearable AR headsets, has the potential to revolutionize how TC3 training is conducted today. AR can provide a unique mix of immersive simulation with the real environment. In a field exercise, a trainee could approach a casualty role-player or mannequin and see a simulated wound projected onto the casualty. The hands-on, tactile experience combined with simulated, dynamic wounds and casualty responses has the potential to drastically increase the realism of medical training.

The proliferation of AR technologies provides an interesting opportunity to enhance training by overlaying realistic visual scenes onto the real world. The enhanced visual displays can create more compelling patients for the trainees to interact with and provide a more realistic and valuable learning opportunity.

Medical diagnosis and intervention place high demands on a provider’s senses of sight and touch. Providers often rely on visual cues from the injury to assist in making the right treatment decision, and similar visual cues are used during treatment to determine the correct procedure. Training technologies that can provide these cues, or more explicit guidance to assist with diagnosis and treatment, have the potential to make significant improvements to the simulation experience.

1.2 Augmented Reality

AR typically refers to technology that allows a user to see a real environment while digital information is overlaid on that view. Heads-Up Displays (HUDs), such as those in cockpits or fighter pilot helmets, represent early work in AR, though these overlays typically do not register with objects in the environment. Later work registers information with the environment for tasks ranging from surgery, to machine maintenance, to entertainment such as AR scrimmage lines in NFL football games or highlighting of the hockey puck in NHL games. See [1, 2] for thorough surveys of augmented reality. As mobile devices (phones, tablets) have become more capable, augmented reality has become more mobile, with game examples such as Pokémon GO™, which provides an “AR view” option to show 3D renderings of game characters overlaid on top of camera views. More recently, wearable AR hardware has tended to focus on see-through glasses, visors, or individual lenses that allow computer-generated imagery to be projected hands-free while allowing the user to see the surrounding environment directly. Additionally, more sophisticated AR projections are registered with the real environment, so that digital objects can be placed on real tables or appear to interact with real obstacles (Fig. 2).

Fig. 2. Microsoft HoloLens is one example of the commercially available AR headsets.

AR manufacturers, like Microsoft, have recognized the value of medical applications of the technology and have sponsored or supported multiple projects to explore its feasibility. These projects have resulted in prototypes that validate the potential of AR to improve training and, ultimately, medical care. While the technology continues to improve, current AR systems still have several limitations with real implications for training, including limited computer processing power and limited field of view; however, newer systems like those from SA Photonics and Magic Leap promise many improvements.

1.3 Adaptive Training

An interesting element of TC3 training is the multi-faceted role of the instructor. While providing classroom instruction is part of their role, instructors are also tasked with simulating patient and combat conditions during a hands-on scenario. During a scenario, instructors will question trainees about their treatments, make suggestions or give hints, or directly order the trainees as needed. The instructor may also vary the difficulty of the training to suit the particular trainee.

A possible solution for relieving some of this burden on the instructor is to rely on adaptive instructional systems (AISs) to tailor training for learners. AISs have been the topic of research and development for decades in many fields, but have recently seen renewed interest in military programs like the US Army’s Synthetic Training Environment (STE) rapid development program and the US Navy’s My Navy Learning science and technology program. AISs are artificially-intelligent, computer-based systems that guide learning experiences by tailoring instruction and recommendations based on the goals, needs, and preferences of each individual learner or team in the context of domain learning objectives [3]. Examples of AISs include intelligent tutoring systems (ITSs), intelligent mentors, and intelligent instructional media.

In 2017, a NATO Research Task Group (HFM RTG 237) completed its mission to investigate various intelligent tutoring systems technologies and opportunities for their use within NATO military organizations [4]. The group noted that “military operations, especially those characteristic of current irregular warfare environments, require, among other things, improvisation, rapid judgment, and the ability to deal with the unexpected. They go beyond basic instructional objectives and call for education and training focused on higher order cognitive capabilities such as analysis, evaluation, creativity, and rapid synthesis of novel approaches – approaches that must intersperse judgment with the automatic responses provided by training involving memorization and practice of straightforward procedures. These capabilities can make the difference between success and failure in operations, and require more sophisticated forms of instruction such as one-to-one tutoring” [4] and thus the impetus for AISs.

The RTG found a substantial body of research and development of instructional technologies and especially AISs. They also concluded that while it is not practical to provide one-to-one human tutoring to every soldier, sailor, and airman, it is practical to provide computer-based tutoring that is dynamically tailored to every learner’s capabilities, preferences, and needs (e.g., knowledge and skill gaps). Technology to produce effective and efficient tutoring requires ‘intelligent’ systems that rapidly tailor instruction to individual learner abilities, prior knowledge, experience, and, to some extent, misconceptions (e.g., common errors or malformed mental models about the training domain). In an effort to exploit current and emerging technologies and to push toward needed research to create new technology, the RTG made several recommendations [4] with respect to:

  • Expanding authoring tools to new instructional domains

  • Enhancing automation in the authoring process to reduce developer workload

  • Enhancing user experiences through adaptive interfaces

  • Standardizing components and data in ITSs to enhance interoperability and reuse

  • Modeling aggregate levels of learning (e.g., team and organizational learning) resulting from ITS adoption

It is through adaptive interfaces linked to AR technologies that we might realize improved access to adaptive instruction (e.g., anytime anywhere training), more effective and efficient interaction through natural language dialogue, enhanced realism to match the complexity of training tasks, and better quality after action reviews which enable more detailed analysis of critical decision points in our training. Specifically relating to TC3 training, adaptive instruction embedded within AR training can provide even greater realism to the scenario and enhanced feedback for trainees, without added instructor burden.

2 Integration Opportunities

In the following sections, we discuss several approaches to providing AR-based adaptive instruction in TC3 training. AR provides a platform for immersing trainees in a visually rich and dynamic environment that places a heavy emphasis on user interaction. Thus, the focus of this paper is on the application of interaction strategies for instructional adaptation. While there are several approaches to providing adaptation for learning, the application of interaction strategies leverages the highly interactive nature of AR systems.

2.1 Content Adaptation

Content adaptation involves using the AR visual display not only to progress the scenario, but also to adapt the scenario to the abilities of the learner and the instructional goals. When using AR for TC3 scenarios, the visual scene often provides imagery of an injury’s initial state, as well as the reaction of the injury to trainee input (i.e., intervention or lack thereof).

Injury Identification.

The diagnostic and intervention process involves many different interactions with the patient, ranging from asking diagnostic questions to performing a complex surgical intervention. One example of content adaptation is to adjust the level of clinical questioning that must be performed to diagnose or treat a patient. Varying amounts of visual cues may be provided to encourage a trainee to ask the right questions or order the right diagnostic tests.
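As a minimal sketch of this idea, the snippet below selects how much visual evidence of a wound the overlay exposes based on a simple learner-proficiency estimate; all names, thresholds, and the 0..1 proficiency scale are hypothetical rather than part of any existing system.

```python
from enum import Enum

class CueLevel(Enum):
    """How much visual evidence of the injury the AR overlay exposes."""
    EXPLICIT = 3   # obvious wound imagery plus a callout naming the injury
    MODERATE = 2   # wound imagery only; the trainee must ask the right questions
    SUBTLE = 1     # minimal surface cues; diagnosis rests on questioning and tests

def select_cue_level(proficiency: float) -> CueLevel:
    """Map a 0..1 learner-proficiency estimate to a visual cue level.

    Novices receive richer visual cues; experienced trainees must rely on
    clinical questioning and diagnostic tests to identify the injury.
    Thresholds are illustrative placeholders.
    """
    if proficiency < 0.3:
        return CueLevel.EXPLICIT
    if proficiency < 0.7:
        return CueLevel.MODERATE
    return CueLevel.SUBTLE

# An intermediate trainee sees the wound imagery but no naming callout.
print(select_cue_level(0.5))  # CueLevel.MODERATE
```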

Injury Timing.

Another method of content adaptation involves the timing of the injury exposure. To increase scenario complexity, the patient state could degrade at a rapid pace, requiring the student to make clinical decisions faster. The patient could also present with multiple injuries at the same time that must be addressed concurrently.
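A compact sketch of this timing adaptation follows; the injury names, onset times, and scaling factors are placeholders chosen only to illustrate how a difficulty setting might compress injury onsets and speed up deterioration.

```python
from dataclasses import dataclass

@dataclass
class InjuryEvent:
    injury_id: str
    onset_s: float           # seconds after scenario start when the injury presents
    decay_multiplier: float   # >1.0 means the patient deteriorates faster

def schedule_injuries(difficulty: float) -> list[InjuryEvent]:
    """Return an injury timeline scaled by a 0..1 difficulty setting.

    Higher difficulty compresses onset times (injuries present nearly
    simultaneously) and accelerates deterioration. Injury names and
    scaling factors are illustrative placeholders.
    """
    base = [("arterial_bleed_leg", 0.0), ("tension_pneumothorax", 120.0)]
    compression = 1.0 - 0.8 * difficulty   # onsets up to 80% earlier
    decay = 1.0 + 2.0 * difficulty         # up to 3x faster deterioration
    return [InjuryEvent(name, onset * compression, decay) for name, onset in base]

print(schedule_injuries(difficulty=0.75))
```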

Injury Response.

Simulated injuries can also be used to represent tasks of varying difficulty and to challenge trainees. This can be a very natural adaptation in the scenario, driven automatically using a physiology engine (e.g., BioGears). Using a physiology engine, the patient’s health would degrade or improve according to the actions performed by the student, and the AR wound would ideally adapt to the physiology as well. For example, if the trainee does not apply pressure to an arterial wound, the physiology engine would generate continuous blood loss in the patient model, and the virtual injury would subsequently display significant blood loss. The adaptations could become increasingly complex in this case because the deteriorating patient state could affect other physiologic systems (e.g., the respiratory system) and cause compounding challenges for the trainee.
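This loop could be sketched roughly as follows. The physiology interface is a stand-in (a real engine such as BioGears would sit behind an adapter rather than expose these exact calls), and FakePhysiology, WoundOverlay, and the rate constants are purely illustrative.

```python
from typing import Protocol

class PhysiologyModel(Protocol):
    """Stand-in interface; a real engine (e.g., BioGears) would sit behind an adapter."""
    def advance(self, dt_s: float) -> None: ...
    def hemorrhage_rate_ml_min(self) -> float: ...
    def apply_pressure(self, site: str) -> None: ...

class WoundOverlay:
    """Placeholder for the AR wound rendering; prints instead of drawing."""
    def set_bleed_intensity(self, rate_ml_min: float) -> None:
        print(f"render bleed at {rate_ml_min:.0f} mL/min")

class FakePhysiology:
    """Toy model: bleeding worsens untreated and slows when pressure is applied."""
    def __init__(self) -> None:
        self._rate = 200.0
    def advance(self, dt_s: float) -> None:
        self._rate *= 1.02 ** dt_s
    def hemorrhage_rate_ml_min(self) -> float:
        return self._rate
    def apply_pressure(self, site: str) -> None:
        self._rate *= 0.5

def scenario_tick(model: PhysiologyModel, overlay: WoundOverlay,
                  pressure_applied: bool, dt_s: float = 1.0) -> None:
    """One step of the loop: trainee action -> physiology update -> wound visuals."""
    if pressure_applied:
        model.apply_pressure("left_femoral")
    model.advance(dt_s)
    overlay.set_bleed_intensity(model.hemorrhage_rate_ml_min())

model, overlay = FakePhysiology(), WoundOverlay()
for t in range(3):
    scenario_tick(model, overlay, pressure_applied=(t == 2))
```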

Similarly, injury complexity can be increased by adding compounding injuries to the visual overlay of the patient. This can not only require more complex procedures to be performed, but also require the trainee to think multi-dimensionally about the patient’s injuries. For example, an unconscious patient may indicate to the trainee that intubation is required. However, an unconscious patient with significant injury to the head and neck should indicate to the trainee that a cricothyroidotomy may be necessary instead.

2.2 Prompts

Using AR, trainees can receive helpful prompts via the HUD that they would not otherwise see in the real world. These prompts can provide localization information and explanation using floating text callouts and pointers. Prompts can also be used to highlight the location at which an action can be performed, along with accompanying instructions on how to proceed.

Explicit Prompts.

One use of prompts is to provide explicit hints, reminders, or instructions to the trainee in the form of text. This approach is well suited to assisting with a procedural understanding of the task at hand, and is especially relevant if the learner has forgotten to perform a step of the procedure. Another method of providing explicit prompts is to provide cues that show where the trainee will need to interact next. For example, a prompt box or arrows may be used to show a trainee where to insert an IV or place a tourniquet.
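A sketch of how such explicit prompts might be represented is shown below; ExplicitPrompt, the step names, and the anchor coordinates are hypothetical placeholders rather than the interface of any particular AR toolkit.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ExplicitPrompt:
    """A HUD text callout anchored to a location on the simulated patient."""
    anchor_m: Tuple[float, float, float]   # position in the patient's frame, meters
    text: str                              # instruction or reminder to display
    show_arrow: bool = True                # draw a pointer from the callout to the anchor

def prompt_for_missed_step(step: str) -> Optional[ExplicitPrompt]:
    """Return a hint when the trainee skips a known step of the procedure."""
    hints = {
        "apply_tourniquet": ExplicitPrompt(
            (0.10, -0.60, 0.0), "Apply the tourniquet high and tight on the injured limb"),
        "insert_iv": ExplicitPrompt(
            (0.30, 0.20, 0.0), "Start an IV at the antecubital fossa"),
    }
    return hints.get(step)

print(prompt_for_missed_step("apply_tourniquet"))
```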

Implicit Prompts.

Another use of prompts is to provide more subtle hints to the trainee about the patient state or the necessary next steps. This can be done through both visual and audio cues. One example is to display the vital signs to the trainee and allow them to use that as an indicator of the patient state. This is a more challenging prompt because it requires the trainee to think critically about the signs and symptoms involved with the injuries. Implicit prompts can also be provided through audio cues. For example, if the trainee needs to provide the patient with pain medication, the system could have the patient make groaning noises.
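The sketch below illustrates how implicit cues might be selected from the current patient state; the dictionary keys, thresholds, and cue strings are illustrative assumptions, not an existing API.

```python
def implicit_cues(patient_state: dict) -> list[str]:
    """Select subtle cues (HUD vitals, audio) from the current patient state.

    Rather than stating what to do, the system surfaces signs and symptoms
    and lets the trainee reason about them. Keys and thresholds are
    illustrative placeholders.
    """
    cues = [f"HUD vitals: HR {patient_state['heart_rate']} bpm, "
            f"SpO2 {patient_state['spo2']}%"]
    if patient_state["pain_score"] >= 7:
        cues.append("play audio: patient groaning")     # nudges toward analgesia
    if patient_state["spo2"] < 90:
        cues.append("play audio: labored breathing")    # nudges toward airway and breathing checks
    return cues

print(implicit_cues({"heart_rate": 128, "spo2": 87, "pain_score": 8}))
```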

2.3 Assessment and Feedback

Assessment and feedback are important components of instruction and provide the learner with valuable information about their performance. AR is highly interactive and incorporates assessment and feedback in very natural ways. As the trainee interacts with the patient and the visual scene, the system identifies the trainee’s actions and incorporates them into the scenario. As the system identifies an action, it provides a subsequent reaction, which may affect the patient state positively or negatively depending on the correctness of the action. This assessment and feedback loop is an inherent part of the AR scenario.

Scaffolding Feedback.

As the trainee progresses through a scenario, the system could provide scaffolds to support the trainee by adapting the specific feedback based on previous actions. Scaffolding can be an important component in providing the appropriate amount and types of prompts. As the trainee gains proficiency, the system may fade the support that it provides. This might include moving from explicit scaffolding (e.g., text hints about what might need to be done), to implicit scaffolding (e.g., simple cues such as emphasizing the blood squirting from an arterial wound or changing vitals), to implicit challenging (e.g., an ambiguous mix of cues on the condition of the casualty).

Another layer of scaffolding involves adjusting the amount of feedback given. As the trainee progresses through the scenario, the system may provide less explicit feedback. The trainee may also receive less feedback if they are performing particularly well on the task. One consideration when adapting the amount of feedback is that many implicit feedback mechanisms are needed to progress the training scenario (e.g., blood squirting), so explicit prompts are more appropriate for this type of adaptation.

This approach is consistent with known properties of effective feedback [5], but also with more fundamental learning theories, such as Vygotsky’s Zone of Proximal Development [6], in which the learning environment is adapting continuously to student learning to deliver learning situations that are consistently challenging but matched to the student’s current capability and rate of learning.
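A minimal sketch of such fading logic is shown below, assuming a running window of 0..1 assessment scores as the proficiency signal; the support levels and thresholds are illustrative choices, not a validated policy.

```python
from enum import Enum

class SupportLevel(Enum):
    EXPLICIT = "text hints describing the next required action"
    IMPLICIT = "emphasized physical cues only (e.g., stronger arterial spurting)"
    CHALLENGE = "deliberately ambiguous mix of cues"

def fade_support(recent_scores: list[float]) -> SupportLevel:
    """Fade scaffolding as the trainee demonstrates proficiency.

    recent_scores holds 0..1 assessment scores for the last few trainee
    actions; the window size and thresholds are illustrative placeholders.
    """
    if not recent_scores:
        return SupportLevel.EXPLICIT
    avg = sum(recent_scores) / len(recent_scores)
    if avg < 0.6:
        return SupportLevel.EXPLICIT
    if avg < 0.85:
        return SupportLevel.IMPLICIT
    return SupportLevel.CHALLENGE

print(fade_support([0.7, 0.9, 0.95]))  # SupportLevel.CHALLENGE
```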

Timing of Feedback.

Another consideration for feedback is its timing. Feedback timing can challenge trainees by not providing them with a hint or prompt immediately. As with scaffolding, care must be taken when applying this adaptation to implicit feedback mechanisms. Using the blood squirting example, it is important that this feedback is given immediately to the trainee to maintain the realism of the scenario. Timing adaptation, however, can be valuable in allowing the trainee time to contemplate the correct course of action.
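This distinction could be expressed with a small rule like the one below, where implicit, scenario-critical feedback is never delayed and explicit hints are withheld for a grace period; the feedback kinds and the 30-second grace period are illustrative assumptions.

```python
def feedback_delay_s(feedback_kind: str, trainee_idle_s: float) -> float:
    """Decide how long to withhold a piece of feedback.

    Implicit, scenario-critical feedback (e.g., arterial spurting) is shown
    immediately to preserve realism; explicit hints are held back to give
    the trainee time to reason, and released after a grace period of
    inactivity. Values are illustrative placeholders.
    """
    if feedback_kind == "implicit":
        return 0.0                       # never delay the simulated physiology
    grace_period_s = 30.0                # thinking time before a hint appears
    return max(0.0, grace_period_s - trainee_idle_s)

print(feedback_delay_s("explicit", trainee_idle_s=12.0))  # 18.0 s until the hint appears
```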

3 Conclusion

Military medical personnel are the first responders of the battlefield. Their training and skill maintenance are of preeminent importance to the military; however, several challenges exist to providing the most realistic and effective training possible. The simulated patients used during training exercises often cannot replicate the injuries being trained, which requires significant intervention from instructors.

AR offers improvements over this approach by providing realistic, dynamic visual scenes that can mimic battlefield injuries. These technologies and their applications are still emerging, and exploration is needed to ensure their appropriate integration into instruction. Specifically, this paper explored options for incorporating adaptive instruction techniques into AR training.

Content adaptation involves adapting the illustration and animation of wounds and procedures projected onto the simulated patient. Adaptation in this mode provides a much richer experience; however, the tracking and animation projection needed for this is at the very edge of current AR capabilities. Illustration and animation of wounds and procedures can be a very challenging problem. Also, creating animated models is a very labor-intensive task, which may limit the number of scenarios that can be created.

Hints and prompts are also valuable methods for providing adaptation within an AR scenario. Prompts can be particularly valuable for instruction when a student is not performing the task in a timely manner or has potentially missed a critical component of the patient’s injuries. This approach relies on the HUD to deliver instructional content and information to the trainee, but does not require high-resolution, animated models, nor does it require precise placement of augmentations. These capabilities can be met by many commercially available AR systems.

Adaptation can also be provided within the assessment and feedback mechanisms of the scenario. Automated feedback is a first-class feature in an AR scenario because the dynamic visual scene is constantly providing the trainee with feedback on the patient state and the result of any treatments performed; however, the amount, timing, and type of feedback given to the trainee can be adapted to the needs of the scenario and the trainee.

The incorporation of adaptation in AR provides even further opportunity to enhance the educational content of the scenarios. TC3 training can involve a variety of learners at varying levels of expertise, and adaptive instruction would allow learners to receive instruction based on their level of expertise and instructional goals without putting more burden on the instructor. Our future work will include the incorporation of these techniques within our AR training capabilities and the validation of such techniques as educational tools.