1 The Challenge of Effective Adaptation Management

The operator tried and tried but the task was too hard. Then suddenly, the task disappeared. The operator did not realize that physiological sensors in his Augmented Cognition (AugCog) system detected a critical state – high cognitive load – and in response triggered an automation strategy to reduce task load. What sounds like a reasonable approach – if the operator is overloaded, automate certain tasks to reduce the task load, and as a result cognitive load will decrease as well – may not lead to the desired result. What if task load was not even high and a lack of experience caused the operator to experience high cognitive load? In that case, automation could in fact be counterproductive, as it would no longer allow the user to gain the necessary experience.

In a different fictional AugCog system, oculomotor metrics are used to evaluate the attentional focus of the operator. How should the system react to an inappropriate focus? Employ cueing strategies to shift the operator’s focus? Declutter the display to minimize distraction? The effectiveness of an adaptation would greatly depend on whether the operator missed a task due to over-engagement and attentional tunneling [1], or if the user fell victim to vigilance decrement and task-related fatigue after a long shift.

These examples illustrate how easy it is to underestimate the challenge of adaptation management. Adaptation management involves selecting and configuring appropriate and effective adaptation strategies to address detected problem states, but also monitoring their effects and effectiveness. As AugCog diagnostics detect opportunities for adaptation, adaptation strategies are usually triggered in response to a specific diagnostic outcome. As situation and context evolve, and as the effects of the adaptation kick in, that specific situation is no longer present and a once adequate adaptation strategy may become inadequate. Continued adaptation may even have negative effects on the operator and task performance, as it may occupy cognitive resources, interrupt a high-priority task, or otherwise divert the operator’s attention. As an example, automating a task in a high-stress/high-workload phase may be helpful to maintain performance through this phase. Keep automating longer than necessary, however, and issues with automation complacency (e.g., [2]) and out-of-the-loop performance problems (e.g., [3, 4]) may arise.
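
The lifecycle described above can be sketched as a minimal adaptation manager that not only triggers a strategy on a diagnostic outcome but also re-evaluates it on every cycle so it can be withdrawn once the triggering state has cleared. All names and thresholds here are illustrative assumptions, not part of any system discussed in this session.

```python
from dataclasses import dataclass

@dataclass
class AdaptationManager:
    """Illustrative sketch: trigger, monitor, and withdraw one strategy."""
    trigger_threshold: float = 0.8   # cognitive load that triggers automation
    release_threshold: float = 0.5   # load below which automation is withdrawn
    automation_active: bool = False

    def update(self, cognitive_load: float) -> bool:
        """Run one diagnostic cycle; return whether automation is active."""
        if not self.automation_active and cognitive_load >= self.trigger_threshold:
            self.automation_active = True    # problem state detected: trigger strategy
        elif self.automation_active and cognitive_load < self.release_threshold:
            self.automation_active = False   # state cleared: withdraw to avoid
                                             # complacency / out-of-the-loop issues
        return self.automation_active
```

Using two separate thresholds (trigger above 0.8, release below 0.5) is one simple way to keep the manager from switching the strategy on and off with every small fluctuation in the diagnosed state.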

Hence, the benefit of an adaptation should outweigh its potential cost. Cognitive costs of adaptation have been demonstrated in past implementations of adaptive human-machine interaction. For example, Dorneich et al. [5] report “a loss of situation awareness and survey knowledge of the environment” (p. iv); participants in another study reported confusion and impressions of inconsistency in the information display [6]. Fuchs et al. provide an overview of potential costs and benefits for a number of adaptation strategies previously used in Augmented Cognition systems [7]. The effects are similar to those observed in interactions with automated systems. Lessons learned from automation research (e.g., [8, 9]) should therefore be considered when designing adaptation strategies and adaptation management frameworks.

Adaptation management is a relevant topic for both adaptive operational environments and adaptive training systems; however, the objectives of adaptation are quite different between the two. In operational environments, the overall goal of adaptation is to optimize performance – adequate performance is the target state. Adaptive training aims to optimize training effectiveness and efficiency. To that end, good performance may present an opportunity to accelerate training and therefore indicate a need for adaptation. The same is true for critical cognitive states: in operational environments, degraded cognition poses safety risks and should be addressed. A training system may intentionally induce such states to train self-regulation or coping strategies. Finally, while an operational adaptive system would likely intervene to avoid human error, errors are acceptable and even desirable in training, as they offer learning opportunities.

As briefly outlined above, the challenges associated with adaptation management are substantial and remain largely unaddressed. This session on “Adaptation Strategies and Adaptation Management” aims to raise awareness for this essential component of AugCog systems and initiate scientific discourse to address these issues in future research.

2 Session Themes

The session opens with a look back at 15 years of AugCog research. Dylan Schmorrow, who founded and led the field of Augmented Cognition while serving as a Program Manager at the Defense Advanced Research Projects Agency (DARPA), will review past efforts, provide lessons-learned relevant for adaptation management, and share his vision for a 21st century human-computer symbiosis. Further contributions address more specific challenges but can be categorized into five broader themes discussed in the following sections.

2.1 Enhancing Adaptation Through Context

Some AugCog systems have relied solely on physiological indicators of cognitive states to inform adaptation. However, it has been claimed that effective adaptation management requires contextual data, which contains crucial information about the state of the system, the task, and the user. Without such information, adaptations may be triggered or withdrawn at inopportune moments, potentially disrupting or confusing the user, or leading to task switching issues, situation awareness problems, and workload increases. In these cases, adaptations may even have a negative impact on performance, outweighing the benefit of adaptation altogether. This indiscriminate use of adaptation strategies has been labeled “brute force mitigation” [10]. In contrast, context-aware adaptation frameworks would process not only information about the physiological state of the user, but also behavioral data, task state, environmental parameters, sensor information, system events, or user interactions and preferences [11]. These can then be interpreted to derive task context, possible root causes for observed performance problems, or user intent to dynamically select and configure appropriate adaptations at runtime (cf. [12]).
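
To make the contrast with “brute force mitigation” concrete, consider a sketch in which the same physiological diagnosis maps to different strategies depending on the inferred root cause, echoing the overload example from the introduction. The state labels and strategy names are hypothetical and purely for illustration.

```python
def select_strategy(cognitive_load: str, task_load: str, experience: str) -> str:
    """Context-aware selection sketch: same diagnosis, different root causes."""
    if cognitive_load != "high":
        return "none"                        # no critical state detected
    if task_load == "high":
        return "automate_secondary_tasks"    # overload is task-driven: offload
    if experience == "low":
        return "offer_guidance"              # overload is experience-driven:
                                             # keep the user in the loop and learning
    return "declutter_display"               # otherwise, reduce distraction
```

A purely physiology-driven system would collapse all three “high load” branches into the first one, which is exactly the failure mode the context argument above is about.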

In this session, Baltzer et al. [13] provide insights into a context-sensitive adaptation mechanism for cooperative guidance and control of highly automated vehicles. Their conceptual approach of analyzing “interaction patterns” (patterns that combine driver activity and environmental parameters) is used to adapt driver assistance systems to the situation at hand and determine whether and how technical intervention is necessary.

2.2 Adaptation Management in Adaptive Training Environments

To increase training efficiency, many adaptive training systems detect when a trainee is ready to move on. One challenge in this domain is to find appropriate indicators to advance training to the next level.

In this session, Stephens et al. [14] provide an overview of mental states of interest for adaptive training and various approaches to operationalize them. Fortin-Côte et al. [15] report a study that seeks an optimal trigger rule for an adaptive training environment based on different combinations of workload and performance metrics.

2.3 Stages of Adaptation

Consider a state of high cognitive load that is addressed through an automation strategy. With automation active, tasks are offloaded from the user and cognitive workload may decrease to an uncritical level. Subsequent withdrawal of the automation, however, would lead to an increase in workload that, again, triggers adaptation. To avoid rapid adaptive state oscillation and system instability, it may thus not be sufficient to merely switch adaptation strategies on and off. One way to avoid this instability is a gradual approach to adaptation that is as restrained as possible but as intrusive as necessary. Tollar [16] suggests adaptations that are “graduated or delivered incrementally in levels to help operators to keep ‘in the groove’.” (p. 417). Fuchs et al. describe an approach that uses “Stages of Adaptation” to modulate the intensity and intrusiveness of adaptation based on task priority [17].
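
A graduated scheme of this kind can be sketched as an ordered ladder of increasingly intrusive stages: the system escalates one level at a time while the critical state persists and steps back down gradually once it clears, rather than toggling a single strategy. The stage names below are illustrative assumptions, not stages from any of the cited systems.

```python
# Stages ordered from least to most intrusive (illustrative names).
STAGES = ["none", "highlight_cue", "auditory_alert", "automate_task"]

def next_stage(current: int, state_is_critical: bool) -> int:
    """Move one step up or down the adaptation ladder per diagnostic cycle."""
    if state_is_critical:
        return min(current + 1, len(STAGES) - 1)   # escalate incrementally
    return max(current - 1, 0)                     # withdraw gradually,
                                                   # damping on/off oscillation
```

Because each cycle changes the stage by at most one level, a brief fluctuation in the diagnosed state produces only a small change in intrusiveness instead of a full on/off switch.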

In this session, Baltzer et al. [13] present a “stepwise escalation” approach to provide context-adequate intervention in a cooperative driving task. Based on context information, an automated vehicle will dynamically add auditory and/or tactile cues to communicate a detected obstacle. If deemed necessary, the vehicle will intervene and decouple the driver from the task to initiate an appropriate maneuver.

2.4 The Adaptive Operator

Humans themselves are adaptive systems (cf. [18, 19]). They react and adapt to changing task demands within a “zone of adaptability” [20]. These adaptations may be voluntary or involuntary. For example, humans may consciously decide to invest more effort to perform better if deemed necessary, or stress may cause the release of hormones leading to higher arousal and alertness. In the context of adaptive automation, Veltman and Jansen [19] expect that adaptive technical systems are more likely to work successfully if they start reallocating tasks only once the operator’s intrinsic adaptation mechanisms are no longer able to adequately react to changing task demands. Otherwise, two adaptive systems (the “adaptive operator” and the adaptive technical system) may interact in a counterproductive manner.
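
One minimal way to operationalize this principle is to gate technical intervention on evidence that the operator's own adaptation is failing, e.g. high load accompanied by a declining performance trend, rather than on high load alone. This is a hypothetical sketch of the idea, not an implementation from Veltman and Jansen; the thresholds and signal names are assumptions.

```python
def should_reallocate(cognitive_load: float, performance_trend: float) -> bool:
    """Intervene only when the operator's intrinsic adaptation no longer compensates.

    cognitive_load: diagnosed load in [0, 1] (illustrative scale).
    performance_trend: recent slope of task performance (negative = degrading).
    """
    # High load alone may simply mean the operator is investing more effort
    # and still coping; intervene only when performance is also degrading.
    return cognitive_load > 0.8 and performance_trend < 0.0
```

Gating on both signals keeps the two adaptive systems from working against each other: the technical system stays passive while the operator's own compensation is still effective.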

Stephens et al. [14] contribute an interesting twist to the “adaptive operator” theme as they present an overview of adaptive systems aimed at enhancing the operator’s self-awareness and self-regulation. Instead of adapting the system to achieve optimal performance or efficiency, the system provides feedback to improve self-monitoring and self-regulation skills that will help the user maintain more effective mental states under critical conditions.

2.5 Machine Learning Approaches for Adaptation Management

Machine learning approaches have been extensively used for cognitive state classification, but they may also prove useful for adaptation management. Evaluating the success of certain strategies and adaptation mechanisms in real-time, learning user characteristics, strategies, and preferences, and understanding and reacting to the effects of changing conditions are all aspects that could benefit from machine learning and artificial intelligence.

Adaptive training approaches have used machine learning to identify learner needs and tailor the learning experience to the individual learner. For machine learning algorithms to be effective, however, large amounts of individual training data are necessary. In this session, Sottilare [21] presents an idea to overcome this major limitation. He proposes to employ the concept of personas to develop a community-based learner model that represents a typical learner and evolves over time as the community provides additional data points.

3 The Road Ahead

The challenges of adaptation management are manifold due to the highly dynamic nature of human operators, their experience, their strategies, and their complex task environments. This session overview provided a number of adaptation-related themes that will hopefully spark interest in the community and inspire future research.

As Augmented Cognition systems move into the real world, it is time to embrace the true level of complexity of these highly integrated human-machine systems. Outside the laboratory, it will no longer be sufficient to observe problem state X and trigger adaptation strategy A_X in response. Future adaptation managers should be flexible enough to detect and account for unanticipated system states and correct or expand their future expectations and reactions as necessary. Effective AugCog systems will process extensive amounts of contextual data and use holistic models of cognition that consider the interplay and interdependencies of multiple cognitive states (cf. [22]) to dynamically mitigate the true source of detected problems.