1 Introduction

Automation is inherent in the future of unmanned system command and control (C2). Future capabilities may demand an inversion of the human-robot control scheme, in which a single person or small team controls vast numbers of unmanned systems. According to Defense Department unmanned systems roadmaps and scientific advisory boards, this capability is expected in the very near future [1, 2]. The advent of unmanned systems and advanced algorithmic techniques pushes human-automation interaction issues to the forefront of the news and of our scientific pursuits.

The increase in complexity required to realize these visions, both in the laboratory and in the real world, is daunting. Although 50 vehicles can be controlled by a single person [3], we are a long way from flexibly intelligent autonomy. Robots may be useful for dull, dirty, and dangerous tasks, but they must also become our colleagues and share in the work as part of a team.

In the past, developers have tried, and many times failed, to integrate automation and the human. Approaches that focus on machines and their capabilities alone are unlikely to succeed unless they consider the human's role in depth as part of user-centric design. Without such consideration, many have observed conditions in which the human does not properly understand what an autonomous agent is doing or what it will do next. These problems occur even with relatively simple automated systems when they are poorly understood; they compound as complex, artificial neuro-evolutionary algorithms are used to determine autonomy behaviors.

The spread of cognitive engineering methods, and the results of increased study of humans attempting to operate more complex autonomy [4–7], lend themselves to these team perspectives. Though human-robot interaction has been studied with small teams and at 1:1 human-to-robot ratios, less is known about how to team effectively in a supervisory context (though see [8] on mixed-initiative adaptive systems in search and rescue).

In this paper, we explain our perspective on creating human-autonomy teaming in supervision through a task-manager interface, using the Input, Process, Output (IPO) model [11, 12]. The task manager is part of the teaming interfaces available within a multiple unmanned system simulation and testbed (IMPACT; see also Lange & Gutzwiller, 2016, this conference). We discuss how this novel task-manager interface could facilitate human-agent teaming and the study of human-automation interaction.

2 Human-Agent Teaming for Task-Centric Operation

The task manager (TM) interface serves as a task-based communication and cooperation point between the human operator and the autonomous agents that IMPACT employs. The TM tracks and organizes the myriad high-level cognitive tasks a supervisor must manage (see Fig. 1).

Fig. 1.

The conceptual task manager interface. Two columns of “tasks” represent the task queues of the human user (left column) and, in this case, an autonomous assistant (right column). Tasks are created by pulling orders and critical information from a chat service.

The TM facilitates task execution by priming and linking to interface elements in the testbed, sometimes pre-configuring them for the particular task when the user starts it. Users can give and take tasks, allowing for more control over the autonomy. Indications of basic task progression (such as start, ongoing, and stop) are included for each task. Because we do not presume to capture all possible tasks, users can also create their own tasks. Not shown is our approach to prioritizing tasks within the queues, which is still under development.
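To make these lifecycle elements concrete, the following minimal sketch in Python illustrates one way a queue entry with basic progression states, ownership, and a linked interface element could be represented. All names and fields here are hypothetical illustrations, not the IMPACT implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class TaskState(Enum):
    # Basic progression indicators described above: start, ongoing, stop
    QUEUED = auto()
    STARTED = auto()
    ONGOING = auto()
    STOPPED = auto()

@dataclass
class Task:
    """One entry in a task-manager queue (illustrative fields only)."""
    title: str
    owner: str                              # "human" or an agent identifier
    state: TaskState = TaskState.QUEUED
    linked_interface: Optional[str] = None  # testbed element primed for this task
    user_created: bool = False              # users may also create their own tasks

def reassign(task: Task, new_owner: str) -> None:
    """'Give' or 'take' a task by moving it to another agent's queue."""
    task.owner = new_owner
```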

The task manager is thus designed as a cognitive aid to mitigate the known costs to memory for goals resulting from cognitive overload and interruptions [9, 10]. Even without its integration as an aid for managing and collaborating with autonomy, it is likely to provide performance and awareness benefits as situations approach cognitive overload.

The TM is also useful from a teaming perspective, which is the focus here. A standard team model, the Input, Process, Output (IPO) model [11, 12], can be used to understand the factors that influence human team effectiveness. We examined each major attribute of this model in reference to task-based teaming with unmanned systems under mixed-initiative supervision. For each element, we discuss whether the TM is likely to improve it, along with the potential effects or consequences of implementation.

2.1 Inputs

Input variables influence team interactions as part of the properties of the team, even before work has begun. In the IPO model, these are elements such as individual motivation and expertise, and team characteristics such as composition and team mental models. Because a team in our system is composed of humans and the autonomy, each has a role in enhancing the ability to engage collaboratively. It is also widely agreed that trust and transparency influence team properties. The TM may improve both of these facets (e.g., [13]).

Motivation reflects the willingness to act and to do some task. Most team activity can be broken into tasks. While computer agents do not have traditional motivation, humans may still impute such characteristics to them [14]. In the current TM interface, the communication of motivation is multifaceted. Presumably, motivation is reflected in whether an agent “picks” a task and begins doing it. In our concept, this choice is made far earlier than at the moment of task arrival for the automation. Instead, as part of building a team mental model, a working agreement is instantiated: a set of rules and expectancies that govern which tasks agents take responsibility for, and under what conditions. The user can partially dictate motivation through working agreements that restrict the conditions under which agents will handle tasks. In this view, task selection by default becomes first-in-first-out, since all motivation appears equal.
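As a minimal sketch of this idea, the snippet below (Python; the rules and field names are assumptions for illustration) represents a working agreement as a set of predicates that decide whether the autonomous agent may take a task, with first-in-first-out selection as the default.

```python
from collections import deque
from typing import Optional

# Hypothetical working-agreement rules: each predicate states a condition
# under which the autonomous agent may take responsibility for a task.
WORKING_AGREEMENT = [
    lambda task: task.get("type") == "route_planning",
    lambda task: task.get("type") == "sensor_tasking" and task.get("risk") == "low",
]

def agent_may_take(task: dict) -> bool:
    """True if any rule in the working agreement permits the agent to take the task."""
    return any(rule(task) for rule in WORKING_AGREEMENT)

# With all motivation treated as equal, selection defaults to first-in-first-out.
agent_queue: deque = deque()

def next_task() -> Optional[dict]:
    return agent_queue.popleft() if agent_queue else None
```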

Motivation will also factor in for the human side of these team interactions. Humans face a variety of motivational challenges in their work. While we do not aim to explore them all, challenges seem to arise when there is no sense of accomplishment or no clear goal. Articulating tasks through the TM interface is one way to motivate operators by improving both of these antecedents. Displaying tasks in a task queue helps identify the work remaining to be done, which may motivate operators to “clear their plate” of items. Displaying task completion may improve the sense of accomplishment (especially if the system provides a history of recently completed tasks).

We are also considering a mechanism to display frequently recurring tasks, such as perimeter defense and base-of-operations patrols. These are inactive tasks, but they can be populated or primed in the queue so that a user has more awareness of upcoming tasks and demands. Motivation to “clear” items may increase in anticipation of incoming future tasking, with additional benefits from clarified goals and reduced reliance on memory.

The TM could also be an effective mechanism for wrapping mission-essential tasks together. Presumably, one could set a given mission as prioritized for the human, the autonomy, or any combination thereof. Priority, still under exploration in IMPACT, is one piece of motivational teaming. Members must decide and collaborate on how important tasks are as part of distributing them between queues and ordering them for each agent within a queue. Priority expression could ultimately reflect the motivation of an autonomous member, as mirrored in the TM interface. Naturally, this changes the prior default of first-in-first-out. Determining which approach is optimal is a separate consideration not yet addressed.
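Because prioritization is still under exploration, the following is only a sketch of how an expressed priority could replace the first-in-first-out default, with ties still broken in arrival order; the priority scale and convention are assumptions, not a design decision from IMPACT.

```python
import heapq
import itertools

_arrival = itertools.count()   # preserves FIFO order among equal priorities
_queue: list = []

def enqueue(task: dict, priority: int = 5) -> None:
    """Add a task; lower numbers are treated as more urgent (an assumed convention)."""
    heapq.heappush(_queue, (priority, next(_arrival), task))

def next_task():
    """Pop the most urgent task, falling back to arrival order on ties."""
    return heapq.heappop(_queue)[2] if _queue else None
```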

Team composition and agent expertise relate in part to the origin of the team. Autonomy teams are often created in an ad-hoc manner based on asset positioning, capabilities, and strategic relevance. Command and control algorithms may drive the creation of these ad-hoc teams. In collaboration, then, the TM may reduce the supervisor's load in creating or sourcing an effective team, allowing them to focus on completing relevant tasks at the proper level of abstraction. In other words, a task may help create a proper team of vehicles.

A task manager should succeed for ad-hoc teams by clearly defining task assignments (e.g., [15]), reducing or removing the ambiguity of responsibility. Though the current TM shows only one human queue and one agent queue, it is likely that in the near future there will be many humans and many agents. Considerable composition elements can thus be integrated, addressing the expected growth in the number of agents and humans needed.

Team mental models for command and control teams are particularly amenable to task-based representation [16]. Toward that notion, the TM is a good candidate for improving collaboration, as collaboration will naturally take place around tasks. Collaboration requires some shared understanding of information from the environment, along with a rationale for interpreting or acting upon it [17]. Shared understanding, or “common ground” [18], is supported through the TM queue view, which standardizes communication around tasks via their ownership, initiation, progress, and completion.

No matter what the task-based elements allow, trust is a key input to any human-autonomy teaming. Trust can be defined as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” [19]. In the context of the task manager, the operator must trust that the autonomous agent will (a) pick up its tasks as established in the working agreement, and (b) execute them well enough to fulfill mission requirements. Uncertainty arises concerning how tasks populate the queues, because not all tasks can be determined a priori, leaving the human to identify and perform new ones as they emerge. We assume that the human is more flexible than the autonomy.

Uncertainty is also present in all forms of information from the environment. To the extent that the system uses that information, in this case pulling information from chat to populate a task queue, there will sometimes be mistakes. However, with access to both the task queues and the chat window, an operator should be able to reconcile these differences immediately.
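As an illustration of the kind of extraction involved, and of why operator reconciliation matters, the sketch below uses simple keyword matching to turn a chat line into candidate tasks. The keywords, task types, and message formats are hypothetical and do not reflect the actual IMPACT chat service.

```python
import re

# Hypothetical keyword-to-task-type mapping for turning chat orders into candidates.
PATTERNS = {
    "inspect": "point_inspect",
    "patrol": "patrol_route",
    "overwatch": "overwatch",
}

def candidate_tasks(chat_line: str):
    """Yield candidate tasks found in one chat message.

    Extraction will sometimes be wrong, so candidates should be surfaced for
    the operator to confirm against the chat window, not executed automatically.
    """
    text = chat_line.lower()
    for keyword, task_type in PATTERNS.items():
        if re.search(rf"\b{keyword}\b", text):
            yield {"type": task_type, "source": chat_line}
```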

It is important to note that, unlike other efforts on this project, the TM interface does not take the extra step of exposing the full reasoning process of an algorithm or agent (but see [20]).

2.2 Processes

Processes, the “P” in the IPO model, refer to the individual and team activities that turn the team's inputs into outputs: facets of teaming such as communication, cooperation, coordination of execution, and shared awareness. The TM could facilitate two types of teaming processes: taskwork (the processes and methods for goal-based performance) and teamwork (the interactions needed between members of a team) [21]. By combating memory failures and attention problems, the TM may enhance human taskwork.

The TM may also enhance teamwork, whose core processes are coordination and communication. Coordination is the process of sharing foundational information that guides actions in the team. It can often be aided by information displays that make agent-task pairs and mission-execution timelines easy to identify (see, for example, the interface in [22]); these help assign roles and identify collaborative points. Because the teaming here is oriented around tasks, the TM is a natural facilitator. Issuing orders through a delegation interface is one example of coordination, providing each vehicle in a heterogeneous team with its role and actions toward a higher-level task; IMPACT already uses a playbook-style interface for unmanned vehicle teams [6]. Sharing expectations and knowledge between agents and the supervisor, as facilitated by the TM, lays a foundation for good coordination [23, 24]. Coordination suffers as communication degrades, because the shared understanding of roles and functions declines [25]. The TM interface may improve coordination through clear task assignment in the queues and through the formulation of the working agreements that guide action.

Communication, the second of these teamwork processes, refers to how (and whether) agents share information during team activity. Agents may attempt different methods to share intent, rationale, and information about the past, present, and future [16]. There is an underlying assumption that each of those aspects can be formulated into a comprehensible statement. Humans often understand intentions by observing activities, and they make initial sense of communications by understanding others’ mental models [26]. In teaming with autonomy in task-centric environments, tasks are within the operator's view; our queue display and tracking facilitate a small part of the communication process.

Task-centricity allows the human to observe the behavior of the autonomy itself. For example, selecting an ongoing task in the autonomous agent's queue could retrieve all of the information a human would need to perform the task, allowing the human to be “on the loop” if they choose. The same interface also provides the system and the autonomy with shared awareness of what the human members are doing.

2.3 Output

The final component of the IPO model, Output, is the result and byproduct of team process execution. Outputs are most commonly assessed via team performance [16], including quality, quantity, and safety measures. How well the team did at its tasks and how many products were created (and/or how many critical errors were avoided) are typical measures. Note that in these assessments, if joint human-autonomy teaming improves, the output measures will indicate the improvement.

Outputs are measures that can readily be compared between human-only operation, autonomy-only operation, and the mix of both. Whether the task manager supports collaboration is therefore highly testable against mission outputs. Manipulations of TM elements that may affect Input and Process can be evaluated against output criteria.

In the iterative, multi-mission domain of unmanned system operations, there will be many opportunities to evaluate use of the TM. With each evaluation comes the capability to provide feedback to the autonomy and to the human agents, feeding the learning capabilities of each. We have incorporated various measures into our plans to learn from operations that rely on the TM interface [27]. The TM is important in capturing unique output measures, such as task throughput: the number of tasks assigned to the queues, performance times, and how many tasks exit the queue. With its elements of quantity and quality, throughput may be more operationally relevant than the success of any one task.
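A minimal sketch of such throughput measures is shown below, computed over a hypothetical log of task records with assumed field names (when a task started and completed); it is illustrative only, not the measurement suite referenced in [27].

```python
from statistics import mean

def throughput_summary(task_log: list, window_s: float) -> dict:
    """Summarize throughput over an observation window of window_s seconds.

    Each record is a dict with assumed keys: 'started_at' and 'completed_at'
    (timestamps in seconds, None if not yet reached).
    """
    completed = [t for t in task_log if t.get("completed_at") is not None]
    times = [t["completed_at"] - t["started_at"] for t in completed]
    return {
        "assigned": len(task_log),                       # tasks assigned to queues
        "completed": len(completed),                     # tasks exiting the queue
        "completed_per_min": 60.0 * len(completed) / window_s if window_s else 0.0,
        "mean_performance_time_s": mean(times) if times else 0.0,
    }
```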

3 Summary

Task management is a central concern in command and control, as described here. Whereas many previous systems operate with function-centric designs, tasks align agents, both human and computer, toward understandable, discrete pieces of overall mission performance. The TM interface tracks tasks and provides the controls necessary to delegate responsibility among multiple agents performing them.

A task-centric conceptualization has been promoted for unmanned system C2 [28], and we take this basic notion and apply it within task management. However, we also conceptualize an autonomous agent that manages these tasks and chooses, within the constraints of working agreements, how they should be executed and by whom. The management interface itself serves as a team-collaboration tracking tool: the interplay between humans and the system is visible, grounded in tasks that matter to mission viability, and agents are given useful information about each other. We used the IPO framework to outline these applications across inputs, processes, and outputs, and for each aspect of team effectiveness we outlined the expected benefits of the TM.

Motivation is not a standard aspect of an autonomous system's design. Here it was linked with task responsibility and prioritization, two key components of system performance. By providing a manager for tasks, the interface communicates clearly displayed task attributes between agents. The TM's mechanisms of queues and tasks should also reduce or remove ambiguity in assignments, a known problem for ad-hoc teams, and offer a method for improving team mental models.

We expect the TM to improve the taskwork of the human members of a team by reducing the costs of memory failures and interruption. It also helps display the behavior of the autonomous agent in terms of tasks, improving the communication process.

In summary, we believe that team methods can be applied to supervisory control and aided by a task management interface that integrates human performance knowledge in this domain (e.g., [27]). We also note a similar effort to develop templates for teaming (Lange & Gutzwiller, HCII 2016), which provides useful “plans” for when certain team configurations and communication patterns may be needed. We will explore how to develop these plans within the context of the task management system used here.

We believe the next logical step is to demonstrate experimentally how the task manager supports cognition, human-automation interaction, and, in general, human-automation teaming. Our plans include investigations of each of the major areas of potential benefit outlined here as part of the IMPACT project.