Abstract
How does one collaborate with and supervise a team? Here we discuss a novel task-manager interface, developed as part of a testbed for multiple heterogeneous unmanned systems, that aids cognitive operations and teaming. Existing models of team effectiveness among humans can frame cooperative teaming between computer agents and human supervisors. We use the three main components of the input-process-output model to frame discussion of the task-manager interface as a potential teaming facilitator, finding that it should support effectiveness on several elements. We conclude with expectations to be examined and supported in future experiments.
This work was prepared by an employee of the United States government as part of the employee’s official duties and is not subject to United States copyright. Foreign copyrights may apply.
1 Introduction
Automation is inherent in the future of unmanned system command and control (C2). Future capabilities may demand an inversion of the human-robot control scheme, such that a single person or small team controls vast numbers of unmanned systems. According to Defense Department unmanned systems roadmaps and scientific advisory boards, this is the claim of the very near future [1, 2]. The advent of unmanned systems and advanced algorithmic techniques has pushed human-automation interaction issues to the forefront of the news and of our scientific pursuits.
The increase in complexity required to realize these visions, both in the laboratory and in the real world, is daunting. Although 50 vehicles can be controlled by a single person [3], we are a long way from flexibly intelligent autonomy. Robots may be useful for dull, dirty, and dangerous tasks, but they must also become our colleagues and share in the work as part of a team.
In the past, developers have tried, and many times failed, to integrate automation and the human. Approaches that focus on machines and their capabilities alone are unlikely to succeed unless they consider the human's role in depth, as part of user-centered design. Without it, many have observed conditions in which the human does not properly understand what an autonomous agent is doing or what it will do next. These problems occur even with relatively simple automated systems when they are poorly understood; they compound as complex, artificial neuro-evolutionary algorithms are used to determine autonomy behaviors.
The spread of cognitive engineering methods and the results of increased study of humans operating more complex autonomy [4–7] lend themselves to these team perspectives. Though these levels of interaction have been studied with small teams and at 1:1 human-to-robot ratios, less is known about how to team effectively in a supervisory context (though see [8] on mixed-initiative adaptive systems in search and rescue).
In this paper, we explain our perspective on creating human-autonomy teaming in supervision through a task-manager interface using the Input, Process, Output (IPO) model [11, 12]. The task manager is part of teaming interfaces available within a multiple unmanned system simulation and testbed (IMPACT; see also Lange & Gutzwiller, 2016, this conference). We discuss how this novel task-manager interface could facilitate human-agent teaming and the study of human-automation interaction.
2 Human-Agent Teaming for Task-Centric Operation
The task manager (TM) interface serves as a task-based communication and cooperation point between the human operator and the autonomous agents that IMPACT employs. The TM tracks and organizes the myriad high-level cognitive tasks a supervisor must manage (see Fig. 1).
The TM facilitates task execution by priming and linking to interface elements in the testbed, sometimes pre-configuring them for the particular task when the user starts it. Users can give and take tasks, allowing more control over the autonomy. Indications of basic task progression (start, ongoing, and stop) are included for each task. As we do not presume to capture all possible tasks, users themselves can also create tasks. Not shown is our approach to prioritizing tasks within the queues, which is still under development.
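As a minimal sketch (not the IMPACT implementation), the task records the TM displays could be modeled with a small status lifecycle and reassignable ownership. All names and fields here are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"    # in a queue, not yet started
    STARTED = "started"
    ONGOING = "ongoing"
    STOPPED = "stopped"

@dataclass
class Task:
    name: str
    owner: str                 # "human" or an agent identifier
    status: Status = Status.PENDING
    user_created: bool = False # users may define tasks not captured a priori

    def start(self) -> None:
        self.status = Status.STARTED

    def progress(self) -> None:
        self.status = Status.ONGOING

    def stop(self) -> None:
        self.status = Status.STOPPED

# "Give and take": reassigning ownership moves a task between queues.
def reassign(task: Task, new_owner: str) -> None:
    task.owner = new_owner
```

The point of the sketch is that task state and ownership are explicit, displayable attributes, which is what lets the queue view communicate progression between agents.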
The task manager is thus designed as a cognitive aid to mitigate the known costs to memory for goals resulting from cognitive overload and interruptions [9, 10]. Even without its integration as an aid for managing and collaborating with autonomy, it is likely to provide performance and awareness benefits as situations approach cognitive overload.
The TM is also useful from a teaming perspective, which is the focus here. A standard team model, the Input, Process, Output (IPO) model [11, 12], can be used to understand the factors that influence human team effectiveness. We examined each major attribute of this model in reference to task-based teaming with unmanned systems under mixed-initiative supervision. We discuss whether the TM is likely to improve each element, along with potential effects or consequences of implementation.
2.1 Inputs
Input variables influence team interactions as part of the properties of the team, even before work has begun. In the IPO model, these are elements such as individual motivation and expertise, and team characteristics such as composition and team mental models. Because a team in our system is composed of humans and the autonomy, each has a role in enhancing the ability to engage collaboratively. It is also agreed that trust and transparency influence team properties; the TM may improve both of these facets (e.g., [13]).
Motivation reflects the willingness to act and perform some task. Most team activity can be broken into tasks. While computer agents do not have traditional motivation, humans may still impute such characteristics [14]. In the current TM interface, the communication of motivation is multifaceted. Presumably, motivation is whether an agent "picks" a task and begins doing it. In our concept, this choice is made far earlier than the moment of task arrival for the automation. Instead, as part of building a team mental model, a working agreement is instantiated: a set of rules and expectancies that govern which tasks, and under what conditions, agents take responsibility for performing. The user can partially dictate motivation through working agreements that restrict the conditions under which agents will handle tasks. In this view, motivation to perform tasks defaults to a first-in-first-out method, since all motivation appears equal.
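One way to picture this, purely as a hypothetical sketch: a working agreement can be treated as a set of predicates over task attributes, and the default "equal motivation" policy as first-in-first-out selection among the tasks those predicates permit. The rules, field names, and thresholds below are invented for illustration:

```python
from collections import deque
from typing import Callable, Optional

# A working agreement: rules (predicates) stating which tasks, under
# what conditions, the autonomy may take responsibility for.
Rule = Callable[[dict], bool]

class WorkingAgreement:
    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def agent_may_take(self, task: dict) -> bool:
        return all(rule(task) for rule in self.rules)

# Default motivation: all tasks are equal, so the agent takes the
# first permitted task in arrival order (first-in-first-out).
def next_task(queue: deque, agreement: WorkingAgreement) -> Optional[dict]:
    for _ in range(len(queue)):
        task = queue.popleft()
        if agreement.agent_may_take(task):
            return task
        queue.append(task)  # not permitted: leave it for the human
    return None

# Hypothetical example rules a user might set:
agreement = WorkingAgreement([
    lambda t: t["type"] != "strike",   # never auto-handle strike tasks
    lambda t: t.get("risk", 0) < 0.5,  # only low-risk tasks
])
```

The design choice worth noticing is that "motivation" never lives inside the agent; it is externalized as user-visible rules, which is what makes it communicable through the TM.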
Motivation also matters for the human component of these team interactions. Humans face a variety of motivational challenges in their work. While we do not aim to explore them all, challenges seem to arise when there is no sense of accomplishment or no clear goal. Articulating tasks through the TM interface is one method of motivating operators by improving both of these antecedents. Displaying tasks in a task queue helps identify the work remaining to be done, which may motivate operators to "clear their plate" of items. Displaying task completion may improve the sense of accomplishment (especially if the system provides a history of recently completed tasks).
We are also considering a mechanism to display frequently recurring tasks, such as perimeter defense and base-of-operations patrols. These are inactive tasks, but they can be populated or primed via the queue so that a user has more awareness of upcoming tasks and demands. Motivation to "clear" items may increase in anticipation of incoming tasking, with further benefits from clarified goals and reduced reliance on memory.
The TM could also be an effective mechanism for wrapping mission-essential tasks together. Presumably, a given mission could be prioritized for the human, for the autonomy, or any combination thereof. Priority, still under exploration in IMPACT, is a piece of motivational teaming. Members must decide and collaborate on how important tasks are when distributing them between queues and for each agent within a queue. Priority expression could ultimately reflect the motivation of an autonomous member, as mirrored in the TM interface. Naturally, this changes the prior default of first-in-first-out; determining which is optimal is a separate consideration not yet addressed.
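To make the contrast with the FIFO default concrete, here is a hedged sketch of priority-ordered task selection, with arrival order as the tiebreaker. This is one possible ordering scheme, not the one under exploration in IMPACT:

```python
import heapq
import itertools
from typing import Optional

# Tasks ordered by assigned priority (lower number = more important),
# falling back to arrival order among equal priorities. With all
# priorities equal, this degenerates to the FIFO default.
class PriorityTaskQueue:
    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()  # monotonic arrival counter

    def push(self, task: str, priority: int) -> None:
        heapq.heappush(self._heap, (priority, next(self._arrival), task))

    def pop(self) -> Optional[str]:
        return heapq.heappop(self._heap)[2] if self._heap else None
```

Whether such an ordering outperforms FIFO for a given mission mix is exactly the kind of open question the text defers.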
Team composition and agent expertise relate in part to the origin of the team. Autonomy teams are often created ad hoc based on asset positioning, capabilities, and strategic relevance, and command and control algorithms may drive the creation of such teams. In collaboration, then, the TM may reduce the supervisor's load in creating or sourcing an effective team, allowing them to focus on completing relevant tasks at the proper level of abstraction. In other words, a task may help create a proper team of vehicles.
A task manager should succeed for ad-hoc teams by clearly defining task assignments (e.g., [15]), reducing or removing the ambiguity of responsibility. Though the current TM shows only one human and one agent queue, in the near future there will likely be many humans and many agents. Thus considerable composition elements can be integrated, addressing the expected expansion in the number of agents and humans needed.
Team mental models for command and control teams are particularly amenable to task-based representation [16]. Toward that notion, the TM is a good candidate for improving collaboration, since collaboration will naturally take place around tasks. Collaboration requires some shared understanding of information from the environment, along with a rationale for interpreting or acting upon it [17]. Shared understanding, or "common ground" [18], is supported through the TM queue view, which standardizes communication around tasks (their ownership, initiation, progress, and completion) and can thereby establish some common ground.
No matter what the task-based elements allow, trust is a key input to any human-autonomy teaming. Trust can be defined as "the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability" [19]. In the context of the task manager, the operator must trust that the autonomous agent will (a) pick up its tasks as established in the working agreement, and (b) execute them well enough to fulfill mission requirements. Uncertainty arises concerning the population of tasks in the queues, because not all tasks can be determined a priori, leaving the human to identify and perform new ones as they emerge. We assume that the human is more flexible than the autonomy.
Uncertainty is also present in all forms of information from the environment. To the extent that the system uses that information, in this case pulling information from chat to populate a task queue, there will sometimes be mistakes. However, with access to both the task queues and the chat window, an operator should be able to reconcile these immediate differences.
It is important to note that, unlike other efforts on this project, the TM interface does not take the extra steps of expanding on the full reasoning process of an algorithm or agent (but see [20]).
2.2 Processes
Processes, the “P” in the IPO model, refer to the individual and team activities that turn the inputs of the team into outputs. These are facets of teaming like communication, cooperation, coordination of execution and shared awareness. The TM could facilitate two types of teaming processes: taskwork (the processes and methods for goal-based performance) and teamwork (interactions needed between members of a team) [21]. By combating memory failures and attention problems, the TM may enhance human taskwork.
The TM may also enhance teamwork. Coordination and communication form the core teamwork processes. Coordination is the process of sharing foundational information that guides actions in the team. The process can often be aided with information displays to easily identify agent-task pairs, and timelines for mission execution (see for example the interface in [22]). These help assign roles and identify collaborative points. Since the teaming here is oriented around tasks, the TM is a natural facilitator. Issuing orders through a delegation interface becomes an example of coordination for providing each vehicle in a heterogeneous team with their role and actions toward a higher task. IMPACT is already using a playbook-style interface for unmanned vehicle teams [6]. Sharing expectations and knowledge between agents and the supervisor facilitated by the TM lays a foundation for good coordination [23, 24]. Coordination suffers as communication degrades, because the shared understanding of roles and functions declines [25]. The TM interface may improve coordination with clear task assignment in the queues, and in the formulation of the working agreements that guide action.
Communications, as the second of these teamwork processes, refers to how (and whether) agents share information during team activity. Agents may attempt different methods to share intent, rationale, and information past, present and future [16]. There is an underlying assumption of the ability to formulate a comprehensible statement about each of those aspects. Humans often understand intentions through the observation of activities and make initial sense of communications by understanding others’ mental models [26]. In teaming with autonomy in task-centric environments, tasks are within operator view; our queue display and tracking facilitate a small part of the communication process.
Task-centricity allows the human to observe the behavior of the autonomy itself. For example, selecting an ongoing task in the autonomous agent's queue could retrieve all of the information a human would need to perform the task as well, allowing the human to be "on" the loop if they choose. The same interface also gives the system and the autonomy shared awareness of what the human members are doing.
2.3 Output
The final component of the IPO model, Output, comprises the results and byproducts of team process execution. Outputs are most commonly assessed via team performance [16], including quality, quantity, and safety measures. How well the team did at its tasks and how many products were created (and/or how many critical errors were avoided) are typical measures. Note that in these assessments, if joint human-autonomy teaming improves, the measurement of output will indicate the improvement.
Outcomes are measures that are easily comparable between humans operating alone, autonomous agents operating alone, and the mix of both. Whether the task manager supports collaboration is thus very testable in relation to mission outputs. Manipulations of TM elements that may affect Input and Process can be evaluated against output criteria.
In the iterative, multi-mission domain of unmanned system operations, there will be many opportunities to evaluate use of the TM. With each evaluation comes the capability to provide feedback to the autonomy and to the human agents; both feed the learning capabilities of each agent. We have incorporated various measures into our plans to learn from operations that rely on the TM interface [27]. The TM is important for capturing unique output measures, such as task throughput: the number of tasks assigned to queues, performance times, and how many tasks exit the queue. With its elements of quantity and quality, throughput may be more operationally relevant than any one task's success.
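As a sketch of how such throughput measures might be computed from TM logs (the event schema here is hypothetical, not an IMPACT format), one could count tasks entering and exiting the queues over a time window and average performance times:

```python
# Throughput over a time window: tasks entering the queues ("assigned"),
# tasks exiting ("completed"), and mean performance time of the
# completed tasks. Events are dicts with a type and a timestamp "t".
def throughput(events: list[dict], window_start: float, window_end: float) -> dict:
    in_window = [e for e in events if window_start <= e["t"] < window_end]
    entered = sum(1 for e in in_window if e["type"] == "assigned")
    completed = [e for e in in_window if e["type"] == "completed"]
    mean_time = (sum(e["duration"] for e in completed) / len(completed)
                 if completed else 0.0)
    return {"entered": entered, "exited": len(completed), "mean_time": mean_time}
```

Comparing `entered` against `exited` over successive windows gives the queue-growth signal that distinguishes throughput from any single task's success.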
3 Summary
Task management is a central concern in command and control, as described here. Whereas many previous systems operate with function-centric design, tasks align agents, both human and computer, toward understandable, discrete pieces of overall mission performance. The TM interface tracks tasks and provides the controls necessary to delegate responsibility among multiple agents performing them.
A task-centric conceptualization has been promoted for unmanned system C2 [28], and we take this basic notion and apply it within task management. However, we also conceptualize an autonomous agent that manages these tasks and chooses, within the constraints of working agreements, how tasks should be executed and by whom. The management interface itself serves as a team-collaboration tracking tool, in which the interplay between humans and the system is visible, grounded in tasks that matter to mission viability, and which provides agents with useful information about each other. We used the IPO framework to outline these applications through inputs, processes, and outputs. For each aspect of team effectiveness, expected benefits of the TM were outlined.
Motivation is not a standard aspect of autonomous system design. Here it was linked with task responsibility and prioritization, two key components of system performance. A manager for tasks communicates clearly displayed task attributes between agents. The TM's mechanisms of queues and tasks should also reduce or remove ambiguity in assignments, a problem for ad-hoc teams and a means of improving team mental models.
We expect the TM to improve the taskwork of the human members of a team by reducing the cost of memory failures and interruption. It also helps display the behavior of the autonomous agent in terms of tasks, improving the communication process.
In summary, we believe that team methods can be applied to supervisory control and aided by a task management interface that integrates human performance knowledge in this domain (e.g., [27]). We also note a similar effort to develop templates for teaming (Lange & Gutzwiller, HCII 2016), providing useful "plans" for when certain team configurations and communication patterns may be needed. We will be exploring how to develop these plans within the context of the task management system used here.
We believe the next logical step is to demonstrate experimentally how the task manager supports cognition, human-automation interaction, and, in general, human-automation teaming. Our plans include investigations of each of the major areas of potential benefit outlined here as part of the IMPACT project.
References
DoD, Unmanned Systems Integrated Roadmap FY2013-2038 (2013)
Department of Defense Science Board, “The role of autonomy in DoD systems”, Off. Undersecretary Def. Acquis. Technol. Logist., July 2012
Bishop, R.: Record-breaking drone swarm sees 50 UAVs controlled by a single person, Popular Mechanics, p. 2 (2015)
Squire, P.N., Parasuraman, R.: Effects of automation and task load on task switching during human supervision of multiple semi-autonomous robots in a dynamic environment. Ergonomics 53(8), 951–961 (2010)
Ruff, H.A., Calhoun, G., Draper, M., Fontejon, J.V., Guilfoos, B.J.: Exploring automation issues in supervisory control of multiple UAVs. In: Proceedings of Human Performance, Situation Awareness, Automation Technology Conference, pp. 218–222 (2004)
Miller, C.A., Parasuraman, R.: Designing for flexible interaction between humans and automation: delegation interfaces for supervisory control. Hum. Factors 49(1), 57–75 (2007)
Chen, J.Y.C., Barnes, M.J.: Human–agent teaming for multirobot control: a review of human factors issues. IEEE Trans. Hum. Mach. Syst. 44(1), 13–29 (2014)
Hardin, B., Goodrich, M.: On using mixed-initiative control: a perspective for managing large-scale robotic teams. In: Proceedings of ACM/IEEE International Conference on Human Robot Interaction, pp. 165–172 (2009)
Dismukes, R.: Remembrance of things future: prospective memory in laboratory, workplace, and everyday settings. Rev. Hum. Factors Ergon. 6, 1–86 (2010)
Altmann, E.M., Trafton, J.G., Hambrick, D.Z.: Momentary interruptions can derail the train of thought. J. Exp. Psychol. Gen. 142(1), 1–12 (2013)
Ilgen, D.R., Hollenbeck, J.R., Johnson, M., Jundt, D.: Teams in organizations: from input-process-output models to IMOI models. Annu. Rev. Psychol. 56, 517–543 (2005)
Mathieu, J.E., Maynard, M.T., Rapp, T., Gilson, L.: Team effectiveness 1997–2007: a review of recent advancements and a glimpse into the future. J. Manage. 34(3), 410–476 (2008)
Chen, J.Y.C., Procci, K., Boyce, M., Wright, J., Garcia, A., Barnes, M.: Situation awareness–based agent transparency, ARL Technical report 6905 (2014)
Waytz, A., Cacioppo, J., Epley, N.: Who sees human?: the stability and importance of individual differences in anthropomorphism. Perspect. Psychol. Sci. 5(3), 219–232 (2010)
Kolbe, M., Künzle, B., Enikö, Z., Wacker, J., Grote, G.: Measuring coordination behaviour in anaesthesia teams during induction of general anaesthetics. In: Flin, R., Mitchell, L. (eds.) Safer Surgery: Analysing Behaviour in the Operating Theatre, pp. 203–221. Ashgate Publishing Ltd., Aldershot (2009)
Burtscher, M.J., Manser, T.: Team mental models and their potential to improve teamwork and safety: a review and implications for future research in healthcare. Saf. Sci. 50(5), 1344–1354 (2012)
Malin, J.T., Schreckenghost, D.L., Woods, D.D., Potter, S.S., Johannesen, L., Holloway, M., Forbus, K.D.: Making intelligent systems team players: case studies and design issues, volume 1: human-computer interaction design. NASA Technol. Memo. 104738, 1–276 (1991)
Klein, G., Feltovich, P.J., Bradshaw, J.M., Woods, D.D.: Common ground and coordination in joint activity. In: Rouse, W.B., Boff, K.R. (eds.) Organizational Simulation, pp. 139–184. John Wiley & Sons, New York (2005)
Lee, J.D., See, K.A.: Trust in automation: designing for appropriate reliance. Hum. Factors 46(1), 50–80 (2004)
Chen, J.Y.C., Barnes, M.J.: Agent transparency for human-agent teaming effectiveness. In: IEEE International Conference on System Man and Cybernetics, pp. 1381–1385 (2015)
McIntyre, R., Salas, E.: Measuring and managing for team performance: emerging principles from complex environments. In: Guzzo, R., Salas, E. (eds.) Team Effectiveness and Decision Making in Organizations, pp. 9–45. Jossey-Bass, San Francisco (1995)
Cummings, M.L., How, J., Whitten, A., Toupet, O.: The impact of human-automation collaboration in decentralized multiple unmanned vehicle control. In: Proceedings of IEEE (2011)
Patterson, E.S., Watts-Perotti, J., Woods, D.D.: Voice loops as coordination aids in space shuttle mission control. Comput. Support. Coop. Work 8(4), 353–371 (1999)
Schuster, D., Ososky, S., Phillips, E., Lebiere, C., Evans, A.W.: A research approach to shared mental models and situation assessment in future robot teams. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pp. 456–460 (2011)
Fiore, S.M., Jentsch, F., Becerra-fernandez, I., Salas, E., Finkelstein, N.: Integrating field data with laboratory training research to improve the understanding of expert human-agent teamwork. In: Proceedings of Hawaii International Conference on System Science, pp. 1–10 (2005)
Klein, G.: Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making. MIT Press, Cambridge (2009)
Gutzwiller, R.S., Lange, D.S., Reeder, J., Morris, R.L., Rodas, O.: Human-computer collaboration in adaptive supervisory control and function allocation of autonomous system teams. In: Shumaker, R., Lackey, S. (eds.) VAMR 2015. LNCS, vol. 9179, pp. 447–456. Springer, Heidelberg (2015)
Cummings, M., Bertucelli, L., Macbeth, J., Surana, A.: Task versus vehicle-based control paradigms in multiple unmanned vehicle supervision by a single operator. IEEE Trans. Hum. Mach. Syst. 44(3), 353–361 (2014)
Acknowledgements
This work was supported by the Space and Naval Warfare Systems Center Pacific Naval Innovative Science and Engineering Program. The US Department of Defense Autonomy Research Pilot Initiative under the project entitled “Realizing Autonomy via Intelligent Adaptive Hybrid Control” also supported this work.
This manuscript is submitted with the understanding that it is the work of a U.S. government employee done as part of his/her official duties and may not be copyrighted. We request that the publication of this work include a notice to this effect.
© 2016 Springer International Publishing Switzerland
Gutzwiller, R.S., Lange, D.S. (2016). Tasking Teams: Supervisory Control and Task Management of Autonomous Unmanned Systems. In: Lackey, S., Shumaker, R. (eds.) Virtual, Augmented and Mixed Reality. VAMR 2016. Lecture Notes in Computer Science, vol. 9740. Springer, Cham. https://doi.org/10.1007/978-3-319-39907-2_38