
1 Introduction

One of the purposes of this study is to present a literature review that examines the design principles used in Human-Robot Interaction (HRI) guidelines for the development of user interfaces for interaction with robots. Within the theme of interface design, the study of the visual conception of robotic systems proceeds much as it does for many other human-computer interaction (HCI) systems, such as the web and applications for mobile phones or tablets. What changes is how interactions between humans and robots occur compared to interactions between humans and computers. For Scholtz [1], HRI differs from HCI because “it concerns complex systems, which have dynamic control systems, present autonomy, and can operate in changing real-world environments”. These differences can be seen across different spectrums of the interaction and its agent roles, along with the physical nature of robots, the number of systems a user may be asked to interact with, and the environment of interaction.

Robotic systems are a combination of different sensors, electronics, and mechanical parts, and most of the time the innovation lies in the way these components are grouped for a specific purpose. This work is concerned with the design of software interfaces that are embodied in industrial robots as part of larger solutions; in other words, the interfaces of systems that interact with other physical and virtual components and require their functionalities to be grouped into one place. The goal of the article is to examine ergonomic aspects of interfaces designed to let users control robots either manually or autonomously. It also concerns the simplest case of HRI [2], which occurs when one human works with one robot, in this case a manipulator robot. Robot manipulators are widely used in research and development (R&D), are suited to a variety of tasks, and are commonly adapted as part of a larger robotic solution.

If robots were first created in industry to release humans from risky and dangerous tasks, they are now getting closer to tasks related to all kinds of human needs [3]. On the other hand, even though intelligent systems are increasingly becoming a reality, completely autonomous systems are far from being the norm. In practice, we need humans who monitor the systems and intervene when necessary [1], as well as systems designed so that they can be used by domain experts who are not robotics experts. It is important to establish a partnership between humans and robots with greater capabilities than either of the individual components.

The challenges presented in the development of these interfaces range from aspects of the interaction to the composition of teams; certain software may require training and learning on the part of the human operator, as well as the adaptation or creation of design guidelines aligned with the needs of a given project. Robotic user interfaces also have a rarer character in terms of design output: they serve a more specific niche market, and teams are often structured around software engineering and mechatronics while lacking designers to deal with the usability and aesthetic aspects of such projects [4]. Thus, these interfaces are often designed and developed by the same individuals who developed and built the robot.

2 Ergonomics and HRI

Human-robot interaction (HRI) can be defined as “the study of the humans, robots, and the ways they influence each other” [6]. As a discipline, it concerns the analysis, design, modeling, implementation, and evaluation of robots for human use. The field is strongly related to human-computer interaction (HCI) and human-machine interaction (HMI). Nonetheless, it is distinct in some aspects of its interactions: robots may present dynamic systems with varied levels of autonomy that operate in real-world environments. Other distinguishing characteristics are the differences in the types of interactions and their respective roles, the physical nature of robots, the number of systems the user may have to interact with at the same time, and the environments in which these interactions may occur [6].

Human-computer interaction is the term most commonly used to indicate a computer application and the files associated with the objects being manipulated, not a physical system controlled by a computer [2]. The concept of human-robot interaction only became possible in the last decade, as advances in field robotics such as reasoning, perception, and programming made semi-autonomous systems feasible [6]. At first, HRI was associated solely with the teleoperation of factory robotic platforms, but soon, as interest in robotics and its possible applications grew, research in usability and interface design became a crucial contributor to the effective performance of complex systems.

To examine guidelines that are important for the development of solutions composed, among other parts, of a robot manipulator, it is necessary to define the context of interaction and the interaction roles played during the operation. The interest of this study is in collaboration between humans and manipulator robots to accomplish remote tasks where the operator cannot be present, granting the robot a set of autonomous behaviors to execute. This could vary from a robot responding to precise commands from a human about the position of a controlled arm to a more complex robot system planning and executing a trajectory [6].

2.1 Interaction with Manipulator Robots

With the evolution of robotics research, the later incorporation of industrial robots into other types of production processes created new requirements that called for more flexibility and intelligence in industrial robots. These new trends in robotics research have been termed service robotics because of their general goal of bringing robots closer to human social needs [3]. In a world where robots maximize capabilities and can perform more tasks in an autonomous manner, it is important to think about interactions between humans and robots that are aligned with software architectures and interface designs that accommodate the human-in-the-loop [6].

More complex tasks can be solved if humans and industrial robots combine their individual skills and merge workspaces (Fig. 1) [5]. This is primarily because the weaknesses of one partner can be complemented by the strengths of the other. For instance, from an ergonomic perspective, humans are not suited for heavy and repetitive handling tasks, whereas robots do not tire from repetitive material handling [6]. To ensure that the efficiency of the overall system is as high as possible, it is necessary that the human operator feels comfortable when interacting with the robot.

Fig. 1. A human and a robot manipulator collaborating on a task. (Source: Gizmodo)

In the beginning, robot tasks were taught using a teaching pendant. The robot was moved by driving individual joints and bringing the tool center point to the desired location. Once the location was correct, the robot joint values were stored in a file [7]. Thus, a typical robot program would consist of a list of move instructions. With the growing demand for robot applications, programming languages were created to facilitate complex task descriptions. Nonetheless, as industries looked to robotics to improve their revenue and release humans from risk, the disadvantages of online teaching emerged: online programming represented downtime for the robot, since the costly equipment would be used for preparation and development instead of production [7]. In addition, program quality was inconsistent, depended heavily on the programmer, and feedback was poor. Offline programming was then a solution for these issues, and different robot programming systems were created, enabling applications ranging from textual programming interfaces to the simulation of environments in 3D graphical systems. As robotics research evolved, so did the devices and the graphical representation of 3D space in computer-aided production engineering (CAPE), making interface design a topic of great importance for robot control and interaction.
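To make the “list of move instructions” idea concrete, the sketch below shows what such a taught program amounts to: a table of stored joint values and a sequence of moves that replays them. The location names, joint values, and the move_joints call are purely illustrative assumptions, not the syntax of any real robot controller.

```python
# Hypothetical sketch of an online-taught robot program: each named location
# holds joint values stored via the teach pendant, and the program is simply
# an ordered list of move instructions over those locations.

taught_locations = {                      # joint values (degrees) saved at teach time
    "home":     [0.0, -90.0, 90.0, 0.0, 90.0, 0.0],
    "approach": [10.0, -75.0, 80.0, 0.0, 85.0, 5.0],
    "pick":     [10.0, -60.0, 70.0, 0.0, 80.0, 5.0],
}

program = ["home", "approach", "pick", "approach", "home"]   # the move list

def run(program, move_joints):
    """Replay the taught program, one move instruction per stored location."""
    for name in program:
        move_joints(taught_locations[name])   # move_joints stands in for the controller call
```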

From the perspective of an efficient interaction, ideally the presence of the robot is transparent to the human, who interacts directly with the world. The human explores the world through some element of control and gathers information about it through some element of information presentation. A remote human cannot interact directly with the world but instead interacts with the world through the robot as an intermediary. In such a situation, control commands are passed from human to robot, modulated by the autonomy of the robot, and then executed in the world. In the other direction of the interaction, information returns to the human but is obtained through the robot’s sensors, where it is interpreted and perhaps used to identify a robot behavior. In this context, there are two circuits of interaction: the human interacts with the robot through the interface, and the robot interacts with the world through its autonomous mode. The robot has limitations with respect to what it can do autonomously, how it can act, and where it can function autonomously. In the same way, the interface is also subject to limitations.
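A minimal sketch may help fix the two interaction circuits just described. The class below, with entirely hypothetical names, models one cycle: operator commands flow through the interface to the robot, are modulated by its autonomy, and are executed in the world, while sensor information flows back through the robot to the interface.

```python
# Illustrative sketch (not a specific architecture) of the two HRI circuits:
# human <-> robot via the interface, and robot <-> world via its autonomy.

class RemoteHRILoop:
    def __init__(self, interface, robot):
        self.interface = interface
        self.robot = robot

    def step(self):
        command = self.interface.read_operator_command()    # circuit 1: human -> interface -> robot
        action = self.robot.autonomy.modulate(command)       # autonomy may refine or constrain the command
        self.robot.execute_in_world(action)                  # circuit 2: robot acts on the world
        readings = self.robot.sense_world()                  # information returns via the robot's sensors
        self.interface.present(readings)                     # ...and is presented back to the human
```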

2.2 Supervisory Control

The central role of humans in systems that feature automation is to assume so-called supervisory control [8]. This control establishes a new relationship between humans and machines, as their interaction differs from the traditional human-machine-task model in which the human performs a task from a computer or system with lower capabilities. In the supervisory control paradigm, the human operator becomes a strategist, interacting at the beginning and at the end of the operation and delegating all operational steps to the machines.

Per Parasuraman [8], the human supervisory control paradigm is characterized by the concept of automation, in which the human stops being the main actor of the task and becomes the strategist. At this point, the human plans, monitors, and controls the necessary actions, delegating the operational part to the robot. Supervisory control interfaces are designed for high-level command generation. To effect supervisory control, the operator divides a problem into a sequence of sub-tasks, which the robot then executes on its own. Thus, it requires that the robot can achieve goals (even limited ones) while keeping itself safe.
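The following sketch, under assumed names (SubTask, robot.execute, operator_intervene), illustrates this division of labor: the operator plans a sequence of sub-tasks up front, the robot executes each one autonomously, and the human re-enters the loop only to monitor and handle exceptions.

```python
# Hedged sketch of the supervisory control paradigm: plan, delegate, monitor, intervene.

from dataclasses import dataclass

@dataclass
class SubTask:
    name: str
    goal: dict                              # e.g. {"pose": [...], "tolerance": 0.01}

def supervise(robot, plan, operator_intervene):
    for task in plan:                       # the operator planned this sequence beforehand
        robot.execute(task)                 # the robot runs the sub-task on its own
        while not robot.done(task):
            status = robot.status()         # the operator only monitors high-level status
            if status.get("exception"):
                operator_intervene(status)  # the human intervenes when something goes wrong
```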

To develop more useful and acceptable robotic user interfaces that allow better interactions with industrial robots, it is necessary to consider the complex systems in which these robots are integrated as a simultaneous composition of software behaviors, which must be understood in the context and dimensions of the HRI interaction model. “This changes the user’s role from being in control to monitoring and intervening when necessary. This introduces the concept of ‘being-out-of-the-loop’ and raises the issue of how to alert the user to an exception and how to bring the user up to speed to quickly and effectively intervene” [1].

2.3 Situational Awareness in HRI

This study is interested in the simplest case of HRI, which occurs when one human works with one robot. More precisely, by calling out the distinct awareness needs of the human and the robot, this “base case” makes clear the unequal awareness relationship between the two agents. Drury [2] points out that, given one human and one robot collaborating on a task, HRI awareness is the understanding that the human has of the location, status, and surroundings of the robot, along with the knowledge that the robot has received from the human the commands necessary to direct its activities and the constraints under which it will operate.

Supervisory control interfaces are well suited for applications that must operate remotely, and the awareness factor is extremely important for selecting guidelines and metrics that are in accordance with the robot’s level of autonomy [8] and, consequently, for how interface design elements must be structured for an efficient interaction. Situational awareness (SA) [9] constitutes an important human factor in the interaction with supervisory control of remote applications. It represents a continuous diagnosis of the state of a dynamic world [7]; thus, it is quite distinct from the choice or decision of what action to take as a result of that diagnosis. The accurate choice will depend on good SA, but choice is not the same as SA. Therefore, HRI awareness focuses on the person who most directly controls the robot’s activities, which is the operator. For this reason, awareness analysis in HRI can help determine most of the critical incidents with robot manipulators in remote operations.
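As a compact illustration of the awareness elements mentioned above (location, status, and surroundings on the human side, and received commands and constraints on the robot side, per Drury [2]), the data structure below gathers them in one place. The field names are assumptions made for illustration only.

```python
# Hypothetical container for the awareness information an operator interface
# should keep current during supervisory control of a single robot.

from dataclasses import dataclass, field

@dataclass
class HRIAwareness:
    robot_location: tuple                       # where the robot / end-effector currently is
    robot_status: str                           # e.g. "executing", "idle", "fault"
    robot_surroundings: list                    # obstacles and objects sensed near the robot
    commands_received: list = field(default_factory=list)      # what the robot has been told to do
    operating_constraints: dict = field(default_factory=dict)  # limits within which it may act
```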

2.4 Robot User Interfaces (RUIs)

A robot user interface (RUI) denotes a user interface (UI) that provides virtual access to human operators interacting with robots working in challenging environments, such as in-situ maintenance or rescue missions [10]. In other words, RUIs are developed and deployed to play an important role in controlling robots and ensuring safety in complex environments. The advance of robotic applications naturally created the need to interface different parts of integrated solutions in virtual environments. With the arrival of devices with physical and visual feedback in our lives, these interfaces are presented on different outputs such as computers, smartphones, or tablets.

Currently, RUIs are often difficult to understand and grasp, as they require some training and learning from human operators. These interfaces end up being designed and developed by the same people who built the robot [10], which usually results in unintuitive and non-standard interfaces and creates cognitive-handling issues for the operators [5]. The goal of a RUI is to interface systems that interact with other physical and virtual components and require their functionalities to be grouped into one place. RUIs are usually refined in applied research and development, which implies a distinct character in terms of design production output (Fig. 2).

Fig. 2. Diagram of the general functions of the supervisory control paradigm. (Source: Parasuraman, 2006)

Commonly, RUIs are developed to play the crucial role of controlling or monitoring operations via a remote interface. In his article about multi-robot interface design, Naveed [10] conducted a survey of HCI principles to improve an existing RUI used to control a team of mobile exploratory robots. Considering an interaction with multiple robots, the main issue was that the map view, robot camera view, and sensor data of the system did not appear on the same screen; the constant need to switch between windows created a burden for the user. The RUI proposed as a solution (Fig. 3) provided all relevant information in one single display, a recurrent and important principle when interacting with complex systems.

Fig. 3. The redesigned RUI, providing all relevant information on a single display [10].
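A small configuration sketch can make the single-display principle tangible: instead of separate windows, the map, camera, and sensor panels share one screen partition. The panel names and grid geometry below are assumptions for illustration, not the layout used in [10].

```python
# Illustrative single-display layout: all relevant views live on one screen,
# so the operator never has to switch windows to regain context.

single_display_layout = {
    "map_view":    {"row": 0, "col": 0, "rowspan": 2, "colspan": 1},  # robot position in the world
    "camera_view": {"row": 0, "col": 1, "rowspan": 1, "colspan": 1},  # live robot camera feed
    "sensor_data": {"row": 1, "col": 1, "rowspan": 1, "colspan": 1},  # telemetry and status readouts
    "alerts":      {"row": 2, "col": 0, "rowspan": 1, "colspan": 2},  # exceptions needing attention
}
```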

3 Principles of Interaction for Efficient Interface Design

Guidelines are general rules, pieces of advice that enforce a certain principle with the intent of establishing a good foundation of work. This study aims to present a literature review that examines the design principles that have been used in HRI guidelines for the development of RUIs. The goal is to illustrate how designers can apply this knowledge in practice when developing robotic interfaces.

Interface design research includes different authors who have proposed usability principles for better interaction with products and systems. In this context, ideas and principles meant to facilitate the overall use of, and satisfaction with, systems recur throughout the literature; in other words, these principles are similar in many ways but are approached from different perspectives. The HCI field has produced most of the guidelines for interface design [14, 15], but authors such as Norman [11, 12] defined more generic principles that can be related to interactions with both physical and computer systems, which perhaps allows a relationship with the HRI model, since it approaches systems and elements in real time.

In the field of HRI, the importance of supporting human interaction with robots naturally evolved through the pursuit of better communication and use of systems, which is directly connected with efficient user interfaces. The HRI literature shows that many authors have proposed guidelines for more efficient interaction between humans and robots. Goodrich and Olsen [16] highlighted the importance of the metrics of neglect tolerance and interface efficiency, and compiled their findings into a set of principles for efficient interaction in HRI. In this case, the premises are that human intent is transformed and restricted by the availability of technology and that every transformation may introduce a potential workload or an error condition. Therefore, the principles were designed to act against the effects of these bottlenecks.

We are going to describe these principles and relate them to different guidelines found in the literature review.

3.1 Implicitly Switching Interfaces and Autonomy Modes

When controlling a robot, the operator must be able to change the form of control and the way he/she receives information from the robot. These changes are often imposed by the environment but can also be made at the discretion of the human. It is the operator who must determine, through the established operational context, when it is necessary to switch from manual to autonomous mode. Mode changes are necessary but should not overload the operator; if the user wants to change operating mode, picking up the manual control (a joystick, for example) should be sufficient to signal that intention.

Allowing implicit switching of interaction modes means creating efficient ways for a human controller to interact with robots or (semi-)autonomous machines [6]. It is also related to decreasing cognitive load and providing detailed and complete control over what can be accomplished [15, 17, 18]. In terms of interface design, the interface should then acknowledge the changes that were made, to avoid any confusion on the user’s part. When it comes to cognitive processing, the user does not need a mental model that tells them the steps necessary to alternate between modes of interaction, which evokes the idea of a lower cognitive load.
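The sketch below, with hypothetical names and thresholds, shows one way the implicit switch could work: any deliberate deflection of the manual control is taken as the intent to leave autonomous mode, and the interface explicitly acknowledges the change so the user is never unsure which mode is active.

```python
# Minimal sketch of implicit mode switching: touching the joystick is itself
# the request for manual control; the UI acknowledges the change.

AUTONOMOUS, MANUAL = "autonomous", "manual"

def update_mode(current_mode, joystick_deflection, notify_ui, deadband=0.05):
    """Switch to manual as soon as the operator deliberately moves the joystick."""
    if current_mode == AUTONOMOUS and abs(joystick_deflection) > deadband:
        notify_ui("Control mode changed: MANUAL")   # make the switch visible to the user
        return MANUAL
    return current_mode
```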

3.2 Use Natural Human Cues

Our natural interaction with people gives us vast experience in dealing with each other and a natural set of forms of expression. Nonetheless, natural language is an elusive goal, and many other forms of natural expression, such as visual feedback and gestures, are also useful. With regard to cognitive information processing, naturalness means that one’s mental model and the proposed system are aligned, so that attention to sensory stimuli and the use of short-term memory follow already established practice. Thus, naturalness is compatible with effective interaction because it invokes the generation of an intuitive response.

Much research describes ways of making interactions more natural and the overall usability of systems friendlier. Fong, Thorpe, and Baur [5] pointed out that interfaces must be designed so that users can unambiguously impart their intentions and receive feedback. Also, human-computer dialogue requires a vocabulary that associates labels with concepts (e.g., command words) and sequences of actions (a grammar), allowing a natural cognitive relationship to accomplish tasks with the system [8]. Interface designers who wish to use this principle should be aware of the visual representations disseminated in our lives through the visual languages and devices we use; maps are a good example of how visual repertoire and well-adjusted presentations can ease the use of systems [6, 8, 18].
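As a hedged illustration of the vocabulary-and-grammar idea, the fragment below maps a few familiar command words to robot actions and parses a simple "<verb> [target]" grammar. The words, grammar, and robot methods are assumptions chosen for the example, not a documented command set.

```python
# Illustrative command vocabulary: labels the operator already understands,
# bound to robot actions, parsed with a trivial "<verb> [target]" grammar.

vocabulary = {
    "open":  lambda robot, target: robot.gripper_open(),
    "close": lambda robot, target: robot.gripper_close(),
    "move":  lambda robot, target: robot.move_to(target),
    "stop":  lambda robot, target: robot.halt(),
}

def interpret(command, robot):
    """E.g. interpret("move shelf_A", robot) or interpret("stop", robot)."""
    verb, *rest = command.split()
    target = rest[0] if rest else None
    vocabulary[verb](robot, target)
```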

3.3 Directly Manipulating the World

The purpose of interacting with a remote robot is to accomplish some task in the world. Therefore, RUIs should be designed to allow tasks to be accomplished, rather than drawing attention to the robot or to the interface per se. In terms of cognitive information processing, interacting with the robot’s world requires one mental model while interacting with the robot requires another, separate model. If the robot is transparent to the user, then only one model is necessary. This principle means that working memory is less likely to be overloaded [15, 18] with extra data in short-term memory and extra mental models. Thus, transparency is a desired element during the interaction.

3.4 Manipulate the Robot-World Relationship

Supporting autonomy that directly supports manipulation of the world is not an easy task. Under these circumstances, human attention may need to be drawn to the robot, and it is most helpful if this attention remains focused on the pertinent elements of the world and the task. More precisely, information regarding the status of the robot in relation to a goal state, or information that relates the robot’s pose to world coordinates, is useful. Thus, the user understands the world through the representation of the robot, and regarding cognitive information processing, the relationship between the robot and the world must be known before the user can plan what the robot should do.
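A small worked example may clarify what relating the robot's pose to world coordinates involves. The functions below assume a planar pose for brevity (a real manipulator would use full 6-DOF transforms) and compute the kind of goal-relative information the paragraph argues the interface should surface.

```python
# Sketch: express robot-frame information in world coordinates and report
# the robot's relation to the goal state, rather than raw internal values.

import math

def robot_to_world(robot_xy, robot_heading, point_in_robot_frame):
    """Transform a point from the robot frame into world coordinates (planar case)."""
    px, py = point_in_robot_frame
    c, s = math.cos(robot_heading), math.sin(robot_heading)
    return (robot_xy[0] + c * px - s * py,
            robot_xy[1] + s * px + c * py)

def distance_to_goal(robot_xy, goal_xy):
    """The status the operator actually needs: how far the robot is from its goal."""
    return math.hypot(goal_xy[0] - robot_xy[0], goal_xy[1] - robot_xy[1])
```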

3.5 Information Is Meant to Be Manipulated

One primary purpose of a RUI is to present information about the world to the user, as well as about the relationship between the world and the robot, and not solely about the robot. When information is displayed to a user, the purpose is to support decision making at perhaps the most fundamental level; this information is used to determine the environment’s paths and any obstacles that may be present. For example, if an obstacle is present, the user may wish to guide the robot around it, so spatial information about the robot’s surroundings is necessary [17] for the user to be aware [2] of how to guide the robot toward the goal. In terms of the cognitive information-processing model, if information can be manipulated directly, there is no need for a mental model that translates this information into an action that will occur in a different modality.
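The sketch below illustrates information being manipulated directly: a click on the displayed map becomes a waypoint, checked against known obstacles so the operator can steer the robot around them. The data representation (circular obstacles, a clearance margin) is an assumption made for the example.

```python
# Illustrative direct manipulation of displayed information: map clicks become
# waypoints, rejected if they fall too close to a known obstacle.

import math

def add_waypoint(path, clicked_point, obstacles, clearance=0.5):
    """Append a map click to the path unless it violates obstacle clearance."""
    for (ox, oy, radius) in obstacles:
        if math.hypot(clicked_point[0] - ox, clicked_point[1] - oy) < radius + clearance:
            return path                      # too close to an obstacle: keep the old path
    return path + [clicked_point]
```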

3.6 Externalize Memory

One of the difficulties of teleoperating a robot via a camera perspective is that the user cannot see where the robot’s parts are [20]. A common occurrence in HRI is that the user projects himself into the machine so that, in his perception, the machine becomes an extension of the user [21]. This greatly simplifies the task of keeping the correct mental model resident in working memory, but it is limited if the sense of priority and perception is missing. Without this sense, the user must either (a) maintain all relevant information in short-term memory [19] and then integrate this information into a mental representation [18], or (b) consult other sensors and integrate all of them into a consistent whole [22]. This can make the task of guiding a robot all-consuming. One way of reducing the cognitive load associated with navigation is to externalize memory.

3.7 Support Attention Management

Attention seems to be a major bottleneck in cognitive information processing. As mentioned before, situation awareness is a human factor of extreme importance when dealing with supervisory control [2, 8]. Even if sufficient information is presented to the user, if no attention is paid to the provided information, incorrect decisions can be made. In terms of interface design, it is important to manage the user’s attention by providing the status of the system’s functioning [6].
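A minimal sketch of attention management, under assumed names, is shown below: every status message remains visible in the interface, but only messages flagged as critical actively draw the operator's attention, so routine updates do not compete with exceptions that require intervention.

```python
# Sketch: route all status to the display, but reserve attention-grabbing
# cues (highlight, sound) for critical events that need the operator.

def present_status(message, priority, ui):
    ui.log(message)                  # always available in the status area
    if priority == "critical":
        ui.highlight(message)        # pull the operator's attention to the exception
        ui.sound_alert()             # redundant cue in case visual attention is elsewhere
```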

4 Evaluation

A comparative analysis (Table 1) was made to illustrate the relationships among the different HRI guidelines presented in the reviewed articles. Based on supervisory control, recurrent aspects of interaction were selected and then grouped accordingly. Furthermore, it was evaluated what a combination of these factors would represent and what must be taken into consideration that may not be present in available guidelines. In this case, the frame of reference is the interaction of one human working with one robot and the related guidelines and principles around supervisory control interactions in remote operations.

Table 1. A comparative analysis of HRI guidelines related to supervisory control.

When examining different guidelines around the same theme, it is possible to note a certain recurrence and redundancy, since the main objectives permeate similar ideas. Enhancing system performance is a principle mentioned by authors in both the HRI [2, 17] and HCI [11,12,13,14,15] fields; it has a direct relationship with designing more efficient user interfaces and reducing the number of interactive steps [11, 24]. Considering supervisory control, interfaces should include the availability of the information required to make the right decision [2, 17], as well as the access time to the interface for the needed actions and the amount of any distraction or attentional shift between the robot scene and the control interface [2, 5]. The presentation of information is then an important principle for remote operations [18, 23, 24]; visibility of system status [2, 6, 24] and implicit switching of interfaces and autonomy modes [16] are essential for an efficient collaboration with robot manipulators in a remote work environment [5]. Furthermore, other HRI guidelines can be related to this goal: avoiding any deadlock circumstances [2, 16] and providing all relevant information in one single display [18, 22, 23] allow the operator to have the means necessary to plan, teach, monitor, and intervene when necessary.

5 Conclusion

A literature review was conducted of articles on interface design for HRI to collect guidelines that could be useful for the development of RUIs for remote operations with robot manipulators. The objective of this analysis is to compile guidelines that support better HRI interfaces, but also to identify aspects that are recurrent and therefore important for monitoring this kind of robot in remote operations. The table shows a recurrence of guidelines that refer to awareness, creating a relationship with supervisory control; for the function of monitoring a single robot operating in a remote environment, the factors of awareness can provide a significant framework for evaluating the needs of the operator. It also shows a recurrence of guidelines that require all relevant information to be displayed to the user with the intent of supporting decision making, which reduces the workload of the operator and diminishes the number of interactive steps, consequently improving system performance.

Interface designers involved in the construction and development of RUIs under the circumstances mentioned should then be able to provide all relevant information in one single display, organizing system information by its relevance to the steps that must be taken for the effectiveness of the system. An interface designer should be able to represent the robot’s world from a natural perspective; the perception of the robot’s world should not require a different mental model, which brings a fluid interaction and avoids bottlenecks in the system. Therefore, information about this relationship, such as notification messages regarding actions and system status, must be presented directly and be clear to the user.