1 Introduction

With the increasing technical reliability of autonomous systems, more and more human responsibilities are carried out by machines. An increasing level of autonomy shall increase efficiency and safety while simultaneously decreasing the human workload [1]. Traditionally, the design of autonomous systems focuses on technical implementation aspects, especially in technology-heavy disciplines such as computer vision and robotics, ranging from the system’s functionality to associated sensors and software [2]. Previous research in the field of human-robot interaction, such as [3–6], has already analyzed the utility of computer vision technology for autonomous drones in depth. However, these projects do not focus on the experience of interacting with vision-based drones.

In this paper, we take this thought further and investigate the relation between autonomy and User Experience (UX) under different levels of perceived workload. We see this consideration as a key issue for the success of future assistive technologies. To create different levels of perceived workload, we conducted a case study with four teams in competitive settings using (semi-)autonomously flying drones as exemplary systems. The teams used four different control mechanisms based on state-of-the-art computer vision algorithms (see Fig. 1). Hence, the research question of this study can be summarized as follows:

Fig. 1. Levels of autonomy of the teams’ drone prototypes, using a checkerboard (1), a light source (2), a human face (3), and a floor marking (4) as control unit.

RQ: “How does the user’s experience when interacting with flying robots differ in situations with different perceived workload?”

This study provides two main contributions: First, based on the analysis of four different drone prototypes, each based on an individual (semi-)autonomous interaction design, we investigated the relation between autonomy level and UX. Second, we propose concrete design recommendations for the UX-oriented design of future (semi-)autonomous flying robots.

The goal of this paper is to foster the discussion of experiences with flying robots in the computer vision community and to encourage researchers and practitioners to consider both technical and UX-related attributes when building the next generation of assistive flying robots.

2 Background

The term UX is an established concept in a variety of different disciplines, ranging from ergonomics to human factors and human-computer interaction. An established approach to consolidate the variety of different perspectives is the breakdown of UX into pragmatic and hedonic product attributes [7]. However, the pragmatic product attributes (i.e., the usability) of technological tools are more and more taken for granted [8]. With increasing technological maturity, researchers should put more emphasis on hedonic product attributes in order to ensure the quality of everyday actions, particularly when designing assistive technologies.

As found by Fitts [9], machines perform better than human operators in certain aspects, such as precision and efficiency, in ensuring consistent quality in repetitious tasks, or in moving heavy loads smoothly. In other aspects humans outperform machines, e.g., in improvising and using flexible procedures, in identifying visual patterns, in reasoning, or in exercising judgement. Consequently, when done properly, exploiting machine benefits generally leads to a reduction of workload for users and decreased stress, fatigue, or human error. To make these benefits accessible to users, interaction with systems is necessary, yet at the same time systems need to be able to execute tasks or subtasks on their own. How independently a system can operate is generally referred to as its autonomy. The term itself, as coined in research on human-robot interaction [10], has multiple definitions in the literature [11–14] with varying characterizations. Sheridan and Verplank [14] characterized it by distinguishing ten levels of autonomy (LOA), ranging from ’Human does it all (1)’ to ’Computer acts entirely autonomously (10)’, with increasing autonomy at each level. How autonomously a system can operate is determined by its design (e.g., ’Computer executes alternative if human approves (5)’). In some use cases a more, or respectively less, autonomous design is desirable. Therefore, flexible or adaptive autonomy approaches with a dynamically changing level of autonomy were proposed, e.g., by Miller and Parasuraman [15]. Looking at the consequences for users and results when interacting with such systems, they describe an inevitable trade-off between workload and unpredictability: The more autonomously systems operate, the more workload is taken off the user’s shoulders. In consequence, however, the unpredictability of the results increases, as users are no longer in control of the execution details. The more users need or want to be in control of the execution details, on the other hand, the more their workload increases in turn.

Drones serve well as a practical example for applying Sheridan and Verplank’s LOA, as they incorporate multiple levels at once. One reason for their popularity is their ease of control compared to, for instance, remote-controlled helicopters. This is due to their design with four (or more) rotors, which allows easier in-air stabilization. The stabilization is done fully autonomously by a built-in control unit (10). The different LOA can be used depending on the usage context, such as manual control for recording landscapes or semi-autonomous tracking of and circling around a protagonist, as in action sports.

3 Related Work

A range of prior work investigated interactions between humans and autonomously controlled systems (i.e., ground and aerial robots) in a variety of different settings.

With an increasing interest in the interaction between humans and autonomous systems, researchers have moved away from a pure analysis of input devices towards the investigation of more natural control gestures. Ende et al. [17] focus on co-working tasks with technical robots, whereas Nagi et al. [18] analyze the interplay of gesture and facial recognition. Based on an analysis of human-drone interaction, Cauchard et al. [19] illustrate that natural gesture control generally leads to a more personal relation to autonomous systems. The work of Ng and Sharlin [20], which examines body controls of drones inspired by falconry gestures, supports this view of natural human-drone interaction. Furthermore, Cid et al. [21], Heenan et al. [22], and Szafir et al. [23] highlight that visual feedback increases the level of empathy in human-robot interaction.

Against the background of these studies, we want to investigate how different levels of autonomy of an autonomous drone influence the interaction and the associated UX. First attempts to analyze the perception of different levels of autonomy are reported by Rödel et al. [24] and Hassenzahl and Klapprich [1]. These studies, however, do not comprehensively analyze the complexity of autonomous systems but remain on a higher level of automation tasks (see [1]) or focus on the indication of the presumable UX of future autonomous cars (see [24]).

Based on the NASA TLX, researchers have already shown that the perceived workload decreases with an increasing level of system autonomy [16, 25]. The challenge of analyzing the interaction with autonomous systems lies in the subjective interpretation of each facet of the experienced interaction, ranging from usability and workload to experience. For this study, we reviewed existing measurement tools in order to derive an interview guideline applicable to our particular research question. The interview guideline is explained comprehensively in the following Methodology section.

4 Methodology

As the implementation of autonomous flying robots is still on the rise, we decided to organize a student competition in order to develop various prototypes. We chose a Parrot AR Drone 2.0 as the platform, with the goal of implementing different interaction designs.

The student competition was conducted in the form of a case study. First, students from our research institution were able to sign up for a drone course. Within this course, the students developed different prototypes. Second, the course ended in a competition, where the prototypes were put into practice. Third, we conducted interviews with the participants in order to analyze their experiences of the interaction with the drone.

4.1 Development of Prototypes

The case study was announced as a one-week student competition at our research institution. The course itself consisted of two steps: Initially, the registered students were paired into teams and had one week to develop a drone prototype. After one week, the student teams put their work into practice in three different settings, as described below.

Participants, Setting, and Task: In total, eight participants from different academic backgrounds (6x Computer Science, 1x Electrical Engineering, and 1x Communication Studies) and ages ranging from 22 to 26 (\(\mu = 24\)) signed up for the one-week student competition without financial reward. At the beginning of the competition, the students were randomly paired into four teams of two. Over the course of the initial development phase, the participants were trained in Python programming, image processing, computer vision, feedback control theory, state estimation, and autonomous navigation by academic and industry experts to ensure an equally distributed level of knowledge regarding the design of autonomous systems.

In the first phase of the case study, the teams had to program a Parrot AR Drone 2.0 (52.5 cm × 51.5 cm), a quadcopter with an integrated HD camera, using an open-source Python API. The student teams were asked to process the video stream of the drone in real time in order to fly and compete autonomously in a race at the end of the course. The teams were, however, not prescribed a specific interaction design: all four teams were told to individually develop a prototype with a level of autonomy of their own choosing. In the final race, each drone prototype had to pass the same predetermined track, consisting of three hockey goals positioned in an L-shaped course. Figure 3 shows an impression of the drone race.
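
To make the shared setup concrete, the following is a minimal sketch of the per-frame control loop that all four prototypes effectively follow: grab a camera frame, derive a steering command from it, and fall back to hovering when the tracked target is lost. The DroneClient class is a hypothetical stand-in for the open-source Python API used in the course, not its actual interface; only the overall loop structure is implied by the prototypes described below.

```python
# Minimal sketch of the per-frame control loop shared by all prototypes.
# `DroneClient` is a hypothetical wrapper, NOT the actual API used in the course.


class DroneClient:
    """Hypothetical stand-in for the AR Drone Python API used in the course."""

    def get_frame(self):                       # latest BGR frame from a camera
        raise NotImplementedError

    def move(self, roll, pitch, yaw, gaz):     # normalized commands in [-1, 1]
        raise NotImplementedError

    def hover(self):                           # hold position (emergency fallback)
        raise NotImplementedError


def control_loop(drone, frame_to_command):
    """Run a vision-based controller: frame -> (roll, pitch, yaw, gaz)."""
    while True:
        frame = drone.get_frame()
        command = frame_to_command(frame)      # team-specific vision algorithm
        if command is None:
            drone.hover()                      # target lost: fall back to hovering
        else:
            drone.move(*command)
```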

Prototypes: The four student teams programmed and implemented four unique types of drone interaction that cover different levels of autonomy. For the analysis in this paper, we distinguish two types of autonomous interaction design: “semi-autonomous” when the drone “executes an alternative if the human approves” and “fully autonomous” when “the drone decides everything”, in reference to the LOA according to Sheridan and Verplank [14]. Figure 1 illustrates the four different drone prototypes and the associated interaction designs, whereas Algorithm 1 exemplarily shows the algorithm of team 1, described below, as a representative for all four teams.

Team 1: Recognition of a printed checkerboard. Team 1 implemented an algorithm based on Rufli et al. [26] that enabled the drone’s front camera to detect and follow the movements of a checkerboard printed on a piece of paper. Based on the known geometry, the center of the checkerboard is found using corner and edge detection. The drone is steered and controlled such that it tries to keep the centroid in the center of the image frame. Furthermore, through the identification of the outermost square it is possible to push and pull the drone forward and backward. In order to avoid oscillation, a PID controller is used to regulate the magnitude of the movement speeds. This interaction is semi-autonomous.

Algorithm 1. Control algorithm of team 1 (checkerboard-based steering).
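
For illustration, the following Python sketch shows how such a checkerboard-following step could be realized with OpenCV. It is not team 1’s actual implementation; the pattern size, gains, and command conventions are assumptions.

```python
# Illustrative sketch of checkerboard following (not team 1's actual code).
import cv2
import numpy as np

PATTERN = (7, 6)                 # inner corners of the printed checkerboard (assumed)
K_YAW, K_GAZ, K_PITCH = 0.004, 0.004, 1.0


def checkerboard_command(frame, reference_size=90.0):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None                              # pattern lost -> caller hovers

    pts = corners.reshape(-1, 2)
    cx, cy = pts.mean(axis=0)                    # centroid of the board
    h, w = gray.shape
    err_x = cx - w / 2.0                         # horizontal offset -> yaw
    err_y = cy - h / 2.0                         # vertical offset   -> altitude (gaz)

    # Apparent size of the board (proxy for the outermost square) pushes/pulls
    # the drone forward and backward.
    size = np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))
    err_z = (size - reference_size) / reference_size     # too large -> back off

    # Simple proportional terms; the team additionally used a PID controller
    # to avoid oscillation.
    return (0.0, -K_PITCH * err_z, K_YAW * err_x, -K_GAZ * err_y)
```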

Team 2: Recognition of a color/light source. Team 2 employed an algorithm based on Comaniciu et al. [27] that allowed the drone’s front camera to detect a homogeneously colored object or a light source. This mechanism had a setup phase in which the algorithm was trained to recognize either a colored object or a light source. In the final competition, the team used a light source to control the drone. After the setup phase, the drone tried to center the light source in the image frame and follow its path. This interaction is semi-autonomous.
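
A hedged sketch of such a setup-then-track mechanism is shown below, using OpenCV’s CamShift (which builds on the mean-shift tracking of Comaniciu et al.). The region of interest handed over from the setup phase, the gains, and the command mapping are assumptions rather than the team’s actual code; a pure light source might instead be tracked via its brightness.

```python
# Illustrative sketch of color-object tracking with CamShift (not team 2's code).
import cv2

K_YAW = 0.004


class LightTracker:
    def __init__(self, first_frame, roi):
        x, y, w, h = roi                        # ROI around the target ("setup phase")
        hsv = cv2.cvtColor(first_frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        self.hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
        cv2.normalize(self.hist, self.hist, 0, 255, cv2.NORM_MINMAX)
        self.window = roi
        self.term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    def command(self, frame):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back = cv2.calcBackProject([hsv], [0], self.hist, [0, 180], 1)
        _, self.window = cv2.CamShift(back, self.window, self.term)
        x, y, w, h = self.window
        err_x = (x + w / 2.0) - frame.shape[1] / 2.0   # offset from image center
        return (0.0, 0.0, K_YAW * err_x, 0.0)          # rotate towards the target
```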

Team 3: Recognition of a human face. Team 3 programmed a face detection algorithm based on Viola and Jones [28] that recognizes a human face from the drone’s front camera. In this approach, the drone tried to center a human face in the image frame and thereby follow the track and movements of the respective human. Furthermore, an additionally implemented emergency mode allowed the drone to keep its position through “hovering” as soon as the face recognition was interrupted. This interaction is semi-autonomous.
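
The following sketch illustrates how such a face-following step could look with OpenCV’s Viola-Jones cascade classifier; returning no command triggers the hovering emergency mode described above. The cascade parameters and gains are assumptions, not team 3’s actual values.

```python
# Illustrative sketch of face following with a Haar cascade (not team 3's code).
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
K_YAW, K_GAZ = 0.004, 0.004


def face_command(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                    # no face -> keep position ("hovering")
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
    err_x = (x + w / 2.0) - gray.shape[1] / 2.0
    err_y = (y + h / 2.0) - gray.shape[0] / 2.0
    return (0.0, 0.0, K_YAW * err_x, -K_GAZ * err_y)    # center the face in the frame
```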

Team 4: Recognition of a floor marking. Team 4 implemented an algorithm based on Hart [29] that can detect and follow a colored line on the ground using the drone’s bottom camera as an input device. Thereby, the drone is positioned at a certain height above the line. In the final race at the end of the student competition, the team used red tape to mark the line on the ground. The algorithm recognized the line (i.e., the tape) and constantly tried to keep it in the center of the bottom camera frame. As soon as the line is no longer centered, the correct angle to re-approach it is calculated. This interaction is fully autonomous.
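
Below is an illustrative sketch of such a line-following step on the bottom camera image. The HSV thresholds for the red tape, the gains, and the use of cv2.fitLine for the approach angle are assumptions, not team 4’s actual code.

```python
# Illustrative sketch of red-line following on the bottom camera (not team 4's code).
import cv2
import numpy as np

K_YAW, K_ROLL = 1.0, 0.004
FORWARD_PITCH = 0.3      # constant forward speed (sign convention is hypothetical)


def line_command(bottom_frame):
    hsv = cv2.cvtColor(bottom_frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges.
    mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    ys, xs = np.nonzero(mask)
    if len(xs) < 50:
        return None                                    # tape lost -> hover

    # Offset of the line from the image center -> sideways (roll) correction.
    err_x = xs.mean() - bottom_frame.shape[1] / 2.0

    # Orientation of the line -> yaw correction to align the drone with it.
    pts = np.column_stack((xs, ys)).astype(np.float32)
    vx, vy, _, _ = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    if vy < 0:                                         # normalize direction down the image
        vx, vy = -vx, -vy
    angle = np.arctan2(vx, vy)                         # 0 when the line runs straight ahead

    return (K_ROLL * err_x, FORWARD_PITCH, K_YAW * angle, 0.0)
```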

4.2 Competition, Data Collection, and Analysis

In order to analyze the experience of interacting with flying robots in situations with different levels of perceived workload, we identified four suitable settings for the final race. We distinguished different perceived workloads along the dimensions “Competition vs. No Competition” and “Manual Control vs. Autonomous Control”. As Cauchard et al. [19] already conducted an elaborate study on manually controlled human-drone interaction in a setting without competition, we concentrated on (1) Competition/Autonomous, (2) No Competition/Autonomous, and (3) Competition/Manual Control, as described below and illustrated in Fig. 2. In all three settings, both participants of each of the four teams had three attempts to finish the track. As only the best run of each participant counted, we ended up with 24 eligible runs in total.

Fig. 2. Allocation of the three analyzed settings: (1) Competition/Autonomous, (2) No Competition/Autonomous, and (3) Competition/Manual Control.

1. Competition/Autonomous: In the Competition/Autonomous setting the student teams had to compete in the race using the autonomous control algorithm of their drone prototype. A manual interaction would disqualify them for the current run. The best student team was rewarded with a gift.

2. No Competition/Autonomous: In the No Competition/Autonomous setting, the student teams were asked to autonomously direct their drone prototype through the track. However, no time was tracked in this setting.

3. Competition/Manual Control: In the Competition/Manual Control setting the student teams had to use the official Parrot App for Smartphones to steer the drone manually through the track. The autonomous control mechanisms were deactivated in this setting. The best student team was rewarded with a gift.

After the three attempts in each setting, we conducted interviews with all eight participants (24 interviews in total, each between 15 and 20 min) to analyze experience-related aspects of the interaction with the drone prototypes. The interviews were semi-structured and audio-recorded for post-hoc analysis.

In order to meet the requirements of our research question we developed an interview guideline that served as a basis for the semi-structured interviews (see Table 1). Inspired by related work in the fields of UX, usability, and workload evaluation (as indicated in Table 1), relevant experience- and workload-related categories (e.g., “User” and “Environment”) and dimensions (e.g., “Mental Demand” and “Frustration Level”) and associated interview questions were developed by the first and the second author.

Post-hoc coding was conducted according to Mayring and Fenzl [30] by the second author, who has broad experience in open coding of interview data. Interview statements were clustered according to the questions’ categories (see Table 1). The objective was to identify key issues across the study settings and to derive design recommendations that strengthen the link between computer vision and robotics on the one hand and the interaction of technological tools with people on the other.

Table 1. Semi-structured interview guideline.

5 Results

The following sections present the results of our case study. First, we report the outcome of our interviews with regard to the three study settings. Thereby, we focus on the perceived workload (based on the dimension “Competition vs. No Competition”) as well as the participants’ experiences with autonomous and manual control mechanisms (based on the dimension “Manual Control vs. Autonomous Control”). Second, based on these outcomes, we derive design recommendations for (semi-)autonomous flying robots.

5.1 Interview Outcomes

To analyze the relation between autonomy and UX (i.e., the associated perceived workload) of vision-based drones, we consolidated the key findings of our interviews. These key findings allow the subsequent derivation of design recommendations for understanding the interaction of people with flying robots.

System Feedback Enhances the Experience of Interactions: All participants enjoyed interacting with their drones regardless of the respective level of autonomy. Having established a feeling of control, directing the (semi-)autonomously controlled drone was considered very enjoyable (settings 1 and 2). The pleasure of being in control arose either through feedback from the (semi-autonomous) drone, through a tangible input device (semi-autonomous), or through a reduced workload (autonomous drone). A student from team 1 (semi-autonomous drone) mentioned: “I think it was very enjoyable [...] that we could take very direct influence on the drone using the checkerboard. It kind of was like in the circus, where you have a tiger and you say ’jump over this’ [...] and we basically made the same thing with the drone navigating it through the obstacle course” [P1]. The instant feedback from the prototype facilitated the development of a feeling of being in control. However, external factors as well as latency reduced the feeling of being in control: “When the drone reacted to my input or my actions without much of a delay I felt confident. For example, when moving the light [source] to left or right [and] the drone also directly rotated to the left or the right, I had the feeling of complete control. So I think it is also a matter of latency” [P3]. The team that used the autonomously controlled drone (team 4), however, described the decreased workload as enjoyable: “[The] most enjoyable moment in the race was, when the drone surprisingly went along the path without any [manual] corrections” [P7].

Direct feedback mechanisms also positively influenced the ease of use of the prototypes: “[The interaction] did not really need a lot of time to explain someone who has never seen this specific drone and implementation or control. You just say ’here is the checkerboard’. And even with small movements you [realize] how the drone moves and it is very easy to keep the drone on track” [P2]. However, participants from team 2 (recognition of a color/light source) and team 3 (recognition of a human face) mentioned difficulties regarding the ease of use due to a lack of robustness of the associated algorithms. External factors such as different lighting, fast movements of the tracked object, and a lack of control mechanisms (e.g., a PID controller) led to difficulties in the interaction with the drone. The participants highlighted that they had difficulties: “If the background is too bright, it [did not] work [recognition of a light source]” [P4], and that “[h]olding it stable, while moving, shaking it not too much, was difficult” [P3].

Environment Perception Influences the Feeling of Control: Environmental factors played an important role in the autonomous setting with competition (setting 1). Unexpected environmental conditions, bystanders, and the orientation of the drone in space were the most prominent environmental factors; for example, “this direct sunlight completely misguided the drone. [...] We did not anticipate that problem” [P2]. Furthermore, “There were a lot of faces in the room [...] and also different parts of walls were recognized as faces” [P6]. For others it was “hard to locate where the obstacle is, relatively to the drone, because [the student] was looking at the drone and then while flying fast [one] can not really see if the path [the drone is] taking will work out or if [the drone will] touch something” [P8]. All in all, these unexpected environmental factors lowered the perceived feeling of control. In the manual setting (setting 3), the participants were less bothered by external influences. The possibility to use an additional input device even increased one student’s risk tolerance: “I would try to check whether you can even increase the speed in the setting, lower the limitations of the drone. So basically taking away safety features” [P3].

Fig. 3. Impressions from the autonomous drone race competition. Four student teams had to program a Parrot AR Drone 2.0 in order to fly autonomously in a drone race. The picture shows a semi-autonomous interaction using face recognition.

5.2 Design Recommendations

Based on the investigation of the three different perceived workload settings, we derived three design recommendations for autonomous flying robots. The goal of these recommendations is to support a user-centered design of future autonomous flying robots and to carry the concept of UX over into the field of computer vision and robotics.

Maneuvering in 3D Space: Autonomous systems such as naval or aerial drones move in 3D space. We observed that maneuvering and interacting with a flying drone in 3D space was mentally demanding for all participants, particularly at the beginning of each race. The additional degree of freedom led to a high cognitive load, since the participants were accustomed to 2D movements, such as walking or driving a car.

Experiences from the case study: In the manually controlled setting, the participants needed a certain amount of time to familiarize themselves with the control mechanism in 3D space. “I think it’s getting better and better the more I try. So it’s really something which is dependent on my skills” [P2]. In the autonomously controlled setting, the participants reduced the complexity of controlling the (semi-)autonomous drone in 3D space by reducing the number of allowed movement directions. Team 2, for example, disabled the backward pitch movement of their drone to overcome the obstacles. Team 1 restricted the drone to a fixed altitude to simplify the semi-autonomous interaction. “We lacked the controls to move the drone up and downwards. We just thought the drone will fit through the gates in the end” [P1].

Recommendation: With an increasing number of degrees of freedom, familiarization with the control of a system becomes more time consuming. For manual and autonomous interactions, we recommend restricting the possible movements to those that are necessary in the respective use case. For example, one can fix or autonomously adjust the altitude of an autonomous system (e.g., of a surveillance drone) or restrict the system to one type of movement at a time (see the sketch below). Consequently, (semi-)autonomous control mechanisms can support the handling of complex situations.
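
As an illustration of this recommendation, the following sketch clamps a command tuple to a whitelist of allowed axes before it is sent to the drone; the axis names and limits are assumptions, not part of the prototypes.

```python
# One way to realize this recommendation: zero out or clamp individual axes of
# a (roll, pitch, yaw, gaz) command before it is sent to the drone.
def restrict_axes(command, allowed=("yaw",), limit=0.3):
    """Zero out all movement axes except those explicitly allowed."""
    axes = dict(zip(("roll", "pitch", "yaw", "gaz"), command))
    restricted = {a: max(-limit, min(limit, v)) if a in allowed else 0.0
                  for a, v in axes.items()}
    return tuple(restricted[a] for a in ("roll", "pitch", "yaw", "gaz"))


# Example: fixing the altitude means the gaz axis is never commanded.
cmd = restrict_axes((0.1, -0.2, 0.4, 0.3), allowed=("roll", "pitch", "yaw"))
# -> (0.1, -0.2, 0.3, 0.0): gaz removed, yaw clamped to the limit
```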

Precision, Feedback, and Latency: The interaction with autonomous systems requires precise, direct, and instant feedback to foster the feeling of control. Latency in performing an interaction or a lack of feedback can substantially reduce the perceived feeling of control.

Experiences from the case study: We observed that for both systems, semi-autonomous and autonomous, precise and direct feedback from the system led to a high feeling of control and consequently a positive UX. “It was a great feeling, [...] I could feel [...] the small changes and when I changed the position of the paper [i.e., the checkerboard] it was following it” [P3]. In contrast, latency within the interaction with the drone harmed the feeling of control, although it was regained afterwards. “I thought it actually lost [the detection of] my face but it didn’t. So again the [latency] problem solved itself by being a little bit more patient” [P7].

Recommendation: As a conclusion, we suggest designing direct feedback mechanisms, as similarly mentioned by Cauchard et al. [19]. Moreover, the implementation of advanced and precise control procedures, such as a PID controller, and the reduction of latency through a stable interaction design can promote a higher feeling of control and consequently a better UX.
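
For reference, a minimal discrete PID controller of the kind mentioned above could look as follows; the gains are placeholders that would have to be tuned per axis and prototype.

```python
# Minimal discrete PID controller; gains are illustrative placeholders.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: damp the yaw correction derived from the checkerboard centroid offset.
yaw_pid = PID(kp=0.004, ki=0.0005, kd=0.002)
# yaw_command = yaw_pid.update(err_x, dt=1 / 30.0)   # at ~30 fps
```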

Natural Emergency Procedures: Dealing with emergency situations is one of the key issues in designing autonomous flying robots for assistive purposes. Emergency situations are unforeseen and can potentially harm people or the environment. Thus, the interaction with autonomous flying robots in an emergency situation is generally demanding. The challenge in emergency situations lies in the user’s loss of control and the consequential need for a suitable emergency procedure. In our case study, four emergency actions were possible: direct control, immediate stop, immediate landing, and hovering (i.e., holding a constant position in 3D space).

Experiences from the case study: In manual interactions, we observed that in emergency situations the participants automatically used the immediate stop mechanism or the landing function. “In the second run [of the manual competition] I first anticipated the drone’s path [...] when I lost control I tried to regain control, but then I emergency landed it” [P3]. In autonomous interactions, we observed that participants resolved emergency situations initially using the hovering mechanism and later using immediate landing. “I bumped into the goal [i.e., one of the obstacles], which was not a big problem because [...] you could just wait a few seconds, the drone hovered and you could just start again” [P1].

Recommendation: With an increasing level of autonomy, the importance of emergency considerations increases, as users have to rely on the system to function correctly. Therefore, we suggest designing natural emergency handling schemes (i.e., hovering for drones) according to the level of autonomy, in order to assist the user in potential breakdowns. Natural emergency procedures allow the user to realize and understand the need to interfere. Thus, a positive UX can be ensured.
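
A possible realization of such an autonomy-dependent emergency scheme is sketched below: on loss of tracking, the drone first hovers so the user can re-establish control, and only lands after a timeout. The timeout value and the hover/land calls on the hypothetical drone client are assumptions.

```python
# Sketch of autonomy-dependent emergency handling: hover first, land as a last
# resort. `drone` is the hypothetical client from the control-loop sketch above.
import time


def handle_target_lost(drone, lost_since, hover_timeout=5.0):
    """Hover so the user can re-establish tracking; land only after a timeout."""
    if time.time() - lost_since < hover_timeout:
        drone.hover()        # natural emergency behaviour: hold position in 3D space
    else:
        drone.land()         # graceful fallback if tracking cannot be regained
```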

6 Limitations and Future Work

This study aims to foster a multilateral discourse about autonomous systems. However, experiences and associated evaluations are subjective in nature, which complicates generalization. Extensive and diverse studies are required to comprehensively understand users’ feelings and emotions. Our case study comprised eight registered participants from our research institution. We asked the participants to develop an individual interaction design for an aerial robot (i.e., a flying drone) in teams of two. Thus, we ended up with four different drone prototypes; the analysis of more interaction designs as well as of further levels of autonomy could lead to additional and more profound insights. Nevertheless, we were able to derive reasonable insights and design recommendations across all prototypes. Here, the study can serve as a basis and provide comparative data for future research.

To ensure the comparability of the experienced interactions of all participants we chose drones as the development object for all teams. As a consequence, we focused on merely one specific aspect (i.e., the relation between autonomy and UX) in our case study and neglected further peculiarities of drones, such as noise generation of the rotor blades or specific flight characteristics. Moreover, the particular study setting (i.e., participants developed the interaction design themselves) may have resulted in a higher personal attachment compared to just using the system. We therefore want to motivate other researchers to take the concept of UX-oriented, autonomous systems further to additional application domains, such as ground or naval robots.

7 Conclusion

The central issue of this study was to analyze the relation between different levels of autonomy and the associated UX. To investigate this relation, we implemented a case study in the form of a student competition and selected flying drones as exemplary autonomous systems. In the end, we were able to contrast four different human-drone interactions based on semi-structured interviews with all participants. Altogether, we derived two main contributions from this study. First, we found autonomy-specific insights into the UX of human-drone interaction. Second, we presented three design recommendations for the future design of autonomous flying robots.

In summary, we see our work as a step towards the design of UX-sensitive autonomous flying robots. We want to highlight the consideration of UX as a crucial factor and foster an ongoing discussion in the field of computer vision and robotics research.