
Understanding the Interaction between Delivery Robots and Other Road and Sidewalk Users: A Study of User-generated Online Videos

Published: 23 October 2024

Abstract

The deployment of autonomous delivery robots in urban environments presents unique challenges in navigating complex traffic conditions and interacting with diverse road and sidewalk users. Effective communication between robots and road and sidewalk users is crucial to address these challenges. This study investigates real-world encounter scenarios where delivery robots and road and sidewalk users interact, seeking to understand the essential role of communication in ensuring seamless encounters. Following an online ethnography approach, we collected 117 user-generated videos from TikTok and their associated 2,067 comments. Our systematic analysis revealed several design opportunities to augment communication between delivery robots and road and sidewalk users, which include facilitating multi-party path negotiation, managing unexpected robot behaviour via transparency information, and expressing robot limitations to request human assistance. Moreover, the triangulation of video and comments analysis provides a set of design considerations to realise these opportunities. The findings contribute to understanding the operational context of delivery robots and offer insights for designing interactions with road and sidewalk users, facilitating their integration into urban spaces.

1 Introduction

The deployment of autonomous delivery robots (ADRs) is becoming increasingly prevalent in urban spaces, holding promise for enhancing last-mile delivery efficiency [59], as well as reducing energy consumption and emissions [23]. ADRs encompass various robotic platforms, including self-driving vehicles, autonomous delivery pods, and sidewalk delivery robots, also known as personal delivery devices (PDDs). This study focuses on small-scale PDDs weighing between 20 and 250 kilograms and operating at a maximum speed of 12 m/s, which often share sidewalks with other sidewalk users [63].
While other types of mobile robots, such as assistive robots [53] or cleaning robots [26], typically operate in closed-off and more predictable environments (e.g., workplaces or domestic spaces [18, 100]), delivery robots must navigate complex and dynamic urban traffic environments and interact with a diverse range of road and sidewalk users, including pedestrians, cyclists, and vehicle users [29]. This requires delivery robots to be highly adaptable and capable of handling unexpected situations, which can pose challenges to their smooth deployment. Although advances in robotics technology continue to enhance the operational capability of delivery robots, potential challenges during real-world deployment are difficult to fully anticipate during the robot development process [83]. To shed light on the practical considerations of these delivery robots, recent studies have turned their attention to the real-world deployment of delivery robots, for example, evaluating traffic accessibility [4, 21, 65] or observing people's interactions with these robots [2, 27, 92, 97].
When designing ubiquitous technologies for urban contexts, it is crucial to consider non-users as stakeholders [87]. When delivery robots operate in public spaces, they could potentially affect incidental road and sidewalk users who encounter the robot without intending to interact with it, who have been referred to as incidentally co-present persons (InCoPs) [72]. In situations such as unmarked crossings and shared spaces where formal traffic rules are lacking, road and sidewalk users often rely on social norms to convey intentions and anticipate behaviours [69, 71]. To navigate such complex environments effectively, delivery robots and other autonomous vehicles (AVs) must be equipped with the ability to communicate with other road and sidewalk users and follow social norms [79, 94]. Numerous studies have focused on probing driver-pedestrian interaction patterns and designing external human–machine interfaces (eHMIs) for traditional road conditions [20]. However, the communication requirements for delivery robots extend beyond these contexts, as they operate more frequently in unstructured areas such as sidewalks and shared pedestrian zones and often encounter sidewalk users in close proximity. Furthermore, the lack of transparency in artificial intelligence (AI)-driven systems can create difficulties in understanding the behaviours of delivery robots, negatively impacting the quality of interaction and diminishing people's trust and acceptance of the technology [17, 79].
In September 2022, a video capturing a delivery robot ignoring police tape and barging through a crime scene went viral on social media, sparking discussion about the readiness of such robotics technology in real-world settings. This incident is symbolic of the many scenarios that delivery robots need to address in real-world operations and underscores the need for human–robot interaction (HRI) research to study how these systems interact with people in real-world settings. Delivery robot technology companies have been increasingly conducting pilot programs in public spaces, such as Starship robot1 pilot programs across various U.S. campuses and Europe, as well as Kiwibot2 pilot program in Pittsburgh [27, 29, 77], resulting in a notable presence of delivery robots in urban environments. Consequently, there is a growing amount of user-generated content on social media about people's encounters with these robots. These resources provide valuable data for HRI researchers to study real-world deployment scenarios and people's attitudes towards delivery robots, which can offer insights for designing better external interfaces to facilitate interactions between road and sidewalk users and delivery robots [62].
To inform the interaction design of delivery robots through insights from real-world deployment scenarios and the public's attitudes towards delivery robots, we conducted an online ethnographic study. Our study utilised a systematic search approach to collect user-generated videos depicting delivery robot operations in urban spaces on video-sharing platform TikTok,3 along with their corresponding comments. We conducted video content analysis, identifying scenarios in which effective communication from the delivery robot is essential to facilitate smooth interactions between the robot and sidewalk users, as well as people's behavioural patterns when encountering robots. Furthermore, through a thematic analysis of the corresponding comments on these videos, we identified several themes regarding people's attitudes towards delivery robots, including acceptance, perceptions and information needs. This study makes twofold contributions. First, the identified scenarios delve into potential design opportunities to augment communication between delivery robots and other road and sidewalk users in complex urban contexts and situations, going beyond the conventional focus on path negotiation. Second, our triangulated analysis of videos and comments provides insights into design considerations for the interaction design between delivery robots and other road and sidewalk users.

2 Related Work

2.1 Delivery Robots in Urban Environments

ADRs have become one of the latest personal delivery innovations, promising to provide more efficient, environmentally friendly and flexible delivery options [24, 42], with a projected growth to fulfil 85% of last-mile deliveries by 2025 [59]. This emerging technology has spurred a growing body of academic literature, including works on operational improvements such as efficiency optimisation [82] and human-aware social navigation [60], economic perspectives such as consumer acceptance of delivery robots [1, 45], and regulatory frameworks [34].
The potential of delivery robots to enhance the efficiency of last-mile delivery has been demonstrated by promising results from pilot programs launched by companies such as Starship Technologies across the United States and Europe [85]. These programs have reported average delivery times of less than 15 minutes in California, suggesting that delivery robots could serve as a viable alternative to traditional delivery methods (e.g., carried out by delivery drivers) [77]. However, despite their potential benefits, issues arise due to their use of shared spaces with other road and sidewalk users, concerning matters such as traffic congestion, hindered pedestrian mobility, increased risks of collisions, accessibility barriers for people with disabilities [7, 27, 49, 59] and the potential to induce irritation, anxiety or frustration among people [9]. Thus, there is a growing need to study the impact these delivery robots have on co-present road and sidewalk users. Rosenthal-von der Pütten et al. [72] have stressed the importance of a holistic, human-centred approach that accounts for the interactions with these bystanders. Several existing observation studies have provided a brief overview of people's reactions towards these robots in public environments; however, there has been little research on how people interact with delivery robots in public traffic settings.
Several observation studies were conducted through a Wizard of Oz (WoZ) approach [16], wherein operators remotely controlled the robot's navigation while researchers observed people's interactions with the robots in public spaces. Vroon et al. [92] pointed out the shortcomings of existing social navigation approaches in catering for the dynamic interactions between delivery robots and pedestrians. Taking a different approach, they conducted a field observation study using a remote-controlled robot that deliberately ignores people in order to elicit actual conflicts in real-world contexts. Their findings suggested that individuals tended to actively avoid the robot or pause to observe it due to the novelty of the robot's presence. Abrams et al. [2] built a delivery robot mock-up controlled by a human operator and conducted a field observation study to examine HRIs in an urban setting in Aachen, Germany. Their study found that children often exhibited exploratory behaviours in response to the robot, while some adult pedestrians demonstrated a lack of acceptance or resistance towards the presence of these delivery robots on sidewalks. Following a similar approach, van Mierlo [90] conducted a field observation on a sidewalk in a park in Utrecht. Their observation indicated that the majority of people ignored the presence of the prototypical robots, while a smaller percentage of people exhibited a fleeing response when encountering them.
In addition to WoZ studies, pilot delivery robot programs provide researchers with opportunities to observe the real-world deployment of these robots. Gehrke et al. [27] analysed field-recorded videos captured during a 1-week delivery robot pilot program conducted on the Northern Arizona University campus. They found the presence of delivery robots disrupted the mobility and efficiency of other road and sidewalk users, as they had to modify their paths to avoid potential collisions with the robots. Dobrosovestnova et al. [21] conducted a field study in Tallinn, Estonia, to examine the operation of delivery robots in challenging conditions (i.e., snow-covered sidewalks). The researchers found overall positive reactions from people towards the robots, and in instances where the robots got stuck in the snow, some people voluntarily assisted the robots to resume their operation. In a recent observation study conducted in Pittsburgh [97], researchers found that delivery robots occasionally caused distractions and obstructions for sidewalk users. Similar to Dobrosovestnova et al.'s study [21], they also observed instances where people voluntarily helped immobilised robots.

2.2 AV–Pedestrian Communication

Developing external interfaces to facilitate interactions between AVs, ranging from large transportation vehicles to smaller mobile robots, and other road users is crucial for building trust in automated systems [38] and has been a key focus of research in the field of pedestrian-vehicle interactions. Extensive research has been conducted on exploring a wide range of eHMIs to effectively communicate AVs’ status, intentions [19, 95] and awareness [57, 91] to pedestrians, leveraging various modalities, such as on-road projections [55, 61], on-vehicle displays (e.g., LED light bands [19]) and augmented reality (AR) [89]. However, despite the wide range of communication strategies and channels that have been investigated, the majority of these studies are limited to traditional traffic scenarios, such as uncontrolled zebra crossings [13, 20], and did not encompass the broad range of scenarios in which delivery robots and other road and sidewalk users may interact in shared spaces [46].
Delivery robots, in contrast to large transportation vehicles, frequently navigate sidewalks in close proximity to a broader range of road users and sidewalk users [97]. This unique operational context presents communication challenges that extend beyond mere path negotiation at intersections, which are typically the main focus of AV–pedestrian communication design concepts. However, in comparison to the extensive body of work on communication design methods for large transportation AVs, research specifically focused on the design of communication for delivery robots is relatively limited. Kannan et al. [44] acknowledged this gap and conducted an online survey to investigate the comprehensibility of display and light-based interfaces that convey the delivery robot's navigational intent to pedestrians under common navigation scenarios. Inoue et al. [40] designed an AR display that showed real-time information about a delivery robot such as the operation status, destination and speed and found that it positively reduced user anxiety around them. However, the design was evaluated by pedestrians standing at a fixed position, which limits its real-world applicability. This limitation has also been recognised in a recent literature review [69], which highlighted that the majority of empirical studies on AV–pedestrian communication have primarily focused on lateral interactions at pedestrian crossings. The review pointed out the need for investigating more diverse interaction configurations of AV–pedestrian interactions in shared spaces. Furthermore, the communication needs of road users other than pedestrians, such as car users, have been largely overlooked in the existing literature. Therefore, a more comprehensive investigation is needed to understand the challenges and opportunities for interaction between delivery robots and other road and sidewalk users in these dynamic and complex environments.

2.3 Social Media as a Resource for HRI Research

The examination of social media content [66]—commonly also referred to as online ethnography [88] — has emerged as a valuable approach in HRI research for assessing public attitudes and perceptions towards robots, as well as exploring HRIs in real-world settings [21, 37, 62, 76, 86, 99]. Strait et al. [86] conducted a study examining the commentary on YouTube4 videos depicting robots to investigate the public's opinions regarding the emergence of highly humanoid robots compared to robots with more prototypical robotic appearances. The results were consistent with the theory of the uncanny valley [33], as people more frequently expressed aversion towards highly humanoid robots. Moreover, the discourse observed on online platforms highlighted concerns regarding the potential ‘technology takeover’ associated with the robots’ realistic appearances. Following a similar approach, the study conducted by Hover et al. [37], analysed online comments on videos that featured robots with varying degrees of human-like attributes and gender to investigate how people perceive and interact with humanoid robots. In the context of delivery robots, Dobrosovestnova et al. [21] analysed online comments as a supplementary resource to their field observation study on delivery robots facing operational challenges on snow-covered sidewalks. They pointed out the significance of considering ethical implications when commercial technology relies on the assistance of bystanders to accomplish its tasks, as reflected in the online comments. Lee and Toombs [51] analysed online discourse about delivery robots in a university's subreddit, shedding light on people's perception of delivery robots as objects of affection and social members. However, their study had demographic limitations since it focused on a specific group of people (i.e., campus students from the university where the delivery robot was deployed) and the social media platform they investigated.
Apart from textual comments and posts, video-sharing platforms such as YouTube or TikTok have become an increasingly popular source of empirical data, utilised by many HCI researchers to gain insights into people's natural interactions with technology outside the lab [6, 8, 64]. This is particularly valuable in the context of shared spaces, where human–robot encounters are often casual and spontaneous, and natural and intuitive reactions to robots are difficult to study in a controlled laboratory setting [2]. Thus, behaviours observed in user-generated videos may have emerged more naturally compared to those in controlled lab studies, where participants are assigned tasks to complete [62]. Nevertheless, leveraging social media data comes with inherent limitations, such as the lack of demographic information about content creators [86] and potential bias in commenting [47]. Moreover, some researchers raised concerns that social media videos might be performative or staged, thus not accurately representing natural behaviours [64].
So far, the analysis of user-generated videos in HRI research is relatively rare. To the best of our knowledge, the recent study by Nielsen et al. [62] is the only online ethnography study that analyses user-generated YouTube videos to investigate unguided interactions between people and public service robots. While their study provides valuable insights for designing service robots to operate effectively in complex and unstructured environments, it solely focuses on contexts in which the robot intends to provide a service to people and is limited to indoor settings (e.g., shopping malls and train stations). Our study, on the other hand, aims to investigate casual encounter scenarios between delivery robots and passing road users (i.e., those without the intention of engaging in interaction) in urban traffic settings.

2.4 Summary

In summary, the deployment of delivery robots and the interactions between these robots and other road and sidewalk users in shared urban spaces pose complex challenges. While progress has been made in optimising robot efficiency [82], implementing social navigation [60] and developing eHMIs for traditional traffic settings [13, 20], there is a need for a more comprehensive understanding of the dynamic and complex interaction scenarios that emerge through real-world deployments of delivery robots [69]. Delivery robot operations are currently limited to small-scale pilot studies [27, 29, 77, 97], which presents geographical and temporal constraints for researchers aiming to observe their real-world operations. Moreover, observing people's natural and spontaneous behavioural responses to encountering robots is challenging in lab studies, where participants are typically given particular tasks to perform [62]. Our work addresses these challenges by leveraging social media content to identify real-world interaction configurations and gain insights into people's attitudes towards delivery robots. This analysis explores the design contexts for interactions between delivery robots and other road and sidewalk users, providing insights to inform the design of effective interaction strategies.

3 Methodology

To address the current lack of empirical investigation into the real-world deployment of delivery robots, we conducted an online ethnography study [88] to analyse user-uploaded videos on TikTok that capture encounters between delivery robots and the recording person or other surrounding people. We analysed those videos to identify real-world scenarios that necessitate communication from delivery robots to facilitate smooth interactions with road and sidewalk users. Additionally, we conducted a thematic analysis of comments in response to videos to gain further insights into people's attitudes towards delivery robots and how these can inform the interaction design of delivery robots.
In sum, our online ethnography was guided by the following research questions:
RQ1: What are the real-world encounters documented online that reflect scenarios in which communication between delivery robots and road and sidewalk users is necessary?
RQ2: What are the public's attitudes towards delivery robots, and how can these attitudes inform the interaction design of delivery robots?

3.1 Data Collection

3.1.1 Initial Search Phase.

To ensure a systematic and comprehensive approach, we conducted an initial search across diverse video-sharing platforms such as TikTok and YouTube, as well as the search engine Google, to obtain a broad overview of videos depicting interactions with delivery robots. After trialing various combinations of keywords across these platforms, we selected TikTok as the most suitable platform for our study due to its extensive collection of user-generated content that captures genuine and spontaneous encounters with delivery robots. Furthermore, we decided on a search keyword strategy of combining technology-related terms and case-specific keywords (as proposed in prior research studies [3, 50]), and ultimately settled on the search terms delivery robot + lost and delivery robot + why. To broaden the scope of the search, the names of three popular delivery robots, including Starship robot, Coco robot5 and Postmates robot6 were included as alternative search terms for delivery robot. To ensure that our search was comprehensive yet efficient, we followed the stop criterion suggested in previous research [3, 62] and stopped the search after reviewing at least 25 successive videos that were deemed irrelevant.
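For illustration, the sketch below shows one way the keyword-combination strategy and the stop criterion described above could be operationalised. It is a minimal sketch, not the tooling used in this study (the search was carried out manually on TikTok); the function names and the is_relevant callable are hypothetical stand-ins for the manual relevance judgement.

```python
# Illustrative sketch only: combining technology-related terms with case-specific
# keywords and applying the stop criterion of 25 successive irrelevant results.
from itertools import product

TECH_TERMS = ["delivery robot", "Starship robot", "Coco robot", "Postmates robot"]
CASE_TERMS = ["lost", "why"]

def search_queries():
    """Build the search queries from technology terms and case-specific keywords."""
    return [f"{tech} {case}" for tech, case in product(TECH_TERMS, CASE_TERMS)]

def collect_until_stop(results, is_relevant, stop_after=25):
    """Walk a platform's result list and stop once `stop_after` successive
    videos have been judged irrelevant (stop criterion following [3, 62])."""
    kept, irrelevant_streak = [], 0
    for video in results:
        if is_relevant(video):          # stands in for the manual judgement
            kept.append(video)
            irrelevant_streak = 0
        else:
            irrelevant_streak += 1
            if irrelevant_streak >= stop_after:
                break
    return kept
```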
The search was conducted by the first author between 5 November and 12 November 2022, and yielded an initial set of 612 video samples. Meta information for each video was recorded during the search process, including the link to the video, the upload date, the comments, the video's views and the number of likes. Table 1 provides an overview of all the delivery robots included in the dataset.
Table 1.
Name | Level of autonomy (a) | Communication modality | Main deployment regions (b)
Starship | Level 4 to Level 5 | Natural language speech, sonic notification, flashing lights | The U.S. (e.g., California, Washington, D.C.) and Europe (e.g., Helsinki, London)
Kiwibot | Level 2 to Level 3 | On-screen displayed eyes | Cities and college campuses of the U.S. such as California, Arizona and Florida
Tiny Miles (c) | Level 2 to Level 3 | On-screen displayed eyes, flashing lights | Cities in the U.S. such as North California and Miami
Coco | Level 4 | Flashing light, sonic notification | Cities in the U.S. such as Santa Monica, Los Angeles and Miami
Postmates | Level 4 | LED lights in eye shape, flashing light | Cities in the U.S. such as Los Angeles, San Francisco and Miami
Scout (d) | Level 4 | Flashing light | Cities in the U.S. such as California, and Atlanta, Georgia
Table 1. Delivery Robots Included in the Study
(a) The autonomy level information presented in the table was obtained from the official websites of the respective technical companies associated with each robot; the autonomy level classification is based on the definitions provided by SAE International [41].
(b) The deployment area information presented in the table was obtained from the official websites of the respective technical companies.
(c) https://tinymile.ai/, last accessed: April 2023.
(d) https://www.aboutamazon.com/news/innovation-at-amazon/, last accessed: April 2023.

3.1.2 Video Screening and Filtering.

After collecting the initial video samples, we conducted two rounds of screening and filtering to ensure the quality of the data. During the first round, duplicated and inaccessible videos were removed following the common criteria used in the initial filtering process [50, 62, 64, 73], which resulted in 475 videos. In the second round, we adopted similar exclusion criteria used in [50, 62] to exclude videos that did not feature a delivery robot in operation or contained advertisements, staged acts or non-English speech. The exclusion of non-English videos, totalling 17, was based on concerns regarding potential misinterpretation and loss of originality due to translation. Additionally, videos with excessive editing, disrupted chronological order [43] or rapid short clips were removed, following the concerns raised by [62] regarding the impact on the validity and neutrality of the recording. Two videos that were no longer accessible during the later analysis process were also excluded. In the end, the dataset consisted of a total of 117 videos that were eligible for analysis.
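As a purely illustrative summary of the two screening rounds, the sketch below treats each video as a record whose flags are filled in during manual review; the field names are hypothetical and do not correspond to any TikTok export format.

```python
# Hedged sketch of the two-round screening in Section 3.1.2; all flags are
# assumed to be set by hand during manual review of each video.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoRecord:
    url: str
    accessible: bool
    duplicate_of: Optional[str]        # link of an earlier copy, if any
    shows_robot_in_operation: bool
    is_advertisement_or_staged: bool
    language: str                      # e.g., "en"
    heavily_edited: bool               # excessive editing, broken chronology, rapid clips

def first_round(videos):
    """Round 1: drop duplicated and inaccessible videos."""
    return [v for v in videos if v.accessible and v.duplicate_of is None]

def second_round(videos):
    """Round 2: apply the content-based exclusion criteria."""
    return [
        v for v in videos
        if v.shows_robot_in_operation
        and not v.is_advertisement_or_staged
        and not v.heavily_edited
        and v.language == "en"
    ]
```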

3.1.3 Comment Extraction.

To gain further insights into the public's attitudes towards delivery robots, we extracted the top 50 independent comments (i.e., comments that are not replies or threads) from each of the 117 eligible videos. We decided on this criterion to address the wide variation in comment counts across videos and to create a dataset that is both rich and manageable, thus allowing us to maintain a balance between in-depth analysis and practicality. This comment extraction protocol was adapted from previous online ethnography studies in HRI [37, 86]. To ensure the richness and diversity of video scenarios, we included videos with a lower number of comments, even though similar studies tend to exclude such videos if they have fewer comments than the number they aim to extract. Thus, in cases where the number of comments was less than 50, all eligible comments were extracted.
To standardise the dataset, the comments were further processed by all three coders (i.e., the first, third, and fourth authors) based on a set of exclusion criteria agreed upon by consensus. Comments that met the following exclusion criteria were excluded: (1) non-English comments; (2) comments unrelated to robots; (3) comments with ambiguous meaning or reference, which could hinder coders from accurately interpreting the underlying sentiments and (4) comments that contained jokes which, upon discussion and agreement among all coders, were determined not to reflect people's true attitudes. The filtering process resulted in a final dataset of 2,067 comments.
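The comment-extraction rule can be summarised as the small sketch below, assuming each comment is a dictionary whose parent_id field marks replies; the field names and the passes_exclusion_criteria placeholder are hypothetical, as the screening against the four criteria was performed manually by the coders.

```python
# Hedged sketch of the comment-extraction protocol in Section 3.1.3.
def extract_top_independent_comments(comments, limit=50):
    """Keep only independent comments (not replies/threads), at most `limit`
    per video; videos with fewer comments keep all eligible ones.
    `comments` is assumed to be sorted by the platform's default ordering."""
    independent = [c for c in comments if c.get("parent_id") is None]
    return independent[:limit]

def passes_exclusion_criteria(comment):
    """Placeholder for the manual screening against the four exclusion criteria
    (non-English, unrelated, ambiguous, joke not reflecting a true attitude)."""
    return comment.get("keep", True)

def build_comment_dataset(videos):
    """Pool the screened comments across all eligible videos."""
    dataset = {}
    for video in videos:
        top = extract_top_independent_comments(video["comments"])
        dataset[video["url"]] = [c for c in top if passes_exclusion_criteria(c)]
    return dataset
```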

3.2 Data Analysis

This section outlines the methodology used in the study for analysing both the video content and the comments posted under the videos. The methodology includes the use of thematic analysis [11] to analyse and look for patterns in the comments and an approach inspired by open coding [15] to identify the encounter scenarios in the video content.

3.2.1 Identifying the Scenarios.

We began the video analysis procedure with open coding of the content depicted in the videos [15]. To do so, we first transcribed the spoken words in the videos and annotated the videos with a comprehensive description of the scenarios and the behaviours of individuals captured in the footage, including both the recorder and other people featured in the video. Timestamps were added to the transcripts to offer traceability for the analysis process. Additionally, the behaviours of individuals were coded by all three coders using a coding scheme developed and agreed upon by all coders to supplement the scenario analysis. An example video annotation can be found in Table 2.
Table 2.
Scenario description | Video recorder | People captured in video
The robot operates on the sidewalk (00:00) | - | -
Stops at red traffic lights (00:10) | The recorder shouts with surprise: ‘Oh my god, it's crazy. It stops at the right light!’ (00:11) | A pedestrian smiles to and agrees with the recorder (00:11)
- | - | A pedestrian pretends to kick it (00:15)
Moves when light turns green (00:21) | - | A pedestrian turns his head to look at it (00:23)
Table 2. Example of Video Annotation
Given the aim of RQ1 is to identify the real-world encounters documented online that reflect scenarios that require communication between delivery robots and road and sidewalk users, certain eligible videos from the filtering process may not explicitly reflect this need. For example, some videos may only capture a functioning delivery robot without featuring instances that highlight potential communication breakdowns with passersby. To ensure that only videos containing relevant scenarios were included in the scenario identification process, we applied a set of selection criteria. Specifically, we only included scenarios where the lack of communication between the delivery robot and road and sidewalk users could result in confusion, misunderstanding, degraded experience, or potential interaction failure, resulting in 89 videos. The inclusion of scenarios was determined by the lack of communication that disrupted smooth interactions, as directly observed in the video or articulated by individuals depicted. It should be noted that, although selection criteria were implemented to ensure that only videos containing relevant scenarios were included in the scenario identification process, the comments that accompanied these videos were not excluded from the below-mentioned comment analysis. This decision was made to enable an examination of people's attitudes towards the delivery robots within a broader context, rather than limiting the analysis solely to their opinions on communication breakdown situations.
The selected videos were then grouped and summarised into high-level categories based on the robot's behaviours and the interactions that occurred between robots and road and sidewalk users as observed in the video. The behaviours of road and sidewalk users were first directly transcribed from the videos, then systematically coded and categorised based on recurring patterns. The derived categories of the scenarios and the observed behaviours of road and sidewalk users are reported in the results section.

3.2.2 Comment Analysis.

We employed a combination of deductive and inductive thematic analysis approaches to analyse the comment dataset. The coding process began with a bottom-up approach and was refined through an iterative process to develop a robust coding scheme. In order to incorporate diverse perspectives from researchers in related fields, the data analysis was conducted collaboratively with three coders, including the first author, as well as the third and fourth authors, who are HCI researchers specialising in AV–pedestrian interaction.
The first author conducted a comprehensive examination of the data and selected a representative subset that constituted 10% of the total comments. This subset was subjected to independent coding using an inductive approach by all three coders. The resulting coded data were consolidated into one spreadsheet, and a 1.5-hour meeting was held to review the codes, discuss agreements and disagreements among the coders and deliberate on the initial themes identified. Following the meeting, the coders collaborated to develop and agree upon an initial coding scheme. The coders then applied a deductive approach to independently code another subset of comments (10%) using the collaboratively developed initial coding scheme. The inter-coder reliability check [25] was performed on the second subset of 10% comments, which yielded a moderate level of percentage agreement at 0.65.
To further increase the reliability of the coding scheme, we initiated another discussion to iterate over the initial coding scheme. During the second coding discussion meeting, we adhered to a process similar to that of the first meeting, with a specific emphasis on addressing codes with lower agreement rates, in order to ensure a consistent and coherent understanding of the coding scheme among the three coders. We then applied the revised coding scheme to another subset of the 10% comments. The second round of inter-coder reliability checks yielded a high level of inter-coder reliability [25], indicated by a high percentage agreement of 0.85, and a good Krippendorff's alpha [31] of 0.746. This suggests that the coding process was reliable and consistent and that the codes assigned to the data were valid and accurately reflected the content of the comments, which ensures the credibility of the findings derived from the coded data. Finally, the three coders applied the coding scheme independently to an equal subset of the remaining data. The themes that were identified throughout our comment analysis are presented as part of the results section.
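For reference, the reliability measures reported above can be computed as in the following minimal sketch, which implements simple percentage agreement and Krippendorff's alpha for nominal codes under the assumption of complete data (every comment coded by all three coders). It illustrates the standard formulas rather than the scripts used in this study, and the code labels in the toy example are invented.

```python
# Illustrative computation of percentage agreement and nominal Krippendorff's alpha,
# assuming complete data (no missing codes).
from collections import Counter, defaultdict

def percentage_agreement(units):
    """Share of units on which all coders assigned the same code."""
    return sum(len(set(u)) == 1 for u in units) / len(units)

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data; `units` is a list of per-unit
    code lists, one entry per coder."""
    o = defaultdict(float)                  # coincidence matrix: (value_i, value_j) -> weight
    for values in units:
        m = len(values)
        for i, vi in enumerate(values):
            for j, vj in enumerate(values):
                if i != j:
                    o[(vi, vj)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k) / n                       # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))  # expected disagreement
    return 1.0 - d_o / d_e

# Toy example: three coders, five comments
codes = [
    ["praise", "praise", "praise"],
    ["praise", "praise", "concern"],
    ["concern", "concern", "concern"],
    ["question", "question", "question"],
    ["praise", "praise", "praise"],
]
print(percentage_agreement(codes))                   # 0.8
print(round(krippendorff_alpha_nominal(codes), 3))   # approx. 0.794
```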

4 Video Content Analysis Results

This section presents the results of our video content analysis, which starts with an introduction to the various categories of scenarios where road and sidewalk users require communication from robots to improve their interactions. The related issues that may arise in each situation with the absence of effective communication are also highlighted alongside these categories. We then discuss people's behaviour patterns when encountering robots identified through video content analysis. Our analysis considered various agents of behaviour, including pedestrians, vehicle drivers, cyclists, and other individuals present on sidewalks (e.g., people sitting at cafes). The inclusion of this diverse group aligns with the definition of InCoPs in [72], as they are all stakeholders who may potentially be influenced by the presence of delivery robots.

4.1 Extracted Scenarios

To address RQ1, a detailed analysis was conducted by annotating, open-coding, and clustering the video content. The analysis identified five typical scenario categories where the lack of communication between the delivery robot and road and sidewalk users could lead to confusion, misunderstanding, degraded experience, or potential interaction failure. In this section, we will present these scenario categories and discuss the issues identified during the analysis process.

4.1.1 Scenario 1: The Robot Is Incapable of Performing Its Task in Complex Traffic Conditions.

Despite significant technological advancements that have enhanced delivery robots’ mobility, our analysis found that these robots could still face substantial challenges in complex urban traffic environments, impeding their smooth operation. Unpredictable obstacles (as shown in Figure 1(a)) and diverse urban terrains, such as road curbs (as depicted in Figure 1(b)), can obstruct the path of delivery robots and cause them to become immobilised. Moreover, traffic infrastructures dedicated to human use can present additional challenges for delivery robots. These robots are primarily designed for transportation purposes and lack manipulation functionality, leading to further inefficiencies and delays that require human intervention.
Fig. 1.
Fig. 1. Screenshots of example cases of scenario 1: (a) a delivery robot was blocked by a scooter; (b) a delivery robot was unable to climb up a slope; (c) a pedestrian pressed the traffic light button for a delivery robot (not captured in the screenshot).
In these scenarios, despite passersby showing care towards the delivery robot, the lack of effective communication from the robot creates uncertainty among people about its status and whether they should offer assistance. This is exemplified by the recorder in Figure 1(a) discussing with two other pedestrians whether they should help the robot: ‘What happened? Should we help the robot?’. Furthermore, the absence of communication has the potential to undermine people's trust in the robot. For instance, a comment under a video capturing a stuck robot doubted the capability of the robot, ‘Why are we helping them? They’re meant to be smart[…]’.

4.1.2 Scenario 2: The Robot Abruptly Stops or Redirects, Interrupting Its Smooth Operation.

The sudden stops or redirections of a delivery robot, which disrupt its consistent operating state, can often cause confusion among nearby road users. As shown in Figure 2(a), where the robot suddenly stopped on a sidewalk with no visible obstacles, the recorder of the video falsely assumed that the robot had detected their presence and stopped, saying, ‘It sees me, so it stopped’. A similar situation occurred in Figure 2(b), where the recorder assumed that the delivery robot's repetitive back-and-forth turning was due to their presence obstructing the robot's path, as reflected by their intention to give way for the robot, ‘Hold on, let me get out of your way’. The uncertainty surrounding the cause of delivery robots’ operating interruptions and whether pedestrians are involved can cause pedestrians to hesitate or alter their course, resulting in reduced pedestrian efficiency and potential safety hazards in complex urban environments.
Fig. 2.
Fig. 2. Screenshots of example cases of scenario 2: (a) a delivery robot came to a halt without obstacles in front of it; (b) a delivery robot repeatedly moved back and forth; (c) a delivery robot stopped beside a driver, turned around to proceed in a different direction.
Furthermore, unexpected movement interruptions can also lead to assumptions of robot malfunction, which can decrease people's trust. For instance, in Figure 2(c), the recorder mistakenly assumed that the robot was ‘lost’ when it turned around and headed in a different direction. In a previous study examining people's interactions with service robots in public spaces, unexpected path deviations such as detours were also found to have a negative impact on people's trust in the robot [62]. Therefore, it is essential for delivery robots to provide explanations for changes in their operating state to prevent misunderstandings and avoid disruptions in the traffic flow of other road users.

4.1.3 Scenario 3: The Robot Needs Negotiation with Other Road and Sidewalk Users at Intersections.

Intersections require effective communication between delivery robots and road and sidewalk users to facilitate successful negotiation among multiple parties. Our video analysis found that the path-planning mechanism of delivery robots often assigns themselves the lowest priority at intersections, leading to extended wait times until there are no vehicles on the road before crossing. However, while this prioritisation may be for safety reasons, the lack of communication with vehicle users can hinder traffic efficiency and lead to frustration among drivers. For example, in Figure 3(a), even though the robot did not exhibit any signs of crossing intention, three out of five drivers came to a full stop and waited for the delivery robot to cross, causing unnecessary delays. In a similar situation, a delivery robot's prolonged wait at the intersection made a driver mistakenly believe that they were obstructing the robot's path, resulting in the driver reversing their vehicle to give way to the robot. Moreover, the absence of effective communication can lead to impatience and frustration among drivers. This was exemplified by a driver's angry shouting towards a stopped robot at a zebra crossing (see Figure 3(b)), where the robot had to stop to give way to another vehicle.
Fig. 3.
Fig. 3. Screenshots of example cases of Scenario 3: (a) a delivery robot waiting at an uncontrolled intersection; (b) a delivery robot stopping to yield to another vehicle (not shown in the screenshot) while crossing, thereby blocking a car; (c) a delivery robot waiting alongside pedestrians.
From the pedestrian perspective, even though they are not engaged in the same negotiation process with delivery robots as drivers at intersections, the lack of communication can negatively impact pedestrian mobility, experience and safety when crossing the street. In some instances, when the delivery robot stopped and waited to cross, pedestrians were observed attempting to guide the robot in various ways, which not only impeded their own crossing but also potentially increased the risk of traffic accidents. Moreover, the delivery robot's waiting behaviour can influence pedestrians’ decision to cross. For example, a pedestrian asked a waiting delivery robot ‘Are you waiting for me to move?’ (Figure 3(c)), instead of crossing the street immediately after the traffic light turned green.

4.1.4 Scenario 4: The Robot Encounters Other Road and Sidewalk Users in Close Proximity.

When delivery robots navigate urban environments, conflicts with other road and sidewalk users are inevitable, particularly on narrow sidewalks or in bottleneck traffic situations. While delivery robots can typically avoid pedestrians autonomously, the lack of transparency regarding the rules they follow to navigate around people can raise questions about the right of way in such situations. Figure 4(a) illustrates a scenario where an elderly woman in a mobility scooter had to stop and find a way to navigate around the robot, while another man had to step out onto the driveway. In response to this video, one comment raised the question, ‘Can she not give way?’ Furthermore, the lack of communication can also impact the social interactions of groups of pedestrians and their overall comfort in the urban environment. As shown in Figure 4(b), a group of three pedestrians who were chatting together had to scatter as the robot approached, disrupting their social activity by causing them to stop their conversation and temporarily shift their attention from the interpersonal interaction to the robot's movements.
Fig. 4.
Fig. 4. Screenshots of example cases of Scenario 4: (a) a delivery robot had path conflicts with an elderly woman in a mobility scooter; (b) a group of people scattered because of the delivery robot; (c) a person shouted out of concern about being hit by a delivery robot approaching them.
Moreover, as shown in Figure 4(c), pedestrians may be startled by oncoming delivery robots that lack communication regarding their intention to stop. In this scenario, the video recorder was shouting ‘stop!’ in terror at the robot approaching them due to concerns about being hit by the robot.

4.1.5 Scenario 5: The Robot Does Not Comply with Conventions or Regulations.

Our video analysis revealed instances where delivery robots failed to comply with conventions and traffic regulations, potentially due to the challenges posed by complex urban environments and technological imperfections. A typical example of such a scenario is a video showing a delivery robot ignoring police tape and entering a crime scene, as shown in Figure 5(a). A similar case can be seen in Figure 5(b), where a person tried to direct a robot that had mistakenly entered a marching band to leave. These unexpected behaviours led some people to doubt the reliability of the technology, as demonstrated in a comment under the video of Figure 5(a) referring to the robot's actions as a ‘tech blunder’.
Fig. 5.
Fig. 5. Screenshots of example cases of Scenario 5: (a) a delivery robot ignored the police tape and entered the crime scene; (b) a delivery robot entered a marching band; (c) a delivery robot violated traffic rules by jaywalking.
Furthermore, it is inevitable that delivery robots may make errors while in operation, and in some cases, even violate traffic regulations. As illustrated in Figure 5(c), the robot was observed jaywalking and crossing over the motor vehicle lane. These behaviours elicited comments under the video such as ‘drives as crazy as a human’ or mentions of the potential of ‘causing a car accident’, indicating people's safety concerns about the robots. In addition, errors in robot behaviour can lead to a decrease in people's trust in them, as suggested in [30]. Even though effective communication of the robot's internal state cannot prevent the malfunction from happening, it can still help people better comprehend the situation and plan their own path accordingly, potentially avoiding hazardous outcomes.

4.2 Behaviours of Road and Sidewalk Users

In this section, we present six typical behaviour categories summarised from people's interactions with the robot captured in the video. These behaviour categories provide a comprehensive understanding of the dynamics and responses exhibited by individuals when encountering delivery robots, which can offer valuable insights into the design of interactions between robots and other road and sidewalk users.
To account for the potential influence of being recorded, we highlight the number of behaviour instances initiated by the video recorder, as well as the number of protagonists aware of the recording, when reporting the behaviour instance count.

4.2.1 Attention.

When encountering the delivery robot, the most frequent behaviour among road and sidewalk users that we observed was slowing down or stopping to gaze7 at the robot (n = 59). In two instances, pedestrians even followed the robot for a brief period of time to observe it more closely. Notably, people's attention was more attracted to scenarios where the robot's movement was interrupted or the robot behaved abnormally, as shown in Figure 6.
Fig. 6.
Fig. 6. Alluvial diagram mapping scenarios to road user behaviours. Left column: different scenario categories, including an Other scenarios category for videos that couldn’t be categorised due to brief encounters or lack of contextual information; Middle column: behaviour categories summarised from road users’ interactions with the robot; Right column: road users’ interactions with the robot captured in the videos. (It is worth noting that multiple interaction codes can be applied to one single interaction, as behaviours often occur simultaneously or in combination with one another. The left and middle columns may have a higher count than the total, as some behaviours can fall into both the conversational communication category and another category.)
Furthermore, our observation indicated that some people showed noticeable interest in the robot's perception channel, such as the camera or sensor. This observation is consistent with our comment analysis finding that the robot's perception is one aspect of people's potential information needs. Specifically, five pedestrians approached the front of the delivery robot or got close to its camera to inspect it closely (thereof three were protagonists aware of the recording).

4.2.2 Making Way for the Robot.

During encounters with the delivery robot, road and sidewalk users frequently altered their paths (n = 20, 6 recorders) or stopped (n = 8, 3 recorders) to make way for the robot, particularly when it was in close proximity (as shown in Figure 6). These behaviours suggested that road and sidewalk users were generally respectful and accommodating towards the delivery robot. Notably, some pedestrians even stepped off the sidewalk onto the driveway (n = 1) or the lawn (n = 2) to allow the robot to pass on narrow sidewalks. It is worth noting that people did not only yield to oncoming robots: two pedestrians were observed stopping and stepping aside for a robot coming from behind after noticing its approach.
Apart from pedestrians, vehicle users also exhibited the willingness to yield for the robot (n = 5, two recorders) when encountering a stopped delivery robot at an intersection, with three of them stopping completely in front of the robot to make way for it. In three instances, drivers (all recorders) even backed their cars or drove away to maintain a larger distance from the delivery robot, as they believed that their car was detected by the robot and blocking the robot's path.

4.2.3 Assistance.

Among the videos in our dataset, there were 14 captured instances where the delivery robot was unable to operate independently due to challenging traffic conditions, and in 13 of those cases, the robot received assistance from passing pedestrians. Seven pedestrians physically pushed the delivery robot when it was stuck (thereof five were recorders and two were protagonists aware of the recording), with one of them even observed escorting the robot with their arms surrounding it after it resumed movement. Additionally, six pedestrians assisted the robot by pressing the traffic light button or removing obstacles due to the robot's inability to manipulate traffic infrastructure or move objects (four recorders). Furthermore, our observation also noted instances of expressed joy and excitement following the provision of assistance, as evidenced by laughing and changes in speech tone (n = 7, thereof 4 were recorders and 3 were protagonists aware of the recording).
In addition to offering help when the delivery robot encountered difficulty, 11 road and sidewalk users were observed attempting to aid the robot's operation by directing it through verbal or gestural cues when the robot was not moving or had entered a restricted area (e.g., the crime scene as shown in Figure 5(a)) (six recorders). For instance, in one video where the robot was not moving despite the green traffic light being on, a pedestrian used hand gestures of curling their fingers towards themselves and said ‘come on’ to direct the robot to cross the intersection.

4.2.4 Displaying Etiquette.

Our observations indicate that some individuals interacted with the delivery robot in a socially conscious manner. The demonstration of social etiquette towards the robot suggests that road and sidewalk users may perceive the robot as a social agent rather than a simple machine, which is consistent with the results of the comment analysis. The social interactions observed include greetings upon encountering the robot (n = 10, thereof 8 were recorders and 1 was a protagonist aware of the recording), bidding farewell when it left (n = 8, thereof 5 were recorders and 1 was a protagonist aware of the recording), and expressing apologies after obstructing the robot's path (n = 2, thereof 1 was a recorder and 1 was a protagonist aware of the recording). For example, one pedestrian made a prayer-like hand gesture to express apology towards the robot and gestured an ‘after you’ motion to indicate their intention of yielding the way for the robot after blocking its path.
Moreover, one of the delivery robots included in our study was equipped with verbal communication capabilities to express gratitude to pedestrians who provided assistance. In these instances, all eight individuals responded to the robot's gratitude with expressions such as ‘you’re welcome’ (thereof 6 recorders and 2 protagonists aware of the recording), accompanied by a surprised (n = 8) or joyful emotional expression (n = 7). This observation suggests that reciprocal social etiquette from a robot could lead to positive social interaction between humans and robots.
Fig. 7.
Fig. 7. Themes, categories and codes identified in the comment analysis, along with the number of comment instances for codes. (Sub-categories are not shown in the figure due to the space constraints, and the complete coding scheme can be found in the following tables.)

4.2.5 Interference.

The actions of pedestrians may pose a challenge to the smooth operation of delivery robots. Six pedestrians tested the robot's operation by intentionally stepping in front of it (thereof 2 were recorders and 2 were protagonists aware of the recording). Notably, one of them pretended to tie their shoelaces to mask their intent from the robot instead of standing directly in front of it. Moreover, we observed eight instances of playful behaviours towards the robot, including people playfully chasing it (n = 5, including 3 children, 1 recorder), pretending to kick it (n = 2) or placing a beverage can on top of it as it passed by (n = 1). Notably, no interference from road and sidewalk users was observed in scenarios where robots encountered operational difficulties and were incapable of performing their tasks.

4.2.6 Conversational Communication.

Our video content analysis recorded 43 instances of conversational interactions between the delivery robot and video recorders, with 35 initiated by the recorder or protagonists who were aware of the recording, and 8 initiated by the robot expressing gratitude as mentioned in the above section. Among these interactions, 11 were questions posed by the recorder to the robot, such as inquiring about its destination, ‘Where are you going?’, or checking its status, ‘Are you lost?’ when the delivery robot exhibited less-than-smooth operation. In such cases, three recorders expressed encouragement for the robot, for instance, by shouting ‘You made it!’ when the robot resumed movement from a temporary breakdown. In contrast, one driver expressed frustration by yelling angrily at the robot for blocking their path. Among the observed conversational interactions, 21 instances of verbal communication were related to the social etiquette introduced above.
Table 3.
Category | Subcategory | Code | Definition
Anthropomorphism | Anthropomorphic referral | Gender pronoun | Using gendered language such as ‘he’ or ‘she’ when referring to the robot
Anthropomorphism | Anthropomorphic referral | Anthropomorphic appellation | Using anthropomorphic appellation words like ‘little guy’ or ‘little buddy’ when referring to the robots
Anthropomorphism | Anthropomorphic referral | Robotic appellation | Using robotic appellation words like ‘machine’ or ‘vehicle’ when referring to the robots
Anthropomorphism | Anthropomorphic suggestion | Anthropomorphic suggestion | Suggesting to add anthropomorphic features to the robots
Anthropomorphism | Mentalisation | Ascribing human thought | Ascribing human thoughts when trying to interpret the robot's behaviour
Anthropomorphism | Mentalisation | Ascribing human trait | Ascribing human traits to the robot such as ‘sensitive’
Anthropomorphism | Mentalisation | Ascribing human feelings | Ascribing human-like feelings to the robots such as ‘sad’
Anthropomorphism | Mentalisation | Analogy to human behaviours | Making an analogy between the behaviours of robots and that of humans
Anthropomorphism | Science-fiction influence | Robot domination association | Being reminded of robot domination plot in sci-fi movies when seeing the delivery robot
Anthropomorphism | Science-fiction influence | Robot characters association | Being reminded of robot characters in sci-fi movies when seeing the delivery robot
Social agent | Robot to follow social norms | Social communication | Expecting/appreciating the robot to follow social norms when communicating to people
Social agent | Robot to follow social norms | Social navigation | Expecting/appreciating the robot to follow social norms when navigating around people
Social agent | Robot to follow social norms | Loss of social interactions | Concerning the loss of social interactions comparing to human delivery
Social agent | Human to follow social norms | Social interaction intention | Expressing intentions to engage in social interactions with the robot
Social agent | Human to follow social norms | Social etiquette appreciation | Appreciating people to show social etiquette when interacting with the robots
Novelty and cuteness | Cuteness | Pleasant impression | Considering the robot to be cute or adorable
Novelty and cuteness | Cuteness | Unpleasant impression | Considering the robot to be scary or creepy
Novelty and cuteness | Novelty | Surprise to see the robot | Expressing surprise when seeing the robot
Novelty and cuteness | Novelty | Being impressed by the robot | Considering the robot to be cool or awesome
Table 3. Categories, Subcategories, and Codes Belonging to the Theme: Perception

5 Comment Analysis Results

In this section, we present the results from the thematic analysis of the user-generated comments pertaining to the general public's attitudes toward delivery robots (addressing RQ2). The analysis reveals three broad themes.
The first theme pertains to people's Perceptions of delivery robots, including the tendency for people to anthropomorphise the robots, the perception of the robots as social agents, and the overall impression that delivery robots are cute and novel. The second theme concerns the Acceptance of delivery robots, including people's attitudes towards the robot presence, their willingness to collaborate with delivery robots, and the factors that influence their acceptance. The third theme covers the Information that people would like to know about the delivery robots, such as reasons behind delivery robots’ behaviours, as well as information regarding several technical aspects.

5.1 Perception

5.1.1 Anthropomorphism.

The analysis of user-generated comments revealed that people tend to anthropomorphise delivery robots, despite the robots evaluated in the study featuring predominantly mechanical appearances or exhibiting only minimal anthropomorphic traits (e.g., displayed eyes). This was supported by the frequent use of gendered pronouns (n = 87, 22.1%)8 or personification appellations such as ‘little guy’ or ‘little buddy’ (n = 39, 10.0%) when referring to the robots. In contrast, a smaller proportion of people perceived the delivery robots as mere machines (n = 20, 5.1%), as demonstrated by their use of robotic appellations when referring to the robot, such as ‘box on wheels’. In addition, five comments suggested adding anthropomorphic features to the robot, such as putting ‘googly eyes on them’.
The tendency to anthropomorphise delivery robots is also reflected in people's attribution of human thoughts (n = 73, 18.6%), traits (n = 37, 9.4%), and feelings (n = 30, 7.6%) to the robots. This process, known as mentalisation in psychology [28], has previously been observed in how people interpret robots [58, 75]. People's interpretation of the robot's behaviour was often guided by their assignment of human-like thoughts to it, as demonstrated by one comment assigning the thought 'Why is it stopped like "I remember you! What do you want human […]"' to a robot stopping in front of a person in the video. People also speculated about the robot's characteristics and feelings, describing the robot as 'polite' or 'sensitive', 'embarrassed', 'tired', or 'nervous'. The emotion of being 'scared' (n = 5, 1.3%) was assigned when the robot was observed waiting at an intersection to cross the road or stopping because of human presence. In addition, people often drew analogies between the behaviours of robots and those of humans (n = 47, 12.0%). For instance, one comment described a delivery robot that stopped on the sidewalk as having 'fell asleep'.
Moreover, our analysis identified that science fiction films have an impact on the public's perception of delivery robots. A number of comments mentioned associations with robot domination (n = 39, 10.0%) or well-known robot characters, such as 'Wall-E' (n = 16, 4.1%), upon seeing the robots in the video. Although some of these comments may carry a humorous tone, they underscore the pervasive influence of science fiction narratives in media, such as movies, on shaping people's anthropomorphic perceptions. This finding highlights the role that media and cultural representations play in shaping the public's attitudes toward autonomous robot delivery.
Table 4.
Category | Subcategory | Code | Definition
Robot presence | Positive attitudes | Affection towards robot | Expressing their affection towards the delivery robot
 | | Interests in using/seeing | Being willing to use the delivery robot or to see its deployment
 | | The robot is the future | Considering the robot's deployment as a future trend
 | Negative attitudes | Intended reckless behaviour | Expressing intentions to perform reckless behaviours towards the delivery robot
 | | Reluctance to use | Refusing to use the delivery robot
 | | Aversion towards robot | Expressing their dislike of the delivery robot
Collaboration with robot | Supportive attitudes | Humans should help robots | Expressing the opinion that humans should help robots when they have trouble operating
 | | Sympathy | Expressing sympathy towards the robot in need of help captured in the video
 | | Criticising people for not helping/interference | Criticising people in the video for not helping the robot or for intentionally interfering with the robot's operation
 | | Intention to offer help | Expressing willingness to help the robot in need of help captured in the video
 | Unsupportive attitudes | Opposing humans helping robots | Expressing the opinion that humans should not help the robot
Factors influencing acceptance | Robot capability | Concerns | Expressing concerns about the delivery robot's capability to complete delivery tasks efficiently
 | | Favourable assessment | Expressing a positive assessment of the delivery robot's capability to complete delivery tasks efficiently
 | Vulnerability in tricky situations | Interference by people | Concerns that the robot is not able to complete its task when interfered with by people
 | | Challenging traffic conditions | Concerns that the robot is not able to complete its task when encountering challenging traffic conditions
 | Social impact | Job loss | Expressing concerns about job loss resulting from increased automation
 | Traffic impact | Traffic safety | Expressing concerns that delivery robots will cause accidents
 | | Traffic efficiency | Expressing concerns that delivery robots will reduce traffic efficiency
Table 4. Categories, Subcategories, and Codes Belonging to the Theme: Acceptance

5.1.2 Social Agent.

Our analysis revealed that people tend to perceive delivery robots as social agents, as indicated by their expectation and appreciation of robots’ adherence to social norms (n = 67, 69.8%). In contrast, only a limited number of comments expressed concerns about the potential decrease in social interactions that could result from relying on robots for delivery instead of humans (n = 3, 3.1%), such as ‘(losing) small talks with the delivery drivers.’
Our analysis further highlights the significance of robots’ social communication (n = 49, 51.0%) abilities in determining their perceived sociability. For instance, one of the delivery robots in our study was equipped with the communication ability to verbally request human assistance and express gratitude through phrases like ‘thank you’, which elicited generally positive reactions from the comments (n = 29, 30.2%). Furthermore, politeness is a crucial element of social communication, and people expect robots to display it when seeking human assistance. Some comments criticised the robots for their lack of politeness (n = 3, 3.1%), such as one comment stating ‘not even a please, it can wait’. In addition, non-verbal communication modalities such as facial expressions (i.e., screens displaying simple facial expressions) (n = 7, 7.3%) and music responses (i.e., playing a short music tune after customers picked up their delivery) (n = 4, 4.2%) received positive feedback in 11 (11.5%) comments, which could also contribute to the delivery robot's perceived sociability.
In addition to social communication abilities, people also expect robots to navigate around other road and sidewalk users in a socially polite manner (n = 10, 9.6%). For instance, eight comments considered it polite behaviour when the robot stopped or altered its trajectory to give way to other road and sidewalk users. In contrast, in a video where an elderly woman in a mobility scooter was giving way to a robot, two comments argued that the robot should have given way to the woman as a sign of politeness.
People's intentions to interact socially with delivery robots, as well as their appreciation of people in the videos doing so, also demonstrated their perception of these robots as social agents. Twenty-six (17.1%) comments expressed intentions to engage in social interactions with the delivery robots, including actions such as greeting, 'hug(ging)' and 'hold(ing) hands'. Furthermore, eight comments (8.3%) expressed appreciation for individuals in the video who demonstrated social etiquette when interacting with the robots. In one video, a person's response of 'You are welcome' to the robot's gratitude elicited a positive reaction from a comment, which stated, 'It made me smile and giggle when it thanked her and she said "you're welcome".'

5.1.3 Cuteness and Novelty.

Our analysis revealed that people have generally positive impressions of the delivery robot (n = 113, 57.7%), with cuteness as a type of attractiveness being the predominant impression that people associate with the robot (n = 96, 49.0%). This was indicated by the adjectives used to describe them, such as ‘cute’ or ‘adorable’. While this could be related to people's tendency to anthropomorphise robots, three commenters explicitly pointed out that they found the robot to be cute despite its mechanical appearance, e.g., ‘WHY ARE THEY SO CUTE?!? They’re boxes on wheels and I still have feelings for them!’. In contrast, unpleasant impressions of robots were relatively rare (n = 12, 6.1%). A few comments used terms such as ‘scary’ or ‘creepy’ to describe the delivery robot.
The novelty of delivery robots is another common impression that they leave on people, as this technology has not yet been widely adopted as a common delivery method in most parts of the world. This is reflected in 73 (37.2%) comments expressing people's surprise upon seeing the delivery robot in the video or asking about it, with phrases like ‘What is that [the robot]?’. In addition, 17 (8.7%) comments used adjectives like ‘cool’ or ‘awesome’ to express admiration for the robot representing an innovative technology.

5.2 Acceptance

5.2.1 Robot Presence.

The results of our comment analysis suggest a generally positive attitude towards the presence of delivery robots as a service in urban settings (n = 119, 68.8%). Specifically, many comments expressed affection for the delivery robot (n = 63, 36.4%) and interest in seeing or using the delivery robot (n = 43, 24.9%). In addition, 13 (7.5%) comments suggested that the robot represents the 'future'. In contrast, a relatively small minority of comments expressed resistance towards delivery robot deployment, with some expressing reluctance to use the service (n = 15, 9.2%) or aversion towards the robots (n = 4, 2.3%). Negative attitudes towards the presence of robots were also reflected in people's stated intentions to perform reckless behaviour towards the robot (n = 35, 20.2%), such as to 'kick it over' or 'ram it with my car.'

5.2.2 Collaboration with Robot.

In addition to the explicit attitudes expressed towards delivery robots, the acceptance of these robots by the general public can also be inferred from people's opinions on emerging human–robot collaborations (HRCs). Most comments (n = 204, 98.6%) expressed supportive views towards humans offering help, or expressed sympathy for robots that encounter operational difficulties and require human intervention. In contrast, only a minority of comments (n = 3, 1.4%) expressed opposition to offering assistance to delivery robots.
Eighty-eight (42.5%) comments expressed agreement with, or explicitly stated, the opinion that people should help robots when they are in need. In several instances, commenters even expressed their intention to assist the robot captured in the video when it encountered operational difficulties (n = 15, 7.2%). For example, one comment stated 'I'd have walked it across the road' in reference to a robot in the video waiting for a long time to cross an intersection. Furthermore, some comments criticised the people in the video for not helping the robot or for intentionally interfering with it (n = 33, 15.9%); one person, for instance, described feeling 'heated' at drivers who did not yield to the delivery robot.
Moreover, we found that emotional connections and sympathy could form with the delivery robot (n = 68, 32.9%), as evidenced by people expressing feelings of sadness (n = 23, 11.1%) or a desire to 'cry' for the robot (n = 11, 5.3%) when it struggled to complete its tasks. The question of the robot's right of way was also raised, with three comments advocating for the robot to have the same rights as pedestrians, as noted by one comment: '[…] cars are supposed to stop for them. They're considered pedestrians, and it's illegal not to stop at crosswalks.' These findings suggest that some people tend to view delivery robots as entities deserving of respect and certain rights in public traffic settings.

5.2.3 Factors Influencing Acceptance.

The comments analysed in our study provide insights into various factors that could impact the acceptance of delivery robot deployments by the general public. A key concern identified in the analysis is the robot's capability to efficiently carry out delivery tasks. While some comments expressed favourable evaluations of the robots’ capabilities (n = 40, 12.1%) and even preferred them over human delivery (n = 5), a larger proportion of comments expressed concerns regarding the robots’ capability to complete delivery tasks (n = 56, 17.0%) or their delivery efficiency (n = 37, 11.2%).
Rather than directly doubting the delivery robots' operational capabilities, 130 comments expressed concerns about the robots' vulnerability in complex or challenging scenarios, particularly their ability to withstand deliberate interference by pedestrians (n = 97, 29.4%), such as bullying or delivery theft. For example, one person regretfully commented: 'The sad thing is as soon as we saw the robots we knew they were going to get stolen and kicked and messed with.' Beyond intentional human interference, challenging road conditions that could hinder the robots' operation emerged as a further concern in some comments (n = 33, 10.0%). For example, one comment expressed worries about the robot's performance in snowy areas: 'Good luck on our snow-covered sidewalks'.
Some comments also addressed the potential impacts of delivery robot deployment on traffic (n = 25, 7.5%) or society at large (n = 42, 12.7%), highlighting the significance of these factors in shaping people's acceptance of delivery robot technology. Concerns regarding the societal impact of delivery robots focused particularly on the prospect of job loss resulting from increased automation (n = 42, 12.7%). With regard to traffic impacts, although delivery robots are smaller and thus less likely to threaten the safety of other road and sidewalk users, the potential for additional hazards on sidewalks could still lead to reluctance to accept the robots (n = 14, 4.2%), as expressed in one comment: 'More hazards on the pavements. It's a no from me'. Besides traffic safety concerns, people also worried about the possible negative impact of delivery robots on traffic efficiency (n = 11, 3.3%), as demonstrated by one comment referring to the video in which a robot blocked traffic as 'the future of waiting.'
Table 5.
Category | Subcategory | Code | Definition
Behaviour information | Reasons for behaviours | Reasons for behaviours | Questioning or making assumptions about the reasons for the robot's behaviours
 | Upcoming actions | Upcoming actions | Questioning or making assumptions about the robot's upcoming actions
Technical information | Operation mode | Controlled by human | Assuming the robot is controlled by human operators
 | | Controlled by AI | Assuming the robot is controlled by AI
 | | Question | Questioning whether the robot is controlled by a human or by AI
 | Robot perception | Robot perception | Questioning or making assumptions about how the robot perceives the environment
 | Robot localisation | Robot localisation | Questioning or making assumptions about how the robot's localisation system (e.g., maps or GPS) works
Table 5. Categories, Subcategories, and Codes Belonging to the Theme: Information

5.3 Information

5.3.1 Behaviour Information.

The comment analysis found that people want delivery robots to communicate additional information beyond their current features of simple flashing lights or facial expressions. The most frequently discussed topic among the comments was the need to understand the reasons behind the robots' behaviours (n = 98, 94.2%). This was evidenced by comments in which people attempted to interpret the robots' behaviours (n = 85, 81.7%) or questioned why they behaved in a particular way (n = 13, 12.5%). For example, one commenter attempted to make sense of the unexpected path planning of a delivery robot that drove off the sidewalk and onto the lawn, suggesting: 'Is that his tracks from before? Maybe it's done the route before and it goes in the exact same pattern'. Compared to the reasons behind the robots' behaviour, the robots' upcoming actions received less attention among commenters (n = 6, 9.6%). For instance, one commenter speculated that 'after the robot waits 5 minutes it auto returns' in response to a video depicting a delivery robot stopping on a sidewalk.

5.3.2 Technical Information.

Technological knowledge related to delivery robots is another common topic of discussion. The robot's mode of operation, specifically whether it is controlled by humans or operates autonomously, was the most frequently discussed aspect (n = 62, 72.1%). The majority of these comments held the belief that current technology does not allow for fully autonomous operation and that delivery robots are therefore controlled manually by human operators (n = 41, 47.7%). Although our study included robots with varying levels of autonomy, either fully autonomous (e.g., the Starship robot) or assisted by human operators (e.g., the Coco robot), people's assumptions about robot control suggest a limited understanding of the technology and underline the need to make the delivery robot's operational mode visible to road and sidewalk users.
Furthermore, specific technical information regarding the delivery robot's perception and localisation was of interest to some commenters (n = 24, 28.0%). Comments concerning the robot's sensors (e.g., camera and lidar) and inquiries about whether the robot was 'watching' revealed curiosity about how the robot perceives its surroundings (n = 17, 19.8%). The localisation system, such as maps and GPS, was another topic of interest in several comments (n = 7, 8.1%). For instance, in one video where the delivery robot chose a circuitous path on the pavement instead of a more direct route, one comment assumed that the robot must have 'mapped all pavements separately from streets', attempting to justify the robot's behaviour. These comments indicate a need to make more technical details of delivery robots available to other road and sidewalk users, which could enhance public understanding of the robots' capabilities in urban environments, thereby fostering greater trust and acceptance.

6 Discussion

Building upon the findings derived from our video and comment analysis, we have identified several design opportunities for interactions between delivery robots and road and sidewalk users. These design opportunities extend beyond the primary communication purpose of facilitating co-navigation between robots and pedestrians on sidewalks, embracing a broader spectrum of road and sidewalk users and encounter scenarios and recognising the diverse interactions that can occur in different urban environments. Moreover, our investigation has revealed people's additional communication needs for robot transparency information, which can foster a better comprehension of the robot's behaviour. Additionally, the potential of involving bystanders in supporting the robot's operation presents an intriguing avenue for further exploration. In the subsequent sections, we discuss the identified design opportunities and offer design considerations that draw from both our detailed analysis and related HRI literature.

6.1 Path Negotiation with Diverse Road and Sidewalk Users

Previous studies have investigated how robots can communicate their motion intentions to humans to facilitate path negotiation, which resembles the situations observed in the videos where conflicts occur on sidewalks and pedestrians need to negotiate a path with delivery robots. These studies have examined various communication methods, such as ground-projected arrow graphics [14, 32, 81] to indicate directional intent, and AR through personal devices to convey future trajectory [93, 98]. The use of intuitive graphics has been suggested as an effective approach that does not require extensive training [32, 81], making it an applicable solution for path conflict scenarios where delivery robots interact with pedestrians. However, these studies have primarily been conducted in controlled lab settings with adult participants, raising questions about the generalisability of their findings to real-world path negotiation scenarios involving a more diverse demographic. In our study, we observed, for example, the involvement of children and people with different mobility levels. Similarly to Bennett et al. [7], who documented a case where an individual with mobility issues panicked when blocked by a stopping robot at a crossing, we observed sidewalk users with reduced mobility (e.g., the elderly woman in a mobility scooter in Figure 4(a)) having difficulty navigating around the delivery robot. This further underscores the importance of communication for those with limited mobility, given that their ability to perceive the communication from their physical perspective and to comprehend it may differ from that of the general population. Therefore, it is necessary to consider the diverse needs and capabilities of individuals with varying demographic conditions when designing delivery robot communication for path negotiation.
Although delivery robots primarily operate on sidewalks, they occasionally need to move beyond these pedestrian-only zones. This poses communication challenges among multiple parties, including pedestrians and vehicle users, as intersections serve as dynamic spaces where various road and sidewalk users interact and must navigate their paths through coordinated efforts. From the perspective of a car user attempting to pass through an intersection, delivery robots waiting at uncontrolled zebra crossings could be perceived as pedestrians, given that they leave the sidewalk and cross the pedestrian crossing. Existing research on communication design for AVs has predominantly focused on their ability to transmit information to pedestrians, leaving a communication gap between delivery robots and other vehicles at intersections [70]. Pedestrians often signal their intent to cross the road to drivers using various methods, such as eye contact and hand gestures, to ensure safe and efficient negotiation. Delivery robots lack these communication channels, and combined with uncertainty over their right of way, negotiating paths with drivers becomes particularly challenging, leading to decreased intersection efficiency [70]. Our video analysis also revealed instances where drivers mistakenly believed that delivery robots had detected their presence, causing them to reverse to give way (as shown in Figure 3(a)). Thus, communication between delivery robots and car users at intersections needs to clearly signal the right of way, which is essential for promoting safe and efficient negotiation, thereby avoiding confusion and minimising delays. Additionally, the communication modalities employed by delivery robots need to remain accessible to car users, given the physical distance and spatial separation between them.
Research has shown that pedestrians are heavily influenced by the behaviour of others when making crossing decisions [22]. For example, when a neighbour starts to cross the street, a person is 1.5–2.5 times more likely to follow them. Our study also identified scenarios where pedestrians' crossing behaviour was influenced by a delivery robot waiting alongside them. For example, some pedestrians hesitated at uncontrolled zebra crossings because of the waiting delivery robot beside them, even though the pedestrians had the right of way and were safe to cross. In addition to reducing pedestrian crossing efficiency, the influence of delivery robots could also pose potential hazards if it leads pedestrians to make unsafe decisions, such as following the robot and running a red light. Consequently, investigating the impact of delivery robots on pedestrian behaviour becomes imperative. Furthermore, effective communication is needed for the robot to convey its operational state at intersections, whether waiting or moving, to surrounding pedestrians to prevent confusion and ambiguity. However, it is equally important to design communication strategies that do not exert undue influence on pedestrians, thereby avoiding potential risks. Hence, careful consideration must be given to striking a balance between communicative effectiveness and minimising any leading effect on pedestrian behaviour. In our study, we observed that individuals often examine a robot's movements to infer its intentions, highlighting the potential for incorporating such implicit communication strategies into the design of urban robots' communication approaches. On this note, leveraging communication embodied in the robot itself, such as its motion and physical features as suggested by Schött et al. [79], can be an effective means of conveying the robot's internal state without interfering with the interaction itself.

6.2 Additional Information to Support Delivery Robot Operation Transparency

Transparent communication is critical for establishing trust in intelligent systems [35, 79]. In human–robot collaborative contexts, transparency in communication can improve cooperation and teamwork efficiency [56, 80]. Our study highlights the need for transparent communication in the context of delivery robots operating in urban shared spaces, even for road and sidewalk users who do not directly interact with the robot. This is evident from the recurring discussions of delivery robot behaviour in people's comments. In addition to inquiries about the reasons behind the robot's behaviours and actions, some people specifically expressed curiosity about the technical functioning of the robot, such as its operational mode, perception system, and localisation system. This curiosity was further supported by our video content analysis, which revealed pedestrians closely examining the robot's camera and intentionally testing its detection abilities. Furthermore, the comment analysis results indicate that the robot's capabilities play a significant role in the acceptance of this technology. Therefore, providing transparent information beyond conveying motion intent for path negotiation, including clarification of the robot's decision-making and capabilities, is crucial for enhancing public understanding and acceptance of delivery robots.
Considering the complexity of offering such in-depth explanations, Schött et al. [79] suggested using communication modalities that are inherent to the robot, such as on-screen text and spoken explanations. Nonetheless, it is important to acknowledge that the work by Schött et al. [79] primarily focuses on scenarios where people have sufficient time to interpret the information, whereas encounters with delivery robots often involve brief, at-a-glance interactions. Therefore, immediate interpretability is essential for delivery robots to communicate transparency information, making lengthy text or speech explanations unsuitable in such contexts.
In addition, our study found that people's informational needs may differ based on the situation: for example, people seem to have a heightened interest in seeking additional information when the delivery robot's behaviour deviates from expected norms or appears faulty. This is evidenced by the increased gaze behaviours towards the robot in those situations (as shown in Figure 6), for example, when the robot abruptly stops or violates regulations (as exemplified by Figures 2 and 5). In contrast, we observed that people paid less attention to the robot in situations where it operated smoothly without any noticeable incidents (as shown in Figure 6). Recent research on robot transparency communication has highlighted the potential issue of information overload when providing pedestrians with too much information that seems unnecessary to them [98]. Therefore, to address people's informational needs in a more targeted manner and avoid information overload, the design of the delivery robot's transparency communication could provide additional information upon request, depending on the situation or by detecting people's attention level (e.g., using gaze detection technology [54]).
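To make this situation-dependent, on-request disclosure more concrete, the following minimal Python sketch illustrates one possible triggering heuristic that combines unexpected robot behaviour with a bystander's sustained attention. The state labels, gaze-duration input, and threshold are illustrative assumptions rather than elements of any deployed system.

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    behaviour: str          # hypothetical labels, e.g., "cruising", "stopped_unexpectedly"
    rule_violation: bool    # e.g., the robot briefly left the sidewalk

def should_offer_explanation(state: RobotState, gaze_duration_s: float,
                             attention_threshold_s: float = 2.0) -> bool:
    """Offer transparency information only when it is likely wanted.

    Combines (a) whether the robot's behaviour deviates from expected norms and
    (b) whether a bystander's sustained gaze suggests heightened attention,
    to avoid information overload during smooth operation.
    """
    unexpected = state.behaviour != "cruising" or state.rule_violation
    attentive = gaze_duration_s >= attention_threshold_s
    return unexpected and attentive

# Example: a robot that stopped abruptly while a pedestrian has watched it for 3 seconds
if should_offer_explanation(RobotState("stopped_unexpectedly", False), gaze_duration_s=3.0):
    print("Display: 'Waiting for a remote check. I will continue shortly.'")
```

The thresholding shown here is only one way to operationalise "attention"; a deployed system would need to calibrate such parameters against observed gaze behaviour.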
Furthermore, our observations showed that people tend to actively engage in conversational communication with the robot when they perceive its behaviour to be confusing. For example, a pedestrian asked a robot that appeared to be lost: 'Do you know where you are going?'. The recent accelerating progress in large language models [96], such as ChatGPT,9 provides an opportunity to overcome communication barriers between humans and robots through interactive verbal communication strategies. Utilising such advances for the design of transparent robot communication, i.e., using responsive speech to address individuals' inquiries about the robot's internal state, could be a feasible way of providing intuitive communication upon individual request. This may include not only robots responding to direct inquiries about their intentions, but also system responses to implicit user input, for example, recognising patterns in people's gestures and facial expressions. Furthermore, given the anthropomorphic perceptions of robots reflected in the thematic analysis of people's comments, the design of robot speech should be carefully aligned with these perceptions. However, it is crucial to balance conversational interaction with the efficiency of the robot's operation, as engaging in less critical chats with bystanders could delay its primary task, i.e., the delivery of goods.
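As a sketch of how such responsive speech might stay both grounded and brief, the Python snippet below composes a constrained prompt from the robot's current status and passes it to a language model. The `llm_complete` callable, the status dictionary, and the word limit are all assumptions for illustration; the snippet is not tied to any particular model provider or the robots observed in the videos.

```python
def answer_bystander_query(query: str, robot_status: dict,
                           llm_complete, max_words: int = 25) -> str:
    """Compose a short, grounded spoken reply to a bystander's question.

    The prompt restricts the reply to the robot's actual status information and
    asks for brevity, so that chatting does not delay the delivery task.
    `llm_complete` is assumed to be any callable mapping a prompt string to a
    completion string (e.g., a thin wrapper around a hosted language model).
    """
    prompt = (
        "You are the voice of a sidewalk delivery robot. Answer the pedestrian "
        f"in at most {max_words} words, using only the facts below.\n"
        f"Status: {robot_status}\n"
        f"Pedestrian: {query}\nRobot:"
    )
    return llm_complete(prompt)

# Example with a stubbed model, for the question observed in one of the videos
stub = lambda prompt: "Yes, I am heading to the main entrance and should arrive in about two minutes."
print(answer_bystander_query(
    "Do you know where you are going?",
    {"goal": "main entrance", "eta_min": 2, "state": "rerouting"},
    stub))
```

Grounding the reply in a status dictionary, rather than letting the model answer freely, is one way to keep responses truthful and short enough for at-a-glance encounters.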

6.3 Bystander Assistance to Support Autonomous Mobile Robot Operation

While extensive research has focused on improving the technical and functional aspects of delivery robots to address the unpredictable urban environment, such as developing more efficient algorithms and robust robotic systems [52, 68], there are inevitably situations where the robot cannot accomplish its mission without human assistance. Our study identified several common scenarios that are difficult for delivery robots to handle on their own during autonomous operation, such as being blocked by obstacles or being unable to manipulate traffic infrastructure designed specifically for humans (as shown in Figure 1). Furthermore, our comment analysis and video observations revealed that people overwhelmingly express sympathy towards delivery robots that appear to be in need of help and exhibit positive attitudes towards offering assistance to them. This is evidenced by the predominantly supportive views of HRC expressed in the comment analysis, as well as the frequent observations of assistive behaviours from pedestrians. This finding aligns with recent field observations, where researchers found that people voluntarily helped robots that were stuck [21] or removed unexpected obstacles that blocked robots [97]. These findings suggest a design opportunity to leverage bystander assistance as a resource to enhance the efficient operation of delivery robots.
Weinberg et al. [97] reported in their observation study that in some instances where robots needed assistance, the absence of communication from the robots meant that pedestrians had to rely on context clues to make sense of the situation. Our findings add to this, suggesting that individuals hesitate or are unsure how to offer help in the absence of a direct request for assistance from the robot (see Figure 1(a)). To leverage human assistance in robot task operation, several HRI studies have started investigating strategies for robots to request help from people. The conventional approach for robots soliciting help often involves natural language speech as the communication modality [10, 12, 39]. While effective in indoor settings such as domestic spaces or offices [39, 84], its applicability in noisy public spaces with diverse passersby and multiple spoken languages could be limited. Additionally, Hüttenrauch and Eklundh [39] suggested that bystanders' willingness to help robots is influenced by the situation and their own state of occupation, which is particularly relevant in urban encounters with delivery robots, where people are engaged in various tasks and heading to different locations. Therefore, designing effective help request interactions for delivery robots in such dynamic public spaces poses challenges.
Acknowledging the challenges of robot help requests in dynamic public spaces, Holm et al. [36] explored the role of expressive robot movement in effectively eliciting help from individuals. They found that movement increased people's willingness to help the robot, whereas static, beeping, and silent robots received much less assistance. The results from our comment analysis also showed that people's sympathy and intentions to help the robot can be evoked simply by implicit expressions that are intrinsic to a non-anthropomorphic robot, such as its movement. For example, people mentioned feeling 'sad' when seeing the robot struggling to cross the road and wanted to 'hold its hand to cross the street'. At the same time, people's tendency to anthropomorphise and infantilise the delivery robots (as evidenced by the comment analysis), e.g., referring to them as a 'cute little guy', can also contribute to empathetic responses and a willingness to provide assistance, in line with research in psychology suggesting that childlike characteristics can elicit instinctive caring behaviours [67].
In addition to raising people's empathy, expressive movement is also an effective means for robots to convey their incapability and communicate the specific help they require. For instance, repetitive back-and-forth movements can signal the need for help in removing obstacles [48]. The study by Schulz et al. [78] on robot breakdown situations also suggested that carefully designed expressive movements can make a breakdown situation easier to understand. Therefore, the design of the delivery robot's help request communication could consider using implicit expression channels, such as movement, to communicate its incapability and elicit help from bystanders.
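To illustrate what such a repetitive, legible motion cue might look like in practice, the Python sketch below generates a small back-and-forth oscillation around a blocked stopping point, in the spirit of the motion-overlap signalling cited above [48]. The amplitude, period, and sampling interval are illustrative parameters, not values drawn from the studies discussed here.

```python
import math
from typing import Iterator, Tuple

def help_seeking_wiggle(amplitude_m: float = 0.15, period_s: float = 1.2,
                        duration_s: float = 6.0, dt: float = 0.1) -> Iterator[Tuple[float, float]]:
    """Yield (time, forward offset) samples for a repetitive back-and-forth motion.

    The small oscillation around the blocked position is intended as a legible,
    non-verbal cue that the robot is stuck and needs an obstacle removed.
    """
    t = 0.0
    while t <= duration_s:
        offset = amplitude_m * math.sin(2 * math.pi * t / period_s)
        yield t, offset
        t += dt

# A motion controller could track these offsets relative to the stopping point.
for t, x in help_seeking_wiggle(duration_s=1.2, dt=0.3):
    print(f"t={t:.1f}s offset={x:+.2f}m")
```

In a real system, the cue would need to be bounded by safety constraints (e.g., clearance to nearby pedestrians) and paired with other channels, such as a light or a short spoken request, for bystanders who cannot observe the motion.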

6.4 Limitation and Future Work

We acknowledge that there are some limitations to this study. Firstly, online ethnography of user-generated content may be subject to posting bias. For example, individuals who engage in negative behaviours towards delivery robots may be less likely to post videos or comments about their experiences. This could be a possible explanation for the rare occurrence of robot-bullying behaviour in our data compared to previous field studies, where robot-bullying has been observed more frequently [5, 74]. Secondly, it is important not to disregard the potential performative elements that come into play when recorders interact with the delivery robot, such as increased verbal communication with the robot to enhance the video's attractiveness. This could influence not only our video observation results but also the comments responding to the video. Thirdly, the video dataset in our study consisted of content created by and predominantly featuring users of the TikTok platform, which is likely to exhibit demographic biases due to TikTok's user composition (i.e., the majority of TikTok users in the US are aged between 18 and 34 years). This could explain the infrequent observations of child interactions with robots in our data, compared to the more frequent observations of such interactions in a recent field study of delivery robots [97]. Despite these limitations, the online ethnography method offers advantages over field observation, including broader geographical and temporal coverage and reduced observer influence. Additionally, the reliability of our method has been validated in a recent field study [79], as their observations align with some of our findings regarding people's voluntary assistance and verbal communication with the robot.
Furthermore, the limited information available on the deployment history of the delivery robots featured in the videos makes it challenging to differentiate the influence of the novelty effect from that of long-term exposure on people's perceptions of and behaviours towards the delivery robots. Future studies may benefit from comparing longitudinal data from real-world deployments of delivery robots to better understand the impact of familiarity on people's interactions with them. Another limitation of this study is the heterogeneity of user-generated content in terms of length and quality. Although we adopted the strict screening criteria used in similar online ethnography studies [62, 86], it is challenging to ensure that the videos we analysed captured the full extent of interactions between the delivery robots and people. Moreover, while this study focuses on identifying real-world scenarios where communication breakdowns happen, we recognise that examining successful interactions, where communication flows smoothly and effectively, represents a significant opportunity for future research. Such investigations could yield valuable insights into design strategies for enhancing the interactions between urban robots and bystanders, paving the way for improved human–robot dynamics in urban settings.

7 Conclusion

In this study, we systematically analysed 117 user-generated videos posted on TikTok that captured encounters with delivery robots, along with 2,067 comments that responded to these videos. We identified a range of real-world deployment scenarios in which communication between delivery robots and road and sidewalk users is necessary, providing valuable contextual background for the future design of external interactions of delivery robots. Moreover, design opportunities regarding path negotiation among diverse road and sidewalk users, transparent communication upon individual request, and leveraging bystander assistance in robot operations were identified based on the triangulation of video and comment analysis results. Additionally, we discuss several important design considerations that should be taken into account when addressing these design opportunities. Our investigation into the real-world deployment of delivery robots contributes to a better understanding of the design context and provides valuable insights for the design of interactions between delivery robots and road and sidewalk users, which can serve as a foundation for facilitating better integration of delivery robots into the urban landscape.

Footnotes

1
https://www.starship.xyz/, last accessed: April 2023.
2
https://www.kiwibot.com/, last accessed: April 2023.
3
https://www.tiktok.com/, last accessed: April 2023.
4
https://www.youtube.com/, last accessed: April 2023.
5
https://cocodelivery.com/, last accessed: April 2023.
6
https://serve.postmates.com/, last accessed: April 2023.
7
The gaze behaviours of the recorders were not counted because all of the videos were captured from a first-person perspective using only the outward-facing cameras of the recorders’ smartphones.
8
The percentage is calculated in relation to the total count of comments for each sub-category.
9
https://openai.com/blog/chatgpt, last accessed: April 2023.

References

[1]
Anna M. H. Abrams, Pia S. C. Dautzenberg, Carla Jakobowsky, Stefan Ladwig, and Astrid M. Rosenthal-von der Pütten. 2021. A Theoretical and Empirical Reflection on Technology Acceptance Models for Autonomous Delivery Robots. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, 272–280. DOI:
[2]
Anna M. H. Abrams, Laura Platte, and Astrid Marieke Rosenthal-von der Pütten. 2020. Field Observation: Interactions between Pedestrians and a Delivery Robot. Crowdbot Workshop: Robots from Pathways to Crowds: Ethical, Legal and Safety Concerns of Robots Navigating Human Environments. Retrieved from https://publications.rwth-aachen.de/record/801763
[3]
Lisa Anthony, YooJin Kim, and Leah Findlater. 2013. Analyzing User-Generated YouTube Videos to Understand Touchscreen Use by People with Motor Impairments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 1223–1232. DOI:
[4]
E. M. Arntz, J. H. R. Van Duin, A. J. Van Binsbergen, L. A. Tavasszy, and T. Klein. 2023. Assessment of Readiness of a Traffic Environment for Autonomous Delivery Robots. Frontiers in Future Transportation 4 (2023). DOI:
[5]
Franziska Babel, Johannes Kraus, and Martin Baumann. 2022. Findings From a Qualitative Field Study with an Autonomous Robot in Public: Exploration of User Reactions and Conflicts. International Journal of Social Robotics 14, 7 (2022), 1625–1655.
[6]
Ava Bartolome and Shuo Niu. 2023. A Literature Review of Video-Sharing Platform Research in HCI. (2023).
[7]
Cynthia Bennett, Emily Ackerman, Bonnie Fan, Jeffrey Bigham, Patrick Carrington, and Sarah Fox. 2021. Accessibility and the Crowded Sidewalk: Micromobility's Impact on Public Space. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (DIS ’24). ACM, New York, NY, 365–380. DOI:
[8]
Mark Blythe and Paul Cairns. 2009. Critical Methods and User Generated Content: The IPhone on YouTube. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09). ACM, New York, NY, 1467–1476. DOI:
[9]
Susanne Boll, Marion Koelle, and Jessica Cauchard. 2019. Understanding the Socio-Technical Impact of Automated (Aerial) Vehicles on Casual Bystanders. In 1st International Workshop on Human-Drone Interaction. Ecole Nationale de l’Aviation Civile [ENAC], Glasgow, United Kingdom. Retrieved from https://hal.science/hal-02128379
[10]
Annika Boos, Markus Zimmermann, Monika Zych, and Klaus Bengler. 2022. Polite and Unambiguous Requests Facilitate Willingness to Help an Autonomous Delivery Robot and Favourable Social Attributions. In Proceedings of the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1620–1626. DOI:
[11]
Virginia Braun and Victoria Clarke. 2006. Using Thematic Analysis in Psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101. DOI:
[12]
David Cameron, Emily C. Collins, Adriel Chua, Samuel Fernando, Owen Mcaree, Uriel Martinez-Hernandez, Jonathan M. Aitken, Luke Boorman, and James Law. 2015. Help! I Can’t Reach the Buttons: Facilitating Helping Behaviors Towards Robots. In Proceedings of the 4th International Conference on Biomimetic and Biohybrid Systems, Vol. 9222. Springer-Verlag, Berlin, 354–358. DOI:
[13]
Mark Colley, Stefanos Can Mytilineos, Marcel Walch, Jan Gugenheimer, and Enrico Rukzio. 2020. Evaluating Highly Automated Trucks as Signaling Lights. In 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’20). ACM, New York, NY, 111–121. DOI:
[14]
Michael D. Coovert, Tiffany Lee, Ivan Shindev, and Yu Sun. 2014. Spatial Augmented Reality as a Method for a Mobile Robot to Communicate Intended Movement. Computers in Human Behavior 34 (2014), 241–248. DOI:
[15]
Anselm Strauss and Juliet Corbin. 1990. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. (1990).
[16]
Nils Dahlbäck, Arne Jönsson, and Lars Ahrenberg. 1993. Wizard of Oz Studies: Why and How. In Proceedings of the 1st International Conference on Intelligent User Interfaces (IUI ’93). ACM, New York, NY, 193–200. DOI:
[17]
Maartje M.A. de Graaf, Bertram F. Malle, Anca Dragan, and Tom Ziemke. 2018. Explainable Robotic Systems. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18). ACM, New York, NY, 387–388.
[18]
Maartje M. A. de Graaf, Somaya Ben Allouch, and Jan A. G. M. van Dijk. 2019. Why Would I Use This in My Home? A Model of Domestic Social Robot Acceptance. Human–Computer Interaction 34, 2 (2019), 115–173. DOI:
[19]
Debargha Dey, Azra Habibovic, Melanie Berger, Devanshi Bansal, Raymond H. Cuijpers, and Marieke Martens. 2022. Investigating the Need for Explicit Communication of Non-Yielding Intent through a Slow-Pulsing Light Band (SPLB) EHMI in AV-Pedestrian Interaction. In Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’22). ACM, New York, NY, 307–318. DOI:
[20]
Debargha Dey, Azra Habibovic, Andreas Löcken, Philipp Wintersberger, Bastian Pfleging, Andreas Riener, Marieke Martens, and Jacques Terken. 2020. Taming the eHMI Jungle: A Classification Taxonomy to Guide, Compare, and Assess the Design Principles of Automated Vehicles’ External Human-Machine Interfaces. Transportation Research Interdisciplinary Perspectives 7 (2020), 100174. DOI:
[21]
Anna Dobrosovestnova, Isabel Schwaninger, and Astrid Weiss. 2022. With a Little Help of Humans. An Exploratory Study of Delivery Robots Stuck in Snow. In Proceedings of the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1023–1029. DOI:
[22]
Jolyon J. Faria, Stefan Krause, and Jens Krause. 2010. Collective Behavior in Road Crossing Pedestrians: The Role of Social Information. Behavioral Ecology 21, 6 (09 2010), 1236–1242. DOI: https://academic.oup.com/beheco/article-pdf/21/6/1236/17280250/arq141.pdf
[23]
Miguel Figliozzi and Dylan Jennings. 2020. Autonomous Delivery Robots and Their Potential Impacts on Urban Freight Energy Consumption and Emissions. Transportation Research Procedia 46 (2020), 21–28. DOI:
[24]
Miguel A. Figliozzi. 2020. Carbon Emissions Reductions in Last Mile and Grocery Deliveries Utilizing Air and Ground Autonomous Vehicles. Transportation Research Part D: Transport and Environment 85 (2020), 102443. DOI:
[25]
Joseph L. Fleiss, Bruce Levin, and Myunghee Cho Paik. 2013. Statistical Methods for Rates and Proportions. John Wiley & Sons.
[26]
Jodi Forlizzi. 2007. How Robotic Products Become Social Products: An Ethnographic Study of Cleaning in the Home. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI ’07). ACM, New York, NY, 129–136. DOI:
[27]
Steven R. Gehrke, Christopher D. Phair, Brendan J. Russo, and Edward J. Smaglik. 2023. Observed Sidewalk Autonomous Delivery Robot Interactions with Pedestrians and Bicyclists. Transportation Research Interdisciplinary Perspectives 18 (2023), 100789. DOI:
[28]
Heather M. Gray, Kurt Gray, and Daniel M. Wegner. 2007. Dimensions of Mind Perception. Science 315, 5812 (2007), 619–619. DOI:
[29]
Bern Grush. 2022. Personal Delivery Robots: How Will Cities Manage Multiple, Automated, Logistics Fleets in Pedestrian Spaces? Springer International Publishing, Cham, Switzerland, 1254–1265. DOI:
[30]
Kasper Hald, Katharina Weitz, Elisabeth André, and Matthias Rehm. 2021. “An Error Occurred!” - Trust Repair with Virtual Robot Using Levels of Mistake Explanation. In Proceedings of the 9th International Conference on Human-Agent Interaction (HAI ’24). ACM, New York, NY, 218–226. DOI:
[31]
Andrew F. Hayes and Klaus Krippendorff. 2007. Answering the Call for a Standard Reliability Measure for Coding Data. Communication Methods and Measures 1, 1 (2007), 77–89. DOI:
[32]
Nicholas J. Hetherington, Elizabeth A. Croft, and H.F. Machiel Van der Loos. 2021. Hey Robot, Which Way Are You Going? Nonverbal Motion Legibility Cues for Human-Robot Spatial Interaction. IEEE Robotics and Automation Letters 6, 3 (2021), 5010–5015. DOI:
[33]
Chin-Chang Ho and Karl F. MacDorman. 2010. Revisiting The Uncanny Valley Theory: Developing and Validating an Alternative to the Godspeed Indices. Computers in Human Behavior 26, 6 (2010), 1508–1518. DOI:
[34]
Thomas Hoffmann and Gunnar Prause. 2018. On the Regulatory Framework for Last-Mile Delivery Robots. Machines 6, 3 (2018). DOI:
[35]
Daniel Holliday, Stephanie Wilson, and Simone Stumpf. 2016. User Trust in Intelligent Systems: A Journey Over Time. In Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI ’16). ACM, New York, NY, 164–168. DOI:
[36]
Daniel Gahner Holm, Rasmus Peter Junge, Mads Østergaard, Leon Bodenhagen, and Oskar Palinko. 2022. What Will It Take to Help a Stuck Robot? Exploring Signaling Methods for a Mobile Robot. In Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 797–801. DOI:
[37]
Quirien R. M. Hover, Ella Velner, Thomas Beelen, Mieke Boon, and Khiet P. Truong. 2021. Uncanny, Sexy, and Threatening Robots: The Online Community's Attitude to and Perceptions of Robots Varying in Humanlikeness and Gender. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’21). ACM, New York, NY, 119–128.
[38]
Ahmed Hussein, Fernando García, José María Armingol, and Cristina Olaverri-Monreal. 2016. P2V and V2P communication for Pedestrian Warning on the Basis of Autonomous Vehicles. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), 2034–2039. DOI:
[39]
Helge Hüttenrauch and Kerstin Eklundh. 2006. To Help or Not to Help a Service Robot: Bystander Intervention as a Resource in Human-Robot Collaboration. Interaction Studies 7 (11 2006), 455–477. DOI:
[40]
Madoka Inoue, Kensuke Koda, Kelvin Cheng, Toshimasa Yamanaka, and Soh Masuko. 2022. Improving Pedestrian Safety around Autonomous Delivery Robots in Real Environment with Augmented Reality (VRST ’22). ACM, New York, NY, Article 63, 2 pages. DOI:
[41]
SAE International. 2018. Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE, Warrendale, PA, 3016.
[42]
Dylan Jennings and Miguel Figliozzi. 2019. Study of Sidewalk Autonomous Delivery Robots and Their Potential Impacts on Freight Efficiency and Travel. Transportation Research Record 2673, 6 (2019), 317–326. DOI:
[43]
Carey Jewitt. 2012. An Introduction to Using Video for Research. Retrieved from https://eprints.ncrm.ac.uk/id/eprint/2259
[44]
Shyam Sundar Kannan, Ahreum Lee, and Byung-Cheol Min. 2021. External Human-Machine Interface on Delivery Robots: Expression of Navigation Intent of the Robot. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), 1305–1312. DOI:
[45]
Sebastian Kapser and Mahmoud Abdelrahman. 2020. Acceptance of Autonomous Delivery Vehicles for Last-Mile Delivery in Germany – Extending UTAUT2 with Risk Perceptions. Transportation Research Part C: Emerging Technologies 111 (2020), 210–225. DOI:
[46]
Auttapone (Aut) Karndacharuk, Douglas J. Wilson, and Roger C. M. Dunn. 2013. Analysis of Pedestrian Performance in Shared-Space Environments. Transportation Research Record 2393, 1 (2013), 1–11. DOI:
[47]
M. Laeeq Khan. 2017. Social Media Engagement. Computers in Human Behavior 66, (Jan 2017), 236–247. DOI:
[48]
Kazuki Kobayashi and Seiji Yamada. 2009. Making a Mobile Robot to Express Its Mind by Motion Overlap. In Advances in Human-Robot Interaction. Vladimir A. Kulyukin (Ed.). IntechOpen, Rijeka, Croatia, Chapter 7. DOI:
[49]
Le Yi Koh and Kum Fai Yuen. 2023. Consumer Adoption of Autonomous Delivery Robots in Cities: Implications on Urban Planning and Design Policies. Cities 133 (2023), 104125. DOI:
[50]
Aida Komkaite, Liga Lavrinovica, Maria Vraka, and Mikael B. Skov. 2019. Underneath the Skin: An Analysis of YouTube Videos to Understand Insertable Device Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, 1–12. DOI:
[51]
Ahreum Lee and Austin L. Toombs. 2020. Robots on Campus: Understanding Public Perception of Robots Using Social Media. In Conference Companion Publication of the 2020 on Computer Supported Cooperative Work and Social Computing (CSCW ’20 Companion). ACM, New York, NY, 305–309. DOI:
[52]
Daegyu Lee, Gyuree Kang, Boseong Kim, and D. Hyunchul Shim. 2021. Assistive Delivery Robot Application for Real-World Postal Services. IEEE Access 9 (2021), 141981–141998. DOI:
[53]
Hee Rin Lee and Laurel D. Riek. 2018. Reframing Assistive Robots to Promote Successful Aging. Journal of Human-Robot Interactions 7, 1, Article 11 (May 2018), 23 pages. DOI:
[54]
Jiajia Li, Grace Ngai, Hong Va Leong, and Stephen C. F. Chan. 2016. Multimodal Human Attention Detection for Reading from Facial Expression, Eye Gaze, and Mouse Dynamics. SIGAPP Applied Computing Review 16, 3 (Nov. 2016), 37–49. DOI:
[55]
Andreas Löcken, Carmen Golling, and Andreas Riener. 2019. How Should Automated Vehicles Interact with Pedestrians? A Comparative Analysis of Interaction Concepts in Virtual Reality (AutomotiveUI ’19). ACM, New York, NY, 262–274. DOI:
[56]
Joseph B. Lyons and Paul R. Havig. 2014. Transparency in a Human-Machine Context: Approaches for Fostering Shared Awareness/Intent. In Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments, Randall Shumaker and Stephanie Lackey (Eds.). Springer International Publishing, Cham, Switzerland,181–190.
[57]
Karthik Mahadevan, Sowmya Somanath, and Ehud Sharlin. 2018. Communicating Awareness and Intent in Autonomous Vehicle-Pedestrian Interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, 1–12. DOI:
[58]
Serena Marchesi, Davide Ghiglino, Francesca Ciardo, Jairo Perez-Osorio, Ebru Baykara, and Agnieszka Wykowska. 2019. Do We Adopt the Intentional Stance Toward Humanoid Robots? Frontiers in Psychology 10 (2019). DOI:
[59]
Mason Marks. 2019. Robots in Space: Sharing Our World with Autonomous Delivery Vehicles. (2019).
[60]
Christoforos Mavrogiannis, Francesca Baldini, Allan Wang, Dapeng Zhao, Pete Trautman, Aaron Steinfeld, and Jean Oh. 2023. Core Challenges of Social Robot Navigation: A Survey. Journal of Human-Robot Interactions 12, 3, Article 36 (Apr. 2023), 39 pages. DOI:
[61]
Trung Thanh Nguyen, Kai Holländer, Marius Hoggenmueller, Callum Parker, and Martin Tomitsch. 2019. Designing for Projection-Based Communication between Autonomous Vehicles and Pedestrians. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’19). ACM, New York, NY, 284–294. DOI:
[62]
Sara Nielsen, Mikael B. Skov, Karl Damkjær Hansen, and Aleksandra Kaszowska. 2022. Using User-Generated YouTube Videos to Understand Unguided Interactions with Robots in Public Places. Journal of Human-Robot Interactions (Aug. 2022). DOI:
[63]
Minnesota Department of Transportation. 2021. Personal Delivery Devices White Paper - Minnesota Department of Transportation. Retrieved from https://dot.state.mn.us/automated/docs/personal-delivery-device-white-paper.pdf
[64]
Jeni Paay, Jesper Kjeldskov, and Mikael B. Skov. 2015. Connecting in the Kitchen: An Empirical Study of Physical Interactions While Cooking Together at Home. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW ’15). ACM, New York, NY, USA, 276–287. DOI:
[65]
Martin Plank, Clément Lemardelé, Tom Assmann, and Sebastian Zug. 2022. Ready For Robots? Assessment of Autonomous Delivery Robot Operative Accessibility in German Cities. Journal of Urban Mobility 2 (2022), 100036.
[66]
John Postill and Sarah Pink. 2012. Social Media Ethnography: The Digital Researcher in a Messy Web. Media International Australia 145, 1 (2012), 123–134. DOI:
[67]
Stephanie D. Preston. 2013. The Origins of Altruism in Offspring Care. Psychological Bulletin 139, 6 (2013), 1305–1341.
[68]
Saian Protasov, Pavel Karpyshev, Ivan Kalinov, Pavel Kopanev, Nikita Mikhailovskiy, Alexander Sedunin, and Dzmitry Tsetserukou. 2021. CNN-Based Omnidirectional Object Detection for HermesBot Autonomous Delivery Robot with Preliminary Frame Classification. In Proceedings of the 2021 20th International Conference on Advanced Robotics (ICAR), 517–522. DOI:
[69]
Manon Prédhumeau, Anne Spalanzani, and Julie Dugdale. 2023. Pedestrian Behavior in Shared Spaces with Autonomous Vehicles: An Integrated Framework and Review. IEEE Transactions on Intelligent Vehicles 8, 1 (2023), 438–457. DOI:
[70]
Amir Rasouli, Iuliia Kotseruba, and John K. Tsotsos. 2018. Towards Social Autonomous Vehicles: Understanding Pedestrian-Driver Interactions. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 729–734. DOI:
[71]
Amir Rasouli and John K. Tsotsos. 2020. Autonomous Vehicles That Interact With Pedestrians: A Survey of Theory and Practice. IEEE Transactions on Intelligent Transportation Systems 21, 3 (2020), 900–918. DOI:
[72]
Astrid Rosenthal-von der Pütten, David Sirkin, Anna Abrams, and Laura Platte. 2020. The Forgotten in HRI: Incidental Encounters with Robots in Public Spaces. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20). ACM, New York, NY, 656–657. DOI:
[73]
Dana Rotman and Jennifer Preece. 2010. The 'WeTube' in YouTube – Creating an Online Community through Video Sharing. International Journal of Web Based Communities 6, 3 (Jun 2010), 317–333. DOI:
[74]
P. Salvini, G. Ciaravella, W. Yu, G. Ferri, A. Manzi, B. Mazzolai, C. Laschi, S.R. Oh, and P. Dario. 2010. How Safe Are Service Robots in Urban Environments? Bullying a Robot. In 19th International Symposium in Robot and Human Interactive Communication, 1–7. DOI:
[75]
Sau-lai Lee, Sara B. Kiesler, Ivy Yee-man Lau, and Chi-yue Chiu. 2005. Human Mental Models of Humanoid Robots. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2767–2772. DOI:
[76]
Nina Savela, David Garcia, Max Pellert, and Atte Oksanen. 2021. Emotional Talk about Robotic Technologies on Reddit: Sentiment Analysis of Life Domains, Motives, and Temporal Themes. New Media & Society. DOI:
[77]
Paul Sawers. 2018. Starship Technologies Launches Autonomous Robot Delivery Services for Campuses. Retrieved from https://venturebeat.com/ai/starship-technologies-launches-autonomous-robot-delivery-services-for-campuses/
[78]
Trenton Schulz, Rebekka Soma, and Patrick Holthaus. 2021. Movement Acts in Breakdown Situations: How a Robot's Recovery Procedure Affects Participants’ Opinions. Journal of Behavioral Robotics 12, 1 (2021), 336–355. DOI:
[79]
Svenja Y. Schött, Rifat Mehreen Amin, and Andreas Butz. 2023. A Literature Survey of How to Convey Transparency in Co-Located Human-Robot Interaction. Multimodal Technologies and Interaction 7, 3 (2023). DOI:
[80]
Anthony R. Selkowitz, Cintya A. Larios, Shan G. Lakhmani, and Jessie Y.C. Chen. 2017. Displaying Information to Support Transparency for Autonomous Platforms. In Advances in Human Factors in Robots and Unmanned Systems, Pamela Savage-Knepshield and Jessie Chen (Eds.). Springer International Publishing, Cham, Switzerland, 161–173. DOI:
[81]
Moondeep C. Shrestha, Tomoya Onishi, Ayano Kobayashi, Mitsuhiro Kamezaki, and Shigeki Sugano. 2018. Communicating Directional Intent in Robot Navigation using Projection Indicators. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 746–751. DOI:
[82]
Marc-Oliver Sonneberg, Max Leyerer, Agathe Kleinschmidt, Florian Knigge, and Michael H. Breitner. 2019. Autonomous Unmanned Ground Vehicles for Urban Logistics: Optimization of Last Mile Delivery Operations. Retrieved from http://hdl.handle.net/10125/59594
[83]
Sharan Srinivas, Surya Ramachandiran, and Suchithra Rajendran. 2022. Autonomous Robot-Driven Deliveries: A Review of Recent Developments and Future Directions. Transportation Research Part E: Logistics and Transportation Review 165 (2022), 102834. DOI:
[84]
Vasant Srinivasan and Leila Takayama. 2016. Help Me Please: Robot Politeness Strategies for Soliciting Help From Humans. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, 4945–4955. DOI:
[85]
Starship. 2020. Starship Technologies Launches Commercial Rollout of Autonomous Delivery. Retrieved from https://www.starship.xyz/press_releases/2708/
[86]
Megan K. Strait, Cynthia Aguillon, Virginia Contreras, and Noemi Garcia. 2017. The Public's Perception of Humanlike Robots: Online Social Commentary Reflects an Appearance-Based Uncanny Valley, a General Fear of a “Technology Takeover”, and the Unabashed Sexualization of Female-Gendered Robots. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 1418–1423. DOI:
[87]
Martin Tomitsch. 2017. Making Cities Smarter: Designing Interactive Urban Applications. JOVIS Verlag GmbH.
[88]
Martin Tomitsch, Cara Wrigley, Madeleine Borthwick, Naseem Ahmadpour, Jessica Frawley, A. Baki Kocaballi, Claudia Nunez-Pacheco, and Karla Straker. 2018. Design. Think. Make. Break. Repeat. A Handbook of Methods. Retrieved from https://eprints.qut.edu.au/215542/
[89]
Tram Thi Minh Tran, Callum Parker, Yiyuan Wang, and Martin Tomitsch. 2022. Designing Wearable Augmented Reality Concepts to Support Scalability in Autonomous Vehicle-Pedestrian Interaction. Frontiers in Computer Science 4 (2022). DOI:
[90]
Shianne van Mierlo. 2021. Field Observations of Reactions of Incidentally Copresent Pedestrians to a Seemingly Autonomous Sidewalk Delivery Vehicle: An Exploratory Study. Master's thesis.
[91]
Rutger Verstegen, Debargha Dey, and Bastian Pfleging. 2021. CommDisk: A Holistic 360° eHMI Concept to Facilitate Scalable, Unambiguous Interactions between Automated Vehicles and Other Road Users. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’21 Adjunct). ACM, New York, NY, 132–136. DOI:
[92]
Jered Vroon, Zoltán Rusak, and Gerd Kortuem. 2020. Context-Confrontation: Elicitation and Exploration of Conflicts for Delivery Robots on Sidewalks. In 1st International Workshop on Designerly HRI Knowledge, Held in Conjunction with the 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020). Retrieved from http://hridesign.eu/assets/pdf/Vroon
[93]
Michael Walker, Hooman Hedayati, Jennifer Lee, and Daniel Szafir. 2018. Communicating Robot Motion Intent with Augmented Reality. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18). ACM, New York, NY, 316–324. DOI:
[94]
Yiyuan Wang, Luke Hespanhol, Stewart Worrall, and Martin Tomitsch. 2022. Pedestrian-Vehicle Interaction in Shared Space: Insights for Autonomous Vehicles. In Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’22). ACM, New York, NY, 330–339. DOI:
[95]
Florian Weber, Ronee Chadowitz, Kathrin Schmidt, Julia Messerschmidt, and Tanja Fuest. 2019. Crossing the Street Across the Globe: A Study on the Effects of eHMI on Pedestrians in the US, Germany and China. In HCI in Mobility, Transport, and Automotive Systems, Heidi Krömker (Ed.). Springer International Publishing, Cham, Switzerland, 515–530.
[96]
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent Abilities of Large Language Models. arXiv:2206.07682. Retrieved from https://arxiv.org/abs/2206.07682
[97]
David Weinberg, Healy Dwyer, Sarah E. Fox, and Nikolas Martelaro. 2023. Sharing the Sidewalk: Observing Delivery Robot Interactions with Pedestrians during a Pilot in Pittsburgh, PA. Multimodal Technologies and Interaction 7, 5 (2023). DOI:
[98]
Xinyan Yu, Marius Hoggenmüller, and Martin Tomitsch. 2023. Your Way Or My Way: Improving Human-Robot Co-Navigation Through Robot Intent and Pedestrian Prediction Visualisations. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’23). ACM, New York, NY, 211–221.
[99]
Frauke Zeller, David Harris Smith, Jacky Au Duong, and Alanna Mager. 2020. Social Media in Human–Robot Interaction. International Journal of Social Robotics 12 (2020), 389–402. DOI:
[100]
Brian J. Zhang, Ryan Quick, Ameer Helmi, and Naomi T. Fitter. 2020. Socially Assistive Robots at Work: Making Break-Taking Interventions More Pleasant, Enjoyable, and Engaging. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 11292–11299.

Cited By

  • (2025) Making Sense of Public Space for Robot Design. In Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, 152–162. DOI: 10.5555/3721488.3721511. Online publication date: 4-Mar-2025.
  • (2024) Mapping Pedestrian-to-Driver Gestures: Implications for Autonomous Vehicle Bidirectional Interaction. In Adjunct Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 1–7. DOI: 10.1145/3641308.3685014. Online publication date: 22-Sep-2024.

      Published In

      ACM Transactions on Human-Robot Interaction, Volume 13, Issue 4
      December 2024
      492 pages
      EISSN: 2573-9522
      DOI: 10.1145/3613735
      This work is licensed under a Creative Commons Attribution International 4.0 License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 23 October 2024
      Online AM: 17 July 2024
      Accepted: 20 June 2024
      Revised: 25 February 2024
      Received: 10 June 2023
      Published in THRI Volume 13, Issue 4

      Author Tags

      1. delivery robots
      2. online ethnography
      3. human-robot interaction
