Introduction to the Special Issue on Artificial Intelligence for Human–Robot Interaction (AI-HRI)

Published: 26 September 2024
This special issue of the ACM Transactions on Human-Robot Interaction (ACM THRI) highlights, documents, and explores the interface between artificial intelligence (AI) and human–robot interaction (HRI). The application of AI to HRI domains has proven to be a powerful mechanism for achieving robust, interactive, and autonomous systems, with applications ranging from personalized tutors to smart manufacturing collaborators to healthcare assistants and nearly everything in between. Developing such systems often involves innovations and integrations across many diverse technical areas, including but not limited to task and motion planning, learning from demonstration, dialogue synthesis, activity recognition and prediction, human behavior modeling, and shared control. For this special issue, we received high-quality, original articles that present the design and/or evaluation of novel computational techniques and systems at the intersection of AI and HRI. Together, they showcase the state of the art in AI-HRI within a single issue of the world's leading journal on HRI research.
This special issue of ACM THRI presents a collection of 11 articles that bring to attention the many ways AI can support HRI across a wide diversity of paradigms. The collection covers a broad scope of application domains, robot designs, and bases for assessment. It starts with four articles exploring different aspects of teleoperation and shared control: a teleoperation system that anticipates operator commands to facilitate robot control (1), a shared control approach for multi-step teleoperation (2), a shared control framework for urban air mobility (3), and finally body–machine interfaces for controlling robots (4). We follow up with two articles focused on teaching robots: the first on unified learning from demonstrations (5) and the second on using verbal correction commands in teaching (6). Then, we move on to monitoring autonomous robots through augmented reality (AR) interfaces (7). We conclude our special issue with four articles focused on improving human–robot collaboration by predicting team motions (8), planning for adaptation in collaborative tasks (9), automating gesture generation (10), and finally, reviewing the impact of vulnerability on trust in HRI (11).
(1)
“Assistance in Teleoperation of Redundant Robots through Predictive Joint Maneuvering.” In this article, Brooks et al. present two predictive models designed to anticipate operator commands during teleoperation. These models allow optimization over an expected trajectory of future motion rather than consideration of local motion alone.
(2)
“Experimental Assessment of Human-Robot Teaming for Multi-Step Remote Manipulation with Expert Operators.” Pérez-D’Arpino et al. explore the advantages of multiple methods for remote robot operation by experts. Through a study involving expert operators, including former operators from the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge, they show that teleautonomy approaches with assisted planning can complete complex manipulation tasks as fast as direct teleoperation, but with significantly lower workload and fewer manipulation errors.
(3)
“Stochastic-Skill-Level-Based Shared Control for Human Training in Urban Air Mobility Scenario.” In this article, Byeon et al. present a new personalized shared control framework in which an assistance model is learned from human experts and the shared control policy is a Gaussian mixture with a finite time horizon, based on the distance of the user’s trajectory from the expert trajectory. The framework was evaluated in an urban air mobility simulation, where it showed improved performance compared to a baseline approach. (A minimal, hypothetical sketch of this kind of distance-based blending appears after this article list.)
(4)
“Learning to Control Complex Robots Using High-Dimensional Body–Machine Interfaces.” In this article, Lee et al. demonstrate that a population of uninjured participants can learn to control an arm with high degrees of freedom through body–machine interfaces. They also investigate and discuss the effect of joint and task control space on learning, in terms of intuitiveness, learnability, and their consequences on cognitive load during learning.
(5)
“Unified Learning from Demonstrations, Corrections, and Preferences during Physical Human-Robot Interaction.” In this article, Mehta and Losey present a method for formalizing and unifying robot learning from demonstrations, corrections, and preferences. A loss function is developed for training a variety of reward models from the given demonstrations, corrections, and preferences; the learned reward is then converted into the desired task trajectory. Using both simulations and a user study with comparisons against existing baselines, the authors demonstrate that the new approach more accurately learns manipulation tasks from physical human interactions when the robot is faced with new or unexpected objectives. (A sketch of such a unified preference-based loss appears after this list.)
(6)
“‘Do This Instead’ – Robots That Adequately Respond to Corrected Instructions.” In this article, Thierauf et al. present a system for easily incorporating verbal corrections during verbal task instruction. The system handles corrections issued before, during, and after verbally taught task sequences, and the authors demonstrate that the proposed methods enable fast corrections.
(7)
“Augmented Reality Visualization of Autonomous Mobile Robot Change Detection in Uninstrumented Environments.” Reardon et al. present an AR visualization solution to assist humans in interpreting data from a mobile robot that is autonomously detecting novel changes in an environment. They experimentally investigate the effect of three-dimensional visualization in AR and human movement in the operational environment on shared situational awareness in human–robot teams.
(8)
“IMPRINT: Interactional Dynamics-Aware Motion Prediction in Teams Using Multimodal Context.” In this article, Yasar et al. present a multi-agent motion prediction framework that models interactional dynamics and incorporates multimodal context to accurately predict the motion of all agents in a team, whether human or robot.
(9)
“UHTP: A User-Aware Hierarchical Task Planning Framework for Communication-Free, Mutually-Adaptive Human-Robot Collaboration.” Ramachandruni et al. present the User-Aware Hierarchical Task Planning (UHTP) framework for robot adaptation to humans in collaborative tasks. With UHTP, a robot chooses its actions by monitoring its human partner’s current activity so as to maximize the efficiency of the collaborative task. In turn, the human partner benefits from UHTP’s adaptation algorithms by completing collaborative tasks without having to wait for the robot. A user study shows that UHTP can adapt to a wide range of human behaviors, requires no communication, reduces cognitive workload during collaboration, and is preferred over a non-adaptive baseline. (A sketch of user-aware action selection appears after this list.)
(10)
“Face2Gesture: Translating Facial Expressions into Robot Movements through Shared Latent Space Neural Networks.” In this article, Suguitan et al. present a method to automatically generate affective robot movements in response to emotive facial expressions. The method uses autoencoder neural networks to compress robot movement data and facial expression images into a shared latent embedding space, aligning the embeddings by emotion class rather than by data modality so that movements can be reconstructed from facial expressions. (A sketch of the shared-latent-space idea appears after this list.)
(11)
“A Meta-analysis of Vulnerability and Trust in Human-Robot Interaction.” In this article, McKenna et al. explore the specific impact of vulnerability on trust building in HRI. While vulnerability is key to building bonds between humans, its impact is underexplored in HRI. The authors tackle this question through a meta-analysis and modeling to provide suggestions for building effective trust between humans and robots.
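
To make a few of these techniques more concrete, we close the summaries with some illustrative sketches. First, the distance-based shared control of article (3): the sketch below blends user and expert commands according to how far the user's trajectory has drifted from an expert reference. The function names, the Gaussian-shaped weighting, and the scale parameter are our own simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def assistance_weight(user_traj, expert_traj, scale=1.0):
    """Map the user's deviation from an expert reference to a blending
    weight in [0, 1): near-expert motion receives little assistance."""
    deviation = np.linalg.norm(user_traj - expert_traj, axis=1).mean()
    return 1.0 - np.exp(-(deviation / scale) ** 2)

def shared_control(user_cmd, expert_cmd, user_traj, expert_traj):
    """Blend user and expert commands according to trajectory deviation."""
    alpha = assistance_weight(user_traj, expert_traj)
    return (1.0 - alpha) * user_cmd + alpha * expert_cmd

# Example: a user trajectory drifting from a straight-line expert reference.
waypoints = np.linspace(0.0, 1.0, 10)
expert_traj = np.stack([waypoints, np.zeros(10)], axis=1)
user_traj = expert_traj + np.random.default_rng(0).normal(0.0, 0.1, size=(10, 2))
blended = shared_control(np.array([1.0, 0.0]), np.array([0.8, 0.1]),
                         user_traj, expert_traj)
print(blended)
```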
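Second, article (5) unifies demonstrations, corrections, and preferences under a single reward-learning objective. One common way to realize that idea, shown below, is to reduce each feedback type to a pairwise preference between trajectories and train the reward with a Bradley-Terry-style loss; the toy trajectory features, linear reward model, and data are our assumptions rather than the paper's formulation.

```python
import torch

def traj_features(traj):
    """Toy features for a (T, 2) trajectory: mean position and path length."""
    path_length = (traj[1:] - traj[:-1]).norm(dim=1).sum()
    return torch.cat([traj.mean(dim=0), path_length.unsqueeze(0)])

class RewardModel(torch.nn.Module):
    """Linear reward over trajectory features."""
    def __init__(self, dim=3):
        super().__init__()
        self.w = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, traj):
        return self.w @ traj_features(traj)

def preference_loss(model, preferred, other):
    """Bradley-Terry negative log-likelihood that `preferred` outranks `other`."""
    return -torch.nn.functional.logsigmoid(model(preferred) - model(other))

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=0.05)

demo = torch.tensor([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])       # demonstration
perturbed = demo + 0.3 * torch.randn_like(demo)                  # worse alternative
corrected = torch.tensor([[0.0, 0.0], [0.5, 0.2], [1.0, 0.0]])   # human correction

for _ in range(100):
    opt.zero_grad()
    # Demonstrations: the demo is preferred over a perturbation of itself.
    # Corrections: the corrected trajectory is preferred over the original.
    loss = (preference_loss(model, demo, perturbed)
            + preference_loss(model, corrected, demo))
    loss.backward()
    opt.step()
```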
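Third, the communication-free adaptation of article (9) rests on the robot selecting work that does not conflict with the human's observed activity. The sketch below captures only that selection pattern, using a flat task set and a greedy longest-task heuristic; UHTP itself operates over hierarchical task trees, which we do not reproduce here.

```python
from dataclasses import dataclass, field

@dataclass
class Subtask:
    name: str
    duration: float
    conflicts: set = field(default_factory=set)  # subtasks sharing resources

def choose_robot_action(remaining, human_activity):
    """Greedy user-aware selection: skip subtasks that conflict with the
    human's current activity, then work on the longest remaining one so
    neither agent idles waiting for the other."""
    options = [t for t in remaining
               if t.name != human_activity and human_activity not in t.conflicts]
    return max(options, key=lambda t: t.duration, default=None)

tasks = [
    Subtask("attach_leg", 20.0, conflicts={"flip_table"}),
    Subtask("flip_table", 15.0, conflicts={"attach_leg"}),
    Subtask("sort_screws", 10.0),
]
# The human is observed attaching a leg; the robot avoids flipping the table.
print(choose_robot_action(tasks, "attach_leg").name)  # -> sort_screws
```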
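Finally, the shared latent space of article (10) can be pictured as two small autoencoders whose embeddings are pulled together for emotion-matched pairs, so that a face embedding can be decoded as a movement. The layer sizes, the MSE alignment term, and the random placeholder data below are our assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

LATENT_DIM = 16

def make_autoencoder(in_dim):
    """One encoder/decoder pair per modality, mapping to a shared latent size."""
    encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
    decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, in_dim))
    return encoder, decoder

face_enc, face_dec = make_autoencoder(in_dim=128)  # flattened face features
move_enc, move_dec = make_autoencoder(in_dim=32)   # flattened joint trajectories

params = [*face_enc.parameters(), *face_dec.parameters(),
          *move_enc.parameters(), *move_dec.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

faces = torch.randn(8, 128)  # placeholder batch of emotion-labeled faces
moves = torch.randn(8, 32)   # movements matched to the same emotions

for _ in range(50):
    opt.zero_grad()
    z_face, z_move = face_enc(faces), move_enc(moves)
    loss = (mse(face_dec(z_face), faces)   # reconstruct each modality
            + mse(move_dec(z_move), moves)
            + mse(z_face, z_move))         # align emotion-paired embeddings
    loss.backward()
    opt.step()

# Cross-modal generation: decode a face embedding as a robot movement.
new_move = move_dec(face_enc(faces[:1]))
```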
We are thrilled by the diversity of AI-assisted HRI paradigms covered in this Special Issue on AI for HRI, from shared control and teleoperation to learning and modeling. This diversity showcases how central AI and machine learning are to HRI. We would like to extend our deepest gratitude to the reviewers, the editors-in-chief, and the associate managing editors of THRI, who dedicated time and effort to make this special issue possible.
Guest Editors
Tufts University, Medford, MA, USA
University of South Florida, Tampa, FL, USA
U.S. National Institute of Standards and Technology, Gaithersburg, MD, USA
Swansea University, Swansea, Wales, UK
King’s College London, London, UK
Semio, Los Angeles, CA, USA
Bar Ilan University, Ramat Gan, Israel
Idiap Research Institute, Martigny, Switzerland

Published In

ACM Transactions on Human-Robot Interaction, Volume 13, Issue 3
September 2024, 345 pages
EISSN: 2573-9522
DOI: 10.1145/3613680

This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery, New York, NY, United States

          Publication History

          Published: 26 September 2024
          Online AM: 20 July 2024
          Accepted: 05 June 2024
          Revised: 30 May 2024
          Received: 30 May 2024
          Published in THRI Volume 13, Issue 3

          Author Tags

          1. Artificial intelligence
          2. human-robot interaction
          3. teleoperation
          4. human-robot collaboration
          5. machine learning
