Knowledge-Based Systems

Volume 42, April 2013, Pages 49-59

Assessment of adaptive human–robot interactions

https://doi.org/10.1016/j.knosys.2013.01.003

Abstract

One of the overarching goals of robotics research is that robots ultimately coexist with people in human societies as an integral part of them. In order to achieve this goal, robots need to be accepted by people as natural partners within the society. It is therefore essential for robots to have adaptive learning mechanisms that can intelligently update a human model for effective human–robot interaction (HRI). This might be critical in interactions with elderly and disabled people in their daily activities. This research has developed and evaluated an intelligent HRI system that enables a mobile robot to learn adaptively about the behaviors and preferences of the people with whom it interacts. Various learning algorithms have been compared and a Bayesian learning mechanism has been implemented by estimating and updating a parameter set that models behaviors and preferences of people. Every time a user interacts with the robot, the model is updated. The robot then uses the model to predict future actions of its user. A variety of HRI modalities including speech recognition, sound source localization, simple natural language understanding, face detection, face recognition, and attention gaining/losing systems, along with a navigation system, have been integrated with the learning system. The integrated system has been successfully implemented on a Pioneer 3-AT mobile robot. The system has also been evaluated using 25 subjects who interacted with the robot using adaptive and non-adaptive interfaces. This study showed that adaptive interaction is preferred over non-adaptive interaction by the participants at a statistically significant level.

Introduction

A social robot is defined as an autonomous or semi-autonomous robot that interacts and communicates with humans by following the behavioral norms expected by the people with whom the robot is intended to interact [1]. Social Robotics [2] focuses on the development of robots that operate with people to meet or address some social needs. Mataric [3] defines socially assistive robotics as the intersection of assistive robotics and socially interactive robotics. One active area of research in Social Robotics investigates specifically how to socially equip robots to respond to the needs of people, such as social companionship or entertainment. Systems such as Kismet [4] and the Sony Aibo [5] address these needs by eliciting social responses from people. The continuum extends to systems that draw upon social attitudes to address specific needs of people, such as caregiving in healthcare [6]; autonomous systems, such as those built in response to the AAAI Robotics Challenge [7]; and human-like personal assistance systems, such as ISAC and Cog [8], [9]. This area draws on studies of interpersonal interaction and applies them to interactions between people and systems. Studies have shown that people respond to artificial systems much as they do in comparable interpersonal situations, including a tendency to anthropomorphize, i.e., to attribute human qualities to them [10], [11]. Social robots need multi-modal HRI mechanisms that enable them to interact or collaborate with humans in a natural and unencumbered fashion. Sound source localization, speech recognition, motion detection, face detection, face recognition, and natural language understanding are important modalities for effective HRI.

Understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and behaviors of the people with whom it interacts so that it can adapt its behaviors for more efficient and friendly interaction.

According to [12], [13], the percentage of the population over the age of 60 will exceed 20% by 2050, and the greatest increase will be among people aged 85 and over. In the United States, the share of the population over age 65 will rise to 19.2% by 2030. Japan in particular will need substantial support for elderly care, as life expectancy rises while the birth rate declines. As a result, fewer people will be available to serve as caregivers for the elderly. We expect that social robots will be widely employed to care for the elderly and to improve the quality of life of disabled people.

Although robots have been widely used in many diverse areas, they are not yet part of our daily life, partly because they lack some of the social skills needed to be socially accepted by people. Current robots largely do not follow the behavioral and social norms expected by people. A robot that assists in household settings should continuously learn about the people it serves, so that it can help in an efficient and unobtrusive manner. This is important for the social acceptance of robots in daily life, and a social robot must possess certain characteristics to be accepted as a social agent in a society.

This work focuses on the development of an adaptive HRI system that enables a mobile robot to learn the preferences and activities of its users so that it can gradually improve its interactions with them. The system incorporates multi-modal interaction mechanisms, including speech recognition, sound source localization, a simple natural language understanding system, face detection, face recognition, Internet information filtering, and attention gaining/losing. The adaptive system is evaluated with human subjects and compared with non-adaptive systems.

A majority of the related work on robot learning concerns learning by demonstration and active teaching. There is a gap in the literature on passive robot learning, which is essential for a robot to become an integral part of daily life. To the best of our knowledge, there is no work in the literature reporting human-subject experiments that compare people's perceptions of a learning robot with those of a non-learning robot.

A robot learning toolkit that provides algorithms for reinforcement learning and learning by demonstration is described in [14]; a human trainer can provide feedback to speed up learning. In [15], a robot learning algorithm is implemented for improving mobile robot navigation: a user simply teaches a mobile robot about objects of interest as the robot navigates. van den Berg et al. [16] describe a surgical robot that can learn a task from multiple human demonstrations and then perform it much faster than a human surgeon. In [17], a socially guided machine learning technique is implemented in which the internal state of the robot is revealed to the human teacher, who can use it to provide information relevant to the robot's learning. A learning-by-observation algorithm based on Bayesian networks and game pattern graphs is introduced in [18], where a robot learns the rules of a game by observing human actions. In [19], Chinese and German college students were recruited to evaluate how communication style and culture may affect people's acceptance of recommendations from robots.

A socially assistive robot that monitors and socially interacts with post-stroke users is described in [20]. The robot has no physical contact with the patients while it encourages, monitors, and assists them. It can also generate reports for physicians and therapists.

The uBot-5 is a mobile robot equipped with multi-modal HRI tools [21]. It can call 911 in case of emergency and remind its users to take their medication, and it can recognize some human activities such as walking or sitting. In [22], [23], the design, implementation, and testing of a voice-controllable, adaptive user interface for a mobile robot in navigational tasks are described. The interface offers different graphical user interface (GUI) components to a group of users depending on their capabilities, their preferences, and the part of the task they are interested in, and it learns the users' capabilities and preferences over time as they continue to interact with the robot. In [24], the effects of spatial reasoning ability, location, and prior knowledge of the environment on mobile robot control were investigated. In [25], a method based on partially observable Markov decision processes is introduced for learning flexible action selection from observations of multi-modal, human–human interactions.

A socially intelligent robot should be autonomous to some extent and should be able to communicate and interact with humans in ways similar to how humans interact with each other. Vocal communication is a natural means of interaction between people; speech recognition, sound source localization [24], [26], speaker identification, and natural language understanding are important for vocal communication. Social robots are also expected to identify the people they detect [27], and face recognition [28] is widely used for this purpose. In [29], a human-in-the-loop system is described in which human users assist the robot's learning.

Attention mechanisms are particularly important for a robot interacting with multiple people simultaneously. In [30], an artificial model of visual attention was developed to direct the robot's attention toward visually salient or behaviorally relevant stimuli.

To the best of our knowledge, this is the first study that statistically evaluates the effectiveness of a mobile robot that gradually learns the preferences and behaviors of its users in a natural setting. This research assesses whether an adaptively learning robot (which, after a learning period, may no longer need to take direct orders from its user) would be preferred over a non-adaptive robot that takes direct orders from its user during interaction. Although many robotic platforms are designed to provide social interactions with people [19], [31], [32], few studies use human subjects to evaluate the effectiveness of an adaptively learning robot. This work also combines many different HRI modalities with an adaptive learning mechanism to enable a mobile robot to interact naturally with people in an increasingly effective fashion.

This paper is organized as follows: Section 2 treats mobile robot learning, briefly describing Bayesian learning and introducing various HRI modalities that can be employed for natural HRI. Section 3 describes the experimental procedure, and Section 4 presents the experimental results. Section 5 draws conclusions and motivates future work.


Mobile robot learning

The goal of this work is to develop a natural HRI system that enables a mobile robot to learn its user preferences and behaviors over a period of time, so that it can serve its user in an increasingly efficient manner. Fig. 1 illustrates the system architecture. A variety of HRI modalities and a learning system are used for social interactions with humans.
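The snippet states that a parameter set modeling user behaviors and preferences is estimated and updated in a Bayesian fashion, but does not give the exact parameterization. A minimal illustrative sketch, assuming a Dirichlet-multinomial (pseudo-count) model over a hypothetical set of user-requested actions:

```python
class PreferenceModel:
    """Dirichlet-multinomial model of a user's action preferences.

    Each observed user request increments a pseudo-count; the posterior
    mean over actions then serves as the robot's prediction of what the
    user is likely to request next.
    """

    def __init__(self, actions, prior=1.0):
        # Symmetric Dirichlet prior: every action starts equally likely.
        self.counts = {a: prior for a in actions}

    def observe(self, action):
        # Bayesian update: add one pseudo-count for the observed action.
        self.counts[action] += 1

    def posterior(self):
        # Posterior mean probability of each action.
        total = sum(self.counts.values())
        return {a: c / total for a, c in self.counts.items()}

    def predict(self):
        # Most probable next action under the current posterior.
        return max(self.counts, key=self.counts.get)


# Hypothetical interaction history: the user mostly asks for news.
model = PreferenceModel(["fetch_news", "play_music", "weather"])
for _ in range(3):
    model.observe("fetch_news")
model.observe("weather")
print(model.predict())  # -> fetch_news
```

The action names and the pseudo-count scheme are assumptions for illustration; the paper's actual parameter set may model richer behavior (e.g. time-of-day-conditioned preferences).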

Development platform

A Pioneer 3-AT, shown in Fig. 12, is used in the experiments. The robot has a laser scanner, 16 ultrasonic rangefinders, a pan-tilt-zoom (PTZ) camera, bumpers, and a gripper. A component-based software architecture is used for programming. Each HRI modality is implemented as a software component so that it can easily be integrated.
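The component-based design described above can be sketched as follows. The interface and class names (`HRIComponent`, `SpeechRecognizer`) are illustrative assumptions, not the system's actual API; the point is that each modality implements a common interface so components can be registered and polled interchangeably:

```python
from abc import ABC, abstractmethod


class HRIComponent(ABC):
    """Common interface that every HRI modality implements."""

    @abstractmethod
    def process(self, sensor_input):
        """Return a (possibly empty) list of interaction events."""


class SpeechRecognizer(HRIComponent):
    def process(self, sensor_input):
        # Placeholder: a real component would decode audio here.
        return [("speech", sensor_input)] if sensor_input else []


class Robot:
    def __init__(self):
        self.components = []

    def register(self, component):
        # Any modality implementing HRIComponent can be plugged in.
        self.components.append(component)

    def step(self, sensor_input):
        # Poll every registered modality and collect their events.
        events = []
        for c in self.components:
            events.extend(c.process(sensor_input))
        return events


robot = Robot()
robot.register(SpeechRecognizer())
print(robot.step("bring me the news"))
```

Face detection, sound source localization, and the other modalities would each be one more `register` call under this scheme, which matches the snippet's claim that components "can easily be integrated."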

Subject selection and training

Twenty-five (25) participants (undergraduate students, graduate students, and staff members) from Tennessee State University were selected. Each subject was given an

Experimental results

This section discusses the results of the experiments. Each subject was asked to rate the adaptive and non-adaptive systems (from 1 to 10, with 1 being least preferable and 10 most preferable) based on his/her experience during the interactions.
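The snippet does not state which statistical test established the significance of the preference for the adaptive system. One natural choice for paired per-subject ratings is a paired t-test; a minimal sketch with entirely hypothetical ratings (not the study's data):

```python
import math
from statistics import mean, stdev


def paired_t(adaptive, nonadaptive):
    """Paired t-statistic for two sets of ratings from the same subjects."""
    diffs = [a - b for a, b in zip(adaptive, nonadaptive)]
    n = len(diffs)
    # t = mean difference / standard error of the differences.
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))


# Hypothetical 1-10 ratings from eight subjects, for illustration only.
adaptive = [8, 9, 7, 8, 9, 8, 7, 9]
nonadaptive = [6, 7, 6, 7, 7, 6, 5, 8]
t = paired_t(adaptive, nonadaptive)
print(round(t, 2))
```

A large positive t (compared against the t-distribution with n-1 degrees of freedom) would indicate a statistically significant preference for the adaptive system; with 25 subjects, a nonparametric alternative such as the Wilcoxon signed-rank test would also be reasonable.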

Conclusions and open issues

In this work, a mobile robot learning system was developed and implemented on a Pioneer 3-AT mobile robot. The robot can learn its user’s behaviors and preferences as the robot and its user interact more. Such a system is essential, especially for improving social and assistive robotics (e.g. robots helping the elderly in home settings). The learning system was integrated with a variety of HRI modalities so that the robot can handle some tasks requested by the user(s).

The mobile robot learning

References (46)

  • R. Simmons et al., Grace and George: autonomous robots for the AAAI robot challenge, in: AAAI 2004 Mobile Robot...
  • K. Kawamura et al., Implementation of cognitive control for a humanoid robot, International Journal of Humanoid Robotics (2008).
  • B. Scassellati, Theory of Mind for a Humanoid Robot, Ph.D. Dissertation, MIT,...
  • B. Reeves et al., The Media Equation (1996).
  • R.S. Kiesler et al., Mental models and cooperation with robotic assistants, in: CHI 2002 Extended Abstracts, 2002, pp....
  • M.E. Pollack, Intelligent technology for an aging population: the use of AI to assist the elderly with cognitive impairment, AI Magazine (2005).
  • M. Heerink, B. Krose, B. Wielings, V. Evera, Human–Robot User Studies in Eldercare: Lessons Learned, ICOST,...
  • W. Ertel, M. Schneider, R. Cubek, M. Tokicy, The teaching-box: a universal robot learning framework, in: International...
  • A. Gopalakrishnan, S. Greene, A. Sekmen, Vision-based mobile robot learning and navigation, in: IEEE International...
  • J. van den Berg, S. Miller, D. Duckworth, H. Hu, A. Wan, X.-Y. Fu, K. Goldberg, P. Abbeel, Superhuman performance of...
  • C. Chao, M. Cakmak, A. Thomaz, Transparent active learning for robots, in: 5th ACM/IEEE International Conference on...
  • H. Lee, H. Kim, K.-H. Park, J.-H. Park, Robot learning by observation based on Bayesian networks and game pattern...
  • A. Tapus et al., User-robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy, Intelligent Service Robotics (2008).