Abstract:
Humans prefer to use voice commands to guide their peer companions in daily assistive tasks. From a human perspective, the same behavior is expected from assistive robot companions as well. This paper presents an approach to using such voice instructions, based on uncertain and qualitative information, to describe object placements. Consider a case in which a set of objects has to be arranged on a table in a particular spatial area. In such a situation, humans prefer a single command describing the overall arrangement rather than repeating a command for each individual object placement. In such scenarios, people are comfortable with simple commands that use non-technical words. Most such commands include uncertain spatial terms such as “Left”, “Middle”, and “Right” and uncertain qualitative terms such as “Together”, “Little separately”, and “Separately” to describe the arrangement. Examples include “Keep all objects together in the middle of the table” and “Keep objects separately on the right side of the table”. In some situations, however, such commands do not directly indicate where the objects should be placed; “Keep the middle of the table free” is one example. Therefore, the robot must be able to interpret such information precisely before executing the commands. Experiments are conducted in a simulated domestic environment, and the results are presented and discussed.
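The abstract does not specify how the uncertain terms are grounded. As an illustration only, the following minimal Python sketch shows one possible way to map such spatial and qualitative terms to placement coordinates on a 1-D table axis, assuming hand-chosen trapezoidal membership functions and fixed spacing values; all names, shapes, and numbers here are assumptions, not taken from the paper.

    # Illustrative sketch only: one possible grounding of uncertain spatial and
    # qualitative terms on a 1-D table axis. Membership shapes, spacing values,
    # and identifiers are assumptions, not taken from the paper.

    TABLE_WIDTH = 1.0  # assumed table width in metres, x in [0, 1]

    def trapezoid(x, a, b, c, d):
        """Trapezoidal membership: rises a->b, flat b->c, falls c->d."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    # Fuzzy regions for the spatial terms (assumed shapes).
    SPATIAL_TERMS = {
        "left":   lambda x: trapezoid(x, -0.1, 0.0, 0.25, 0.45),
        "middle": lambda x: trapezoid(x, 0.25, 0.4, 0.6, 0.75),
        "right":  lambda x: trapezoid(x, 0.55, 0.75, 1.0, 1.1),
    }

    # Assumed centre-to-centre spacing (metres) for the qualitative terms.
    SEPARATION = {"together": 0.05, "little separately": 0.12, "separately": 0.25}

    def region_centre(term, samples=101):
        """Defuzzify a spatial term to one x-coordinate (centre of gravity)."""
        mu = SPATIAL_TERMS[term]
        xs = [i * TABLE_WIDTH / (samples - 1) for i in range(samples)]
        weights = [mu(x) for x in xs]
        return sum(x * w for x, w in zip(xs, weights)) / sum(weights)

    def place_objects(n_objects, spatial_term, separation_term):
        """Return x-coordinates for n objects centred on the fuzzy region."""
        centre = region_centre(spatial_term)
        gap = SEPARATION[separation_term]
        span = gap * (n_objects - 1)
        return [centre - span / 2 + i * gap for i in range(n_objects)]

    if __name__ == "__main__":
        # "Keep all objects together in the middle of the table"
        print(place_objects(3, "middle", "together"))
        # "Keep objects separately on the right side of the table"
        print(place_objects(3, "right", "separately"))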
Date of Conference: 21-25 May 2018
Date Added to IEEE Xplore: 13 September 2018
Electronic ISSN: 2577-087X