Abstract:
For machines to converse with humans, they must at times resolve ambiguities. We are developing a conversational robot which is able to gather information about its world through sensory actions such as touch and active shifts of visual attention. The robot is also able to gain new information linguistically by asking its human partner questions. Each kind of action, sensing and speech, has associated costs and expected payoffs with respect to the robot's goals. Traditionally, question generation and sensory action planning have been treated as disjoint problems. However, for an agent to fluidly act and speak in the world, it must be able to integrate motor and speech acts in a single planning framework. We present a planning algorithm that treats both types of actions in a common framework. This algorithm enables a robot to integrate both kinds of action into coherent behavior, taking into account their costs and expected goal-oriented information-theoretic rewards. The algorithm's performance under various settings is evaluated and possible extensions are discussed.
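The unified planning idea described above — scoring motor and speech acts alike by expected information-theoretic reward minus cost — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the action encoding, and the example distributions are all assumptions. It selects whichever action (a sensing act or a question) maximizes expected entropy reduction over the robot's belief minus that action's cost:

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete distribution over hypotheses."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction from observing an action's outcome.

    likelihoods maps each outcome to {hypothesis: P(outcome | hypothesis)}.
    """
    h_prior = entropy(prior)
    gain = 0.0
    for lik in likelihoods.values():
        # Marginal probability of this outcome under the prior.
        p_outcome = sum(prior[h] * lik[h] for h in prior)
        if p_outcome == 0:
            continue
        # Posterior over hypotheses given the outcome (Bayes' rule).
        post = {h: prior[h] * lik[h] / p_outcome for h in prior}
        gain += p_outcome * (h_prior - entropy(post))
    return gain

def choose_action(prior, actions):
    """Treat motor and speech acts uniformly: maximize gain minus cost."""
    return max(actions,
               key=lambda a: expected_info_gain(prior, a["likelihoods"]) - a["cost"])
```

For example, with a uniform belief over two object colors, a cheap, noise-free question ("Is it red?") would be preferred over an expensive, noisy visual inspection, since its expected one-bit gain outweighs its cost. The same comparison would flip if the question became costly (e.g., interrupting the human), which is the kind of trade-off a common framework makes explicit.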
Date of Conference: 03-05 October 2012
Date Added to IEEE Xplore: 14 February 2013