Abstract
This article proposes a multimodal language for communicating with life-supporting robots through a touch screen and a speech interface. The language is designed for untrained users who need support in their daily lives from cost-effective robots. It lets users combine spoken and pointing messages interactively to convey their intentions to the robots. Spoken messages consist of verb and noun phrases that describe the user's intention. Pointing messages are given when the user's finger touches a camera image, a picture of the robot's body, or a button on a handheld touch screen; they convey a location in the environment, a direction, a body part of the robot, a cue, a reply to a query, or other information that helps the robot. This work presents the philosophy and structure of the language.
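As a rough illustration of the structure described above, the sketch below shows one possible way to represent a command that pairs a spoken phrase with a pointing message. The class and field names (SpokenMessage, PointingMessage, MultimodalCommand, and so on) are illustrative assumptions, not the data structures actually used in the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple


class PointingTarget(Enum):
    """Kinds of information a touch can convey (illustrative, per the abstract)."""
    LOCATION = auto()    # a place in the environment, via the camera image
    DIRECTION = auto()   # a direction
    BODY_PART = auto()   # a part of the robot, via the robot-body picture
    CUE = auto()         # a timing cue
    REPLY = auto()       # an answer to a query from the robot


@dataclass
class SpokenMessage:
    """A spoken utterance: a verb phrase plus optional noun phrases."""
    verb: str
    noun_phrases: Tuple[str, ...] = ()


@dataclass
class PointingMessage:
    """A single touch event on the handheld screen."""
    target: PointingTarget
    screen_xy: Tuple[int, int]      # pixel coordinates of the touch
    widget: str = "camera_image"    # "camera_image", "robot_picture", or "button"


@dataclass
class MultimodalCommand:
    """Speech and pointing combined into a single user intention."""
    speech: SpokenMessage
    pointing: Optional[PointingMessage] = None


# Example: "put the cup there", with "there" grounded by a touch on the camera image.
command = MultimodalCommand(
    speech=SpokenMessage(verb="put", noun_phrases=("the cup", "there")),
    pointing=PointingMessage(PointingTarget.LOCATION, screen_xy=(212, 148)),
)
print(command)
```

A representation along these lines would let the robot resolve deictic words such as "there" or "this" against the most recent pointing message, which is the kind of speech-plus-touch combination the abstract describes.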
Additional information
This work was presented in part at the 16th International Symposium on Artificial Life and Robotics, Oita, Japan, January 27–29, 2011.
About this article
Cite this article
Oka, T., Matsumoto, H. & Kibayashi, R. A multimodal language to communicate with life-supporting robots through a touch screen and a speech interface. Artif Life Robotics 16, 292–296 (2011). https://doi.org/10.1007/s10015-011-0924-x