Success rates in a multimodal command language for home robot users

  • Original Article
  • Published in Artificial Life and Robotics

Abstract

This article examines success rates of a multimodal command language for home robot users. In the language, a user specifies an action type and its parameter values to direct robots through multiple modes such as speech, touch, and gesture. Command success rates can be estimated through user evaluations in several ways; this article presents such evaluation methods together with results from recent studies on command success rates. The results show that the language enables users with little training to command home robots at success rates of 88%–100%. Multimodal commands that combined speech with button presses also contained fewer words and succeeded significantly more often than speech-only spoken commands.
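The fusion idea described above — a spoken action type whose parameter slots can be filled by button presses instead of extra words — can be sketched in a few lines. This is a minimal illustration, not the article's actual implementation; the class and function names (`MultimodalCommand`, `fuse`) and the parameter names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class MultimodalCommand:
    """A command with one action type and a set of named parameter values."""
    action_type: str
    parameters: dict = field(default_factory=dict)


def fuse(spoken_action, spoken_params, button_params):
    """Merge parameter values from speech and button presses into one command.

    Button presses bind parameter slots directly; spoken parameters fill
    only the slots the buttons left empty, so the utterance can stay short.
    """
    params = dict(button_params)
    for key, value in spoken_params.items():
        params.setdefault(key, value)
    return MultimodalCommand(spoken_action, params)


# e.g. the user says "move forward" and presses a speed button:
cmd = fuse("move", {"direction": "forward"}, {"speed": 2})
```

Because the button supplies the speed value, the spoken part of the command needs fewer words than a speech-only command — consistent with the word-count finding reported in the abstract.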


Corresponding author

Correspondence to Tetsushi Oka.

Additional information

This work was presented in part at the 14th International Symposium on Artificial Life and Robotics, Oita, Japan, February 5–7, 2009

About this article

Cite this article

Oka, T., Abe, T., Sugita, K. et al. Success rates in a multimodal command language for home robot users. Artif Life Robotics 14, 219–223 (2009). https://doi.org/10.1007/s10015-009-0657-2
