Multimodal Mobile Robot Control Using Speech Application Language Tags

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 2875))

Abstract

This paper describes the design and architecture of a multimodal interface for controlling a mobile robot. The architecture is built from standardized components and uses Speech Application Language Tags (SALT). We show how these components can be used to build complex multimodal interfaces, and we present and discuss basic design patterns for such interfaces.
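To give a flavour of the approach, the following is a minimal sketch of a SALT-annotated HTML page for a voice command. The element names (`<salt:prompt>`, `<salt:listen>`, `<salt:grammar>`, `<salt:bind>`) follow the SALT 1.0 specification; the grammar file, handler, and field names are illustrative assumptions, not taken from the paper.

```xml
<!-- Hypothetical SALT-style voice command embedded in HTML.
     Element names follow SALT 1.0; robot-commands.grxml, handleCommand,
     and the "direction" field are assumed for illustration. -->
<html xmlns:salt="http://www.saltforum.org/2002/SALT">
  <body>
    <!-- Pressing the button starts speech recognition -->
    <input type="button" value="Speak" onclick="cmd.Start()" />

    <!-- Spoken prompt played to the user -->
    <salt:prompt id="ask">Where should the robot go?</salt:prompt>

    <!-- Listen element: recognizes against a command grammar and
         binds the recognized value into an HTML form field -->
    <salt:listen id="cmd" onreco="handleCommand()">
      <salt:grammar src="robot-commands.grxml" />
      <salt:bind targetelement="direction" value="//direction" />
    </salt:listen>

    <!-- GUI input that mirrors the recognized command, so the same
         field can be filled by speech or by typing -->
    <input type="text" id="direction" />
  </body>
</html>
```

Binding recognition results into ordinary form fields is what makes the interface multimodal: the same field accepts either spoken or typed input.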







Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Pucher, M., Képesi, M. (2003). Multimodal Mobile Robot Control Using Speech Application Language Tags. In: Aarts, E., Collier, R.W., van Loenen, E., de Ruyter, B. (eds) Ambient Intelligence. EUSAI 2003. Lecture Notes in Computer Science, vol 2875. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39863-9_5


  • DOI: https://doi.org/10.1007/978-3-540-39863-9_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-20418-3

  • Online ISBN: 978-3-540-39863-9

