Abstract
After building computers that paid no attention to communicating with humans, the computer science community has devoted significant effort over the years to more sophisticated interfaces that put the "human in the loop" of computers. These interfaces have improved usability by providing more appealing output (graphics, animations), easier-to-use input methods (mouse-based pointing, clicking, and dragging), and more natural interaction modes (speech, vision, gesture, etc.). Yet all of these interaction modes have remained mostly restricted to human-machine interaction and have made severely limiting assumptions about sensor setup and expected human behavior. (For example, a gesture may be assumed to be presented clearly in front of the camera, with a well-defined start and end time.) Such assumptions, however, are unrealistic and have consequently limited the potential productivity gains, as the machine still operates in a passive mode, requiring the user to pay considerable attention to the technological artifact.
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Waibel, A. (2010). Multimodal Interfaces in Support of Human-Human Interaction. In: Kopp, S., Wachsmuth, I. (eds) Gesture in Embodied Communication and Human-Computer Interaction. GW 2009. Lecture Notes in Computer Science(), vol 5934. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12553-9_21
DOI: https://doi.org/10.1007/978-3-642-12553-9_21
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-12552-2
Online ISBN: 978-3-642-12553-9
eBook Packages: Computer Science (R0)