Abstract
Two experimental evaluations were conducted to compare interaction modes on a CAD system and a map system, respectively. For the CAD system, the results show that, in terms of total manipulation time (drawing and modification time) and subjective preference, the "pen + speech + mouse" combination was the best of the seven interaction modes tested. For the map system, the results show that the "pen + speech" combination was the best of the fourteen interaction modes tested. The experiments also provide information on how users adapt to each interaction mode and on the ease with which they are able to use these modes.
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
Ren, X., Zhang, G., Dai, G. (2000). An Experimental Study of Input Modes for Multimodal Human-Computer Interaction. In: Tan, T., Shi, Y., Gao, W. (eds) Advances in Multimodal Interfaces — ICMI 2000. ICMI 2000. Lecture Notes in Computer Science, vol 1948. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-40063-X_7
Print ISBN: 978-3-540-41180-2
Online ISBN: 978-3-540-40063-9
eBook Packages: Springer Book Archive