Abstract
According to [1], pointing one’s finger at a graphical object and then at an empty location on the screen while saying “Put this here” is a semiotic gesture, since it contributes to the meaning of the concomitant utterance. By contrast, dragging one’s fingertip across the surface of the screen may be termed an ‘ergotic’ gesture, inasmuch as it effects an action, namely the drawing of a 2D graphic or the moving of an icon, depending on the current context.
We have conducted a Wizard of Oz experiment on the spontaneous use of speech and 2D gestures for interacting with standard graphical software. Overall results [9, 6, 2] indicate that, in such contexts, hand gestures are used either for pointing at objects and locations on the screen or for acting on a 2D representation of the application.
Having completed our study of the subjects’ multimodal expression, we have now focused our analysis on their use of gestures. Our aim is to define useful criteria for the design of gestural human-computer interaction. In this paper, we present user profiles derived from a thorough analysis of the subjects’ gestures.
References
C. Cadoz. Le geste canal de communication homme/machine. Technique et Science Informatiques, 13 (1): 31–61, 1994.
N. Carbonell and C. Mignot. Natural multimodal HCI: Experimental results on the use of spontaneous speech and hand gestures. In Multimodal Human-Computer Interaction, ERCIM Workshop Report, pages 97–112. Rocquencourt (F): INRIA, 1994.
N. Carbonell, C. Valot, C. Mignot, and P. Dauchy. Étude empirique : usage du geste et de la parole en situation de communication homme-machine. In ErgoIA’94, Ergonomie et Informatique Avancée, Biarritz, 1994. Bayonne (F): IDLS.
J. Cosnier. Communications et langages gestuels. In J. Cosnier, J. Coulon, A. Berrendonner, and C. Orecchioni, editors, Les voix du langage, communications verbales, gestuelles et animales, chapter 4, pages 255–304. Paris: Dunod, 1982.
J. Coutaz and J. Caelen. A taxonomy for multimedia and multimodal user interfaces. In Proceedings of the ERCIM Workshop, pages 143–148, Lisbon Portugal, November 1991.
P. Dauchy, C. Mignot, and C. Valot. Joint speech and gesture analysis — some experimental results on multimodal interface. In EUROSPEECH’93, pages 1315–1318, Berlin, September 1993.
P. Ekman and W. V. Friesen. The repertoire of nonverbal behavior: categories, origins, usage, and coding. Semiotica, 1 (1): 49–98, 1969.
C. Mignot. Usage de la parole et du geste dans les interfaces multimodales — étude expérimentale et modélisation. Doctorat de l’Université Henri Poincaré, Nancy, 1995.
C. Mignot, C. Valot, and N. Carbonell. An experimental study of future ‘natural’ multimodal human-computer interaction. In INTERCHI’93, pages 67–69, Amsterdam, April 1993. New York: ACM Press, Addison Wesley.
B. Rimé and L. Schiaratura. Gesture and speech. In R. S. Feldman and B. Rimé, editors, Fundamentals of nonverbal behavior, chapter 7, pages 229–238. Cambridge University Press, 1991.
B. Shneiderman. The future of interactive systems and the emergence of direct manipulation. Behaviour and Information Technology, 1 (3): 237–256, 1982.
Copyright information
© 1997 Springer-Verlag London
Cite this paper
Robbe, S., Carbonell, N., Dauchy, P. (1997). How Do Users Manipulate Graphical Icons? An Empirical Study. In: Harling, P.A., Edwards, A.D.N. (eds) Progress in Gestural Interaction. Springer, London. https://doi.org/10.1007/978-1-4471-0943-3_16
Publisher Name: Springer, London
Print ISBN: 978-3-540-76094-8
Online ISBN: 978-1-4471-0943-3