Abstract
With the aim of building a spatial gesture generation mechanism for Metaverse avatars, we report on an empirical study of multimodal direction-giving dialogues and propose a prototype gesture generation system. First, we conducted an experiment in which a direction receiver asked for directions to locations on a university campus and a direction giver provided them. Then, using a machine learning technique, we automatically annotated the direction giver's right-hand gestures and analyzed the distribution of gesture directions. Based on this analysis, we proposed four types of proxemics and found that the distribution of gesture directions differs with the type of proxemics between the conversational participants. Finally, we implemented a gesture generation mechanism in a Metaverse application and demonstrate it with an example.
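The selection mechanism described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the four proxemics labels and the probability values are placeholders (the abstract does not name or quantify them), and the only assumption taken from the paper is that gesture direction is sampled from a distribution conditioned on the proxemics type between the participants.

```python
import random

# Hypothetical gesture-direction distributions per proxemics type.
# The category names and probabilities are illustrative placeholders,
# not values reported in the paper.
GESTURE_DIRECTION_DISTRIBUTIONS = {
    "face-to-face": {"left": 0.25, "right": 0.25, "front": 0.40, "back": 0.10},
    "side-by-side": {"left": 0.40, "right": 0.40, "front": 0.15, "back": 0.05},
    "l-shaped":     {"left": 0.30, "right": 0.45, "front": 0.20, "back": 0.05},
    "distant":      {"left": 0.20, "right": 0.20, "front": 0.55, "back": 0.05},
}

def select_gesture_direction(proxemics_type: str, rng: random.Random) -> str:
    """Sample a right-hand gesture direction from the distribution
    associated with the given proxemics type."""
    dist = GESTURE_DIRECTION_DISTRIBUTIONS[proxemics_type]
    directions = list(dist.keys())
    weights = list(dist.values())
    # random.choices draws k samples according to the given weights.
    return rng.choices(directions, weights=weights, k=1)[0]

if __name__ == "__main__":
    rng = random.Random(0)  # fixed seed for reproducibility
    print(select_gesture_direction("face-to-face", rng))
```

In an avatar system, the sampled direction would then be mapped onto a right-hand animation; the point of the sketch is only that the same utterance can yield different gestures under different spatial configurations.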
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Tsukamoto, T., Muroya, Y., Okamoto, M., Nakano, Y. (2012). Collection and Analysis of Multimodal Interaction in Direction-Giving Dialogues: Towards an Automatic Gesture Selection Mechanism for Metaverse Avatars. In: Beer, M., Brom, C., Dignum, F., Soo, VW. (eds) Agents for Educational Games and Simulations. AEGS 2011. Lecture Notes in Computer Science(), vol 7471. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32326-3_6
Print ISBN: 978-3-642-32325-6
Online ISBN: 978-3-642-32326-3
eBook Packages: Computer Science, Computer Science (R0)