Abstract
Large-display environments such as the Reality Center or Powerwall are recent additions to the Virtual Reality (VR) field. In contrast to HMDs and similar displays, they allow several unencumbered users to view a virtual environment at once. Adding interaction capabilities to these displays must not curtail that freedom: tracker-based devices such as the DataGlove or wand are ill-suited, since they force users to don dedicated gear. Video cameras, by contrast, seem very promising in these environments: their uses range from locating a laser dot on the display to recovering each user's full body posture. Our goal is to film a user's hand in front of a large display, recover its posture, and then interpret that posture according to a predefined interaction technique. While most such systems rely on appearance-based approaches, we have chosen to investigate how effective a model-based one can be. This paper presents the first steps of this work, namely the real-time results obtained using the hand-silhouette feature, along with some further conclusions specific to working in a large-display VR environment.
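The abstract describes fitting a hand model to an observed silhouette. As an illustration only, and not the authors' actual implementation, the sketch below shows the general idea in Python/NumPy under simple assumptions: a binary silhouette is extracted by background subtraction, and a candidate posture of the hand model (already rendered as a binary mask) is scored against it by intersection-over-union. The function names, the threshold value, and the IoU score are all hypothetical choices for this sketch.

```python
import numpy as np

def extract_silhouette(frame, background, thresh=30):
    """Binary hand silhouette via simple background subtraction.
    frame, background: 2-D grayscale images (uint8 arrays)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

def silhouette_score(observed, rendered):
    """Intersection-over-union between the observed silhouette and a
    silhouette rendered from a candidate hand-model posture."""
    inter = np.logical_and(observed, rendered).sum()
    union = np.logical_or(observed, rendered).sum()
    return inter / union if union else 0.0

# Toy usage: a bright "hand" blob against a flat dark background.
bg = np.zeros((8, 8), dtype=np.uint8)
frame = bg.copy()
frame[2:6, 2:6] = 200                 # observed hand region (4x4 pixels)
obs = extract_silhouette(frame, bg)
model = np.zeros((8, 8), dtype=bool)
model[2:6, 2:5] = True                # candidate posture's silhouette (4x3)
print(round(silhouette_score(obs, model), 2))  # prints 0.75
```

A model-based tracker would repeat the scoring step over many candidate joint configurations per frame and keep the best-scoring posture, which is what makes real-time performance the central challenge the paper addresses.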
© 2004 Springer-Verlag Berlin Heidelberg
de la Rivière, J.B., Guitton, P. (2004). Hand Postures Recognition in Large-Display VR Environments. In: Camurri, A., Volpe, G. (eds) Gesture-Based Communication in Human-Computer Interaction. GW 2003. Lecture Notes in Computer Science, vol 2915. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24598-8_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-21072-6
Online ISBN: 978-3-540-24598-8