Abstract
As video conferencing systems transition to head-mounted displays (HMDs), non-contact (3D) hand gestures are likely to replace conventional input devices by providing more efficient interaction at lower cost. This paper presents the design of an experimental video conferencing system built around an optical see-through HMD, a Leap Motion hand tracker, and RGB cameras. Both skeleton-based dynamic hand gesture recognition and ergonomics-based gesture lexicon design were studied. The proposed recognition algorithm fused hand-shape and hand-direction features, applied a Temporal Pyramid to obtain a high-dimensional descriptor, and classified gestures with a linear SVM. Subjects (N = 16) self-generated hand gestures for 25 tasks related to video conferencing and object manipulation, and rated each gesture on ease of performance, match to the command, and arm fatigue. Based on these ratings, a gesture lexicon is proposed for controlling a video conferencing system and for manipulating virtual objects.
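To make the recognition pipeline concrete, the sketch below shows one way the approach described above could be assembled: per-frame hand-shape and hand-direction features fused into a single vector, a Temporal Pyramid pooling those vectors into a fixed-length high-dimensional descriptor, and a linear SVM for classification. This is a minimal illustration, not the authors' implementation; the frame_features and temporal_pyramid helpers, the assumed joint layout, and scikit-learn's LinearSVC (standing in for a generic linear SVM) are all assumptions.

```python
# Minimal sketch (not the authors' code): fused skeleton features,
# Temporal Pyramid pooling, and a linear SVM classifier.
import numpy as np
from sklearn.svm import LinearSVC

def frame_features(joints):
    """Fuse an illustrative hand-shape feature (fingertip-to-palm
    distances) with a hand-direction feature (unit palm-to-middle-
    fingertip vector). `joints` is a (J, 3) array of 3D positions;
    row 0 is assumed to be the palm centre, rows 1-5 the fingertips."""
    palm, tips = joints[0], joints[1:6]
    shape = np.linalg.norm(tips - palm, axis=1)        # 5 distances
    direction = tips[2] - palm                         # middle fingertip
    direction = direction / (np.linalg.norm(direction) + 1e-8)
    return np.concatenate([shape, direction])          # 8-D per frame

def temporal_pyramid(frames, levels=(1, 2, 4)):
    """Mean-pool per-frame features over 1, 2, and 4 temporal segments
    and concatenate, giving a fixed-length gesture descriptor."""
    feats = np.stack([frame_features(f) for f in frames])   # (T, 8)
    pooled = [seg.mean(axis=0)
              for n in levels
              for seg in np.array_split(feats, n)]
    return np.concatenate(pooled)                      # 8 * 7 = 56-D

if __name__ == "__main__":
    # Toy usage with random data standing in for tracked sequences:
    # 40 gestures, 30 frames each, 6 joints per frame, 4 classes.
    rng = np.random.default_rng(0)
    sequences = [rng.normal(size=(30, 6, 3)) for _ in range(40)]
    labels = rng.integers(0, 4, size=40)
    X = np.stack([temporal_pyramid(s) for s in sequences])
    clf = LinearSVC(C=1.0).fit(X, labels)
    print(clf.predict(X[:5]))
```

With pyramid levels (1, 2, 4) there are 7 segments in total, so an 8-D frame feature becomes a 56-D descriptor; deeper pyramids trade temporal resolution against dimensionality before the linear SVM.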
Acknowledgment
This work was supported in part by the National High Technology Research and Development Program of China (2015AA016303), the National Natural Science Foundation of China (61631010), and the Office Ergonomics Research Committee.
Cite this paper
Li, G., Liu, Y., Wang, Y., Rempel, D. (2018). Study on User-Generated 3D Gestures for Video Conferencing System with See-Through Head Mounted Display. In: Wang, Y., Jiang, Z., Peng, Y. (eds.) Image and Graphics Technologies and Applications. IGTA 2018. Communications in Computer and Information Science, vol. 875. Springer, Singapore. https://doi.org/10.1007/978-981-13-1702-6_60