Abstract
This paper proposes a vision-based methodology that recognizes users' fingertips so that users can perform various mouse operations by gesture, and that also supports multi-mouse operation. Using the Ramer-Douglas-Peucker algorithm, the system extracts fingertip coordinates from the detected palm of the hand, and it infers the user's intended mouse operation from the movements of the recognized fingers. When the system detects several palms, it switches to a multi-mouse mode in which several users can coordinate their work on the same screen; the number of mouse pointers equals the number of detected palms. To implement our proposal, we employed the Kinect motion-capture camera and used its tracking function to recognize users' fingers. Operations on the mouse pointers are driven by the coordinates of the detected fingers. To demonstrate the effectiveness of our proposal, we conducted several user experiments. We observed that the Kinect is suitable equipment for implementing multi-mouse operation, and that the participants quickly learned the multi-mouse environment and behaved naturally in front of the Kinect motion-capture camera.
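The abstract names the Ramer-Douglas-Peucker algorithm as the mechanism for reducing the hand contour to fingertip candidates, but does not give the implementation. As an illustration only (the function names, the point-list contour representation, and the tolerance value are our own assumptions, not the authors' code), the core RDP simplification step can be sketched as:

```python
import math

def _point_line_distance(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    if a == b:
        return math.dist(p, a)
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1))
    return num / math.dist(a, b)

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: simplify a polyline, keeping only points
    that deviate from the endpoint chord by more than epsilon."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord joining the endpoints.
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            index, dmax = i, d
    if dmax > epsilon:
        # Recurse on both halves; drop the duplicated join point on merge.
        left = rdp(points[: index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    # Everything between the endpoints is within tolerance: discard it.
    return [points[0], points[-1]]
```

Applied to a hand contour, the surviving vertices concentrate at high-curvature points such as fingertips and finger valleys; a subsequent filtering step (not shown, and not specified in the abstract) would then select the fingertip points among them.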
References
Farhadi-Niaki, F., Aghaei, R.G., Arya, A.: Empirical Study of a Vision-based Depth Sensitive Human-Computer Interaction System. In: Tenth Asia Pacific Conference on Computer Human Interaction, pp. 101–108. ACM Press (2012)
Ueda, M., Takeuchi, I.: Mouse Cursors Surf the Net - Developing Multi-computer Multi-mouse Systems. In: IPSJ Programming Symposium, pp. 25–32 (2007) (in Japanese)
Viola, P., Jones, M.: Robust real-time object detection. In: Second International Workshop on Statistical and Computational Theories of Vision (2001)
Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. IEEE Computer Vision and Pattern Recognition 1, 511–518 (2001)
Chen, Q., Cordea, M.D., Petriu, E.M., Varkonyi-Koczy, A.R., Whalen, T.E.: Human-computer interaction for smart environment applications using hand-gesture and facial-expressions. International Journal of Advanced Media and Communication 3(1/2), 95–109 (2009)
Kolsch, M., Turk, M.: Robust hand detection. In: International Conference on Automatic Face and Gesture Recognition, pp. 614–619 (2004)
Kolsch, M., Turk, M.: Analysis of rotational robustness of hand detection with a Viola-Jones detector. In: IAPR International Conference on Pattern Recognition, vol. 3, pp. 107–110 (2004)
Zhang, Q., Chen, F., Liu, X.: Hand gesture detection and segmentation based on difference background image with complex background. In: International Conference on Embedded Software and Systems, pp. 338–343 (2008)
Anton-Canalis, L., Sanchez-Nielsen, E., Castrillon-Santana, M.: Hand pose detection for vision-based gesture interfaces. In: Conference on Machine Vision Applications, pp. 506–509 (2005)
Marcel, S., Bernier, O., Viallet, J.E., Collobert, D.: Hand gesture recognition using input-output hidden Markov models. In: Conference on Automatic Face and Gesture Recognition, pp. 456–461 (2000)
Yu, C., Wang, X., Huang, H., Shen, J., Wu, K.: Vision-based hand gesture recognition using combinational features. In: Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 543–546 (2010)
Chiba, S., Yosimitsu, K., Maruyama, M., Toyama, K., Iseki, H., Muragaki, Y.: Opect: Non-contact image processing system using Kinect. J. Japan Society of Computer Aided Surgery 14(3), 150–151 (2012) (in Japanese)
Nichi, Opect: Non-contact image processing system using Kinect, http://www.nichiiweb.jp/medical/category/hospital/opect.html
Ahn, S.C., Lee, T., Kim, I., Kwon, Y., Kim, H.: Computer vision-based interactive presentation system. In: Asian Conference for Computer Vision (2004)
Wagner, B.: Effective C# (Covers C# 4.0): 50 Specific Ways to Improve Your C#, 2nd edn. Addison-Wesley Professional (2010)
OpenNI: The standard framework for 3D sensing, http://www.openni.org/
NiTE 2: OpenNI, http://www.openni.org/files/nite/
Douglas, D., Peucker, T.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. The Canadian Cartographer 10(2), 112–122 (1973)
Kinect for Windows: Voice, Movement & Gesture Recognition Technology, http://www.microsoft.com/en-us/kinectforwindows/
Copyright information
© 2013 Springer International Publishing Switzerland
Cite this paper
Onodera, Y., Kambayashi, Y. (2013). Vision-Based User Interface for Mouse and Multi-mouse System. In: Yoshida, T., Kou, G., Skowron, A., Cao, J., Hacid, H., Zhong, N. (eds) Active Media Technology. AMT 2013. Lecture Notes in Computer Science, vol 8210. Springer, Cham. https://doi.org/10.1007/978-3-319-02750-0_2
Print ISBN: 978-3-319-02749-4
Online ISBN: 978-3-319-02750-0
eBook Packages: Computer Science (R0)