
Vision-Based User Interface for Mouse and Multi-mouse System

  • Conference paper
Active Media Technology (AMT 2013)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 8210)

Abstract

This paper proposes a vision-based methodology that recognizes users' fingertips so that users can perform various mouse operations by gestures, and that also supports multi-mouse operation. Using the Ramer-Douglas-Peucker algorithm, the system extracts the coordinates of the fingers from the detected palm of the hand, and it recognizes the user's intended mouse operation from the movements of those fingers. When the system recognizes several palms, it switches to a multi-mouse mode so that several users can coordinate their work on the same screen; the number of mouse pointers equals the number of recognized palms. To implement our proposal, we employed the Kinect motion capture camera and used its tracking function to recognize the users' fingers; operations on the mouse pointers follow the coordinates of the detected fingers. To demonstrate the effectiveness of our proposal, we conducted several user experiments and observed that the Kinect is suitable equipment for implementing multi-mouse operation. The participants quickly learned the multi-mouse environment and performed naturally in front of the Kinect motion capture camera.
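
As an illustration of the contour-simplification step, the following is a minimal, plain-Python sketch of the Ramer-Douglas-Peucker algorithm, the kind of routine the system relies on to reduce a hand contour to a few salient vertices from which fingertip candidates can be read off. The function names, the toy contour, and the epsilon threshold are illustrative assumptions rather than the authors' implementation.

```python
import math

def _point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:              # degenerate segment: a and b coincide
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: keep a vertex only if it deviates by more than
    epsilon from the chord joining the current segment's endpoints."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord between the two endpoints.
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            index, dmax = i, d
    if dmax > epsilon:
        # Split at the farthest vertex, simplify both halves, and merge,
        # dropping the duplicated joint vertex.
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    # Every intermediate vertex lies within epsilon of the chord: keep only the endpoints.
    return [points[0], points[-1]]

# Toy example: with epsilon = 2.0 the near-collinear points disappear.
contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 0.1), (5, 0)]
print(rdp(contour, 2.0))   # -> [(0, 0), (3, 5), (5, 0)]
```

Run on the toy contour, only the endpoints and the sharp turn at (3, 5) survive; on a real hand contour, such surviving high-curvature vertices are the fingertip candidates that drive the mouse pointer.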


References

  1. Farhadi-Niaki, F., Aghaei, R.G., Arya, A.: Empirical Study of a Vision-based Depth Sensitive Human-Computer Interaction System. In: Tenth Asia Pacific Conference on Computer Human Interaction, pp. 101–108. ACM Press (2012)

  2. Ueda, M., Takeuchi, I.: Mouse Cursors Surf the Net - Developing Multi-computer Multi-mouse Systems. In: IPSJ Programming Symposium, pp. 25–32 (2007) (in Japanese)

  3. Viola, P., Jones, M.: Robust real-time object detection. In: Second International Workshop on Statistical and Computational Theories of Vision (2001)

  4. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 511–518 (2001)

  5. Chen, Q., Cordea, M.D., Petriu, E.M., Varkonyi-Kockzy, A.R., Whalen, T.E.: Human-computer interaction for smart environment applications using hand-gesture and facial-expressions. International Journal of Advanced Media and Communication 3(1/2), 95–109 (2009)

  6. Kolsch, M., Turk, M.: Robust hand detection. In: International Conference on Automatic Face and Gesture Recognition, pp. 614–619 (2004)

  7. Kolsch, M., Turk, M.: Analysis of rotational robustness of hand detection with a Viola-Jones detector. In: IAPR International Conference on Pattern Recognition, vol. 3, pp. 107–110 (2004)

  8. Zhang, Q., Chen, F., Liu, X.: Hand gesture detection and segmentation based on difference background image with complex background. In: International Conference on Embedded Software and Systems, pp. 338–343 (2008)

  9. Anton-Canalis, L., Sanchez-Nielsen, E., Castrillon-Santana, M.: Hand pose detection for vision-based gesture interfaces. In: Conference on Machine Vision Applications, pp. 506–509 (2005)

  10. Marcel, S., Bernier, O., Viallet, J.E., Collobert, D.: Hand gesture recognition using input-output hidden Markov models. In: Conference on Automatic Face and Gesture Recognition, pp. 456–461 (2000)

  11. Yu, C., Wang, X., Huang, H., Shen, J., Wu, K.: Vision-based hand gesture recognition using combinational features. In: Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 543–546 (2010)

  12. Chiba, S., Yosimitsu, K., Maruyama, M., Toyama, K., Iseki, H., Muragaki, Y.: Opect: Non-contact image processing system using Kinect. J. Japan Society of Computer Aided Surgery 14(3), 150–151 (2012) (in Japanese)

  13. Nichi, Opect: Non-contact image processing system using Kinect, http://www.nichiiweb.jp/medical/category/hospital/opect.html

  14. Ahn, S.C., Lee, T., Kim, I., Kwon, Y., Kim, H.: Computer vision-based interactive presentation system. In: Asian Conference for Computer Vision (2004)

  15. Wagner, B.: Effective C# (Covers C# 4.0): 50 Specific Ways to Improve Your C#, 2nd edn. Addison-Wesley Professional (2010)

  16. OpenNI: The standard framework for 3D sensing, http://www.openni.org/

  17. NiTE 2: OpenNI, http://www.openni.org/files/nite/

  18. Douglas, D., Peucker, T.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. The Canadian Cartographer 10(2), 112–122 (1973)

  19. Kinect for Windows: Voice, Movement & Gesture Recognition Technology, http://www.microsoft.com/en-us/kinectforwindows/


Copyright information

© 2013 Springer International Publishing Switzerland

About this paper

Cite this paper

Onodera, Y., Kambayashi, Y. (2013). Vision-Based User Interface for Mouse and Multi-mouse System. In: Yoshida, T., Kou, G., Skowron, A., Cao, J., Hacid, H., Zhong, N. (eds) Active Media Technology. AMT 2013. Lecture Notes in Computer Science, vol 8210. Springer, Cham. https://doi.org/10.1007/978-3-319-02750-0_2

  • DOI: https://doi.org/10.1007/978-3-319-02750-0_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-02749-4

  • Online ISBN: 978-3-319-02750-0

  • eBook Packages: Computer Science, Computer Science (R0)
