DOI: 10.1145/971478.971512

Hand tracking for human-computer interaction with Graylevel VisualGlove: turning back to the simple way

Published: 15 November 2001

ABSTRACT

Recent developments in the manufacturing and marketing of low-power-consumption computers, small enough to be "worn" by users and remain almost invisible, have reintroduced the problem of overcoming the outdated paradigm of human-computer interaction based on the use of a keyboard and a mouse. Approaches based on visual tracking seem the most promising, as they require no additional devices (gloves, etc.) and can be implemented with off-the-shelf hardware such as webcams. Unfortunately, extremely variable lighting conditions and the high computational complexity of most available algorithms make these techniques hard to use in systems where CPU power consumption is a major issue (e.g. wearable computers) and in situations where lighting conditions are critical (outdoors, in the dark, etc.). This paper describes the work carried out at VisiLAB at the University of Messina, as part of the VisualGlove project, to develop a real-time, vision-based device able to substitute for the mouse and similar input devices. It operates in a wide range of lighting conditions, using a low-cost webcam and running on an entry-level PC. As explained in detail below, particular care has been taken to reduce computational complexity, and hence the resources needed for the whole system to work.
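Only the abstract is reproduced here, but the kind of low-cost graylevel pipeline it describes (segment the hand in a webcam frame, then turn its motion into pointer movement) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' algorithm: the function names, the fixed "hand darker than background" threshold, and the motion gain are all hypothetical choices for the sake of the example.

```python
import numpy as np

def track_hand_centroid(frame, threshold=128):
    """Segment the hand as the set of dark pixels in a graylevel frame
    and return its centroid (row, col), or None if no pixel qualifies.
    A fixed threshold is an illustrative simplification; a robust system
    would adapt it to the lighting conditions."""
    mask = frame < threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return ys.mean(), xs.mean()

def centroid_to_cursor_delta(prev, curr, gain=2.0):
    """Map frame-to-frame centroid motion to a relative mouse movement
    (dx, dy), scaled by a hypothetical sensitivity gain."""
    dy = (curr[0] - prev[0]) * gain
    dx = (curr[1] - prev[1]) * gain
    return dx, dy

# Synthetic 120x160 graylevel frame: bright background, dark "hand" blob.
frame = np.full((120, 160), 200, dtype=np.uint8)
frame[40:60, 70:90] = 30
centroid = track_hand_centroid(frame)  # centroid of the dark blob
```

Per-frame cost here is a single threshold pass plus a mean over the foreground pixels, which is in the spirit of the paper's emphasis on keeping computational complexity low enough for wearable, entry-level hardware.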


Published in

PUI '01: Proceedings of the 2001 Workshop on Perceptive User Interfaces
November 2001, 241 pages
ISBN: 9781450374736
DOI: 10.1145/971478
Copyright © 2001 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

