
A wearable computer based American sign language recognizer

  • Chapter
  • First Online:
Assistive Technology and Artificial Intelligence

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 1458))

Abstract

Modern wearable computer designs package workstation-level performance in systems small enough to be worn as clothing. These machines allow assistive technology to be brought where it is needed most: everyday mobile environments. This paper describes a research effort to build a wearable computer that can recognize (with the possible goal of translating) sentence-level American Sign Language (ASL) using only a baseball-cap-mounted camera for input. Current accuracy exceeds 97% per word on a 40-word lexicon.
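
The abstract does not state how per-word accuracy is scored. A common convention in continuous recognition systems, inherited from speech recognition, is to align the recognized gloss sequence against the reference sentence and count substitutions (S), deletions (D), and insertions (I), giving accuracy = (N - S - D - I) / N over N reference words. The Python sketch below illustrates that convention under this assumption; the function names and the example sentence are hypothetical and are not taken from the paper.

# Illustrative sketch only: the abstract reports per-word accuracy on a
# 40-word lexicon but does not give the scoring protocol. The convention
# used here (accuracy = (N - S - D - I) / N, borrowed from speech
# recognition) and the example glosses are assumptions, not paper details.

def error_counts(ref, hyp):
    """Substitutions, deletions, insertions from an edit-distance alignment."""
    m, n = len(ref), len(hyp)
    # dp[i][j] = (total cost, subs, dels, ins) aligning ref[:i] with hyp[:j]
    dp = [[None] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(1, m + 1):
        dp[i][0] = (i, 0, i, 0)              # delete every reference word
    for j in range(1, n + 1):
        dp[0][j] = (j, 0, 0, j)              # insert every hypothesis word
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            c, s, d, k = dp[i - 1][j - 1]
            diag = (c + sub, s + sub, d, k)  # match or substitution
            c, s, d, k = dp[i - 1][j]
            dele = (c + 1, s, d + 1, k)      # deletion
            c, s, d, k = dp[i][j - 1]
            ins = (c + 1, s, d, k + 1)       # insertion
            dp[i][j] = min(diag, dele, ins)
    return dp[m][n][1:]

def word_accuracy(sentences):
    """sentences: list of (reference gloss list, recognized gloss list) pairs."""
    total = subs = dels = ins = 0
    for ref, hyp in sentences:
        s, d, i = error_counts(ref, hyp)
        total += len(ref)
        subs, dels, ins = subs + s, dels + d, ins + i
    return (total - subs - dels - ins) / total

# Hypothetical test sentence: one substitution in five words gives 80%;
# the reported 97% corresponds to roughly three such errors per 100 words.
test = [(["I", "WANT", "GO", "STORE", "NOW"], ["I", "WANT", "GO", "SHOP", "NOW"])]
print(f"per-word accuracy: {word_accuracy(test):.1%}")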



Author information

T. Starner, J. Weaver, and A. Pentland, MIT Media Laboratory.

Editor information

Vibhu O. Mittal, Holly A. Yanco, John Aronis, Richard Simpson


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Starner, T., Weaver, J., Pentland, A. (1998). A wearable computer based American sign language recognizer. In: Mittal, V.O., Yanco, H.A., Aronis, J., Simpson, R. (eds) Assistive Technology and Artificial Intelligence. Lecture Notes in Computer Science, vol 1458. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0055972


  • DOI: https://doi.org/10.1007/BFb0055972

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64790-4

  • Online ISBN: 978-3-540-68678-1

  • eBook Packages: Springer Book Archive
