Abstract
Modern wearable computer designs package workstation-level performance in systems small enough to be worn as clothing. These machines make it possible to bring technology where people with disabilities need it most: everyday mobile environments. This paper describes a research effort to build a wearable computer that can recognize (with the eventual goal of translating) sentence-level American Sign Language (ASL) using only a baseball-cap-mounted camera for input. Current accuracy exceeds 97% per word on a 40-word lexicon.
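As a point of reference for the accuracy figure, HMM-based recognizers borrowed from speech recognition conventionally report per-word accuracy as (N − S − D − I)/N, where N is the number of reference words and S, D, I are substitutions, deletions, and insertions in a minimum-edit-distance alignment. The sketch below is a generic illustration of that metric under this assumed definition, not code from the paper:

```python
def word_accuracy(reference, hypothesis):
    """Return (N - S - D - I) / N for two word lists, where S + D + I
    is the minimum edit distance between reference and hypothesis."""
    n, m = len(reference), len(hypothesis)
    # dp[i][j] = minimum edits to turn reference[:i] into hypothesis[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i                      # i deletions
    for j in range(m + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match / substitution
    return (n - dp[n][m]) / n

ref = "I want to go".split()
hyp = "I want go".split()        # one deleted word
print(word_accuracy(ref, hyp))   # 0.75
```

Note that under this definition, insertion errors can push accuracy below zero even when every reference word is matched, which is why per-word accuracy is a stricter figure than per-word "percent correct."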
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
Cite this chapter
Starner, T., Weaver, J., Pentland, A. (1998). A wearable computer based American sign language recognizer. In: Mittal, V.O., Yanco, H.A., Aronis, J., Simpson, R. (eds) Assistive Technology and Artificial Intelligence. Lecture Notes in Computer Science, vol 1458. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0055972
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64790-4
Online ISBN: 978-3-540-68678-1