
Towards an Automatic Sign Language Recognition System Using Subunits

  • Conference paper
  • In: Gesture and Sign Language in Human-Computer Interaction (GW 2001)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2298)

Abstract

This paper is concerned with the automatic recognition of German continuous sign language. For maximum user-friendliness, only a single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. Following the design of speech recognition systems, which are in general based on subunits, the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of the necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenones. A k-means algorithm is used for the definition of such fenones. A software prototype of the system is currently being evaluated in experiments.
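
The subunit approach described in the abstract amounts to clustering per-frame feature vectors into a self-organized codebook of fenones, with k-means supplying the cluster centres; frame sequences are then transcribed into fenone labels before statistical modelling. The following Python sketch illustrates this idea under stated assumptions: the feature dimensionality, codebook size, and all function names are illustrative and do not come from the paper.

```python
# Minimal sketch (not the authors' implementation): derive a "fenone" codebook
# by k-means clustering of per-frame feature vectors, then label new frames.
# Feature dimension, codebook size, and iteration count are assumptions.
import numpy as np


def kmeans_codebook(features, n_fenones=64, n_iter=50, seed=0):
    """Cluster frame feature vectors (n_frames x dim) into fenone centroids."""
    rng = np.random.default_rng(seed)
    # Initialise centroids from randomly chosen frames.
    centroids = features[rng.choice(len(features), n_fenones, replace=False)]
    for _ in range(n_iter):
        # Assign every frame to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned frames.
        for k in range(n_fenones):
            members = features[labels == k]
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)
    return centroids


def transcribe(features, centroids):
    """Map a sequence of frame features to a sequence of fenone indices."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)


if __name__ == "__main__":
    # Stand-in for geometric frame features (e.g. hand position and shape).
    frames = np.random.rand(5000, 12)
    codebook = kmeans_codebook(frames)
    fenone_labels = transcribe(np.random.rand(80, 12), codebook)
    print(fenone_labels[:10])
```

In a full recogniser along the lines described above, such fenone sequences would presumably serve as the subunit-level observations for statistical models evaluated under the Bayes decision rule for minimum error rate; that stage is beyond this sketch.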

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bauer, B., Kraiss, K.-F. (2002). Towards an Automatic Sign Language Recognition System Using Subunits. In: Wachsmuth, I., Sowa, T. (eds) Gesture and Sign Language in Human-Computer Interaction. GW 2001. Lecture Notes in Computer Science, vol 2298. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-47873-6_7

  • DOI: https://doi.org/10.1007/3-540-47873-6_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43678-2

  • Online ISBN: 978-3-540-47873-7
