
ARGo: An Architecture for Sign Language Recognition and Interpretation

  • Conference paper

Abstract

This paper presents a recognition and interpretation architecture dedicated to Sign Language.

A sign is composed of several co-occurring parameters, which allows several heterogeneous pieces of information to be conveyed simultaneously, each depending on the variation of one of these parameters. Sign languages vary from one country to another, and each has a specific vocabulary. These signs are called conventional signs and can be listed in dictionaries. A second kind of sign is extremely frequent in sign language communication: the non-conventional sign. Non-conventional signs are created during discourse, depending on need and context, and cannot be listed in dictionaries. Moreover, some conventional signs may have one or more variable parameters, depending on context. These signs are named variable signs.
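The idea of a sign as a bundle of co-occurring parameters can be sketched as a record. The parameter set below (handshape, orientation, location, movement) is the one commonly cited in the sign-language literature and is an illustrative assumption, not the exact set defined in this paper:

```python
from dataclasses import dataclass

@dataclass
class Sign:
    # Co-occurring manual parameters (illustrative parameter set).
    handshape: str
    orientation: str
    location: str
    movement: str
    conventional: bool = True   # False for signs created during discourse
    variable: bool = False      # True when a parameter depends on context

# A conventional sign fixes every parameter and can be listed in a
# dictionary; a variable sign leaves some parameter open to context.
BALL = Sign("curved-5", "palms-facing", "neutral-space", "symmetric-arc")
```

A variable sign would carry `variable=True` and have one of its parameter fields filled in only at interpretation time, from the discourse context.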

Sign language functioning is based on the simultaneity of information and on spatial rules governing the relationships between signs, and the vocabulary is not completely known a priori. For these reasons, classical sequential processing is not sufficient for sign recognition. Our architecture tries to take this functioning into account. It is composed of a recognition module and an interpretation module. The recognition module is based on Hidden Markov Models and classifies conventional, non-conventional and variable signs. In the interpretation module, a virtual scene stores context and completes the meaning of variable and non-conventional signs.
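An HMM-based recognition step of the kind described here typically scores an observation sequence against one model per sign and keeps the best-scoring label. The sketch below is a minimal illustration of that idea using the standard forward algorithm on discrete HMMs; the model names and parameters are toy assumptions, not the paper's actual models:

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the forward algorithm."""
    n = len(start)
    # Initialisation: probability of starting in state i and emitting obs[0].
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    # Induction: fold each observation into the forward variables.
    for o in obs[1:]:
        alpha = [emit[j][o] * sum(alpha[i] * trans[i][j] for i in range(n))
                 for j in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Return the sign label whose HMM scores the sequence highest."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two toy 2-state sign models (entirely illustrative parameters):
# each model is (initial probs, transition matrix, emission matrix).
MODELS = {
    "A": ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.1, 0.9]]),
    "B": ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], [[0.1, 0.9], [0.9, 0.1]]),
}
```

Calling `classify([0, 0, 0, 1, 1], MODELS)` scores the quantised gesture sequence against every sign model and returns the winning label; in a real system the observations would be quantised glove or vision features rather than toy symbols.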

Because of its ability to deal with non-conventional signs, this system can be used both for sign language and for co-verbal gestures.





Copyright information

© 1997 Springer-Verlag London

About this paper

Cite this paper

Braffort, A. (1997). ARGo: An Architecture for Sign Language Recognition and Interpretation. In: Harling, P.A., Edwards, A.D.N. (eds) Progress in Gestural Interaction. Springer, London. https://doi.org/10.1007/978-1-4471-0943-3_3


  • DOI: https://doi.org/10.1007/978-1-4471-0943-3_3

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76094-8

  • Online ISBN: 978-1-4471-0943-3

