
Sign language applications: preliminary modeling

  • Long Paper
  • Published: 2008
  • Universal Access in the Information Society

Abstract

For deaf persons to have ready access to information and communication technologies (ICTs), the latter must be usable in sign language (SL), i.e., they must include interlanguage interfaces. Such applications will be accepted by deaf users if they are reliable and respect the specificities of SL, namely the use of space and iconicity as the structuring principles of the language. Before ICT applications can be developed, these features must be modeled, both to enable the analysis of SL videos and to generate SL messages by means of signing avatars. This paper presents a signing space model, implemented in automatic-analysis and automatic-generation contexts that are currently under development.
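
As a rough illustration of what such a signing space model might look like in code, here is a minimal Python sketch. The class names, coordinates, and relation labels are assumptions for illustration, not the model actually described in the paper.

    # A minimal, hypothetical signing space: entities are assigned loci
    # (positions in front of the signer) and spatial relations are stated
    # between placed entities. Usable both for analysis (recording detected
    # placements) and for generation (choosing where an avatar signs).

    from dataclasses import dataclass, field

    @dataclass
    class Locus:
        x: float             # illustrative coordinates in the signing space
        y: float
        z: float

    @dataclass
    class SigningSpace:
        entities: dict = field(default_factory=dict)    # name -> Locus
        relations: list = field(default_factory=list)   # (label, name, name)

        def place(self, entity, locus):
            """Spatialize an entity at a locus in the signing space."""
            self.entities[entity] = locus

        def relate(self, label, a, b):
            """State a spatial relation between two placed entities."""
            assert a in self.entities and b in self.entities
            self.relations.append((label, a, b))

    # Usage: place two entities, then state a relation between them.
    space = SigningSpace()
    space.place("house", Locus(-0.3, 0.0, 0.4))
    space.place("car", Locus(0.3, 0.0, 0.4))
    space.relate("left-of", "house", "car")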

Notes

  1. A proform is a handshape that refers to an entity previously signed in the discourse. The proform not only singles out an entity among several, but also conveys a particular point of view on that entity given the context. It is used to spatialize an entity in the signing space and to express relations and actions between entities (see the first sketch following these notes).

  2. Allen's temporal relations [1] are expressed as follows: ∨: or (disjunction of relations), =: equal, <: precedes, m: immediately precedes, o: partially overlaps, e: completely overlaps at end (see the second sketch following these notes).

  3. http://lsscript.limsi.fr
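
As a concrete illustration of note 1, the following Python sketch shows one hypothetical way to represent a proform. The names Entity, Locus, and Proform, and the structure itself, are assumptions for illustration, not the paper's actual model.

    # A hypothetical proform representation: a handshape standing for a
    # previously signed entity, anchored at a locus in the signing space.

    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str            # discourse referent, e.g. "car"

    @dataclass
    class Locus:
        x: float             # illustrative coordinates in the signing space
        y: float
        z: float

    @dataclass
    class Proform:
        handshape: str       # handshape chosen for the current point of view
        referent: Entity     # the previously signed entity it stands for
        locus: Locus         # where the entity is placed in the signing space

    # Placing "car" with a flat handshape (a vehicle seen from above) anchors
    # it at a locus, so later signs can express relations and actions on it.
    car = Entity("car")
    car_proform = Proform("flat", car, Locus(0.2, 0.1, 0.3))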
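
The temporal relations of note 2 can be made precise with a small interval sketch in Python, using standard endpoint definitions from Allen's algebra. Mapping the note's "e" to Allen's "finishes" relation is an assumption based on its gloss ("completely overlaps at end").

    # Minimal interval-endpoint reading of the relations listed in note 2.

    from dataclasses import dataclass

    @dataclass
    class Interval:
        start: float
        end: float           # assume start < end

    def equal(a, b):         # '=': a and b cover exactly the same span
        return a.start == b.start and a.end == b.end

    def precedes(a, b):      # '<': a ends strictly before b starts
        return a.end < b.start

    def meets(a, b):         # 'm': a ends exactly where b starts
        return a.end == b.start

    def overlaps(a, b):      # 'o': a starts first, the spans partially overlap
        return a.start < b.start < a.end < b.end

    def finishes(a, b):      # 'e': a starts inside b and they end together
        return b.start < a.start and a.end == b.end

    # The '∨' symbol (or) combines alternatives, e.g. "a precedes or meets b":
    def precedes_or_meets(a, b):
        return precedes(a, b) or meets(a, b)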

References

  1. Allen, J.F.: Towards a general theory of action and time. In: Allen, J., Hendler, J., Tate, A. (eds.) Readings in Planning, pp. 464–479. Morgan Kaufmann, San Mateo (1990)

  2. Baader, F., et al. (eds.): The Description Logic Handbook. Cambridge University Press, Cambridge (2003). ISBN 0521781760

  3. Bowden, R., Windridge, D., Kadir, T., Zisserman, A., Brady, M.: A linguistic feature vector for the visual interpretation of sign language. In: Pajdla, T., Matas, J. (eds.) Proceedings of the 8th European Conference on Computer Vision (ECCV 2004), vol. 1, LNCS 3022, pp. 391–401. Springer (2004)

  4. Braffort, A.: Reconnaissance et compréhension de gestes, application à la langue des signes [Gesture recognition and understanding, applied to sign language]. PhD thesis, Université Paris-XI, Orsay (1996)

  5. Braffort, A.: ARGo: an architecture for sign language recognition and interpretation. In: Harling, P., Edwards, A. (eds.) “Progress in Gestural Interaction”, 1st International Gesture Workshop (GW’96), Springer, Heidelberg (1997)

  6. Braffort, A.: Research on computer science and sign language: ethical aspects. In: Wachsmuth, I., Sowa, T. (eds.) “Gesture and Sign Language in Human-Computer Interaction”, selected revised papers of the 4th International Gesture Workshop (GW’01), LNCS LNAI 2298, Springer, Heidelberg (2002)

  7. Braffort, A., Bossard, B., Segouat, J., Bolot, L., Lejeune, F.: Modélisation des relations spatiales en langue des signes française [Modeling spatial relations in French Sign Language]. In: Proceedings of Traitement Automatique de la Langue des Signes, CNRS, ATALA (2005)

  8. Braffort, A., Lejeune, F.: Spatialised semantic relations in French sign language: toward a computational modelling. In: Gibet, S. (ed.) “Gesture in Human-Computer Interaction and Simulation”, selected revised papers of the 6th International Gesture Workshop (GW’05), LNCS LNAI 3881, Springer, Heidelberg (2006)

  9. Cuxac, C.: French sign language: proposition of a structural explanation by iconicity. In: Braffort, A., Gherbi, R., Gibet, S., et al. (eds.) “Gesture-based Communication in Human-Computer Interaction”, selected revised papers of the 3rd International Gesture Workshop (GW’99), LNCS LNAI 1739, Springer, Heidelberg (1999)

  10. Dalle, P., Lenseigne, B.: Vision-based sign language processing using a predictive approach and linguistic knowledge. In: IAPR Conference on Machine Vision Applications (MVA 2005), Tsukuba Science City, Japan, pp. 510–513 (2005)

  11. Fasel, B., Luettin, J.: Automatic facial expression analysis: a survey. Pattern Recognit. 36, 259–275 (2003)

  12. Filhol, M., Braffort, A.: A sequential approach to lexical sign description. In: LREC 2006 Workshop on Sign Languages, Genoa, Italy (2006)

  13. Garcia, B., Boutet, D., Braffort, A., Dalle, P.: Sign language in graphical form: methodology, modellisation and representations for gestural communication. In: Interacting Bodies (ISGS), Lyon, France (2005)

  14. Gavrila, D.M.: The visual analysis of human movement: a survey. Comput. Vis. Image Underst. 73(1), 82–98 (1999)

  15. Hanke, T.: HamNoSys—an introductory guide. Signum, Hamburg (1989)

  16. Huenerfauth, M.: Spatial representation of classifier predicates for machine translation into American sign language. In: Workshop on Representation and Processing of Sign Language, 4th International Conference on Language Resources and Evaluation (LREC 2004), pp. 24–31, Lisbon, Portugal (2004)

  17. Kennaway, R.: Synthetic animation of deaf signing gestures. In: Wachsmuth, I., Sowa, T. (eds.) “Gesture and Sign Language in Human-Computer Interaction”, selected revised papers of the 4th International Gesture Workshop (GW’01), LNCS LNAI 2298, Springer, Heidelberg (2002)

  18. Lenseigne, B., Gianni, F., Dalle, P.: A new gesture representation for sign language analysis. In: Workshop on Representation and Processing of Sign Language, 4th International Conference on Language Resources and Evaluation (LREC 2004), pp. 85–90, Lisbon, Portugal (2004)

  19. Lenseigne, B., Dalle, P.: Using signing space as a representation for sign language processing. In: Gibet, S. (ed.) “Gesture in Human-Computer Interaction and Simulation”, selected revised papers of the 6th International Gesture Workshop (GW’05), LNCS LNAI 3881, Springer, Heidelberg (2006)

  20. Liddell, S.: Grammar, Gesture and Meaning in American Sign Language. Cambridge University Press, Cambridge (2003)

  21. Marshall, I., Safar, E.: Sign language generation in an ALE HPSG. In: Müller, S. (ed.) Proceedings of the 11th International Conference on Head-Driven Phrase Structure Grammar (HPSG 2004), Center for Computational Linguistics, Katholieke Universiteit Leuven, pp. 189–201 (2004)

  22. Mercier, H., Peyras, J., Dalle, P.: Toward an efficient and accurate AAM fitting on appearance varying faces. In: 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, pp. 363–368 (2005)

  23. Ong, S., Ranganath, S.: Automatic sign language analysis: a survey and the future beyond lexical meaning. IEEE Trans. Pattern Anal. Mach. Intell. 27(6), 873–891 (2005)

  24. Vogler, C., Metaxas, D.: Handshapes and movements: multiple-channel American sign language recognition. In: Camurri, A., Volpe, G. (eds.) “Gesture-based Communication in Human-Computer Interaction”, selected revised papers of the 5th International Gesture Workshop (GW’03), LNCS LNAI, vol. 2915, Springer, Heidelberg (2004)

  25. Yang, M., Kriegman, D.J., Ahuja, N.: Detecting faces in images: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 24(1), 34–58 (2002)

Author information

Correspondence to Annelies Braffort.

About this article

Cite this article

Braffort, A., Dalle, P. Sign language applications: preliminary modeling. Univ Access Inf Soc 6, 393–404 (2008). https://doi.org/10.1007/s10209-007-0103-y
