
Effect of spatial reference and verb inflection on the usability of sign language animations

  • Long Paper
  • Published in: Universal Access in the Information Society

Abstract

Computer-generated animations of American Sign Language (ASL) can improve the accessibility of information, communication, and services for the significant number of deaf adults in the US who have difficulty reading English text. Unfortunately, there are several linguistic aspects of ASL that current automatic generation or translation systems cannot produce (or that are time-consuming for human animators to create). To determine how important such phenomena are to user satisfaction and to the comprehension of ASL animations, studies were conducted in which native ASL signers evaluated ASL animations with and without: establishment of spatial reference points around the virtual human signer representing entities under discussion, pointing pronoun signs, contrastive role shift, and spatial inflection of ASL verbs. Adding these phenomena to ASL animations led to a significant improvement in user comprehension of the animations, thereby motivating future research on automating the generation of these linguistic phenomena.
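The finding above rests on a between-condition comparison: comprehension-question scores for animations that include these spatial phenomena versus scores for animations that omit them. The Python sketch below illustrates what such a comparison could look like; the scores are invented and the choice of a Mann-Whitney U test is an assumption for illustration, not the paper's actual analysis.

    # Hypothetical sketch: comparing comprehension scores for ASL animations
    # WITH vs. WITHOUT spatial reference points and inflected verbs.
    # All scores below are invented; the paper's actual data and
    # statistical procedure may differ.
    from scipy import stats

    # Per-participant comprehension scores (fraction of questions correct).
    with_phenomena = [0.80, 0.75, 0.90, 0.85, 0.70, 0.95, 0.80, 0.85]
    without_phenomena = [0.55, 0.60, 0.50, 0.65, 0.45, 0.70, 0.60, 0.55]

    # One-sided non-parametric test: are scores higher with the phenomena?
    u_stat, p_value = stats.mannwhitneyu(
        with_phenomena, without_phenomena, alternative="greater"
    )
    print(f"U = {u_stat}, p = {p_value:.4f}")
    # A small p-value is consistent with the abstract's claim that adding
    # these phenomena significantly improves comprehension.

A non-parametric test is used in this sketch because comprehension scores are bounded and samples in such studies are typically small; a parametric alternative (e.g., a t-test) would rest on stronger distributional assumptions.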


Abbreviations

ASL: American Sign Language
HCI: Human-computer interaction
MT: Machine translation
BSL: British Sign Language


Acknowledgments

This research was supported in part by the U.S. National Science Foundation under award number 0746556, by The City University of New York PSC-CUNY Research Award Program, by Siemens A&D UGS PLM Software through a Go PLM Academic Grant, and by Visage Technologies AB through a free academic license for character animation software. Jonathan Lamberton prepared experimental materials and organized data collection for the ASL animation studies discussed in Sects. 2 and 3.

Author information


Corresponding author

Correspondence to Matt Huenerfauth.


About this article

Cite this article

Huenerfauth, M., Lu, P. Effect of spatial reference and verb inflection on the usability of sign language animations. Univ Access Inf Soc 11, 169–184 (2012). https://doi.org/10.1007/s10209-011-0247-7

