
Sign Language Avatars: Animation and Comprehensibility

  • Conference paper
Intelligent Virtual Agents (IVA 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6895)


Abstract

Many deaf people have significant reading problems; written content, e.g. on web pages, is therefore not fully accessible to them. Embodied agents have the potential to communicate in the native language of this cultural group: sign language. However, state-of-the-art systems have limited comprehensibility, and standard evaluation methods are missing. In this paper, we present methods and discuss challenges for the creation and evaluation of a signing avatar. We extended the existing EMBR character animation system with the prerequisite functionality, created a gloss-based animation tool, and developed a cyclic content creation workflow with the help of two deaf sign language experts. For evaluation, we introduce delta testing, a novel way of assessing comprehensibility by comparing avatars with human signers. While our system reached state-of-the-art comprehensibility in a short development time, we argue that future research needs to focus on nonmanual aspects and prosody to reach the comprehensibility levels of human signers.
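The abstract describes delta testing only as comparing avatar comprehensibility with that of human signers; it does not give a formula here. As a hypothetical illustration, the sketch below assumes each test item yields a comprehension rate (the fraction of deaf participants who understood the item) for both an avatar rendition and a human-signer baseline, and reports the avatar's mean score relative to the human mean. The function name and the numbers are inventions for illustration, not the paper's actual metric.

```python
# Hypothetical sketch of delta testing: report avatar comprehension
# relative to a human-signer baseline measured on the same test items.

def delta_score(avatar_scores, human_scores):
    """Mean avatar comprehension as a fraction of the human baseline.

    Both arguments are per-item comprehension rates in [0, 1],
    aligned item by item.
    """
    if len(avatar_scores) != len(human_scores):
        raise ValueError("score lists must align item by item")
    avatar_mean = sum(avatar_scores) / len(avatar_scores)
    human_mean = sum(human_scores) / len(human_scores)
    return avatar_mean / human_mean

# Illustrative numbers only: four signed test items.
avatar = [0.50, 0.60, 0.40, 0.70]
human = [0.90, 0.95, 0.85, 0.90]
print(f"avatar reaches {delta_score(avatar, human):.0%} of human comprehensibility")
```

Anchoring the measurement to human signers, rather than reporting absolute comprehension, makes results comparable across studies that use different test material or participant pools.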







Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kipp, M., Heloir, A., Nguyen, Q. (2011). Sign Language Avatars: Animation and Comprehensibility. In: Vilhjálmsson, H.H., Kopp, S., Marsella, S., Thórisson, K.R. (eds) Intelligent Virtual Agents. IVA 2011. Lecture Notes in Computer Science (LNAI), vol. 6895. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23974-8_13


  • DOI: https://doi.org/10.1007/978-3-642-23974-8_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23973-1

  • Online ISBN: 978-3-642-23974-8

  • eBook Packages: Computer Science, Computer Science (R0)
