
State of the Art and Future Challenges of the Portrayal of Facial Nonmanual Signals by Signing Avatar

  • Conference paper
  • In: Universal Access in Human-Computer Interaction. Design Methods and User Experience (HCII 2021)

Abstract

Researchers have been developing avatars to portray sign languages as a necessary component of automatic translation systems between signed and spoken languages. Although sign language avatar technology has improved significantly in recent years, open questions remain as to how best to portray the linguistic and paralinguistic information that occurs on a signer’s face. Three interdisciplinary themes influence the current state of the art. The first, linguistic discovery, defines the facial activity that an avatar must carry out. The second, Computer Generated Imagery (CGI), supplies the tools and technology required to build avatars and determines the fidelity of an avatar’s appearance. The third, Sign Language Representation Systems, determines the fidelity of the timing of facial co-occurrences. This paper discusses the current state of the art and demonstrates how these themes contribute to the overall goal of creating avatars that can produce legible signed utterances that are acceptable to viewers.
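To make the abstract’s last distinction concrete: the timing of facial co-occurrences can be modeled as parallel, independently timed channels (brows, eyes, mouth) that an avatar must sample together on every animation frame. The following minimal Python sketch illustrates the idea; it is not drawn from the paper, and the channel names, labels, and timings are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Dict, List, Optional

    @dataclass
    class Interval:
        """One labeled stretch of activity on a single facial channel."""
        start_ms: int
        end_ms: int
        label: str

    # Hypothetical channels for one signed utterance: a brow raise spanning
    # the whole clause, a brief eye blink, and a co-occurring mouthing.
    channels: Dict[str, List[Interval]] = {
        "brows": [Interval(0, 800, "raised")],
        "eyes": [Interval(300, 350, "blink")],
        "mouth": [Interval(0, 600, "mouthing")],
    }

    def active_labels(t_ms: int) -> Dict[str, Optional[str]]:
        """Sample every channel at time t_ms; None means the channel is at rest."""
        return {
            name: next((iv.label for iv in ivs
                        if iv.start_ms <= t_ms < iv.end_ms), None)
            for name, ivs in channels.items()
        }

    print(active_labels(320))  # {'brows': 'raised', 'eyes': 'blink', 'mouth': 'mouthing'}

A representation system whose smallest timing unit is a whole sign could not encode the 50 ms blink above independently of the brow raise; this is the sense in which such systems bound the fidelity of facial timing.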



Author information


Correspondence to Rosalee Wolfe or John McDonald.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wolfe, R. et al. (2021). State of the Art and Future Challenges of the Portrayal of Facial Nonmanual Signals by Signing Avatar. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. Design Methods and User Experience. HCII 2021. Lecture Notes in Computer Science, vol 12768. Springer, Cham. https://doi.org/10.1007/978-3-030-78092-0_45


  • DOI: https://doi.org/10.1007/978-3-030-78092-0_45


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78091-3

  • Online ISBN: 978-3-030-78092-0

  • eBook Packages: Computer Science, Computer Science (R0)
