DOI: 10.1145/2049536.2049556
ASSETS Conference Proceedings · research-article

Evaluating importance of facial expression in American Sign Language and Pidgin Signed English animations

Published: 24 October 2011

ABSTRACT

Animations of American Sign Language (ASL) and Pidgin Signed English (PSE) have accessibility benefits for many signers with lower levels of written language literacy. In prior experimental studies we conducted evaluating animations of ASL, native signers gave informal feedback in which they critiqued the insufficient and inaccurate facial expressions of the virtual human character. While face movements are important for conveying grammatical and prosodic information in human ASL signing, no empirical evaluation of their impact on the understandability and perceived quality of ASL animations had previously been conducted. To quantify the suggestions of deaf participants in our prior studies, we experimentally evaluated ASL and PSE animations with and without various types of facial expressions, and we found that their inclusion does lead to measurable benefits for the understandability and perceived quality of the animations. This finding provides motivation for our future work on facial expressions in ASL and PSE animations, and it lays a novel methodological groundwork for evaluating the quality of facial expressions for conveying prosodic or grammatical information.


Published in

ASSETS '11: Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility
October 2011, 348 pages
ISBN: 9781450309202
DOI: 10.1145/2049536
Copyright © 2011 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



            Acceptance Rates

Overall Acceptance Rate: 436 of 1,556 submissions, 28%
