ABSTRACT
Animations of American Sign Language (ASL) and Pidgin Signed English (PSE) have accessibility benefits for many signers with lower levels of written language literacy. In prior experimental studies we conducted evaluating animations of ASL, native signers gave informal feedback critiquing the insufficient and inaccurate facial expressions of the virtual human character. While face movements are important for conveying grammatical and prosodic information in human ASL signing, no empirical evaluation of their impact on the understandability and perceived quality of ASL animations had previously been conducted. To quantify the suggestions of deaf participants in our prior studies, we experimentally evaluated ASL and PSE animations with and without various types of facial expressions, and we found that their inclusion does lead to measurable benefits for the understandability and perceived quality of the animations. This finding motivates our future work on facial expressions in ASL and PSE animations, and it establishes a novel methodological groundwork for evaluating the quality of facial expressions for conveying prosodic or grammatical information.