Toward an intuitive sign language animation authoring system for the deaf

  • Long paper
  • Published in: Universal Access in the Information Society

Abstract

This paper presents an ongoing effort toward an online collaborative framework allowing Deaf individuals to author intelligible signs using a dedicated 3D animation authoring interface. The results presented here focus mainly on the design of a dedicated user interface assisted by novel input devices. This design can benefit not only the Deaf but also linguists studying sign language, by providing them with a novel kind of study material: a symbolic representation of intelligible sign language animations together with a fine-grained log of the user’s edit actions. Two user studies demonstrate how Leap Motion and Kinect-like devices can be used together for recording and authoring hand trajectories as well as facial animation.
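
The abstract mentions two artefacts the framework would produce for linguists: a symbolic representation of authored signs and a fine-grained log of the user's edit actions. The sketch below is purely illustrative of what such study material could look like; the paper's actual data format is not given here, and all class and field names (SignAnimation, Keyframe, EditAction, ...) are hypothetical.

```python
# Hypothetical sketch (not the authors' format): a symbolic sign animation
# combining hand-trajectory keyframes (e.g., from a Leap Motion) and facial
# blendshape weights (e.g., from a Kinect-like sensor), plus an edit log.
from dataclasses import dataclass, field
from time import time
from typing import Dict, List, Tuple


@dataclass
class Keyframe:
    t: float                                   # time in seconds within the sign
    right_hand: Tuple[float, float, float]     # palm position
    left_hand: Tuple[float, float, float]
    face: Dict[str, float] = field(default_factory=dict)  # blendshape weights


@dataclass
class EditAction:
    timestamp: float       # wall-clock time of the edit
    user: str
    action: str            # e.g., "add_keyframe", "adjust_blendshape"
    details: Dict[str, object] = field(default_factory=dict)


@dataclass
class SignAnimation:
    gloss: str                                          # gloss label, e.g., "HOUSE"
    keyframes: List[Keyframe] = field(default_factory=list)
    edit_log: List[EditAction] = field(default_factory=list)

    def log(self, user: str, action: str, **details) -> None:
        """Record one fine-grained edit action."""
        self.edit_log.append(EditAction(time(), user, action, dict(details)))


# Usage example: authoring one keyframe of a sign and logging the edit.
sign = SignAnimation(gloss="HOUSE")
sign.keyframes.append(Keyframe(0.0, (0.1, 1.2, 0.3), (-0.1, 1.2, 0.3), {"browRaise": 0.4}))
sign.log("user42", "add_keyframe", index=0)
```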

Notes

  1. This paper follows the convention of writing Deaf with a capitalized D to refer to members of the Deaf community [23] who use sign language as their preferred language, whereas lowercase deaf refers to the audiological condition of not hearing.

  2. http://www.microsoft.com/en-us/kinectforwindows/ (23 Jan 2014).

  3. http://leapmotion.com (23 Jan 2014).

  4. Glosses are transcriptions of lexical and morphological items of a sign language (e.g., ASL) into the written language of the researcher(s) studying that sign language; by convention, glosses are written in capital letters (e.g., HOUSE for the ASL sign meaning “house”).

  5. http://hebergcck224.rnu.tn/ws/_site/demo.php (1 Sep 2014).

  6. Natural user interface (NUI) is a term used by designers and developers of human–machine interfaces to refer to a user interface that is effectively invisible, and that remains invisible as the user continuously learns increasingly complex interactions.

  7. http://www.culinan.net/ (1 Sep 2014).

  8. http://www.elix-lsf.fr/ (1 Sep 2014).

  9. http://www.faceshift.com/ (1 Sep 2014).

  10. http://www.blender.org/ (1 Sep 2014).

  11. http://en.wikipedia.org/wiki/Touchscreen#.22Gorilla_arm.22 (1 Sep 2014).

  12. http://slsi.dfki.de/software-and-resources/ (1 Sep 2014).

References

  1. Adamo-Villani, N., Popescu, V., Lestina, J.: A non-expert-user interface for posing signing avatars. Disabil. Rehabil. 8(3), 238–248 (2013)

  2. Battocchi, A., Pianesi, F., Goren-Bar, D.: DaFEx: database of facial expressions. In: Proceedings of the First international conference on Intelligent Technologies for Interactive Entertainment. INTETAIN’05, pp. 303–306. Springer, Berlin, Heidelberg (2005)

  3. Battocchi, A., Pianesi, F., Goren-Bar, D.: A first evaluation study of a database of kinetic facial expressions (DaFEx). In: Proceedings of the 7th International Conference on Multimodal Interfaces. ICMI ’05, pp. 214–221. ACM, New York (2005)

  4. Cavender, A.C., Otero, D.S., Bigham, J.P., Ladner, R.E.: ASL-STEM Forum: enabling sign language to grow through online collaboration. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2075–2078. ACM Press, New York (2010)

  5. Delorme, M., Filhol, M., Braffort, A.: Animation generation process for sign language synthesis. In: Advances in Computer-Human Interactions, 2009. ACHI ’09, pp. 386–390. IEEE, New York (2009)

  6. Ekman, P.: Emotion in the human face: guide-lines for research and an integration of findings. No. 11 in Pergamon general psychology series. Pergamon Press, New York (1972)

  7. Elliott, R., Glauert, J.R.W., Kennaway, J.R., Marshall, I., Safar, E.: Linguistic modelling and language-processing technologies for avatar-based sign language presentation. Univers. Access Inf. Soc. 6(4), 375–391 (2008)

  8. Filhol, M.: Combining two synchronisation methods in a linguistic model to describe sign language. In: Gesture and Sign Language in Human-Computer Interaction and Embodied Communication, vol. 7206, pp. 194–203. Springer, Berlin, Heidelberg (2012)

  9. Forlines, C., Wigdor, D., Shen, C., Balakrishnan, R.: Direct-touch vs. mouse input for tabletop displays. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’07, pp. 647–656. ACM, New York (2007)

  10. Gibet, S., Courty, N., Duarte, K., Naour, T.L.: The SignCom system for data-driven animation of interactive virtual signers: methodology and evaluation. ACM Trans. Interact. Intell. Syst. 1(1), 1–23 (2011)

  11. Glauert, J., Elliott, R.: Extending the SiGML notation: a progress report. In: Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT) (2011)

  12. Hanke, T., Matthes, S., Regen, A., Storz, J., Worseck, S., Eliott, R., Glauert, J., Kennaway, R.: Using timing information to improve the performance of avatars. In: Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT) (2011)

  13. Heloir, A., Kipp, M.: EMBR—a realtime animation engine for interactive embodied agents. In: Proceedings of the 9th International Conference on Intelligent Virtual Agents (IVA-09) (2009)

  14. Jayaprakash, R., Matthes, S., Hanke, T.: Extending the design editor for specifying more dynamic signing. In: Third International Workshop on Sign Language Translation and Avatar Technology (SLTAT) (2013)

  15. Jemni, M., Elghoul, O.: A system to make signs using collaborative approach. In: Computers Helping People with Special Needs. No. 5105 in Lecture Notes in Computer Science, pp. 670–677. Springer, Berlin, Heidelberg (2008)

  16. Kipp, M., Heloir, A., Nguyen, Q.: Sign language avatars: animation and comprehensibility. In: Proceedings of the 10th International Conference on Intelligent Virtual Agents. IVA’11, pp. 113–126. Springer, Berlin, Heidelberg (2011)

  17. Kipp, M., Nguyen, Q.: Multitouch puppetry: creating coordinated 3D motion for an articulated arm. In: ACM International Conference on Interactive Tabletops and Surfaces, pp. 147–156. ACM Press, New York (2010)

  18. Lasseter, J.: Principles of traditional animation applied to 3D computer animation. ACM SIGGRAPH Comput. Graph. 21(4), 35–44 (1987)

  19. Lebourque, T., Gibet, S.: High level specification and control of communication gestures: the GESSYCA system. In: Computer Animation, 1999. Proceedings, pp. 24–35. IEEE, New York (1999)

  20. Lombardo, V., Nunnari, F., Damiano, R.: A virtual interpreter for the Italian sign language. In: Proceedings of the 10th International Conference on Intelligent Virtual Agents, IVA’10, pp. 201–207. Springer, Berlin, Heidelberg (2010)

  21. Losson, O., Vannobel, J.M.: Sign language formal description and synthesis. Int. J. Virtual Real. 3(4), 27–34 (1998)

  22. Lu, P., Huenerfauth, M.: Data-driven synthesis of spatially inflected verbs for American Sign Language animation. ACM Trans. Accessible Comput. 4(1), 1–29 (2011)

  23. Padden, C., Humphries, T.: Deaf in America: Voices from a Culture. Harvard University Press, Cambridge, Mass (1988)

  24. Pighin, F., Lewis, J.P.: Facial motion retargeting. In: ACM SIGGRAPH 2006 Courses, SIGGRAPH ’06. ACM, New York (2006)

  25. Prillwitz, S.L., Leven, R., Zienert, H., Hanke, T., Henning, J.: HamNoSys Version 2.0. International Studies on Sign Language and Communication of the Deaf (1989)

  26. Sanna, A., Lamberti, F., Paravati, G., Rocha, F.D.: A kinect-based interface to animate virtual characters. J. Multimodal User Interfaces 7(4), 269–279 (2013)

  27. Sears, A., Shneiderman, B.: High precision touchscreens: design strategies and comparisons with a mouse. Int. J. Man-Machine Stud. 34(4), 593–613 (1991)

  28. Stokoe, W.C.: Sign language structure: an outline of the visual communication system of the American deaf. J. Deaf Stud. Deaf Educ. 10(1), 3–37 (2005)

  29. Thorne, M., Burke, D., van de Panne, M.: Motion doodles. ACM Trans. Graph. 23(3), 424 (2004)

  30. Weise, T., Bouaziz, S., Li, H., Pauly, M.: Realtime performance-based facial animation. In: ACM SIGGRAPH 2011 papers, SIGGRAPH ’11, pp. 77:1–77:10. ACM, New York (2011)

  31. Wolfe, R., McDonald, J., Davidson, M.J., Frank, C.: Using an animation-based technology to support reading curricula for deaf elementary schoolchildren. In: The 22nd Annual International Technology and Persons with Disabilities Conference (2007)

Acknowledgments

This research has been carried out within the framework of the Excellence Cluster Multimodal Computing and Interaction (MMCI) at Saarland University, funded by the German Research Foundation (DFG).

Author information

Corresponding author

Correspondence to Alexis Heloir.

About this article

Cite this article

Heloir, A., Nunnari, F. Toward an intuitive sign language animation authoring system for the deaf. Univ Access Inf Soc 15, 513–523 (2016). https://doi.org/10.1007/s10209-015-0409-0
