
Stories and signs in an e-learning environment for deaf people

  • Long paper
  • Published in: Universal Access in the Information Society

Abstract

An important concern in model-driven development of interfaces is support for users with disabilities. Interface design for deaf people presents specific problems, since it must rely on visual communication and incorporate unusual forms of interaction, in particular gesture-based ones. Standard solutions for model-driven development of visual interfaces lack specific constructs for structuring these more sophisticated forms of interaction. This paper discusses such issues in the context of the development of a deaf-centered e-learning environment. Sign Languages enter this context as a suitable alternative communication code, both in video form and through one of their most successful written forms, namely SignWriting.


Notes

  1. http://www.signwriting.org/.

  2. http://www.signedstories.com.

  3. http://moodle.org.

  4. http://www.bris.ac.uk/deaf.

  5. http://digitalstory.osu.edu.

  6. http://www.dictasign.eu.

  7. http://neyric.github.com/wireit.

  8. http://www.json.org.

  9. http://developer.yahoo.com/yui.

  10. http://neyric.github.com/inputex.

  11. http://www.learningandteaching.info/learning/dalebruner.

  12. http://www.omg.org/mof.

  13. http://www.yawlfoundation.org.

  14. http://www.usixml.org/index.php?mod=pages&id=15.

  15. http://code.google.com/p/openmeetings/.

  16. http://www.movementwriting.org/symbolbank/.


Author information

Corresponding author: Paolo Bottoni.


About this article

Cite this article

Bottoni, P., Borgia, F., Buccarella, D. et al. Stories and signs in an e-learning environment for deaf people. Univ Access Inf Soc 12, 369–386 (2013). https://doi.org/10.1007/s10209-012-0283-y
