Abstract
Supporting users with disabilities is an important concern in the model-driven development of interfaces. Interface design for deaf people poses specific problems, since it must be grounded in visual communication and must incorporate unusual forms of interaction, in particular gesture-based ones. Standard approaches to the model-driven development of visual interfaces lack constructs for structuring these more sophisticated forms of interaction. This paper discusses such issues in the context of the development of a deaf-centered e-learning environment. Sign Languages enter this context as a suitable alternative communication code, both in video form and through one of their most successful written forms, namely SignWriting.
Bottoni, P., Borgia, F., Buccarella, D. et al. Stories and signs in an e-learning environment for deaf people. Univ Access Inf Soc 12, 369–386 (2013). https://doi.org/10.1007/s10209-012-0283-y