Multimodal user interface for the communication of the disabled

  • Original Paper
  • Journal on Multimodal User Interfaces

Abstract

In this paper, a novel system is proposed to provide alternative tools and interfaces to blind and deaf-and-mute people and to enable their communication and interaction with the computer. Several modules are developed to transform signals into other perceivable forms so that the transmitted message is conveyed despite the user's disabilities. The proposed application integrates haptics, audio and visual output, computer vision, sign language analysis and synthesis, and speech recognition and synthesis to provide an interactive environment in which blind and deaf-and-mute users can collaborate. All the involved technologies are integrated into a treasure hunting game that is jointly played by the blind user and the deaf-and-mute user. The integration of the multimodal interfaces into a game serves both as entertainment and as a pleasant educational tool for the users.
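
To make the modality-transformation idea concrete, the following is a minimal Python sketch of how a message from one player could be routed into forms the other player can perceive. The names here (User, speak, render_sign_video, render_haptic, deliver) are hypothetical stand-ins for the speech-synthesis, sign-language-synthesis, and haptic-rendering modules described above, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): each message is re-rendered in
# whatever form the receiving user can perceive, which is the core idea behind
# the paper's modality-transformation modules.
from dataclasses import dataclass


@dataclass
class User:
    name: str
    can_hear: bool
    can_see: bool


def speak(text: str) -> None:
    """Stand-in for a speech-synthesis (TTS) module."""
    print(f"[speech] {text}")


def render_sign_video(text: str) -> None:
    """Stand-in for a sign-language synthesis (signing avatar) module."""
    print(f"[sign avatar] {text}")


def render_haptic(text: str) -> None:
    """Stand-in for a haptic / Braille rendering module."""
    print(f"[haptics] {text}")


def deliver(message: str, receiver: User) -> None:
    """Route one message through every channel the receiver can perceive."""
    if receiver.can_hear:
        speak(message)
    if receiver.can_see:
        render_sign_video(message)
    else:
        render_haptic(message)


blind_player = User("blind player", can_hear=True, can_see=False)
deaf_player = User("deaf-and-mute player", can_hear=False, can_see=True)

# A clue found by one player is conveyed to the other in a perceivable form:
deliver("The key is under the red tile", deaf_player)   # -> signing avatar
deliver("The key is under the red tile", blind_player)  # -> speech + haptics
```

In the full system, each stand-in would correspond to one of the integrated modules (speech recognition and synthesis, sign language analysis and synthesis, haptic rendering), with the game engine dispatching messages between the two players.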

Author information

Corresponding author

Correspondence to Savvas Argyropoulos.

About this article

Cite this article

Argyropoulos, S., Moustakas, K., Karpov, A.A. et al. Multimodal user interface for the communication of the disabled. J Multimodal User Interfaces 2, 105–116 (2008). https://doi.org/10.1007/s12193-008-0012-2
