Multi-modal Sign Icon Retrieval for Augmentative Communication

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 2195))

Abstract

This paper presents a multi-modal sign icon retrieval and prediction technology for generating sentences from ill-formed Taiwanese Sign Language (TSL) input produced by people with speech or hearing impairments. The design and development of this PC-based TSL augmentative and alternative communication (AAC) system aims to improve the input rate and accuracy of communication aids. This study focuses on 1) developing an effective TSL icon retrieval method, 2) investigating TSL prediction strategies to increase the input rate, and 3) using a predictive sentence template (PST) tree for sentence generation. The proposed system assists people with language disabilities in forming sentences. To evaluate the performance of our approach, a pilot study covering clinical evaluation and educational training was undertaken. The evaluation results show that the retrieval rate and the subjective satisfaction level for sentence generation were significantly improved.
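The abstract describes a predictive sentence template (PST) tree that expands short, possibly ill-formed icon sequences into complete sentences, but it does not give the data structure's details. The following is a minimal sketch, assuming a prefix-tree-style index from sign-icon (gloss) sequences to candidate sentence templates; the class names, method names, and toy glosses are hypothetical illustrations, not the authors' actual design.

```python
# Hypothetical sketch of a PST-tree-like predictor: each node indexes the next
# sign icon entered by the user, and every node stores the full sentences
# reachable from that icon prefix, so predictions can be offered early.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PSTNode:
    children: Dict[str, "PSTNode"] = field(default_factory=dict)
    templates: List[str] = field(default_factory=list)  # sentences sharing this prefix


class PSTTree:
    def __init__(self) -> None:
        self.root = PSTNode()

    def add_template(self, icon_sequence: List[str], sentence: str) -> None:
        """Index a well-formed sentence under its (possibly ill-formed) icon sequence."""
        node = self.root
        for icon in icon_sequence:
            node = node.children.setdefault(icon, PSTNode())
            node.templates.append(sentence)

    def predict(self, icon_prefix: List[str], k: int = 3) -> List[str]:
        """Return up to k candidate sentences matching the icons entered so far."""
        node = self.root
        for icon in icon_prefix:
            if icon not in node.children:
                return []
            node = node.children[icon]
        return node.templates[:k]


if __name__ == "__main__":
    tree = PSTTree()
    # Toy TSL-style gloss sequences paired with full sentences (illustrative only).
    tree.add_template(["I", "WANT", "EAT"], "I want to eat.")
    tree.add_template(["I", "WANT", "DRINK", "WATER"], "I want to drink some water.")
    tree.add_template(["I", "GO", "SCHOOL"], "I am going to school.")
    print(tree.predict(["I", "WANT"]))  # candidate sentences after only two icons
```

In this sketch, prediction cost grows with the prefix length rather than the vocabulary size, which is the kind of property an input-rate-oriented AAC aid would need; how the actual system ranks or filters candidates is not specified in the abstract.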

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wu, CH., Chiu, YH., Cheng, KW. (2001). Multi-modal Sign Icon Retrieval for Augmentative Communication. In: Shum, HY., Liao, M., Chang, SF. (eds) Advances in Multimedia Information Processing — PCM 2001. PCM 2001. Lecture Notes in Computer Science, vol 2195. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45453-5_77

  • DOI: https://doi.org/10.1007/3-540-45453-5_77

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42680-6

  • Online ISBN: 978-3-540-45453-3

  • eBook Packages: Springer Book Archive
