ABSTRACT
We present a design for an interactive American Sign Language game aimed at language development for deaf children. In addition to our work on game design, we show how Wizard of Oz techniques can be used to facilitate our work on ASL recognition. We report on two Wizard of Oz studies that demonstrate our technique and support our iterative design process. We also detail specific design implications raised by working with deaf children, along with possible solutions.