DOI: 10.1145/3517428.3550366

Context-responsive ASL Recommendation for Parent-Child Interaction

Published: 22 October 2022

Abstract

Parental language input in early childhood plays a critical role in lifelong neuro-cognitive and social development. Deaf and Hard of Hearing (DHH) children are often at risk of language deprivation because their hearing parents have limited knowledge of sign language, the natural language for DHH children from birth. To offer an immersive sign language environment for DHH children, we designed a novel computer-mediated communication technology named the Table Top Interactive System (TIPS). It provides context-responsive recommendations of American Sign Language (ASL) in real time for hearing parents during face-to-face joint play with their DHH children. The system supports parent autonomy by adapting ASL recommendations to the parent's speech during play, and minimizes obtrusiveness in face-to-face interaction through an Augmented Reality (AR) display. This paper describes the design and development of an initial working prototype of TIPS and reports preliminary results on system latency and the accuracy of ASL recommendation and visualization. Next, we plan to conduct a user study to gather expert and parent feedback on the system design and on ASL recommendation strategies for long-term, personalized use.
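The abstract describes adapting ASL recommendations to a parent's speech during play. The paper does not publish its pipeline, but the core idea can be illustrated with a minimal sketch: extract content words from a transcribed utterance and look them up in a sign dictionary. All names here (`SIGN_VIDEOS`, `recommend_signs`, the stopword list) are hypothetical, not taken from the TIPS implementation.

```python
# Illustrative sketch only: map a parent's transcribed utterance to ASL sign
# clips. The dictionary, stopword list, and function names are assumptions,
# not the paper's actual system.

SIGN_VIDEOS = {
    "ball": "asl/ball.mp4",
    "red": "asl/red.mp4",
    "throw": "asl/throw.mp4",
}

STOPWORDS = {"the", "a", "an", "to", "me", "you", "is", "it"}

def recommend_signs(utterance: str, max_signs: int = 3) -> list:
    """Return ASL clips for content words recognized in the utterance."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    hits = [SIGN_VIDEOS[w] for w in words
            if w not in STOPWORDS and w in SIGN_VIDEOS]
    # Cap the number of recommendations so an AR display stays uncluttered.
    return hits[:max_signs]

print(recommend_signs("Throw the red ball to me!"))
# → ['asl/throw.mp4', 'asl/red.mp4', 'asl/ball.mp4']
```

A real system would replace the dictionary lookup with streaming speech recognition and context-aware ranking, as the abstract's latency and accuracy measurements suggest.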


Published In

ASSETS '22: Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility
October 2022, 902 pages
ISBN: 9781450392587
DOI: 10.1145/3517428

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. American Sign Language
  2. Augmented Reality
  3. Computer-mediated communication
  4. Parent-Child Interaction

Qualifiers

  • Poster
  • Research
  • Refereed limited

Conference

ASSETS '22

Acceptance Rates

ASSETS '22 Paper Acceptance Rate: 35 of 132 submissions, 27%
Overall Acceptance Rate: 436 of 1,556 submissions, 28%


Cited By

  • (2024) Communication, Collaboration, and Coordination in a Co-located Shared Augmented Reality Game: Perspectives From Deaf and Hard of Hearing People. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–14. DOI: 10.1145/3613904.3642953. Online publication date: 11 May 2024.
  • (2023) Communication and Collaboration Among DHH People in a Co-located Collaborative Multiplayer AR Environment. Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1–5. DOI: 10.1145/3597638.3614479. Online publication date: 22 October 2023.
  • (2023) DHH People in Co-located Collaborative Multiplayer AR Environments. Companion Proceedings of the Annual Symposium on Computer-Human Interaction in Play, 344–347. DOI: 10.1145/3573382.3616039. Online publication date: 6 October 2023.
  • (2023) Building User-Centered ASL Communication Technologies for Parent-Child Interactions. 2023 IEEE MIT Undergraduate Research Technology Conference (URTC), 1–5. DOI: 10.1109/URTC60662.2023.10534918. Online publication date: 6 October 2023.
  • (2023) Behavior Analysis of Parent-Child Interactions from Text. 2023 International Conference on Machine Learning and Applications (ICMLA), 1175–1180. DOI: 10.1109/ICMLA58977.2023.00176. Online publication date: 15 December 2023.
  • (2022) "Out of Sight, Out of Mind": Comparing Deaf and Hard of Hearing Child's Response to ASL Recommendation Systems. 2022 IEEE MIT Undergraduate Research Technology Conference (URTC), 1–6. DOI: 10.1109/URTC56832.2022.10002237. Online publication date: 30 September 2022.
