DOI: 10.1145/3491102.3501987
research-article

Analyzing Deaf and Hard-of-Hearing Users’ Behavior, Usage, and Interaction with a Personal Assistant Device that Understands Sign-Language Input

Published: 29 April 2022

Abstract

As voice-based personal assistant technologies proliferate (e.g., smart speakers in homes), and more generally as voice control of technology becomes increasingly ubiquitous, new accessibility barriers are emerging for many Deaf and Hard of Hearing (DHH) users. Progress in sign-language recognition may enable devices to respond to sign-language commands and potentially mitigate these barriers, but research is needed to understand how DHH users would interact with these devices and what commands they would issue. In this work, we directly engage with the DHH community, using a Wizard-of-Oz prototype that appears to understand American Sign Language (ASL) commands. Our analysis of video recordings of DHH participants revealed how they woke up the device to initiate commands, structured commands in ASL, and responded to device errors, providing guidance to future designers and researchers. We share our dataset of over 1,400 commands, which may be of interest to sign-language-recognition researchers.




      Published In

CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022, 10459 pages
ISBN: 9781450391573
DOI: 10.1145/3491102
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. Accessibility
      2. Deaf and Hard of Hearing
      3. Personal Assistants
      4. Sign Language

      Qualifiers

      • Research-article
      • Research
      • Refereed limited


Conference

CHI '22: CHI Conference on Human Factors in Computing Systems
April 29 - May 5, 2022
New Orleans, LA, USA

      Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%



      Cited By

• (2024) "You have interrupted me again!": Making voice assistants more dementia-friendly with incremental clarification. Frontiers in Dementia 3. DOI: 10.3389/frdem.2024.1343052. Online publication date: 12-Mar-2024.
• (2024) Sign Language-Based versus Touch-Based Input for Deaf Users with Interactive Personal Assistants in Simulated Kitchen Environments. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-9. DOI: 10.1145/3613905.3651075. Online publication date: 11-May-2024.
• (2024) What Factors Motivate Culturally Deaf People to Want Assistive Technologies? Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-7. DOI: 10.1145/3613905.3650871. Online publication date: 11-May-2024.
• (2024) Designing and Evaluating an Advanced Dance Video Comprehension Tool with In-situ Move Identification Capabilities. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-19. DOI: 10.1145/3613904.3642710. Online publication date: 11-May-2024.
• (2024) Assessment of Sign Language-Based versus Touch-Based Input for Deaf Users Interacting with Intelligent Personal Assistants. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3613904.3642094. Online publication date: 11-May-2024.
• (2024) Deaf and Hard of Hearing People's Perspectives on Augmented Reality Interfaces for Improving the Accessibility of Smart Speakers. Universal Access in Human-Computer Interaction, 334-357. DOI: 10.1007/978-3-031-60881-0_21. Online publication date: 29-Jun-2024.
• (2023) U.S. Deaf Community Perspectives on Automatic Sign Language Translation. Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1-7. DOI: 10.1145/3597638.3614507. Online publication date: 22-Oct-2023.
• (2023) FingerSpeller: Camera-Free Text Entry Using Smart Rings for American Sign Language Fingerspelling Recognition. Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1-5. DOI: 10.1145/3597638.3614491. Online publication date: 22-Oct-2023.
• (2023) Towards MOOCs for Lipreading: Using Synthetic Talking Heads to Train Humans in Lipreading at Scale. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2216-2225. DOI: 10.1109/WACV56688.2023.00225. Online publication date: Jan-2023.
