DOI: 10.1145/3517428.3544827

ASL Wiki: An Exploratory Interface for Crowdsourcing ASL Translations

Published: 22 October 2022

Abstract

The Deaf and Hard-of-Hearing (DHH) community faces a lack of information in American Sign Language (ASL) and other signed languages. Most informational resources are text-based (e.g., books, encyclopedias, newspapers, and magazines). Because DHH signers typically prefer ASL and are often less fluent in written English, text is often insufficient. At the same time, there is also a lack of large continuous sign language datasets from representative signers, which are essential to advancing sign language research and technology. In this work, we explore the possibility of crowdsourcing English-to-ASL translations to help address these barriers. To do this, we present a novel bilingual interface that enables the community to both contribute and consume translations. To shed light on the user experience with such an interface, we present a user study with 19 participants who used the interface to both generate and consume content. To better understand the potential impact of the interface on translation quality, we also present a preliminary translation quality analysis. Our results suggest that DHH community members find real-world value in the interface and that the quality of translations is comparable to that of translations created with state-of-the-art setups; they also point to avenues for future research.

Cited By

  • Best practices for sign language technology research. Universal Access in the Information Society (2023). https://doi.org/10.1007/s10209-023-01039-1. Online publication date: 7 September 2023.

Published In

ASSETS '22: Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility
October 2022
902 pages
ISBN: 9781450392587
DOI: 10.1145/3517428
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. ASL data collection
  2. Bilingual
  3. Corpus
  4. Crowdsourcing
  5. Deaf and Hard-of-Hearing
  6. Education
  7. Interface
  8. Sign Language

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ASSETS '22

Acceptance Rates

ASSETS '22 Paper Acceptance Rate: 35 of 132 submissions (27%)
Overall Acceptance Rate: 436 of 1,556 submissions (28%)

