Understanding ASL Learners’ Preferences for a Sign Language Recording and Automatic Feedback System to Support Self-Study

Published: 22 October 2022

Abstract

Advancements in AI will soon enable tools that provide automatic feedback to American Sign Language (ASL) learners on some aspects of their signing, but there is a need to understand learners' preferences for submitting videos and receiving feedback. In our study, ten participants were asked to record a few sentences in ASL using software we designed, and we provided manually curated feedback on one sentence in a manner that simulated the output of a future automatic feedback system. Participants responded to interview questions and a questionnaire eliciting their impressions of the prototype. Our initial findings provide guidance to future designers of automatic feedback systems for ASL learners.


Published In

ASSETS '22: Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility
October 2022, 902 pages
ISBN: 9781450392587
DOI: 10.1145/3517428

Publisher

Association for Computing Machinery, New York, NY, United States

        Author Tags

        1. American Sign Language
        2. Automatic feedback
        3. Education
        4. Feedback
        5. Interface design
        6. Language learning
        7. Sign languages

        Qualifiers

        • Poster
        • Research
        • Refereed limited

Acceptance Rates

ASSETS '22 Paper Acceptance Rate: 35 of 132 submissions, 27%
Overall Acceptance Rate: 436 of 1,556 submissions, 28%
