DOI: 10.1145/3517428.3544883
Research Article | Public Access

Support in the Moment: Benefits and use of video-span selection and search for sign-language video comprehension among ASL learners

Published: 22 October 2022

Abstract

As they develop comprehension skills, American Sign Language (ASL) learners often view challenging ASL videos, which may contain unfamiliar signs. Current dictionary tools require students to isolate a single sign they do not understand and input a search query, by selecting linguistic properties or by performing the sign into a webcam. Students may struggle with extracting and re-creating an unfamiliar sign, and they must leave the video-watching task to use an external dictionary tool. We investigate a technology that enables users, in the moment, i.e., while viewing a video, to select a span of one or more signs that they do not understand and view dictionary results. We interviewed 14 ASL learners about their challenges in understanding ASL video and their workarounds for unfamiliar vocabulary. We then conducted a comparative study and an in-depth analysis with 15 ASL learners to investigate the benefits of using video sub-spans for searching, and their interactions with a Wizard-of-Oz prototype during a video-comprehension task. Our findings revealed benefits of our tool in terms of the quality of the video translations produced and the perceived workload of producing them. Our in-depth analysis also revealed benefits of an integrated search tool and of using span selection to constrain video playback. These findings inform future designers of such systems, computer-vision researchers working on the underlying sign-matching technologies, and sign language educators.

Supplementary Material

Study1_Videos.csv: This file contains the genres, titles, and URLs of the 4 sample videos shown to participants in study 1. Study2_Videos.csv: This file contains the genres, URLs, start times, end times, durations, and descriptions of the 9 videos used in study 2 (the prototype study). NASA_TLX_Table.csv: This file contains the scaled values of the 6 NASA TLX sub-scales for both conditions in study 2. The rightmost column contains the p values and U values from the Mann–Whitney U test. (assets22a-sub8747-cam-i40.zip)


Cited By

View all
  • (2024) Exploring the Benefits and Applications of Video-Span Selection and Search for Real-Time Support in Sign Language Video Comprehension among ASL Learners. ACM Transactions on Accessible Computing 17, 3 (2024), 1–35. https://doi.org/10.1145/3690647
  • (2024) Designing and Evaluating an Advanced Dance Video Comprehension Tool with In-situ Move Identification Capabilities. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3613904.3642710
  • (2023) Supporting ASL Communication Between Hearing Parents and Deaf Children. In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1–5. https://doi.org/10.1145/3597638.3614511
  • (2023) Sign Spotter: Design and Initial Evaluation of an Automatic Video-Based American Sign Language Dictionary System. In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1–5. https://doi.org/10.1145/3597638.3614497



Published In

ASSETS '22: Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility
October 2022
902 pages
ISBN:9781450392587
DOI:10.1145/3517428


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. ASL Learning
  2. American Sign Language
  3. Continuous Signing
  4. Integrated Search
  5. Search Interface
  6. Sign Language Learning
  7. Sign Language Videos
  8. Sign Languages
  9. Sign Look-up
  10. Video Selection

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Conference

ASSETS '22

Acceptance Rates

ASSETS '22 paper acceptance rate: 35 of 132 submissions (27%).
Overall acceptance rate: 436 of 1,556 submissions (28%).

