DOI: 10.1145/3025453.3026032
CHI Conference Proceedings (research article)

Leveraging Complementary Contributions of Different Workers for Efficient Crowdsourcing of Video Captions

Published: 02 May 2017

Abstract

Hearing-impaired people and non-native speakers rely on captions for access to video content, yet most videos remain uncaptioned or have machine-generated captions with high error rates. In this paper, we present the design, implementation, and evaluation of BandCaption, a system that combines automatic speech recognition with input from crowd workers to provide a cost-efficient captioning solution for accessible online videos. We consider four stakeholder groups as our source of crowd workers: (i) individuals with hearing impairments, (ii) second-language speakers with low proficiency, (iii) second-language speakers with high proficiency, and (iv) native speakers. Each group has different abilities and incentives, which our workflow leverages. Our findings show that BandCaption enables crowd workers who have different needs and strengths to accomplish micro-tasks and make complementary contributions. Based on our results, we outline opportunities for future research and provide design suggestions to deliver cost-efficient captioning solutions.
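The workflow the abstract describes, splitting ASR output into micro-tasks and matching each task to the worker group best suited to it, can be sketched roughly as follows. This is an illustrative sketch only: the `Segment` and `route_segment` names, the confidence thresholds, and the task-to-group mapping are assumptions made here for illustration, not BandCaption's actual policy, and only three of the paper's four worker groups appear.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One ASR-generated caption segment with a recognizer confidence score."""
    text: str
    confidence: float  # 0.0 (likely wrong) .. 1.0 (likely correct)

def route_segment(seg: Segment) -> str:
    """Assign a caption micro-task to a worker group by complementary strength.

    Thresholds and group labels are hypothetical, chosen only to illustrate
    confidence-based routing.
    """
    if seg.confidence >= 0.9:
        # High-confidence segments need only a quick check, a task that
        # low-proficiency second-language speakers can perform.
        return "verify:low-proficiency-L2"
    if seg.confidence >= 0.6:
        # Moderately noisy segments: high-proficiency second-language
        # speakers can correct wording.
        return "edit:high-proficiency-L2"
    # Heavily garbled segments need a native listener to re-transcribe.
    return "transcribe:native"

tasks = [Segment("hello everyone welcome", 0.95),
         Segment("to days topic is a i", 0.70),
         Segment("xxx noise xxx", 0.30)]
assignments = [route_segment(s) for s in tasks]
```

Routing by ASR confidence is one plausible way to realize "complementary contributions": each group gets tasks its abilities cover, and no single (expensive) group handles the whole transcript.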

Supplementary Material

suppl.mov (pn4783p.mp4)
Supplemental video


Cited By

  • (2024) Envisioning Collective Communication Access: A Theoretically-Grounded Review of Captioning Literature from 2013-2023. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 1-18. DOI: 10.1145/3663548.3675649. Online publication date: 27-Oct-2024.
  • (2023) Exploring Community-Driven Descriptions for Making Livestreams Accessible. In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1-13. DOI: 10.1145/3597638.3608425. Online publication date: 22-Oct-2023.
  • (2023) Accessibility Research in Digital Audiovisual Media: What Has Been Achieved and What Should Be Done Next? In Proceedings of the 2023 ACM International Conference on Interactive Media Experiences, 94-114. DOI: 10.1145/3573381.3596159. Online publication date: 12-Jun-2023.


Published In

CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
May 2017, 7138 pages
ISBN: 9781450346559
DOI: 10.1145/3025453


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. complementary contributions
    2. crowdsourcing
    3. video caption

    Qualifiers

    • Research-article


    Acceptance Rates

CHI '17 Paper Acceptance Rate: 600 of 2,400 submissions (25%)
    Overall Acceptance Rate: 6,199 of 26,314 submissions (24%)

