
Preferences of Deaf or Hard of Hearing Users for Live-TV Caption Appearance

  • Conference paper

In: Universal Access in Human-Computer Interaction. Access to Media, Learning and Assistive Environments (HCII 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12769)

Included in the following conference series: International Conference on Human-Computer Interaction (HCII)

Abstract

Captions during television programming vary widely in visual appearance (e.g., text color, typeface, caption background, number of lines, caption placement), especially during live or near-live broadcasts in local markets. The effect of these visual properties on Deaf and Hard of Hearing (DHH) users’ TV-watching experience has received little attention in existing research-based guidelines or in the design of state-of-the-art caption evaluation metrics. Therefore, we empirically investigated which visual attributes of captions DHH viewers prefer while watching captioned live TV programs. We convened two focus groups in which participants watched videos containing captions with various display properties and provided open-ended subjective feedback. Analyzing the focus-group responses, we observed that DHH users prefer high contrast between caption text and background color, such as black text on a white background or vice versa, and caption placement that does not occlude salient onscreen content. Our findings also revealed preferences for genre-adaptive caption typeface and movement during captioned live TV programming.
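
The paper reports viewer preferences rather than an implementation, but as a rough illustration of how two of the reported preferences (high-contrast black-on-white caption text, and placement that avoids occluding salient content) could be encoded for delivery, here is a minimal Python sketch that emits a WebVTT caption file. The WEBVTT STYLE block and the line/align cue settings are standard WebVTT; the cue text, the timings, and the bottom_occupied flag marking a hypothetical salient region at the bottom of the frame are illustrative assumptions, not from the paper. (US broadcast captions are carried in CEA-608/708 rather than WebVTT; WebVTT is used here only because it is easy to show in text form.)

```python
# Illustrative only: encode two caption-appearance preferences reported in
# the paper (high-contrast black-on-white text; placement that avoids
# salient onscreen content) as a WebVTT file. Timings, cue text, and the
# "bottom_occupied" flag are hypothetical placeholders.

# A WEBVTT STYLE block applies high-contrast colors to every cue.
HEADER = """WEBVTT

STYLE
::cue {
  color: black;
  background-color: white;
}
"""

def cue(index: int, start: str, end: str, text: str,
        bottom_occupied: bool) -> str:
    """Format one cue; move it to the top of the frame when the bottom
    is occupied by salient content (e.g., a lower-third news graphic)."""
    # WebVTT's `line` setting is a percentage of viewport height:
    # ~90% is the conventional bottom placement, ~10% is near the top.
    line = "10%" if bottom_occupied else "90%"
    return f"{index}\n{start} --> {end} line:{line} align:center\n{text}\n"

if __name__ == "__main__":
    body = cue(1, "00:00:01.000", "00:00:04.000",
               "Good evening, and welcome to the news.",
               bottom_occupied=True)
    print(HEADER + "\n" + body)
```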

The contents of this paper were developed under a grant from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR grant number 90DPCP0002). NIDILRR is a Center within the Administration for Community Living (ACL), Department of Health and Human Services (HHS). The contents of this paper do not necessarily represent the policy of NIDILRR, ACL, or HHS, and you should not assume endorsement by the Federal Government.



Author information

Correspondence to Matt Huenerfauth.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Amin, A.A., Glasser, A., Kushalnagar, R., Vogler, C., Huenerfauth, M. (2021). Preferences of Deaf or Hard of Hearing Users for Live-TV Caption Appearance. In: Antona, M., Stephanidis, C. (eds.) Universal Access in Human-Computer Interaction. Access to Media, Learning and Assistive Environments. HCII 2021. Lecture Notes in Computer Science, vol. 12769. Springer, Cham. https://doi.org/10.1007/978-3-030-78095-1_15


  • DOI: https://doi.org/10.1007/978-3-030-78095-1_15


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78094-4

  • Online ISBN: 978-3-030-78095-1

  • eBook Packages: Computer Science, Computer Science (R0)
