
Deaf and Hard of Hearing Viewers’ Preference for Speaker Identifier Type in Live TV Programming

  • Conference paper
Universal Access in Human-Computer Interaction. Novel Design Approaches and Technologies (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13308)


Abstract

When multiple people are shown onscreen at one time, viewers who are Deaf and Hard of Hearing (DHH) and reading captions may find it challenging to determine who the current speaker is, especially when speakers interrupt each other abruptly or when there is frequent turn-taking. Prior research has proposed several methods of indicating speakers, including in-text methods and methods in which the caption is dynamically positioned onscreen. However, prior work has not examined how effective these speaker-identifier methods are at conveying who is speaking as the number of onscreen speakers increases. To determine which speaker-identifier methods are effective for DHH viewers as the number of onscreen speakers varies, we conducted an empirical study with 31 DHH participants, observing their preferences for speaker-identifier types across videos that vary in the number of speakers shown onscreen. Determining the relationship between DHH viewers' preferences for speaker-identifier methods and the number of onscreen speakers can guide broadcasters in selecting an appropriate method based on how many speakers appear on the screen.
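To make the two families of methods named above concrete, the sketch below contrasts an in-text identifier, in the ">>" speaker-change convention common in US broadcast captions, with a caption dynamically anchored near the speaker's onscreen position. This is a minimal illustration only, not the authors' implementation or stimuli; all names in it (CaptionSegment, render_in_text, place_near_speaker) and the layout constants are hypothetical.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CaptionSegment:
    speaker: str                  # speaker's name, if known
    text: str                     # caption text for this segment
    speaker_xy: Tuple[int, int]   # speaker's approximate onscreen position (px)

def render_in_text(seg: CaptionSegment) -> str:
    """In-text identifier: US broadcast captions conventionally mark a
    speaker change with '>>', optionally followed by the speaker's name."""
    return f">> {seg.speaker.upper()}: {seg.text}"

def place_near_speaker(seg: CaptionSegment, screen_w: int = 1920,
                       caption_w: int = 600) -> Tuple[int, int]:
    """Dynamically located caption: anchor the caption block just below
    the speaker, clamped so it stays fully onscreen."""
    x = min(max(seg.speaker_xy[0] - caption_w // 2, 0), screen_w - caption_w)
    y = seg.speaker_xy[1] + 40    # fixed offset below the speaker's face
    return (x, y)

if __name__ == "__main__":
    seg = CaptionSegment("Dana", "We're tracking the storm closely.", (1500, 300))
    print(render_in_text(seg))      # >> DANA: We're tracking the storm closely.
    print(place_near_speaker(seg))  # (1200, 340)
```

The trade-off the study probes is visible even in this toy form: the in-text identifier keeps the caption in one predictable place but requires reading the name, while the dynamically placed caption ties identity to position but moves as speakers change.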

The contents of this paper were developed under a grant from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR grant number #90DPCP0002). NIDILRR is a Center within the Administration for Community Living (ACL), Department of Health and Human Services (HHS). The contents of this paper do not necessarily represent the policy of NIDILRR, ACL, HHS, and you should not assume endorsement by the Federal Government.



Author information

Corresponding author

Correspondence to Matt Huenerfauth.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Amin, A.A., Mendis, J., Kushalnagar, R., Vogler, C., Lee, S., Huenerfauth, M. (2022). Deaf and Hard of Hearing Viewers’ Preference for Speaker Identifier Type in Live TV Programming. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. Novel Design Approaches and Technologies. HCII 2022. Lecture Notes in Computer Science, vol 13308. Springer, Cham. https://doi.org/10.1007/978-3-031-05028-2_13


  • DOI: https://doi.org/10.1007/978-3-031-05028-2_13


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05027-5

  • Online ISBN: 978-3-031-05028-2

  • eBook Packages: Computer Science, Computer Science (R0)
