Abstract
When multiple people appear onscreen at one time, viewers who are Deaf and Hard of Hearing (DHH) and rely on captions may find it challenging to determine who the current speaker is, especially when speakers interrupt each other abruptly or when there is frequent turn-taking. Prior research has proposed several methods of identifying speakers, including in-text labels and dynamically positioning the caption onscreen, but it has not examined how effective these methods remain as the number of speakers onscreen increases. To determine which speaker-identifier methods are effective for DHH viewers as the number of onscreen speakers varies, we conducted an empirical study with 31 DHH participants, observing their preferences among speaker-identifier types for videos that varied in the number of speakers shown onscreen. Characterizing the relationship between DHH viewers' preferences for speaker-identifier methods and the number of onscreen speakers can guide broadcasters in selecting an appropriate method based on how many speakers appear on the screen.
The contents of this paper were developed under a grant from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR grant number 90DPCP0002). NIDILRR is a Center within the Administration for Community Living (ACL), Department of Health and Human Services (HHS). The contents of this paper do not necessarily represent the policy of NIDILRR, ACL, or HHS, and you should not assume endorsement by the Federal Government.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Amin, A.A., Mendis, J., Kushalnagar, R., Vogler, C., Lee, S., Huenerfauth, M. (2022). Deaf and Hard of Hearing Viewers' Preference for Speaker Identifier Type in Live TV Programming. In: Antona, M., Stephanidis, C. (eds.) Universal Access in Human-Computer Interaction. Novel Design Approaches and Technologies. HCII 2022. Lecture Notes in Computer Science, vol. 13308. Springer, Cham. https://doi.org/10.1007/978-3-031-05028-2_13
DOI: https://doi.org/10.1007/978-3-031-05028-2_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-05027-5
Online ISBN: 978-3-031-05028-2
eBook Packages: Computer Science, Computer Science (R0)