An automatic caption alignment mechanism for off-the-shelf speech recognition technologies

Multimedia Tools and Applications

Abstract

As the number of online videos grows, many producers turn to captions to broaden content accessibility, and in doing so face two main problems: producing the textual transcript and aligning it with the video. Both activities are expensive, requiring either considerable human labor or dedicated software. In this paper we focus on caption alignment and propose a novel, automatic, simple, and low-cost mechanism that requires neither human transcription nor special-purpose alignment software. Our mechanism uses a unique audio markup and intelligently inserts copies of it into the audio stream before feeding the stream to an off-the-shelf automatic speech recognition (ASR) application; it then transforms the plain transcript produced by the ASR application into a timecoded transcript, which tells video players when to display each caption during playback. Our experimental evaluation shows that the proposal is effective in producing timecoded transcripts and can therefore help expand video content accessibility.
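To make the pipeline concrete, below is a minimal sketch of the final step described above: turning the ASR's plain transcript into a timecoded transcript by splitting on the markers' transcriptions. This is an illustration under stated assumptions, not the authors' implementation; the marker word ("zulu"), the helper names, and the SRT output format are all assumptions introduced for this example.

```python
# Minimal sketch of marker-based caption alignment (illustrative only,
# not the authors' code). Assumptions: the audio markup is a short clip
# that the ASR engine reliably transcribes as the word "zulu", and the
# marker insertion times (in seconds) are known because we inserted them.

MARKER_WORD = "zulu"  # hypothetical transcription of the audio markup


def srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,500."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def to_srt(transcript: str, marker_times: list[float], duration: float) -> str:
    """Split the ASR plain transcript on the marker word and pair each
    chunk with the known marker insertion times to build SRT cues."""
    chunks = [c.strip() for c in transcript.split(MARKER_WORD)]
    bounds = [0.0] + sorted(marker_times) + [duration]
    cues = []
    for i, text in enumerate(chunks):
        if not text:
            continue  # e.g. a marker placed at the very start of the audio
        start, end = bounds[i], bounds[i + 1]
        cues.append(f"{len(cues) + 1}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")
    return "\n".join(cues)


if __name__ == "__main__":
    # One marker was inserted at 2.5 s into a 6-second clip, so the
    # transcript splits into two timecoded captions.
    print(to_srt("hello world zulu this is a caption", [2.5], 6.0))
```

In the actual mechanism, marker positions are chosen intelligently within the audio stream before ASR processing; the sketch simply assumes those positions are already known, since the system itself inserted the markers.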

Author information

Corresponding author

Correspondence to Marco Furini.

About this article

Cite this article

Federico, M., Furini, M. An automatic caption alignment mechanism for off-the-shelf speech recognition technologies. Multimed Tools Appl 72, 21–40 (2014). https://doi.org/10.1007/s11042-012-1318-3
