Multiple Source Alignment for Video Analysis

Reference work entry, Encyclopedia of Multimedia

Definition

High-level semantic information, which is otherwise very difficult to derive from the audiovisual content, can be extracted automatically by combining audiovisual signal processing with screenplay processing and analysis.

Multimedia content analysis of video data has so far relied mostly on the information contained in the raw visual, audio, and text signals. This process usually ignores the fact that film production starts from an original screenplay. Using the screenplay, however, is like using the recipe book for the movie: we demonstrated that high-level semantic information that is otherwise very difficult to derive from the audiovisual content can be extracted automatically by combining audiovisual signal processing with screenplay processing and analysis.
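
The underlying observation is that the screenplay carries character names and their dialogue but no timing, while the audiovisual stream carries timing but no character labels; aligning the two transfers labels onto the timeline. As a rough illustration only, not the authors' implementation, the following Python sketch assumes the screenplay has already been parsed into (character, dialogue) pairs and that a time-stamped transcript (e.g., subtitles) is available, and it uses difflib rather than whatever alignment scheme the entry actually describes:

# A minimal sketch (an assumption, not the entry's algorithm): transfer
# character labels from parsed screenplay dialogue onto time-stamped
# transcript lines by aligning the two word sequences.
from difflib import SequenceMatcher

def align_script_to_transcript(script_lines, transcript_lines):
    """script_lines: list of (character, dialogue) pairs from the screenplay.
    transcript_lines: list of (start_sec, end_sec, text), e.g. from subtitles.
    Returns a (start_sec, end_sec, character) label for every aligned word."""
    # Flatten both sources into word sequences, remembering each word's origin.
    script_words, script_owner = [], []
    for character, dialogue in script_lines:
        for word in dialogue.lower().split():
            script_words.append(word)
            script_owner.append(character)

    trans_words, trans_time = [], []
    for start, end, text in transcript_lines:
        for word in text.lower().split():
            trans_words.append(word)
            trans_time.append((start, end))

    # Matching blocks are corresponding runs of words in the two sequences;
    # each matched transcript word inherits the character who speaks it.
    labels = []
    matcher = SequenceMatcher(None, script_words, trans_words, autojunk=False)
    for block in matcher.get_matching_blocks():
        for k in range(block.size):
            labels.append((*trans_time[block.b + k], script_owner[block.a + k]))
    return labels

Merging consecutive labels that share a character then yields per-speaker time segments, which can serve as ground truth for a closed-set acoustic speaker-identification model.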

Here we present the use of the screenplay as a source of ground truth for automatic speaker/character identification. Our speaker identification method consists of screenplay parsing, extraction...
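
Since the method's first step is screenplay parsing, a rough sketch of that step may be useful; it would produce the (character, dialogue) pairs assumed above. It relies on common screenplay formatting, an uppercase character cue on its own line followed by a dialogue block that ends at a blank line, and is an assumption-laden simplification rather than the authors' parser (real scripts also contain all-caps action lines, parentheticals, and transitions that need extra handling):

# A rough sketch of the screenplay-parsing step (an assumption, not the
# entry's parser): extract (character, dialogue) pairs from plain text,
# treating an uppercase line as a character cue and the lines that follow,
# up to a blank line or scene heading, as that character's dialogue.
import re

SCENE = re.compile(r"^\s*(INT\.|EXT\.)", re.IGNORECASE)            # scene headings
CUE = re.compile(r"^\s*([A-Z][A-Z0-9 .'-]+?)(?:\s*\(.*\))?\s*$")   # character cues

def parse_screenplay(text):
    """Return a list of (character, dialogue) pairs from raw screenplay text."""
    pairs, current, buffer = [], None, []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or SCENE.match(stripped):
            # A blank line or a scene heading closes the current dialogue block.
            if current and buffer:
                pairs.append((current, " ".join(buffer)))
            current, buffer = None, []
        elif current is None and CUE.match(stripped):
            current = CUE.match(stripped).group(1).strip()
        elif current is not None:
            buffer.append(stripped)
    if current and buffer:
        pairs.append((current, " ".join(buffer)))
    return pairs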

Copyright information

© 2008 Springer-Verlag

Cite this entry

Dimitrova, N., Turetsky, R. (2008). Multiple Source Alignment for Video Analysis. In: Furht, B. (eds) Encyclopedia of Multimedia. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-78414-4_162
