
Automated music video generation using multi-level feature-based segmentation

Published in: Multimedia Tools and Applications

Abstract

We show how to create a music video automatically, using computable characteristics of the video and music to promote coherent matching. We analyze the flow of both the music and the video, and then segment each into a sequence of near-uniform flow. We extract features from both the video and music segments, and then find matching pairs. The granularity of the matching process can be adapted by extending the segmentation to several levels. Our approach drastically reduces the skill required to make simple music videos.
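The matching stage described above can be sketched as follows: each music and video segment is reduced to a feature vector, and segments are paired by minimizing feature distance. The segment features and the distance measure below are illustrative assumptions, not the authors' actual definitions.

```python
# Minimal sketch of feature-based segment matching, assuming each
# segment has already been reduced to a small feature vector.
from math import dist  # Euclidean distance (Python 3.8+)

def match_segments(music_feats, video_feats):
    """Greedily pair each music segment with the closest unused
    video segment in feature space."""
    unused = set(range(len(video_feats)))
    pairs = []
    for m_idx, m in enumerate(music_feats):
        best = min(unused, key=lambda v: dist(m, video_feats[v]))
        unused.remove(best)
        pairs.append((m_idx, best))
    return pairs

# Toy features: (mean loudness/intensity, flow variability)
music = [(0.2, 0.1), (0.8, 0.6), (0.5, 0.3)]
video = [(0.5, 0.25), (0.15, 0.1), (0.9, 0.55)]

print(match_segments(music, video))  # → [(0, 1), (1, 2), (2, 0)]
```

A production system would likely replace the greedy pairing with an optimal assignment or a dynamic program that preserves temporal order, but the greedy version shows the core idea of matching by feature distance.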


Notes

  1. http://visualcomputing.yonsei.ac.kr/personal/yoon/music.htm.


Acknowledgements

This research was accomplished as part of the promotion project for the Culture Contents Technology Research Center, supported by the Korea Culture & Content Agency (KOCCA).

Author information


Corresponding author

Correspondence to In-Kwon Lee.


Cite this article

Yoon, JC., Lee, IK. & Byun, S. Automated music video generation using multi-level feature-based segmentation. Multimed Tools Appl 41, 197–214 (2009). https://doi.org/10.1007/s11042-008-0225-0
