Abstract
We show how to create a music video automatically, using computable characteristics of the video and music to promote coherent matching. We analyze the flow of both the music and the video, and segment each into sequences of near-uniform flow. We then extract features from the video and music segments and find matching pairs. The granularity of the matching process can be adapted by extending the segmentation to several levels. Our approach drastically reduces the skill required to make simple music videos.
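As a rough illustration of the pairing step the abstract describes, one could match each music segment to the most similar video segment by feature distance. This is only a hedged sketch, not the paper's actual algorithm: the function `match_segments`, the 2-D feature vectors, and the greedy nearest-neighbor strategy are all assumptions introduced here for illustration.

```python
import numpy as np

def match_segments(video_feats, music_feats):
    """Greedily pair each music segment with the closest unused
    video segment by Euclidean feature distance.

    Illustrative only: the paper's matching criterion and feature
    definitions are not specified in the abstract.
    """
    available = set(range(len(video_feats)))
    pairs = []
    for m_idx, m in enumerate(music_feats):
        # Pick the nearest still-unassigned video segment.
        best = min(available,
                   key=lambda v: np.linalg.norm(video_feats[v] - m))
        pairs.append((m_idx, best))
        available.remove(best)
    return pairs

# Toy example: two segments per medium, features chosen so the
# intended pairing is obvious.
video_feats = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
music_feats = [np.array([0.9, 1.1]), np.array([0.1, 0.0])]
print(match_segments(video_feats, music_feats))  # [(0, 1), (1, 0)]
```

A real system would also respect temporal order and segment durations; a greedy assignment is merely the simplest way to show feature-based pairing.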
Acknowledgements
This research was accomplished as part of the promotion project for the Culture Contents Technology Research Center, supported by the Korea Culture & Content Agency (KOCCA).
Yoon, JC., Lee, IK. & Byun, S. Automated music video generation using multi-level feature-based segmentation. Multimed Tools Appl 41, 197–214 (2009). https://doi.org/10.1007/s11042-008-0225-0