
Embedded learning for computerized production of movie trailers


Abstract

Movie trailers are typically assembled from the most exciting, interesting, or otherwise noteworthy parts of a film in order to attract an audience and persuade them to see it. At present, hand-crafted trailers dominate the market, and producing them is costly and time-consuming. In this paper, we propose an embedded learning algorithm that generates movie trailers automatically, without human intervention. First, candidate frames are selected from the film with a rank-tracing technique, and a CNN extracts their features. Second, the SURF algorithm matches movie frames against the corresponding official trailer, yielding the labeled and unlabeled datasets. Third, mutual information theory is incorporated into embedded machine learning to formulate a new embedded classification algorithm that characterizes the key elements shared by trailers. Finally, a semi-supervised support vector machine serves as the classifier to select satisfactory key frames and produce the predicted trailer. Treating several well-known movies and their manually produced trailers as ground truth, we carry out a series of experiments which indicate that our method is feasible and competitive. The approach shows good potential for speeding up trailer production for film publicity, and it also suggests a possible way for users to filter large volumes of Internet video.
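
As an illustration of the second step described above (matching movie frames against the official trailer to build labeled training data), the following Python sketch uses OpenCV's SURF implementation together with Lowe's ratio test. It is a minimal sketch under stated assumptions, not the authors' implementation: the Hessian threshold, the ratio value, and the minimum-match cutoff are illustrative choices, and SURF itself requires an opencv-contrib build since it is absent from some OpenCV distributions.

```python
# Minimal sketch of frame-to-trailer matching with SURF (assumed parameters).
# A candidate movie frame is labeled +1 if it has enough good SURF matches
# with any trailer frame, and -1 otherwise; the labeled frames can then feed
# a semi-supervised classifier as described in the abstract.
import cv2


def count_surf_matches(frame_a, frame_b, ratio=0.7):
    """Count SURF correspondences between two BGR frames passing Lowe's ratio test."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    _, des_a = surf.detectAndCompute(gray_a, None)
    _, des_b = surf.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    # Keep only matches whose best distance is clearly better than the second best.
    return sum(1 for pair in knn
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)


def label_candidate_frame(movie_frame, trailer_frames, min_matches=30):
    """Label a candidate movie frame +1 if it matches any trailer frame, else -1."""
    for trailer_frame in trailer_frames:
        if count_surf_matches(movie_frame, trailer_frame) >= min_matches:
            return +1
    return -1
```

The ratio test and match-count threshold simply make the matching robust to near-duplicate shots; the paper's exact matching criteria are not reproduced here.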

Acknowledgements

The authors gratefully acknowledge financial support from the National Natural Science Foundation of China (Grant Nos. 61502331, 61602338, and 11701410) and the Natural Science Foundation of Tianjin (Grant No. 15JCQNJC00800).

Author information

Corresponding author

Correspondence to Liang Li.

Cite this article

Sheng, J., Chen, Y., Li, Y. et al. Embedded learning for computerized production of movie trailers. Multimed Tools Appl 77, 29347–29365 (2018). https://doi.org/10.1007/s11042-018-5943-3
