DOI: 10.1145/3532213.3532306
Research article

A GCN-Based Framework for Generating Trailers

Published: 13 July 2022

ABSTRACT

The film and television industry generates content continuously, and choosing a movie of interest from this massive amount of data has become a major research challenge, which in turn has driven the demand for trailer production. Generating trailers automatically with computer technology has two benefits: it helps people browse a movie's content quickly and decide whether to pay for it, and it reduces the workload of video creators, helping them attract viewers at lower cost. In this article, we construct a joint CNN-GCN framework that selects trailer shots from a full-length movie according to the visual characteristics of the shots and the relationships between them. First, the movie is preprocessed and divided into sparse shots through shot boundary detection and stratified sampling. Second, the visual features of each shot are learned by a multi-layer CNN, while a GCN models the topological relationships between shots to extract relationship-aware features. The extracted features are then fused with different learned weights, and shots whose fusion score exceeds a threshold are selected to generate the trailer. The proposed framework is compared with other video summarization methods in the field of trailer generation.
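To make the pipeline described above concrete, the sketch below illustrates the shot-scoring stage: per-shot CNN features, a GCN over a shot-relationship graph, weighted fusion, and threshold-based selection. It is a minimal sketch under stated assumptions, not the authors' implementation; the module names (GCNLayer, ShotScorer), the cosine-similarity adjacency, the single learned fusion weight, and the 0.5 selection threshold are all illustrative choices.

```python
# Hypothetical sketch of the shot-scoring stage: CNN visual features per shot,
# a GCN over a shot-similarity graph, weighted fusion, threshold selection.
# Layer sizes, the adjacency construction, and the fusion scheme are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # feats: (num_shots, in_dim); adj: (num_shots, num_shots), row-normalized
        return F.relu(adj @ self.linear(feats))


class ShotScorer(nn.Module):
    """Fuses per-shot visual features with GCN relation features into one score."""

    def __init__(self, feat_dim: int = 2048, hidden: int = 256):
        super().__init__()
        self.visual_head = nn.Linear(feat_dim, hidden)
        self.gcn = GCNLayer(feat_dim, hidden)
        self.fusion_weight = nn.Parameter(torch.tensor(0.5))  # learned mixing weight
        self.score_head = nn.Linear(hidden, 1)

    def forward(self, shot_feats: torch.Tensor) -> torch.Tensor:
        # Build a shot-relationship graph from cosine similarity of CNN features
        # (an assumed construction; the paper only states that a graph is built).
        normed = F.normalize(shot_feats, dim=1)
        adj = F.softmax(normed @ normed.T, dim=1)         # row-normalized adjacency

        visual = F.relu(self.visual_head(shot_feats))     # appearance-only features
        relational = self.gcn(shot_feats, adj)            # relationship-aware features

        w = torch.sigmoid(self.fusion_weight)
        fused = w * visual + (1.0 - w) * relational       # weighted feature fusion
        return torch.sigmoid(self.score_head(fused)).squeeze(-1)


if __name__ == "__main__":
    scorer = ShotScorer()
    feats = torch.randn(120, 2048)           # e.g. 120 shots, 2048-d CNN features
    scores = scorer(feats)                    # one importance score per shot
    trailer_shots = (scores > 0.5).nonzero(as_tuple=True)[0]  # threshold selection
    print(trailer_shots.tolist())
```

In this sketch the shot features would come from a pretrained backbone (e.g., a ResNet applied to sampled frames), and the selected shot indices would then be concatenated in temporal order to assemble the trailer.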


Published in
ICCAI '22: Proceedings of the 8th International Conference on Computing and Artificial Intelligence
March 2022, 809 pages
ISBN: 9781450396110
DOI: 10.1145/3532213
Copyright © 2022 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
