
How Does Computer Animation Affect Our Perception of Emotions in Video Summarization?

  • Conference paper
  • First Online:
Advances in Visual Computing (ISVC 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12510)

Abstract

With the exponential growth of film production and the popularization of the web, movie summaries have become a useful and important resource. Movies in particular have become one of the most popular sources of entertainment for viewers, especially during quarantine. However, browsing enormous collections and searching for a desired scene within a complete movie is a tedious and time-consuming task. As a result, automatic and personalized movie summarization has become a common research topic. In this paper, we focus on emotion summarization for single-shot videos and apply three independent summarization methods. We provide two different ways to visualize the main emotions of the generated summary and compare both approaches. The first uses the original frames of the video, while the second uses an open-source facial animation tool to create a virtual assistant that presents the emotion summary. For evaluation, we conducted an extrinsic evaluation using a questionnaire to measure the quality of each generated video summary. Experimental results show that, even though both videos received similar ratings, a different technique produced the most satisfying and informative summary for each one.
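The three summarization methods themselves are described in the full paper rather than in this preview. Purely as an illustrative sketch, and not the authors' pipeline, the snippet below shows one simple way to turn per-frame facial-emotion scores into a short emotion-based summary; the predict_emotions classifier, the confidence-ranking heuristic, and the top_k parameter are all assumptions introduced for this example.

```python
# Hypothetical sketch: rank frames by the confidence of their dominant
# facial emotion and keep the strongest ones as an emotion-based summary.
# predict_emotions stands in for any facial-expression classifier and is
# an assumption of this example, not the authors' model.
from typing import Callable, Dict, List, Tuple

import cv2          # OpenCV, used here only to read video frames
import numpy as np


def summarize_emotions(
    video_path: str,
    predict_emotions: Callable[[np.ndarray], Dict[str, float]],
    top_k: int = 10,
) -> List[Tuple[int, str, float]]:
    """Return the top_k frames ranked by the confidence of their dominant emotion."""
    scored = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scores = predict_emotions(frame)  # e.g. {"happy": 0.7, "sad": 0.1, ...}
        emotion, confidence = max(scores.items(), key=lambda kv: kv[1])
        scored.append((index, emotion, confidence))
        index += 1
    cap.release()
    # Keep the most confident frames and restore their temporal order.
    top = sorted(scored, key=lambda t: t[2], reverse=True)[:top_k]
    return sorted(top, key=lambda t: t[0])
```

The selected frames could then be shown directly, or their dominant emotions could be replayed through a facial-animation system such as OpenFACS to drive a virtual assistant, mirroring the two visualizations compared in the paper.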


Notes

  1. excel-easy.com/examples/t-test.html.

  2. excel-easy.com/examples/anova.html. (A minimal SciPy equivalent is sketched below.)
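The two notes above link to spreadsheet walkthroughs of the statistical tests used for the questionnaire analysis. As a minimal scripted equivalent, the snippet below runs a two-sample t-test and a one-way ANOVA with SciPy; the ratings are made-up Likert scores, not the paper's actual questionnaire responses.

```python
# Placeholder sketch of the two tests referenced in the notes above, using
# SciPy instead of the linked Excel walkthroughs. The ratings are invented
# 5-point Likert scores for the two summary visualizations.
from scipy import stats

original_frames = [4, 5, 3, 4, 4, 5, 3, 4]
virtual_assistant = [3, 4, 4, 3, 5, 4, 3, 3]

# Two-sample t-test: do the mean ratings of the two videos differ?
t_stat, t_p = stats.ttest_ind(original_frames, virtual_assistant)

# One-way ANOVA: the same comparison, generalizable to more than two groups.
f_stat, anova_p = stats.f_oneway(original_frames, virtual_assistant)

print(f"t-test: t = {t_stat:.3f}, p = {t_p:.3f}")
print(f"ANOVA:  F = {f_stat:.3f}, p = {anova_p:.3f}")
```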


Acknowledgment

The authors would like to thank CNPq and CAPES for partially funding this work.

Author information


Corresponding author

Correspondence to Camila Kolling.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Kolling, C., Araujo, V., Barros, R.C., Musse, S.R. (2020). How Does Computer Animation Affect Our Perception of Emotions in Video Summarization?. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2020. Lecture Notes in Computer Science, vol. 12510. Springer, Cham. https://doi.org/10.1007/978-3-030-64559-5_29


  • DOI: https://doi.org/10.1007/978-3-030-64559-5_29

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64558-8

  • Online ISBN: 978-3-030-64559-5

  • eBook Packages: Computer Science, Computer Science (R0)
