
An improved algorithm of video quality assessment by danmaku analysis

  • Regular Paper
  • Published in: Multimedia Systems

A Correction to this article was published on 16 December 2021


Abstract

Video quality assessment (VQA) algorithms play a significant role in many areas of video analysis. To improve assessment accuracy, many researchers model viewers' subjective perception so that an algorithm's output better matches how viewers actually experience a video; others work to cut the unnecessary time cost incurred by frame-by-frame analysis, which remains the dominant computational burden. To address both problems, this paper proposes an improved VQA algorithm based on danmaku analysis. First, through the analysis of eye-movement data, we find that the concentrated expression of viewers' subjective emotion is strongly and directly related to video quality, and that danmaku (real-time comments overlaid on the video) are one such expression of subjective emotion. We then improve an existing VQA algorithm: we analyze the significant role danmaku play in subjective emotion, extract the corresponding keyframes, and compute an objective score from them as the result. Experimental results show that the algorithm reduces computation time and, as measured by the Pearson correlation coefficient (PCC), produces results that better match viewers' subjective feelings.
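To make the pipeline described in the abstract concrete, the following Python sketch illustrates one plausible reading; it is not the authors' implementation. It assumes danmaku arrive as a list of per-video timestamps, treats windows of high danmaku density as proxies for concentrated subjective emotion, selects keyframe instants from the densest windows, and checks agreement between objective scores and subjective ratings via PCC. All function names, the window size, and top_k are illustrative assumptions.

```python
# Minimal sketch of a danmaku-density keyframe pipeline (illustrative only,
# not the paper's implementation). Assumptions: danmaku are given as
# timestamps in seconds; dense windows approximate emotional peaks.
import numpy as np

def danmaku_density(timestamps, duration_s, window_s=5.0):
    """Count danmaku comments falling in each fixed-length time window."""
    edges = np.arange(0.0, duration_s + window_s, window_s)
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts, edges[:-1]

def select_keyframe_times(timestamps, duration_s, window_s=5.0, top_k=10):
    """Return start times of the top_k densest windows, sorted ascending."""
    counts, starts = danmaku_density(timestamps, duration_s, window_s)
    top = np.argsort(counts)[::-1][:top_k]   # indices of densest windows
    return np.sort(starts[top])

def pearson_cc(objective, subjective):
    """Pearson correlation coefficient between two score sequences."""
    return float(np.corrcoef(objective, subjective)[0, 1])

# Synthetic usage: 500 comments clustered around two emotional peaks.
rng = np.random.default_rng(0)
ts = np.clip(np.concatenate([rng.normal(120, 10, 300),
                             rng.normal(400, 15, 200)]), 0, 600)
print(select_keyframe_times(ts, duration_s=600, top_k=4))
print(pearson_cc([3.1, 4.2, 2.5, 4.8], [3.0, 4.5, 2.2, 4.9]))  # PCC near 1
```

Under these assumptions, binning comments into fixed windows keeps keyframe selection linear in the number of comments, so only the handful of selected instants needs full objective quality analysis rather than every frame.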



Acknowledgements

The authors wish to thank the teachers and students who gave us assistance, as well as the anonymous reviewers for their valuable comments and suggestions on this paper. This work was supported in part by grants from the National Key R&D Program of China and the National Science Foundation of China (Grant No. 61802334).

Author information

Correspondence to Dongliang Guo.

Additional information

Communicated by Y. Zhang.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, H., Guo, D., Liu, W. et al. An improved algorithm of video quality assessment by danmaku analysis. Multimedia Systems 28, 573–582 (2022). https://doi.org/10.1007/s00530-021-00858-7

