DOI: 10.1145/3474085.3475486
Research Article

Perceptual Quality Assessment of Internet Videos

Published: 17 October 2021

ABSTRACT

With the rapid proliferation of online video sites and social media platforms, user-generated, professionally generated, and occupationally generated content (UGC, PGC, OGC) videos are streamed and shared explosively over the Internet. Consequently, monitoring the content quality of these Internet videos is urgently needed to guarantee the user experience. However, most existing video quality assessment (VQA) databases include only UGC videos and cannot meet the demands of other kinds of Internet videos with real-world distortions. To this end, we collect 1,072 videos from Youku, a leading Chinese video hosting service, to establish the Internet video quality assessment database Youku-V1K. A dedicated sampling method based on several quality indicators is adopted to maximize content and distortion diversity within a database of limited size, and a probabilistic graphical model is applied to recover reliable labels from noisy crowdsourcing annotations. Based on the properties of Internet videos originating from Youku, we propose a spatio-temporal distortion-aware model (STDAM). First, the model works blindly, meaning that no pristine reference video is required. Second, the model is familiarized with diverse contents by pre-training on large-scale image quality assessment databases. Third, to measure spatial and temporal distortions, we introduce a graph convolution and attention module to extract and enhance the features of the input video. Besides, we leverage motion information and integrate frame-level features into video-level features via a bi-directional long short-term memory (Bi-LSTM) network. Experimental results on the self-built database and public VQA databases demonstrate that our model outperforms state-of-the-art methods and exhibits promising generalization ability.
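The abstract describes integrating frame-level features into a video-level feature through a bidirectional recurrent pass. As a rough illustration of that idea only, the sketch below aggregates per-frame feature vectors with a simple bidirectional exponential-smoothing recurrence. This is a hypothetical stand-in for the Bi-LSTM in STDAM, not the authors' implementation; all function names, the smoothing factor `alpha`, and the toy data are illustrative.

```python
# Illustrative sketch (not the paper's code): fuse frame-level features
# into one video-level feature via forward and backward recurrences,
# mimicking the role of a bidirectional recurrent aggregator.

def recur(features, alpha=0.5):
    """Run a simple exponential-smoothing recurrence over a sequence of
    feature vectors, returning the sequence of hidden states."""
    state = [0.0] * len(features[0])
    states = []
    for f in features:
        state = [alpha * s + (1 - alpha) * x for s, x in zip(state, f)]
        states.append(state)
    return states

def video_feature(frame_features, alpha=0.5):
    """Bidirectional aggregation: take the final state of a forward pass
    and of a backward pass, then average them elementwise."""
    fwd = recur(frame_features, alpha)[-1]
    bwd = recur(frame_features[::-1], alpha)[-1]
    return [(a + b) / 2 for a, b in zip(fwd, bwd)]

# Toy example: 4 frames, each with a 3-dimensional frame-level feature.
frames = [[1.0, 0.0, 2.0],
          [0.5, 1.0, 1.5],
          [0.0, 2.0, 1.0],
          [1.5, 0.5, 0.5]]
v = video_feature(frames)
print(len(v))  # one video-level value per feature dimension
```

In the actual model, a learned Bi-LSTM replaces this fixed smoothing, so the fusion weights adapt to temporal distortion patterns rather than being a constant `alpha`.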


Published in MM '21: Proceedings of the 29th ACM International Conference on Multimedia, October 2021, 5796 pages. ISBN: 9781450386517. DOI: 10.1145/3474085. Copyright © 2021 ACM.

Publisher: Association for Computing Machinery, New York, NY, United States

Overall acceptance rate: 995 of 4,171 submissions (24%)
