No-reference artifacts measurements based video quality metric

https://doi.org/10.1016/j.image.2019.07.015

Highlights

  • We propose a new no-reference VQA metric, AMB-VQM, which achieves high performance.

  • We create the FERIT-RTRK-2 video database with 486 test video sequences and their subjective scores.

  • We develop a new user-friendly subjective VQA tool with a graphical user interface.

  • We compare the proposed metric with 10 other objective quality metrics on 4 different databases.

  • AMB-VQM outperforms most of the analyzed popular and widely used video quality metrics.

Abstract

In multimedia delivery, the perceived quality of the video signal plays a significant role in the overall user Quality of Experience (QoE). Therefore, multimedia service providers must constantly measure and monitor perceived video quality, which is usually performed using no-reference (NR) video quality metrics. In this paper, a novel NR objective video quality metric named Artifacts Measurements Based Video Quality Metric (AMB-VQM) is proposed. In addition to artifact measures computed by blocking (BL), packet-loss (PL), and freezing (FZ) artifact detection algorithms, the metric incorporates artifact masking based on spatial and temporal video content complexity. Furthermore, a newly created FERIT-RTRK-2 video quality database, which contains 486 Full HD test video sequences impaired by video compression (MPEG-2, H.264/AVC, and H.265), packet loss, frame freezing, and combinations of these procedures, is presented in this paper. The FERIT-RTRK-2 database is publicly available to the scientific community at http://www.rt-rk.com/other/VideoDBReadme.html. Additionally, a newly created user-friendly subjective video quality assessment tool with a graphical user interface, which can be used for conducting subjective video quality experiments, is presented. In the experimental part, the performance of the proposed AMB-VQM is compared to that of 10 other objective video quality metrics (PSNR, SSIM, VSNR, PSNRHVS, PSNRHVSM, VIFP, ViS3, ST-MAD, BRISQUE, VIIDEO) using distorted video sequences from four different video quality databases: CSIQ, LIVE, FERIT-RTRK, and FERIT-RTRK-2. The results show that AMB-VQM achieves high performance when predicting the quality of videos distorted in different manners, and that it outperforms most of the analyzed popular and widely used video quality metrics.

Introduction

Video services have a large and still growing area of application. Therefore, video quality monitoring has become a field of great interest, and different methods for video quality assessment (VQA) have been created. Video quality is most accurately quantified by conducting subjective VQA experiments. The main drawbacks of subjective VQA experiments are that they are expensive and time consuming. This increases the demand for more convenient VQA methods: objective video quality metrics, which aim to predict visual quality as perceived by the human visual system (HVS).

Based on the availability of the reference video sequence, there are three types of objective VQA metrics: full-reference (FR), reduced-reference (RR), and no-reference (NR) metrics. FR metrics assume the availability of the reference signal; they estimate the quality of a test video sequence by comparing it to its reference video sequence. RR metrics require partial information about the reference video signal and use this information to estimate the quality of the test video sequence. NR metrics assume no availability of any information regarding the reference video sequence and estimate video quality using only the information available within the test video sequence. In real-time video applications, which are rapidly expanding, there is no access to original uncompressed video signals. Therefore, in these applications only NR objective VQA metrics can be used for measuring and monitoring video quality. This is one of the main reasons for the increased interest in NR video quality metrics.
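To make the FR/NR distinction concrete, the following minimal Python sketch contrasts PSNR, a classic FR measure that requires the pristine frame, with a toy NR score computed from the test frame alone. The NR function is an illustrative stand-in only; it is not part of AMB-VQM or any other metric discussed in this paper.

```python
# Minimal sketch of the FR vs. NR distinction on single 8-bit grayscale
# frames given as NumPy arrays. PSNR is a classic FR measure; the NR
# score below is a toy sharpness proxy, not a real quality metric.
import numpy as np

def psnr_fr(reference: np.ndarray, test: np.ndarray) -> float:
    """Full-reference: the pristine frame is required to score the test frame."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(255.0 ** 2 / mse)

def toy_nr_score(test: np.ndarray) -> float:
    """No-reference: only the test frame is available; the mean horizontal
    gradient magnitude serves here as a crude sharpness proxy."""
    grad = np.diff(test.astype(np.float64), axis=1)
    return float(np.mean(np.abs(grad)))
```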

In order to conduct reliable performance testing of objective video quality metrics, it is necessary to have access to video quality databases containing video sequences degraded by different video artifact types, together with the associated subjective video quality data. However, to create such a video quality database and make it available to the scientific community, it is first necessary to create a large number of video sequences that contain actual video artifacts, then to conduct the associated extensive subjective video quality experiments, and finally to perform the appropriate data analysis.

In this paper, a new NR objective video quality metric called Artifacts Measurements Based Video Quality Metric (AMB-VQM), a new video quality database called FERIT-RTRK-2, and a new user-friendly subjective VQA tool with a graphical user interface are presented. The FERIT-RTRK-2 database contains test video sequences degraded by the most common degradation types, i.e., video compression (MPEG-2, H.264/AVC, H.265), simulated IP packet loss, frame freezing, and combinations of these procedures, along with the associated subjective video quality scores acquired through subjective VQA experiments. The proposed NR AMB-VQM estimates video quality based on video artifact measurements computed by blocking (BL), packet-loss (PL), and freezing (FZ) artifact detection algorithms. Along with these video artifact measurements, the proposed metric takes into account spatial and temporal video content complexity and how it affects artifact masking, i.e., artifact visibility. The proposed AMB-VQM was tested using video sequences from the FERIT-RTRK-2, CSIQ [1], LIVE [2], [3], and FERIT-RTRK [4] video quality databases, and its results were compared to those of 10 freely and publicly available state-of-the-art video quality metrics. For the observed sets of video sequences with different video parameters and quality levels, the scores estimated by AMB-VQM are highly correlated with the subjective scores and outperform the scores estimated by some popular and widely used state-of-the-art VQA metrics.

The main contributions of this paper are as follows:

(1) A newly created NR VQA metric, AMB-VQM, which achieves high performance when predicting the quality of videos of different content from different databases, distorted by different artifact types;

(2) A newly created video quality database, FERIT-RTRK-2, which contains a large set of 486 signals distorted by compression algorithms based on the three most widely used video compression standards (MPEG-2, H.264/AVC, and H.265), as well as by the two artifact types most often caused by error-prone network video transmission: packet loss and freezing. The FERIT-RTRK-2 database is publicly available to the scientific community at http://www.rt-rk.com/other/VideoDBReadme.html.

(3) A newly created user-friendly subjective VQA tool with a graphical user interface (GUI), which can be used for conducting subjective video quality experiments. The created tool is publicly available to the scientific community at http://www.rt-rk.com/other/VideoDBReadme.html.

The rest of the paper is organized as follows. In Section 2, previous work related to existing video quality databases and NR objective video quality metrics is reviewed. The newly created video quality database and subjective VQA tool are presented in Section 3, whereas the proposed NR objective video quality metric is presented in Section 4. In Section 5, information regarding metric training and testing is given, along with the experimental results. Finally, related conclusions are given in Section 6.

Section snippets

Related work

This section provides a brief review of some of the existing video quality databases and NR objective video quality metrics.

Newly created FERIT-RTRK-2 video quality database

FERIT-RTRK-2 was created as an extension of a subset of video sequences from the previously created FERIT-RTRK [4] video quality database, which contains sequences degraded only by video compression. The compression parameter values were chosen so that the sets of test video sequences generated using different encoding algorithms (according to the MPEG-2, H.264 and H.265 standards) have similar PSNR value ranges for each of the reference video sequences (more details can be found in [4]). Specifically,

New no-reference objective video quality metric

A new NR objective metric, called Artifacts Measurements Based Video Quality Metric (AMB-VQM), which predicts video quality based on video artifact measurements, is proposed. Specifically, three artifact measurements computed by algorithms for NR BL [48], NR PL [49], and NR FZ [50] video artifact detection are used. In addition to these video artifact measurements, the proposed metric takes into account how the video content complexity, in terms of spatial information (SI) and temporal information (TI) values, affects video artifact masking.
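The snippet does not reproduce the paper's exact SI/TI computation, but such content-complexity indicators are conventionally computed as in ITU-T Rec. P.910. The sketch below follows that convention and should be read as an assumption about the preprocessing, not as the authors' implementation; `frames` is assumed to be a sequence of 8-bit grayscale (luminance) frames.

```python
# Sketch of spatial information (SI) and temporal information (TI) in
# the spirit of ITU-T Rec. P.910 (an assumption; the paper's exact
# procedure is not shown in this snippet).
import numpy as np
from scipy.ndimage import sobel

def si_ti(frames):
    """Return (SI, TI) for a sequence of grayscale frames."""
    si_values, ti_values = [], []
    prev = None
    for frame in frames:
        f = frame.astype(np.float64)
        # SI: std. dev. of the Sobel gradient magnitude of each frame.
        grad = np.hypot(sobel(f, axis=0), sobel(f, axis=1))
        si_values.append(grad.std())
        # TI: std. dev. of the pixel-wise difference between successive frames.
        if prev is not None:
            ti_values.append((f - prev).std())
        prev = f
    # P.910 takes the maximum over time for both indicators.
    return max(si_values), max(ti_values)
```

As a rule of thumb, highly textured (high-SI) and fast-moving (high-TI) content tends to mask blocking and packet-loss artifacts, which is why a content-complexity term is useful when converting raw artifact measurements into perceived quality.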

Proposed metric training and testing

In order to perform a reliable performance evaluation of the proposed NR AMB-VQM, it was trained and tested using video sequences from the FERIT-RTRK, FERIT-RTRK-2, CSIQ, and LIVE video quality databases. For metric training, the non-linear least squares method was used. As the input vector for the Minkowski time pooling function, the values returned by the BL artifact detection algorithm were used for H.264/AVC video sequences, and the values returned by the PL artifact detection algorithm were used for MPEG-2 and MJPEG
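Although the snippet above is truncated, it names two concrete ingredients: Minkowski pooling of per-frame artifact values over time, and a non-linear least squares fit of the resulting features to subjective scores. The following Python sketch illustrates both under stated assumptions; the pooling exponent, the logistic mapping, and all numeric values are hypothetical and are not the paper's fitted model.

```python
# Hedged sketch of Minkowski time pooling and a non-linear least squares
# fit to subjective scores. Exponent, mapping shape, and data are
# illustrative assumptions, not the paper's trained AMB-VQM model.
import numpy as np
from scipy.optimize import curve_fit

def minkowski_pool(per_frame_scores, p=2.0):
    """Collapse per-frame artifact measurements into a single value."""
    s = np.asarray(per_frame_scores, dtype=np.float64)
    return float(np.mean(np.abs(s) ** p) ** (1.0 / p))

def logistic(x, a, b, c, d):
    """A common monotone mapping from pooled artifact value to quality score."""
    return a / (1.0 + np.exp(-b * (x - c))) + d

# Hypothetical training data: one pooled artifact value per sequence and
# the corresponding mean opinion scores (MOS).
pooled = np.array([0.12, 0.35, 0.48, 0.70, 0.91])
mos = np.array([4.6, 3.9, 3.1, 2.2, 1.5])

# Non-linear least squares fit of the mapping parameters.
params, _ = curve_fit(logistic, pooled, mos, p0=[-4.0, 5.0, 0.5, 5.0], maxfev=10000)
```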

Conclusion

In this paper, a new NR objective video quality metric, called AMB-VQM, was proposed, along with a new, freely and publicly available FERIT-RTRK-2 video quality database (which contains sequences degraded by video compression, PL, FZ, and combinations of these processes) and a new subjective VQA tool with a graphical user interface (GUI). The proposed NR objective video quality metric is based on artifact measurements computed by the BL, PL, and FZ artifact detection algorithms, while also taking into account

Acknowledgments

This work was supported by the Josip Juraj Strossmayer University of Osijek, Croatia, business fund through the internal competition for research and artistic projects, "IZIP-2016-55".

References (65)

  • M. Vranješ, et al., Review of objective video quality metrics and performance comparison using different databases, Signal Process., Image Commun. (2013)
  • P.V. Vu, et al., ViS3: an algorithm for video quality assessment via analysis of spatial and spatiotemporal slices, J. Electron. Imaging (2014)
  • K. Seshadrinathan, et al., Study of subjective and objective quality assessment of video, IEEE Trans. Image Process. (2010)
  • K. Seshadrinathan, et al., A Subjective Study to Evaluate Video Quality Assessment Algorithms (2010)
  • V. Bajcinovci, et al., Subjective and objective quality assessment of MPEG-2, H.264 and H.265 videos
  • Video Quality Experts Group (VQEG). [Online]. Available:...
  • Video Quality Experts Group (VQEG). [Online]. Available:...
  • A.K. Moorthy, et al., Video quality assessment on mobile devices: Subjective, behavioral and objective studies, IEEE J. Sel. Top. Signal Process. (2012)
  • M. Vranjes, et al., ECVQ and EVVQ video quality databases
  • F. De Simone, et al., Subjective assessment of H.264/AVC video sequences transmitted over a noisy channel
  • Image and Video Processing Laboratory. [Online]. Available:...
  • F. Zhang, et al., BVI-HD: A video quality database for HEVC compressed and texture synthesized content, IEEE Trans. Multimedia (2018)
  • M.A. Papadopoulos, et al., A video texture database for perceptual compression and quality assessment
  • A. Mackin, et al., A study of subjective video quality at various frame rates
  • Video Quality Experts Group (VQEG). [Online]. Available: https://www.its.bldrdoc.gov/vqeg/projects/hdtv/hdtv.aspx...
  • E. Dumić, et al., 3D video subjective quality: a new database and grade comparison study, Multimedia Tools Appl. (2017)
  • S. Chikkerur, et al., Objective video quality assessment methods: A classification, review, and performance comparison, IEEE Trans. Broadcast. (2011)
  • M.A. Saad, et al., Blind prediction of natural video quality, IEEE Trans. Image Process. (2014)
  • K. Zhu, et al., No-reference video quality assessment based on artifact measurement and statistical analysis, IEEE Trans. Circuits Syst. Video Technol. (2015)
  • Y. Pitrey, et al., Influence of the Source Content and Encoding Configuration on the Perceived Quality for Scalable Video Coding (2012)
  • Y. Li, No-reference video quality assessment with 3D shearlet transform and convolutional neural networks, IEEE Trans. Circuits Syst. Video Technol. (2016)
  • A.F. Silva, et al., Perceptual annoyance models for videos with combinations of spatial and temporal artifacts, IEEE Trans. Multimedia (2016)
  • X. Li, et al., Spatiotemporal statistics for video quality assessment, IEEE Trans. Image Process. (2016)
  • A. Mittal, M.A. Saad, A.C. Bovik, VIIDEO Software Release, 2014. [Online]. Available:...
  • A. Mittal, et al., A completely blind video integrity oracle, IEEE Trans. Image Process. (2016)
  • J. Sogaard, et al., No-reference video quality assessment using codec analysis, IEEE Trans. Circuits Syst. Video Technol. (2015)
  • Y. Xue, et al., A novel no-reference video quality metric for evaluating temporal jerkiness due to frame freezing, IEEE Trans. Multimedia (2015)
  • T. Na, et al., A novel no-reference PSNR estimation method with regard to deblocking filtering effect in H.264/AVC bitstreams, IEEE Trans. Circuits Syst. Video Technol. (2014)
  • M. Seyedebrahimi, et al., Model and performance of a no-reference quality assessment metric for video streaming, IEEE Trans. Circuits Syst. Video Technol. (2013)
  • J. Wu, et al., No-reference image quality assessment with center–surround based natural scene statistics, Multimedia Tools Appl. (2018)
  • P. Joshi, et al., Retina inspired no-reference image quality assessment for blur and noise, Multimedia Tools Appl. (2017)
  • Q. Fan, et al., Metrics and methods of video quality assessment: a brief review, Multimedia Tools Appl. (2017)

No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.image.2019.07.015.
