No-reference artifacts measurements based video quality metric☆
Introduction
Video services have a large and still-growing area of application. Video quality monitoring has therefore become a field of great interest, and different methods for video quality assessment (VQA) have been developed. Video quality is most accurately quantified by conducting subjective VQA experiments. The main drawbacks of subjective VQA experiments are that they are expensive and time consuming. This increases the demand for more convenient VQA methods: objective video quality metrics, which aim to predict visual quality as perceived by the human visual system (HVS).
Objective VQA metrics fall into three types based on the availability of the reference video sequence: full-reference (FR), reduced-reference (RR), and no-reference (NR) metrics. FR metrics assume that the reference signal is available and estimate the quality of a test video sequence by comparing it to its reference. RR metrics require only partial information about the reference video signal and use this information to estimate test sequence quality. NR metrics assume that no information about the reference video sequence is available and estimate video quality using only information within the test video sequence itself. In real-time video applications, which are rapidly expanding, there is no access to the original uncompressed video signal. Therefore, only NR objective VQA metrics can be used in these applications for video quality measurement and monitoring, which is one of the main reasons for the increased interest in NR video quality metrics.
To conduct reliable performance testing of objective video quality metrics, it is necessary to have access to video quality databases containing video sequences degraded by different video artifact types, together with the associated subjective video quality data. However, to create such a database and make it available to the scientific community, one must first generate a large number of video sequences containing actual video artifacts, then conduct extensive subjective video quality experiments and perform the appropriate data analysis.
In this paper, a new NR objective video quality metric called Artifacts Measurements Based Video Quality Metric (AMB-VQM), a new video quality database called FERIT-RTRK-2, and a new user-friendly subjective VQA tool with a graphical user interface are presented. The FERIT-RTRK-2 database contains test video sequences degraded by the most common degradation types, i.e. video compression (MPEG-2, H.264/AVC, H.265), simulated IP packet loss, frame freezing, and combinations of these processes, along with subjective video quality scores obtained through subjective VQA experiments. The proposed NR AMB-VQM estimates video quality from artifact measurements computed by blocking (BL), packet-loss (PL), and freezing (FZ) artifact detection algorithms. In addition to these artifact measurements, the proposed metric takes into account spatial and temporal video content complexity and how they affect artifact masking, i.e. artifact visibility. The proposed AMB-VQM was tested using video sequences from the FERIT-RTRK-2, CSIQ [1], LIVE [2], [3], and FERIT-RTRK [4] video quality databases, and its results are compared to those of 10 freely and publicly available state-of-the-art video quality metrics. For the observed sets of video sequences with different video parameters and quality levels, scores estimated by AMB-VQM are highly correlated with subjective scores and outperform scores estimated by several popular and widely used state-of-the-art VQA metrics.
The main contributions of this paper are as follows:
(1) A newly created NR VQA metric, AMB-VQM, which achieves high performance when predicting the quality of videos of different content from different databases, distorted by different artifact types;
(2) A newly created video quality database, FERIT-RTRK-2, which contains a large set of 486 sequences distorted by compression algorithms based on the three most widely used video compression standards (MPEG-2, H.264/AVC, and H.265) and by the two artifact types most often caused by error-prone network video transmission: packet loss and freezing. The FERIT-RTRK-2 database is publicly available to the scientific community at http://www.rt-rk.com/other/VideoDBReadme.html.
(3) A newly created user-friendly subjective VQA tool with a graphical user interface (GUI), which can be used for conducting subjective video quality experiments. The tool is publicly available to the scientific community at http://www.rt-rk.com/other/VideoDBReadme.html.
The rest of the paper is organized as follows. Section 2 reviews previous work on existing video quality databases and NR objective video quality metrics. The newly created video quality database and subjective VQA tool are presented in Section 3, and the proposed NR objective video quality metric in Section 4. Section 5 describes metric training and testing, along with the experimental results. Finally, conclusions are given in Section 6.
Section snippets
Related work
This section provides a brief review of existing related work, specifically some of the existing video quality databases and NR objective video quality metrics.
Newly created FERIT-RTRK-2 video quality database
FERIT-RTRK-2 was created as an extension of a subset of video sequences from the previously created FERIT-RTRK [4] video quality database, which contains sequences degraded only by video compression. There, compression parameter values were chosen so that the sets of test video sequences generated by different encoding algorithms (according to the MPEG-2, H.264, and H.265 standards) have similar PSNR value ranges for each reference video sequence (more details can be found in [4]). Specifically,
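The PSNR-based matching of compression parameters mentioned above can be illustrated with a minimal sketch. The functions below are a standard PSNR computation for 8-bit luminance frames, not the authors' exact tooling:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a test frame (dB)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

def sequence_psnr(ref_frames, test_frames):
    """Mean per-frame PSNR over a sequence; sequences encoded with different
    standards can then be compared by their PSNR ranges."""
    return float(np.mean([psnr(r, t) for r, t in zip(ref_frames, test_frames)]))
```

Compression parameters for each encoder would then be chosen so that the resulting `sequence_psnr` values for a given reference fall in similar ranges across the MPEG-2, H.264, and H.265 test sets.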
New no-reference objective video quality metric
A new NR objective metric that predicts video quality based on video artifact measurements is proposed, called Artifacts Measurements Based Video Quality Metric (AMB-VQM). Specifically, three artifact measurements computed by NR BL [48], NR PL [49], and NR FZ [50] artifact detection algorithms are used. In addition to these artifact measurements, the proposed metric takes into account how video content complexity, in terms of spatial information (SI) and temporal information (TI) values, affects video artifact masking.
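As an illustration of how artifact measurements might be fused with SI/TI-based content masking, the sketch below computes SI and TI in the spirit of ITU-T Rec. P.910 (standard deviation of the Sobel-filtered frame and of the frame difference, respectively) and combines hypothetical BL, PL, and FZ measurements with a masking term. The fusion form, the weights `w`, and the constant `alpha` are illustrative assumptions, not the actual AMB-VQM model:

```python
import numpy as np
from scipy import ndimage

def spatial_information(frame):
    """SI: std. dev. of the Sobel gradient magnitude of a luminance frame."""
    f = frame.astype(np.float64)
    gx = ndimage.sobel(f, axis=1)
    gy = ndimage.sobel(f, axis=0)
    return float(np.std(np.hypot(gx, gy)))

def temporal_information(prev_frame, frame):
    """TI: std. dev. of the luminance difference between consecutive frames."""
    return float(np.std(frame.astype(np.float64) - prev_frame.astype(np.float64)))

def amb_vqm_like_score(bl, pl, fz, si, ti, w=(1.0, 1.0, 1.0), alpha=0.01):
    """Illustrative fusion: artifact measurements attenuated by content masking.
    Higher SI/TI -> more complex content -> stronger masking of artifacts."""
    masking = 1.0 / (1.0 + alpha * (si + ti))           # assumed masking term
    distortion = (w[0] * bl + w[1] * pl + w[2] * fz) * masking
    return 100.0 / (1.0 + distortion)                   # map to a 0-100 scale
```

With zero artifact measurements the score reaches the top of the scale regardless of content; growing BL, PL, or FZ values lower it, and the same artifact level is penalized less on high-SI/TI content.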
Proposed metric training and testing
To perform a reliable performance evaluation of the proposed NR AMB-VQM, it was trained and tested using video sequences from the FERIT-RTRK, FERIT-RTRK-2, CSIQ, and LIVE video quality databases. For metric training, the non-linear least squares method was used. As the input vector for the Minkowski time pooling function, values returned by the BL artifact detection algorithm were used for H.264/AVC video sequences, and values returned by the PL artifact detection algorithm were used for MPEG-2 and MJPEG
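The training setup described above, Minkowski pooling of per-frame artifact values followed by a non-linear least squares fit, can be sketched as follows. The pooling exponent `p`, the logistic mapping `quality_model`, and the example data are illustrative assumptions rather than the paper's fitted model:

```python
import numpy as np
from scipy.optimize import curve_fit

def minkowski_pool(values, p=2.0):
    """Minkowski time pooling of per-frame artifact measurements."""
    v = np.asarray(values, dtype=np.float64)
    return float(np.mean(np.abs(v) ** p) ** (1.0 / p))

def quality_model(x, a, b, c):
    """Illustrative 3-parameter logistic mapping: pooled artifact level -> MOS."""
    return a / (1.0 + np.exp(b * (x - c)))

# Fit the mapping to (pooled measurement, subjective score) pairs
# with non-linear least squares; the data below is made up.
pooled = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
mos    = np.array([4.6, 4.1, 3.0, 2.0, 1.4])
params, _ = curve_fit(quality_model, pooled, mos, p0=(5.0, 1.0, 2.0))
```

Pooling with `p > 1` emphasizes frames with strong artifacts over the sequence mean, which is the usual motivation for Minkowski time pooling in VQA.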
Conclusion
In this paper, a new NR objective video quality metric, called AMB-VQM, was proposed, along with a new, freely and publicly available FERIT-RTRK-2 video quality database (containing sequences degraded by video compression, PL, FZ, and combinations of these processes) and a new subjective VQA tool with a graphical user interface (GUI). The proposed NR objective video quality metric is based on artifact measurements computed by BL, PL, and FZ artifact detection algorithms, while also taking into account
Acknowledgments
This work was supported by the business fund of Josip Juraj Strossmayer University of Osijek, Croatia, through the internal competition for research and artistic projects “IZIP-2016-55”.
References (65)
- et al., Review of objective video quality metrics and performance comparison using different databases, Signal Process., Image Commun. (2013)
- et al., ViS3: an algorithm for video quality assessment via analysis of spatial and spatiotemporal slices, J. Electron. Imaging (2014)
- et al., Study of subjective and objective quality assessment of video, IEEE Trans. Image Process. (2010)
- et al., A Subjective Study to Evaluate Video Quality Assessment Algorithms (2010)
- et al., Subjective and objective quality assessment of MPEG-2, H.264 and H.265 videos
- Video Quality Experts Group (VQEG). [Online]. Available: ...
- Video Quality Experts Group (VQEG). [Online]. Available: ...
- et al., Video quality assessment on mobile devices: Subjective, behavioral and objective studies, IEEE J. Sel. Top. Signal Process. (2012)
- et al., ECVQ and EVVQ video quality databases
- et al., Subjective assessment of H.264/AVC video sequences transmitted over a noisy channel
- BVI-HD: A video quality database for HEVC compressed and texture synthesized content, IEEE Trans. Multimedia
- A video texture database for perceptual compression and quality assessment
- A study of subjective video quality at various frame rates
- 3D video subjective quality: a new database and grade comparison study, Multimedia Tools Appl.
- Objective video quality assessment methods: A classification, review, and performance comparison, IEEE Trans. Broadcast.
- Blind prediction of natural video quality, IEEE Trans. Image Process.
- No-reference video quality assessment based on artifact measurement and statistical analysis, IEEE Trans. Circuits Syst. Video Technol.
- Influence of the Source Content and Encoding Configuration on the Perceived Quality for Scalable Video Coding
- No-reference video quality assessment with 3D shearlet transform and convolutional neural networks, IEEE Trans. Circuits Syst. Video Technol.
- Perceptual annoyance models for videos with combinations of spatial and temporal artifacts, IEEE Trans. Multimedia
- Spatiotemporal statistics for video quality assessment, IEEE Trans. Image Process.
- A completely blind video integrity oracle, IEEE Trans. Image Process.
- No-reference video quality assessment using codec analysis, IEEE Trans. Circuits Syst. Video Technol.
- A novel no-reference video quality metric for evaluating temporal jerkiness due to frame freezing, IEEE Trans. Multimedia
- A novel no-reference PSNR estimation method with regard to deblocking filtering effect in H.264/AVC bitstreams, IEEE Trans. Circuits Syst. Video Technol.
- Model and performance of a no-reference quality assessment metric for video streaming, IEEE Trans. Circuits Syst. Video Technol.
- No-reference image quality assessment with center–surround based natural scene statistics, Multimedia Tools Appl.
- Retina inspired no-reference image quality assessment for blur and noise, Multimedia Tools Appl.
- Metrics and methods of video quality assessment: a brief review, Multimedia Tools Appl.
☆ No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.image.2019.07.015.