The context effect for blind image quality assessment
Introduction
In the area of Image Quality Assessment (IQA), measurement metrics are generally categorized into subjective and objective quality assessment. Currently, objective IQA methods can be broadly divided into opinion-aware and opinion-unaware approaches. Opinion-aware objective IQA algorithms are built on the mean opinion scores (MOS) [9] obtained from subjective experiments. Over the last two decades, many significant works have been well established.
IQA plays a significant role in many foundational visual problems. The diversity of electronic visual content in network applications continues to grow rapidly. However, a wide variety of annoying distortions can be introduced when these images are accessed, uploaded, and displayed. Accurately describing this quality change is therefore very important for optimizing system parameters and reducing transmission costs [3], and this quantization process has a wide range of applications. In the literature, some full-reference image quality assessment (FR-IQA) methods [4] are inspired by the human visual system (HVS). On the other hand, some NR-IQA methods [5], [6], which rely on natural scene statistics (NSS), consider features of the image spatial domain and transform domain. Since no reference image is required, NR-IQA applies to a broader range of scenarios than FR-IQA. NR-IQA methods can be further divided according to whether human subjective scores are used in designing the perceptual evaluation model.
Following Marr [7] and Newell [8], the quality of an image cannot simply be defined as the visibility of its distortions, but should be regarded as the adequacy of the image as input to the vision stage of the interaction process. This visuo-cognitive interaction is not an isolated process but an essential stage in human interaction with the environment [8]. Image quality assessment can likewise be regarded as a visuo-cognitive process. However, existing IQA methods ignore the impact of the environment on human perception. For example, when evaluating image quality, existing IQA methods rely only on certain regularities of the distorted image and use common machine learning models to fit the MOS. But the MOS is generated by human subjective evaluation, which means it is influenced by both visuo-cognitive processing and the background environment. Image-based regularities alone cannot depict the relationship between human perception and human interaction with the environment. Therefore, the evaluation results cannot truly reflect how humans perceive image quality.
In this work, inspired by studies in cognitive psychology, we propose a novel BIQA method based on the context effect [1]. The context effect refers to a psychological effect in which the quality evaluation made by the HVS depends on the contrast between the distorted image and the background environment. Accordingly, we use a graphical model to describe the mechanism of the context effect on IQA. Based on the established graph, we construct the context relation between the distorted image and the background environment with MatchNet [2]. The MatchNet is trained to rank images in terms of perceptual quality, so the context relation is in effect the quality difference between the distorted image and the other images in the background environment. After that, a context feature is constructed from the obtained context relations, and a quality-related feature is extracted by a deep neural network from the pixel level of the distorted image. Finally, the context feature and the quality-related feature are combined to form the final quality descriptor, which is then regressed to a quality score.
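The pipeline above can be sketched in miniature. This is a toy illustration under stated assumptions: `pairwise_rank` is a stand-in for the trained MatchNet comparator, and a simple linear head stands in for the final regression; neither reflects the authors' actual implementation.

```python
def pairwise_rank(feat_a, feat_b):
    """Stand-in for the MatchNet comparator: returns a signed
    quality-difference score between two images, approximated here
    by an average feature-wise difference."""
    return sum(a - b for a, b in zip(feat_a, feat_b)) / len(feat_a)

def context_feature(distorted_feat, background_feats):
    """Context relation: the quality difference between the distorted
    image and each image in the background environment."""
    return [pairwise_rank(distorted_feat, bg) for bg in background_feats]

def predict_quality(distorted_feat, background_feats, weights, bias):
    """Concatenate the context feature with the pixel-level quality
    feature and regress to a scalar score (linear head for brevity)."""
    descriptor = context_feature(distorted_feat, background_feats) + list(distorted_feat)
    return sum(w * x for w, x in zip(weights, descriptor)) + bias
```

For instance, with a two-dimensional feature and two background images, `predict_quality([0.2, 0.4], [[0.1, 0.1], [0.5, 0.9]], [1.0] * 4, 0.0)` produces a scalar quality score from the fused descriptor.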
In summary, the main contributions of this paper are as follows:
- 1.
To the best of our knowledge, this is the first work to take the process of human interaction with the environment into account in the IQA task, and the context effect is introduced to describe the impact of the environment on human perception.
- 2.
We build a graphical model to describe the mechanism of the context effect on IQA, which gives the method good interpretability.
- 3.
We propose a novel BIQA method based on the context effect, which can be combined with other deep neural networks to obtain significant performance improvements.
Section snippets
Related work
The idea of using hand-crafted features to construct BIQA methods was the mainstream for a long time. Early research focused on distortion-specific features designed for particular distortion types. This was followed by the well-known NSS family of features, which assumes that natural scenes have statistical regularities that can describe changes in perceptual quality [9]. In addition, there are also some other methods such as using the semantic saliency or codebooks to construct
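The NSS features mentioned above are typically built on locally normalized luminance statistics. A minimal sketch of the mean-subtracted contrast-normalized (MSCN) coefficients underlying methods of this family is given below; note it assumes a uniform 3x3 window for simplicity, whereas the standard formulation uses a Gaussian-weighted one.

```python
def mscn(image, C=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients, the
    basic NSS representation: each pixel is normalized by its local
    mean and standard deviation (uniform 3x3 window, clipped at borders)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # collect the local 3x3 neighbourhood, clipped at the image border
            patch = [image[y][x]
                     for y in range(max(0, i - 1), min(h, i + 2))
                     for x in range(max(0, j - 1), min(w, j + 2))]
            mu = sum(patch) / len(patch)
            var = sum((p - mu) ** 2 for p in patch) / len(patch)
            out[i][j] = (image[i][j] - mu) / (var ** 0.5 + C)
    return out
```

On pristine natural images these coefficients tend toward a unit-variance Gaussian-like distribution; distortions perturb that distribution, which is what NSS-based BIQA methods measure.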
Proposed model
In this section, we establish a graphical model to describe the mechanism of the context effect on IQA, and then propose a novel BIQA method, called DeepCE. The implementation of DeepCE can be divided into three phases. First, a MatchNet is trained to generate a feature describing the context effect. Then a classical deep neural network pre-trained on ImageNet [35] is fine-tuned on the IQA dataset. Finally, the context-effect feature and the feature extracted by the
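The first phase, training a comparator to rank images by perceptual quality, is commonly driven by a pairwise margin (hinge) ranking objective. The following is a sketch of that objective under the assumption that such a loss is used; it is not the paper's exact formulation.

```python
def margin_ranking_loss(score_better, score_worse, margin=1.0):
    """Hinge penalty incurred when the higher-quality image is not
    scored at least `margin` above the lower-quality one."""
    return max(0.0, margin - (score_better - score_worse))

def batch_ranking_loss(pairs, margin=1.0):
    """Average the pairwise loss over a batch of (better, worse) score pairs."""
    return sum(margin_ranking_loss(b, w, margin) for b, w in pairs) / len(pairs)
```

Correctly ordered pairs separated by more than the margin contribute zero loss, so training focuses on pairs the comparator still ranks incorrectly or too closely.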
Experimental results
In this section, we first describe the experimental protocol, including the databases, criteria, and implementation details. Then, we compare with state-of-the-art BIQA methods in terms of full-reference and no-reference performance, respectively. We also design a series of experiments to verify the effect of the background images.
Conclusion
In this paper, we establish a graphical model to describe the context effect in IQA for the first time. Then, based on this graph, the role of the context effect is refined into the CE-feature. By combining the CE-feature with other pixel-level quality-related features, we propose a novel BIQA model, called DeepCE. Experimental results show that the proposed DeepCE is comparable with other state-of-the-art BIQA methods. In addition, the other contribution of our work is that it is possible
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This research was supported in part by the National Natural Science Foundation of China (Grant No. 61871311), the Key Industrial Innovation Chain Project in Industrial Domain of Shaanxi Province (Grant No. 2020ZDLGY05-01), and the Aeronautical Science Foundation of China (Grant No. 2020Z071081004).
Zehong Liang received the B.S. degree from the School of Electronic Engineering, Xidian University, Xi'an, China, in 2021, where he is currently pursuing the master's degree with the School of Electronic Engineering. His current research interests include machine learning, visual information processing, and image quality assessment.
References (48)
- et al., Context effects in perceived environmental quality assessment: scene selection and landscape quality ratings, Journal of Environmental Psychology (1987)
- et al., ABNet: Adaptive balanced network for multiscale object detection in remote sensing imagery, IEEE Transactions on Geoscience and Remote Sensing (2022)
- et al., Image database TID2013: Peculiarities, results and perspectives, Signal Processing: Image Communication (2015)
- X. Han, T. Leung, Y. Jia, R. Sukthankar, A.C. Berg, MatchNet: Unifying feature and metric learning for patch-based...
- Z. Wang, A.C. Bovik, L. Lu, Why is image quality assessment so difficult?, in: Proceedings of International Conference...
- et al., Human visual system-based fundus image quality assessment of portable fundus camera photographs, IEEE Transactions on Medical Imaging (2015)
- et al., Naturalness-aware deep no-reference image quality assessment, IEEE Transactions on Multimedia (2019)
- K.-Y. Lin, G. Wang, Hallucinated-IQA: No-reference image quality assessment via adversarial learning, in: Proceedings...
- Vision: A computational investigation into the human representation and processing of visual information (1982)
- Unified theories of cognition (1994)
- Blind image quality assessment: From natural scene statistics to perceptual quality, IEEE Transactions on Image Processing
- Blind image quality assessment based on high order statistics aggregation, IEEE Transactions on Image Processing
- NIMA: Neural image assessment, IEEE Transactions on Image Processing
- Blindly assess image quality in the wild guided by a self-adaptive hyper network, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- Uncertainty-aware blind image quality assessment in the laboratory and wild, IEEE Transactions on Image Processing
- Very deep convolutional networks for large-scale image recognition, Computational and Biological Learning Society
- Deep neural networks for no-reference and full-reference image quality assessment, IEEE Transactions on Image Processing
- On the use of deep learning for blind image quality assessment, Signal, Image and Video Processing
- dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs, IEEE Transactions on Image Processing
- End-to-end blind image quality assessment using deep neural networks, IEEE Transactions on Image Processing
- Fully deep blind image quality predictor, IEEE Journal of Selected Topics in Signal Processing
Wen Lu received the B.Sc., M.Sc., and Ph.D. degrees in signal and information processing from Xidian University, Xi'an, China, in 2002, 2006, and 2009, respectively. In 2009, he joined the School of Electronic Engineering at Xidian University. From 2010 to 2012, he was a Post-doctoral Research Fellow with the Department of Electronic Engineering, Stanford University, U.S. He is currently a professor at the School of Electronic Engineering, Xidian University. His current research interests include multimedia analysis, computer vision, pattern recognition, and deep learning. He has published 2 books and around 50 technical articles in refereed journals and proceedings, including IEEE Transactions on Image Processing, IEEE Transactions on Cybernetics, Information Sciences, Neurocomputing, etc. He is also on several editorial boards and serves as a reviewer for many journals, such as IEEE Transactions on Image Processing and IEEE Transactions on Multimedia.
Yong Zheng received the B.S. degree from the School of Electronic Engineering, Xidian University, Xi'an, China, in 2020, where he is currently pursuing the master's degree with the School of Electronic Engineering. His current research interests include machine learning, visual information processing, and image quality assessment.
Weiquan He received the B.S. degree from the School of Telecommunication and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China, in 2017. He is currently working at Alibaba. His research interests include machine learning, visual information processing, and image quality assessment.
Jiachen Yang is an IEEE Senior Member, a member of the New Century Outstanding Talents Support Program of the Ministry of Education, a professor of Communication and Information Engineering at Tianjin University, and a doctoral supervisor. He has been in charge of more than 30 projects in the past five years, including two National Natural Science Foundation projects, and has received more than 15 million yuan in cumulative funding. He has published more than 40 SCI-indexed papers. He has also applied for more than 40 patents, of which more than 20 have been granted.