
Neurocomputing

Volume 521, 7 February 2023, Pages 172-180

The context effect for blind image quality assessment

https://doi.org/10.1016/j.neucom.2022.11.026

Abstract

Image quality assessment (IQA) is a visuo-cognitive process, which is an essential stage in human interaction with the environment. The study of the context effect (Brown and Daniel, 1987) also shows that the evaluation results made by the human visual system (HVS) are related to the contrast between the distorted image and the background environment. However, existing IQA methods carry out quality evaluation depending only on the distorted image itself, ignoring the impact of the environment on human perception. In this paper, we propose a novel blind image quality assessment (BIQA) method based on the context effect. First, we use a graphical model to describe how the context effect influences human perception of image quality. Based on the established graph, we construct the context relation between the distorted image and the background environment with MatchNet (X. Han et al., 2015). Context features are then extracted from the constructed relation, and quality-related features are extracted pixel-wise from the distorted image by a fine-tuned neural network. Finally, these features are concatenated to quantify image quality degradations and regressed to quality scores. In addition, the proposed method is adaptive to various deep neural networks. Experimental results show that the proposed method not only achieves state-of-the-art performance on synthetically distorted images, but also brings a great improvement on authentically distorted images.

Introduction

In the area of Image Quality Assessment (IQA), measurement metrics are broadly categorized into subjective and objective quality assessment. Currently, objective image quality assessment can be further divided into opinion-aware and opinion-unaware approaches. Opinion-aware algorithms are built on the mean opinion scores (MOS) [9] obtained from subjective experiments. Over the last two decades, many significant works have been well established.

IQA plays a significant role in many foundational visual problems. The diversity of electronic visual content continues to grow rapidly in network applications. However, a wide variety of annoying distortions arise when these images are accessed, uploaded, and displayed. Accurately describing this quality change is therefore very important for optimizing system parameters or transmission costs [3], and this quantization process has a wide range of applications. In the literature, some full-reference image quality assessment (FR-IQA) methods [4] are inspired by the human visual system (HVS). On the other hand, some NR-IQA methods [5], [6], which rely on natural scene statistics (NSS), consider features of the image spatial and transform domains. Since no reference image is required, the application scenarios of NR-IQA are broader than those of FR-IQA. NR-IQA methods can be further grouped by whether human subjective scores are needed when designing the perceptual model.

Following Marr [7] and Newell [8], the quality of an image cannot simply be defined as the visibility of its distortions, but should be regarded as the adequacy of the image as input to the vision stage of the interaction process. This visuo-cognitive interaction is not an isolated process but an essential stage in human interaction with the environment [8]. Image quality assessment can likewise be regarded as a visuo-cognitive process. However, existing IQA methods ignore the impact of the environment on human perception. For example, when evaluating image quality, existing IQA methods rely only on certain regularities of the distorted image and use common machine learning models to fit a function of MOS. But the MOS is generated by human subjective evaluation, which means it is influenced by both visuo-cognition and the background environment. Image-based regularities cannot depict the relationship between human perception and human interaction with the environment, so the evaluation results cannot truly reflect how humans perceive image quality.

In this work, inspired by the study of cognitive psychology, we propose a novel BIQA method based on the context effect [1]. The context effect refers to a psychological effect whereby the quality evaluation made by the HVS is related to the contrast between the distorted image and the background environment. Accordingly, we use a graphical model to describe the mechanism of the context effect on IQA. Based on the established graph, we construct the context relation between the distorted image and the background environment with MatchNet [2]. The MatchNet is trained to rank images in terms of perceptual quality, so the context relation is in effect the quality difference between the distorted image and the other images in the background environment. A context feature is then constructed from the obtained context relation, and a quality-related feature is extracted by a deep neural network from the pixel level of the distorted image. Finally, the context feature and the quality-related feature are combined to form the final quality descriptor, which is regressed to a quality score.
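The pipeline just described can be summarized in a minimal, runnable sketch. Everything here is a hypothetical stand-in: in the paper the context relation comes from the trained MatchNet and the pixel-level feature from a fine-tuned deep network, whereas this sketch uses an L1 distance and a fixed linear regressor purely for illustration.

```python
def context_feature(distorted_feat, background_feats):
    """Context relation: a quality-difference value between the distorted
    image and each image in the background environment.
    (Stand-in: L1 distance; the paper uses a trained MatchNet.)"""
    return [sum(abs(d - b) for d, b in zip(distorted_feat, bg))
            for bg in background_feats]

def predict_score(distorted_feat, background_feats, weights, bias):
    """Concatenate the pixel-level feature with the context feature,
    then regress the combined descriptor to a scalar quality score."""
    feat = list(distorted_feat) + context_feature(distorted_feat, background_feats)
    return sum(w * f for w, f in zip(weights, feat)) + bias

# Toy example: a 3-D pixel-level feature and two background images.
distorted = [0.2, 0.5, 0.1]
background = [[0.1, 0.4, 0.3], [0.6, 0.5, 0.0]]
weights = [0.5, -0.2, 0.1, 1.0, 1.0]   # 3 pixel dims + 2 context dims
score = predict_score(distorted, background, weights, bias=0.0)
```

In the actual method the regressor and feature extractors are learned jointly; the sketch only shows how the two feature types are combined into one descriptor.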

In summary, the main contributions of this paper are as follows:

  • 1.

To the best of our knowledge, this is the first time the process of human interaction with the environment has been taken into account in the IQA task; the context effect is introduced to describe the impact of the environment on human perception.

  • 2.

We build a graphical model to describe the mechanism of the context effect on IQA, which gives the method good interpretability.

  • 3.

We propose a novel BIQA method based on the context effect, which can be combined with other deep neural networks to obtain significant performance improvements.


Related work

The idea of using hand-crafted features to construct BIQA methods was the mainstream for a long time. Early research focused on distortion-specific features designed for particular distortion types. Next came the well-known family of NSS features, which assume that natural scenes have statistical regularities that can describe changes in perceptual quality [9]. In addition, there are other methods, such as using semantic saliency or codebooks to construct

Proposed model

In this section, we establish a graphical model to describe the mechanism of the context effect on IQA, and then propose a novel BIQA method called DeepCE. The implementation of DeepCE can be divided into three phases. First, a MatchNet is trained to generate a feature that captures the context effect. Then a classical deep neural network pre-trained on ImageNet [35] is fine-tuned on the IQA dataset. Finally, the feature of the context effect and the feature extracted by the
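The first phase, training a network to rank images by perceptual quality, can be illustrated with a pairwise margin (hinge) ranking loss. The linear scorer and the hand-rolled subgradient step below are simplifications for illustration, not the actual MatchNet training procedure.

```python
def rank_loss(score_better, score_worse, margin=1.0):
    """Pairwise margin ranking loss: penalize when the higher-quality
    image is not scored at least `margin` above the lower-quality one."""
    return max(0.0, margin - (score_better - score_worse))

def sgd_step(w, x_better, x_worse, lr=0.1, margin=1.0):
    """One subgradient step of the ranking loss for a linear scorer w.x."""
    s_b = sum(wi * xi for wi, xi in zip(w, x_better))
    s_w = sum(wi * xi for wi, xi in zip(w, x_worse))
    if margin - (s_b - s_w) > 0:          # loss is active for this pair
        w = [wi + lr * (xb - xw) for wi, xb, xw in zip(w, x_better, x_worse)]
    return w

# Toy 2-D features for a higher-quality and a lower-quality image.
better, worse = [1.0, 0.0], [0.0, 1.0]
w = [0.0, 0.0]
for _ in range(20):                       # updates stop once the margin holds
    w = sgd_step(w, better, worse)
```

After a few steps the scorer separates the pair by the margin, mirroring how a ranking-trained network learns to order images by quality without absolute MOS labels for every pair.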

Experimental results

In this section, we first describe the experimental protocol, including databases, criteria, and implementation details. Then we compare with state-of-the-art BIQA methods in terms of full-reference and no-reference performance, respectively. We also design a series of experiments to verify the effect of background images.
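IQA benchmarks conventionally report the Spearman rank-order correlation (SROCC) and the Pearson linear correlation (PLCC) between predicted and subjective scores; assuming those are the criteria meant here, a stdlib-only sketch of both (the Spearman ranking below assumes no tied scores):

```python
import math

def pearson(x, y):
    """Pearson linear correlation coefficient (PLCC)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank-order correlation (SROCC): Pearson on ranks.
    Assumes no ties; ties would need averaged (fractional) ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Toy example: model predictions vs. subjective MOS.
predicted = [3.1, 4.2, 2.0, 4.8]
mos       = [3.0, 4.5, 1.8, 4.9]
```

SROCC measures monotonic consistency of the predicted ordering, while PLCC measures linear agreement with MOS (often after a nonlinear mapping in published protocols).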

Conclusion

In this paper, we establish a graphical model to describe the context effect in IQA for the first time. Based on this graph, the role of the context effect is refined into the CE-feature. By combining the CE-feature with other pixel-level quality-related features, we propose a novel BIQA model called DeepCE. Experimental results show that the proposed DeepCE is comparable with other state-of-the-art BIQA methods. In addition, the other contribution of our work is that it is possible

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research was supported in part by the National Natural Science Foundation of China (Grant No. 61871311), the Key Industrial Innovation Chain Project in Industrial Domain of Shaanxi Province (Grant No. 2020ZDLGY05-01), and the Aeronautical Science Foundation of China (Grant No. 2020Z071081004).

Zehong Liang received the B.S. degree from the School of Electronic Engineering, Xidian University, Xi'an, China, in 2021, where he is currently pursuing the master's degree with the School of Electronic Engineering. His current research interests include machine learning, visual information processing, and image quality assessment.

References (48)

  • A.K. Moorthy et al.

    Blind image quality assessment: From natural scene statistics to perceptual quality

IEEE Transactions on Image Processing

    (2011)
  • J. Xu et al.

    Blind image quality assessment based on high order statistics aggregation

    IEEE Transactions on Image Processing

    (2016)
  • X. Liu, J. van de Weijer, A.D. Bagdanov, RankIQA: Learning from rankings for no-reference image quality assessment, in:...
  • L. Kang, P. Ye, Y. Li, D. Doermann, Convolutional neural networks for no-reference image quality assessment, in:...
  • H. Talebi et al.

NIMA: Neural image assessment

IEEE Transactions on Image Processing

    (2018)
  • S. Su et al.

Blindly assess image quality in the wild guided by a self-adaptive hyper network

    IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    (2020)
  • W. Zhang et al.

    Uncertainty-aware blind image quality assessment in the laboratory and wild

    IEEE Transactions on Image Processing

    (2021)
  • K. Simonyan et al.

Very deep convolutional networks for large-scale image recognition

    Computational and Biological Learning Society

    (2015)
  • S. Bosse et al.

    Deep neural networks for no-reference and full-reference image quality assessment

    IEEE Transactions on Image Processing

    (2018)
  • H. Zeng, L. Zhang, A. C. Bovik, Blind image quality assessment with a probabilistic quality representation, in:...
  • S. Bianco et al.

    On the use of deep learning for blind image quality assessment

    Signal, Image and Video Processing

    (2018)
  • K. Ma et al.

dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs

    IEEE Transactions on Image Processing

    (2017)
  • K. Ma et al.

    End-to-end blind image quality assessment using deep neural networks

    IEEE Transactions on Image Processing

    (2018)
  • J. Kim et al.

    Fully deep blind image quality predictor

IEEE Journal of Selected Topics in Signal Processing

    (2017)

    Wen Lu received the B.Sc., M.Sc., and Ph.D. degrees in signal and information processing from Xidian University, Xi'an, China, in 2002, 2006, and 2009, respectively. In 2009 he joined the School of Electronic Engineering at Xidian University. From 2010 to 2012, he was a Post-doctoral Research Fellow with the Department of Electronic Engineering, Stanford University, U.S. He is currently a professor at the School of Electronic Engineering, Xidian University. His current research interests include multimedia analysis, computer vision, pattern recognition, and deep learning. He has published 2 books and around 50 technical articles in refereed journals and proceedings, including IEEE Transactions on Image Processing, IEEE Transactions on Cybernetics, Information Sciences, Neurocomputing, etc. He is also on the editorial boards of and serves as a reviewer for many journals, such as IEEE Transactions on Image Processing and IEEE Transactions on Multimedia.

    Yong Zheng received the B.S. degree from the School of Electronic Engineering, Xidian University, Xi'an, China, in 2020, where he is currently pursuing the master's degree with the School of Electronic Engineering. His current research interests include machine learning, visual information processing, and image quality assessment.

    Weiquan He received the B.S. degree from the School of Telecommunication and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China, in 2017. He is currently working at Alibaba. His research interests include machine learning, visual information processing, and image quality assessment.

    Jiachen Yang is an IEEE Senior Member; a member of the New Century Outstanding Talents Support Program of the Ministry of Education; a professor of Communication and Information Engineering, Tianjin University; and a doctoral supervisor. He has been in charge of more than 30 projects in the past five years, including two projects of the National Natural Science Foundation, with more than 15 million yuan in cumulative funding. He has published more than 40 SCI-indexed papers. He has also applied for more than 40 patents, more than 20 of which have been granted.
