Signal Processing

Volume 124, July 2016, Pages 210-219

Biologically inspired image quality assessment

https://doi.org/10.1016/j.sigpro.2015.08.012

Highlights

  • We propose a novel IQA approach named biologically inspired feature similarity (BIFS), which is demonstrated to be highly consistent with human perception.

  • In the proposed approach, biologically inspired features (BIFs) of the test image and the relevant reference image are first extracted.

  • Afterwards, local similarities between the reference BIFs and the distorted ones are calculated and then combined to obtain a final quality index.

  • Thorough experiments on a number of IQA databases demonstrate that the proposed method is highly effective and robust, and outperforms state-of-the-art FR-IQA methods across various datasets.

Abstract

Image quality assessment (IQA) aims at developing computational models that can precisely and automatically estimate human perceived image quality. To date, various IQA methods have been proposed to mimic the processing of the human visual system, with limited success. Here, we present a novel IQA approach named biologically inspired feature similarity (BIFS), which is demonstrated to be highly consistent with human perception. In the proposed approach, biologically inspired features (BIFs) of the test image and the relevant reference image are first extracted. Afterwards, local similarities between the reference BIFs and the distorted ones are calculated and then combined to obtain a final quality index. Thorough experiments on a number of IQA databases demonstrate that the proposed method is highly effective and robust, and outperforms state-of-the-art FR-IQA methods across various datasets.

Introduction

The past decades have witnessed a dramatic increase in the number of images with the tremendous development of social networking websites, smartphones, and cameras, and various systems have been developed to deal with images at such a scale. In these systems, image quality usually plays a significant role. For example, images of poor quality may hinder the training or application of such systems in practice, e.g. in scene recognition [1], image retrieval [2], and so on. In addition, image quality can be adopted as a criterion for evaluating the performance of image processing systems [3], [4], [5], optimizing image processing algorithms, and monitoring the working condition of devices [6]. Thus it is meaningful to develop image quality assessment (IQA) methods that can precisely and automatically estimate human perceived image quality.

In recent years, many IQA methods have been developed, and they can be classified into three classes [6]: full-reference (FR) IQA [7], reduced-reference (RR) IQA [8], [9], and no-reference (NR) or blind IQA [10], [11]. FR-IQA methods need all the information of the reference image, i.e. the undistorted version of the test image. In contrast, RR-IQA and NR-IQA methods need only part of, or none of, the information about the reference image. Consequently, the quality prediction accuracies of FR-IQA methods are usually better than those of present RR-IQA and NR-IQA methods.

Generally speaking, the intrinsic idea of FR-IQA is to estimate the quality of a test image by measuring the similarity or difference between the test image and the corresponding reference image. For example, in peak signal-to-noise ratio (PSNR) and root mean squared error (RMSE), the two most widely used IQA methods, the differences between the reference image and the test image are calculated pixel by pixel and then combined into a single value. Because PSNR and RMSE are not always consistent with human perception [12], great efforts have been devoted over the past decades to developing more advanced quality assessment methods [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28]. Many of them have shown impressive and inspiring consistency with human perception over a large range of datasets [26], [27], [28].
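To make this pixel-wise baseline concrete, the following minimal sketch (in Python with NumPy; not part of the paper) computes RMSE and PSNR for two same-sized grayscale images:

```python
import numpy as np

def rmse(reference: np.ndarray, test: np.ndarray) -> float:
    """Root mean squared error between two same-sized grayscale images."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means smaller pixel-wise error."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))
```

Both measures combine all pixel differences into a single value without regard to where or how the errors are distributed, which is precisely why they can disagree with human judgments.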

Since the goal of IQA is to approximate human beings’ judgments of image quality, it is meaningful to develop IQA methods that mimic the perception mechanism of the human visual system (HVS). Although many such attempts have been made, most of them only consider some particular properties of the HVS, e.g. the contrast sensitivity function (CSF) [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], just noticeable difference (JND) [30], and visual attention (VA) [31]. Usually they do not perform as well as state-of-the-art FR-IQA methods. To date, only a limited number of IQA methods have been proposed to formulate the processing in the visual cortex [27], and the properties of the primary visual cortex, V1, have not been well explored for IQA, although neuroscientists have demonstrated that V1 plays a significant role in visual processing [32].

In this paper, we utilize biologically inspired feature (BIF) models [33] to mimic the properties of simple (S1) and complex (C1) cells in V1, and construct a novel IQA index by measuring the similarity between the BIFs of the reference image and those of the test image. Although BIFs have been introduced to FR-IQA before, they were adopted for estimating visual attention [34]. In contrast, in the proposed method, BIFs are deployed to represent the input image in the primary visual cortex and are directly utilized for quality prediction. Thorough experiments conducted on various IQA databases demonstrate that the proposed method is highly consistent with human perception and outperforms state-of-the-art FR-IQA methods across a number of datasets. The highlights of the proposed method are summarized as follows:

  • a. We explore BIF for FR-IQA by employing it to mimic the processing in the primary visual cortex;

  • b. We construct a novel FR-IQA framework by measuring the similarity between the BIFs of the test image and the BIFs of the corresponding reference image; and

  • c. Thorough experiments on existing databases demonstrate that the proposed method is highly competitive with state-of-the-art FR-IQA methods.

The rest of the paper is organized as follows. Section 2 introduces the calculation of BIFs. In Section 3, we present the framework of the proposed quality evaluation method. Extensive experiments conducted on standard IQA datasets are presented and analyzed in Section 4. Section 5 concludes the paper.

Section snippets

Biologically inspired features

Biologically inspired feature models mimic the tuning properties of the simple and complex cells in V1 and have been demonstrated to be effective and efficient for solving various image processing problems, e.g. scene classification [33], object recognition [35], [36], visual attention detection [32], [33], [34], [35], [36], [37], and so on. We thus choose BIFs to represent an image in the proposed research. Specifically, we follow the work presented in [33] and adopt the C1 units,
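For readers unfamiliar with such models, the sketch below (Python with NumPy/SciPy; an illustration, not the paper's implementation) builds a small S1 Gabor filter bank and pools its responses into C1 maps in the spirit of the HMAX family of models that [33] follows. The function names (`gabor_kernel`, `c1_maps`), the filter sizes, sigmas, wavelengths, and the pooling neighborhood are all illustrative assumptions; the paper's exact parameters may differ.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import maximum_filter

def gabor_kernel(size, wavelength, orientation, sigma, gamma=0.3):
    """Real Gabor kernel: a simple-cell (S1) model at one scale/orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    g = (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
         * np.cos(2 * np.pi * xr / wavelength))
    g -= g.mean()                        # zero DC response
    g /= np.linalg.norm(g) + 1e-12       # unit energy
    return g

def c1_maps(image, orientations=8, scales=((7, 2.8, 3.5), (9, 3.6, 4.6))):
    """C1 complex-cell responses: max over adjacent scales, then local
    spatial max pooling, per orientation. Parameter values are assumptions."""
    image = image.astype(np.float64)
    maps = []
    for k in range(orientations):
        theta = k * np.pi / orientations
        s1 = [np.abs(fftconvolve(image, gabor_kernel(sz, lam, theta, sig),
                                 mode="same"))
              for sz, sig, lam in scales]
        band = np.maximum.reduce(s1)               # max over adjacent scales
        maps.append(maximum_filter(band, size=8))  # local spatial max pooling
    return maps
```

The max pooling over position and scale is what gives C1 units their tolerance to small shifts and size changes, the property that motivates their use for perceptual quality comparison.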

Quality assessment framework

The proposed IQA approach simulates the processing in the visual cortex and can be divided into three main components: the biologically inspired feature maps, the similarity maps between the distorted feature maps and the corresponding reference feature maps, and percentile-pooling-based quality prediction. We term the proposed IQA method biologically inspired feature similarity (BIFS). The flowchart of BIFS is shown in Fig. 2. Details are discussed below.
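As a hedged illustration of how the three components can be assembled, the sketch below computes an SSIM-style pointwise similarity between corresponding feature maps and pools the result by percentile. The similarity ratio, the stabilizing constant `c`, the pooled percentile `p`, and the helper names (`similarity_map`, `percentile_pool`, `bifs_index`) are assumptions for illustration; the paper's exact definitions are those given in Section 3. `c1_maps` is the C1 sketch from the previous section.

```python
import numpy as np

def similarity_map(ref_map, test_map, c=1e-4):
    """Pointwise similarity between a reference and a distorted feature map,
    in the spirit of the SSIM-style ratio; c stabilizes near-zero regions."""
    return (2.0 * ref_map * test_map + c) / (ref_map**2 + test_map**2 + c)

def percentile_pool(sim_maps, p=10.0):
    """Percentile pooling: average the lowest p% of similarity values,
    emphasizing the most distorted regions (the value of p is an assumption)."""
    values = np.concatenate([m.ravel() for m in sim_maps])
    cutoff = np.percentile(values, p)
    return float(values[values <= cutoff].mean())

def bifs_index(ref_image, test_image):
    """BIFS-style quality score: higher means closer to the reference."""
    ref_maps = c1_maps(ref_image)    # C1 sketch from the previous section
    test_maps = c1_maps(test_image)
    sims = [similarity_map(r, t) for r, t in zip(ref_maps, test_maps)]
    return percentile_pool(sims)
```

Pooling only the worst-scoring fraction of locations reflects the common observation that a few severely distorted regions dominate perceived quality more than a uniform average would suggest.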

Experimental results

To evaluate the performance of the proposed method, we test it on several existing IQA databases and compare it with a number of state-of-the-art FR-IQA methods.
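FR-IQA studies of this kind conventionally report rank and linear correlations between predicted scores and subjective mean opinion scores (MOS). The snippet below (assuming SciPy is available) is a generic evaluation sketch, not the paper's exact protocol; in particular, PLCC is often computed after fitting a nonlinear logistic mapping between scores and MOS, which is omitted here.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr, kendalltau

def evaluate_iqa(predicted, mos):
    """Standard agreement metrics between predicted quality scores and MOS."""
    predicted = np.asarray(predicted, dtype=np.float64)
    mos = np.asarray(mos, dtype=np.float64)
    return {
        "SROCC": spearmanr(predicted, mos).correlation,  # rank correlation
        "KROCC": kendalltau(predicted, mos).correlation,
        "PLCC": pearsonr(predicted, mos)[0],             # linear correlation
        "RMSE": float(np.sqrt(np.mean((predicted - mos) ** 2))),
    }
```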

Conclusions

In this paper, biologically inspired features are introduced to mimic the processing in the primary visual cortex. Afterwards, the similarity between the BIFs of the reference image and the BIFs of the distorted image is calculated for quality prediction. Comparison of the proposed algorithm with state-of-the-art FR-IQA metrics on a number of databases shows that it has an impressive consistency with human perception and an overwhelming superiority over the state-of-the-art FR-IQA metrics for

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61472110), the Program for New Century Excellent Talents in University (NCET-12-0323), the Hong Kong Scholar Programme (XJ2013038), and the Zhejiang Provincial Natural Science Foundation of China (No. LR15F020002).

References (49)

  • J. Yu et al.

    Semantic embedding for indoor scene recognition by weighted hypergraph learning

    Signal Process.

    (2015)
  • M. Song et al.

    Color-to-Gray based on chance of happening preservation

    Neurocomputing

    (2013)
  • C. Xu et al.

    Multi-view intact space learning

    IEEE Transactions on Pattern Analysis and Machine Intelligence

    (2015)
  • J. Yu et al.

    Learning to rank using user clicks and visual features for image retrieval

    IEEE Trans. Cybern.

    (2015)
  • Y. Wang et al.

    Video tonal stabilization via color states smoothing

    IEEE Trans. Image Process.

    (2014)
  • M. Song et al.

    Color to gray: visual cue preservation

    IEEE Trans. Pattern Anal. Mach. Intell.

    (2010)
  • Z. Wang et al.

    Modern Image Quality Assessment

    (2006)
  • H.R. Sheikh et al.

    A statistical evaluation of recent full reference image quality assessment algorithms

    IEEE Trans. Image Process.

    (2006)
  • Z. Wang et al.

    Quality-aware images

    IEEE Trans. Image Process.

    (2006)
  • X. Gao et al.

    Image quality assessment based on multiscale geometric analysis

    IEEE Trans. Image Process.

    (2009)
  • L. He et al.

    Sparse representation for blind image quality assessment

    Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR)

    (2012)
  • X. Gao et al.

    Universal blind image quality assessment metrics via natural scene statistics and multiple kernel learning

    IEEE Trans. Neural Netw. Learn. Syst.

    (2013)
  • Z. Wang et al.

    Mean squared error: love it or leave it? A new look at signal fidelity measures

    IEEE Signal Process. Mag.

    (2009)
  • A.B. Watson et al.

    Visibility of wavelet quantization noise

    IEEE Trans. Image Process.

    (1997)
  • A.P. Bradley

    A wavelet visible difference predictor

    IEEE Trans. Image Process.

    (1999)
  • Z. Wang et al.

    Image quality assessment: from error visibility to structural similarity

    IEEE Trans. Image Process.

    (2004)
  • H.R. Sheikh et al.

    An information fidelity criterion for image quality assessment using natural scene statistics

    IEEE Trans. Image Process.

    (2005)
  • H.R. Sheikh et al.

    Image information and visual quality

    IEEE Trans. Image Process.

    (2006)
  • D.M. Chandler et al.

    VSNR: a wavelet-based visual signal-to-noise ratio for natural images

    IEEE Trans. Image Process.

    (2007)
  • X.B. Gao et al.

    A content-based image quality metric

  • Z. Wang et al.

    Information content weighting for perceptual image quality assessment

    IEEE Trans. Image Process.

    (2011)
  • L. He et al.

    A novel metric based on MCA for image quality

    Int. J. Wavel. Multiresolution Inf. Process.

    (2011)
  • L. He et al.

    Image quality assessment based on S-CIELAB model

    Signal Image Video Process.

    (2011)
  • L. He et al.

    Color fractal structure model for reduced-reference colorful image quality assessment

    (2012)