Neurocomputing

Volume 212, 5 November 2016, Pages 121-127

Fast image quality assessment via supervised iterative quantization method

https://doi.org/10.1016/j.neucom.2016.01.116

Abstract

No-reference/blind image quality assessment (NR-IQA/BIQA) is important for image processing yet very challenging, especially for real-time applications and big image data processing. Traditional NR-IQA metrics usually train complex models such as support vector machines, neural networks, and probabilistic graphical models, which results in long computational time and poor robustness. To overcome these weaknesses, this paper proposes a fast no-reference image quality assessment method based on hash coding, named NRHC. First, the image is divided into overlapped patches to extract the spatial statistical features of natural scene images. The features are then encoded into binary hash codes via a supervised iterative quantization (SITQ) method. Finally, the Hamming distances between the hash code of the test image and those of the original undistorted images are computed to obtain the final image quality. Thorough experiments on benchmark databases demonstrate that the proposed approach achieves comparable performance with higher computational efficiency and stronger robustness than state-of-the-art NR-IQA methods.

Introduction

With the rapid development of intelligent networks, ultra-high-resolution displays, and wearable devices, high-quality and credible visual information (e.g., images and video) is essential for end users to obtain a satisfactory quality of experience (QoE). Assessing the quality of visual information, especially with no-reference or blind image quality assessment (NR-IQA or BIQA) methods, plays an important role in numerous visual information processing systems and applications [1]. Moreover, NR-IQA methods that are both effective (high prediction accuracy) and efficient (low computational complexity) are essential and have attracted considerable attention.

An NR-IQA metric is designed to automatically and accurately predict image quality without reference images. It is a difficult and challenging task that has nevertheless attracted many researchers' attention. Traditional methods focus on designing distortion-specific metrics [2], [3], [4], meaning that each method effectively evaluates images affected by only one type of distortion, such as JPEG or JPEG2000 compression artifacts, white noise, or Gaussian blurring. It is therefore imperative to build general-purpose NR-IQA metrics that can handle different types of distortions and even multiple simultaneous distortions.

Recently, great effort has been devoted to developing general-purpose, distortion-agnostic NR-IQA metrics, which can predict the quality of images without knowing the type of distortion. Almost all of the proposed NR-IQA methods [5], [6] involve two key processes for building a distortion-agnostic metric: quality-aware feature extraction and effective evaluation model design. Natural scene statistics (NSS) [7] are the most widely used features, usually characterized by a generalized Gaussian distribution (GGD) fitted to wavelet coefficients. Other features are extracted with Gabor filters in the spatial domain [8] or as statistical characteristics in the discrete cosine transform (DCT) domain [9]. All of these statistically derived features reveal the naturalness of natural scene images. The other key point is the design of the prediction model, which falls into two categories: the two-step strategy and the transductive approach. The former first determines the type of distortion in the test image and then employs an associated distortion-specific no-reference metric to predict the quality of the given image, e.g., BIQI [5] and DIIVINE [10]. BIQI trains a support vector machine (SVM) model to classify five different types of distortion and trains five distortion-specific support vector regression (SVR) models to predict image quality. DIIVINE, an extension of BIQI, is also built on the two-step framework. In contrast, the transductive approach builds a model that directly maps image features to image quality without distinguishing between distortion types, such as LBIQ [11], BLIINDS [12], BLIINDS-II [9], CORNIA [8], [13], NIQE [14], and SRNSS [15]. In these metrics, a variety of machine learning methods are used to train the quality prediction model, such as multiple kernel learning (MKL) [16], neural networks [17], [18], and probabilistic models [19]. Consequently, the reported metrics share a significant problem: they require long training and testing time, because complex machine learning models are adopted, parameters are mostly set by experience, and a large number of samples are needed to train the prediction model. These factors also reduce the robustness of the quality evaluation system.
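To make the NSS idea above concrete, the following is a minimal sketch (not the paper's exact feature extractor) of how spatial NSS features are commonly obtained: compute mean-subtracted contrast-normalized (MSCN) coefficients and fit a GGD to them by moment matching, yielding a shape/scale pair as a quality-aware feature. The window size and constants are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized coefficients of a grayscale image."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    return (image - mu) / (np.sqrt(np.maximum(var, 0.0)) + c)

def fit_ggd(coeffs):
    """Moment-matching estimate of the GGD shape (alpha) and scale (sigma)."""
    coeffs = coeffs.ravel()
    sigma_sq = np.mean(coeffs ** 2)
    e_abs = np.mean(np.abs(coeffs))
    rho = sigma_sq / (e_abs ** 2 + 1e-12)              # empirical GGD ratio
    alphas = np.arange(0.2, 10.0, 0.001)
    r_alpha = gamma(1 / alphas) * gamma(3 / alphas) / gamma(2 / alphas) ** 2
    alpha = alphas[np.argmin(np.abs(r_alpha - rho))]   # closest shape parameter
    return alpha, np.sqrt(sigma_sq)
```

Distorted images perturb the empirical distribution of such coefficients, so the fitted parameters deviate from those of natural images; this is the basic premise shared by the NSS-based metrics cited above.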

To address the above problems, this paper proposes a novel no-reference image quality assessment metric based on a supervised iterative quantization method, which is simple yet very fast. The proposed method first divides the image into overlapped patches and extracts image spatial quality evaluator features for each patch. The features are then encoded into hash codes via a supervised iterative quantization method. Finally, the Hamming distance between the hash code of the test image and those of the original undistorted images is calculated to predict image quality. In the proposed method, quality prediction involves only hash coding and Hamming distance computation [20], [21], both of which are fast and highly efficient. Hence, the proposed method can support real-time applications and big image data processing.
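The following is a hedged sketch of the prediction pipeline just described. The patch size, the feature extractor, and the learned projection/rotation matrices W and R are assumptions introduced for illustration; the training of the supervised iterative quantization itself is not reproduced here.

```python
import numpy as np

def image_to_patches(img, size=96, stride=48):
    """Split a grayscale image into overlapped patches (illustrative sizes)."""
    patches = []
    for y in range(0, img.shape[0] - size + 1, stride):
        for x in range(0, img.shape[1] - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
    return patches

def hash_code(features, W, R):
    """Binary code: project the feature vector(s) by W, rotate by R, take signs."""
    return (features @ W @ R >= 0).astype(np.uint8)

def predict_quality(test_features, ref_codes, W, R):
    """Quality proxy: mean Hamming distance from the test code to pristine codes."""
    code = hash_code(test_features, W, R)
    dists = np.count_nonzero(ref_codes != code, axis=1)
    # A larger distance from the undistorted codes indicates lower quality.
    return dists.mean()
```

Because prediction reduces to a matrix projection, a sign operation, and bit comparisons, the per-image cost is essentially constant, which is what makes the approach attractive for real-time and large-scale settings.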

The rest of the paper is organized as follows. Section 2 illustrates the proposed no-reference image quality assessment method. Detailed experimental results are summarized and discussed in Section 3, and Section 4 concludes the paper.

Section snippets

NR-IQA via hash code

To assess image quality effectively and efficiently, a novel no-reference image quality assessment method is presented in this paper. The proposed method includes three major steps: feature extraction, hash coding, and quality evaluation. For convenience, the proposed metric is named NRHC, short for fast No-Reference image quality assessment via Hash Code. The framework of the proposed NRHC is shown in Fig. 1.
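As background for the hash coding step, the sketch below shows the core of iterative quantization in the spirit of ITQ (Gong and Lazebnik): learn an orthogonal rotation R that minimizes the quantization loss ||B - VR||_F between zero-centered projected data V and binary codes B. How the supervision in SITQ shapes the projection is not reproduced here; this is only an illustrative assumption about the quantization stage.

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """Learn an orthogonal rotation R for zero-centered projected data V (n x k)."""
    rng = np.random.default_rng(seed)
    k = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((k, k)))   # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                              # fix R, update binary codes
        U, _, Vt = np.linalg.svd(B.T @ V)               # fix B, solve Procrustes for R
        R = (U @ Vt).T
    return R

# Final binary hash codes: (V @ R >= 0).astype(np.uint8)
```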

Experimental results and analysis

To validate the effectiveness and robustness of the proposed NR-IQA method, four experiments are conducted: a consistency experiment, a database independence experiment, a time cost experiment, and a parameter analysis.
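For reference, consistency experiments of this kind are typically scored with the Spearman rank-order correlation (SROCC) and Pearson linear correlation (PLCC) between predicted quality and the databases' subjective scores. The snippet below is a generic illustration, not the paper's evaluation code; in practice PLCC is usually computed after a nonlinear (e.g., logistic) mapping of the predictions.

```python
from scipy.stats import spearmanr, pearsonr

def consistency(predicted, subjective):
    """SROCC and PLCC between predicted scores and MOS/DMOS values."""
    srocc, _ = spearmanr(predicted, subjective)
    plcc, _ = pearsonr(predicted, subjective)
    return srocc, plcc
```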

Databases: LIVE II [27], TID [28], CSIQ [29], IVC [30], and MICT [31] are used as the standard databases. The LIVE database (from the Laboratory of Image and Video Engineering at the University of Texas at Austin) contains 29 high-resolution 24-bits/pixel RGB color original images and a series of

Conclusions

This paper proposed a novel no-reference image quality assessment method based on hash codes. Its effectiveness and efficiency are demonstrated by the analysis and experiments. The proposed method first extracts spatial natural scene statistics features, embeds the features into hash codes via a supervised iterative quantization method, and calculates the Hamming distance between the hash code of the test image and those of the original undistorted images to predict image quality. The hash coding and

Acknowledgments

This research was supported partially by the National Natural Science Foundation of China (Nos. 61372130, 61432014, 61501349, 61571343), the Fundamental Research Funds for the Central Universities (Nos. BDY081426, JB140214, XJS14042), the Program for New Scientific and Technological Star of Shaanxi Province (No. 2014KJXX-47), and the Project Funded by the China Postdoctoral Science Foundation (No. 2014M562378).


References (34)

  • D. Wang et al., Semi-supervised constraints preserving hashing, Neurocomputing, 2015.
  • Z. Wang, A.C. Bovik, Modern Image Quality Assessment, Morgan and Claypool, New York,...
  • H.R. Sheikh et al., No-reference quality assessment using natural scene statistics: JPEG2000, IEEE Trans. Image Process., 2005.
  • Z. Wang, A.C. Bovik, B.L. Evans, Blind measurement of blocking artifacts in images, Proc. IEEE Int. Conf. Image...
  • L. Li et al., Compression quality prediction model for JPEG2000, IEEE Trans. Image Process., 2010.
  • A.K. Moorthy et al., A two-step framework for constructing blind image quality indices, IEEE Signal Process. Lett., 2010.
  • K. Gu et al., Using free energy principle for blind image quality assessment, IEEE Trans. Multimed., 2015.
  • R.W. Buccigrossi et al., Image compression via joint statistical characterization in the wavelet domain, IEEE Trans. Image Process., 1999.
  • P. Ye et al., No-reference image quality assessment in the spatial domain, IEEE Trans. Image Process., 2012.
  • M.A. Saad et al., Blind image quality assessment: a natural scene statistics approach in the DCT domain, IEEE Trans. Image Process., 2012.
  • A.K. Moorthy et al., Blind image quality assessment: from natural scene statistics to perceptual quality, IEEE Trans. Image Process., 2011.
  • H. Tang, N. Joshi, A. Kapoor, Learning a blind measure of perceptual image quality, Proc. IEEE Conf. Comput. Vis....
  • M.A. Saad et al., A DCT statistics based blind image quality index, IEEE Trans. Image Process., 2011.
  • P. Ye et al., No-reference image quality assessment using visual codebooks, IEEE Trans. Image Process., 2012.
  • A. Mittal et al., Making a completely blind image quality analyzer, IEEE Signal Process. Lett., 2013.
  • L.H. He, D.C. Tao, X.L. Li, X.B. Gao, Sparse representation for blind image quality assessment, Proc. IEEE Conf....
  • X. Gao et al., Universal blind image quality assessment metrics via natural scene statistics and multiple kernel learning, IEEE Trans. Neural Netw. Learn. Syst., 2013.

    Lihuo He is currently a Postdoctoral Fellow at Xidian University. He received the B.Sc. degree in Electronic and Information Engineering and Ph.D. degree in Pattern Recognition and Intelligent Systems from Xidian University, Xi'an, China, in 2008 and 2013. His research interests focus on image/video quality assessment, cognitive computing, and computational vision.

    Di Wang received the B.S. degree in Computer Science from Changan University, Xi'an, China, in 2011. She is currently working toward the Ph.D. degree in the School of Electronic Engineering at Xidian University. Her research interests include machine learning and multimedia information retrieval.

    Qi Liu is currently pursuing the Ph.D. degree in Pattern Recognition and Intelligent system at Xidian University, Xi'an, China. His research interests include pattern recognition, computer vision and image enhancement.

    Wen Lu received the B.Sc., M.Sc. and Ph.D. degrees in Signal and Information Processing from Xidian University, China, in 2002, 2006 and 2009 respectively. From 2010 to 2012, he was a postdoctoral fellow in the department of electrical engineering at Stanford University, USA. He is currently an Associate Professor at Xidian University. His research interests include image & video quality metric, human vision system, computational vision. He has published 2 books and around 30 technical articles in refereed journals and proceedings including IEEE TIP, TSMC, Neurocomputing, Signal processing, etc.
