
Feature-segmentation strategy based convolutional neural network for no-reference image quality assessment

Published in Multimedia Tools and Applications

Abstract

Convolutional neural networks (CNNs) have been shown to deliver outstanding performance for image quality assessment (IQA). Most CNN models are trained on small image patches with a fixed resolution of 32 × 32. However, more information about image content and the human visual system should be taken into account. This paper proposes a feature-segmentation-based CNN model for no-reference quality assessment that requires no pre-processing. The network consists of five convolutional layers with max pooling, one special fully connected layer that performs feature segmentation, and one output layer. The feature-segmentation strategy ensures sufficient training data: we modify the structure of the fully connected layer and treat every feature vector of the last pooling maps as an independent sample when training the proposed model. In this way, raw images can be fed directly into the network without being split into patches, avoiding any hand-crafted features. Moreover, the input image size is not fixed, while the size of each extracted feature vector is invariant. Experiments on the LIVE, CSIQ and TID2008 databases demonstrate that our approach is highly consistent with subjective evaluation scores. The results show that the proposed network outperforms state-of-the-art no-reference IQA algorithms and is comparable to some full-reference IQA algorithms.
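To make the design concrete, the sketch below shows one way the described network could be realized in PyTorch: five convolutional layers with max pooling, a fully connected layer applied independently to every feature vector of the last pooling maps, and a scalar output layer. The channel widths, kernel sizes, hidden size, and the averaging of per-vector scores into a single image score are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the described architecture. Channel widths, 3x3 kernels,
# the 512-unit hidden layer, and average pooling of per-vector scores are
# assumptions; the abstract does not fix these details.
import torch
import torch.nn as nn

class FeatureSegmentationIQA(nn.Module):
    def __init__(self, channels=(32, 32, 64, 64, 128)):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in channels:
            # Five convolutional layers, each followed by max pooling.
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        # One fully connected layer applied to each feature vector of the
        # last pooling maps (the "feature segmentation"), then a scalar output.
        self.fc = nn.Linear(channels[-1], 512)
        self.out = nn.Linear(512, 1)

    def forward(self, x):
        f = self.features(x)              # (N, C, H', W'); H', W' vary with input size
        v = f.flatten(2).transpose(1, 2)  # (N, H'*W', C): one C-dim vector per location
        v = v.reshape(-1, v.size(-1))     # each vector becomes an independent sample
        s = self.out(torch.relu(self.fc(v)))
        # Pool the per-vector scores back into one score per image (assumed).
        return s.view(x.size(0), -1).mean(dim=1)

# Images of any size (at least 32 x 32, so five poolings leave a 1 x 1 map)
# can be scored without patch splitting:
model = FeatureSegmentationIQA()
score = model(torch.randn(2, 3, 384, 512))  # -> tensor of shape (2,)
```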

Acknowledgements

This research is partially supported by the National Natural Science Foundation of China (No. 61520106002).

Author information

Correspondence to Lili Shen.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Shen, L., Hang, N. & Hou, C. Feature-segmentation strategy based convolutional neural network for no-reference image quality assessment. Multimed Tools Appl 79, 11891–11904 (2020). https://doi.org/10.1007/s11042-019-08298-2
