
Visual Saliency Based Blind Image Quality Assessment via Convolutional Neural Network

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10639)

Abstract

Image quality assessment (IQA), as one of the fundamental techniques in image processing, is widely used in many computer vision and image processing applications. In this paper, we propose a novel visual saliency based blind IQA model, which combines properties of the human visual system (HVS) with features extracted by a deep convolutional neural network (CNN). The proposed model is entirely data-driven and therefore uses no hand-crafted features. Instead of feeding the model with patches selected randomly from images, we introduce a salient object detection algorithm to identify regions of interest, which serve as training data. Experimental results on the LIVE and CSIQ databases demonstrate that our approach outperforms the state-of-the-art methods compared.
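The abstract describes the pipeline only at a high level. The snippet below is a minimal PyTorch sketch of that idea: a saliency map guides which image patches are kept, a small CNN regresses a quality score per patch, and the image-level score is the mean over the salient patches. The helper names (`select_salient_patches`, `PatchQualityCNN`, `predict_image_quality`), the patch size, the saliency threshold, and the network layout are illustrative assumptions rather than the authors' exact architecture; the saliency map itself is assumed to come from any salient object detection method.

```python
# Minimal sketch of saliency-guided patch selection + CNN quality regression.
# All names, sizes, and layer choices here are illustrative assumptions,
# not the exact design proposed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


def select_salient_patches(image, saliency, patch=32, stride=32, thresh=0.5):
    """Crop non-overlapping patches whose mean saliency exceeds a threshold.

    image:    float tensor of shape (C, H, W), values in [0, 1]
    saliency: float tensor of shape (H, W), values in [0, 1],
              e.g. produced by any salient object detection model
    """
    patches = []
    _, H, W = image.shape
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            if saliency[y:y + patch, x:x + patch].mean() > thresh:
                patches.append(image[:, y:y + patch, x:x + patch])
    if patches:
        return torch.stack(patches)
    return image.new_zeros((0, image.shape[0], patch, patch))


class PatchQualityCNN(nn.Module):
    """Small CNN that regresses one quality score per 32x32 patch."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(64 * 8 * 8, 256)
        self.fc2 = nn.Linear(256, 1)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32 -> 16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16 -> 8
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x).squeeze(1)                 # one score per patch


def predict_image_quality(model, image, saliency):
    """Image-level score = mean predicted score over the salient patches."""
    patches = select_salient_patches(image, saliency)
    if patches.shape[0] == 0:                  # fall back if nothing is salient
        patches = image.unsqueeze(0)[:, :, :32, :32]
    with torch.no_grad():
        return model(patches).mean().item()
```

During training, the same patch extraction would be applied to the distorted training images, with each salient patch inheriting the subjective quality score of its source image as its regression target.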



Acknowledgments

This research is supported by the National High-Tech R&D Program of China (863 Program) under Grant 2015AA016402 and Shanghai Natural Science Foundation under Grant 14Z111050022.

Author information


Corresponding author

Correspondence to Yue Zhou.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Li, J., Zhou, Y. (2017). Visual Saliency Based Blind Image Quality Assessment via Convolutional Neural Network. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10639. Springer, Cham. https://doi.org/10.1007/978-3-319-70136-3_58


  • DOI: https://doi.org/10.1007/978-3-319-70136-3_58

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70135-6

  • Online ISBN: 978-3-319-70136-3

  • eBook Packages: Computer Science, Computer Science (R0)
