Multiscale convolutional neural network for no-reference image quality assessment with saliency detection

  • 1221: Deep Learning for Image/Video Compression and Visual Quality Assessment
  • Published in: Multimedia Tools and Applications

Abstract

In recent years, Convolutional Neural Networks (CNNs) have gradually been applied to Image Quality Assessment (IQA). Most CNN-based methods segment the image into patches for training, which increases the amount of training data and slows down the model. Meanwhile, the parameters of a CNN often number in the millions, which makes the model prone to overfitting. In this paper, a multiscale CNN for no-reference IQA (NR-IQA) is established to address these problems. Since IQA simulates how the Human Visual System (HVS) perceives image quality, salient regions are more valuable for reference. Therefore, a patch sampling method based on saliency detection is designed. First, patches whose saliency values lie between given thresholds are retained as training data. Second, the sampled patches are fed into a multiscale CNN consisting of three branches with convolutional kernels of different scales. Finally, the weighted average of the quality scores of the salient patches gives the final score. The CNN was trained on the LIVE dataset and cross-validated on the CSIQ dataset. Experimental results show that the proposed method achieves better performance with fewer parameters than state-of-the-art NR-IQA algorithms.
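
The abstract outlines three steps: saliency-thresholded patch sampling, a three-branch CNN with convolutional kernels of different scales, and a saliency-weighted average of the patch scores. The following is a minimal PyTorch sketch of one possible reading of that pipeline; the patch size, saliency thresholds, kernel sizes, layer widths, and regressor head are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch of the saliency-sampled, multiscale NR-IQA pipeline described in the
# abstract. All hyperparameters (patch size 32, thresholds, kernels 3/5/7,
# channel widths) are assumptions for illustration only.
import torch
import torch.nn as nn


def sample_salient_patches(image, saliency_map, patch_size=32,
                           low_thr=0.2, high_thr=0.9):
    """Keep non-overlapping patches whose mean saliency lies between the
    two thresholds.

    image: (3, H, W) tensor; saliency_map: (H, W) tensor in [0, 1].
    Returns patches (N, 3, p, p) and their mean saliency values (N,).
    """
    patches, weights = [], []
    _, H, W = image.shape
    for y in range(0, H - patch_size + 1, patch_size):
        for x in range(0, W - patch_size + 1, patch_size):
            s = saliency_map[y:y + patch_size, x:x + patch_size].mean()
            if low_thr <= s <= high_thr:   # retain only sufficiently salient patches
                patches.append(image[:, y:y + patch_size, x:x + patch_size])
                weights.append(s)
    return torch.stack(patches), torch.stack(weights)


class Branch(nn.Module):
    """One branch: convolutions with a single kernel size, then global pooling."""
    def __init__(self, kernel_size):
        super().__init__()
        pad = kernel_size // 2
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size, padding=pad), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size, padding=pad), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)            # (N, 32)


class MultiscaleIQANet(nn.Module):
    """Three branches with different kernel sizes, fused into one patch score."""
    def __init__(self, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(Branch(k) for k in kernel_sizes)
        self.regressor = nn.Sequential(
            nn.Linear(32 * len(kernel_sizes), 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, patches):
        feats = torch.cat([b(patches) for b in self.branches], dim=1)
        return self.regressor(feats).squeeze(1)       # one quality score per patch


def predict_image_score(model, image, saliency_map):
    """Saliency-weighted average of patch scores gives the image-level score."""
    patches, weights = sample_salient_patches(image, saliency_map)
    model.eval()
    with torch.no_grad():
        patch_scores = model(patches)
    return (patch_scores * weights).sum() / weights.sum()
```

The saliency map itself would come from a separate saliency detector (the abstract does not fix which one); training would regress each retained patch's score toward the image's subjective quality label.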



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 61976027, the Liaoning Provincial Department of Education under Grants LJ2019011 and LJKZ1026, the Liaoning Natural Foundation Guidance Plan under Grant 2019-ZD-0502, and the Liaoning Revitalization Talents Program under Grant XLYC2008002.

Author information


Corresponding author

Correspondence to Xiaodong Fan.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Fan, X., Wang, Y., Wang, C. et al. Multiscale convolutional neural network for no-reference image quality assessment with saliency detection. Multimed Tools Appl 81, 42607–42619 (2022). https://doi.org/10.1007/s11042-022-13477-9
