Abstract
Image super-resolution (SR) is an effective technique for enhancing the quality of low-resolution (LR) images. However, one of the most fundamental problems in SR is evaluating the quality of the resulting images so that SR algorithms can be compared and optimized. In this paper, we propose a novel deep network model, referred to as the joint channel-spatial attention network (JCSAN), for no-reference SR image quality assessment (NR-SRIQA). The JCSAN adopts a two-stream architecture whose branches learn middle-level and primary-level features to jointly quantify the degradation of SR images. In the middle-level feature learning subnetwork, we embed a two-stage convolutional block attention module (CBAM) that captures discriminative perceptual feature maps through channel and spatial attention applied in sequence, while the other, shallow convolutional subnetwork learns dense, primary-level textural feature maps. To yield more accurate quality estimates for SR images, we integrate a unit aggregation gate (AG) module that dynamically distributes channel weights between the feature maps from the two branches. Extensive experimental results on two benchmark datasets verify the superiority of the proposed JCSAN-based quality metric in comparison with other state-of-the-art competitors.
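To make the architectural ingredients named above concrete, the sketch below shows, in PyTorch, how a CBAM-style block applies channel attention followed by spatial attention, and how a simple channel-wise aggregation gate could fuse the feature maps of two branches. The module names, reduction ratio, pooling choices, and the convex-combination gating are illustrative assumptions for exposition only, not the authors' implementation of JCSAN.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    # Squeeze spatial dimensions with average and max pooling, then re-weight channels.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    # Pool across channels, then a convolution produces a per-pixel attention map.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAMBlock(nn.Module):
    # Channel attention followed by spatial attention, as in CBAM (Woo et al., 2018).
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))


class GatedFusion(nn.Module):
    # Illustrative aggregation gate: predicts channel weights to blend two feature maps.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_mid, f_primary):
        g = self.gate(f_mid + f_primary)          # channel-wise gate in [0, 1]
        return g * f_mid + (1.0 - g) * f_primary  # convex combination of the two branches


if __name__ == "__main__":
    x = torch.randn(2, 64, 56, 56)           # a batch of 64-channel feature maps
    attended = CBAMBlock(64)(x)              # channel then spatial attention
    fused = GatedFusion(64)(attended, x)     # gate-weighted fusion of two branches
    print(fused.shape)                       # torch.Size([2, 64, 56, 56])

The gating here is a single channel-wise weight vector shared across both inputs; whether JCSAN's AG module uses this exact formulation or separate per-branch weights is not specified in the abstract, so treat it as a plausible reading rather than a reproduction.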







Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant 61971339, Grant 61471161, and Grant 61972136, in part by the Textile Intelligent Equipment Information and Control Innovation Team of Shaanxi Innovation Ability Support Program under Grant 2021TD-29, in part by the Textile Intelligent Equipment Information and Control Innovation Team of Shaanxi Innovation Team of Universities, in part by the Key Project of the Natural Science Foundation of Shaanxi Province under Grant 2018JZ6002, and in part by the Doctoral Startup Foundation of Xi’an Polytechnic University under Grant BS1616.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Zhang, T., Zhang, K., Xiao, C. et al. Joint channel-spatial attention network for super-resolution image quality assessment. Appl Intell 52, 17118–17132 (2022). https://doi.org/10.1007/s10489-022-03338-1