Abstract
With the development of deep learning (DL), convolutional neural networks (CNNs) have shown great reconstruction performance in single image super-resolution (SISR). However, some methods blindly deepen their networks in pursuit of performance, neglecting the multi-scale information available from different receptive fields and ignoring efficiency in practice. In this paper, we propose a lightweight SISR network built from multi-scale information fusion blocks (MIFB), which extract information across multiple ranges of receptive fields. Within each block, features are refined in a coarse-to-fine manner, and group convolutional layers are employed to reduce the number of parameters and operations. Extensive experiments on standard benchmarks show that our method outperforms state-of-the-art approaches with comparable parameter counts and multiply–accumulate (MAC) operations.
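The abstract's efficiency claim rests on group convolution: splitting channels into groups divides a layer's weight count by the number of groups. A minimal sketch of that arithmetic (the channel counts and group size below are illustrative, not taken from the paper):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution, optionally grouped.

    Each group maps c_in/groups input channels to c_out/groups
    output channels with its own k x k kernels, so the total is
    the standard count divided by `groups`.
    """
    assert c_in % groups == 0 and c_out % groups == 0
    return groups * (c_in // groups) * (c_out // groups) * k * k

standard = conv_params(64, 64, 3)            # 36864 weights
grouped = conv_params(64, 64, 3, groups=4)   # 9216 weights
print(standard, grouped, standard // grouped)  # 4x reduction
```

The same factor applies to MAC operations, since each weight is used once per output spatial position; this is why grouped layers keep the block lightweight.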
Acknowledgements
The research in this paper is sponsored by the National Natural Science Foundation of China (No. 61701327).
Cite this article
Zou, Y., Yang, X., Albertini, M.K. et al.: LMSN: a lightweight multi-scale network for single image super-resolution. Multimedia Systems 27, 845–856 (2021). https://doi.org/10.1007/s00530-020-00720-2