Abstract
Recently, there has been a growing demand to implement super-resolution (SR) networks on devices with constrained resources. However, most existing SR networks must keep the same architecture at training and test time, which restricts their adaptability in real-world scenarios. Consequently, attaining elastic reconstruction without retraining is a crucial challenge. To accomplish this, we propose a novel model compression and acceleration framework built on a Channel Splitting and Progressive Self-distillation (CSPS) strategy. Specifically, we construct a compact student network from the target teacher network by employing the channel splitting strategy, which removes a certain proportion of channel dimensions from the teacher network. Afterward, we attach auxiliary upsampling layers to the intermediate feature maps and propose a progressive self-distillation scheme. Once trained, our CSPS can achieve elastic reconstruction by adjusting the channel splitting ratio and the number of feature extraction blocks. Extensive experiments demonstrate that the proposed CSPS can effectively compress and accelerate various off-the-shelf SR models.
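As a rough illustration of the channel-splitting and progressive self-distillation ideas summarized above, the PyTorch sketch below builds a narrower student by keeping the first fraction of each convolution's channels from a toy teacher and attaches an auxiliary upsampling head to an intermediate feature map. All names and hyperparameters here (ToyTeacher, split_student, split_ratio = 0.5, the auxiliary loss weight) are illustrative assumptions, not the architecture or loss formulation used in the paper.

```python
# Hedged sketch of channel splitting + an auxiliary self-distillation head.
# Assumptions (not from the paper): a toy 2-block CNN teacher, split_ratio = 0.5,
# PixelShuffle upsampling heads, and placeholder loss weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyTeacher(nn.Module):
    """A minimal SR-style backbone: head conv, body convs, PixelShuffle tail."""

    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(2)
        )
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        feats = []
        x = F.relu(self.head(x))
        for conv in self.body:
            x = F.relu(conv(x))
            feats.append(x)  # intermediate features, usable for distillation
        return self.tail(x), feats


def split_student(teacher: ToyTeacher, split_ratio: float = 0.5, scale: int = 2):
    """Build a narrower student by keeping the first split_ratio of channels."""
    c = int(teacher.head.out_channels * split_ratio)
    student = ToyTeacher(channels=c, scale=scale)
    # Channel splitting: copy the corresponding weight slices from the teacher.
    # (The tail is left freshly initialized in this toy example.)
    student.head.weight.data.copy_(teacher.head.weight.data[:c])
    student.head.bias.data.copy_(teacher.head.bias.data[:c])
    for s_conv, t_conv in zip(student.body, teacher.body):
        s_conv.weight.data.copy_(t_conv.weight.data[:c, :c])
        s_conv.bias.data.copy_(t_conv.bias.data[:c])
    return student


if __name__ == "__main__":
    lr = torch.randn(1, 3, 32, 32)  # low-resolution input
    teacher = ToyTeacher(channels=64, scale=2)
    student = split_student(teacher, split_ratio=0.5)

    # Auxiliary upsampling head on an intermediate student feature map
    # (32 channels because split_ratio = 0.5 of the 64 teacher channels),
    # so a shallow sub-network can also produce an HR reconstruction.
    aux_head = nn.Sequential(nn.Conv2d(32, 3 * 4, 3, padding=1), nn.PixelShuffle(2))

    with torch.no_grad():
        t_sr, _ = teacher(lr)
    s_sr, s_feats = student(lr)
    aux_sr = aux_head(s_feats[0])

    # Progressive self-distillation style objective: the teacher output
    # supervises the student, and the deeper student output supervises the
    # shallow auxiliary branch (loss weights are placeholders).
    loss = F.l1_loss(s_sr, t_sr) + 0.5 * F.l1_loss(aux_sr, s_sr.detach())
    print(float(loss))
```

In this toy setting, elastic reconstruction would correspond to choosing a smaller split ratio or reading the output from an earlier auxiliary head at inference time, without retraining the network.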
Acknowledgments
This work was supported in part by the Research Funding of Science and Technology on Information System Engineering Laboratory under Grant Number 6142101230202, and in part by the Postdoctoral Fellowship Program of CPSF under Grant Number GZB20240115.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Yu, X., Zhang, D., Liu, C., Dong, Q., Duan, G. (2025). Towards Elastic Image Super-Resolution Network via Progressive Self-distillation. In: Lin, Z., et al. Pattern Recognition and Computer Vision. PRCV 2024. Lecture Notes in Computer Science, vol 15038. Springer, Singapore. https://doi.org/10.1007/978-981-97-8685-5_10
DOI: https://doi.org/10.1007/978-981-97-8685-5_10
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-8684-8
Online ISBN: 978-981-97-8685-5
eBook Packages: Computer Science, Computer Science (R0)