Abstract
Replacing multiplications with additions can substantially reduce computational cost. Based on this idea, adder neural networks (AdderNets) were proposed and later applied to the super-resolution (SR) task as AdderSR, which significantly reduces the energy consumption of SR models. However, the weak fitting ability of AdderNets restricts AdderSR to the low-complexity pixel-wise loss, and performance drops sharply when the high-complexity perceptual loss is used. Enhanced AdderSR (EAdderSR) is proposed to overcome these limitations. Specifically, current adder networks suffer from a severe gradient precision loss problem, which destabilizes training. The normalization layer is therefore adjusted to map the output of each adder layer into a reasonably narrow range, reducing the amount of precision loss. A coarse-grained knowledge distillation (CGKD) method is then developed to give the adder network efficient guidance and reduce its fitting burden. Experimental results show that the proposed method not only further improves the performance of adder networks, but also preserves output quality as the complexity of the loss function increases.
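To make the multiplication-free idea behind AdderNets concrete, the sketch below implements the adder "convolution" that EAdderSR builds on: instead of a dot product, each output activation is the negative L1 distance between an input patch and a filter, so the forward pass uses only additions, subtractions, and absolute values. This is a minimal NumPy illustration, not the authors' implementation; the function name `adder2d` and the loop-based layout are our own choices for clarity.

```python
import numpy as np

def adder2d(x, weight, stride=1, padding=0):
    """Adder 'convolution': output = -sum |patch - filter| (no multiplications
    in the similarity measure, unlike an ordinary convolution's dot product).

    x:      input  of shape (n, c_in, h, w)
    weight: filters of shape (c_out, c_in, k, k)
    """
    n, c_in, h, w = x.shape
    c_out, _, k, _ = weight.shape
    xp = np.pad(x, ((0, 0), (0, 0), (padding, padding), (padding, padding)))
    h_out = (h + 2 * padding - k) // stride + 1
    w_out = (w + 2 * padding - k) // stride + 1
    out = np.empty((n, c_out, h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            patch = xp[:, :, i * stride:i * stride + k, j * stride:j * stride + k]
            # Negative L1 distance between this patch and every filter;
            # broadcasting gives shape (n, c_out, c_in, k, k) before the sum.
            diff = np.abs(patch[:, None] - weight[None])
            out[:, :, i, j] = -diff.sum(axis=(2, 3, 4))
    return out
```

Because every output is a negated L1 distance, activations are always non-positive and their magnitude grows with the patch size, which is one intuition for why the abstract's adjusted normalization (squeezing adder-layer outputs into a narrow range) helps training stability.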
Data Availability
The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.
Acknowledgements
This work was supported by the National Natural Science Foundation for youth scientists of China (Grant No. 61802161), the Natural Science Foundation of Liaoning Province, China (Grant No. 20180550886, Grant No. 2020-MS-292), and the Scientific Research Foundation of Liaoning Provincial Education Department, China (No. JZL202015402).
Ethics declarations
Conflict of Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and publication of this article.
About this article
Cite this article
Song, J., Yi, H., Xu, W. et al. EAdderSR: enhanced AdderSR for single image super resolution. Appl Intell 53, 20998–21011 (2023). https://doi.org/10.1007/s10489-023-04536-1