
Efficient automatically evolving convolutional neural network for image denoising

  • Regular research paper
  • Published in: Memetic Computing

Abstract

Convolutional neural networks (CNNs) have achieved strong results in image denoising. However, CNN architectures for denoising are mainly designed by hand, which not only relies on CNN-specific expertise but also requires re-tuning for each dataset to remain competitive. Algorithms that automatically evolve CNN architectures have been proposed, but most target image classification and consume considerable computational time and resources. To address these issues, an efficient algorithm that automatically evolves CNN architectures for image denoising using a genetic algorithm is proposed, called fast block-based evolutionary denoising CNN (FBE-DnCNN). In FBE-DnCNN, a genetic encoding strategy based on both deep and wide network blocks is designed to represent image denoising CNNs effectively for automatic architecture design. To reduce the time and resource cost of evaluation, candidates are trained on only part of the dataset. A refined fitness evaluation method incorporating prior knowledge about CNN parameters is designed to improve reliability. To strengthen feature extraction in shallow layers, the convolutional operation, resistance to overfitting, and representational capacity, the Feature Block, Transition Block, Dropout Block, and SENet module are introduced in FBE-DnCNN to generate a problem-specific search space. With block-specific crossover and mutation, a local search around good solutions is performed to find better ones. Experiments show that FBE-DnCNN can evolve distinguished image denoising CNNs with deep and wide architectures in a very short time, and that it achieves competitive performance across different noise levels compared with traditional approaches, state-of-the-art CNN-based algorithms, and NAS-based methods.
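The paper's implementation is not reproduced here; as an illustration of the block-based genetic encoding, crossover, and mutation that the abstract describes, the following minimal Python sketch evolves variable-length sequences of net blocks under a placeholder fitness. All names (the block vocabulary, `toy_fitness`, the selection scheme) are illustrative assumptions, not the authors' method — in FBE-DnCNN each candidate is actually trained on a partial dataset and scored on denoising quality.

```python
import random

# Hypothetical block vocabulary, loosely following the abstract's
# Feature Block, Transition Block, Dropout Block, and SENet module.
BLOCK_TYPES = ["feature", "transition", "dropout", "se"]

def random_individual(min_len=3, max_len=8, rng=random):
    """Encode a candidate denoising CNN as a variable-length block list."""
    length = rng.randint(min_len, max_len)
    return [rng.choice(BLOCK_TYPES) for _ in range(length)]

def crossover(parent_a, parent_b, rng=random):
    """One-point, block-aligned crossover on two encodings."""
    cut_a = rng.randint(1, len(parent_a) - 1)
    cut_b = rng.randint(1, len(parent_b) - 1)
    return parent_a[:cut_a] + parent_b[cut_b:]

def mutate(individual, rate=0.2, rng=random):
    """Block-wise mutation: each position may switch to another block type."""
    return [rng.choice(BLOCK_TYPES) if rng.random() < rate else b
            for b in individual]

def toy_fitness(individual):
    """Placeholder fitness. The actual algorithm trains each candidate on a
    partial dataset and scores denoising quality (e.g. PSNR); here we simply
    reward feature blocks plus exactly one SE module, for illustration."""
    return individual.count("feature") - abs(individual.count("se") - 1)

def evolve(pop_size=10, generations=5, seed=0):
    """Elitist GA loop: keep the best half, refill with mutated offspring."""
    rng = random.Random(seed)
    population = [random_individual(rng=rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=toy_fitness, reverse=True)
        elite = population[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append(mutate(crossover(a, b, rng), rng=rng))
        population = elite + children
    return max(population, key=toy_fitness)
```

Because the encoding is a flat block list, crossover and mutation stay local to block boundaries, which is the property the abstract exploits for local search around good solutions.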



Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62073155, 62002137, 62106088, and 62206113, in part by the "Blue Project" of Jiangsu Universities, China, in part by the Guangdong Provincial Key Laboratory under Grant 2020B121201001, and in part by the Innovative Research Foundation of Ship General Performance under Grant 22422213.

Author information

Authors and Affiliations

Authors

Contributions

WF: supervision. ZZ, ZH: methodology, software, and writing (original draft preparation). JS: funding acquisition and English language editing. XW: resources.

Corresponding author

Correspondence to Fang Wei.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This work does not contain any studies with human participants performed by any of the authors.

Informed consent

Informed consent was obtained from all individual participants included in this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wei, F., Zhenhao, Z., Zhou, H. et al. Efficient automatically evolving convolutional neural network for image denoising. Memetic Comp. 15, 219–235 (2023). https://doi.org/10.1007/s12293-022-00385-6

