
A new growing pruning deep learning neural network algorithm (GP-DLNN)

  • WSOM 2017
  • Neural Computing and Applications

Abstract

Over the last decade, significant research progress has been made in both the theoretical aspects and the applications of deep learning neural networks. Beyond their spectacular applications, optimal architectures for these networks can speed up the learning process and yield better generalization. Many growing and pruning algorithms have been proposed to optimize the architecture of standard feedforward neural networks; applying both growing and pruning to the same network, however, can produce a good model even for a large data set, and hence good selection results. This work proposes a new growing and pruning learning algorithm for deep neural networks (GP-DLNN). The algorithm is presented and applied to diverse medical data sets, where it outperforms various other artificial intelligence techniques in terms of both accuracy and simplicity of the resulting architecture.
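The abstract describes a grow-then-prune strategy, but the full text is behind the paywall, so the paper's actual construction and pruning criteria are not visible here. The sketch below is therefore only a hedged illustration of the generic growing-and-pruning idea on a single hidden layer, not the authors' GP-DLNN: the stalled-loss growth trigger, the outgoing-weight-magnitude pruning rule, and every threshold are illustrative assumptions.

```python
# Minimal sketch of a generic grow-then-prune loop for a one-hidden-layer
# regressor. NOTE: the growth trigger, pruning rule, and thresholds are
# assumptions for illustration only; they are NOT the GP-DLNN rules.
import numpy as np

rng = np.random.default_rng(0)

def train_step(W1, b1, W2, b2, X, y, lr=0.1):
    """One gradient step for a sigmoid-hidden, linear-output network (MSE)."""
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # hidden activations
    out = H @ W2 + b2
    err = out - y
    dW2 = H.T @ err / len(X)
    db2 = err.mean(0)
    dH = (err @ W2.T) * H * (1 - H)            # backprop through sigmoid
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return float((err ** 2).mean())

def grow_unit(W1, b1, W2):
    """Growing phase: add one hidden unit with small random weights."""
    W1 = np.hstack([W1, 0.1 * rng.standard_normal((W1.shape[0], 1))])
    b1 = np.append(b1, 0.0)
    W2 = np.vstack([W2, 0.1 * rng.standard_normal((1, W2.shape[1]))])
    return W1, b1, W2

def prune_units(W1, b1, W2, tol=1e-2):
    """Pruning phase: drop hidden units whose outgoing weights are negligible."""
    keep = np.abs(W2).max(1) > tol
    return W1[:, keep], b1[keep], W2[keep]

# Toy data: learn y = x1 * x2.
X = rng.uniform(-1, 1, (256, 2))
y = (X[:, 0] * X[:, 1])[:, None]

W1 = 0.1 * rng.standard_normal((2, 1)); b1 = np.zeros(1)   # start with 1 unit
W2 = 0.1 * rng.standard_normal((1, 1)); b2 = np.zeros(1)

prev = np.inf
for epoch in range(3000):
    loss = train_step(W1, b1, W2, b2, X, y)
    if epoch % 500 == 499:              # periodically check progress
        if prev - loss < 1e-4:          # loss stalled: grow (assumed trigger)
            W1, b1, W2 = grow_unit(W1, b1, W2)
        prev = loss

W1, b1, W2 = prune_units(W1, b1, W2)    # final magnitude-based pruning
print(f"final loss {loss:.4f}, hidden units kept: {W1.shape[1]}")
```

The same loop structure would transfer to deeper networks by growing and pruning one layer at a time, which is the spirit of what the abstract reports, though the paper's actual growth and pruning criteria may differ substantially from this sketch.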



Author information


Correspondence to Ryad Zemouri.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zemouri, R., Omri, N., Fnaiech, F. et al. A new growing pruning deep learning neural network algorithm (GP-DLNN). Neural Comput & Applic 32, 18143–18159 (2020). https://doi.org/10.1007/s00521-019-04196-8

