
Truncation: A New Approach to Neural Network Reduction


Abstract

In this manuscript, a method for optimizing the number of neurons in the hidden layers of a multilayer neural network is proposed. The method is similar to dropout: some neurons are excluded during training, but the probability of a neuron's exclusion depends on its position within the layer. As a result of training, neurons at the beginning of a layer develop stronger connections to the next layer and make the main contribution to the output, while neurons at the end of the layer have weak connections to the next layer and can be excluded when the network is subsequently applied. Using fully connected and convolutional neural networks with one and several hidden layers as examples, we show that the proposed method yields the dependence of network accuracy on the number of neurons per layer after a single training cycle and also provides some regularization. For example, applying the method to a four-layer convolutional network reduced the layer sizes from (50, 50, 100, 100) to (36, 36, 74, 56) without loss of accuracy.
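The core mechanism described above can be read as a position-dependent dropout mask. The PyTorch module below is only an illustrative sketch of that reading: the linear ramp of exclusion probabilities from the first to the last unit, the class name PositionDependentDropout, and the parameter p_max are assumptions made for this sketch, not the exact schedule from the paper.

```python
import torch
import torch.nn as nn


class PositionDependentDropout(nn.Module):
    """Illustrative sketch of truncation-style dropout.

    Unit i is dropped with a probability that grows with its index, so the
    first units of a layer are almost always kept while the last units are
    frequently excluded. The linear schedule is an assumption of this sketch.
    """

    def __init__(self, num_units: int, p_max: float = 0.9):
        super().__init__()
        drop_probs = torch.linspace(0.0, p_max, num_units)  # grows with position
        self.register_buffer("keep_probs", 1.0 - drop_probs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            # At inference, keep every unit (or slice off the weakly connected tail).
            return x
        # Sample a per-unit keep/drop mask with position-dependent probabilities.
        mask = (torch.rand_like(x) < self.keep_probs).float()
        # Inverted-dropout rescaling keeps the expected activation unchanged.
        return x * mask / self.keep_probs


# Hypothetical usage: a 100-unit hidden layer followed by position-dependent dropout.
layer = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), PositionDependentDropout(100))
```

Under this reading, the units with high exclusion probability learn weak outgoing weights, so after training the tail of each layer can simply be truncated; the accuracy-versus-width curve is obtained from the single trained network rather than by retraining at every width.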



Author information


Corresponding author

Correspondence to Sergey V. Perchenko.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by the Russian Science Foundation (RSF), project no. 19-79-00098.


About this article


Cite this article

Nevzorov, A.A., Perchenko, S.V. & Stankevich, D.A. Truncation: A New Approach to Neural Network Reduction. Neural Process Lett 54, 423–435 (2022). https://doi.org/10.1007/s11063-021-10638-z

