Method of Real Time Calculation of Learning Rate Value to Improve Convergence of Neural Network Training

  • Conference paper
  • Artificial Intelligence and Soft Computing (ICAISC 2020)

Abstract

The scope of this research is the problem of correct initialization and subsequent correction of a neural network learning rate. It is one of the main hyperparameters and strongly affects the convergence rate of the training process. Known techniques such as time-based decay, step decay and exponential decay require the learning rate to be initialized manually and then corrected downwards in proportion to some value. In contrast, in this paper it is proposed to focus on the excitation level of a regressor, i.e. the output amplitude of the previous network layer. Formulas based on the recursive least squares method are derived to calculate the learning rate for each network layer, and their convergence is proved. With them, the initial learning rate can be chosen arbitrarily, and the rate can not only decrease but also increase when the regressor value becomes lower. Experiments are conducted on an image recognition task using multilayer networks and the MNIST database. For networks of different structures, the proposed method significantly reduces the number of training epochs in comparison with the backpropagation method with a constant learning rate.
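For orientation, the sketch below first shows the three classical schedules mentioned in the abstract in their standard textbook form, and then a hypothetical scalar RLS-style update for a per-layer learning rate driven by the amplitude of the previous layer's output (the regressor). The function names, the scalar-gain simplification and the forgetting factor `lam` are illustrative assumptions; the paper derives its own per-layer formulas and convergence proof, which may differ in detail.

```python
import numpy as np

# Classical schedules mentioned above (standard definitions, not from the paper):
def time_based_decay(eta0, k, decay=0.01):
    return eta0 / (1.0 + decay * k)            # eta_k = eta_0 / (1 + d*k)

def step_decay(eta0, k, drop=0.5, every=10):
    return eta0 * drop ** (k // every)         # drop by `drop` every `every` epochs

def exponential_decay(eta0, k, gamma=0.05):
    return eta0 * np.exp(-gamma * k)           # eta_k = eta_0 * exp(-g*k)

# Minimal RLS-style per-layer rate (illustrative only). `p` plays the role of a
# scalar RLS gain and is used directly as the layer's learning rate.
def rls_layer_lr(p_prev, regressor, lam=0.99):
    energy = float(np.dot(regressor, regressor))   # ||a_{l-1}(k)||^2, regressor energy
    # Scalar analogue of the RLS gain recursion with forgetting factor `lam`:
    #   P(k) = ( P(k-1) - P(k-1)^2 * E / (lam + P(k-1) * E) ) / lam
    # Large regressor energy shrinks the rate; with lam < 1 and small energy,
    # the rate grows again, matching the behaviour described in the abstract.
    return (p_prev - p_prev ** 2 * energy / (lam + p_prev * energy)) / lam

# Usage sketch inside a training loop, per layer l:
#   eta_l = rls_layer_lr(eta_l, a_prev)   # a_prev: activations feeding layer l
#   W_l  -= eta_l * grad_W_l              # ordinary backpropagation step
```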

This research was supported by the Russian Foundation for Basic Research, grant no. 18-47-310003-r a.

Author information

Corresponding author

Correspondence to Anton I. Glushchenko.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Glushchenko, A.I., Petrov, V.A., Lastochkin, K.A. (2020). Method of Real Time Calculation of Learning Rate Value to Improve Convergence of Neural Network Training. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2020. Lecture Notes in Computer Science, vol 12415. Springer, Cham. https://doi.org/10.1007/978-3-030-61401-0_10

  • DOI: https://doi.org/10.1007/978-3-030-61401-0_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61400-3

  • Online ISBN: 978-3-030-61401-0

  • eBook Packages: Computer Science, Computer Science (R0)
