Abstract
This research addresses the problem of correctly initializing and subsequently adjusting the learning rate of a neural network. The learning rate is one of the main hyperparameters affecting the convergence rate of the training process. In known techniques such as time-based decay, step decay, and exponential decay, the learning rate is initialized manually and then reduced in proportion to some value. In contrast, this paper proposes to focus on the excitation level of the regressor, i.e. the output amplitude of the previous network layer. Formulas based on the recursive least squares method are derived to calculate the learning rate for each network layer, and their convergence is proved. With these formulas, the initial learning rate can be chosen arbitrarily, and the rate can not only decrease but also increase when the regressor value becomes lower. Experiments are conducted on an image recognition task using multilayer networks and the MNIST database. For networks of different structures, the proposed method significantly reduces the number of training epochs compared with the backpropagation method using a constant learning rate.
This research was supported by the Russian Foundation for Basic Research, Grant No. 18-47-310003-r_a.
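The abstract states the idea without the derivation itself; below is a minimal sketch of what a scalar, RLS-style per-layer learning rate update of this kind can look like. The function name rls_learning_rate, the forgetting factor lam, and the scalar simplification P_t = P_{t-1}/(lam + P_{t-1}·||x||^2) are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def rls_learning_rate(p_prev, regressor, lam=0.99):
    """Scalar RLS-style update of a layer's learning rate (illustrative sketch).

    p_prev    -- learning rate from the previous step (initial value is arbitrary)
    regressor -- output vector of the previous layer, i.e. this layer's input
    lam       -- forgetting factor, 0 < lam <= 1
    """
    excitation = float(np.dot(regressor, regressor))  # squared regressor amplitude ||x||^2
    return p_prev / (lam + p_prev * excitation)

# A strongly excited regressor drives the rate down ...
p = 1.0                                   # arbitrary initial learning rate
p = rls_learning_rate(p, np.ones(100))    # ||x||^2 = 100 -> p shrinks sharply
# ... while a weak regressor with lam < 1 lets the rate grow back,
# mirroring the abstract's claim that the rate can increase again.
p = rls_learning_rate(p, 0.01 * np.ones(100))
```

This captures the behaviour described in the abstract: a large output amplitude of the previous layer shrinks the rate, while the forgetting factor lam < 1 allows it to recover once the regressor weakens.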
About this paper
Cite this paper
Glushchenko, A.I., Petrov, V.A., Lastochkin, K.A. (2020). Method of Real Time Calculation of Learning Rate Value to Improve Convergence of Neural Network Training. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds.) Artificial Intelligence and Soft Computing. ICAISC 2020. Lecture Notes in Computer Science, vol. 12415. Springer, Cham. https://doi.org/10.1007/978-3-030-61401-0_10
DOI: https://doi.org/10.1007/978-3-030-61401-0_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-61400-3
Online ISBN: 978-3-030-61401-0