Abstract
This work proposes a decomposition of the gradient learning algorithm for neural network weight updates. The decomposition enables parallel execution, convenient for implementation on a computer grid. The improvement is reflected in an accelerated learning process, which may be essential for time-critical decision making. The proposed solution is tested and verified in an MLP neural network case study, varying a wide range of parameters such as the number of inputs/outputs, the length of the input/output data, and the number of neurons and layers. Experimental results show time savings under multi-threaded execution.
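The abstract does not reproduce the authors' exact decomposition, but the general idea of splitting a gradient computation across threads can be sketched as follows. This is a minimal illustration, assuming a data-parallel split of a single linear layer trained by batch gradient descent; the function names (`batch_gradient`, `parallel_gradient`) and the use of `ThreadPoolExecutor` are illustrative choices, not the paper's implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def batch_gradient(W, X, Y):
    # Gradient of mean squared error for a single linear layer Y ~ X @ W.
    E = X @ W - Y
    return X.T @ E / len(X)

def parallel_gradient(W, X, Y, n_workers=4):
    # Decompose the batch into chunks, evaluate each partial gradient in
    # its own thread, then recombine them weighted by chunk size. The
    # result equals the serial batch gradient, so the update rule is
    # unchanged; only the evaluation is parallelized.
    chunks = np.array_split(np.arange(len(X)), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(
            lambda idx: (len(idx), batch_gradient(W, X[idx], Y[idx])),
            chunks))
    total = sum(n for n, _ in parts)
    return sum(n * g for n, g in parts) / total

# Tiny usage example: recover a known weight matrix by gradient descent.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))
W_true = rng.standard_normal((8, 2))
Y = X @ W_true
W = np.zeros((8, 2))
for _ in range(200):
    W -= 0.1 * parallel_gradient(W, X, Y)
```

Because the partial gradients are recombined exactly, this kind of decomposition trades no accuracy for speed; the speedup then depends on how well the chunk evaluations overlap on the available cores.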
© 2008 Springer-Verlag Berlin Heidelberg
Hocenski, Z., Antunovic, M., Filko, D. (2008). Accelerated Gradient Learning Algorithm for Neural Network Weights Update. In: Lovrek, I., Howlett, R.J., Jain, L.C. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2008. Lecture Notes in Computer Science, vol 5177. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85563-7_12
Print ISBN: 978-3-540-85562-0
Online ISBN: 978-3-540-85563-7