Abstract
This work proposes a decomposition of the square approximation algorithm for updating neural network weights. The suggested improvement yields an alternative method that converges in fewer iterations and is inherently parallel. The decomposition enables parallel execution convenient for implementation on a computer grid. The improvement is reflected in an accelerated learning rate, which may be essential for time-critical decision processes. The proposed solution is tested and verified in a case study on multilayer perceptron neural networks, varying a wide range of parameters such as the number of inputs/outputs, the length of the input/output data, and the number of neurons and layers. Experimental results show time savings of up to 40% with multithreaded execution.
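As a rough illustration only, and not the authors' algorithm, the following Python sketch shows the general idea the abstract describes: a least-squares ("square approximation") weight update decomposed into independent subproblems that separate threads can solve concurrently. The per-output-neuron decomposition, the function names, and the thread count are illustrative assumptions, not details taken from the paper.

    # Hedged sketch, NOT the paper's method: it only illustrates decomposing
    # a least-squares weight update into independent subproblems run on
    # separate threads. All names here are hypothetical.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def update_row(args):
        """Solve one independent least-squares subproblem: find the weight
        vector w minimizing ||A @ w - t||^2, where A holds a layer's input
        activations and t the targets for a single output neuron."""
        A, t = args
        w, *_ = np.linalg.lstsq(A, t, rcond=None)
        return w

    def parallel_weight_update(A, T, n_threads=4):
        """Each column of the target matrix T defines an independent
        subproblem, so the rows of the new weight matrix can be computed
        in parallel and stacked afterwards."""
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            rows = pool.map(update_row,
                            ((A, T[:, j]) for j in range(T.shape[1])))
        return np.vstack(list(rows))

    # Toy usage: 200 samples, 10 inputs, 5 output neurons.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10))
    T = rng.standard_normal((200, 5))
    W = parallel_weight_update(A, T)
    print(W.shape)  # (5, 10)

Because NumPy's LAPACK-backed solver releases the interpreter lock during the factorization, such thread-level decomposition can yield genuine wall-clock savings, which is the kind of effect the reported speedup refers to.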
Cite this article
Hocenski, Ž., Antunović, M. & Filko, D. Accelerated gradient learning algorithm for neural network weights update. Neural Comput & Applic 19, 219–225 (2010). https://doi.org/10.1007/s00521-009-0286-7