
Accelerated gradient learning algorithm for neural network weights update

  • KES 2008
Neural Computing and Applications

Abstract

This work proposes a decomposition of the square approximation algorithm for updating neural network weights. The suggested improvement yields an alternative method that converges in fewer iterations and is inherently parallel. The decomposition enables parallel execution, convenient for implementation on a computing grid. The improvement is reflected in an accelerated learning rate, which may be essential for time-critical decision processes. The proposed solution is tested and verified on a multilayer perceptron neural network case study, varying a wide range of parameters such as the number of inputs/outputs, the length of input/output data, and the number of neurons and layers. Experimental results show time savings of up to 40% with multi-threaded execution.
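The paper's exact decomposition is not reproduced in this abstract; purely as a rough illustration, the sketch below shows one way a least-squares ("square approximation") style weight update for a single layer could be split per output neuron and run on several threads. The function names (`neuron_update`, `parallel_layer_update`) and the per-neuron normal-equation solve are assumptions for the example, not the authors' algorithm.

```python
# Hedged sketch, not the paper's method: a per-neuron least-squares weight
# update for one layer, with the independent per-neuron solves dispatched
# to a thread pool. All names here are hypothetical.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def neuron_update(H, t_j, reg=1e-6):
    """Solve the regularized normal equations (H^T H + reg*I) w = H^T t_j
    for one output neuron's weight vector; each solve is independent."""
    A = H.T @ H + reg * np.eye(H.shape[1])
    b = H.T @ t_j
    return np.linalg.solve(A, b)

def parallel_layer_update(H, T, workers=4):
    """Update all output-neuron weight vectors of one layer in parallel.
    H: (samples, hidden) activations, T: (samples, outputs) targets."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        cols = pool.map(lambda j: neuron_update(H, T[:, j]), range(T.shape[1]))
    return np.column_stack(list(cols))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.standard_normal((200, 30))   # hidden-layer activations
    T = rng.standard_normal((200, 5))    # desired outputs
    W = parallel_layer_update(H, T)
    print(W.shape)  # (30, 5): one weight column per output neuron
```

Because each output neuron's solve touches disjoint weights, this kind of decomposition maps naturally onto threads or grid nodes, which is the general idea the abstract describes.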



Author information


Corresponding author

Correspondence to Željko Hocenski.


About this article

Cite this article

Hocenski, Ž., Antunović, M. & Filko, D. Accelerated gradient learning algorithm for neural network weights update. Neural Comput & Applic 19, 219–225 (2010). https://doi.org/10.1007/s00521-009-0286-7

