
Accelerated Gradient Learning Algorithm for Neural Network Weights Update

Conference paper
Knowledge-Based Intelligent Information and Engineering Systems (KES 2008)

Abstract

This work proposes a decomposition of the gradient learning algorithm used to update neural network weights. The decomposition enables parallel execution, which is convenient for implementation on a computing grid. The improvement is an accelerated learning rate, which may be essential for time-critical decision processes. The proposed solution is tested and verified in an MLP neural network case study, varying a wide range of parameters such as the number of inputs and outputs, the length of the input/output data, and the number of neurons and layers. Experimental results show time savings under multi-threaded execution.
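The abstract does not spell out the decomposition itself, but the general idea it describes can be illustrated with a common scheme: split the training data across threads, compute partial gradients in parallel, and sum them before the weight update. The sketch below is an assumption-laden illustration of that pattern for a single linear layer with a mean-squared-error loss, not the authors' exact algorithm; `partial_gradient` and `parallel_gradient` are hypothetical names introduced here for clarity.

```python
# Minimal sketch (not the paper's exact method): data-parallel gradient
# accumulation for one linear layer trained with an MSE loss.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partial_gradient(W, X, Y):
    """Gradient of 0.5 * ||XW - Y||^2 w.r.t. W for one data chunk."""
    return X.T @ (X @ W - Y)

def parallel_gradient(W, X, Y, n_threads=4):
    """Split the rows of (X, Y) across threads and sum the partial gradients.

    Because the loss is a sum over samples, the full gradient is exactly
    the sum of the per-chunk gradients, so the parallel result matches
    the serial one.
    """
    chunks_X = np.array_split(X, n_threads)
    chunks_Y = np.array_split(Y, n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        grads = pool.map(partial_gradient,
                         [W] * n_threads, chunks_X, chunks_Y)
    return sum(grads)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 5))   # 64 samples, 5 inputs
    Y = rng.standard_normal((64, 2))   # 2 outputs
    W = rng.standard_normal((5, 2))

    g_serial = partial_gradient(W, X, Y)
    g_parallel = parallel_gradient(W, X, Y)
    assert np.allclose(g_serial, g_parallel)

    W -= 0.01 * g_parallel  # one gradient-descent weight update
```

In NumPy, the heavy matrix products release the interpreter lock, so even plain threads can yield the kind of multi-thread time savings the abstract reports; a grid deployment would replace the thread pool with distributed workers while keeping the same sum-of-partial-gradients structure.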





Editor information

Ignac Lovrek, Robert J. Howlett, Lakhmi C. Jain


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hocenski, Z., Antunovic, M., Filko, D. (2008). Accelerated Gradient Learning Algorithm for Neural Network Weights Update. In: Lovrek, I., Howlett, R.J., Jain, L.C. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2008. Lecture Notes in Computer Science, vol 5177. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85563-7_12


  • DOI: https://doi.org/10.1007/978-3-540-85563-7_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-85562-0

  • Online ISBN: 978-3-540-85563-7

  • eBook Packages: Computer Science (R0)
