Abstract
The backpropagation algorithm is an iterative gradient descent procedure for training multilayer neural networks. Despite its popularity and effectiveness, the orthogonal steps (zigzagging) it takes near the optimum slow its convergence. To overcome this inefficiency, one of the authors earlier proposed a deflecting-gradient technique, the Partan backpropagation learning algorithm [3], which improves the convergence of backpropagation learning. The convergence time of multilayer networks was further reduced through dynamic adaptation of the learning rates [6]. In this paper, an extension of the dynamic parallel tangent learning algorithm is proposed in which each connection has its own learning rate as well as its own acceleration rate, and these individual rates are adapted dynamically as learning proceeds. Simulation studies are carried out on several learning problems, and a faster rate of convergence is achieved on all of them.
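To make the idea concrete, the following Python sketch illustrates parallel tangent (Partan) descent with an individual learning rate and acceleration rate per parameter. It is a minimal sketch under stated assumptions: the function name dynamic_partan, the test problem, all constants, and the sign-agreement rule used to adapt the rates are illustrative choices, not the exact scheme of [3] or [6].

import numpy as np

def dynamic_partan(grad, w, steps=500, eta0=0.01, beta0=0.5, up=1.05, down=0.7):
    # Per-connection learning rates (eta) and acceleration rates (beta).
    eta = np.full_like(w, eta0)
    beta = np.full_like(w, beta0)
    w_two_back = w.copy()          # point from two iterations earlier
    w_one_back = w.copy()
    g_prev = np.zeros_like(w)
    for k in range(steps):
        g = grad(w)
        # Assumed adaptation heuristic: grow a rate while successive gradient
        # components agree in sign, shrink it when the sign flips (zigzagging).
        agree = g * g_prev > 0
        eta *= np.where(agree, up, down)
        beta *= np.where(agree, up, down)
        w_new = w - eta * g        # ordinary gradient (steepest-descent) step
        if k % 2 == 1:
            # Acceleration step: deflect along the parallel tangent joining the
            # new point to the point two iterations back, cutting across the
            # valley instead of zigzagging inside it.
            w_new = w_new + beta * (w_new - w_two_back)
        w_two_back, w_one_back, w = w_one_back, w, w_new
        g_prev = g
    return w

# Usage on an ill-conditioned quadratic, where plain gradient descent zigzags:
A = np.diag([1.0, 50.0])           # hypothetical test problem
w_final = dynamic_partan(lambda v: A @ v, np.array([5.0, 5.0]))
print(w_final)                     # should approach the minimum at the origin

On problems with narrow, elongated error surfaces, the deflection step lets the trajectory move along the valley floor rather than bouncing between its walls, which is the behavior the per-connection rates are meant to exploit.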
References
Baldi P., “Gradient Descent Learning Algorithm Overview: A General Dynamical Systems Perspective”, IEEE Trans. on Neural Networks, Vol. 6, No. 1, Jan. 1995.
Bazaraa M.S. and Shetty C.M., Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, USA, pp. 253–290, 1979.
Ghorbani A.A. and Bhavsar V.C., “Parallel Tangent Learning Algorithm for Training Artificial Neural Networks”, Technical Rep. No. TR93-075, University of New Brunswick, April 1993.
Ghorbani A.A. and Bhavsar V.C., “Accelerated Backpropagation Learning Algorithm: Parallel Tangent Optimization Algorithm”, Proc. 1993 International Symposium on Nonlinear Theory and its Applications (NOLTA’93), pp. 59–62, Hawaii, USA, Dec. 1993.
Ghorbani A.A., Nezami A.R. and Bhavsar V.C., “An Incremental Parallel Tangent Learning Algorithm for Artificial Neural Networks”, CCECE’97, pp. 301–304, Saint John’s, Canada, May 1997.
Ghorbani A.A. and Bayat L., “Accelerated Backpropagation Learning Algorithm: Dynamic Parallel Tangent Optimization Algorithm”, Proc. of IASTED International Conference on Computer Systems And Applications, pp. 116–119, Irbid, Jordan, March 30–April 2, 1998.
Gorman R.P. and Sejnowski T.J., “Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets”, Neural Networks, Vol. 1, pp. 75–89, 1989.
Wilde D.J. and Beightler C.S., Foundations of Optimization, Prentice-Hall, Englewood Cliffs, N.J., USA, 1967.
Wismer D. A. and Chattergy R., Introduction to Nonlinear Optimization, Elsevier North-Holland, Amsterdam, 1978.
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
Ghorbani, A.A., Bayat, L. (2000). Accelerated Backpropagation Learning: Extended Dynamic Parallel Tangent Optimization Algorithm. In: Hamilton, H.J. (ed.) Advances in Artificial Intelligence. Canadian AI 2000. Lecture Notes in Computer Science, vol. 1822. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45486-1_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-67557-0
Online ISBN: 978-3-540-45486-1