Abstract
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up the learning of the original BP; however, each has its own drawbacks, and none performs well across all kinds of applications. This paper proposes a new algorithm that provides a systematic approach to exploiting the characteristics of different fast learning algorithms so that the learning process can converge to the global minimum. During training, different fast learning algorithms are applied in different phases to improve the global convergence capability. Our performance investigation shows that the proposed algorithm always converges on various benchmark problems (applications), whereas other popular fast learning algorithms sometimes exhibit very poor global convergence capability.
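The abstract does not specify which fast learning algorithms the paper combines or what criterion triggers a phase change, so the following is only a generic illustration of the multi-phase idea, not the authors' method: train with plain BP (gradient descent) in phase 1, then switch to a sign-based (Manhattan/Rprop-like) update in phase 2 when the error plateaus, using XOR as a toy benchmark. The network size, learning rate, step size, and plateau threshold are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of phase-switched training (NOT the paper's algorithm):
# phase 1 uses plain batch gradient descent (BP); phase 2 switches to a
# sign-based (Manhattan) update when progress stalls, a common trick for
# escaping flat regions. All hyperparameters here are assumptions.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-4-1 feed-forward network
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
params = [W1, b1, W2, b2]

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def grads(X, y):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)          # MSE + sigmoid derivative
    gW2 = h.T @ d_out; gb2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    gW1 = X.T @ d_h;   gb1 = d_h.sum(0)
    return [gW1, gb1, gW2, gb2]

def mse():
    _, out = forward(X)
    return float(np.mean((out - y) ** 2))

err0 = mse()            # error before training
phase, prev_err = 1, np.inf
for epoch in range(5000):
    g = grads(X, y)
    if phase == 1:
        for p, gp in zip(params, g):
            p -= 0.5 * gp                        # phase 1: plain BP step
    else:
        for p, gp in zip(params, g):
            p -= 0.05 * np.sign(gp)              # phase 2: sign-based step
    err = mse()
    if phase == 1 and abs(prev_err - err) < 1e-6:
        phase = 2                                # illustrative stall criterion
    prev_err = err

errF = mse()            # error after phase-switched training
print(f"initial MSE {err0:.4f} -> final MSE {errF:.4f}")
```

A real multi-phase scheme would of course use more principled switch criteria and better-tuned constituent algorithms; this sketch only shows the control flow of alternating update rules within one training run.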
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Cheung, C.-C., Ng, S.C., Lui, A.K.-F. (2012). Multi-phase Fast Learning Algorithms for Solving the Local Minimum Problem in Feed-Forward Neural Networks. In: Wang, J., Yen, G.G., Polycarpou, M.M. (eds) Advances in Neural Networks – ISNN 2012. ISNN 2012. Lecture Notes in Computer Science, vol 7367. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31346-2_65
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-31345-5
Online ISBN: 978-3-642-31346-2