Abstract
This paper proposes a new approach, called output monitoring and modification (OMM), to address the local-minimum problem of existing gradient-descent algorithms (such as BP, Rprop and Quickprop) for training feed-forward neural networks. OMM monitors the learning process; when learning becomes trapped in a local minimum, OMM changes some incorrect output values so that the process can escape from it. This modification can be repeated with different parameter settings until the learning process converges to the global optimum. Simulation experiments show that a gradient-descent learning algorithm with OMM has a much better global convergence capability than one without OMM, while their convergence rates remain similar. In one benchmark application, the global convergence capability increased from 1% to 100%.
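The abstract describes OMM only at a high level: monitor training, and when it stalls in a local minimum, alter some of the incorrect output values so the error signal can grow again. The Python sketch below illustrates one plausible reading of that idea on the XOR problem; it is not the authors' implementation. The stagnation test (a flat error over a fixed window), the modification rule (exaggerating the desired outputs of wrongly learned patterns by mod_scale), and the restoration of the true targets once every pattern is correct are assumptions introduced here for illustration only.

# Minimal sketch of the OMM idea on XOR, assuming a particular reading of
# "modify the incorrect output values"; this is NOT the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training set: 4 patterns, 2 inputs, 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def train_bp_with_omm(hidden=2, lr=0.5, epochs=20000,
                      window=200, stall_tol=1e-5, mod_scale=2.0):
    """Plain batch backprop plus a simple output-monitoring heuristic."""
    W1 = rng.normal(scale=0.5, size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    targets = T.copy()                       # working targets; modified when stalled
    history = []
    for epoch in range(epochs):
        # Forward pass.
        H = sigmoid(X @ W1 + b1)
        Y = sigmoid(H @ W2 + b2)
        mse = float(np.mean((Y - T) ** 2))   # always measured on the true targets
        history.append(mse)
        if mse < 1e-3:                       # reached the desired solution
            return epoch, True
        # Backward pass (standard delta rule for sigmoid units).
        dY = (Y - targets) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
        # Monitoring: a flat error over `window` epochs while some patterns
        # are still wrong is treated as a local minimum.
        wrong = (np.round(Y) != T).ravel()
        if not wrong.any():
            targets = T.copy()               # every output correct: restore true targets
        elif len(history) > window and abs(history[-window] - mse) < stall_tol:
            # Hypothetical modification rule: exaggerate the desired outputs of
            # the wrongly learned patterns so their error signal grows again.
            targets = T.copy()
            targets[wrong] = T[wrong] + mod_scale * (T[wrong] - Y[wrong])
            history.clear()
    return epochs, False

epochs_used, ok = train_bp_with_omm()
print("converged" if ok else "did not converge", "after", epochs_used, "epochs")

In the paper the modification is repeated with different parameter settings until training reaches the global optimum; this sketch reuses a single mod_scale whenever stagnation is detected, so in practice one would vary that value (and the stall window) across retries.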
References
Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning internal representations by error propagation. In: Parallel Distributed Processing: Exploration in the Microstructure of Cognition, vol. 1, MIT Press, Cambridge (1986)
Van Ooyen, A., Nienhuis, B.: Improving the convergence of the back-propagation algorithm. Neural Networks 5, 465–471 (1992)
Vitela, J.E., Reifman, J.: Premature Saturation in Backpropagation Networks: Mechanism and Necessary Conditions. Neural Networks 10(4), 721–735 (1997)
Blum, E.K., Li, L.K.: Approximation theory and feedforward networks. Neural Networks 4, 511–515 (1991)
Gori, M., Tesi, A.: On the problem of local minima in back-propagation. IEEE Trans. On Pattern Analysis and Machine Intelligence 14(1), 76–86 (1992)
Fahlman, S.E.: Fast learning variations on back-propagation: An empirical study. In: Touretzky, D., Hinton, G., Sejnowski, T. (eds.) Proc. the 1988 Connectionist Models Summer School, Pittsburgh, pp. 38–51 (1989)
Riedmiller, M., Braun, H.: A direct adaptive method for faster back-propagation learning: The RPROP Algorithm. In: Proc. of Int. Conf. on Neural Networks, vol. 1, pp. 586–591 (1993)
Yuceturk, A.C., Herdağdelen, A., Uyanik, K.: A solution to the problem of local minima in backpropagation algorithm. In: Proc. of the Fourteenth International Symposium on Computer and Information Sciences, Kusadasi, Turkey, pp. 1081–1083 (1999)
Wang, X.G., Tang, Z., Tamura, H., Ishii, M., Sun, W.D.: An improved backpropagation algorithm to avoid the local minima problem. Neurocomputing 56, 455–460 (2004)
Ng, S.C., Cheung, C.C., Leung, S.H.: Magnified Gradient Function with Deterministic Weight Evolution in Adaptive Learning. IEEE Trans. on Neural Networks 15(6), 1411–1423 (2004)
Frank, A., Asuncion, A.: UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine, CA (2012), http://archive.ics.uci.edu/ml/
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Ng, S.C., Cheung, C.C., Lui, A.K.F., Tse, H.T. (2012). Addressing the Local Minima Problem by Output Monitoring and Modification Algorithms. In: Wang, J., Yen, G.G., Polycarpou, M.M. (eds.) Advances in Neural Networks – ISNN 2012. Lecture Notes in Computer Science, vol. 7367. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31346-2_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-31345-5
Online ISBN: 978-3-642-31346-2
eBook Packages: Computer Science (R0)