
Multi-phase Fast Learning Algorithms for Solving the Local Minimum Problem in Feed-Forward Neural Networks

Conference paper
Advances in Neural Networks – ISNN 2012 (ISNN 2012)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7367)


Abstract

The backpropagation (BP) learning algorithm is the most widely used supervised learning technique for training multi-layer feed-forward neural networks. Many modifications of BP have been proposed to speed up learning, but each has its own drawbacks and none performs well across all kinds of applications. This paper proposes a new algorithm that provides a systematic approach to exploiting the characteristics of different fast learning algorithms so that the learning process can converge to the global minimum. During training, different fast learning algorithms are applied in different phases to improve the global convergence capability. Our performance investigation shows that the proposed algorithm always converges on various benchmark problems (applications), whereas other popular fast learning algorithms sometimes exhibit very poor global convergence.
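The abstract leaves the phase schedule and the constituent fast learning algorithms unspecified, so the following Python sketch only illustrates the general idea: train with one fast learning algorithm, monitor the error, and hand over to a different algorithm when progress stalls. All concrete choices below are illustrative assumptions, not the authors' method: plain gradient descent and a simplified RPROP-style rule as the two phases, a numerical gradient, a 2-2-1 network on the XOR benchmark, and the stall/convergence thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic small benchmark on which plain BP can stall in a local minimum.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def unpack(w):
    """Split a flat 9-parameter vector into 2-2-1 network weights and biases."""
    return w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]

def mse(w):
    """Mean squared error of the toy sigmoid network."""
    W1, b1, W2, b2 = unpack(w)
    h = 1 / (1 + np.exp(-(X @ W1 + b1)))      # hidden layer, sigmoid
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))    # output neuron, sigmoid
    return float(np.mean((out - y) ** 2))

def grad(w, eps=1e-5):
    """Central-difference gradient; adequate for a 9-parameter toy net."""
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (mse(w + e) - mse(w - e)) / (2 * eps)
    return g

def phase_gd(w, steps=500, lr=2.0):
    """Phase A: plain gradient descent, a stand-in for a standard BP phase."""
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def phase_rprop(w, steps=500, eta_plus=1.2, eta_minus=0.5):
    """Phase B: simplified sign-based step adaptation in the spirit of RPROP."""
    delta = np.full_like(w, 0.1)
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        s = g * g_prev                         # sign agreement with previous step
        delta = np.where(s > 0, delta * eta_plus,
                np.where(s < 0, delta * eta_minus, delta))
        w = w - np.sign(g) * delta
        g_prev = g
    return w

def multi_phase_train(w, rounds=10, tol=1e-3, stall=1e-5):
    """Run one phase per round; switch phases when the error stops improving."""
    phases, phase, err = [phase_gd, phase_rprop], 0, mse(w)
    for _ in range(rounds):
        w = phases[phase](w)
        new_err = mse(w)
        if new_err < tol:                      # converged well enough
            break
        if err - new_err < stall:              # stalled: hand over to the other phase
            phase = (phase + 1) % len(phases)
        err = new_err
    return w

w = multi_phase_train(rng.normal(scale=0.5, size=9))
print("final MSE:", mse(w))
```

The stall test, rather than a fixed schedule, is one plausible way to realize "different algorithms in different phases": each algorithm runs as long as it keeps making progress, and the switch gives the search a different update geometry when it appears trapped.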





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cheung, C.-C., Ng, S.C., Lui, A.K.-F. (2012). Multi-phase Fast Learning Algorithms for Solving the Local Minimum Problem in Feed-Forward Neural Networks. In: Wang, J., Yen, G.G., Polycarpou, M.M. (eds) Advances in Neural Networks – ISNN 2012. ISNN 2012. Lecture Notes in Computer Science, vol 7367. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31346-2_65


  • DOI: https://doi.org/10.1007/978-3-642-31346-2_65

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-31345-5

  • Online ISBN: 978-3-642-31346-2

