
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1983)

Abstract

Backpropagation (BP) is widely used as the learning algorithm in layered neural networks because of its efficiency. However, BP is not free from problems. The learning process sometimes becomes trapped in a local minimum, and the network then cannot produce the required response. In addition, the algorithm has a number of parameters, such as the learning rate (µ), the momentum factor (α), and the steepness parameter (λ), whose values are not known in advance and must be determined by trial and error. The selection of these parameters has a large effect on the convergence of the algorithm, and many techniques that adjust them adaptively have been developed to increase the speed of convergence. A recently developed class of algorithms uses learning automata (LA) to adjust µ, α, and λ based on observation of the random response of the neural network. One important aspect of LA-based schemes is their remarkable effectiveness in increasing the speed of convergence. Another important aspect, which has not been pointed out before, is their ability to escape from local minima with high probability during training. In this paper we study the ability of LA-based schemes to escape from local minima when standard BP fails to find the global minimum. Simulations demonstrate that LA-based schemes have a higher ability to escape from local minima than other schemes such as SAB, SuperSAB, Fuzzy BP, the ASBP method, and the VLR method.
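
To make the adaptation mechanism concrete, here is a minimal sketch, in Python, of one way an LA can tune the learning rate µ during BP training: a variable-structure L_RP (linear reward-penalty) automaton chooses µ from a small candidate set each epoch and is rewarded whenever the training error decreases. This is an illustrative analogue under our own assumptions, not the authors' scheme: the paper's methods use fixed-structure automata (refs. 5-7), and the candidate set MUS, the reward rule, and the step sizes a and b are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LearningAutomaton:
    """Variable-structure L_RP automaton over a finite action set."""

    def __init__(self, n_actions, a=0.1, b=0.05):
        self.p = np.full(n_actions, 1.0 / n_actions)  # action probabilities
        self.a, self.b = a, b  # reward / penalty step sizes (assumed values)

    def choose(self):
        return rng.choice(len(self.p), p=self.p)

    def update(self, i, rewarded):
        n = len(self.p)
        if rewarded:   # favourable response: move probability toward action i
            self.p = (1 - self.a) * self.p
            self.p[i] += self.a
        else:          # unfavourable response: redistribute probability away from i
            self.p = (1 - self.b) * self.p + self.b / (n - 1)
            self.p[i] -= self.b / (n - 1)
        self.p /= self.p.sum()  # guard against floating-point drift

# XOR, the classic small BP task known for its local minima.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

# 2-2-1 sigmoid network; biases are folded in via an appended constant input.
W1 = rng.normal(0.0, 0.5, (3, 2))
W2 = rng.normal(0.0, 0.5, (3, 1))

MUS = [0.05, 0.1, 0.5, 1.0]  # candidate learning rates (assumed values)
la = LearningAutomaton(len(MUS))
prev_err = np.inf

for epoch in range(2000):
    i = la.choose()
    mu = MUS[i]

    # Forward pass.
    Xb = np.hstack([X, np.ones((4, 1))])
    h = sigmoid(Xb @ W1)
    hb = np.hstack([h, np.ones((4, 1))])
    y = sigmoid(hb @ W2)
    err = float(np.mean((T - y) ** 2))

    # Standard BP gradients for the squared error.
    d2 = (y - T) * y * (1 - y)
    d1 = (d2 @ W2[:2].T) * h * (1 - h)  # skip the bias row of W2
    W2 -= mu * hb.T @ d2
    W1 -= mu * Xb.T @ d1

    # Environment response: reward the chosen rate if the error dropped.
    la.update(i, rewarded=err < prev_err)
    prev_err = err

print(f"final MSE: {prev_err:.4f}  action probabilities: {np.round(la.p, 2)}")
```

Because the automaton keeps sampling every rate with nonzero probability and penalizes rates that stop reducing the error, it can shift to a larger µ when training stalls, which is one plausible reading of how such schemes help BP jump out of local minima.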


References

  1. Rumelhart, D. E., Hinton, G. E. and Williams, R. J.: Learning Internal Representations by Error Propagation. In Parallel Distributed Processing, Cambridge, MA: MIT Press (1986)

  2. Arabshahi, P. et al.: Fuzzy Parameter Adaptation in Optimization: Some Neural Net Training Examples. IEEE Computational Science and Engineering (1996) 57–65

  3. Sarkar, D.: Methods to Speedup Error Backpropagation Learning Algorithm. ACM Computing Surveys 27 (1995) 519–542

  4. Sperduti, A. and Starita, A.: Speed Up Learning and Network Optimization with Extended Backpropagation. Neural Networks 6 (1993) 365–383

  5. Beigy, H., Meybodi, M. R. and Menhaj, M. B.: Adaptation of Learning Rate in Backpropagation Algorithm Using Fixed Structure Learning Automata. Proc. of Iranian Conf. on Electrical Engineering, ICEE-98 (1998) 117–123

  6. Meybodi, M. R. and Beigy, H.: New Class of Learning Automata Based Schemes for Adaptation of Backpropagation Algorithm Parameters. Iranian Journal of Science and Technology (to appear)

  7. Beigy, H. and Meybodi, M. R.: Adaptation of Momentum Factor and Steepness Parameter in Backpropagation Algorithm Using Fixed Structure Learning Automata. International Journal of Science and Technology (to appear)

  8. Menhaj, M. B. and Hagen, M. H.: Rapid Learning Using Modified BP Algorithms for Multi-Layer Feedforward Neural Nets. Proc. of ICEE-95 Conference, University of Science and Technology, Tehran, Iran (1995)

  9. Narendra, K. S. and Thathachar, M. A. L.: Learning Automata: An Introduction. Prentice-Hall, Englewood Cliffs (1989)

  10. Gori, M. and Tesi, A.: On the Problem of Local Minima in Backpropagation. IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-14 (1992) 76–86

Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Meybodi, M.R., Beigy, H. (2000). A Note on Learning Automata Based Schemes for Adaptation of BP Parameters. In: Leung, K.S., Chan, L.W., Meng, H. (eds) Intelligent Data Engineering and Automated Learning — IDEAL 2000. Data Mining, Financial Engineering, and Intelligent Agents. IDEAL 2000. Lecture Notes in Computer Science, vol 1983. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44491-2_22

  • DOI: https://doi.org/10.1007/3-540-44491-2_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41450-6

  • Online ISBN: 978-3-540-44491-6

  • eBook Packages: Springer Book Archive
