
An Efficient Learning Algorithm for Feedforward Neural Network

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3315)

Abstract

The backpropagation (BP) algorithm is frequently used to train feedforward neural networks, but it often suffers from slow convergence. In this paper, an efficient learning algorithm based on local search, together with an improved variant, is proposed. Computer simulations on standard problems such as XOR, Parity, TwoNorm, and MushRoom are presented and compared with the BP algorithm. The experimental results indicate that the proposed algorithms achieve considerably better accuracy and much faster convergence than BP, while their generalization on the TwoNorm and MushRoom problems is comparable to that of BP.
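The abstract gives only the outline of the method. As a minimal sketch of the general idea, assuming a local-search scheme that perturbs the weights and keeps moves that reduce the training error (the paper's actual update rule, network size, and parameters are not available in this preview), a hill-climbing alternative to gradient-based BP on the XOR benchmark could look like this:

```python
# Illustrative sketch only: the paper's concrete algorithm is not given in this
# preview. This shows a generic local-search (hill-climbing) alternative to
# gradient-based BP: perturb the weights and accept changes that lower the error.
import numpy as np

rng = np.random.default_rng(0)

# XOR data set, one of the benchmark problems mentioned in the abstract.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def forward(params, inputs):
    """2-2-1 feedforward network with sigmoid units (architecture assumed)."""
    W1, b1, W2, b2 = params
    h = 1.0 / (1.0 + np.exp(-(inputs @ W1 + b1)))
    o = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return o

def mse(params):
    return float(np.mean((forward(params, X) - y) ** 2))

# Random initial weights for the assumed 2-2-1 architecture.
params = [rng.normal(0, 1, (2, 2)), np.zeros(2),
          rng.normal(0, 1, (2, 1)), np.zeros(1)]

best_err = mse(params)
step = 0.5  # perturbation scale (assumed, not from the paper)
for it in range(20000):
    # Local search move: perturb all weights slightly, accept only if error drops.
    candidate = [p + rng.normal(0, step, p.shape) for p in params]
    err = mse(candidate)
    if err < best_err:
        params, best_err = candidate, err
    if best_err < 1e-3:
        break

print(f"iterations: {it + 1}, final MSE: {best_err:.5f}")
print("outputs:", forward(params, X).ravel().round(3))
```

The sketch illustrates only the contrast drawn in the abstract: instead of following the error gradient as BP does, a local-search learner explores weight perturbations and keeps improving moves, which avoids gradient computation but depends on the move-generation strategy for its speed.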





Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tan, S., Gu, J. (2004). An Efficient Learning Algorithm for Feedforward Neural Network. In: Lemaître, C., Reyes, C.A., González, J.A. (eds) Advances in Artificial Intelligence – IBERAMIA 2004. IBERAMIA 2004. Lecture Notes in Computer Science(), vol 3315. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30498-2_77


  • DOI: https://doi.org/10.1007/978-3-540-30498-2_77

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23806-5

  • Online ISBN: 978-3-540-30498-2

  • eBook Packages: Springer Book Archive
