Fast Learning Algorithms for Feedforward Neural Networks


Abstract

To improve the training speed of multilayer feedforward neural networks (MLFNN), we propose and explore two new fast backpropagation (BP) algorithms: (1) changing the error function, using the exponent attenuation (or bell impulse) function and the Fourier kernel function as alternatives; and (2) introducing a hybrid conjugate-gradient algorithm of global optimization with a dynamic learning rate to overcome the conventional BP problems of getting stuck in local minima and slow convergence. Our experimental results demonstrate the effectiveness of the modified error functions: training is faster than with existing fast methods. In addition, on real speech data our hybrid algorithm achieves a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and requires less training time, is less complicated, and is more robust than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms.
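
As a rough, hedged illustration of the second idea, the sketch below trains a small network with a nonlinear conjugate-gradient update in which the Polak-Ribière coefficient $\beta^{PR}_k = g_{k+1}^{\top}(g_{k+1}-g_k)/\|g_k\|^2$ is clipped against the Fletcher-Reeves coefficient $\beta^{FR}_k = \|g_{k+1}\|^2/\|g_k\|^2$ via $\beta_k = \max(0, \min(\beta^{PR}_k, \beta^{FR}_k))$, a hybrid rule in the spirit of Dai and Yuan (refs. 22 and 25). The toy task, the network size, the backtracking line search (standing in for the paper's dynamic learning rate), and all identifiers are assumptions for illustration; this is not the paper's exact algorithm or its modified error functions.

```python
import numpy as np

# Hedged sketch, not the paper's algorithm: a 2-8-1 sigmoid-output network
# trained by nonlinear conjugate gradient with a hybrid PR/FR beta.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy XOR-like labels

N_IN, N_HID = 2, 8

def unpack(w):
    """Split the flat parameter vector into the two weight matrices."""
    k = N_IN * N_HID
    return w[:k].reshape(N_IN, N_HID), w[k:].reshape(N_HID, 1)

def loss_and_grad(w):
    """Half mean squared error of the network and its flat gradient (backprop)."""
    W1, W2 = unpack(w)
    h = np.tanh(X @ W1)                       # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))     # sigmoid output
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    d_out = err * out * (1.0 - out) / len(X)  # dLoss/d(pre-sigmoid)
    gW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1 = X.T @ d_h
    return loss, np.concatenate([gW1.ravel(), gW2.ravel()])

w = 0.1 * rng.standard_normal(N_IN * N_HID + N_HID)
loss, g = loss_and_grad(w)
d = -g
for _ in range(300):
    # Backtracking (Armijo) line search; stands in for a dynamic learning rate.
    step = 1.0
    while loss_and_grad(w + step * d)[0] > loss + 1e-4 * step * (g @ d) and step > 1e-10:
        step *= 0.5
    w = w + step * d
    new_loss, g_new = loss_and_grad(w)
    # Hybrid beta: clip Polak-Ribiere by Fletcher-Reeves, restart if negative.
    beta_fr = (g_new @ g_new) / (g @ g)
    beta_pr = (g_new @ (g_new - g)) / (g @ g)
    beta = max(0.0, min(beta_pr, beta_fr))
    d = -g_new + beta * d
    if g_new @ d >= 0.0:                      # safeguard: fall back to steepest descent
        d = -g_new
    loss, g = new_loss, g_new

print(f"final training loss: {loss:.4f}")
```

On this toy problem the hybrid update typically needs far fewer iterations than plain gradient descent; clipping keeps the fast Polak-Ribière behaviour while inheriting Fletcher-Reeves-style convergence safeguards, which is the trade-off the abstract's hybrid scheme targets.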


References

  1. L.F.A. Wessels and E. Barnard, "Avoiding false local minima by proper initialization of connections," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 899–905, 1992.

  2. Y. Fukuoka, H. Matsuki, H. Minamitani, et al., "A modified back-propagation method to avoid false local minima," Neural Networks, vol. 11, pp. 1059–1072, 1998.

  3. G. Thimm and E. Fiesler, "High-order and multilayer perceptron initialization," IEEE Transactions on Neural Networks, vol. 8, no. 6, pp. 349–359, 1997.

  4. F. Stager and M. Agarwal, "Three methods to speed up the training of feedforward and feedback perceptrons," Neural Networks, vol. 10, no. 8, pp. 1435–144, 1997.

  5. B. Verma, "Fast training of multilayer perceptrons," IEEE Transactions on Neural Networks, vol. 8, no. 6, pp. 1314–1320, 1997.

  6. A.G. Parlos and B. Fernandez, "An accelerated learning algorithm for multilayer perceptron networks," IEEE Transactions on Neural Networks, vol. 5, no. 3, pp. 493–497, 1994.

  7. B.K. Humpert, "Improving back propagation with a new error function," Neural Networks, vol. 7, no. 8, pp. 1191–1192, 1994.

  8. A. Van Ooyen and B. Nienhuis, "Improving the convergence of the back-propagation algorithm," Neural Networks, vol. 5, pp. 465–471, 1992.

  9. N.B. Karayiannis and A.N. Venetsanopoulos, "Fast learning algorithms for neural networks," IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 7, pp. 453–474, 1992.

  10. S.-H. Oh, "Improving the error backpropagation algorithm with a modified error function," IEEE Transactions on Neural Networks, vol. 8, no. 7, pp. 799–802, 1997.

  11. N.B. Karayiannis, "Accelerating the training of feedforward neural networks using generalized Hebbian rules for initializing the internal representations," IEEE Transactions on Neural Networks, vol. 7, no. 2, pp. 419–426, 1996.

  12. X.-H. Yu, G.-A. Chen, and S.-X. Cheng, "Dynamic learning rate optimization of the back-propagation algorithm," IEEE Transactions on Neural Networks, vol. 6, no. 3, pp. 669–677, 1995.

  13. G.D. Magoulas, M.N. Vrahatis, and G.S. Androulakis, "Effective backpropagation training with variable stepsize," Neural Networks, vol. 10, no. 1, pp. 69–82, 1997.

  14. R.A. Jacobs, "Increased rates of convergence through learning rate adaptation," Neural Networks, vol. 1, pp. 295–307, 1988.

  15. R. Battiti, "First- and second-order methods for learning: Between steepest descent and Newton's method," Neural Computation, vol. 4, pp. 141–166, 1992.

  16. E.M. Johansson, F.U. Dowla, and D.M. Goodman, "Backpropagation learning for multilayer feed-forward neural networks using the conjugate gradient method," International Journal of Neural Systems, vol. 2, no. 4, pp. 291–301, 1992.

  17. D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533–536, 1986.

  18. B. Widrow and M.A. Lehr, "30 years of adaptive neural networks: Perceptron, Madaline, and back-propagation," Proceedings of the IEEE, vol. 78, no. 9, pp. 1415–1441, 1990.

  19. B. Chen, Optimization Theory and Algorithms, Tsinghua University Press: Beijing, 1989.

  20. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 16, pp. 35–43, 1969.

  21. R. Fletcher and C.M. Reeves, "Function minimization by conjugate gradients," Computer Journal, vol. 7, pp. 149–154, 1964.

  22. Y.H. Dai and Y. Yuan, "Some properties of a new conjugate gradient method," in Advances in Nonlinear Programming, edited by Y. Yuan, Kluwer Academic Publishers, pp. 251–262, 1998.

  23. M.J.D. Powell, "Restart procedures for the conjugate gradient method," Mathematical Programming, vol. 12, pp. 241–254, 1977.

  24. M.J.D. Powell, "Nonconvex minimization calculations and the conjugate gradient method," Lecture Notes in Mathematics, vol. 1066, Springer-Verlag: Berlin, pp. 122–144, 1984.

  25. Y.H. Dai and Y. Yuan, "An efficient hybrid conjugate gradient method for unconstrained optimization," Annals of Operations Research, vol. 103, pp. 33–47, 2001.


Cite this article

Jiang, M., Gielen, G., Zhang, B. et al. Fast Learning Algorithms for Feedforward Neural Networks. Applied Intelligence 18, 37–54 (2003). https://doi.org/10.1023/A:1020922701312
