Globally Convergent Modification of the Quickprop Method

Abstract

A mathematical framework for the convergence analysis of the well-known Quickprop method is described. Furthermore, we propose a modification of this method that exhibits improved convergence speed and stability and, at the same time, alleviates the need for heuristic learning parameters. Simulations are conducted to compare and evaluate the performance of the new modified Quickprop algorithm against various popular training algorithms. The results of the experiments indicate that the increased convergence rates achieved by the proposed algorithm in no way affect its generalization capability and stability.
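
For context, the update rule applied by Quickprop itself is simple to state: for each weight it fits a parabola through the current and previous error derivatives, jumps toward the parabola's minimum, limits the growth of the step, and falls back to gradient descent when no usable previous step exists. The sketch below implements only this standard rule from Fahlman's 1988 paper, not the globally convergent modification proposed in this article; the function name quickprop_step and the default values of lr and mu are illustrative choices, not taken from the paper.

import numpy as np

def quickprop_step(grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One elementwise Quickprop weight update (standard Fahlman rule only).

    grad, prev_grad -- current and previous derivatives dE/dw
    prev_step       -- previous weight change
    lr              -- learning rate for the gradient-descent fallback
    mu              -- maximum growth factor (Fahlman suggests 1.75)
    """
    denom = prev_grad - grad
    # The quadratic fit is usable only if the two slopes differ and a
    # previous step exists.
    usable = (np.abs(denom) > 1e-12) & (np.abs(prev_step) > 0.0)
    # Secant step toward the minimum of the fitted parabola:
    # grad / (prev_grad - grad) * prev_step.
    secant = grad / np.where(usable, denom, 1.0) * prev_step
    # Cap the step at mu times the previous step to avoid huge jumps.
    secant = np.clip(secant, -mu * np.abs(prev_step), mu * np.abs(prev_step))
    # First iteration or degenerate fit: plain gradient descent.
    return np.where(usable, secant, -lr * grad)

In practice one keeps the previous derivative and previous step for every weight and calls such a routine once per epoch; the heuristic constants lr and mu are exactly the kind of learning parameters whose tuning the proposed modification is meant to alleviate.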





Cite this article

Vrahatis, M.N., Magoulas, G.D. & Plagianakos, V.P. Globally Convergent Modification of the Quickprop Method. Neural Processing Letters 12, 159–170 (2000). https://doi.org/10.1023/A:1009661729970
