
Parameter by Parameter Algorithm for Multilayer Perceptrons

Neural Processing Letters

Abstract

This paper presents a parameter by parameter (PBP) algorithm for speeding up the training of multilayer perceptrons (MLPs). The new algorithm uses an approach similar to that of the layer by layer (LBL) algorithm, taking into account the input errors of the output layer and the hidden layer. Unlike LBL, however, the proposed PBP algorithm is not burdened by the need to calculate the gradient of the error function: in each iteration step, the weights or thresholds are optimized directly, one at a time, with all other variables held fixed. Four classes of solution equations for the network parameters are derived. The effectiveness of the PBP algorithm is demonstrated on two benchmarks. In comparison with the BP algorithm with momentum (BPM) and the conventional LBL algorithm, PBP achieves faster convergence and better simulation performance.
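The paper's four classes of closed-form solution equations are not reproduced on this page, so the following is only a minimal sketch of the parameter-by-parameter idea the abstract describes: a coordinate-descent sweep over a one-hidden-layer MLP in which each weight or threshold is optimized in turn while all other variables stay fixed. The network shape, the tanh/linear activations, and the golden-section 1-D solver are all illustrative assumptions standing in for the authors' direct solution equations, not their actual update rules.

```python
import numpy as np

# Hypothetical illustration only: the paper derives closed-form "solution
# equations" for each class of parameter; a generic golden-section line
# search stands in for them here.

def mlp_forward(X, W1, b1, W2, b2):
    """One-hidden-layer MLP: tanh hidden units, linear output (assumed)."""
    H = np.tanh(X @ W1 + b1)
    return H @ W2 + b2

def mse(X, T, params):
    """Mean square error of the network on targets T."""
    Y = mlp_forward(X, *params)
    return np.mean((Y - T) ** 2)

def golden_section(f, lo=-5.0, hi=5.0, iters=40):
    """Minimize a 1-D function f on [lo, hi] by golden-section search."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = f(c), f(d)
    for _ in range(iters):
        if fc < fd:                      # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = f(c)
        else:                            # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = f(d)
    return (a + b) / 2.0

def pbp_sweep(X, T, params):
    """One parameter-by-parameter pass: optimize every entry of every
    weight matrix and threshold vector in turn, all others held fixed."""
    for P in params:
        for idx in np.ndindex(P.shape):
            def f(v, P=P, idx=idx):
                saved = P[idx]
                P[idx] = v               # try a candidate value
                e = mse(X, T, params)
                P[idx] = saved           # restore before returning
                return e
            cand = golden_section(f)
            if f(cand) < f(P[idx]):      # commit only if the error drops
                P[idx] = cand
    return mse(X, T, params)

if __name__ == "__main__":
    # Tiny usage example: fit XOR with a 2-3-1 network.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    params = [rng.normal(size=(2, 3)), np.zeros(3),
              rng.normal(size=(3, 1)), np.zeros(1)]
    for sweep in range(10):
        print(f"sweep {sweep}: MSE = {pbp_sweep(X, T, params):.6f}")
```

Because every 1-D subproblem above is solved numerically, this sketch would be far slower than the paper's direct solution equations; it only illustrates the one-variable-at-a-time, gradient-free optimization pattern the abstract describes.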


Abbreviations

BPM: BP algorithm with momentum
LBL: Layer by Layer
MLP: Multilayer Perceptron
MNN: Modular Neural Network
MSE: Mean Square Error
PBP: Parameter by Parameter


Author information

Correspondence to David Zhang.


About this article

Cite this article

Li, Y., Zhang, D. & Wang, K. Parameter by Parameter Algorithm for Multilayer Perceptrons. Neural Process Lett 23, 229–242 (2006). https://doi.org/10.1007/s11063-006-0003-9
