An Incremental Algorithm for Parallel Training of the Size and the Weights in a Feedforward Neural Network

  • Published in: Neural Processing Letters

Abstract

An algorithm for incremental approximation of functions in a normed linear space by feedforward neural networks is presented. The concept of variation of a function with respect to a set is used, together with the weight decay method, to estimate the approximation error and to optimize the size and the weights of the network at each iteration step of the algorithm. Two alternatives, recursively incremental and generally incremental, are proposed. In the generally incremental case, the algorithm optimizes the parameters of all units in the hidden layer at each step. In the recursively incremental case, the algorithm optimizes the parameters corresponding to only one unit in the hidden layer at each step, so a problem with a smaller number of free parameters is solved at each step.
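
To make the distinction between the two alternatives concrete, the following Python sketch builds a one-hidden-layer network of sigmoidal units one unit at a time, penalizing the squared training error with a quadratic weight-decay term. It is only an illustration under assumptions of our own (sigmoidal units, a least-squares cost, SciPy's general-purpose minimize routine), not the authors' algorithm; the function names (incremental_fit, net_output, and so on) are hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def net_output(params, x):
        # One-hidden-layer network: sum_i c_i * sigmoid(a_i * x + b_i).
        a, b, c = params.reshape(3, -1)          # input weights, biases, output weights
        return sigmoid(np.outer(x, a) + b) @ c

    def objective(params, x, y, decay):
        # Squared approximation error plus a quadratic weight-decay penalty.
        residual = net_output(params, x) - y
        return np.mean(residual ** 2) + decay * np.sum(params ** 2)

    def incremental_fit(x, y, n_units=8, decay=1e-3, mode="recursive", seed=0):
        rng = np.random.default_rng(seed)
        params = np.empty((3, 0))                # grows by one hidden unit per step
        for _ in range(n_units):
            new_unit = rng.normal(size=(3, 1))   # initial (a, b, c) of the new unit
            params = np.hstack([params, new_unit])
            if mode == "general":
                # Generally incremental: re-optimize the parameters of all units.
                res = minimize(objective, params.ravel(), args=(x, y, decay))
                params = res.x.reshape(3, -1)
            else:
                # Recursively incremental: only the newest unit's three parameters
                # move, so each step solves a much smaller optimization problem.
                fixed = params[:, :-1]
                def new_unit_cost(p):
                    full = np.hstack([fixed, p.reshape(3, 1)])
                    return objective(full.ravel(), x, y, decay)
                res = minimize(new_unit_cost, new_unit.ravel())
                params[:, -1] = res.x
        return params

    # Example: approximate a smooth target function on [0, 1].
    x = np.linspace(0.0, 1.0, 200)
    y = np.sin(2.0 * np.pi * x)
    p = incremental_fit(x, y, mode="recursive")
    print("final training error:", np.mean((net_output(p.ravel(), x) - y) ** 2))

In the recursively incremental branch each step optimizes only the three parameters of the newly added unit, while the generally incremental branch re-optimizes all 3k parameters of the current hidden layer, mirroring the trade-off described in the abstract.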

Cite this article

Hlaváčková-Schindler, K. & Fischer, M.M. An Incremental Algorithm for Parallel Training of the Size and the Weights in a Feedforward Neural Network. Neural Processing Letters 11, 131–138 (2000). https://doi.org/10.1023/A:1009640416018
