Abstract
An algorithm for incremental approximation of functions in a normed linear space by feedforward neural networks is presented. The concept of variation of a function with respect to a set is used to estimate the approximation error, and the weight decay method is used to optimize the size and the weights of the network in each iteration step of the algorithm. Two alternatives, recursively incremental and generally incremental, are proposed. In the generally incremental case, the algorithm optimizes the parameters of all units in the hidden layer at each step. In the recursively incremental case, the algorithm optimizes the parameters corresponding to only one unit in the hidden layer at each step, so an optimization problem with a smaller number of parameters is solved at each step.
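The recursively incremental variant can be sketched as a greedy procedure: at each step one hidden unit is added and only its parameters are optimized against the current residual, with a weight decay penalty on the unit's weights. This is a minimal illustration, not the paper's algorithm: the one-dimensional target, the random search in place of the paper's optimizer, and all parameter ranges and penalty values below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_incremental(x, y, n_units=5, decay=1e-3, trials=2000):
    """Recursively incremental sketch: add one sigmoidal unit per step,
    optimizing only that unit's parameters (a, b, c) while all earlier
    units stay fixed, so each step solves a small optimization problem."""
    units = []            # list of (input weight a, bias b, output weight c)
    residual = y.copy()
    for _ in range(n_units):
        best, best_err = None, np.inf
        for _ in range(trials):
            a = rng.uniform(-10, 10)      # candidate input weight
            b = rng.uniform(-10, 10)      # candidate bias
            h = sigmoid(a * x + b)
            # output weight in closed form, with weight decay acting
            # as a ridge penalty on c
            c = (h @ residual) / (h @ h + decay)
            # squared error on the residual plus a weight decay term
            err = np.mean((residual - c * h) ** 2) + decay * (a * a + b * b + c * c)
            if err < best_err:
                best, best_err = (a, b, c), err
        a, b, c = best
        units.append(best)
        residual = residual - c * sigmoid(a * x + b)   # update residual
    return units

def predict(units, x):
    return sum(c * sigmoid(a * x + b) for a, b, c in units)

# Toy usage: approximate sin(2*pi*x) on [0, 1]; the residual error
# shrinks as units are added one at a time.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)
units = fit_incremental(x, y)
print(np.mean((y - predict(units, x)) ** 2))
```

In the generally incremental variant, the inner step would instead re-optimize the parameters of all units accumulated so far, which is a larger optimization problem at each step but can reach a lower error for the same network size.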
Hlaváčková-Schindler, K. & Fischer, M.M. An Incremental Algorithm for Parallel Training of the Size and the Weights in a Feedforward Neural Network. Neural Processing Letters 11, 131–138 (2000). https://doi.org/10.1023/A:1009640416018