Abstract
In this paper we address problems that may arise when the classical Perceptron or Pocket algorithms are used to train the units generated by Piecewise Linear Separation (PLS) incremental algorithms. These problems stem from the type of optimal solutions found by such training algorithms: some of these solutions enforce a useless separation of the input data, so that the new units added to the network by the incremental algorithm face the same problem again. The final network is then composed of a large number of redundant units, each trying to solve exactly the same problem and arriving at exactly the same solution. We review some modifications proposed to improve the training algorithms, mainly based on evaluating entropy-like functions over the input distributions. Furthermore, we propose an alternative solution whose advantage is its low computational cost. As simulation results show, this method compares well with the methods based on Information Theory concepts.
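As background for the training algorithms discussed above: the Pocket algorithm runs ordinary perceptron updates but keeps ("pockets") the weight vector with the best training accuracy seen so far. A minimal sketch in Python, assuming labels in {-1, +1}; the function name, data layout, and parameters are illustrative, not the authors' implementation:

```python
import numpy as np

def pocket_train(X, y, epochs=100, seed=0):
    """Pocket algorithm sketch: perceptron updates, keeping the best
    weight vector (by training accuracy) found along the way."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # absorb the bias into the weights
    w = np.zeros(Xb.shape[1])
    pocket_w, pocket_correct = w.copy(), 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if y[i] * (Xb[i] @ w) <= 0:          # misclassified: perceptron step
                w = w + y[i] * Xb[i]
                correct = int(np.sum(np.sign(Xb @ w) == y))
                if correct > pocket_correct:     # better than the pocket? swap it in
                    pocket_w, pocket_correct = w.copy(), correct
    return pocket_w, pocket_correct
```

On linearly separable data the pocket converges to a perfect separator; on non-separable data it returns the best linear discriminant found, which is the situation in which the degenerate solutions discussed in the paper can appear.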
Holder of an FI research grant from the Education Department of the Generalitat de Catalunya.
Copyright information
© 1993 Springer-Verlag Berlin Heidelberg
Cite this paper
Moreno, J.M., Castillo, F., Cabestany, J. (1993). Optimized learning for improving the evolution of piecewise linear separation incremental algorithms. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_159
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-56798-1
Online ISBN: 978-3-540-47741-9