Abstract
Many classifier applications have been developed using the Multi-layer Perceptron (MLP) model as their representation. The main difficulty in designing an architecture based on this model has been, for the most part, a lack of understanding of what each of an MLP's network components embodies. If the input domain of a classification task is expressed as a subspace of R^N, the problem to solve consists of computing an appropriate segmentation of that domain, so that every input point is assigned to a region of the space containing only points of the same class. This can be achieved with an MLP network if every weight vector is computed as the normal to one of the surfaces in the input domain that induce the same partitioning as the classification criteria associated with the problem for which the network has been built. Since the Delaunay Triangulation (DT) of a set of points is a geometric structure that records everything one would ever want to know about the proximity of the points from which it was derived, it provides an ideal source of information for computing the number and form of those weight vectors, making it possible to build an initial maximal network architecture for a particular problem.
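The abstract does not spell out the construction, but a minimal sketch of the idea it describes could look as follows, assuming the common variant in which each Delaunay edge joining points of different classes contributes one separating hyperplane (and hence one candidate hidden-unit weight vector as its normal). The function name `initial_weight_vectors`, the use of `scipy.spatial.Delaunay`, and the choice of the perpendicular bisector of each class-separating edge are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import Delaunay


def initial_weight_vectors(points, labels):
    """Derive candidate first-layer (weights, bias) pairs from the
    Delaunay triangulation of a labelled point set.

    For every Delaunay edge that joins points of different classes,
    the perpendicular bisector of the edge is taken as a separating
    hyperplane; its normal gives the weight vector and its offset the bias.
    """
    tri = Delaunay(points)
    hyperplanes = []
    seen = set()
    # Walk the edges of every simplex in the triangulation exactly once.
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                a, b = sorted((simplex[i], simplex[j]))
                if (a, b) in seen:
                    continue
                seen.add((a, b))
                if labels[a] == labels[b]:
                    continue  # only class-separating edges contribute units
                p, q = points[a], points[b]
                normal = q - p                    # weight vector: edge direction
                midpoint = (p + q) / 2.0
                bias = -np.dot(normal, midpoint)  # hyperplane through the midpoint
                hyperplanes.append((normal, bias))
    return hyperplanes


# Usage: two linearly interleaved classes in the plane.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
planes = initial_weight_vectors(X, y)
print(f"{len(planes)} candidate hidden units for {len(X)} points")
```

The number of hyperplanes returned bounds the size of the initial hidden layer, which is consistent with the abstract's notion of a maximal architecture that a subsequent learning or pruning stage would refine.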