
A Generative Learning Algorithm that uses Structural Knowledge of the Input Domain yields a better Multi-layer Perceptron

  • Conference paper

Part of the book series: Perspectives in Neural Computing ((PERSPECT.NEURAL))

Abstract

Many classifier applications have been developed using the multi-layer perceptron (MLP) model as their form of representation. The main difficulty in designing an architecture based on this model has, for the most part, been caused by a lack of understanding of what each of an MLP's network components embodies. Expressing the input domain of a classification task as a subspace of R^N, the problem to solve consists of computing a segmentation of the domain such that every input point is assigned to a region of the space containing only points of the same class. This can be achieved with an MLP network if every weight vector is computed as the normal to one of the surfaces in the input domain that induce the same partitioning as the classification criteria associated with the problem for which the network is built. Since the Delaunay Triangulation (DT) of a set of points records all the proximity relations among the points from which it was derived, it provides an ideal source of information for computing the number and form of these weight vectors, making it possible to build an initial maximal network architecture for a particular problem.
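The abstract does not spell out the construction in detail, but the following minimal sketch illustrates the kind of procedure it alludes to: take the Delaunay triangulation of the training points and, for every edge joining two points of different classes, use the perpendicular bisector of that edge as a candidate separating surface, whose normal gives a hidden-unit weight vector. The function name `initial_mlp_weights`, the bisector rule, and the toy two-class data are assumptions made for illustration, not the paper's exact algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay


def initial_mlp_weights(X, y):
    """Derive candidate hidden-unit hyperplanes from the Delaunay
    triangulation of the training points (illustrative sketch).

    For every Delaunay edge joining two points of different classes,
    the perpendicular bisector of the edge is taken as a candidate
    separating surface; its normal becomes a weight vector and its
    offset the corresponding bias.

    X : (n_samples, n_features) array of input points.
    y : (n_samples,) array of class labels.
    Returns (W, b): weights (n_units, n_features) and biases (n_units,).
    """
    tri = Delaunay(X)

    # Collect the unique edges of the triangulation from its simplices.
    edges = set()
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                edges.add(tuple(sorted((simplex[i], simplex[j]))))

    W, b = [], []
    for i, j in edges:
        if y[i] == y[j]:
            continue                      # only inter-class edges induce boundaries
        normal = X[i] - X[j]              # weight vector: normal to the bisecting hyperplane
        midpoint = 0.5 * (X[i] + X[j])
        W.append(normal)
        b.append(-normal @ midpoint)      # hyperplane passes through the midpoint
    return np.asarray(W), np.asarray(b)


if __name__ == "__main__":
    # Small two-class problem in R^2 as a usage example.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    W, b = initial_mlp_weights(X, y)
    print(f"{len(W)} candidate hidden units from inter-class Delaunay edges")
```

Under this reading, the number of inter-class Delaunay edges bounds the size of the initial (maximal) hidden layer, which a subsequent learning or pruning stage could then reduce.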




Copyright information

© 1998 Springer-Verlag London Limited

About this paper

Cite this paper

Pérez-Miñana, E. (1998). A Generative Learning Algorithm that uses Structural Knowledge of the Input Domain yields a better Multi-layer Perceptron. In: Bullinaria, J.A., Glasspool, D.W., Houghton, G. (eds) 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1546-5_5


  • DOI: https://doi.org/10.1007/978-1-4471-1546-5_5

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76208-9

  • Online ISBN: 978-1-4471-1546-5

