
A Simplex Design of Linear Hyperplane Decision Networks

  • Conference paper
Mustererkennung 1989

Part of the book series: Informatik-Fachberichte (volume 219)

Abstract

A new type of two-layer artificial neural network is presented. In contrast to its conventional counterpart, the new network is capable of forming any desired decision region in the input space. The new network consists of a conventional input (hidden) layer and an output layer that is, in essence, a Boolean function of the hidden layer output. Each node of the hidden layer forms a decision hyperplane, and the set of hyperplanes constituted by the hidden layer partitions the input space into a number of cells. These cells can be labeled according to their location (binary code) relative to all hyperplanes. The sole function of the output layer is then to group these cells together appropriately, to form an arbitrary decision region in the input space. In conventional approaches, this is accomplished by a linear combination of the binary decisions (outputs) of the nodes of the hidden layer. This traditional approach is very limited with respect to the possible shape of the decision region. A much more natural approach is to form the decision region as a Boolean function of the binary hidden-layer “word”, which has as many digits as there are nodes in the hidden layer. The training procedure of the new network can be split into two completely decoupled phases. First, the construction of the decision hyperplanes formed by the hidden layer can be posed as a linear programming problem, which can be solved using the Simplex algorithm. Moreover, a fast algorithm for approximate solution of this linear programming problem, based on an elliptic approximation of the Simplex, can be devised. The second step in the design of the network is the appropriate construction of the Boolean function of the output layer. The key trick here is the adequate incorporation of don’t cares in the Boolean function, so that the decision regions formed by the output layer cover the entire input space but still do not overlap.
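The cell-coding idea in the abstract can be sketched in a few lines. The following is an illustrative toy example, not the paper's algorithm: the hyperplanes are hand-picked rather than constructed by the Simplex phase, and the Boolean output layer is realized naively as a set of cell codes observed for positive training samples (codes never observed act as don't cares). All function and variable names here are the sketch's own inventions.

```python
def cell_code(x, hyperplanes):
    """Hidden-layer 'word': one bit per hyperplane (w, b),
    set to 1 if x lies on the positive side of w.x + b = 0."""
    return tuple(int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)
                 for w, b in hyperplanes)

def learn_output_layer(samples, hyperplanes):
    """Group cells into the decision region: a cell code is 'positive'
    if any positive training sample lands in that cell."""
    return {cell_code(x, hyperplanes) for x, label in samples if label == 1}

def classify(x, hyperplanes, positive_codes):
    """Output layer as a Boolean function (membership test) of the code."""
    return int(cell_code(x, hyperplanes) in positive_codes)

# XOR-like region in the plane, cut by two axis-aligned hyperplanes:
# x > 0.5 and y > 0.5 (weights/offsets chosen by hand for illustration).
planes = [((1.0, 0.0), -0.5), ((0.0, 1.0), -0.5)]
train = [((0.0, 0.0), 0), ((1.0, 1.0), 0),
         ((1.0, 0.0), 1), ((0.0, 1.0), 1)]
pos = learn_output_layer(train, planes)

print(classify((0.9, 0.1), planes, pos))  # 1: same cell as (1, 0)
print(classify((0.9, 0.8), planes, pos))  # 0: same cell as (1, 1)
```

This toy region is exactly the case that motivates the Boolean output layer: no linear combination of the two half-plane bits followed by a threshold can label cells (1,0) and (0,1) positive while labeling (0,0) and (1,1) negative, whereas the set-membership (Boolean) output layer realizes it directly.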




Copyright information

© 1989 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Strobach, P. (1989). A Simplex Design of Linear Hyperplane Decision Networks. In: Burkhardt, H., Höhne, K.H., Neumann, B. (eds) Mustererkennung 1989. Informatik-Fachberichte, vol 219. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-75102-8_77


  • DOI: https://doi.org/10.1007/978-3-642-75102-8_77

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-51748-1

  • Online ISBN: 978-3-642-75102-8

  • eBook Packages: Springer Book Archive
