Abstract
A new type of two-layer artificial neural network is presented. In contrast to its conventional counterpart, the new network is capable of forming any desired decision region in the input space. The new network consists of a conventional input (hidden) layer and an output layer that is, in essence, a Boolean function of the hidden-layer output. Each node of the hidden layer forms a decision hyperplane, and the set of hyperplanes constituted by the hidden layer partitions the input space into a number of cells. These cells can be labeled by a binary code according to their location relative to all hyperplanes. The sole function of the output layer is then to group these cells together appropriately to form an arbitrary decision region in the input space. In conventional approaches, this is accomplished by a linear combination of the binary decisions (outputs) of the hidden-layer nodes, which severely restricts the possible shapes of the decision region. A much more natural approach is to form the decision region as a Boolean function of the binary hidden-layer “word”, which has as many digits as there are nodes in the hidden layer. The training procedure of the new network can be split into two completely decoupled phases. First, the construction of the decision hyperplanes formed by the hidden layer can be posed as a linear programming problem, which can be solved using the Simplex algorithm. Moreover, a fast algorithm for the approximate solution of this linear programming problem, based on an elliptic approximation of the Simplex, can be devised. The second step in the design of the network is the appropriate construction of the Boolean function of the output layer. The key trick here is the adequate incorporation of don’t cares into the Boolean function, so that the decision regions formed by the output layer cover the entire input space while remaining non-overlapping.
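The cell-labeling idea described above can be sketched in a few lines; the specific hyperplanes, the lookup table serving as the Boolean output function, and all function names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical hidden layer: two decision hyperplanes in a 2-D input space.
# Each row holds (w1, w2, bias); the values are illustrative only.
hyperplanes = np.array([
    [1.0, 0.0, -0.5],   # vertical line  x1 = 0.5
    [0.0, 1.0, -0.5],   # horizontal line x2 = 0.5
])

def hidden_word(x):
    """Binary code of the cell containing x: one bit per hyperplane."""
    w, b = hyperplanes[:, :2], hyperplanes[:, 2]
    return tuple((w @ x + b > 0).astype(int))

# Output layer as a Boolean function of the hidden-layer word. Here it
# groups the two "diagonal" cells into one decision region -- an XOR of
# the two bits, which no linear threshold on those bits can reproduce.
boolean_output = {
    (0, 0): 0,
    (0, 1): 1,
    (1, 0): 1,
    (1, 1): 0,
}

def classify(x):
    return boolean_output[hidden_word(x)]

print(classify(np.array([0.25, 0.75])))  # cell (0,1) -> 1
print(classify(np.array([0.75, 0.75])))  # cell (1,1) -> 0
```

Because every input falls into exactly one cell and every cell appears exactly once in the table, the resulting decision regions cover the input space without overlapping, which is the property the don’t-care construction is meant to preserve in the general case.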
Copyright information
© 1989 Springer-Verlag Berlin Heidelberg
Cite this paper
Strobach, P. (1989). A Simplex Design of Linear Hyperplane Decision Networks. In: Burkhardt, H., Höhne, K.H., Neumann, B. (eds) Mustererkennung 1989. Informatik-Fachberichte, vol 219. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-75102-8_77
DOI: https://doi.org/10.1007/978-3-642-75102-8_77
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-51748-1
Online ISBN: 978-3-642-75102-8
eBook Packages: Springer Book Archive