Abstract
One family of classifiers which has had considerable experimental success over the last thirty years is that of the n-tuple classifier and its descendants. However, the theoretical basis for such classifiers remains uncertain, despite periodic attempts to place them in a statistical framework. In particular, the most commonly used training algorithms do not even try to minimise recognition error on the training set. In this paper the tools of statistical learning theory are applied to the classifier in an attempt to explain its effectiveness. In particular, the effective VC dimension of the classifier is measured experimentally for various input distributions, and these results are used as the basis for a discussion of the n-tuple classifier's behaviour. As a side issue, an error-minimising training algorithm for the n-tuple classifier is also proposed and briefly examined.
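For readers unfamiliar with the scheme the abstract refers to, the following is a minimal sketch of a classical n-tuple (RAM-based) classifier with the standard one-shot training rule, which simply sets the addressed memory bit for the true class rather than minimising training-set error. This is an illustration of the general technique, not the paper's experimental setup; all names and parameter choices here are illustrative.

```python
import random

class NTupleClassifier:
    """Classical n-tuple (RAM) classifier with one-shot training.

    Each of `num_tuples` tuples samples n fixed, randomly chosen input
    positions; the sampled bits form an address into a 2^n-entry bit
    memory, one memory per (class, tuple) pair.
    """

    def __init__(self, input_bits, n, num_tuples, num_classes, seed=0):
        rng = random.Random(seed)
        # Fixed random input positions for each tuple.
        self.tuples = [rng.sample(range(input_bits), n)
                       for _ in range(num_tuples)]
        # One bit-memory of size 2^n per (class, tuple) pair.
        self.memory = [[[0] * (1 << n) for _ in range(num_tuples)]
                       for _ in range(num_classes)]

    def _address(self, x, positions):
        # Pack the sampled bits into an integer address in [0, 2^n).
        addr = 0
        for p in positions:
            addr = (addr << 1) | x[p]
        return addr

    def train(self, x, cls):
        # One-shot rule: set the addressed bit for the true class only.
        for t, positions in enumerate(self.tuples):
            self.memory[cls][t][self._address(x, positions)] = 1

    def classify(self, x):
        # Score a class by how many of its tuple memories respond;
        # predict the class with the highest count.
        scores = [sum(mem[t][self._address(x, positions)]
                      for t, positions in enumerate(self.tuples))
                  for mem in self.memory]
        return max(range(len(scores)), key=scores.__getitem__)
```

Note that `train` never consults the other classes' memories, which is why, as the abstract observes, this rule makes no attempt to minimise recognition error on the training set.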
References
I. Aleksander and T.J. Stonham. Guide to pattern recognition using random-access memories. Computers and Digital Techniques, 2:29–40, 1979.
W.W. Bledsoe and C.L. Bisson. Improved memory matrices for the n-tuple recognition method. In IRE Joint Computer Conference, 11, pages 414–415, 1962.
W.W. Bledsoe and L. Browning. Pattern recognition and reading by machine. In Proc. Eastern Joint Computer Conf., pages 232–255, 1959.
N.P. Bradshaw. An Analysis of Learning in Weightless Neural Systems. PhD thesis, Imperial College, London, 1996.
N.P. Bradshaw. Improving the generalisation of the n-tuple classifier with the effective VC-dimension. Technical report, IRIDIA, Universite Libre de Bruxelles, 1997.
A. Kolcz and N.M. Allinson. N-tuple regression network. Neural Networks, 9(5):855–869, 1996.
R. Rohwer and M. Morciniec. A theoretical and experimental account of n-tuple classifier performance. Neural Computation, 8:657–670, 1996.
J. Shawe-Taylor, P.L. Bartlett, R.C. Williamson, and M. Anthony. A framework for structural risk minimisation. In Proceedings of the 9th Annual Conference on Computational Learning Theory, 1996.
M.J. Sixsmith, G.D. Tattershall, and J.M. Rollett. Speech recognition using n-tuple techniques. Br Telecom J, 8(2), April 1990.
V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
V. Vapnik, E. Levin, and Y. LeCun. Measuring the VC-dimension of a learning machine. Neural Computation, 6:851–876, 1994.
Copyright information
© 1997 Springer-Verlag Berlin Heidelberg
Cite this paper
Bradshaw, N.P. (1997). The effective VC dimension of the n-tuple classifier. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, JD. (eds) Artificial Neural Networks — ICANN'97. ICANN 1997. Lecture Notes in Computer Science, vol 1327. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0020206
DOI: https://doi.org/10.1007/BFb0020206
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-63631-1
Online ISBN: 978-3-540-69620-9