Abstract
Neural networks have been shown to model complex relations among input attributes in sample data better than induction-tree methods. Such relations can be obtained from a network trained by back-propagation as a set of linear classifiers, each derived from the linear combination of the input attributes and the neuron weights of the first hidden layer. The training data are projected onto the hyperplanes of these linear classifiers, and an information-gain measure is then applied to the projected data. We propose that this reduces the computational complexity of extracting rules from neural networks. As a result, concise rules that capture relations over continuous-valued input attributes can be extracted from neural networks.
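The core step of the abstract — projecting training data onto a first-hidden-layer hyperplane and scoring the split with information gain — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function names, the toy data, and the weight vector `w` and bias `b` are all hypothetical stand-ins for weights taken from a trained network.

```python
import numpy as np

def entropy(y):
    """Shannon entropy (bits) of a vector of class labels."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain_of_hyperplane(X, y, w, b):
    """Information gain of splitting the data at the hyperplane w.x + b = 0.

    X : (n_samples, n_features) training data
    y : (n_samples,) class labels
    w : (n_features,) hidden-neuron weight vector (hypothetical here)
    b : scalar bias (hypothetical here)
    """
    proj = X @ w + b                      # project each sample onto the hyperplane normal
    left, right = y[proj <= 0], y[proj > 0]
    n = len(y)
    cond = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(y) - cond              # gain = H(y) - H(y | split)

# Toy example: a hyperplane that happens to separate the two classes.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
w = np.array([1.0, 1.0])   # hypothetical first-hidden-layer weights
b = -1.0                   # hypothetical bias
gain = info_gain_of_hyperplane(X, y, w, b)  # perfect split -> gain = 1.0 bit
```

In the method described above, each hidden neuron of the trained network would supply one such (w, b) pair, and the gain measure ranks the resulting oblique splits just as a decision-tree learner ranks axis-parallel ones.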
© 2001 Springer-Verlag Berlin Heidelberg
Kim, D., Lee, J. (2001). Instance-Based Method to Extract Rules from Neural Networks. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_166
Print ISBN: 978-3-540-42486-4
Online ISBN: 978-3-540-44668-2