Abstract
In many real-world situations there is no known method for computing the desired output from a given set of inputs. One strategy for solving this type of problem is to learn the input-output functionality from examples. In such situations, however, it is not known in advance which information is relevant to the task at hand. In this paper we focus on the selection of relevant examples. We propose a new noise elimination method based on filtering of the so-called pattern frequency domain, a process that resembles frequency-domain filtering in signal and image processing. The proposed method is inspired by the bases selection algorithm, where a basis is an irredundant set of relevant attributes. Noise elimination is achieved by identifying examples that are non-typical in bases determination. Empirical results on artificial and real databases demonstrate the effectiveness of the proposed example selection method.
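The paper's algorithms are not reproduced on this page, but the two ideas the abstract names can be sketched. The code below is a hypothetical, brute-force illustration under two assumptions: a "basis" is taken to be a minimal set of attributes that distinguishes every positive example from every negative one, and "pattern frequency" filtering is taken to mean discarding examples whose attribute patterns on the bases are rare within their own class. The function names and the `min_freq` threshold are illustrative, not from the paper.

```python
from itertools import combinations
from collections import Counter


def bases(pos, neg):
    """Hypothetical bases: minimal attribute index sets that distinguish
    every positive example from every negative one (brute force)."""
    n = len(pos[0])
    # For each opposite-class pair, the attributes on which they differ.
    diffs = [frozenset(i for i in range(n) if p[i] != q[i])
             for p in pos for q in neg]
    result = []
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            s = set(subset)
            # A basis must hit every pair's difference set,
            # and must not contain an already-found (smaller) basis.
            if all(s & d for d in diffs) and \
                    not any(set(b) <= s for b in result):
                result.append(subset)
    return result


def filter_noise(examples, labels, bases_list, min_freq=2):
    """Keep examples whose patterns on the bases are frequent within
    their own class; rare (non-typical) examples are treated as noise."""
    keep = []
    for cls in set(labels):
        members = [x for x, y in zip(examples, labels) if y == cls]
        # One frequency table per basis: how often each restricted
        # pattern occurs among same-class examples.
        counts = [Counter(tuple(x[i] for i in b) for x in members)
                  for b in bases_list]
        for x in members:
            score = sum(c[tuple(x[i] for i in b)]
                        for b, c in zip(bases_list, counts))
            if score >= min_freq * len(bases_list):
                keep.append((x, cls))
    return keep
```

On a toy two-attribute dataset where only attribute 0 separates the classes, `bases` returns the single basis `(0,)`, and a positive example whose attribute 0 looks negative gets a low pattern frequency and is filtered out.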
© 2002 Springer-Verlag Berlin Heidelberg
Lashkia, G.V. (2002). A Noise Filtering Method for Inductive Concept Learning. In: Cohen, R., Spencer, B. (eds) Advances in Artificial Intelligence. Canadian AI 2002. Lecture Notes in Computer Science, vol 2338. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-47922-8_7
Print ISBN: 978-3-540-43724-6
Online ISBN: 978-3-540-47922-2