A Noise Filtering Method for Inductive Concept Learning

  • Conference paper
  • First Online:
Advances in Artificial Intelligence (Canadian AI 2002)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2338)

Abstract

In many real-world situations there is no known method for computing the desired output from a set of inputs. A strategy for solving this type of problem is to learn the input-output functionality from examples. In such situations, however, it is not known which information is relevant to the task at hand. In this paper we focus on the selection of relevant examples. We propose a new noise elimination method which is based on filtering of the so-called pattern frequency domain and which resembles frequency-domain filtering in signal and image processing. The proposed method is inspired by the bases selection algorithm. A basis is an irredundant set of relevant attributes. By identifying examples that are non-typical in bases determination, noise elimination is achieved. Empirical results show the effectiveness of the proposed example selection method on artificial and real databases.
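
The abstract only sketches the approach, but the underlying intuition can be illustrated: if a basis is an irredundant set of relevant attributes, then examples that repeatedly prevent small attribute subsets from discriminating between the classes are natural noise candidates. The Python sketch below is a hypothetical reading of that idea, not the paper's actual algorithm; the function names `conflict_counts` and `filter_noise`, the subset-size bound, and the 0.6 cutoff ratio are all invented for illustration.

```python
from itertools import combinations

def conflict_counts(X, y, max_subset_size=2):
    """For each small attribute subset, find pairs of opposite-class examples
    that the subset cannot tell apart, and count how often each example is
    involved in such a conflict (i.e. blocks the subset from being a
    discriminating 'test')."""
    n, m = len(X), len(X[0])
    counts = [0] * n
    for k in range(1, max_subset_size + 1):
        for subset in combinations(range(m), k):
            for i in range(n):
                for j in range(i + 1, n):
                    if y[i] != y[j] and all(X[i][a] == X[j][a] for a in subset):
                        counts[i] += 1
                        counts[j] += 1
    return counts

def filter_noise(X, y, threshold_ratio=0.6):
    """Drop examples whose conflict count exceeds a fraction of the maximum.
    The 0.6 ratio is an arbitrary illustrative choice, not from the paper."""
    counts = conflict_counts(X, y)
    cutoff = threshold_ratio * max(counts) if max(counts) else 0.0
    keep = [i for i, c in enumerate(counts) if c <= cutoff]
    return [X[i] for i in keep], [y[i] for i in keep]
```

On a toy binary dataset containing one mislabeled duplicate of a clean example, the mislabeled copy conflicts with every clean copy under every attribute subset, accumulates the largest conflict count, and is the only example removed.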

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

Cite this paper

Lashkia, G.V. (2002). A Noise Filtering Method for Inductive Concept Learning. In: Cohen, R., Spencer, B. (eds) Advances in Artificial Intelligence. Canadian AI 2002. Lecture Notes in Computer Science (LNAI), vol 2338. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-47922-8_7

  • DOI: https://doi.org/10.1007/3-540-47922-8_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43724-6

  • Online ISBN: 978-3-540-47922-2

  • eBook Packages: Springer Book Archive
