A Filter Feature Selection Method for Clustering

  • Conference paper
Foundations of Intelligent Systems (ISMIS 2005)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3488)

Abstract

High-dimensional data is a challenge for the KDD community. Feature Selection (FS) is an efficient preprocessing step for dimensionality reduction, since it removes redundant and/or noisy features. Few FS methods, most of them recent, have been proposed for clustering, and most of these are "wrapper" methods that rely on a clustering algorithm to evaluate the selected feature subsets. Because of this reliance on clustering algorithms, which often require parameter settings (such as the number of clusters), and because there is no consensual criterion for evaluating clustering quality in different subspaces, the wrapper approach cannot be considered a universal way to perform FS within the clustering framework. We therefore propose and evaluate in this paper a "filter" FS method, which is completely independent of any clustering algorithm. It is based on two specific indices that assess the adequacy between two sets of features. Since these indices have very attractive computational properties (they require only a single scan of the dataset), the proposed method is effective not only in terms of result quality but also in terms of execution time.
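
Although the two adequacy indices themselves are not reproduced in this abstract, the filter workflow it describes can be illustrated with a short sketch: score candidate feature subsets with a cheap index computed directly from the data, with no clustering algorithm in the loop. The Python code below is only an illustration under assumptions not in the original: it works on numeric data and uses a hypothetical subset_adequacy score (the mean, over left-out features, of their best absolute correlation with the selected subset) as a stand-in for the paper's indices.

```python
import numpy as np


def subset_adequacy(corr: np.ndarray, subset: list[int]) -> float:
    # Hypothetical stand-in for the paper's indices: how well the candidate
    # subset "covers" the left-out features, measured as the mean of each
    # left-out feature's best absolute correlation with the subset.
    if not subset:
        return 0.0
    left_out = [j for j in range(corr.shape[0]) if j not in subset]
    if not left_out:
        return 1.0
    return float(np.mean([corr[j, subset].max() for j in left_out]))


def filter_select(data: np.ndarray, k: int) -> list[int]:
    # Greedy forward selection driven solely by the filter score; no clustering
    # algorithm is ever run. The correlation matrix is obtained from a single
    # pass over the dataset.
    corr = np.abs(np.corrcoef(data, rowvar=False))
    selected: list[int] = []
    remaining = list(range(data.shape[1]))
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: subset_adequacy(corr, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=200)  # feature 3 is a redundant near-copy of feature 0
    # The redundant pair (0, 3) should contribute only one selected feature.
    print(filter_select(X, 3))
```

Because the score depends only on statistics gathered from the data, swapping in any other adequacy index leaves the selection loop unchanged, which is what makes a filter approach independent of the downstream clustering algorithm.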

References

  1. Dash, M., Liu, H.: Feature selection for classification. Int. Journal of Intelligent Data Analysis 1(3) (1997)

    Google Scholar 

  2. Dash, M., Liu, H.: Feature selection for clustering. In: Terano, T., Chen, A.L.P. (eds.) PAKDD 2000. LNCS, vol. 1805. Springer, Heidelberg (2000)

    Google Scholar 

  3. Dash, M., Choi, K., Scheuermann, P., Liu, H.: Feature Selection for Clustering-A Filter Solution. In: Proc. of Int. Conference on Data Mining (ICDM 2002), pp. 115–122 (2002)

    Google Scholar 

  4. Devaney, M., Ram, A.: Efficient feature selection in conceptual clustering. In: Proc. of the International Conference on Machine Learning (ICML), pp. 92–97 (1997)

    Google Scholar 

  5. Dy, J.G., Brodley, C.E.: Visualization and interactive feature selection for unsupervised data. In: Proc. of the International Conference on Knowledge Discovery and Data Mining (KDD), pp. 360–364 (2000)

    Google Scholar 

  6. Huang, Z.: A Fast Clustering Algorithm to Cluster VeryLarge Categorical Data Sets in Data Mining. Research Issues on Data Mining and Knowledge Discovery (1997)

    Google Scholar 

  7. Jouve, P.E.: Clustering and Knowledge Discovery in Databases. PhD thesis, Lab. ERIC, University Lyon II, France (2003)

    Google Scholar 

  8. Jouve, P.E., Nicoloyannis, N.: KEROUAC, an Algorithm for Clustering Categorical Data Sets with Practical Advantages. In. Proc. of International Workshop on Data Mining for Actionable Knowledge (PAKDD 2003) (2003)

    Google Scholar 

  9. Kim, Y.S., Street, W.N., Menczer, F.: Feature selection in unsupervised learning via evolutionary search. In: Proc. of ACM SIGKDD International Conference on Knowledge and Discovery, pp. 365–369 (2000)

    Google Scholar 

  10. Merz, C., Murphy, P.: UCI repository of machine learning databases (1996), http://www.ics.uci.edu/#mlearn/mlrepository.html

  11. Talavera, L.: Feature selection and incremental learning of probabilistic concept hierarchies. In: Proc. of International Conference on Machine Learning, ICML (2000)

    Google Scholar 

Download references

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jouve, PE., Nicoloyannis, N. (2005). A Filter Feature Selection Method for Clustering. In: Hacid, MS., Murray, N.V., Raś, Z.W., Tsumoto, S. (eds) Foundations of Intelligent Systems. ISMIS 2005. Lecture Notes in Computer Science (LNAI), vol 3488. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11425274_60

  • DOI: https://doi.org/10.1007/11425274_60

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-25878-0

  • Online ISBN: 978-3-540-31949-8

  • eBook Packages: Computer Science; Computer Science (R0)
