Feature Selection for Clustering

  • Conference paper
  • First Online:
Knowledge Discovery and Data Mining. Current Issues and New Applications (PAKDD 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1805)

Abstract

Clustering is an important data mining task. Data mining often concerns large, high-dimensional data, but most clustering algorithms in the literature are sensitive to large data size, high dimensionality, or both. Different features affect clusters differently: some are important for forming clusters, while others may hinder the clustering task. An efficient way of handling this is to select a subset of important features. Doing so helps find clusters efficiently, makes the data easier to understand, and reduces data size for efficient storage, collection, and processing. The task of finding important original features for unsupervised data is largely untouched; traditional feature selection algorithms work only for supervised data, where class information is available. For unsupervised data, without class information, principal components (PCs) are often used instead, but PCs still require all original features and can be difficult to interpret. Our approach first ranks features according to their importance for clustering and then selects a subset of the most important features. For large data we use a scalable method based on sampling. Empirical evaluation shows the effectiveness and scalability of our approach on benchmark and synthetic data sets.
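
The abstract outlines a rank-then-select procedure: score each feature by its importance to the clustering structure, rank the features, keep a subset of the top-ranked ones, and use sampling to stay scalable on large data. The sketch below illustrates that pipeline with an entropy-style criterion over pairwise similarities; the criterion, the sampling scheme, and all function names are illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def pairwise_entropy(X):
    """Entropy of pairwise similarities; lower values suggest clearer cluster structure."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    d = dist[np.triu_indices(len(X), k=1)]
    alpha = -np.log(0.5) / (d.mean() + 1e-12)   # calibrate so similarity is 0.5 at the mean distance
    s = np.clip(np.exp(-alpha * d), 1e-12, 1 - 1e-12)
    return float(-(s * np.log(s) + (1 - s) * np.log(1 - s)).sum())

def rank_features(X, sample_size=500, seed=0):
    """Rank features by how much entropy rises when each one is removed."""
    rng = np.random.default_rng(seed)
    if len(X) > sample_size:                    # sampling keeps the O(n^2) distance step tractable
        X = X[rng.choice(len(X), sample_size, replace=False)]
    scores = [pairwise_entropy(np.delete(X, f, axis=1)) for f in range(X.shape[1])]
    return np.argsort(scores)[::-1]             # most important feature first

def select_features(X, k, **kwargs):
    """Return the indices of the k top-ranked original features."""
    return rank_features(X, **kwargs)[:k]

# Toy example: two informative features carrying two clusters, plus three noise features.
rng = np.random.default_rng(1)
clusters = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
X = np.hstack([clusters, rng.uniform(-1.0, 1.0, (200, 3))])
print(select_features(X, k=2))                  # ideally reports columns 0 and 1
```

The choice of entropy over pairwise similarities is one common way to measure cluster tendency without class labels: well-separated clusters push similarities toward 0 or 1 (low entropy), while dropping an informative feature flattens them toward 0.5 (high entropy).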

Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dash, M., Liu, H. (2000). Feature Selection for Clustering. In: Terano, T., Liu, H., Chen, A.L.P. (eds) Knowledge Discovery and Data Mining. Current Issues and New Applications. PAKDD 2000. Lecture Notes in Computer Science (LNAI), vol 1805. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45571-X_13

  • DOI: https://doi.org/10.1007/3-540-45571-X_13

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67382-8

  • Online ISBN: 978-3-540-45571-4

  • eBook Packages: Springer Book Archive
