
Clustering with Feature Order Preferences

  • Conference paper
PRICAI 2008: Trends in Artificial Intelligence (PRICAI 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5351)

Abstract

We propose a clustering algorithm that effectively utilizes feature order preferences, i.e., side information of the form "feature s is more important than feature t". Our formulation incorporates such preferences into prototype-based clustering: the derived algorithm automatically learns distortion measures, parameterized by feature weights, that respect the feature order preferences as much as possible. The method accommodates a broad range of distortion measures, including Bregman divergences. Moreover, even when generalized entropy is used as the regularization term, the subproblem of learning the feature weights remains a convex programming problem. Empirical results demonstrate the effectiveness and potential of our method.
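As a rough illustration of the idea sketched in the abstract, the code below alternates prototype-based clustering (here with a feature-weighted squared Euclidean distortion, one member of the Bregman family) with a convex weight-update step that adds a negative-entropy regularizer and enforces each preference as a constraint w[s] >= w[t]. This is a minimal sketch under those assumptions, not the authors' exact formulation; the function name cluster_with_order_preferences, the hard inequality constraints, and the regularization weight lam are hypothetical choices made only for illustration.

```python
# Hypothetical sketch (not the paper's exact algorithm): prototype-based
# clustering with feature weights on the simplex, a per-feature weighted
# squared Euclidean distortion, entropy regularization, and order
# constraints w[s] >= w[t] for each preference (s, t).

import numpy as np
from scipy.optimize import minimize

def cluster_with_order_preferences(X, k, prefs, lam=1.0, n_iter=20, seed=0):
    """X: (n, d) data; k: number of clusters; prefs: list of (s, t) pairs
    meaning feature s should receive weight >= feature t; lam: strength of
    the entropy regularizer in the weight subproblem."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(d, 1.0 / d)                                 # uniform start
    centers = X[rng.choice(n, k, replace=False)].astype(float)

    for _ in range(n_iter):
        # 1) Assign each point to the nearest prototype under the
        #    feature-weighted squared Euclidean distance.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(-1)
        labels = d2.argmin(1)

        # 2) Update prototypes as cluster means (the Bregman centroid for
        #    squared Euclidean distortion).
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)

        # 3) Weight subproblem: minimize total weighted distortion plus a
        #    negative-entropy term, subject to the simplex constraint and
        #    the order preferences. This subproblem is convex.
        per_feature = ((X - centers[labels]) ** 2).sum(0)

        def obj(w_):
            return per_feature @ w_ + lam * np.sum(w_ * np.log(w_ + 1e-12))

        cons = [{"type": "eq", "fun": lambda w_: w_.sum() - 1.0}]
        cons += [{"type": "ineq", "fun": lambda w_, s=s, t=t: w_[s] - w_[t]}
                 for (s, t) in prefs]
        res = minimize(obj, w, method="SLSQP",
                       bounds=[(1e-9, 1.0)] * d, constraints=cons)
        w = res.x

    return labels, centers, w
```

For example, prefs=[(0, 2)] asks the learned weights to favor feature 0 over feature 2; with lam large the weights stay near uniform, and with lam small the distortion term and the order constraints dominate.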




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sun, J., Zhao, W., Xue, J., Shen, Z., Shen, Y. (2008). Clustering with Feature Order Preferences. In: Ho, T.B., Zhou, Z.H. (eds) PRICAI 2008: Trends in Artificial Intelligence. Lecture Notes in Computer Science (LNAI), vol 5351. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89197-0_36

  • DOI: https://doi.org/10.1007/978-3-540-89197-0_36

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-89196-3

  • Online ISBN: 978-3-540-89197-0

  • eBook Packages: Computer Science (R0)
