
Finding Uninformative Features in Binary Data

  • Conference paper
Intelligent Data Engineering and Automated Learning - IDEAL 2005 (IDEAL 2005)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3578)

Abstract

For statistical modelling of multivariate binary data, such as text documents, data instances are typically represented as vectors over a global vocabulary of attributes. Besides the issue of high dimensionality, this representation also raises the problem of the uneven importance of the various attribute presences and absences. This problem has been largely overlooked in the literature, yet it may hinder reliable estimation of unsupervised probabilistic representation models. In turn, automated feature selection and feature weighting in the context of unsupervised learning is challenging, because there is no known target to guide the search. In this paper we propose and study a relatively simple cluster-based generative model for multivariate binary data, equipped with an automated feature-weighting capability. Empirical results on both synthetic and real data sets are given and discussed.
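As a rough illustration of the kind of model the abstract describes (not the authors' exact formulation), the sketch below fits, via EM, a mixture of multivariate Bernoullis in which each feature `d` carries a saliency weight `rho[d]`: with probability `rho[d]` the feature follows a cluster-specific Bernoulli, and otherwise a cluster-independent "background" Bernoulli shared by all clusters. Uninformative features should then receive low saliency. All names here (`saliency_em`, `theta`, `lam`, `rho`) and the specific update equations are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def saliency_em(X, K, n_iter=100, seed=0, eps=1e-6):
    """EM for a Bernoulli mixture with per-feature saliency weights.

    Each feature d is a two-way mixture of a cluster-specific
    Bernoulli(theta[k, d]) and a cluster-independent Bernoulli(lam[d]);
    rho[d] estimates the probability that feature d is informative.
    """
    rng = np.random.default_rng(seed)
    n, D = X.shape
    pi = np.full(K, 1.0 / K)                   # mixing weights
    theta = np.clip(X.mean(0) + rng.uniform(-0.2, 0.2, (K, D)), 0.05, 0.95)
    lam = np.clip(X.mean(0), 0.05, 0.95)       # common background model
    rho = np.full(D, 0.5)                      # feature saliency weights

    for _ in range(n_iter):
        # Per-feature likelihoods under the cluster-specific and common models
        a = rho * np.where(X[:, None, :] == 1, theta, 1 - theta)     # (n, K, D)
        b = (1 - rho) * np.where(X[:, None, :] == 1, lam, 1 - lam)   # (n, 1, D)
        f = a + b                                                    # (n, K, D)
        # E-step: component responsibilities (normalised in log domain)
        log_r = np.log(pi) + np.log(f).sum(-1)
        log_r -= log_r.max(1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(1, keepdims=True)                                 # (n, K)
        # Joint responsibilities: component k AND feature d salient / not
        u = r[:, :, None] * (a / f)
        v = r[:, :, None] * (b / f)
        # M-step
        pi = r.mean(0)
        theta = np.clip((u * X[:, None, :]).sum(0) / (u.sum(0) + eps),
                        eps, 1 - eps)
        vs = v.sum(1)                                                # (n, D)
        lam = np.clip((vs * X).sum(0) / (vs.sum(0) + eps), eps, 1 - eps)
        rho = np.clip(u.sum((0, 1)) / n, eps, 1 - eps)
    return rho, theta, pi

# Synthetic check: two clusters differ on the first 4 features; the last 2
# are pure noise, so their estimated saliency should come out lower.
rng = np.random.default_rng(1)
n = 300
z = rng.integers(0, 2, n)
theta_true = np.array([[0.9, 0.9, 0.1, 0.1], [0.1, 0.1, 0.9, 0.9]])
X_inf = (rng.random((n, 4)) < theta_true[z]).astype(float)
X_noise = (rng.random((n, 2)) < 0.5).astype(float)
X = np.hstack([X_inf, X_noise])
rho, _, _ = saliency_em(X, K=2)
```

In this sketch the saliency weights are global mixture proportions over "relevant" vs. "irrelevant" feature states, so they can be learned jointly with the clustering inside a single EM loop, with no supervised target required.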




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wang, X., Kabán, A. (2005). Finding Uninformative Features in Binary Data. In: Gallagher, M., Hogan, J.P., Maire, F. (eds) Intelligent Data Engineering and Automated Learning - IDEAL 2005. IDEAL 2005. Lecture Notes in Computer Science, vol 3578. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11508069_6


  • DOI: https://doi.org/10.1007/11508069_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26972-4

  • Online ISBN: 978-3-540-31693-0

