Abstract
For statistical modelling of multivariate binary data, such as text documents, instances are typically represented as vectors over a global vocabulary of attributes. Beyond the issue of high dimensionality, this representation raises the problem that attribute presences and absences are of uneven importance. This problem has been largely overlooked in the literature, yet it can hinder reliable estimation of unsupervised probabilistic representation models. At the same time, automated feature selection and feature weighting are challenging in the unsupervised setting, because there is no known target to guide the search. In this paper we propose and study a relatively simple cluster-based generative model for multivariate binary data, equipped with an automated feature-weighting capability. Empirical results on both synthetic and real data sets are given and discussed.
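The abstract does not give the model's equations, but the general idea of a cluster-based generative model for binary data with automated feature weighting can be illustrated by a Bernoulli mixture augmented with per-feature saliency parameters: each feature is explained either by a cluster-specific Bernoulli or by a shared, cluster-independent background Bernoulli, and the saliency weight of each feature is learned by EM. The sketch below is a minimal illustration under these assumptions and need not match the paper's exact formulation; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: two clusters that differ only on the first 5
# features; the remaining 10 features are pure noise (uninformative).
N, D, K = 400, 15, 2
z = rng.integers(0, K, N)
theta_true = np.full((K, D), 0.5)
theta_true[0, :5], theta_true[1, :5] = 0.9, 0.1
X = (rng.random((N, D)) < theta_true[z]).astype(float)

# Parameters: mixing weights pi, cluster Bernoullis theta (K x D),
# per-feature saliency rho (D,), shared background Bernoulli lam (D,).
pi = np.full(K, 1.0 / K)
theta = rng.uniform(0.3, 0.7, (K, D))
rho = np.full(D, 0.5)
lam = X.mean(0)

def bern(p, X):
    """Pointwise Bernoulli likelihoods p(x_d) for binary X."""
    return np.where(X == 1, p, 1.0 - p)

for _ in range(100):
    # E-step: each feature's likelihood is a two-part mixture of the
    # "salient" (cluster-specific) and "background" terms.
    a = rho * bern(theta[:, None, :], X)               # K x N x D
    b = (1.0 - rho) * bern(lam, X)                     # N x D
    per_feat = a + b                                   # K x N x D
    log_pk = np.log(per_feat).sum(-1) + np.log(pi)[:, None]
    r = np.exp(log_pk - np.logaddexp.reduce(log_pk, axis=0))  # K x N
    u = a / per_feat                  # posterior that a feature is salient

    # M-step: closed-form updates weighted by the responsibilities.
    ru = r[:, :, None] * u                             # K x N x D
    pi = r.mean(1)
    theta = np.clip((ru * X).sum(1) / ru.sum(1), 1e-6, 1 - 1e-6)
    rho = np.clip(ru.sum((0, 1)) / N, 1e-6, 1 - 1e-6)
    w_bg = (r[:, :, None] * (1.0 - u)).sum(0)          # N x D
    lam = np.clip((w_bg * X).sum(0) / w_bg.sum(0), 1e-6, 1 - 1e-6)

print("mean saliency, informative features:", rho[:5].mean())
print("mean saliency, noise features:", rho[5:].mean())
```

On data like this, the learned saliencies separate the cluster-dependent features from the noise features, which is the sense in which such a model can "find uninformative features" without any supervision.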
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Wang, X., Kabán, A. (2005). Finding Uninformative Features in Binary Data. In: Gallagher, M., Hogan, J.P., Maire, F. (eds) Intelligent Data Engineering and Automated Learning - IDEAL 2005. IDEAL 2005. Lecture Notes in Computer Science, vol 3578. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11508069_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-26972-4
Online ISBN: 978-3-540-31693-0
eBook Packages: Computer Science, Computer Science (R0)