Abstract
In this paper, we describe five feature selection techniques used for text classification. Information gain, the independent significance feature test, the chi-squared test, the odds ratio test, and frequency filtering are compared on text benchmarks based on Wikipedia. For each method we present the classification quality obtained on the test datasets using a K-NN-based approach. The main advantage of the evaluated approach is reducing the dimensionality of the vector space, which improves the effectiveness of the classification task. The information gain method, which obtained the best results, has been used to evaluate the scalability of feature selection and classification. We also provide results indicating that feature selection is useful for obtaining common-sense features for describing natural categories.
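A minimal sketch of the kind of pipeline the abstract describes, assuming scikit-learn (an illustration, not the authors' implementation): terms are scored with an information-gain-style criterion (here, mutual information between a term and the class label), only the top-scoring terms are kept to reduce the dimensionality of the vector space, and a K-NN classifier operates on the reduced representation. The toy corpus, the number of kept features k, and n_neighbors are illustrative assumptions, not values from the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder corpus and labels; the paper's benchmark is built from Wikipedia articles.
documents = [
    "cat mammal fur whiskers",
    "dog mammal fur bark",
    "sparrow bird feathers wings",
    "eagle bird feathers talons",
]
labels = ["mammal", "mammal", "bird", "bird"]

pipeline = make_pipeline(
    TfidfVectorizer(),                        # sparse bag-of-words vector space
    SelectKBest(mutual_info_classif, k=5),    # information-gain-like feature filter
    KNeighborsClassifier(n_neighbors=1),      # K-NN on the reduced feature space
)
pipeline.fit(documents, labels)
print(pipeline.predict(["feathers and wings"]))   # expected: ['bird']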