Abstract
Feature selection improves the performance of learning algorithms by finding a minimal subset of relevant features. Because feature selection is computationally intensive, a trade-off between the quality of the selected subset and the computation time is required. In this paper, we present a novel anytime algorithm for feature selection, which gradually improves the quality of its results as computation time increases. The algorithm is interruptible, i.e., it can be stopped at any time and still provide a partial subset of selected features. The quality of results is monitored by a new measure: fuzzy information gain. The algorithm's performance is evaluated on several benchmark datasets.
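As a rough illustration of the interruptible behaviour described in the abstract (not the authors' algorithm), an anytime feature-selection loop can be sketched as greedy forward selection against a deadline, with plain Shannon information gain standing in for fuzzy information gain. All function names and the `deadline` parameter below are hypothetical:

```python
import time
from collections import Counter
from math import log2


def information_gain(X, y, feature):
    """Shannon information gain of one feature (a stand-in for the
    paper's fuzzy information gain measure)."""
    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    # Partition the class labels by the feature's observed values.
    parts = {}
    for row, label in zip(X, y):
        parts.setdefault(row[feature], []).append(label)
    cond = sum(len(p) / len(y) * entropy(p) for p in parts.values())
    return entropy(y) - cond


def anytime_feature_selection(X, y, deadline=None):
    """Greedy forward selection; interruptible via a soft time budget.

    If the deadline (in seconds) expires, the partial subset selected
    so far is returned -- the "anytime" property from the abstract.
    """
    start = time.monotonic()
    selected = []
    remaining = set(range(len(X[0])))
    while remaining:
        if deadline is not None and time.monotonic() - start > deadline:
            break  # interrupted: return the partial result
        gains = {f: information_gain(X, y, f) for f in remaining}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no remaining feature adds information
        selected.append(best)
        remaining.discard(best)
    return selected
```

Running with a generous (or no) deadline yields the full greedy subset; an early deadline returns whatever prefix of the subset was built before interruption, which is the trade-off between result quality and computation time the paper studies.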
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Last, M., Kandel, A., Maimon, O., Eberbach, E. (2001). Anytime Algorithm for Feature Selection. In: Ziarko, W., Yao, Y. (eds) Rough Sets and Current Trends in Computing. RSCTC 2000. Lecture Notes in Computer Science(), vol 2005. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45554-X_66
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-43074-2
Online ISBN: 978-3-540-45554-7