Using Feature Selection Approaches to Find the Dependent Features

  • Conference paper
Artificial Intelligence and Soft Computing (ICAISC 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6113)


Abstract

Dependencies among features can degrade the performance and efficiency of many algorithms. Traditional methods can detect only linear dependencies, or dependencies among a small number of features. In this work, we investigate whether feature selection approaches can be used to find such dependencies. We apply and compare Relief, CFS, NB-GA and NB-BOA as feature selection approaches for identifying the dependent features in our artificial data. Unexpectedly, Relief performs best in our experiments, even better than NB-BOA, a population-based evolutionary algorithm that uses the distribution of the population to find dependent features. This may be because some "link strengths" between features are weak, or because the Naïve Bayes classifier used in these wrapper approaches cannot represent dependencies between features. However, the exact reason for these results remains an open problem for future work.
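
The paper itself ships no code, but a small sketch may clarify the filter method that performed best. Below is a minimal Relief implementation in Python, following the scheme of Kira and Rendell [6]; the function name, parameters and the toy XOR data are our own illustrative choices, not the authors' experimental setup. Relief repeatedly samples an instance, finds its nearest neighbour of the same class (the "hit") and of the opposite class (the "miss"), and rewards features that separate the miss but not the hit. Because neighbours are found in the full feature space, interacting (dependent) features can score highly even when each one is individually uninformative.

```python
import numpy as np

def relief(X, y, n_iterations=200, random_state=0):
    """Minimal Relief (Kira & Rendell, 1992) for a two-class dataset.

    X: (n_samples, n_features) array with features on comparable scales.
    y: (n_samples,) array of 0/1 labels; both classes must be present.
    Returns one weight per feature; higher suggests more relevant.
    """
    rng = np.random.default_rng(random_state)
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for _ in range(n_iterations):
        i = rng.integers(n_samples)
        xi, yi = X[i], y[i]
        d = np.linalg.norm(X - xi, axis=1)  # distances to all samples
        d[i] = np.inf                       # exclude the sample itself
        same = y == yi
        same[i] = False
        hit = X[np.where(same, d, np.inf).argmin()]    # nearest same-class sample
        miss = X[np.where(~same, d, np.inf).argmin()]  # nearest other-class sample
        # Features that differ on the miss but not on the hit gain weight.
        w += np.abs(xi - miss) - np.abs(xi - hit)
    return w / n_iterations

# Toy check: the class is the XOR of features 0 and 1; feature 2 is noise.
X = np.random.default_rng(1).random((500, 3))
y = ((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5)).astype(int)
print(relief(X, y))  # features 0 and 1 should outweigh feature 2
```

On such XOR-style data the two interacting features typically receive clearly higher weights than the noise feature, which illustrates why a filter like Relief can pick up dependencies that myopic, single-feature scores miss.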


References

  1. Dash, M., Liu, H.: Consistency-based search in feature selection. Artificial Intelligence 151(1), 155–176 (2003)

  2. Kohavi, R., John, G.: Wrappers for feature subset selection. Artificial Intelligence 97(1-2), 273–324 (1997)

  3. Inza, I., Larrañaga, P., Etxeberria, R., Sierra, B.: Feature subset selection by Bayesian network-based optimization. Artificial Intelligence 123(1-2), 157–184 (2000)

  4. Pelikan, M., Goldberg, D., Cantu-Paz, E.: BOA: The Bayesian optimization algorithm. In: Proceedings of the Genetic and Evolutionary Computation Conference GECCO 1999, vol. 1, pp. 525–532 (1999)

  5. Lewis, D.: Naive (Bayes) at forty: The independence assumption in information retrieval. In: Nédellec, C., Rouveirol, C. (eds.) ECML 1998. LNCS, vol. 1398, pp. 4–18. Springer, Heidelberg (1998)

  6. Kira, K., Rendell, L.: The feature selection problem: Traditional methods and a new algorithm. In: Proceedings of the National Conference on Artificial Intelligence, p. 129. John Wiley & Sons Ltd., Chichester (1992)

  7. Hall, M.A.: Correlation-based feature selection for discrete and numeric class machine learning. In: ICML 2000: Proceedings of the Seventeenth International Conference on Machine Learning, pp. 359–366. Morgan Kaufmann Publishers Inc., San Francisco (2000)

  8. Press, W., Teukolsky, S., Vetterling, W., Flannery, B.: Numerical Recipes in C. Cambridge University Press, Cambridge (1992)

  9. Quinlan, J.: Induction of decision trees. Machine Learning 1(1), 81–106 (1986)

  10. Hagan, M., Demuth, H., Beale, M., et al.: Neural network design. PWS, Boston (1996)

  11. Keerthi, S., Shevade, S., Bhattacharyya, C., Murthy, K.: Improvements to Platt’s SMO algorithm for SVM classifier design. Neural Computation 13(3), 637–649 (2001)

  12. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA data mining software: An update. SIGKDD Explorations 11(1), 10–18 (2009)

  13. Pelikan, M.: A simple implementation of the Bayesian optimization algorithm (BOA) in C++ (version 1.0). IlliGAL Report No. 99011, University of Illinois at Urbana-Champaign (1999)

  14. Ebert-Uphoff, I.: Measuring Connection Strengths and Link Strengths in Discrete Bayesian Networks. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, Tech. Rep. (2006)



Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yang, Q., Salehi, E., Gras, R. (2010). Using Feature Selection Approaches to Find the Dependent Features. In: Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2010. Lecture Notes in Computer Science (LNAI), vol 6113. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13208-7_61

  • DOI: https://doi.org/10.1007/978-3-642-13208-7_61

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-13207-0

  • Online ISBN: 978-3-642-13208-7
