
Dynamic feature selection algorithm based on Q-learning mechanism

Applied Intelligence (2021)

Abstract

Feature selection is a technique for improving the classification accuracy of classifiers and a convenient method for data visualization. As an incremental, task-oriented, and model-free learning algorithm, Q-learning is well suited to feature selection. This study therefore proposes a dynamic feature selection algorithm that combines feature selection and Q-learning in a single framework. First, Q-learning is used to construct a discriminant function for each class of the data. Next, features are ranked by comprehensively considering the discriminant function vectors of all classes, and this ranking is updated dynamically as the discriminant function vectors are updated. Finally, experiments compare the proposed algorithm with four other feature selection algorithms. The results on benchmark data sets verify the effectiveness of the proposed algorithm: its classification performance is better than that of the other feature selection algorithms, and it also performs well in removing redundant features. Further experiments on the effect of the learning rate show that parameter selection for the algorithm is very simple.
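The abstract's two-stage procedure (per-class discriminant vectors learned by Q-learning, then a ranking that aggregates those vectors across classes) can be made concrete with a short sketch. The Python below is a minimal illustration under stated assumptions, not the authors' exact method: the reward signal (signed agreement between a standardized feature value and class membership), the uniform exploration policy, and the name q_learning_feature_ranking are invented for this example; the paper's precise update and ranking rules are given in the full text.

import numpy as np

def q_learning_feature_ranking(X, y, alpha=0.1, gamma=0.9, n_episodes=2000, seed=0):
    # Illustrative sketch: one Q-vector (a "discriminant function vector") per class.
    rng = np.random.default_rng(seed)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # standardize each feature
    classes = np.unique(y)
    n_features = X.shape[1]
    Q = {c: np.zeros(n_features) for c in classes}
    for _ in range(n_episodes):
        i = rng.integers(len(X))        # sample one instance
        f = rng.integers(n_features)    # pick a feature to evaluate (the "action")
        for c in classes:
            membership = 1.0 if y[i] == c else -1.0
            r = membership * X[i, f]    # assumed reward: signed agreement with class c
            # standard Q-learning update toward r + gamma * max_a Q(a)
            Q[c][f] += alpha * (r + gamma * Q[c].max() - Q[c][f])
    scores = sum(np.abs(q) for q in Q.values())  # aggregate all classes' vectors
    return np.argsort(scores)[::-1]              # feature indices, best first

# Toy usage: only feature 2 carries class information, so it should rank first.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 2] > 0).astype(int)
print(q_learning_feature_ranking(X, y))

Ranking by the summed absolute Q-values mirrors the abstract's idea of considering all classes' discriminant function vectors comprehensively; the ranking could equally be recomputed after every update, making the selection fully dynamic.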




Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (grants 61673353 and U1304602).

Author information

Corresponding author: Zhigang Shang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Xu, R., Li, M., Yang, Z. et al. Dynamic feature selection algorithm based on Q-learning mechanism. Appl Intell 51, 7233–7244 (2021). https://doi.org/10.1007/s10489-021-02257-x

