An ensemble pruning method considering classifiers’ interaction based on information theory for facial expression recognition

  • Regular Paper
  • Published:
Multimedia Systems

Abstract

Ensemble learning combines all generated base learners to improve generalization, but weak and redundant classifiers degrade the classification system's performance, which has motivated ensemble pruning. Existing ensemble pruning methods usually rely on the classifiers' diversity and individual ability to select an optimal classifier sequence; however, they ignore that the interaction between two weak or redundant classifiers can itself improve performance. In this paper, we focus on the interaction between classifiers and propose a new ensemble pruning method, CCIEP (an ensemble pruning method considering classifiers' interaction), applied mainly to facial expression recognition. CCIEP consists of two stages. First, symmetric uncertainty is used as a ranking metric to perform ranking-based pruning of the classifier pool, which secures the ensemble performance of the selected classifier subset. Second, interaction information is used to explore the binding relationships between classifiers, and classifiers that interact with the selected subset are added back to it, further improving system performance. Experimental results demonstrate that the method outperforms several state-of-the-art ensemble pruning methods on five classical facial expression data sets and 10 UCI data sets.
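To make the two stages described above concrete, the Python sketch below is a minimal illustration, not the authors' implementation: it ranks base classifiers by the symmetric uncertainty between their validation predictions and the true labels, prunes the pool by that ranking, and then uses pairwise interaction information with the labels to re-admit pruned classifiers that are synergistic with an already selected one. The function names, the keep_ratio parameter, the interaction threshold, and the greedy re-admission loop are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import mutual_info_score


def entropy(labels):
    """Shannon entropy (in nats) of a discrete label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))


def symmetric_uncertainty(pred, y):
    """SU(pred; y) = 2 * I(pred; y) / (H(pred) + H(y)), in [0, 1]."""
    denom = entropy(pred) + entropy(y)
    return 0.0 if denom == 0 else 2.0 * mutual_info_score(pred, y) / denom


def interaction_information(pred_i, pred_j, y):
    """I(f_i; f_j; y) = I(f_i, f_j; y) - I(f_i; y) - I(f_j; y).

    A positive value suggests the two classifiers are synergistic
    (complementary) with respect to the true labels y."""
    joint = [f"{a}_{b}" for a, b in zip(pred_i, pred_j)]  # joint prediction variable
    return (mutual_info_score(joint, y)
            - mutual_info_score(pred_i, y)
            - mutual_info_score(pred_j, y))


def prune_ensemble(preds, y, keep_ratio=0.5, interaction_threshold=0.0):
    """preds: array of shape (n_classifiers, n_samples) with each base
    classifier's predictions on a validation set; y: true labels.

    Stage 1: keep the classifiers ranked highest by symmetric uncertainty.
    Stage 2: re-admit a pruned classifier if its interaction information
    with at least one kept classifier exceeds the threshold."""
    n = preds.shape[0]
    su = np.array([symmetric_uncertainty(preds[i], y) for i in range(n)])
    order = np.argsort(-su)                        # best-ranked first
    kept = list(order[: max(1, int(keep_ratio * n))])
    pruned = [i for i in order if i not in kept]
    for i in pruned:
        if any(interaction_information(preds[i], preds[j], y) > interaction_threshold
               for j in kept):
            kept.append(i)
    return sorted(kept)
```

In this sketch, prune_ensemble would be called with the stacked validation predictions of the classifier pool and would return the indices of the retained classifiers, which are then combined (e.g., by majority voting) to form the pruned ensemble.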

Data availability

The datasets used and analyzed during this study can be accessed from the following repositories:

  1. FER2013: https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data
  2. JAFFE: https://zenodo.org/record/3451524
  3. CK+: http://vasc.ri.cmu.edu/idb/html/face/facial_expression/
  4. RaFD: http://www.socsci.ru.nl:8180/RaFD2/RaFD
  5. KDEF: https://www.kdef.se/home/aboutKDEF
  6. UCI datasets: http://archive.ics.uci.edu/ml/

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 62063002) and the Science and Technology Plan Project of Guizhou Province (Qiankehe Platform Talents [2018] 5781).

Author information

Authors and Affiliations

Authors

Contributions

QY wrote the main manuscript text, generated the graphs, and conducted the experiments. DL guided the direction of the experiments; XC and SS trained the base classifiers; YM collected the experimental datasets. All authors reviewed the manuscript.

Corresponding author

Correspondence to Danyang Li.

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Communicated by R. Huang.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wu, Y., Li, D., Chen, X. et al. An ensemble pruning method considering classifiers’ interaction based on information theory for facial expression recognition. Multimedia Systems 30, 46 (2024). https://doi.org/10.1007/s00530-023-01227-2
