
An empirical study of ensemble techniques for software fault prediction

  • Published in: Applied Intelligence

Abstract

Many researchers have previously analyzed various techniques for software fault prediction (SFP). However, most of these studies reported limited prediction capability, and the techniques' performance was not consistent across software fault datasets. In contrast, SFP models based on ensemble techniques have recently shown promising and improved results across different software fault datasets. Meanwhile, many new and improved ensemble techniques have been introduced that have not yet been explored for SFP. Motivated by this, this paper investigates ensemble techniques for SFP. We empirically assess the performance of seven ensemble techniques, namely Dagging, Decorate, Grading, MultiBoostAB, RealAdaBoost, Rotation Forest, and Ensemble Selection. To the best of our knowledge, most of these ensemble techniques have not been used before for SFP. We conduct a series of experiments on benchmark fault datasets, using three distinct classification algorithms, namely naive Bayes, logistic regression, and J48 (decision tree), as base learners for the ensemble techniques. The experimental analysis revealed that Rotation Forest with J48 as the base learner achieved the highest precision, recall, and G-mean 1 values of 0.995, 0.994, and 0.994, respectively, and that Decorate achieved the highest AUC value of 0.986. Further, statistical tests showed that the used ensemble techniques differed significantly in their performance for SFP. Additionally, a cost-benefit analysis showed that SFP models based on the used ensemble techniques can help save software testing cost and effort for twenty of the twenty-eight used fault datasets.
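The setup described above, an ensemble technique wrapping a base classifier and scored with precision, recall, and AUC, can be sketched in Python. Rotation Forest, Decorate, and the other Weka techniques have no scikit-learn implementation, so this illustration substitutes bagging over a J48-like decision tree on synthetic, class-imbalanced data; the dataset, sizes, and parameters are assumptions for illustration, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, roc_auc_score

# Synthetic, imbalanced stand-in for a software fault dataset (hypothetical data)
X, y = make_classification(n_samples=500, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Decision tree as base learner inside an ensemble (bagging stands in here for
# the Weka techniques studied in the paper, which scikit-learn does not provide)
ens = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
ens.fit(X_tr, y_tr)

pred = ens.predict(X_te)
proba = ens.predict_proba(X_te)[:, 1]  # fault-class probability for AUC
print(precision_score(y_te, pred), recall_score(y_te, pred),
      roc_auc_score(y_te, proba))
```

The same train/score loop would be repeated per dataset and per ensemble/base-learner pair to reproduce the kind of comparison the paper reports.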


Notes

  1. https://sites.google.com/site/santoshiiitmdj/software-fault-datasets?authuser=0

  2. TP = True positive, FP = False positive, FN = False negative, TN = True negative, N = Negative
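Using the abbreviations above, the evaluation measures reported in the abstract follow directly from confusion-matrix counts. A minimal sketch (the counts are hypothetical; G-mean 1 is taken here as the geometric mean of recall and precision, its common definition):

```python
import math

def precision(tp, fp):
    # Precision = TP / (TP + FP): fraction of predicted-faulty modules that are faulty
    return tp / (tp + fp)

def recall(tp, fn):
    # Recall = TP / (TP + FN): fraction of faulty modules that are detected
    return tp / (tp + fn)

def g_mean1(tp, fp, fn):
    # G-mean 1: geometric mean of recall and precision
    return math.sqrt(recall(tp, fn) * precision(tp, fp))

# Hypothetical confusion-matrix counts
tp, fp, fn = 90, 10, 10
print(precision(tp, fp))   # 0.9
print(recall(tp, fn))      # 0.9
print(g_mean1(tp, fp, fn)) # 0.9
```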


Acknowledgments

We are thankful to the editor and the anonymous reviewers for their valuable comments that helped in improvement of the paper.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Santosh S. Rathore.

Ethics declarations

Conflict of interests

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Informed Consent

This article does not contain any studies with human participants.

Appendix

In this study, we used the Weka implementations of the ensemble techniques and base learners. The following parameter values were set for the three base learning algorithms and seven ensemble techniques.
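The paper configures these learners in Weka. As a rough Python analogue (an assumption for illustration, not the authors' actual setup), scikit-learn's AdaBoost with a J48-like decision tree base learner can be parameterized in a similar spirit:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical synthetic data standing in for a fault dataset
X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=1)

# Decision tree loosely analogous to Weka's J48 default of a minimum of
# 2 instances per leaf (-M 2); J48's confidence-factor pruning (-C 0.25)
# has no direct scikit-learn counterpart.
base = DecisionTreeClassifier(min_samples_leaf=2, random_state=1)

# Boosting ensemble loosely analogous to the boosting-based techniques
# (e.g., RealAdaBoost, MultiBoostAB) with 10 iterations
ens = AdaBoostClassifier(base, n_estimators=10, random_state=1)
ens.fit(X, y)
print(ens.score(X, y))  # training accuracy of the boosted ensemble
```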

Table 12


About this article


Cite this article

Rathore, S.S., Kumar, S. An empirical study of ensemble techniques for software fault prediction. Appl Intell 51, 3615–3644 (2021). https://doi.org/10.1007/s10489-020-01935-6

