On the Effectiveness of Cost Sensitive Neural Networks for Software Defect Prediction

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 614)

Abstract

The cost of fixing a software defect varies with the phase in which it is uncovered: a defect found in the post-release phase costs far more than one uncovered pre-release. Defect prediction models have therefore been proposed to predict bugs in the pre-release phase. Any prediction model can make two kinds of misclassification errors, Type I and Type II, and for the defect prediction problem Type II errors are found to be the more costly. However, only a few studies have considered misclassification costs while building or evaluating defect prediction models. We built classification models using three cost-sensitive boosting Neural Network methods, namely CSBNN-TM, CSBNN-WU1 and CSBNN-WU2, and compared their performance with traditional machine learning algorithms: Logistic Regression, Naive Bayes, Random Forest, Bayesian Network, Neural Networks, k-Nearest Neighbors and Decision Tree. The resulting models are compared using a cost-centric measure, the Normalized Expected Cost of Misclassification (NECM).
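To make the evaluation measure concrete, a common formulation of NECM in the defect-prediction literature is NECM = (C_I * FP + C_II * FN) / N, where C_I and C_II are the unit costs of Type I (false positive) and Type II (false negative) errors, FP and FN are the corresponding error counts, and N is the total number of modules. The sketch below is illustrative only: the synthetic data, the 5:1 cost ratio and the particular baseline classifiers are assumptions rather than the paper's experimental setup, and the CSBNN variants themselves are not implemented here.

    # Illustrative sketch (not the paper's code): scoring baseline classifiers
    # with a cost-centric measure. Cost values and dataset are assumptions.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import confusion_matrix

    def necm(y_true, y_pred, c_fp=1.0, c_fn=5.0):
        """NECM = (c_fp * false positives + c_fn * false negatives) / total modules."""
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        return (c_fp * fp + c_fn * fn) / len(y_true)

    # Synthetic, imbalanced stand-in for a defect dataset (assumption).
    X, y = make_classification(n_samples=1000, n_features=20,
                               weights=[0.85, 0.15], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    for name, clf in [("Logistic Regression", LogisticRegression(max_iter=1000)),
                      ("Naive Bayes", GaussianNB()),
                      ("Random Forest", RandomForestClassifier(random_state=0))]:
        clf.fit(X_tr, y_tr)
        print(name, round(necm(y_te, clf.predict(X_te)), 3))

A lower NECM indicates a cheaper overall misclassification burden; because false negatives (Type II errors) are weighted more heavily, a classifier can trade some false alarms for fewer missed defects and still improve its score.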



Author information


Corresponding author

Correspondence to Lalita Bhanu Murthy Neti.


Copyright information

© 2018 Springer International Publishing AG

About this paper

Cite this paper

Muthukumaran, K., Dasgupta, A., Abhidnya, S., Neti, L.B.M. (2018). On the Effectiveness of Cost Sensitive Neural Networks for Software Defect Prediction. In: Abraham, A., Cherukuri, A., Madureira, A., Muda, A. (eds) Proceedings of the Eighth International Conference on Soft Computing and Pattern Recognition (SoCPaR 2016). SoCPaR 2016. Advances in Intelligent Systems and Computing, vol 614. Springer, Cham. https://doi.org/10.1007/978-3-319-60618-7_55


  • DOI: https://doi.org/10.1007/978-3-319-60618-7_55


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-60617-0

  • Online ISBN: 978-3-319-60618-7

  • eBook Packages: Engineering, Engineering (R0)
