
Predicting Software Fault Proneness Model Using Neural Network

  • Conference paper
Product-Focused Software Process Improvement (PROFES 2008)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 5089)

Abstract

The importance of constructing models for predicting software quality attributes is increasing, which has led to the use of artificial intelligence techniques such as Artificial Neural Networks (ANN). The goal of this paper is to empirically compare a traditional technique, Logistic Regression (LR), with ANN for assessing software quality. The study uses data from a public-domain NASA data set to examine the effect of software metrics on fault proneness. Fault proneness models were built using the LR and ANN methods, and their performance was compared by Receiver Operating Characteristic (ROC) analysis. The areas under the ROC curves are 0.78 and 0.745 for the LR and ANN models, respectively. The predicted models show that software metrics are related to fault proneness and predict faulty classes with more than 70 percent accuracy. The study shows that the ANN method can also be used to construct software quality models, and further similar studies should investigate the issue. Based on these results, it is reasonable to claim that such a model could help in planning and executing testing by focusing resources on fault-prone parts of the design and code.
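For readers who want to reproduce this style of comparison, the sketch below shows how an LR and an ANN classifier might be evaluated against each other with ROC analysis on a NASA MDP-style metrics data set. It is not the authors' implementation: the file name, column names, and network size are assumptions, and scikit-learn stands in for whatever tooling the study actually used.

```python
# Minimal sketch (not the paper's original code): compare Logistic Regression
# and a small neural network on static code metrics via area under the ROC curve,
# assuming a NASA MDP-style CSV with metric columns and a binary "defective" label.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical file and column names; adjust to the actual data set layout.
data = pd.read_csv("kc1_metrics.csv")
X = data.drop(columns=["defective"])   # software metrics (e.g. LOC, cyclomatic complexity)
y = data["defective"]                  # 1 = fault-prone module, 0 = not fault-prone

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Standardize inputs; this matters especially for the neural network.
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

lr = LogisticRegression(max_iter=1000).fit(X_train_s, y_train)
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=42).fit(X_train_s, y_train)

# Compare the two models by AUC, the criterion used in the paper.
for name, model in [("LR", lr), ("ANN", ann)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test_s)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

A held-out split is used here for simplicity; a cross-validation scheme, as is common in fault-proneness studies, would give a more stable AUC estimate.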

Author information

Authors: Y. Singh, A. Kaur, R. Malhotra

Editor information

Editors: Andreas Jedlitschka, Outi Salo


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Singh, Y., Kaur, A., Malhotra, R. (2008). Predicting Software Fault Proneness Model Using Neural Network. In: Jedlitschka, A., Salo, O. (eds) Product-Focused Software Process Improvement. PROFES 2008. Lecture Notes in Computer Science, vol 5089. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69566-0_18

  • DOI: https://doi.org/10.1007/978-3-540-69566-0_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-69564-6

  • Online ISBN: 978-3-540-69566-0

  • eBook Packages: Computer Science (R0)
