
A quantitative analysis of the unit verification perspective on fault distributions in complex software systems: an operational replication


Abstract

Unit verification, including software inspections and unit tests, is usually the first code verification phase in the software development process. However, the principles of unit verification are weakly explored, mostly because unit verification data are rarely collected systematically and only a few studies with such data from industry have been published. We therefore explore the theory of fault distributions, originating in the quantitative analysis by Fenton and Ohlsson, in the little-explored context of unit verification in large-scale software development. We conduct a quantitative case study on a sequence of four development projects on consecutive releases of the same complex software product line system for telecommunication exchanges. We replicate the operationalization of earlier studies and analyze hypotheses related to the Pareto principle of fault distribution, the persistence of faults, the effects of module size, and quality in terms of fault densities, now from the perspective of unit verification. The patterns in the unit verification results resemble those of later verification phases, e.g., regarding the Pareto principle, and may thus be used for prediction and planning purposes. Using unit verification results as predictors may improve the quality and efficiency of software verification.
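As a concrete illustration of the Pareto principle referred to above, the following minimal Python sketch (not part of the study's actual analysis; the per-module fault counts are hypothetical) computes the share of faults concentrated in the most fault-prone 20 % of modules, the kind of 20-80 check examined by Fenton and Ohlsson and replicated here for unit verification data.

```python
# Minimal sketch of a Pareto (20-80) check on per-module fault counts.
# The fault counts below are hypothetical, for illustration only.

def pareto_share(fault_counts, module_fraction=0.20):
    """Return the share of all faults found in the top `module_fraction`
    of modules, ranked by descending fault count."""
    ranked = sorted(fault_counts, reverse=True)
    top_n = max(1, round(len(ranked) * module_fraction))
    total = sum(ranked)
    return sum(ranked[:top_n]) / total if total else 0.0

# Hypothetical unit-verification fault counts for 10 modules.
faults = [42, 17, 9, 5, 3, 2, 1, 1, 0, 0]
print(f"Top 20% of modules hold {pareto_share(faults):.0%} of faults")
# -> roughly 74% for this made-up data; a value near 80% would be
#    consistent with the Pareto pattern discussed in the paper.
```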


References

  • Andersson, C., & Runeson, P. (2007). A replicated quantitative analysis of fault distributions in complex software systems. IEEE Transactions on Software Engineering, 33(5), 273–286.

  • Aurum, A., Petersson, H., & Wohlin, C. (2002). State-of-the-art: Software inspections after 25 years. Software Testing, Verification and Reliability, 12(3), 133–154.

  • Basili, V. R., & Perricone, B. T. (1984). Software errors and complexity: An empirical investigation. Communications of the ACM, 27(1), 42–52.

  • Basili, V. R., & Selby, R. W. (1987). Comparing the effectiveness of software testing strategies. IEEE Transactions on Software Engineering, 13(12), 1278–1296.

  • Bhat, T., & Nagappan, N. (2006). Evaluating the efficacy of test-driven development: Industrial case studies. In Proceedings of the International Symposium on Empirical Software Engineering (pp. 356–363).

  • Biffl, S., & Gutjahr, W. J. (2002). Using a reliability growth model to control software inspection. Empirical Software Engineering, 7(3), 257–284.

  • Briand, L. C., El Emam, K., & Freimut, B. G. (2000). A comprehensive evaluation of capture-recapture models for estimating software defect content. IEEE Transactions on Software Engineering, 26(6), 518–540.

  • Briand, L., El Emam, K., Laitenberger, O., & Fussbroich, T. (1998). Using simulation to build inspection efficiency benchmarks for development projects. In Proceedings of the 20th International Conference on Software Engineering (pp. 340–349).

  • Carver, J. (2010). Towards reporting guidelines for experimental replications: A proposal. In Proceedings of the 1st International Workshop on Replication in Empirical Software Engineering Research (RESER), Cape Town, South Africa.

  • Catal, C., & Diri, B. (2009). A systematic review of software fault prediction studies. Expert Systems with Applications, 36(4), 7346–7354.

  • Concas, G., Marchesi, M., Murgia, A., Tonelli, R., & Turnu, I. (2011). On the distribution of bugs in the Eclipse system. IEEE Transactions on Software Engineering, 37(6), 872–877.

  • El Emam, K., Laitenberger, O., & Harbich, T. (2000). The application of subjective estimates of effectiveness to controlling software inspections. The Journal of Systems and Software, 54(2), 119–136.

  • Engström, E., & Runeson, P. (2010). A qualitative survey of regression testing practices. In M. Ali Babar, M. Vierimaa, & M. Oivo (Eds.), Proceedings of the 11th International Conference on Product-Focused Software Process Improvement (PROFES), Lecture Notes in Computer Science 6156 (pp. 3–16). Berlin/Heidelberg: Springer.

  • Fagan, M. (2002). Design and code inspections to reduce errors in program development. In Software pioneers. New York: Springer-Verlag New York, Inc.

  • Fenton, N., & Neil, M. (1999). A critique of software defect prediction models. IEEE Transactions on Software Engineering, 25(5), 675–689.

  • Fenton, N. E., & Ohlsson, N. (2000). Quantitative analysis of faults and failures in a complex software system. IEEE Transactions on Software Engineering, 26(8), 797–814.

  • Galinac Grbac, T., & Huljenić, D. (2011). Defect detection effectiveness and product quality in global software development. In Proceedings of the 12th International Conference on Product-Focused Software Process Improvement (PROFES), Lecture Notes in Computer Science 6759. Torre Canne, Italy: Springer, 20–22 June 2011.

  • Galinac Grbac, T., & Huljenić, D. (2015). On the probability distribution of faults in complex software systems. Information and Software Technology, 58, 250–258.

  • Galinac Grbac, T., Car, Z., & Huljenić, D. (2012). Quantifying value of adding inspection effort early in the development process: A case study. IET Software, 6(3), 249–259.

  • Galinac Grbac, T., Car, Z., & Huljenić, D. (2015). A quality cost reduction model for large-scale software development. Software Quality Journal, 23, 363–390.

  • Galinac Grbac, T., Runeson, P., & Huljenić, D. (2013). A second replicated quantitative analysis of fault distributions in complex software systems. IEEE Transactions on Software Engineering, 39(4), 462–476.

  • Gilb, T., & Graham, D. (1993). Software inspection. Boston: Addison-Wesley.

  • Gómez, O. S., Juristo, N., & Vegas, S. (2014). Understanding replication of experiments in software engineering: A classification. Information and Software Technology, 56(8), 1033–1048.

  • Hall, T., Beecham, S., Bowes, D., Gray, D., & Counsell, S. (2012). A systematic literature review on fault prediction performance in software engineering. IEEE Transactions on Software Engineering, 38(6), 1276–1304.

  • Hannay, J. E., Sjøberg, D. I. K., & Dybå, T. (2007). A systematic review of theory use in software engineering experiments. IEEE Transactions on Software Engineering, 33(2), 87–107.

  • Hetzel, W. C. (1976). An experimental analysis of program verification methods. Ph.D. dissertation, The University of North Carolina at Chapel Hill.

  • IEEE Std. 610.12-1990. (1990). Standard glossary of software engineering terminology. IEEE.

  • Juristo, N., Moreno, A. M., & Vegas, S. (2004). Reviewing 25 years of testing technique experiments. Empirical Software Engineering, 9(1), 7–44.

  • Juristo, N., Vegas, S., Solari, M., Abrahão, S., & Ramos, I. (2012). Comparing the effectiveness of equivalence partitioning, branch testing and code reading by stepwise abstraction applied by subjects. In Proceedings of the Fifth IEEE International Conference on Software Testing, Verification and Validation (pp. 330–339).

  • Juristo Juzgado, N., Vegas, S., Solari, M., Abrahão, S., & Ramos, I. (2013). A process for managing interaction between experimenters to get useful similar replications. Information and Software Technology, 55(2), 215–225.

  • Kamsties, E., & Lott, C. M. (1995). An empirical evaluation of three defect-detection techniques. In Proceedings of the 5th European Software Engineering Conference (pp. 362–383).

  • Kitchenham, B. A. (2008). The role of replications in empirical software engineering: A word of warning. Empirical Software Engineering, 13(2), 219–221.

  • Koru, A. G., Zhang, D., El Emam, K., & Liu, H. (2009). An investigation into the functional form of the size-defect relationship for software modules. IEEE Transactions on Software Engineering, 35(2), 293–304.

  • Mäntylä, M. V., & Lassenius, C. (2009). What types of defects are really discovered in code reviews? IEEE Transactions on Software Engineering, 35(3), 430–448.

  • Miller, J. (2005). Replicating software engineering experiments: A poisoned chalice or the holy grail. Information and Software Technology, 47(4), 233–244.

  • Munir, H., Moayyed, M., & Petersen, K. (2014). Considering rigor and relevance when evaluating test driven development: A systematic review. Information and Software Technology. doi:10.1016/j.infsof.2014.01.002.

  • Myers, G. J. (1978). A controlled experiment in program testing and code walkthroughs/inspections. Communications of the ACM, 21(9), 760–768.

  • Nagappan, N., Maximilien, E. M., Bhat, T., & Williams, L. (2008). Realizing quality improvement through test driven development: Results and experiences of four industrial teams. Empirical Software Engineering, 13(3), 289–302.

  • Ohlsson, N., & Alberg, H. (1996). Predicting fault-prone software modules in telephone switches. IEEE Transactions on Software Engineering, 22(12), 886–894.

  • Petersson, H., Thelin, T., Runeson, P., & Wohlin, C. (2004). Capture-recapture in software inspections after 10 years research: Theory, evaluation and application. The Journal of Systems and Software, 72(2), 249–264.

  • Runeson, P. (2006). A survey of unit testing practices. IEEE Software, 23(4), 22–29.

  • Runeson, P., Andersson, C., Thelin, T., Andrews, A., & Berling, T. (2006). What do we know about defect detection methods? IEEE Software, 23(3), 82–90.

  • Runeson, P., Höst, M., Rainer, A., & Regnell, B. (2012). Case study research in software engineering: Guidelines and examples. New York: Wiley.

  • Runeson, P., Stefik, A., & Andrews, A. (2014). Variation factors in the design and analysis of replicated controlled experiments: Three (dis)similar studies on inspections versus unit testing. Empirical Software Engineering, 19(6), 1781–1808.

  • Shull, F. J., Carver, J. C., Vegas, S., & Juristo, N. (2008). The role of replications in empirical software engineering. Empirical Software Engineering, 13(2), 211–218.

  • Siy, H., & Votta, L. (2001). Does the modern code inspection have value? In Proceedings of the IEEE International Conference on Software Maintenance (pp. 281–289).

  • Sjøberg, D. I. K., Dybå, T., Anda, B., & Hannay, J. E. (2008). Building theories in software engineering. In Guide to advanced empirical software engineering. New York: Springer.

  • Strauss, S. H., & Ebenau, R. G. (1994). Software inspection process. New York: McGraw-Hill.

  • Wohlin, C., & Runeson, P. (1998). Defect content estimations from review data. In Proceedings of the 20th International Conference on Software Engineering (pp. 400–409).

  • Wohlin, C., Runeson, P., Höst, M., Ohlsson, M. C., Regnell, B., & Wesslén, A. (2012). Experimentation in software engineering. New York: Springer.

  • Wood, M., Roper, M., Brooks, A., & Miller, J. (1997). Comparing and combining software defect detection techniques: A replicated empirical study. SIGSOFT Software Engineering Notes, 22(6), 262–277.

  • Zhang, H. (2008). On the distribution of software faults. IEEE Transactions on Software Engineering, 34(2), 301–302.


Acknowledgments

The first author is partially supported by the University of Rijeka research Grant 13.09.2.2.16.

Author information

Corresponding author

Correspondence to Tihana Galinac Grbac.


About this article


Cite this article

Galinac Grbac, T., Runeson, P., & Huljenić, D. A quantitative analysis of the unit verification perspective on fault distributions in complex software systems: an operational replication. Software Qual J 24, 967–995 (2016). https://doi.org/10.1007/s11219-015-9273-7

