ABSTRACT
While the financial consequences of software errors on the developer's side have been explored extensively, the cost arising for the end user has been largely neglected. One reason is the difficulty of linking errors in the code to the failure behavior that emerges in the running software. The problem becomes even harder when failure probabilities must be predicted from models or code metrics. In this paper we take a first step toward a cost prediction model by exploring how the financial consequences of already identified software failures can be modeled. Firefox, a well-known open-source application, serves as the test subject. Historically identified failures are modeled using fault trees, and usage profiles are employed to capture how users interact with the system and thereby to identify the resulting expenses. The presented approach demonstrates that the failure cost incurred by an organization using a specific piece of software can be modeled by establishing a relationship between user behavior, software failures, and cost. As future work, we aim to extend the model with software error prediction techniques and to validate it empirically.
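The combination of fault trees and usage profiles described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual model: the gate structure, event names, probabilities, usage figures, and the linear cost formula are all assumptions chosen for demonstration.

```python
# Hedged sketch: expected failure cost from a fault tree plus a usage profile.
# All numbers, event names, and the cost formula are illustrative assumptions.

def or_gate(probs):
    """Probability that at least one of several independent input events occurs."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def and_gate(probs):
    """Probability that all independent input events occur together."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Basic-event probabilities per execution of the affected feature (assumed).
p_render_fault = 0.002
p_network_fault = 0.001
p_plugin_fault = 0.004

# Top event: the user-visible failure occurs if the rendering fault fires,
# or if the network and plugin faults occur together.
p_failure = or_gate([p_render_fault,
                     and_gate([p_network_fault, p_plugin_fault])])

# Usage profile: how often the feature is exercised, and what one failure
# occurrence costs the organization (e.g. lost work time) -- both assumed.
executions_per_day = 50
employees = 200
cost_per_failure_eur = 2.5

expected_daily_cost = (p_failure * executions_per_day
                       * employees * cost_per_failure_eur)
print(f"P(failure per execution) = {p_failure:.6f}")
print(f"Expected failure cost per day: {expected_daily_cost:.2f} EUR")
```

The sketch captures the relationship the paper establishes: fault trees give the probability of a user-visible failure, the usage profile scales that probability to organizational exposure, and a cost rate converts exposure into money.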
Towards a software failure cost impact model for the customer: an analysis of an open source product