DOI: 10.1145/1868328.1868354

Towards a software failure cost impact model for the customer: an analysis of an open source product

Published: 12 September 2010

ABSTRACT

While the financial consequences of software errors on the developer's side have been explored extensively, the costs arising for the end user have been largely neglected. One reason is the difficulty of linking errors in the code to the emergent failure behavior of the software. The problem becomes even harder when failure probabilities must be predicted from models or code metrics. In this paper we take a first step towards a cost prediction model by exploring how the financial consequences of already identified software failures can be modeled. Firefox, a well-known open source web browser, serves as the test subject. Historically identified failures are modeled using fault trees, and usage profiles describing the interaction with the system are employed to identify the resulting expenses. The presented approach demonstrates that the failure costs an organization incurs by using a specific piece of software can be modeled by establishing a relationship between user behavior, software failures, and cost. As future work, we plan to extend the model with software error prediction techniques and to validate it empirically.
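The abstract names the ingredients of the model (fault trees over identified failures, usage profiles, per-failure cost) without spelling out how they combine. The sketch below is one plausible reading, not the paper's actual formulation: the expected cost per user-day is frequency × P(failure) × cost per failure, with P(failure) evaluated bottom-up over a fault tree under the usual independence assumption for basic events. All event names, probabilities, and cost figures are invented for illustration.

```python
from dataclasses import dataclass
from math import prod

@dataclass
class BasicEvent:
    p: float  # occurrence probability of this elementary fault per operation

@dataclass
class Gate:
    kind: str       # "AND" or "OR"
    children: list  # BasicEvent or Gate nodes

def failure_prob(node) -> float:
    """P(top event) for a fault tree, assuming independent basic events."""
    if isinstance(node, BasicEvent):
        return node.p
    ps = [failure_prob(c) for c in node.children]
    if node.kind == "AND":
        return prod(ps)                         # all children must fail
    return 1.0 - prod(1.0 - p for p in ps)      # OR: at least one child fails

# Hypothetical fault tree for a single failure mode ("crash on page load").
crash_on_load = Gate("OR", [
    BasicEvent(0.001),                                   # rendering fault
    Gate("AND", [BasicEvent(0.05), BasicEvent(0.02)]),   # plugin fault AND bad config
])

# Hypothetical usage profile: (operation, uses per user-day, fault tree, cost per failure).
usage_profile = [("load_page", 200, crash_on_load, 1.50)]

expected_cost = sum(freq * failure_prob(tree) * cost
                    for _op, freq, tree, cost in usage_profile)
print(f"Expected failure cost per user-day: EUR {expected_cost:.2f}")  # ~0.60
```

Scaling such a per-user figure by an organization's headcount would yield the organization-level cost estimate the abstract alludes to.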


Published in

PROMISE '10: Proceedings of the 6th International Conference on Predictive Models in Software Engineering
September 2010, 195 pages
ISBN: 9781450304047
DOI: 10.1145/1868328
General Chair: Tim Menzies · Program Chair: Gunes Koru

                Copyright © 2010 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 64 of 125 submissions, 51%
