
Mehr Testwirtschaftlichkeit durch Value-Driven-Testing

Abstract

Project managers frequently question the economic value of testing, and rightly so: until now there have been no measurable arguments for why, and how much, testing should be done. Value-driven software engineering is an approach by Barry Boehm that aligns the development of software with its economic value. In this article we extend this approach to the field of software testing and present field-proven metrics for the costs and benefits of testing, so that its return on investment (ROI) can be calculated. This puts test managers in a position to justify the high costs of testing.
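As a rough orientation, and not the authors' specific cost and benefit model (which is developed in the article itself), the ROI of testing follows the generic ROI definition, net benefit divided by cost:

$$
\mathrm{ROI}_{\mathrm{test}} = \frac{\text{benefit of testing} - \text{cost of testing}}{\text{cost of testing}}
$$

Here the benefit is typically quantified as the cost of the defects that testing uncovers and thereby keeps out of production, and the cost as the total testing effort; the article proposes proven metrics for both sides of this quotient.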


Author information

Correspondence to Harry M. Sneed.

Cite this article

Sneed, H., Jungmayr, S. Mehr Testwirtschaftlichkeit durch Value-Driven-Testing. Informatik Spektrum 34, 192–209 (2011). https://doi.org/10.1007/s00287-010-0498-3
