Using Defect Taxonomies to Improve the Maturity of the System Test Process: Results from an Industrial Case Study

  • Conference paper

Published in: Software Quality. Increasing Value in Software and Systems Development (SWQD 2013)

Part of the book series: Lecture Notes in Business Information Processing (LNBIP, volume 133)

Abstract

Defect taxonomies collect and organize the domain knowledge and project experience of experts and are a valuable instrument of system testing for several reasons. They provide systematic backup for the design of tests, support decisions on the allocation of testing resources, and are a suitable basis for measuring product and test quality. In this paper, we propose a method of system testing based on defect taxonomies and investigate how these can systematically improve the efficiency and effectiveness, i.e. the maturity, of requirements-based testing. The method is evaluated via an industrial case study based on two projects from a public health insurance institution, comparing one project with defect taxonomy-supported testing and one without. Empirical data confirm that system testing supported by defect taxonomies (1) reduces the number of test cases and (2) increases the number of identified failures per test case.
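The two measures the abstract reports, test-case count and identified failures per test case, can be sketched as a small comparison script. This is an illustrative sketch only: the project names and all numbers below are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch with hypothetical numbers (not the study's data):
# compare two projects by the abstract's two measures, the number of test
# cases (efficiency) and identified failures per test case (effectiveness).

def failures_per_test_case(test_cases: int, failures: int) -> float:
    """Effectiveness measure: identified failures divided by executed test cases."""
    return failures / test_cases

# Hypothetical projects: one tested without defect taxonomies, one with.
projects = {
    "without taxonomy": {"test_cases": 400, "failures": 60},
    "taxonomy-supported": {"test_cases": 300, "failures": 75},
}

for name, p in projects.items():
    rate = failures_per_test_case(p["test_cases"], p["failures"])
    print(f"{name}: {p['test_cases']} test cases, {rate:.2f} failures per test case")
```

Under these made-up figures, the taxonomy-supported project would show both fewer test cases and a higher failure-detection rate per test case, which is the pattern the paper's empirical data confirm.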




Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Felderer, M., Beer, A. (2013). Using Defect Taxonomies to Improve the Maturity of the System Test Process: Results from an Industrial Case Study. In: Winkler, D., Biffl, S., Bergsmann, J. (eds) Software Quality. Increasing Value in Software and Systems Development. SWQD 2013. Lecture Notes in Business Information Processing, vol 133. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35702-2_9

  • DOI: https://doi.org/10.1007/978-3-642-35702-2_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-35701-5

  • Online ISBN: 978-3-642-35702-2

  • eBook Packages: Computer Science (R0)
