
Comparing the Effectiveness of Testing Techniques

Chapter in Formal Methods and Testing

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4949)

Abstract

Testing software systems requires practitioners to decide how to select test data. This chapter discusses what it means for one test data selection criterion to be more effective than another. Several proposed comparison relations are examined, highlighting the strengths and weaknesses of each, followed by a discussion of how these relations evolved and an argument that large-scale empirical studies are needed.
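
One classical relation in this literature is subsumption: criterion C1 subsumes criterion C2 if every test set that is adequate for C1 is also adequate for C2. As a purely illustrative sketch (not code from the chapter; the toy test pool and coverage maps are hypothetical), the Python below checks subsumption by brute force over all subsets of a small pool, showing that a branch-like criterion subsumes a statement-like one here while the converse fails:

    from itertools import chain, combinations

    # Hypothetical coverage maps for a toy program: which statement
    # obligations (s1..s3) and branch obligations (b1..b2) each test
    # in a small pool exercises.
    STMT_COV = {"t1": {"s1", "s2"}, "t2": {"s1", "s3"},
                "t3": {"s1", "s2", "s3"}, "t4": {"s1", "s2", "s3"}}
    BRANCH_COV = {"t1": {"b1"}, "t2": {"b2"},
                  "t3": {"b1", "b2"}, "t4": {"b1"}}
    ALL_STMTS = {"s1", "s2", "s3"}
    ALL_BRANCHES = {"b1", "b2"}

    def adequate(tests, coverage, obligations):
        # A test set satisfies a criterion if it covers every obligation.
        covered = set().union(*(coverage[t] for t in tests))
        return obligations <= covered

    def subsumes(cov1, obl1, cov2, obl2, pool):
        # C1 subsumes C2 iff every C1-adequate subset of the pool is
        # also C2-adequate (checked here by brute-force enumeration).
        subsets = chain.from_iterable(
            combinations(pool, r) for r in range(len(pool) + 1))
        return all(adequate(s, cov2, obl2)
                   for s in subsets if adequate(s, cov1, obl1))

    pool = ["t1", "t2", "t3", "t4"]
    print(subsumes(BRANCH_COV, ALL_BRANCHES, STMT_COV, ALL_STMTS, pool))  # True
    # The converse fails: {"t4"} covers all statements but misses b2.
    print(subsumes(STMT_COV, ALL_STMTS, BRANCH_COV, ALL_BRANCHES, pool))  # False

Because subsumption quantifies over all adequate test sets it is a strong guarantee, but a well-known weakness of such relations is that subsumption alone does not imply better fault detection in practice.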

Editor information

Robert M. Hierons, Jonathan P. Bowen, Mark Harman

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Weyuker, E.J. (2008). Comparing the Effectiveness of Testing Techniques. In: Hierons, R.M., Bowen, J.P., Harman, M. (eds) Formal Methods and Testing. Lecture Notes in Computer Science, vol 4949. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78917-8_9

  • DOI: https://doi.org/10.1007/978-3-540-78917-8_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-78916-1

  • Online ISBN: 978-3-540-78917-8

  • eBook Packages: Computer Science (R0)
