Abstract
Testing software systems requires practitioners to decide how to select test data. This chapter examines what it means for one test data selection criterion to be more effective than another. Several proposed comparison relations are discussed, highlighting the strengths and weaknesses of each. Also included is a discussion of how these relations evolved, along with an argument that large-scale empirical studies are needed.
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Weyuker, E.J. (2008). Comparing the Effectiveness of Testing Techniques. In: Hierons, R.M., Bowen, J.P., Harman, M. (eds) Formal Methods and Testing. Lecture Notes in Computer Science, vol 4949. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78917-8_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-78916-1
Online ISBN: 978-3-540-78917-8
eBook Packages: Computer Science (R0)