An empirical evaluation (and specification) of the all-du-paths testing criterion

Software Engineering Journal

The all-du-paths structural testing criterion is one of the most discriminating of the data-flow testing criteria. Unfortunately, in the worst case, the criterion requires an intractable number of test cases. In a case study of an industrial software system, we find that this worst case is rare: eighty percent of the subroutines require ten or fewer test cases, and only one subroutine out of 143 requires an intractable number of tests. Even for that subroutine, the number of required test cases becomes tractable under the all-uses criterion. The paper includes a formal specification of both the all-du-paths criterion and the software tools used to estimate a minimal number of test cases necessary to meet the criterion.
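
A du-path is a definition-clear path from a statement that defines a variable to a statement that uses it, and the all-du-paths criterion asks the test set to exercise every such path (restricted to paths without internal loops). The short Python sketch below illustrates the underlying computation on a small hand-built control-flow graph. It is an illustrative assumption rather than the paper's tooling: the case study analysed Pascal subroutines through a standard program representation, and the graph, node names, and the find_du_paths helper here are invented for the example.

# Illustrative sketch only: enumerate the loop-free, definition-clear paths
# from a variable's definition to each of its uses on a small control-flow
# graph. The graph encoding and helper are hypothetical, not the paper's tools.

from typing import Dict, List, Set

def find_du_paths(cfg: Dict[str, List[str]],
                  def_node: str,
                  use_nodes: Set[str],
                  redef_nodes: Set[str]) -> List[List[str]]:
    """Return every loop-free path from def_node to a node that uses the
    variable, along which the variable is not redefined (a def-clear path)."""
    paths: List[List[str]] = []

    def walk(node: str, path: List[str]) -> None:
        if node in use_nodes and len(path) > 1:
            paths.append(list(path))          # one du-path found
        for succ in cfg.get(node, []):
            if succ in path:                  # keep paths loop-free
                continue
            if succ in redef_nodes:           # path would no longer be def-clear
                continue
            path.append(succ)
            walk(succ, path)
            path.pop()                        # backtrack

    walk(def_node, [def_node])
    return paths

if __name__ == "__main__":
    # Two independent decisions between the definition of a variable at "d"
    # and its use at "u": 2 * 2 = 4 du-paths to cover.
    cfg = {
        "d":  ["b1", "b2"],
        "b1": ["j1"], "b2": ["j1"],
        "j1": ["c1", "c2"],
        "c1": ["u"],  "c2": ["u"],
        "u":  [],
    }
    du_paths = find_du_paths(cfg, "d", {"u"}, set())
    print(len(du_paths), "du-paths")          # -> 4
    for p in du_paths:
        print(" -> ".join(p))

Because each decision encountered between a definition and a use multiplies the number of def-clear paths, the count can grow exponentially in the worst case. This is why the criterion can demand an intractable number of test cases for some subroutines, even though, in the case study, most subroutines needed ten or fewer.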
