Exploratory studies of the software testing methods used by novice programmers

  • Section I: Third SEI Conference on Software Engineering Education
  • Conference paper
Software Engineering Education (SEI 1989)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 376)

Abstract

Computer science courses, including those at the introductory level, require students to be capable of demonstrating that the programs they write satisfy the problem specifications. This demonstration of correctness is nearly always done via testing. Yet these courses tend to offer little or no guidance to the students in selecting appropriate test data. This paper analyzes the characteristics of test data used by novice programmers in the absence of such guidance.

Test data generated by over 100 students in introductory courses at two large public institutions were analyzed for feature, statement, and validity coverage as described in the IEEE standard for unit testing. The student test data were often found inadequate to cover all but the highest-level features described in the problem specifications. Statement coverage was frequently not achieved, and most of the students did not include invalid inputs in their test suites, even when they wrote their programs to check for such invalid inputs. Students tended to achieve these levels of coverage more consistently when testing someone else's program than when testing their own.
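
To make these coverage notions concrete, here is a minimal sketch in Python (a language the study did not use); the grading program, its assumed specification, and all test values are hypothetical illustrations, not data from the study. A suite of only typical valid inputs leaves a statement unexercised and contains no invalid inputs, while a suite derived from the specification covers each feature and both kinds of invalid input.

```python
# A minimal sketch (not taken from the study) contrasting a "novice-style"
# suite with one derived from the specification. The grading function, the
# score ranges, and all test values are illustrative assumptions.

def letter_grade(score):
    """Return a letter grade; scores outside 0..100 are invalid per the (assumed) spec."""
    if not 0 <= score <= 100:        # validity check for invalid inputs
        raise ValueError("score out of range")
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 60:
        return "C"
    return "F"                       # never reached by the novice suite below

# Typical valid inputs only: the "F" statement and the ValueError branch go
# unexercised, so neither statement coverage nor validity coverage is achieved.
novice_tests = [(95, "A"), (85, "B"), (70, "C")]

# One case per feature (each grade band, with boundary values) plus invalid
# inputs on both sides of the valid range.
systematic_tests = [(100, "A"), (90, "A"), (89, "B"), (60, "C"), (59, "F"),
                    (-1, ValueError), (101, ValueError)]

def run(tests):
    for score, expected in tests:
        try:
            result = letter_grade(score)
        except ValueError as exc:
            result = type(exc)
        status = "ok" if result == expected else "FAIL"
        print(f"{status}: letter_grade({score}) -> {result}")

if __name__ == "__main__":
    run(novice_tests)
    run(systematic_tests)
```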

About two-thirds of the defective programs found in the study were tested well enough by the student to reveal a defect, but in many cases the defects went unrecognized even when revealed. Simple systematic testing strategies, based only on information contained in the specifications, were found capable of revealing defects in most of the defective programs, including most of those for which the student's own data were inadequate to reveal a defect.
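
The specification-only strategies mentioned above can be read as equivalence partitioning with boundary values; the sketch below illustrates that idea under that assumption and is not the procedure used in the paper. The NumericRange type and spec_based_inputs helper are hypothetical names introduced for the example.

```python
# A hypothetical sketch of deriving test inputs purely from a specification,
# in the spirit of equivalence partitioning with boundary values. The
# NumericRange type and spec_based_inputs helper are illustrative only.

from dataclasses import dataclass

@dataclass
class NumericRange:
    """Specification fragment: an integer input is valid on [low, high]."""
    low: int
    high: int

def spec_based_inputs(spec: NumericRange):
    """Boundaries and an interior point of the valid class, plus one value
    from each invalid class (below and above the valid range)."""
    valid = [spec.low, spec.low + 1, (spec.low + spec.high) // 2,
             spec.high - 1, spec.high]
    invalid = [spec.low - 1, spec.high + 1]
    return valid, invalid

if __name__ == "__main__":
    valid, invalid = spec_based_inputs(NumericRange(0, 100))
    print("valid inputs to test:  ", valid)    # [0, 1, 50, 99, 100]
    print("invalid inputs to test:", invalid)  # [-1, 101]
```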

Dr. Zweben's research is supported in part by grant CCR-8802312 from the National Science Foundation.

Bibliography

  1. V. Basili and R. Selby, "Comparing the Effectiveness of Software Testing Strategies," IEEE Transactions on Software Engineering, SE-13, 12, December 1987, pp. 1278–1296.

  2. W.H. Beyer (ed.), Handbook of Probability and Statistics, Chemical Rubber Co., 1968.

  3. C. Fung, "A Methodology for the Collection and Evaluation of Software Error Data," Ph.D. Thesis, Dept. of Computer and Information Science, Ohio State University, 1985.

  4. M. Girgis and M. Woodward, "An Experimental Comparison of the Error Exposing Ability of Program Testing Criteria," Proceedings of the Workshop on Software Testing, Banff, Alberta, Canada, July 1986, pp. 64–73.

  5. "IEEE Standard for Software Unit Testing," ANSI/IEEE Std. 1008-1987, IEEE, 1987.

  6. L. Morell, "Unit Testing and Analysis," SEI Curriculum Module SEI-CM-9-1.0, Software Engineering Institute, Pittsburgh, PA, October 1987.

  7. P.G. Moulton and M.E. Muller, "DITRAN — A Compiler Emphasizing Diagnostics," Communications of the ACM, 10, 1, January 1967, pp. 50–52.

  8. G. Myers, "A Controlled Experiment in Program Testing and Code Walkthroughs/Inspections," Communications of the ACM, 21, 9, September 1978, pp. 760–768.

  9. R. Selby, V. Basili and F. T. Baker, "Cleanroom Software Development: An Empirical Evaluation," IEEE Transactions on Software Engineering, SE-13, 9, September 1987, pp. 1027–1037.

  10. E.A. Young, "Human Errors in Programming," Int'l. Journal of Man-Machine Studies, 6, 3, 1974, pp. 361–376.

Editor information

Norman E. Gibbs

Copyright information

© 1989 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zweben, S.H., Stringfellow, C., Barton, R. (1989). Exploratory studies of the software testing methods used by novice programmers. In: Gibbs, N.E. (eds) Software Engineering Education. SEI 1989. Lecture Notes in Computer Science, vol 376. Springer, New York, NY. https://doi.org/10.1007/BFb0042358

  • DOI: https://doi.org/10.1007/BFb0042358

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-0-387-97090-5

  • Online ISBN: 978-0-387-34791-2
