
Testing of object-oriented programming systems (OOPS): A fault-based approach

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 858)

Abstract

The goal of this paper is to examine the testing of object-oriented systems and to compare and contrast it with the testing of conventional programming language systems, with emphasis on fault-based testing. Conventional system testing, object-oriented system testing, and the application of conventional testing methods to object-oriented software are examined first, followed by the differences between testing conventional (procedural) software and testing object-oriented software. Software faults (defects) are then examined, with emphasis on developing a preliminary taxonomy of faults specific to object-oriented systems. Test strategy adequacy is briefly presented. From these examinations, a set of candidate testing methods for object-oriented programming systems is identified.
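
To make the notion of faults specific to object-oriented systems concrete, the sketch below (not drawn from the paper; the classes BoundedCounter and EagerCounter and their methods are hypothetical) illustrates one such fault in Java: a subclass override drops the bound check that maintains its superclass's invariant, and a fault-based test derived from the superclass contract, re-run against the subclass, exposes the defect.

    // Illustrative sketch only: an inheritance-specific fault of the kind a
    // fault-based object-oriented testing method would target. All names are hypothetical.

    class BoundedCounter {
        protected int count;
        protected final int limit;

        BoundedCounter(int limit) { this.limit = limit; }

        // Contract: value() never exceeds limit().
        public void increment() {
            if (count < limit) count++;
        }

        public int value() { return count; }
        public int limit() { return limit; }
    }

    // Faulty subclass: the override silently drops the bound check,
    // violating the invariant inherited from BoundedCounter.
    class EagerCounter extends BoundedCounter {
        EagerCounter(int limit) { super(limit); }

        @Override
        public void increment() { count++; }  // fault: no limit check
    }

    public class FaultBasedTestSketch {
        // Test derived from the superclass contract, re-run against each subclass.
        static boolean respectsLimit(BoundedCounter c) {
            for (int i = 0; i < c.limit() + 5; i++) {
                c.increment();  // deliberately push past the limit
            }
            return c.value() <= c.limit();
        }

        public static void main(String[] args) {
            System.out.println("BoundedCounter keeps its invariant: " + respectsLimit(new BoundedCounter(3))); // true
            System.out.println("EagerCounter keeps its invariant:   " + respectsLimit(new EagerCounter(3)));   // false: fault revealed
        }
    }

The test deliberately drives the counter past its declared limit: fault-based techniques choose inputs intended to reveal a hypothesized class of defects rather than merely to confirm nominal behavior.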

Editor information

Elisa Bertino, Susan Urban

Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hayes, J.H. (1994). Testing of object-oriented programming systems (OOPS): A fault-based approach. In: Bertino, E., Urban, S. (eds) Object-Oriented Methodologies and Systems. ISOOMS 1994. Lecture Notes in Computer Science, vol 858. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0014026

  • DOI: https://doi.org/10.1007/BFb0014026

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58451-3

  • Online ISBN: 978-3-540-48804-0

  • eBook Packages: Springer Book Archive
