DOI: 10.1145/1081706.1081749

Parameterized unit tests

Published: 01 September 2005

ABSTRACT

Parameterized unit tests extend the current industry practice of closed unit tests, which are defined as parameterless methods. Parameterized unit tests separate two concerns: (1) they specify the external behavior of the methods under test for all test arguments; (2) test cases can be re-obtained as traditional closed unit tests by instantiating the parameterized unit tests. Symbolic execution and constraint solving can be used to automatically choose a minimal set of inputs that exercise a parameterized unit test with respect to the possible code paths of the implementation. In addition, parameterized unit tests can serve as symbolic summaries, which allows symbolic execution to scale to arbitrary abstraction levels. We have developed a prototype tool that computes test cases from parameterized unit tests. We report on its first use in testing parts of the .NET base class library.
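The separation the abstract describes, a behavioral specification over all arguments plus concrete closed tests obtained by instantiating it, can be sketched in plain Python. The paper works in the .NET setting; the list example and all names below are illustrative and are not taken from the paper's tool.

```python
# A traditional "closed" unit test: a parameterless method with fixed inputs.
def test_append_closed():
    xs = [1, 2]
    xs.append(3)
    assert xs == [1, 2, 3]

# A parameterized unit test: it states the expected external behavior
# for ALL arguments, not for one fixed input.
def parameterized_test_append(xs, item):
    n = len(xs)
    xs.append(item)
    # Specification: append grows the list by one and stores item last.
    assert len(xs) == n + 1
    assert xs[n] == item

# Closed test cases are re-obtained by instantiating the parameterized test
# with concrete arguments. The paper derives such inputs automatically via
# symbolic execution and constraint solving; here they are chosen by hand.
def test_append_empty():
    parameterized_test_append([], "a")

def test_append_nonempty():
    parameterized_test_append([1, 2], 3)
```

Each instantiation is itself an ordinary closed unit test, so existing parameterless test runners can execute the generated cases unchanged.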


Published in: ESEC/FSE-13: Proceedings of the 10th European Software Engineering Conference held jointly with the 13th ACM SIGSOFT International Symposium on Foundations of Software Engineering, September 2005, 402 pages. ISBN: 1595930140. DOI: 10.1145/1081706.

Also appears in: ACM SIGSOFT Software Engineering Notes, Volume 30, Issue 5, September 2005, 462 pages. ISSN: 0163-5948. DOI: 10.1145/1095430.
      Copyright © 2005 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
