
Tool-assisted unit-test generation and selection based on operational abstractions

Published in: Automated Software Engineering (2006)

Abstract

Unit testing, a common step in software development, presents a challenge: when produced manually, unit test suites are often insufficient to identify defects. The main alternative is to use one of a variety of automatic unit-test generation tools, which can produce and execute a large number of test inputs that extensively exercise the unit under test. However, without a priori specifications, programmers must manually verify the outputs of these test executions, which is generally impractical. To reduce this cost, unit-test selection techniques can help select a small subset of the automatically generated test inputs; programmers can then verify the outputs of these selected tests, equip them with test oracles, and add them to the existing test suite. In this paper, we present the operational violation approach for unit-test generation and selection, a black-box approach that does not require a priori specifications. The approach dynamically generates operational abstractions from executions of the existing unit test suite, and these operational abstractions guide test generation tools to generate tests that violate them. The approach selects the violating tests for inspection, because they exercise new behavior that the existing tests have not exercised. We implemented this approach by integrating Daikon (a dynamic invariant detection tool) and Parasoft Jtest (a commercial Java unit testing tool), and conducted several experiments to assess the approach.
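The selection step described above can be sketched in a few lines. The following is a minimal, hypothetical illustration: the `average` unit, the inferred `result >= 0` postcondition, and the generated inputs are all invented for the example. In the paper's actual pipeline, the operational abstractions are Daikon-inferred invariants inserted as contracts checked while Jtest generates tests.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the operational-violation idea: an abstraction
// inferred from the existing test suite is checked at runtime, and
// generated inputs that violate it are selected for inspection.
public class OperationalViolationSketch {

    // Unit under test: integer average of two values.
    static int average(int a, int b) {
        return (a + b) / 2;
    }

    // Operational abstraction inferred from the existing test suite,
    // which (in this invented scenario) only exercised non-negative
    // inputs, so a Daikon-style detector would report "result >= 0".
    static boolean inferredPostcondition(int result) {
        return result >= 0;
    }

    public static void main(String[] args) {
        // Inputs a test generation tool might produce (invented values).
        int[][] generated = { {2, 4}, {10, 0}, {-8, 2}, {5, 5} };
        List<String> selected = new ArrayList<>();

        for (int[] in : generated) {
            int result = average(in[0], in[1]);
            if (!inferredPostcondition(result)) {
                // A violating test exercises behavior the existing suite
                // never did, so it is selected for programmer inspection.
                selected.add("average(" + in[0] + ", " + in[1] + ") = " + result);
            }
        }
        System.out.println(selected);
    }
}
```

Here only the input (-8, 2) violates the inferred abstraction, so only that test would be shown to the programmer, who then decides whether the behavior is correct (and the abstraction too strong) or defective.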


References

  • Ammons, G., Bodik, R., & Larus, J.R.: Mining specifications. In Proc. 29th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 4–16, (2002)

  • Beck, K.: Test Driven Development: By Example. Addison-Wesley, (2003)

  • Bernot, G., Gaudel, M.C., & Marre, B.: Software testing based on formal specifications: a theory and a tool. Software Engineering Journal, 6(6):387–405, (1991)

  • Boyapati, C., Khurshid, S., & Marinov, D.: Korat: automated testing based on Java predicates. In Proc. International Symposium on Software Testing and Analysis, pp. 123–133, (2002)

  • Budd, T.A., DeMillo, R.A., Lipton, R.J., & Sayward, F.G.: Theoretical and empirical studies on using program mutation to test the functional correctness of programs. In Proc. 7th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 220–233, (1980)

  • Chang, J., & Richardson, D.J.: Structural specification-based testing: automated support and experimental evaluation. In Proc. 7th ESEC/FSE, pp. 285–302, (1999)

  • Cheon, Y., & Leavens, G.T.: A simple and practical approach to unit testing: The JML and JUnit way. In Proc. 16th European Conference on Object-Oriented Programming, pp. 231–255, (2002)

  • Csallner, C., & Smaragdakis, Y.: JCrasher: an automatic robustness tester for Java. Software: Practice and Experience, 34:1025–1050, (2004)

  • Csallner, C. & Smaragdakis, Y.: Check ‘n’ Crash: Combining static checking and testing. In Proc. 27th International Conference on Software Engineering, pp. 422–431, (2005)

  • Publications using the Daikon invariant detector tool, (2006). http://pag.csail.mit.edu/daikon/pubs/

  • Dickinson, W., Leon, D., & Podgurski, A.: Finding failures by cluster analysis of execution profiles. In Proc. 23rd International Conference on Software Engineering, pp. 339–348, (2001)

  • Ernst, M.D., Cockrell, J., Griswold, W.G., & Notkin, D.: Dynamically discovering likely program invariants to support program evolution. IEEE Transactions on Software Engineering, 27(2):99–123, (2001)

  • Foundations of Software Engineering, Microsoft Research. The AsmL test generator tool. http://research.microsoft.com/fse/asml/doc/AsmLTester.html

  • Frankl, P.G., & Weiss, S.N.: An experimental comparison of the effectiveness of branch testing and data flow testing. IEEE Transactions on Software Engineering, 19(8):774–787, (1993)

  • Gaudel, M.-C.: Testing can be formal, too. In Proc. 6th International Joint Conference CAAP/FASE on Theory and Practice of Software Development, pp. 82–96, (1995)

  • Grieskamp, W., Gurevich, Y., Schulte, W., & Veanes, M.: Generating finite state machines from abstract state machines. In Proc. International Symposium on Software Testing and Analysis, pp. 112–122, (2002)

  • Gupta, N.: Generating test data for dynamically discovering likely program invariants. In Proc. ICSE 2003 Workshop on Dynamic Analysis, pp. 21–24, (2003)

  • Gupta, N., & Heidepriem, Z.V.: A new structural coverage criteria for dynamic detection of program invariants. In Proc. 18th IEEE International Conference on Automated Software Engineering, pp. 49–58, (2003)

  • Hangal, S., & Lam, M.S.: Tracking down software bugs using automatic anomaly detection. In Proc. 24th International Conference on Software Engineering, pp. 291–301, (2002)

  • Hansel 1.0, (2003). http://hansel.sourceforge.net/

  • Harder, M., Mellen, J., & Ernst, M.D.: Improving test suites via operational abstraction. In Proc. 25th International Conference on Software Engineering, pp. 60–71, (2003)

  • Henkel, J., & Diwan, A.: Discovering algebraic specifications from Java classes. In Proc. 17th European Conference on Object-Oriented Programming, pp. 431–456, (2003)

  • JUnit, (2003). http://www.junit.org

  • Korel, B., & Al-Yami, A.M.: Assertion-oriented automated test data generation. In Proc. 18th International Conference on Software Engineering, pp. 71–80, (1996)

  • Kropp, N.P., Koopman, P.J. Jr., & Siewiorek, D.P.: Automated robustness testing of off-the-shelf software components. In Proc. 28th IEEE International Symposium on Fault Tolerant Computing, pp. 230–239, (1998)

  • Leavens, G.T., Baker, A.L., & Ruby, C.: Preliminary design of JML: A behavioral interface specification language for Java. Technical Report TR 98-06i, Department of Computer Science, Iowa State University, (1998)

  • Meyer, B.: Eiffel: The Language. Prentice Hall, New York, N.Y., (1992)

  • Milner, R., Tofte, M., & Harper, R.: The Definition of Standard ML. MIT Press, Cambridge, MA, (1989)

  • Nimmer, J.W.: Automatic generation and checking of program specifications. Technical Report 852, MIT Laboratory for Computer Science, Cambridge, MA, (2002)

  • Nimmer, J.W., & Ernst, M.D.: Static verification of dynamically detected program invariants: Integrating Daikon and ESC/Java. In Electronic Notes in Theoretical Computer Science, volume 55. Elsevier, (2001)

  • Parasoft. Jcontract manuals version 1.5. Online manual, (2002). http://www.parasoft.com/

  • Parasoft. Jtest manuals version 4.5. Online manual, (2003). http://www.parasoft.com/

  • Pavlopoulou, C., & Young, M.: Residual test coverage monitoring. In Proc. 21st International Conference on Software Engineering, pp. 277–284, (1999)

  • Perkins, J.H., & Ernst, M.D.: Efficient incremental algorithms for dynamic detection of likely invariants. In Proc. 12th ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 23–32, (2004)

  • Rothermel, G., Untch, R., Chu, C., & Harrold, M.J.: Prioritizing test cases for regression testing. IEEE Transactions on Software Engineering, 27(10):929–948, (2001)

  • Srivastava, A., & Thiagarajan, J.: Effectively prioritizing tests in development environment. In Proc. International Symposium on Software Testing and Analysis, pp. 97–106, (2002)

  • Stotts, D., Lindsey, M., & Antley, A.: An informal formal method for systematic JUnit test case generation. In Proc. 2002 XP/Agile Universe, pp. 131–143, (2002)

  • Weiss, M.A.: Data Structures and Algorithm Analysis in Java. Addison Wesley, (1999)

  • Whaley, J., Martin, M.C., & Lam, M.S.: Automatic extraction of object-oriented component interfaces. In Proc. International Symposium on Software Testing and Analysis, pp. 218–228, (2002)

  • Xie, T., Marinov, D., & Notkin, D.: Improving generation of object-oriented test suites by avoiding redundant tests. Technical Report UW-CSE-04-01-05, University of Washington Department of Computer Science and Engineering, Seattle, WA, (2004a)

  • Xie, T., Marinov, D., & Notkin, D.: Rostra: A framework for detecting redundant object-oriented unit tests. In Proc. 19th IEEE International Conference on Automated Software Engineering, pp. 196–205, (2004b)

  • Xie, T., & Notkin, D.: Mutually enhancing test generation and specification inference. In Proc. 3rd International Workshop on Formal Approaches to Testing of Software, volume 2931 of LNCS, pp. 60–69, (2003)


Author information

Correspondence to Tao Xie.

Additional information

This article is an invited submission to the Automated Software Engineering Journal. An earlier version of this article appeared in Proceedings of the 18th IEEE International Conference on Automated Software Engineering (ASE 2003).

Cite this article

Xie, T., Notkin, D. Tool-assisted unit-test generation and selection based on operational abstractions. Autom Software Eng 13, 345–371 (2006). https://doi.org/10.1007/s10851-006-8530-6
