
A unified framework for evaluating test criteria in model-checking-assisted test case generation

Published in: Information Systems Frontiers

Abstract

Testing is often cited as one of the most costly activities in developing dependable systems (Heimdahl et al. 2001). A particularly challenging task in testing is test-case generation. To improve the efficiency of test-case generation and reduce its cost, automated formal verification techniques such as model checking have recently been extended to automate the test-case generation process. In model-checking-assisted test-case generation, a test criterion is formulated as a set of temporal logic formulae, which a model checker uses to generate test cases satisfying the criterion. Both traditional test criteria, such as the branch coverage criterion, and newer temporal-logic-inspired criteria, such as the property coverage criterion (Tan et al. 2004), have been used with model-checking-assisted test generation. Two key questions in model-checking-assisted test generation are how efficiently a model checker can generate test suites for these criteria and how effective the resulting test suites are. To answer these questions, we developed a unified framework for evaluating (1) the effectiveness of the test criteria used with model-checking-assisted test-case generation and (2) the efficiency of test-case generation for these criteria. The benefits of this work are three-fold. First, the computational study carried out in this work provides measurements of the effectiveness and efficiency of various test criteria used with model-checking-assisted test-case generation; these measurements are important factors to consider when a practitioner selects appropriate test criteria for an application of model-checking-assisted test generation. Second, we propose a unified test generation framework based on generalized Büchi automata. The framework uses the same model checker, in this case the SPIN model checker (Holzmann 1997), to generate test cases for different criteria and compare them on a consistent basis. Last but not least, we describe in detail the methodology and automated test generation environment that we developed on the basis of our unified framework. Such details should be of interest to researchers and practitioners who want to use and extend this unified framework and its accompanying tools.
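The mechanism the abstract describes, encoding a coverage obligation as a temporal logic formula so that a model checker's counterexample serves as a test case, is commonly realized with "trap properties": each obligation is negated, so any counterexample is exactly an execution that fulfills the obligation. The sketch below is a minimal illustration of this idea, not the authors' tooling; the function names and branch labels are hypothetical, and the emitted syntax targets SPIN's inline `ltl` claims.

```python
# Sketch: generate SPIN "trap properties" for branch coverage.
# Each obligation "reach branch b" is negated into the claim
# "b is never reached" ([] !b); a counterexample found by SPIN
# is then an execution reaching b, i.e., a test case covering b.
# Branch names below are hypothetical placeholders.

def trap_property(name: str, prop: str) -> str:
    """Render one negated coverage obligation as an inline SPIN ltl claim."""
    return f"ltl {name} {{ [] !({prop}) }}"

def branch_coverage_traps(branches):
    """One trap property per branch-coverage obligation."""
    return [trap_property(f"trap_{b}", b) for b in branches]

if __name__ == "__main__":
    for claim in branch_coverage_traps(["b1", "b2"]):
        print(claim)
```

Running SPIN against a model augmented with such claims would, for each reachable branch, produce a counterexample trace that can be replayed as a test case; unreachable branches yield no counterexample and are reported as infeasible obligations.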



References

  • Ammann, P.E., Black, P.E., Majurski, W. (1998). Using model checking to generate tests from specifications. In Proceedings of the second IEEE international conference on formal engineering methods, ICFEM’98 (pp. 46–54). Washington, DC: IEEE Computer Society.

  • Beer, I., Ben-David, S., Eisner, C., Rodeh, Y. (1997). Efficient detection of vacuity in ACTL formulas. In Proceedings of the 9th international conference on computer aided verification, CAV ’97. London: Springer-Verlag.

  • Beyer, D., Chlipala, A.J., Majumdar, R., Henzinger, T.A., Jhala, R. (2004). Generating tests from counterexamples. In Proceedings of the 26th international conference on software engineering, ICSE’04 (pp. 326–335). Washington, DC: IEEE Computer Society.

  • Clarke, E., Jha, S., Lu, Y. (2002). Tree-like counterexamples in model checking. In Proceedings of 17th annual IEEE symposium on Logic in computer science (pp. 19–29). Washington, DC: IEEE Computer Society.

  • Clarke, E.M., & Emerson, E.A. (1982). Design and synthesis of synchronization skeletons using branching-time temporal logic. In Logic of programs, workshop. London: Springer-Verlag.

  • Clarke, E.M., Grumberg, O., McMillan, K.L., Zhao, X. (1995). Efficient generation of counterexamples and witnesses in symbolic model checking. In Proceedings of the 32nd annual ACM/IEEE design automation conference, DAC ’95. New York: ACM.

  • Clarke, E.M., Grumberg, O., Peled, D. (1999). Model checking. Cambridge: MIT Press.

  • Engels, A., Feijs, L.M.G., Mauw, S. (1997). Test generation for intelligent networks using model checking. In Proceedings of the third international workshop on tools and algorithms for construction and analysis of systems, TACAS ’97. London: Springer-Verlag.

  • Fraser, G., & Gargantini, A. (2009). An evaluation of model checkers for specification based test case generation. In Proceedings of the 2009 international conference on software testing verification and validation ICST ’09. Washington, DC: IEEE Computer Society.

  • Gargantini, A., & Heitmeyer, C. (1999). Using model checking to generate tests from requirements specifications. In Proceedings of the 7th European software engineering conference held jointly with the 7th ACM SIGSOFT international symposium on Foundations of software engineering, ESEC/FSE-7. London: Springer-Verlag.

  • Heimdahl, M.P., Rayadurgam, S., Visser, W. (2001). Specification centered testing. In Proceedings of the second international workshop on automated program analysis, testing and verification. Washington, DC: IEEE Computer Society.

  • Holzmann, G.J. (1997). The model checker SPIN. IEEE Transactions on Software Engineering, 23(5), 279–295.

  • Hong, H.S., Cha, S.D., Lee, I., Sokolsky, O., Ural, H. (2003). Data flow testing as model checking. In Proceedings of the 25th international conference on software engineering ICSE ’03. Washington, DC: IEEE Computer Society.

  • Hong, H.S., Lee, I., Sokolsky, O., Ural, H. (2002). A temporal logic based theory of test coverage and generation. In Proceedings of the 8th international conference on tools and algorithms for the construction and analysis of systems, TACAS ’02. London: Springer-Verlag.

  • Jorgensen, P.C. (1995). Software testing: a craftsman’s approach, 1st edn. Boca Raton: CRC Press.

  • Kamel, M., & Leue, S. (1998). Validation of the General Inter-ORB Protocol (GIOP) using the SPIN model checker. In Software tools for technology transfer. Springer-Verlag.

  • Kupferman, O., & Vardi, M.Y. (1999). Vacuity detection in temporal model checking. Lecture Notes in Computer Science, 82, 82–96.

  • Lamport, L. (1974). A new solution of Dijkstra’s concurrent programming problem. Communications of the ACM, 17, 453–455.

  • Lerda, F., & Visser, W. (2001). Addressing dynamic issues of program model checking. In Lecture notes in computer science. Heidelberg: Springer-Verlag.

  • MathWorks (2007). Simulink design verifier. http://www.mathworks.com/products/sldesignverifier/. Accessed 30 Mar 2013.

  • Peterson, G.L. (1981). Myths about the mutual exclusion problem. Information Processing Letters, 12(3), 115–116.

  • Peterson, L.L., & Davie, B.S. (2003). Computer networks: A systems approach, 3rd edn. San Francisco: Morgan Kaufmann.

  • Rapps, S., & Weyuker, E.J. (1985). Selecting software test data using data flow information. IEEE Transactions on Software Engineering, 11, 367–375.

  • Rayadurgam, S., & Heimdahl, M.P. (2001). Coverage based test-case generation using model checkers. In Proceedings of the 8th annual IEEE international conference and workshop on the engineering of computer based systems, ECBS 2001 (pp. 83–91). Washington, DC: IEEE Computer Society.

  • Rushby, J. (2002). Using model checking to help discover mode confusions and other automation surprises. Reliability Engineering and System Safety, 75(2), 167–177.

  • SC-167 Committee (1992). Software considerations in airborne systems and equipment certification. Tech. rep., Radio Technical Commission for Aeronautics.

  • Tan, L. (2011). State coverage metrics for specification-based testing with Büchi automata. In Proceedings of the 5th international conference on tests and proofs, TAP’11. Heidelberg: Springer-Verlag.

  • Tan, L., & Cleaveland, R. (2002). Evidence-based model checking. In Computer-aided verification. Heidelberg: Springer-Verlag.

  • Tan, L., Sokolsky, O., Lee, I. (2003). Property-coverage testing. Tech. rep., Department of Computer and Information Science, University of Pennsylvania.

  • Tan, L., Sokolsky, O., Lee, I. (2004). Specification-based testing with linear temporal logic. In Proceedings of the IEEE international conference on information reuse and integration, IRI’04 (pp. 493–498). Las Vegas: IEEE Computer Society.

  • Tsay, Y.-K., Chen, Y.-F., Tsai, M.-H., Wu, K.-N., Chan, W.-C. (2007). GOAL: A graphical tool for manipulating Büchi automata and temporal formulae. In Proceedings of TACAS 2007, LNCS 4424. Heidelberg: Springer.

Author information

Corresponding author: Li Tan.


Cite this article

Zeng, B., Tan, L. A unified framework for evaluating test criteria in model-checking-assisted test case generation. Inf Syst Front 16, 823–834 (2014). https://doi.org/10.1007/s10796-013-9424-y
