Experimental evaluation of a novel equivalence class partition testing strategy

  • Special Section Paper
  • Published in Software & Systems Modeling

Abstract

In this paper, a complete model-based equivalence class testing strategy recently developed by the authors is experimentally evaluated. This black-box strategy applies to deterministic systems with infinite input domains and finite internal state and output domains. It is complete with respect to a given fault model: conforming behaviours will never be rejected, and all non-conforming behaviours inside a given fault domain will be uncovered. We investigate how this strategy performs for systems under test whose behaviours lie outside the fault domain. Furthermore, a strategy extension is presented which is based on randomised data selection from input equivalence classes. While this extension is still complete with respect to the given fault domain, it also promises a higher test strength when applied to members outside this domain. This is confirmed by an experimental evaluation that compares the mutation coverage achieved by the original and the extended strategy with the coverage obtained by random testing. For mutation generation, not only typical software errors but also critical HW/SW integration errors are considered. The latter can be caused by mismatches between hardware and software design, even in the presence of totally correct software.
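
As an informal illustration of the randomised data selection underlying the strategy extension, the following C++ sketch draws a concrete test input at random from an input equivalence class given as a predicate. All names and the class predicate itself (a hypothetical overspeed range) are invented for illustration; the strategy described in the paper derives its classes from the test model and may use an SMT solver instead of sampling.

    // Minimal sketch: randomised selection of a concrete test input from an
    // input equivalence class given by a predicate. Hypothetical example, not
    // the authors' implementation.
    #include <iostream>
    #include <optional>
    #include <random>

    struct Input {
        double speed;   // illustrative input variable
        double limit;   // illustrative input variable
    };

    // Hypothetical input equivalence class: "above the limit, but by at most 15".
    bool inClass(const Input& x) {
        return x.speed > x.limit && x.speed <= x.limit + 15.0;
    }

    // Rejection sampling over the input domain; a constraint/SMT solver can
    // replace this when the class is too small to be hit reliably by random draws.
    std::optional<Input> randomMember(std::mt19937& gen, int maxTries = 100000) {
        std::uniform_real_distribution<double> speed(0.0, 400.0);
        std::uniform_real_distribution<double> limit(0.0, 300.0);
        for (int i = 0; i < maxTries; ++i) {
            Input x{speed(gen), limit(gen)};
            if (inClass(x)) return x;
        }
        return std::nullopt;  // fall back to a solver-generated witness
    }

    int main() {
        std::mt19937 gen(std::random_device{}());
        if (auto x = randomMember(gen)) {
            std::cout << "representative: speed=" << x->speed
                      << ", limit=" << x->limit << "\n";
        }
    }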

Notes

  1. Observe that the restriction to quiescent states does not result in a loss of information. The internal and output variable valuations of every transient state coincide with those of its quiescent pre-state, and its input valuation is identical to that of its quiescent post-state.

  2. The predicate is guaranteed to have a solution, since it describes an input equivalence class, which has at least one member and thus at least one assignment that fulfils the predicate.

  3. A mutant is killed if at least one test case of the test suite fails when executed against it (a minimal sketch of this criterion follows these notes).

  4. See http://clang.llvm.org/docs/LibTooling.html.

  5. Besides these “traditional” mutation operators, [33] also supports object-oriented (OO) mutation operators. These have been neglected in this work, since they account for typical OO errors, which cannot occur in the SystemC implementations of our two case studies. In our implementations, inheritance is only used to declare modules, ports, and signals. The application of OO mutation operators would therefore not account for typical HW/SW integration errors but only for SystemC-specific errors.

  6. We consider a mismatch in precision to be a real error, although in real-world applications a deviation resulting only from differing precisions might be negligible (an illustrative sketch follows these notes).

  7. We assume that the fault tolerance of the system under test is evaluated and validated by means other than our model-based testing approach.

  8. See http://pitest.org/.

  9. For an exemplary list, see http://mit.bme.hu/~micskeiz/pages/modelbased_testing.html.
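
The killing criterion of note 3 can be made concrete with the following minimal C++ sketch (hypothetical names, not the tooling used in the paper): a mutant counts as killed as soon as any test of the suite fails against it, and the mutation score is the fraction of killed mutants.

    // Minimal sketch for note 3: a mutant is killed iff at least one test case
    // of the suite fails against it. Hypothetical names, purely illustrative.
    #include <iostream>
    #include <vector>

    enum class Verdict { Pass, Fail };

    // Verdicts of the complete test suite executed against one mutant.
    bool isKilled(const std::vector<Verdict>& verdicts) {
        for (Verdict v : verdicts)
            if (v == Verdict::Fail) return true;  // one failing test suffices
        return false;                             // all tests passed: mutant survives
    }

    // Mutation score = killed mutants / all mutants.
    double mutationScore(const std::vector<std::vector<Verdict>>& perMutant) {
        if (perMutant.empty()) return 0.0;
        int killed = 0;
        for (const auto& verdicts : perMutant)
            if (isKilled(verdicts)) ++killed;
        return static_cast<double>(killed) / perMutant.size();
    }

    int main() {
        // First mutant is killed by the second test case, second mutant survives.
        std::vector<std::vector<Verdict>> results = {
            {Verdict::Pass, Verdict::Fail, Verdict::Pass},
            {Verdict::Pass, Verdict::Pass, Verdict::Pass}};
        std::cout << "mutation score: " << mutationScore(results) << "\n";  // 0.5
    }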
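
For note 6, the following invented example shows how a single-precision (hardware-side) realisation of a computation can deviate from a double-precision software model for the same inputs; the formula and all names are hypothetical, and whether the deviation constitutes an HW/SW integration error depends on the precision required by the specification.

    // Minimal sketch for note 6: precision mismatch between a double-based
    // software computation and a float-based hardware-side datapath.
    #include <cmath>
    #include <iostream>

    // Software reference computation (double precision); the formula is illustrative.
    double distanceSw(double v, double a) { return v * v / (2.0 * a); }

    // Same formula realised with single precision, e.g. in a hardware datapath.
    float distanceHw(float v, float a) { return v * v / (2.0f * a); }

    int main() {
        double v = 123.456, a = 0.7;
        double sw = distanceSw(v, a);
        double hw = distanceHw(static_cast<float>(v), static_cast<float>(a));
        std::cout << "sw=" << sw << " hw=" << hw
                  << " |diff|=" << std::fabs(sw - hw) << "\n";
        // A non-zero difference here stems only from the differing precisions;
        // it is a real error only if the required precision is violated.
    }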

References

  1. Aichernig, B., Brandl, H., Jöbstl, E., Krenn, W., Schlick, R., Tiran, S.: MoMuT::UML model-based mutation testing for UML. In: Proceedings of the 2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST), pp. 1–8 (2015). doi:10.1109/ICST.2015.7102627

  2. Anand, S., Burke, E.K., Chen, T.Y., Clark, J.A., Cohen, M.B., Grieskamp, W., Harman, M., Harrold, M.J., McMinn, P.: An orchestrated survey of methodologies for automated software test case generation. J. Syst. Softw. 86(8), 1978–2001 (2013)

  3. Arcuri, A., Iqbal, M.Z., Briand, L.: Black-box system testing of real-time embedded systems using random and search-based testing. In: Proceedings of the 22nd IFIP WG 6.1 International Conference on Testing Software and Systems, ICTSS’10, pp. 95–110. Springer, Berlin (2010)

  4. Baier, C., Katoen, J.: Principles of Model Checking. MIT Press, Cambridge (2008)

  5. Belinfante, A.: JTorX: A tool for on-line model-driven test derivation and execution. In: Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2010), pp. 266–270. Springer, Berlin (2010). doi:10.1007/978-3-642-12002-2_21

  6. Biere, A., Heljanko, K., Junttila, T., Latvala, T., Schuppan, V.: Linear encodings of bounded LTL model checking. Log. Methods Comput. Sci. (2006). doi:10.2168/LMCS-2(5:5)2006. arXiv: cs/0611029

  7. Braunstein, C., Haxthausen, A.E., Huang, W.L., Hübner, F., Peleska, J., Schulze, U., Hong, L.V.: Complete model-based equivalence class testing for the ETCS ceiling speed monitor. In: Merz, S., Pang, J. (eds.) Proceedings of the ICFEM 2014, No. 8829 in Lecture Notes in Computer Science, pp. 380–395. Springer, Berlin (2014)

  8. Braunstein, C., Huang, W.L., Peleska, J., Schulze, U., Hübner, F., Haxthausen, A.E., Hong, L.V.: A SysML test model and test suite for the ETCS ceiling speed monitor. Technical Report, Embedded Systems Testing Benchmarks Site (2014-04-30). http://www.mbt-benchmarks.org

  9. Cavalcanti, A., Gaudel, M.C.: Testing for refinement in circus. Acta Inform. 48(2), 97–147 (2011)

  10. Chen, T.Y., Kuo, F.C., Merkel, R.G., Tse, T.H.: Adaptive random testing: the art of test case diversity. J. Syst. Softw. 83(1), 60–66 (2010)

  11. Chow, T.S.: Testing software design modeled by finite-state machines. IEEE Trans. Softw. Eng. SE–4(3), 178–186 (1978)

  12. Clarke, E.M., Grumberg, O., Peled, D.A.: Model Checking. The MIT Press, Cambridge (1999)

  13. Cousot, P., Cousot, R., Feret, J., Mauborgne, L., Miné, A., Monniaux, D., Rival, X.: Combination of abstractions in the Astrée static analyzer. In: Okada, M., Satoh, I. (eds.) Eleventh Annual Asian Computing Science Conference (ASIAN’06), pp. 1–24. Springer, Berlin, LNCS (2006) (to appear)

  14. Cousot, P., Cousot, R., Feret, J., Miné, A., Mauborgne, L., Rival, X.: Why does Astrée scale up? Form. Methods Syst. Des. (FMSD) 35(3), 229–264 (2009)

  15. Dranidis, D., Bratanis, K., Ipate, F.: JSXM: A tool for automated test generation. In: Software Engineering and Formal Methods (SEFM 2012), pp. 352–366. Springer, Berlin (2012). doi:10.1007/978-3-642-33826-7_25

  16. Ernits, J.P., Kull, A., Raiend, K., Vain, J.: Generating tests from EFSM models using guided model checking and iterated search refinement. In: Havelund, K., Núñez, M., Roşu, G., Wolff, B. (eds.) Formal Approaches to Software Testing and Runtime Verification, No. 4262 in Lecture Notes in Computer Science, pp. 85–99. Springer, Berlin (2006). http://link.springer.com/chapter/10.1007/11940197_6

  17. Fujiwara, S., Bochmann, G.V., Khendek, F., Amalou, M., Ghedamsi, A.: Test selection based on finite state models. IEEE Trans. Softw. Eng. 17(6), 591–603 (1991). doi:10.1109/32.87284

  18. Gaudel, M.C.: Testing can be formal, too. In: Mosses, P.D., Nielsen, M., Schwartzbach, M.I. (eds.) TAPSOFT, Lecture Notes in Computer Science, vol. 915, pp. 82–96. Springer, New York (1995)

  19. Gill, A.: Introduction to the Theory of Finite-State Machines. McGraw-Hill, New York (1962)

  20. Hong, H.S., Lee, I., Sokolsky, O., Ural, H.: A temporal logic based theory of test coverage and generation. In: Katoen, J.P., Stevens, P. (eds.) TACAS, Lecture Notes in Computer Science, vol. 2280, pp. 327–341. Springer, New York (2002)

  21. Huang, W., Peleska, J.: Complete model-based equivalence class testing. STTT 18(3), 265–283 (2016). doi:10.1007/s10009-014-0356-8

  22. Huang, W., Peleska, J.: Complete model-based equivalence class testing for nondeterministic systems. Form. Asp. Comput. (2016). doi:10.1007/s00165-016-0402-2

  23. Hübner, F., Huang, W., Peleska, J.: Experimental evaluation of a novel equivalence class partition testing strategy. In: Blanchette, J.C., Kosmatov, N. (eds.) Proceedings of the Tests and Proofs: 9th International Conference, TAP 2015, Held as Part of STAF 2015, L’Aquila, Italy, July 22–24, 2015. Lecture Notes in Computer Science, vol. 9154, pp. 155–172. Springer (2015). doi:10.1007/978-3-319-21215-9_10

  24. IEEE Std 1666–2005: IEEE standard SystemC language reference manual. IEEE Computer Society, New York, USA (2006)

  25. Jaulin, L., Kieffer, M., Didrit, O., Walter, É.: Applied Interval Analysis. Springer, London (2001)

  26. Just, R.: The Major mutation framework: Efficient and scalable mutation analysis for Java. In: Proceedings of the International Symposium on Software Testing and Analysis (ISSTA), pp. 433–436. San Jose (2014)

  27. Kalaji, A.S., Hierons, R.M., Swift, S.: Generating feasible transition paths for testing from an extended finite state machine (EFSM). In: ICST, pp. 230–239. IEEE Computer Society (2009)

  28. Kästner, D., Ferdinand, C.: Applying abstract interpretation to verify EN-50128 software safety requirements. In: Lecomte et al. [31], pp. 191–202. doi:10.1007/978-3-319-33951-1_14

  29. Kosmatov, N., Legeard, B., Peureux, F., Utting, M.: Boundary coverage criteria for test generation from formal models. In: Proceedings of the 15th International Symposium on Software Reliability Engineering, pp. 139–150 (2004). doi:10.1109/ISSRE.2004.12

  30. Lapschies, F.: SONOLAR homepage (2014). http://www.informatik.uni-bremen.de/agbs/florian/sonolar/

  31. Lecomte, T., Pinger, R., Romanovsky, A. (eds.): Reliability, Safety, and Security of Railway Systems. Modelling, Analysis, Verification, and Certification—First International Conference, RSSRail 2016, Paris, France, June 28–30, 2016, Proceedings, Lecture Notes in Computer Science, vol. 9707. Springer (2016). doi:10.1007/978-3-319-33951-1

  32. Luo, G., von Bochmann, G., Petrenko, A.: Test selection based on communicating nondeterministic finite-state machines using a generalized Wp-method. IEEE Trans. Softw. Eng. 20(2), 149–162 (1994). doi:10.1109/32.265636

  33. Ma, Y.S., Offutt, J., Kwon, Y.R.: MuJava: an automated class mutation system. Softw. Test. Verif. Reliab. 15(2), 97–133 (2005). doi:10.1002/stvr.v15:2

  34. Mueller-Gritschneder, D., Maier, P.R., Greim, M., Schlichtmann, U.: SystemC-based multi-level error injection for the evaluation of fault-tolerant systems. In: Proceedings of the 2014 International Symposium on Integrated Circuits (ISIC), pp. 460–463 (2014). doi:10.1109/ISICIR.2014.7029567

  35. Object Management Group: OMG Unified Modeling Language (OMG UML), superstructure, version 2.4.1. Technical Report, OMG (2011)

  36. Object Management Group: OMG Systems Modeling Language (OMG SysML), Version 1.4. Technical Report, Object Management Group (2015). http://www.omg.org/spec/SysML/1.4

  37. Peleska, J.: Industrial-strength model-based testing: state of the art and current challenges. In: Petrenko, A.K., Schlingloff, H. (eds.) Proceedings Eighth Workshop on Model-Based Testing, Rome, Italy, 17th March 2013, Electronic Proceedings in Theoretical Computer Science, vol. 111, pp. 3–28. Open Publishing Association (2013). doi:10.4204/EPTCS.111.1

  38. Peleska, J., Huang, W., Hübner, F.: A novel approach to HW/SW integration testing of route-based interlocking system controllers. In: Lecomte et al. [31], pp. 32–49. doi:10.1007/978-3-319-33951-1_3

  39. Peleska, J., Huang, W., Hübner, F.: A Novel Approach to HW/SW Integration Testing of Route-Based Interlocking System Controllers: Technical Report. Technical Report, University of Bremen (2016-03-10). Available under http://www.cs.uni-bremen.de/agbs/jp/jp_papers_e.html

  40. Peleska, J., Huang, W.L., Hübner, F.: A novel approach to HW/SW integration testing of route-based interlocking system controllers. In: Lecomte, T., Pinger, R., Romanovsky, A. (eds.) Reliability, Safety, and Security of Railway Systems Modelling, Analysis, Verification, and Certification, No 9707 in Lecture Notes in Computer Science, pp. 32–49. Springer, New York (2016). doi:10.1007/978-3-319-33951-1_3

  41. Peleska, J., Siegel, M.: Test automation of safety-critical reactive systems. South Afr. Comput. J. 19, 53–77 (1997)

  42. Peleska, J., Vorobev, E., Lapschies, F.: Automated test case generation with SMT-solving and abstract interpretation. In: Bobaru, M., Havelund, K., Holzmann, G.J., Joshi, R. (eds.) NASA Formal Methods, Third International Symposium, NFM 2011, LNCS, vol. 6617, pp. 298–312. Springer, Pasadena (2011)

  43. Perez, J., Azkarate-askasua, M., Perez, A.: Codesign and simulated fault injection of safety-critical embedded systems using SystemC. In: 2010 European Dependable Computing Conference (EDCC), pp. 221–229 (2010). doi:10.1109/EDCC.2010.34

  44. Petrenko, A., Simao, A., Maldonado, J.C.: Model-based testing of software and systems: recent advances and challenges. Int. J. Softw. Tools Technol. Transf. 14(4), 383–386 (2012). doi:10.1007/s10009-012-0240-3

  45. Petrenko, A., Yevtushenko, N., Bochmann, G.V.: Fault models for testing in context. In: Gotzhein, R., Bredereke, J. (eds.) Formal Description Techniques IX: Theory, Application and Tools, pp. 163–177. Chapman & Hall, Boca Raton (1996)

  46. Reid, S.C.: An empirical analysis of equivalence partitioning, boundary value analysis and random testing. In: Proceedings Fourth International Software Metrics Symposium, pp. 64–73 (1997). doi:10.1109/METRIC.1997.637166

  47. Springintveld, J., Vaandrager, F., D’Argenio, P.: Testing timed automata. Theor. Comput. Sci. 254(1–2), 225–257 (2001)

  48. Tretmans, J.: Model based testing with labelled transition systems. In: Hierons, R.M., Bowen, J.P., Harman, M. (eds.) Formal Methods and Testing, Lecture Notes in Computer Science, vol. 4949, pp. 1–38. Springer, New York (2008)

  49. UNISIG: ERTMS/ETCS System Requirements Specification, Chapter 3, Principles, vol. Subset-026-3, chap. 3 (2012). Issue 3.3.0

  50. Vasilevskii, M.P.: Failure diagnosis of automata. Kibernetika (Transl.) 4, 98–108 (1973)

Acknowledgements

The authors would like to thank the anonymous referees for their careful and constructive review of this article. The work presented in this paper has been elaborated within the project Implementable Testing Theory for Cyber-physical Systems (ITTCPS) (see http://www.informatik.uni-bremen.de/agbs/projects/ittcps/index.html), which has been funded by the University of Bremen in the context of the German Universities Excellence Initiative (see http://en.wikipedia.org/wiki/German_Universities_Excellence_Initiative).

Author information

Corresponding author

Correspondence to Jan Peleska.

Additional information

Communicated by Prof. Alfonso Pierantonio, Jasmin Blanchette, Francis Bordeleau, Nikolai Kosmatov, Gabi Taentzer, Manuel Wimmer.

Cite this article

Hübner, F., Huang, Wl. & Peleska, J. Experimental evaluation of a novel equivalence class partition testing strategy. Softw Syst Model 18, 423–443 (2019). https://doi.org/10.1007/s10270-017-0595-8
