
Software testing with code-based test generators: data and lessons learned from a case study with an industrial software component

Published in: Software Quality Journal

Abstract

Automatically generating effective test suites promises a significant impact on testing practice by promoting extensively tested software within reasonable effort and cost bounds. Code-based test generators rely on the source code of the software under test to identify test objectives and to steer the test case generation process accordingly. Currently, the most mature proposals on this topic come from research on random testing, dynamic symbolic execution, and search-based testing. This paper studies the effectiveness of a set of state-of-the-research test generators on a family of industrial programs with nontrivial domain-specific peculiarities. These programs are part of a software component of a real-time, safety-critical control system and integrate into a control task specified in LabVIEW, a graphical language for designing embedded systems. The results of this study enhance the available body of knowledge on the strengths and weaknesses of test generators. The empirical data indicate that the test generators can truly expose subtle (previously unknown) bugs in the subject software and that there can be merit in using different types of test generation approaches in a complementary, even synergistic fashion. Furthermore, our experiment pinpoints support for floating-point arithmetic and nonlinear computations as a major milestone on the path to exploiting the full potential of symbolic-execution-based prototypes in industry.





Acknowledgments

This work is partially supported by the European Community under the call FP7-ICT-2009-5—project PINCETTE 257647.

Corresponding author

Correspondence to Pietro Braione.


Cite this article

Braione, P., Denaro, G., Mattavelli, A. et al. Software testing with code-based test generators: data and lessons learned from a case study with an industrial software component. Software Qual J 22, 311–333 (2014). https://doi.org/10.1007/s11219-013-9207-1
