
FlagRemover: A testability transformation for transforming loop-assigned flags

Published: 26 August 2011

Abstract

Search-Based Testing is a widely studied technique for automatically generating test inputs, with the aim of reducing the cost of software engineering activities that rely upon testing. However, search-based approaches degenerate to random testing in the presence of flag variables, because flags create spikes and plateaux in the fitness landscape. Both of these features are known to denote hard optimization problems for all search-based optimization techniques. Several authors have studied flag removal transformations and fitness function refinements to address the issue of flags, but the problem of loop-assigned flags has remained unsolved. This article introduces a testability transformation, along with a tool, that transforms programs with loop-assigned flags into flag-free equivalents, so that existing search-based test data generation approaches can be applied successfully. The article presents the results of an empirical study demonstrating the effectiveness and efficiency of the testability transformation on programs drawn from open source and industrial production code, as well as on test data generation problems specifically constructed to present hard optimization challenges.
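To illustrate the problem the abstract describes, the following is a minimal, hypothetical Python sketch (not the article's tool or its actual output; all names are illustrative). The first fitness function mimics a loop-assigned flag: every input that misses the target branch scores identically, so the search sees a flat plateau. The second mimics the effect of a flag-removing transformation: it accumulates a count across loop iterations, restoring a gradient the search can follow toward the flag-true branch.

```python
def flag_fitness(xs):
    """Flag-style fitness for the branch `if (flag)`, where `flag`
    starts true and is cleared inside a loop whenever an element is
    negative. Returns 0.0 when the target branch is taken, 1.0
    otherwise -- a spike-and-plateau landscape with no gradient."""
    flag = True
    for x in xs:
        if x < 0:
            flag = False
    return 0.0 if flag else 1.0


def transformed_fitness(xs):
    """Sketch of a transformed, flag-free fitness: count the loop
    iterations that would have cleared the flag. Inputs with fewer
    negative elements now score strictly lower, so the search can
    descend toward an input that takes the target branch (score 0)."""
    return float(sum(1 for x in xs if x < 0))
```

Under the untransformed fitness, the inputs `[-1, -2, 3]` and `[-1, 2, 3]` are indistinguishable (both score 1.0); under the transformed fitness they score 2.0 and 1.0 respectively, giving the optimizer a direction of improvement.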



    • Published in

ACM Transactions on Software Engineering and Methodology, Volume 20, Issue 3
August 2011
176 pages
ISSN: 1049-331X
EISSN: 1557-7392
DOI: 10.1145/2000791

      Copyright © 2011 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 26 August 2011
      • Accepted: 1 July 2009
      • Revised: 1 March 2009
      • Received: 1 June 2007


      Qualifiers

      • research-article
      • Research
      • Refereed
