DOI: 10.1145/2463372.2463545

Research article

Minimizing test suites in software product lines using weight-based genetic algorithms

Published: 06 July 2013

ABSTRACT

Test minimization techniques aim to identify and eliminate redundant test cases from test suites in order to reduce the total number of test cases to execute, thereby improving the efficiency of testing. In the context of software product lines, modeling the product line lets us save effort and cost when selecting and minimizing test cases for testing a specific product. However, minimizing the test suite for a product requires addressing two potential issues: 1) the minimized test suite may not cover all test requirements covered by the original suite; 2) the minimized test suite may have less fault-revealing capability than the original suite. In this paper, we apply weight-based Genetic Algorithms (GAs) to minimize the test suite for testing a product while preserving the fault detection capability and testing coverage of the original test suite. The underlying challenge is to define an appropriate fitness function that preserves the coverage of complex testing criteria (e.g., the Combinatorial Interaction Testing criterion). Based on the defined fitness function, we empirically evaluated three different weight-based GAs on an industrial case study provided by Cisco Systems, Inc. Norway, and we also report results of applying the three weight-based GAs to five existing case studies from the literature. Based on these case studies, we conclude that among the three weight-based GAs, the Random-Weighted GA (RWGA) achieved significantly better performance than the other two.
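To illustrate the kind of weight-based aggregation the abstract describes, the sketch below shows a random-weighted fitness function in the spirit of RWGA, combining suite size, pairwise (Combinatorial Interaction Testing) coverage, and fault-detection capability into a single score with weights redrawn at every evaluation. All names (pairs_of, faults_of, etc.), the choice and normalization of objectives, and the data are illustrative assumptions, not the fitness function defined in the paper.

import random

def random_weighted_fitness(suite, total_tests, pairs_of, all_pairs, faults_of):
    """Score a candidate minimized suite with a randomly weighted sum of objectives.

    suite       -- candidate subset of test-case ids (set or list)
    total_tests -- number of test cases in the original suite
    pairs_of    -- dict: test-case id -> set of pairwise feature interactions it covers
    all_pairs   -- set of pairwise interactions covered by the original suite
    faults_of   -- dict: test-case id -> number of historical faults it revealed
    """
    # Objective 1: suite size (fewer test cases is better, so invert the ratio).
    size_obj = 1.0 - len(suite) / total_tests

    # Objective 2: pairwise (CIT) coverage relative to the original suite.
    covered = set().union(*(pairs_of[t] for t in suite)) if suite else set()
    coverage_obj = len(covered & all_pairs) / len(all_pairs)

    # Objective 3: fault-detection capability relative to the original suite.
    total_faults = sum(faults_of.values()) or 1
    fault_obj = sum(faults_of.get(t, 0) for t in suite) / total_faults

    # RWGA idea: draw a fresh, normalized weight vector at every evaluation so that
    # successive generations are pulled toward different parts of the trade-off surface.
    weights = [random.random() for _ in range(3)]
    norm = sum(weights) or 1
    objectives = (size_obj, coverage_obj, fault_obj)
    return sum(w / norm * obj for w, obj in zip(weights, objectives))


# Tiny usage example with made-up data.
pairs_of = {"t1": {("A", "B")}, "t2": {("A", "B"), ("B", "C")}, "t3": {("C", "D")}}
faults_of = {"t1": 2, "t2": 1, "t3": 0}
all_pairs = set().union(*pairs_of.values())
print(random_weighted_fitness({"t2", "t3"}, total_tests=3,
                              pairs_of=pairs_of, all_pairs=all_pairs, faults_of=faults_of))

In a GA wrapper, this score would drive selection over bit-string individuals that encode which test cases are kept; the random reweighting is what distinguishes RWGA from fixed-weight aggregation.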


Published in

GECCO '13: Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation
July 2013, 1672 pages
ISBN: 9781450319638
DOI: 10.1145/2463372
Editor: Christian Blum
General Chair: Enrique Alba

            Copyright © 2013 ACM


            Publisher

            Association for Computing Machinery

            New York, NY, United States



Acceptance Rates

GECCO '13 Paper Acceptance Rate: 204 of 570 submissions, 36%. Overall Acceptance Rate: 1,669 of 4,410 submissions, 38%.

