
Automated test reuse for highly configurable software

Published in: Empirical Software Engineering

Abstract

Dealing with highly configurable systems is generally very complex, and researchers and practitioners have conceived hundreds of analysis techniques to address different aspects of such systems. One large focal point is testing configurable software, which is challenging due to the large number of possible configurations. Moreover, tests themselves are rarely configurable; instead, they are built for specific configurations, so existing tests need to be adapted before they can run on a different configuration. In this paper, we report on an experiment on automatically reusing existing tests in configurable systems. We used manually developed tests for specific configurations of three configurable systems and investigated how changing the configuration affects the tests. Subsequently, we employed an approach for automated reuse to generate new test variants (by reusing parts of existing ones) for combinations of the previous configurations and compared their results to those of the existing tests. Our results show that some tests could be reused directly for different configurations. Nonetheless, our automatically generated test variants generally yielded better results: in most cases they had a success rate higher than or equal to that of the existing tests, and even when the success rates were equal, the generated tests generally achieved higher code coverage.





Acknowledgements

The research reported in this paper has been funded by the Austrian Ministry for Transport, Innovation and Technology (BMVIT), the Federal Ministry for Digital and Economic Affairs (BMDW), and the Province of Upper Austria within the framework of the COMET - Competence Centers for Excellent Technologies Programme, managed by the Austrian Research Promotion Agency (FFG). This research was in part funded by the Linz Institute of Technology (LIT) Secure and Correct Systems Lab and the Austrian Science Fund (FWF), grant no. P31989.

Author information

Corresponding author

Correspondence to Stefan Fischer.

Additional information

Communicated by: Laurence Duchien, Thomas Thüm and Paul Grünbacher

This article belongs to the Topical Collection: Configurable Systems

About this article

Cite this article

Fischer, S., Michelon, G.K., Ramler, R. et al. Automated test reuse for highly configurable software. Empir Software Eng 25, 5295–5332 (2020). https://doi.org/10.1007/s10664-020-09884-x
