
A Germinal Center Artificial Immune System for Black Box Test Selection

  • Original Research
  • Published in SN Computer Science

Abstract

The rise of computer science has given engineers more opportunities to automate tasks. A prime example is testing, which is moving from a manual to an automated process. Modern test frameworks enable programmers to write vast numbers of tests as source code in a rather short time. Large numbers of tests are necessary as many products, such as cars, offer more and more features. On the other hand, product development is time critical, and thus exhaustive testing is not always feasible. Often a subset of critical tests is executed in order to gain insight into the current status of the product while meeting time constraints. The choice of an appropriate set of tests is commonly known as test selection. A corresponding test suite usually has to fulfill several goals, such as a short execution time or failure-revealing capability. Thus, this vital task has naturally come into the focus of multiobjective optimization research. In a previous work we proposed a germinal center artificial immune system (GCAIS) for test selection and experimentally showed that it is capable of outperforming several other metaheuristics, including an NSGA-II specialized for the task. In this work we focus on a deeper evaluation, especially of the algorithmic changes that we made. Further, we give insight into how we delivered the final GCAIS to our customer, thereby showing the full-stack development of an evolutionary computation application.
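As a minimal sketch of the underlying optimization problem (not the authors' exact formulation), a candidate test suite can be scored on two such objectives: total execution time, to be minimized, and the number of previously failure-revealing tests it contains, to be maximized. The data layout and names below are illustrative assumptions only:

    # Hypothetical bi-objective evaluation of a candidate test subset.
    # Each test has an estimated runtime and a flag indicating whether it
    # revealed a failure in earlier test sessions.
    def evaluate(subset, tests):
        runtime = sum(tests[i]["runtime"] for i in subset)            # minimize
        failures = sum(tests[i]["revealed_failure"] for i in subset)  # maximize
        return runtime, failures

    tests = [{"runtime": 3.0, "revealed_failure": True},
             {"runtime": 1.5, "revealed_failure": False},
             {"runtime": 2.0, "revealed_failure": True}]
    print(evaluate({0, 2}, tests))  # (5.0, 2)

A multiobjective metaheuristic such as GCAIS or NSGA-II then searches for test subsets that are Pareto-optimal with respect to such objectives.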


Availability of Data and Material

The datasets used can be found at: https://github.com/LagLukas/moa_testing.

Code availability

The source code for the metaheuristics etc. can be retrieved from: https://github.com/LagLukas/moa_testing.

Notes

  1. The test has further conditions: the observations must be ordinal (which is true in our case) and the observations from both groups must be independent. The latter we assume, as the individual algorithms are run independently from each other (see the sketch following these notes).

  2. Available here: https://github.com/LagLukas/moa_testing.

  3. The percentiles are the empirical quantiles. For example, the 50 percent quantile is the smallest value in the sample such that at least 50 percent of the sample values are less than or equal to it.

  4. A corresponding statistical test would yield a p-value of \(4.04 \times 10^{-9}\) (for the null hypothesis that the vanilla NSGA-II performs better than the version with multiaxis initialization).

  5. An alternative wording for test suite is test sequence if an ordering for the test cases is fixed.
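The comparison referred to in notes 1 and 4, and the empirical percentiles of note 3, can be reproduced with standard tooling. The sketch below assumes that the test in question is a one-sided Mann-Whitney U (rank-sum) test, which matches the stated conditions of ordinal, independent samples; the sample values are invented for illustration:

    # Assumed setup: two independent samples of a quality metric (higher is
    # better), one sample per algorithm variant.
    import numpy as np
    from scipy.stats import mannwhitneyu

    vanilla   = np.array([0.71, 0.69, 0.73, 0.70, 0.68])  # invented values
    multiaxis = np.array([0.78, 0.80, 0.77, 0.79, 0.81])  # invented values

    # One-sided test: the null hypothesis is that the vanilla variant performs
    # at least as well as the multiaxis one; a small p-value speaks against it.
    stat, p_value = mannwhitneyu(vanilla, multiaxis, alternative="less")
    print(p_value)

    # Empirical percentiles as in note 3, e.g. the 50 percent quantile (median).
    print(np.percentile(multiaxis, 50))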


Funding

Not applicable (no funding).

Author information

Authors and Affiliations

Authors

Contributions

Not applicable.

Corresponding author

Correspondence to Lukas Rosenbauer.

Ethics declarations

Conflict of interest

Not applicable (no conflicts of interest or competing interests known).

Compliance with Ethical Standards

Dear editors and dear guest editors, this section contains the declarations necessary for the manuscript submission. In addition to the required ones, we would like to express our gratitude for the invitation to contribute to the special issue. We are looking forward to your and the reviewers' feedback.

Consent to Participate

Not applicable (no medical study).

Consent to Publish

Not applicable (no medical study).

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Bio-inspired Algorithms for Combinatorial Optimization” guest edited by Aniko Ekart, Christine Zarges and Sébastien Verel.

A Fitness on the Oven Datasets

This appendix contains the fitness of the vanilla GCAIS on the two oven datasets, as well as the differences between the vanilla variant and our extended version. They show a behaviour similar to the one discussed for the dishwasher dataset. For a fixed test session the vanilla variant has a constant output, whereas our adapted variant is capable of offering a richer solution output, as can be seen in the differences. For the oven 2 dataset the effect is less visible than for the oven 1 dataset, but it is nonetheless present (Figs. 13, 14, 15, 16).

Fig. 13: Fitness difference between the adapted GCAIS variant and the standard one (oven 1)

Fig. 14: Fitness values of the standard version of GCAIS (oven 1)

Fig. 15: Fitness difference between the adapted GCAIS variant and the standard one (oven 2)

Fig. 16: Fitness values of the standard version of GCAIS (oven 2)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Rosenbauer, L., Stein, A. & Hähner, J. A Germinal Center Artificial Immune System for Black Box Test Selection. SN COMPUT. SCI. 4, 55 (2023). https://doi.org/10.1007/s42979-022-01474-6


  • DOI: https://doi.org/10.1007/s42979-022-01474-6
