On the preferences of quality indicators for multi-objective search algorithms in search-based software engineering

Published in Empirical Software Engineering

Abstract

Multi-Objective Search Algorithms (MOSAs) have been applied to solve diverse Search-Based Software Engineering (SBSE) problems. In most cases, SBSE users select one or more commonly used MOSAs, such as the Nondominated Sorting Genetic Algorithm II (NSGA-II), without any evidence-based justification for why those particular MOSAs were chosen. However, when working on a specific multi-objective SBSE problem, users typically know which qualities they are looking for in solutions. Such qualities are represented by one or more Quality Indicators (QIs), which are often employed to assess various MOSAs and select the best one. Users, however, usually have limited time budgets, which prevents them from executing multiple MOSAs and only afterwards selecting the best one. For such users, it is therefore highly preferable to select a single MOSA from the beginning. To this end, this paper aims to assist SBSE users in finding appropriate MOSAs for their experiments, given their choices of QIs or quality aspects (e.g., Convergence, Uniformity). To achieve this aim, we conduct an extensive empirical evaluation with 18 search problems from a set of real-world, industrial, and open-source case studies, to study the preferences between commonly used QIs and MOSAs in SBSE. We observe that each QI has its own most-preferred MOSA and vice versa; NSGA-II and the Strength Pareto Evolutionary Algorithm 2 (SPEA2) are the MOSAs most preferred by QIs; no QI is the most preferred by all MOSAs; the preferences between QIs and MOSAs vary across search problems; and QIs covering the same quality aspect(s) do not necessarily have the same preference for MOSAs. Based on our results, we provide discussions and guidelines to help SBSE users select appropriate MOSAs based on experimental evidence.
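To make the kind of comparison underlying such an evaluation concrete, the following minimal Python sketch ranks a few MOSAs on a single search problem by one QI computed over repeated runs, and checks adjacent pairs in the ranking with the Mann-Whitney U test, as commonly recommended in SBSE guidelines. The QI values, algorithm set, and significance threshold are illustrative assumptions, not the paper's exact data or procedure.

```python
# Illustrative sketch (not the paper's exact procedure): rank MOSAs on one
# search problem by a single QI, using simulated QI values from repeated runs.
import random
from statistics import mean

from scipy.stats import mannwhitneyu

random.seed(0)

# Simulated QI values (higher assumed better, e.g., Hypervolume) for 30
# independent runs of each MOSA; real values would come from actual executions.
qi_values = {
    "NSGA-II": [random.gauss(0.72, 0.03) for _ in range(30)],
    "SPEA2": [random.gauss(0.70, 0.03) for _ in range(30)],
    "SMPSO": [random.gauss(0.65, 0.04) for _ in range(30)],
}

# Preference order: sort MOSAs by their mean QI value, best first.
ranking = sorted(qi_values, key=lambda mosa: mean(qi_values[mosa]), reverse=True)
print("Preference order:", " > ".join(ranking))

# Test whether adjacent MOSAs in the ranking differ significantly (alpha = 0.05).
for better, worse in zip(ranking, ranking[1:]):
    _, p_value = mannwhitneyu(qi_values[better], qi_values[worse],
                              alternative="two-sided")
    verdict = "significant" if p_value < 0.05 else "not significant"
    print(f"{better} vs {worse}: p = {p_value:.4f} ({verdict})")
```

A full study would additionally report an effect size (e.g., Vargha and Delaney's Â12) and repeat this analysis per QI and per search problem, which corresponds to the per-problem preference rankings summarized in the appendix tables.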


Notes

  1. A maximization objective can be converted into a minimization one by negating it (see the identity after these notes).

  2. Note that some MOSAs are not applicable to some of the search problems, so the formulations in (1), (2), and (3) are in fact slightly more complicated. We report the simplified versions here, but use the correct versions in the experiments.
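As referenced in note 1, maximizing an objective f over a search space X is equivalent to minimizing its negation; in symbols:

$$\operatorname*{arg\,max}_{x \in X} f(x) \;=\; \operatorname*{arg\,min}_{x \in X} \bigl(-f(x)\bigr), \qquad \max_{x \in X} f(x) \;=\; -\min_{x \in X} \bigl(-f(x)\bigr).$$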


Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grant No. 61872182. It is also partially supported by the Co-evolver project (No. 286898/F20) funded by the Research Council of Norway. Paolo Arcaini is supported by the ERATO HASUO Metamathematics for Systems Design Project (No. JPMJER1603), JST. Huihui Zhang is supported by the Shandong Provincial Natural Science Foundation (grant No. ZR2021MF026).

Author information

Corresponding author: Tao Yue.

Ethics declarations

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Additional information

Communicated by: Aldeida Aleti, Annibale Panichella and Shin Yoo

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Advances in Symposium on Search-Based Software Engineering (SSBSE)

A Appendix


A.1 Detailed Data for RQ1.2

To answer RQ1.2, Table 13 presents the detailed results: for each QI and each search problem, it reports the preference ranking of all MOSAs.

Table 13 The preference orders of the selected MOSAs, for each QI and each search problem

A.2 Detailed Data for RQ2.2

To answer RQ2.2, Table 14 presents the detailed results: for each MOSA and each search problem, it reports the preference ranking of all QIs.

Table 14 The preference orders of the selected QIs, for each MOSA and each search problem
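As a minimal sketch of how the preference orders in Tables 13 and 14 could be represented programmatically, the snippet below uses a nested mapping from QI to search problem to an ordered list of MOSAs; all QI names, problem names, and orders here are hypothetical placeholders, not the paper's results.

```python
# Hypothetical representation of Table 13: for each QI and each search problem,
# an ordered list of MOSAs from most to least preferred. Table 14 would be the
# dual structure, mapping each MOSA and problem to an ordered list of QIs.
preference_orders: dict[str, dict[str, list[str]]] = {
    "HV": {
        "ProblemA": ["NSGA-II", "SPEA2", "SMPSO"],
        "ProblemB": ["SPEA2", "NSGA-II", "SMPSO"],
    },
    "IGD": {
        "ProblemA": ["SPEA2", "NSGA-II", "SMPSO"],
        "ProblemB": ["NSGA-II", "SMPSO", "SPEA2"],
    },
}

# Example query: which MOSA does HV prefer most on ProblemA?
print(preference_orders["HV"]["ProblemA"][0])  # -> NSGA-II
```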

About this article

Cite this article

Wu, J., Arcaini, P., Yue, T. et al. On the preferences of quality indicators for multi-objective search algorithms in search-based software engineering. Empir Software Eng 27, 144 (2022). https://doi.org/10.1007/s10664-022-10127-4

