
Zen-ReqOptimizer: a search-based approach for requirements assignment optimization

Empirical Software Engineering

Abstract

In the early phases of the product development lifecycle of large-scale Cyber-Physical Systems (CPSs), a large number of requirements need to be assigned to stakeholders from different organizations, or from different departments of the same organization, for review, clarification, and checking of their conformance to standards and regulations. These requirements have various characteristics, such as their importance to the organization, their complexity, and dependencies among one another, and therefore require different effort (workload) to review and clarify. While working with our industrial partners in the CPS domain, we discovered an optimization problem in which an optimal solution is required for assigning requirements to various stakeholders such that the stakeholders' familiarity with their assigned requirements is maximized while the overall workload is balanced across stakeholders. To this end, we propose a fitness function that takes all the above-mentioned factors into account to guide a search algorithm toward an optimal solution. As a pilot experiment, we first investigated four commonly applied search algorithms (GA, (1 + 1) EA, AVM, and RS) together with the proposed fitness function; the results show that (1 + 1) EA performs significantly better than the other algorithms. Since our optimization problem is multi-objective, we further empirically evaluated the performance of the fitness function with six multi-objective search algorithms (CellDE, MOCell, NSGA-II, PAES, SMPSO, and SPEA2), together with (1 + 1) EA (the best in the pilot study) and RS (as the baseline), in terms of finding an optimal solution, using a real-world case study and 120 artificial problems of varying complexity. The results show that, for both the real-world case study and the artificial problems, (1 + 1) EA achieved the best performance for each single objective and NSGA-II achieved the best performance for the overall fitness.
NSGA-II can solve a wide range of problems without significant performance degradation, whereas (1 + 1) EA is not well suited for problems with fewer than 250 requirements. Therefore, we recommend that if a project manager is interested in a particular objective, (1 + 1) EA should be used; otherwise, NSGA-II should be applied to obtain optimal solutions when the overall fitness is the first priority.



Notes

  1. In our context, we define stakeholders as engineers in different organizations who are responsible for reviewing and clarifying requirements and checking their conformance to various standards. Such stakeholders include, for example, domain experts of a specific discipline, such as software engineering, and requirements engineers who are responsible for managing requirement artifacts.


Acknowledgments

This work was supported by the Zen-Configurator project (No. 240024) and the MBT4CPS project (No. 240013), funded by the Research Council of Norway under the Young Research Talents category of the FRIPRO funding scheme. Tao Yue and Shaukat Ali are also supported by the EU Horizon 2020 project U-Test (http://www.u-test.eu/), the MBE-CR project (An Innovative Approach for Longstanding Development and Maintenance of the Automated Cancer Registry System, No. 239063), and the Certus SFI (http://certus-sfi.no/). This work was also supported in part by grants from the National Natural Science Foundation of China (Nos. 61370058 and 61170087).

Corresponding author

Correspondence to Tao Yue.

Additional information

Communicated by: Daniel Amyot

Appendix

In this appendix, we present the results of evaluating the multi-objective search algorithms using the hypervolume (HV) indicator, a quality indicator commonly used with multi-objective search algorithms. The results are for the full-scale empirical study. Recall that each search algorithm was run 100 times for each problem, generating a Pareto front at each run; we calculated the HV value of each Pareto front.
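To make the HV computation concrete, the sketch below implements the standard sweep for the two-objective minimization case. It is an illustrative assumption on our part, not the paper's implementation: the function name and the reference point are ours, the front is assumed to contain only non-dominated points, and fronts with more than two objectives require more involved algorithms.

```python
def hypervolume_2d(front, ref):
    """Hypervolume (HV) of a two-objective Pareto front, both
    objectives minimized, relative to a reference point `ref`
    that is dominated by every solution on the front.
    Assumes `front` contains only non-dominated points."""
    hv = 0.0
    prev_f1 = ref[0]
    # Sweep from the worst to the best value of the first objective,
    # adding the rectangle each point contributes to the dominated region.
    for f1, f2 in sorted(front, reverse=True):
        hv += (prev_f1 - f1) * (ref[1] - f2)
        prev_f1 = f1
    return hv
```

A larger HV means the front dominates a larger portion of the objective space, which is why algorithms with larger HV values are preferred below.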

1.1 Real-world Case Study

We obtained 100 HV values for the real-world case study for each algorithm. We conducted the Wilcoxon signed-rank test at the significance level of 0.05 on the HV values; the results are presented in Table 19. Algorithms with larger HV values are desirable. If \(\hat {A}_{12}\) is greater than 0.5, algorithm A has a higher chance of obtaining a higher HV than algorithm B; an \(\hat {A}_{12}\) value less than 0.5 means that algorithm A has a lower chance of obtaining a higher HV than B. From Table 19, we can conclude that, in terms of HV, NSGA-II obtains significantly better results than SPEA2, followed by MOCell, PAES, and SMPSO; CellDE obtains the worst results. These results are consistent with what we observed in Section 4.2.1.

Table 19 Results of the Vargha and Delaney statistics and the Wilcoxon signed-rank test at the significance level of 0.05 (in terms of HV) – real-world case study
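The \(\hat {A}_{12}\) statistic used above can be computed directly from two samples of HV values. The following Python sketch is illustrative (the function name is ours, not from the paper):

```python
def vargha_delaney_a12(a, b):
    """Vargha-Delaney A12 effect size: the probability that a value
    drawn from sample `a` exceeds one drawn from sample `b`,
    with ties counting as half. A12 > 0.5 favours `a`;
    A12 < 0.5 favours `b`; A12 == 0.5 means no difference."""
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in a for y in b)
    return wins / (len(a) * len(b))
```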

1.2 Artificial Problems

For each of the 120 artificial problems, we conducted the Wilcoxon signed-rank test to compare each pair of algorithms in terms of HV. Table 20 summarizes the results of the Vargha and Delaney statistics (with and without the Wilcoxon signed-rank test applied). Without the Wilcoxon signed-rank test, A > B denotes the number of problems (out of 120) for which A obtained better solutions than B; A < B denotes the number of problems for which A obtained worse solutions than B; and A = B denotes the number of problems for which there is no difference between A and B.

Table 20 Results of the Vargha and Delaney statistical test (in terms of HV) - 120 artificial problems (without/with the Wilcoxon signed-rank test)

The results show that, for HV, NSGA-II performed significantly better than SPEA2 for 115 problems, and SPEA2 performed significantly better than PAES for 72 problems. PAES performed significantly better than MOCell for 65 problems. MOCell performed significantly better than CellDE for 118 problems. CellDE performed significantly better than SMPSO for 55 problems and significantly worse than SMPSO for 30 problems; SMPSO and CellDE had no significant difference for 35 problems. Based on these results, we can conclude that, in terms of HV, NSGA-II achieves the best performance, followed by SPEA2, PAES, and MOCell. CellDE and SMPSO show similar performance, and both performed worse than the other search algorithms. In terms of HV, for all 120 artificial problems, the results are similar to what has been reported in Section 4.2.2.
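The per-problem counting scheme behind Table 20 can be sketched as follows, assuming the \(\hat {A}_{12}\) values and Wilcoxon p-values for one algorithm pair have already been computed (all names and the input format are illustrative assumptions, not the paper's code):

```python
def tally_comparisons(a12_values, p_values, alpha=0.05):
    """Summarize A-vs-B results over many problems.
    `a12_values[i]` is the Vargha-Delaney A12 of A vs B on problem i,
    and `p_values[i]` is the corresponding Wilcoxon signed-rank p-value.
    Returns (#problems A is significantly better,
             #problems A is significantly worse,
             #problems with no significant difference)."""
    better = worse = no_diff = 0
    for a12, p in zip(a12_values, p_values):
        if p >= alpha or a12 == 0.5:
            no_diff += 1       # not significant, or no effect
        elif a12 > 0.5:
            better += 1        # A significantly better than B
        else:
            worse += 1         # A significantly worse than B
    return better, worse, no_diff
```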


Cite this article

Li, Y., Yue, T., Ali, S. et al. Zen-ReqOptimizer: a search-based approach for requirements assignment optimization. Empir Software Eng 22, 175–234 (2017). https://doi.org/10.1007/s10664-015-9418-0
