Abstract
During the selection of commercial off-the-shelf (COTS) products, mismatches encountered between stakeholders’ requirements and features offered by COTS products are inevitable. These mismatches occur as a result of an excess or shortage of functionality offered by the COTS. A decision support approach, called mismatch handling for COTS selection (MiHOS), was proposed earlier to help address mismatches while considering limited resources. In MiHOS, several input parameters need to be estimated such as the level of mismatches and the resource consumptions and constraints. These estimates are subject to uncertainty and therefore limit the applicability of the results. In this paper, we propose sensitivity analysis for MiHOS (MiHOS-SA), an approach that aims at helping decision makers gain insights into the impact of input uncertainties on the validity of MiHOS’ results. MiHOS-SA draws on existing sensitivity analysis techniques to address the problem. A case study from the e-services domain was conducted to illustrate MiHOS-SA and discuss its added value.
Notes
How Amount i can be estimated and normalized to the range from 0 to 1 is discussed in [8].
In this paper, we assume for simplicity that the sampling range is symmetric around ρ_i. The method remains applicable to a non-symmetric sampling range.
Keystone Identification is a COTS evaluation strategy that starts by identifying a key requirement and then searches for products that satisfy it. Progressive Filtering is an evaluation strategy that starts with a large number of COTS products and progressively eliminates less-fit products through successive iterations of evaluation cycles.
A bottleneck constraint is the one that controls the optimization process and suppresses the effect of other constraints [20].
The impact of resolving a mismatch on the COTS fitness is represented by multiplying the mismatch amount Amount_i by the relative weight Ω_i in Eq. (2): \( F(x) = \sum\nolimits_{i = 1}^{\mu} \left( Amount_{i} \cdot \Upomega_{i} \cdot \sum\nolimits_{j = 1}^{J} (x_{i,j} \cdot \Updelta r_{i,j}) \right). \)
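Eq. (2) can be sketched directly as a nested sum. The function and argument names below are illustrative, not from the paper: `amounts[i]` is the normalized mismatch amount, `weights[i]` the relative weight Ω_i, `x[i][j]` the binary decision variable for resolution action j of mismatch i, and `delta_r[i][j]` the fitness gain Δr_{i,j}.

```python
def fitness(amounts, weights, x, delta_r):
    """Sketch of Eq. (2): F(x) = sum_i ( Amount_i * Omega_i * sum_j (x_ij * delta_r_ij) )."""
    return sum(
        amounts[i] * weights[i]
        * sum(x[i][j] * delta_r[i][j] for j in range(len(x[i])))
        for i in range(len(amounts))
    )
```

For a single mismatch with Amount = 0.5, Ω = 2.0, and a chosen action contributing Δr = 0.4, the plan's fitness impact is 0.5 · 2.0 · 0.4 = 0.4.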
From Table 3, COTS4 has 63 mismatches. This means each mismatch-resolution plan has 63 suggestions, one suggestion to resolve each mismatch. Thus, the number of changes is equal to 63 × 10% ≈ 6.
These numbers do not include administrative activities, such as the effort spent on meetings and reporting, and assume the analysts are familiar with both approaches.
References
Carney D (1998) COTS evaluation in the real world, Carnegie Mellon University
Kontio J (1995) OTSO: a systematic process for reusable software component selection. University of Maryland, Maryland CS-TR-3478, December 1995
Vigder MR, Gentleman WM, Dean J (1996) COTS software integration: state of the art. National Research Council Canada (NRC) 39198
Mohamed A (2007) Decision support for selecting COTS software products based on comprehensive mismatch handling. PhD Thesis, Electrical and Computer Engineering Department, University of Calgary, Canada
Mohamed A, Ruhe G, Eberlein A (2007) Decision support for handling mismatches between COTS products and system requirements. In: The 6th IEEE international conference on COTS-based software systems (ICCBSS’07), Banff, pp 63–72
Alves C (2003) COTS-based requirements engineering. In: Component-based software quality—methods and techniques, vol 2693. Springer, Heidelberg, pp 21–39
Carney D, Hissam SA, Plakosh D (2000) Complex COTS-based software systems: practical steps for their maintenance. J Softw Maintenance 12:357–376
Mohamed A, Ruhe G, Eberlein A (2007) MiHOS: an approach to support handling the mismatches between system requirements and COTS products. Requirements Eng J (Accepted on Jan 2, 2007, http://dx.doi.org/10.1007/s00766-007-0041-5)
Ziv H, Richardson D, Klösch R (1996) The uncertainty principle in software engineering. University of California, Irvine UCI-TR-96-33, Aug 1996
Saltelli A, Chan K, Scott EM (2000) Sensitivity analysis. Wiley, New York
Saltelli A (2004) Global sensitivity analysis: an introduction. In: 4th international conference on sensitivity analysis of model output (SAMO ‘04), Los Alamos National Laboratory, pp 27–43
Lung C-H, Van KK (2000) An approach to quantitative software architecture sensitivity analysis. Int J Softw Eng Knowl Eng 10:97–114
Wagner S (2007) Global sensitivity analysis of predictor models in software engineering. In: The 3rd international PROMISE workshop (co-located with ICSE’07), Minneapolis
Saltelli A, Tarantola S, Campolongo F, Ratto M (2004) Sensitivity analysis in practice: a guide to assessing scientific models. Wiley, New York
Kontio J (1996) A case study in applying a systematic method for COTS selection. In: 18th International Conference on Software Engineering (ICSE’96), Berlin, pp 201–209
Wolsey LA, Nemhauser GL (1998) Integer and combinatorial optimization. Wiley, New York
LINDO Systems: http://www.lindo.com
Ngo-The A, Ruhe G (2008) A systematic approach for solving the wicked problem of software release planning. Soft Comput, 12 (in press)
Tukey JW (1977) Exploratory data analysis. Addison-Wesley, Reading
Goldratt EM (1998) Essays on the theory of constraints. North River Press, Great Barrington
Humphrey W (1989) Managing the software process. Addison-Wesley Professional, Reading
Al-Emran A, Pfahl D, Ruhe G (2007) DynaReP: a discrete event simulation model for planning and re-planning of software releases, Minneapolis, May 2007
Li J, Ruhe G, Al-Emran A, Richter M (2006) A flexible method for effort estimation by analogy. Emp Softw Eng 12:65–106
Acknowledgments
We appreciate the support of the Natural Sciences and Engineering Council of Canada (NSERC) and of the Alberta Informatics Circle of Research Excellence (iCORE) to conduct this research.
Appendix: “DIFF” metric
This appendix elaborates the discussion presented in Sect. 3.2.3 for estimating the value of the DIFF metric when applying MiHOS-SA. Consider a set of mismatches M = {m_1, …, m_μ}. Typically, MiHOS suggests a set of five plans to handle these mismatches. Assume this set is given as:

SOL = {Y_0, Y_1, …, Y_4}
For the mismatches {m_1, m_2, …, m_μ}, a plan Y_n suggests a set of actions {y_1, y_2, …, y_μ}, where y_i refers to one of the options: “Do not resolve m_i”, “Resolve m_i using resolution action a_{i,1}”, “Resolve m_i using resolution action a_{i,2}”, etc.
When MiHOS-SA is applied, the input parameters of MiHOS are varied to simulate input uncertainties, and the output changes accordingly. Assume the new set of suggested plans is:

SOL_uncertain = {Z_0, Z_1, …, Z_4}
where Z_n is a solution plan after changing the input parameters. Similarly to Y_n, a plan Z_n can be represented as Z_n = {z_1, …, z_μ}, where z_i refers to one of the options: “Do not resolve m_i”, “Resolve m_i using resolution action a_{i,1}”, etc.
As discussed in Sect. 3.2.3, estimating DIFF between only two plans, e.g., Y_1 and Z_1, requires comparing each y_i with z_i and counting the occurrences where y_i ≠ z_i. In MiHOS, however, all five plans Y_0, …, Y_4 must be compared with Z_0, …, Z_4. DIFF per plan is therefore estimated as the total number of differences between all plans in SOL and those in SOL_uncertain, divided by the number of plans. This is calculated as follows:

\( DIFF = \frac{N_{diff}}{K \cdot \mu} \) (6)

where:

N_diff is the total number of structural differences between the plans in SOL and those in SOL_uncertain,

“K” is the total number of plans in SOL (K = 5 for five solution plans), and

“μ” is the total number of mismatches.

We divide by “K” to get the average number of structural differences per plan, and by “μ” because DIFF, by definition, indicates the percentage (not the number) of structural difference, and thus must be calculated relative to the total number of mismatches.
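Given a linking between the two sets of plans, the computation above reduces to counting per-position disagreements and normalizing by K · μ. The following sketch assumes each plan is a list of μ action labels and that a linking is supplied as (SOL index, SOL_uncertain index) pairs; the function names are illustrative, not from the paper.

```python
def plan_diff(y, z):
    """Number of mismatches whose suggested action differs between plans y and z."""
    return sum(1 for yi, zi in zip(y, z) if yi != zi)

def diff_metric(sol, sol_uncertain, linking):
    """Eq. (6): total structural differences under a given linking, divided by K * mu."""
    k = len(sol)          # K: number of plans (5 in MiHOS)
    mu = len(sol[0])      # mu: number of mismatches per plan
    total = sum(plan_diff(sol[m], sol_uncertain[n]) for m, n in linking)
    return total / (k * mu)
```

For two plans of two mismatches each, where exactly one suggested action changed, DIFF = 1 / (2 · 2) = 0.25, i.e., 25% structural difference per plan.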
The challenge here is to estimate the numerator in Eq. (6). The order of the plans in SOL and SOL_uncertain is meaningless, so we cannot calculate the total number of differences simply by comparing Y_0 with Z_0, Y_1 with Z_1, etc. Instead, we must “link” each plan in SOL_uncertain to exactly one plan in SOL based on the following hypothesis:
“the correct linking scheme between the plans in SOL_uncertain and the plans in SOL results in a lower total number of differences between SOL_uncertain and SOL than any other linking scheme”.
The above hypothesis stems from the fact that each plan in SOLuncertain should be linked to the most similar plan in SOL because it should represent that plan after it has been changed. To find the “correct linking”, the following procedure is used:
1. Create an empty 5 × 5 table where the rows are labelled Y_0, …, Y_4 and the columns Z_0, …, Z_4 (Fig. 9). The first cell in the table is denoted Cell(0,0).

2. For all values of two variables m and n, where 0 ≤ m ≤ 4 and 0 ≤ n ≤ 4: count the number of differences between Y_m and Z_n and record the result in Cell(m, n).

3. For all linking permutations between {Z_0, …, Z_4} and {Y_0, …, Y_4}, calculate the total number of differences using the data stored in the 5 × 5 table. For example, for the permutation Z_0 → Y_0, Z_1 → Y_1, Z_2 → Y_2, Z_3 → Y_3, and Z_4 → Y_4, the total number of differences is equal to Cell(0,0) + Cell(1,1) + Cell(2,2) + Cell(3,3) + Cell(4,4).

4. The correct linking is the permutation that results in the minimum total number of differences.
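The four steps above amount to a brute-force assignment search: precompute the pairwise-difference table, then enumerate all K! permutations and keep the cheapest. A minimal sketch (function name and plan representation are assumptions, not from the paper; for K = 5 only 120 permutations exist, so enumeration is cheap):

```python
from itertools import permutations

def best_linking(sol, sol_uncertain):
    """Find the linking permutation minimizing total differences (steps 1-4)."""
    k = len(sol)
    # Step 2: Cell(m, n) = number of differing actions between Y_m and Z_n.
    cell = [[sum(1 for a, b in zip(sol[m], sol_uncertain[n]) if a != b)
             for n in range(k)] for m in range(k)]
    # Steps 3-4: enumerate every permutation Y_m -> Z_perm[m]; keep the minimum total.
    best = min(permutations(range(k)),
               key=lambda perm: sum(cell[m][perm[m]] for m in range(k)))
    total = sum(cell[m][best[m]] for m in range(k))
    return best, total
```

The returned total is exactly the numerator of Eq. (6). For larger K, the same assignment problem could be solved in polynomial time with the Hungarian algorithm instead of enumeration.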
Mohamed, A., Ruhe, G. & Eberlein, A. Sensitivity analysis in the process of COTS mismatch-handling. Requirements Eng 13, 147–165 (2008). https://doi.org/10.1007/s00766-008-0062-8