Sensitivity analysis in the process of COTS mismatch-handling


Abstract

During the selection of commercial off-the-shelf (COTS) products, mismatches between stakeholders’ requirements and the features offered by COTS products are inevitable. These mismatches occur as a result of an excess or shortage of functionality offered by the COTS product. A decision support approach, called mismatch handling for COTS selection (MiHOS), was proposed earlier to help address mismatches while considering limited resources. In MiHOS, several input parameters need to be estimated, such as the level of mismatches and the resource consumption and constraints. These estimates are subject to uncertainty and therefore limit the applicability of the results. In this paper, we propose sensitivity analysis for MiHOS (MiHOS-SA), an approach that aims at helping decision makers gain insight into the impact of input uncertainties on the validity of MiHOS’ results. MiHOS-SA draws on existing sensitivity analysis techniques to address this problem. A case study from the e-services domain was conducted to illustrate MiHOS-SA and discuss its added value.

Notes

  1. Overall, we have identified five types of mismatches, for example: (1) PartialMatch, which indicates that f_i partially satisfies g_i, and (2) OverMatch, which indicates that f_i exhibits more capability than g_i requires. A taxonomy of these mismatches is introduced in [4, 8].

  2. How Amount_i can be estimated and normalized to the range [0, 1] is discussed in [8].

  3. In this paper, we assume for simplicity that the sampling range is symmetric around ρ_i, e.g., [ρ_i − δ, ρ_i + δ] for some δ > 0. The method remains applicable to non-symmetric sampling ranges.

  4. Keystone Identification is a COTS evaluation strategy that starts by identifying a key requirement and then searches for products that satisfy it. Progressive Filtering is an evaluation strategy that starts with a large number of COTS products and progressively eliminates less-fit ones through successive iterations of product evaluation cycles.

  5. A bottleneck constraint is one that controls the optimization process and suppresses the effect of the other constraints [20].

  6. The impact of resolving a mismatch on the COTS fitness is represented by multiplying the mismatch amount Amount_i by the relative weight Ω_i in Eq. (2): \( F(x) = \sum\nolimits_{i = 1}^{\mu} \left( Amount_{i} \cdot \Omega_{i} \cdot \sum\nolimits_{j = 1}^{J} (x_{i,j} \cdot \Delta r_{i,j}) \right). \) A code sketch of this equation follows these notes.

  7. From Table 3, COTS4 has 63 mismatches, so each mismatch-resolution plan contains 63 suggestions, one to resolve each mismatch. Thus, the number of changes is 63 × 10% ≈ 6.

  8. These numbers exclude administrative activities such as the effort spent on meetings and reporting, and they assume the analysts are familiar with both approaches.
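
As a reading aid for Eq. (2) in note 6, the following minimal Python sketch evaluates F(x). It is an illustration under assumed names (fitness, amounts, weights, x, delta_r); none of these identifiers come from the MiHOS implementation.

    # Minimal sketch of Eq. (2) in note 6; identifiers are illustrative only.
    def fitness(amounts, weights, x, delta_r):
        """F(x) = sum_i Amount_i * Omega_i * sum_j x_{i,j} * delta_r_{i,j}.

        amounts[i]    -- normalized mismatch amount Amount_i in [0, 1] (note 2)
        weights[i]    -- relative weight Omega_i of the mismatched requirement
        x[i][j]       -- decision variable: 1 if action a_{i,j} is selected, else 0
        delta_r[i][j] -- portion Delta r_{i,j} of mismatch i resolved by a_{i,j}
        """
        return sum(
            amounts[i] * weights[i]
            * sum(x_ij * dr_ij for x_ij, dr_ij in zip(x[i], delta_r[i]))
            for i in range(len(amounts))
        )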

References

  1. Carney D (1998) COTS evaluation in the real world, Carnegie Mellon University

  2. Kontio J (1995) OTSO: a systematic process for reusable software component selection. University of Maryland, Maryland CS-TR-3478, December 1995

  3. Vigder MR, Gentleman WM, Dean J (1996) COTS software integration: state of the art. National Research Council Canada (NRC) 39198

  4. Mohamed A (2007) Decision support for selecting COTS software products based on comprehensive mismatch handling. PhD Thesis, Electrical and Computer Engineering Department, University of Calgary, Canada

  5. Mohamed A, Ruhe G, Eberlein A (2007) Decision support for handling mismatches between COTS products and system requirements. In: The 6th IEEE international conference on COTS-based software systems (ICCBSS’07), Banff, pp 63–72

  6. Alves C (2003) COTS-based requirements engineering. In: Component-based software quality—methods and techniques, vol 2693. Springer, Heidelberg, pp 21–39

  7. Carney D, Hissam SA, Plakosh D (2000) Complex COTS-based software systems: practical steps for their maintenance. J Softw Maintenance 12:357–376

  8. Mohamed A, Ruhe G, Eberlein A (2007) MiHOS: an approach to support handling the mismatches between system requirements and COTS products. Requirements Eng J (accepted 2 Jan 2007). http://www.dx.doi.org/10.1007/s00766-007-0041-5

  9. Ziv H, Richardson D, Klösch R (1996) The uncertainty principle in software engineering. University of California, Irvine UCI-TR-96-33, Aug 1996

  10. Saltelli A, Chan K, Scott EM (2000) Sensitivity analysis. Wiley, New York

  11. Saltelli A (2004) Global sensitivity analysis: an introduction. In: 4th international conference on sensitivity analysis of model output (SAMO ‘04), Los Alamos National Laboratory, pp 27–43

  12. Lung C-H, Van KK (2000) An approach to quantitative software architecture sensitivity analysis. Int J Softw Eng Knowl Eng 10:97–114

  13. Wagner S (2007) Global sensitivity analysis of predictor models in software engineering. In: The 3rd international PROMISE workshop (co-located with ICSE’07), Minneapolis

  14. Saltelli A, Tarantola S, Campolongo F, Ratto M (2004) Sensitivity analysis in practice: a guide to assessing scientific models. Wiley, New York

  15. Kontio J (1996) A case study in applying a systematic method for COTS selection. In: 18th International Conference on Software Engineering (ICSE’96), Berlin, pp 201–209

  16. Wolsey LA, Nemhauser GL (1998) Integer and combinatorial optimization. Wiley, New York

  17. LINDO Systems: http://www.lindo.com

  18. Ngo-The A, Ruhe G (2008) A systematic approach for solving the wicked problem of software release planning. Soft Comput, 12 (in press)

  19. Tukey JW (1977) Exploratory data analysis. Addison-Wesley, Reading

  20. Goldratt EM (1998) Essays on the theory of constraints. North River Press, Great Barrington

  21. Humphrey W (1989) Managing the software process. Addison-Wesley Professional, Reading

  22. Al-Emran A, Pfahl D, Ruhe G (2007) DynaReP: a discrete event simulation model for planning and re-planning of software releases, Minneapolis, May 2007

  23. Li J, Ruhe G, Al-Emran A, Richter M (2006) A flexible method for effort estimation by analogy. Emp Softw Eng 12:65–106

Acknowledgments

We appreciate the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Alberta Informatics Circle of Research Excellence (iCORE) in conducting this research.

Author information

Corresponding author

Correspondence to Guenther Ruhe.

Appendix: “DIFF” metric

This appendix elaborates the discussion presented in Sect. 3.2.3 for estimating the value of the DIFF metric when applying MiHOS-SA. Consider a set of mismatches M = {m_1, …, m_μ}. Typically, MiHOS suggests a set of 5 plans to handle these mismatches. Assume this set is given as:

$$ \text{SOL} = \{ Y_0, \ldots, Y_4 \} $$

For the mismatches {m_1, m_2, …, m_μ}, a plan Y_n suggests a set of actions {y_1, y_2, …, y_μ}, where y_i refers to one of the options: “do not resolve m_i”, “resolve m_i using resolution action a_{i,1}”, “resolve m_i using resolution action a_{i,2}”, etc.

When MiHOS-SA is applied, the input parameters of MiHOS are varied to simulate input uncertainties, and the output changes accordingly. Assume the new set of suggested plans is:

$$ \text{SOL}_{\text{uncertain}} = \{ Z_0, \ldots, Z_4 \}, $$

where Z_n is a solution plan after changing the input parameters. Similarly to Y_n, a plan Z_n can be represented as Z_n = {z_1, …, z_μ}, where z_i refers to one of the options: “do not resolve m_i”, “resolve m_i using resolution action a_{i,1}”, etc.
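
To make this representation concrete, a plan can be encoded as a list of action indices. The convention below is hypothetical, introduced here only for illustration:

    # Hypothetical encoding: plan[i] == 0 means "do not resolve m_{i+1}";
    # plan[i] == j (with j >= 1) means "resolve m_{i+1} using action a_{i+1,j}".
    # A plan over mu = 4 mismatches might then look like:
    Y_n = [0, 1, 2, 1]  # leave m_1 open; resolve m_2 via a_{2,1},
                        # m_3 via a_{3,2}, and m_4 via a_{4,1}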

As discussed in Sect. 3.2.3, if we want to estimate DIFF only between two plans, e.g., Y_1 and Z_1, we compare each y_i with z_i and count the number of occurrences where y_i ≠ z_i. In MiHOS, however, we have to compare all five plans Y_0, …, Y_4 with Z_0, …, Z_4. DIFF per plan can therefore be estimated as the total number of differences between all plans in SOL and those in SOL_uncertain, divided by the number of plans. This is calculated as follows:

$$ \text{DIFF} = \frac{\text{total number of differences between all plans in SOL and all plans in SOL}_{\text{uncertain}}}{K \times \mu} \quad (6) $$

where K is the total number of plans in SOL (K = 5 for five solution plans) and μ is the total number of mismatches. We divide by K to obtain the average number of structural differences per plan, and by μ because DIFF, by definition, indicates the percentage (not the number) of structural differences and must therefore be calculated relative to the total number of mismatches.
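
As a worked example with hypothetical counts: for COTS4 with μ = 63 mismatches (note 7), if the five linked plan pairs differ in 5, 7, 6, 8, and 6 suggestions, respectively, then DIFF = (5 + 7 + 6 + 8 + 6)/(5 × 63) = 32/315 ≈ 0.10, i.e., roughly a 10% structural difference per plan.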

The challenge here is to estimate the numerator in Eq. (6). The order of the plans in SOL and SOL_uncertain is meaningless, so we cannot calculate the total number of differences by simply comparing Y_0 with Z_0, Y_1 with Z_1, etc. Instead, we should “link” each plan from SOL_uncertain to exactly one plan in SOL based on the following hypothesis:

“The correct linking scheme between the plans in SOL_uncertain and the plans in SOL results in a lower total number of differences between SOL_uncertain and SOL than any other linking scheme.”

The above hypothesis stems from the fact that each plan in SOL_uncertain should be linked to the most similar plan in SOL, because it represents that plan after the inputs have changed. To find the correct linking, the following procedure is used:

  1. Create an empty 5 × 5 table whose rows are labelled Y_0, …, Y_4 and whose columns are labelled Z_0, …, Z_4 (Fig. 9). The first cell of the table is denoted Cell(0,0).

  2. For all values of the two variables m and n, where 0 ≤ m ≤ 4 and 0 ≤ n ≤ 4, count the number of differences between Y_m and Z_n and record the result in Cell(m, n).

  3. For all linking permutations between {Z_0, …, Z_4} and {Y_0, …, Y_4}, calculate the total number of differences using the data stored in the 5 × 5 table. For example, for the permutation Z_0 → Y_0, Z_1 → Y_1, Z_2 → Y_2, Z_3 → Y_3, and Z_4 → Y_4, the total number of differences equals Cell(0,0) + Cell(1,1) + Cell(2,2) + Cell(3,3) + Cell(4,4).

  4. The correct linking is the permutation that yields the minimum total number of differences; a code sketch of this search follows Fig. 9.

Fig. 9: A 5 × 5 table used when estimating DIFF
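
The procedure above is compact enough to state in code. The sketch below implements steps 1–4 by brute force over all K! = 120 linking permutations and then applies Eq. (6). It assumes the hypothetical list-of-action-indices plan encoding introduced earlier; it is an illustration, not the authors’ implementation.

    from itertools import permutations

    def diff_metric(sol, sol_uncertain, mu):
        """DIFF per Eq. (6); plans are equal-length lists of action indices."""
        k = len(sol)
        # Steps 1-2: K x K table; cell[m][n] counts differences between Y_m and Z_n.
        cell = [[sum(y != z for y, z in zip(Y, Z)) for Z in sol_uncertain]
                for Y in sol]
        # Steps 3-4: the correct linking is the column permutation with the
        # minimum total number of differences.
        best = min(sum(cell[m][link[m]] for m in range(k))
                   for link in permutations(range(k)))
        return best / (k * mu)

    # Toy usage (K = 2 plans, mu = 4 mismatches, hypothetical values):
    print(diff_metric([[0, 1, 2, 1], [1, 1, 2, 0]],
                      [[0, 1, 2, 0], [1, 1, 2, 0]], 4))  # -> 0.125

For K = 5 the 120 permutations are trivial to enumerate; for larger plan sets, an assignment solver such as the Hungarian algorithm would scale better.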

Cite this article

Mohamed, A., Ruhe, G. & Eberlein, A. Sensitivity analysis in the process of COTS mismatch-handling. Requirements Eng 13, 147–165 (2008). https://doi.org/10.1007/s00766-008-0062-8
