
Quantitative risk-based requirements reasoning

  • Original Article
  • Published in Requirements Engineering

Abstract

At NASA we have been developing and applying a risk management framework, "Defect Detection and Prevention" (DDP). It is based on a simple quantitative model of risk and is supported by custom software. We have used it to aid in study and planning for systems that employ advanced technologies. The framework has proven successful at identifying problematic requirements (those which will be the most difficult to attain), at optimizing the allocation of resources so as to maximize requirements attainment, at identifying areas where research investments should be made, and at supporting tradeoff analyses among major alternatives. We describe the DDP model, the information that populates a model, how DDP is used, and its tool support. DDP has been designed to aid decision making early in development. Detailed information is lacking at this early stage. Accordingly, DDP exhibits a number of strategic compromises between fidelity and tractability. The net result is an approach that appears both feasible and useful during early requirements decision making.



Acknowledgements

The research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not constitute or imply its endorsement by the United States Government or the Jet Propulsion Laboratory, California Institute of Technology. Contributions from, and discussions with, Phil Daggett (JPL), Julia Dunphy (JPL), Patrick Hutchinson (Wofford College, Spartanburg, SC), Ken Hicks (JPL), Christopher Hartsough (JPL), Denise Howard (JPL), Peter Hoh In (Texas A&M), John Kelly (JPL), Tim Kurtz (NASA Glenn), James Kiper (Miami University, OH), Tim Larson (JPL), Tim Menzies (University of British Columbia), Kelly Moran (JPL) and Burton Sigal (JPL) have been most useful in helping us formulate our ideas and bring them to fruition.

Corresponding author

Correspondence to Martin S. Feather.

Appendix: DDP concepts and example

Requirements are whatever the system under scrutiny is to achieve, together with operational constraints on the system's construction and operation. Each Requirement has a title, a position in the Requirements tree, a weight (representing its relative importance), on/off status, and optional further information such as description, notes, and reference. Only those Requirements whose status is on are taken into account in the quantitative calculations.

Failure Modes are the things that, should they occur, will lead to lack of attainment of Requirements. Each Failure Mode has a title, a position in the Failure Modes tree, an a priori likelihood (the chance of the Failure Mode occurring if nothing is done to inhibit it), a repair cost per phase (what it would cost to remove an instance of that Failure Mode at that phase in the project), on/off status, and optional further information such as description, notes, and reference. Only Failure Modes whose status is on are taken into account in the quantitative calculations.

PACTs are the activities that could be done to reduce the likelihood of Failure Modes and/or reduce their impact on Requirements. Each PACT has a title, a position in the PACTs tree, a cost, the phase in which it applies, on/off status, and optional further information such as description, notes, and reference. PACTs are classified into preventions (which reduce the likelihood of Failure Modes), detections (which discover instances of Failure Modes so that they can be corrected prior to release/use), and alleviations (which reduce the severity of Failure Modes). Only those PACTs whose status is on are taken into account in the quantitative calculations, with the exception of calculations specifically intended to reveal what the net effect (in terms of risk reduction) would be were an "off" PACT to be turned on.

Impacts are the quantitative relationships between Requirements and Failure Modes. Each Impact has the Requirement and Failure Mode it links, a value representing the proportion of loss of attainment of the Requirement should the Failure Mode occur, and optional further information such as description, notes, and reference. The value may be non-numeric, in which case it shows up on displays but is ignored in the quantitative calculations; the usual use for this is as a placeholder and reminder for further scrutiny (e.g., a value "to be determined").

Effects are the quantitative relationships between PACTs and Failure Modes. Each Effect has the PACT and Failure Mode it links, a value representing the proportion of reduction of the Failure Mode should that PACT be applied, and optional further information such as description, notes, and reference. If the value is negative, it denotes an effect of increasing, rather than decreasing, a Failure Mode's likelihood. As with Impacts, the value may be non-numeric, and if so is ignored in quantitative calculations.
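The element kinds above can be summarized in a small data-model sketch. The following is a hypothetical Python rendering for illustration only; the class and field names are our own, not those of the actual DDP tool:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    title: str
    weight: float                # relative importance
    on: bool = True              # only "on" items enter the calculations

@dataclass
class FailureMode:
    title: str
    a_priori_likelihood: float   # chance of occurring if nothing inhibits it
    repair_cost: dict = field(default_factory=dict)  # phase -> cost to repair
    on: bool = True

@dataclass
class PACT:
    title: str
    cost: float
    kind: str                    # "prevention", "detection" or "alleviation"
    on: bool = True

# Impacts: (failure mode title, requirement title) -> proportion of loss of
# attainment of that Requirement should that Failure Mode occur.
impacts = {("Tolerance Issues", "Get to the target"): 0.3}

# Effects: (PACT title, failure mode title) -> proportion by which that PACT
# reduces the Failure Mode's likelihood (negative values would increase it).
effects = {("Environmental Tests", "Tolerance Issues"): 0.7}
```

Non-numeric placeholder values ("to be determined") would simply be omitted from these dictionaries, matching their exclusion from the quantitative calculations.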

We use a hypothetical example for illustration. This avoids any proprietary issues that would arise from reporting one of the actual DDP applications, and permits the use of somewhat smaller amounts of data than would arise in practice. Nevertheless, this example will serve to illustrate the elements of DDP referenced throughout the paper. The figures are annotated fragments of screenshots taken from the DDP tool running on this example.

Figure 11 shows the Requirements tree, the Failure Modes tree, and the matrix of Impact values between these trees' elements. The blue coloring (reproduced here as the darker gray border of the highlighted row and background of its header cells) highlights one of the Requirements, "Get to the target", and the red coloring (reproduced here as the darker gray border of the highlighted column and background of its header cells) highlights one of the Failure Modes, "Tolerance Issues".

Fig. 11. Requirements, failure modes and impact matrix, between requirements (rows) and failure modes (cols)

The matrix header rows and columns show the titles of the items, and some totals computed by DDP.

The third row down contains values (39.4, 39.4, 15.9, etc.) that are the computed totals of loss of Requirements attainment that each Failure Mode causes. For a given Failure Mode F, this value is computed as:

F.APrioriLikelihood * Σ(R ∈ AllRequirements): R.Weight * Impact(F, R)

where Impact(F, R) is the impact value of Failure Mode F on Requirement R (zero if there is no numerical impact asserted between them). For the highlighted Failure Mode, the calculation is:

1*((10*0.3)+(10*0.1)+(10*0.1)+(10*0.3)+(8*0.7)+(2*1.0)+(3*0.1)) = 3+1+1+3+5.6+2+0.3 = 15.9
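This per-Failure-Mode total can be sketched in a few lines of Python. This is a hypothetical rendering of the formula, not the DDP tool's code; the weight/impact pairs are the highlighted Failure Mode's row from the example data:

```python
def failure_mode_risk(a_priori_likelihood, weighted_impacts):
    """Total loss of Requirements attainment caused by one Failure Mode F:
    F.APrioriLikelihood * sum of R.Weight * Impact(F, R) over all Requirements.
    weighted_impacts holds (R.Weight, Impact(F, R)) pairs; Requirements with
    no numerical impact asserted contribute zero and are simply omitted."""
    return a_priori_likelihood * sum(w * i for w, i in weighted_impacts)

# "Tolerance Issues" in the example data (a priori likelihood 1):
pairs = [(10, 0.3), (10, 0.1), (10, 0.1), (10, 0.3), (8, 0.7), (2, 1.0), (3, 0.1)]
risk = failure_mode_risk(1.0, pairs)   # 15.9, up to floating-point rounding
```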

The third column across contains values (71, 73, 78, etc.) that are the computed totals of loss of attainment of each Requirement caused by the Failure Modes. For a given Requirement R, this value is computed as:

R.Weight * Σ(F ∈ AllFailureModes): Impact(F, R) * F.APrioriLikelihood

In the example data, all Failure Modes' a priori likelihoods are 1, so for the highlighted Requirement, this calculation is:

10*((0.3*1)+(0.3*1)+(0.3*1)+(0.3*1)+(1.0*1)+…) = 10*(0.3+0.3+0.3+0.3+1.0+0.3+1.0+1.0+0+0+0.1+0.7+0.1+0.7+1.0) = 10*7.1 = 71

These are the totals for risk in the extreme case that nothing is done to prevent the Failure Modes from occurring. The first requirement has an assigned weight of 10, while its loss of attainment is calculated as 71: its summed impact proportions exceed 1, indicating that it is more than totally at risk.
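Symmetrically, the per-Requirement total can be sketched as follows (again a hypothetical illustration; the impact values are the highlighted Requirement's column from the example data):

```python
def requirement_risk(weight, impact_likelihood_pairs):
    """Loss of attainment of one Requirement R caused by all Failure Modes:
    R.Weight * sum of Impact(F, R) * F.APrioriLikelihood over all Failure Modes."""
    return weight * sum(i * l for i, l in impact_likelihood_pairs)

# "Get to the target": weight 10; every a priori likelihood in the example is 1.
impact_values = [0.3, 0.3, 0.3, 0.3, 1.0, 0.3, 1.0, 1.0, 0, 0, 0.1, 0.7, 0.1, 0.7, 1.0]
risk = requirement_risk(10, [(i, 1.0) for i in impact_values])  # 71, up to rounding
```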

This example's tree of PACTs is shown in Fig. 12 (top). PACTs' costs are listed in the column to the left of the tree. PACTs' effects on reducing Failure Modes are shown in the matrix in Fig. 12 (bottom). The green coloring (reproduced here as the darker gray border of the highlighted row and background of its header cells) highlights one of the PACTs "Environmental Tests", and the red coloring (reproduced here as the darker gray border of the highlighted column and background of its header cells) highlights one of the Failure Modes, "Tolerance Issues".

Fig. 12. PACTs and effect matrix, between PACTs (rows) and failure modes (cols)

For the sake of illustration, we have checked those and only those PACTs in the "Tests" folder. DDP calculates their combined cost (176) and combined effect on reducing the likelihood of the Failure Modes. For example, the highlighted Failure Mode is affected by three of those PACTs: 0.7 by Environmental Tests, 0.7 by Functional Tests, and 0.9 by Component Test/Characterize. These are all detection-style PACTs, meaning that their effect is to reduce Failure Modes' likelihoods.

For a Failure Mode F, its "PACTed" likelihood, i.e., taking into account effects of PACTs, is computed as:

F.APrioriLikelihood * Π(P ∈ PACTs where P.Status = On): (1 − Effect(P, F))

(If there are PACTs that induce Failure Modes, then a slightly more complicated formula must be used.)

For the highlighted Failure Mode, and the five checked PACTs in the Tests folder, the calculation is:

1*((1−0.7)*(1−0.7)*(1−0.9)*(1−0)*(1−0)) = 1*(0.3*0.3*0.1*1*1) = 1*0.009 = 0.009

This was the Failure Mode whose (unreduced) total contribution to loss of Requirements attainment we calculated earlier to be 15.9. The corresponding "PACTed" calculation takes into account the beneficial effects of the selected PACTs by substituting the Failure Mode's "PACTed" likelihood for its a priori likelihood:

F.PACTedLikelihood * Σ(R ∈ AllRequirements): R.Weight * Impact(F, R)

For the highlighted Failure Mode this is 0.009*15.9=0.1431, i.e., considerably reduced.
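The two steps above — discounting the likelihood by the selected PACTs' effects, then re-computing the loss total — can be sketched as follows (a hypothetical illustration using the example's numbers):

```python
import math

def pacted_likelihood(a_priori, effect_values):
    """F.APrioriLikelihood times the product of (1 - Effect(P, F))
    over every PACT P whose status is on."""
    return a_priori * math.prod(1 - e for e in effect_values)

# The five checked "Tests" PACTs against "Tolerance Issues":
likelihood = pacted_likelihood(1.0, [0.7, 0.7, 0.9, 0.0, 0.0])  # ~0.009
reduced_risk = likelihood * 15.9                                # ~0.1431
```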

Because these PACTs were all detections, the reduction in likelihood of this Failure Mode is accomplished by repairing (prior to launch, of course!) the problems those PACTs detect. DDP takes this into account in computing the sum total costs. Repair costs for a Failure Mode F detected and repaired in phase PH are calculated as: (F.PACTedLikelihood prior to PH − F.PACTedLikelihood after PH) * F.RepairCost(PH).

In the example data, Tolerance Issues' RepairCost in the test phase is 250, so DDP computes its cost of repair due to these PACTs as: (1−0.009)*250=247.75. PACTs from the Preventative Measures folder are prevention PACTs, so would avoid incurring repair costs such as these, and leave far fewer such problems for detection and repair.
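The repair-cost bookkeeping can be sketched the same way (a hypothetical illustration; 250 is Tolerance Issues' test-phase repair cost from the example data):

```python
def phase_repair_cost(likelihood_before, likelihood_after, repair_cost):
    """Cost of repairing the instances of a Failure Mode that detection PACTs
    catch in a phase: the drop in likelihood across the phase, times the
    per-instance repair cost for that phase."""
    return (likelihood_before - likelihood_after) * repair_cost

cost = phase_repair_cost(1.0, 0.009, 250)   # 247.75, up to rounding
```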

These fragmentary examples illustrate DDP's core calculations. Their results are displayed to users via bar charts (e.g., Fig. 9) and the risk region plot (Fig. 10) shown earlier, the overall aim being to aid users in their decision making.

Cite this article

Feather, M.S., Cornford, S.L. Quantitative risk-based requirements reasoning. Requirements Eng 8, 248–265 (2003). https://doi.org/10.1007/s00766-002-0160-y
