
Experimental Validation of a Risk Assessment Method

  • Conference paper
  • First Online:
Requirements Engineering: Foundation for Software Quality (REFSQ 2015)

Abstract

[Context and motivation] It is desirable that requirements engineering methods are reliable, that is, that they can be repeated with the same results. Risk assessment methods, however, often have low reliability when they identify risk mitigations for a system based on expert judgement. [Question/problem] Our goal is to assess the reliability of an availability risk assessment method for telecom infrastructures, and to identify possibilities for improving its reliability. [Principal ideas/results] We propose an experimental validation of reliability, and report on its application. We give a detailed analysis of sources of variation, explain how we controlled them and validated their mitigations, and motivate the statistical procedure used to analyse the outcome. [Contribution] Our results can be used to improve the reliability of risk assessment methods. Our approach to validating reliability can be useful for assessing the reliability of other methods.
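The abstract does not spell out the statistical procedure used to analyse the outcome. Experiments on the reliability of expert judgements are commonly analysed with an inter-rater agreement coefficient, for example Krippendorff's alpha for nominal ratings. The sketch below is purely illustrative of that kind of analysis, not the paper's actual procedure; the function name, the data layout, and the choice of coefficient are assumptions.

    from collections import defaultdict
    from itertools import permutations

    def krippendorff_alpha_nominal(units):
        """Illustrative Krippendorff's alpha for nominal ratings (assumed; not the paper's code).

        `units` has one entry per assessed item; each entry lists the category
        that each assessor gave that item (assessors who skipped an item are
        simply left out of its list).
        """
        coincidences = defaultdict(float)      # coincidence matrix o[c, k]
        category_totals = defaultdict(float)   # marginal totals n_c
        for ratings in units:
            m = len(ratings)
            if m < 2:
                continue  # an item with fewer than two ratings gives no pairing information
            for c, k in permutations(ratings, 2):
                coincidences[(c, k)] += 1.0 / (m - 1)
        for (c, _k), count in coincidences.items():
            category_totals[c] += count
        n = sum(category_totals.values())
        if n <= 1:
            raise ValueError("not enough pairable ratings")
        observed = sum(v for (c, k), v in coincidences.items() if c != k)
        expected = (n * n - sum(t * t for t in category_totals.values())) / (n - 1)
        if expected == 0:
            return 1.0  # only one category was ever used, so no disagreement is possible
        return 1.0 - observed / expected

    # Hypothetical example: two assessors rate the same five risks High/Medium/Low.
    ratings = [["H", "H"], ["M", "M"], ["L", "M"], ["H", "H"], ["L", "L"]]
    print(round(krippendorff_alpha_nominal(ratings), 2))  # about 0.73

Values near 1 would indicate that independent assessors reproduce each other's judgements; values near 0 indicate agreement no better than chance, which is the kind of evidence a reliability validation would examine.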





Author information


Correspondence to Eelco Vriezekolk.



Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Vriezekolk, E., Etalle, S., Wieringa, R. (2015). Experimental Validation of a Risk Assessment Method. In: Fricker, S., Schneider, K. (eds) Requirements Engineering: Foundation for Software Quality. REFSQ 2015. Lecture Notes in Computer Science, vol. 9013. Springer, Cham. https://doi.org/10.1007/978-3-319-16101-3_1


  • DOI: https://doi.org/10.1007/978-3-319-16101-3_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-16100-6

  • Online ISBN: 978-3-319-16101-3

  • eBook Packages: Computer Science, Computer Science (R0)
