ABSTRACT
Observation is the foundation of scientific experimentation. We consider observations to be measurements when they are quantified with respect to an agreed-upon scale, or measurement unit. A number of metrics have been proposed in the literature that attempt to quantify some property of cyber security, but no systematic validation has been conducted to characterize the behaviour of these metrics as measurement instruments, or to understand how the quantity being measured relates to the security of the system under test. In this paper we broadly classify the body of available security metrics against the recently released Cyber Security Body of Knowledge, and identify common attributes across metric classes that may serve as useful anchors for comparison. We propose a general four-stage evaluation pipeline to encapsulate the processing specifics of each metric, encouraging a separation of the actual measurement logic from the model it is often paired with in publication. Decoupling these stages allows us to systematically apply a range of inputs to a set of metrics, and we demonstrate some important results in our proof of concept. First, we determine a metric's suitability for use as a measurement instrument against validation criteria like operational range, sensitivity, and precision by observing its performance over controlled variations of a reference input. Then we show how evaluating multiple metrics against common reference sets allows direct comparison of results and identification of patterns in measurement performance. Consequently, development and operations teams can also use this strategy to evaluate security tradeoffs between competing input designs or to measure the effects of incremental changes during production deployments.
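The decoupling idea above can be illustrated with a minimal sketch. All names here are hypothetical (not from the paper): a toy "metric" is kept as pure measurement logic, applied over controlled variations of a reference input, and its scores are summarised against validation criteria such as operational range and dispersion (a crude proxy for precision).

```python
# Sketch of a decoupled metric-evaluation pipeline (hypothetical names):
# stage 1 generates controlled variations of a reference input, later
# stages apply the measurement logic and summarise its behaviour.

from statistics import mean, pstdev

def node_degree_metric(topology):
    """Toy security metric: mean node degree of a network topology,
    treating higher connectivity as greater exposure."""
    return mean(len(neighbours) for neighbours in topology.values())

def vary_reference(base, extra_links):
    """Stage 1: produce a controlled variation of the reference input
    by adding `extra_links` edges between nodes two apart."""
    topo = {node: set(adj) for node, adj in base.items()}
    nodes = sorted(topo)
    for i in range(extra_links):
        a, b = nodes[i % len(nodes)], nodes[(i + 2) % len(nodes)]
        topo[a].add(b)
        topo[b].add(a)
    return topo

def evaluate(metric, base, variations):
    """Stages 2-4: apply the metric to each variation and report its
    observed operational range and spread over the reference set."""
    scores = [metric(vary_reference(base, v)) for v in variations]
    return {"min": min(scores), "max": max(scores), "spread": pstdev(scores)}

# Reference input: a four-node line topology a-b-c-d.
base = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
report = evaluate(node_degree_metric, base, variations=[0, 1, 2, 3])
print(report)
```

Because `evaluate` takes the metric as a parameter, a second metric can be swapped in against the same reference set, which is how direct comparison between metrics becomes possible in this framing.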