Appraisal and reporting of security assurance at operational systems level

https://doi.org/10.1016/j.jss.2011.08.013

Abstract

In this paper we discuss the issues relating to the evaluation and reporting of security assurance for runtime systems. We first highlight the shortcomings of current initiatives in analyzing, evaluating and reporting security assurance information. The paper then proposes a set of metrics to help capture and foster a better understanding of the security posture of a system. Our security assurance metric and its reporting depend on whether or not the user of the system has a security background. The evaluation of such metrics is described through the use of theoretical criteria, a tool implementation and an application to a case study based on an insurance company network.

Highlights

► We investigate security assurance metrics that may help the understanding of a system's security posture.
► Our metrics integrate: the quality of the verification process; the criticality of the context in which the system operates; and the correctness posture of the security mechanism at a given time.
► The security correctness metrics are aimed at users with security exposure.
► A security assurance level based on the context of use is adopted as an indication for those without a sound knowledge of security.

Introduction

The idea of producing Information Technology (IT) systems that are and remain 100% secure over time has long been dismissed as desirable yet unachievable. This is partly due both to the difficulty of anticipating, during the system development process, all potential future threats to the system, and to the evolution of the system's environment parameters. Furnell (2009), in his article entitled ‘The Irreversible March of Technology’, remarks that although technology is perpetually evolving for the convenience of end-users and businesses, the security situation related to that technology has barely improved and the risks have escalated. In fact, new technology often gives rise to new ways of compromising IT systems’ security, thus adding extra layers of complexity to the endeavours to guarantee security.

A study conducted by Wool (2004) on firewall configurations revealed that a common but sometimes overlooked source of security risks for large distributed and open IT systems is the improper deployment of security mechanisms. In fact, security mechanisms, even when properly elicited during the risk assessment stage, may be deployed inappropriately, or unidentified hazards in the system environment may render them less effective. How good, for instance, is a fortified door if the owner inadvertently leaves it unlocked? Or, considering a more technical example, how relevant is a firewall for a critical system linked to the Internet if it is configured to allow all incoming packets? Therefore, monitoring and reporting on the security posture of IT systems must be carried out to determine compliance with security requirements (Jansen, 2009) and to gain assurance as to their ability to protect system assets adequately. This remains one of the fundamental tasks of security assurance, which is here defined as the grounds for confidence that deployed security mechanisms meet their objectives. It is worth mentioning that our understanding of security assurance is in line with the Common Criteria's definition of assurance (Common Criteria Sponsoring Organizations, 2006).

Unfortunately, although assurance is a field that has been gaining momentum, partly due to the growing need for compliance within big corporations (Julisch, 2008), limited effort has been dedicated to appraising the security assurance of operational systems. Several reasons may explain this. First, security assurance was relegated to the shadow of security, as both terms were often used interchangeably (Jelen and Williams, 1998). Hence, it has been assumed that addressing security also covers the assurance angle, when in reality there may be security protocols in place without evidence of them working properly (Jelen and Williams, 1998, Wool, 2004). The second reason is related to the target level of the security assurance analysis and/or evaluation. In fact, most efforts have been dedicated to assurance at the software development level and to end-product evaluation criteria. Examples of such initiatives include: assurance cases (Strunk and Knight, 2006); UMLSec (Jürjens, 2005); Secure Tropos (Mouratidis and Giorgini, 2007a, Mouratidis and Giorgini, 2007b) and the Common Criteria or CC (Common Criteria Sponsoring Organizations, 2006). The rationale behind such efforts is that without a rigorous and effective way of dealing with security during the system development process, the end product cannot be secure. While this is true, the emphasis on design and process evidence versus the actual product software largely overshadows practical security concerns involving the implementation and deployment of operational systems (Jansen, 2009). Finally, the published literature has focused on providing guidelines for identifying security metrics (Vaughn et al., 2002, Swanson et al., 2003, Seddigh et al., 2004, Savola, 2007) without providing indications on how to combine them into the quantitative or qualitative indicators that are important for a meaningful understanding of the security posture of an IT system component. The term security posture here refers to the status of the security mechanisms, i.e. whether there is any abnormality that may result in the system's security being jeopardized.

This paper aims to address this gap in the current literature and support the evaluation of security assurance at the operational systems level, by introducing metrics for the continuous evaluation of security assurance of runtime systems. The outcome of the evaluation is a statement about the extent to which confidence has been gained that the security mechanisms are operating properly. This paper considers a metric to be a value, selected from a partially ordered set by some assessment process, that represents an information-system-related quality of some object of concern; it provides, or is used to create, a description, prediction, or comparison with some degree of confidence (WISSSR, 2001).

Approach and contribution: A prerequisite and challenge in seeking to evaluate the security assurance of operational systems is to identify the key characteristics of the security mechanisms and/or the system that ought to be assessed in order to develop assurance indicators. Only then can one start to address how measurements of these characteristics can be integrated and communicated to users (including system administrators and security managers) to ensure a good understanding of the security posture. Such a challenge is partially answered by considering NIST's special publication NIST800-33 (Stoneburner, 2000). NIST asserts that assurance that the security objectives (integrity, availability, confidentiality, and accountability) will be adequately met by a specific implementation depends partly on whether the required security functionalities are present and correctly implemented. Consequently, probing the presence and correctness of the deployed security mechanisms in an operational system is paramount to gaining assurance. However, since assurance is about confidence, the quality of the means of verification is equally relevant and should be reflected in the assurance value. This is paramount since a clear correlation exists between the quality of the verification process and the reliability of the result achieved. Consider for instance the following scenario: one is using two different anti-virus tools, AV1 and AV2, to check whether a system is currently free of certain malware. Assume one knows from previous use of AV1 that its effectiveness in detecting malware is somewhat dubious, whereas AV2 has proven to be very reliable in terms of malware detection precision. If one were to run either AV1 or AV2 and the verification outcome were “no virus found”, one would certainly be relieved. However, one's confidence in the actual result will differ depending on whether AV1 or AV2 was used. In fact, the confidence in the system security posture reading after using AV1 will be lower than after using AV2.
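
To make this intuition concrete, the following minimal sketch (in Python; it is not part of the paper, and the probe names and quality values are illustrative assumptions) shows how the confidence attached to a verification outcome can be bounded by the quality of the probe that produced it.

```python
# Minimal sketch (not the paper's formalism): the confidence attached to a
# verification outcome is bounded by the quality of the probe that produced it.
# The probes and their quality values below are hypothetical.

PROBE_QUALITY = {
    "AV1": 0.4,   # hypothetical: known to miss malware, low-quality probe
    "AV2": 0.9,   # hypothetical: reliable detector, high-quality probe
}

def confidence_in_outcome(probe: str, reported_clean: bool) -> float:
    """Confidence that the system really is clean, given a 'no virus found'
    report, can never exceed the quality of the probe used."""
    if not reported_clean:
        return 0.0          # a negative report gives no grounds for confidence
    return PROBE_QUALITY[probe]

print(confidence_in_outcome("AV1", True))  # 0.4 -> weak assurance
print(confidence_in_outcome("AV2", True))  # 0.9 -> strong assurance
```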

No metrics are worthwhile if the results of applying them cannot be effectively understood, and security assurance metrics are no exception. In the security assurance realm there are two main stakeholders: the common user, who relies on the IT system to fulfil his/her daily activity and has limited or no understanding of security; and the security expert. While common users mainly need assurance to boost their confidence in the system they are using, the security expert needs assurance primarily to support better management of system or network security. Thus, according to Skroch et al. (2000), the reporting of the security assurance level (SAL) has to be meaningful and useful to both the common user and the security expert. To illustrate, the correctness information related to a security mechanism could be enough for a security expert to grasp the urgency of a situation while meaning little to the common user. For instance, telling a user that his/her firewall configuration allows all incoming connections may be less meaningful than telling him/her that the likelihood of the firewall protecting against external attacks is close to zero. Therefore it is pertinent to determine ways in which security assurance (SA) information may be beneficially communicated to a user without much exposure to security. Providing meaningful security mechanisms without knowing the system and environments in which the component (system) could be deployed is a difficult task (Grunske, 2007). Similarly, SA is a context-dependent concept. Consider the following example: Alice may feel very confident in using an unsecured wireless connection for simple Internet browsing, but that confidence would drop considerably if Alice were to use it for Internet banking. We could state that purpose 1 (web browsing) requires low security, or that its context security criticality level is low, because any potential risk impact for that context will be relatively low for the user, whereas the context security criticality for purpose 2 (Internet banking) is high. This example illustrates that a user without any security background can still make informed decisions on whether or not to pursue a course of action, if he/she is provided with SALs based on the security posture and the security criticality of the context in which the system operates.
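
The following small sketch illustrates this context dependence; the context labels, posture score and thresholds are our assumptions for illustration, not values defined in the paper: the same security posture can translate into different user-facing indications depending on the security criticality of the context of use.

```python
# Illustrative sketch (assumed labels and thresholds, not the paper's SA
# function): the same security posture yields a different user-facing
# indication depending on the security criticality of the context of use.

CONTEXT_CRITICALITY = {
    "web_browsing": "low",       # low impact if the connection is compromised
    "internet_banking": "high",  # high impact: credentials, financial loss
}

def contextual_indication(posture: float, context: str) -> str:
    """Map a raw security posture score in [0, 1] to a user-facing indication,
    using a stricter threshold when the context criticality is high."""
    threshold = 0.9 if CONTEXT_CRITICALITY[context] == "high" else 0.5
    return "adequate" if posture >= threshold else "insufficient"

posture = 0.6  # e.g. an unsecured wireless link: mediocre posture
print(contextual_indication(posture, "web_browsing"))      # adequate
print(contextual_indication(posture, "internet_banking"))  # insufficient
```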

Based on the above analysis, the main contribution of the paper can be summarized as the provision of metrics for appraising the SAL of an operational system's security. Those metrics are meant to help users, with and without security expertise, to understand the security posture of their system. Such a SAL depends on:

  • (i)

    The quality of the software probe performing the verification. We developed a metric taxonomy based on the Common Criteria and the System Security Engineering Capability Maturity Model (SSE-CMM, 1999). Without knowing how good a verification process or a software probe is, there is no rational way of judging the reliability of the measurements it provides. A probe is a software program inserted into a system for the purpose of monitoring or collecting data about its security.

  • (ii)

    The reported status of the security mechanism at a given time (i.e. the availability and conformity of the security mechanism are investigated). A security mechanism whose deployment does not conform to the policy specification is less assured to provide adequate protection to a system. The outcome from (i) is then used to cap the values that can be obtained using a given probe or, more generally, a verification process; the idea being that a verification process cannot provide more assurance than its own quality warrants.

  • (iii)

    The context in which the system is operating: one's confidence that a safeguard will meet its objectives and protect a system will be higher in a usage context with low security criticality (and therefore requiring a lower level of security) than in a highly security-critical context.

Results from (ii) are mainly aimed at users with a security background, while outputs from (ii) and (iii) are integrated in an SA function (refer to Section 7) to yield a contextual value of SA that is more relevant to users without security expertise.
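
As a rough illustration of how these three ingredients could interact, the sketch below caps the correctness posture from (ii) by the probe quality from (i) and discounts it by the context criticality from (iii). The actual SA function is the one defined in Section 7; the particular combination rule and weights used here are assumptions for illustration only.

```python
# Hedged sketch of one way the three ingredients might be combined; the
# paper's SA function is given in Section 7, and the combination rule and
# weights below are illustrative assumptions only.

def contextual_sal(probe_quality: float,
                   conformity: float,
                   availability: float,
                   context_criticality: float) -> float:
    """All inputs lie in [0, 1]; context_criticality = 1 means highly critical.
    Returns an illustrative contextual security assurance level in [0, 1]."""
    # (ii) correctness posture: the mechanism must be both available and
    # deployed in conformity with the policy specification.
    posture = min(availability, conformity)
    # (i) the probe quality caps the assurance one can claim from the reading.
    capped = min(posture, probe_quality)
    # (iii) a highly critical context discounts the resulting confidence.
    return capped * (1.0 - 0.5 * context_criticality)

# Same mechanism and probe, different contexts of use:
print(contextual_sal(0.8, 0.9, 1.0, 0.2))  # low-criticality context
print(contextual_sal(0.8, 0.9, 1.0, 1.0))  # high-criticality context
```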

Outline: The rest of the paper is organized as follows: Section 2 provides a general overview of existing SA evaluation approaches, as reported in the literature. In Section 3, the assurance evaluation model is linked to the risk assessment process. In Section 4, we present the developed probe quality metric taxonomy, while in Section 5 the SA checks are discussed. Section 6 describes the process of elucidating the security criticality of the context in which a system operates and Section 7 presents the contextual SA function. Section 8 illustrates the case study and discusses the results obtained from applying our approach. In Section 9, the agents’ organization adopted for the evaluation of the assurance is described, while Section 10 concludes the paper.

Section snippets

Related work

Although the literature on SA itself is very scarce, considerable efforts have been made to address the ever-growing issue of computer security. Information System engineering, for instance, has recently called for the systematic integration of security in the development process, to help developers in producing more secure systems. As a result, modeling methodologies such as Secure Tropos (Mouratidis and Giorgini, 2007a, Mouratidis and Giorgini, 2007b), KAOS (Van Lamsweerde and Letier, 2000),

Linking the security assurance evaluation model to risk assessment concepts

We argue that the evaluation of a system's SA only makes sense when placed within a risk management context. To reflect this, in our work the SA evaluation takes place after a risk assessment has been completed and the appropriate countermeasures have been deployed. Fig. 1 shows the SA evaluation model and how it relates to the risk assessment, whose concepts are depicted in bold. As can be seen in Fig. 1, risk is commonly characterized as the opportunity of a threat exploiting

Software probe quality metric taxonomy

It is important to note that the quality taxonomy presented below refers to the thoroughness of the probes (or of the security verification process in general) in verifying the security mechanism. The probes (and their implementation), along with the links used to transport the security assurance information, constitute the evaluation framework, which is distributed (see Section 9).

The metric taxonomy used in this paper is inspired by the organization of the Common Criteria security requirements

Verification of the security posture of a security mechanism

The previous sections have discussed how to gauge the quality of a probe involved in the verification of a security mechanism using a quality matrix. This section discusses how to quantify the confidence inspired by a security mechanism of an operational system. Unlike the determination of the probe quality level, the SAL of the security mechanism should be dynamic (i.e. it may change over time, either due to context change or to modified parameters of the targeted security mechanism). In line with the

Determining the security criticality of a context of use of a system

IT systems may be used in different environments for different purposes. Different environments will require different uses and different levels of security. The combination of these two elements (environment and/or purpose of use) is what we refer to as the context of use of the system. Although a system's security criticality is taken into account when defining the security mechanisms, the actual evaluation of the confidence level in those measures has so far been conducted without

Security assurance function

Once the key concepts and bedrock of the SA have been determined, the question is now how one could combine them so as to yield a rough measure of confidence in the security mechanisms to meet their objectives.

An interesting mathematical characterization close to the definition of SA adopted within this paper can be derived from probability theory, namely from probability distributions. Thus this paper considers that the SA of a security mechanism (S) with a conformity level c and an estimated

Case study

This section shows how the SA is determined through the use of a case study.

Security assurance evaluation and agents organization

Given the highly distributed nature of most current systems, the verification of the security mechanisms is more challenging due to issues such as concurrency, fault tolerance, security and interoperability. Multi-agent systems (MAS) (Wooldridge, 2002) offer interesting features for verifying the security of such systems. In our work, we consider an agent as an encapsulated computer system that is situated in some environment and that is capable of flexible, autonomous action in that

Conclusion and discussions

This paper has presented a general overview of SA theory through the analysis of existing approaches. The review has been followed by a description of assurance metrics and an overall approach to gauging confidence in the deployed security mechanisms. We highlighted aspects of the general SA evaluation through the specification of a probe quality metric taxonomy, security criticality categorization and the SA function. The paper has also highlighted our vision of dynamic risk management using the

References (54)

  • L. Grunske et al.

    Quantitative risk-based security prediction for component-based systems with explicitly modeled attack profiles

    Journal of Systems and Software

    (2008)
  • H. Mouratidis et al.

    Security Attack Testing (SAT) – testing the security of information systems at design time

    Information Systems

    (2007)
  • T. Beth et al.

    Valuation of trust in open networks

  • Y. Benchaïb et al.

    VIRCONEL: a new emulation environment for experiments with networked IT systems

  • D. Bodeau

    Information assurance assessment: lessons-learned and challenges

  • E. Bulut et al.

    Multi-agent based security assurance monitoring system for telecommunication infrastructures

  • Common Criteria Sponsoring Organizations

    Common Criteria for Information Technology Security Evaluation, Parts 1–3, Version 3.1

    (2006)
  • D.L. Evans et al.

    Standards for Security Categorization of Federal Information and Information Systems

    (2004)
  • M. Feather et al.

    A Broad Quantitative Model for Making Requirements Decisions

    IEEE Software

    (2008)
  • E. Fong et al.

    Structured assurance case methodology for assessing software trustworthiness

  • Furnell, S.M., 2009. The irreversible march of technology, Information Security Technical Report 14(4), 176–180,...
  • L. Grunske

    Early quality prediction of component-based systems – a generic framework

    Journal of Systems and Software

    (2007)
  • A. Hecker

    On system security metrics and the definition approaches

  • A. Hecker et al.

    On the operational security assurance evaluation of networked IT systems

    Lecture Notes in Computer Science

    (2009)
  • E. Herrera-Viedma et al.

    A consensus support system model for group decision-making problems with multi-granular linguistic preference relations

    IEEE Transactions on Fuzzy Systems

    (2005)
  • D.K. Holstein

    A systems dynamics view of security assurance issues—The curse of complexity and avoiding chaos

  • S.H. Houmb et al.

    Developing secure networked web-based systems using model-based risk assessment and UMLsec

  • S.H. Houmb et al.

    Cost-benefit trade-off analysis using BBN for aspect-oriented risk-driven development

  • S.H. Houmb et al.

    Eliciting security requirements and tracing them to design: an integration of common criteria, heuristics, and UMLsec

    Requirements Engineering Journal (REJ)

    (2010)
  • ISO/IEC, 2007. Systems and Software Engineering – Measurement Process. ISO/IEC15939, Geneva,...
  • ISO/IEC, 2009. Information Technology – Security Techniques – Information Security Management Measurements....
  • ISO/IEC, 2008. Information Technology – Security Techniques – Information Security Risk Management. ISO/IEC27005,...
  • Jansen W., 2009. Directions in Security Metrics Research. National Institute of Standards and Technology, Special...
  • G.F. Jelen et al.

    A practical approach to measuring assurance

  • N.R. Jennings

    On agent-based software engineering

  • K. Julisch

    Security compliance: the next frontier in security research

  • J. Jürjens

    Secure Systems Development with UML

    (2005)