
Computer Communications

Volume 34, Issue 3, 15 March 2011, Pages 264-273

VRSS: A new system for rating and scoring vulnerabilities

https://doi.org/10.1016/j.comcom.2010.04.006

Abstract

Vulnerabilities are extremely important for network security. IT management must identify and assess vulnerabilities across many disparate hardware and software platforms in order to prioritize these vulnerabilities and remediate those that pose the greatest risk. The focus of our research is the comparative analysis of existing vulnerability rating systems, so as to discover their respective advantages and propose a compatible rating framework to unify them. We perform a statistical study of the vulnerabilities in three well-known vulnerability databases (IBM ISS X-Force, Vupen Security and the National Vulnerability Database) and analyze the distribution of vulnerabilities to expose the differences among the rating systems. The statistical results show that the distributions of vulnerabilities do not closely follow a normal distribution. Taking all kinds of existing vulnerability rating systems into account, we propose VRSS for qualitatively rating and quantitatively scoring vulnerabilities, combining the respective advantages of the existing systems. An experimental study of 33,654 vulnerabilities demonstrates that VRSS works well.

Introduction

In network security, vulnerabilities play a special role. A vulnerability is a bug, a flaw, a weakness, or an exposure of an application, system, device, or service that could lead to a failure of confidentiality, integrity, or availability [1], [2]. Attackers can exploit some vulnerabilities to endanger a computer system’s security.

In order to reduce the losses due to vulnerabilities, IT management must identify and assess vulnerabilities across many disparate hardware and software platforms [2]. They need to prioritize these vulnerabilities and remediate those that pose the greatest risk. But when there are so many vulnerabilities to fix, each scored on a different scale, how can IT managers convert this mountain of vulnerability data into actionable information?

Historically, vendors have used their own methods for scoring software vulnerabilities, usually without detailing their criteria or processes [3]. Over the past several years, a number of large computer security vendors and not-for-profit organizations have developed, promoted, and implemented procedures to rank information system vulnerabilities; examples include the National Vulnerability Database [14], US-CERT, SANS [5], Secunia [6], ISS X-Force [8], Vupen Security [16], Symantec [7], Microsoft [9], [17], Sun [10], and Red Hat [11]. Unfortunately, there has been no cohesion or interoperability among these systems, and existing systems tend to be limited in scope as to what they cover [1].

In July 2003, the National Infrastructure Advisory Council (NIAC) commissioned a research project to promote a common understanding of vulnerabilities and their impact through the development of a common vulnerability scoring system. Current scoring systems, in use by the Computer Emergency Response Team/Coordination Center (CERT/CC), Symantec, Internet Security Systems, Cisco Systems, and others, rate vulnerabilities according to a variety of metrics and determine a single overall threat score by weighting these metrics. These systems use different, non-common metrics to characterize vulnerabilities; they are Internet-centric; they do not universally accommodate changes over time; and they do not have provisions for user operational environments with different risk profiles [12].

The Common Vulnerability Scoring System (CVSS) provides an open framework for communicating the characteristics and impacts of IT vulnerabilities, enabling IT managers, vulnerability bulletin providers, security vendors, application vendors, and researchers to benefit from a common language for scoring IT vulnerabilities [2].

The focus of our research is the comparative analysis of existing vulnerability rating systems, so as to discover their respective advantages and propose a compatible rating framework to unify them. In this paper, we analyze the practical effects of three vulnerability rating methods belonging to three different vulnerability databases and find that vulnerabilities rated with the traditional method accord better with the expected statistical distribution. Based on the existing vulnerability rating systems, we propose a new method for rating and scoring vulnerabilities, which combines the respective advantages of all of these systems.

The paper is organized as follows. Section 2 reviews related work. Section 3 describes three vulnerability databases and their respective rating systems. Section 4 presents the analysis results for the three databases and describes a problem among them. In Section 5, we analyze two different vulnerability rating methods. In Section 6, we propose VRSS, a new Vulnerability Rating and Scoring System, and give some examples to explain the new method. Section 7 presents the distribution analysis of 33,654 CVE vulnerabilities to illustrate the performance of our new system. Section 8 provides discussion. Finally, Section 9 concludes the paper and outlines future work.

Section snippets

Related work

Over the past several years, a number of large computer security vendors and not-for-profit organizations have developed, promoted, and implemented procedures to rank information system vulnerabilities. Leading IT companies including Cisco Systems and Symantec are promoting a rating system that will standardize the measurement of the severity of software vulnerabilities. A plan for the new system, called the Common Vulnerability Scoring System (CVSS), was unveiled at the RSA Conference in San…

Overview of three vulnerability databases

There are many vulnerability scoring systems, supported by different organizations. Generally, there are two major categories of methods for describing the severity of vulnerabilities: qualitative vulnerability rating systems and quantitative vulnerability scoring systems. In this part, we introduce three different vulnerability databases: IBM Internet Security Systems X-Force (IBM ISS X-Force), Vupen Security, and the National Vulnerability Database (NVD). All of them are famous…

Analysis of vulnerability rating systems

The main function of a rating system is to separate vulnerabilities from each other as far as possible according to their effects, so that end-users can understand a vulnerability and be aware of its threat at the same time. In this part, we analyze the distribution of vulnerabilities in these three databases.
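To make the tally behind such a distribution analysis concrete, here is a minimal Python sketch that counts vulnerabilities per qualitative severity level. The records and labels are hypothetical placeholders, not data drawn from any of the three databases.

    from collections import Counter

    # Hypothetical (CVE id, qualitative severity) records, standing in for
    # entries such as X-Force or Vupen Security might publish.
    records = [
        ("CVE-2008-0001", "Medium"),
        ("CVE-2008-0002", "High"),
        ("CVE-2008-0003", "Medium"),
        ("CVE-2008-0004", "Low"),
        ("CVE-2008-0005", "Medium"),
    ]

    # Count how many vulnerabilities fall into each severity level.
    distribution = Counter(severity for _, severity in records)
    total = sum(distribution.values())

    for level in ("Low", "Medium", "High"):
        count = distribution.get(level, 0)
        print(f"{level:>6}: {count} ({100 * count / total:.1f}%)")

Run over a whole database, a tally of this form yields exactly the per-level counts that the distribution figures in this section plot.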

Analysis of two different methods

As shown in Fig. 12, there are two major categories for describing the severity of vulnerabilities: one is the qualitative vulnerability rating system, such as those of IBM ISS X-Force and Vupen Security; the other is the quantitative scoring system, such as CVSS. Through the above study and analysis, we reach the following conclusions (a small normality-check sketch follows the list).

  • 1. In probability theory and statistics, the normal distribution or Gaussian distribution is a continuous probability distribution that describes data that clusters around a mean or…
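As a concrete check of this idea, a standard normality test can be applied to a sample of scores. The sketch below uses SciPy’s D’Agostino-Pearson test on made-up scores; in the paper’s setting, the input would be the scores of all vulnerabilities drawn from one database.

    from scipy import stats

    # Hypothetical 0-10 scores standing in for one database's contents.
    scores = [2.1, 4.3, 5.0, 5.0, 6.8, 4.3, 7.5, 9.3, 5.0, 6.4,
              4.3, 5.0, 7.2, 3.5, 6.8, 5.0, 4.3, 7.5, 6.4, 5.1]

    # D'Agostino-Pearson test: the null hypothesis is that the sample
    # was drawn from a normal distribution.
    statistic, p_value = stats.normaltest(scores)
    print(f"statistic={statistic:.2f}, p={p_value:.4f}")
    if p_value < 0.05:
        print("Reject normality at the 5% level.")
    else:
        print("Cannot reject normality at the 5% level.")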

Improvement of vulnerability rating system

In order to reduce the differences between the two methods, and to build on historical experience with vulnerability rating methods, we propose a new method named the Vulnerability Rating and Scoring System (VRSS). As shown in Fig. 13, we combine the qualitative method and the quantitative method together. If widely adopted, the new vulnerability rating and scoring system will provide a common method for describing the threat of computer security vulnerabilities and will replace different, vendor-specific rating…
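One simple way to keep the two views consistent is to derive the qualitative level directly from the quantitative score. The sketch below uses NVD’s conventional CVSS cut-offs (Low: 0.0-3.9, Medium: 4.0-6.9, High: 7.0-10.0) purely as an illustration; the actual VRSS rating rules are defined in the paper and may differ from this mapping.

    def qualitative_level(score: float) -> str:
        # Map a 0-10 quantitative score to a qualitative level using
        # NVD's conventional CVSS ranges; VRSS's own rules may differ.
        if not 0.0 <= score <= 10.0:
            raise ValueError("score must be between 0.0 and 10.0")
        if score < 4.0:
            return "Low"
        if score < 7.0:
            return "Medium"
        return "High"

    # A combined record carries both representations, which is the spirit
    # of VRSS: a quantitative score for fine-grained prioritization and a
    # qualitative level for at-a-glance communication.
    for score in (2.1, 5.0, 9.3):
        print(score, "->", qualitative_level(score))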

Distribution analysis

In order to illustrate the performance of our new rating and scoring system, we analyze all 34,093 CVE vulnerabilities published between 1999 and 2008. Fig. 15 shows the distribution of the number of vulnerabilities, grouped by three levels. We can see that the number of vulnerabilities with the “Medium” severity ranking is the largest, and the number of vulnerabilities with the “High” severity ranking is much smaller…

Discussion

The two major goals of our research are studying and improving the existing vulnerability rating systems. We have used the angle of the normal distribution to analyze the vulnerability rating systems. In our opinion, the number of vulnerabilities with the “Medium” severity ranking should be the largest, and the number of vulnerabilities with the “High” or “Low” severity ranking should be much smaller than the number labeled “Medium”. The main function of the scoring system is to separate vulnerabilities from each other as far…
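To make this expectation concrete, the sketch below computes the Low/Medium/High proportions that a normal distribution of 0-10 scores would imply under the common cut-offs; the mean and standard deviation are illustrative assumptions, not values fitted to any database.

    from scipy.stats import norm

    # Illustrative parameters for a bell curve over 0-10 scores.
    mean, std = 5.0, 1.7
    low = norm.cdf(4.0, mean, std)           # P(score < 4.0)
    medium = norm.cdf(7.0, mean, std) - low  # P(4.0 <= score < 7.0)
    high = 1.0 - norm.cdf(7.0, mean, std)    # P(score >= 7.0)

    print(f"Low: {low:.1%}, Medium: {medium:.1%}, High: {high:.1%}")
    # With these parameters the split is roughly 28% / 60% / 12%: a bell
    # shape in which "Medium" dominates, matching the discussion above.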

Conclusion and future work

In this paper, we have analyzed a large set of vulnerabilities from three famous vulnerability databases to expose the differences among different vulnerability rating systems, and we have used the idea of the normal distribution to improve the vulnerability rating system. In order to reduce the differences between so many vulnerability rating methods, we have proposed a new vulnerability rating and scoring system named VRSS for qualitatively rating and quantitatively scoring vulnerabilities. An experimental…

Acknowledgement

The authors thank Karen Scarfone, a computer scientist in the Computer Security Division at NIST, for her insightful discussions and comments that improved our work.

References (20)

  • John Chambers, John Thompson, Common vulnerability scoring system, October 2004. Available from: ...
  • Peter Mell, Karen Scarfone, Sasha Romanosky, A complete guide to the common vulnerability scoring system version 2.0, ...
  • Peter Mell et al., Common Vulnerability Scoring System, IEEE Security & Privacy (2006)
  • Peter Mell et al., Improving the Common Vulnerability Scoring System, IET Information Security (2007)
  • SANS, ...
  • Secunia, ...
  • Symantec Security Response, Threat severity assessment. Available from: ...
  • X-Force, X-Force frequently asked questions. Available from: ...
  • MSRC, Microsoft Security Response Center security bulletin severity rating system. Available from: ...
  • Sun, ...

Cited by (76)

  • CAVP: A context-aware vulnerability prioritization model

    2022, Computers and Security
    Citation Excerpt:

    RC indicates confidence levels of exploits. If the presence of a vulnerability is indicated but reports differ or not certain, the confidence level is considered low as Unknown (U); if exploits have been replicated and explained, the confidence level is considered as Reasonable (R); and if exploits have independently verified software components, vendor confirmation, and replicability, the confidence level is considered as Confirmed (C) (Mézešová and Bahsi, 2019). Table 1 depicts the metric values for each Temporal metric subgroup (FIRST.org, 2018).

  • Cyber-attacks and stock market activity

    2021, International Review of Financial Analysis
  • An ensemble approach for optimization of penetration layout in wide area networks

    2021, Computer Communications
    Citation Excerpt:

    In 2005, FIRST (Forum of Incident Response and Security Teams) have proposed CVSS v1 to rank individual vulnerabilities [18]. After that various authors have worked upon scoring schemes by either modifying CVSS base score or proposing a rank analysis approach [19–25]. Wing [26] has proposed a survivability model for network i.e. ability of a system to maintain its state despite the presence of unusual events.

  • Quantitative Evaluation of Extensive Vulnerability Set Using Cost Benefit Analysis

    2024, IEEE Transactions on Dependable and Secure Computing

This work is supported by the National Natural Science Foundation of China (60773135, 60970140, 90718007) and the High Technology Research and Development Program of China (863 Program) (2007AA01Z427, 2007AA01Z450).
