Standardisation: A tool for addressing market failure within the software industry

https://doi.org/10.1016/j.clsr.2013.05.009

Abstract

Despite the maturity of the software industry, empirical research demonstrates that average software quality, when measured through the presence of software defects, is low. Such defects cause a wide array of issues, not least in the form of vulnerabilities, which support a multi-billion-pound-a-year industry of cyber crime fraud. This paper suggests that this is the result of market failure stemming from two factors: first, information asymmetry prevents the establishment of software quality prior to purchase; second, the legal provisions available under private law are unable, in their current form, to adequately address software liability issues. On that basis, this paper proposes the use of standardisation as a tool to address both of these shortcomings, by providing an industry benchmark against which software quality can be ascertained and by forming a legal tool for determining causation for the purposes of establishing legal liability.

Introduction

Pervading modern society, computing is a well-established technology that has been recognised by law for almost half a century.1 Computers use software to provide instruction on the storage and processing of information,2 with the process of creating software known as software engineering – a process involving a number of stages that progress from requirements gathering and design, through the creation of source code and object code,3 to testing. Whilst there are many actors within the software industry ecosystem, the principal ones are the software vendor (creating the software), distributors and resellers,4 and end-users (both consumer and business entities).

Globalisation and a reliance on technology mean that software underpins many aspects of daily life – for example transportation, banking, communication, retail and healthcare. Despite offering a plethora of benefits, software can become problematic when it contains bugs or errors, the ramifications of which can be unpredictable and, by virtue of its usage, widespread. A key reason for this resides in the fact that computers simply interpret program instructions: unlike humans, who may identify certain instructions as harmful, computers do not. Even if they could, computers would be unable to address errors without knowledge of the programmer's intention. It is for this reason that software quality assurance (SQA) is important, in order to ensure that software functions as intended and has been thoroughly tested. Failure to perform adequate SQA places at risk the software's ability to fulfil its intended function, raising the likelihood of the software failing altogether, with potential for widespread and significant damage.

The lack of quality in a large number of widely used software packages now plays an extremely significant role in global security. Software defects are often vulnerabilities, which can be exploited by cyber criminals or ‘hacktivists’ to gain access to systems,5 steal data, damage data or damage reputation. In some extreme instances such defects have caused billions of pounds' worth of financial damage, or threats to life and limb in attacks on power and health systems. Reports vary in their estimations but agree that the damage to the UK alone is in the order of billions of pounds per year,6 most of which cannot be pursued with conventional legal action, given the difficulty of identifying attackers and the complexity of the ecosystem of technology involved in such cases. A substandard focus on software quality facilitates a wide array of different attacks. There is little better demonstration of the scale of this issue, and of the endemic quality problems in the software industry, than the view provided by cyber criminals. The BlackHole exploit pack,7 a cyber criminal toolkit that exploits such software defects through a point-and-click interface, enables a massive range of skilled or unskilled attackers to exploit organisations all over the world. When law enforcement or security researchers gain visibility of such cyber criminal software, they can see which software is being used to attack systems and precisely which defects in which vendors' products are to blame. Frequent enablers of cyber criminals are the likes of Adobe and Oracle,8 though there are many others. Whilst eliminating all defects is unrealistic, poor investment in SQA indirectly facilitates a growing multi-billion-pound fraud bill, which is ultimately shared across end-users.

As dependence on software continues to grow, SQA should be expected to be at the forefront of software developers' efforts to minimise costs and liability. Unfortunately, empirical evidence suggests that this is not the case, with data provided in Section 2.2 demonstrating that, despite the maturity of the software industry, endemic quality concerns exist globally.9 There are many potential reasons for this failure, of which this paper focuses on causation – whereby the technical nature of software is largely incompatible with consumer understanding and existing legal frameworks, hindering end-users' ability to make appropriate use of the law to exact remedies when instances of poor quality software are encountered. When taken in view of the overall level of governance imposed on the software industry, it is suggested that the lack of legal remedy harbours software vendor complacency on matters such as quality.

Whilst there are several ways of addressing this problem, this paper focuses on the use of standardisation, the premise being that it is an essential tool to provide the courts with a mechanism for identifying and establishing liability, which in turn enables the market to address software quality. In this context, the intended use of standardisation within SQA is to formalise the use of software engineering methodologies that are known to result in high levels of software quality (as evidenced in Section 4.1), as well as to address the key challenges faced when drawing upon existing legislation to provide remedies for loss or damage resulting from defective software.10

In light of the above, the structure of the paper is as follows: Section 2 provides a background to SQA and standardisation. Section 3 discusses the forms of governance currently applied to the software industry, highlighting why market failure has occurred. Section 4 addresses standardisation, looking at what should be standardised to address software quality, and how standards are currently used within the law. Finally, Section 5 considers causation and the legal shortcomings applicable to software defects, highlighting how standardisation can take a central role in addressing this.


Background

Given that software quality and standardisation are both central to the core arguments of this paper, the following subsections provide a brief background to each, along with a summary of empirical evidence supporting the claim that SQA is lacking across the entire software industry.

Existing governance of software quality

It was noted in the introduction that the current governance presiding over the software industry has failed to ensure SQA. This governance comprises two overlapping spheres: the market (self-regulation) and the law. The following subsections discuss each of these to suggest why they have failed to address software quality concerns.

Standardisation

The previous section discussed how the current spheres of governance applicable to the software industry have facilitated sub-standard software quality. In returning to the topic of standardisation, the purpose of this section is to place standardisation into context by establishing how it can change the current status quo, and how it can be used in a legal capacity.

Issues of causation

Further to the above analysis of standardisation, there are two key barriers inhibiting its use within a software engineering context: the first is the non-compulsory nature of software engineering methodologies, and the second is that the wide choice available, and the variation in their effectiveness, mean that consistent results are not guaranteed. To focus on individual defects is an ineffective and resource-intensive approach to software quality. Instead the methodologies identified as

Roksana Moore ([email protected]) is a Lecturer in Information Technology and Intellectual Property law, and a member of the Institute for Law and the Web at the School of Law, University of Southampton.



Roksana Moore ([email protected]) is a Lecturer in Information Technology and Intellectual Property law, and a member of the Institute for Law and the Web at the School of Law, University of Southampton.
