
1 Introduction

The importance of computer technology in everyday life, as well as its strategic importance, is growing fast. Nowadays, digital communication networks are almost indispensable in both private and public sector organisations. Due to the growing role of computers and the Internet in important business and state-related activities, investments in computer security have also been growing fast. Computer security as a subject is as old as computer technology itself, but during the explosive growth of the computer industry and the Internet, computer security has become a rapidly growing industry: many companies offer security products and services, international standards organisations develop standards on best security management practices, etc.

In spite of the growth of the security industry, we have also been witnessing a fast-growing trend of losses through cyber crime. In this paper, we go through some fundamental reasons that lead us to predict that both of these rapid growth trends will probably continue in the future. The reasons we refer to are related to historical design decisions made during the development of the frameworks, protocols and formats that today's computers and the Internet are based on, as well as to the current system engineering practices and the role of the security industry in these practices. Three main observations justify our prediction.

The first observation is that the Internet itself continues to grow fast, not only in terms of the number of users but also in terms of the amount of available data (cloud computation) and the types of devices connected to the Internet (e.g. the Internet of Things). More and more assets will be connected to the Internet, which increases the motivation of potential attackers. The Internet is used not only for inter-organisational communication but also for internal communications at the very highest levels. For example, while ten years ago only dedicated communication lines were used for a government's internal mail exchange, today Internet mail is used. Known security incidents, like Hillary Clinton's email scandal [10], seem not to decrease the willingness of companies and countries to use the Internet in the most sensitive areas.

The second observation is related to fundamental technical design choices made during the development of computers and networks that, on the one hand, enabled the rapid growth of the computer industry and the Internet, but, on the other hand, have caused numerous fundamental vulnerabilities in systems that are very hard to eliminate. For example, one of the main design decisions is the Open Systems Interconnection (OSI) framework, which can be considered the main reason why the rapid growth of computers and the Internet was possible. This framework proposed a modular layered design approach in which the information exchange formats and protocols between the layers are standardised, while inside the layers producers have full freedom to implement the desired functionality. On the one hand, such an approach guarantees the scalability of the production of computer and Internet technology. On the other hand, as the information exchange between different layers is limited, the layers are not able to cooperate in a way that is sufficient for effectively countering denial of service attacks.

The third observation is related to the growing role of Chief Security Officers (CSOs) in organisations. Nowadays, they are even higher in the hierarchy than Chief Technical Officers (CTOs) and Chief Information Officers (CIOs). The mainstream approach today for developing a secure application is that the CTO and CIO present a functional solution of the system, and the role of the CSO is to make and keep it secure by applying security measures to the system, following general rules (given in security management standards and best practices) that do not assume an understanding of the functionality of the system. This guarantees that there will always exist vulnerabilities in systems that attackers can abuse and, hence, a reason to apply more security measures and buy more security products.

The paper is organised as follows. In Sect. 2, we discuss the reasons and mechanisms behind the rapid growth of computer technology and the Internet and the reasons why such growth will continue in the future. In Sect. 3, we discuss the role of Security as a separate discipline and characterise its branches. In Sect. 4, we discuss the reasons behind the persistent vulnerabilities in today's and future systems. In Sect. 5, we discuss the role of today's system engineering and management practices in the existence of vulnerabilities in systems.

2 Computers and Networks

Billions of personal computers are in use today and most of them are interconnected via computer networks. Nowadays, computers and networks have a modular design, which means that they are built from mutually compatible macro-components that are easy to interconnect, making computers and networks easy to assemble.

Modularity is achieved through conventions and standards that specify the physical parameters and the data formats used in the interfaces between components. This means that components fit together independently of their producers and can be produced anywhere in the world, which makes the components widely accessible.

The modularity also means specialisation. Engineers who interconnect the macro-components of a computer do not have to know how to produce such components. Computer engineers are not necessarily electronic engineers. Electronic engineers do not necessarily know enough solid-state physics to understand how the basic components of electronic circuits (such as transistors, diodes, etc.) are built. Similar specialisation happens at higher levels. Application programmers do not necessarily know the details of operating systems. Systems programmers do not necessarily know the physical details of computers. This makes the education and training of specialists much easier and, together with standardisation, enables efficient industrial mass production of complex computer systems, such as personal computers, computer networks, supercomputers, etc.

In addition to general-purpose computers, there are many different types of special-purpose computers and controllers with various internal architectures. Similarly, there are many different computer networks, but nowadays most of them are connected to the Internet, which has become the world-wide infrastructure for information exchange. Compared to 1985, the number of Internet hosts in the world has grown about a million times: from thousands to billions.

Both companies and states use and trust the Internet more and more. Their everyday functions have become almost impossible without Internet communication. Therefore, the Internet has become a critical infrastructure for both companies and states. Numerous services for private and legal persons are offered through the Internet, such as electronic banking, web shops, citizen services offered by states, etc.

In addition, the Internet has become an entertainment system and a communication environment for private persons. Today, most TV sets and phones are connected to the Internet and use it as a communication channel.

3 Security

All this makes the Internet a potential target of attacks, and we have witnessed an increasing trend of attacks and crime through the Internet, as well as an increasing trend of losses through cyber crime. This has created a Security industry whose obvious goal should be decreasing these losses. Nowadays, security is a popular topic among the users and designers of information technology. Several forms and notions of security have been discussed, like Computer Security, Network Security, Information/Data Security, as well as Cyber Security. In this section, we simply observe what the branches of Security as a discipline are and what has been written about them.

Five-minute science project (a joke): we searched the Amazon bookstore with these keywords and got the following numbers of matches: Computer Security (145,000), Network Security (70,000), Information/Data Security (68,000), Cyber Security (7,000). The funny thing here is that 70,000 + 68,000 + 7,000 = 145,000, which suggests that Computer Security is exactly as important as all the other forms altogether.

A far more practical implication of these figures is that we have so many textbooks on security that no one is able to have a complete picture of the subject. Security has become a discipline independent of systems engineering, with its own specialists who do not necessarily know the details of the other systems engineering disciplines. In the following, we briefly describe what is meant by these different areas of security.

3.1 Computer Security

Computer security deals with protecting all components of computer systems (and the information stored in them) from threats such as backdoors, denial-of-service (DOS) attacks, direct access attacks, eavesdropping, spoofing, tampering, privilege escalation, phishing, clickjacking, etc. Computer security is the most general of these terms. It is also the oldest security area, having become important once computers became widely used in banks and other organisations.

3.2 Network Security

Network security is focused on protecting computer systems from network-related attacks, such as wiretapping, port scanning, denial-of-service (DOS), DNS spoofing, buffer- and heap-overflow attacks, the various forms of man-in-the-middle attacks, and many others. Sometimes phishing attacks are also considered a subject of network security. Hence, Network Security deals both with attacks targeted against the network as an infrastructure and with attacks targeted at computers and users through the network. In addition to the Internet, all other kinds of networks (public and private) are also covered by Network Security.

3.3 Information/Data Security

Information Security is very close to Computer Security, but the threats it considers are focused on information, not on the physical components of computer systems. It considers general threats like information leakage (secrets become known to unauthorised persons), information modification (existing data is modified in an unauthorised way), information forgery (falsified data is added to the system), and information destruction (all data that encodes the information has been accidentally lost or intentionally deleted).

Stating the threats in such an abstract, general form suggested defining “security” in positive terms by using three abstract properties of information, the so-called CIA triad:

  • Confidentiality: no leakage

  • Integrity: no modifications or forgeries

  • Availability: no destruction
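To make one of these abstract properties concrete, consider integrity. A standard mechanism (our illustration, not discussed in this paper) is a cryptographic hash: the holder of the data records a digest, and any later modification of the data changes the digest and is thereby detected. A minimal sketch in Python, with an invented example message:

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 digest of the data, hex-encoded."""
    return hashlib.sha256(data).hexdigest()

# The data holder records a digest at storage time.
original = b"Payment order: transfer 100 EUR to account X"
recorded = digest(original)

# Later, integrity is verified by recomputing the digest.
assert digest(original) == recorded    # unmodified data: check passes

tampered = b"Payment order: transfer 900 EUR to account X"
assert digest(tampered) != recorded    # unauthorised modification is detected
```

Note that a bare hash only detects modification; protecting the recorded digest itself (e.g. by a digital signature) is a separate problem.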

Such abstract goals are not self-explanatory and do not give any hints about how they can be achieved in particular computer systems. Therefore, the CIA triad has been constantly criticised and there have been many proposals to extend the triad with new features [3], such as accountability [14], awareness, responsibility, ethics [11], auditability, non-repudiation, and privacy [4].

In 1998, Parker [12] proposed an alternative six-component list of properties, called the Parkerian hexad, consisting of Confidentiality, Control, Integrity, Authenticity, Availability, and Utility. Some systems have a relatively large number of different properties. For example, the system proposed by NIST [14] has 33 properties!

These principles of Information Security bear the main responsibility for Security tending to become an independent technical discipline. As some of the claims about information security tend to be neither verifiable nor falsifiable, the discipline has been claimed to be non-scientific [8].

3.4 Cyber Security

Cyber security is a relatively young form of security that pays more attention to the homeland-security aspects of computer security, the fight against cyber terrorism, and the strategies of cyber war between countries. Cyber Security as a discipline started to develop rapidly in 2007 after the massive DOS attacks against Estonia [5, 15]. Since then, several massive cyber attacks have been witnessed, like the DOS attack against the elections in Burma [16] or Stuxnet [9], a malicious computer worm directed against Iran's nuclear program.

3.5 Security-Providing Methods

There are several methods that describe how the security-related decisions have to be made in organisations. The methods can be divided into two categories: risk-oriented methods and baseline methods.

Risk-oriented methods (such as FAIR [6]) try to estimate the risks in monetary terms and approach security from an economic perspective. Potential threats that produce loss to the organisation have to be identified, their likelihood estimated, and possible countermeasures applied, considering their economic feasibility, i.e. the reduced risk must outweigh the cost of the measures. Risk-oriented methods have often been criticised for the hardness of estimating risks with reasonable precision.
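The economic reasoning above can be sketched as a simple annualised-loss calculation. All figures below are invented for illustration, and real methods such as FAIR work with probability distributions rather than the point estimates used here:

```python
# A minimal sketch of risk-oriented reasoning: a countermeasure is
# economically justified when the risk reduction it buys exceeds its cost.
# All numbers are hypothetical point estimates.

def annual_loss_expectancy(incidents_per_year: float, loss_per_incident: float) -> float:
    return incidents_per_year * loss_per_incident

def is_justified(ale_before: float, ale_after: float, annual_cost: float) -> bool:
    """The reduced risk must outweigh the cost of the measure."""
    return (ale_before - ale_after) > annual_cost

ale_before = annual_loss_expectancy(incidents_per_year=4, loss_per_incident=20_000)
ale_after  = annual_loss_expectancy(incidents_per_year=1, loss_per_incident=20_000)

print(is_justified(ale_before, ale_after, annual_cost=50_000))  # True: saves 60,000/year
```

The criticism mentioned above applies directly to this sketch: the inputs (incident frequency, loss per incident) are exactly the quantities that are hard to estimate with reasonable precision.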

Baseline methods (such as BSI IT-Grundschutz [7]) define a hierarchy of security levels described by sets of mandatory security measures that organisations are obliged to take if they decide, or are required, to belong to a certain security level. Baseline methods do not require risk calculation: after deciding which level of security is suitable, it only remains to apply the security measures of that particular level. Baseline methods have been criticised for a too coarse-grained view of the protected systems, which may lead to over-secured systems or to insufficient protection.
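The baseline logic can be contrasted with the risk-oriented one in a few lines: once a level is chosen, the required measures follow mechanically, with no risk arithmetic. The level names and measures below are invented; real catalogues such as IT-Grundschutz contain hundreds of controls:

```python
# A minimal sketch of a baseline method: security levels are cumulative
# sets of mandatory measures. Levels and measures here are invented.

BASELINE = {
    "basic":    {"antivirus", "backups", "password policy"},
    "standard": {"antivirus", "backups", "password policy", "vpn", "logging"},
    "high":     {"antivirus", "backups", "password policy", "vpn", "logging",
                 "hardware tokens", "network segmentation"},
}

def missing_measures(level: str, deployed: set) -> set:
    """Measures still required to comply with the chosen level."""
    return BASELINE[level] - deployed

print(sorted(missing_measures("standard", {"antivirus", "backups"})))
# ['logging', 'password policy', 'vpn']
```

The coarse-grained criticism is visible here too: the mandated set depends only on the chosen level, not on what the protected system actually does.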

4 Fundamental Vulnerabilities

As we have so far been witnessing a growing trend of real monetary losses from security incidents, it is reasonable to analyse the causes of these incidents. We can identify three types of general vulnerabilities:

  I. Non-technical: The system is abused without breaking any intended business rules of the system.

  II. Fundamental technical: Well-known general vulnerabilities that exist due to the global design choices made in computer and network design.

  III. Non-fundamental technical: Vulnerabilities caused by the system's developer (improper design), builder (improper implementation), or holder (improper maintenance).

Meeting the attacks that abuse type I vulnerabilities requires traditional crime fighting or cyber-defense strategies. Type III vulnerabilities can be avoided by proper design practices; we will focus on them later in Sect. 5. In this section, we focus on some of the causes of type II vulnerabilities, which cannot be avoided without changing the technical standards that are followed today.

There have been many important design choices made during the development history of today's computers and networks, whose positive effects and impact have been thoroughly studied and taught in universities. A topic that is much less covered is the negative effect of these choices on the security of today's systems. Security threats and incidents point to undesired features of computer systems that are there due to these historical design choices. In this section, we observe some of the design choices that influence today's security situation. We do not claim that the list we provide is even close to complete. We divide the design choices into four classes: (1) Internet design, (2) operating systems design, (3) computer hardware design, and (4) applications/services design.

4.1 Internet Design

The Internet has been designed to allow any user A to send any data X to any other user B at any time. Among the main drivers for the design decisions have been: (1) the robustness of the Internet, (2) communication efficiency, and (3) modular design.

Packet Switching Instead of Circuit Switching. Circuit switching and packet switching are two different methods for establishing a connection between network nodes A and B. Circuit switching establishes a dedicated communication channel (a multi-link path in the network graph) before A and B start to communicate. This channel remains connected for the duration of the whole communication session. Old telephone networks worked this way: if one phone called another, a continuous electrical circuit between the two phones was established, and the phones stayed connected until the end of the call.

Packet switching divides data into packets that are then transmitted through the network independently. Each packet has a payload (used by applications) and a header (used by the networking hardware). Headers are added to the payload before transmission and removed by the networking hardware when the packets reach their destinations. The connections between communicating pairs of nodes are logical (not physical) and may share links, i.e. a link is not occupied by just one connection and can be used to transfer packets from many different logical connections. This may cause a loss of quality compared to circuit switching. On the one hand, packet switching may cause potentially arbitrarily large transfer delays, while in circuit switching the transfer delay is constant. On the other hand, it enables more efficient use of channel capacity, because in circuit switching all the links (wires) of a connection stay occupied by it and cannot be used by other connections even if no actual communication is taking place (for example, during silence periods of a phone call). Packet switching increases the robustness and efficiency of the network and enables the simultaneous interaction of many network applications.
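The header/payload mechanism just described can be sketched in a few lines: the sender splits a message into packets, each carrying a sequence number in its header, and the receiver reassembles them even if they arrive out of order. This is an illustrative toy, not a real protocol; real headers also carry addresses, checksums, etc.:

```python
# Toy illustration of packet switching: a message is split into packets,
# each with a small header (here just a sequence number), transmitted
# independently, and reassembled at the destination.

def packetize(message: bytes, payload_size: int):
    """Split a message into (header, payload) packets."""
    chunks = [message[i:i + payload_size]
              for i in range(0, len(message), payload_size)]
    return [(seq, chunk) for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    # Packets may arrive in any order; the header restores the sequence.
    return b"".join(payload for _, payload in sorted(packets))

packets = packetize(b"any user A sends any data X to any user B", payload_size=8)
shuffled = list(reversed(packets))  # simulate out-of-order delivery
assert reassemble(shuffled) == b"any user A sends any data X to any user B"
```

The arbitrarily large transfer delays mentioned above correspond to packets waiting in intermediate queues; the toy ignores this, but the reassembly logic is the same.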

Packet switching is the method used in today's Internet. It was invented and developed by the American computer scientist Baran during a research project funded by the US Department of Defense [1, 2]. The name “packet switching” came from the British computer scientist Donald Davies, through whose work the concept was adopted in the early ARPANET in the US [13].

Though the decision to use packet switching instead of circuit switching in the Internet was essential for robust and efficient communication, it also created the possibility of efficient cooperative denial of service (DOS) attacks against any Internet node.

Layered Design of Data Transfer, OSI Stack. The Open Systems Interconnection (OSI) model fixes the interconnection standards of communication and computing devices, so that their connection is possible independently of their internal structure and technology. By providing standard communication protocols, it creates interoperability between diverse computer/communication systems.

The model partitions systems into a hierarchy of abstract layers, the so-called OSI stack. Each layer serves the layer above it. For sending a message obtained from the layer above, a new header is added to the message and the message with the new header is given to the lower layer, until in the lowermost (physical) layer the data is converted to physical signals on the transmission medium (wire, radio-waves, etc.). When the message is received from the lower layer, the corresponding header is removed and the rest of the message is given to the upper layer until the highest (application) layer is reached. There are seven layers in the original version of the OSI stack:

  7. Application layer: APIs for resource sharing, remote file access, etc.

  6. Presentation layer: Converts data between applications and networking services (character encoding, compression, enciphering)

  5. Session layer: Organises the continuous exchange of information between two nodes by using multiple transport-layer transmissions

  4. Transport layer: Transmission of data segments between network nodes (TCP and UDP protocols); segments data and forms packets from the segments

  3. Network layer: Organises a multi-node network (addressing, routing, packet traffic control, etc.)

  2. Data link layer: Transmission of data frames between two nodes

  1. Physical layer: Transmission and reception of bits encoded into physical signals

The protocols of the OSI framework enable two same-level entities at two nodes to communicate, i.e. to exchange messages by using the lower layers as a transport mechanism.
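The per-layer header mechanism can be illustrated with a toy encapsulation: each layer on the sending side prepends its own header, and the receiving side peels the headers off in reverse order. The layer subset and string headers below are our simplification; real protocol headers are binary structures:

```python
# Toy illustration of OSI-style encapsulation: each layer wraps the message
# from the layer above with its own header; the receiver unwraps in reverse.

LAYERS = ["transport", "network", "datalink"]   # a subset of the OSI stack

def send(message: str) -> str:
    for layer in LAYERS:                        # go down the stack
        message = f"[{layer}]{message}"
    return message                              # "on the wire"

def receive(frame: str) -> str:
    for layer in reversed(LAYERS):              # go up the stack
        header = f"[{layer}]"
        assert frame.startswith(header), f"malformed {layer} header"
        frame = frame[len(header):]
    return frame

wire = send("hello")
assert wire == "[datalink][network][transport]hello"
assert receive(wire) == "hello"
```

Note that each layer touches only its own header, which is exactly the limited inter-layer information exchange discussed in this section.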

The OSI framework was developed during the Open Systems Interconnection project in the late 1970s at the International Organization for Standardization (ISO) and was published in 1984 as the standard ISO/IEC 7498-1.

The OSI framework supports specialisation and a fair market of products. As the input/output and the basic functionality of the tools at every layer are specified by the standard, industrial competitors can implement this functionality in the best possible and most economically efficient way. This guarantees that the best products win and that high-quality communication can be achieved.

Every layer (taken separately) specifies a universal data exchange framework which does not depend on what happens at the lower layers, i.e. the protocols and formats of the layer can potentially stay the same even if the standards of the lower layers change.

In spite of its extremely positive role in the fast development of the Internet and computer systems, the OSI framework is not flexible enough for efficient strategies against massive denial of service (DOS) attacks. The formats and protocols of ISO/IEC 7498 do not enable higher layers to express complex preferences about transmission strategies at the lower levels. Much more cooperation between the layers would be needed to fight organised DOS.

4.2 Operating Systems Design

In this subsection, we provide some examples of design choices in the field of operating systems that, while reasonable and economically feasible at their time, are responsible for the most important universal technical vulnerabilities that today's cybercrime abuses.

Code and Data Mixed. Compared to nowadays' computers, early computers had very little memory, which therefore had to be used flexibly. For example, a program could at a certain stage modify the part of memory holding its own code, in order to reuse the memory occupied by code segments that would no longer be executed. Some of the early general-purpose operating systems did not distinguish between memory intended for running code and memory for storing data.

This enables a running virus to easily modify other programs in the computer's memory and devices and use them for subsequent infections and damage.

Loadable Operating Systems. The code of the operating system of a personal computer is loaded and executed from the same storage (e.g. the hard disk) that the computer uses for ordinary data. Such an option is good for flexible updates and bug-fixes in the operating system.

The problem with such a choice is that a running virus faces no obstacles to rewriting the operating system stored on the hard disk, thereby getting full control over the computer.

4.3 Computer Hardware Design

Shared Interfaces for Loadable Code and Loadable Data. Bootstrapping is done with the same type (and shape) of disks as those used to store data, and even through the same interfaces. For example, early personal computers even had a default option that if, at the time of the computer's restart, a floppy disk was in the disk-drive, the computer automatically loaded and ran a segment of code from the floppy disk (the so-called boot sector). Computers with a hard disk load and run a piece of code that is stored on a certain part of the hard disk (the master boot sector). Such options provided a flexible mechanism for upgrading a computer's operating system and also for using one's own operating system (saved on a floppy disk) on any other computer of the same type.

At the same time, such design choices provided an efficient infection mechanism for computer viruses.

Bus Architecture. Buses transfer data between the components of a computer as well as between a computer and its external devices. Buses in early computers were collections of parallel electrical wires with many connections, but nowadays the meaning of a bus is wider and encompasses any solution with the same logical function as a bus of electrical wires. Buses may use parallel or serial bit-connections and several forms of connection and topology. The external bus (or expansion bus) connects the various external devices (e.g. printers) to the computer.

Mostly, operating systems do not handle the external buses completely, and hence viruses may use the external devices to hide themselves, making them very hard to detect.

4.4 Applications’/Services’ Design

Global Identities. Many Internet applications are related to identities that are used to take on (define) real contractual responsibilities. For example, in some countries personal digital signatures must be used, in spite of the fact that their owners have almost no control over the corresponding devices and supporting infrastructures. Global identities have also raised the growing topic of privacy. The foundation of the privacy problem is the provable relation between data and identities.

Though global identities give us many convenient options in electronic services, public trust in them seems to be too optimistic. The best example is voting over the Internet (i-voting), universal and for everybody to use! Are we indeed ready to handle the case where someone gains power through a falsified i-voting?

Clouds. Cloud storage/computing is an Internet-based data-storage and computing platform that provides shared computational power and data to computers and other devices. Clouds enable on-demand access to a shared pool of configurable computing resources (e.g. computer networks, servers, storage, applications, services) that can be managed with minimal effort. They provide users and enterprises with various capabilities to store and process their data in privately owned or third-party data centres across the world. Clouds help organisations to lower their computer- and network-related infrastructure costs. They also enable organisations to adjust resources more rapidly when business demands change unpredictably: via clouds, companies can easily increase the memory and computational power they use when their business needs increase, and decrease them when demands decrease.

Clouds have many negative aspects too. For example, there is no way for users to see how carefully their valuable data is held, i.e. what the likelihoods are of the data being lost, modified, or used in unintended and unauthorised ways. Possible privacy violations are one of the main concerns regarding clouds. We find more and more references to claims that public data can be used more efficiently than any intelligence agency has done in the past.

5 Systems Development and Security Management

The historical design decisions described above, together with specialisation, have supported a systems design approach where system development engineers build universal platforms which are later “secured” by security engineers.

Numerous security specialists around the world understand the direct causes of losses via security incidents and try to take measures against these causes. For example, to fight computer viruses, security specialists recommend using virus detection/protection software. After a new computer virus is discovered, the virus scanners are updated to be able to recognise the new virus in the future, so security specialists recommend frequent updating of virus-detection software. They also recommend using Virtual Private Network software/hardware to prevent attackers from eavesdropping on secret communications going over otherwise unprotected communication channels. Pairing vulnerabilities with the corresponding measures creates security practices, many of which are standardised and contain lists of security measures that are necessary to fight vulnerabilities. Security specialists know (standard) security practices, can follow them, and make systems “secure” by applying measures and installing numerous security-oriented products. In some cases, the measures require significant redundancy, i.e. one has to install several copies of a functional component instead of one.

Often, security specialists do their job without sufficient knowledge about the initial (business) intention of the system they secure, so they automatically follow their security practices even if the system is actually secure by design. Security standards and best practices support such an approach; some of them even claim that security risks are fundamentally different from ordinary business risks. There is a huge number of organisations that offer security-related certified education. Most of this education is dedicated to technical staff and middle-level executives, not much of it to top-level executives. Certified security courses mostly teach security engineers how to buy security products and how to explain their necessity to their management. All this makes it very hard for Chief Executive Officers (CEOs) to keep technical decisions under control, i.e. to make sure that technical decisions always support rational business decisions of the company.

In companies, Chief Technical Officers (CTOs) are responsible for the whole computational platform of the company. For more than 15 years, companies have also had Chief Information Officers (CIOs), who are responsible for all the information the company produces and processes and also for how this is done. For only about 10 years, companies have also had Chief Security Officers (CSOs), who are positioned even higher than the CIO and the Chief Financial Officer (CFO), i.e. the CSO receives a functional solution from the CIO and CTO and makes it secure.

On the one hand, such an approach may enable faster and modular development of systems. On the other hand, all the additional security-related equipment may make the system many times more expensive. Another drawback of such a development practice is that the security specialists who try to protect the systems are always behind the attackers.

Though there exist more systematic approaches to systems design, in which the specifications include both the functionality and the restrictions and which may significantly reduce the overall costs of designing, building and maintaining a system, for some reason such approaches are not practiced.

For obvious reasons, security equipment sellers are interested in such a situation. The producers of ordinary computer equipment also gain from the situation because, due to the redundancy required by security standards, they can sell more products. This is one probable reason why such a practice is very hard to change.

It is also very hard to come up with scientifically proven arguments against such a practice, because there is no general theory of systems' security [8]. One cannot prove that the security measures one applies are justified, and no one can prove that they are not. Systems' security is not yet an engineering practice (such as we have in Civil Engineering); it is just a technician's practice that has insufficient support from science.

6 Conclusions

We predict that the growing trend of losses via cyber crime will continue in the future, the main reasons being that: (1) as more and more assets are connected to the Internet, the number of potential targets and the incentives for attackers grow; (2) fundamental (and hard to change) design decisions made in the early development stages of today's Internet and computer technology guarantee persistent technical vulnerabilities in Internet-based systems, due to which attackers will always be one step ahead of defenders; (3) the growing role of Chief Security Officers (CSOs) in organisations, who do not necessarily understand the detailed purpose and functionality of the system but whose duty is still to make the ICT system of the organisation secure. These reasons guarantee the continuous growth of the security industry but also the continuous growth of losses through cyber crime.