
1 Introduction

Supervisory control and data acquisition (SCADA) is used to refer to a range of industrial control systems (ICSs) which assist in overseeing complex industrial processes. SCADA systems are used in a wide range of industrial applications and facilities, including electricity generation plants, chemical plants, manufacturing plants, water and sewage treatment facilities, and industries such as transportation. SCADA systems have gained increasing popularity, and industries have become heavily dependent on these systems for collecting data from industrial processes in order to control and monitor their operations and ensure that they are functioning properly. A failure in a SCADA system or one of its elements may result in a failure of the industrial process being controlled. In some cases these systems are life critical, and thus a successful attack on them can jeopardize thousands of people’s lives [12, 18]. Because of this, the foremost design considerations of such systems have always included a high level of reliability and availability. In general, modern SCADA systems are comprised of a communication infrastructure and the following major elements:

The programmable logic controller (PLC) is one of the main components of the SCADA system. Field devices, e.g., sensors and controllers, send signals and status updates to the PLC and receive operational commands from the PLC, usually without the direct involvement of a human operator. The PLC is also responsible for reflecting the field devices’ state to remote devices (e.g., the HMI).

The engineering workstation (EWS) is a computer workstation used to update the PLC software and program the PLC logic.

The human machine interface (HMI) is a computer workstation that makes the industrial process controlled by a SCADA system accessible to a human operator. The operator can monitor processes (e.g., the HMI may display the current water level at an automated reservoir) and send commands to the field devices through an HMI (e.g., stopping the operation of a pump).

Sensors are used in order to reflect the state of an industrial element (e.g., wind speed in a wind tunnel) or the environment (e.g., air temperature). The information from sensors is used by the PLC to control the industrial elements.

Communication infrastructure includes switches, cables, wireless receivers, etc. Contemporary SCADA systems are able to use Ethernet and TCP/IP infrastructure in order to achieve connectivity; legacy SCADA systems rely on older technologies and communication protocols. SCADA components communicate by utilizing standard SCADA protocols, such as DNP3 and IEC 61850, or proprietary vendor-specific protocols, such as S7 and variants of Modbus.

Additional components such as controllers and actuators, databases which store historical information (i.e., Historian), and security elements such as Firewall and one-way traffic devices can also be found in a typical SCADA system.

SCADA systems, especially legacy SCADA systems, are inherently insecure. Initially they were designed and built using specialized and proprietary protocols, implemented by old software and hardware which were rarely patched [11]. Security measures such as antivirus software and encryption are usually not considered in ICSs. Such security measures are not capable of identifying and defending against ICS-specific attacks (e.g., attacks against SCADA protocols such as Modbus) and might harm the availability of the system, which is one of its most important requirements [7, 14].

The use of SCADA systems in critical infrastructures makes them desirable targets for attackers. Attacks on such systems have been increasing in recent years. As demonstrated by the Stuxnet worm, and more recently by the TRITON malware, a successful SCADA attack can have serious impact on a nation’s economy, safety, and stability. For this reason, continuous security evaluation of ICSs is crucial for mitigating cyber-attacks.

Penetration testing (pen-testing) [1] is a commonly used security measure. The goal of pen-testing is to detect weaknesses in the network such as hosts running vulnerable software, misconfiguration of network components or security countermeasures, usage of default passwords for login services, etc.

The security evaluation of an ICS is quite different from the security evaluation of a typical enterprise network. Typical pen-testing activity focuses on an enterprise’s IT environment, especially IT components that can be exploited via the Internet. These kinds of tests usually represent a small part of a typical security evaluation of an ICS [1]. Pen-testing for ICSs mainly focuses on the industrial components (e.g., Historian, HMI, PLC, and sensors) which communicate over dedicated industrial protocols (e.g., Modbus, DNP3). These components and protocols were originally developed for serial communication based on the assumption that ICSs are isolated from the IT environment (and thus not connected to the Internet); therefore, security properties such as authentication and encryption are usually not implemented in these protocols. Currently, industrial protocols are commonly transmitted over TCP/IP; in addition, many ICSs are connected to the Internet, thus making them easy targets for attackers.

The fact that SCADA systems are implemented in critical infrastructures also makes it difficult to evaluate their security. A typical pen-testing activity (for a non-industrial environment) is usually executed within the enterprise network, however this cannot be done in the case of an ICS. Pen-testing activities involve intrusive actions such as port scanning (e.g., using Nmap) and vulnerability assessment (e.g., using OpenVAS or a Zeus scanner), which may crash industrial components and therefore cannot be directly executed in operational industrial environments. Given this, security experts have suggested the construction of a dedicated testbed for evaluating the security of an ICS [5, 7, 9].

A testbed is an isolated environment which contains a generic implementation of the architecture of the system under test and allows safe execution of penetration tests. The creation of a testbed requires significant investment of funds and effort. Therefore, an efficient testbed should be able to mimic a variety of ICS setups [5]. For this reason, most testbeds are not designed to represent a specific ICS environment, but are more generic so as to be able to address the needs of multiple facilities in the same industry. Keeping the testbed generic can compromise the fidelity (i.e., the requirement that a testbed should represent the system under inspection as accurately as possible) [17].

In this paper, we introduce a new automated method for inferring the specification of a digital twin that is designed to facilitate the security evaluation of a specific industrial environment. In contrast to testbeds, which are generic, a digital twin is a replica of a specific ICS; i.e., a model that consists of all of the components from the original industrial environment. Each replicated component can either be implemented as a digital clone (e.g., by using simulation or virtualization software), or alternatively can be physically installed in the twin model. The components that are implemented within the digital twin, as well as the level of implementation of each component, defines the specific security tests that can be conducted on the digital twin (e.g., a digital twin without the HMI implemented does not support the execution of security tests on the HMI). The primary benefit of using a digital twin, as opposed to a testbed, is that it reliably represents the real industrial environment. In other words, the results of a pen-test conducted on the digital twin genuinely reflect the expected results of conducting the same test in the real environment.

One of the most challenging tasks in the process of creating a digital twin is determining the implementation level (specification) of its components. The implementation level of the components in the digital twin directly affects the overall cost of establishing the digital twin as well as the degree to which it reflects the industrial environment (fidelity). For example, a twin model that is completely identical to the real industrial environment (i.e., a physical clone) has the highest fidelity (as it allows the execution of all possible tests), but implementing such a model is extremely expensive. We present a method for deriving the specification of a cost-effective digital twin that is specifically designed to facilitate the security evaluation of a specific industrial environment. The proposed method models the problem of deriving the digital twin for a specific industrial environment as an optimization problem. The optimization problem attempts to maximize the impact of the digital twin under strict budget constraints (i.e., allowing the execution of the most important penetration tests for improving the security of the industrial environment).

The contributions of this paper are as follows:

  • We introduce the concept of creating a cost-effective digital twin that is specifically designed to facilitate the security evaluation of a specific industrial environment.

  • We propose a method that is based on a constrained optimization problem, specifically, 0–1 non-linear programming, for deriving the configuration of the digital twin model of a specific industrial environment.

  • We demonstrate the application of our proposed method on a simplified thermal power plant architecture.

2 Related Work

In order to conduct penetration testing on ICS networks, the use of a testbed has been proposed. A testbed is an isolated environment that simulates the operation of some real system.

According to a recent survey conducted by Qassim et al. [15], testbed implementation approaches can be categorized as follows:

  • Physical implementation: refers to a physical clone of the components. This approach reflects the industrial environment to the highest degree. However, physical implementation of all of the components of a specific factory is, in most cases, not feasible because of the high costs of such implementation. As a result, the majority of physical testbeds are more generic, aimed at addressing the needs of multiple facilities in the same industry rather than specific facilities.

  • Virtualization/emulation software: eliminates the software’s dependency on the hardware. Virtualization/emulation software enables the establishment of large-scale testbeds while requiring less hardware, thereby reducing implementation costs. This approach enables the testing of software components and protocols, but it does not enable the testing of hardware components. In addition, by eliminating software and hardware dependencies, some of the penetration tests may not provide the same results as tests performed in the real environment.

  • Software simulation: designed to simulate the inputs, outputs, and behavior of real components (e.g., a temperature sensor). This approach can provide large-scale implementation at a low cost; however, it provides very low fidelity. Therefore, the main usage of simulation software is to enable the testing of other components (e.g., to feed a virtual or physical component with simulated inputs/outputs).

To avoid the high costs described above, as well as the maintenance involved in a physical replication testbed, many researchers have chosen to implement their testbeds using simulation, virtualization, or hybrid approaches.

Genge et al. [4] and Lemay et al. [8] presented testbeds for assessing the security of ICS networks. Both works suggested the combination of emulated and simulated components in order to reduce implementation costs. Lemay et al. [8] provided the following methodology for component implementation: the components that are relevant to the test objectives should be emulated; components that directly interface with the emulated components should be implemented as closely as possible to real life; the remaining components can be implemented at any level, and can even be simulated.

Unlike Genge et al. [4] and Lemay et al. [8], Gao et al. [3] and Green et al. [5] suggested the integration of physical devices in their testbeds.

Alves et al. [2] also addressed the discrepancies between different implementation levels. They established physical and virtual gas pipeline testbeds and showed that the testbeds behave differently under a denial of service attack, and behave similarly under a man-in-the-middle attack.

A digital twin is a concept from the product life-cycle management (PLM) domain introduced by Grieves et al. [6]. It is a virtual representation of a specific physical product. The idea behind this concept is that the digital twin should be linked to the physical product throughout the product’s life-cycle and constantly mirror it. By doing so, the digital twin enables the prediction of the future behavior and performance of the real product.

Unlike the previously mentioned works that suggested general testbed architectures, we propose an adaptive method for deriving the configuration of a cost-effective digital twin for a specific industrial environment. The cost-effective digital twin defines the implementation level of the different industrial components (physical implementation, virtualization/emulation software, and simulation software) to allow the evaluation of the desired security tests.

3 Cost-Effective Digital Twin for ICS

In this section, we present an adaptive method for deriving a digital twin specification for a given ICS under strict budget constraints. The proposed method maximizes, within the budgetary limitations, the impact of the digital twin. The impact of a digital twin is evaluated by the number and types of security penetration tests that it supports. On one hand, each test has its own benefit, i.e., security-wise, one test might be more important than another. On the other hand, each test has its own cost. The cost of a test is determined by the costs of the participating components (i.e., the direct cost of implementing them in the digital twin), as well as the test’s execution costs (e.g., a security expert’s time/salary). Note that a component might be required for multiple security tests.

Similar to the creation of testbeds, we consider three types of implementation levels for each element: physical, virtualization/emulation, and software simulation. The output of the proposed method specifies the digital twin configuration, i.e., which components of the ICS should be implemented and at which implementation level.

Our proposed method models the problem of deriving a cost-effective digital twin as a 0–1 non-linear programming problem. Such a problem optimizes a non-linear target function (e.g., the overall benefit of the tests supported by the digital twin), subject to multiple related constraints (e.g., budget limits).

3.1 Notations

In order to formally describe the problem and the method’s inputs, we define the following notations.

General ICS Environment Information. The set of possible ICS components is denoted by

$$\begin{aligned} C = \{c_1,\ldots ,c_{n_C}\}. \end{aligned}$$

For example, \(C = \{PLC,EWS,Historian,PC,\ldots \}\).

We also define the following subsets of C:

  • \(N \subset C\) - ICS components communicating over IP

  • \(M \subset N\) - ICS components running modern operating systems (e.g., desktops, Web servers, HMI, EWS, Historian)

  • \(NC \subset N\) - Network components (e.g., router, switch, and firewall)

  • \(F \subset C\) - Field devices (e.g., generator and boiler)

  • \(D \subset N\) - ICS components which are part of the direct control layer (e.g., RTU and PLC)

  • \(S \subset M\) - ICS components which are part of the supervision layer (e.g., HMI, EWS, and Historian)

General Test Specification. The set of all possible tests is denoted by

$$\begin{aligned} T = \{t_1 \ldots t_{n_T}\} \end{aligned}$$

A list of possible tests for the penetration testing of electric utilities based on the NESCOR methodology [16] is presented in Appendix C.

We denote the execution of test \(t_i\) on component \(c_j\) by \(t_i(c_j)\). For each test \(t_i \in T\) we specify three types of prerequisites in order to be able to execute \(t_i\) on \(c_j\): device implementation requirements (DIR), environment implementation requirements (EIR), and prerequisite test (PT).

  • \(DIR(t_i,c_j)\), \(t_i \in T \wedge c_j \in C\) - denotes the minimal implementation level of a tested component \(c_j\), which enables the execution of test \(t_i\) in the digital twin. For example, disassembling an embedded device (test \(t_{4.1.1}\) in Appendix C) cannot be performed on either an emulated or simulated device, thus a physical implementation of the component in the digital twin is essential for executing this test. The formal representation of this requirement is as follows (p stands for physical):

    $$\begin{aligned} DIR(t_{4.1.1},f \in F) = f^p \end{aligned}$$
  • \(EIR(t_i,c_j)\), \(t_i \in T \wedge c_j \in C\) - denotes the minimal implementation level of components that communicate with \(c_j\) and are required for executing \(t_i\). For example, in order to perform functional analysis (test \(t_{4.2.1}\) in Appendix C) on \(d \in D\), such as a PLC, one must emulate the components that communicate with the PLC from the direct control group, such as other PLCs (denoted by \(D_d\)), and from the supervisory control group, such as HMI (denoted by \(S_d\)). In addition, there is a need to simulate field devices that communicate with the PLC (denoted by \(F_d\)). The formal representation of these requirements is as follows (e stands for emulation, and s stands for simulation):

    $$\begin{aligned} EIR(t_{4.2.1},d \in D) = \{D_d^e,S_d^e,F_d^s\} \end{aligned}$$
  • \(PT(t_i,c_j)\), \(t_i \in T \wedge c_j \in C\) - represents the dependencies between tests; for example,

    $$\begin{aligned} PT(t_{4.2.5},f \in F) = \{t_{4.2.3},t_{4.2.2},t_{4.2.1}\} \end{aligned}$$

    indicates that tests \(t_{4.2.3}\), \(t_{4.2.2}\), \(t_{4.2.1}\) should be executed first in order to execute test \(t_{4.2.5}\) on f.

Using these three types of requirements, we define the set of test dependencies (TD) for executing test \(t_i\in T\) on a component \(c_j\in C\), as follows:

$$\begin{aligned} TD(t_i,c_j) = {<}DIR(t_i,c_j),EIR(t_i,c_j),PT(t_i,c_j){>} \end{aligned}$$
  • Example I: Device disassembly. In order to enable the disassembling of a field device \(f \in F\) (\(t_{4.1.1}\)), the digital twin model must physically implement f. Thus, the test dependencies for device disassembly of field devices \(f \in F\) are as follows:

    $$\begin{aligned} TD(t_{4.1.1},f \in F) = {<}f^p,\emptyset ,\emptyset {>} \end{aligned}$$
  • Example II: Endpoint fuzzing. Endpoint fuzzing (\(t_{4.2.5}\)) is a pen-testing activity that could be executed on an emulated or physical device. However, it is not possible to perform fuzzing without understanding the tested interface and without capturing and analyzing the communication with the interface. For these reasons, capture analysis (\(t_{4.2.3}\)), communication capture (\(t_{4.2.2}\)), and interface functional analysis (\(t_{4.2.1}\)) are prerequisite tests for endpoint fuzzing. In addition, in order to perform this test the digital twin must also emulate the direct control devices which communicate with the tested device. Thus, the test dependencies for fuzzing a field device \(f \in F\) are as follows:

    $$\begin{aligned} TD(t_{4.2.5},f \in F) = {<}f^e,\{D_f^e\},\{t_{4.2.3},t_{4.2.2},t_{4.2.1}\}{>} \end{aligned}$$
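To make the notation concrete, the two examples above can be encoded as simple records; the following Python sketch is purely illustrative (the class and identifier names are our own choices, not part of the paper or the NESCOR methodology), and maps the paper's "e" (emulation) superscript to the virtualization level v of \(I = \{p,v,s\}\).

```python
# Illustrative sketch (not from the paper): encoding the test dependencies
# TD(t, c) = <DIR, EIR, PT> of Examples I and II as plain Python records.
from dataclasses import dataclass, field

SIM, VIRT, PHYS = "s", "v", "p"   # implementation levels, weakest to strongest

@dataclass
class TestDependencies:
    dir_level: str                            # DIR: minimal level of the tested component
    eir: dict = field(default_factory=dict)   # EIR: component subset -> minimal level of neighbors
    pt: frozenset = frozenset()               # PT: prerequisite tests

# TD is indexed by (test, component type); here the component type is a field device (F).
TD = {
    # Example I: device disassembly requires a physical replica and nothing else.
    ("t_4.1.1", "F"): TestDependencies(dir_level=PHYS),
    # Example II: endpoint fuzzing requires at least an emulated/virtualized device,
    # emulated direct-control neighbors (D_f), and three prerequisite tests.
    ("t_4.2.5", "F"): TestDependencies(
        dir_level=VIRT,
        eir={"D": VIRT},
        pt=frozenset({"t_4.2.3", "t_4.2.2", "t_4.2.1"}),
    ),
}
```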

Specific ICS Environment Information. The specific ICS environment (for which we would like to derive the digital twin definition) is denoted as follows:

  • \(E = \{e_1,\ldots ,e_{n_E}\}\) - the set of elements in a specific ICS environment, e.g., \(e_i\) is a specific PLC in the ICS.

  • \(Communication = \{{<}e_i,e_j{>}~|~e_i,e_j \in E\} \) - the set of links between elements in the specific ICS environment, as was observed in the ICS’s network, e.g., \({<}e_i,e_j{>}\) indicates that a communication was observed between element \(e_i\) and element \(e_j\).

  • \(Topology = {<}E,Communication{>}\) - the topology of the specific ICS, which consists of the set of elements (E) and their communication links (Communication).

  • \(I = \{p,v,s\}\) - the set of possible implementation levels of an element in E where p stands for physical replica, v for virtualization, and s for simulation.

  • \(role:E \rightarrow C\) - a function that maps an element in the specific ICS environment to its type, e.g., \(role(e_1) = PLC\) indicates that element \(e_1\) is an instance of a PLC in the ICS.

In addition, we define the specific environment dependencies (ED) as follows:

$$\begin{aligned} ED(t_i,e_j), t_i \in T \wedge e_j \in E \end{aligned}$$

Unlike the test dependencies (TD), the environment dependencies (ED) are derived for a specific ICS environment; e.g., the expression \(ED(t_m,e_n) = \{{e_2}^v,{e_4}^s,{e_5}^s\}\) indicates that in order to execute test \(t_m\) on the specific element \(e_n\), the digital twin must contain a virtual (or higher) implementation of element \(e_2\) and at least a simulated implementation of elements \(e_4\) and \(e_5\).

According to the proposed method, the main prerequisite for deriving the digital twin is the topology of the specific ICS environment. Typical ICS environments are extremely complex and may change over time; thus, acquiring the environment information is not a trivial task. There are several tools and methods that can be used to collect the required information, including the ICS blueprints, which usually contain the architecture design of the specific ICS environment, as well as passive monitoring tools such as GRASSMARLIN, which are able to extract information from live (or recorded) network traffic (including IP addresses, component operating systems, vendors, and component types).

Costs, Benefits, and Budget.

  • \(cost:E \times I \rightarrow \mathbb {R}\) - a function that maps a specific implementation of an element to its cost, e.g., \(cost(e,p)=650\) indicates that a physical implementation of element e in the digital twin costs $650.

  • \(benefit:T \times E \rightarrow \mathbb {R}\) - a function that defines the benefit of executing a test on an element, e.g., by setting benefit(t,e) to b, the asset owner indicates that the benefit of executing test t on element e is b; a high b value increases the probability that this test will be supported by the digital twin model (by setting benefit(t,e) to \(\infty \), the asset owner can force the algorithm to derive a digital twin which supports this test). The benefit of a test is assigned according to the importance of the test (the significance of the expected findings) and the element being tested.

  • \(Budget \in \mathbb {R}\) - the overall budget assigned to create the digital twin.

3.2 Proposed Method

The proposed method consists of the following three main modules (see Fig. 2 in Appendix E):

The Data Processor is responsible for integrating the general test dependencies (i.e., TD) and the topology of a specific industrial environment (i.e., Topology), in order to derive the list of environment dependencies (i.e., ED) of the specific industrial environment.

The Problem Builder is responsible for translating the information provided for the specific industrial environment (e.g., budget and test dependencies) to a non-linear maximization problem.

The Solver solves the non-linear maximization problem in order to derive the specification of the cost-effective digital twin.

The input to the proposed method includes the following:

ICS Architecture: the specification of the architecture of the industrial environment for which the digital twin is created. The specification includes: the system topology (Topology), i.e., a description of the elements in the system (E) and their communication patterns (Communication); the role of each element (role(e)); the cost of each possible implementation of the elements (cost(e,i)); and the benefit of executing tests on elements (benefit(t,e)).

Budget: (Budget) the overall budget allocated for the creation of the digital twin.

Test Specification: includes the set of possible tests \(T = \{t_1 \ldots t_{n_T}\}\) and the set of test dependencies \(TD(t_i,c_j), t_i \in T \wedge c_j \in C\).

3.3 Data Processor

The Data Processor derives the set of environment dependencies (ED) by analyzing the following inputs: (1) a general specification of test dependencies (TD); (2) the specific topology of the industrial environment under test (\(Topology = {{<}E,Communication{>}}\)); and (3) an element in the environment (\(e \in E\)). This is done according to the process presented in Algorithm 1.

Given the inputs, the Data Processor initially adds the appropriate device implementation requirement (DIR) to the environment dependencies (lines 10–12). Then, for each environment implementation requirement \(r \in EIR\), it adds the elements in the ICS that communicate with e and are of the type specified in r (lines 13–16). Finally, it recursively adds the environment dependencies of the prerequisite tests (lines 19–21). The output of the procedure is the set of environment dependencies for executing t on e, which are specific to the particular ICS architecture.
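Because Algorithm 1 appears only as a figure, the following Python sketch is our own reconstruction of the procedure from the description above (function names, the subset table, and other details are assumptions), reusing the TestDependencies records from the earlier sketch.

```python
# Illustrative reconstruction of Algorithm 1 (ours; names and details are assumed).
LEVEL_ORDER = {"s": 0, "v": 1, "p": 2}   # simulation < virtualization < physical

# Maps a component subset name (D, S, F, ...) to the component types it contains.
COMPONENT_SUBSETS = {
    "D": {"PLC", "RTU"},
    "S": {"HMI", "EWS", "Historian"},
    "F": {"Generator", "Boiler"},
}

def _require(ed, element, level):
    # Keep the strongest implementation level requested so far for each element.
    if element not in ed or LEVEL_ORDER[level] > LEVEL_ORDER[ed[element]]:
        ed[element] = level

def derive_ed(t, e, td, neighbors, role, ed=None):
    """Derive ED(t, e) for a specific topology.

    td        : dict (test, component type) -> TestDependencies (earlier sketch)
    neighbors : dict element -> elements it communicates with (from Communication)
    role      : dict element -> component type
    """
    if ed is None:
        ed = {}
    deps = td[(t, role[e])]

    # Lines 10-12: add the device implementation requirement of the tested element.
    _require(ed, e, deps.dir_level)

    # Lines 13-16: for each environment requirement, add the neighbors of e whose
    # role belongs to the required component subset, at the required level.
    for subset, level in deps.eir.items():
        for n in neighbors[e]:
            if role[n] in COMPONENT_SUBSETS[subset]:
                _require(ed, n, level)

    # Lines 19-21: recursively add the dependencies of the prerequisite tests.
    for t_pre in deps.pt:
        derive_ed(t_pre, e, td, neighbors, role, ed)
    return ed
```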

3.4 Problem Builder

The Problem Builder represents the digital twin inference problem as a 0–1 non-linear programming problem. The non-linear integer programming problem focuses on the optimization of a non-linear target function, while satisfying a set of non-linear constraints (that are represented as algebraic equations) [10]. The non-linear integer problem is formally defined as follows:

$$\begin{aligned} \begin{aligned}&\min /\max f(x) \\&s.t.\,\,g_i(x) \le b_i,\; i=1,\ldots ,m; \quad h_j(x) = y_j,\; j=1,\ldots ,k \\&x\in X,\; X\subset \mathbb {Z}^n\ \text {and}\ X\ \text {is a finite set} \end{aligned} \end{aligned}$$

where f(x) is the target function that we wish to maximize (or minimize), and the constraints are represented by \(g_i(x)\) and \(h_j(x)\).

[Algorithm 1: derivation of the environment dependencies (ED) — pseudocode figure]

A 0–1 non-linear programming problem is a special case of the non-linear integer programming problem, in which x can either be 0 or 1. In this section, we describe how we define the target function (f(x)) and the constraints (\(g_i(x)\)), in order to represent the digital twin specification inference problem as a 0–1 non-linear programming problem.

The specification of a given digital twin model is defined by the variables of the 0–1 non-linear programming problem, which are denoted as follows:

$$\begin{aligned} X = {<}x^s_1, x^v_1,x^p_1,\ldots ,x_{n_E}^s,x_{n_E}^v,x_{n_E}^p{>}. \end{aligned}$$
(1)

Each variable indicates whether a specific element e is implemented as i within the digital twin as defined by Eq. 2.

$$\begin{aligned} X = \{x^i_e~|~i \in I,~e \in E\} \end{aligned}$$
(2)

These variables can be equal to 0 (zero) or 1 (one), and thus the first set of constraints is:

$$\begin{aligned} x^i_e \in \{0,1\} \end{aligned}$$
(3)

where \(x^i_e=1\) indicates that element e is implemented in the digital twin as i, and \(x^i_e=0\) indicates that element e is not implemented in the digital twin as i.

Equation 4 presents the implementation constraint, which ensures that an element e is implemented at no more than one level (simulated, virtualized, or physical) or not implemented at all. The number of implementation constraints is equal to the number of elements in the given ICS (i.e., \(n_E\)).

$$\begin{aligned} x^p_e + x^v_e + x^s_e \le 1 \end{aligned}$$
(4)

In order to ensure that the overall cost of the digital twin implementation does not exceed the allocated budget, we define the cost constraint presented in Eq. 5.

$$\begin{aligned} \sum _{e \in E,\, i \in I} x^i_e \cdot cost(e,i) \le Budget \end{aligned}$$
(5)

Each assignment for X defines a possible configuration of the digital twin, where a valid assignment satisfies all of the defined constraints.

Given the above constraints, the target function (defined in Eq. 6) is designed to maximize the impact of the digital twin model.

$$\begin{aligned} \max \bigg ( \sum _{e \in E, t \in T} benefit(t,e) \cdot \prod _{e^i_j \in ED(t,e)} x^i_{e_j}\bigg ) \end{aligned}$$
(6)

where the impact of a given digital twin model (defined by the assignment X) is defined as the sum of the benefit values of all the tests in T that the digital twin supports. As can be seen, a benefit value is added only if all of the dependencies of the corresponding test are satisfied.
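For illustration, the sketch below (ours, under the notation above; the data structures and names are assumptions) checks a candidate assignment X against the constraints of Eqs. 4 and 5 and computes the target function of Eq. 6; Eq. 3 holds by construction since the variables are stored as 0/1 integers.

```python
# Illustrative sketch: validating an assignment X (Eqs. 4-5) and computing Eq. 6.
# X maps (element, level) -> 0/1, for levels in I = {"s", "v", "p"}.
LEVELS = ("s", "v", "p")

def is_valid(X, elements, cost, budget):
    # Eq. 4: each element is implemented at no more than one level.
    if any(sum(X[(e, i)] for i in LEVELS) > 1 for e in elements):
        return False
    # Eq. 5: the total implementation cost must not exceed the budget.
    total_cost = sum(X[(e, i)] * cost(e, i) for e in elements for i in LEVELS)
    return total_cost <= budget

def impact(X, benefit, ed):
    """Eq. 6: sum benefit(t, e) over every (test, element) pair whose environment
    dependencies ED(t, e) are all satisfied by the assignment X."""
    total = 0.0
    for (t, e), deps in ed.items():            # deps: {element: required level}
        if all(X[(d, level)] == 1 for d, level in deps.items()):
            total += benefit(t, e)
    return total
```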

3.5 Solver

A 0–1 non-linear programming problem is NP-hard [13]. In small environments, the solution to this problem can be determined by applying a brute force approach, i.e., for each possible assignment of X, first check whether it satisfies all of the constraints; if all of the constraints are satisfied, compute the value of the target function; finally, select a valid assignment that provides the maximal value.
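A minimal sketch of this enumeration, reusing the helper functions from the previous sketch, is shown below; it is illustrative only, and assigns each element one of four states (not implemented, simulated, virtualized, or physical) so that Eq. 4 holds by construction.

```python
# Illustrative brute force solver sketch (ours); exponential, viable only for small ICSs.
from itertools import product

def brute_force(elements, cost, budget, benefit, ed):
    best_X, best_value = None, float("-inf")
    for choice in product((None, "s", "v", "p"), repeat=len(elements)):
        # Build the 0-1 variables x_e^i for this candidate digital twin.
        X = {(e, i): int(choice[k] == i)
             for k, e in enumerate(elements) for i in LEVELS}
        if not is_valid(X, elements, cost, budget):
            continue
        value = impact(X, benefit, ed)
        if value > best_value:
            best_X, best_value = X, value
    return best_X, best_value
```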

The time required for the brute force approach to provide the optimal result is significant, as it grows exponentially with the number of components. Given n components, m security tests, and three implementation levels (physical, virtualized, simulated), the time complexity of the brute force algorithm is as follows:

$$\begin{aligned} O ( 3^n \cdot nm \cdot n) \end{aligned}$$
(7)

where \(3^n\) represents all of the possible implementations of the components, nm represents the maximum number of tests per component, and n accounts for the cost calculation per implementation state. The exponential time complexity makes the brute force algorithm unsuitable for large ICS environments (more than 20 components). For example, executing the brute force approach on the simple ICS environment presented in Fig. 3 in Appendix E, which consists of 14 components, takes three minutes on a standard personal computer. In future work, we plan to develop and evaluate different heuristics which are on average sub-exponential (but may not provide the best setup for the digital twin).

4 Demonstration

In this section, we demonstrate the application of the proposed method on a simplified ICS environment of a thermal power station with one boiler and two generators.

4.1 Description of the Tested ICS Environment

The simplified environment (illustrated in Appendix E, Fig. 3) consists of an enterprise network, a supervision layer, a direct control layer, and field devices. The enterprise network contains an IT client and an IT server, which are connected to the supervision layer through a firewall that filters improper packets. The supervision layer consists of the following components, which monitor and control the direct control components:

  • Historian. Responsible for logging all events occurring during the process. To do so, the historian periodically queries the PLCs for their states (via Modbus/TCP in the case of PLC-1, or S7comm in the case of PLC-2).

  • Human machine interface (HMI). Provides a human-friendly interface for interacting with the field devices. In order to report the field devices’ states and alarms to the operator, the HMI periodically queries the PLCs, as the Historian does (via Modbus/TCP in the case of PLC-1, or S7comm in the case of PLC-2). Moreover, the HMI enables the operator to remotely change field devices’ parameters.

  • Engineering Work Station (EWS). Enables the operator to change the PLCs’ configurations and logic. The EWS has all of the required programming and configuration software installed. It communicates with the PLCs and HMI through the S7comm protocol when such updates occur.

The supervision layer’s components are connected to the direct control devices through a switch. The direct control components include:

  • Two Siemens S7-300 PLCs. These components directly control the field devices. PLC-1 controls both the boiler (BLR) and one of the generators (GEN-1). It can turn the boiler’s heater on or off, change the generator’s rotation speed, and start or stop its operation. PLC-2 controls only GEN-2 and can perform the latter two actions. The PLCs are connected to the supervision layer via the switch (SW-2) and communicate with each other via the S7comm protocol.

  • Remote Terminal Unit (RTU). This component is connected directly to the PLCs and enables the operator to manually change the field devices’ parameters and present their current states and alarms.

The field devices include the components that physically perform the process. This simplified environment contains two generators (GEN-1 and GEN-2) and one boiler (BLR).
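Under the notation of Sect. 3.1, this simplified environment could be encoded roughly as follows (an illustrative sketch; the element identifiers are ours and summarize only the components named above, so the full figure in Appendix E may contain additional network elements).

```python
# Illustrative encoding of the simplified thermal power station (element names are ours).
ROLE = {
    "IT-CLIENT": "PC", "IT-SERVER": "Server", "FW-1": "Firewall",
    "HIST-1": "Historian", "HMI-1": "HMI", "EWS-1": "EWS",
    "SW-2": "Switch", "RTU-1": "RTU",
    "PLC-1": "PLC", "PLC-2": "PLC",
    "GEN-1": "Generator", "GEN-2": "Generator", "BLR": "Boiler",
}
E = set(ROLE)

# Observed communication links <e_i, e_j> (each link listed once for brevity).
COMMUNICATION = {
    ("IT-CLIENT", "FW-1"), ("IT-SERVER", "FW-1"), ("FW-1", "SW-2"),
    ("HIST-1", "SW-2"), ("HMI-1", "SW-2"), ("EWS-1", "SW-2"),
    ("SW-2", "PLC-1"), ("SW-2", "PLC-2"), ("PLC-1", "PLC-2"),
    ("RTU-1", "PLC-1"), ("RTU-1", "PLC-2"),
    ("PLC-1", "BLR"), ("PLC-1", "GEN-1"), ("PLC-2", "GEN-2"),
}

TOPOLOGY = (E, COMMUNICATION)
```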

4.2 Security Test Specifications

For the demonstration, we followed the pen-testing methodology presented by the National Electric Sector Cybersecurity Organization Resource (NESCOR) [16]. This methodology provides guidelines for executing penetration tests on smart grid systems. Although the NESCOR methodology is specifically designed for smart grid systems such as advanced metering infrastructure (AMI), wide-area monitoring, protection and control (WAMPAC), and home area network (HAN), it provides an extensive list of pen-testing activities that can be applied on other types of ICSs.

The various testing activities presented in their methodology are classified into four categories: embedded device penetration tasks, which address the physical attack vector against field devices; network communication penetration tasks, which address the exploitation of devices through network protocol manipulation; server application penetration tasks, which address testing applications that are running on the control servers; and, server operating system penetration tasks, which address testing of the operating system of the control servers.

Execution of the pen-tests presented in the NESCOR methodology on a digital twin in which not all of the components are physically implemented is not trivial, because, as described in Sect. 3.1, the execution of some activities in a digital twin may depend on a specific set of requirements (denoted by DIR, EIR, and PT).

We thoroughly analyzed more than 80 penetration tests presented in the NESCOR methodology and defined the three types of requirements for each test. The complete set of tests and requirements is summarized in Appendix C. For our demonstration, we selected the following six tests: Device Disassembly (4.1.1), Interface Functional Analysis (4.2.1), Communication Capture (5.2.1), Fuzzing (5.2.4), Application Fingerprinting (7.1.1), and Application Functional Analysis (7.1.2).

4.3 Implementation Cost Description

In the proposed method we considered three types of implementation levels: physical, virtual, and simulation.

Obviously, not all of the components can be implemented at every implementation level, and some physical devices may not have an emulated/virtual version. In addition, the pricing of the different implementation levels varies across vendors. For example, a physical SIEMENS PLC can cost from hundreds of dollars to thousands, with an average cost of about $2,500 for the S7-300 models; a license for the S7-PLCSIM software, which can be used to emulate a SIEMENS PLC or HMI, costs $700; and using third-party tools to simulate a PLC can be less expensive (e.g., awlsim is free of charge, with costs incurred only for the setup time).

For simplicity, in our demonstration we assumed that a physical implementation of a device has the highest cost and a simulation-based implementation the lowest. Specifically, as presented in Appendix B, a physical implementation is ten times more expensive than virtualization, which in turn is three times more expensive than simulation. In addition, an equal benefit was assumed for all tests, i.e., \(benefit(t,e) = 1\ \forall t \in T \wedge e \in E\). It should be noted that these assumptions do not affect the construction of the problem or its solution using 0–1 non-linear methods; therefore, we believe that these assumptions are plausible.
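For illustration, these assumptions could be expressed as follows (a sketch with a hypothetical base price; the actual per-element costs used in the demonstration are those listed in Appendix B).

```python
# Sketch of the demonstration's cost and benefit assumptions (ours). BASE_SIM is
# a hypothetical placeholder, not a figure taken from the paper.
BASE_SIM = 100  # hypothetical base cost of a simulated element

def cost(e, i):
    # Simulation is cheapest; virtualization costs 3x simulation;
    # physical implementation costs 10x virtualization (30x simulation).
    return {"s": BASE_SIM, "v": 3 * BASE_SIM, "p": 30 * BASE_SIM}[i]

def benefit(t, e):
    # Equal benefit is assumed for every (test, element) pair.
    return 1
```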

4.4 Results

The creation of the cost-effective digital twin model for the specific ICS environment starts with processing the generic test specification (TD) and the specific topology inputs (\(Topology = {{<}E,Communication{>}}\)). This is done by applying the data processing algorithm presented in Algorithm 1 on each combination of element \(e \in E\) and test \(t \in T\). The output of this algorithm consists of 42 different tests (presented in Appendix D), each of which includes a set of environment dependencies (ED). The environment dependencies are specific to the ICS environment described in Sect. 4.1.

Next, given the specific budgetary limitations, we apply the Problem Builder module and create the 0–1 non-linear programming problem (a formal representation of the problem is presented in Appendix A). We implemented a naive brute force algorithm to find the optimal configuration for a given budgetary limitation.

We conducted an experiment in which we derived the configuration of a digital twin model for different budgetary limitation values, while considering all of the tests presented in Appendix D (a total of 42 tests).

The results of this experiment are presented in Fig. 1. As expected, the higher the available budget, the greater the impact of the digital twin.

In this figure, it can also be seen that when \(Budget=\$3700\), all of the elements are implemented as virtual devices; in this case, the digital twin model supports 36 of the 42 possible tests. The remaining tests require the physical implementation of various elements; supporting all of the tests requires a budget of $23,500 (while the total cost of the industrial system is $40,000).

The results show a logarithmic increase in the benefit (impact) as the available budget increases.

Fig. 1. The trade-off between the budget and the impact of the digital twin computed for the simple thermal power station.

5 Conclusions and Future Work

We present a method for deriving the specification of a digital twin of an ICS for the purpose of security analysis. The resulting specification is a cost-effective representation of the ICS under test that provides the high fidelity required for executing a given set of security tests. The method is designed as a three-step process. First, the Data Processor derives the ICS’s environment dependencies from its topology and the tests’ dependencies. Then, the Problem Builder uses the ICS’s architecture, the tests’ dependencies, and the budgetary limitations to create a 0–1 non-linear programming problem representation. Finally, the Solver applies a search algorithm to find the best solution to the problem, i.e., the digital twin specification with the highest impact and an affordable cost (i.e., an implementation cost that does not exceed the specified budget). To demonstrate the application of the proposed method, we used a simplified structure of a thermal power station and the NESCOR pen-testing methodology to define the tests and their requirements.

In future work, we plan to evaluate the method on more realistic environments from a diverse range of industries and to propose a heuristic algorithm for finding a near-optimal solution (digital twin setup) with sub-exponential time complexity. In addition, we plan to extend the solution to support different pricing strategies for the various implementations, such as software bundles with contribution margin-based pricing. We also plan to (1) add new types of constraints, e.g., constraints that take into account the physical space available for implementing the digital twin (e.g., a small room or an open space); (2) consider implementing multiple elements as virtualized or simulated instances on the same machine; and (3) handle identical setups in an industrial environment (e.g., if two similar production lines are implemented, there is no need to test both of them). Finally, a general knowledge base of possible tests and their test dependencies should be researched and established.