1 Introduction

Cloud computing, a computing paradigm driven by economies of scale, has become a hot research topic in both academia and industry. Many companies, such as Google, IBM, and Microsoft, have already developed their own commercial cloud platforms. However, building a good cloud computing platform still poses many challenges. For example, heavy energy consumption is one of the most serious problems in a cloud computing center. According to a New York Times report, Google recently revealed that its data centers continuously draw almost 260 million watts, about a quarter of the output of a nuclear power plant, to run Google searches, YouTube views, Gmail messaging, and display ads on all those services [1]. The main reasons for the heavy energy consumption of a cloud computing center are as follows. (1) Cloud servers consume 80 % of their peak power even at 20 % utilization [2], and a cloud server is often idle, a state in which energy is still consumed. (2) Different servers consume different amounts of energy to process the same task. For example, a graphics processing task can be processed either by a server equipped with a graphics processing unit (GPU) or by a server equipped only with a general-purpose CPU, and the energy consumption differs greatly between the two. When cloud tasks are scheduled onto cloud servers, mismatched scheduling can therefore occur, that is, a task that could be completed with little energy is in fact completed with much more. Such mismatched scheduling is one of the main causes of heavy energy consumption. The energy problem has greatly limited both the capability of cloud computing services and the growth in scale of cloud computing systems. Therefore, energy optimization for cloud computing systems has become an active research field [3].

To address the energy problem of computer systems, researchers have traditionally developed dynamic voltage scaling (DVS), sleep/shutdown techniques, storage optimization [4], and, in particular, energy-aware task scheduling. For example, Kim et al. [5] presented an energy-aware bag-of-tasks scheduling algorithm based on DVS, with a deadline constraint, for a cluster of homogeneous computing nodes; the algorithm selects the supply voltage of each computing node so as to optimize the energy consumption of the overall system. Veeravalli and co-workers [6] proposed two heuristic energy-aware scheduling algorithms, EGMS and EGMSIV, which use DVS in embedded multi-processor systems. Zomaya and co-workers [7] presented the algorithms ECS and ECS+idle, which schedule parallel tasks under constraint conditions with a balance factor that trades off parallel task execution time against energy in a multi-processor system; their objective function balances the minimization of execution time and energy, and also balances processor execution against idle periods. Zeng and co-workers [8] proposed a task scheduling strategy with high service utilization and low execution energy, which controls idle energy and “luxury” energy and can thus greatly reduce the energy overhead of a cloud computing system. Recently, reports have appeared in the literature that use model checking and automata to analyze the energy consumption optimization problem. Rasmussen et al. [9] proposed using priced timed automata to solve the energy-optimal scheduling problem on heterogeneous multiprocessors, but the approach is only suitable for a single bus and a small number of processors. Lungu et al. [10] proposed using probabilistic model checking to verify the safety and effectiveness of dynamic power management designs for multi-core processors, but they did not consider the relationship between energy and time constraints. Nocco and Quer [11] proposed a SAT-based solution to the parallel task scheduling problem that can also be applied to energy-aware task scheduling, provided the numbers of tasks and processors are small.

Clearly, traditional processor-oriented task scheduling methods can partially solve the energy optimization problem in various computing environments, but most of them cannot be applied directly to optimize energy consumption in a cloud computing system. The reason is that they target homogeneous computing systems [5], embedded systems [6], many-core systems [10], or multi-processor systems [7, 9, 11], all of which differ from a cloud computing system, which consists of heterogeneous computing nodes, each with several energy states. The work in [8] studied a cloud computing system processing independent tasks, which is a special case, but it cannot handle the general case in which the cloud computing system processes dependent tasks. In this paper, in response to the energy waste problem in the cloud computing center, we focus on processing dependent parallel tasks on a cloud platform: we model the cloud computing system as a priced timed automaton and derive an energy optimization solution and the corresponding optimal energy value for the system.

The rest of this paper is organized as follows. Section 2 presents the basic concepts of the priced timed automaton. In Sect. 3 we model a green cloud center with a priced timed automaton. An energy consumption optimization method for a green cloud center based on the priced timed automaton is formally given in Sect. 4. Then, we give the full details of a case study and simulation experiments in Sect. 5. Section 6 concludes the paper with some summarizing remarks and possible future work.

2 Basic concepts of the priced timed automaton

Automata are generally used to describe the behaviors of dynamic systems, but recently it has also been found that they can be used to study parallel task scheduling. Inspired by this, we use a priced timed automaton to optimize energy consumption in the cloud computing center. We first briefly introduce the basic concepts of automata as follows.

Definition 1

(Automaton) An automaton is a 5-tuple \((S, B, E, Act, F)\), where: \(S\) is a finite set of states; \(B \subseteq S\) is the set of initial states; \(E \subseteq S \times S\) is a directed edge set \(\{e_i\}\) describing the state transitions, where \(e_i\) denotes the \(i\)th edge; \(Act\) is the set \(\{act_i\}\) of transition conditions on the edges, where \(act_i\) is the condition under which the transition along edge \(e_i\) occurs; and \(F \subseteq S\) is the set of termination states.

Many real-world systems involve time, but a plain automaton cannot describe the time behavior of such systems. Alur and Dill [12] therefore proposed the concept of a timed automaton, defined as follows.

Definition 2

(Timed automaton) A timed automaton is a 6-tuple \((S, B, E, Act, C, F)\), where \(S\), \(B\), \(E\), \(Act\), and \(F\) have the same meanings as in Definition 1; \(C\) is the set of state durations, where \(c_i\) denotes the duration of state \(s_i\) and takes a real value.

When describing real-world systems, it is sometimes necessary to capture cost factors as well, which a timed automaton cannot do. Behrmann et al. [13] therefore proposed the concept of a priced timed automaton, defined as follows.

Definition 3

(Priced timed automaton) A priced timed automaton is a 7-tuple \((S, B, E, Act, P, C, F)\), where \(S\), \(B\), \(E\), \(Act\), \(C\), and \(F\) have the same meanings as in Definition 2; \(P\) is the set of state cost rates, where \(p_i\) denotes the cost rate of state \(s_i\) and takes a real value.
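For concreteness, the 7-tuple can be held in a simple data structure; the sketch below is one possible encoding (our own illustration, not part of the formal definition), in which durations and cost rates are stored per state as in Definitions 2 and 3.

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

State = str

@dataclass
class PricedTimedAutomaton:
    """One possible encoding of the 7-tuple (S, B, E, Act, P, C, F)."""
    S: Set[State]                         # finite set of states
    B: Set[State]                         # initial states, B subset of S
    E: Set[Tuple[State, State]]           # directed edges, E subset of S x S
    Act: Dict[Tuple[State, State], str]   # transition condition of each edge
    P: Dict[State, float]                 # cost rate p_i of each state
    C: Dict[State, float]                 # duration c_i of each state
    F: Set[State]                         # termination states, F subset of S
```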

To illustrate the priced timed automaton and clarify its meaning and purpose, we give an example.

Example 1

A simple priced timed automaton model. A traveler drives a car from city A to city B, and there are two routes to choose from: route 1 takes three days at an average cost of $600 per day; route 2 takes five days at an average cost of $300 per day. Figure 1 shows the priced timed automaton model that represents all travel routes with their times and costs.

Fig. 1 A tour priced timed automaton
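Reading the example as a priced timed automaton, each route is a state whose cost rate is the daily expense and whose duration is the number of days, so the total cost of a route is duration times cost rate. A minimal sketch with the figures from Example 1:

```python
# Example 1 as data: (duration in days, cost rate in $/day) per route state.
routes = {
    "route 1": (3, 600),   # three days at $600 per day
    "route 2": (5, 300),   # five days at $300 per day
}

# Cost of staying in a state = duration * cost rate (priced-timed-automaton semantics).
total_cost = {name: days * rate for name, (days, rate) in routes.items()}
print(total_cost)                                       # {'route 1': 1800, 'route 2': 1500}
print("cheaper:", min(total_cost, key=total_cost.get))  # route 2
```

Route 2 is therefore cheaper ($1500 versus $1800) even though it takes two more days, which is exactly the kind of time/cost trade-off the priced timed automaton captures.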

3 Modeling green cloud computing with a priced timed automaton

In a cloud computing center, a great number of computing nodes process stochastically arriving tasks, which makes the situation difficult to model. To make full use of the priced timed automaton in analyzing the energy consumption of a green cloud center, we first model the cloud computing system as a priced timed automaton.

3.1 Computing node energy consumption states and their transitions

The main function of cloud computing is to assign large amounts of cloud resources to users on demand. A cloud computing center is a cluster made up of a great number of cloud computing nodes. To study the energy consumption of the whole center, we first study the energy consumption of a single computing node. Without loss of generality, for each cloud computing node we pay special attention to the following eight energy states.

(1) Initial state (init): the startup state of a cloud server. After the power supply is turned on, the server needs startup time, and consumes energy, before it enters the idle (standby) state in which it can accept tasks. To account for the energy consumed from the shutdown state to the next state, we take the startup state as the initial state.

(2) Idle state (idle): a standby state in which the cloud server waits for a task to run. The server keeps the necessary hardware and software ready for task execution, so that it can respond to a cloud task within a very short time after the task arrives, thereby enhancing the performance of the server. After starting from the initial state, the server enters the idle state to wait for a cloud task to execute.

(3) Sleep state (sleep): a low-power standby state. When the time a server spends in the idle state reaches an upper bound \(T_\mathrm{max\_idle}\), the server stops some functions, such as the CPU and memory of the virtual machine, and enters the low-power sleep state in order to save energy. The power consumption in the sleep state is usually about 20 % of that in the idle state.

(4) Execution state (exe): the state in which the cloud server executes a cloud task. When a task to be executed arrives at a server in the sleep or idle state, the server enters the execution state and executes the task.

(5) Migration-out state (migout): while a cloud server is executing a cloud task, the computational load of some task may grow so much that the server becomes overloaded and is no longer appropriate for the task; the virtual machine and its task then migrate from this server to another one. During this process the server is said to be in the migration-out state.

(6) Migration-in state (migin): when the virtual machine of another cloud server and the task it is processing migrate into the local server, which is in the idle or sleep state, the local server enters the migration-in state.

(7) Communication state (comm): the state in which the cloud server processes communication tasks. After a communication task arrives at a server in the sleep or idle state, the server enters the communication state and uses its communication equipment to process the task. This state allows us to account for the energy overhead of communication.

(8) Shutdown state (shut): the state in which the soft power supply of the cloud server is switched off. A server needs to shut down in two cases: first, when it suffers hardware or software failures that cannot be resolved; second, when it has been sleeping for too long.

To illustrate these eight states in practice, an example is given below.

Example 2

An IBM X3650 server state graph. In Fig. 2, in the parentheses next to each state, the left number indicates the duration or the maximum duration of the state, and the right number indicates the power in that state; “–” denotes an unknown time value.

Fig. 2 An IBM X3650 server state graph

Similarly, in the cloud center, each computing node can be represented by a state graph like that in Fig. 2, which describes its running behavior; the nodes differ only in the durations and powers of the states. The state graphs of all computing nodes together constitute the entire running behavior of the cloud system.
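To make the eight states and a few of their transitions concrete, one possible encoding is sketched below; the transition set is only the subset explicitly mentioned in Sect. 3.1, and the durations and powers of a real node would come from measurements such as those in Fig. 2.

```python
from enum import Enum

class NodeState(Enum):
    INIT = "init"       # startup after power on
    IDLE = "idle"       # standby, ready to accept tasks
    SLEEP = "sleep"     # low-power standby (about 20 % of idle power)
    EXE = "exe"         # executing a cloud task
    MIGOUT = "migout"   # migrating a VM and its task out
    MIGIN = "migin"     # receiving a migrated VM and its task
    COMM = "comm"       # processing a communication task
    SHUT = "shut"       # soft power-off

# A subset of the transitions described in Sect. 3.1 (illustrative only; the full
# transition relation of a concrete node is given by its state graph, cf. Fig. 2).
TRANSITIONS = {
    (NodeState.INIT, NodeState.IDLE),    # startup finished
    (NodeState.IDLE, NodeState.SLEEP),   # idle longer than T_max_idle
    (NodeState.IDLE, NodeState.EXE),     # task arrives while idle
    (NodeState.SLEEP, NodeState.EXE),    # task arrives while sleeping
    (NodeState.EXE, NodeState.MIGOUT),   # overload: migrate VM and task out
    (NodeState.IDLE, NodeState.MIGIN),   # VM and task migrate in
    (NodeState.IDLE, NodeState.COMM),    # communication task arrives
    (NodeState.SLEEP, NodeState.SHUT),   # slept too long, or unrecoverable failure
}
```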

3.2 Description of the cloud task

Because the demands of cloud users are diverse, the cloud tasks that fulfill them are of different types. According to the correlations among their subtasks, cloud tasks can be divided into two categories: independent tasks and dependent tasks. An independent task, a special form of cloud task, consists of one or more subtasks with no data dependency among them. A dependent task, the general form of cloud task, consists of multiple subtasks among which dependency relationships exist. Workflows are commonly used to describe network services with dependency relationships, but in this paper we use a directed acyclic graph (DAG) to describe a cloud task whose subtasks have data dependencies; indeed, a DAG is a special form of workflow description.

Definition 4

(Dependent task) A dependent task is described by a DAG \(=(V, E)\), where \(V=\{v_1, v_2, \ldots, v_m\}\) represents the dependent cloud task and each \(v_i \in V\) is one of its subtasks; that is, a dependent cloud task is composed of multiple subtasks. \(E=\{e_{ij} \mid v_i, v_j \in V\} \subseteq V\times V\) represents the data dependencies: \(v_i\) must be executed before \(v_j\), and after \(v_i\) has been executed, its output is used as the input of \(v_j\). The only subtask whose indegree equals 0 is called the entry, and the only subtask whose outdegree equals 0 is called the exit.

Figure 3 shows an example of a dependent task graph in which \(v_1\) must be executed before \(v_2\), \(v_3\) and \(v_4\); after \(v_1\) has been executed, its output is used as the input of \(v_2\), \(v_3\) and \(v_4\). \(v_1\) is the entry and \(v_8\) is the exit.

Fig. 3 A dependent task graph
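A dependent task can be stored as a plain edge list, and the entry and exit of Definition 4 can be recovered from indegrees and outdegrees. The sketch below uses a small hypothetical DAG rather than the one in Fig. 3, whose full edge set is not listed here.

```python
from collections import defaultdict

def entry_and_exit(edges):
    """Return the unique indegree-0 (entry) and outdegree-0 (exit) subtasks of a DAG."""
    indeg, outdeg = defaultdict(int), defaultdict(int)
    nodes = set()
    for u, v in edges:
        nodes.update((u, v))
        outdeg[u] += 1
        indeg[v] += 1
    entries = [n for n in nodes if indeg[n] == 0]
    exits = [n for n in nodes if outdeg[n] == 0]
    assert len(entries) == 1 and len(exits) == 1, "Definition 4 requires a unique entry and exit"
    return entries[0], exits[0]

# A small hypothetical dependent task (not the DAG of Fig. 3): v1 fans out to v2 and v3,
# whose outputs are both consumed by v4.
edges = [("v1", "v2"), ("v1", "v3"), ("v2", "v4"), ("v3", "v4")]
print(entry_and_exit(edges))   # ('v1', 'v4')
```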

3.3 Cloud system priced timed automaton

3.3.1 Running behavior metrics matrix

The purpose of this section is to generate a priced timed automaton for the cloud computing system. The price and duration of each state of the automaton must be identified concretely: the state price is closely related to the power of the cloud server in its various states, and the state duration is closely related to the execution time of the cloud task on the server. To this end, we give two metric matrices of the cloud system's running behavior: the tasks and cloud resources matching time matrix and the cloud server state power matrix.

Definition 5

(Tasks and cloud resources matching time matrix) This matrix describes how quickly a cloud task is executed on different cloud computing nodes. It is denoted \(T_{m\times n}=(t_{ij})_{m\times n}\), where \(t_{ij}\) is the execution time of cloud task \(v_i\) on cloud server \(g_j\), \(1\le i\le m\), \(1\le j\le n\), \(m\) is the total number of subtasks, and \(n\) is the total number of cloud servers.

As shown in Fig. 4, \(t_{ij}\) can be obtained by measuring and summarizing the execution time of task \(v_i\) on cloud server \(g_j\). When \(t_{ij}=\infty\), task \(v_i\) is mismatched with \(g_j\), i.e., \(v_i\) cannot be executed on \(g_j\). When \(t_{ij}\) is a finite real number, task \(v_i\) can be executed on \(g_j\), and the server is then in the exe state. The smaller the value of \(t_{ij}\), the better task \(v_i\) and cloud server \(g_j\) match.

Fig. 4 Tasks and cloud resources matching matrix
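In code, the matching time matrix is simply an m×n array in which ∞ marks a mismatching task/server pair; the sketch below uses made-up timings.

```python
import math

INF = math.inf

# Hypothetical T_{3x3}: T[i][j] = execution time (s) of task v_{i+1} on server g_{j+1};
# INF means the task cannot be executed on that server at all.
T = [
    [12.0,  INF, 30.0],   # v1
    [ INF,  8.0, 11.0],   # v2
    [20.0, 25.0,  INF],   # v3
]

def matches(i, j):
    """A task and a server match iff the execution time is finite."""
    return math.isfinite(T[i][j])

print(matches(0, 1))   # False: v1 cannot execute on g2
print(matches(1, 1))   # True:  v2 runs on g2 in 8 s
```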

The energy consumption of the cloud system refers to the total energy consumption of all cloud servers in their various states. To characterize all cloud servers in their various energy consumption states, we introduce the cloud server state power matrix.

Definition 6

(Cloud server state power matrix) This matrix describes the energy consumption levels of the same cloud server in its different states, namely the power consumption of a single server in each state. It is denoted \(P_{n\times m}=(p_{ij})_{n\times m}\), where \(p_{ij}\) is the power consumption of cloud server \(g_i\) in state \(u_j\), \(1\le i\le n\), \(1\le j\le m\).

As shown in Fig. 5, the value of \(p_{ij}\) can be obtained by measuring the power consumption of cloud server \(g_i\) in state \(u_j\); in fact, \(p_{ij}\) is the “price” of the priced timed automaton. With the cloud server state power matrix in hand, we only need to determine each server's state and state duration while the cloud servers execute all cloud tasks, and we can then calculate the total energy the cloud system consumes in executing them.

As described in Sect. 3.1, we have identified eight cloud server running states. However, to simplify the description and discussion, and without losing the essence of the problem, we consider only three states, idle, exe and shut, as shown in Fig. 6. Typically, a cloud server is first in the idle state, then transfers to the exe state, and after execution transfers to the shut state.

Fig. 5 Cloud server state power matrix

Fig. 6 Cloud server simplified state graph

3.3.2 Generation ideas and rules of the cloud system priced timed automaton

To find the optimal energy consumption of the cloud system, we use a priced timed automaton to describe all running-state behaviors of the system, after which we can search for a trace of minimum energy consumption. Constructing the cloud system priced timed automaton is therefore a critical step. The basic idea is to build the Cartesian product of the application task set \(V\), the cloud server set \(G\) and the cloud server state set \(U\). The product space is the state space \(S\) of the cloud system automaton, i.e., \(S=V\times G\times U\); each ordered 3-tuple is an automaton state, i.e., for \(\forall v_x \in V, \forall g_y \in G, \forall u_z \in U\), \(s=(v_x, g_y, u_z)\in S\). Every pair of states is initially connected by two opposite edges, so \(E=S\times S\). Obviously, as the numbers of application tasks and cloud servers grow, this state space becomes large. We therefore use the simplified state graph of the cloud server, the running behavior metric matrices and the structure of the DAG to remove as many irrational states and edges as possible. After generating the initial state set \(B\) and the termination state set \(F\), generating the transition condition set \(Act\) of the edges, and attaching the price and duration attributes to the states, we finally obtain a rational and relatively compact cloud system priced timed automaton.
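The raw, unreduced state space is just the Cartesian product V × G × U, which is easy to enumerate but grows multiplicatively; the sketch below (with small hypothetical sets) shows the construction before any reduction rule is applied.

```python
from itertools import product

V = ["v1", "v2", "v3"]            # subtasks of the dependent task
G = ["g1", "g2"]                  # cloud servers
U = ["idle", "exe", "shut"]       # simplified server states (Fig. 6)

# Raw automaton state space S = V x G x U before any reduction rule is applied.
S = [(v, g, u) for v, g, u in product(V, G, U)]
print(len(S))          # 3 * 2 * 3 = 18 states

# Before reduction, every ordered pair of distinct states is a candidate edge.
E = [(s1, s2) for s1 in S for s2 in S if s1 != s2]
print(len(E))          # 18 * 17 = 306 candidate edges
```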

Reduction Rule 1: Using the simplified cloud server state graph, the rules for removing irrational edges are as follows:

For the same task \(v_x \in V\) executing on the same cloud server \(g_y \in G\), retain the initial edge from state (\(v_x, g_y, idle\)) to state (\(v_x, g_y, exe\)) and the initial edge from state (\(v_x, g_y, exe\)) to state (\(v_x, g_y, shut\)). Delete the other initial edges of states (\(v_x, g_y, idle\)) and (\(v_x, g_y, exe\)); at the same time, delete the other initial and terminal edges of state (\(v_x, g_y, exe\)) and the other terminal edges of state (\(v_x, g_y, shut\)).

Reduction Rule 2: Using the tasks and cloud resources matching time matrix \(T_{m\times n}=(t_{ij})_{m\times n}\), the rules for removing irrational states and edges are as follows:

It is irrational for a cloud task to execute on a cloud server for too long. We therefore set a threshold on the execution time of each cloud task over the cloud servers; executions beyond the threshold are considered irrational. For any cloud task \(v_x \in V\), all execution time values \(t_{xy}\ (1\le y\le n)\) of the \(x\)th row of matrix \(T_{m\times n}\) are sorted in ascending order. We then take the element at the middle position \(y'=\left\lceil \frac{n}{2}\right\rceil\), namely the execution time \(t_{xy'}\), as the threshold value \(t_0\). Finally, we delete all states of the form (\(v_x, g_y, u\)) whose \(t_{xy}\) satisfies \(t_{xy}\ge t_0\ (1\le y\le n)\), together with the edges connected to those states.
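Rule 2 amounts to a per-task median cut over one row of the matching time matrix; a sketch, using the ceiling-of-n/2 position described above and a hypothetical row, is given below.

```python
import math

INF = math.inf

def keep_servers_for_task(times):
    """Rule 2: keep only the servers whose execution time is below the per-task
    threshold t0, the ceil(n/2)-th smallest time of this task's row."""
    n = len(times)
    t0 = sorted(times)[math.ceil(n / 2) - 1]       # 1-indexed middle position
    return [j for j, t in enumerate(times) if t < t0]

# Hypothetical row of T for one task over n = 4 servers.
row = [10.0, 40.0, INF, 25.0]
print(keep_servers_for_task(row))   # [0]: only the first server survives (t0 = 25.0)
```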

Reduction Rule 3: Using the structure of the dependent task DAG and the cloud server state power matrix, the rules for removing irrational states and edges are as follows:

  1. After a task has been completed, its predecessor tasks cannot be executed again. Thus, for a task and its predecessors, delete all edges from the states corresponding to the task to the states corresponding to its predecessor tasks.

  2. When a task has been finished, the cloud system goes on to execute its direct successor tasks, not its non-direct successors. Therefore, delete the edges from the states corresponding to the task to the states corresponding to its non-direct successors.

  3. For multiple tasks that can be executed in parallel, their energy consumptions simply add, regardless of order, because each task runs on a different cloud server. Therefore, for parallel tasks \(v_i, v_j\), retain the edges from the shut state of task \(v_i\), executing on a lower-power cloud server, to the idle state of task \(v_j\), executing on a higher-power cloud server, and delete all remaining edges from the states corresponding to task \(v_j\) to the states corresponding to task \(v_i\).

For example, assume tasks \(v_2\), \(v_3\), \(v_4\) can be executed in parallel on cloud servers \(g_2\), \(g_3\), \(g_4\), respectively, with \(p_{2,2}<p_{3,2}<p_{4,2}\). The edge reduction of the corresponding states of the three parallel tasks is shown in Fig. 7: Fig. 7a shows the states and edges before reduction, and Fig. 7b shows them after reduction.

Reduction Rule 4: Using the natural and rational transitions of a cloud server among its states, the rules for removing edges are as follows:

  1. For any state \((v_x, g_y, shut)\in S\), retain the initial edges from this state to states of the form \((v_x^\prime, g_y^\prime, idle)\in S\), and delete its other initial edges.

  2. For any state \((v_x, g_y, idle)\in S\), retain only the terminal edges coming from states of the form \((v_x^\prime, g_y^\prime, shut)\in S\), and delete its other terminal edges.

  3. For any state \(s_1=(v_x, g_y, u_z)\in S\), if there is a state \(s_2=(v_x, g_y^\prime, u_z^\prime)\in S\) with \(g_y\ne g_y^\prime\), delete all edges between \(s_1\) and \(s_2\), because we do not allow a task to run on two computing nodes.

Fig. 7 The edge reduction of the corresponding states of three parallel tasks

Initial State Rule 5: The generation rules of the initial state set \(B\) are as follows:

For any state \(s=(v_x, g_y, u_z)\in S\), if \(v_x\) is the entry of the dependent task DAG and \(u_z = idle\), then \(s\) is an element of the initial state set \(B\).

Termination State Rule 6: The generation rules of the termination state set \(F\) are as follows:

For any state \(s=(v_x, g_y, u_z)\in S\), if \(v_x\) is the exit of the dependent task DAG and \(u_z = shut\), then \(s\) is an element of the termination state set \(F\).
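Rules 5 and 6 select B and F directly from the product space once the entry and exit of the DAG are known; the following sketch (with hypothetical state names) shows the two filters.

```python
# Hypothetical reduced state space: (subtask, server, server-state) triples.
S = [
    ("v1", "g1", "idle"), ("v1", "g1", "exe"), ("v1", "g1", "shut"),
    ("v4", "g3", "idle"), ("v4", "g3", "exe"), ("v4", "g3", "shut"),
]
entry, exit_ = "v1", "v4"          # entry and exit subtasks of the dependent task DAG

# Rule 5: initial states pair the DAG entry with the idle server state.
B = [s for s in S if s[0] == entry and s[2] == "idle"]
# Rule 6: termination states pair the DAG exit with the shut server state.
F = [s for s in S if s[0] == exit_ and s[2] == "shut"]

print(B)   # [('v1', 'g1', 'idle')]
print(F)   # [('v4', 'g3', 'shut')]
```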

Transition Rule 7: An automaton transfers from one state to another. The transition conditions of the edges are generated as follows:

  1. \(\forall s\in S\), let the prerequisite task set of the task corresponding to \(s\) be \(A(s)\); for convenience we call \(A(s)\) the condition set of state \(s\). Obviously, for an initial state \(s\in B\), \(A(s)=\phi\).

  2. For a state \(s^\prime \in S-B\), assume \(s^\prime\) has only one parent state \(s=(v_x, g_y, u_z)\in S\) and that the condition set \(A(s)\) of \(s\) is known. Then, for the state \(s^\prime\), which has only the terminal edge \((s, s^\prime)\), the transition condition of this edge is \(act((s, s^\prime))=A(s)+\{v_x\}\), and the condition set of \(s^\prime\) is \(A(s^\prime)=act((s, s^\prime))\).

  3. For a state \(s^\prime=(v_x^\prime, g_y^\prime, idle)\in S-B\), \(s^\prime\) may have more than one parent state; assume the condition sets of its parent states \(s_1, s_2, \ldots, s_n\ (n\ge 1)\) are \(A(s_1), A(s_2), \ldots, A(s_n)\) and are known. There exists a state \(s=(v_x, g_y, u_z)\in \{s_1, s_2, \ldots, s_n\}\) such that for any state \(s^{\prime\prime}\in \{s_1, s_2, \ldots, s_n\}\) we have \(A(s^{\prime\prime})\subseteq A(s)\). The transition condition of every terminal edge of state \(s^\prime\) is \(act((s_1, s^\prime))=act((s_2, s^\prime))=\cdots=act((s_n, s^\prime))=A(s)+\{v_x\}\), and the condition set of \(s^\prime\) is \(A(s^\prime)=A(s)+\{v_x\}\).

Price Rule 8: Using the cloud server state power matrix \(P_{n\times m}=(p_{ij})_{n\times m}\), the price of each state is generated as follows:

The price of a cloud automaton state \(s=(v_x, g_y, u_z)\in S\), i.e., its power, is \(p(s)=p_{yz}\). If \(u_z = shut\), then \(p(s)=p_{yz}=0\).
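In code, Rule 8 reduces to a matrix lookup with a special case for the shut state; the sketch below assumes the power matrix is laid out as in Definition 6 and uses hypothetical values.

```python
def state_price(P, y, z, state_names):
    """Rule 8: the price of state s = (v_x, g_y, u_z) is p_{yz}, and 0 if u_z is 'shut'."""
    return 0.0 if state_names[z] == "shut" else P[y][z]

# Hypothetical power matrix (rows: servers g1, g2; columns: states idle, exe, shut), in watts.
P = [[80.0, 150.0, 0.0],
     [60.0, 120.0, 0.0]]
states = ["idle", "exe", "shut"]
print(state_price(P, 0, 1, states))   # 150.0 W: g1 in the exe state
print(state_price(P, 1, 2, states))   # 0.0 W:  g2 in the shut state
```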

Time Rule 9: The duration of each state is generated by the following rules:

  1. \(\forall s\in B\), the duration of the state \(s\) is \(c(s)=0\).

  2. \(\forall s\in S\), let the entry time of state \(s\) be \(t_1(s)\) and the exit time of state \(s\) be \(t_2(s)\). Clearly, for a state \(s\in B\), \(t_1(s)=t_2(s)=0\).

  3. For a state \(s^\prime=(v_x^\prime, g_y^\prime, u_z^\prime)\in S-B\), assume that \(s^\prime\) has only one parent state \(s\in S\) and that the exit time \(t_2(s)\) of \(s\) is known. We calculate the entry time \(t_1(s^\prime)\), the exit time \(t_2(s^\prime)\) and the duration of state \(s^\prime\) as follows:

     a) If \(u_z^\prime=idle\), then \(t_1(s^\prime)=t_2(s)\), the duration of \(s^\prime\) is \(c(s^\prime)=t_1(s^\prime)\), and \(t_2(s^\prime)=t_1(s^\prime)\).

     b) If \(u_z^\prime=exe\), then \(t_1(s^\prime)=t_2(s)\), the duration of \(s^\prime\) is \(c(s^\prime)=t_{xy}\), and \(t_2(s^\prime)=t_1(s^\prime)+t_{xy}\).

     c) If \(u_z^\prime=shut\), then \(t_1(s^\prime)=t_2(s)\), the duration of \(s^\prime\) is \(c(s^\prime)=0\), and \(t_2(s^\prime)=t_1(s^\prime)\).

  4. For a state \(s^\prime=(v_x^\prime, g_y^\prime, idle)\in S-B\), \(s^\prime\) may have more than one parent state; assume the exit times of its parent states are known as \(t_2(s_1), t_2(s_2), \ldots, t_2(s_n)\). Then \(t_1(s^\prime)=\min\{t_2(s_1), t_2(s_2), \ldots, t_2(s_n)\}\), \(c(s^\prime)=t_1(s^\prime)\), and \(t_2(s^\prime)=t_1(s^\prime)\).

3.3.3 Generation algorithm of cloud system priced timed automaton state graph

Based on the reduction rules and generation rules described in Sect. 3.3.2, we give the generation algorithm of the cloud system priced timed automaton state graph below (Algorithm 1):

Algorithm 1 Generation algorithm of the cloud system priced timed automaton state graph

4 An energy optimization approach for green cloud system priced timed automaton

Once a rational cloud system priced timed automaton has been generated, it becomes possible to find a minimum-energy trace from the initial states to the termination states, i.e., the optimal trace. We transform the problem of seeking the optimal trace of the cloud priced timed automaton into a “shortest path” problem over all paths from initial states to termination states. To this end, we first introduce the concept of path energy consumption.

Definition 7

(Path energy consumption) For a cloud system automaton \(\mathrm{Auto}=(S, B, E, Act, P, C, F)\), suppose there is a path \(\mathrm{path}(j)=s_0^j \rightarrow s_1^j \rightarrow \cdots \rightarrow s_n^j\), where \(s_i^j \in S\), \(n\in \mathbb{N}\), \(0\le i\le n\). Let the duration of state \(s_i^j\) be \(c_i\) and its power be \(p_i\). Then the path energy consumption of \(\mathrm{path}(j)\) is:

$$\begin{aligned} \mathrm{energy}(\mathrm{path}(j))=\sum \limits _{i=0}^{n} c_i \cdot p_i \end{aligned}$$
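In code, the path energy consumption is the duration-weighted sum of the state powers along the path; a minimal sketch with hypothetical values:

```python
def path_energy(path, duration, power):
    """energy(path(j)) = sum_i c_i * p_i over the states of the path."""
    return sum(duration[s] * power[s] for s in path)

# Hypothetical three-state path: durations in seconds, powers in watts.
duration = {"s0": 0.0, "s1": 12.0, "s2": 0.0}
power    = {"s0": 80.0, "s1": 150.0, "s2": 0.0}
print(path_energy(["s0", "s1", "s2"], duration, power))   # 1800.0 joules
```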

Based on the concept of path energy consumption, the energy optimization problem for the cloud automaton \(\mathrm{Auto}=(S, B, E, Act, P, C, F)\) can be formalized as follows:

$$\begin{aligned} \left\{ \begin{array}{l} \min _{j=1}^{k}\ \mathrm{energy}(\mathrm{path}(j)) = \sum _{i=0}^{n} c_i \cdot p_i \\ \mathrm{path}(j) = s_{0}^{j} \xrightarrow {act_{1}} s_{1}^{j} \xrightarrow {act_{2}} \cdots \xrightarrow {act_{n}} s_{n}^{j} \\ s_{i}^{j} = (v_{xi}, g_{yi}, u_{zi}) \in S,\ i = 0,1,2,\ldots ,n \\ s_{0}^{j} \in B,\ s_{n}^{j} \in F \\ act_{i} = \left\{ v_{x0}, v_{x1}, \ldots , v_{xi} \right\} \\ \end{array} \right. \end{aligned}$$

In the above equations, \(k\) is the number of possible paths in the cloud automaton Auto from the initial states to the termination states. To solve these equations, we design Algorithm 2 below, which obtains the optimal energy consumption path of the cloud automaton and the optimal energy value. The main idea of the algorithm is as follows. For any initial state \(b\in B\) of the cloud automaton Auto, the state set \(S-B+\{b\}\) is divided into two parts: the set Source and the set Destination. Initially, every state of \(S-B+\{b\}\) is in Source, and we maintain two parameters \(\lambda(s)\) and \(\Gamma(s)\) for each state \(s\in S-B+\{b\}\). The states reachable from \(b\) move from Source to Destination. While \(s\) is in Source, \(\lambda(s)\) is an upper bound on the minimum path energy consumption from \(b\) to \(s\), and \(\Gamma(s)\) is the task set corresponding to the states of a currently best path from \(b\) to \(s\). Once state \(s\) has been moved into Destination, \(\lambda(s)\) is the minimum path energy consumption from \(b\) to \(s\), and \(\Gamma(s)\) is the task set corresponding to all states of a minimum-energy path from \(b\) to \(s\). When all termination states \(f\in F\) are in Destination, we take the minimum value of \(\lambda(f)\) over all \(f\in F\); this minimum is the minimum path energy consumption from the initial state \(b\) to the termination states. We use a forward function to record the predecessor of each state on the optimal path; when the termination state \(f\in F\) achieving the minimum is found, we use the forward function to trace back from \(f\) to \(b\) and thus recover the optimal path. Finally, taking all initial states \(b\in B\) into account, we obtain the minimum path energy consumption and its corresponding optimal path from the initial states to the termination states \(f\in F\).

Algorithm 2 The optimal energy consumption path algorithm
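Because every state contributes a fixed, non-negative energy \(c(s)\cdot p(s)\), the search described above is essentially a Dijkstra-style label-setting procedure. The sketch below is our simplification of that idea: it replaces the Source/Destination sets with a binary heap, omits the task-set bookkeeping \(\Gamma(s)\) and the transition conditions Act, and returns the minimum path energy together with one optimal path recovered through the predecessor (forward) pointers.

```python
import heapq

def min_energy_path(S, E, B, F, duration, power):
    """Dijkstra-style search for a minimum-energy path from any initial state in B
    to any termination state in F.  Node weight = duration[s] * power[s]; the edge
    guards (Act) and the task-set bookkeeping of Algorithm 2 are omitted here."""
    succ = {s: [] for s in S}
    for u, v in E:
        succ[u].append(v)

    w = {s: duration[s] * power[s] for s in S}       # energy contributed by each state
    best, best_path = float("inf"), None

    for b in B:
        dist = {s: float("inf") for s in S}
        prev = {}
        dist[b] = w[b]
        heap = [(dist[b], b)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                              # stale heap entry
            for v in succ[u]:
                nd = d + w[v]                         # add the energy of the next state
                if nd < dist[v]:
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        for f in F:
            if dist[f] < best:
                best = dist[f]
                path, s = [f], f
                while s != b:                         # trace back via the forward pointers
                    s = prev[s]
                    path.append(s)
                best_path = list(reversed(path))
    return best, best_path
```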

Assuming the number of states of the cloud automaton is \(n\), the time complexity of Algorithm 2 for finding the optimal path and the minimum energy value is O(\(n^{3}\)). The optimal path, i.e., the energy consumption optimization solution of the cloud system, is a concrete mapping of cloud tasks to cloud servers. To save energy, we shut down the servers that are not included in the energy optimization solution before the cloud system runs.

5 Case study and simulation experiment

5.1 Case study

Example 3

Assume a cloud environment consisting of 8 computing nodes, at which a dependent task has arrived. The graph of the dependent task is shown in Fig. 3, and the tasks and cloud resources matching time matrix \(T_{8\times 8}\) and the cloud server state power matrix \(P_{3\times 8}\) are shown in Fig. 8, where each element of \(T_{8\times 8}\) is in seconds and each element of \(P_{3\times 8}\) is in watts.

Fig. 8 Tasks and resources matching time matrix and cloud server state power matrix

Using the priced timed automaton generation algorithm (Algorithm 1), the resulting priced timed automaton is shown in Fig. 9, where state \(n_{1}=(v_{1}, g_{1}, idle)\), \(n_{2}=(v_{1}, g_{1}, exe)\), \(\ldots\), \(n_{51}=(v_{8}, g_{8}, shut)\).

Fig. 9 The generation of the cloud system priced timed automaton from the case

Then, using the optimal energy consumption path algorithm (Algorithm 2) with the priced timed automaton generated by Algorithm 1 as input, the output optimal energy consumption path is as follows:

$$\begin{aligned} \begin{array}{l} n_{1} \rightarrow n_{2} \rightarrow n_{3} \rightarrow n_{7} \rightarrow n_{8} \rightarrow n_{9} \rightarrow n_{16} \rightarrow n_{17} \rightarrow n_{18} \\ \rightarrow n_{25} \rightarrow n_{26} \rightarrow n_{27} \rightarrow n_{28} \rightarrow n_{29} \rightarrow n_{30} \rightarrow n_{34} \rightarrow \\ n_{35} \rightarrow n_{36} \rightarrow n_{40} \rightarrow n_{41} \rightarrow n_{42} \rightarrow n_{49} \rightarrow n_{50} \rightarrow n_{51}. \\ \end{array} \end{aligned}$$

The output optimal energy consumption is 9485 joules. On the optimal path, \(n_{2}=(v_{1}, g_{1}, exe)\), \(n_{8}=(v_{2}, g_{3}, exe)\), \(n_{17}=(v_{3}, g_{4}, exe)\), \(n_{26}=(v_{4}, g_{2}, exe)\), \(n_{29}=(v_{5}, g_{6}, exe)\), \(n_{35}=(v_{6}, g_{5}, exe)\), \(n_{41}=(v_{7}, g_{7}, exe)\), \(n_{50}=(v_{8}, g_{8}, exe)\), which give the concrete energy consumption optimization solution: task \(v_{1}\) executes on cloud server \(g_{1}\), task \(v_{2}\) executes on cloud server \(g_{3}\), \(\ldots\), task \(v_{8}\) executes on cloud server \(g_{8}\).

5.2 Simulation experiments

To further validate the proposed cloud center energy optimization approach based on the priced timed automaton, abbreviated APTA, we performed simulation experiments. The hardware environment is a DELL OptiPlex 320 computer, and the software environment is Matlab, used as a discrete event simulation tool.

The simulation parameters are set as shown in Table 1. After setting the number of subtasks of a dependent task, the dependent task graph is generated randomly. The experiments compare APTA with the scheduling algorithms HLFET and ETF, classic scheduling algorithms for dependent tasks [14] that are widely used in heterogeneous parallel computing environments and have good scheduling performance.

Table 1 Simulation parameters

APTA assumes that each server has three states, idle, exe and shut, with different energy consumption. Before the cloud system runs, we shut down the servers in the cloud center that are not used by the optimization solution. To keep the experiments comparable, when calculating the total energy consumption of a schedule, HLFET and ETF also account for the energy of the idle, exe and shut states of each server, and the servers not used by their schedules are likewise shut down before the system runs. The experiments were repeated many times; the results are shown in Figs. 10 and 11.

Fig. 10 Comparison of the system energy consumption

Fig. 11 Comparison of the task completion time

As shown in Fig. 10, as the number of subtasks increases, the energy consumption of HLFET and ETF rises rapidly, while that of APTA grows steadily. When the number of subtasks reaches 120, the energy consumption of HLFET and ETF exceeds 170,000 joules, whereas that of APTA stays below 130,000 joules. The reason is that APTA focuses on optimizing the system energy consumption: when the scheduling scheme is designed, it takes into account the power of the different states of each cloud server, i.e., the energy factor. HLFET and ETF, by contrast, consider only the execution time when their schedules are designed and do not consider energy; the energy consumption is only calculated after the schedule has been generated.

As shown in Fig. 11, in most cases APTA takes slightly longer to complete the dependent task. The reasons are: (1) APTA does not specifically aim to complete the dependent task in minimal time, whereas the target of HLFET and ETF is the minimum execution time of the dependent task; (2) APTA focuses on the energy optimization of the cloud system, and since the energy consumption is calculated from the power and duration of each cloud server in its different states, APTA takes both the power consumption and the time factor into account.

6 Conclusions

In this paper, we have proposed an energy optimization approach for the cloud computing center based on the priced timed automaton, aimed at solving the energy waste problem in the cloud computing center. The main work of this paper lies in the following four aspects. First, we use a priced timed automaton to model a cloud computing node and its different states. Second, by analyzing the characteristics of cloud tasks, we define the running behavior metric matrices of the cloud system and, based on the generation rules and reduction rules, design a generation algorithm for the cloud system priced timed automaton. Third, we design an algorithm that finds the optimal path and the minimum energy value, from which the energy optimization solution and the optimal energy consumption value of the cloud center are derived. Finally, we analyze the proposed approach with a case study and compare it with the traditional scheduling algorithms HLFET and ETF, in terms of total system energy consumption and dependent task completion time, in a simulated cloud system; the results verify that the proposed approach can effectively reduce energy consumption. In future work, we will study how to use the priced timed automaton for energy optimization of the cloud center under given deadline constraints, and we will further improve the generation algorithm of the cloud system priced timed automaton, thereby further improving energy optimization management for the cloud computing system.