Neurocomputing

Volume 483, 28 April 2022, Pages 501-514

Energy-efficient VM opening algorithms for real-time workflows in heterogeneous clouds

https://doi.org/10.1016/j.neucom.2021.08.145

Abstract

Minimizing energy consumption is a critical challenge for real-time workflows, particularly in heterogeneous cloud computing systems. State-of-the-art algorithms aim to minimize the energy consumed for processing such applications by choosing virtual machines (VMs) to shut down from all opened VMs (i.e., VM merging). However, such VM merging through an “on-to-close” approach usually incurs high computational complexity. This paper proposes an energy-efficient VM opening (EEVO) algorithm that is capable of choosing VMs to turn on from all closed VMs while satisfying the real-time constraint of applications. Considering that there are slacks that can be eliminated or reduced between adjacently scheduled tasks after using the EEVO algorithm, a dynamic scaling down EEVO algorithm (DEEVO) is further proposed. DEEVO is implemented by scaling down the frequency of VMs executing each task based on the dynamic voltage and frequency scaling (DVFS) technique. Experimental results demonstrate that, with the above-mentioned improvements, DEEVO achieves lower energy consumption for real-time workflows than state-of-the-art algorithms do. In addition, DEEVO outperforms state-of-the-art algorithms in the computational efficiency of accomplishing task scheduling.

Introduction

Heterogeneous cloud computing systems consisting of virtual machines (VMs) can offer large-scale computing and data storage services and solutions. Different cloud service providers offer heterogeneous cloud computing platforms that are distinct from each other. Large-scale computation and data storage consume a significant amount of energy, which imposes cost burdens and environmental impact. According to the energy consumption data released by the China Unicom Data Center, its annual electricity consumption is 9.9 billion kWh [1]. Energy consumption has therefore become a pressing issue that affects the development of computing systems. Heterogeneous clouds usually have to process a large number of tasks, which may lead to even higher energy consumption. Tasks with dependencies are usually expressed in the form of workflows. In studies of heterogeneous cloud systems [2], a set of precedence-constrained tasks composes a workflow, which is generally modeled as a directed acyclic graph (DAG) in which the vertices represent tasks to be executed and the edges specify the communication demands among tasks [3], [4], [5], [6], [7]. Therefore, it is necessary to propose effective workflow scheduling algorithms that ensure tasks are executed correctly and efficiently while minimizing energy consumption.
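To make the DAG workflow model concrete, the sketch below shows one possible in-memory representation of such a workflow. It is an illustrative assumption only: the attribute names (wcet for the per-VM execution times w_{i,k}, succs for the communication-weighted edges c_{i,j}) and the helper topological_order are hypothetical and not taken from the paper.

```python
# Minimal sketch of a DAG workflow model, for illustration only.
# Attribute names are hypothetical; the paper's own notation (w_{i,k}, c_{i,j})
# is defined in its problem formulation section.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Task:
    tid: int
    wcet: Dict[int, float]                                  # execution time of this task on each VM k (w_{i,k})
    succs: Dict[int, float] = field(default_factory=dict)   # successor task id -> communication cost (c_{i,j})


def topological_order(tasks: List[Task]) -> List[int]:
    """Return task ids in an order that respects the precedence constraints."""
    indeg = {t.tid: 0 for t in tasks}
    for t in tasks:
        for s in t.succs:
            indeg[s] += 1
    ready = [tid for tid, d in indeg.items() if d == 0]
    order = []
    by_id = {t.tid: t for t in tasks}
    while ready:
        tid = ready.pop()
        order.append(tid)
        for s in by_id[tid].succs:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order
```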

Workflow scheduling has been extensively studied by researchers. In clouds, there are mainly two roles, resource providers and users, who have different requirements. From the providers' perspective, minimizing the total energy consumption of an application is necessary for saving economic cost. From the users' perspective, the execution of an application must be finished within a given time range (i.e., a deadline or real-time constraint). Otherwise, the resource providers will violate the service-level agreement (SLA) and further negatively affect the quality of service (QoS) [8], [9]. Therefore, designing a workflow scheduling algorithm that meets the different requirements of resource providers and users has become a key issue that urgently needs to be solved.

To solve this problem, several approaches based on the dynamic voltage and frequency scaling (DVFS) technique [3], [10], [11], [12] have been proposed to minimize the total energy consumption of a real-time workflow in heterogeneous cloud computing systems. In [3], an enhanced energy-efficient scheduling (EES) algorithm was proposed to reduce the energy required for processing a specific application while satisfying the real-time constraint. EES exploits the slack room and allocates it in a global manner, thereby significantly saving power; however, it ignores static energy consumption. In [11], a downward energy consumption minimization (DECM) algorithm and a downward and upward energy consumption minimization (DUECM) algorithm based on the DVFS technique were proposed. However, DUECM focuses solely on minimizing dynamic energy consumption and fails to take static energy into account. In [12], an energy-aware VM merging (EPM) algorithm and a quick EPM (QEPM) algorithm were proposed, which choose the VMs to shut down from the opened VMs. EPM can reduce energy consumption but incurs high computational complexity.

In summary, these approaches assign tasks to VMs according to the principle of selecting the VM with the minimum earliest finish time (EFT), or set the deadline of each task according to its level in order to scale down frequencies. Moreover, methods that choose VMs to shut down from all opened VMs may incur high computational overhead or perform poorly in energy saving. Therefore, it is of great importance to propose a workflow scheduling algorithm that minimizes energy consumption by considering both dynamic and static energy consumption without incurring high computational complexity.

In this paper, we propose an energy-efficient VM opening (EEVO) algorithm that aims to minimize the energy required for processing a real-time workflow in heterogeneous cloud computing systems with low computation time. EEVO chooses a set of VMs to turn on from all closed VMs and assigns each task to the VM that consumes the minimum energy. On the basis of EEVO, a DVFS-enabled EEVO (DEEVO) algorithm is developed to reduce the slack between adjacent tasks in the schedule. DEEVO achieves a further energy reduction by decreasing the operating frequencies of the running VMs. Specifically, the technical contributions of this study are as follows.

  • (1) We propose an EEVO algorithm that chooses the VMs to turn on from all closed VMs, thereby reducing the computational complexity.

  • (2) The proposed EEVO algorithm assigns each task to a VM according to the principle of selecting the VM with the minimum energy consumption.

  • (3) We propose a DEEVO algorithm that sets a deadline for each task according to the VM usage, thereby reducing the slack between adjacent tasks in the schedule.

Experimental results demonstrate that, with the above-mentioned improvements, DEEVO achieves lower energy consumption for real-time workflows than state-of-the-art algorithms do. More importantly, DEEVO shows superior computational efficiency in accomplishing task scheduling.

Section snippets

Related work

To solve the problem of high energy consumption in clouds, researchers have conducted a series of energy-saving studies on flow scheduling [13], task scheduling [14], green datacenters [15], [16], [17], and so on. In [14], an algorithm named MGGS, which combines a modified genetic algorithm (GA) with a greedy strategy, was proposed to find an optimal solution for the task scheduling process using a smaller number of iterations. Zhou et al. [15] proposed two novel adaptive energy-aware algorithms

Problem formulation

Table 1 summarizes the key notations used in the rest of this paper.
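The snippet above only points to the notation table. For readability, the display below sketches the DVFS power and energy model commonly used in the literature this paper builds on (e.g., [11], [12]), using the symbols that also appear in the experimental configuration (p_{k,s}, p_{k,ind}, C_{k,ef}, m_k, f_{k,max}). It is an assumption for orientation only; the paper's exact formulation may differ in detail.

```latex
% Sketch of a commonly used DVFS power/energy model (an assumption based on
% the cited literature, not necessarily the paper's exact formulation).
% Power of VM u_k running at frequency f (h = 1 if the VM is active, 0 otherwise):
P_k(f) = p_{k,s} + h\left(p_{k,ind} + C_{k,ef}\, f^{\,m_k}\right)

% Energy of task n_i executed on u_k at frequency f, where w_{i,k} is the
% execution time of n_i on u_k at the maximum frequency f_{k,\max}:
E(n_i, u_k, f) = \left(p_{k,ind} + C_{k,ef}\, f^{\,m_k}\right)
                 \cdot w_{i,k} \cdot \frac{f_{k,\max}}{f}
```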

Energy-efficient VM opening (EEVO)

This section details the two proposed energy-efficient algorithms for processing workflows. The energy-efficient VM opening (EEVO) algorithm is implemented by choosing a set of VMs out of all closed VMs to open up, and assigning each task to the VM that consumes the minimum energy when executing it. The dynamic scaling down-EEVO (DEEVO) algorithm is further developed on the basis of EEVO, focusing on reducing or eliminating the slacks between adjacent tasks.
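As a rough illustration of the "open-from-closed" idea described above, the following sketch assigns tasks in topological order to the candidate VM with the lowest estimated energy, opening a closed VM only when it is the best candidate. The functions energy_on_vm and deadline_ok are hypothetical placeholders; the actual EEVO algorithm additionally accounts for communication costs, VM startup, and the real-time constraint as defined in the paper.

```python
# Rough sketch of an "open-from-closed" greedy assignment in the spirit of EEVO.
# energy_on_vm() and deadline_ok() are hypothetical placeholders supplied by the
# caller; at least one feasible VM per task is assumed.
def eevo_sketch(tasks_in_topo_order, closed_vms, energy_on_vm, deadline_ok):
    opened = []                     # VMs turned on so far
    schedule = {}                   # task -> chosen VM
    for task in tasks_in_topo_order:
        # Candidate VMs: those already opened plus every still-closed VM.
        candidates = opened + closed_vms
        best_vm, best_energy = None, float("inf")
        for vm in candidates:
            e = energy_on_vm(task, vm, schedule)
            if e < best_energy and deadline_ok(task, vm, schedule):
                best_vm, best_energy = vm, e
        if best_vm in closed_vms:   # open a new VM only when it is the cheapest option
            closed_vms.remove(best_vm)
            opened.append(best_vm)
        schedule[task] = best_vm
    return schedule, opened
```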

Dynamic energy-efficient VM opening (DEEVO)

We further develop a dynamic scaling down-EEVO (DEEVO) algorithm to achieve a further energy reduction. DEEVO is capable of eliminating or reducing the slack between adjacent tasks by scaling down task execution frequencies subject to the deadline constraints D(n_i).
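The following is a minimal sketch of this kind of DVFS slack reclamation, assuming discrete frequency levels and execution times that scale as w_{i,k} * f_{k,max} / f (as in the model sketched earlier). The function and parameter names are hypothetical; the paper's DEEVO additionally derives the deadlines D(n_i) from the VM usage.

```python
# Hypothetical sketch of DVFS-based slack reclamation in the spirit of DEEVO:
# for each scheduled task, pick the lowest available frequency that still
# finishes the task by its deadline D(n_i).
def scale_down(start_time, base_exec_time, deadline, f_max, freq_levels):
    """Return the lowest frequency in freq_levels that meets the deadline."""
    for f in sorted(freq_levels):                  # try slow frequencies first
        finish = start_time + base_exec_time * f_max / f
        if finish <= deadline:
            return f
    return f_max                                   # no slack: run at full speed


# Example: a task scheduled at t=10 with 20 time units of work at f_max=1.0
# and a deadline of 40 can be slowed to f=0.7 (finish at 10 + 20/0.7 = 38.6).
print(scale_down(10.0, 20.0, 40.0, 1.0, [0.5, 0.7, 0.9, 1.0]))
```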

Analysis of experimental results

In the experiments, we evaluate the performance of the proposed EEVO and DEEVO algorithms for different applications in terms of three important metrics: the total energy cost, the computation time, and the number of VMs turned on in the schedule.

The experimental platform is the CloudSim simulator, which is used to simulate the cloud computing infrastructure in this study.

The configuration in CloudSim is as follows [34]: $10\,\mathrm{h} \le w_{i,k} \le 100\,\mathrm{h}$, $10\,\mathrm{h} \le c_{i,j} \le 100\,\mathrm{h}$, $0.1 \le p_{k,s} \le 0.5$, $0.03 \le p_{k,ind} \le 0.07$, $0.8 \le C_{k,ef} \le 1.2$, $2.5 \le m_k \le 3.0$, and $f_{k,\max}$
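Under the assumption that these parameters are drawn uniformly from the stated ranges (the snippet above only gives the ranges, not the sampling scheme), one way to instantiate such a configuration in a simulation script is sketched below; the dictionary keys are hypothetical names for the paper's symbols.

```python
# One way to instantiate the reported parameter ranges in a simulation script.
# Uniform sampling is an assumption; the paper only states the ranges.
import random


def sample_vm_parameters():
    return {
        "p_s":   random.uniform(0.1, 0.5),    # static power p_{k,s}
        "p_ind": random.uniform(0.03, 0.07),  # frequency-independent power p_{k,ind}
        "C_ef":  random.uniform(0.8, 1.2),    # effective capacitance C_{k,ef}
        "m":     random.uniform(2.5, 3.0),    # dynamic power exponent m_k
    }


def sample_task_costs(num_vms):
    # Execution times w_{i,k} and communication costs c_{i,j} in [10, 100] h.
    w = [random.uniform(10, 100) for _ in range(num_vms)]
    c = random.uniform(10, 100)
    return w, c
```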

Conclusion

This paper proposes an energy-efficient VM opening (EEVO) algorithm and a dynamic scaling down-EEVO (DEEVO) algorithm to achieve further energy reduction when processing workflows in heterogeneous clouds. EEVO is a novel alternative to VM merging algorithms in that it considers turning on VMs from all closed VMs. DEEVO reduces the slack existing between adjacent tasks according to the VM usage. The proposed methods are compared with HEFT, DEWTS, EPM, and QEPM, and the results verify that our methods

CRediT authorship contribution statement

Saiqin Long: Conceptualization, Methodology, Software. Xin Dai: Data curation, Writing – original draft. Tingrui Pei: Supervision. Jiasheng Cao: Software, Investigation, Visualization. Hiroo Sekiya: Validation. Young-June Choi: Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB1003702, the Natural Science Foundation of China under Grant No. 62032020, the Hunan Science and Technology Planning Project under Grant No. 2019RS3019, the Hunan Provincial Natural Science Foundation of China for Distinguished Young Scholars under Grant 2018JJ1025, the Project in Hunan Province Department of Education under Grant No. 18C0107, the National Natural Science Foundation of


References (36)

  • J.E.N. Mboula et al., Cost-time trade-off efficient workflow scheduling in cloud, Simul. Model. Pract. Theory (2020).

  • H. Aziza et al., A hybrid genetic algorithm for scientific workflow scheduling in cloud environment, Neural Comput. Appl. (2020).

  • Z. Tang et al., An energy-efficient task scheduling algorithm in DVFS-enabled cloud environment, J. Grid Comput. (2016).

  • G. Xie et al., Minimizing energy consumption of real-time parallel applications using downward and upward approaches on heterogeneous systems, IEEE Trans. Industr. Inf. (2017).

  • G. Xie et al., Energy-aware processor merging algorithms for deadline constrained parallel applications in heterogeneous cloud computing, IEEE Trans. Sustain. Comput. (2017).

  • D. Wu et al., Towards distributed SDN: Mobility management and flow scheduling in software defined urban IoT, IEEE Trans. Parallel Distrib. Syst. (2018).

  • Z. Zhou et al., An improved genetic algorithm using greedy strategy toward task scheduling optimization in cloud environments, Neural Comput. Appl. (2020).

  • Z. Zhou et al., A truthful and efficient incentive mechanism for demand response in green datacenters, IEEE Trans. Parallel Distrib. Syst. (2018).

Saiqin Long received the B.S. degree in software engineering from Hunan Normal University and the Ph.D. degree in computer applications technology from the South China University of Technology, in 2009 and 2014, respectively. She became an associate professor in the School of Computer Science, Xiangtan University, in 2017. Her research interests include cloud computing, cloud storage, parallel and distributed systems, file systems, and computer system architecture. She is a member of the China Computer Federation (CCF).

Xin Dai received the B.S. degree in communication engineering from Xingxiang College of Xiangtan University in 2019. She is currently working toward the M.S. degree with the School of Automation and Electronic Information, Xiangtan University, China. Her research interests include cloud computing, task scheduling, and wireless communication networks.

Tingrui Pei received the B.S. and M.S. degrees from Xiangtan University, Hunan, China, in 1992 and 1998, respectively, and the Ph.D. degree in signal and information processing from the Beijing University of Posts and Telecommunications in 2004. From 2006 to 2007, he was a Visiting Scholar with Waseda University. He is currently a Professor with Xiangtan University. His research interests include the Internet of Things, cloud computing, wireless sensor networks (WSNs), mobile ad hoc networks, and mobile communication networks.

Jiasheng Cao received the B.S. degree from Xiangtan University, China, in 2018. He is currently an employee of Zhejiang Yushi Technology Co., Ltd. His research interests include cloud computing and wireless sensor networks.

Hiroo Sekiya (Senior Member, IEEE) received the B.S., M.S., and Ph.D. degrees in electrical engineering from Keio University, Yokohama, Japan, in 1996, 1998, and 2001, respectively. Since April 2001, he has been with Chiba University, Chiba, Japan, where he is currently a Professor with the Graduate School of Engineering. His research interests include high-frequency high-efficiency tuned power amplifiers, resonant dc/dc power converters, dc/ac inverters, and digital signal processing for wireless communications.

Young-June Choi received the B.S., M.S., and Ph.D. degrees from the Department of Electrical Engineering and Computer Science, Seoul National University, South Korea, in 2000, 2002, and 2006, respectively. From September 2006 to July 2007, he was a Postdoctoral Researcher with the University of Michigan, Ann Arbor, MI, USA. From 2007 to 2009, he was with NEC Laboratories America, Princeton, NJ, USA, as a Research Staff Member. He joined Ajou University in September 2009 as a faculty member.
