Elsevier

Future Generation Computer Systems

Volume 100, November 2019, Pages 380-394

Virtual machine allocation and migration based on performance-to-power ratio in energy-efficient clouds

https://doi.org/10.1016/j.future.2019.05.036

Abstract

The last decade witnessed dramatic advances in cloud computing research and techniques. One of the key challenges in this field is reducing the massive energy consumption of cloud computing data centers. Many power-aware virtual machine (VM) allocation and consolidation approaches have been proposed to reduce energy consumption efficiently. However, most existing energy-efficient cloud solutions save energy at the cost of significant performance degradation. In this paper, we propose a strategy to calculate the optimal working utilization levels for host computers. Since the performance and power data need to be measured on real platforms, to make our design practical, we propose a strategy named “PPRGear” which is based on sampling utilization levels with distinct Performance-to-Power Ratios (PPR), calculated as the number of Server Side Java operations completed during a certain time period divided by the average active power consumption in that period. In addition, we present a framework for virtual machine allocation and migration which leverages the PPR of various host types. By achieving the optimal balance between host utilization and energy consumption, our framework ensures that host computers run at the most power-efficient utilization levels, i.e., the levels with the highest PPR, thus tremendously reducing energy consumption with negligible sacrifice of performance. Our extensive experiments with real-world traces show that, compared with three baseline energy-efficient VM allocation and selection algorithms, IqrMc, MadMmt, and ThrRs, our framework reduces energy consumption by up to 69.31% for various host computer types, with fewer migrations, fewer shutdowns, and little performance degradation for cloud computing data centers.

Introduction

Cloud computing has been widely adopted by individuals, businesses, and large enterprises. However, energy consumption has become a major concern in the last decade, since cloud data centers consume significant power and generate enormous electricity bills. According to data disclosed by The New York Times in 2012, Facebook data centers consumed about 60 million watts and Google data centers consumed almost 300 million watts [1]. In 2013, data centers in the United States collectively consumed 91 billion kWh of electrical energy and generated 97 million metric tons of carbon dioxide (CO2) [2]. In 2014, more than 2% of the United States’ electricity usage was consumed by data centers [3]. Furthermore, by 2020, the annual electricity usage of data centers in the United States is expected to reach 140 billion kWh, the output of about 50 power plants [4]. The carbon dioxide emission generated by Information and Communication Technology (ICT) is expected to exceed 1.4 billion metric tons. It is estimated that data centers are responsible for about 18% of the total energy consumed by all ICT systems in the world [5]. Therefore, many energy-efficient approaches have been explored at the facility level, in cooling systems [6], [7], in data center networks [8], and through computing resource allocation strategies. Among these methods, computing resource allocation is considered the most achievable and cost-effective approach since it does not require any hardware modifications or upgrades. Virtualization is a key technology for achieving energy efficiency in data centers: VMs can be created, deleted, and migrated among host computers based on power-aware decisions [9]. Energy-efficient VM management has been explored in task scheduling [10], workload consolidation [11], [12], temperature-aware capping [13], request batching [14], choosing between local and remote clouds [15], mobile service selection [16], etc.

Gelenbe et al. showed that energy consumption in ICT is related to workload, and concluded that the optimal energy consumption and processing time trade-off could be achieved by tuning workloads in computer systems [5]. Their work also indicated that computing systems should turn on more servers when the workload is sufficiently high in order to achieve energy efficiency and acceptable levels of Quality of Service (QoS) [17].

To the best of our knowledge, our work is the first to leverage the Performance-to-Power Ratio (PPR) of computing nodes in VM allocation and migration to achieve the optimal balance between host utilization and energy consumption. Performance-to-Power Ratio is calculated as the number of Server Side Java operations, or ssj_ops, completed during a certain time period divided by the average active power consumption in that period. Most of the current VM placement and migration policies are based on primitive system characteristics like power, utilization, network bandwidth, or storage space. However, in this paper, we propose an energy-efficient VM allocation and migration strategy based on PPR which is not a primitive characteristic of host computers. Our proposed framework is able to dynamically allocate VMs to and migrate VMs among hosts so that host computers can operate at the most power-efficient utilization levels, i.e., at the utilization level with the highest PPR. Specifically, this paper has the following contributions:

  • We propose a novel VM allocation and migration framework which allocates and migrates virtual machines in clouds based on host performance-to-power ratios. Under this framework, host computers run at their optimal or near-optimal utilization levels so that energy consumption can be significantly reduced without much sacrifice of cloud-end computation performance.

  • We propose exact and approximate methods to determine the gear ranges for a specific host type. Thanks to our sampling strategy, the proposed approximate method is able to efficiently derive the range of each gear and estimate the energy cost of the whole system. Without loss of generality, in our verification experiments, we assume that each host computer maintains 11 gear levels (from gear 0 to gear 10) that correspond to distinct utilization levels (from 0%, 10%, … to 100%).

  • We develop the VM allocation and migration modules of our proposed PPRGear framework based on the calculation of the Performance-to-Power Ratio (PPR) on host computers. These two modules are designed to seamlessly trigger virtual machine allocation and migration automatically when a host is overutilized or underutilized, in order to achieve the optimal balance between host utilization and energy consumption.

  • Our extensive experiments on CloudSim [18] with real-world traces show that compared with ThrRs, MadMmt, and IqrMc [19], our framework is able to reduce the energy consumption significantly for various host computer types. More importantly, the SLA violation rate of our framework is almost the same as that of Dynamic Voltage and Frequency Scaling (DVFS), indicating that our framework results in negligible performance degradation.
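The Performance-to-Power Ratio above can be computed directly from sampled (ssj_ops, power) pairs. The sketch below illustrates this with hypothetical figures in the style of SPECpower_ssj2008 results; the numbers are not measurements from the paper:

```python
def performance_to_power_ratio(ssj_ops, avg_power_watts):
    """PPR = Server Side Java operations (ssj_ops) completed in a
    measurement period divided by the average active power in that period."""
    return ssj_ops / avg_power_watts

# Hypothetical samples: utilization level -> (ssj_ops, average active watts)
samples = {
    0.5: (500_000, 120.0),
    0.7: (700_000, 150.0),
    1.0: (1_000_000, 260.0),
}

ppr = {util: performance_to_power_ratio(ops, watts)
       for util, (ops, watts) in samples.items()}
best_utilization = max(ppr, key=ppr.get)  # the most power-efficient level
```

Under these made-up samples the 70% level has the highest PPR, illustrating that full utilization is not necessarily the most power-efficient operating point.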

In our design, each host computer maintains 11 gear levels (from gear 0 to gear 10) that correspond to distinct utilization levels (from 0%, 10%, … to 100%). The gear with the highest PPR is chosen as the best gear, and the top n gears with the highest PPRs are chosen as the preferred gears. When the current working gear of a host is not in the range of the preferred gears, the host is considered either overutilized or underutilized. Before executing any tasks energy-efficiently, we evaluate the characteristics of each computing node at different utilization levels. This evaluation finds the best gear with the highest PPR and the n preferred gears with the n highest PPR values. By allocating and migrating VMs in clouds, we aim to keep computing nodes working at their best gears. When a computing node is working at a gear higher than any preferred gear, it is considered overutilized; when it is working at a gear lower than any preferred gear, it is considered underutilized. If a computing node is overutilized, one or more VMs on this host will be selected and migrated out. If a computing node is underutilized, the cloud will either migrate VMs from other hosts to this host, or migrate out all VMs on this host and then shut it down to save energy.
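A minimal sketch of this classification logic follows; the example preferred-gear set is hypothetical, and the actual trigger conditions are detailed in the algorithmic design:

```python
def classify_host(current_gear, preferred_gears):
    """Classify a host by comparing its current gear (gear g corresponds
    to roughly 10*g percent utilization) against its preferred,
    highest-PPR gears."""
    if current_gear > max(preferred_gears):
        return "overutilized"   # select one or more VMs and migrate them out
    if current_gear < min(preferred_gears):
        return "underutilized"  # fill with VMs from other hosts, or
                                # evacuate all VMs and shut the host down
    return "preferred"          # already at a power-efficient level

# Example with a hypothetical preferred-gear set {5, 6, 7} (50%-70% utilization)
preferred = {5, 6, 7}
```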

The remainder of the paper is organized as follows: Section 2 introduces our motivation and observations; Section 3 presents the preferred utilization levels and the energy model; Section 4 gives an overview of our approach; Section 5 details the algorithmic design; Section 6 compares PPRGear with the baselines and presents our simulation results; Section 7 discusses related work; Section 8 concludes the paper.

Section snippets

Our observations

Our paper focuses on energy conservation by improving the effectiveness of energy usage, i.e., accomplishing more tasks with less energy, rather than simply reducing energy consumption; this distinction is especially important for heavy workloads.

The Standard Performance Evaluation Corporation (SPEC) developed the energy benchmark suite SPECpower_ssj2008 [20]. A number of corporations have run SPECpower_ssj2008 on their host computers and uploaded the experimental results to the SPEC

Problem formulation

The objective of our work is to achieve the optimal balance between the host utilization and the energy consumption for cloud data centers. Inspired by the aforementioned observations, our proposed mechanism achieves the goal by allocating and migrating VMs so that computing nodes are able to operate at their best gears, i.e., the utilization levels that will result in the highest performance-to-power ratios. In this section, we first focus on the definition and calculation of the best gear and
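Using PPR(g) for the ratio measured at gear g, the best gear described above can be written as follows (our notation, not necessarily the paper's):

```latex
g^{*} \;=\; \operatorname*{arg\,max}_{g \in \{0,1,\dots,10\}} \mathrm{PPR}(g)
      \;=\; \operatorname*{arg\,max}_{g \in \{0,1,\dots,10\}}
            \frac{\mathrm{ssj\_ops}(g)}{P_{\mathrm{avg}}(g)}
```

and the n preferred gears are the gears with the n highest PPR(g) values.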

System design

Most current VM placement and migration policies are based on primitive system characteristics such as power, utilization, network bandwidth, and storage space. We propose an energy-efficient VM allocation and migration strategy based on the Performance-to-Power Ratio (PPR), which is not a primitive characteristic of host computers. Before executing any tasks energy-efficiently, we evaluate the characteristics of computing nodes at different utilization levels, called gears. The

Algorithmic design

In this section, we first provide our algorithmic design of PPRGear and then elaborate on its two modules: VM Allocation and VM Migration.
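As a rough sketch of how a VM Allocation module can use gears, consider the following; the host fields and the tie-breaking rule are our illustrative assumptions, not the exact PPRGear algorithm:

```python
def allocate_vm(vm_utilization, hosts):
    """Place a VM on a host whose resulting gear stays within that host's
    preferred (highest-PPR) gears; break ties by choosing the gear closest
    to the host's best gear."""
    candidates = []
    for host in hosts:
        new_util = host["util"] + vm_utilization
        new_gear = round(new_util * 10)   # gear g ~ 10*g percent utilization
        if new_gear in host["preferred_gears"]:
            candidates.append((host, new_gear))
    if not candidates:
        return None  # e.g., fall back to powering on an additional host
    host, _ = min(candidates,
                  key=lambda c: abs(c[1] - c[0]["best_gear"]))
    return host
```

A rejected placement (returning None) would correspond to the fallback paths in the full algorithm, such as activating a sleeping host.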

Performance evaluation

To demonstrate the performance and energy efficiency of PPRGear, we evaluated it on four different host models under different workloads in terms of energy consumption, Service-Level Agreement (SLA) violations, shutdown times, and migration times using CloudSim 3.0.3 [18]. CloudSim 3.0.3 is an event-driven simulator used to simulate infrastructures and application services in cloud computing with customizable policies of virtual machine selection, allocation,

Related work

An important aspect of energy-efficient clouds is accomplishing more jobs with less power. In energy-efficient clouds, power consumption is measured at the computing node level, since different components, such as processors, memory, and secondary storage [31], have different power consumption models, and measuring the power consumption of individual components in order to evaluate overall power consumption is indirect and difficult. According to recent studies, although DVFS demonstrates that the

Conclusion

Energy consumption has become a major concern in the last decade, since cloud data centers consume significant power and generate enormous electricity bills. In a cloud computing environment, computing resources are allocated to virtual machines that are created for customers. The placement and migration of virtual machines have a significant impact on both performance and energy cost. In this paper, we presented PPRGear, an energy-efficient virtual machine allocation and migration framework for

Conflict of interest

None.

Declaration of competing interest

The authors declare that there is no conflict of interest regarding the publication of this manuscript.


References (43)

  • X. Zhu et al., Real-time tasks oriented energy-aware scheduling in virtualized clouds, IEEE Trans. Cloud Comput. (2014)

  • N. Tziritas et al., Application-aware workload consolidation to minimize both energy consumption and network load in cloud environments

  • S. Yeo et al., ATAC: Ambient temperature-aware capping for power efficient datacenters

  • Y. Wang et al., Virtual batching: Request batching for server energy conservation in virtualized data centers, IEEE Trans. Parallel Distrib. Syst. (2013)

  • E. Gelenbe et al., Choosing a local or remote cloud

  • E. Gelenbe et al., Energy-QoS trade-offs in mobile service selection, Future Internet (2013)

  • E. Gelenbe et al., Trade-offs between energy and quality of service

  • R.N. Calheiros et al., CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms, Softw. Pract. Exper. (2011)

  • A. Beloglazov et al., Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers, Concurr. Comput.: Pract. Exper. (2012)

  • SPEC Power, ...

  • X. Fan et al., Power provisioning for a warehouse-sized computer


    Xiaojun Ruan is an Assistant Professor with the Department of Computer Science at California State University, East Bay. He was an Assistant/Associate Professor of Computer Science Department at West Chester University Pennsylvania from 2011 to 2017. He received his B.E. degree in Computer Science and Technology from Shandong University, Jinan, China in 2005, and Ph.D. degree in Computer Science from Auburn University in 2011. His research interests include cloud computing, data science, storage systems, energy-efficient computing, distributed and parallel computing, Artificial Intelligence, and computer security.

    Haiquan Chen is an Assistant Professor with the Department of Computer Science at California State University, Sacramento. He was an Assistant/Associate Professor with the Department of Computer Science at Valdosta State University, Valdosta, GA from 2011 to 2017. He obtained his Ph.D. degree in Computer Science from Auburn University in 2011. He received his M.E. degree and B.E. degree in Computer Science from Xi’an Jiaotong University (XJTU) in China in 2006 and 2003, respectively. His research interests include database and big data management, with emphasis on data mining, machine learning, location-based services, and social network analysis. He is a member of the ACM and the ACM SIGMOD.

    Yun Tian is an Assistant Professor with the Department of Computer Science at California State University, Fullerton. She obtained her Ph.D. degree in Computer Science and Software Engineering and M.S.E. degree in Software Engineering from Auburn University in 2013 and 2011, respectively. She received her B.E. degree in Computer Science and Technology from Northwest University, Xi’an, China in 2006. Her current research interests lie in the areas of computer and network security, distributed computing, and parallel and high performance computing. In addition, she is interested in modeling, simulation, deep learning, data mining, and other big data related fields.

    Shu Yin received his B.S. in Communication Engineering from Wuhan University of Technology (WUT) in 2006 and his M.S. degree in Signal and Information Processing from WUT in 2008. He received his Ph.D. degree in Computer Science from Auburn University in 2012. He was an Associate Professor in the College of Computer Science and Electronics Engineering at Hunan University, Changsha, China. Currently, he is an Assistant Professor in the School of Information Science and Technology, ShanghaiTech University, China. From July to December 2011, he worked as an intern at Los Alamos National Laboratory. His research interests include storage systems, reliability modeling, fault tolerance, energy-efficient computing, high performance computing, and wireless communications.
