A Joint Power Efficient Server and Network Consolidation approach for virtualized data centers
Introduction
One of the main objectives in cloud computing is to find the right compromise between the cost-efficiency of the underlying infrastructure and the Quality of Service (QoS) perceived by users running their virtualized applications [1]. Within the cloud computing paradigm there is a growing green computing [2] awareness, which is closely related to the cost-efficient utilization of physical resources: the aim is to reduce power consumption and achieve an appropriate level of energy efficiency. Large data centers, which widely adopt virtualization technology, increasingly need to reduce their energy consumption, for both environmental and economic reasons. Hence, the interest of the research community is moving towards metrics that evaluate how energy efficient the resource utilization is. Techniques that increase the resource utilization of data centers have been studied extensively in the literature from the energy viewpoint. One of them is VM consolidation, whose objective is to minimize the number of physical servers needed to host a set of VMs. By leveraging VM live migration [3], the allocation of VMs to physical nodes can be dynamically adjusted to achieve different goals such as load balancing, avoiding hot spots, or hibernating under-utilized servers.
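At its core, VM consolidation can be viewed as a bin-packing problem: pack the VMs' resource demands onto as few active servers as possible. The sketch below illustrates this view with a simple first-fit-decreasing heuristic; it is not the algorithm proposed in this paper, and the capacities and demands are hypothetical, normalized CPU shares.

```python
# Illustrative only: VM consolidation as bin packing (first-fit decreasing).
# Server capacity and VM demands are hypothetical, normalized CPU shares.
def consolidate(vm_demands, server_capacity):
    """Pack VMs onto as few servers as possible (first-fit decreasing)."""
    servers = []  # each entry is the remaining capacity of an active server
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if demand <= free:  # first active server with enough room
                servers[i] -= demand
                placement[vm] = i
                break
        else:  # no active server fits: power on a new one
            servers.append(server_capacity - demand)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

demands = {"vm1": 0.6, "vm2": 0.5, "vm3": 0.4, "vm4": 0.3, "vm5": 0.2}
placement, active = consolidate(demands, 1.0)
print(active)  # 2 active servers suffice instead of 5
```

Servers left empty by such a packing are the candidates for hibernation mentioned above; the paper's model additionally accounts for migration cost and network effects, which this sketch ignores.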
Many VM consolidation techniques do not take networking issues within a data center into account. Greenberg et al. [4] show that the network consumes around 20% of the total energy in a data center. Large data centers provide cloud-enabled services and applications that typically consist of multiple cooperating VMs, which need to exchange large data volumes over the data center network. Classical network infrastructures, based on spanning tree topologies, have been shown to suffer from severe limitations in terms of network throughput, equipment capital expenditure (CapEx), network diameter, and so on. In [5], a new metric, the Network Power Effectiveness (NPE), is introduced to evaluate network efficiency as the ratio between the aggregate throughput and the total power consumption. Along with novel architectures, techniques for increasing energy efficiency in the network are also emerging. Commercial switches allow putting network interfaces into sleep mode when no traffic is flowing and waking up a link only when a packet arrives. This is useful because several studies show that data center traffic load strongly depends on the time of day. For example, [6] shows that the average link utilization on connections to the aggregation switches is only 8% of capacity for 95% of the time, while in the core layer utilization is between 20% and 40%. Therefore, under-utilized network devices and links can be dynamically switched off during some time intervals in order to save energy. Network protocols can also be designed for energy efficiency: for example, the IEEE 802.3az [7] standard aims to reduce the energy consumption of Ethernet-based communication by activating a link only when real data is being sent.
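Since NPE is defined in [5] as aggregate throughput over total power, it captures directly how sleeping idle ports improves efficiency: the same traffic served with less power yields a higher NPE. A minimal sketch, with purely illustrative numbers:

```python
# Network Power Effectiveness (NPE), as defined in [5]: aggregate throughput
# divided by total network power consumption. The figures below are
# hypothetical, for illustration only.
def npe(aggregate_throughput_gbps, total_power_watts):
    return aggregate_throughput_gbps / total_power_watts  # Gbit/s per watt

# Same traffic, lower power (idle ports in sleep mode) -> higher NPE.
print(npe(800, 4000))  # all ports powered: 0.2 Gbit/s per W
print(npe(800, 2500))  # idle ports asleep: higher effectiveness
```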
In [8], we proposed a new MILP model and a heuristic to consolidate a virtualized data center by reallocating VMs on the smallest subset of servers in order to minimize the total energy consumption due to compute resources. The key contribution of this work is a new mathematical model, extending [8], that also considers the network characteristics in order to jointly optimize the VM placement and energy efficient network routing. Given the current VM placement and the network routing of the VM-to-VM traffic, we consider the energy efficiency profiles of servers and networking devices in order to find the VM migrations and network paths that jointly minimize the server and network power consumption, powering down unused switches and link ports that are not necessary for routing the traffic demands. As the proposed model is complex and very hard to solve, we develop a fast Simulated Annealing based Resource Consolidation (SARC) heuristic. A parameter, called migration desirability, is used in the perturbation phase of the heuristic: it allows us to balance between the resource utilization of the physical servers, the power efficiency, and the impact the migrations may have on the power-aware established network paths. We show that our heuristic saves on average 50% of the total network power consumption compared to a heuristic that simply consolidates the active servers.
The paper is structured as follows: in Section 2, the related work is discussed. In Section 3, the problem is formulated and the objective is explained, while in Section 4, a Mixed Integer Linear Programming model is described. In Section 5, the heuristic is presented and Section 6 shows the experimental evaluation. Finally, the paper concludes in Section 7.
Section snippets
Related work
Several papers address the consolidation problem in virtualized environments by either optimizing the utilization of physical resources or the network efficiency. In [8], we presented a novel model for power efficient VM consolidation: the problem is to find the set of migrations that minimizes both the overall power consumption of the active servers after the consolidation and the number of migrations. The proposed heuristic was shown to be very effective in approximating the optimal solution. In
Problem formulation
The aim of this section is to build a mathematical model that optimizes the power consumption of a data center. The model takes into account the power profiles of the servers used to host a set of VMs, the network power consumption due to switches and routers and, finally, the number of VM migrations required to move the VMs to the most efficient servers and power down the unused ones. We define the Joint Power Efficient Server and Network Consolidation Problem as follows:
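Server power profiles are commonly modeled as an idle power plus a load-proportional term; the sketch below uses this widely adopted linear model purely for illustration (the paper treats the profiles as input parameters, and the wattages here are hypothetical). It also shows why consolidation pays off: idle power is wasted on every lightly loaded server that stays on.

```python
# Common linear server power model (illustrative; not necessarily the exact
# profile used in the paper). P_idle and P_max are hypothetical wattages.
def server_power(cpu_utilization, p_idle=120.0, p_max=250.0):
    """Power draw in watts for a utilization in [0, 1]; 0 W when switched off."""
    if cpu_utilization == 0.0:
        return 0.0  # the consolidation goal: empty servers can be powered down
    return p_idle + (p_max - p_idle) * cpu_utilization

# Two half-loaded servers draw more than one fully loaded server:
print(2 * server_power(0.5))  # 370.0 W
print(server_power(1.0))      # 250.0 W
```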
Joint Power Efficient Server and Network Consolidation Problem Formulation
The problem is formulated as a Mixed Integer Linear Programming model, whose input parameters and decision variables are summarized in Table 1, while the objective function and the constraints are shown in Table 2.
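Schematically, and with symbols chosen here only for illustration (the exact notation and constraints are those of Tables 1 and 2), an objective of this kind combines three weighted terms: server power, network power, and the number of migrations.

```latex
% Illustrative shape of the objective, not the paper's exact formulation:
% x_s = 1 if server s is active, y_n = 1 if network device/port n is active,
% m_v = 1 if VM v is migrated; alpha, beta, gamma are the weights.
\min \;\; \alpha \sum_{s \in S} P^{\mathrm{srv}}_s \, x_s
        + \beta  \sum_{n \in N} P^{\mathrm{net}}_n \, y_n
        + \gamma \sum_{v \in V} m_v
```

Tuning the weights trades server power savings against network power savings and migration overhead, which is exactly what the experiments in Section 6 explore.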
Simulated Annealing based Resource Consolidation (SARC)
The proposed algorithm is based on the Simulated Annealing meta-heuristic, which is an optimization technique capable of finding a good approximation of the global optimum of a given function over a large search space. The Simulated Annealing starts with an initialization phase, where an initial feasible solution for the problem is constructed and an initial temperature value is set. Then, different solutions are explored by performing a number of iterations. At the end of each iteration, the
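The annealing loop described above can be sketched as follows. This is a hedged, generic Simulated Annealing skeleton, not the full SARC heuristic: the real perturbation step uses the migration desirability score and the cost evaluates the joint server and network power, whereas here both are simplified stand-ins (a toy 1-D cost is used in the usage example).

```python
import math
import random

# Generic Simulated Annealing skeleton (the SARC-specific move operator and
# cost function are replaced by simplified stand-ins).
def simulated_annealing(initial, cost, perturb, t0=100.0, cooling=0.95,
                        iters_per_temp=50, t_min=1e-3, rng=random.Random(42)):
    current = best = initial
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            candidate = perturb(current, rng)
            delta = cost(candidate) - cost(current)
            # Accept improvements always; accept worse solutions with
            # probability e^(-delta/t), which shrinks as t cools down.
            if delta < 0 or rng.random() < math.exp(-delta / t):
                current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best

# Toy usage: minimize a 1-D function as a placeholder for the placement cost.
best = simulated_annealing(5.0, cost=lambda x: (x - 2.0) ** 2,
                           perturb=lambda x, r: x + r.uniform(-1, 1))
print(round(best, 1))  # close to 2.0
```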
Experimental results
In our first experiment, we evaluate the impact of different weights in the objective function on the different types of resources we take into account. We considered a single scenario, consisting of 200 servers, 400 VMs, 14 network nodes, and three different configurations of the objective-function coefficients: the first puts more emphasis on minimizing the servers’ power consumption (), the second aims at improving the efficiency in the network () and
Conclusions and future work
In this work, we presented a joint server and network consolidation model that takes different power profiles for the switches and the physical servers into account when minimizing the total power consumption and the number of migrations inside a modern data center. The purpose of the proposed model is to provide data center management systems with a flexible way to take efficient re-allocation decisions, achieving a good trade-off among several objectives. We develop a fast solution
Acknowledgment
Parts of this work have been funded by the Knowledge Foundation of Sweden through the Profile HITS.
References (42)
- et al., Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers, Concurr. Comput. Pract. Exper. (2012)
- et al., Probabilistic consolidation of virtual machines in self-organizing cloud data centers, IEEE Trans. Cloud Comput. (2013)
- et al., Minimizing energy consumption of fat-tree data center networks, SIGMETRICS Perform. Eval. Rev. (2014)
- et al., Energy efficient utilization of resources in cloud computing systems, J. Supercomput. (2012)
- et al., Lifetime-aware cloud data centers: models and performance evaluation, Energies (2016)
- et al., Improving consolidation of virtual machines with risk-aware bandwidth oversubscription in compute clouds, INFOCOM, 2012 Proceedings IEEE (2012)
- et al., Green cloud computing: a review on green IT areas for cloud computing environment, Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on (2015)
- Costs of virtual machine live migration: a survey, 2012 IEEE Eighth World Congress on Services (2012)
- et al., The cost of a cloud: research problems in data center networks, SIGCOMM Comput. Commun. Rev. (2008)
- et al., On the network power effectiveness of data center architectures, IEEE Trans. Comput. (2015)
- Understanding data center traffic characteristics, SIGCOMM Comput. Commun. Rev.
- IEEE 802.3az: the road to energy efficient Ethernet, IEEE Commun. Mag.
- A simulated annealing based approach for power efficient virtual machines consolidation, Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on
- Energy efficient VM scheduling for cloud data centers: exact allocation and migration algorithms, 2013 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing
- Dynamic resource allocation using virtual machines for cloud computing environment, IEEE Trans. Parallel Distrib. Syst.
- An energy efficient virtual machine placement algorithm with balanced resource utilization, Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2013 Seventh International Conference on
- Virtual machine re-assignment considering migration overhead, 2012 IEEE Network Operations and Management Symposium
- Starling: minimizing communication overhead in virtualized computing platforms using decentralized affinity-aware migration, Parallel Processing (ICPP), 2010 39th International Conference on
- Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing, Futur. Gener. Comput. Syst.
- Consolidating virtual machines with dynamic bandwidth demand in data centers, Proceedings of IEEE INFOCOM
- Server consolidation with migration control for virtualized data centers, Futur. Gener. Comput. Syst.
Cited by (18)
A Service Sustainable Live Migration Strategy for Multiple Virtual Machines in Cloud Data Centers
2021, Big Data Research. Citation excerpt: On the other hand, Sun et al. [38] proposed a parallel migration strategy to migrate multiple correlated VMs over wide area networks (WANs) to optimize the average migration time and downtime. Marotta et al. [37], discussed a joint server and network consolidation model that takes into account the power efficiency of both the switches forwarding the traffic and the servers hosting the VMs. Since both migration time and downtime are directly related to the volume of memory copied, Jin et al. [35] proposed a memory-compression-based VM migration approach (MECOM) to provide fast and stable VM migration, without compromising the service quality.
Application of virtual machine consolidation in cloud computing systems
2021, Sustainable Computing: Informatics and Systems. Citation excerpt: The defeats will lead to the loss of some cloud systems services that can impose an expense of five thousand dollars each minute to the cloud-related penalty of SLAs [28]. In study [29] decreasing power usage beside VMs affinity, based on [30] decreasing power usage beside SLA criteria, based on [31,32] decreasing power usage beside NBW criteria, according to the study [33,34] decreasing power usage beside performance criteria, based on [35] decreasing power usage beside reliability criteria, also, based on [36] decreasing power usage beside SLA and performance criteria, in the design of the VM consolidation algorithm are considered. The most efficient policy to map VMs to PMs is not solely placing the highest abundance of VMs into the lowest quantity of PMs.
A survey of data center consolidation in cloud computing systems
2021, Computer Science Review. Citation excerpt: Energy consumption has aroused the interest of several researchers. Most of them concentrate on server level energy consumption, fewer are the work that treated the consumption of storage [44] and network [100,101] equipments, and energy consumption at the infrastructure level (i.e. lighting, cooling, controls) is neglected. As mentioned above, DC level energy consumption, which depends on the consumption of IT equipment and that at the network architecture level represents 40% of total consumption, which is enormous.
Evaluating impacts of traffic migration and virtual network functions consolidation on power aware resource allocation algorithms
2019, Future Generation Computer Systems. Citation excerpt: They formulated the problem using a Decision Tree model, and solved it using Monte Carlo Tree Search strategy. Moreover, [20] proposed a joint server and network consolidation model that takes into account the power efficiency of both, the switches forwarding the traffic and the servers hosting the VMs, and it powers down switch ports and routes traffic along the most energy efficient path towards the least energy consuming server under QoS constraints. Similar work was conducted by [21] who proposed an energy aware and QoS aware multi objective Ant Colony Optimization approach for virtual machine placement and consolidation, which makes a trade-off between energy efficiency, system performance, and service level agreement compliance.
Geo-Distributed Multi-Tier Workload Migration Over Multi-Timescale Electricity Markets
2023, IEEE Transactions on Services Computing
Energy efficiency in cloud computing data centers: a survey on software technologies
2023, Cluster Computing
Antonio Marotta received the M.Sc. and Ph.D. degrees from the University of Napoli “Federico II” in 2010 and 2014, respectively. He is currently a post-doc researcher at the Karlstad University. His research interests include cloud computing, critical infrastructure protection and software defined networks.
Stefano Avallone received the M.Sc. and Ph.D. degrees from the University of Napoli “Federico II” in 2001 and 2005, respectively. He is currently an Associate Professor with the Department of Computer Engineering at the University of Napoli. He was a visiting researcher at the Delft University of Technology (2003-04) and at the Georgia Institute of Technology (2005). He is on the editorial board of Elsevier Ad Hoc Networks and the technical committee of Elsevier Computer Communications. He also serves as technical program co-chair of the 12th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2016). His research interests include wireless mesh networks, 4G/5G networks and the bufferbloat problem.
Dr. Andreas Kassler received his M.Sc. degree in Mathematics / Computer Science from Augsburg University, Germany in 1995 and his Ph.D. degree in Computer Science from University of Ulm, Germany, in 2002. Currently, he is Full Professor with the Department of Computer Science at Karlstad University in Sweden, where he teaches wireless networking and advanced topics in computer networking. His main research interests include Wireless Meshed Networks, Ad-Hoc Networks, Future Internet, Multimedia Networking, QoS, and P2P systems.