A global-energy-aware virtual machine placement strategy for cloud data centers

https://doi.org/10.1016/j.sysarc.2021.102048

Abstract

Virtual machine (VM) placement is a key technique for energy optimization in cloud data centers. Previous works generally focus on placing VMs on servers efficiently to optimize the physical resources used (e.g., memory, bandwidth, CPU), the network resources used, or the cooling energy consumption. These works can optimize the energy consumption of cloud data centers in one or two aspects (e.g., servers, network, or cooling); however, they may increase energy consumption in other aspects. To address this problem, we propose a global-energy-aware VMP (virtual machine placement) strategy that reduces the total energy consumption of data centers from multiple aspects. A two-step algorithm, SAG, is designed to lower the energy consumption of cloud data centers where multiple VMs are deployed. We conduct extensive experiments to evaluate the effectiveness of SAG. Two workloads from real-world data centers are used to quantitatively measure and compare the performance of SAG against other typical algorithms. Experimental results indicate that, compared with these algorithms, our global-energy-aware VMP strategy reduces the total energy consumption of the cloud data center by 8%–24.9%.

Introduction

Cloud computing has become a rapidly growing computing paradigm in recent years. It has been widely used in daily business and innovatively explored in fields such as web search, scientific computing, and bioinformatics. Data centers, as providers of basic cloud computing services, are centralized places that accommodate computing resources. Cloud data centers allow users to access cloud services through the Internet. These data centers use virtualization technology to provide multiple virtual machine (VM) resources in the cloud, facilitating large-scale system resource management.

VMs can increase efficiency and reduce the management overhead of servers in data centers. However, how to place a large number of VMs on physical resources is an issue that data center managers must consider. The choice of VMP (virtual machine placement) strategy has a direct impact on a data center's energy consumption and its utilization of physical resources. Reasonable placement strategies can ensure that upper-layer applications and services are not affected while effectively reducing the energy consumption of cloud data centers. Some existing VMP works focus on optimizing single or multiple physical machine (PM, which in this paper denotes a server in the data center) resources such as CPU usage, memory usage, and bandwidth usage [1], [2], [3].

In addition, traditional north–south traffic, i.e., between users and the data center, is gradually giving way to east–west traffic, i.e., among servers and VMs. In fact, as reported, almost 75% of total traffic nowadays is east–west traffic inside the data center [4]. For some applications, such as those at Google and Facebook, east–west traffic accounts for an even larger share [5]. This change in traffic pattern brings more network equipment into use, leading to greater network energy consumption [4]. Some studies are devoted to reducing network costs through optimized target placement for VMs [6], [7]. However, existing VMP works fail to reduce the global energy consumption of data centers: VMs are optimized along a single target dimension, which often increases energy consumption in another dimension. For example, to reduce the energy usage of IT equipment and the network, one common method is to schedule workloads on fewer servers and shut down the idle ones [8], [9], [10]. However, such an unbalanced distribution of workloads can quickly raise the temperatures of heavily loaded servers and thus increase cooling energy consumption.

Different from existing research, this work explores a global VMP strategy that optimizes the energy consumption of data centers from multiple points of view. Specifically, to minimize the global energy consumption of data centers, we consider three optimization methods in this work: (1) to reduce the energy consumption of cooling systems, we define a new VMP model that leverages the heat-recirculation effect within data centers; with this model, we can further develop a VMP algorithm that improves the efficiency of the cooling system; (2) to reduce the energy consumption of servers, we implement a control mechanism in the VMP algorithm that limits the number of activated servers; (3) to reduce the energy consumption of the network, we place VMs with large traffic demands on nearby servers. The main contributions of our work are as follows:

  • We investigate a global-energy-consumption VMP model that takes into account both IT and non-IT resource usage. For IT resources, we consider the server, VM, and network usage models. For non-IT resources, we take into consideration the cooling system and the heat-recirculation model of data centers.

  • We introduce a two-step algorithm (SAG) to solve the NP-hard VMP problem. The first step is based on the Simulated Annealing (SA) algorithm and minimizes the cooling and server energy consumption. The second step is based on a Greedy algorithm and minimizes the network energy consumption.

  • We conduct a series of simulation-based experiments to verify the effectiveness of our SAG algorithm. Experimental results indicate that, compared with existing VMP solutions (e.g., Cluster-and-Cut [11], PPVMP [12], TSTD [13], and XINT-GA [14]), our SAG algorithm can significantly reduce the total energy consumption of data centers.

The rest of the paper is organized as follows. Section 2 summarizes related work. Definitions of different system models for data centers are presented in Section 3. Section 4 illustrates our two-step SAG algorithm. Section 5 validates the effectiveness of our SAG algorithm by a set of experiments. Finally, Section 6 concludes the paper.

Section snippets

Related works

With the increasing discussion of cloud computing in industry and academia, VMP has become a popular research area. The energy consumed by a data center can be divided into two parts: energy used by IT equipment (e.g., servers and networks) and energy used by non-IT facilities (e.g., the cooling system). The amounts of energy consumed by these two components differ across data centers [29]. For example, according to the statistics published by Dayarathna et al. [30], the energy consumption …

System model and problem formulation

The total energy consumption of a data center is composed of the power consumed by IT equipment (including PMs and network equipment) and the energy consumed by non-IT equipment (for example, cooling systems). Power consumed by other systems such as fire control, lighting, and electrical equipment is considered negligible. To reduce the energy consumption of IT equipment, we introduce the PM and network models, and for the non-IT equipment, we present the heat-recirculation …
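The decomposition described above can be written compactly. The notation below is illustrative (it does not reproduce the paper's own equation numbering or symbols):

```latex
E_{\mathrm{total}}
  \;=\;
  \underbrace{\sum_{i=1}^{N} E_{\mathrm{PM}}^{(i)} \;+\; E_{\mathrm{net}}}_{\text{IT equipment}}
  \;+\;
  \underbrace{E_{\mathrm{cooling}}}_{\text{non-IT equipment}}
```

where \(N\) is the number of PMs, \(E_{\mathrm{PM}}^{(i)}\) is the energy of the \(i\)-th PM, \(E_{\mathrm{net}}\) the network equipment energy, and \(E_{\mathrm{cooling}}\) the cooling-system energy; all symbols here are placeholders for the models the paper defines.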

SAG — a two-step algorithm

To reduce the global energy consumption of the data center, we propose a two-step algorithm, SAG. The first step of the SAG algorithm optimizes the energy consumption of the cooling system by finding the optimal solution of Eq. (11), and optimizes the servers' energy consumption by controlling the number of activated servers. The second step of the SAG algorithm reduces the energy consumption of the network by placing VMs with large traffic requirements in relatively close locations.
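The two-step structure can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-server thermal weights (`thermal_cost`), the inter-VM traffic demands (`traffic`), the cost function, and the annealing schedule are all assumed placeholders for the models and Eq. (11) defined in the paper, and capacity constraints and a real server-distance model are omitted for brevity.

```python
import math
import random

def sag_placement(num_vms, num_servers, thermal_cost, traffic,
                  iters=2000, t0=100.0, cooling_rate=0.995, seed=0):
    """Two-step placement sketch in the spirit of SAG.

    Step 1 (simulated annealing): search for a VM-to-server assignment
    minimizing a combined server + cooling energy estimate.
    Step 2 (greedy): co-locate the most traffic-heavy VM pairs to cut
    network energy.
    """
    rng = random.Random(seed)

    def energy(assign):
        # Proxy cost: number of active servers (server energy) plus a
        # heat-recirculation weight summed over each VM's host.
        return len(set(assign)) + sum(thermal_cost[s] for s in assign)

    # --- Step 1: simulated annealing over random single-VM moves ---
    assign = [rng.randrange(num_servers) for _ in range(num_vms)]
    cost, temp = energy(assign), t0
    for _ in range(iters):
        vm, new_s = rng.randrange(num_vms), rng.randrange(num_servers)
        old_s = assign[vm]
        assign[vm] = new_s
        new_cost = energy(assign)
        if new_cost > cost and rng.random() >= math.exp((cost - new_cost) / temp):
            assign[vm] = old_s          # reject the worsening move
        else:
            cost = new_cost             # accept (better, or lucky uphill move)
        temp *= cooling_rate

    # --- Step 2: greedy co-location of high-traffic VM pairs ---
    for (i, j), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
        assign[j] = assign[i]           # place heavy communicators together
    return assign
```

In a full implementation, step 2 would move the pair to *nearby* servers subject to capacity, rather than always onto the same host as here.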

Experiment simulation

In our extensive experiments, we build a simulation resembling a real-world data center to evaluate the effectiveness of our VMP strategy. To meet the 42U industry standard, the simulated data center has two rows of racks and one typical cold (hot) aisle. The cold air flow is supplied by a CRAC unit at an air speed of 8 m³/s. The data center contains ten racks, each rack is equipped with 50 servers, and each server contains eight processors and 16 GB of RAM. We assume that each VM consumes 2 …
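A quick back-of-the-envelope check of the capacity implied by this setup (the per-VM resource figure is truncated in the snippet above, so no VM count is derived here):

```python
# Capacity of the simulated data center described in the setup.
racks = 10
servers_per_rack = 50
processors_per_server = 8
ram_gb_per_server = 16

total_servers = racks * servers_per_rack                   # 500 servers
total_processors = total_servers * processors_per_server   # 4000 processors
total_ram_gb = total_servers * ram_gb_per_server           # 8000 GB of RAM
print(total_servers, total_processors, total_ram_gb)       # prints: 500 4000 8000
```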

Conclusion

In this study, we propose a global-energy-aware VMP strategy for cloud data centers that exhibits three salient features. First, a data center applying SAG can reduce cooling energy consumption and lower the likelihood of hot spots arising. Second, our SAG algorithm yields significant server energy savings by keeping the number of active servers to a minimum. Third, SAG can reduce network energy consumption by adopting the strategy of placing VM groups with large communication costs …

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work is sponsored by the National Natural Science Foundation of China under Grant No. 62072214 and Grant No. 61572232, the International Cooperation Project of Guangdong Province, China under Grant No. 2020A0505100040, and the Open Project Program of Wuhan National Laboratory for Optoelectronics, China No. 2020WNLOKF006.


References (42)

  • Cheng, Q., et al. Recent advances in optical technologies for data centers: A review. Optica (2018)

  • Xiao, S., et al. Traffic-aware virtual machine migration in topology-adaptive DCN

  • Wang, L., et al. Joint virtual machine assignment and traffic engineering for green data center networks. ACM SIGMETRICS Perform. Eval. Rev. (2014)

  • Hu, Z., et al. Time- and cost-efficient task scheduling across geo-distributed data centers. IEEE Trans. Parallel Distrib. Syst. (2017)

  • Zhou, J., et al. Cost and makespan-aware workflow scheduling in hybrid clouds. J. Syst. Archit. (2019)

  • Meng, X., et al. Improving the scalability of data center networks with traffic-aware virtual machine placement

  • Zhao, H., et al. Power-aware and performance-guaranteed virtual machine placement in the cloud. IEEE Trans. Parallel Distrib. Syst. (2018)

  • Liu, H., et al. Thermal-aware and DVFS-enabled big data task scheduling for data centers. IEEE Trans. Big Data (2018)

  • Tang, Q., et al. Energy-efficient thermal-aware task scheduling for homogeneous high-performance computing data centers: A cyber-physical approach. IEEE Trans. Parallel Distrib. Syst. (2008)

  • Mann, Z.Á. Multicore-aware virtual machine placement in cloud data centers. IEEE Trans. Comput. (2016)

  • Zhang, W., et al. Automatic memory control of multiple virtual machines on a consolidated server. IEEE Trans. Cloud Comput. (2017)

    Hao Feng is a teacher in the School of Computer and Cyberspace Security, Hainan University. He received the Ph.D. degree in computer application technology from the Computer Science Department of Jinan University. He received the B.E. degree in computer science and technology from Dalian Maritime University in 2014, and the M.S. degree in computer science and technology from Guangxi University in 2017. His current research interests include parallel and distributed computing, data center architecture, cloud computing, and resource management.

    Yuhui Deng received the Ph.D. degree in computer science from the Huazhong University of Science and Technology, in 2004. He is a professor with the Computer Science Department, Jinan University. Before joining Jinan University, he worked at EMC Corporation as a senior research scientist from 2008 to 2009. He worked as a research officer at Cranfield University in the United Kingdom from 2005 to 2008. His research interests cover green computing, cloud computing, information storage, computer architecture, performance evaluation, etc.

    Jie Li received the B.E. degree in automation from Wanjiang University of Technology, Anhui, China, in 2016, and the M.S. degree in computer science and technology from the Guangxi University for Nationalities, Nanning, China, in 2019. He is currently pursuing the Ph.D. degree with the Computer Science Department, Jinan University. His current research interests include parallel and distributed computing, data center architecture, cloud computing, and data replica placement.
