Energy Management in Data Centers with Server Setup Delay: A Semi-MDP Approximation
arXiv - CS - Performance. Pub Date: 2021-08-03, DOI: arxiv-2108.01292
Behzad Chitsaz, Ahmad Khonsari, Masoumeh Moradian, Aresh Dadlani

Energy management schemes in multi-server data centers with setup time mostly impose thresholds on the number of idle servers or waiting jobs to switch servers $\textit{on}$ or $\textit{off}$. The optimal energy management policy can, in general, be characterized as a $\textit{Markov decision process}$ (MDP), given that the system parameters evolve in a Markovian manner. The resulting optimal reward can be defined as the weighted sum of the mean power usage and the mean delay of requested jobs. For large-scale data centers, however, these models become intractable due to the colossal state-action space, making conventional algorithms inefficient in finding the optimal policy. In this paper, we propose an approximate $\textit{semi-MDP}$ (SMDP) approach, referred to as the `$\textit{multi-level SMDP}$', based on state aggregation and a Markovian analysis of the system behavior. Rather than averaging the transition probabilities of aggregated states as in typical methods, we introduce an approximate Markovian framework that calculates the transition probabilities of the proposed multi-level SMDP accurately. Moreover, near-optimal performance can be attained, at the expense of increased state-space dimensionality, by tuning the number of levels in the multi-level approach. Simulation results show that the proposed approach reduces the SMDP size while yielding better rewards compared to existing fixed threshold-based policies and aggregation methods.
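As a rough, self-contained illustration (not the authors' model or results), the sketch below sets up a toy discrete-time MDP for a data center with server setup delay: the state tracks jobs in the system plus servers that are on or still warming up, the per-step cost is a weighted sum of power usage and jobs in the system (a proxy for delay via Little's law), and an exact policy is found by value iteration. All names and parameters (K, B, lam, mu, alpha, w_power, w_delay, gamma) are illustrative assumptions; the exhaustive enumeration works only because the state space is tiny, which is what motivates the state-aggregation/SMDP approximation for realistic scales.

```python
# Toy MDP sketch (illustrative assumptions, not the paper's model):
# servers need a setup phase to turn on, switching off is instantaneous,
# and the per-step cost weighs power usage against jobs in the system.
import itertools
import numpy as np

K, B = 2, 4                          # servers and buffer size (tiny so the MDP stays enumerable)
lam, mu, alpha = 0.30, 0.25, 0.20    # per-slot arrival, per-server service, setup-completion probs
w_power, w_delay = 1.0, 2.0          # weights on mean power usage vs. mean delay (jobs in system)
gamma = 0.99                         # discount factor

# State: (n jobs in system, servers on, servers in setup), with on + setup <= K.
states = [(n, on, su)
          for n, on, su in itertools.product(range(B + 1), range(K + 1), range(K + 1))
          if on + su <= K]
idx = {s: i for i, s in enumerate(states)}
actions = range(K + 1)               # action = desired number of active (on + warming) servers

def apply_action(state, a):
    """Switching off is immediate (warming servers dropped first); switching on enters setup."""
    n, on, su = state
    if a >= on + su:
        return (n, on, su + (a - on - su))
    drop = (on + su) - a
    return (n, on - max(drop - su, 0), max(su - drop, 0))

def transitions(state):
    """One-slot dynamics: at most one arrival, departure, or setup completion.
    Assumes the event probabilities are small enough that their sum stays below one."""
    n, on, su = state
    events = [(lam if n < B else 0.0, (n + 1, on, su)),   # arrival (blocked if buffer full)
              (min(n, on) * mu,       (n - 1, on, su)),   # service completion
              (su * alpha,            (n, on + 1, su - 1))]  # a warming server becomes available
    events = [(p, s) for p, s in events if p > 0.0]
    events.append((1.0 - sum(p for p, _ in events), state))  # no event this slot
    return events

def cost(state):
    """Weighted power (on + warming servers) plus delay proxy (jobs in system)."""
    n, on, su = state
    return w_power * (on + su) + w_delay * n

# Value iteration over the exact, non-aggregated MDP.
V = np.zeros(len(states))
for _ in range(500):
    V_new = np.empty_like(V)
    for i, s in enumerate(states):
        V_new[i] = min(
            cost(apply_action(s, a))
            + gamma * sum(p * V[idx[s2]] for p, s2 in transitions(apply_action(s, a)))
            for a in actions)
    V = V_new

print("Discounted cost-to-go from the empty, all-off state:", V[idx[(0, 0, 0)]])
```

Even at K = 2 servers and a buffer of 4 jobs this enumerates every state-action pair; the number of states grows rapidly with the server count and buffer size, which is the intractability the multi-level SMDP aggregation is designed to avoid.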

Updated: 2021-08-04