ADRL: A Hybrid Anomaly-aware Deep Reinforcement Learning-based Resource Scaling in Clouds
IEEE Transactions on Parallel and Distributed Systems (IF 5.3) Pub Date: 2021-03-01, DOI: 10.1109/tpds.2020.3025914
Sara Kardani-Moghaddam, Rajkumar Buyya, Kotagiri Ramamohanarao

The virtualization concept and elasticity feature of cloud computing enable users to request resources on demand under a pay-as-you-go model. However, the high flexibility of this model makes timely resource scaling more complex. A variety of techniques, such as threshold-based rules, time series analysis, and control theory, have been used to increase the efficiency of dynamic resource scaling. However, the inherent dynamicity of cloud-hosted applications requires autonomic and adaptable systems that learn from the environment in real time. Reinforcement Learning (RL) is a paradigm in which agents monitor their environment and regularly take actions based on the observed states. RL, however, struggles with high-dimensional state spaces. Deep-RL models are a recent breakthrough for modeling and learning in such complex state spaces. In this article, we propose a Hybrid Anomaly-aware Deep Reinforcement Learning-based Resource Scaling (ADRL) framework for dynamic scaling of resources in the cloud. ADRL takes advantage of anomaly detection techniques to increase the stability of decision-makers by triggering actions only in response to identified anomalous states in the system. Two levels of decision-makers, global and local, are introduced to handle the required scaling actions. An extensive set of experiments covering different types of anomaly problems shows that ADRL can significantly improve the quality of service with fewer actions and increased system stability.
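The core idea the abstract describes, gating the decision-maker behind an anomaly detector so that normal fluctuations trigger no scaling action, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sliding-window z-score detector, the utilization threshold, and the class/function names (`AnomalyGate`, `scale_decision`) are all hypothetical stand-ins for ADRL's actual anomaly-detection and Deep-RL components.

```python
from collections import deque

class AnomalyGate:
    """Sliding-window z-score detector: flags states whose utilization
    deviates strongly from the recent mean. A hypothetical stand-in for
    ADRL's anomaly-detection component."""

    def __init__(self, window=20, threshold=3.0, min_samples=5):
        self.history = deque(maxlen=window)
        self.threshold = threshold
        self.min_samples = min_samples

    def is_anomalous(self, value):
        anomalous = False
        if len(self.history) >= self.min_samples:
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            # Floor the std to avoid flagging tiny jitter around a flat signal.
            std = max(var ** 0.5, 0.01)
            anomalous = abs(value - mean) / std > self.threshold
        self.history.append(value)
        return anomalous

def scale_decision(cpu_util, gate):
    """Consult the (stubbed) decision-maker only on anomalous states,
    so normal fluctuations produce no action: fewer actions, more stability.
    In ADRL proper this step would be a Deep-RL policy, not a rule."""
    if not gate.is_anomalous(cpu_util):
        return "no-op"
    return "scale-out" if cpu_util > 0.5 else "scale-in"
```

Feeding the gate a stable utilization trace yields only no-ops; a sudden spike is flagged as anomalous and forwarded to the decision-maker, which is the stability benefit the abstract claims.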

Updated: 2021-03-01