Towards cost-effective service migration in mobile edge: A Q-learning approach
Journal of Parallel and Distributed Computing (IF 3.8) | Pub Date: 2020-08-26 | DOI: 10.1016/j.jpdc.2020.08.008
Yang Wang, Shan Cao, Hongshuai Ren, Jianjun Li, Kejiang Ye, Chengzhong Xu, Xi Chen

Service migration in mobile edge computing is a promising approach to improving the quality of service (QoS) for mobile users while reducing the network operational cost for service providers. However, these benefits are not free: migration incurs bulk-data transfer and possible service disruption, which can in turn increase the overall service cost. To retain the benefits of service migration while minimizing its cost across edge nodes, in this paper we leverage reinforcement learning (RL) to design a cost-effective framework, called Mig-RL, for service migration in a mobile edge environment, with the reduction of total service cost as its goal. Mig-RL exploits the infrastructure of the edge network and deploys a migration agent that uses Q-learning to learn the optimal policy with respect to the service migration status. Mig-RL differs from existing work in several major respects. First, we fully exploit the nature of this problem in a modest migration space, which allows us to constrain the number of service replicas so that the defined state–action space can be handled effectively, as opposed to methods that must always approximate a huge state–action space to reach policy optimality. Second, we advocate a migration policy base that acts as a cache, saving the learning process by retrieving the most effective policy whenever a similar migration pattern recurs over time. Finally, by exploiting the idea of software-defined networking, we also investigate an efficient implementation of Mig-RL in the mobile edge network. Experimental results on real and synthesized access sequences show that, compared with selected existing algorithms, Mig-RL substantially reduces service costs while efficiently improving QoS by adapting to changes in mobile access patterns.
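To make the Q-learning formulation concrete, the sketch below shows a tabular agent deciding whether to migrate a service replica as a user moves between edge nodes. This is a hedged illustration, not the paper's Mig-RL implementation: the state encoding (service node, user node), the linear cost model with `migration_cost` and `hop_cost`, and the synthetic access trace are all invented for this example.

```python
import random
from collections import defaultdict

class MigrationAgent:
    """Minimal tabular Q-learning agent for service-migration decisions.
    Illustrative only: state, actions, and hyperparameters are assumptions."""

    def __init__(self, n_nodes, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_nodes = n_nodes          # number of edge nodes
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor
        self.epsilon = epsilon          # exploration probability
        # Q-table over (state, action); state = (service_node, user_node),
        # action = node chosen to host the service next.
        self.q = defaultdict(float)

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randrange(self.n_nodes)
        return max(range(self.n_nodes), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning (off-policy TD) update.
        best_next = max(self.q[(next_state, a)] for a in range(self.n_nodes))
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def cost(service_node, user_node, action, migration_cost=2.0, hop_cost=1.0):
    """Hypothetical cost model: pay migration_cost to move the service,
    plus hop_cost per hop of distance between the user and the service."""
    move = migration_cost if action != service_node else 0.0
    return move + hop_cost * abs(action - user_node)

# Train on a short synthetic access trace over 4 edge nodes.
random.seed(0)
agent = MigrationAgent(n_nodes=4)
trace = [0, 0, 1, 2, 2, 3, 3, 3] * 200   # repeating mobility pattern
service = 0
for user in trace:
    state = (service, user)
    action = agent.choose(state)
    reward = -cost(service, user, action)  # minimizing cost = maximizing reward
    service = action
    agent.update(state, action, reward, (service, user))
```

The agent learns when a migration's one-off transfer cost is outweighed by the recurring access cost of serving a distant user, which mirrors the cost trade-off the abstract describes; the policy-base cache in Mig-RL would sit on top of such an agent, reusing learned policies for recurring access patterns.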




Updated: 2020-09-03