Seek Common While Shelving Differences: Orchestrating Deep Neural Networks for Edge Service Provisioning
IEEE Journal on Selected Areas in Communications (IF 16.4) Pub Date: 2021-01-01, DOI: 10.1109/jsac.2020.3036953
Lixing Chen, Jie Xu

Edge computing (EC) platforms, which enable Application Service Providers (ASPs) to deploy applications in close proximity to users, are providing ultra-low latency and location awareness to a rich portfolio of services. Because monetary costs are incurred for renting computing resources on edge servers to enable service provisioning, an ASP has to decide carefully where to deploy its application and how many resources are needed to deliver satisfactory performance. However, the service provisioning problem exhibits complex correlations with multifarious factors in EC systems, ranging from user behavior to computation offloading, which are difficult to capture fully with mathematical models and also defeat traditional machine learning techniques because of the resulting high-dimensional state space. The recent success of deep learning (DL) offers new tools for addressing this problem. While previous works provide valuable insights into applying DL techniques, e.g., distributed DL, deep reinforcement learning (DRL), and multi-agent DL, in EC systems, these techniques cannot by themselves handle the distributed and heterogeneous nature of EC systems. To address these limitations, we propose a novel framework based on multi-agent DRL, distributed neural network orchestration (N2O), and knowledge distillation. The multi-agent DRL enables edge servers to learn deep neural networks that shelve the distinct features learned at local edge sites, thereby catering to the heterogeneity of EC systems. N2O coordinates edge servers in a fully distributed manner toward the common goal of maximizing the ASP's reward; it requires only local communication during execution and provides provable performance guarantees. Knowledge distillation is further applied to the N2O policy to reduce communication overhead and stabilize decision-making. We also carry out systematic experiments to show the advantages of our method over state-of-the-art alternatives.
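For readers unfamiliar with the policy-distillation step mentioned in the abstract, the following is a minimal, hypothetical sketch (in PyTorch) of how a compact student policy could be trained to mimic a larger teacher policy's action distribution via a KL-divergence loss. The network sizes, temperature, and synthetic training data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of policy distillation: a small "student" network is
# trained to match the action distribution of a larger "teacher" policy
# (e.g., one produced by a DRL procedure such as N2O). Shapes, temperature,
# and the random placeholder states below are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS, TEMPERATURE = 32, 8, 2.0

teacher = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU(),
                        nn.Linear(256, 256), nn.ReLU(),
                        nn.Linear(256, N_ACTIONS))          # large policy network
student = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                        nn.Linear(64, N_ACTIONS))           # compact policy network

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    states = torch.randn(128, STATE_DIM)                    # placeholder edge-state batch
    with torch.no_grad():
        # softened teacher action distribution
        teacher_probs = F.softmax(teacher(states) / TEMPERATURE, dim=-1)
    student_log_probs = F.log_softmax(student(states) / TEMPERATURE, dim=-1)
    # KL-divergence distillation loss between teacher and student policies
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, only the smaller student network needs to be exchanged or executed at each edge server, which is one way a distilled policy can reduce communication overhead; the details of the paper's actual scheme are given in the full text.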

Updated: 2021-01-01