HierTrain: Fast Hierarchical Edge AI Learning With Hybrid Parallelism in Mobile-Edge-Cloud Computing
IEEE Open Journal of the Communications Society (IF 6.3), Pub Date: 2020-05-15, DOI: 10.1109/ojcoms.2020.2994737
Deyin Liu, Xu Chen, Zhi Zhou, Qing Ling

Nowadays, deep neural networks (DNNs) are the core enablers for many emerging edge AI applications. Conventional approaches for training DNNs are generally implemented at central servers or cloud centers for centralized learning, which is typically time-consuming and resource-demanding due to the transmission of a large number of data samples from the edge device to the remote cloud. To overcome these disadvantages, we consider accelerating the learning process of DNNs under the Mobile-Edge-Cloud Computing (MECC) paradigm. In this paper, we propose HierTrain, a hierarchical edge AI learning framework, which efficiently deploys the DNN training task over the hierarchical MECC architecture. We develop a novel hybrid parallelism method, which is the key to HierTrain, to adaptively assign the DNN model layers and the data samples across the three levels of edge device, edge server, and cloud center. We then formulate the problem of scheduling the DNN training tasks at both layer-granularity and sample-granularity. Solving this optimization problem enables us to achieve the minimum training time. We further implement a hardware prototype consisting of an edge device, an edge server and a cloud server, and conduct extensive experiments on it. Experimental results demonstrate that HierTrain can achieve up to a $6.9\times$ speedup compared to the cloud-based hierarchical training approach.
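To make the hybrid-parallelism idea concrete, below is a minimal, hypothetical sketch of the kind of schedule search the abstract describes: it brute-forces how many front layers the device and the edge server train locally (layer granularity) and how the mini-batch is split across the three tiers (sample granularity), keeping the assignment with the smallest estimated per-iteration time. The cost model, constants, and function names are illustrative assumptions for this sketch, not HierTrain's actual formulation.

```python
# Hypothetical sketch only: toy cost model and exhaustive search over layer/sample
# assignments across device, edge server, and cloud; all constants are placeholders.
from itertools import product

LAYER_COST = [4.0, 3.0, 2.0, 2.0, 1.0]               # per-sample cost of each layer (arbitrary units)
SPEED = {"device": 1.0, "edge": 4.0, "cloud": 16.0}   # relative compute speed of each tier
UPLINK = {"device": 0.6, "edge": 0.2}                 # per-sample cost to ship activations upward
BATCH = 32                                            # mini-batch size
L = len(LAYER_COST)


def compute(first, last, n, tier):
    """Estimated time for n samples over layers [first, last) on the given tier."""
    return n * sum(LAYER_COST[first:last]) / SPEED[tier]


def iteration_time(a, b, n_dev, n_edge):
    """One iteration: the device trains layers [0, a) for n_dev samples, the edge
    server trains layers [0, b) for n_edge samples, and the cloud finishes those
    samples and fully trains the rest; device and edge are assumed to run in parallel."""
    n_cloud = BATCH - n_dev - n_edge
    dev = compute(0, a, n_dev, "device") + n_dev * UPLINK["device"]
    edge = compute(0, b, n_edge, "edge") + n_edge * UPLINK["edge"]
    cloud = (compute(a, L, n_dev, "cloud")
             + compute(b, L, n_edge, "cloud")
             + compute(0, L, n_cloud, "cloud"))
    return max(dev, edge) + cloud


best = min(
    ((iteration_time(a, b, nd, ne), a, b, nd, ne)
     for a, b in product(range(L + 1), repeat=2)
     for nd in range(BATCH + 1)
     for ne in range(BATCH + 1 - nd)),
    key=lambda x: x[0],
)
print("est. time %.1f  layers: device=%d edge=%d  samples dev/edge/cloud=%d/%d/%d"
      % (best[0], best[1], best[2], best[3], best[4], BATCH - best[3] - best[4]))
```

In this toy model, faster tiers pull more layers and samples upward until the communication cost of shipping activations outweighs the compute savings, which is the trade-off the scheduling problem in the paper optimizes.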

Updated: 2020-05-15