Communication-efficient asynchronous federated learning in resource-constrained edge computing
Computer Networks (IF 5.6) Pub Date: 2021-08-26, DOI: 10.1016/j.comnet.2021.108429
Jianchun Liu, Hongli Xu, Yang Xu, Zhenguo Ma, Zhiyuan Wang, Chen Qian, He Huang

Federated learning (FL) has been widely used to train machine learning models over massive data in edge computing. However, existing FL solutions may incur long training times and/or high resource (e.g., bandwidth) costs, and thus cannot be directly applied to resource-constrained edge nodes such as base stations and access points. In this paper, we propose a novel communication-efficient asynchronous federated learning (CE-AFL) mechanism, in which the parameter server aggregates local model updates from only a fraction α (0 < α < 1) of all edge nodes, taken in their order of arrival in each epoch. As a case study, we design efficient algorithms to determine the optimal value of α under bandwidth constraints for two cases of CE-AFL: a single learning task and multiple learning tasks. We formally prove the convergence of the proposed algorithm. We evaluate its performance through experiments on a Jetson TX2 and a deep learning workstation, as well as extensive simulations. Both experimental and simulation results on classical models and datasets demonstrate the effectiveness of the proposed mechanism and algorithms. For example, compared with state-of-the-art solutions, CE-AFL reduces training time by about 69% while achieving similar accuracy, and improves the accuracy of the trained models by about 18% under resource constraints.
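To make the aggregation rule concrete, below is a minimal Python sketch of one CE-AFL epoch at the parameter server: it blocks until the first ceil(α·N) local updates arrive and averages only those, leaving stragglers to later epochs. The function name ce_afl_aggregate, the queue-based transport, and the plain-averaging rule are illustrative assumptions, not the paper's exact protocol (which may use a weighted aggregation).

import numpy as np
from math import ceil
from queue import Queue

def ce_afl_aggregate(update_queue: Queue, num_nodes: int, alpha: float) -> np.ndarray:
    """One CE-AFL epoch: take the first ceil(alpha * N) local model
    updates in arrival order and average only those."""
    k = max(1, ceil(alpha * num_nodes))                  # number of earliest updates to use
    earliest = [update_queue.get() for _ in range(k)]    # get() blocks until an update arrives
    return np.mean(earliest, axis=0)                     # simple averaging (an assumption)

# Toy usage: 10 simulated edge nodes, server aggregates the 3 earliest (alpha = 0.3).
if __name__ == "__main__":
    q: Queue = Queue()
    for node_id in range(10):                            # pretend each node has pushed its update
        q.put(np.full(4, float(node_id)))
    print(ce_afl_aggregate(q, num_nodes=10, alpha=0.3))  # -> [1. 1. 1. 1.]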




Updated: 2021-09-03