Federated Learning Over Wireless Networks: Convergence Analysis and Resource Allocation
IEEE/ACM Transactions on Networking (IF 3.0). Pub Date: 2020-11-17. DOI: 10.1109/tnet.2020.3035770
Canh T. Dinh, Nguyen H. Tran, Minh N. H. Nguyen, Choong Seon Hong, Wei Bao, Albert Y. Zomaya, Vincent Gramoli

There is increasing interest in a fast-growing machine learning technique called Federated Learning (FL), in which model training is distributed over mobile user equipment (UEs), exploiting the UEs' local computation and training data. Despite advantages such as preserving data privacy, FL still faces challenges of heterogeneity across UEs' data and physical resources. To address these challenges, we first propose FEDL, an FL algorithm that can handle heterogeneous UE data under no assumptions beyond strongly convex and smooth loss functions. We provide a convergence rate characterizing the trade-off between the local computation rounds each UE uses to update its local model and the global communication rounds used to update the FL global model. We then formulate the deployment of FEDL over wireless networks as a resource allocation optimization problem that captures the trade-off between the FEDL convergence wall-clock time and the energy consumption of UEs with heterogeneous computing and power resources. Even though the wireless resource allocation problem of FEDL is non-convex, we exploit the problem's structure to decompose it into three sub-problems, analyze their closed-form solutions, and draw insights for problem design. Finally, we empirically evaluate the convergence of FEDL with PyTorch experiments and provide extensive numerical results for the wireless resource allocation sub-problems. Experimental results show that FEDL outperforms the vanilla FedAvg algorithm in terms of convergence rate and test accuracy in various settings.
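The local-computation vs. global-communication trade-off described above can be illustrated with a minimal FL loop. The sketch below is a hypothetical simplification in the spirit of FedAvg-style training, not the paper's exact FEDL update rule: each UE holds its own heterogeneous least-squares data (a strongly convex, smooth loss), runs `k_local` local gradient steps, and the server averages the resulting models over `k_global` communication rounds. All names (`run_fl`, `k_local`, `k_global`) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_ues, dim = 5, 3

# Heterogeneous local data: each UE i holds its own least-squares problem
# (A_i, b_i), generated from a slightly different ground-truth model.
w_true = rng.normal(size=dim)
A = [rng.normal(size=(20, dim)) for _ in range(num_ues)]
b = [A[i] @ (w_true + 0.1 * rng.normal(size=dim)) + 0.1 * rng.normal(size=20)
     for i in range(num_ues)]

def local_loss(w, i):
    # Strongly convex, smooth local objective of UE i.
    r = A[i] @ w - b[i]
    return 0.5 * r @ r / len(b[i])

def local_grad(w, i):
    return A[i].T @ (A[i] @ w - b[i]) / len(b[i])

def global_loss(w):
    # Global FL objective: average of the UEs' local losses.
    return sum(local_loss(w, i) for i in range(num_ues)) / num_ues

def run_fl(k_global=30, k_local=5, lr=0.1):
    w = np.zeros(dim)
    for _ in range(k_global):              # global communication rounds
        local_models = []
        for i in range(num_ues):
            wi = w.copy()
            for _ in range(k_local):       # local computation rounds at UE i
                wi -= lr * local_grad(wi, i)
            local_models.append(wi)
        w = np.mean(local_models, axis=0)  # server-side model averaging
    return w

w0 = np.zeros(dim)
w_final = run_fl()
print(f"global loss: {global_loss(w0):.4f} -> {global_loss(w_final):.4f}")
```

Increasing `k_local` reduces how often models must be communicated but lets local models drift toward their own optima on heterogeneous data; this is the trade-off the paper's convergence rate characterizes (and, with per-UE compute and transmit energy attached to the two loop levels, the quantity its wireless resource allocation problem optimizes).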

Updated: 2020-11-17