Fast-Convergent Federated Learning
IEEE Journal on Selected Areas in Communications (IF 16.4), Pub Date: 2021-01-01, DOI: 10.1109/jsac.2020.3036952
Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher G. Brinton, Mung Chiang, H. Vincent Poor

Federated learning has recently emerged as a promising solution for distributing machine learning tasks across modern networks of mobile devices. Recent studies have obtained lower bounds on the expected decrease in model loss achieved through each round of federated learning. However, convergence generally requires a large number of communication rounds, which induces delay in model training and is costly in terms of network resources. In this paper, we propose a fast-convergent federated learning algorithm, called $\mathsf{FOLB}$, which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed. We first theoretically characterize a lower bound on the improvement obtainable in each round if devices are selected according to the expected improvement their local models will provide to the current global model. Then, we show that $\mathsf{FOLB}$ attains this bound under uniform sampling by weighting device updates according to their gradient information. $\mathsf{FOLB}$ handles both communication and computation heterogeneity across devices by adapting its aggregation according to estimates of each device's capability to contribute to the updates. We evaluate $\mathsf{FOLB}$ against existing federated learning algorithms and experimentally show its improvements in trained model accuracy, convergence speed, and/or model stability across various machine learning tasks and datasets.
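For intuition, the following is a minimal NumPy sketch of one aggregation round built around the two ideas stated in the abstract: devices are sampled uniformly, and their updates are weighted by how strongly their local gradients align with an estimate of the global gradient. The `Device` interface (`local_gradient`, `local_update`), the inner-product weighting, and the non-negativity clipping are illustrative assumptions for this sketch, not the paper's exact update rule.

```python
import random
import numpy as np

def folb_style_round(global_model, devices, sample_size, lr=0.1):
    """One aggregation round in the spirit of the abstract: sample devices
    uniformly, then weight their updates by how well each local gradient
    aligns with an estimate of the global gradient.

    The Device interface (local_gradient / local_update), the inner-product
    weighting, and the clipping below are illustrative assumptions, not the
    paper's exact update rule.
    """
    sampled = random.sample(devices, sample_size)  # uniform sampling

    # Each sampled device reports its local gradient and its local update
    # (e.g., the result of a few local SGD epochs on its own data).
    local_grads = [d.local_gradient(global_model) for d in sampled]
    local_updates = [d.local_update(global_model, lr) for d in sampled]

    # Estimate the global gradient from the sampled devices' gradients.
    global_grad_est = np.mean(local_grads, axis=0)

    # Weight each update by the (non-negative) alignment of its gradient
    # with the global gradient estimate; well-aligned devices count more.
    raw = np.array([max(float(np.dot(g, global_grad_est)), 0.0) for g in local_grads])
    if raw.sum() > 0:
        weights = raw / raw.sum()
    else:
        weights = np.full(len(sampled), 1.0 / len(sampled))

    # Aggregate the weighted local updates into the new global model.
    return global_model + sum(w * u for w, u in zip(weights, local_updates))
```

In this sketch, devices whose local gradients point in roughly the same direction as the global gradient estimate receive larger aggregation weights, which is one way to favor updates expected to improve the current global model.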

Updated: 2021-01-01