Fast-Convergent Federated Learning With Adaptive Weighting
IEEE Transactions on Cognitive Communications and Networking (IF 8.6), Pub Date: 2021-05-27, DOI: 10.1109/tccn.2021.3084406
Hongda Wu, Ping Wang

Federated learning (FL) enables resource-constrained edge nodes to collaboratively learn a global model under the orchestration of a central server while keeping privacy-sensitive data locally. Non-independent-and-identically-distributed (non-IID) data samples across participating nodes slow model training and impose additional communication rounds for FL to converge. In this paper, we propose the Federated Adaptive Weighting (FedAdp) algorithm, which aims to accelerate model convergence in the presence of nodes with non-IID datasets. Through theoretical and empirical analysis, we observe an implicit connection between a node's contribution to the global model aggregation and the data distribution on that node. We then propose to adaptively assign different weights for updating the global model based on node contribution in each training round. The contribution of a participating node is first measured by the angle between its local gradient vector and the global gradient vector, and the weight is then quantified by a designed non-linear mapping function. This simple yet effective strategy dynamically reinforces positive (and suppresses negative) node contributions, drastically reducing the number of communication rounds. Its superiority over the commonly adopted Federated Averaging (FedAvg) algorithm is verified both theoretically and experimentally. With extensive experiments performed in PyTorch and PySyft, we show that FL training with FedAdp can reduce the number of communication rounds by up to 54.1% on the MNIST dataset and up to 45.4% on the FashionMNIST dataset, compared to the FedAvg algorithm.
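To make the weighting idea concrete, the sketch below shows one way angle-based adaptive aggregation weights could be computed with NumPy. It is a minimal illustration under assumptions of our own: the function names (fedadp_weights, aggregate), the Gompertz-style mapping, and the softmax normalization are stand-ins for illustration, not the paper's exact design.

```python
import numpy as np

def flatten(grads):
    """Concatenate per-layer gradient arrays into one flat vector."""
    return np.concatenate([g.ravel() for g in grads])

def fedadp_weights(local_grads, alpha=5.0):
    """Sketch of contribution-based adaptive weighting.

    local_grads: list of flattened per-node gradient vectors.
    Returns one aggregation weight per node (weights sum to 1).
    """
    # Global gradient taken as the plain average of the local gradients.
    global_grad = np.mean(local_grads, axis=0)

    scores = []
    for g in local_grads:
        # Contribution measured by the angle between local and global gradients.
        cos = np.dot(g, global_grad) / (
            np.linalg.norm(g) * np.linalg.norm(global_grad) + 1e-12)
        theta = np.arccos(np.clip(cos, -1.0, 1.0))
        # Non-linear mapping from angle to contribution score: a Gompertz-style
        # decreasing function of theta (assumed here; smaller angle -> higher score).
        scores.append(alpha * (1.0 - np.exp(-np.exp(-alpha * (theta - 1.0)))))

    # Softmax normalization so the aggregation weights sum to one.
    scores = np.asarray(scores)
    return np.exp(scores) / np.sum(np.exp(scores))

def aggregate(local_models, weights):
    """Weighted aggregation of flattened node model updates into the global model."""
    return sum(w * m for w, m in zip(weights, local_models))
```

In this reading, a node whose local gradient points in nearly the same direction as the global gradient (small angle) receives a larger weight, while a node whose update conflicts with the global direction is down-weighted, which is the qualitative behavior the abstract describes.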

Updated: 2021-05-27