FedPA: An adaptively partial model aggregation strategy in Federated Learning
Computer Networks ( IF 4.4 ) Pub Date : 2021-09-14 , DOI: 10.1016/j.comnet.2021.108468
Juncai Liu 1 , Jessie Hui Wang 1 , Chenghao Rong 1 , Yuedong Xu 2 , Tao Yu 1 , Jilong Wang 1

Federated Learning has attracted increasing interest as a promising approach to exploiting the large amounts of data stored on network edge devices. Federated Averaging (FedAvg) is the most widely adopted Federated Learning framework. In Federated Averaging, the server waits in each round until either all client models are received or a pre-configured timer expires before computing the global model; it therefore suffers severely from participant devices with weak computation and/or communication capability, a form of the straggler problem. In this paper we design FedPA, a framework based on a partial model aggregation strategy, in which the server waits for only an appropriate number of device models (referred to as the aggregation number) in each round. Our experiments show that the accuracy loss of the aggregated global model in a single round is not significant if the aggregation number is chosen carefully. We propose a waiting strategy that determines the aggregation number dynamically in each round; the aggregation number is adaptive so as to trade off single-round training time against the expected number of rounds needed to reach the target accuracy. Stale models are also included in aggregation when they arrive, and their positive contribution and negative effect are carefully evaluated and reflected in the aggregation strategy. Experiments show that FedPA outperforms the baseline strategy FedAvg and three other algorithms, FedAsync, FLANP, and AD-SG. It works well in all scenarios across different distributions of data samples among devices (characterized by the non-IID ratio) and different computation/communication capabilities (characterized by the level of heterogeneity). Experiments also show that FedPA is robust when a certain amount of noise is added to client inputs for privacy reasons.
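The core mechanism the abstract describes, aggregating only the first k client models to arrive and later mixing in stale models with a reduced weight, can be sketched as follows. This is an illustrative toy implementation, not the paper's exact algorithm: the equal weighting of fresh models, the exponential staleness discount `alpha ** staleness`, and the function names are all assumptions made for the example.

```python
def fedpa_round(client_updates, k, stale_updates=(), alpha=0.5):
    """One round of partial model aggregation (illustrative sketch).

    client_updates : list of model-weight vectors, ordered by arrival time;
                     only the first k (the "aggregation number") are used.
    stale_updates  : iterable of (staleness, weights) pairs from earlier
                     rounds; each is blended in with a weight discounted
                     exponentially by its staleness (hypothetical scheme).
    """
    fresh = client_updates[:k]
    # FedAvg-style equal-weight average over the k fastest clients only,
    # so the round does not block on stragglers.
    new_w = [sum(ws) / k for ws in zip(*fresh)]
    for staleness, model in stale_updates:
        gamma = alpha ** staleness  # older models get a smaller weight
        new_w = [(1 - gamma) * n + gamma * m for n, m in zip(new_w, model)]
    return new_w
```

For example, with two fresh 2-parameter models `[1.0, 2.0]` and `[3.0, 4.0]` and `k=2`, the round produces their plain average; adding a stale model with staleness 1 then pulls the result halfway toward it under the assumed `alpha=0.5` discount. A real system would additionally adapt `k` per round, which is the waiting strategy the paper proposes.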




Updated: 2021-09-21