Gradual Federated Learning With Simulated Annealing
IEEE Transactions on Signal Processing (IF 5.4) Pub Date: 2021-11-13, DOI: 10.1109/tsp.2021.3125137
Luong Trung Nguyen, Junhan Kim, Byonghyo Shim

Federated averaging (FedAvg) is a popular federated learning (FL) technique that updates the global model by averaging local models and then transmits the updated global model to devices for their next local update. One main limitation of FedAvg is that the average-based global model is not necessarily better than the local models in the early stage of training, so FedAvg might diverge in realistic scenarios, especially when the data is non-identically distributed across devices and the number of data samples varies significantly from device to device. In this paper, we propose a new FL technique based on simulated annealing. The key idea of the proposed technique, henceforth referred to as simulated annealing-based FL (SAFL), is to allow a device to keep its local model while the global model is still immature. Specifically, by exploiting the simulated annealing strategy, we make each device choose its local model with high probability in early iterations, when the global model is immature. Through extensive numerical experiments on various benchmark datasets, we demonstrate that SAFL outperforms the conventional FedAvg technique in terms of both convergence speed and classification accuracy.
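The abstract's core mechanism can be sketched in a few lines: the server still averages local models as in FedAvg, but each device keeps its own model with a probability that decays over rounds, following an annealing-style schedule. The function names, the exponential schedule, and the temperature parameter below are illustrative assumptions for this sketch, not the exact rule from the paper.

```python
import math
import random

def fedavg_step(local_models, sample_counts):
    """Standard FedAvg server step: weighted average of local model
    parameters, with weights proportional to each device's data size."""
    total = sum(sample_counts)
    return [
        sum(w * n for w, n in zip(coords, sample_counts)) / total
        for coords in zip(*local_models)
    ]

def choose_model(local_w, global_w, round_t, temperature=10.0, rng=random):
    """Device-side rule in the spirit of SAFL (schedule is an assumption):
    keep the local model with probability p_local = exp(-t / temperature),
    which starts at 1 and decays toward 0, so the device behaves like
    plain FedAvg once the global model has matured."""
    p_local = math.exp(-round_t / temperature)
    return local_w if rng.random() < p_local else global_w
```

At round 0 the device always keeps its local model (p_local = 1), while for large round indices it almost always adopts the averaged global model, which matches the abstract's claim that SAFL gradually transitions to conventional FedAvg behavior.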

Updated: 2021-12-03