A Decentralized Federated Learning Framework via Committee Mechanism with Convergence Guarantee
arXiv - CS - Cryptography and Security. Pub Date: 2021-08-01, DOI: arxiv-2108.00365
Chunjiang Che, Xiaoli Li, Chuan Chen, Xiaoyu He, Zibin Zheng

Federated learning allows multiple participants to collaboratively train an effective model without exposing their private data. However, this distributed training method is prone to attacks from Byzantine clients, which interfere with the training of the global model by modifying the model or uploading false gradients. In this paper, we propose a novel serverless federated learning framework, Committee Mechanism based Federated Learning (CMFL), which ensures the robustness of the algorithm with a convergence guarantee. In CMFL, a committee system is set up to screen the uploaded local gradients. The committee system selects the local gradients rated by the elected members for the aggregation procedure through a selection strategy, and replaces committee members through an election strategy. Reflecting the differing priorities of model performance and defense, two opposite selection strategies are designed to serve accuracy and robustness, respectively. Extensive experiments illustrate that CMFL achieves faster convergence and better accuracy than typical federated learning, while obtaining better robustness than traditional Byzantine-tolerant algorithms, in a decentralized manner. In addition, we theoretically analyze and prove the convergence of CMFL under different election and selection strategies, which coincides with the experimental results.
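The committee screening described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scoring function (negative Euclidean distance between a committee member's gradient and a candidate gradient) and the semantics of the two opposite strategies (`robustness` keeps the highest-scored gradients, i.e. those closest to the committee's consensus; `accuracy` keeps the lowest-scored, most dissimilar ones) are assumptions made for exposition.

```python
import numpy as np

def committee_score(member_grad, candidate_grad):
    # Hypothetical rating: a committee member rates a candidate gradient
    # by its (negative) Euclidean distance from the member's own gradient,
    # so similar gradients receive higher scores.
    return -np.linalg.norm(member_grad - candidate_grad)

def committee_aggregate(committee_grads, candidate_grads, k, strategy="robustness"):
    # Each candidate gradient is rated by every committee member;
    # its overall score is the mean rating across the committee.
    scores = [np.mean([committee_score(m, c) for m in committee_grads])
              for c in candidate_grads]
    order = np.argsort(scores)  # ascending: most dissimilar first
    if strategy == "robustness":
        chosen = order[-k:]     # keep gradients closest to the committee
    else:                       # assumed "accuracy" strategy: the opposite choice
        chosen = order[:k]      # keep the most dissimilar gradients
    # Aggregate the selected local gradients by simple averaging.
    return np.mean([candidate_grads[i] for i in chosen], axis=0)

# Toy demo: two honest committee members, two honest candidates,
# and one Byzantine candidate uploading a wildly wrong gradient.
committee = [np.array([1.0, 1.0]), np.array([1.1, 0.9])]
candidates = [np.array([1.0, 1.05]),
              np.array([0.95, 1.0]),
              np.array([100.0, -100.0])]   # Byzantine gradient
agg = committee_aggregate(committee, candidates, k=2, strategy="robustness")
```

Under the robustness strategy, the Byzantine gradient receives the lowest committee rating and is excluded from aggregation, so the aggregate stays near the honest gradients.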

Updated: 2021-08-03