Privacy-preserving Byzantine-robust federated learning
Computer Standards & Interfaces (IF 4.1) Pub Date: 2021-08-02, DOI: 10.1016/j.csi.2021.103561
Xu Ma, Yuqing Zhou, Laihua Wang, Meixia Miao

The robustness of federated learning has become a major concern, since Byzantine adversaries, who may upload false data owing to unreliable communication channels, corrupted hardware, or even malicious attacks, can be concealed among the distributed workers. Meanwhile, it has been shown that membership attacks and reverse attacks against federated learning can leak the privacy of the training data. To address these challenges, we propose a privacy-preserving Byzantine-robust federated learning scheme (PBFL) that takes both the robustness of federated learning and the privacy of the workers into account. PBFL builds on an existing Byzantine-robust federated learning algorithm and combines it with distributed Paillier encryption and zero-knowledge proofs to guarantee privacy and to filter out anomalous parameters uploaded by Byzantine adversaries. Finally, we prove that our scheme provides a higher level of privacy protection than previous Byzantine-robust federated learning algorithms.
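The privacy mechanism named in the abstract rests on the additive homomorphism of Paillier encryption: the product of two ciphertexts decrypts to the sum of their plaintexts, so a server can aggregate workers' encrypted model updates without ever seeing an individual update. The sketch below illustrates that property with a toy single-key Paillier implementation; the paper itself uses a *distributed* (threshold) variant, and the key sizes, worker protocol, and gradient encoding here are illustrative assumptions, not the authors' construction.

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p, q):
    # Paillier key generation from two distinct primes (toy sizes for illustration;
    # real deployments use primes of 1024 bits or more).
    n = p * q
    lam = lcm(p - 1, q - 1)          # Carmichael's function lambda(n)
    mu = pow(lam, -1, n)             # modular inverse of lambda mod n (Python 3.8+)
    return n, (lam, mu)              # public key n (with g = n + 1), private key (lam, mu)

def encrypt(n, m):
    # Enc(m) = (1 + n)^m * r^n mod n^2, with random r coprime to n.
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(n, priv, c):
    # Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n.
    lam, mu = priv
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

# Secure-aggregation sketch: the server multiplies the workers' ciphertexts,
# which yields an encryption of the *sum* of the plaintext updates.
n, priv = keygen(10007, 10009)
grads = [5, 12, 7]                   # plaintext updates from three workers (scaled integers)
cts = [encrypt(n, g) for g in grads]
agg = 1
for c in cts:
    agg = agg * c % (n * n)          # ciphertext product = encryption of plaintext sum
print(decrypt(n, priv, agg))         # → 24, i.e. 5 + 12 + 7
```

In PBFL the decryption key would additionally be secret-shared across parties, so no single server can decrypt an individual worker's update; the robustness side of the scheme (filtering anomalous parameters) operates on top of this encrypted aggregation.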




Updated: 2021-08-20