Privacy-Enhanced Federated Learning Against Poisoning Adversaries
IEEE Transactions on Information Forensics and Security ( IF 6.8 ) Pub Date : 2021-08-30 , DOI: 10.1109/tifs.2021.3108434
Xiaoyuan Liu , Hongwei Li , Guowen Xu , Zongqi Chen , Xiaoming Huang , Rongxing Lu

Federated learning (FL), as a distributed machine learning setting, has received considerable attention in recent years. To alleviate privacy concerns, FL essentially promises that multiple parties jointly train the model by exchanging gradients rather than raw data. However, intrinsic privacy issues still exist in FL; e.g., a user's training samples can be revealed by inferring gradients alone. Moreover, the emerging poisoning attack also poses a crucial security threat to FL. In particular, due to the distributed nature of FL, malicious users may submit crafted gradients during the training process to undermine the integrity and availability of the model. Furthermore, there is a contradiction in simultaneously addressing the two issues: privacy-preserving FL solutions are dedicated to ensuring gradient indistinguishability, whereas defenses against poisoning attacks tend to remove outliers based on their similarity. To resolve this dilemma, in this paper we aim to build a bridge between the two issues. Specifically, we present a privacy-enhanced FL (PEFL) framework that adopts homomorphic encryption as the underlying technology and provides the server with a channel to punish poisoners via effective gradient data extraction based on the logarithmic function. To the best of our knowledge, PEFL is the first effort to efficiently detect poisoning behaviors in FL under ciphertext. Detailed theoretical analyses illustrate the security and convergence properties of the scheme. Moreover, experiments conducted on real-world datasets show that PEFL can effectively defend against label-flipping and backdoor attacks, two representative poisoning attacks in FL.
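The core tension the abstract describes — detecting dissimilar (poisoned) gradients while the defense normally needs them in the clear — can be illustrated with a minimal plaintext sketch. The snippet below is purely illustrative and is not the paper's protocol: it flags user updates whose Pearson correlation with the coordinate-wise median update falls below a threshold and zeroes their aggregation weight. In PEFL itself, comparable checks are carried out over homomorphically encrypted gradients; the function name and threshold here are assumptions for the sake of the example.

```python
import numpy as np

def detect_poisoned_updates(grads, threshold=0.5):
    """Hypothetical plaintext sketch of similarity-based poisoning defense.

    Each user's gradient vector is compared (via Pearson correlation)
    against the coordinate-wise median of all submitted gradients, a
    robust reference update. Updates that deviate strongly from this
    consensus are assigned zero aggregation weight ("punished").
    """
    grads = np.asarray(grads)             # shape: (num_users, dim)
    median = np.median(grads, axis=0)     # robust reference update
    weights = []
    for g in grads:
        # Pearson correlation between this user's update and the median
        rho = np.corrcoef(g, median)[0, 1]
        # keep well-correlated updates, drop the rest
        weights.append(rho if rho >= threshold else 0.0)
    return np.array(weights)

# Four honest users submit similar gradients; one attacker submits a
# reversed (e.g. label-flipped) update pointing the opposite way.
honest = [np.array([1.0, 2.0, 3.0, 4.0]) for _ in range(4)]
poisoned = [np.array([4.0, 3.0, 2.0, 1.0])]
w = detect_poisoned_updates(honest + poisoned)
```

After this filtering step, the server would aggregate only the weighted updates, so the attacker's contribution is excluded from the global model.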

Updated: 2021-09-17