ADFL: A Poisoning Attack Defense Framework for Horizontal Federated Learning
IEEE Transactions on Industrial Informatics (IF 12.3) Pub Date: 2022-03-15, DOI: 10.1109/tii.2022.3156645
Jingjing Guo 1, Haiyang Li 1, Feiran Huang 2, Zhiquan Liu 2, Yanguo Peng 1, Xinghua Li 1, Jianfeng Ma 1, Varun G. Menon 3, Konstantin Kostromitin Igorevich 4

Federated learning has recently received widespread attention and is expected to promote the deployment of artificial intelligence technology in various fields. Privacy-preserving technologies are applied to users' local models to protect users' privacy. Such operations prevent the server from seeing each user's true model parameters, which opens a wider door for malicious users to upload poisoned parameters and drive the training result toward an ineffective model. To solve this problem, in this article we propose ADFL, a poisoning attack defense framework for horizontal federated learning systems. Specifically, we design a proof generation method in which users generate proofs that allow the server to verify whether their updates are malicious. We also propose an aggregation rule to ensure that the global model retains high accuracy. Several verification experiments were conducted, and the results show that our method can detect malicious users effectively and ensure the global model achieves high accuracy.

Updated: 2022-03-15