Detecting and mitigating poisoning attacks in federated learning using generative adversarial networks
Concurrency and Computation: Practice and Experience (IF 2) Pub Date: 2020-06-29, DOI: 10.1002/cpe.5906
Ying Zhao, Junjun Chen, Jiale Zhang, Di Wu, Michael Blumenstein, Shui Yu

In the age of the Internet of Things (IoT), large numbers of sensors and edge devices are deployed in various application scenarios; therefore, collaborative learning is widely used in IoT to implement crowd intelligence by inviting multiple participants to complete a training task. As a collaborative learning framework, federated learning is designed to preserve user data privacy: participants jointly train a global model without uploading their private training data to a third-party server. Nevertheless, federated learning is under the threat of poisoning attacks, in which adversaries upload malicious model updates to contaminate the global model. To detect and mitigate poisoning attacks in federated learning, we propose a poisoning defense mechanism that uses generative adversarial networks to generate auditing data during the training procedure and removes adversaries by auditing the accuracy of their model updates. Experiments conducted on two well-known datasets, MNIST and Fashion-MNIST, suggest that federated learning is vulnerable to poisoning attacks and that the proposed defense method can detect and mitigate them.
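To make the auditing step concrete, below is a minimal Python sketch of how a server might apply it during aggregation; it is an illustration of the idea in the abstract, not the authors' implementation. It assumes the GAN has already produced labelled auditing samples (audit_x, audit_y), that a caller-supplied evaluate_fn scores a set of model weights on that data, and that client updates are additive weight deltas; the function names and the 0.5 accuracy threshold are illustrative assumptions.

```python
import numpy as np

def audit_and_aggregate(global_weights, client_updates, audit_x, audit_y,
                        evaluate_fn, accuracy_threshold=0.5):
    """Server-side auditing step before federated averaging (sketch).

    Each candidate model (global weights plus one client's update) is
    scored on GAN-generated auditing data; clients whose audited accuracy
    falls below the threshold are treated as poisoners and excluded.
    `evaluate_fn(weights, x, y) -> float` is an assumed helper that runs
    the model with the given weights and returns its accuracy.
    """
    accepted = []
    for update in client_updates:
        # Build the candidate model this client's update would produce.
        candidate = [g + u for g, u in zip(global_weights, update)]
        if evaluate_fn(candidate, audit_x, audit_y) >= accuracy_threshold:
            accepted.append(update)  # passed the audit: likely benign
        # Otherwise the update is dropped as a suspected poisoned model.

    if not accepted:
        return global_weights  # every update failed: keep the old model

    # Plain FedAvg over the surviving updates (equal client weighting).
    averaged = [np.mean(np.stack(layer), axis=0) for layer in zip(*accepted)]
    return [g + a for g, a in zip(global_weights, averaged)]
```

Dropping a flagged update entirely matches the abstract's description of removing adversaries; in a real deployment the threshold would need to be calibrated against the accuracy distribution of benign participants.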
