MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
arXiv - CS - Cryptography and Security. Pub Date: 2022-03-16, DOI: arxiv-2203.08669
Xiaoyu Cao, Neil Zhenqiang Gong

Existing model poisoning attacks against federated learning assume that an attacker controls a large fraction of compromised genuine clients. However, such an assumption is unrealistic in production federated learning systems that involve millions of clients. In this work, we propose the first Model Poisoning Attack based on Fake clients, called MPAF. Specifically, we assume the attacker injects fake clients into a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learnt global model has indiscriminately low accuracy on many test inputs. Towards this goal, our attack drags the global model towards an attacker-chosen base model that has low accuracy. In each round of federated learning, the fake clients craft fake local model updates that point to the base model and scale them up to amplify their impact before sending them to the cloud server. Our experiments show that MPAF can significantly decrease the test accuracy of the global model even when classical defenses and norm clipping are adopted, highlighting the need for more advanced defenses.
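To make the crafting step concrete, below is a minimal sketch of the described mechanism, not the authors' released code: the fake update points from the current global model toward the attacker-chosen base model and is amplified by a scaling factor. The dict-of-numpy-arrays model representation, the function name, and the `scale` value are all illustrative assumptions.

```python
import numpy as np

def craft_mpaf_update(global_params, base_params, scale=1e6):
    """Sketch of an MPAF-style fake local update: the direction from the
    current global model toward the low-accuracy base model, scaled up
    to amplify its impact on the server's aggregation.
    NOTE: illustrative reconstruction from the abstract, not the paper's code."""
    return {name: scale * (base_params[name] - global_params[name])
            for name in global_params}

# Toy example with a two-parameter "model".
global_params = {"w": np.array([0.5, -0.2]), "b": np.array([0.1])}
base_params   = {"w": np.array([3.0,  4.0]), "b": np.array([-2.0])}

fake_update = craft_mpaf_update(global_params, base_params, scale=10.0)
print(fake_update)  # each entry points from the global model toward the base model
```

A server applying norm clipping would bound the magnitude contributed by the `scale` factor, yet the abstract reports the attack remains effective even then, since the clipped updates still point consistently toward the base model.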
