Defending Against Backdoors in Federated Learning with Robust Learning Rate
arXiv - CS - Cryptography and Security. Pub Date: 2020-07-07, DOI: arxiv-2007.03767
Mustafa Safa Ozdayi, Murat Kantarcioglu, Yulia R. Gel

Federated Learning (FL) allows a set of agents to collaboratively train a model in a decentralized fashion without sharing their potentially sensitive data. This makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks due to decentralized and unvetted data. One important line of attacks against FL is the backdoor attack. In a backdoor attack, an adversary tries to embed a backdoor trigger functionality into the model during training, which can later be activated to cause a desired misclassification. To prevent such backdoor attacks, we propose a lightweight defense that requires no change to the FL structure. At a high level, our defense is based on carefully adjusting the server's learning rate, per dimension, at each round based on the sign information of the agents' updates. We first conjecture the steps necessary to carry out a successful backdoor attack in the FL setting, and then explicitly formulate the defense based on our conjecture. Through experiments, we provide empirical evidence in support of our conjecture. We test our defense against backdoor attacks under different settings and observe that the backdoor is either completely eliminated or its accuracy is significantly reduced. Overall, our experiments suggest that our approach significantly outperforms some of the recently proposed defenses in the literature, while having minimal influence on the accuracy of the trained models.
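To make the sign-based adjustment of the server's learning rate concrete, the following is a minimal sketch of the idea as described in the abstract: for each model dimension, the server keeps the learning rate positive only when enough agents agree on the sign of the update, and flips it otherwise. This is an illustrative NumPy sketch, not the authors' implementation; the function name robust_lr_aggregate, the plain averaging of updates, and the threshold parameter theta are assumptions made for the example.

```python
import numpy as np

def robust_lr_aggregate(global_weights, agent_updates, eta=1.0, theta=4):
    """Sketch of a sign-based robust learning-rate aggregation step.

    global_weights: 1-D numpy array holding the current global model.
    agent_updates:  list of 1-D numpy arrays, one local update per agent
                    (local model minus global model).
    eta:            magnitude of the server learning rate.
    theta:          minimum sign agreement required to keep the rate at +eta.
    """
    updates = np.stack(agent_updates)                     # (num_agents, dim)
    sign_sum = np.sum(np.sign(updates), axis=0)           # per-dimension sign agreement
    lr = np.where(np.abs(sign_sum) >= theta, eta, -eta)   # flip rate where agreement is low
    avg_update = updates.mean(axis=0)                     # simple average of the updates
    return global_weights + lr * avg_update               # apply per-dimension learning rate
```

Under this sketch, dimensions that honest agents consistently push in the same direction are updated as usual, while dimensions dominated by a minority (e.g., backdoor-related directions) receive a negated learning rate, which is the intuition behind the defense.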

Updated: 2020-07-09