SAFELearning: Secure Aggregation in Federated Learning With Backdoor Detectability
IEEE Transactions on Information Forensics and Security (IF 6.8). Pub Date: 2023-05-25. DOI: 10.1109/tifs.2023.3280032
Zhuosheng Zhang, Jiarui Li, Shucheng Yu, Christian Makaya

For model privacy, local model parameters in federated learning shall be obfuscated before being sent to the remote aggregator. This technique is referred to as secure aggregation. However, secure aggregation makes model poisoning attacks such as backdooring more convenient, given that existing anomaly detection methods mostly require access to plaintext local models. This paper proposes a new federated learning technique, SAFELearning, to support backdoor detection under secure aggregation. We achieve this through two new primitives: oblivious random grouping (ORG) and partial parameter disclosure (PPD). ORG partitions participants into one-time random subgroups with group configurations oblivious to participants; PPD allows secure partial disclosure of aggregated subgroup models for anomaly detection without leaking individual model privacy. ORG is based on our construction of several new primitives, including tree-based random subgroup generation, oblivious secure aggregation, and randomized Diffie-Hellman key exchange. ORG can thwart colluding attackers from learning each other's group membership assignments with non-negligible advantage over random guessing. Backdoor attacks are detected based on the statistical distributions of subgroup-aggregated parameters across learning iterations. SAFELearning can significantly reduce backdoor model accuracy without jeopardizing the main task accuracy under common backdoor strategies. Extensive experiments show SAFELearning is robust against malicious and faulty participants, whilst being more efficient than the state-of-the-art secure aggregation protocol in terms of both communication and computation costs.
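To make the abstract's core idea concrete, here is a minimal toy sketch, not the paper's actual protocol: participants are split into random subgroups, each participant's update is obfuscated with pairwise masks that cancel within its subgroup (a stand-in for the DH-derived masks of secure aggregation), and subgroup aggregates whose statistics deviate from the rest are flagged as potentially backdoored. All function names, the scalar "model updates", and the median/MAD outlier test are illustrative assumptions.

```python
import random
import statistics

def random_subgroups(participants, group_size, seed=0):
    """Shuffle participants and split them into fixed-size subgroups
    (toy stand-in for ORG's oblivious one-time random grouping)."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + group_size]
            for i in range(0, len(shuffled), group_size)]

def masked_updates(updates, rng):
    """Add pairwise one-time masks that cancel when the subgroup is
    summed; real protocols derive these from Diffie-Hellman shares."""
    masked = updates[:]
    n = len(masked)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.uniform(-1.0, 1.0)  # hypothetical shared mask
            masked[i] += m              # participant i adds the mask
            masked[j] -= m              # participant j subtracts it
    return masked

def flag_outlier_groups(group_sums, z_thresh=2.5):
    """Flag subgroup aggregates far from the median in MAD units
    (toy anomaly test on subgroup-aggregated parameters)."""
    med = statistics.median(group_sums)
    mad = statistics.median(abs(s - med) for s in group_sums) or 1e-9
    return [k for k, s in enumerate(group_sums)
            if abs(s - med) / mad > z_thresh]

# Demo: scalar "updates" from 8 honest clients and 2 backdoored ones.
rng = random.Random(42)
updates = [rng.gauss(0.1, 0.05) for _ in range(8)] + [5.0, 5.2]
groups = random_subgroups(list(range(10)), group_size=2, seed=7)
group_sums = []
for g in groups:
    vals = masked_updates([updates[p] for p in g], rng)
    group_sums.append(sum(vals))  # masks cancel: plaintext subgroup sum
print("flagged subgroups:", flag_outlier_groups(group_sums))
```

The server only ever sees masked individual updates and plaintext subgroup sums, which mirrors the abstract's point: PPD-style partial disclosure of subgroup aggregates is enough for statistical anomaly detection without revealing any single participant's model.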

Updated: 2023-05-25