Comments on "Privacy-Enhanced Federated Learning Against Poisoning Adversaries"
IEEE Transactions on Information Forensics and Security (IF 6.3), Pub Date: 2023-01-20, DOI: 10.1109/tifs.2023.3238544
Thomas Schneider, Ajith Suresh, Hossein Yalame

Liu et al. (2021) recently proposed a privacy-enhanced framework named PEFL to efficiently detect poisoning behaviours in Federated Learning (FL) using homomorphic encryption. In this article, we show that PEFL does not preserve privacy. In particular, we illustrate that PEFL reveals the entire gradient vector of all users in the clear to one of the participating entities, thereby violating privacy. Furthermore, by pointing out multiple flaws in the proposed system, we show that an immediate fix for this issue is still insufficient to achieve privacy.
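
The abstract does not spell out the leakage mechanism, so the following is a minimal, hypothetical Python sketch (not PEFL's actual protocol) of the kind of failure being reported: if the receiving entity obtains each user's gradient masked only by a quantity that is also known to it (here a coordinate-wise median, chosen purely for illustration), it can recover every user's full gradient vector in the clear.

import numpy as np

# Hypothetical illustration, NOT the PEFL protocol: assume the server learns,
# for every user i and coordinate j, the masked value g[i][j] - m[j], and that
# the mask m (here a coordinate-wise median) is also known to the server.
rng = np.random.default_rng(0)
num_users, dim = 5, 8
gradients = rng.normal(size=(num_users, dim))   # users' private gradient vectors

median = np.median(gradients, axis=0)           # mask, assumed known to the server
leaked = gradients - median                     # masked values revealed to the server

reconstructed = leaked + median                 # server simply undoes the mask
assert np.allclose(reconstructed, gradients)    # all gradients recovered in the clear

Any mask that the receiving entity can derive or otherwise learn provides no protection; this is the essence of the privacy violation the comment reports.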

Last updated: 2024-08-26