Federated $f$-Differential Privacy
arXiv - CS - Artificial Intelligence Pub Date : 2021-02-22 , DOI: arxiv-2102.11158
Qinqing Zheng, Shuxiao Chen, Qi Long, Weijie J. Su

Federated learning (FL) is a training paradigm in which clients collaboratively learn models by repeatedly sharing information, without substantially compromising the privacy of their local sensitive data. In this paper, we introduce federated $f$-differential privacy, a new notion tailored specifically to the federated setting and based on the framework of Gaussian differential privacy. Federated $f$-differential privacy operates at the record level: it provides a privacy guarantee against adversaries for each individual record of a client's data. We then propose PriFedSync, a generic private federated learning framework that accommodates a large family of state-of-the-art FL algorithms and provably achieves federated $f$-differential privacy. Finally, we empirically demonstrate the trade-off between the privacy guarantee and prediction performance for models trained by PriFedSync on computer vision tasks.
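In the Gaussian differential privacy framework that the paper builds on, privacy is quantified by a trade-off function: a mechanism is $\mu$-GDP if distinguishing two neighboring datasets is at least as hard as distinguishing $\mathcal{N}(0,1)$ from $\mathcal{N}(\mu,1)$, giving the trade-off curve $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$ between type I error $\alpha$ and the minimal type II error. A minimal sketch of this curve (assuming SciPy is available; the function name is illustrative and not from the paper):

```python
from scipy.stats import norm

def gaussian_tradeoff(alpha, mu):
    """Trade-off function of mu-Gaussian differential privacy:
    G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu), the smallest
    achievable type II error at type I error level alpha."""
    return norm.cdf(norm.ppf(1 - alpha) - mu)

# mu = 0 means perfect privacy: the adversary can do no better
# than random guessing, so G_0(alpha) = 1 - alpha.
print(gaussian_tradeoff(0.3, 0.0))   # ~0.7

# Larger mu means weaker privacy: the curve drops lower,
# i.e., the adversary's test can achieve smaller type II error.
print(gaussian_tradeoff(0.05, 2.0) < gaussian_tradeoff(0.05, 0.5))
```

A smaller $\mu$ thus corresponds to a stronger guarantee, which is the privacy-utility trade-off the experiments measure.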

Updated: 2021-02-24