A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via $f$-Divergences
arXiv - CS - Cryptography and Security · Pub Date: 2020-01-16 · DOI: arxiv-2001.05990
Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar

We derive the optimal differential privacy (DP) parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP). Our result is based on the joint range of two $f$-divergences that underlie the approximate and the Rényi variants of differential privacy. We apply our result to the moments accountant framework for characterizing the privacy guarantees of stochastic gradient descent. Compared to the state of the art, our bounds may allow about 100 additional stochastic gradient descent iterations when training deep learning models under the same privacy budget.
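For context, the moments accountant tracks a Rényi-divergence bound per SGD step, composes these bounds additively, and converts the total to an (ε, δ)-DP guarantee at the end. The sketch below shows the classical pipeline that this paper tightens, using the standard Gaussian-mechanism RDP formula α/(2σ²) and the RDP-to-DP conversion of Mironov (2017); it is not the paper's improved joint-range bound. The function names and the order grid are illustrative, and Poisson-subsampling amplification (which the accountant also exploits for SGD) is omitted for brevity.

```python
import math

def gaussian_rdp(alpha, sigma):
    """RDP parameter of the Gaussian mechanism (sensitivity 1) at order alpha."""
    return alpha / (2 * sigma ** 2)

def rdp_to_dp(alpha, rdp_eps, delta):
    """Standard RDP -> (eps, delta)-DP conversion (Mironov, 2017):
    an (alpha, rdp_eps)-RDP mechanism is (rdp_eps + log(1/delta)/(alpha-1), delta)-DP."""
    return rdp_eps + math.log(1 / delta) / (alpha - 1)

def sgd_privacy(num_steps, sigma, delta, orders=None):
    """Compose num_steps Gaussian mechanisms under RDP, then convert to
    (eps, delta)-DP, minimizing over a grid of Renyi orders."""
    if orders is None:
        orders = [1 + x / 10 for x in range(1, 1000)]  # alpha in (1, 101)
    best = float("inf")
    for alpha in orders:
        total_rdp = num_steps * gaussian_rdp(alpha, sigma)  # RDP composes additively
        best = min(best, rdp_to_dp(alpha, total_rdp, delta))
    return best

# Hypothetical setting: 1,000 full-batch SGD steps, noise multiplier sigma = 50, delta = 1e-5.
print(f"eps ≈ {sgd_privacy(1_000, 50.0, 1e-5):.3f}")
```

The paper's contribution replaces the conversion step: it characterizes the exact joint range of the two $f$-divergences behind (ε, δ)-DP (the hockey-stick divergence) and RDP (the Rényi divergence), which yields a smaller ε for the same δ and hence buys the extra hundred or so SGD iterations at a fixed budget.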

Updated: 2020-01-17