Privacy Amplification of Iterative Algorithms via Contraction Coefficients
arXiv - CS - Machine Learning. Pub Date: 2020-01-17, DOI: arXiv:2001.06546. Shahab Asoodeh, Mario Diaz, and Flavio P. Calmon
We investigate the framework of privacy amplification by iteration, recently
proposed by Feldman et al., from an information-theoretic lens. We demonstrate
that differential privacy guarantees of iterative mappings can be determined by
a direct application of contraction coefficients derived from strong data
processing inequalities for $f$-divergences. In particular, by generalizing
Dobrushin's contraction coefficient for total variation distance to an
$f$-divergence known as $E_{\gamma}$-divergence, we derive tighter bounds on
the differential privacy parameters of the projected noisy stochastic gradient
descent algorithm with hidden intermediate updates.
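The central quantity in the abstract, the $E_{\gamma}$-divergence (hockey-stick divergence), has a simple closed form for discrete distributions: $E_{\gamma}(P\|Q) = \sum_x \max(P(x) - \gamma Q(x),\, 0)$. At $\gamma = 1$ it reduces to total variation distance, whose contraction coefficient is Dobrushin's. A minimal sketch for finite distributions (the function name and example distributions are illustrative, not from the paper):

```python
import math

def e_gamma_divergence(p, q, gamma):
    """E_gamma(P||Q) = sum_x max(P(x) - gamma*Q(x), 0) over a finite alphabet.

    gamma = 1 recovers total variation distance; gamma = exp(eps) connects
    to (eps, delta)-differential privacy: a mechanism is (eps, delta)-DP
    iff E_{exp(eps)} between output distributions on neighboring datasets
    is at most delta (in both directions).
    """
    return sum(max(pi - gamma * qi, 0.0) for pi, qi in zip(p, q))

# Two toy distributions on a binary alphabet (illustrative values).
p, q = [0.5, 0.5], [0.9, 0.1]

# gamma = 1: total variation distance = 0.4 here.
tv = e_gamma_divergence(p, q, 1.0)

# gamma = exp(eps): the DP-relevant divergence; it is non-increasing
# in gamma, so it never exceeds the TV distance.
eps = 0.5
hs = e_gamma_divergence(p, q, math.exp(eps))
```

Since $E_{\gamma}$ is non-increasing in $\gamma$, larger privacy budgets $\epsilon$ yield smaller divergence values, matching the intuition that a looser $\epsilon$ is easier to satisfy (smaller required $\delta$).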
Updated: 2020-01-22