Wide Network Learning with Differential Privacy
arXiv - CS - Data Structures and Algorithms. Pub Date: 2021-03-01, DOI: arxiv-2103.01294
Huanyu Zhang, Ilya Mironov, Meisam Hejazinia

Despite intense interest and considerable effort, the current generation of neural networks suffers a significant loss of accuracy under most practically relevant privacy training regimes. One particularly challenging class of neural networks is the wide ones, such as those deployed for NLP typeahead prediction or recommender systems. Observing that these models share something in common, an embedding layer that reduces the dimensionality of the input, we focus on developing a general approach to training these models that takes advantage of the sparsity of their gradients. More abstractly, we address the problem of differentially private Empirical Risk Minimization (ERM) for models that admit sparse gradients. We demonstrate that for non-convex ERM problems, the loss depends only logarithmically on the number of parameters, in contrast with the polynomial dependence of the general case. Following the same intuition, we propose a novel algorithm for privately training neural networks. Finally, we provide an empirical study of a DP wide neural network on a real-world dataset, which has rarely been explored in previous work.
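As added context, the sketch below illustrates in NumPy why the gradients of such models are sparse, using a toy embedding-plus-linear model and a standard DP-SGD step (per-example clipping followed by Gaussian noise). It is not the paper's algorithm; the toy squared loss, the hyperparameters, and all names (embed_dim, clip_norm, noise_mult) are illustrative assumptions. Note that standard DP-SGD adds noise to every coordinate of the wide embedding table even though each example only touches one row, which is the overhead that exploiting sparsity aims to reduce.

    # Minimal sketch, NOT the paper's algorithm: a toy embedding + linear model
    # trained with one standard DP-SGD step, showing the sparse per-example gradients.
    import numpy as np

    rng = np.random.default_rng(0)

    vocab, embed_dim = 10_000, 16                         # most parameters sit in the embedding
    E = rng.normal(scale=0.1, size=(vocab, embed_dim))    # embedding table
    w = rng.normal(scale=0.1, size=embed_dim)             # linear head

    def per_example_grads(token, y):
        """Gradient of 0.5 * (w . E[token] - y)^2 for one example.
        Only row `token` of E gets a non-zero gradient, so the gradient is sparse."""
        err = w @ E[token] - y
        gE = np.zeros_like(E)
        gE[token] = err * w        # single non-zero row
        gw = err * E[token]
        return gE, gw

    def dp_sgd_step(batch, clip_norm=1.0, noise_mult=1.0, lr=0.1):
        gE_sum, gw_sum = np.zeros_like(E), np.zeros_like(w)
        for token, y in batch:
            gE, gw = per_example_grads(token, y)
            norm = np.sqrt((gE ** 2).sum() + (gw ** 2).sum())
            scale = min(1.0, clip_norm / (norm + 1e-12))   # per-example clipping
            gE_sum += scale * gE
            gw_sum += scale * gw
        sigma = noise_mult * clip_norm
        # Standard DP-SGD adds Gaussian noise to every coordinate, dense or not;
        # the paper's focus is on doing better when the true gradients are sparse.
        gE_noisy = (gE_sum + rng.normal(scale=sigma, size=E.shape)) / len(batch)
        gw_noisy = (gw_sum + rng.normal(scale=sigma, size=w.shape)) / len(batch)
        return E - lr * gE_noisy, w - lr * gw_noisy

    batch = [(int(rng.integers(vocab)), float(rng.normal())) for _ in range(8)]
    E, w = dp_sgd_step(batch)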

Updated: 2021-03-03