Gradient Sparsification Can Improve Performance of Differentially-Private Convex Machine Learning
arXiv - CS - Machine Learning. Pub Date: 2020-11-30, DOI: arxiv-2011.14572
Farhad Farokhi

We use gradient sparsification to reduce the adverse effect of differential-privacy noise on the performance of private machine learning models. To this end, we employ compressed sensing and additive Laplace noise to evaluate differentially-private gradients. The noisy privacy-preserving gradients are then used to perform stochastic gradient descent for training machine learning models. Sparsification, achieved by setting the smallest gradient entries to zero, can reduce the convergence speed of the training algorithm. However, sparsification and compressed sensing also reduce the dimension of the communicated gradient and the magnitude of the additive noise. The interplay between these effects determines whether gradient sparsification improves the performance of differentially-private machine learning models. We investigate this analytically in the paper. We prove that, in the small-data regime with a tight privacy budget, compression can improve the performance of privacy-preserving machine learning models. However, in the big-data regime, compression does not necessarily improve performance. Intuitively, this is because the effect of the privacy-preserving noise is minimal in the big-data regime, so the gains from gradient sparsification cannot compensate for the slower convergence it causes.
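To make the pipeline in the abstract concrete, below is a minimal Python sketch of one noisy SGD step with top-k gradient sparsification and Laplace noise. The function names, the L1 clipping rule, the noise calibration (scale proportional to sensitivity divided by epsilon), and the least-squares loss are all illustrative assumptions, not the paper's exact construction; in particular, the paper's compressed-sensing encoding and decoding of the sparse gradient is omitted here.

```python
import numpy as np

def sparsify_top_k(grad, k):
    """Keep the k largest-magnitude entries of the gradient; zero the rest."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse, idx

def private_sparse_gradient(grad, k, clip_norm, epsilon):
    """Clip, sparsify, and perturb a gradient with Laplace noise.

    The clip-then-add-Laplace calibration used here is a standard choice
    assumed for illustration, not the paper's exact recipe, and the
    paper's compressed-sensing encode/decode step is left out.
    """
    # Clip in L1 norm so the released gradient has bounded sensitivity.
    grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad, 1) + 1e-12))
    sparse, idx = sparsify_top_k(grad, k)
    # Noise is added only to the k communicated entries, so the total
    # injected noise shrinks with k -- the benefit sparsification buys.
    sparse[idx] += np.random.laplace(scale=clip_norm / epsilon, size=k)
    return sparse

def private_sgd_step(w, X, y, lr=0.1, k=10, clip_norm=1.0, epsilon=0.5):
    """One noisy SGD step on a least-squares loss (an illustrative convex loss)."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * private_sparse_gradient(grad, k, clip_norm, epsilon)
```

In this sketch, increasing k retains more of the true gradient and so speeds convergence, but also enlarges the total injected noise; the abstract's claim is that the trade-off favors aggressive sparsification (small k) in the small-data, tight-privacy-budget regime and not necessarily in the big-data regime.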

Updated: 2020-12-01