Differentially Private ADMM Algorithms for Machine Learning
IEEE Transactions on Information Forensics and Security (IF 6.8), Pub Date: 2021-09-20, DOI: 10.1109/tifs.2021.3113768
Fanhua Shang, Tao Xu, Yuanyuan Liu, Hongying Liu, Longjie Shen, Maoguo Gong

In this paper, we study efficient differentially private alternating direction method of multipliers (ADMM) algorithms via gradient perturbation for many centralized machine learning problems. For smooth convex loss functions with (non-)smooth regularization, we propose the first differentially private ADMM (DP-ADMM) algorithm with a guarantee of $(\epsilon,\delta)$-differential privacy ($(\epsilon,\delta)$-DP). For the theoretical analysis, we use the Gaussian mechanism and the conversion relationship between Rényi Differential Privacy (RDP) and DP to perform a comprehensive privacy analysis of our algorithm. We then establish a new criterion to prove the convergence of the proposed algorithms, including DP-ADMM, and we also give a utility analysis of our DP-ADMM. Moreover, we propose a new accelerated DP-ADMM (DP-AccADMM) algorithm using Nesterov's acceleration technique. Finally, we conduct numerical experiments on many real-world datasets to show the privacy-utility tradeoff of the two proposed algorithms; the comparative analysis shows that DP-AccADMM converges faster and achieves better utility than DP-ADMM when the privacy budget $\epsilon$ is larger than a threshold.
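The abstract does not include the update rules themselves, so the following is only a minimal Python sketch of the two ingredients it names: a gradient-perturbed ADMM step that privatizes the primal update with the Gaussian mechanism, and the standard RDP-to-$(\epsilon,\delta)$-DP conversion used for privacy accounting. The concrete problem (L2-regularized logistic regression), the step sizes, the gradient clipping, and the helper names dp_admm, gaussian_rdp, and rdp_to_dp are illustrative assumptions, not the authors' DP-ADMM or DP-AccADMM.

import numpy as np

def gaussian_rdp(alpha, noise_multiplier, steps=1):
    # RDP of `steps` adaptive Gaussian mechanisms whose noise std is
    # noise_multiplier times the query's L2 sensitivity: each step is
    # (alpha, alpha / (2 * noise_multiplier**2))-RDP, and RDP composes additively.
    return steps * alpha / (2.0 * noise_multiplier ** 2)

def rdp_to_dp(rdp_eps, alpha, delta):
    # Standard RDP-to-DP conversion (Mironov, 2017):
    # (alpha, rdp_eps)-RDP implies (rdp_eps + log(1/delta)/(alpha-1), delta)-DP.
    return rdp_eps + np.log(1.0 / delta) / (alpha - 1.0)

def dp_admm(A, y, lam=0.1, rho=1.0, eta=0.1, T=100, sigma=10.0, clip=1.0, seed=0):
    # Hypothetical gradient-perturbed ADMM sketch for
    #   min_x (1/n) * sum_i log(1 + exp(-y_i * a_i^T x)) + (lam/2) * ||z||^2
    #   s.t.  x - z = 0.
    # Only the x-update touches the data, via one noisy linearized gradient step;
    # the z-update and the dual update are data-free and cost no extra privacy.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)
    for _ in range(T):
        # Per-example logistic-loss gradients, clipped to norm `clip` so the
        # averaged gradient has L2 sensitivity about clip / n (DP-SGD convention).
        margins = y * (A @ x)
        coeff = -y / (1.0 + np.exp(margins))
        grads = coeff[:, None] * A
        norms = np.maximum(np.linalg.norm(grads, axis=1), 1e-12)
        grads *= np.minimum(1.0, clip / norms)[:, None]
        grad = grads.mean(axis=0)
        # Gaussian mechanism: perturb the averaged gradient.
        noisy_grad = grad + rng.normal(0.0, sigma * clip / n, size=d)
        # Linearized x-update on the augmented Lagrangian, then exact z-update
        # (proximal step of the L2 regularizer) and scaled dual ascent.
        x = x - eta * (noisy_grad + rho * (x - z + u))
        z = rho * (x + u) / (lam + rho)
        u = u + x - z
    return x, z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(500, 10))
    y = np.sign(A @ rng.normal(size=10) + 0.1 * rng.normal(size=500))
    T, sigma, delta = 100, 10.0, 1e-5
    x, z = dp_admm(A, y, T=T, sigma=sigma)
    # Account for T noisy gradient steps in RDP, then convert to (eps, delta)-DP,
    # optimizing over integer RDP orders.
    eps = min(rdp_to_dp(gaussian_rdp(a, sigma, steps=T), a, delta) for a in range(2, 128))
    print(f"train accuracy {np.mean(np.sign(A @ z) == y):.3f}, eps ~ {eps:.2f} at delta = {delta}")

In this sketch the accelerated variant would replace the plain x-update with a Nesterov-style extrapolation of the primal iterate before the noisy gradient step; the privacy accounting is unchanged because the extrapolation uses only previous iterates, not the data.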

Updated: 2021-10-06