Decentralized nonconvex optimization with guaranteed privacy and accuracy
Automatica (IF 6.4) | Pub Date: 2023-01-24 | DOI: 10.1016/j.automatica.2023.110858
Yongqiang Wang , Tamer Başar

Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning involving sensitive data. Despite recent advances addressing each of the two problems separately, no results have been reported that provide theoretical guarantees on both privacy protection and saddle/maximum avoidance in decentralized nonconvex optimization. We propose a new algorithm for decentralized nonconvex optimization that enables both rigorous differential privacy and saddle/maximum-avoiding performance. The new algorithm allows the incorporation of persistent additive noise to enable rigorous differential privacy for data samples, gradients, and intermediate optimization variables without losing provable convergence, thereby circumventing the dilemma of trading accuracy for privacy in differential-privacy design. More interestingly, the algorithm is theoretically proven to efficiently guarantee accuracy by avoiding convergence to local maxima and saddle points, which has not been reported before in the literature on decentralized nonconvex optimization. The algorithm is efficient in both communication (it shares only one variable in each iteration) and computation (it is encryption-free), and hence is promising for large-scale nonconvex optimization and learning involving high-dimensional optimization parameters. Numerical experiments on both a decentralized estimation problem and an Independent Component Analysis (ICA) problem confirm the effectiveness of the proposed approach.
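To illustrate the setting the abstract describes — agents that each share a single noise-masked variable per iteration while taking local gradient steps — here is a minimal, hypothetical sketch of a differentially-private decentralized gradient scheme on a toy decentralized estimation problem. This is NOT the authors' algorithm: their contribution is retaining provable convergence under persistent noise, whereas this simple sketch attenuates the privacy mask with the step size so that it converges. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def private_decentralized_gd(grads, W, x0, steps=3000, step0=0.5,
                             noise_std=0.5, seed=1):
    """Toy privacy-masked decentralized gradient sketch (hypothetical).

    Each agent broadcasts a single noise-masked copy of its state (the
    only variable exchanged per iteration), mixes its neighbors' masked
    states via the row-stochastic weight matrix W, and takes a local
    gradient step. Unlike the paper's persistent-noise design, the mask
    here decays with the step size so this naive scheme still converges.
    """
    rng = np.random.default_rng(seed)
    n = len(grads)
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        alpha = step0 / (k + 1)                                   # diminishing step size
        shared = x + alpha * rng.normal(0.0, noise_std, size=n)   # masked shared state
        x = W @ shared - alpha * np.array([g(xi) for g, xi in zip(grads, x)])
    return x

# Toy decentralized estimation: agent i holds datum d_i and minimizes
# (x - d_i)^2; the network should agree on the average of the data.
d = np.array([1.0, 2.0, 3.0, 4.0])
grads = [lambda x, di=di: 2.0 * (x - di) for di in d]   # gradient of (x - d_i)^2

# Ring of 4 agents with self-loops; rows normalized to a mixing matrix.
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)

x = private_decentralized_gd(grads, W, x0=np.zeros(4))
print(x)  # all agents close to the data average 2.5
```

The toy problem is convex for simplicity; the paper's analysis additionally covers the nonconvex case and the avoidance of saddle points and local maxima, which this sketch makes no attempt to reproduce.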




Last updated: 2023-01-24