Differential privacy distributed learning under chaotic quantum particle swarm optimization
Computing (IF 3.3) Pub Date: 2020-10-30, DOI: 10.1007/s00607-020-00853-2
Yun Xie, Peng Li, Jindan Zhang, Marek R. Ogiela

Differential privacy has become a common framework that provides an effective way to build machine learning with privacy guarantees. Extensive research has focused on differentially private stochastic gradient descent (SGD-DP) and its variants in distributed machine learning, aiming to improve training efficiency while protecting privacy. However, SGD-DP relies on the premise of convex optimization. In large-scale distributed machine learning, the objective function is often non-convex, which not only makes gradient computation difficult but also makes the optimization prone to falling into local optima, so a truly global optimum is hard to reach. To address this issue, we propose a novel differential privacy optimization algorithm based on quantum particle swarm theory that is suitable for both convex and non-convex optimization. We further combine adaptive contraction–expansion with chaotic search to overcome premature convergence, and provide a theoretical analysis of both convergence and privacy protection. Experiments verify that the algorithm's practical performance is consistent with the theoretical analysis.
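To make the ingredients named in the abstract concrete, the following is a minimal Python sketch of a quantum-behaved particle swarm (QPSO) loop combining chaotic initialization/search (via the logistic map), a contraction–expansion coefficient, and noise-based privacy. Everything here is our own illustration, not the paper's algorithm: the names `dp_qpso`, `logistic_map`, and all parameter values are hypothetical; a linearly decaying `alpha` stands in for the paper's adaptive contraction–expansion rule, and Gaussian noise on the shared global best stands in for its calibrated differential-privacy mechanism.

```python
import numpy as np

def sphere(x):
    """Toy objective standing in for a training loss (the paper targets
    non-convex losses; a simple sphere keeps the sketch short)."""
    return float(np.sum(x ** 2))

def logistic_map(z):
    """One step of the logistic map, a classic chaotic sequence generator."""
    return 4.0 * z * (1.0 - z)

def dp_qpso(f, dim=10, n_particles=20, iters=200,
            bound=5.0, noise_std=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    # Chaotic initialization: iterate the logistic map to spread particles.
    z = rng.uniform(0.05, 0.95, size=(n_particles, dim))
    for _ in range(10):
        z = logistic_map(z)
    X = (2.0 * z - 1.0) * bound
    pbest = X.copy()
    pbest_f = np.array([f(x) for x in X])
    g = pbest[np.argmin(pbest_f)].copy()
    g_f = pbest_f.min()

    for t in range(iters):
        # Contraction-expansion coefficient: linear decay from 1.0 to 0.5
        # (a simple stand-in for the paper's adaptive schedule).
        alpha = 1.0 - 0.5 * t / iters
        mbest = pbest.mean(axis=0)
        # Illustrative privacy mechanism: perturb the globally shared best
        # before particles use it (the paper's calibration may differ).
        g_noisy = g + rng.normal(0.0, noise_std, size=dim)
        phi = rng.uniform(size=(n_particles, dim))
        u = rng.uniform(1e-12, 1.0, size=(n_particles, dim))
        # Standard QPSO position update around the attractor p.
        p = phi * pbest + (1.0 - phi) * g_noisy
        sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, -1.0, 1.0)
        X = np.clip(p + sign * alpha * np.abs(mbest - X) * np.log(1.0 / u),
                    -bound, bound)

        fx = np.array([f(x) for x in X])
        improved = fx < pbest_f
        pbest[improved] = X[improved]
        pbest_f[improved] = fx[improved]
        if pbest_f.min() < g_f:
            g_f = pbest_f.min()
            g = pbest[np.argmin(pbest_f)].copy()

        # Chaotic local search around the global best to escape premature
        # convergence: map g into [0,1], iterate the map, map back.
        z_g = np.clip(g / (2 * bound) + 0.5, 0.01, 0.99)
        cand = (logistic_map(z_g) - 0.5) * 2 * bound
        if f(cand) < g_f:
            g_f, g = f(cand), cand.copy()
    return g, g_f

best_x, best_f = dp_qpso(sphere)
print(best_f)
```

Because the update samples positions from a distribution rather than following a gradient, the loop runs unchanged on non-differentiable or non-convex objectives, which is the motivation the abstract gives for moving beyond SGD-DP.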

Updated: 2020-10-30