A Computation-Efficient Decentralized Algorithm for Composite Constrained Optimization
IEEE Transactions on Signal and Information Processing over Networks (IF 3.2) Pub Date: 2020-11-16, DOI: 10.1109/tsipn.2020.3037837
Qingguo Lu, Xiaofeng Liao, Huaqing Li, Tingwen Huang

This paper focuses on solving composite constrained convex optimization problems whose objective is a sum of smooth convex functions and a non-smooth regularization term (the $\ell_1$ norm), subject to general local constraints. Motivated by modern large-scale information processing problems in machine learning, where the samples of a training dataset are randomly distributed across multiple computing nodes, each smooth objective function is further modeled as the average of several constituent functions. To address the problem in a decentralized fashion, we propose a novel computation-efficient decentralized stochastic gradient algorithm, which combines a variance reduction technique with a decentralized stochastic gradient projection method using a constant step-size. Theoretical analysis shows that if the constant step-size is below an explicitly estimated upper bound, the proposed algorithm converges in expectation to the exact optimal solution when each (smooth) constituent function is strongly convex. Compared with existing decentralized schemes, the proposed algorithm is not only applicable to general constrained optimization problems but also incurs a low computation cost in terms of the total number of local gradient evaluations. Furthermore, equipped with a differential privacy strategy, the proposed algorithm can effectively mask each constituent function, which makes it practical in applications involving sensitive data, such as military or medical settings. Finally, numerical evidence is provided to demonstrate the appealing performance of the proposed algorithm.
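For intuition, the sketch below shows one plausible instantiation of the ingredients the abstract names: SAGA-style variance reduction combined with a DGD-style decentralized stochastic proximal-gradient step (soft-thresholding for the $\ell_1$ term, then projection onto a local box constraint) over a ring network. This is an illustrative assumption, not the paper's exact algorithm: all names (W, step, lam, lo, hi) and the synthetic least-squares data are invented for the example, and this plain DGD-style update with a constant step-size only reaches a neighborhood of the optimum, whereas the paper's method attains exact convergence via its own correction mechanism.

```python
# Illustrative sketch (NOT the authors' algorithm): decentralized stochastic
# proximal gradient with SAGA-style variance reduction for
#   min_x  (1/n) * sum_i f_i(x) + lam * ||x||_1,   x in [lo, hi]^d,
# where f_i(x) = (1/q) * sum_j 0.5 * (a_ij^T x - b_ij)^2 is held by node i.
import numpy as np

rng = np.random.default_rng(0)

n, q, d = 4, 20, 5            # nodes, samples per node, dimension
lam, step = 0.05, 0.05        # l1 weight and constant step-size (assumed values)
lo, hi = -1.0, 1.0            # per-coordinate box constraint (assumed)

# Synthetic local least-squares data: node i holds (A[i], b[i]).
A = rng.normal(size=(n, q, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=(n, q))

# Doubly stochastic mixing matrix for a ring of n nodes.
W = np.eye(n) * 0.5
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (coordinate-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def grad_sample(i, j, x):
    """Gradient of the j-th constituent function held by node i."""
    return A[i, j] * (A[i, j] @ x - b[i, j])

x = np.zeros((n, d))          # local iterates
# SAGA table: last gradient evaluated for every constituent function.
table = np.stack([[grad_sample(i, j, x[i]) for j in range(q)] for i in range(n)])
table_avg = table.mean(axis=1)

for it in range(3000):
    mixed = W @ x             # consensus (gossip) step with neighbors
    for i in range(n):
        j = rng.integers(q)   # sample one constituent function
        g_new = grad_sample(i, j, x[i])
        # SAGA variance-reduced estimate: unbiased, with shrinking variance,
        # so each iteration costs one local gradient evaluation.
        v = g_new - table[i, j] + table_avg[i]
        table_avg[i] += (g_new - table[i, j]) / q
        table[i, j] = g_new
        # Prox step for the l1 term, then projection onto the local box;
        # this composition is exact coordinate-wise for box constraints.
        x[i] = np.clip(soft_threshold(mixed[i] - step * v, step * lam), lo, hi)

print("consensus residual:", np.linalg.norm(x - x.mean(axis=0)))
```

To mimic the differential-privacy masking mentioned in the abstract, one could perturb the states before they are shared, e.g. `mixed = W @ (x + rng.laplace(scale=noise, size=x.shape))`, with the noise scale trading privacy against accuracy; the paper's actual privacy mechanism may differ.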

Updated: 2020-12-25