Communication-efficient algorithms for decentralized and stochastic optimization
Mathematical Programming (IF 2.2) Pub Date: 2018-12-07, DOI: 10.1007/s10107-018-1355-4
Guanghui Lan, Soomin Lee, Yi Zhou

We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentralized optimization, our main goal in this paper is to develop algorithmic frameworks which can significantly reduce the number of inter-node communications. Our major contribution is to present a new class of decentralized primal–dual type algorithms, namely the decentralized communication sliding (DCS) methods, which can skip the inter-node communications while agents solve the primal subproblems iteratively through linearizations of their local objective functions. By employing DCS, agents can find an $\epsilon$-solution, both in terms of functional optimality gap and feasibility residual, in $\mathcal{O}(1/\epsilon)$ (resp., $\mathcal{O}(1/\sqrt{\epsilon})$) communication rounds for general convex functions (resp., strongly convex functions), while maintaining the $\mathcal{O}(1/\epsilon^2)$ (resp., $\mathcal{O}(1/\epsilon)$) bound on the total number of intra-node subgradient evaluations. We also present a stochastic counterpart for these algorithms, denoted by SDCS, for solving stochastic optimization problems whose objective function cannot be evaluated exactly. In comparison with existing results for decentralized nonsmooth and stochastic optimization, we can reduce the total number of inter-node communication rounds by orders of magnitude while still maintaining the optimal complexity bounds on intra-node stochastic subgradient evaluations. The bounds on the (stochastic) subgradient evaluations are actually comparable to those required for centralized nonsmooth and stochastic optimization under certain conditions on the target accuracy.
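The communication-sliding idea can be illustrated with a toy numerical sketch (this is an illustrative simplification, not the paper's exact DCS algorithm: the agent count, objectives `f_i(x) = |x - a_i|`, ring topology, and step sizes below are all assumptions chosen for readability). Each agent takes several local subgradient steps between communication rounds, rather than communicating after every step:

```python
import numpy as np

def communication_sliding_sketch(targets, n_comm_rounds=50, n_local_steps=5, step=0.02):
    """Toy communication-sliding loop (illustrative only).

    Agent i holds the nonsmooth local objective f_i(x) = |x - targets[i]|;
    the network goal is min_x sum_i f_i(x). In each outer round, agents
    first perform several local subgradient steps with NO communication
    (the "sliding" phase), then exchange a single round of neighbor
    averaging over a ring topology.
    """
    n = len(targets)
    x = np.full(n, targets.mean())  # each agent's local estimate
    for _ in range(n_comm_rounds):
        # --- local phase: multiple subgradient steps, no communication ---
        for _ in range(n_local_steps):
            g = np.sign(x - targets)  # subgradient of |x - a_i| at x_i
            x = x - step * g
        # --- communication phase: one averaging round with ring neighbors ---
        x = (np.roll(x, 1) + x + np.roll(x, -1)) / 3.0
    return x

targets = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
estimates = communication_sliding_sketch(targets)
```

Here the network-wide minimizer of $\sum_i |x - a_i|$ is the median of the targets (2.0), and the agents' estimates cluster around it while using only one communication round per `n_local_steps` subgradient evaluations, which is the communication savings the abstract describes.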

Updated: 2018-12-07