Linear Convergence of Distributed Mirror Descent with Integral Feedback for Strongly Convex Problems
arXiv - CS - Systems and Control. Pub Date: 2020-11-24, DOI: arxiv-2011.12233
Youbang Sun, Shahin Shahrampour

Distributed optimization often requires finding the minimum of a global objective function written as a sum of local functions, with a group of agents working collectively to minimize the global objective. We study a continuous-time decentralized mirror descent algorithm that uses purely local gradient information to converge to the global optimal solution. The algorithm enforces consensus among agents using the idea of integral feedback. Recently, Sun and Shahrampour (2020) studied the asymptotic convergence of this algorithm for the case where the global function is strongly convex but the local functions are only convex. Using control theory tools, in this work we prove that the algorithm indeed achieves (local) exponential convergence. We also provide a numerical experiment on a real dataset to validate the convergence speed of the algorithm.
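The abstract describes an algorithm that couples local mirror-descent dynamics with an integral consensus term. As a rough illustration, the sketch below Euler-discretizes a generic distributed gradient flow of that flavor; the specific dynamics, quadratic local losses, Euclidean mirror map, gains, and step size are all hypothetical choices for demonstration and not the paper's stated equations.

```python
import numpy as np

# Hypothetical Euler discretization of a continuous-time distributed
# mirror-descent-style flow with an integral consensus (feedback) term.
# The exact dynamics, gains, mirror map, and local losses below are
# illustrative assumptions, not the authors' stated equations.

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Assumed local quadratic losses f_i(x) = 0.5 * x'A_i x - b_i'x.
local_data = [(np.eye(dim) * (1.0 + i), rng.standard_normal(dim))
              for i in range(n_agents)]

def grad_local(i, x):
    # Gradient of the assumed quadratic loss of agent i.
    A, b = local_data[i]
    return A @ x - b

def mirror_inverse(z):
    # Euclidean mirror map (psi = 0.5*||.||^2), so grad(psi*) is the identity.
    return z

# Graph Laplacian of an undirected ring over the agents (connected graph).
L = 2.0 * np.eye(n_agents)
for i in range(n_agents):
    L[i, (i + 1) % n_agents] -= 1.0
    L[i, (i - 1) % n_agents] -= 1.0

z = rng.standard_normal((n_agents, dim))  # dual (mirror) variables, one row per agent
w = np.zeros((n_agents, dim))             # integral feedback states
dt, alpha, beta = 1e-2, 1.0, 1.0          # step size and gains (assumed)

for _ in range(5000):
    x = mirror_inverse(z)                                   # primal iterates
    grads = np.array([grad_local(i, x[i]) for i in range(n_agents)])
    disagreement = L @ x                                    # proportional consensus error
    z = z + dt * (-alpha * grads - beta * disagreement - w) # local gradient + consensus terms
    w = w + dt * beta * disagreement                        # integral feedback accumulates the error

print("final iterates (rows should nearly agree near the global minimizer):")
print(np.round(mirror_inverse(z), 3))
```

In this sketch the integral state w forces the accumulated disagreement to vanish, so at equilibrium the agents agree and the summed local gradients cancel, which is the qualitative behavior the abstract attributes to integral feedback.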

Updated: 2020-11-25