Asynchronous Gradient-Push
IEEE Transactions on Automatic Control (IF 6.8), Pub Date: 2021-01-01, DOI: 10.1109/tac.2020.2981035
Mahmoud S. Assran, Michael G. Rabbat

We consider a multiagent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents. When the local functions are strongly convex with Lipschitz-continuous gradients, we show that the iterates at each agent converge to a neighborhood of the global minimum, where the neighborhood size depends on the degree of asynchrony in the multiagent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that asynchronous gradient push can minimize the global objective faster than the state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.
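The algorithm analyzed in the paper builds on gradient-push, i.e., push-sum consensus over a directed graph interleaved with local gradient steps. As a rough illustration of those underlying dynamics, the following is a minimal synchronous NumPy sketch on a toy scalar quadratic; the graph, step-size schedule, and all names (targets, n_agents, etc.) are illustrative assumptions, and this is not the authors' asynchronous implementation.

```python
# Minimal push-sum gradient ("gradient-push") sketch, simulated synchronously.
# Illustrative only: toy problem and parameters are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_iters = 5, 5000

# Toy problem: agent i holds f_i(z) = 0.5 * (z - targets[i])**2, so the
# minimizer of sum_i f_i is the mean of the targets (smooth, strongly convex).
targets = rng.normal(size=n_agents)

# Directed, strongly connected graph: a ring plus one extra edge (0 -> 2),
# with self-loops. Columns are normalized by out-degree, so the mixing
# matrix is column-stochastic (the setting gradient-push is designed for).
adj = np.eye(n_agents) + np.roll(np.eye(n_agents), 1, axis=0)
adj[2, 0] = 1.0
A = adj / adj.sum(axis=0)

x = np.zeros(n_agents)  # push-sum numerators
y = np.ones(n_agents)   # push-sum weights, used to de-bias the numerators
z = x / y               # each agent's current parameter estimate

for t in range(1, n_iters + 1):
    grad = z - targets          # local gradients evaluated at z_i
    x = A @ (x - grad / t)      # gradient step, then push to out-neighbors
    y = A @ y                   # weights are pushed through the same matrix
    z = x / y                   # de-biased estimates approach consensus

print("agent estimates:", np.round(z, 3))
print("global minimizer:", np.round(targets.mean(), 3))
```

Each agent pushes both its numerator x_i and its weight y_i through the column-stochastic matrix, and the ratio z_i = x_i / y_i corrects the bias that directed (non-doubly-stochastic) communication would otherwise introduce. The paper's asynchronous setting lets each agent run this loop at its own rate without waiting for neighbors, which is where the asynchrony-dependent neighborhood in the convergence result comes from.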

Updated: 2021-01-01