On the Convergence of Nested Decentralized Gradient Methods With Multiple Consensus and Gradient Steps
IEEE Transactions on Signal Processing (IF 4.6), Pub Date: 2021-07-09, DOI: 10.1109/tsp.2021.3094906
Albert Berahas, Raghu Bollapragada, Ermin Wei

In this paper, we consider minimizing a sum of local convex objective functions in a distributed setting, where communication and/or computation can be expensive. We extend and generalize the analysis of a class of nested gradient-based distributed algorithms [NEAR-DGD, (Berahas et al., 2019)] to account for multiple gradient steps at every iteration. We show the effect of performing multiple gradient steps on the rate of convergence and on the size of the neighborhood of convergence, and prove R-linear convergence to the exact solution with a fixed number of gradient steps and an increasing number of consensus steps. We test the performance of the generalized method on quadratic functions and show the effect of multiple consensus and gradient steps in terms of iterations, gradient evaluations, communications, and cost.
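To illustrate the structure of such a nested method, the following is a minimal sketch of a NEAR-DGD-style iteration on local quadratic objectives: each outer iteration performs several consensus (neighbor-averaging) rounds followed by several local gradient steps. The 5-node ring network, the Metropolis-style mixing matrix, the synthetic quadratics, and the parameter choices (alpha, n_consensus, n_grad) are illustrative assumptions, not the paper's exact algorithm or experimental setup.

```python
import numpy as np

# Sketch of a nested decentralized gradient iteration on local quadratics
# f_i(x) = 0.5 * x^T A_i x - b_i^T x, so grad f_i(x) = A_i x - b_i.
rng = np.random.default_rng(0)
n_nodes, dim = 5, 3

# Local strongly convex quadratics (A_i positive definite by construction).
A = [np.diag(rng.uniform(1.0, 4.0, dim)) for _ in range(n_nodes)]
b = [rng.standard_normal(dim) for _ in range(n_nodes)]

def local_grad(i, x):
    return A[i] @ x - b[i]

# Doubly stochastic mixing matrix for a ring of 5 nodes
# (each node averages with its two neighbors).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

X = np.zeros((n_nodes, dim))   # row i is node i's local iterate
alpha = 0.05                   # fixed step size (assumed)
n_consensus, n_grad = 2, 3     # multiple consensus/gradient steps per iteration

for k in range(200):
    # Consensus phase: t rounds of averaging, i.e. multiply by W^t.
    for _ in range(n_consensus):
        X = W @ X
    # Gradient phase: each node takes several local gradient steps.
    for _ in range(n_grad):
        X = X - alpha * np.array([local_grad(i, X[i]) for i in range(n_nodes)])

# The exact minimizer of the sum solves (sum_i A_i) x = sum_i b_i.
x_star = np.linalg.solve(sum(A), sum(b))
print("max node error:", np.abs(X - x_star).max())
```

With a fixed n_consensus the iterates converge only to a neighborhood of the exact solution, and increasing the number of consensus rounds shrinks that neighborhood, consistent with the abstract's claim of R-linear convergence to the exact solution when the number of consensus steps increases across iterations.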

Updated: 2021-07-09