A General Framework for Decentralized Optimization With First-Order Methods
Proceedings of the IEEE (IF 20.6) Pub Date: 2020-11-01, DOI: 10.1109/jproc.2020.3024266
Ran Xin, Shi Pu, Angelia Nedic, Usman A. Khan

Decentralized optimization to minimize a finite sum of functions, distributed over a network of nodes, has been a significant area of control and signal-processing research due to its natural relevance to optimal control and signal estimation problems. More recently, the emergence of sophisticated computing and the needs of large-scale data science have led to a resurgence of activity in this area. In this article, we discuss decentralized first-order gradient methods, which have found tremendous success in control, signal processing, and machine learning problems, where, due to their simplicity, they serve as the method of choice for many complex inference and training tasks. In particular, we provide a general framework of decentralized first-order methods that is applicable to directed and undirected communication networks alike, and we show that much of the existing work on optimization and consensus can be related explicitly to this framework. We further extend the discussion to decentralized stochastic first-order methods that rely on stochastic gradients at each node, and we describe how local variance-reduction schemes, previously shown to have promise in centralized settings, can improve the performance of decentralized methods when combined with what is known as gradient tracking. We motivate and demonstrate the effectiveness of the corresponding methods in the context of machine learning and signal-processing problems that arise in decentralized environments.
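As a concrete illustration of the class of methods the abstract describes, below is a minimal NumPy sketch of decentralized gradient descent with gradient tracking on an undirected ring network. The quadratic local objectives f_i(x) = ½‖A_i x − b_i‖², the Metropolis mixing matrix W, the step size, and the data are hypothetical stand-ins, not taken from the article; directed networks would need row-/column-stochastic weights (e.g., push-sum or AB-style updates), which this sketch omits.

```python
# Minimal sketch: decentralized gradient tracking on an undirected ring.
# All problem data and parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                        # nodes, problem dimension
A = rng.normal(size=(n, 10, d))    # hypothetical local data A_i
b = rng.normal(size=(n, 10))       # hypothetical local targets b_i

def local_grad(i, x):
    """Gradient of f_i(x) = 0.5 * ||A_i x - b_i||^2 at node i."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic Metropolis weights for a ring (each node has degree 2).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

alpha = 0.01
x = np.zeros((n, d))                                    # local iterates x_i
y = np.stack([local_grad(i, x[i]) for i in range(n)])   # trackers, y_i^0 = grad f_i(x_i^0)

for k in range(500):
    g_old = np.stack([local_grad(i, x[i]) for i in range(n)])
    x = W @ x - alpha * y                               # consensus step + descent along tracker
    g_new = np.stack([local_grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g_old                           # track the network-average gradient

# Sanity check against the centralized least-squares minimizer.
x_star = np.linalg.lstsq(A.reshape(-1, d), b.reshape(-1), rcond=None)[0]
print(np.linalg.norm(x - x_star, axis=1))               # each x_i approaches x_star
```

The two-step recursion is the point: the x-update mixes neighbors' iterates and descends along y_i, while the y-update mixes neighbors' trackers and adds the latest local gradient difference, so each y_i tracks the average gradient across the network.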
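The abstract also points to local variance-reduction schemes combined with gradient tracking. The following is a hedged sketch in the spirit of SAGA-style variance reduction under gradient tracking: each node keeps a table of its most recent per-sample gradients and corrects a sampled stochastic gradient with the table average before feeding it to the tracker. Again, the data, step size, and network are illustrative assumptions, not the article's experiments; W is the ring mixing matrix from the previous sketch.

```python
# Hedged sketch: stochastic gradient tracking with a SAGA-style
# variance-reduced local estimator. All specifics are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 5, 10, 3                 # nodes, local samples per node, dimension
A = rng.normal(size=(n, m, d))
b = rng.normal(size=(n, m))

def sample_grad(i, s, x):
    """Gradient of the s-th component f_{i,s}(x) = 0.5 * (a' x - b)^2."""
    return A[i, s] * (A[i, s] @ x - b[i, s])

W = np.zeros((n, n))               # Metropolis ring weights, as before
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

alpha = 0.005
x = np.zeros((n, d))
# Gradient table: last gradient evaluated for each (node, sample) pair.
table = np.stack([[sample_grad(i, s, x[i]) for s in range(m)] for i in range(n)])
avg = table.mean(axis=1)           # running table average per node
v = avg.copy()                     # variance-reduced gradient estimates
y = v.copy()                       # trackers, y_i^0 = v_i^0

for k in range(2000):
    v_old = v.copy()
    x = W @ x - alpha * y                          # consensus step + descent
    for i in range(n):
        s = rng.integers(m)                        # sample one local component
        g = sample_grad(i, s, x[i])
        v[i] = g - table[i, s] + avg[i]            # SAGA correction
        avg[i] += (g - table[i, s]) / m            # maintain running average
        table[i, s] = g                            # refresh table entry
    y = W @ y + v - v_old                          # track the average estimate

x_star = np.linalg.lstsq(A.reshape(-1, d), b.reshape(-1), rcond=None)[0]
print(np.linalg.norm(x - x_star, axis=1))          # each x_i approaches x_star
```

The design intent matches the abstract's claim: the SAGA correction drives the variance of each v_i to zero as the iterates settle, which lets the tracked direction y_i behave increasingly like the exact average gradient used in the deterministic sketch.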

Updated: 2020-11-01