Advances in Asynchronous Parallel and Distributed Optimization
Proceedings of the IEEE (IF 20.6). Pub Date: 2020-11-01. DOI: 10.1109/jproc.2020.3026619
Mahmoud Assran, Arda Aytekin, Hamid Reza Feyzmahdavian, Mikael Johansson, Michael G. Rabbat

Motivated by large-scale optimization problems arising in the context of machine learning, there have been several advances in the study of asynchronous parallel and distributed optimization methods during the past decade. Asynchronous methods do not require all processors to maintain a consistent view of the optimization variables. Consequently, they generally can make more efficient use of computational resources than synchronous methods, and they are not sensitive to issues like stragglers (i.e., slow nodes) and unreliable communication links. Mathematical modeling of asynchronous methods involves proper accounting of information delays, which makes their analysis challenging. This article reviews recent developments in the design and analysis of asynchronous optimization methods, covering both centralized methods, where all processors update a master copy of the optimization variables, and decentralized methods, where each processor maintains a local copy of the variables. The analysis provides insights into how the degree of asynchrony impacts convergence rates, especially in stochastic optimization methods.
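The delayed-information model described above can be made concrete with a small simulation. The sketch below is a minimal illustration, not an algorithm from the paper: it mimics a centralized asynchronous setup in which a single master copy of the variables is updated with stochastic gradients computed at stale iterates x_{k-tau_k}, where the delay tau_k is drawn from an assumed bounded-delay model. The quadratic objective, minibatch size, step size, and delay bound are all illustrative choices.

    # Minimal sketch (illustrative, not the authors' method): centralized
    # asynchronous SGD with bounded gradient delay on a toy least-squares
    # problem f(x) = 0.5 * ||A x - b||^2. All parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    n, d = 200, 10
    A = rng.standard_normal((n, d))
    b = rng.standard_normal(n)

    def grad(x, idx):
        """Stochastic gradient of f restricted to minibatch rows `idx`."""
        Ai = A[idx]
        return Ai.T @ (Ai @ x - b[idx])

    x = np.zeros(d)        # master copy of the optimization variables
    history = [x.copy()]   # past iterates, so workers can read stale copies
    max_delay = 5          # assumed bound tau on the information delay
    step = 1e-3

    for k in range(2000):
        # A worker reads a *stale* iterate x_{k - tau_k} ...
        tau = rng.integers(0, min(max_delay, k) + 1)
        stale_x = history[k - tau]
        # ... computes a minibatch gradient at that stale point ...
        idx = rng.integers(0, n, size=32)
        g = grad(stale_x, idx)
        # ... and the master applies it without waiting for other workers.
        x = x - step * g
        history.append(x.copy())

    print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)

In the bounded-delay analyses the article surveys, the degree of asynchrony typically enters the convergence rate through the delay bound; with max_delay = 0 the loop above reduces to ordinary serial stochastic gradient descent.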
