Asynchronous Distributed Optimization with Randomized Delays
arXiv - CS - Distributed, Parallel, and Cluster Computing Pub Date : 2020-09-22 , DOI: arxiv-2009.10717
Margalit Glasgow, Mary Wootters

In this work, we study asynchronous finite sum minimization in a distributed-data setting with a central parameter server. While asynchrony is well understood in parallel settings where the data is accessible by all machines, little is known for the distributed-data setting. We introduce a variant of SAGA called ADSAGA for the distributed-data setting where each machine stores a partition of the data. We show that with independent exponential work times -- a common assumption in distributed optimization -- ADSAGA converges in $\tilde{O}\left(\left(n + \sqrt{m}\kappa\right)\log(1/\epsilon)\right)$ iterations, where $n$ is the number of component functions, $m$ is the number of machines, and $\kappa$ is a condition number. We empirically compare the iteration complexity of ADSAGA to existing parallel and distributed algorithms, including synchronous mini-batch algorithms.
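ADSAGA is described as a variant of SAGA adapted to partitioned data with a central parameter server. For context, here is a minimal sketch of the classic *serial* SAGA update that ADSAGA builds on (this is not the authors' distributed algorithm; the least-squares objective, step size, and function names are illustrative assumptions):

```python
import numpy as np

def saga(A, b, steps=30000, lr=0.02, seed=0):
    """Minimal serial SAGA for f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2.

    SAGA keeps a table with the last gradient computed for each component
    function i, and corrects each stochastic step with the table average,
    yielding an unbiased, variance-reduced update.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    table = np.zeros((n, d))        # stored gradient for each component
    avg = table.mean(axis=0)        # running average of the table
    for _ in range(steps):
        i = rng.integers(n)
        g_new = (A[i] @ x - b[i]) * A[i]    # gradient of component i at x
        x -= lr * (g_new - table[i] + avg)  # SAGA variance-reduced step
        avg += (g_new - table[i]) / n       # keep the average in sync
        table[i] = g_new
    return x
```

In the distributed-data setting of the paper, each of the $m$ machines would hold only its own partition of the $n$ components and send updates to the parameter server asynchronously; the sketch above shows only the shared-memory core that the $n$ term in the iteration bound refers to.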

Updated: 2020-10-05