Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
arXiv - CS - Machine Learning | Pub Date: 2021-03-04 | DOI: arxiv-2103.03239
Max Ryabinin, Eduard Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko

Training deep neural networks on large datasets can often be accelerated by using multiple compute nodes. This approach, known as distributed training, can utilize hundreds of computers via specialized message-passing protocols such as Ring All-Reduce. However, running these protocols at scale requires reliable high-speed networking that is only available in dedicated clusters. In contrast, many real-world applications, such as federated learning and cloud-based distributed training, operate on unreliable devices with unstable network bandwidth. As a result, these applications are restricted to using parameter servers or gossip-based averaging protocols. In this work, we lift that restriction by proposing Moshpit All-Reduce -- an iterative averaging protocol that exponentially converges to the global average. We demonstrate the efficiency of our protocol for distributed optimization with strong theoretical guarantees. The experiments show 1.3x speedup for ResNet-50 training on ImageNet compared to competitive gossip-based strategies and 1.5x speedup when training ALBERT-large from scratch using preemptible compute nodes.
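To make the core idea concrete, below is a minimal NumPy sketch of iterative group averaging in the spirit of Moshpit All-Reduce: peers are indexed on a virtual grid, and in each round every group of peers that share all grid coordinates except one averages its values. This is an illustrative toy simulation of the ideal, full-participation case, not the authors' implementation; the function name `moshpit_average`, the grid shape, and the round count are assumptions made for this example.

```python
# Toy simulation (assumed example, not the paper's code): peers hold one scalar
# each and repeatedly average within groups defined by a virtual grid. In the
# ideal case, after one round per grid dimension the values equal the global mean.
import numpy as np

def moshpit_average(values, grid_shape, rounds):
    """Simulate iterative group averaging on a virtual peer grid.

    values:     1-D array with one scalar per peer (len == prod(grid_shape))
    grid_shape: how peers are indexed, e.g. (4, 4) for 16 peers
    rounds:     number of averaging rounds; each round averages along one axis
    """
    grid = np.asarray(values, dtype=float).reshape(grid_shape)
    for r in range(rounds):
        axis = r % grid.ndim                       # rotate the averaged dimension
        group_mean = grid.mean(axis=axis, keepdims=True)
        grid = np.broadcast_to(group_mean, grid.shape).copy()  # replace each group by its mean
    return grid.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    peers = rng.normal(size=16)                    # one parameter per peer
    result = moshpit_average(peers, grid_shape=(4, 4), rounds=2)
    print(np.allclose(result, peers.mean()))       # True: reaches the global average
```

With unreliable or preemptible peers the groupings change between rounds and some peers drop out, which is where the protocol's exponential convergence to the global average (rather than exact one-shot averaging) matters.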

Updated: 2021-03-05