A consensus-based decentralized training algorithm for deep neural networks with communication compression
Neurocomputing (IF 5.5), Pub Date: 2021-01-13, DOI: 10.1016/j.neucom.2021.01.020
Bo Liu, Zhengtao Ding

To address the challenge of distributed computation over large-scale data, this paper proposes a consensus-based decentralized training method with communication compression. First, the training method is designed on a decentralized topology to reduce the communication burden on the busiest agent and to avoid any agent revealing its locally stored data. The convergence of the decentralized training algorithm is then analyzed, showing that the decentralized model can reach the minimal empirical risk over the whole dataset without sharing data samples. Furthermore, model compression combined with an error-compensation mechanism is employed to reduce communication costs during decentralized training. Finally, the simulation study shows that the proposed decentralized training with error-compensated communication compression is applicable to both IID and non-IID datasets and clearly outperforms purely local training. Moreover, with an appropriate compression rate, the proposed algorithm achieves performance comparable to uncompressed decentralized training and to centralized training, while substantially reducing communication costs.
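The abstract combines two ingredients: consensus averaging over a decentralized topology and error-compensated compression of what each agent transmits. The sketch below illustrates that combination for a single agent's update, assuming top-k sparsification and a fixed mixing (weight) matrix; all names (topk_compress, consensus_step, etc.) are hypothetical and this is not the paper's exact algorithm.

```python
# Illustrative sketch only: one error-compensated, compressed consensus step.
import numpy as np

def topk_compress(vec, ratio=0.1):
    """Keep only the largest-magnitude `ratio` fraction of entries; zero the rest."""
    k = max(1, int(ratio * vec.size))
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    compressed = np.zeros_like(vec)
    compressed[idx] = vec[idx]
    return compressed

def consensus_step(params, neighbor_msgs, weights, error, grad, lr=0.01, ratio=0.1):
    """One decentralized update for a single agent.

    params        : this agent's flattened model parameters
    neighbor_msgs : compressed parameter vectors received from neighbors
    weights       : mixing weights over {self} + neighbors (a row of a
                    doubly stochastic matrix in consensus schemes)
    error         : locally stored residual used for error compensation
    grad          : stochastic gradient computed on this agent's own data
    """
    # Error compensation: add back what was dropped by compression in the
    # previous round before compressing the message for this round.
    to_send = params + error
    msg = topk_compress(to_send, ratio)
    new_error = to_send - msg          # residual carried to the next round

    # Consensus averaging over the (compressed) messages, followed by a
    # local gradient step; raw data samples are never exchanged.
    mixed = weights[0] * msg + sum(w * m for w, m in zip(weights[1:], neighbor_msgs))
    new_params = mixed - lr * grad
    return new_params, new_error, msg
```

The error term accumulates the coordinates dropped by the compressor, so information that is not transmitted in one round is re-injected later; this is what keeps aggressive compression from biasing the consensus in error-feedback schemes.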




Updated: 2021-03-05