Coded Stochastic ADMM for Decentralized Consensus Optimization With Edge Computing
IEEE Internet of Things Journal ( IF 10.6 ) Pub Date : 2021-02-09 , DOI: 10.1109/jiot.2021.3058116
Hao Chen , Yu Ye , Ming Xiao , Mikael Skoglund , H. Vincent Poor

Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles. Due to communication costs and security requirements, it is of paramount importance to analyze information in a decentralized manner rather than aggregating data at a fusion center. To train large-scale machine learning models, edge/fog computing is often leveraged as an alternative to centralized learning. We consider the problem of learning model parameters in a multiagent system in which data are processed locally at distributed edge nodes. A class of minibatch stochastic alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model. To address two critical challenges in distributed learning systems, namely the communication bottleneck and straggler nodes (nodes with slow responses), an error-control-coding-based stochastic incremental ADMM is investigated. Given an appropriate minibatch size, we show that the minibatch stochastic ADMM-based method converges at a rate of $O(1/\sqrt{k})$, where $k$ denotes the number of iterations. Numerical experiments show that, compared with state-of-the-art algorithms, the proposed algorithm is communication efficient, fast-responding, and robust in the presence of straggler nodes.
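To make the decentralized-consensus setting concrete, the following is a minimal sketch of a minibatch stochastic variant of consensus ADMM, in which each agent replaces the exact local minimization with one stochastic gradient step on a random minibatch. This is an illustrative baseline only: the function name, the least-squares local losses, and all parameter values are assumptions for the example, and it does not include the paper's error-control coding or incremental communication scheme.

```python
import numpy as np

def stochastic_consensus_admm(A_list, b_list, rho=1.0, lr=0.05,
                              batch=8, iters=1000, seed=0):
    """Minibatch stochastic consensus ADMM sketch for
    min_x sum_i f_i(x), with f_i(x) = ||A_i x - b_i||^2 / (2 n_i).
    Each agent takes one stochastic gradient step on its augmented
    Lagrangian instead of solving the x-subproblem exactly."""
    rng = np.random.default_rng(seed)
    N = len(A_list)
    d = A_list[0].shape[1]
    x = [np.zeros(d) for _ in range(N)]   # local primal variables
    u = [np.zeros(d) for _ in range(N)]   # scaled dual variables
    z = np.zeros(d)                       # global consensus variable
    for _ in range(iters):
        for i in range(N):
            Ai, bi = A_list[i], b_list[i]
            idx = rng.choice(Ai.shape[0], size=min(batch, Ai.shape[0]),
                             replace=False)
            # unbiased minibatch estimate of the local gradient
            g = Ai[idx].T @ (Ai[idx] @ x[i] - bi[idx]) / len(idx)
            # one gradient step on the local augmented Lagrangian term
            x[i] = x[i] - lr * (g + rho * (x[i] - z + u[i]))
        # consensus update: average of local estimates plus duals
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
        # scaled dual ascent on the consensus constraint x_i = z
        for i in range(N):
            u[i] = u[i] + x[i] - z
    return z
```

On noiseless synthetic least-squares data, the consensus variable `z` approaches the common minimizer; in a real edge deployment, the per-agent loop would run in parallel on the edge nodes, with only `x_i + u_i` communicated for the `z` update.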

Updated: 2021-03-26