Semi-Decentralized Federated Learning With Cooperative D2D Local Model Aggregations
IEEE Journal on Selected Areas in Communications (IF 13.8). Pub Date: 2021-10-06. DOI: 10.1109/jsac.2021.3118344
Frank Po-Chen Lin, Seyyedali Hosseinalipour, Sheikh Shams Azam, Christopher G. Brinton, Nicolo Michelusi

Federated learning has emerged as a popular technique for distributing machine learning (ML) model training across the wireless edge. In this paper, we propose two-timescale hybrid federated learning (TT-HF), a semi-decentralized learning architecture that combines the conventional device-to-server communication paradigm of federated learning with device-to-device (D2D) communications for model training. In TT-HF, during each global aggregation interval, devices (i) perform multiple stochastic gradient descent iterations on their individual datasets, and (ii) aperiodically engage in a consensus procedure on their model parameters through cooperative, distributed D2D communications within local clusters. Building on a new general definition of gradient diversity, we formally study the convergence behavior of TT-HF, obtaining new convergence bounds for distributed ML. We leverage these bounds to develop an adaptive control algorithm that tunes the step size, the number of D2D communication rounds, and the global aggregation period of TT-HF over time, targeting a sublinear convergence rate of $\mathcal{O}(1/t)$ while minimizing network resource utilization. Our experiments demonstrate that TT-HF significantly outperforms the current state of the art in federated learning in terms of model accuracy and/or network energy consumption across scenarios where local device datasets exhibit statistical heterogeneity. Finally, our numerical evaluations demonstrate robustness against outages caused by fading channels, as well as favorable performance with non-convex loss functions.
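To make the two-timescale structure concrete, the sketch below simulates the loop the abstract describes: devices run local SGD between global aggregations and aperiodically average their models with D2D neighbors via the standard linear consensus update $x_i \leftarrow x_i + \varepsilon \sum_{j \in \mathcal{N}_i}(x_j - x_i)$. This is a minimal illustration under simplifying assumptions (squared loss, fully connected clusters, fixed schedules), not the paper's implementation; the actual TT-HF algorithm adapts the step size, consensus rounds, and aggregation period over time, and all names here (tt_hf, tau, eps, consensus_rounds, ...) are hypothetical.

```python
# Illustrative sketch only: linear regression with squared loss, fully
# connected D2D clusters, and fixed consensus/aggregation schedules.
import numpy as np

def tt_hf(datasets, clusters, T=60, tau=10, lr=0.05,
          consensus_rounds=2, eps=0.2, d2d_every=3, dim=5, seed=0):
    """Run a simplified TT-HF loop and return the final global model.

    datasets  -- list of (X_i, y_i) per device
    clusters  -- list of device-index lists (D2D neighborhoods)
    tau       -- global aggregation period; d2d_every -- consensus period
    """
    rng = np.random.default_rng(seed)
    n = len(datasets)
    w = np.zeros(dim)                         # global model at the server
    local = [w.copy() for _ in range(n)]      # device-local models

    for t in range(1, T + 1):
        # (i) each device takes one local SGD step on its own data
        for i, (X, y) in enumerate(datasets):
            k = rng.integers(len(y))          # sample one data point
            local[i] -= lr * (X[k] @ local[i] - y[k]) * X[k]

        # (ii) aperiodic D2D consensus within each cluster:
        #      x_i <- x_i + eps * sum_{j in N_i} (x_j - x_i)
        if t % d2d_every == 0:
            for cluster in clusters:
                for _ in range(consensus_rounds):
                    new = {i: local[i] + eps * sum(
                               (local[j] - local[i]
                                for j in cluster if j != i),
                               np.zeros(dim))
                           for i in cluster}
                    for i in cluster:
                        local[i] = new[i]

        # global aggregation every tau steps: the server averages one
        # sampled model per cluster (consensus has made intra-cluster
        # models similar), weighted by cluster size, then broadcasts
        if t % tau == 0:
            picks = [int(rng.choice(c)) for c in clusters]
            sizes = [len(c) for c in clusters]
            w = sum(s * local[i] for s, i in zip(sizes, picks)) / sum(sizes)
            local = [w.copy() for _ in range(n)]
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    datasets = [(rng.normal(size=(20, 5)), rng.normal(size=20))
                for _ in range(6)]
    clusters = [[0, 1, 2], [3, 4, 5]]
    print(tt_hf(datasets, clusters))
```

Note that eps must be small relative to the cluster degree for the consensus step to be stable (here eps=0.2 with clusters of size 3); the paper's control algorithm instead chooses these parameters adaptively from its convergence bounds.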
