Federated Learning Over Multihop Wireless Networks With In-Network Aggregation
IEEE Transactions on Wireless Communications (IF 10.4), Pub Date: 2022-04-26, DOI: 10.1109/twc.2022.3168538
Xianhao Chen, Guangyu Zhu, Yiqin Deng, Yuguang Fang

Communication limitations at the edge are widely recognized as a major bottleneck for federated learning (FL). Multi-hop wireless networking provides a cost-effective way to enhance service coverage and spectrum efficiency at the edge, which could facilitate large-scale and efficient machine learning (ML) model aggregation. However, FL over multi-hop wireless networks has rarely been investigated. In this paper, we optimize FL over wireless mesh networks by taking into account the heterogeneity in communication and computing resources at mesh routers and clients. We present a framework in which each intermediate router performs in-network model aggregation before forwarding data to the next hop, so as to reduce the outgoing data traffic and hence aggregate more models under limited communication resources. To accelerate model training, we formulate our optimization problem by jointly considering model aggregation, routing, and spectrum allocation. Although the problem is a non-convex mixed-integer nonlinear program, we transform it into a mixed-integer linear program (MILP) and develop a coarse-grained fixing procedure to solve it efficiently. Simulation results demonstrate the effectiveness of the proposed solution approach and the superiority of the in-network aggregation scheme over its counterpart without in-network aggregation.
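The in-network aggregation idea described in the abstract can be illustrated with a minimal, hypothetical sketch (not taken from the paper): each mesh router computes a sample-weighted average of the model updates it receives from downstream clients or routers, then forwards a single aggregate to the next hop, so the outgoing traffic stays at roughly one model per round regardless of how many clients sit below it. The function and variable names here are illustrative assumptions.

```python
import numpy as np

def aggregate_at_router(incoming_updates):
    """Hypothetical in-network aggregation step at one mesh router.

    incoming_updates: list of (weights, num_samples) pairs, where
    `weights` is a flat NumPy array of model parameters received from a
    downstream client or router, and `num_samples` is the amount of data
    backing that (possibly already partial) aggregate.

    Returns a single (weights, num_samples) pair to forward upstream,
    so only one model-sized message leaves this router per round.
    """
    total_samples = sum(n for _, n in incoming_updates)
    aggregated = sum(w * (n / total_samples) for w, n in incoming_updates)
    return aggregated, total_samples


# Example: a router merges two client models and one partial aggregate
# received from a child router, then forwards a single message upstream.
rng = np.random.default_rng(0)
updates = [
    (rng.normal(size=10), 50),    # client A
    (rng.normal(size=10), 30),    # client B
    (rng.normal(size=10), 120),   # partial aggregate from a child router
]
model, count = aggregate_at_router(updates)
print(model.shape, count)  # (10,) 200
```

Because the weighted average is associative in this sense, routers can aggregate hop by hop and the root still recovers the same global average it would get from collecting every client update directly.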

Updated: 2022-04-26