Toward Resource-Efficient Federated Learning in Mobile Edge Computing
IEEE Network (IF 6.8), Pub Date: 2021-02-18, DOI: 10.1109/mnet.011.2000295
Rong Yu, Peichun Li

Federated learning is a recently emerged distributed deep learning paradigm in which clients separately train local neural network models on private data and then jointly aggregate a global model at a central server. Mobile edge computing aims to deploy mobile applications at the edge of wireless networks. Federated learning in mobile edge computing is a promising distributed framework for deploying deep learning algorithms in many application scenarios. Its bottleneck is the constrained resources of mobile clients in computation, bandwidth, energy, and data. This article first illustrates typical use cases of federated learning in mobile edge computing, and then surveys state-of-the-art resource optimization approaches for federated learning. The resource-efficient techniques are broadly divided into two classes: black-box and white-box approaches. For black-box approaches, the techniques of training tricks, client selection, data compensation, and hierarchical aggregation are reviewed. For white-box approaches, the techniques of model compression, knowledge distillation, feature fusion, and asynchronous update are discussed. After that, a neural-structure-aware resource management approach with module-based federated learning is proposed, in which each mobile client is assigned a different subnetwork of the global model according to the status of its local resources. Experiments demonstrate the superiority of the approach in elastic and efficient resource utilization.
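The module-based scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the module granularity, the prefix-style subnetwork assignment, the resource-to-budget mapping, and all names (`client_budgets`, `local_train`, `aggregate`) are assumptions made for the example. It shows the key idea that each client trains only as many modules of the global model as its resources allow, and the server averages each module over the clients that actually updated it.

```python
import numpy as np

rng = np.random.default_rng(0)

# The global model as a list of "modules" (stand-ins: plain weight vectors).
NUM_MODULES = 4
global_model = [np.zeros(8) for _ in range(NUM_MODULES)]

# Hypothetical mapping from each client's resource status to how many
# modules it can afford to train this round.
client_budgets = {"phone": 2, "tablet": 3, "laptop": 4}

def local_train(modules):
    """Stand-in for local SGD: perturb each assigned module."""
    return [m + 0.1 * rng.standard_normal(m.shape) for m in modules]

def aggregate(global_model, client_updates):
    """Average each module over the clients that trained it;
    keep the old weights for modules no client touched."""
    new_model = []
    for i, g in enumerate(global_model):
        updates = [u[i] for u in client_updates if len(u) > i]
        new_model.append(np.mean(updates, axis=0) if updates else g)
    return new_model

# One federated round: each client trains a resource-sized subnetwork
# (here, the first `k` modules of the global model).
updates = [local_train(global_model[:k]) for k in client_budgets.values()]
global_model = aggregate(global_model, updates)

# How many clients contributed to each module this round.
print([sum(len(u) > i for u in updates) for i in range(NUM_MODULES)])  # → [3, 3, 2, 1]
```

Shallow modules are refined by every client while deep modules are trained only by resource-rich ones, which is the elasticity the article's experiments evaluate.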
