Toward Energy-Efficient Distributed Federated Learning for 6G Networks
IEEE Wireless Communications (IF 10.9) Pub Date: 2022-01-21, DOI: 10.1109/mwc.012.2100153
Sunder Ali Khowaja, Kapal Dev, Parus Khowaja, Paolo Bellavista
The provision of communication services via portable and mobile devices, such as aerial base stations, is a crucial concept to be realized in 5G/6G networks. Conventionally, IoT/edge devices must transmit their data directly to the base station so that a model can be trained with machine learning techniques. This data transmission introduces privacy issues that can lead to security concerns and monetary losses. Federated learning was recently proposed to partially address these privacy issues by sharing trained models, rather than raw data, with the base station. However, the centralized nature of federated learning allows only devices in the vicinity of a base station to share their trained models. Furthermore, long-range communication compels devices to increase their transmission power, which raises energy-efficiency concerns. In this work, we propose the distributed federated learning (DBFL) framework, which overcomes these connectivity and energy-efficiency issues for distant devices. The DBFL framework is compatible with mobile edge computing architectures and connects devices in a distributed manner using clustering protocols. Experimental results show that the framework improves classification performance by 7.4 percent compared to conventional federated learning while reducing energy consumption.
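The hierarchical aggregation the abstract describes can be illustrated with a short sketch. Below is a minimal, self-contained Python example of cluster-based federated averaging; the toy least-squares objective, the two-level cluster-head aggregation, and the dataset-size weighting are illustrative assumptions, not the authors' DBFL implementation.

import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, data, lr=0.01):
    # One local training step on-device (a toy least-squares gradient step),
    # so raw data never leaves the device; only the model is shared.
    X, y = data[:, :-1], data[:, -1]
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(models, sizes):
    # Standard FedAvg: weight each model by its local dataset size.
    total = sum(sizes)
    return sum(m * (n / total) for m, n in zip(models, sizes))

# Devices are grouped into clusters; each cluster head aggregates its
# members' models over short-range links, and only one cluster-level
# model per cluster travels to the base station over the long-range link.
dim = 4
global_model = np.zeros(dim)
clusters = [[rng.normal(size=(20, dim + 1)) for _ in range(3)] for _ in range(2)]

for _ in range(10):  # communication rounds
    cluster_models, cluster_sizes = [], []
    for members in clusters:
        local_models = [local_update(global_model.copy(), d) for d in members]
        cluster_models.append(federated_average(local_models, [len(d) for d in members]))
        cluster_sizes.append(sum(len(d) for d in members))
    global_model = federated_average(cluster_models, cluster_sizes)

print("aggregated global model:", np.round(global_model, 3))

Compared with conventional federated learning, where every device uploads its model directly to the base station, this two-level aggregation replaces many long-range uploads with a single upload per cluster, which is the mechanism behind the energy savings the abstract claims.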

Updated: 2022-01-21