Guest Editorial Special Issue on Distributed Learning Over Wireless Edge Networks—Part I
IEEE Journal on Selected Areas in Communications (IF 13.8), Pub Date: 2021-11-19, DOI: 10.1109/jsac.2021.3118484
Mingzhe Chen , Deniz Gunduz , Kaibin Huang , Walid Saad , Mehdi Bennis , Aneta Vulgarakis Feljan , H. Vincent Poor

Analyzing massive amounts of data using complex machine learning models requires significant computational resources. The conventional approach to such problems involves centralizing training data and inference processes in the cloud, i.e., in data centers. However, with the proliferation of mobile devices and the increasing adoption of the Internet-of-Things (IoT) paradigm, very large amounts of data are collected at the edges of wireless networks, and due to privacy constraints and limited communication resources, it is undesirable or impractical to upload this data from mobile devices to the cloud for centralized learning. This problem can be solved by distributed learning at the network edge, whereby edge devices collaboratively train a shared learning model using real-time mobile data. Avoiding raw-data uploads not only helps to preserve privacy but may also alleviate network-traffic congestion and reduce latency.

That said, distributed training still requires a substantial amount of information exchange between devices and edge servers over wireless links. In the process, wireless impairments such as noise, interference, and imperfect knowledge of channel states can significantly slow down distributed learning (e.g., its convergence speed) and degrade its performance (e.g., learning accuracy). This makes it crucial to optimize wireless network performance so as to support the efficient deployment of distributed learning algorithms.

On the other hand, distributed learning algorithms provide a powerful toolset for solving complex problems in wireless communication and networking. One important framework, federated learning (FL), enables users to collaboratively learn a shared model while helping to preserve local data privacy. Applying FL can endow edge devices with capabilities such as user behavior prediction, user identification, and wireless environment analysis. As another example, distributed reinforcement learning can leverage distributed computation power and data to solve complex optimization and control problems that arise in use cases such as network control, user clustering, resource management, and interference alignment. To cover this paradigm of distributed learning over wireless networks, this two-part Special Issue features papers addressing two main research challenges: a) optimizing wireless network performance for the efficient implementation of distributed learning in wireless networks, and b) applying distributed learning to solve communication problems and optimize network performance.
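As a concrete illustration of the federated learning workflow mentioned above, the following sketch shows one FedAvg-style training loop in Python/NumPy: each device runs a few local gradient steps on its private data, and only the resulting model weights are returned to the server and averaged. This is a hypothetical, minimal example for intuition only; the function names and synthetic data are assumptions and do not come from any paper in this Special Issue.

```python
# Minimal FedAvg-style sketch (illustrative only): K devices fit a linear model
# on local data and send back model weights, never the raw samples.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # ground-truth weights for synthetic data

def make_local_data(n=50):
    # Each device holds its own (X, y) samples that never leave the device.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def local_step(w, X, y, lr=0.1, epochs=5):
    # A few local gradient-descent epochs on the device's private data.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg_round(w_global, devices):
    # Devices train locally; the server averages the returned weights,
    # weighting each device by its number of local samples.
    updates, sizes = [], []
    for X, y in devices:
        updates.append(local_step(w_global.copy(), X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

devices = [make_local_data() for _ in range(5)]
w = np.zeros(2)
for _ in range(20):
    w = fed_avg_round(w, devices)
print("learned weights:", w)  # approaches true_w without any raw-data upload
```

In a real deployment, the weight exchange in each round travels over wireless links, which is exactly where the noise, interference, and channel-state uncertainty discussed above affect convergence speed and accuracy.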

Updated: 2021-11-19