Federated Learning and Meta Learning: Approaches, Applications, and Directions
IEEE Communications Surveys & Tutorials (IF 35.6), Pub Date: 2023-11-07, DOI: 10.1109/comst.2023.3330910
Xiaonan Liu, Yansha Deng, Arumugam Nallanathan, Mehdi Bennis

Over the past few years, significant advancements have been made in the field of machine learning (ML) to address resource management, interference management, autonomy, and decision-making in wireless networks. Traditional ML approaches rely on centralized methods, where data is collected at a central server for training. However, this approach makes it difficult to preserve the data privacy of devices. To address this issue, federated learning (FL) has emerged as an effective solution that allows edge devices to collaboratively train ML models without compromising data privacy. In FL, local datasets are not shared, and the focus is on learning a global model for a specific task involving all devices. However, FL is limited in its ability to adapt the model to devices with different data distributions. In such cases, meta learning is considered, as it enables learning models to adapt to different data distributions using only a few data samples. In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta). Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and how they can be applied over wireless networks. We also analyze the relationships among these learning algorithms and examine their advantages and disadvantages in real-world applications.
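The FL workflow the abstract describes (local training on private data, followed by server-side aggregation of model parameters rather than raw data) can be illustrated with a minimal federated averaging (FedAvg) sketch. This is an illustrative toy example, not the paper's method: the 1-D linear model, the synthetic client datasets, and all hyperparameters below are assumptions chosen for clarity.

```python
# Minimal FedAvg sketch: each client fits a 1-D linear model y = w * x
# on its own local data, and the server averages the resulting weights.
# Only weights travel to the server; the raw (x, y) pairs never leave
# the clients, which is the privacy point made in the abstract.

def local_update(w, data, lr=0.1, epochs=20):
    """Gradient descent on one client's local dataset of (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fedavg(w0, clients, rounds=10):
    """Server loop: broadcast w, collect locally trained weights, average."""
    w = w0
    for _ in range(rounds):
        local_ws = [local_update(w, data) for data in clients]
        w = sum(local_ws) / len(local_ws)  # aggregation, no raw data shared
    return w

# Two clients whose data follows y = 3x, sampled at different points.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (0.5, 1.5)]]
w = fedavg(0.0, clients)
print(round(w, 2))  # converges toward the shared slope 3.0
```

In a real deployment each client would hold a non-identical data distribution, which is exactly where plain weight averaging degrades and where the meta-learning adaptation step (FedMeta) discussed in the tutorial becomes relevant.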
