When Federated Learning Meets Blockchain: A New Distributed Learning Paradigm
IEEE Computational Intelligence Magazine (IF 9) Pub Date: 2022-07-19, DOI: 10.1109/mci.2022.3180932
Chuan Ma, Jun Li, Long Shi, Ming Ding, Taotao Wang, Zhu Han, H. Vincent Poor
Motivated by the increasingly powerful computing capabilities of end-user equipment, and by the growing privacy concerns over sharing sensitive raw data, a distributed machine learning paradigm known as federated learning (FL) has emerged. By training models locally at each client and aggregating learning models at a central server, FL has the capability to avoid sharing data directly, thereby reducing privacy leakage. However, the conventional FL framework relies heavily on a single central server, and it may fail if such a server behaves maliciously. To address this single point of failure, in this work, a blockchain-assisted decentralized FL framework is investigated, which can prevent malicious clients from poisoning the learning process, and thus provides a self-motivated and reliable learning environment for clients. In this framework, the model aggregation process is fully decentralized and the tasks of training for FL and mining for blockchain are integrated into each participant. Privacy and resource-allocation issues are further investigated in the proposed framework, and a critical and unique issue inherent in the proposed framework is disclosed. In particular, a lazy client can simply duplicate models shared by other clients to reap benefits without contributing its resources to FL. To address these issues, analytical and experimental results are provided to shed light on possible solutions, i.e., adding noise to achieve local differential privacy and using pseudo-noise (PN) sequences as watermarks to detect lazy clients.
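The mechanisms the abstract names — local training, decentralized FedAvg-style aggregation, noise added for local differential privacy, and pseudo-noise (PN) watermarks to detect lazy clients that duplicate others' models — can be illustrated with a toy sketch. Everything below (the model dimension, noise scales, watermark strength, and all function names) is an illustrative assumption, not the paper's actual protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000        # flattened model size (illustrative)
N_CLIENTS = 4
LAP_SCALE = 0.2     # Laplace noise scale standing in for an LDP mechanism
WM_STRENGTH = 0.05  # watermark amplitude (illustrative)

# Each client holds a secret +/-1 pseudo-noise (PN) sequence for this round.
pn = {i: rng.choice([-1.0, 1.0], size=DIM) for i in range(N_CLIENTS)}

def local_update(global_model, client_id):
    """One honest client round: 'train', add privacy noise, embed a watermark."""
    trained = global_model + rng.normal(0.0, 0.1, DIM)   # stand-in for local SGD
    noised = trained + rng.laplace(0.0, LAP_SCALE, DIM)  # LDP-style perturbation
    return noised + WM_STRENGTH * pn[client_id]          # spread-spectrum mark

def detect_owner(update, global_model):
    """Correlate the residual with every PN sequence; the true owner peaks."""
    residual = update - global_model
    scores = {i: abs(residual @ pn[i]) / DIM for i in pn}
    return max(scores, key=scores.get)

global_model = np.zeros(DIM)
updates = {i: local_update(global_model, i) for i in range(3)}
updates[3] = updates[0].copy()  # client 3 is lazy: duplicates client 0's model

# Decentralized FedAvg-style aggregation (uniform weights for the sketch).
new_global = np.mean(list(updates.values()), axis=0)

# Client 3's submission carries client 0's watermark, exposing the duplication.
owner = detect_owner(updates[3], global_model)
```

The detection step works because the PN sequences are nearly orthogonal: correlating a residual against the wrong sequence averages to roughly zero, while the embedded watermark survives the privacy noise and surfaces as a clear peak for its true owner.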

Updated: 2022-07-22