A fair and verifiable federated learning profit-sharing scheme
Wireless Networks (IF 3) Pub Date: 2022-09-05, DOI: 10.1007/s11276-022-03110-w
Xianxian Li, Mei Huang, Shiqi Gao, Zhenkui Shi

In recent years, gradient boosting decision trees (GBDTs) have become a popular machine learning algorithm, and there have been several studies on federated GBDT training to preserve clients' privacy. However, existing schemes face some severe issues. For example, the integrity of the training process cannot be guaranteed, and most schemes ignore how to fairly evaluate the performance gains contributed by different clients' datasets. Developing a fair and secure contribution evaluation mechanism in federated learning that motivates clients to participate remains a challenge. In this paper, we propose a fair and verifiable secure federated GBDT scheme that utilizes Trusted Execution Environments (TEEs) to ensure the integrity of the GBDT training process and to quantify the contribution of different parties fairly. We propose a fair and verifiable contribution calculation mechanism based on TEEs and an adaptive truncated Monte Carlo approximation of the Shapley value. The mechanism adapts to the limited resources of devices and prevents dishonest behavior during the training process. In addition, to the best of our knowledge, this is the first attempt to implement contribution verification in a federated GBDT scheme. We implement a prototype of our scheme and evaluate it comprehensively. The results show that, compared with computing each party's contribution by the exact Shapley value method, our scheme significantly improves the efficiency of contribution calculation when there are more parties, and provides integrity and fairness guarantees for both model training and contribution calculation.
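The abstract does not spell out the adaptive procedure, so the following is only a minimal Python sketch of the truncated Monte Carlo Shapley estimation idea that the contribution mechanism builds on. The party identifiers and the `utility` callable (standing in for training a federated GBDT on a coalition's data and scoring it on a validation set) are hypothetical placeholders; the adaptive-truncation details and the TEE-based verification of the scheme are omitted.

```python
import random

def tmc_shapley(parties, utility, num_permutations=200, tolerance=1e-3):
    """Truncated Monte Carlo estimate of each party's Shapley value.

    parties : list of party identifiers (hypothetical placeholders)
    utility : callable mapping a list of parties to a performance score,
              e.g. validation accuracy of a GBDT trained on their joint data
    Returns a dict mapping each party to its estimated Shapley value.
    """
    full_score = utility(list(parties))   # score with every party included
    empty_score = utility([])             # baseline score with no data
    shapley = {p: 0.0 for p in parties}

    for t in range(1, num_permutations + 1):
        perm = random.sample(list(parties), len(parties))  # random ordering
        coalition = []
        prev_score = empty_score
        for p in perm:
            if abs(full_score - prev_score) < tolerance:
                # Truncate: the coalition already performs about as well as
                # the full set, so later marginal contributions count as 0
                # and no further (re)training is needed for this permutation.
                marginal = 0.0
            else:
                coalition.append(p)
                new_score = utility(coalition)
                marginal = new_score - prev_score
                prev_score = new_score
            # Incremental average of p's marginal contribution over permutations.
            shapley[p] += (marginal - shapley[p]) / t
    return shapley

# Toy usage with a made-up utility: score grows with the amount of data held.
if __name__ == "__main__":
    data_sizes = {"A": 100, "B": 300, "C": 50}
    toy_utility = lambda coalition: sum(data_sizes[p] for p in coalition) / 450
    print(tmc_shapley(list(data_sizes), toy_utility))
```

Truncation skips retraining once a coalition already performs close to the full set, which is what makes the estimate tractable as the number of parties grows, compared with the exact Shapley computation over all coalitions.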



Updated: 2022-09-06