Towards Fair and Privacy-Preserving Federated Deep Models
IEEE Transactions on Parallel and Distributed Systems ( IF 5.3 ) Pub Date : 2020-11-01 , DOI: 10.1109/tpds.2020.2996273
Lingjuan Lyu , Jiangshan Yu , Karthik Nandakumar , Yitong Li , Xingjun Ma , Jiong Jin , Han Yu , Kee Siong Ng

Current standalone deep learning frameworks tend to result in overfitting and low utility. This problem can be addressed either by a centralized framework that deploys a central server to train a global model on the joint data from all parties, or by a distributed framework that leverages a parameter server to aggregate local model updates. Server-based solutions are prone to a single point of failure. In this respect, collaborative learning frameworks such as federated learning (FL) are more robust. However, existing federated learning frameworks overlook an important aspect of participation: fairness. All parties receive the same final model regardless of their contributions. To address these issues, we propose a decentralized Fair and Privacy-Preserving Deep Learning (FPPDL) framework that incorporates fairness into federated deep learning models. In particular, we design a local credibility mutual evaluation mechanism to guarantee fairness, and a three-layer onion-style encryption scheme to guarantee both accuracy and privacy. Unlike the existing FL paradigm, under FPPDL each participant receives a different version of the FL model, with performance commensurate with their contributions. Experiments on benchmark datasets demonstrate that FPPDL balances fairness, privacy, and accuracy. It enables federated learning ecosystems to detect and isolate low-contribution parties, thereby promoting responsible participation.
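The abstract does not specify the construction of the three-layer onion-style encryption scheme. As a rough, purely illustrative sketch of the onion idea — wrapping a local model update in successive encryption layers that must be peeled in reverse order — the toy below uses a SHA-256-derived XOR keystream as a stand-in cipher. This is not the paper's scheme and is not cryptographically secure; the layer keys and function names are hypothetical.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Derive an n-byte keystream from key via iterated SHA-256 (toy cipher, not secure)."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def _xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply one symmetric XOR layer; calling it twice with the same key undoes it."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def onion_encrypt(update: bytes, layer_keys) -> bytes:
    """Wrap a serialized model update in one layer per key, innermost first."""
    for k in layer_keys:
        update = _xor_layer(update, k)
    return update

def onion_decrypt(blob: bytes, layer_keys) -> bytes:
    """Peel the layers in reverse order to recover the original update."""
    for k in reversed(layer_keys):
        blob = _xor_layer(blob, k)
    return blob
```

With three keys, a recipient holding all of them recovers the update exactly, while any party missing a key sees only an opaque blob — the property the onion structure is meant to provide.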
