Federated Unlearning: Guarantee the Right of Clients to Forget
IEEE NETWORK (IF 6.8) Pub Date: 2022-11-25, DOI: 10.1109/mnet.001.2200198
Leijie Wu, Song Guo, Junxiao Wang, Zicong Hong, Jie Zhang, Yaohong Ding
The Right to be Forgotten gives a data owner the right to revoke their data from an entity storing it. In the context of federated learning, the Right to be Forgotten requires that, in addition to the data itself, any influence of the data on the FL model must disappear, a process we call "federated unlearning." The most straightforward and legitimate way to implement federated unlearning is to remove the revoked data and retrain the FL model from scratch. Yet the computational and time overhead of fully retraining FL models can be prohibitive. In this article, we take a first step toward a comprehensive investigation of how to realize the unlearning paradigm in the context of federated learning. First, we define the problem of efficient federated unlearning, including its challenges and goals, and we identify three common types of federated unlearning requests: class unlearning, client unlearning, and sample unlearning. Based on those challenges and goals, we propose a general federated unlearning pipeline for these three types of requests. We revisit how training data affects the final FL model's performance and, on that basis, empower the proposed framework with reverse stochastic gradient ascent (SGA) and elastic weight consolidation (EWC). Various experiments verify the effectiveness of the proposed method in terms of both unlearning efficacy and efficiency. We believe the proposed method will serve as an essential component of future machine unlearning systems.
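The core mechanism the abstract describes — gradient ascent on the data to be forgotten, constrained by EWC so the model does not lose what it learned from the retained data — can be illustrated on a toy centralized model. The sketch below is an assumption-laden simplification (logistic regression in NumPy, a single sample-unlearning request, diagonal Fisher information as the EWC importance weights), not the authors' federated implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, X, y):
    # binary cross-entropy loss of a linear logistic model
    p = np.clip(sigmoid(X @ w), 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad(w, X, y):
    # gradient of the BCE loss w.r.t. the weights
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# Toy data standing in for the aggregated FL training set: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# 1) Normal training: stand-in for the converged global FL model.
w = np.zeros(2)
for _ in range(200):
    w -= 0.5 * grad(w, X, y)
w_anchor = w.copy()  # EWC anchors parameters to this trained model

forget, retain = slice(0, 10), slice(10, None)  # a sample-unlearning request

# 2) Diagonal Fisher information on the retained data: per-parameter
#    importance weights for the EWC penalty.
per_sample = X[retain] * (sigmoid(X[retain] @ w) - y[retain])[:, None]
fisher = np.mean(per_sample ** 2, axis=0)

loss_forget_before = bce_loss(w, X[forget], y[forget])

# 3) Unlearning: gradient *ascent* on the forget set, while the EWC term
#    pulls parameters back toward the anchor to preserve retained knowledge.
lam, lr = 1.0, 0.1
for _ in range(10):
    w += lr * grad(w, X[forget], y[forget])   # ascend the forget-set loss
    w -= lr * lam * fisher * (w - w_anchor)   # EWC consolidation step

loss_forget_after = bce_loss(w, X[forget], y[forget])
acc_retain = np.mean((sigmoid(X[retain] @ w) > 0.5) == y[retain])
```

After the ascent steps, the loss on the forgotten samples rises (their influence is being erased) while the EWC penalty keeps accuracy on the retained data close to the original model's — the efficacy/efficiency trade-off the abstract evaluates.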

Updated: 2024-08-22