Model-Protected Multi-Task Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8) Pub Date: 2020-08-11, DOI: 10.1109/tpami.2020.3015859
Jian Liang , Ziqi Liu , Jiayu Zhou , Xiaoqian Jiang , Changshui Zhang , Fei Wang

Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together, whereas in single-task learning (STL) each task is learned independently. MTL often yields better-trained models because it can exploit the commonalities among related tasks. However, because MTL algorithms may "leak" information across the models of different tasks, MTL poses a potential security risk: an adversary may participate in the MTL process through one task and thereby acquire the model information of another task. Previously proposed privacy-preserving MTL methods protect data instances rather than models, and some of them may underperform STL methods. In this paper, we propose a privacy-preserving MTL framework that prevents each model's information from leaking to the other models, based on perturbing the covariance matrix of the model matrix. We instantiate the framework with two popular MTL approaches: learning the low-rank pattern and learning the group-sparse pattern of the model matrix. Our algorithms are guaranteed not to underperform STL methods. We build our methods on tools from differential privacy; we provide privacy guarantees and utility bounds and consider heterogeneous privacy budgets. Experiments demonstrate that, on the proposed model-protection problem, our algorithms outperform baselines constructed from existing privacy-preserving MTL methods.
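The abstract does not spell out the perturbation mechanism, so the sketch below uses the standard Gaussian mechanism on the covariance of the task-model matrix as a stand-in, followed by a top-k eigendecomposition to recover a shared low-rank subspace (the low-rank instantiation mentioned above). The function names (`perturb_covariance`, `shared_low_rank_projection`) and all parameter choices (`clip`, `epsilon`, `delta`, `k`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def perturb_covariance(W, epsilon, delta, clip=1.0, rng=None):
    """Hypothetical sketch: add symmetric Gaussian noise to the covariance
    W @ W.T of a d x m model matrix (one column per task). Columns are
    clipped so the sensitivity of the covariance to any single task's
    model is on the order of clip**2."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.maximum(np.linalg.norm(W, axis=0, keepdims=True), 1e-12)
    W = W * np.minimum(1.0, clip / norms)          # bound each task's norm
    cov = W @ W.T
    # Gaussian-mechanism noise scale for (epsilon, delta)-DP.
    sigma = clip ** 2 * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=cov.shape)
    return cov + (noise + noise.T) / 2.0           # keep the result symmetric

def shared_low_rank_projection(noisy_cov, k):
    """Recover a rank-k shared subspace from the perturbed covariance and
    return the projector each task can apply to its own model."""
    eigvals, eigvecs = np.linalg.eigh(noisy_cov)   # eigenvalues in ascending order
    U = eigvecs[:, -k:]                            # top-k eigenvectors
    return U @ U.T                                 # d x d projection matrix

# Example: 5 tasks with 20-dimensional models sharing a rank-2 structure.
rng = np.random.default_rng(0)
basis = rng.normal(size=(20, 2))
W = basis @ rng.normal(size=(2, 5))                # ground-truth low-rank models
P = shared_low_rank_projection(perturb_covariance(W, 1.0, 1e-5, rng=rng), k=2)
W_shared = P @ W                                   # project onto the shared subspace
```

The design point this illustrates is the one the abstract emphasizes: tasks exchange only a perturbed covariance rather than their raw models, so the shared structure can be estimated without any task exposing its model directly.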

Updated: 2024-08-22