Preserving differential privacy in deep neural networks with relevance-based adaptive noise imposition.
Neural Networks (IF 6.0), Pub Date: 2020-02-11, DOI: 10.1016/j.neunet.2020.02.001
Maoguo Gong, Ke Pan, Yu Xie, A. K. Qin, Zedong Tang

In recent years, deep learning has achieved remarkable results in the field of artificial intelligence. However, the training process of deep neural networks may leak individual privacy: given the model and some background information about a target individual, an adversary can maliciously infer that individual's sensitive features. It is therefore imperative to protect the sensitive information contained in the training data. Differential privacy is a state-of-the-art paradigm for providing privacy guarantees on datasets, shielding private and sensitive information from adversarial attacks. However, existing privacy-preserving models based on differential privacy are less than satisfactory, since traditional approaches inject the same amount of noise into all parameters, which can degrade the trade-off between model utility and the privacy guarantee of the training data. In this paper, we present a general differentially private deep neural network learning framework based on relevance analysis, which aims to bridge the gap between private and non-private models while providing an effective privacy guarantee for sensitive information. The proposed model perturbs gradients according to the relevance between neurons in different layers and the model output. Specifically, during backward propagation, more noise is added to the gradients of neurons that are less relevant to the model output, and vice versa. Experiments on five real datasets demonstrate that our mechanism not only bridges the gap between private and non-private models, but also effectively prevents the disclosure of sensitive information.
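The abstract does not spell out the exact perturbation rule, so the sketch below is only an illustration of the general idea of relevance-scaled gradient noise, not the authors' algorithm. It assumes a simple proxy for relevance (|gradient × weight|, normalized per layer) and a heuristic rule that gives less-relevant parameters a larger Gaussian noise scale after standard gradient clipping; the function names and constants are hypothetical.

```python
import numpy as np

def relevance_scores(grads, weights, eps=1e-12):
    """Proxy relevance: larger |grad * weight| is taken to mean more
    influence on the model output. Normalized to sum to 1."""
    rel = np.abs(grads * weights)
    return rel / (rel.sum() + eps)

def adaptive_dp_gradient(grads, weights, clip_norm=1.0, base_sigma=1.0, rng=None):
    """Clip the gradient vector, then add Gaussian noise whose per-parameter
    scale decreases with the relevance score (a heuristic modulation, not the
    paper's exact scheme)."""
    rng = rng or np.random.default_rng(0)

    # 1) Clip to bound sensitivity, as in standard DP-SGD.
    norm = np.linalg.norm(grads)
    grads = grads * min(1.0, clip_norm / (norm + 1e-12))

    # 2) Relevance-adaptive noise: low relevance -> larger noise multiplier.
    rel = relevance_scores(grads, weights)
    noise_scale = base_sigma * clip_norm * (1.0 - rel + rel.mean())

    return grads + rng.normal(0.0, noise_scale, size=grads.shape)

# Toy usage on a single layer's flattened parameters.
w = np.random.default_rng(1).normal(size=10)
g = np.random.default_rng(2).normal(size=10)
print(adaptive_dp_gradient(g, w))
```

In this sketch the uniform-noise DP-SGD baseline corresponds to dropping the relevance term and using a constant noise scale; the adaptive variant simply redistributes noise so that parameters deemed more relevant to the output are perturbed less.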
