NeuralDP: Differentially private neural networks by design
arXiv - CS - Cryptography and Security. Pub Date: 2021-07-30, DOI: arxiv-2107.14582
Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis

The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual. The predominant approach to differentially private training of neural networks is DP-SGD, which relies on norm-based gradient clipping as a method for bounding sensitivity, followed by the addition of appropriately calibrated Gaussian noise. In this work we propose NeuralDP, a technique for privatising activations of some layer within a neural network, which by the post-processing properties of differential privacy yields a differentially private network. We experimentally demonstrate on two datasets (MNIST and Pediatric Pneumonia Dataset (PPD)) that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
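The DP-SGD baseline described above can be summarized in a few lines: clip each per-sample gradient to a fixed norm bound (bounding sensitivity), average, and add calibrated Gaussian noise before the parameter update. The sketch below is a minimal illustration of that mechanism, assuming flat NumPy gradient vectors; the function and parameter names are illustrative and not from the paper.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD update (illustrative sketch, not the paper's code).

    Clips each per-sample gradient to clip_norm, averages, adds Gaussian
    noise calibrated to the clipping bound, and applies the SGD step.
    """
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Norm-based clipping bounds each sample's contribution (sensitivity).
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise with std proportional to clip_norm gives the DP guarantee.
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_sample_grads),
        size=mean_grad.shape,
    )
    return params - lr * (mean_grad + noise)
```

NeuralDP instead privatises the activations of an internal layer, so that by the post-processing property every subsequent layer (and the network's output) inherits the differential privacy guarantee without per-gradient clipping.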

Updated: 2021-08-02