Differential Privacy Protection over Deep Learning: An Investigation of Its Impacted Factors
Computers & Security (IF 5.6), Pub Date: 2020-12-01, DOI: 10.1016/j.cose.2020.102061
Ying Lin, Ling-Yan Bao, Ze-Minghui Li, Shu-Zheng Si, Chao-Hsien Chu

Abstract Deep learning (DL) has been widely applied to achieve promising results in many fields, but it still raises various privacy concerns and issues. Applying differential privacy (DP) to DL models is an effective way to ensure privacy-preserving training and classification. In this paper, we revisit the DP stochastic gradient descent (DP-SGD) method, which has been used by several algorithms and systems and achieves good privacy protection. However, several factors, such as the order in which noise is added and the model used, may affect its performance to varying degrees. We empirically show that adding noise first and clipping second not only achieves significantly higher accuracy but also accelerates convergence. Rigorous experiments were conducted on three different datasets to train two popular DL models, the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) network. For the CNN, the accuracy rate increases by 3%, 8%, and 10% on average for the respective datasets, and the loss value decreases by 18%, 14%, and 22% on average. For the LSTM, the accuracy rate increases by 18%, 13%, and 12% on average, and the loss value decreases by 55%, 25%, and 23% on average. We also compared the performance of the proposed method with a state-of-the-art SGD-based technique. The results show that, given a reasonable clipping threshold, the proposed method not only performs better but also achieves the desired privacy protection. The proposed alternative can be applied to many existing privacy-preserving solutions.
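To make the order-of-operations question concrete, the following is a minimal NumPy sketch of one gradient-aggregation step in both orders: the standard DP-SGD recipe (clip each per-example gradient to an L2 bound C, sum, then add Gaussian noise), and the noise-first variant the abstract describes (perturb each per-example gradient, then clip). The function names, the per-example noise placement in the variant, and the parameter choices are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def clip(g, C):
    """Scale gradient g so its L2 norm is at most C."""
    norm = np.linalg.norm(g)
    return g * min(1.0, C / norm) if norm > 0 else g

def dp_sgd_step(per_example_grads, C, sigma, rng):
    """Standard DP-SGD order: clip each per-example gradient,
    sum, then add Gaussian noise with scale sigma * C."""
    clipped = [clip(g, C) for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * C, size=total.shape)
    return (total + noise) / len(per_example_grads)

def noise_first_step(per_example_grads, C, sigma, rng):
    """Noise-first variant (as sketched here): perturb each
    per-example gradient, then clip the noisy gradients."""
    noisy = [g + rng.normal(0.0, sigma * C, size=g.shape)
             for g in per_example_grads]
    return np.mean([clip(g, C) for g in noisy], axis=0)
```

One observable difference: in the noise-first variant every averaged term has norm at most C, so the resulting update is bounded by C as well, whereas in the standard order the added noise can push the update norm above C.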

Updated: 2020-12-01