Privacy in Neural Network Learning: Threats and Countermeasures
IEEE NETWORK (IF 9.3), Pub Date: 2018-08-03, DOI: 10.1109/mnet.2018.1700447
Shan Chang, Chao Li

Algorithmic breakthroughs, the feasibility of collecting huge amounts of data, and increasing computational power have contributed to the remarkable achievements of neural networks (NNs). In particular, since deep neural network (DNN) learning has produced astonishing results in speech and image recognition, the number of sophisticated applications built on it has exploded. However, a growing number of privacy-leakage incidents have been reported, and their severe consequences have caused great concern in this area. In this article, we focus on privacy issues in NN learning. First, we identify the privacy threats that arise during NN training and present privacy-preserving training schemes based on both centralized and distributed approaches. Second, we consider the privacy of prediction requests and discuss privacy-preserving protocols for NN prediction. We also analyze the privacy vulnerabilities of trained models: three types of attacks on the private information embedded in trained NN models are discussed, and a differential privacy-based solution is introduced.
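The abstract does not detail the differential privacy-based solution. As a minimal illustrative sketch of the general technique commonly used to protect training data in NN learning (per-example gradient clipping plus calibrated Gaussian noise, in the style of DP-SGD), the Python snippet below shows one training-step update; the function name and parameter values are illustrative assumptions, not the authors' scheme.

import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Clip each example's gradient to L2 norm clip_norm, average the batch,
    # then add Gaussian noise calibrated to the clipping bound.
    rng = rng if rng is not None else np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # The averaged gradient's sensitivity to any one example is clip_norm / batch_size,
    # so the noise scale is proportional to that bound.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

# Example: one batch of 32 per-example gradients for a 10-parameter model.
batch_grads = [np.random.randn(10) for _ in range(32)]
noisy_update = dp_noisy_gradient(batch_grads)

Because the noise is scaled to the clipping bound, no single training example can dominate the released gradient, which is what limits what an attacker can infer about any individual record from the trained model.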
