Privacy in Neural Network Learning: Threats and Countermeasures
IEEE Network (IF 6.8) Pub Date: 8-3-2018, DOI: 10.1109/mnet.2018.1700447
Shan Chang , Chao Li

Algorithmic breakthroughs, the feasibility of collecting huge amounts of data, and increasing computational power have contributed to the remarkable achievements of neural networks (NNs). In particular, since Deep Neural Network (DNN) learning has produced astonishing results in speech and image recognition, the number of sophisticated applications built on it has exploded. However, a growing number of privacy leakage incidents have been reported, and their severe consequences have raised great concern in this area. In this article, we focus on privacy issues in NN learning. First, we identify the privacy threats that arise during NN training and present privacy-preserving training schemes for both centralized and distributed settings. Second, we consider the privacy of prediction requests and discuss privacy-preserving protocols for NN prediction. We also analyze the privacy vulnerabilities of trained models: three types of attacks on the private information embedded in trained NN models are discussed, and a differential privacy-based solution is introduced.
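To make the differential privacy countermeasure concrete, below is a minimal sketch in the spirit of DP-SGD-style noisy gradient descent, not the article's exact scheme: each example's gradient is clipped to bound its sensitivity, and calibrated Gaussian noise is added before averaging. The function dp_gradient_step and all parameter values are illustrative assumptions.

import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Illustrative sketch, not the authors' scheme.
    # Clip each per-example gradient to at most clip_norm in L2 norm,
    # bounding the contribution (sensitivity) of any single training example.
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # Gaussian noise scaled to the clipping bound masks individual examples.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped[0].shape)
    return noisy_sum / len(per_example_grads)

# Toy usage with three per-example gradients of a 4-parameter model.
grads = [np.array([0.5, -1.2, 0.3, 2.0]),
         np.array([0.1, 0.4, -0.2, 0.6]),
         np.array([-0.7, 0.9, 1.5, -0.3])]
print(dp_gradient_step(grads))

The noise scale is tied to clip_norm because the clipping bound is the L2 sensitivity of the summed gradient; the privacy budget actually spent depends on noise_multiplier, the sampling rate, and the number of training steps, which this sketch does not track.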

Updated: 2024-08-22