A Survey on Differentially Private Machine Learning [Review Article]
IEEE Computational Intelligence Magazine (IF 9), Pub Date: 2020-05-01, DOI: 10.1109/mci.2020.2976185
Maoguo Gong , Yu Xie , Ke Pan , Kaiyuan Feng , A.K. Qin

Recent years have witnessed remarkable successes of machine learning in various applications. However, machine learning models carry a potential risk of leaking private information contained in their training data, which has attracted increasing research attention. As one of the mainstream privacy-preserving techniques, differential privacy provides a promising way to prevent the leakage of individual-level private information in training data while preserving the data's utility for model building. This work provides a comprehensive survey of existing work that incorporates differential privacy into machine learning, so-called differentially private machine learning, and categorizes it into two broad categories according to the differential privacy mechanism used: the Laplace/Gaussian/exponential mechanism and the output/objective perturbation mechanism. In the former, a calibrated amount of noise is added to the non-private model; in the latter, the output or the objective function is perturbed by random noise. In particular, the survey covers techniques for differentially private deep learning to alleviate recent concerns about the privacy of big-data contributors. In addition, research challenges in terms of model utility, privacy level, and applications are discussed. To tackle these challenges, several potential future research directions for differentially private machine learning are pointed out.
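To make the first category concrete, the sketch below illustrates the classic Laplace mechanism described in the abstract: a query result is released with noise drawn from a Laplace distribution whose scale is calibrated to the query's L1 sensitivity divided by the privacy budget ε. This is a minimal, self-contained illustration of the general technique, not code from the surveyed paper; the function name and the example count query are illustrative choices.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    which satisfies epsilon-differential privacy for a query whose
    L1 sensitivity is at most `sensitivity`."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query over a toy dataset.
# A count has L1 sensitivity 1, since adding or removing one
# record changes the count by at most 1.
data = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
true_count = float(data.sum())
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Smaller ε means stronger privacy but more noise: the standard deviation of the released value grows as sensitivity·√2/ε, which is exactly the utility/privacy trade-off the survey's research challenges refer to.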

Updated: 2020-05-01