Quantifying Membership Privacy via Information Leakage
IEEE Transactions on Information Forensics and Security (IF 6.3), Pub Date: 2021-04-19, DOI: 10.1109/tifs.2021.3073804
Sara Saeidian, Giulia Cervia, Tobias J. Oechtering, Mikael Skoglund

Machine learning models are known to memorize the unique properties of individual data points in a training set. This memorization capability can be exploited by several types of attacks to infer information about the training data, most notably membership inference attacks. In this paper, we propose an approach based on information leakage for guaranteeing membership privacy. Specifically, we propose to use a conditional form of the notion of maximal leakage to quantify the information leaking about individual data entries in a dataset, i.e., the entrywise information leakage. We apply our privacy analysis to the Private Aggregation of Teacher Ensembles (PATE) framework for privacy-preserving classification of sensitive data and prove that the entrywise information leakage of its aggregation mechanism is Schur-concave when the injected noise has a log-concave probability density. The Schur-concavity of this leakage implies that increased consensus among teachers in labeling a query reduces its associated privacy cost. Finally, we derive upper bounds on the entrywise information leakage when the aggregation mechanism uses Laplace-distributed noise.
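For context, the entrywise quantity the abstract refers to builds on the maximal leakage of Issa, Wagner, and Kamath; its standard finite-alphabet form is sketched below. The notation is the usual one and is supplied here only for orientation, since the paper's exact conditional definition is not reproduced on this page.

```latex
% Standard (unconditional) maximal leakage from X to Y, finite alphabets.
% The paper's entrywise quantity is a conditional variant of this.
\mathcal{L}(X \to Y)
  = \sup_{U:\, U - X - Y - \hat{U}} \log \frac{\Pr[\hat{U} = U]}{\max_{u} P_U(u)}
  = \log \sum_{y \in \mathcal{Y}} \max_{x:\, P_X(x) > 0} P_{Y \mid X}(y \mid x)
```

The aggregation mechanism the abstract analyzes is PATE's noisy vote histogram: each teacher labels a query, Laplace noise is added to the per-class vote counts, and the argmax of the noisy counts is released. Below is a minimal NumPy sketch of that mechanism, not the authors' code; the function name and parameters are illustrative.

```python
import numpy as np

def pate_laplace_aggregate(votes, num_classes, scale, rng=None):
    """Noisy-max aggregation in the style of PATE (illustrative sketch).

    votes:       per-teacher predicted labels for a single query
    num_classes: size of the label alphabet
    scale:       Laplace noise scale b (larger b = stronger privacy)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Vote histogram over the classes, perturbed with i.i.d. Laplace noise.
    counts = np.bincount(votes, minlength=num_classes).astype(float)
    noisy_counts = counts + rng.laplace(loc=0.0, scale=scale, size=num_classes)
    return int(np.argmax(noisy_counts))

# Hypothetical query with high teacher consensus: 230 of 250 teachers vote 2.
rng = np.random.default_rng(0)
votes = np.concatenate([np.full(230, 2), rng.integers(0, 10, size=20)])
print(pate_laplace_aggregate(votes, num_classes=10, scale=20.0, rng=rng))
```

Such a high-consensus histogram is exactly the regime the Schur-concavity result addresses: the more concentrated the vote counts, the lower the entrywise information leakage incurred by answering the query.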

Updated: 2021-04-19