Unexpected Information Leakage of Differential Privacy Due to the Linear Property of Queries
IEEE Transactions on Information Forensics and Security (IF 6.3), Pub Date: 2021-04-26, DOI: 10.1109/tifs.2021.3075843
Wen Huang , Shijie Zhou , Yongjian Liao

Differential privacy is a widely accepted concept of privacy preservation, and the Laplace mechanism is a famous instance of differentially private mechanisms used to deal with numerical data. In this paper, we find that differential privacy does not take the linear property of queries into account, resulting in unexpected information leakage. Specifically, the linear property makes it possible to divide one query into two queries, such as q(D)=q(D1)+q(D2) if D=D1∪D2 and D1∩D2=∅. If attackers want an answer to q(D), they can issue not only the query q(D) itself but also q(D1), computing q(D2) by themselves as long as they know D2. Through different divisions of one query, attackers can obtain multiple different answers to the same query from differentially private mechanisms. However, if the divisions are delicately designed, the total consumed privacy budget differs between the attackers' perspective and the mechanism's perspective. This difference leads to unexpected information leakage, because the privacy budget is the key parameter controlling the amount of information that a differentially private mechanism legally releases. To demonstrate this unexpected information leakage, we present a membership inference attack against the Laplace mechanism. Specifically, under the constraints of differential privacy, we propose a method for obtaining multiple independent, identically distributed samples of answers to queries that satisfy the linear property. The proposed method is based on the linear property and some background knowledge held by the attackers. When the background knowledge is sufficient, the method can obtain enough samples from differentially private mechanisms that the total consumed privacy budget becomes unreasonably large. Based on the obtained samples, a hypothesis testing method is used to determine whether a target record is in a target dataset.
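The attack described above can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual experimental setup: the dataset is a toy list of 0/1 records, q is a counting query with sensitivity 1, and each "division" simply moves one record the attacker already knows into the queried part D1, so every noisy answer to q(D1_k) plus the exactly computed q(D2_k) yields an independent Laplace-noised sample of q(D). Averaging these samples shrinks the noise far below what a single ε-budgeted answer would allow.

```python
import math
import random
import statistics

random.seed(0)

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Answer a numeric query under epsilon-differential privacy."""
    return true_value + laplace_noise(sensitivity / epsilon)

# Hypothetical setup: D holds 0/1 records, q is a counting query
# (sensitivity 1), and `target` is the record whose membership we test.
target = 1
D = [1] * 30 + [0] * 69 + [target]   # ground truth: target IS in D
epsilon = 0.1

# Attacker's background knowledge: a known subset of D (here, 90 records).
known = D[:90]
unknown = D[90:]

# Each division k puts one known record into D1; the mechanism's noisy
# answer to q(D1_k) plus the exactly computed q(D2_k) is an i.i.d.
# sample of q(D) + Laplace noise.
samples = []
for k in range(len(known)):
    D1 = unknown + [known[k]]          # D1_k = unknown part plus known[k]
    D2 = known[:k] + known[k + 1:]     # D2_k = the rest, known exactly
    noisy = laplace_mechanism(sum(D1), sensitivity=1, epsilon=epsilon)
    samples.append(noisy + sum(D2))    # attacker reconstructs q(D) + noise

# Averaging i.i.d. samples shrinks the noise; a simple hypothesis test
# then compares the estimate against q(D) with and without the target.
est = statistics.mean(samples)
print(f"samples: {len(samples)}, estimate of q(D): {est:.2f}, "
      f"true q(D): {sum(D)}")
```

With scale 1/ε = 10 per sample, a single answer has standard deviation ≈ 14, but the mean of 90 such samples has standard deviation ≈ 1.5, which is what makes the hypothesis test between "target present" (q(D)=31) and "target absent" (q(D)=30) feasible even though each individual query respected the ε budget.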

Updated: 2021-04-26