Why current differential privacy schemes are inapplicable for correlated data publishing?
World Wide Web (IF 2.7), Pub Date: 2020-06-08, DOI: 10.1007/s11280-020-00825-8
Hao Wang, Zhengquan Xu, Shan Jia, Ying Xia, Xu Zhang

Although data analysis and mining technologies can efficiently provide intelligent and personalized services, data owners may not always be willing to share their true data because of privacy concerns. Recently, differential privacy (DP) technology has achieved a good trade-off between data utility and privacy guarantee by publishing noisy outputs. Nonetheless, DP still carries a risk of privacy leakage when it handles correlated data directly. Current schemes attempt to extend DP to publish correlated data, but they either violate DP or suffer from low data utility. In this paper, we explore the essential cause of this inapplicability. Specifically, we suppose that it arises from the difference in correlation between the noise and the original data. To verify this supposition, we propose the notion of the Correlation-Distinguishability Attack (CDA), which separates IID (independent and identically distributed) noise from correlated data. Furthermore, taking time series as an example, we design an optimum filter to realize CDA in practical applications. Experimental results support our supposition and show that the privacy level of current approaches degrades under CDA.
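
As a rough illustration of the correlation-distinguishability idea, the sketch below perturbs a correlated time series with IID Laplace noise (the standard DP-style mechanism) and then filters the published series, comparing the estimation error before and after filtering. The AR(1) model, the noise scale, and the use of scipy.signal.wiener as a stand-in for the paper's optimum filter are all illustrative assumptions, not the authors' exact construction.

# Sketch of a CDA-style attack: IID Laplace noise is uncorrelated in time,
# while the underlying series is strongly autocorrelated, so a linear filter
# can suppress much of the injected noise.
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)

# Correlated original data: an AR(1) time series x_t = 0.95 * x_{t-1} + e_t
n, phi = 2000, 0.95
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=1.0)

# DP-style perturbation: IID Laplace noise (scale chosen only for illustration)
noise = rng.laplace(scale=2.0, size=n)
y = x + noise

# CDA step: exploit the correlation mismatch with a Wiener filter
x_hat = wiener(y, mysize=15)

mse_published = np.mean((y - x) ** 2)      # noise power left by the mechanism
mse_filtered = np.mean((x_hat - x) ** 2)   # residual error after filtering

print(f"MSE of published series vs. original: {mse_published:.2f}")
print(f"MSE of filtered estimate vs. original: {mse_filtered:.2f}")
# If the filtered MSE is much smaller than the published MSE, a large part of
# the injected noise has been separated from the data, i.e., the effective
# privacy protection of the mechanism degrades under the attack.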



