Verifying Pufferfish Privacy in Hidden Markov Models
arXiv - CS - Formal Languages and Automata Theory. Pub Date: 2020-08-04. arXiv: 2008.01704
Depeng Liu, Bow-yaw Wang and Lijun Zhang

Pufferfish is a Bayesian privacy framework for designing and analyzing privacy mechanisms. It refines differential privacy, the current gold standard in data privacy, by allowing explicit prior knowledge in privacy analysis. Within these frameworks, a number of privacy mechanisms have been developed in the literature. In practice, privacy mechanisms often need to be modified or adjusted for specific applications, and their privacy risks have to be re-evaluated under different circumstances. Moreover, computing devices can only approximate continuous noise through floating-point computation, which is discrete in nature. Privacy proofs can thus be complicated and error-prone, and such tedious tasks can be burdensome for average data curators. In this paper, we propose an automatic verification technique for Pufferfish privacy. We use hidden Markov models to specify and analyze discretized Pufferfish privacy mechanisms, and we show that the Pufferfish verification problem in hidden Markov models is NP-hard. Using Satisfiability Modulo Theories (SMT) solvers, we propose an algorithm to analyze privacy requirements. We implement our algorithm in a prototypical tool called FAIER and present several case studies. Surprisingly, our case studies show that naïve discretization of well-established privacy mechanisms often fails, as witnessed by counterexamples generated by FAIER; for the discretized Above Threshold mechanism, we show that it provides absolutely no privacy. Finally, we compare our approach with a testing-based approach on several case studies, and show that our verification technique can be combined with the testing-based approach to (i) efficiently certify counterexamples and (ii) obtain a better lower bound for the privacy budget ε.
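
For background, here is our paraphrase of the standard ε-Pufferfish definition (due to Kifer and Machanavajjhala's original framework, not text from this paper). The explicit prior θ over the data is what distinguishes it from plain differential privacy:

```latex
% epsilon-Pufferfish privacy (standard definition, paraphrased):
% for every prior \theta \in \Theta, every discriminative secret pair
% (s_i, s_j) \in Q with \Pr[s_i \mid \theta] > 0 and
% \Pr[s_j \mid \theta] > 0, and every output w:
\[
  e^{-\epsilon}
    \le \frac{\Pr[\mathcal{M}(X) = w \mid s_i, \theta]}
             {\Pr[\mathcal{M}(X) = w \mid s_j, \theta]}
    \le e^{\epsilon}.
\]
```

The sketch below shows the classic continuous-noise Above Threshold (sparse vector) mechanism whose discretization the paper analyzes. It follows the textbook presentation (noise scales 2/ε and 4/ε for sensitivity-1 queries, as in Dwork and Roth); the function names are our own, and this is not the authors' encoding or the FAIER tool, whose internals the abstract does not describe.

```python
import math
import random

def laplace(scale: float) -> float:
    # Inverse-transform sampling for Laplace(0, scale) noise.
    u = random.random() - 0.5
    return math.copysign(-scale * math.log(1.0 - 2.0 * abs(u)), u)

def above_threshold(queries, database, threshold, epsilon):
    # Classic Above Threshold: return the index of the first query whose
    # noisy answer exceeds a noisy threshold, or None if none does.
    # With exact continuous Laplace noise and sensitivity-1 queries this
    # is epsilon-differentially private; the paper's counterexamples show
    # that naively discretizing the noise draws can void the guarantee.
    noisy_threshold = threshold + laplace(2.0 / epsilon)
    for i, query in enumerate(queries):
        if query(database) + laplace(4.0 / epsilon) >= noisy_threshold:
            return i
    return None
```

The failure reported in the paper arises exactly at the laplace() calls: once the continuous samples are replaced by naively rounded, finite-precision values, the ratio bound in the definition above can be violated for some outputs, and for discretized Above Threshold the paper reports that no finite privacy budget holds.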

Updated: 2020-08-05