Differential Privacy Meets Maximum-weight Matching
arXiv - CS - Multiagent Systems Pub Date : 2020-11-16 , DOI: arxiv-2011.07934
Panayiotis Danassis, Aleksei Triastcyn, Boi Faltings

When it comes to large-scale multi-agent systems with a diverse set of agents, traditional differential privacy (DP) mechanisms are ill-matched because they consider a very broad class of adversaries, and they protect all users, independent of their characteristics, by the same guarantee. Achieving meaningful privacy then leads to a pronounced reduction in solution quality. Such assumptions are unnecessary in many real-world applications for three key reasons: (i) users might be willing to disclose less sensitive information (e.g., city of residence, but not exact location), (ii) the attacker might possess auxiliary information (e.g., city of residence in a mobility-on-demand system, or reviewer expertise in a paper assignment problem), and (iii) domain characteristics might exclude a subset of solutions (an expert on auctions would not be assigned to review a robotics paper, so there is no need for indistinguishability between reviewers in different fields). We introduce Piecewise Local Differential Privacy (PLDP), a privacy model designed to protect the utility function in applications where the attacker possesses additional information on the characteristics of the utility space. PLDP enables a high degree of privacy, while being applicable to real-world, unboundedly large settings. Moreover, we propose PALMA, a privacy-preserving heuristic for maximum-weight matching. We evaluate PALMA in a vehicle-passenger matching scenario using real data and demonstrate that it provides strong privacy, $\varepsilon \leq 3$ and a median of $\varepsilon = 0.44$, and high-quality matchings (only $10.8\%$ worse than the non-private optimal).
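As a rough illustration of the general idea (privatizing utilities locally before computing a matching), the sketch below adds Laplace noise to a utility matrix and then runs a greedy maximum-weight matching. This is a generic local-DP baseline, not the paper's PALMA algorithm or its PLDP model; all function names and parameters here are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(utilities, epsilon, sensitivity=1.0):
    """Classic Laplace mechanism: perturb each utility with noise of
    scale sensitivity/epsilon. Stand-in for a local-DP report, not PLDP."""
    scale = sensitivity / epsilon
    return [[u + laplace_noise(scale) for u in row] for row in utilities]

def greedy_matching(weights):
    """Greedy maximum-weight bipartite matching: repeatedly take the
    heaviest remaining edge whose endpoints are both unmatched.
    A simple 1/2-approximation, used here in place of PALMA."""
    edges = sorted(((w, i, j) for i, row in enumerate(weights)
                    for j, w in enumerate(row)), reverse=True)
    matched_left, matched_right, matching = set(), set(), []
    for w, i, j in edges:
        if i not in matched_left and j not in matched_right:
            matched_left.add(i)
            matched_right.add(j)
            matching.append((i, j))
    return matching

# Example: 2 agents x 2 resources, privatized then matched.
random.seed(42)
utilities = [[3.0, 1.0], [1.0, 2.0]]
print(greedy_matching(privatize(utilities, epsilon=1.0)))
```

Smaller `epsilon` means more noise and stronger privacy, at the cost of matching quality, which is exactly the privacy-utility trade-off the paper quantifies.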

Updated: 2020-11-17