Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution
arXiv - CS - Multiagent Systems Pub Date : 2021-06-30 , DOI: arxiv-2107.00032
Alex Raymond, Matthew Malencia, Guilherme Paulino-Passos, Amanda Prorok

Fairness is commonly seen as a property of the global outcome of a system and assumes centralisation and complete knowledge. However, in real decentralised applications, agents have only partial observation capabilities. Under limited information, agents rely on communication to divulge some of their private (and unobservable) information to others. When an agent deliberates to resolve conflicts, limited knowledge may cause its perspective of a correct outcome to differ from the actual outcome of the conflict resolution. This is subjective unfairness. To enable decentralised, fairness-aware conflict resolution under privacy constraints, we make two contributions: (1) a novel interaction approach and (2) a formalism of the relationship between privacy and fairness. Our proposed interaction approach is an architecture for privacy-aware explainable conflict resolution in which agents engage in a dialogue of hypotheses and facts. To measure the privacy-fairness relationship, we define subjective and objective fairness at both the local and global scope and formalise the impact of partial observability due to privacy on these different notions of fairness. We first study our proposed architecture and the privacy-fairness relationship in the abstract, testing different argumentation strategies on a large number of randomised cultures. We empirically demonstrate the trade-off between privacy, objective fairness, and subjective fairness, and show that better strategies can mitigate the effects of privacy in distributed systems. In addition to this analysis across a broad set of randomised abstract cultures, we analyse a case study for a specific scenario: we instantiate our architecture in a multi-agent simulation of prioritised rule-aware collision avoidance with limited information disclosure.
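The abstract's central idea, that privacy-restricted disclosure can make an agent's view of the correct outcome diverge from the true outcome, can be sketched in a minimal toy model. This is our own illustration, not the paper's formalism: the `Agent` fields, the `resolve` rule, and all numeric values are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical toy model (not the paper's formalism): two agents contest a
# right-of-way. Each holds a private "priority"; privacy limits how much of
# it is disclosed to the other agent.

@dataclass
class Agent:
    name: str
    priority: int   # private, unobservable ground truth
    disclosed: int  # what the agent actually reveals

def resolve(a: Agent, b: Agent, view) -> str:
    """Pick a winner using whichever priority values `view` exposes."""
    return a.name if view(a) >= view(b) else b.name

ground_truth = lambda ag: ag.priority   # omniscient (objective) view
partial      = lambda ag: ag.disclosed  # privacy-restricted view

a = Agent("A", priority=5, disclosed=1)  # A under-discloses for privacy
b = Agent("B", priority=3, disclosed=3)

objective_winner  = resolve(a, b, ground_truth)  # outcome with full knowledge
subjective_winner = resolve(a, b, partial)       # outcome agents can observe

# The two outcomes diverge because information was withheld: this gap is
# what the paper terms subjective unfairness.
print(objective_winner, subjective_winner)
```

Under full knowledge A wins, but under the privacy-restricted view B wins, so an observer limited to disclosed information perceives the full-knowledge outcome as incorrect.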

Updated: 2021-07-02