Revealed preference in argumentation: Algorithms and applications
International Journal of Approximate Reasoning (IF 3.2), Pub Date: 2021-02-02, DOI: 10.1016/j.ijar.2021.01.004
Nguyen Duy Hung, Van-Nam Huynh

Argumentative agents in AI are inspired by how humans reason by exchanging arguments. Given the same set of arguments possibly attacking one another (Dung's AA framework), such agents are bound to accept the same subset of those arguments (a.k.a. an extension) unless they reason under different argumentation semantics. Humans, however, may not be so predictable, and in this paper we assume that this is because any real agent's reasoning is inevitably influenced by her own preferences over the arguments. Though such preferences are usually unobservable, their effects on the agent's reasoning cannot be washed out. Hence, by reconstructing her reasoning process, we might uncover her hidden preferences, which in turn allow us to predict what else the agent must accept. Concretely, we formalize, and develop algorithms for, problems such as uncovering the hidden argument preference relation of an agent from her expressed opinion, by which we mean a subset of arguments or attacks she accepts from a given AA framework; and uncovering the collective preferences of a group from a dataset of individual opinions. A major challenge we address in this endeavor is dealing with "answer sets" of argument preference relations, which are generally exponential in size or even infinite. We therefore start by developing a compact representation for such answer sets, called preference states. Preference revelation tasks are then structured as derivations of preference states from data, and reasoning prediction tasks are reduced to manipulations of derived preference states without enumerating the underlying (possibly infinite) answer sets. We also apply these results to two non-trivial problems: learning preferences over rules in structured argumentation with priorities, an open problem so far; and analyzing public polls in apparently deeper ways than existing social argumentation frameworks allow.
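To make the setting concrete, here is a minimal Python sketch of Dung's AA framework with a brute-force check for stable extensions. It is illustrative only and is not taken from the paper; the argument names and helper functions are assumptions for the example.

from itertools import combinations

def conflict_free(S, attacks):
    # True if no argument in S attacks another argument in S.
    return not any((a, b) in attacks for a in S for b in S)

def is_stable(S, args, attacks):
    # True if S is conflict-free and attacks every argument outside S.
    return conflict_free(S, attacks) and all(
        any((a, b) in attacks for a in S) for b in args - S
    )

def stable_extensions(args, attacks):
    # Brute-force enumeration: exponential, fine only for toy examples.
    return [set(S)
            for r in range(len(args) + 1)
            for S in combinations(sorted(args), r)
            if is_stable(set(S), args, attacks)]

# Toy framework: a and b attack each other, and b attacks c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
print(stable_extensions(args, attacks))  # two extensions: {'b'} and {'a', 'c'}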
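Building on the sketch above (it reuses conflict_free and is_stable), the following toy enumeration hints at what preference revelation involves, under a simplifying assumption borrowed from preference-based argumentation: an attack (a, b) succeeds unless b is strictly preferred to a, so cancelling an attack encodes a strict preference judgment. This is an illustration of the problem only, not the paper's algorithm; in particular, the brute-force search over 2^|attacks| candidates is exactly the blow-up that the paper's preference states are designed to avoid.

from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def revealed_cancellations(args, attacks, opinion):
    # All sets of cancelled attacks under which `opinion` becomes a stable
    # extension of the reduced framework. Cancelling attack (a, b) encodes
    # the (hypothetical) strict preference judgment "b over a".
    return [set(cancelled)
            for cancelled in powerset(attacks)
            if is_stable(opinion, args, attacks - set(cancelled))]

# Toy framework: a attacks b, b attacks c. Without preferences the unique
# stable extension is {a, c}; if the agent instead reports {a, b}, the only
# consistent cancellation is {("a", "b")}, revealing the preference b over a.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(revealed_cancellations(args, attacks, {"a", "b"}))  # [{('a', 'b')}]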




Updated: 2021-02-09