Splitting Epistemic Logic Programs
arXiv - CS - Logic in Computer Science. Pub Date: 2018-12-20. DOI: arxiv-1812.08763. Pedro Cabalar, Jorge Fandinno and Luis Fariñas del Cerro
Epistemic logic programs constitute an extension of the stable models
semantics to deal with new constructs called subjective literals. Informally
speaking, a subjective literal allows checking whether some regular literal is
true in all stable models or in some stable model. As can be imagined, the
associated semantics has proved to be non-trivial, since the truth of a
subjective literal may interfere with the very set of stable models it is supposed
to query. As a consequence, no clear agreement has been reached, and different
semantic proposals have been made in the literature. Unfortunately, comparison
among these proposals has been limited to a study of their effect on individual
examples, rather than the identification of general properties to be checked. In this
paper, we propose an extension of the well-known splitting property for logic
programs to the epistemic case. To this end, we formally define when an
arbitrary semantics satisfies the epistemic splitting property and examine some
of the consequences that can be derived from it, including its relation to
conformant planning and to epistemic constraints. Interestingly, we prove
(through counterexamples) that most of the existing proposals fail to fulfill
the epistemic splitting property, the exception being the original semantics proposed by
Gelfond in 1991.
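The abstract's informal reading of subjective literals can be illustrated with a minimal sketch. This is not the paper's formal semantics; the program, its stable models, and the operator names `K` and `M` below are illustrative assumptions, showing only the intuition that `K l` queries truth in all stable models and `M l` truth in some stable model:

```python
# Sketch only: stable models are represented as plain sets of atoms,
# and subjective literals are evaluated as quantifiers over that collection.

def K(literal, stable_models):
    """True iff `literal` holds in ALL stable models ("known")."""
    return all(literal in m for m in stable_models)

def M(literal, stable_models):
    """True iff `literal` holds in SOME stable model ("possible")."""
    return any(literal in m for m in stable_models)

# Illustrative program:  a ; b.   (a disjunctive choice between a and b)
# Its stable models are {a} and {b}.
stable_models = [{"a"}, {"b"}]

print(K("a", stable_models))  # False: "a" fails in the stable model {b}
print(M("a", stable_models))  # True:  "a" holds in the stable model {a}
```

The difficulty the abstract alludes to is precisely that, in a real epistemic program, rules may mention `K` and `M` in their bodies, so the collection of stable models being quantified over depends on the truth of the subjective literals themselves; the sketch above sidesteps that circularity by fixing the collection in advance.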
Updated: 2020-05-06