A framework for step-wise explaining how to solve constraint satisfaction problems
Artificial Intelligence (IF 14.4), Pub Date: 2021-06-29, DOI: 10.1016/j.artint.2021.103550
Bart Bogaerts, Emilio Gamba, Tias Guns

We explore the problem of step-wise explaining how to solve constraint satisfaction problems, with a use case on logic grid puzzles. More specifically, we study the problem of explaining the inference steps that one can take during propagation, in a way that is easy to interpret for a person. Thereby, we aim to give the constraint solver explainable agency, which can help in building trust in the solver by being able to understand and even learn from the explanations. The main challenge is that of finding a sequence of simple explanations, where each explanation should aim to be as cognitively easy as possible for a human to verify and understand. This contrasts with the arbitrary combination of facts and constraints that the solver may use when propagating. We propose the use of a cost function to quantify how simple an individual explanation of an inference step is, and identify the explanation-production problem of finding the best sequence of explanations for a CSP. Our approach is agnostic of the underlying constraint propagation mechanisms, and can provide explanations even for inference steps resulting from combinations of constraints. In case multiple constraints are involved, we also develop a mechanism that allows the most difficult steps to be broken up further, giving the user the ability to zoom in on specific parts of the explanation. Our proposed algorithm iteratively constructs the explanation sequence by using an optimistic estimate of the cost function to guide the search for the best explanation at each step. Our experiments on logic grid puzzles show the feasibility of the approach in terms of the quality of the individual explanations and of the resulting explanation sequences.
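To make the greedy construction described above concrete, the snippet below is a minimal, hypothetical Python sketch of the outer loop only: at each step, among the candidate explanations that derive a not-yet-known fact, the one with the lowest cost is selected and appended to the sequence. The `Explanation` record, the toy `cost` function (which weights constraints more heavily than facts), and the `candidates` generator are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List

Fact = str
Constraint = str

@dataclass(frozen=True)
class Explanation:
    used_facts: FrozenSet[Fact]              # previously derived facts used
    used_constraints: FrozenSet[Constraint]  # constraints used
    derived: Fact                            # the newly derived fact

def cost(expl: Explanation) -> int:
    """Toy cost: fewer facts and constraints = cognitively simpler."""
    return len(expl.used_facts) + 5 * len(expl.used_constraints)

def greedy_explanation_sequence(
    initial_facts: FrozenSet[Fact],
    target_facts: FrozenSet[Fact],
    candidates: Callable[[FrozenSet[Fact]], List[Explanation]],
) -> List[Explanation]:
    """Repeatedly pick the cheapest explanation that derives a new fact."""
    known = set(initial_facts)
    sequence: List[Explanation] = []
    while not target_facts <= known:
        options = [e for e in candidates(frozenset(known)) if e.derived not in known]
        if not options:
            raise RuntimeError("no candidate explanation derives a new fact")
        best = min(options, key=cost)  # greedy choice at this step
        sequence.append(best)
        known.add(best.derived)
    return sequence

if __name__ == "__main__":
    # Tiny toy instance: two constraints chaining derivations a -> b -> c.
    def candidates(known: FrozenSet[Fact]) -> List[Explanation]:
        out = []
        if "a" in known:
            out.append(Explanation(frozenset({"a"}), frozenset({"c1"}), "b"))
        if "b" in known:
            out.append(Explanation(frozenset({"b"}), frozenset({"c2"}), "c"))
        return out

    for step in greedy_explanation_sequence(frozenset({"a"}), frozenset({"c"}), candidates):
        print(set(step.used_facts), "+", set(step.used_constraints), "=>", step.derived)
```

In the paper's setting, candidate explanations would come from reasoning over the CSP's constraints themselves rather than from a hand-written generator, and an optimistic cost estimate is used to prune the search; the sketch only illustrates the step-wise, cost-guided selection of explanations.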



Updated: 2021-07-12