Interpretable confidence measures for decision support systems
International Journal of Human-Computer Studies (IF 5.3), Pub Date: 2020-06-09, DOI: 10.1016/j.ijhcs.2020.102493
Jasper van der Waa, Tjeerd Schoonderwoerd, Jurriaan van Diggelen, Mark Neerincx

Decision support systems (DSS) have improved significantly due to recent advances in Artificial Intelligence, but have also become more complex. Current explainable AI (XAI) methods generate explanations of model behaviour to facilitate a user's understanding, thereby fostering trust in the DSS. However, little attention has been paid to developing methods that establish and convey a system's confidence in the advice it provides. This paper presents a framework for Interpretable Confidence Measures (ICMs). We investigate which properties of a confidence measure are desirable and why, and how users interpret an ICM. We evaluate these ideas on several data sets and in user experiments. The presented framework defines four properties: 1) accuracy or soundness, 2) transparency, 3) explainability, and 4) predictability. These properties are realized by a case-based reasoning approach to confidence estimation. Example ICMs are proposed for, and evaluated on, multiple data sets. In addition, ICM was evaluated in two user experiments. The results show that ICM can be as accurate as other confidence measures while behaving more predictably. Moreover, ICM's underlying idea of case-based reasoning enables generating explanations of how the confidence value is computed, and helps users understand the algorithm.
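The abstract does not spell out the ICM algorithm itself, but its case-based reasoning idea can be illustrated with a simple nearest-neighbour sketch: the confidence in a piece of advice is the fraction of similar past cases on which the model's prediction matched the known outcome. The Python sketch below is a minimal illustration under that assumption; icm_confidence, model, and k are hypothetical names for this example, not the paper's API.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def icm_confidence(model, X_train, y_train, x_query, k=10):
    """Case-based confidence sketch (illustrative, not the paper's exact
    measure): confidence in the model's advice for x_query is the share
    of the k most similar past cases on which the model was correct.

    Assumes X_train and y_train are NumPy arrays and `model` exposes a
    scikit-learn-style predict() method.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(np.asarray(x_query).reshape(1, -1))
    neighbors = idx[0]
    # Retrieve the k nearest training cases and check how often the
    # model's prediction agreed with the recorded outcome there.
    preds = model.predict(X_train[neighbors])
    return float(np.mean(preds == y_train[neighbors]))

# Example usage with any fitted classifier (illustrative):
# clf = RandomForestClassifier().fit(X_train, y_train)
# conf = icm_confidence(clf, X_train, y_train, X_test[0])

Because the confidence value is grounded in concrete past cases, such a measure lends itself to explanations of the form "on the 10 most similar past cases, the system's advice was correct 8 times", which matches the transparency and explainability properties the framework names.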




Updated: 2020-06-09