An incremental explanation of inference in Bayesian networks for increasing model trustworthiness and supporting clinical decision making.
Artificial Intelligence in Medicine (IF 6.1), Pub Date: 2020-01-31, DOI: 10.1016/j.artmed.2020.101812
Evangelia Kyrimi, Somayyeh Mossadegh, Nigel Tai, William Marsh

Various AI models are increasingly being considered as part of clinical decision-support tools. However, the trustworthiness of such models is rarely addressed. Clinicians are more likely to use a model if they can understand and trust its predictions. Key to this is whether its underlying reasoning can be explained. A Bayesian network (BN) model has the advantage that it is not a black box and its reasoning can be explained. In this paper, we propose an incremental explanation of inference that can be applied to ‘hybrid’ BNs, i.e. those that contain both discrete and continuous nodes. The key questions that we answer are: (1) which important evidence supports or contradicts the prediction, and (2) through which intermediate variables the information flows. The explanation is illustrated using a real clinical case study. A small evaluation study is also conducted.
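
To make the two questions concrete, the following is a minimal, self-contained sketch in plain Python. It is not the authors' incremental explanation method: it uses a toy all-discrete network (the paper targets hybrid BNs with continuous nodes as well), invented variable names (S: severe injury, C: coagulopathy, T: transfusion need, E1/E2: observed findings) and illustrative probabilities. It approximates question (1) by checking whether removing an evidence item lowers or raises the target posterior, and question (2) by checking which intermediate variables' posteriors shift once the evidence is entered.

from itertools import product

# Toy binary BN: S -> C -> T, with observed findings E1 (child of C) and E2 (child of S).
# All probabilities below are made up for illustration only.
p_S = {1: 0.3, 0: 0.7}                                        # P(S): severe injury
p_C_given_S = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.1, 0: 0.9}}      # P(C | S): coagulopathy
p_T_given_C = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.05, 0: 0.95}}    # P(T | C): transfusion need
p_E1_given_C = {1: {1: 0.85, 0: 0.15}, 0: {1: 0.2, 0: 0.8}}   # P(E1 | C): abnormal lab result
p_E2_given_S = {1: {1: 0.7, 0: 0.3}, 0: {1: 0.25, 0: 0.75}}   # P(E2 | S): tachycardia

def joint(s, c, t, e1, e2):
    """Joint probability of one complete assignment, factored along the network."""
    return (p_S[s] * p_C_given_S[s][c] * p_T_given_C[c][t]
            * p_E1_given_C[c][e1] * p_E2_given_S[s][e2])

def posterior(query_var, evidence):
    """P(query_var = 1 | evidence) by brute-force enumeration over all assignments."""
    num = den = 0.0
    for s, c, t, e1, e2 in product([0, 1], repeat=5):
        world = {'S': s, 'C': c, 'T': t, 'E1': e1, 'E2': e2}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(s, c, t, e1, e2)
        den += p
        if world[query_var] == 1:
            num += p
    return num / den

evidence = {'E1': 1, 'E2': 1}                   # the observed findings
full = posterior('T', evidence)
print(f"P(T=1 | all evidence) = {full:.3f}")

# Question (1): does each evidence item support or contradict the prediction?
# Proxy: drop the item and see whether the target posterior falls (support) or rises (conflict).
for item in evidence:
    rest = {k: v for k, v in evidence.items() if k != item}
    reduced = posterior('T', rest)
    role = "supports" if full > reduced else "contradicts"
    print(f"{item}: without it P(T=1) = {reduced:.3f} -> {role} the prediction")

# Question (2): through which intermediate variables does the information flow?
# Proxy: intermediates whose own posterior shifts noticeably once the evidence is entered.
for inter in ('C', 'S'):
    prior = posterior(inter, {})
    post = posterior(inter, evidence)
    print(f"{inter}: prior {prior:.3f} -> posterior {post:.3f}")

The sketch only shows how the two questions can be phrased as posterior comparisons; the paper's incremental construction of the explanation and its handling of continuous nodes are beyond this toy, and brute-force enumeration would not scale beyond a handful of discrete variables.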



