Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond
Information Fusion (IF 14.7) Pub Date: 2021-07-31, DOI: 10.1016/j.inffus.2021.07.016
Guang Yang 1,2,3, Qinghao Ye 4,5, Jun Xia 6

Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning that aims to unbox how AI systems' black-box decisions are made. This field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been made, and this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability of these black-box models. XAI is becoming increasingly crucial for deep-learning-powered applications, especially in medical and healthcare studies, even though such deep neural networks can deliver an arresting dividend in performance. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons why AI tools are rarely implemented and integrated successfully into routine clinical practice. In this study, we first surveyed the current progress of XAI, with a particular focus on its advances in healthcare applications. We then introduced our XAI solutions, which leverage multi-modal and multi-centre data fusion, and subsequently validated them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrated the efficacy of the proposed XAI solutions, from which we envisage successful application to a broader range of clinical questions.
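To make the notion of "unboxing" concrete, the sketch below shows a gradient-based saliency map, one of the most common XAI techniques for inspecting a black-box image classifier: the gradient of the predicted class score with respect to the input pixels highlights the regions that drive the prediction. This is an illustrative example of the kind of explanation discussed above, not the authors' pipeline; the ResNet-18 model and the random input tensor are placeholders.

```python
# A minimal sketch of gradient-based saliency, assuming a PyTorch image
# classifier. The untrained ResNet-18 and random input stand in for a
# trained medical-imaging model and a real scan.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder for any trained classifier
model.eval()

# Dummy input "scan"; gradients with respect to it are what we visualise.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate the top-class score to the input pixels; the gradient
# magnitude indicates which pixels most influenced the prediction.
logits[0, top_class].backward()
saliency = image.grad.abs().max(dim=1)[0]  # (1, 224, 224) pixel-importance map
print(saliency.shape)
```

In practice the saliency map is overlaid on the original image so a clinician can check whether the model attended to anatomically plausible regions; more elaborate variants (e.g. Grad-CAM or SHAP-style attributions) follow the same pattern of attributing a prediction back to its inputs.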




Updated: 2021-08-04