Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis
Frontiers in Computational Neuroscience (IF 3.2), Pub Date: 2020-02-07, DOI: 10.3389/fncom.2020.00006
Parth Natekar, Avinash Kori, Ganapathy Krishnamurthi

The accurate automatic segmentation of gliomas and their intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D Deep Neural Networks (DNNs) have been developed to segment brain tumors and to classify different categories of tumors from different MRI modalities. However, these networks are often black-box models and do not provide any evidence of how they perform this task. Increasing the transparency and interpretability of such deep learning techniques is necessary for their complete integration into medical practice. In this paper, we explore various techniques to explain the functional organization of brain tumor segmentation models and to extract visualizations of internal concepts to understand how these networks achieve highly accurate tumor segmentations. We use the BraTS 2018 dataset to train three different networks with standard architectures and outline similarities and differences in how these networks segment brain tumors. We show that brain tumor segmentation networks learn certain human-understandable disentangled concepts at the filter level. We also show that they take a top-down or hierarchical approach to localizing the different parts of the tumor. We then extract visualizations of some internal feature maps and also provide a measure of uncertainty with regard to the outputs of the models to give additional qualitative evidence about the predictions of these networks. We believe that the emergence of such human-understandable organization and concepts might aid in the acceptance and integration of such methods in medical diagnosis.
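Two of the analyses the abstract mentions, extracting visualizations of internal feature maps and attaching an uncertainty measure to the model outputs, can be illustrated with a short sketch. The following is a minimal PyTorch example, assuming a generic segmentation model; the forward-hook probe and the use of Monte Carlo dropout with predictive entropy as the uncertainty measure are illustrative assumptions for this sketch, not necessarily the paper's exact implementation.

import torch
import torch.nn as nn

def extract_feature_maps(model, layer, x):
    """Capture the activations of one intermediate layer via a forward hook."""
    captured = {}

    def hook(module, inputs, output):
        captured["maps"] = output.detach()

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()
    return captured["maps"]  # shape (N, C, H, W): one spatial map per filter

def mc_dropout_uncertainty(model, x, n_samples=20):
    """Per-voxel predictive uncertainty via Monte Carlo dropout:
    keep dropout layers active at test time and average the softmax
    outputs over several stochastic forward passes."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()  # re-enable dropout only, keep batch norm etc. in eval mode
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)
    # Predictive entropy as a simple per-voxel uncertainty measure
    entropy = -(mean * torch.log(mean + 1e-8)).sum(dim=1)
    return mean, entropy

Averaging the softmax outputs over stochastic passes yields a mean segmentation map, while the per-voxel entropy highlights the regions (typically tumor boundaries) where the model is least certain, which is the kind of qualitative evidence about the predictions that the abstract describes.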

Updated: 2020-02-07