CEFEs: A CNN Explainable Framework for ECG Signals
Artificial Intelligence in Medicine ( IF 7.5 ) Pub Date : 2021-03-26 , DOI: 10.1016/j.artmed.2021.102059
Barbara Mukami Maweu, Sagnik Dakshit, Rittika Shamsuddin, Balakrishnan Prabhakaran

In the healthcare domain, trust, confidence, and functional understanding are critical for decision support systems; this poses challenges for the prevalent use of black-box deep learning (DL) models. With recent advances in deep learning methods for classification tasks, deep learning is increasingly used in healthcare decision support systems, for example in the detection and classification of abnormal electrocardiogram (ECG) signals. Domain experts seek to understand the functional mechanism of black-box models, with an emphasis on how these models arrive at a specific classification of patient medical data. In this paper, we focus on ECG data as the healthcare signal to be analyzed. Since ECG is one-dimensional time-series data, we target 1D Convolutional Neural Networks (1D-CNNs) as the candidate DL model. The majority of existing interpretation and explanation research has addressed 2D-CNN models in non-medical domains, leaving a gap in the explanation of CNN models applied to medical time-series data. Hence, we propose a modular framework, the CNN Explanations Framework for ECG Signals (CEFEs), for interpretable explanations. Each module of CEFEs provides users with a functional understanding of the underlying CNN model in terms of descriptive data statistics, feature visualization, feature detection, and feature mapping. The modules evaluate a model's capacity while inherently accounting for the correlation between learned features and raw signals, which translates to the correlation between the model's capacity to classify and its learned features. Explainable frameworks such as CEFEs can be evaluated in different ways: training one deep learning architecture on different volumes of the same dataset, training different architectures on the same dataset, or a combination of different CNN architectures and datasets. In this paper, we evaluate CEFEs extensively by training the same CNN architecture on different volumes of data.
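To make the 1D-CNN setting concrete, the sketch below shows what a single convolutional layer computes over an ECG-like signal: a sliding dot product with a learned filter, producing a feature map that peaks where the filter's shape occurs in the waveform. The signal, filter values, and sizes are hypothetical illustrations, not the architecture used in the paper.

```python
import numpy as np

# Toy single-lead "ECG" signal (hypothetical; real beats would come from a dataset).
np.random.seed(0)
t = np.linspace(0, 1, 200)
ecg = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(200)

# A hypothetical learned 1D filter (e.g. a spike detector).
kernel = np.array([-1.0, 0.0, 2.0, 0.0, -1.0])

# A CNN's "convolution" is cross-correlation: slide the filter over the
# signal. np.convolve flips its second argument, so we pre-flip the kernel.
feature_map = np.convolve(ecg, kernel[::-1], mode="valid")

# ReLU nonlinearity: keep only positive filter responses.
activation = np.maximum(feature_map, 0)

print(ecg.shape, feature_map.shape)  # (200,) (196,)
```

Stacking such layers (with pooling between them) is what lets a 1D-CNN build up morphology-level ECG features, and it is these intermediate feature maps that explanation frameworks like CEFEs visualize and relate back to the input.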
CEFEs' interpretations, in terms of quantifiable metrics and feature visualizations, explain the quality of a deep learning model where traditional performance metrics (such as precision, recall, and accuracy) do not suffice.
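One simple way to quantify the link between a learned feature and the raw signal, as the abstract describes, is a correlation coefficient between an input-aligned feature map and the signal itself. The sketch below illustrates this idea with a hypothetical smoothing filter; it is not the specific feature-mapping metric defined in the paper.

```python
import numpy as np

# Raw signal and an input-aligned feature map from a hypothetical filter.
signal = np.sin(np.linspace(0, 4 * np.pi, 256))
kernel = np.ones(5) / 5.0                        # illustrative smoothing filter
feat = np.convolve(signal, kernel, mode="same")  # "same" keeps alignment

# Pearson correlation between the raw signal and the feature map:
# a high value indicates the feature tracks the signal's structure.
r = np.corrcoef(signal, feat)[0, 1]
print(f"feature/signal correlation: {r:.3f}")
```

A metric like this complements accuracy-style scores: two models with equal accuracy can differ sharply in how strongly their learned features correlate with clinically meaningful parts of the waveform.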




Updated: 2021-04-05