ECG Heartbeat Classification Using Multimodal Fusion
arXiv - CS - Neural and Evolutionary Computing | Pub Date: 2021-07-21 | arXiv:2107.09869
Zeeshan Ahmad, Anika Tabassum, Ling Guan, Naimul Khan

Electrocardiogram (ECG) is an authoritative source to diagnose and counter critical cardiovascular syndromes such as arrhythmia and myocardial infarction (MI). Current machine learning techniques either depend on manually extracted features or on large and complex deep learning networks that merely use the 1D ECG signal directly. Since intelligent multimodal fusion can perform at the state-of-the-art level with an efficient deep network, in this paper we propose two computationally efficient multimodal fusion frameworks for ECG heartbeat classification, called Multimodal Image Fusion (MIF) and Multimodal Feature Fusion (MFF). At the input of these frameworks, we convert the raw ECG data into three different images using the Gramian Angular Field (GAF), Recurrence Plot (RP) and Markov Transition Field (MTF). In MIF, we first perform image fusion by combining the three imaging modalities into a single image modality, which serves as input to a Convolutional Neural Network (CNN). In MFF, we extract features from the penultimate layer of the CNNs and fuse them to obtain the unique and interdependent information needed for better classifier performance. These fused features are finally used to train a Support Vector Machine (SVM) classifier for ECG heartbeat classification. We demonstrate the superiority of the proposed fusion models through experiments on PhysioNet's MIT-BIH dataset for five distinct arrhythmia conditions, consistent with the AAMI EC57 protocols, and on the PTB diagnostic dataset for myocardial infarction (MI) classification. We achieve classification accuracies of 99.7% and 99.2% on arrhythmia and MI classification, respectively.
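To make the image-encoding step concrete, below is a minimal sketch (not the authors' code) of the three time-series-to-image transforms named in the abstract and of a simple channel-stack fusion used here as a stand-in for the MIF input. The helper names (gaf, recurrence_plot, mtf), the threshold and bin-count values, the 187-sample beat length, and the channel-stacking choice are all illustrative assumptions; the paper's actual preprocessing and fusion may differ.

```python
"""Sketch: encode a 1-D ECG beat as GAF, RP and MTF images, then stack them."""
import numpy as np


def gaf(x):
    """Gramian Angular (Summation) Field of a 1-D series."""
    # Rescale to [-1, 1] so arccos is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1, 1))
    return np.cos(phi[:, None] + phi[None, :])


def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot: 1 where |x_i - x_j| < eps (eps is illustrative)."""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(float)


def mtf(x, n_bins=8):
    """Markov Transition Field using quantile bins (n_bins is illustrative)."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)  # bin index of every sample, in 0..n_bins-1
    # First-order Markov transition matrix between bins.
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):
        W[a, b] += 1
    W /= W.sum(axis=1, keepdims=True) + 1e-12
    # MTF_ij = probability of transitioning from bin(x_i) to bin(x_j).
    return W[q[:, None], q[None, :]]


# Synthetic "beat" of 187 samples (a beat length used in common
# segmented versions of MIT-BIH; assumed here for illustration).
beat = np.sin(np.linspace(0, 4 * np.pi, 187)) + 0.05 * np.random.randn(187)

# Stack the three encodings as channels -> (187, 187, 3) image a CNN can
# consume; the paper's MIF step may combine the modalities differently.
fused = np.stack([gaf(beat), recurrence_plot(beat), mtf(beat)], axis=-1)
print(fused.shape)  # (187, 187, 3)
```

Under these assumptions, the MFF variant would instead feed each single-channel image to its own CNN branch and fuse the penultimate-layer features before the SVM, rather than fusing at the pixel level as above.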

Updated: 2021-07-22