Deep Attention-based Representation Learning for Heart Sound Classification
arXiv - CS - Sound. Pub Date: 2021-01-13, DOI: arxiv-2101.04979
Zhao Ren, Kun Qian, Fengquan Dong, Zhenyu Dai, Yoshiharu Yamamoto, Björn W. Schuller

Cardiovascular diseases are the leading cause of death and severely threaten human health in daily life. On the one hand, demand for monitoring the heart status of subjects suffering from chronic cardiovascular diseases has increased dramatically in both clinical practice and smart home applications. On the other hand, the number of experienced physicians who can perform efficient auscultation remains insufficient. Automatic heart sound classification leveraging the power of advanced signal processing and machine learning technologies has shown encouraging results. Nevertheless, hand-crafted features are expensive and time-consuming to design. To this end, we propose a novel deep representation learning method with an attention mechanism for heart sound classification. In this paradigm, high-level representations are learnt automatically from the recorded heart sound data. In particular, a global attention pooling layer improves the performance of the learnt representations by estimating the contribution of each unit in the feature maps. The Heart Sounds Shenzhen (HSS) corpus (170 subjects involved) is used to validate the proposed method. Experimental results show that our approach achieves an unweighted average recall of 51.2% for classifying three categories of heart sounds, i.e., normal, mild, and moderate/severe, annotated by cardiologists with the help of echocardiography.
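The global attention pooling mentioned in the abstract can be sketched roughly as below. This is a minimal PyTorch sketch for illustration only: the 1x1-convolution parameterisation, the softmax normalisation, the tensor shapes, and the class count are assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a global attention pooling layer (PyTorch).
# The layer weights each unit of the feature maps by a learned attention
# score and sums the weighted evidence into a clip-level prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionPooling(nn.Module):
    """Pools a (batch, channels, time, freq) feature map into class logits
    by estimating the contribution of each time-frequency unit."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # One 1x1 conv produces per-unit class evidence,
        # the other produces per-unit attention logits.
        self.evidence = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.attention = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feature_maps: torch.Tensor) -> torch.Tensor:
        e = self.evidence(feature_maps)    # (B, num_classes, T, F)
        a = self.attention(feature_maps)   # (B, num_classes, T, F)
        b, c = a.shape[:2]
        # Normalise the attention logits over all time-frequency units so
        # the weights express each unit's relative contribution.
        a = F.softmax(a.reshape(b, c, -1), dim=-1)
        e = e.reshape(b, c, -1)
        return (a * e).sum(dim=-1)         # (B, num_classes)

# Example: 64-channel feature maps from a CNN front end, three heart-sound classes.
pooling = GlobalAttentionPooling(in_channels=64, num_classes=3)
logits = pooling(torch.randn(8, 64, 31, 16))
print(logits.shape)  # torch.Size([8, 3])
```

For reference, the reported unweighted average recall (UAR) is the macro-averaged recall: the mean of the per-class recalls over the three categories, computed independently of class sizes.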

Updated: 2021-01-14