Self-Supervised Multi-Modal Hybrid Fusion Network for Brain Tumor Segmentation
IEEE Journal of Biomedical and Health Informatics (IF 6.7). Pub Date: 2021-09-03. DOI: 10.1109/jbhi.2021.3109301
Feiyi Fang, Yazhou Yao, Tao Zhou, Guosen Xie, Jianfeng Lu

Accurate segmentation of brain tumors in medical images is necessary for diagnosing, monitoring, and treating the disease. In recent years, with the growing availability of multi-sequence magnetic resonance imaging (MRI), multi-modal MRI has played an increasingly important role in the early diagnosis of brain tumors by providing complementary information about a given lesion. Different MRI modalities vary significantly in contextual content as well as in coarse and fine detail. Because manual identification of brain tumors is very complicated, it usually requires lengthy consultation among multiple experts. Automatic segmentation of brain tumors from MRI images can thus greatly reduce the workload of doctors and buy more time for treating patients. In this paper, we propose a multi-modal brain tumor segmentation framework that adopts hybrid fusion of modality-specific features together with a self-supervised learning strategy. The algorithm is based on a fully convolutional neural network. First, we propose a multi-input architecture that learns independent features from multi-modal data and can be adapted to different numbers of multi-modal inputs. Compared with single-modal multi-channel networks, our model provides a better feature extractor for segmentation tasks, learning cross-modal information from multi-modal data. Second, we propose a new feature fusion scheme, named hybrid attentional fusion, which enables the network to learn a hybrid representation of multiple features and capture the correlation between them through an attention mechanism. Unlike popular methods such as feature-map concatenation, this scheme focuses on the complementarity of multi-modal data, which significantly improves segmentation results in specific regions. Third, we propose a self-supervised learning strategy for brain tumor segmentation tasks.
Our experimental results demonstrate the effectiveness of the proposed model against other state-of-the-art multi-modal medical segmentation methods.
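To make the fusion idea concrete, the following is a minimal NumPy sketch of attention-weighted fusion of modality-specific feature maps, contrasted with plain concatenation. It is an illustration only, not the paper's architecture: the encoders, the attention scoring (here a simple global-average-pooling stand-in for learned attention weights), and the four-modality setup (e.g. T1, T1c, T2, FLAIR) are all assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentional_fusion(features):
    """Fuse a list of (C, H, W) modality-specific feature maps into one (C, H, W) map.

    Each channel is a convex combination of the modalities, with weights
    derived from per-modality global statistics -- a toy stand-in for the
    learned attention mechanism described in the abstract.
    """
    stack = np.stack(features)               # (M, C, H, W)
    scores = stack.mean(axis=(2, 3))         # (M, C): global average pooling
    weights = softmax(scores, axis=0)        # attention over modalities, per channel
    fused = (stack * weights[:, :, None, None]).sum(axis=0)
    return fused

rng = np.random.default_rng(0)
# Hypothetical encoder outputs for 4 MRI modalities, 8 channels, 16x16 maps.
feats = [rng.standard_normal((8, 16, 16)) for _ in range(4)]

fused = attentional_fusion(feats)            # (8, 16, 16): same size as one input
concat = np.concatenate(feats, axis=0)       # (32, 16, 16): naive channel concatenation
print(fused.shape, concat.shape)
```

Unlike concatenation, which leaves it to later layers to discover cross-modal relationships among 4x the channels, the attention weights explicitly rank how much each modality contributes per channel, which is the complementarity the abstract emphasizes.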
