Multi-modal neuroimaging feature fusion for diagnosis of Alzheimer's disease.
Journal of Neuroscience Methods (IF 3), Pub Date: 2020-05-22, DOI: 10.1016/j.jneumeth.2020.108795
Tao Zhang, Mingyang Shi

Background

Compared with single-modal neuroimage classification of AD, multi-modal classification can achieve better performance by fusing complementary information. Exploring the synergy among multi-modal neuroimages contributes to identifying the pathological process of neurological disorders. However, effectively exploiting multi-modal information remains problematic due to the lack of an effective fusion method.

New method

In this paper, we propose a deep multi-modal fusion network based on the attention mechanism, which can selectively extract features from the MRI and PET branches while suppressing irrelevant information. In the attention model, the fusion ratio of each modality is assigned automatically according to the importance of its data. A hierarchical fusion strategy is adopted to ensure the effectiveness of the multi-modal fusion.
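The abstract does not give implementation details, but the idea of assigning each modality a fusion ratio according to its importance can be sketched as follows. This is a minimal, hypothetical illustration: the gate vectors, function names, and the dot-product scoring are illustrative assumptions, not the authors' architecture.

```python
import math

def softmax(scores):
    """Normalize importance scores into fusion ratios that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(mri_feat, pet_feat, gate_mri, gate_pet):
    """Fuse MRI and PET feature vectors with attention-style weights.

    A scalar importance score is computed per modality (here a simple
    dot product with a hypothetical learned gate vector); softmax turns
    the scores into fusion ratios; the fused feature is the
    ratio-weighted sum of the two modality features.
    """
    s_mri = sum(f * g for f, g in zip(mri_feat, gate_mri))
    s_pet = sum(f * g for f, g in zip(pet_feat, gate_pet))
    w_mri, w_pet = softmax([s_mri, s_pet])
    fused = [w_mri * a + w_pet * b for a, b in zip(mri_feat, pet_feat)]
    return fused, (w_mri, w_pet)
```

In a hierarchical scheme, a fusion step like this could be applied at several depths of the two branches, so that both low-level and high-level features are combined.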

Results

Evaluated on the ADNI dataset, the model outperforms state-of-the-art methods in our experiments. In particular, the final classification accuracies for the NC/AD, SMCI/PMCI, and four-class tasks are 95.21%, 89.79%, and 86.15%, respectively.

Comparison with existing methods

Unlike early fusion and late fusion, the hierarchical fusion method contributes to learning the synergy among the multi-modal data. Compared with other prominent algorithms, the attention model enables our network to focus on regions of interest and to fuse the multi-modal data effectively.

Conclusion

Benefiting from the hierarchical structure with the attention model, the proposed network is capable of exploiting both low-level and high-level features extracted from the multi-modal data, improving the accuracy of AD diagnosis. The results demonstrate its promising performance.
