FusionM4Net: A multi-stage multi-modal learning algorithm for multi-label skin lesion classification
Medical Image Analysis (IF 10.9), Pub Date: 2021-11-22, DOI: 10.1016/j.media.2021.102307
Peng Tang, Xintong Yan, Yang Nan, Shao Xiang, Sebastian Krammer, Tobias Lasser

Skin disease is one of the most common diseases in the world. Deep learning-based methods have achieved excellent skin lesion recognition performance, but most of them rely on dermoscopy images alone. Recent works that use multi-modality data (patient meta-data, clinical images, and dermoscopy images) adopt a one-stage fusion approach and optimize information fusion only at the feature level. These methods do not fuse information at the decision level and thus cannot fully exploit the data of all modalities. This work proposes a novel two-stage multi-modal learning algorithm (FusionM4Net) for multi-label skin disease classification. At the first stage, we construct a FusionNet, which exploits and integrates the representations of clinical and dermoscopy images at the feature level, and then uses Fusion Scheme 1 to fuse information at the decision level. At the second stage, to further incorporate the patient's meta-data, we propose Fusion Scheme 2, which combines the multi-label predictions from the first stage with the patient's meta-data to train an SVM cluster. The final diagnosis is formed by fusing the predictions from the first and second stages. Our algorithm was evaluated on the seven-point checklist dataset, a well-established multi-modality multi-label skin disease dataset. Without using the patient's meta-data, the first stage of FusionM4Net (FusionM4Net-FS) achieved an average accuracy of 75.7% on the multi-classification tasks and 74.9% on the diagnostic task, outperforming other state-of-the-art methods. By further fusing the patient's meta-data at the second stage (FusionM4Net-SS), the full FusionM4Net boosts the average accuracy to 77.0% and the diagnostic accuracy to 78.5%, indicating robust and excellent classification performance on this label-imbalanced dataset. The corresponding code is available at: https://github.com/pixixiaonaogou/MLSDR.
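As a rough illustration of the two-stage pipeline described in the abstract, the sketch below assumes PyTorch and scikit-learn. It is not the authors' implementation (see the linked repository for that): the encoder and head sizes, the simple averaging used for decision-level fusion, and the single stand-in SVM are assumptions made only for illustration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC


class FusionNetSketch(nn.Module):
    """Stage 1 sketch: fuse clinical and dermoscopy image features at the
    feature level, then average per-branch predictions at the decision level
    (a simplified stand-in for the paper's Fusion Scheme 1)."""

    def __init__(self, feat_dim=256, num_classes=5):
        super().__init__()
        # Placeholder encoders; the paper uses pretrained CNN backbones.
        self.clinical_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.dermoscopy_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.clinical_head = nn.Linear(feat_dim, num_classes)
        self.dermoscopy_head = nn.Linear(feat_dim, num_classes)
        self.fused_head = nn.Linear(2 * feat_dim, num_classes)  # feature-level fusion

    def forward(self, clinical_img, dermoscopy_img):
        f_c = self.clinical_encoder(clinical_img)
        f_d = self.dermoscopy_encoder(dermoscopy_img)
        logits = [
            self.clinical_head(f_c),
            self.dermoscopy_head(f_d),
            self.fused_head(torch.cat([f_c, f_d], dim=1)),
        ]
        # Decision-level fusion: average the softmax outputs of the three heads.
        return torch.stack([l.softmax(dim=1) for l in logits]).mean(dim=0)


def train_stage2(stage1_probs, meta_data, labels):
    """Stage 2 sketch (Fusion Scheme 2): concatenate stage-1 predicted
    probabilities with encoded patient meta-data and fit an SVM. The paper
    trains an SVM cluster (one SVM per label); a single classifier stands in
    here for brevity."""
    x = np.concatenate([stage1_probs, meta_data], axis=1)
    return SVC(probability=True).fit(x, labels)


def final_diagnosis(stage1_probs, stage2_probs):
    """Fuse the predictions of both stages to form the final diagnosis."""
    return (stage1_probs + stage2_probs) / 2.0
```

In this sketch, stage1_probs would be obtained by running FusionNetSketch over the images, meta_data would be a numeric encoding of the patient's meta-data, and the final prediction averages the outputs of the two stages; the specific fusion rules here are simplifications of the paper's fusion schemes.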




Updated: 2021-12-01