Multi-Label Classification of Multi-Modality Skin Lesion via Hyper-Connected Convolutional Neural Network
Pattern Recognition (IF 8) | Pub Date: 2020-11-01 | DOI: 10.1016/j.patcog.2020.107502
Lei Bi , David Dagan Feng , Michael Fulham , Jinman Kim

Abstract

Objective: Clinical and dermoscopy images (multi-modality image pairs) are routinely used sequentially in the assessment of skin lesions. Clinical images characterize a lesion's geometry and color; dermoscopy depicts vascularity, dots and globules from the sub-surface of the lesion. Together, these modalities provide the labels that characterize a skin lesion. Recently, convolutional neural networks (CNNs), owing to their ability to learn low-level features and high-level semantic information in an end-to-end architecture, have become the state-of-the-art in skin lesion classification. Most CNN methods, however, rely on dermoscopy alone. In the few published methods that support multiple modalities, clinical and dermoscopy image features are extracted separately and integrated via 'late fusion'. These late-fusion methods ignore the complementary image features available between the paired images in the early stages of the CNN architecture.

Methods: We propose a hyper-connected CNN (HcCNN) to classify skin lesions. Compared to existing multi-modality CNNs, our HcCNN has an additional hyper-branch that integrates intermediary image features in a hierarchical manner. The hyper-branch enables the network to learn more complex combinations of the paired images at all stages of the network, early and late. We also coupled the HcCNN with a multi-scale attention block (MsA) to prioritize semantically important subtle regions in the two modalities across various image scales.

Results: Our HcCNN achieved an average accuracy of 74.9% for multi-label classification on the 7-point Checklist dataset, a well-benchmarked public dataset.

Conclusions: Our method is more accurate than the state-of-the-art methods and, in particular, achieved the best and most consistent results on datasets with imbalanced label distributions.
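The core idea of the hyper-branch, as described in the abstract, can be sketched schematically: two modality branches (clinical and dermoscopy) process their inputs in parallel, while a third branch fuses their intermediate features at every stage rather than only at the end. The sketch below is not the authors' implementation; it uses random untrained linear stages in place of real convolutional blocks, and the fusion rule (element-wise addition) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage(x, out_ch):
    """Stand-in for one CNN stage: a random linear map followed by ReLU.
    Illustrative only -- a real HcCNN stage would be a trained conv block."""
    w = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return np.maximum(x @ w, 0.0)

def hyper_connected_forward(clinical, dermoscopy, widths=(16, 32, 64)):
    """Two modality branches plus a hyper-branch that integrates the
    intermediate features of both branches hierarchically, stage by stage,
    instead of fusing only the final ('late') features."""
    c, d, h = clinical, dermoscopy, None
    for out_ch in widths:
        c = stage(c, out_ch)                 # clinical branch
        d = stage(d, out_ch)                 # dermoscopy branch
        fused = c + d                        # paired features combined early
        # hyper-branch: carry forward previous fused state and add new fusion
        h = fused if h is None else stage(h, out_ch) + fused
    return h.mean(axis=0)                    # pooled multi-modal embedding

# Toy paired inputs: 8 spatial positions, 12 features per modality.
emb = hyper_connected_forward(rng.standard_normal((8, 12)),
                              rng.standard_normal((8, 12)))
```

A late-fusion baseline would instead run the two branches to completion independently and concatenate only their final outputs; the hyper-branch differs in that every intermediate stage contributes to the fused representation.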
