Ms RED: A novel multi-scale residual encoding and decoding network for skin lesion segmentation
Medical Image Analysis (IF 10.9), Pub Date: 2021-11-03, DOI: 10.1016/j.media.2021.102293
Duwei Dai, Caixia Dong, Songhua Xu, Qingsen Yan, Zongfang Li, Chunyan Zhang, Nana Luo

Computer-Aided Diagnosis (CAD) for dermatological diseases offers one of the most notable showcases where deep learning technologies display their impressive performance in matching and surpassing human experts. In such a CAD process, a critical step is segmenting skin lesions from dermoscopic images. Despite the remarkable successes attained by recent deep learning efforts, much improvement is still needed for challenging cases, e.g., lesions that are irregularly shaped, have low contrast, or possess blurry boundaries. To address these inadequacies, this study proposes a novel Multi-scale Residual Encoding and Decoding network (Ms RED) for skin lesion segmentation, which can accurately, reliably, and efficiently segment a variety of lesions. Specifically, a multi-scale residual encoding fusion module (MsR-EFM) is employed in the encoder, and a multi-scale residual decoding fusion module (MsR-DFM) is applied in the decoder to fuse multi-scale features adaptively. In addition, to enhance the representation learning capability of the proposed pipeline, we introduce a novel multi-resolution, multi-channel feature fusion module (M2F2), which replaces the conventional convolutional layers in the encoder and decoder networks. Furthermore, we introduce a novel pooling module (Soft-pool) to medical image segmentation for the first time, which retains more useful information during down-sampling and yields better segmentation performance. To validate the effectiveness and advantages of the proposed network, we compare it with several state-of-the-art methods on ISIC 2016, 2017, 2018, and PH2. Experimental results consistently demonstrate that the proposed Ms RED attains significantly superior segmentation performance across five widely used evaluation criteria. Last but not least, the new model uses far fewer parameters than its peer approaches, which greatly reduces the number of labeled samples required for model training and, in turn, yields a substantially faster-converging training process. The source code is available at https://github.com/duweidai/Ms-RED.
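The abstract does not spell out the module internals, but the two general ideas it names, softmax-weighted pooling (Soft-pool) and multi-scale residual feature fusion, can be illustrated with a minimal PyTorch sketch. The class names, dilation rates, and tensor sizes below are illustrative assumptions, not the authors' implementation; refer to the linked repository for the actual Ms RED code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftPool2d(nn.Module):
    """Soft pooling: activations in each window are averaged with softmax
    weights, so strong responses dominate but weaker ones still contribute
    (unlike max pooling, which discards them outright)."""

    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride

    def forward(self, x):
        w = torch.exp(x)  # soft weights per activation
        num = F.avg_pool2d(w * x, self.kernel_size, self.stride)
        den = F.avg_pool2d(w, self.kernel_size, self.stride)
        return num / (den + 1e-8)  # softmax-weighted average per window


class MultiScaleFusionBlock(nn.Module):
    """Illustrative multi-scale residual block (an assumption, not the paper's
    MsR-EFM/MsR-DFM): parallel dilated convolutions with different receptive
    fields are concatenated, projected back, and added to a shortcut."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.project = nn.Conv2d(out_ch * len(dilations), out_ch, 1)
        self.shortcut = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        fused = self.project(torch.cat([b(x) for b in self.branches], dim=1))
        return F.relu(fused + self.shortcut(x))


if __name__ == "__main__":
    x = torch.randn(1, 3, 192, 256)            # toy dermoscopic image tensor
    feats = MultiScaleFusionBlock(3, 32)(x)    # multi-scale features, same spatial size
    pooled = SoftPool2d()(feats)               # down-sampled to 96 x 128
    print(feats.shape, pooled.shape)
```

Compared with max pooling, the softmax weighting keeps every activation in the window contributing to the output (and to the gradient), which is consistent with the abstract's claim that Soft-pool retains more useful information during down-sampling.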



Updated: 2021-11-17