A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound
Computers in Biology and Medicine (IF 7.0), Pub Date: 2020-10-08, DOI: 10.1016/j.compbiomed.2020.104036
Wilfrido Gómez-Flores, Wagner Coelho de Albuquerque Pereira
The automatic segmentation of breast tumors in ultrasound (BUS) has recently been addressed using convolutional neural networks (CNN). These CNN-based approaches generally modify a previously proposed CNN architecture or design a new architecture from CNN ensembles. Although these methods have reported satisfactory results, the trained CNN architectures are often unavailable for reproducibility purposes. Moreover, these methods commonly learn from small BUS datasets with particular properties, which limits generalization to new cases. This paper evaluates four public CNN-based semantic segmentation models developed by the computer vision community: (1) the Fully Convolutional Network (FCN) with the AlexNet network, (2) the U-Net network, (3) SegNet using the VGG16 and VGG19 networks, and (4) DeepLabV3+ using the ResNet18, ResNet50, MobileNet-V2, and Xception networks. Using transfer learning, these CNNs are fine-tuned to segment BUS images into normal and tumoral pixels. The goal is to select a potential CNN-based segmentation model for further use in computer-aided diagnosis (CAD) systems. The main significance of this study is the comparison of eight well-established CNN architectures on a more extensive BUS dataset than those used by approaches currently reported in the literature. More than 3000 BUS images acquired from seven ultrasound machine models are used for training and validation. The F1-score (F1s) and the Intersection over Union (IoU) quantify the segmentation performance. The segmentation models based on SegNet and DeepLabV3+ obtain the best results, with F1s>0.90 and IoU>0.81. For U-Net, the segmentation performance is F1s=0.89 and IoU=0.80, whereas FCN-AlexNet attains the lowest results, with F1s=0.84 and IoU=0.73. In particular, ResNet18 obtains F1s=0.905 and IoU=0.827 and requires the least training time among the SegNet and DeepLabV3+ networks. Hence, ResNet18 is a potential candidate for implementing fully automated end-to-end CAD systems. The CNN models generated in this study are available to researchers at https://github.com/wgomezf/CNN-BUS-segment, which aims to enable fair comparisons with other CNN-based segmentation approaches for BUS images.
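To illustrate the transfer-learning setup the abstract describes, the following is a minimal sketch of fine-tuning a pretrained semantic segmentation network for two-class (normal vs. tumoral pixel) BUS segmentation. It uses torchvision's DeepLabV3 with a ResNet-50 backbone as a stand-in; the paper's own models (e.g., DeepLabV3+ with a ResNet18 backbone) were built elsewhere, and the function name, learning rate, and loss choice here are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: fine-tune a COCO-pretrained DeepLabV3 (ResNet-50 backbone)
# to label BUS pixels as normal (0) or tumoral (1). This only demonstrates
# the transfer-learning idea; it is not the paper's published training code.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT")            # pretrained weights
model.classifier[4] = nn.Conv2d(256, 2, kernel_size=1)   # new 2-class head
model.aux_classifier[4] = nn.Conv2d(256, 2, kernel_size=1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, masks):
    """images: (N,3,H,W) float tensor; masks: (N,H,W) int64 in {0,1}."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]        # (N,2,H,W) per-pixel class scores
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```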
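The two reported metrics have simple closed forms on a binary mask: F1s = 2TP/(2TP+FP+FN), which equals the Dice coefficient, and IoU = TP/(TP+FP+FN), the Jaccard index. A minimal sketch, assuming boolean NumPy masks (the function name and the epsilon guard against empty masks are illustrative):

```python
# Hedged sketch: compute the paper's two metrics from a predicted and a
# ground-truth binary tumor mask. Inputs are boolean arrays of equal shape.
import numpy as np

def f1_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-12):
    tp = np.logical_and(pred, gt).sum()      # tumor pixels correctly hit
    fp = np.logical_and(pred, ~gt).sum()     # normal pixels marked as tumor
    fn = np.logical_and(~pred, gt).sum()     # tumor pixels missed
    f1 = 2 * tp / (2 * tp + fp + fn + eps)   # F1-score (Dice coefficient)
    iou = tp / (tp + fp + fn + eps)          # Intersection over Union
    return f1, iou
```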



Updated: 2020-10-12