A Generic Deep Learning Framework to Classify Thyroid and Breast Lesions in Ultrasound Images
Ultrasonics (IF 4.2) Pub Date: 2021-02-01, DOI: 10.1016/j.ultras.2020.106300
Yi-Cheng Zhu, Alaa AlZoubi, Sabah Jassim, Quan Jiang, Yuan Zhang, Yong-Bing Wang, Xian-De Ye, Hongbo Du

Breast and thyroid cancers are two of the most common cancers affecting women worldwide. Ultrasonography (US) is a widely used non-invasive imaging modality for detecting breast and thyroid cancers, but its clinical diagnostic accuracy for these cancers remains controversial. Thyroid and breast cancers share several similar high-frequency ultrasound characteristics, such as a taller-than-wide shape ratio, hypo-echogenicity, and ill-defined margins. This study aims to develop an automatic scheme for classifying thyroid and breast lesions in ultrasound images using deep convolutional neural networks (DCNN). In particular, we propose a generic DCNN architecture with transfer learning and identical architectural parameter settings to train separate models for thyroid and breast cancers (TNet and BNet), and we test the viability of this generic approach on ultrasound images collected from clinical practice. In addition, we investigate the potential of the thyroid model to learn the shared features and its performance in classifying both breast and thyroid lesions. A retrospective dataset of 719 thyroid and 672 breast images, acquired on ultrasound machines of different makes between October 2016 and December 2018, is used in this study. Test results show that both TNet and BNet, built on the same DCNN architecture, achieve good classification performance (86.5% average accuracy for TNet and 89% for BNet). Furthermore, when TNet is used to classify breast lesions, it achieves a sensitivity of 86.6% and a specificity of 87.1%, indicating its capability to learn features shared by thyroid and breast lesions. We further compared the diagnostic performance of the TNet model against that of three radiologists. The area under the curve (AUC) for thyroid nodule classification is 0.861 (95% CI: 0.792-0.929) for the TNet model and 0.757-0.854 (95% CI: 0.658-0.934) for the three radiologists. The AUC for breast cancer classification is 0.875 (95% CI: 0.804-0.947) for the TNet model and 0.698-0.777 (95% CI: 0.593-0.872) for the radiologists, indicating the model's potential to classify both breast and thyroid cancers more accurately than the radiologists.
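As a reminder of what the reported AUC figures measure, the AUC equals the probability that a randomly chosen malignant case receives a higher model score than a randomly chosen benign case (the Mann-Whitney formulation). A minimal sketch, using hypothetical toy labels and scores rather than the study's data:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs where the positive is scored higher,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical toy example: 1 = malignant, 0 = benign
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(round(auc_score(labels, scores), 3))  # → 0.889
```

One misranked pair out of nine (the 0.4 positive below the 0.5 negative) gives 8/9 ≈ 0.889; an AUC of 0.861, as reported for TNet on thyroid nodules, reflects a comparable ranking quality over the full test set.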
