Classification of Tumor Epithelium and Stroma by Exploiting Image Features Learned by Deep Convolutional Neural Networks.
Annals of Biomedical Engineering (IF 3.0), Pub Date: 2018-07-26, DOI: 10.1007/s10439-018-2095-6
Yue Du, Roy Zhang, Abolfazl Zargari, Theresa C. Thai, Camille C. Gunderson, Katherine M. Moxley, Hong Liu, Bin Zheng, Yuchen Qiu

The tumor-stroma ratio (TSR) reflected in hematoxylin and eosin (H&E)-stained histological images is a potential prognostic factor for survival. Automatic image processing techniques that allow high-throughput, precise discrimination of tumor epithelium and stroma are required to elevate the prognostic significance of the TSR. As a variant of deep learning, transfer learning leverages natural-image features learned by deep convolutional neural networks (CNNs) to relax the immense sample-size requirement of deep CNNs when handling biomedical classification problems. Herein we studied different transfer learning strategies for accurately distinguishing epithelial and stromal regions of H&E-stained histological images acquired from either breast or ovarian cancer tissue. We compared the performance of prominent deep CNNs used either as fixed feature extractors or as architectures fine-tuned on the target images. Moreover, we addressed the currently contradictory question of whether higher-level features generalize worse than lower-level ones because they are more specific to the source-image domain. Under our experimental setting, the transfer learning approach achieved an accuracy of 90.2% (vs. 91.1% for fine-tuning) with GoogLeNet, suggesting its feasibility for assisting pathology-based binary classification problems. Our results also show that whether lower-level or higher-level features transfer better depends on the architecture of the deep CNN.
