Weakly Supervised Histopathology Image Segmentation With Sparse Point Annotations
IEEE Journal of Biomedical and Health Informatics (IF 6.7) Pub Date: 2020-09-15, DOI: 10.1109/jbhi.2020.3024262
Zhe Chen, Zhao Chen, Jingxin Liu, Qiang Zheng, Yuang Zhu, Yanfei Zuo, Zhaoyu Wang, Xiaosong Guan, Yue Wang, Yuan Li

Digital histopathology image segmentation can facilitate computer-assisted cancer diagnostics. Given the difficulty of obtaining manual annotations, weak supervision is more suitable for the task than full supervision. However, most weakly supervised models do not handle the severe intra-class heterogeneity and inter-class homogeneity of histopathology images well. Therefore, we propose a novel end-to-end weakly supervised learning framework named WESUP. With only sparse point annotations, it performs accurate segmentation and exhibits good generalizability. The training phase comprises two major parts: hierarchical feature representation and deep dynamic label propagation. The former uses superpixels to capture local details and global context from the convolutional feature maps obtained via transfer learning. The latter recognizes the manifold structure of the hierarchical features and identifies potential targets from the sparse annotations. Moreover, the two parts are trained jointly to improve the performance of the whole framework. To further boost test performance, pixel-wise inference is adopted for finer prediction. As demonstrated by the experimental results, WESUP largely resolves the confusion between histological foreground and background. It outperforms several state-of-the-art weakly supervised methods on a variety of histopathology datasets with minimal annotation effort. Trained with very sparse point annotations, WESUP can even beat an advanced fully supervised segmentation network.
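The abstract's first training component, hierarchical feature representation, can be illustrated with a short sketch. The snippet below is not the authors' code; it is a minimal illustration under assumed choices (a VGG16 backbone from torchvision as the transfer-learning source, the last ReLU of each convolutional block as the tapped layers, SLIC superpixels with 400 segments) of how multi-depth convolutional feature maps can be upsampled to image resolution and averaged within each superpixel to give one hierarchical descriptor per superpixel.

```python
import numpy as np
import torch
import torch.nn.functional as F
import torchvision
from skimage.segmentation import slic

# Layers whose outputs form the feature hierarchy (last ReLU of each VGG block).
# This layer choice, the VGG16 backbone, and the superpixel count are
# illustrative assumptions, not the paper's exact configuration.
TAP_LAYERS = {3, 8, 15, 22, 29}

def hierarchical_superpixel_features(image, n_segments=400):
    """image: float32 RGB array in [0, 1], shape (H, W, 3).
    Returns the superpixel label map and one pooled feature per superpixel."""
    h, w, _ = image.shape
    backbone = torchvision.models.vgg16(weights="DEFAULT").features.eval()

    # (ImageNet mean/std normalization omitted for brevity.)
    x = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)  # (1, 3, H, W)
    feats = []
    with torch.no_grad():
        for idx, layer in enumerate(backbone):
            x = layer(x)
            if idx in TAP_LAYERS:
                # Upsample so shallow (local detail) and deep (global context)
                # maps align pixel-to-pixel with the input image.
                feats.append(F.interpolate(x, size=(h, w), mode="bilinear",
                                           align_corners=False))
    hierarchy = torch.cat(feats, dim=1)[0]  # (C_total, H, W)

    # Over-segment the image, then average the hierarchical features inside
    # each superpixel to obtain one descriptor per superpixel.
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    sp_feats = []
    for s in np.unique(segments):
        mask = torch.from_numpy(segments == s)
        sp_feats.append(hierarchy[:, mask].mean(dim=1).numpy())
    return segments, np.stack(sp_feats)
```

In the paper's framework, descriptors of this kind would then feed the second component, deep dynamic label propagation, which spreads the sparse point labels over the superpixel graph; that part is not sketched here.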

Updated: 2020-09-15