Self-supervised driven consistency training for annotation efficient histopathology image analysis
Medical Image Analysis (IF 10.7) | Pub Date: 2021-10-13 | DOI: 10.1016/j.media.2021.102256
Chetan L. Srinidhi, Seung Wook Kim, Fu-Der Chen, Anne L. Martel

Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology. However, obtaining such exhaustive manual annotations is often expensive, laborious, and prone to inter- and intra-observer variability. While recent self-supervised and semi-supervised methods can alleviate this need by learning unsupervised feature representations, they still struggle to generalize well to downstream tasks when the number of labeled instances is small. In this work, we overcome this challenge by leveraging both task-agnostic and task-specific unlabeled data based on two novel strategies: (i) a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning; (ii) a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
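The multi-resolution pretext idea in (i) can be illustrated with a simplified sketch: extract the same tissue region at several magnifications, shuffle the resolution sequence, and let the permutation index serve as a free self-supervised label. The pooling-based downsampling, patch sizes, and scale factors below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

def downsample(img, factor):
    """Average-pool a patch to mimic a lower WSI magnification."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A toy grayscale 'slide region'; real inputs would be RGB WSI patches.
region = rng.random((64, 64))
scales = [1, 2, 4]                       # three magnification levels
views = [downsample(region, s) for s in scales]

# Pretext task: shuffle the resolution sequence and train a model to
# recover the permutation; the permutation index is the supervisory signal.
perms = list(permutations(range(len(scales))))
label = int(rng.integers(len(perms)))    # ground-truth permutation index
shuffled = [views[i] for i in perms[label]]
```

A network would then be trained to classify `label` from `shuffled`, forcing it to relate contextual cues across magnifications without any manual annotation.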

We carry out extensive validation experiments on three histopathology benchmark datasets spanning two classification tasks and one regression task, i.e., tumor metastasis detection, tissue type classification, and tumor cellularity quantification. Under limited-label data, the proposed method yields tangible improvements that are close to, or even exceed, other state-of-the-art self-supervised and supervised baselines. Furthermore, we empirically show that bootstrapping the self-supervised pretrained features is an effective way to improve task-specific semi-supervised learning on standard benchmarks. Code and pretrained models are made available at: https://github.com/srinidhiPY/SSL_CR_Histo.
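The teacher-student consistency scheme in (ii) can be sketched in simplified NumPy form: an exponential-moving-average (EMA) teacher, initialized from the pretrained student, supervises the student on task-specific unlabeled data via a consistency loss. The linear model, EMA rate, and mean-squared loss here are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class LinearClassifier:
    """Toy stand-in for the histology network (hypothetical)."""
    def __init__(self, dim, classes):
        self.W = rng.normal(scale=0.01, size=(dim, classes))
    def predict(self, x):
        return softmax(x @ self.W)

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights track the student via an exponential moving average."""
    teacher.W = alpha * teacher.W + (1 - alpha) * student.W

def consistency_loss(student, teacher, x_unlabeled):
    """Mean-squared error between student and (fixed) teacher predictions."""
    p_s = student.predict(x_unlabeled)
    p_t = teacher.predict(x_unlabeled)   # teacher output is treated as a target
    return float(np.mean((p_s - p_t) ** 2))

student = LinearClassifier(dim=8, classes=3)
teacher = LinearClassifier(dim=8, classes=3)
teacher.W = student.W.copy()             # teacher initialized from the student

x_u = rng.normal(size=(16, 8))           # task-specific unlabeled patches
loss = consistency_loss(student, teacher, x_u)

# After a (mock) student gradient step, the teacher follows via EMA.
student.W -= 0.1 * rng.normal(scale=0.01, size=student.W.shape)
ema_update(teacher, student)
```

In training, this consistency term would be combined with the supervised loss on the few labeled instances, so the unlabeled pool regularizes the transferred representations.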




Updated: 2021-10-27