Unsupervised Representation Learning for Tissue Segmentation in Histopathological Images: From Global to Local Contrast
IEEE Transactions on Medical Imaging (IF 10.6), Pub Date: 2022-07-15, DOI: 10.1109/tmi.2022.3191398
Zeyu Gao, Chang Jia, Yang Li, Xianli Zhang, Bangyang Hong, Jialun Wu, Tieliang Gong, Chunbao Wang, Deyu Meng, Yefeng Zheng, Chen Li

Tissue segmentation is an essential task in computational pathology. However, datasets for this pixel-level classification task are hard to obtain because annotation is laborious, which hinders training deep learning-based segmentation models. Recently, contrastive learning has offered a feasible way to mitigate deep learning models' heavy reliance on annotation. Nevertheless, because existing contrastive learning frameworks apply the contrastive loss to the most abstract image representations, they focus on global features and are therefore less capable of encoding the finer-grained features (e.g., pixel-level discrimination) needed for tissue segmentation. Guided by domain knowledge, we design three contrastive learning tasks with multi-granularity views (from global to local) that encode the necessary features into representations without accessing annotations. Specifically, we construct: (1) an image-level task that captures the differences between tissue components, i.e., encoding component discrimination; (2) a superpixel-level task that learns discriminative representations of local regions containing different tissue components, i.e., encoding prototype discrimination; (3) a pixel-level task that encourages similar representations of different tissue components within a local region, i.e., encoding spatial smoothness. Through this global-to-local pre-training strategy, the learned representations capture domain-specific, fine-grained patterns, making them readily transferable to various tissue segmentation tasks on histopathological images. We conduct extensive experiments on two tissue segmentation datasets under two real-world scenarios with limited or sparse annotations. The results demonstrate that our framework outperforms existing contrastive learning methods and can be easily combined with weakly supervised and semi-supervised segmentation methods.
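At the core of each of the three tasks described above is a contrastive objective that pulls an anchor representation toward a positive view and pushes it away from negatives. As a rough illustration of this mechanism (not the authors' implementation), the sketch below computes a standard InfoNCE-style loss in NumPy; the embedding dimension, temperature, and the synthetic "augmented view" are all illustrative assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss for a single anchor.

    anchor, positive: (d,) L2-normalised embeddings;
    negatives: (n, d) L2-normalised embeddings.
    The loss is low when the anchor is much closer (in cosine
    similarity) to the positive than to any negative.
    """
    pos = np.exp(anchor @ positive / tau)
    neg = np.exp(negatives @ anchor / tau).sum()
    return float(-np.log(pos / (pos + neg)))

def normed(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)

# Illustrative image-level view: one embedding per tissue patch.
anchor = normed(rng.normal(size=128))
# Positive: a lightly perturbed copy standing in for an augmented view.
positive = normed(anchor + 0.05 * rng.normal(size=128))
# Negatives: embeddings of unrelated patches.
negatives = np.stack([normed(rng.normal(size=128)) for _ in range(16)])

loss_matched = info_nce(anchor, positive, negatives)
loss_random = info_nce(anchor, normed(rng.normal(size=128)), negatives)
print(loss_matched, loss_random)
```

The same loss can be applied at any granularity simply by changing what an "anchor" is: a whole-image embedding (component discrimination), a superpixel prototype, or a per-pixel feature vector, which is the essence of the paper's global-to-local design.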
