Semantic-Oriented Labeled-to-Unlabeled Distribution Translation for Image Segmentation
IEEE Transactions on Medical Imaging (IF 8.9), Pub Date: 2021-09-20, DOI: 10.1109/tmi.2021.3114329
Xiaoqing Guo, Jie Liu, Yixuan Yuan

Automatic medical image segmentation plays a crucial role in many medical applications, such as disease diagnosis and treatment planning. Existing deep learning based models usually regard the segmentation task as pixel-wise classification and neglect the semantic correlations of pixels across different images, leading to vague feature distributions. Moreover, pixel-wise annotated data is rare in the medical domain, and the scarce annotated data usually exhibits a distribution biased away from the desired one, hindering performance improvement under the supervised learning setting. In this paper, we propose a novel Labeled-to-unlabeled Distribution Translation (L2uDT) framework with Semantic-oriented Contrastive Learning (SoCL) to address these issues in medical image segmentation. In SoCL, a semantic grouping module is designed to cluster pixels into a set of semantically coherent groups, and a semantic-oriented contrastive loss is introduced to constrain the group-wise prototypes, so as to explicitly learn a feature space with intra-class compactness and inter-class separability. We then establish an L2uDT strategy to approximate the desired data distribution for unbiased optimization, translating the labeled data distribution under the guidance of extensive unlabeled data. In particular, a bias estimator is devised to measure the distribution bias, and a gradual-paced shift is derived to progressively translate the labeled data distribution toward the unlabeled one. Both the labeled and the translated data are leveraged to optimize the segmentation model simultaneously. We illustrate the effectiveness of the proposed method on two benchmark datasets, EndoScene and PROSTATEx, on which it achieves state-of-the-art performance, clearly demonstrating its effectiveness for medical image segmentation. The source code is available at https://github.com/CityU-AIM-Group/L2uDT.
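The SoCL idea described in the abstract can be illustrated with a minimal sketch. The NumPy code below is a hypothetical, simplified version of a prototype-based contrastive loss, not the authors' implementation: the function names, the InfoNCE-style formulation, and the temperature value are all assumptions for illustration. Pixel embeddings are grouped by semantic label, each group's prototype is its mean embedding, and every pixel is pulled toward its own prototype and pushed away from the others.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Normalize vectors to unit length so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def semantic_contrastive_loss(features, labels, temperature=0.1):
    """Hypothetical prototype-based contrastive loss (illustrative sketch).

    features: (N, D) pixel embeddings; labels: (N,) semantic group ids.
    Each group's prototype is the mean of its embeddings; an InfoNCE-style
    term pulls each pixel toward its own prototype and pushes it away from
    the other prototypes, encouraging intra-class compactness and
    inter-class separability.
    """
    feats = l2_normalize(np.asarray(features, dtype=float))
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # Per-group prototypes: mean embedding of each semantic group.
    protos = l2_normalize(
        np.stack([feats[labels == c].mean(axis=0) for c in classes])
    )
    logits = feats @ protos.T / temperature      # (N, K) scaled similarities
    pos = np.searchsorted(classes, labels)       # index of each pixel's own prototype
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), pos].mean()
```

With well-separated per-class clusters the loss approaches zero; with fully overlapping groups it approaches log K, where K is the number of semantic groups.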

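The L2uDT component, a bias estimator followed by a gradual-paced shift, can likewise be sketched. The code below is a hypothetical first-order approximation, not the paper's method: the function names, the mean-difference bias estimate, and the linear pacing schedule are assumptions for illustration. The distribution bias is estimated as the difference of feature means between the unlabeled and labeled sets, and labeled features are translated toward the unlabeled distribution by a coefficient that grows over training.

```python
import numpy as np

def estimate_bias(labeled_feats, unlabeled_feats):
    """First-order distribution bias: difference of feature means
    (a deliberate simplification of the paper's bias estimator)."""
    return unlabeled_feats.mean(axis=0) - labeled_feats.mean(axis=0)

def gradual_translate(labeled_feats, bias, step, total_steps):
    """Shift labeled features toward the unlabeled distribution with a
    pace coefficient that grows linearly from 0 to 1 over training, so the
    translation is gradual rather than abrupt."""
    lam = min(1.0, step / float(total_steps))
    return labeled_feats + lam * bias
```

In this sketch, both the original labeled features and their translated counterparts would be fed to the segmentation loss at each step, mirroring the abstract's description of optimizing with labeled and translated data simultaneously.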