Mutual-Prototype Adaptation for Cross-Domain Polyp Segmentation
IEEE Journal of Biomedical and Health Informatics (IF 7.7), Pub Date: 2021-05-04, DOI: 10.1109/jbhi.2021.3077271
Chen Yang, Xiaoqing Guo, Meilu Zhu, Bulat Ibragimov, Yixuan Yuan

Accurate segmentation of polyps from colonoscopy images provides useful information for the diagnosis and treatment of colorectal cancer. Although deep learning methods have advanced automatic polyp segmentation, their performance often degrades when applied to new data acquired from different scanners or sequences (the target domain). Since manual annotation is tedious and labor-intensive for a new target domain, leveraging knowledge learned from the labeled source domain to improve performance on the unlabeled target domain is in high demand. In this work, we propose a mutual-prototype adaptation network to eliminate domain shift across multi-center and multi-device colonoscopy images. We first devise a mutual-prototype alignment (MPA) module with a prototype relation function that refines features through self-domain and cross-domain information in a coarse-to-fine process. Two auxiliary modules, progressive self-training (PST) and disentangled reconstruction (DR), are then proposed to further improve segmentation performance. The PST module selects reliable pseudo labels through a novel uncertainty-guided self-training loss to obtain accurate prototypes in the target domain. The DR module reconstructs the original images jointly from the prediction results and private prototypes to maintain semantic consistency and provide complementary supervision. We extensively evaluate the polyp segmentation performance of the proposed model on three conventional colonoscopy datasets: CVC-DB, Kvasir-SEG, and ETIS-Larib. Comprehensive experimental results demonstrate that the proposed model outperforms state-of-the-art methods.
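The two building blocks named above, class prototypes and confidence-filtered pseudo labels, can be illustrated with a minimal NumPy sketch. This is a simplified illustration under assumed interfaces, not the paper's implementation: the function names are hypothetical, the prototype is plain masked average pooling, and a max-probability confidence is used as a stand-in for the paper's uncertainty-guided self-training loss.

```python
import numpy as np

def class_prototype(features, mask):
    """Masked average pooling: mean feature vector over pixels in a class mask.

    features: (C, H, W) feature map; mask: (H, W) boolean class mask.
    Returns a (C,) prototype vector (zeros if the mask is empty).
    """
    denom = mask.sum()
    if denom == 0:
        return np.zeros(features.shape[0])
    return (features * mask).sum(axis=(1, 2)) / denom

def select_pseudo_labels(probs, tau=0.9):
    """Keep only high-confidence target-domain pixels as pseudo labels.

    probs: (H, W) foreground probabilities from the segmentation head.
    Returns (pseudo, keep): binary pseudo labels and the mask of pixels
    confident enough (max class probability > tau) to contribute to the
    self-training loss and to target-domain prototype estimation.
    """
    pseudo = probs > 0.5
    confidence = np.maximum(probs, 1.0 - probs)  # proxy for certainty
    keep = confidence > tau
    return pseudo, keep
```

In this sketch, only pixels passing the `keep` mask would be averaged by `class_prototype` to form target-domain prototypes, so that uncertain predictions do not corrupt the prototype estimates.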
