Label-Efficient Multi-Task Segmentation using Contrastive Learning
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2020-09-23, DOI: arxiv-2009.11160
Junichiro Iwasawa, Yuichiro Hirano and Yohei Sugawara

Obtaining annotations for 3D medical images is expensive and time-consuming, despite their importance for automating segmentation tasks. Although multi-task learning is considered an effective method for training segmentation models with small amounts of annotated data, a systematic understanding of the various subtasks is still lacking. In this study, we propose a multi-task segmentation model with a contrastive-learning-based subtask and compare its performance with that of other multi-task models while varying the amount of labeled data used for training. We further extend the model so that it can exploit unlabeled data through its regularization branch in a semi-supervised manner. We show experimentally that, when the amount of annotated data is limited, the proposed method outperforms other multi-task methods, including the state-of-the-art fully supervised model.
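To make the setup described in the abstract concrete, the sketch below shows a minimal, hypothetical PyTorch version of a multi-task 3D segmentation model with a contrastive subtask. It is not the authors' implementation: the tiny encoder, the projection head, the SimCLR-style NT-Xent loss, the noise augmentation, and the loss weight are all illustrative assumptions. Only the overall structure follows the abstract: a shared encoder trained with a segmentation loss plus a contrastive term that can also be computed on unlabeled volumes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskSegNet(nn.Module):
    """Shared 3D encoder with a segmentation head (main task) and a
    contrastive projection head (subtask)."""

    def __init__(self, in_ch=1, num_classes=2, feat_dim=32, proj_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv3d(feat_dim, num_classes, 1)   # voxel-wise logits
        self.proj_head = nn.Sequential(                        # global embedding
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x):
        feats = self.encoder(x)
        seg_logits = self.seg_head(feats)
        pooled = feats.mean(dim=(2, 3, 4))     # global average pool over D, H, W
        z = F.normalize(self.proj_head(pooled), dim=1)
        return seg_logits, z


def nt_xent_loss(z1, z2, temperature=0.1):
    """SimCLR-style NT-Xent loss between two augmented views of the same batch."""
    z = torch.cat([z1, z2], dim=0)                            # (2N, proj_dim)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))           # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Toy training step: the segmentation loss uses labeled volumes only, while the
# contrastive term can also be computed on unlabeled volumes, which is one way a
# regularization branch can be used in a semi-supervised manner.
model = MultiTaskSegNet()
x = torch.randn(2, 1, 16, 32, 32)                             # two small 3D volumes
y = torch.randint(0, 2, (2, 16, 32, 32))                      # voxel-level labels
view1 = x + 0.05 * torch.randn_like(x)                        # cheap augmentation
view2 = x + 0.05 * torch.randn_like(x)

seg_logits, _ = model(x)
_, z1 = model(view1)
_, z2 = model(view2)
loss = F.cross_entropy(seg_logits, y) + 0.1 * nt_xent_loss(z1, z2)
loss.backward()
```

In this toy setup, only the contrastive term is defined for volumes without labels, so unlabeled data contribute gradients solely through the projection head and shared encoder; that is the sense in which such a branch acts as a semi-supervised regularizer.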

Updated: 2020-09-24