Transformation-Consistent Self-Ensembling Model for Semisupervised Medical Image Segmentation
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2), Pub Date: 2020-06-01, DOI: 10.1109/tnnls.2020.2995319
Xiaomeng Li, Lequan Yu, Hao Chen, Chi-Wing Fu, Lei Xing, Pheng-Ann Heng

A common shortfall of supervised deep learning for medical imaging is the lack of labeled data, which is often expensive and time consuming to collect. This article presents a new semisupervised method for medical image segmentation, where the network is optimized by a weighted combination of a common supervised loss computed only on the labeled inputs and a regularization loss computed on both the labeled and unlabeled data. To utilize the unlabeled data, our method encourages the network-in-training to make consistent predictions for the same input under different perturbations. For the semisupervised segmentation tasks, we introduce a transformation-consistent strategy in the self-ensembling model to enhance the regularization effect for pixel-level predictions. To further improve the regularization effect, we extend the transformation to a more generalized form, including scaling, and optimize the consistency loss against a teacher model, whose weights are an average of the student model weights. We extensively validated the proposed semisupervised method on three typical yet challenging medical image segmentation tasks: 1) skin lesion segmentation from dermoscopy images in the International Skin Imaging Collaboration (ISIC) 2017 data set; 2) optic disk (OD) segmentation from fundus images in the Retinal Fundus Glaucoma Challenge (REFUGE) data set; and 3) liver segmentation from volumetric CT scans in the Liver Tumor Segmentation Challenge (LiTS) data set. Compared with state-of-the-art methods, our approach shows superior performance on these challenging 2-D/3-D medical images, demonstrating the effectiveness of our semisupervised method for medical image segmentation.
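To make the training scheme concrete, below is a minimal PyTorch-style sketch of a transformation-consistent, mean-teacher-style update along the lines described above. The segmentation network, the rotation-only transform, and the hyperparameters (`alpha`, `lam`) are illustrative assumptions for this sketch, not the authors' exact implementation; the paper's generalized transform family also includes scaling.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights are an exponential moving average of student weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.data.mul_(alpha).add_(s_param.data, alpha=1 - alpha)

def rot90(x, k):
    """Rotate a batch of NCHW tensors by k * 90 degrees (one simple transformation)."""
    return torch.rot90(x, k, dims=(2, 3))

def train_step(student, teacher, optimizer,
               labeled_x, labeled_y, unlabeled_x, lam=1.0):
    k = int(torch.randint(0, 4, (1,)))  # sample a random transformation

    # Supervised loss on the labeled inputs only.
    sup_loss = F.cross_entropy(student(labeled_x), labeled_y)

    # Transformation-consistent regularization on labeled + unlabeled data:
    # predicting on the transformed input should match transforming the
    # (teacher's) prediction of the original input.
    all_x = torch.cat([labeled_x, unlabeled_x], dim=0)
    student_pred = student(rot90(all_x, k))      # transform input, then predict
    with torch.no_grad():
        teacher_pred = rot90(teacher(all_x), k)  # predict, then transform prediction
    cons_loss = F.mse_loss(torch.softmax(student_pred, dim=1),
                           torch.softmax(teacher_pred, dim=1))

    # Weighted combination of supervised and regularization losses.
    loss = sup_loss + lam * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    ema_update(teacher, student)  # teacher tracks an average of student weights
    return loss.item()
```

Rotation by multiples of 90 degrees is used here only because it commutes exactly with pixel-wise prediction; in practice, the consistency weight `lam` is typically ramped up over training so that the unreliable early teacher predictions do not dominate the loss.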
