Joint disc and cup segmentation based on recurrent fully convolutional network.
PLOS ONE (IF 2.9), Pub Date: 2020-09-21, DOI: 10.1371/journal.pone.0238983
Jing Gao, Yun Jiang, Hai Zhang, Falin Wang

Optic disc (OD) and optic cup (OC) segmentation is a key step in fundus medical image analysis. Previously, FCN-based methods have been proposed for medical image segmentation tasks. However, consecutive convolution and pooling operations usually hinder dense prediction tasks that require detailed spatial information, such as image segmentation. In this paper, we propose a network called Recurrent Fully Convolutional Network (RFC-Net) for automatic joint segmentation of the OD and the OC, which can capture more high-level information and subtle edge information. RFC-Net can minimize the loss of spatial information. It is mainly composed of a multi-scale input layer, a recurrent fully convolutional network, a multiple output layer, and a polar transformation. In RFC-Net, the multi-scale input layer constructs an image pyramid. We propose four recurrent units, which are applied to RFC-Net respectively. The recurrent convolution layer effectively ensures feature representation for the OD and OC segmentation tasks through feature accumulation. For each multiple output image, a multiple output cross-entropy loss function is applied. To better balance the cup ratio of the segmented image, the polar transformation is used to transform the fundus image from the Cartesian coordinate system to the polar coordinate system. We evaluate the effectiveness and generalization of the proposed method on the DRISHTI-GS1 dataset. Compared with the original FCN method and other state-of-the-art methods, the proposed method achieves better segmentation performance.
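The recurrent convolution layer mentioned in the abstract accumulates features by applying the same convolution over several iterations. Below is a minimal PyTorch sketch of such a unit for illustration only; the class name `RecurrentConvUnit`, the two recurrent steps, and the residual re-injection of the input are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class RecurrentConvUnit(nn.Module):
    """Illustrative recurrent convolution unit (assumption, not the paper's unit):
    the same convolution is applied repeatedly, each step re-injecting the
    original input feature map so that features accumulate over iterations."""
    def __init__(self, channels, steps=2):
        super().__init__()
        self.steps = steps
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        h = self.conv(x)
        for _ in range(self.steps):
            h = self.conv(x + h)  # feature accumulation across recurrent steps
        return h
```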

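The multi-scale input layer and the multiple output loss can be pictured as an image pyramid on the input side and deeply supervised side outputs on the output side. The sketch below, again assuming PyTorch, is only a rough reading of the abstract; `image_pyramid`, `multi_output_loss`, and the number of pyramid levels are hypothetical.

```python
import torch
import torch.nn.functional as F

def image_pyramid(x, num_levels=4):
    """Build a multi-scale input pyramid by repeated 2x average-pool downsampling
    (illustrative; the paper's exact multi-scale input layer may differ)."""
    pyramid = [x]
    for _ in range(num_levels - 1):
        x = F.avg_pool2d(x, kernel_size=2)
        pyramid.append(x)
    return pyramid

def multi_output_loss(side_outputs, target):
    """Sum cross-entropy losses over the side outputs (deep supervision).
    `side_outputs` are logits of shape (N, C, h, w); `target` holds class
    indices (background / OD / OC) of shape (N, H, W)."""
    loss = 0.0
    for logits in side_outputs:
        logits = F.interpolate(logits, size=target.shape[-2:],
                               mode="bilinear", align_corners=False)
        loss = loss + F.cross_entropy(logits, target)
    return loss
```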

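The polar transformation maps a disc-centred crop from Cartesian to polar coordinates, so the roughly circular OD/OC regions become horizontal bands and the cup proportion in the image becomes easier to balance. A minimal sketch using OpenCV's `cv2.warpPolar` follows; the default centring and radius are assumptions, since the abstract does not specify the cropping procedure.

```python
import cv2
import numpy as np

def to_polar(image, center=None, radius=None):
    """Map a fundus crop from Cartesian to polar coordinates around the disc
    center (sketch; assumes the crop is already centred on the optic disc)."""
    h, w = image.shape[:2]
    if center is None:
        center = (w / 2.0, h / 2.0)
    if radius is None:
        radius = min(h, w) / 2.0
    return cv2.warpPolar(image, (w, h), center, radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

def to_cartesian(polar_image, center, radius, size):
    """Inverse mapping back to Cartesian coordinates after segmentation."""
    return cv2.warpPolar(polar_image, size, center, radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR
                         + cv2.WARP_INVERSE_MAP)
```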


Updated: 2020-09-22