CSU-Net: A Context Spatial U-Net for Accurate Blood Vessel Segmentation in Fundus Images
IEEE Journal of Biomedical and Health Informatics (IF 7.7) Pub Date: 2020-07-22, DOI: 10.1109/jbhi.2020.3011178
Bo Wang, Shengpei Wang, Shuang Qiu, Wei Wei, Haibao Wang, Huiguang He

Blood vessel segmentation in fundus images is a critical step in the diagnosis of ophthalmic diseases. Recent deep learning methods achieve high accuracy in vessel segmentation but still struggle to segment the microvasculature and detect vessel boundaries. This is because common convolutional neural networks (CNNs) cannot preserve rich spatial information and a large receptive field simultaneously. Moreover, CNN models for vessel segmentation are usually trained with a pixel-level cross-entropy loss that weights all pixels equally, which tends to miss fine vessel structures. In this paper, we propose a novel Context Spatial U-Net (CSU-Net) for blood vessel segmentation. In contrast to other U-Net based models, we design a two-channel encoder: a context channel with multi-scale convolutions to enlarge the receptive field, and a spatial channel with large kernels to retain spatial information. To combine and strengthen the features extracted along the two paths, we introduce a feature fusion module (FFM) and an attention skip module (ASM). Furthermore, we propose a structure loss, which adds a spatial weight to the cross-entropy loss and guides the network to focus more on thin vessels and boundaries. We evaluated the model on three public datasets: DRIVE, CHASE-DB1, and STARE. The results show that CSU-Net achieves higher segmentation accuracy than current state-of-the-art methods.
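The abstract does not give the exact form of the structure loss, but the idea of adding a spatial weight to a pixel-wise cross-entropy can be sketched as follows. This is a minimal illustration, not the authors' formulation: the weight map, the boundary indicator `is_edge`, and the scaling factor `alpha` are assumptions chosen for the example.

```python
import math

def weighted_bce(probs, labels, weights):
    """Pixel-wise binary cross-entropy, scaled by a spatial weight map.

    probs, labels, weights are flat, equal-length sequences; in practice
    the weight map would be derived from the ground-truth vessel mask
    (e.g. larger weights near boundaries and on thin vessels).
    """
    eps = 1e-7
    total = 0.0
    for p, y, w in zip(probs, labels, weights):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(probs)

# Hypothetical 4-pixel example: boundary pixels get weight 1 + alpha.
alpha = 2.0
labels  = [1, 1, 0, 0]
is_edge = [1, 0, 1, 0]                     # assumed boundary indicator
weights = [1.0 + alpha * e for e in is_edge]
probs   = [0.9, 0.8, 0.2, 0.1]             # predicted vessel probabilities
loss = weighted_bce(probs, labels, weights)
```

Because the spatial weights are all at least 1, errors on the flagged boundary pixels contribute more than in the plain (uniformly weighted) cross-entropy, nudging training toward the thin structures the abstract highlights.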

Updated: 2020-07-22