Global Transformer and Dual Local Attention Network via Deep-Shallow Hierarchical Feature Fusion for Retinal Vessel Segmentation
IEEE Transactions on Cybernetics (IF 9.4), Pub Date: 2022-08-19, DOI: 10.1109/tcyb.2022.3194099
Yang Li, Yue Zhang, Jing-Yu Liu, Kang Wang, Kai Zhang, Gen-Sheng Zhang, Xiao-Feng Liao, Guang Yang

Clinically, retinal vessel segmentation is a significant step in the diagnosis of fundus diseases. However, recent methods generally neglect the difference in semantic information between deep and shallow features and fail to capture global and local characterizations of fundus images simultaneously, resulting in limited segmentation performance for fine vessels. In this article, a global transformer (GT) and dual local attention (DLA) network via deep-shallow hierarchical feature fusion (GT-DLA-dsHFF) is investigated to address these limitations. First, the GT is developed to integrate global information in the retinal image; it effectively captures long-distance dependencies between pixels and alleviates the discontinuity of blood vessels in the segmentation results. Second, the DLA, which is constructed using dilated convolutions with varied dilation rates, unsupervised edge detection, and a squeeze-excitation block, is proposed to extract local vessel information and consolidate edge details in the segmentation result. Finally, a novel deep-shallow hierarchical feature fusion (dsHFF) algorithm is studied to fuse features at different scales within the deep learning framework, which mitigates the attenuation of valid information during feature fusion. We verified GT-DLA-dsHFF on four typical fundus image datasets. The experimental results demonstrate that GT-DLA-dsHFF achieves superior performance against current methods, and detailed discussions verify the efficacy of the three proposed modules. Segmentation results on diseased images show the robustness of the proposed GT-DLA-dsHFF. Implementation code will be available at https://github.com/YangLibuaa/GT-DLA-dsHFF.
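The abstract names the building blocks of the local branch (multi-rate dilated convolutions and a squeeze-excitation block) without giving the exact architecture. The sketch below is a minimal, hypothetical PyTorch illustration of such a block, not the authors' released code: channel counts, dilation rates (1, 2, 4), and layer names are assumptions, and the unsupervised edge-detection branch and the global transformer are omitted.

# Minimal sketch (not the authors' code): a local-attention-style block that
# fuses parallel dilated convolutions and re-weights channels with
# squeeze-and-excitation. All hyperparameters here are illustrative.
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    """Standard squeeze-and-excitation channel attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class DilatedLocalBlock(nn.Module):
    """Parallel dilated 3x3 convolutions (assumed rates 1, 2, 4), fused by a
    1x1 convolution and gated by squeeze-and-excitation."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)
        self.se = SqueezeExcitation(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.se(self.fuse(feats))


if __name__ == "__main__":
    block = DilatedLocalBlock(in_ch=64, out_ch=64)
    demo = torch.randn(1, 64, 128, 128)  # dummy feature map
    print(block(demo).shape)  # -> torch.Size([1, 64, 128, 128])

Setting the padding equal to the dilation rate keeps every branch at the input resolution, so the concatenation and 1x1 fusion are well defined regardless of the rate chosen.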

Updated: 2024-08-28