Fully convolutional attention network for biomedical image segmentation.
Artificial Intelligence in Medicine ( IF 6.1 ) Pub Date : 2020-06-05 , DOI: 10.1016/j.artmed.2020.101899
Junlong Cheng, Shengwei Tian, Long Yu, Hongchun Lu, Xiaoyi Lv

In this paper, we embed two types of attention modules in a dilated fully convolutional network (FCN) to solve biomedical image segmentation tasks efficiently and accurately. Unlike previous work that performs image segmentation through multiscale feature fusion, we propose the fully convolutional attention network (FCANet) to aggregate contextual information over both long-range and short-range distances. Specifically, we add two types of attention modules, a spatial attention module and a channel attention module, to a Res2Net backbone that uses a dilated strategy. The spatial attention module aggregates the features at every position so that similar features reinforce each other regardless of their spatial distance. At the same time, the channel attention module treats each channel of the feature map as a feature detector and emphasizes the dependency between any two channel maps. Finally, we compute a weighted sum of the outputs of the two attention modules to retain both long-range and short-range feature information, which further improves the feature representation and makes biomedical image segmentation more accurate. In particular, we verify that the proposed attention modules can be seamlessly connected to any end-to-end network with minimal overhead. We perform comprehensive experiments on three public biomedical image segmentation datasets, i.e., the Chest X-ray collection, the Kaggle 2018 Data Science Bowl, and the Herlev dataset. The experimental results show that FCANet improves the segmentation of biomedical images. The source code and models are available at https://github.com/luhongchun/FCANet
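The abstract describes a dual-attention design: a spatial attention branch that relates every position to every other position, a channel attention branch that models dependencies between channel maps, and a weighted sum of the two outputs. Below is a minimal PyTorch sketch of that pattern, not the authors' released implementation: the module names, the query/key/value projections, and the learnable fusion weights alpha and beta are illustrative assumptions; the actual FCANet code is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """Aggregates features across all spatial positions so that similar
    features reinforce each other regardless of distance (long-range context)."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                      # (B, C//8, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (B, HW, HW) position affinities
        v = self.value(x).flatten(2)                    # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Treats each channel map as a feature detector and models the
    dependency between any two channels."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.flatten(2)                                               # (B, C, HW)
        attn = F.softmax(torch.bmm(feat, feat.transpose(1, 2)), dim=-1)   # (B, C, C)
        out = torch.bmm(attn, feat).view(b, c, h, w)
        return self.gamma * out + x


class DualAttentionHead(nn.Module):
    """Weighted sum of the two attention branches, as described in the abstract.
    alpha and beta are hypothetical learnable fusion weights."""

    def __init__(self, channels):
        super().__init__()
        self.spatial = SpatialAttention(channels)
        self.channel = ChannelAttention()
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return self.alpha * self.spatial(x) + self.beta * self.channel(x)


if __name__ == "__main__":
    head = DualAttentionHead(channels=64)
    x = torch.randn(2, 64, 32, 32)   # e.g. features from a dilated Res2Net backbone
    print(head(x).shape)             # torch.Size([2, 64, 32, 32])
```

Because both branches return a tensor with the same shape as their input, a head like this can be dropped after the backbone of any end-to-end segmentation network, which is consistent with the "minimal overhead" claim in the abstract.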


