FAM: focal attention module for lesion segmentation of COVID-19 CT images
Journal of Real-Time Image Processing (IF 2.9), Pub Date: 2022-09-04, DOI: 10.1007/s11554-022-01249-5
Xiaoxin Wu, Zhihao Zhang, Lingling Guo, Hui Chen, Qiaojie Luo, Bei Jin, Weiyan Gu, Fangfang Lu, Jingjing Chen

The novel coronavirus pneumonia (COVID-19) is one of the world's most serious public health crises, posing a severe threat to public health. In clinical practice, automatic segmentation of lesions from computed tomography (CT) images using deep learning methods provides a promising tool for identifying and diagnosing COVID-19. To improve segmentation accuracy, attention mechanisms are adopted to highlight important features. However, existing attention methods perform poorly or even degrade the accuracy of convolutional neural networks (CNNs) for various reasons (e.g., low contrast between the lesion boundary and its surroundings, image noise). To address this issue, we propose a novel focal attention module (FAM) for lesion segmentation of CT images. FAM contains a channel attention module and a spatial attention module. The spatial attention module first generates a rough spatial attention map, a shape prior of the lesion region obtained from the CT image via median filtering and distance transformation. The rough spatial attention is then fed into two 7 × 7 convolution layers for correction, yielding refined spatial attention on the lesion region. FAM is individually integrated with six state-of-the-art segmentation networks (e.g., UNet and DeepLabV3+), and the six combinations are validated on a public dataset of COVID-19 CT images. The results show that FAM improves the Dice Similarity Coefficient (DSC) of the CNNs by 2% and reduces the number of false negatives (FN) and false positives (FP) by up to 17.6%, improvements significantly larger than those obtained with other attention modules such as CBAM and SENet. Furthermore, FAM significantly accelerates the convergence of model training and achieves better real-time performance. The code is available on GitHub (https://github.com/RobotvisionLab/FAM.git).
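The rough spatial attention described above (a shape prior built from median filtering followed by a distance transform) can be sketched as below. This is a minimal illustration of the general technique, not the authors' implementation; the function name, threshold, and filter size are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import median_filter, distance_transform_edt


def rough_spatial_attention(ct_slice: np.ndarray,
                            threshold: float = 0.5,
                            filter_size: int = 5) -> np.ndarray:
    """Generate a rough attention map in [0, 1] from a normalized CT slice."""
    # 1. Median filtering suppresses speckle noise in the CT slice.
    smoothed = median_filter(ct_slice, size=filter_size)
    # 2. Coarse binary mask of candidate lesion pixels.
    mask = smoothed > threshold
    # 3. Distance transform: pixels deeper inside the candidate region
    #    get larger values, forming a shape prior that peaks at the core.
    dist = distance_transform_edt(mask)
    # 4. Normalize to [0, 1] so the map can weight feature maps as attention.
    if dist.max() > 0:
        dist = dist / dist.max()
    return dist
```

In the module described by the abstract, a map like this would then be refined by two 7 × 7 convolution layers before being applied to the network's features.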




Updated: 2022-09-05