MsTGANet: Automatic Drusen Segmentation From Retinal OCT Images
IEEE Transactions on Medical Imaging (IF 10.6). Pub Date: 2021-09-14. DOI: 10.1109/tmi.2021.3112716
Meng Wang, Weifang Zhu, Fei Shi, Jinzhu Su, Haoyu Chen, Kai Yu, Yi Zhou, Yuanyuan Peng, Zhongyue Chen, Xinjian Chen
Drusen are considered a landmark for the diagnosis of AMD and an important risk factor for its development. Therefore, accurate segmentation of drusen in retinal OCT images is crucial for early diagnosis of AMD. However, drusen segmentation in retinal OCT images remains very challenging due to large variations in drusen size and shape, blurred boundaries, and speckle-noise interference. Moreover, the lack of OCT datasets with pixel-level annotations is another major factor hindering improvements in drusen segmentation accuracy. To address these problems, a novel multi-scale transformer global attention network (MsTGANet) is proposed for drusen segmentation in retinal OCT images. In MsTGANet, which is based on a U-shaped architecture, a novel multi-scale transformer non-local (MsTNL) module is designed and inserted at the top of the encoder path, aiming to capture multi-scale non-local features with long-range dependencies from different layers of the encoder. Meanwhile, a novel multi-semantic global channel and spatial joint attention module (MsGCS) between the encoder and decoder is proposed to guide the model to fuse different semantic features, thereby improving the model's ability to learn multi-semantic global contextual information. Furthermore, to alleviate the shortage of labeled data, we propose a novel semi-supervised version of MsTGANet (Semi-MsTGANet) based on a pseudo-labeled data-augmentation strategy, which can leverage a large amount of unlabeled data to further improve segmentation performance. Finally, comprehensive experiments are conducted to evaluate the performance of the proposed MsTGANet and Semi-MsTGANet. The experimental results show that our proposed methods achieve better segmentation accuracy than other state-of-the-art CNN-based methods.
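The abstract describes a channel-and-spatial joint attention gate (MsGCS) placed on the skip connections between encoder and decoder. The paper's exact MsGCS design is not given here; the following is only a minimal, hypothetical PyTorch sketch of the general idea — reweighting a feature map first along its channel dimension and then along its spatial dimensions — in the spirit of generic CBAM-style attention:

```python
# Hypothetical sketch of a channel + spatial joint attention gate.
# This is NOT the authors' MsGCS module; it is a generic stand-in that
# illustrates channel reweighting followed by spatial reweighting.
import torch
import torch.nn as nn

class ChannelSpatialJointAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: collapse channels to a one-channel weight map.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)             # reweight channels
        avg_map = x.mean(dim=1, keepdim=True)    # per-pixel channel mean
        max_map = x.amax(dim=1, keepdim=True)    # per-pixel channel max
        attn = self.spatial_gate(torch.cat([avg_map, max_map], dim=1))
        return x * attn                          # reweight spatial locations

feat = torch.randn(2, 32, 64, 64)  # e.g. an encoder skip feature
out = ChannelSpatialJointAttention(32)(feat)
```

In a U-shaped network such a gate would sit on each skip connection, so the decoder receives encoder features already reweighted by learned channel and spatial importance.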
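The semi-supervised variant (Semi-MsTGANet) rests on a pseudo-labeled data-augmentation strategy. The paper's precise scheme is not detailed in this abstract; a common realization, sketched below as an assumption, is to let a trained teacher model predict soft masks on unlabeled scans and keep only high-confidence pixels as pseudo-labels (marking the rest with an ignore index so the loss can skip them):

```python
# Hypothetical sketch of pseudo-label generation for semi-supervised
# segmentation. Thresholds and the ignore-index convention (255) are
# illustrative assumptions, not taken from the paper.
import numpy as np

def make_pseudo_label(prob_map: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Binarize a teacher probability map; mark low-confidence pixels
    as 'ignore' (255) so a training loss can exclude them."""
    confident = np.maximum(prob_map, 1.0 - prob_map) >= threshold
    label = (prob_map >= 0.5).astype(np.uint8)   # confident fg=1 / bg=0
    label[~confident] = 255                      # uncertain -> ignore
    return label

probs = np.array([[0.95, 0.60],
                  [0.05, 0.92]])
pseudo = make_pseudo_label(probs)
# -> [[1, 255], [0, 1]]
```

The student model is then retrained on the union of labeled data and these pseudo-labeled scans, which is how a large pool of unlabeled OCT volumes can contribute to segmentation accuracy.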
