Dual‐branch feature fusion S3D V‐Net network for lung nodules segmentation
Journal of Applied Clinical Medical Physics (IF 2.1), Pub Date: 2024-03-13, DOI: 10.1002/acm2.14331
Xiaoru Xu, Lingyan Du, Dongsheng Yin

Background: Accurate segmentation of lung nodules can help doctors obtain more accurate results and protocols in early lung cancer diagnosis and treatment planning, so that patients can be detected and treated at an early stage and the mortality rate of lung cancer can be reduced.

Purpose: Currently, improvements in lung nodule segmentation accuracy are limited by the heterogeneous appearance of nodules in the lungs, the imbalance between segmentation targets and background pixels, and other factors. We propose a new 2.5D lung nodule segmentation network model. The model improves the extraction of edge information of lung nodules and fuses intra‐slice and inter‐slice features, making good use of the three‐dimensional structural information of lung nodules and more effectively improving segmentation accuracy.

Methods: Our approach builds on a typical encoding‐decoding network structure. The improved model captures nodule features in both 3‐D and 2‐D CT images; it complements the features of the segmentation target and enhances the texture features at the edges of pulmonary nodules through a dual‐branch feature fusion module (DFFM) and a reverse attention context module (RACM); and it employs central pooling instead of the maximal pooling operation to preserve features around the target and suppress edge‐irrelevant features, further improving segmentation performance.

Results: We evaluated this method on 1186 nodules from the LUNA16 dataset. Averaging the results of ten-fold cross‐validation, the proposed method achieved a mean Dice similarity coefficient (mDSC) of 84.57% and a mean overlapping error (mOE) of 18.73%, with an average processing time of about 2.07 s per case. Moreover, our results were compared with inter‐radiologist agreement on the LUNA16 dataset, and the average difference was 0.74%.

Conclusion: The experimental results show that our method improves the accuracy of pulmonary nodule segmentation while requiring less processing time than most 3‐D segmentation methods.
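The abstract does not describe how the intra‐slice and inter‐slice features are combined internally. The sketch below is a rough, generic illustration of one way a dual‐branch 2.5D fusion over a small CT slice stack could look in PyTorch; the class name, channel sizes, and fusion strategy are assumptions for illustration only and are not taken from the paper's DFFM or RACM design.

```python
# Speculative sketch (not the paper's implementation): fuse intra-slice (2-D)
# features of the center slice with inter-slice (3-D) features of the stack.
import torch
import torch.nn as nn


class DualBranch25DFusion(nn.Module):
    """Generic 2.5D fusion: 2-D branch on the center slice, 3-D branch on the stack."""

    def __init__(self, in_slices: int = 3, channels: int = 16):
        super().__init__()
        # Intra-slice branch: 2-D convolution on the center slice only.
        self.branch_2d = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Inter-slice branch: 3-D convolution across the whole slice stack.
        self.branch_3d = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        # Simple fusion: concatenate both feature maps and mix with a 1x1 convolution.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.in_slices = in_slices

    def forward(self, stack: torch.Tensor) -> torch.Tensor:
        # stack: (batch, 1, slices, height, width), e.g. 3 adjacent CT slices.
        center = stack[:, :, self.in_slices // 2]       # (B, 1, H, W)
        feat_2d = self.branch_2d(center)                # intra-slice features
        feat_3d = self.branch_3d(stack).mean(dim=2)     # collapse the slice axis
        return self.fuse(torch.cat([feat_2d, feat_3d], dim=1))


if __name__ == "__main__":
    x = torch.randn(2, 1, 3, 64, 64)        # two samples, 3-slice stacks of 64x64 patches
    print(DualBranch25DFusion()(x).shape)   # torch.Size([2, 16, 64, 64])
```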

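For reference, the reported metrics can be computed from binary masks with the standard definitions below. This is a minimal sketch assuming DSC = 2|A∩B| / (|A| + |B|) and OE = 1 − |A∩B| / |A∪B| (one minus the Jaccard index); the paper's exact overlapping-error definition may differ.

```python
# Standard segmentation metrics for binary masks; per-nodule scores would then be
# averaged over the ten cross-validation folds to give mDSC and mOE.
import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0


def overlapping_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Overlapping error: one minus the intersection-over-union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return 1.0 - intersection / union if union else 0.0


if __name__ == "__main__":
    a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
    b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
    # Prints DSC=0.562, OE=0.609 for these two overlapping squares.
    print(f"DSC={dice_coefficient(a, b):.3f}, OE={overlapping_error(a, b):.3f}")
```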