Global context and boundary structure-guided network for cross-modal organ segmentation
Information Processing & Management (IF 8.6) | Pub Date: 2020-04-04 | DOI: 10.1016/j.ipm.2020.102252
Xiaonan Guo , Hongtao Xie , Hai Xu , Yongdong Zhang

In multi-modal medical images such as X-ray and CT, automated organ segmentation is often hampered by shape variation, intensity non-uniformity, and blurring. Introducing supplementary information helps eliminate the negative effects of these factors. Previous works often leverage medical priors as such supplementary information for specific tasks, but the specificity of these priors prevents those methods from generalizing to cross-modal problems. In this paper we propose the Global Context and Boundary Structure-guided Network (GCBSN), which utilizes global context and boundary structure to assist cross-modal organ segmentation. Specifically, we employ the global context from all spatial regions to guide our deformable convolution, so that more suitable receptive fields can be generated for irregularly shaped targets. We also extract the per-class global context from the coarse segmentation to assist in classifying areas with non-uniform intensity. Moreover, we design a novel loss function that places greater weight on errors at positions near organ boundaries, mitigating errors caused by boundary blurring. The cross-modal performance of GCBSN is evaluated on two datasets of different modalities, i.e., X-ray images and CT slices. On the 3D NIH Pancreas Dataset, GCBSN outperforms the baseline by 8.72% in terms of the Dice coefficient (DC). On the 2D Japanese Society of Radiological Technology Dataset, GCBSN achieves a mean DC of 98.07% for the lungs and 94.91% for the heart, surpassing other state-of-the-art methods.
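The two quantitative ideas in the abstract — the Dice coefficient used for evaluation, and a loss weighting that emphasizes pixels near organ boundaries — can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual implementation: the 1-pixel 4-connected boundary band, the weight value 5.0, and the assumption that the mask does not touch the image border (since `np.roll` wraps around) are all choices made here for demonstration.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient (DC) between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_weights(mask, w_boundary=5.0):
    """Per-pixel weight map that up-weights pixels adjacent to the mask boundary.

    A pixel is marked 'near the boundary' if any 4-connected neighbor carries
    a different label. Note np.roll wraps at the image edges, so this sketch
    assumes the mask does not touch the border.
    """
    mask = mask.astype(bool)
    near = np.zeros_like(mask)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        near |= (np.roll(mask, (dy, dx), axis=(0, 1)) != mask)
    w = np.ones(mask.shape)
    w[near] = w_boundary  # boundary-adjacent pixels count w_boundary times as much
    return w

def boundary_weighted_bce(prob, target, eps=1e-7):
    """Binary cross-entropy averaged with boundary-derived per-pixel weights."""
    w = boundary_weights(target)
    prob = np.clip(prob, eps, 1.0 - eps)
    bce = -(target * np.log(prob) + (1 - target) * np.log(1 - prob))
    return (w * bce).sum() / w.sum()
```

Under this weighting, a misclassified pixel on a blurred organ border contributes several times more to the loss than an interior pixel, which is the intuition behind penalizing boundary errors more heavily.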




Updated: 2020-04-21