Bridging the Gap Between 2D and 3D Contexts in CT Volume for Liver and Tumor Segmentation
IEEE Journal of Biomedical and Health Informatics (IF 7.7), Pub Date: 2021-04-27, DOI: 10.1109/jbhi.2021.3075752
Lei Song , Haoqian Wang , Z. Jane Wang

Automatic liver and tumor segmentation remains a challenging task that depends on exploring both the 2D and 3D contexts in a CT volume. Existing methods either focus only on the 2D context by treating the CT volume as a set of independent image slices (ignoring the useful temporal information between adjacent slices), or explore only the 3D context contained in small voxel blocks (sacrificing the spatial detail within each slice). Together, these factors result in inadequate context exploration for automatic liver and tumor segmentation. In this paper, we propose a novel full-context convolutional neural network to bridge the gap between 2D and 3D contexts. The proposed network utilizes the temporal information along the Z axis of the CT volume while retaining the spatial detail in each slice. Specifically, a 2D spatial network for intra-slice feature extraction and a 3D temporal network for inter-slice feature extraction are proposed separately and then guided by a squeeze-and-excitation layer that allows the flow of 2D context and 3D temporal information. To address the severe class imbalance in CT volumes and, at the same time, improve segmentation performance, a loss function consisting of weighted cross-entropy and the Jaccard distance is proposed. During training, the 2D and 3D contexts are learned jointly in an end-to-end manner. The proposed network achieves competitive results on the Liver Tumor Segmentation Challenge (LiTS) and 3D-IRCADB datasets. This method offers a promising new paradigm for exploring contexts in liver and tumor segmentation.

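The abstract names two concrete ingredients: a squeeze-and-excitation (SE) layer that guides the exchange between the 2D spatial and 3D temporal branches, and a loss combining weighted cross-entropy with the Jaccard distance. The PyTorch sketch below illustrates both in a minimal form; the additive fusion rule, tensor shapes, reduction ratio, and class weights are illustrative assumptions rather than the authors' exact design.

```python
# Minimal sketch, assuming slice-level feature maps of equal shape from the
# 2D and 3D branches; not the paper's exact architecture or hyper-parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEGate(nn.Module):
    """Squeeze-and-excitation style channel gating, used here to let pooled
    channel statistics re-weight a fused 2D + 3D feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat_2d: torch.Tensor, feat_3d: torch.Tensor) -> torch.Tensor:
        # feat_2d, feat_3d: (N, C, H, W) features from the 2D and 3D branches
        fused = feat_2d + feat_3d                        # additive fusion (assumption)
        squeeze = fused.mean(dim=(2, 3))                 # global average pool -> (N, C)
        scale = self.fc(squeeze).unsqueeze(-1).unsqueeze(-1)
        return fused * scale                             # channel-wise re-weighting


def weighted_ce_jaccard_loss(logits: torch.Tensor,
                             target: torch.Tensor,
                             class_weights: torch.Tensor,
                             eps: float = 1e-6) -> torch.Tensor:
    """Weighted cross-entropy plus soft Jaccard distance.

    logits: (N, K, H, W) raw scores for K classes (e.g. background/liver/tumor)
    target: (N, H, W) integer labels
    class_weights: (K,) weights to counter the class imbalance
    """
    ce = F.cross_entropy(logits, target, weight=class_weights)

    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    intersection = (probs * one_hot).sum(dim=(0, 2, 3))
    union = (probs + one_hot - probs * one_hot).sum(dim=(0, 2, 3))
    jaccard_distance = 1.0 - (intersection + eps) / (union + eps)

    return ce + jaccard_distance.mean()
```

For instance, with logits of shape (N, 3, H, W) and hypothetical class weights such as torch.tensor([0.2, 1.0, 2.0]) to up-weight the rare tumor class, the Jaccard term directly penalizes poor region overlap while the weighted cross-entropy keeps the gradients informative for the minority classes.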
Updated: 2021-04-27