SpineParseNet: Spine Parsing for Volumetric MR Image by a Two-Stage Segmentation Framework with Semantic Image Representation.
IEEE Transactions on Medical Imaging (IF 8.9). Pub Date: 2020-09-21, DOI: 10.1109/tmi.2020.3025087
Shumao Pang, Chunlan Pang, Lei Zhao, Yangfan Chen, Zhihai Su, Yujia Zhou, Meiyan Huang, Wei Yang, Hai Lu, Qianjin Feng

Spine parsing (i.e., multi-class segmentation of vertebrae and intervertebral discs (IVDs)) for volumetric magnetic resonance (MR) images plays a significant role in the diagnosis and treatment of various spinal diseases, yet remains challenging due to the inter-class similarity and intra-class variation of spine images. Existing fully convolutional network based methods fail to explicitly exploit the dependencies between different spinal structures. In this article, we propose a novel two-stage framework named SpineParseNet to achieve automated spine parsing for volumetric MR images. SpineParseNet consists of a 3D graph convolutional segmentation network (GCSN) for 3D coarse segmentation and a 2D residual U-Net (ResUNet) for 2D segmentation refinement. In the 3D GCSN, region pooling projects the image representation onto a graph representation in which each node represents a specific spinal structure. The adjacency matrix of the graph is designed according to the connectivity of spinal structures, and the graph representation is evolved by graph convolutions. Subsequently, the proposed region unpooling module re-projects the evolved graph representation back to a semantic image representation, which helps the 3D GCSN generate a reliable coarse segmentation. Finally, the 2D ResUNet refines this segmentation. Experiments on T2-weighted volumetric MR images of 215 subjects show that SpineParseNet achieves mean Dice similarity coefficients of 87.32 ± 4.75%, 87.78 ± 4.64%, and 87.49 ± 3.81% for the segmentation of 10 vertebrae, 9 IVDs, and all 19 spinal structures, respectively. The proposed method has great potential for clinical spinal disease diagnosis and treatment.
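The region pooling, graph convolution, and region unpooling steps described above can be sketched in a minimal form. This is an illustrative assumption of the data flow, not the authors' implementation: function names, tensor shapes, and the Kipf-and-Welling-style symmetric normalization are all hypothetical choices for exposition.

```python
import numpy as np

def region_pool(features, assignment):
    """Project per-voxel features (C, V) to node features (K, C),
    where each of the K nodes corresponds to one spinal structure.
    assignment (K, V) holds soft region-membership scores (hypothetical)."""
    weights = assignment / (assignment.sum(axis=1, keepdims=True) + 1e-8)
    return weights @ features.T  # (K, C)

def graph_conv(nodes, adj, weight):
    """One graph convolution layer over K structure nodes.
    Uses symmetric normalization with self-loops (an assumed choice)."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ nodes @ weight, 0.0)  # ReLU activation

def region_unpool(nodes, assignment):
    """Re-project evolved node features (K, C') back to a semantic
    image representation (C', V) over the original voxels."""
    return nodes.T @ assignment
```

The adjacency matrix here would encode the anatomical chain of the spine (each vertebra connected to its neighboring IVDs), so that graph convolutions propagate context between adjacent structures before the features are re-projected for refinement.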

Updated: 2020-09-21