4D-CT deformable image registration using multiscale unsupervised deep learning.
Physics in Medicine & Biology (IF 3.3). Pub Date: 2020-04-20. DOI: 10.1088/1361-6560/ab79c4
Yang Lei, Yabo Fu, Tonghe Wang, Yingzi Liu, Pretesh Patel, Walter J Curran, Tian Liu, Xiaofeng Yang

Deformable image registration (DIR) of 4D-CT images is important in multiple radiation therapy applications, including motion tracking of soft tissue or fiducial markers, target definition, image fusion, dose accumulation and treatment response evaluation. Accurately and quickly registering abdominal 4D-CT images is challenging because of their large appearance variations and bulky sizes. In this study, we proposed an accurate and fast multi-scale DIR network (MS-DIRNet) for abdominal 4D-CT registration. MS-DIRNet consists of a global network (GlobalNet) and a local network (LocalNet): GlobalNet was trained on down-sampled whole image volumes, while LocalNet was trained on sampled image patches. Each network consists of a generator and a discriminator. The generator was trained to predict a deformation vector field (DVF) directly from the moving and target images, and was implemented as a convolutional neural network with multiple attention gates. The discriminator was trained to differentiate the deformed images from the target images, providing additional DVF regularization. The loss function of MS-DIRNet comprises three parts: an image similarity loss, an adversarial loss and a DVF regularization loss. MS-DIRNet was trained in a completely unsupervised manner, meaning that no ground-truth DVFs are needed. Unlike traditional DIR methods that compute the DVF iteratively, MS-DIRNet computes the final DVF in a single forward prediction, which can significantly expedite the DIR process. MS-DIRNet was trained and tested on 25 patients' 4D-CT datasets using five-fold cross validation. For registration accuracy evaluation, target registration errors (TREs) of MS-DIRNet were compared to those of clinically used software.
Our results showed that the MS-DIRNet with an average TRE of 1.2 ± 0.8 mm outperformed the commercial software with an average TRE of 2.5 ± 0.8 mm in 4D-CT abdominal DIR, demonstrating the superior performance of our method in fiducial marker tracking and overall soft tissue alignment.
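The core operations described above can be illustrated in a minimal NumPy/SciPy sketch: applying a predicted DVF to warp the moving image, and computing an unsupervised loss from image similarity plus a DVF smoothness regularizer. Note this is an illustrative sketch, not the paper's implementation: the MSE similarity, the gradient-based smoothness penalty, and the `lam` weight are stand-in assumptions, and the adversarial term supplied by the discriminator is omitted.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, dvf):
    """Warp a 3D moving image with a dense deformation vector field.

    dvf has shape (3, D, H, W): a per-voxel displacement along each axis.
    This mirrors how a predicted DVF produces the "deformed" image that
    is compared against the target image.
    """
    grid = np.indices(moving.shape).astype(float)
    coords = grid + dvf  # sample the moving image at displaced locations
    return map_coordinates(moving, coords, order=1, mode="nearest")

def dvf_smoothness(dvf):
    """L2 penalty on the spatial gradients of the DVF (a common choice
    of regularizer; the paper's exact term may differ)."""
    penalty = 0.0
    for comp in dvf:                      # each displacement component
        for axis in range(comp.ndim):
            penalty += np.mean(np.gradient(comp, axis=axis) ** 2)
    return penalty

def registration_loss(moving, target, dvf, lam=0.01):
    """Unsupervised loss: similarity + DVF regularization. The
    adversarial loss from the discriminator is omitted here."""
    warped = warp_image(moving, dvf)
    similarity = np.mean((warped - target) ** 2)  # MSE as a stand-in
    return similarity + lam * dvf_smoothness(dvf)
```

Because the loss depends only on the moving image, the target image and the predicted DVF, no ground-truth DVF enters the computation, which is what makes the training unsupervised.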

Updated: 2020-04-22