Comparison of deep learning synthesis of synthetic CTs using clinical MRI inputs
Physics in Medicine & Biology (IF 3.3) Pub Date: 2020-12-23, DOI: 10.1088/1361-6560/abc5cb
Haley A Massa, Jacob M Johnson, Alan B McMillan

There has been substantial interest in developing techniques for synthesizing CT-like images from MRI inputs, with important applications in simultaneous PET/MR and radiotherapy planning. Deep learning has recently shown great potential for solving this problem. The goal of this research was to investigate the capability of four common clinical MRI sequences (T1-weighted gradient-echo [T1], T2-weighted fat-suppressed fast spin-echo [T2-FatSat], post-contrast T1-weighted gradient-echo [T1-Post], and fast spin-echo T2-weighted fluid-attenuated inversion recovery [CUBE-FLAIR]) as inputs into a deep CT synthesis pipeline. Data were obtained retrospectively in 92 subjects who had undergone an MRI and CT scan on the same day. Each patient's MR and CT scans were registered to one another using affine registration. The deep learning model was a convolutional neural network encoder-decoder with skip connections similar to the U-net architecture, using Inception V3-inspired blocks instead of sequential convolution blocks. After training for 150 epochs with a batch size of 6, the model was evaluated using the structural similarity index (SSIM), peak SNR (PSNR), mean absolute error (MAE), and Dice coefficient. We found that feasible results were attainable for each image type, and no single image type was superior for all analyses. The MAE (in HU) of the resulting synthesized CT in the whole brain was 51.236 ± 4.504 for CUBE-FLAIR, 45.432 ± 8.517 for T1, 44.558 ± 7.478 for T1-Post, and 45.721 ± 8.7767 for T2, showing not only feasible but also very compelling results on clinical images. Deep learning-based synthesis of CT images from MRI is possible with a wide range of inputs, suggesting that viable images can be created from a wide range of clinical input types.
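The abstract evaluates synthesized CTs with SSIM, PSNR, MAE (in HU), and the Dice coefficient. As a minimal sketch of how three of those metrics are typically computed, the NumPy snippet below compares a toy "ground-truth" CT slice against a noisy "synthetic" one. The function names, the brain-mask handling, and the assumed CT dynamic range are illustrative choices, not taken from the paper's pipeline.

```python
import numpy as np

def mae_hu(ct_true, ct_synth, mask=None):
    """Mean absolute error in HU, optionally restricted to a binary mask
    (e.g. a whole-brain mask, as reported in the abstract)."""
    diff = np.abs(ct_true - ct_synth)
    if mask is not None:
        diff = diff[mask]
    return float(diff.mean())

def psnr(ct_true, ct_synth, data_range=4000.0):
    """Peak SNR in dB; data_range is an assumed CT span (~-1000..3000 HU)."""
    mse = float(np.mean((ct_true - ct_synth) ** 2))
    return float(10.0 * np.log10(data_range ** 2 / mse))

def dice(seg_a, seg_b):
    """Dice overlap between two binary masks (e.g. thresholded bone)."""
    inter = np.logical_and(seg_a, seg_b).sum()
    return float(2.0 * inter / (seg_a.sum() + seg_b.sum()))

# Toy example: a synthetic 64x64 "CT" slice and a noise-corrupted copy.
rng = np.random.default_rng(0)
ct = rng.uniform(-1000.0, 1500.0, size=(64, 64))
synth = ct + rng.normal(0.0, 50.0, size=(64, 64))

print(f"MAE  = {mae_hu(ct, synth):.1f} HU")
print(f"PSNR = {psnr(ct, synth):.1f} dB")
print(f"Dice = {dice(ct > 300.0, synth > 300.0):.3f}")
```

For Gaussian noise with sigma = 50 HU, the MAE lands near 50·√(2/π) ≈ 40 HU, in the same ballpark as the ~45-51 HU values the abstract reports; SSIM is usually taken from a library such as scikit-image rather than reimplemented.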


