Approximating anatomically-guided PET reconstruction in image space using a convolutional neural network
NeuroImage (IF 4.7), Pub Date: 2021-01-01, DOI: 10.1016/j.neuroimage.2020.117399
Georg Schramm 1, David Rigie 2, Thomas Vahle 3, Ahmadreza Rezaei 1, Koen Van Laere 1, Timothy Shepherd 4, Johan Nuyts 1, Fernando Boada 2

In the last two decades, it has been shown that anatomically-guided PET reconstruction can lead to improved bias-noise characteristics in brain PET imaging. However, despite promising results in simulations and first studies, anatomically-guided PET reconstructions are not yet available for routine clinical use, for several reasons. In light of this, we investigate whether the improvements of anatomically-guided PET reconstruction methods can be achieved entirely in the image domain with a convolutional neural network (CNN). An entirely image-based CNN post-reconstruction approach has the advantage that no access to PET raw data is needed and, moreover, that the prediction times of trained CNNs are extremely fast on state-of-the-art GPUs, which will substantially facilitate the evaluation, fine-tuning, and application of anatomically-guided PET reconstruction in real-world clinical settings. In this work, we demonstrate that anatomically-guided PET reconstruction using the asymmetric Bowsher prior can be well approximated by a purely shift-invariant convolutional neural network in image space, allowing the generation of anatomically-guided PET images in almost real time. We show that applying dedicated data augmentation techniques during the training phase, in which 16 [18F]FDG and 10 [18F]PE2I data sets were used, leads to a CNN that is robust against the PET tracer used, the noise level of the input PET images, and the input MRI contrast. A detailed analysis of our CNN in 36 [18F]FDG, 18 [18F]PE2I, and 7 [18F]FET test data sets demonstrates that the image quality of our trained CNN is very close to that of the target reconstructions in terms of regional mean recovery and regional structural similarity.
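For context, the anatomically-guided target reconstructions optimize a penalized-likelihood objective in which the prior is steered by the MRI. A minimal sketch of such an objective, assuming a quadratic Bowsher-type penalty (the symbols x, y, beta, N_j, B and w_jk below are illustrative and are not taken from the paper):

    \hat{x} = \arg\max_{x \ge 0} \; L(y \mid x) \; - \; \beta \sum_{j} \sum_{k \in N_j} w_{jk} \, (x_j - x_k)^2

Here L(y | x) is the Poisson log-likelihood of the measured PET data y given the image x, and N_j contains the B neighbors of voxel j whose MRI intensities are closest to that of voxel j; in the asymmetric Bowsher variant this neighbor selection is not required to be mutual. The CNN studied in this work aims to approximate the output of such a reconstruction directly in image space, without access to y.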

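The "purely shift-invariant convolutional neural network in image space" can be pictured as a small fully convolutional 3D network that takes the standard PET reconstruction and the spatially aligned MRI as two input channels. Below is a minimal sketch in PyTorch; the layer count, channel width, kernel size, residual connection, and the class name ImageSpaceBowsherCNN are illustrative assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn


class ImageSpaceBowsherCNN(nn.Module):
    """Fully convolutional (hence shift-invariant) image-space network:
    maps a standard PET reconstruction plus an aligned MRI volume to an
    anatomically-guided-like PET image."""

    def __init__(self, n_features: int = 32, n_layers: int = 6):
        super().__init__()
        layers = [nn.Conv3d(2, n_features, kernel_size=3, padding=1), nn.PReLU()]
        for _ in range(n_layers - 2):
            layers += [nn.Conv3d(n_features, n_features, kernel_size=3, padding=1), nn.PReLU()]
        layers += [nn.Conv3d(n_features, 1, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, pet: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
        # pet, mri: (batch, 1, D, H, W), spatially aligned and intensity-normalized
        x = torch.cat([pet, mri], dim=1)
        # residual formulation: the network predicts a correction to the input PET
        return pet + self.body(x)


if __name__ == "__main__":
    model = ImageSpaceBowsherCNN()
    pet = torch.rand(1, 1, 64, 64, 64)   # e.g. a conventional (OSEM-like) reconstruction patch
    mri = torch.rand(1, 1, 64, 64, 64)   # aligned T1-weighted MRI patch
    out = model(pet, mri)
    print(out.shape)  # torch.Size([1, 1, 64, 64, 64])

Because the network contains only convolutions and voxel-wise activations, a model trained on patches can be applied to whole volumes, and inference on a modern GPU takes on the order of seconds, which is what enables the near-real-time generation of anatomically-guided PET images described in the abstract.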