PET image denoising using unsupervised deep learning.
European Journal of Nuclear Medicine and Molecular Imaging (IF 9.1), Pub Date: 2019-08-29, DOI: 10.1007/s00259-019-04468-4
Jianan Cui 1,2, Kuang Gong 1,3, Ning Guo 1,3, Chenxi Wu 1, Xiaxia Meng 1,4, Kyungsang Kim 1,3, Kun Zheng 5, Zhifang Wu 4, Liping Fu 6, Baixuan Xu 6, Zhaohui Zhu 5, Jiahe Tian 6, Huafeng Liu 2, Quanzheng Li 1,3

PURPOSE: Image quality in positron emission tomography (PET) is limited by various physical degradation factors. Our study aims to perform PET image denoising by utilizing prior information from the same patient. The proposed method is based on unsupervised deep learning, where no training pairs are needed.

METHODS: In this method, a prior high-quality image from the same patient was employed as the network input and the noisy PET image itself was treated as the training label. Constrained by the network structure and the prior image input, the network was trained to learn the intrinsic structure information from the noisy image and to output a restored PET image. To validate the performance of the proposed method, a computer simulation study based on the BrainWeb phantom was first performed. A 68Ga-PRGD2 PET/CT dataset containing 10 patients and an 18F-FDG PET/MR dataset containing 30 patients were then used for clinical evaluation. Gaussian filtering, non-local means (NLM) using the CT/MR image as a prior, BM4D, and Deep Decoder were included as reference methods. The contrast-to-noise ratio (CNR) improvement was used to rank the different methods based on the Wilcoxon signed-rank test.

RESULTS: For the simulation study, contrast recovery coefficient (CRC) vs. standard deviation (STD) curves showed that the proposed method achieved the best performance regarding the bias-variance tradeoff. For the clinical PET/CT dataset, the proposed method achieved the highest CNR improvement ratio (53.35% ± 21.78%), compared with the Gaussian (12.64% ± 6.15%, P = 0.002), NLM guided by CT (24.35% ± 16.30%, P = 0.002), BM4D (38.31% ± 20.26%, P = 0.002), and Deep Decoder (41.67% ± 22.28%, P = 0.002) methods. For the clinical PET/MR dataset, the CNR improvement ratio of the proposed method reached 46.80% ± 25.23%, higher than the Gaussian (18.16% ± 10.02%, P < 0.0001), NLM guided by MR (25.36% ± 19.48%, P < 0.0001), BM4D (37.02% ± 21.38%, P < 0.0001), and Deep Decoder (30.03% ± 20.64%, P < 0.0001) methods. Restored images for all datasets demonstrate that the proposed method can effectively smooth out noise while recovering image details.

CONCLUSION: The proposed unsupervised deep learning framework provides excellent image restoration, outperforming the Gaussian, NLM, BM4D, and Deep Decoder methods.
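To make the training scheme in METHODS concrete, below is a minimal PyTorch-style sketch: the patient's own high-quality prior image (CT or MR) is the only network input, the noisy PET volume serves as the training label, and the converged network output is taken as the denoised PET. The three-layer 3D CNN, iteration count, and learning rate are placeholders, not the architecture or settings used in the paper.

```python
# Unsupervised, single-patient denoising sketch: fit a network so that
# network(prior_image) ~ noisy_PET, then read off the network output.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Placeholder network; any encoder-decoder (e.g., a U-Net) could be used."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def denoise_pet(prior_img: torch.Tensor, noisy_pet: torch.Tensor,
                n_iter: int = 2000, lr: float = 1e-3) -> torch.Tensor:
    """prior_img, noisy_pet: tensors of shape (1, 1, D, H, W), same geometry."""
    model = SmallCNN()
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(n_iter):
        optim.zero_grad()
        output = model(prior_img)           # the network only sees the prior image
        loss = loss_fn(output, noisy_pet)   # the noisy PET acts as the training label
        loss.backward()
        optim.step()
    with torch.no_grad():
        return model(prior_img)             # restored PET estimate
```

Because no clean target is ever used, no training pairs are needed; the network structure and the anatomical input act as the regularizer, in the spirit of deep-image-prior methods.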
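The RESULTS compare methods by CNR improvement ratio with a paired Wilcoxon signed-rank test. The sketch below shows one common way these quantities could be computed; the CNR definition (lesion mean minus background mean, divided by background standard deviation) and the ROI masks are assumptions, since the abstract does not specify them.

```python
# Hypothetical evaluation helpers; lesion_mask and bg_mask are assumed boolean ROIs.
import numpy as np
from scipy.stats import wilcoxon

def cnr(img: np.ndarray, lesion_mask: np.ndarray, bg_mask: np.ndarray) -> float:
    """One common CNR definition: (mean_lesion - mean_background) / std_background."""
    return (img[lesion_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

def cnr_improvement(noisy, denoised, lesion_mask, bg_mask) -> float:
    """CNR improvement ratio in percent, relative to the noisy input."""
    c0 = cnr(noisy, lesion_mask, bg_mask)
    return 100.0 * (cnr(denoised, lesion_mask, bg_mask) - c0) / c0

# Paired comparison across patients, e.g., proposed method vs. Gaussian filtering:
# stat, p = wilcoxon(improvements_proposed, improvements_gaussian)
```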

Updated: 2019-08-29