Deep feature loss to denoise OCT images using deep neural networks
Journal of Biomedical Optics (IF 3.5), Pub Date: 2021-04-01, DOI: 10.1117/1.jbo.26.4.046003
Maryam Mehdizadeh, Cara MacNish, Di Xiao, David Alonso-Caneiro, Jason Kugelman, Mohammed Bennamoun
Significance: Speckle noise is an inherent limitation of optical coherence tomography (OCT) images that makes clinical interpretation challenging. The recent emergence of deep learning could offer a reliable method to reduce noise in OCT images.

Aim: We sought to investigate the use of deep features (VGG) to limit the effect of blurriness and increase perceptual sharpness, and to evaluate its impact on the performance of OCT image denoising (DnCNN).

Approach: Fifty-one macula-centered OCT pairs were used in training of the network. A separate set of 20 OCT pairs was used for testing. The DnCNN model was cascaded with a VGG network that acted as a perceptual loss function in place of the traditional L1 and L2 losses. The VGG network remained fixed during the training process. We focused on the individual layers of the VGG-16 network to decipher the contribution of each distinctive layer as a loss function in producing denoised OCT images that were perceptually sharp and that preserved the faint features (retinal layer boundaries) essential for interpretation. The peak signal-to-noise ratio (PSNR), edge-preserving index, and no-reference image sharpness/blurriness metrics [perceptual sharpness index (PSI), just noticeable blur (JNB), and spectral and spatial sharpness measure (S3)] were used to compare deep feature losses with the traditional losses.

Results: The deep feature loss produced images with high perceptual sharpness measures at the cost of less smoothness (PSNR) in OCT images. The deep feature loss outperformed the traditional losses (L1 and L2) on all of the evaluation metrics except PSNR. The PSI, S3, and JNB estimates for the deep feature loss were 0.31, 0.30, and 16.53, respectively. For the L1 and L2 losses, the PSI, S3, and JNB were 0.21 and 0.21, 0.17 and 0.16, and 14.46 and 14.34, respectively.

Conclusions: We demonstrate the potential of deep feature loss in denoising OCT images. Our preliminary findings suggest research directions for further investigation.
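The core idea above — replacing a pixel-space L1/L2 loss with a mean squared error computed on the feature maps of a fixed network, and scoring results with PSNR — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: `toy_features` is a hypothetical stand-in for a single frozen VGG-16 layer (one random convolution plus ReLU), used only to show how the loss is computed on features rather than pixels.

```python
import numpy as np

def psnr(clean, denoised, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and a denoised image."""
    mse = np.mean((clean - denoised) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def pixel_l2_loss(pred, target):
    """Traditional L2 loss: mean squared error in pixel space."""
    return np.mean((pred - target) ** 2)

def deep_feature_loss(pred, target, feature_extractor):
    """Perceptual loss: mean squared error between feature maps produced by a
    fixed network (in the paper, a chosen layer of VGG-16), not raw pixels."""
    return np.mean((feature_extractor(pred) - feature_extractor(target)) ** 2)

# Hypothetical stand-in for one frozen VGG layer: 8 random 3x3 filters + ReLU.
rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 3, 3))

def toy_features(img):
    h, w = img.shape
    out = np.zeros((8, h - 2, w - 2))
    for k in range(8):
        for i in range(h - 2):
            for j in range(w - 2):
                out[k, i, j] = np.sum(img[i:i + 3, j:j + 3] * filters[k])
    return np.maximum(out, 0.0)  # ReLU

# Toy "clean" image and a speckled observation (additive noise for simplicity).
clean = rng.random((16, 16))
noisy = np.clip(clean + 0.1 * rng.standard_normal((16, 16)), 0.0, 1.0)

print("PSNR (dB)   :", psnr(clean, noisy))
print("pixel L2    :", pixel_l2_loss(noisy, clean))
print("feature loss:", deep_feature_loss(noisy, clean, toy_features))
```

During training, the denoiser's weights would be updated to minimize `deep_feature_loss` while the feature extractor stays frozen; the paper's finding is that this trades a small amount of PSNR for noticeably better perceptual sharpness (PSI, S3, JNB).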

Updated: 2021-04-23