Deblur and deep depth from single defocus image
Machine Vision and Applications ( IF 2.4 ) Pub Date : 2021-01-07 , DOI: 10.1007/s00138-020-01162-6
Saeed Anwar , Zeeshan Hayder , Fatih Porikli

In this paper, we tackle depth estimation and blur removal from a single out-of-focus image. Previously, depth was estimated and blur was removed using multiple images, for example from multi-view or stereo scenes; doing so with a single image is challenging. Earlier monocular approaches to depth estimation and deblurring exploited either geometric characteristics or priors built on hand-crafted features. Lately, there is ample evidence that deep convolutional neural networks (CNNs) have significantly improved numerous vision applications; hence, in this article, we present a depth estimation method that leverages rich representations learned by cascaded convolutional and fully connected neural networks operating on a patch-pooled set of feature maps. Furthermore, from this depth, we computationally reconstruct an all-focus image, i.e., removing the blur and achieving synthetic re-focusing, all from a single image. Our method is fast, and it substantially improves depth accuracy over the state-of-the-art alternatives. Our proposed depth estimation approach can be applied to everyday scenes without any geometric priors or extra information. Furthermore, our experiments on two benchmark datasets consisting of indoor and outdoor scenes, Make3D and NYU-v2, demonstrate superior performance: we reduce the root-mean-squared error of state-of-the-art depth estimation methods by 57% and 46%, and surpass state-of-the-art blur removal methods by 0.36 dB and 0.72 dB in PSNR, respectively. This improvement in depth estimation and deblurring is further demonstrated by superior performance on real defocus images captured with a prototype lens.
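The link between depth and defocus blur that this line of work relies on can be illustrated with the standard thin-lens model: a point at depth `s` imaged by a lens focused at `s_f` produces a blur circle (circle of confusion) whose diameter grows with the distance from the focal plane. The sketch below is a generic geometric-optics relation, not the paper's method; the parameter values are illustrative only.

```python
def coc_diameter(depth_m, focus_m, focal_mm, f_number):
    """Circle-of-confusion diameter (mm) for a point at depth_m metres,
    with the lens focused at focus_m metres, under the thin-lens model.

    c = A * f * |s - s_f| / (s * (s_f - f)),  A = f / N  (aperture diameter)
    """
    f = focal_mm / 1000.0          # focal length in metres
    aperture = f / f_number        # aperture diameter in metres
    c = aperture * f * abs(depth_m - focus_m) / (depth_m * (focus_m - f))
    return c * 1000.0              # back to millimetres


# A point on the focal plane is sharp; blur grows away from it.
print(coc_diameter(2.0, 2.0, 50, 1.8))   # 0.0 (in focus)
print(coc_diameter(4.0, 2.0, 50, 1.8))   # > 0 (behind the focal plane)
print(coc_diameter(1.0, 2.0, 50, 1.8))   # > 0 (in front of it)
```

Depth-from-defocus methods invert this relationship: estimating the local blur diameter in a single image constrains the depth, after which the blur can be removed by depth-dependent deconvolution.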




Updated: 2021-01-08