DPNet: Detail-Preserving Network for High Quality Monocular Depth Estimation
Pattern Recognition (IF 8), Pub Date: 2021-01-01, DOI: 10.1016/j.patcog.2020.107578
Xinchen Ye, Shude Chen, Rui Xu

Abstract: Existing monocular depth estimation methods are unsatisfactory due to inaccurate inference of depth details and the loss of spatial information. In this paper, we present a novel detail-preserving network (DPNet), a dual-branch network architecture that addresses both problems and facilitates depth map inference. Specifically, in the contextual branch (CB), we propose an effective and efficient non-local spatial attention module that introduces a non-local filtering strategy to explicitly exploit pixel relationships in the spatial domain, which significantly improves the inference of depth details. Meanwhile, we design a spatial branch (SB) to preserve spatial information and generate high-resolution features from the input color image. A refinement module (RM) is then proposed to fuse the heterogeneous features from the spatial and contextual branches and produce a high-quality depth map. Experimental results show that the proposed method outperforms state-of-the-art (SOTA) methods on benchmark RGB-D datasets.
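The abstract does not spell out the exact formulation of the non-local spatial attention module; a minimal sketch of generic non-local filtering (embedded-Gaussian self-attention over spatial positions, in the style of Wang et al.'s non-local neural networks) is given below for intuition. The function name `nonlocal_attention`, the flattened `(N, C)` feature layout, and the learned projections `w_theta`, `w_phi`, `w_g` are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def nonlocal_attention(x, w_theta, w_phi, w_g):
    """Non-local filtering over spatial positions (illustrative sketch).

    x       : (N, C) features, one row per spatial position
    w_theta : (C, D) query projection
    w_phi   : (C, D) key projection
    w_g     : (C, C) value projection
    """
    theta = x @ w_theta            # queries, (N, D)
    phi = x @ w_phi                # keys,    (N, D)
    g = x @ w_g                    # values,  (N, C)

    scores = theta @ phi.T         # pairwise affinities between all positions, (N, N)
    scores -= scores.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax: each row sums to 1

    # Each output position aggregates values from *all* positions,
    # weighted by affinity; the residual keeps the original signal.
    return x + attn @ g
```

Because every output position attends to every other position, the module captures long-range pixel relationships that local convolutions miss, which is the property the paper credits for sharper depth-detail inference.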

Updated: 2021-01-01