Deep Coarse-to-Fine Dense Light Field Reconstruction With Flexible Sampling and Geometry-Aware Fusion
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 23.6), Pub Date: 2020-09-23, DOI: 10.1109/tpami.2020.3026039
Jing Jin, Junhui Hou, Jie Chen, Huanqiang Zeng, Sam Kwong, Jingyi Yu

A densely-sampled light field (LF) is highly desirable in various applications, such as 3-D reconstruction, post-capture refocusing, and virtual reality. However, it is costly to acquire such data. Although many computational methods have been proposed to reconstruct a densely-sampled LF from a sparsely-sampled one, they still suffer from low reconstruction quality, low computational efficiency, or restrictions on the regularity of the sampling pattern. To this end, we propose a novel learning-based method that accepts sparsely-sampled LFs with irregular structures and produces densely-sampled LFs with arbitrary angular resolution both accurately and efficiently. We also propose a simple yet effective method for optimizing the sampling pattern. Our proposed method, an end-to-end trainable network, reconstructs a densely-sampled LF in a coarse-to-fine manner. Specifically, the coarse sub-aperture image (SAI) synthesis module first explores the scene geometry from an unstructured sparsely-sampled LF and leverages it to independently synthesize novel SAIs, in which a confidence-based blending strategy is proposed to fuse the information from different input SAIs, giving an intermediate densely-sampled LF. Then, the efficient LF refinement module learns the angular relationship within the intermediate result to recover the LF parallax structure. Comprehensive experimental evaluations demonstrate the superiority of our method on both real-world and synthetic LF images when compared with state-of-the-art methods. In addition, we illustrate the advantages of the proposed approach when applied in various LF-based applications, including image-based rendering and depth estimation enhancement. The code is available at https://github.com/jingjin25/LFASR-FS-GAF.
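The confidence-based blending step described above can be illustrated with a minimal sketch. The idea is that each input SAI is warped toward the novel view and paired with a per-pixel confidence map; the fused SAI is a confidence-weighted (softmax-normalized) sum of the warped views. The function names, array shapes, and the softmax choice below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def softmax(x, axis=0):
    # numerically stable softmax along the view axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def confidence_blend(warped_sais, confidences):
    """Fuse warped input views into one novel SAI.

    Hypothetical shapes: warped_sais [N, H, W] holds each input SAI
    warped to the novel viewpoint; confidences [N, H, W] are the
    (unnormalized) per-pixel confidence scores for each view.
    """
    weights = softmax(confidences, axis=0)   # normalize across views
    return (weights * warped_sais).sum(axis=0)

# Toy example: two warped views of a 2x2 novel SAI. The confidence
# maps strongly prefer view 0 on the top row (e.g. it is free of
# occlusion there) and view 1 on the bottom row.
views = np.array([[[1.0, 2.0], [3.0, 4.0]],
                  [[5.0, 6.0], [7.0, 8.0]]])
conf = np.array([[[10.0, 10.0], [-10.0, -10.0]],
                 [[-10.0, -10.0], [10.0, 10.0]]])
fused = confidence_blend(views, conf)
```

With such a sharp confidence gap the fused result is dominated by the preferred view per pixel; with comparable confidences the blend degrades gracefully to an average, which is the motivation for learning the confidence maps jointly with the synthesis network.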
