Light field reconstruction using hierarchical features fusion
Expert Systems with Applications (IF 8.5), Pub Date: 2020-03-18, DOI: 10.1016/j.eswa.2020.113394
Zexi Hu, Yuk Ying Chung, Wanli Ouyang, Xiaoming Chen, Zhibo Chen

Light field imagery has attracted increasing attention for its capacity to simultaneously capture the intensity of light rays from multiple directions. The technique has become widely accessible with the emergence of consumer-grade devices such as Lytro and with applications in Virtual Reality (VR) and Augmented Reality (AR). Light field reconstruction is a critical topic for mitigating the trade-off between spatial and angular resolution. Among recently proposed methods, learning-based approaches have attained outstanding performance; however, the state-of-the-art methods still suffer from heavy artifacts in the presence of occlusion. This is likely a consequence of failing to capture semantic information from the limited spatial receptive field during training. It is therefore crucial for light field reconstruction to learn semantic features and to understand a wider context in both the angular and spatial dimensions. To address this issue, we introduce a novel end-to-end U-Net with SAS network (U-SAS-Net) that extracts and fuses hierarchical features, both local and semantic, from a relatively large receptive field while establishing relations among the correlated sub-aperture images. Experimental results on extensive light field datasets demonstrate that our method achieves state-of-the-art performance, exceeding previous work by more than 0.6 dB PSNR with the fused hierarchical features: the semantic features handle scenes with occlusion, and the local features recover rich details. Meanwhile, our method comes at a substantially lower cost, requiring 48% of the parameters and less than 10% of the computation of the previous state-of-the-art method.
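A minimal sketch of the spatial-angular separable (SAS) convolution idea referenced in the abstract: a 2-D convolution over the spatial dimensions of every sub-aperture image alternated with a 2-D convolution over the angular dimensions, so features propagate across correlated views at low cost. The class name `SASConv`, channel counts, kernel sizes, and the (B, C, U, V, H, W) light field layout are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SASConv(nn.Module):
    """Hypothetical SAS block: a spatial conv over each sub-aperture image
    followed by an angular conv across the view grid."""
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.angular = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, U, V, H, W) -- batch, channels, angular U x V, spatial H x W
        b, c, u, v, h, w = x.shape
        # Spatial pass: fold the angular dims into the batch dim.
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        x = self.act(self.spatial(x))
        # Angular pass: fold the spatial dims into the batch dim instead.
        x = x.reshape(b, u, v, c, h, w).permute(0, 4, 5, 3, 1, 2)
        x = x.reshape(b * h * w, c, u, v)
        x = self.act(self.angular(x))
        # Restore the (B, C, U, V, H, W) layout.
        x = x.reshape(b, h, w, c, u, v).permute(0, 3, 4, 5, 1, 2)
        return x

# Example: a 5x5 view grid of 32x32 patches with 16 feature channels.
feat = torch.randn(1, 16, 5, 5, 32, 32)
print(SASConv(16)(feat).shape)  # torch.Size([1, 16, 5, 5, 32, 32])
```

Stacking such blocks inside a U-Net encoder-decoder, as the abstract describes, would let the downsampled levels contribute semantic features over a large receptive field while the full-resolution levels preserve local detail.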



Updated: 2020-03-18