LFNet: Light Field Fusion Network for Salient Object Detection.
IEEE Transactions on Image Processing (IF 10.6), Pub Date: 2020-04-30, DOI: 10.1109/tip.2020.2990341
Miao Zhang, Wei Ji, Yongri Piao, Jingjing Li, Yu Zhang, Shuang Xu, Huchuan Lu

In this work, we propose a novel light field fusion network, LFNet, a CNN-based light field saliency model that uses 4D light field data containing abundant spatial and contextual information. The proposed method can reliably locate and identify salient objects even in complex scenes. Our LFNet contains a light field refinement module (LFRM) and a light field integration module (LFIM), which together refine and integrate focusness, depth and objectness cues from light field images. The LFRM learns the residual between light field and RGB features to refine features with useful light field cues; the LFIM then weights each refined light field feature and learns the spatial correlation between them to predict saliency maps. Our method takes full advantage of light field information and achieves excellent performance, especially in challenging scenes such as similar foreground and background, multiple or transparent objects, and low-contrast environments. Experiments show our method outperforms state-of-the-art 2D, 3D and 4D methods across three light field datasets.
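The two-stage refine-then-fuse pipeline described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the "learned" residual gate `alpha` and the mean-activation attention scores are stand-ins for the convolutional layers the paper's LFRM and LFIM would actually learn.

```python
import numpy as np

def lfrm_refine(lf_feat, rgb_feat, alpha=0.5):
    """LFRM (sketch): refine the RGB feature with the residual between the
    light field and RGB features. `alpha` is a hypothetical learned gate;
    the real module learns this mapping with convolutions."""
    residual = lf_feat - rgb_feat
    return rgb_feat + alpha * residual

def lfim_fuse(refined_feats):
    """LFIM (sketch): weight each refined light field feature and fuse
    them into one map. Softmax over mean activation stands in for the
    learned spatial-correlation weighting in the paper."""
    scores = np.array([f.mean() for f in refined_feats])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    fused = sum(w * f for w, f in zip(weights, refined_feats))
    return fused, weights

# Toy example: three focal slices of a 4x4 feature map plus an RGB feature.
rng = np.random.default_rng(0)
rgb_feat = rng.standard_normal((4, 4))
focal_slices = [rng.standard_normal((4, 4)) for _ in range(3)]

refined = [lfrm_refine(s, rgb_feat) for s in focal_slices]
fused_map, slice_weights = lfim_fuse(refined)
```

The fused map keeps the spatial resolution of the input features, and the per-slice weights sum to one, mirroring how the LFIM aggregates cues from multiple focal slices into a single saliency prediction.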
