Weakly-Supervised Salient Object Detection on Light Fields
IEEE Transactions on Image Processing (IF 10.8) Pub Date: 2022-09-23, DOI: 10.1109/tip.2022.3207605
Zijian Liang, Pengjie Wang, Ke Xu, Pingping Zhang, Rynson W.H. Lau

Most existing salient object detection (SOD) methods are designed for RGB images and do not take advantage of the abundant information provided by light fields. Hence, they may fail to detect salient objects with complex structures and to delineate their boundaries. Although some methods have explored the multi-view information of light field images for saliency detection, they require tedious manual pixel-level ground-truth annotations. In this paper, we propose a novel weakly-supervised learning framework for salient object detection on light field images based on bounding box annotations. Our method has two major novelties. First, given an input light field image and a bounding-box annotation indicating the salient object, we propose a ground truth label hallucination method to generate a pixel-level pseudo saliency map, avoiding the heavy cost of pixel-level annotations. This method generates high-quality pseudo ground-truth saliency maps to supervise training by exploiting information from the light field (including depth and RGB images). Second, to exploit the multi-view nature of the light field data in learning, we propose a fusion attention module to calibrate the spatial and channel-wise light field representations. It learns to focus on informative features and suppress redundant information from the multi-view inputs. Based on these two novelties, we are able to train a new salient object detector with two branches in a weakly-supervised manner. While the RGB branch focuses on modeling the color contrast in the all-in-focus image to locate the salient objects, the Focal branch exploits the depth and the background spatial redundancy of the focal slices to eliminate background distractions. Extensive experiments show that our method outperforms existing weakly-supervised methods and most fully supervised methods.
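
To make the idea of calibrating multi-view light field features concrete, below is a minimal PyTorch sketch of a fusion attention block in the spirit described by the abstract: it reweights aggregated focal-stack features along the channel and spatial dimensions so that informative features are emphasized and redundant multi-view information is suppressed. The class name FusionAttention, the tensor shapes, and the specific layer choices (squeeze-and-excitation-style channel gate plus a convolutional spatial gate) are assumptions for illustration only, not the authors' actual implementation.

# Hypothetical sketch of a spatial + channel-wise fusion attention block.
# Shapes, names, and layers are assumptions; the paper's implementation may differ.
import torch
import torch.nn as nn

class FusionAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, then weight each channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: collapse channels (mean + max), then weight each location.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) features aggregated from the multi-view / focal-slice inputs.
        x = x * self.channel_gate(x)                        # channel-wise calibration
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        x = x * self.spatial_gate(pooled)                   # spatial calibration
        return x

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)   # dummy fused focal-stack features
    out = FusionAttention(64)(feats)
    print(out.shape)                      # torch.Size([2, 64, 32, 32])

In a two-branch detector of the kind outlined above, such a block would sit on the Focal branch (and possibly at the RGB-Focal fusion points) so that the calibrated focal-stack features complement the color-contrast cues from the all-in-focus RGB branch.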

Updated: 2024-08-26